A leak of Anthropic’s Claude source code has revealed Conway, a persistent AI agent that continuously monitors users’ workflows and proactively assists them, and a linchpin of Anthropic’s strategy to build a proprietary, lock-in ecosystem for enterprises. The discovery raises significant questions about behavioral data ownership, privacy, and platform lock-in, and underscores a broader industry shift toward persistent agent layers whose deeply embedded memory and context make switching platforms increasingly difficult.
The recent leak of Anthropic’s Claude source code revealed a previously unannounced, always-on agent called Conway, which operates as a distinct environment within the Claude interface rather than a typical chat window. Conway functions as a persistent sidebar with dedicated areas for search, chat, and system management, plus an extension ecosystem for custom tools and integrations. The agent can autonomously monitor email, Slack channels, calendars, and other workflows, proactively drafting responses and pulling in relevant context throughout the user’s day. Impressive as Conway’s capabilities are, the technology remains imperfect and requires ongoing user oversight, underscoring the gap between polished demos and real-world use.
Conway is part of a broader platform strategy Anthropic has executed rapidly over the past quarter, alongside developer tools like Claude Code Channels, enterprise products like Claude Co-Work, and a marketplace for partner apps. The strategy aims to lock in enterprise customers by integrating multiple surfaces (developer tools, collaboration platforms, distribution channels, and enforcement mechanisms) into a cohesive ecosystem. Anthropic is also restricting third-party tool access to its cloud subscriptions, effectively steering users toward its proprietary environment. Conway is the critical piece: the persistent agent layer that deeply understands and integrates with an organization’s workflows, making switching costs extraordinarily high.
A key tension in Anthropic’s approach lies in Conway’s extension format, which builds on the open Model Context Protocol (MCP) but adds a proprietary layer that ties extensions to Conway’s environment. The pattern mirrors other tech ecosystems, such as Google Play Services on Android, where an open foundation coexists with a proprietary layer that controls the valuable functionality and the distribution. Developers therefore face a choice: build portable, MCP-compatible tools with no guaranteed distribution, or build Conway-specific extensions that gain built-in discoverability but are locked to Anthropic’s platform. The dynamic signals a broader industry drift toward lock-in through proprietary agent ecosystems.
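To make the open-versus-proprietary split concrete, here is a minimal sketch of the shapes involved. The tool declaration and the JSON-RPC `tools/list` response follow the open MCP specification (tools are described by a name, a description, and a JSON Schema for inputs); the Conway-specific wrapper at the end is entirely hypothetical, invented here only to illustrate how a proprietary layer on top of an open core can tie an extension to one platform.

```python
import json

# An MCP-style tool declaration: name, description, and a JSON Schema
# describing the tool's inputs. This shape comes from the open protocol.
search_tool = {
    "name": "search_inbox",
    "description": "Search the user's email inbox for matching messages.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms"},
            "limit": {"type": "integer", "default": 10},
        },
        "required": ["query"],
    },
}

# Portable: any MCP-compatible host can discover this tool through a
# standard JSON-RPC "tools/list" response.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [search_tool]},
}

# Hypothetical proprietary wrapper (NOT a real Anthropic format): the same
# tool plus platform-only metadata for distribution and surface placement.
# Other hosts would ignore or reject it, so the extension, though built on
# an open core, only fully works in one environment.
conway_extension = {
    "mcp_tool": search_tool,
    "x-conway": {  # illustrative field name, not a documented spec
        "surface": "sidebar",
        "store_listing": True,
    },
}

print(json.dumps(tools_list_response, indent=2))
```

The portable part is everything under `mcp_tool`; the lock-in lives entirely in the extra metadata layer, which is exactly where discoverability and distribution are controlled.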
The implications of Conway’s persistent agent model extend beyond technology to data and behavioral lock-in. Traditional lock-in rests on files or communication history; Conway instead locks in the accumulated behavioral model of how users work: the patterns, preferences, and workflows learned over time. This “intelligence portability” problem raises hard questions about the ownership, portability, and privacy of behavioral data, for which no legal or regulatory framework yet exists. Enterprises may benefit by better understanding and leveraging employee productivity, but employees risk losing control over their behavioral imprint, with consequences for career mobility and bargaining power.
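The asymmetry at the heart of the portability problem can be sketched as a data structure. No standard export format for agent behavioral data exists today; everything below is purely illustrative. The point it demonstrates is that explicit artifacts (saved preferences, workflow templates) could in principle be exported as files, while the learned behavioral model is entangled with the provider’s systems and has no obvious portable form.

```python
from dataclasses import dataclass, field, asdict

# Illustrative only: a hypothetical snapshot of the state a persistent
# agent accumulates. No such export format exists today.
@dataclass
class AgentMemoryExport:
    # Explicit artifacts: portable in principle, like files or settings.
    saved_preferences: dict = field(default_factory=dict)   # e.g. tone, reply length
    workflow_templates: list = field(default_factory=list)  # e.g. "triage inbox at 9am"
    # The learned behavioral model: typically opaque and provider-internal,
    # so there is nothing file-like to hand over when switching platforms.
    behavioral_model_ref: str = "opaque-provider-internal"

profile = AgentMemoryExport(
    saved_preferences={"reply_tone": "concise"},
    workflow_templates=["draft standup summary from Slack each morning"],
)

# A naive export preserves only the explicit fields; the learned model,
# which is where most of the accumulated value lives, does not transfer.
portable = {k: v for k, v in asdict(profile).items() if k != "behavioral_model_ref"}
print(portable)
```

The gap between what is exportable and what is valuable is precisely the switching cost the article describes.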
Looking ahead, the AI industry is transitioning from a focus on foundational models to competition over persistent agent layers that hold memory, context, and workflows. Anthropic, OpenAI, and Google are all racing to own this layer, which promises unprecedented customer lock-in due to the high switching costs of losing personalized agent memory. For individuals and enterprises, the choice of which agent platform to adopt will have significant long-term consequences. While convenience may drive widespread adoption of proprietary agents like Conway, there is a growing need for open, portable solutions that respect user behavioral data ownership. The evolving landscape demands careful consideration of the trade-offs between convenience, control, and privacy in the age of persistent AI agents.