So Claude Code's Source Code Was Just Leaked

Anthropic’s Claude Code source code was accidentally leaked when a missing npm ignore rule for source map files let the unminified code ship inside the published package, revealing internal project details, unreleased features, and some quirky touches, and prompting community-driven reimplementations. Although Anthropic has tried to remove the leaked code, legal ambiguity around AI-generated code and user frustration over recent, poorly communicated usage restrictions highlight ongoing challenges in AI transparency.

The source code for Claude Code, Anthropic’s AI coding assistant, was leaked through the accidental inclusion of a source map file in the project’s npm package. A source map is a JSON file that can embed the original, unminified TypeScript sources, and this one was publicly accessible via Anthropic’s own cloud storage. Anthropic has issued DMCA takedowns against GitHub mirrors hosting the leaked code, but the legal footing is ambiguous, since AI-generated code may not be copyrightable under US law. A community-driven repository quickly emerged reimplementing Claude Code’s features in Python and Rust, showing how fast the community responded to the leak.
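To see why a stray `.map` file is equivalent to shipping the source itself, consider the Source Map v3 format: it can embed every original file verbatim in a `sourcesContent` array. The toy map below is invented for illustration; it is not Anthropic's actual file.

```typescript
// A source map can carry the full original sources in `sourcesContent`.
// Recovering them is just JSON parsing; no reverse engineering needed.

interface SourceMapV3 {
  version: number;
  sources: string[];
  sourcesContent?: (string | null)[];
  mappings: string;
}

function recoverSources(mapJson: string): Map<string, string> {
  const map = JSON.parse(mapJson) as SourceMapV3;
  const recovered = new Map<string, string>();
  (map.sourcesContent ?? []).forEach((content, i) => {
    if (content !== null) recovered.set(map.sources[i], content);
  });
  return recovered;
}

// A toy cli.js.map: the minified bundle ships alongside it, but the
// map alone contains the complete original TypeScript.
const toyMap = JSON.stringify({
  version: 3,
  sources: ["src/query.ts"],
  sourcesContent: ["export const run = async () => { /* original code */ };"],
  mappings: "AAAA",
});

console.log(recoverSources(toyMap).get("src/query.ts"));
```

This is the whole "leak": anyone who downloaded the npm package could read the original code out of the map with a few lines like these.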

The leak was initially suspected to be caused by a bug in Bun, the JavaScript bundler and runtime now owned by Anthropic, which generates source map files by default despite documentation stating otherwise. However, Bun’s creator clarified that this bug affected only the front-end development server, whereas Claude Code is a backend application and does not use Bun.serve. The actual cause was human error: the developers forgot to add “*.map” to their npm ignore list, so the source map was published unintentionally. The oversight underlines the value of thorough code review, with AI review tools like Greptile recommended for catching mistakes of this kind.
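The missing guard amounts to one glob pattern. The sketch below applies an `*.map`-style ignore rule to a package's file list before publishing; the glob-to-regex translation is a simplification of npm's real ignore semantics, used here only to show the effect of the rule.

```typescript
// Simplified ignore-rule filter: drop any file matching a glob like
// "*.map" from the list of files to publish.

function escapeRegex(s: string): string {
  return s.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
}

function excludeByPatterns(files: string[], patterns: string[]): string[] {
  const regexes = patterns.map(
    (p) => new RegExp("^" + p.split("*").map(escapeRegex).join(".*") + "$")
  );
  return files.filter((f) => !regexes.some((r) => r.test(f)));
}

// With "*.map" in the ignore list, cli.js.map never leaves the machine.
const published = excludeByPatterns(
  ["cli.js", "cli.js.map", "package.json"],
  ["*.map"]
);
console.log(published); // cli.js and package.json only
```

In practice the fix is to add `*.map` to `.npmignore` (or, better, switch to the `files` allow-list in `package.json`) and to sanity-check the tarball contents with `npm pack --dry-run` before publishing.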

Inside the leaked codebase, several internal project codenames and features were revealed. Notable among these are “Tengu” (Claude Code’s internal name), “Opus” (a model variant), “Mythos,” and an unreleased model called “Numbat.” The core of Claude Code’s operation is the query engine module, which manages prompt processing, API calls, streaming responses, and tool integrations. The code also includes a sophisticated permission system with modes such as default, auto, bypass, and “yolo” (skip all permission checks). Additionally, an “undercover mode” exists to keep employees from leaking internal information when contributing to public repositories, a safeguard that was ironically exposed by the leak itself.
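A permission system like the one described can be pictured as a single gate in front of every tool call. The mode names below come from the leak, but the semantics assigned to each are guesses for illustration, not the leaked implementation.

```typescript
// Hedged sketch of a tool-permission gate over the leaked mode names.
// The behavior of each mode is an assumption for illustration.

type PermissionMode = "default" | "auto" | "bypass" | "yolo";

interface ToolRequest {
  tool: string;
  destructive: boolean; // e.g. file writes or shell commands
}

function isAllowed(
  mode: PermissionMode,
  req: ToolRequest,
  userApproved: boolean
): boolean {
  switch (mode) {
    case "bypass":
    case "yolo":
      return true; // skip permission checks entirely
    case "auto":
      return !req.destructive || userApproved; // auto-approve safe tools
    case "default":
      return userApproved; // always require explicit approval
  }
}

console.log(isAllowed("auto", { tool: "read_file", destructive: false }, false));
```

The design point is that every tool invocation funnels through one predicate, so a mode change flips the whole assistant's behavior in a single place.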

Several unreleased and advanced features were uncovered, including “Chyros,” a persistent background assistant that monitors projects and performs tasks without user prompts, and the “Dream system,” which consolidates memory from past sessions to improve future interactions. Another feature, “Coordinator mode,” enables Claude Code to orchestrate multiple worker agents concurrently, enhancing parallel task execution. The “Ultra plan” allows for extended remote planning sessions with a browser-based UI for approval. To protect against competitors reverse-engineering Claude’s behavior, an “anti-distillation” mechanism injects fake tool definitions into API requests, misleading attempts to copy the system.
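"Coordinator mode" as described is essentially a fan-out over worker agents. The sketch below shows the shape of that orchestration with `Promise.all`; the `WorkerAgent` interface and round-robin assignment are invented for illustration, and the leaked orchestration is surely richer.

```typescript
// Minimal coordinator: assign tasks round-robin across workers and
// run them concurrently; results come back in task order.

interface WorkerAgent {
  run(task: string): Promise<string>;
}

async function coordinate(
  workers: WorkerAgent[],
  tasks: string[]
): Promise<string[]> {
  return Promise.all(
    tasks.map((task, i) => workers[i % workers.length].run(task))
  );
}

// Usage with stub workers that just echo their assignment.
const makeWorker = (id: number): WorkerAgent => ({
  run: async (task) => `worker${id}:${task}`,
});

coordinate([makeWorker(0), makeWorker(1)], ["lint", "test", "build"]).then(
  (results) => console.log(results)
);
```

Because the workers run concurrently, the coordinator's wall-clock time is bounded by the slowest worker rather than the sum of all tasks, which is the whole appeal of parallel agent execution.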

Beyond technical details, the leak exposed quirky, humanizing elements of the codebase, such as a “buddy system” with animated companions, a penguin-themed fast mode, and a list of 187 playful spinner verbs shown during processing. The system also filters swear words out of generated IDs and tracks user profanity as a frustration signal to improve the product. Despite the leak’s significance, its industry impact is likely limited: many users pay for Claude Max plans primarily for subsidized inference rather than for access to Claude Code itself. Still, recent usage restrictions imposed without clear communication have frustrated users, underscoring Anthropic’s ongoing challenges with public relations and transparency.
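The swear-word filter on generated IDs is a small retry loop: generate a candidate, reject it if it contains a blocked substring, and try again. The blocklist and generator below are stand-ins, not the leaked word list.

```typescript
// Sketch of scrubbing profanity from generated IDs: regenerate until
// the candidate contains no blocked substring. Blocklist is a stand-in.

const BLOCKLIST = ["damn", "hell"]; // stand-in entries

function generateCleanId(generate: () => string): string {
  let id: string;
  do {
    id = generate();
  } while (BLOCKLIST.some((w) => id.toLowerCase().includes(w)));
  return id;
}

// Usage: a deterministic generator whose first candidate is rejected.
const candidates = ["dAmn42", "ok1234"];
let i = 0;
console.log(generateCleanId(() => candidates[i++])); // "ok1234"
```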