Anthropic’s AI Tool Accidentally Tweets Its Own Secrets

Security researcher discovers 60-megabyte source-map file containing 500,000 lines of Anthropic’s proprietary code

By Rex Freiberger
Image: Deposit Photos

Key Takeaways

  • Anthropic accidentally leaked 500,000 lines of proprietary Claude Code source through npm packaging error
  • Exposed code revealed secret AI features including dreaming mode and undercover GitHub interactions
  • Third major leak raises concerns about $380 billion company’s release process maturity

Racing to catch a coding deadline? That feeling of dread when you realize you’ve pushed the wrong files just hit Anthropic at industrial scale. Security researcher Chaofan Shou discovered something wild in Claude Code’s latest npm package: a 60-megabyte source-map file that essentially handed over the blueprint for Anthropic’s flagship AI coding tool. Within hours, his post racked up 28.8 million views on X, turning corporate embarrassment into developer Christmas morning.

Half a Million Lines of Corporate DNA Exposed

The leaked source-map file revealed more proprietary secrets than a disgruntled employee’s tell-all memoir.

The leaked source-map file contained enough information to reconstruct over 1,900 TypeScript files—roughly 500,000 lines of proprietary code covering everything from internal APIs to encryption protocols. Think of it as accidentally including the director’s commentary track on your blockbuster movie, except the commentary reveals exactly how every special effect was created.
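How does one file give away an entire codebase? Source maps in the standard Source Map v3 format can embed every original file verbatim in an optional `sourcesContent` array, parallel to the `sources` list of file paths. When a bundler inlines that array and the `.map` file ships in the npm package, recovering the original TypeScript is a matter of parsing JSON. The sketch below illustrates the general mechanism against a hypothetical miniature map, not Anthropic's actual file:

```python
import json

def extract_sources(source_map_text: str) -> dict[str, str]:
    """Recover embedded original files from a Source Map v3 document.

    Bundlers often inline each original file's full text in the
    optional "sourcesContent" array, parallel to "sources".
    """
    m = json.loads(source_map_text)
    sources = m.get("sources", [])
    contents = m.get("sourcesContent") or []
    return {
        path: text
        for path, text in zip(sources, contents)
        if text is not None  # entries may be null when content wasn't inlined
    }

# Hypothetical miniature .map payload for illustration only:
demo_map = json.dumps({
    "version": 3,
    "sources": ["src/agent.ts", "src/buddy.ts"],
    "sourcesContent": ["export const agent = 1;", None],
    "mappings": "AAAA",
})

print(extract_sources(demo_map))
# → {'src/agent.ts': 'export const agent = 1;'}
```

Run at scale over a 60-megabyte map, the same trivial extraction yields thousands of original files, which is why shipping source maps for proprietary code is equivalent to shipping the code itself.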

While no customer data or AI model weights were compromised, this wasn’t just any code—it was the “agentic harness” techniques that make Claude models function as autonomous coding assistants.

Secret Features That Sound Like Science Fiction

The exposed capabilities read like a cyberpunk novel where AI assistants have digital pets and secret identities.

The exposed code revealed capabilities that feel straight out of a sci-fi thriller. Claude Code’s “dreaming” mode performs periodic memory consolidation, while “undercover” mode lets the AI hide its identity when interacting on platforms like GitHub. Most intriguing is KAIROS, a persistent background agent that sends notifications and operates independently—plus a Tamagotchi-style “Buddy” pet feature that suggests Anthropic’s thinking beyond pure productivity tools.

Third Strike in Recent Memory

For a $380 billion company eyeing an IPO, repeated packaging failures suggest serious process problems.

This marks Anthropic’s second npm source-map leak within a year, following a similar February 2025 incident. Just days before this latest exposure, another breach revealed roughly 3,000 files including details about their “Mythos/Capybara” model development. For a company valued at $380 billion and eyeing an IPO, these repeated packaging failures raise questions about release process maturity in the cutthroat AI race.

Despite Anthropic’s swift response—issuing over 8,000 DMCA takedown requests—GitHub mirrors accumulated 84,000 stars and forks before disappearing. As one cybersecurity expert told Fortune, “A source code leak of this kind is significant, as it gives software developers and Anthropic’s competitors a blueprint.” Enterprising developers even rewrote the code in other programming languages to evade removal, ensuring these techniques will outlive the takedowns. The leak effectively handed competitors a shortcut through years of R&D—transforming packaging negligence into industry-wide knowledge transfer.
