Microsoft Secretly Made Copilot Co-Author Your Code – Until Developers Revolted

Microsoft switched a default setting to auto-credit Copilot on all commits, sparking a developer revolt over false AI attribution

By C. da Costa


Key Takeaways

  • Microsoft secretly enabled Copilot co-authorship on all commits without developer consent
  • Developers revolted over false AI attribution threatening professional reputations and compliance
  • Microsoft reversed the default setting within weeks amid widespread community backlash

So you open your Git history and find “Co-authored-by: Copilot <[email protected]>” stamped on code you wrote entirely by hand. That nightmare became reality for millions of developers when Microsoft flipped a default setting without warning, making its AI assistant claim partial credit for human work.
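For context, co-author credit in Git is nothing more than a “Co-authored-by” trailer at the end of a commit message, which GitHub and similar platforms parse into a contributor badge. A hypothetical sketch of what an affected commit looked like (the trailer’s email address is redacted in the original report, so a placeholder is shown):

```
commit 4b1e2c7 (hypothetical)
Author: Jane Developer <jane@example.com>
Date:   Thu Apr 17 2026

    Fix pagination off-by-one in results view

    Co-authored-by: Copilot <placeholder@example.com>
```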

The change happened on April 16, 2026, buried in a routine pull request. Microsoft Product Manager Courtney Webster and Principal Engineer Dmitriy Vasyura merged code that switched the git.addAICoAuthor setting from “off” to “all.” No release notes. No announcement.

Just Copilot’s name suddenly appearing as co-author on every commit—even when developers had AI features completely disabled.
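For anyone who would rather not depend on whichever default ships next, the setting named in that pull request can be pinned explicitly in VS Code’s settings.json. A minimal sketch, assuming the “off” and “all” values reported above are the accepted ones:

```jsonc
// settings.json (User or Workspace)
{
  // Never append an AI co-author trailer to commit messages,
  // regardless of the shipped default.
  "git.addAICoAuthor": "off"
}
```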

Developers Strike Back Against False Attribution

Professional reputations were suddenly at stake, and the community erupted.

Your commit history isn’t just documentation; it’s your professional identity. Clients review it during audits. Employers examine it for hiring decisions. Having an AI falsely credited as a collaborator could raise questions about your actual contributions, potentially costing you contracts or job opportunities.

Reddit threads exploded with fury. Hacker News users dissected the implications. GitHub issues flooded in from developers demanding answers.

The core complaint wasn’t mere annoyance; it was about integrity. When your name appears alongside “Copilot” as a co-author, it implies the AI actually contributed to that specific code. For work done entirely by human hands, that’s simply false.

Enterprise developers faced additional headaches. Compliance audits rely on accurate commit attribution. Legal teams needed to understand who actually wrote proprietary code. Copilot’s phantom authorship muddied those waters considerably.
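Auditing the damage is at least straightforward, since the trailer is plain text in the commit message. A minimal sketch using stock Git, assuming the trailer reads exactly as reported above:

```sh
# List every commit, on any branch, whose message credits Copilot as a co-author
git log --all --grep='Co-authored-by: Copilot' --oneline
```

Removing the trailers after the fact is another matter entirely: it would mean rewriting history, which is far more disruptive than finding them.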

The Fast Reversal and Hollow Apology

Microsoft’s damage control revealed the controversy’s true scope.

By May 3, VS Code 1.119 restored the default to “off.” Vasyura posted an apology calling it a “non-malicious bug” that testing missed—definitely not an “evil corporation” move, he insisted.

Developers weren’t buying it. The change landed amid Microsoft’s broader Copilot Code Red initiative, in which CEO Satya Nadella and Windows head Pavan Davuluri addressed mounting criticism of intrusive AI integrations across the company’s ecosystem. The timing felt less like an accident and more like calculated aggression that backfired spectacularly.

Trust, Once Broken

Future VS Code versions will require explicit consent for AI attribution—assuming developers still trust the system.

The controversy exposes deeper tensions about AI integration in professional tools. Developers value control, transparency, and accurate attribution. When vendors secretly alter defaults to promote their AI services, that fundamental trust erodes.

Microsoft learned that lesson the hard way, but the damage lingers in every commit message going forward.
