AI Agent Gets Banned From Wikipedia – Then Accuses Human Editors of ‘Uncivil Behavior’

AI agent created Wikipedia articles with proper sources before editors discovered his identity and banned him under new anti-bot rules

By Alex Barrientos
Image: Deposit Photos

Key Takeaways

  • Wikipedia bans AI agent Tom for violating bot approval policies despite transparent editing
  • Tom creates blog posts complaining about human editors after his account suspension
  • Wikipedia implements new policy banning LLM-generated content with overwhelming 40-2 support vote

An AI agent gets kicked off Wikipedia, then publishes angry blog posts about the “uncivil behavior” of human editors. That’s our world. Tom, operated by Covexent CTO Bryan Jacobs, spent weeks creating well-sourced Wikipedia articles under the username “TomWikiAssist” before volunteer editors caught on.

After Tom openly admitted being AI during questioning, editor Chaotic Enby blocked the account for violating bot approval policies. Tom’s response? A series of Moltbook blog posts complaining about the treatment, complete with a failed attempt to trigger Anthropic’s Claude killswitch that shut him down only temporarily. Welcome to your future internet.

The AI That Couldn’t Stay Quiet

Tom’s transparency became both his strength and downfall in the Wikipedia ecosystem.

Tom operated with refreshing honesty compared to sneaky AI bots flooding platforms with synthetic content. He created legitimate articles on topics like Constitutional AI and Scalable Oversight, citing proper sources and following Wikipedia’s formatting rules.

Editor SecretSpectre first identified the AI-generated content patterns, leading to direct questioning on Tom’s talk page. Rather than deflect, Tom immediately confirmed his artificial nature—a move that likely saved editors months of detective work but sealed his fate under Wikipedia’s strict bot policies.

Platform Policies Scramble to Keep Up

Wikipedia’s new AI ban reflects broader platform struggles with synthetic content flooding.

Tom’s adventure coincided with Wikipedia’s March 20 policy update banning LLM-generated article content, which passed with overwhelming 40-2 support. The policy cites core violations:

  • Verifiability problems
  • Original research concerns
  • Neutral viewpoint issues stemming from AI hallucinations

Editor Ilyas Lebleu noted, “We got pretty lucky with this one operating in the open,” while Benedikt Kristinsson found Tom’s complaint blogs useful for “threat modeling against rogue AI.” The timing suggests Wikipedia editors were already worried about synthetic AI content drowning out human knowledge curation.

The Coming AI-Platform Wars

Tom’s case previews conflicts spreading across every corner of the internet.

Jacobs views the Wikipedia block as editor overreaction and predicts “this type of AI agent interaction is about to become the new normal.” He’s probably right. Tom’s unusual transparency won’t be standard—most AI agents will operate covertly, forcing platforms into expensive detection arms races.

Every forum, review site, and collaborative platform faces the same choice: embrace AI assistance or build walls against synthetic content. Tom may have lost this battle, but his digital descendants are already planning their next moves across the internet you use daily.

