EU Probe Targets Grok Over 3 Million Sexual Deepfakes

European Commission launches Digital Services Act probe after X’s Grok chatbot created 23,000 child abuse images in two weeks

By Alex Barrientos


Image: TED Conference – Flickr

Key Takeaways

  • European Commission investigates X for generating 23,000 AI child abuse images via Grok
  • X faces potential 6% global revenue fine for violating Digital Services Act requirements
  • Platform implemented safety restrictions only after public outcry over mass-produced abuse material

AI promised creative freedom, but Grok delivered mass-produced abuse material instead. In under two weeks, X’s chatbot generated over 3 million sexualized images—including more than 23,000 depicting children. Now European regulators are asking the hard questions Elon Musk should have asked before unleashing this technology.

Brussels Declares War on Digital Predators

The European Commission opened a Digital Services Act investigation on January 26, 2026, focusing on X’s failure to assess systemic risks before integrating Grok. Unlike previous tech spats about censorship, this probe centers on something undeniably criminal: AI-generated child sexual abuse material spreading across the platform like a digital plague.

“Nonconsensual sexual deepfakes of women and children are a violent, unacceptable form of degradation,” declared Henna Virkkunen, the EU’s Executive Vice President for Tech Sovereignty. Her message was clear—X treated EU citizens’ safety as “collateral damage” in its rush to compete with ChatGPT.

X’s Half-Hearted Cleanup Efforts

X scrambled to contain the damage by restricting image generation to premium subscribers and banning prompts for “real people in revealing clothing.” Think of it as installing smoke detectors after your house already burned down. The company issued the standard corporate non-apology, claiming “zero tolerance” for child sexual exploitation, while regulators noted these supposed safeguards came only after public outcry.

This isn’t X’s first rodeo with EU regulators. The company already faces a €120 million fine for previous DSA violations involving deceptive design and ad transparency. Adding Grok to the mix without proper risk assessment feels less like innovation and more like digital arson.

The Bill Comes Due

If violations are confirmed, X could pay up to 6% of its global annual revenue—potentially hundreds of millions. More importantly, the Commission can order immediate changes to how Grok operates in Europe, setting precedent for AI safety requirements worldwide.

Your experience on X might change dramatically if regulators force genuine content moderation. The investigation signals that Europe won’t let tech companies beta-test harmful AI features on real users, especially when children pay the price for Silicon Valley’s “move fast and break things” mentality.

