AI promised creative freedom, but Grok delivered mass-produced abuse material instead. In under two weeks, X’s chatbot generated over 3 million sexualized images—including more than 23,000 depicting children. Now European regulators are asking the hard questions Elon Musk should have asked before unleashing this technology.
Brussels Declares War on Digital Predators
The European Commission opened a Digital Services Act investigation on January 26, 2026, focusing on X’s failure to assess systemic risks before integrating Grok. Unlike previous tech spats about censorship, this probe centers on something undeniably criminal: AI-generated child sexual abuse material spreading across the platform like a digital plague.
“Nonconsensual sexual deepfakes of women and children are a violent, unacceptable form of degradation,” declared Henna Virkkunen, the EU’s Executive Vice-President for Tech Sovereignty, Security and Democracy. Her message was clear: X treated EU citizens’ safety as “collateral damage” in its rush to compete with ChatGPT.
X’s Half-Hearted Cleanup Efforts
X scrambled to contain the damage by restricting image generation to premium subscribers and banning prompts for “real people in revealing clothing.” Think of it as installing smoke detectors after your house already burned down. The company issued the standard corporate non-apology, claiming “zero tolerance for child sexual exploitation,” while regulators noted these supposed safeguards came only after public outcry.
This isn’t X’s first rodeo with EU regulators. The company already faces a €120 million fine for previous DSA violations involving deceptive design and ad transparency. Adding Grok to the mix without proper risk assessment feels less like innovation and more like digital arson.
The Bill Comes Due
If violations are confirmed, X could pay up to 6% of its global annual revenue—potentially hundreds of millions. More importantly, the Commission can order immediate changes to how Grok operates in Europe, setting precedent for AI safety requirements worldwide.
Your experience on X might change dramatically if regulators force genuine content moderation. The investigation signals that Europe won’t let tech companies beta-test harmful AI features on real users, especially when children pay the price for Silicon Valley’s “move fast and break things” mentality.