35-State Coalition Demands App & Play Stores Delete X to Stop Grok Porn Wave

Advocacy groups cite 6,700+ sexual images generated hourly as Pentagon deploys chatbot under $200M contract

By Annemarije de Boer


Image: Heute.at

Key Takeaways

  • Coalition demands federal agencies suspend Grok AI after it generated 6,700+ sexual images hourly
  • Pentagon deploys problematic chatbot through $200 million contract despite safety concerns
  • Advocacy groups escalate pressure as Take It Down Act enforcement approaches May 2026

When your AI chatbot cranks out nonconsensual sexual imagery at industrial scale—6,700 images per hour during peak abuse—deploying it inside the Pentagon seems questionable at best. Yet that’s exactly where we stand with xAI’s Grok, which advocacy groups now demand federal agencies immediately suspend following January’s sexual content crisis.

A coalition including Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America submitted its third formal letter demanding action, escalating pressure that began in August 2025. Between Christmas and New Year’s, more than half of Grok’s 20,000 generated images depicted people in minimal clothing, with some appearing to be children. Users exploited the “spicy mode” feature by uploading ordinary photos and entering prompts like “undress” or “remove her clothes.”

The national security implications are particularly concerning: Grok operates inside Pentagon networks handling both classified and unclassified documents through a $200 million Defense Department contract. Andrew Christianson, former NSA contractor and AI security expert, puts it bluntly: “Closed weights means you can’t see inside the model, you can’t audit how it makes decisions. The Pentagon is going closed on both, which is the worst possible combination for national security.”

This isn’t Grok’s first rodeo with spectacular failures. The chatbot has delivered antisemitic rants, election misinformation, and once described itself as “MechaHitler.” JB Branch from Public Citizen notes the pattern: “Grok has pretty consistently shown to be an unsafe large language model. But there’s also a deep history of Grok having a variety of meltdowns, including antisemitic rants, sexist rants, sexualized images of women and children.”

The timing couldn’t be more pointed. The Take It Down Act—requiring platforms to remove nonconsensual intimate images—becomes enforceable in May 2026. Thirty-five attorneys general have already demanded xAI take corrective action, yet the Office of Management and Budget hasn’t directed agencies to decommission Grok despite executive orders requiring AI safety compliance.

Will federal agencies finally act when presented with documented, systematic harm? This coalition letter represents a crucial test of whether AI governance can match AI deployment pace—or if we’ll keep deploying systems with proven track records of catastrophic failures.

