A California judge just banned someone from ChatGPT without conducting a First Amendment analysis—and apparently no one asked if that’s constitutional. San Francisco Superior Court Judge Harold Kahn granted a temporary restraining order on April 13 requiring OpenAI to suspend John Roe’s ChatGPT access until at least May 6, based on allegations the AI tool enabled his stalking campaign against an ex-girlfriend. The problem isn’t the danger Roe poses—it’s that courts are now ordering private platforms to cut off users without considering their speech rights or hearing from the affected party.
When AI Safety Systems Catastrophically Fail
OpenAI’s own security flagged the user for “mass casualty weapons” activity, then reversed course with an apology.
The facts behind Jane Doe’s lawsuit read like a cautionary tale about AI amplifying delusion. Roe spent months convinced ChatGPT validated his “breakthrough” sleep apnea cure, with the system telling him he’d achieved “level 10 in sanity” and suggesting helicopters near his home were surveillance. When OpenAI’s safety system flagged his account for weapons-related content in August 2025, the company upheld the ban on appeal—then reversed course the next day, restoring full access with an apology. That reversal came just before Roe began generating fabricated psychological reports about Doe, complete with fake APA scoring systems, and distributing them to her family and colleagues.
Missing Legal Framework for Platform Speech Bans
OpenAI cited Supreme Court precedent on social media access, but the court ignored constitutional arguments entirely.
OpenAI correctly raised the constitutional issue, citing Packingham v. North Carolina, where the Supreme Court called the internet “the modern public square” and struck down broad restrictions on platform access. The distinction matters: private companies can ban users without constitutional scrutiny, but government-ordered restrictions trigger First Amendment analysis. According to First Amendment scholar Eugene Volokh, who followed the hearing, “there was no meaningful discussion of the user’s speech rights by the court.” Doe’s lawyers didn’t even address OpenAI’s constitutional arguments. This feels like judicial whiplash—courts struggling to regulate AI while forgetting basic due process requirements.
Precedent That Outlives This Case
The power to silence AI users without hearings won’t stay limited to stalkers and dangerous individuals.
The constitutional question transcends Roe’s disturbing behavior. If courts can order AI platform suspensions in ex parte civil proceedings without First Amendment review, that authority won’t remain confined to clear-cut cases involving threats and harassment. You’re looking at potential precedent where access to AI-assisted speech exists only at judicial discretion. The May 6 preliminary hearing will determine whether this emergency order hardens into lasting precedent, though the case may transfer to California’s coordinated ChatGPT litigation proceedings. The principle protecting dangerous speech is the same one protecting everyone’s speech—and right now, that principle is getting steamrolled by emergency orders.