‘The Face of Evil’: Sam Altman Attacked in Explosive Legal Filing Over School Shooter Failure

Seven families allege OpenAI ignored safety warnings about shooter Jesse Van Rootselaar eight months before massacre

By Alex Barrientos
Image: TechCrunch – Wikimedia Commons

Key Takeaways

  • Seven families sue OpenAI after ChatGPT user killed eight people at school
  • Company ignored safety team warnings eight months before February shooting massacre
  • OpenAI deactivated threat account but helped shooter re-register with new credentials

When lawyer Jay Edelson branded OpenAI CEO Sam Altman “the face of evil” this week, he wasn’t just throwing legal theatrics around. Seven families from the Tumbler Ridge school shooting filed lawsuits Wednesday, alleging OpenAI deliberately ignored safety protocols that could have prevented February’s massacre. Your trust in AI platforms just got a brutal reality check.

The company’s own safety team flagged shooter Jesse Van Rootselaar’s ChatGPT account as a credible gun violence threat in June 2025—eight months before he killed eight people and wounded 27 at a secondary school in the small British Columbia mining town.

Corporate Priorities Override Public Safety

Leadership chose account deactivation over police notification, then helped the user re-register with new credentials.

Here’s where corporate accountability meets real-world carnage. OpenAI leadership overruled their safety team’s recommendation to notify police about the threat. Instead, they deactivated the account and provided instructions for re-registration with a new email address.

Police already knew about Van Rootselaar and had previously removed guns from his home, but OpenAI never connected those dots. It was like handing someone new car keys after their license was revoked for drunk driving. The decision prioritized user privacy over community safety—a calculation that proved devastatingly wrong when Van Rootselaar opened fire months later.

Human Cost Behind Corporate Calculations

Twelve-year-old Maya Gebala remains in intensive care after four brain surgeries from gunshot wounds.

The lawsuits paint a crushing human picture behind OpenAI’s valuation concerns. Maya Gebala, just 12 years old, took three bullets to her head, neck, and cheek. She’s endured four brain surgeries and remains in intensive care.

Her family, along with six others, can’t access the chat logs that might explain what Van Rootselaar discussed with ChatGPT before his rampage. OpenAI’s refusal to release those conversations feels “cruel,” according to the suits. Edelson called Altman’s eventual apology “ridiculous” given the eight-month silence.

New Protocols Promise Better Future Detection

Altman apologizes publicly while OpenAI claims current systems would now trigger police notifications.

Altman has publicly apologized to Tumbler Ridge residents, admitting the company erred in not alerting law enforcement. OpenAI now claims a zero-tolerance policy on assisting violence and promises enhanced threat escalation, mental health referrals, and detection of repeat violators.

They insist current protocols would notify police in similar situations. The California lawsuits allege violations of state laws requiring threat reporting and prohibiting re-supply of dangerous tools. This case could set precedent for AI platform liability—meaning your safety expectations from tech companies might finally carry legal weight.
