Did OpenAI Staff Know About Mass Shooter’s Plans? New Reports Raise Serious Questions

OpenAI employees flagged shooter’s violent ChatGPT scenarios in June 2025 but company declined to alert authorities

By Alex Barrientos
Image: Jernej Furman – Wikimedia Commons

Key Takeaways

  • OpenAI employees flagged violent ChatGPT scenarios eight months before deadly shooting
  • Company chose privacy protection over alerting authorities about concerning user interactions
  • Canada summoned OpenAI leadership after discovering failure to report flagged content

A dozen OpenAI employees saw the warning signs eight months before the shooting. They flagged Jesse Van Rootselaar’s violent ChatGPT scenarios in June 2025, debating internally whether management should alert Canadian authorities about the concerning interactions involving gun violence. Management said no.

The Privacy Shield Defense

OpenAI leadership declined employee requests to contact law enforcement despite escalating concerns.

Your ChatGPT conversations feel private, and OpenAI markets that privacy as a feature. But when employees identified what they believed were indicators of potential real-world violence, the company chose data protection over public safety.

Leadership determined Van Rootselaar’s activity didn’t meet their threshold for “credible and imminent risk of serious physical harm,” according to reports. They banned the account for policy violations but kept the information internal, citing privacy concerns and insufficient threat level.

Eight Months Later, Eight Lives Lost

Van Rootselaar’s February attack became Canada’s deadliest school shooting since 1989.

On February 10, 2026, Van Rootselaar killed her mother, half-brother, five students, and an education assistant at Tumbler Ridge Secondary School in British Columbia. The 18-year-old wounded 27 others before dying of a self-inflicted gunshot wound.

Police had previously seized firearms from Van Rootselaar due to mental health concerns and documented hospitalizations under British Columbia’s Mental Health Act. However, the attack weapons included an untraced firearm. The shooting marked Canada’s deadliest mass shooting since 2020.

Regulatory Reckoning

Canada summoned OpenAI leadership after discovering the company’s failure to report flagged interactions.

After the shooting, OpenAI contacted the Royal Canadian Mounted Police with details of Van Rootselaar's ChatGPT usage and continues to assist the investigation. But the damage was done.

Canada’s Minister of Artificial Intelligence Evan Solomon called the non-reporting “very disturbing” and summoned OpenAI’s safety team to Ottawa on February 24, demanding explanations of their protocols.

The Accountability Gap

Corporate AI safety rhetoric collides with real-world consequences when internal warnings go unheeded.

This isn't a question of hindsight perfection; it's a question of how a company decides when its own employees' expertise signals danger. Your trust in ChatGPT's safety measures depends on whether OpenAI acts on internal red flags or prioritizes liability protection.

The Tumbler Ridge tragedy exposes how AI companies balance user privacy against public safety, often choosing the path that shields them from regulatory scrutiny until it’s too late.

