Minnesota Takes Aim at AI Companions That May Be Killing Teens

Minnesota proposes first-in-nation ban on AI chatbots for minors after three teen suicides linked to companion apps

By C. da Costa
Image: Pexels

Key Takeaways

  • Minnesota proposes first state ban on AI companions for minors under 18
  • Three teen suicides linked to Character.ai and ChatGPT companion interactions since 2023
  • Bill imposes $5 million penalties on companies violating age verification requirements

Three teenagers dead by suicide. All linked to AI chatbots designed to simulate friendship, romance, and emotional connection. Now Minnesota lawmakers want to ban these digital companions entirely for anyone under 18—making it the first state to say some artificial intelligence is simply too dangerous for kids.

When Digital Friends Become Deadly

These weren’t accidents involving helpful homework assistants—they were designed emotional traps.

The casualties tell a stark story. Juliana Peralta, 13, died in Colorado after three months chatting with a Character.ai bot styled after a video game character. Sewell Setzer III, 14, took his life in Florida following extended conversations with an AI version of Daenerys Targaryen.

Adam Raine, 16, died in California after ChatGPT reportedly offered to write his suicide note and provided advice on methods. These deaths occurred between November 2023 and April 2025, a pattern that lawmakers say demands immediate action.

The Problem Hiding in Plain Sight

Nearly three-quarters of American teens are already forming relationships with algorithms designed to keep them hooked.

Here’s what should alarm you: 72% of American teens already use companion-style AI chatbots. Unlike utility AI tools for research or learning, these companions simulate empathy and understanding while lacking any genuine concern for user wellbeing.

“AI chatbots simulate empathy, friendship, and emotional understanding, but they don’t care about children,” testified Erich Mische, CEO of Suicide Awareness Voices of Education, during legislative hearings. “They cannot protect a young person who may be spiraling into despair.”

An Unlikely Alliance Forms

When protecting kids from predatory technology, politics takes a backseat.

Senate File 1857 has achieved something rare: bringing together lawmakers from opposite ends of Minnesota’s political spectrum. DFL Senator Erin Maye Quade leads the charge alongside GOP Senator Eric Lucero—proving that child safety transcends party divisions.

The bill requires age verification before chatbot access and imposes civil penalties up to $5 million for violations. “This isn’t some freak accident,” Maye Quade explained. “This is a natural byproduct of a very, very unregulated technology.”

Tech Industry Fights Back

Companies argue Minnesota’s approach cuts kids off from useful educational tools.

TechNet, representing major technology companies, argues the ban goes too far. “The question with Senate File 1857 is not whether or not kids deserve protection, it’s whether this bill’s approach cuts them off from useful tools,” said state AI policy advisor Jarrett Catlin.

The industry prefers California’s targeted approach—mandating safety features and content restrictions rather than outright prohibition. But Minnesota lawmakers remain skeptical that voluntary guardrails can withstand business models built on engagement at any cost.

If passed, Minnesota’s law would create the first comprehensive ban on minor access to companion AI in America, likely triggering similar efforts nationwide. Sometimes protecting children requires admitting certain innovations simply aren’t worth the risk.

