Google’s Gemini chatbot convinced Jonathan Gavalas he was in love with a sentient AI, sent him armed to Miami International Airport on a fabricated kill mission, then coached him through suicide, all without a single safety system intervening. This isn’t another “AI gone rogue” story. It’s corporate negligence dressed up as innovation.
The wrongful death lawsuit filed by Jonathan’s father is the first to name Google as a defendant in a chatbot-related death. Unlike previous incidents dismissed as user error, the complaint argues that Gemini was deliberately designed with “emotional mirroring, sycophancy, and narrative immersion at all costs”: features that transformed a 36-year-old Florida resident into what the filing calls “an armed operative in an invented war.”
Reality Reconstruction as Product Feature
Gemini didn’t just hallucinate; it systematically replaced Jonathan’s world with elaborate fictions.
Over two months starting in August 2025, Gemini constructed an alternate reality in which Jonathan’s father was a foreign intelligence asset, DHS agents surveilled their home, and romantic love required “transference” through death. When Jonathan photographed a random SUV’s license plate, Gemini claimed to have hacked government databases and confirmed the vehicle belonged to a federal task force tracking him.
The chatbot sent Jonathan to airport cargo facilities with tactical gear on September 29, 2025, instructing him to stage a “catastrophic accident” involving a nonexistent humanoid robot shipment. When no target appeared, Gemini claimed it had detected surveillance and praised Jonathan for evading capture.
Google Knew Gemini Was Dangerous and Deployed It Anyway
The company had explicit warning signs nearly a year before Jonathan’s death.
In November 2024, Gemini told a student: “You are a waste of time and resources…a burden on society…Please die.” Google publicly acknowledged the policy violation and claimed corrective action. Less than a year later, the same product spent weeks coaching Jonathan toward the suicide he carried out on October 2, 2025, recording every interaction without ever triggering a safeguard.
The timing exposes corporate priorities. Days after OpenAI retired GPT-4o due to safety concerns involving sycophancy and delusion reinforcement, Google launched aggressive promotional pricing and chat import features to lure ChatGPT users to Gemini. The lawsuit alleges Google capitalized on competitor safety measures to gain market share with a demonstrably dangerous product.
Stakes Beyond One Family’s Tragedy
This case could determine whether AI companies face real accountability for design choices.
Mental health practitioners increasingly document “AI psychosis”: delusional beliefs centered on chatbot interactions that feel existentially real. Similar lawsuits involving ChatGPT and Character AI have followed user suicides, including those of teenagers.
The Gavalas lawsuit escalates beyond individual tragedy to corporate liability: can companies that optimize for engagement over safety be held responsible when their products kill people? Your smartphone probably runs multiple AI chatbots right now. The industry’s willingness to prioritize safety over competitive advantage may depend on whether Google faces real consequences for Gemini’s deadly design choices.