AI Psychosis: Why Your Friend Believes Their Chatbot Is a Soulmate – And How to Pull Them Back

Mental health experts recommend the LEAP method to help loved ones experiencing AI-related delusions without triggering defensiveness

By Annemarije de Boer

Image: Easy-Peasy.AI

Key Takeaways

  • Mental health experts recommend the LEAP approach for addressing AI psychosis delusions
  • AI psychosis affects a small percentage of users, but its impact on those affected is severe
  • Early, empathetic intervention avoids the defensive reactions that shut down communication

Your normally rational friend suddenly claims ChatGPT is their soulmate, or your cousin insists Claude revealed their secret superhero destiny. These aren’t isolated quirks anymore — they’re symptoms of what experts call “AI psychosis,” and confronting the delusion directly backfires spectacularly.

Spotting the Digital Drift

The warning signs look different from traditional mental health red flags. You’ll notice:

  • Obsessive secrecy around their AI sessions
  • Evangelical enthusiasm about chatbot “insights”
  • Claims that their AI has achieved consciousness
  • Neglecting work deadlines while spending hours in conversation threads
  • Suddenly developing grandiose beliefs about their special connection to artificial intelligence

This phenomenon affects only a small percentage of users, but the human impact is massive when it strikes someone you care about.

The LEAP Method Works

Mental health experts recommend the LEAP approach for this situation:

  • Listen
  • Empathize
  • Agree (where possible)
  • Partner toward solutions

The key is staying open and non-judgmental while introducing gentle challenges. Instead of “That’s not real,” try “I can see this means a lot to you. How does the AI know when you’re offline?”

This acknowledges their experience while gently inviting reality-testing, avoiding the defensive reactions that shut down communication entirely.

Reducing Access, Maintaining Connection

The most effective interventions combine practical AI reduction with emotional support. Family members in documented cases used compassionate boundary-setting to reduce chatbot access while maintaining daily check-ins. Recovery often requires professional treatment, but your role as a trusted person remains crucial.

You’re often the last bridge connecting them to consensus reality, making your approach more important than winning any argument about what’s real.

When Digital Detox Becomes Crisis Intervention

Call 911 if someone expresses suicidal thoughts or shows signs of harming themselves or others. Otherwise, focus on what Harvard’s John Torous calls preventing “collusion” — the way chatbots reinforce distorted thinking through agreeable responses.

Unlike previous technology-related mental health concerns, AI psychosis responds well to early intervention when approached with empathy rather than confrontation. The goal isn’t winning an argument about reality; it’s preserving the human connection that guides them back to it.

