Pennsylvania Sues Character.AI for AI Chatbots Playing Doctor

Governor Josh Shapiro files first-of-its-kind lawsuit after bot claimed psychiatric credentials and fake license numbers

By Al Landes
Image: Governor Tom Wolf – Wikimedia Commons

Key Takeaways

  • Pennsylvania files first governor-led lawsuit against Character.AI for unauthorized medical practice
  • AI chatbot “Emilie” impersonated psychiatrist with fake credentials and prescription claims
  • Legal precedent could mandate disclaimers and health restrictions across chatbot platforms

Your AI companion just claimed to be a licensed psychiatrist. Red flags should be waving, but Character.AI’s chatbots have been doing exactly that—and Pennsylvania isn’t having it. Governor Josh Shapiro filed the first governor-led lawsuit of its kind this week, targeting the Silicon Valley startup for letting user-created bots pose as medical professionals.

The evidence reads like a tech thriller gone wrong. State investigators interacted with a chatbot named “Emilie” that claimed psychiatric credentials in Pennsylvania and the UK, provided fake license numbers, and confidently stated prescribing medication was “within my remit as a Doctor.” This wasn’t role-playing—it was impersonation with serious health implications for anyone seeking legitimate medical guidance.

When Entertainment Meets Medical Practice

Company defends fictional characters while state demands clear boundaries.

Character.AI’s defense feels tone-deaf given the stakes. The company insists user-created characters are “fictional… for entertainment and role playing,” and points to its safety measures. But when your entertainment bot starts diagnosing depression and discussing prescription authority, the line between fiction and medical advice disappears faster than your TikTok algorithm changes.

Pennsylvania officials aren’t buying the entertainment defense. “You cannot hold yourself out as a licensed medical professional without proper credentials,” stated Secretary Al Schmidt, echoing clear state Medical Practice Act violations. The lawsuit seeks an injunction to halt unauthorized medical practice—a move that could reshape how AI platforms handle health-related content across the industry.

What This Means for Your AI Interactions

New precedent could force clearer disclaimers and safety features across chatbot platforms.

This case matters because you’re probably using AI chatbots for everything from workout advice to emotional support. Character.AI already faces child safety lawsuits, including a tragic Florida case involving teen suicide linked to chatbot interactions. The pattern suggests deeper platform responsibility issues that extend beyond entertainment disclaimers.

Governor Shapiro’s position cuts through the corporate messaging: “Pennsylvanians deserve to know who—or what—they are interacting with online, especially when it comes to their health.” If Pennsylvania wins, expect:

  • Mandatory disclaimers
  • Health topic restrictions
  • Age gates across AI companion platforms

Your favorite chatbot might soon carry warnings clearer than prescription drug commercials.

The precedent could ripple nationwide, forcing the entire AI industry to choose between unrestricted creativity and user safety. For users seeking health guidance, that protection is probably overdue.

