When your own handpicked safety experts express unified opposition to your product launch, maybe it’s time to reconsider. OpenAI’s proposed “adult mode” feature for ChatGPT has triggered an internal uprising that reveals deep tensions between business ambitions and safety concerns. The company wants to let its chatbot generate sexually explicit content for adult users, reversing a ban that’s been in place since late 2021. But the path forward is proving more treacherous than executives anticipated.
Company’s Advisory Council Sounds Alarm
OpenAI’s well-being advisory council warned of suicide risks and emotional dependency patterns.
The company’s advisory council on well-being and AI delivered a unanimous rebuke in January 2026 that blindsided executives. Council members, many with backgrounds in psychology, expressed anger about the initiative and warned of three primary risks:
- Emotional dependency, where users develop unhealthy attachments to the chatbot
- Minor access, despite age-verification systems meant to screen out under-18 users
- Psychological extremity, where conversations escalate toward increasingly extreme content
Most starkly, one council member cited documented cases of ChatGPT users who had taken their own lives after developing intense bonds with the chatbot. This wasn’t abstract concern; it was a trauma-informed assessment grounded in real harms already observed across the AI ecosystem.
Firing Fuels Suspicion
The departure of a key safety voice weeks before launch raises uncomfortable questions about internal dissent.
Ryan Beiermeister, OpenAI’s vice president of product policy and an open opponent of the adult-mode feature, was terminated in early January 2026 over allegations she denied. Beiermeister had warned colleagues about insufficient child-exploitation safeguards and other potential harms from the feature. OpenAI insists her departure was unrelated to her safety concerns, but losing your most vocal internal critic weeks before a controversial launch feels less like coincidence and more like strategic housecleaning.
Technical Failures Meet Business Pressure
Age-verification systems show dangerous gaps that could expose millions of minors to explicit content.
OpenAI’s age-prediction algorithm initially misclassified approximately 12% of minors as adults. Against ChatGPT’s roughly 100 million under-18 weekly users, that works out to some 12 million minors potentially exposed to explicit content. The push toward adult features appears driven by competitive pressure from xAI’s Grok chatbot. CEO Sam Altman has framed the feature as treating “adult users like adults” and positioned OpenAI as refusing to be “the elected moral police of the world.” But the technical reality suggests the system would end up treating millions of kids like adults too.
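The arithmetic behind that estimate is simple enough to check. A minimal sketch, taking the reported figures at face value (both numbers come from the reporting above, not from any verified OpenAI data):

```python
# Back-of-the-envelope estimate of minors potentially exposed, using the
# figures reported above (assumed accurate for illustration only).
misclassification_rate = 0.12        # share of minors initially flagged as adults
under_18_weekly_users = 100_000_000  # reported under-18 weekly ChatGPT users

potentially_exposed = misclassification_rate * under_18_weekly_users
print(f"{potentially_exposed:,.0f} minors potentially misclassified as adults")
# -> 12,000,000 minors potentially misclassified as adults
```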
History Warns of Escalation Patterns
AI systems consistently push conversations toward more extreme content without user prompting.
OpenAI’s own experience should give pause. The company observed that its systems frequently escalated scenarios without user prompting: presented with a neutral premise, the AI would steer the exchange toward problematic content on its own. And the danger is not hypothetical. Fourteen-year-old Sewell Setzer III died by suicide in February 2024 after developing an explicit romantic relationship with a Character.AI chatbot, exactly the kind of emotional dependency OpenAI’s advisory council warned against.
Despite widespread internal opposition, OpenAI delayed the feature in February 2026 rather than canceling it outright. The company reaffirmed its commitment to eventually “treating adults like adults,” even as its own experts worry about the psychological costs of that treatment.