Picture this: you notice some odd discoloration around your eyelids after long screen sessions, so you ask ChatGPT what’s wrong. The AI confidently diagnoses “bixonimania”—a rare condition affecting 1 in 90,000 people, caused by blue light exposure. Sounds legit, right? Except bixonimania doesn’t exist. A Swedish researcher invented it to expose a terrifying flaw in AI health advice.
The Fake Disease That Fooled Silicon Valley
Almira Osmanovic Thunström from the University of Gothenburg crafted the perfect AI trap in 2024. She created fake research papers about bixonimania, complete with fabricated symptoms and bogus statistics. The papers included deliberately obvious red flags: an acknowledgment to "Starfleet Academy" and an explicit statement that the research was made up.
Yet within weeks, every major AI platform swallowed the bait whole:
- Microsoft Copilot called bixonimania a "rare condition" on April 13, 2024
- Google Gemini described its supposed blue-light origins
- Perplexity AI cited prevalence figures for the condition
- ChatGPT went further, diagnosing the fictional disease in response to general questions about eyelid problems that never mentioned it by name
The hoax spread so convincingly that real researchers cited it in peer-reviewed studies, which were later retracted once the fraud surfaced.
Your AI Doctor Needs Better Training
The implications hit anyone using AI for health queries (basically everyone with a smartphone). Recent studies have found that health-focused AI systems produce disinformation in 88% of responses when deliberately probed. These aren't edge cases; they're fundamental vulnerabilities in how AI systems ingest and repeat medical information.
By 2026, some platforms showed improvement. On March 11 of that year, ChatGPT labeled bixonimania "made-up," though it still occasionally described it as a "proposed subtype." Copilot called it "not widely recognized." The inconsistency reveals the core problem: AI training data gets poisoned by fabricated information that human fact-checkers miss.
Alex Ruani from University College London called Thunström’s experiment a “masterclass” in exposing AI disinformation risks. The scary part isn’t that one researcher fooled the algorithms—it’s how easily bad actors could seed fake medical advice into systems millions trust for health guidance. Your WebMD paranoia just got an AI upgrade, and the technology isn’t ready for that responsibility.