Tech CEO paranoia just became justified. Daniel Moreno-Gama, a 20-year-old from Texas, allegedly threw a Molotov cocktail at OpenAI CEO Sam Altman’s San Francisco home early Friday morning, then threatened to burn down the company’s headquarters while carrying kerosene and anti-AI writings.
The attack wasn’t random: court documents reveal Moreno-Gama had discussed “Luigi’ing some tech CEOs” in an anti-AI Discord group months earlier. The phrase references the December killing of UnitedHealthcare CEO Brian Thompson, for which Luigi Mangione has been charged, and shows how online extremism crosses into real-world violence.
Discord Discussions Turn Deadly
Anti-AI chat rooms became planning grounds for real-world violence against industry leaders.
The Stop AI Discord server, where tech doomers gather to share apocalyptic predictions, became Moreno-Gama’s staging ground. In December, he asked the group about discussing violence, later telling the team behind “The Last Invention” podcast that he had meant “Luigi’ing some tech CEOs.”
These online spaces have shifted from theoretical hand-wringing about artificial general intelligence to concrete threats against the people building it. You can trace a direct line from keyboard warriors to actual weapons.
Walking Back Words, Throwing Fire
Suspect downplayed violent rhetoric in interviews while allegedly planning actual attacks.
During a January podcast interview with host Andy Mills, Moreno-Gama backpedaled hard. He called his “Luigi’ing” comment provocative rather than literal, claiming violence against Altman or anyone else was “not worth it” and “not practical.” His defense team now describes him as a “deeply intelligent and peaceful young man” experiencing a mental crisis.
Yet prosecutors say he showed up at Altman’s residence with incendiary devices and enough kerosene to make his threats real. That disconnect between peaceful words and violent actions reads like a social media generation’s approach to extremism: ironic detachment until it isn’t.
Industry Reckoning With Real Consequences
Silicon Valley security protocols face new scrutiny as copycat incidents spread beyond healthcare.
Altman responded in a measured blog post, acknowledging valid AI criticism while urging everyone to “de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes.” No one was injured, but the message landed: the AI debate is no longer confined to forum posts and op-eds.
Authorities warn of copycat incidents, pointing to similar cases where extremists invoke Mangione’s name in violent attacks. Tech executives who once worried about regulatory capture now hire personal security details, wondering if their next conference appearance might be their last.