A 20-year-old’s manifesto containing names and addresses of AI executives turned a predawn attack on Sam Altman’s San Francisco home into something far more sinister. Daniel Moreno-Gama didn’t just target OpenAI’s CEO—court documents reveal he compiled a hit list of industry leaders while warning of AI’s “impending extinction” of humanity. The incident raises questions about executive security in the rapidly evolving AI industry.
From Molotov Cocktails to Corporate Headquarters
The suspect escalated from attacking a private residence to a workplace assault within hours.
Moreno-Gama threw an incendiary device at Altman’s residence around 4 a.m. Friday, igniting an exterior gate before witnesses extinguished the flames. He didn’t stop there. After the home attack, he traveled to OpenAI headquarters, smashed glass doors with a chair while carrying kerosene, and threatened security personnel, vowing to “burn it down and kill anyone inside.” San Francisco police arrested him on the spot, but the damage to industry confidence was already spreading.
Manifesto Reveals Broader Extremist Plot
His writings connected AI safety research to calls for violence against tech leaders.
The document found on Moreno-Gama outlined his anti-AI ideology with chilling specificity. “If I am going to advocate for others to kill and commit crimes, then I must lead by example,” he wrote, according to court filings dated April 2026. In social media posts from the summer of 2024, he called AI CEOs “sociopathic” and labeled Altman a “pathological liar.” This wasn’t random tech-bro rage; it was calculated extremism dressed up in the language of AI safety, a field whose existential-risk concerns legitimate researchers have been debating for years.
Federal Terrorism Charges Loom Large
Prosecutors signal domestic terrorism classification could bring life sentences.
State charges of attempted murder and arson carry a sentence of 19 years to life, but federal prosecutors aren’t finished. U.S. Attorney Craig Missakian stated, “If the evidence shows… we will treat this as an act of domestic terrorism.” The FBI searched Moreno-Gama’s Texas home on Monday following the Friday attack, seizing journals that could reveal additional plotting. Federal charges already include possession of an unregistered firearm and destruction of property by means of explosives, even though no gun was found during his arrest.
The attack forces uncomfortable questions about where legitimate AI safety concerns end and dangerous extremism begins. While researchers debate existential risks through academic papers, Moreno-Gama chose Molotov cocktails. Other AI executives named in his manifesto now face security decisions they never expected, turning board meetings into potential targets. The industry’s biggest challenge isn’t just building safe AI—it’s staying safe while building it.