SAN FRANCISCO (AP) — A Texas man has been charged after allegedly throwing a Molotov cocktail at the home of OpenAI CEO Sam Altman, in a case his public defender says involves mental illness and overreach by prosecutors.
Daniel Moreno-Gama, 20, of Spring, Texas, made his initial court appearance Tuesday in California, where a judge ordered him held without bail. His arraignment is set for May 5. Authorities accuse Moreno-Gama of throwing the incendiary device at Altman's home on Friday, setting an exterior gate on fire.
Reports indicate that after the incident at Altman's residence, Moreno-Gama proceeded to OpenAI's San Francisco headquarters about three miles away and threatened to burn down the facility. No injuries were reported in either incident.
Diamond Ward, Moreno-Gama's deputy public defender, stressed that the case should be viewed as a property crime rather than an attempted murder. She claimed the prosecution is leveraging the case's high profile to curry favor with Altman, a billionaire.
“It is unfair and unjust for the San Francisco District Attorney and the federal government to exploit the mental illness of a vulnerable, young man,” she stated. In addition to state charges, Moreno-Gama faces federal charges, including possessing an unregistered firearm and damaging property with explosives, which together carry a potential sentence of more than two decades in prison.
Documents revealed that Moreno-Gama had expressed strong disdain for artificial intelligence in his writings, labeling it a threat to humanity and predicting an extinction event. FBI officials deemed the actions serious and premeditated, and U.S. Attorney Craig Missakian said the matter would be treated as an act of domestic terrorism.
Following the incidents, FBI agents searched Moreno-Gama's home, gathering evidence for the ongoing investigation. Advocacy groups have condemned the violence, underscoring that threats and intimidation have no place in discussions of AI risks.