Two violent attacks targeting OpenAI CEO Sam Altman and an Indianapolis city council member are fueling fears that the debate around artificial intelligence has turned dangerous.
The incidents, occurring within days of each other, have intensified scrutiny of the rhetoric surrounding AI development and regulation. Technology leaders in Washington, D.C., and Silicon Valley have quickly blamed anti-AI rhetoric for the violence, while AI opposition groups condemn the attacks and maintain they do not advocate for violence.
What the Left Is Saying
Progressive technology critics and anti-AI advocacy groups say the violence does not represent their movement and argue that legitimate concerns about AI's impact on jobs, the environment, and society deserve serious consideration.
Valerie Sizemore, co-leader of the grassroots movement Stop AI, said the attack on Altman's home does not represent the broader anti-AI movement. 'We actually see it as underlying how important our work actually is because there's a lot of groups getting involved, there are a lot of conservative groups getting involved on this, and [we give] people nonviolent actions to do that are organized and planned and pointed goals that might actually achieve something,' Sizemore told The Hill.
Stop AI emphasized that the suspect in Altman's attack joined their public online forum months ago and asked, 'Will speaking about violence get me banned?' When told yes, he stopped all activity on the forum. The group noted that co-founders who had made 'provocative statements regarding violence' were removed from the organization last year.
OpenAI CEO Sam Altman himself acknowledged the validity of many AI criticisms in a blog post following the attack. 'A lot of the criticism of our industry comes from sincere concern about the incredibly high stakes of this technology. This is quite valid, and we welcome good-faith criticism and debate,' Altman wrote. 'I empathize with anti-technology sentiments and clearly technology isn't always good for everyone.'
What the Right Is Saying
Conservative technology leaders, including those in President Trump's orbit, have blamed what they characterize as 'doomer' rhetoric from AI safety advocates for inciting violence.
Sriram Krishnan, the White House's senior policy adviser on AI, wrote on the social platform X: 'I think the doomers need to take a serious look at what they have helped incite and not just rely on "we condemn this and have said this is not the rational response."' Krishnan added that the attack represents 'the logical outcome of "If we build it everyone dies,"' referring to a book by AI researchers Eliezer Yudkowsky and Nate Soares.
Dean Ball, a former AI adviser to the Trump White House, wrote that 'no one should be surprised there is violence happening.' Ball argued that 'in the eyes of many safetyists, some amount of rogue violence is an acceptable tradeoff of their heated rhetoric, since they believe the heated rhetoric to be true.'
Nathan Leamer, executive director of the AI advocacy group Build American AI, posted a clip of Yudkowsky saying AI will cause the 'abrupt extermination' of humanity and wrote: 'And we wonder why there is a dramatic increase in anti AI rhetoric and violence.'
What the Numbers Show
Stanford University released its annual AI Index Report this week, showing a growing disconnect between AI experts and the U.S. public on views about artificial intelligence's societal impact.
The report stated: 'Public views of AI are now shaped by a central tension, as optimism about the technology's benefits often coexists with anxiety about its broader effects.'
Shannon Hiller, executive director of Princeton University's Bridging Divides Initiative, which tracks political violence, said AI and related topics 'are emerging as an increasingly contentious issue.' While Hiller noted that this alone does not necessarily mean the issue will lead to more violence, she added: 'In the current climate of hostility in our politics, and the speed at which decisions are moving on AI and data centers, we're seeing an uptick in cases of harassment and threats around this issue, even at the local level.'
In the attack on Altman, 20-year-old Texas man Daniel Moreno-Gama allegedly threw a Molotov cocktail at Altman's San Francisco home, setting a gate on fire before fleeing, then threatened to burn down OpenAI's headquarters about an hour later. He faces attempted murder and attempted arson charges in California state court, plus federal charges.
In the Indianapolis incident, City-County Council member Rob Gibson said his home was shot at 13 times, with a note reading 'No Data Centers' left on his doorstep. The shooting occurred three days before the Altman attack, and Gibson had recently supported a local commission's approval of a rezoning petition for a data center project.
The Bottom Line
The two attacks highlight the escalating tensions surrounding AI policy debates, particularly as communities grapple with questions about data center development and workforce impacts. While both sides condemn violence, each points to the other's rhetoric as contributing to an environment that makes such incidents possible. The Stanford report underscores that public skepticism of AI continues to grow, suggesting the debate will remain heated as federal and state governments consider regulatory approaches. What remains unclear is whether the violence represents an isolated escalation or a sign of things to come as AI policy decisions accelerate across federal, state, and local levels.