Anthropic and OpenAI are backing rival political influence nonprofits ahead of the US midterm elections
Anthropic has injected $20 million into a political advocacy group that backs candidates favoring artificial intelligence regulation, thrusting the safety-focused company into the center of a high-stakes election spending war with its archrival OpenAI.
The donation to Public First Action – a “dark money” nonprofit that does not disclose its donors – marks the first known political intervention by the San Francisco-based AI lab. The group was launched in November by former Republican Rep. Chris Stewart of Utah and former Democratic Rep. Brad Carson of Oklahoma to counter a competing super PAC network backed by OpenAI’s leadership and investors.
“The companies building AI have a responsibility to help ensure the technology serves the public good, not just their own interests,” Anthropic said in a statement Thursday. “We don’t want to sit on the sidelines while these policies are developed.”
Public First Action funds two allied super PACs – one Republican, one Democratic – that plan to back candidates who support robust AI safeguards, including transparency mandates, export controls on advanced chips, and federal standards that would not preempt stricter state-level laws.
The contribution threatens to escalate an already bitter rivalry between the two AI giants into a full-fledged proxy war over the midterm elections. OpenAI’s president, Greg Brockman, and investor Marc Andreessen have poured more than $100 million into Leading the Future, a super PAC network that generally opposes stringent AI rules and warns against a “patchwork” of state-level restrictions.
“At present, there are few organized efforts to help mobilize people and politicians who understand what’s at stake in AI development,” Anthropic said. “Instead, vast resources have flowed to political organizations that oppose these efforts.”
The donation comes at a fraught moment for Anthropic, which is racing to commercialize ever more powerful AI systems even as its own executives warn that those same technologies could endanger humanity.
Chief executive Dario Amodei cautioned in a lengthy essay last month that humanity faces “autonomy risks” where AI could “go rogue and overpower humanity.” On Monday, the company’s Safeguards Research Team lead, Mrinank Sharma, abruptly resigned, warning that “the world is in peril.”
Unlike OpenAI, which has been largely embraced by the White House, Anthropic has faced friction from the Trump administration. President Donald Trump’s AI and cryptocurrency czar, David Sacks, accused the company in October of promoting a “state regulatory frenzy that is damaging the startup ecosystem.”
Anthropic is also reportedly locked in a dispute with the Pentagon, which is pushing to deploy the company’s AI for autonomous weapons targeting and domestic surveillance without the safeguards Anthropic has sought to impose. The standoff has stalled a contract worth up to $200 million, with Defense Secretary Pete Hegseth vowing not to use models that “won’t allow you to fight wars.”