AI Wars: Anthropic, Iran, and Claude
Dario Amodei at TechCrunch Disrupt 2023. Source: WikiMedia Commons.
In the first quarter of this year, Anthropic has emerged as one of the most influential AI companies in the tech space. The company, which develops the AI chatbot Claude, is led by CEO Dario Amodei, a former executive at OpenAI. In 2021, Amodei, along with several other executives, broke with the company best known for creating ChatGPT over disputes about regulation and the limits of automation.
Since then, Anthropic has dedicated itself to developing “safety models”: AI systems that undergo more extensive testing, governed by an ethical framework, before being released to the public. Additionally, Claude is programmed with a specified constitution that outlines its ethical requirements. Despite the company’s founding principles and higher safety standards, it wasn’t until this year that the organization made its first major foray into advocacy and politics.
The shift began in early February, when the company announced that it would invest $20 million in a US-based political lobbying group. Public Action First, the recipient of Anthropic’s donation, is a Political Action Committee (PAC) dedicated to lobbying politicians for greater AI regulation. The organization also supports prospective candidates in US elections who share its values. Many have seen this move by Anthropic as an attempt to counter mass political investment from OpenAI, which seeks the opposite outcome. While most are familiar with OpenAI’s CEO Sam Altman, less attention has been directed to its president, Greg Brockman. In 2025, Brockman and his wife were the largest donors to President Trump’s personal Super-PAC. He has also made significant contributions to Leading the Future, a PAC advocating for AI deregulation, which he helped establish along with the co-founders of venture-capital firm Andreessen Horowitz. In total, the Brockman family has donated $62.5 million to the two organizations, both focused on promoting laissez-faire principles and deregulation of this emerging technology.
This past month, Anthropic clashed head-on with the Trump administration over the use of Claude. The company refused a new requirement set by Secretary of Defense Pete Hegseth. When Anthropic declined to allow Claude to be used in technology related to “mass surveillance” and “fully autonomous weapons”, Hegseth, a former weekend TV host, along with the administration, threatened to terminate Anthropic’s contracts with the military. Anthropic, holding firm in the standoff, said in a statement that applying Claude to surveillance “... presents serious novel risks to our fundamental liberties”. The company added that current AI systems are “... not fully reliable enough” to operate fully autonomous weapons.
Like many prominent tech companies, Anthropic holds various lucrative government contracts, and it intends to challenge the DOD in court over the blacklisting. President Trump described Anthropic as a “Radical Left AI company run by people who have no idea what the real world is all about”.
Despite competitive hostilities, Sam Altman initially supported Anthropic in raising questions over the administration’s requirement that AI technologies be open to “any lawful use” by the government. In the following days, however, Altman released a statement disclosing that OpenAI had decided to comply with Hegseth’s order. The decision has since sparked public outrage, with ChatGPT losing an estimated 4 million users as of March. In turn, many have migrated to Claude due to Anthropic’s perceived moral convictions.
The saga continued with the beginning of the Israeli and American governments’ assault on Iran. On the 28th of February, both governments’ military forces began large-scale attacks on Iran. The initial strikes quickly killed Iran’s dictator in Tehran, Supreme Leader Ali Khamenei. In a separate strike in southern Minab, a girls’ primary school adjacent to a targeted military compound was directly hit by a Tomahawk missile. Iranian state media has claimed that 180 civilians – mostly students – were killed in the strike, with 82 deaths fully confirmed and another 95 people wounded.
The missiles, typically used by the US Navy and other allied militaries, have been attributed to American forces by multiple investigative media reports. The possible use of AI in the strike has also come under intense scrutiny amid the fallout of the ongoing war. The Wall Street Journal and Axios both reported that the DOD used Claude to choose targets for the initial strike on Iran – despite the Secretary of Defense having banned the use of Anthropic’s technology just hours earlier over the company’s refusal to comply with his newly issued orders. Given the concerns Anthropic raised in the months leading up to the attack, and the apparently mistargeted school, human rights groups and legal experts have raised questions about possible violations of international law. In January, President Trump stated, “I don’t need international law”, going on to explain that his foreign policy decisions and military actions are based primarily on his personal morality.
As the war in Iran continues, the international community will further investigate any potential use of AI in military operations. More broadly, as we begin to see the real-world applications of these technologies and their potential to take innocent lives, questions arise over whether the use of AI in war sets a dangerous precedent and how the general public should hold the industry accountable. Anthropic’s move into political advocacy also raises the question of whether proper regulation can be achieved through a lens of corporate interest. The outcome of its legal battle with the DOD has the potential to set a standard for the duty of care AI companies must exercise in the application of their technologies.