Artificial Intelligence

AI in the Iran war, Anthropic sues the Pentagon

After disagreements over the use of Claude in the war in Iran, the matter ends up in court

by Biagio Simonetta

3' min read

Translated by AI
Italian version


It could only end like this: with a lawsuit. The affair between Anthropic and the Pentagon, which has filled the headlines in recent days after the Californian company's decision to terminate an existing contract for ethical reasons (the use of its AI in the Iran war), is now the subject of legal contention. The technology company has filed a lawsuit against the US Department of Defence (which Donald Trump has renamed the Department of War) to block a decision that places it on a national security risk list in the supply chain. The lawsuit was filed in a federal court in California and asks a judge to overturn the designation and prevent federal agencies from enforcing it.

The Pentagon had announced the measure last Thursday, claiming that the company represented a risk in the technology supply chain. It looked like retaliation for the about-face led by the CEO, Dario Amodei.


It is worth mentioning that, according to various sources, some of Anthropic's tools were used in military operations in Iran even after the company had decided to end its collaboration with the Pentagon.

In its appeal filed in court, the Market Street (San Francisco)-based company claims that the DOD decision is unlawful and violates its constitutional rights to free speech and due process. It argues that the government cannot use its power to punish a company for its public positions.

And indeed, the move against Anthropic looks like a punishment. A punishment signed by the Secretary of Defence, Pete Hegseth, who reacted fiercely to Amodei's refusal to lift certain restrictions on the AI models supplied to the Pentagon. In particular, Anthropic wanted to place limits on the use of its systems for the development of fully autonomous weapons and for domestic surveillance programmes in the United States. Or at least that was the official version coming out of Market Street.

This decision met with strong opposition from the Pentagon and the White House, which argued that defining how national defence is to be conducted is a matter for federal law, not for a private company. The Defence Department argued that it must retain full flexibility in the use of artificial intelligence for "any legal use", adding that restrictions such as those introduced by Anthropic could put American lives at risk.

The designation poses a significant risk to Anthropic's relationship with the federal government. Trump himself weighed in, ordering federal agencies, among other things, to phase out their cooperation with the company within six months. This has clearly also affected investors, who are now trying to avoid being caught in the crossfire. It is worth noting that Anthropic's backers include Google, part of Alphabet, and Amazon.

The confrontation between the government and the start-up escalated after months of negotiations over policies on the use of artificial intelligence in the military. Anthropic's CEO, Dario Amodei, had met with Hegseth shortly before the decision to go to court, in the hope of reaching an agreement. Indeed, many news agencies had reported a rapprochement between the two parties. A rapprochement that never materialised.

The Californian company argues that current artificial intelligence models are not reliable enough to be used in fully autonomous weapon systems and that their use in that context would be dangerous. Anthropic also pointed to domestic surveillance of US citizens as a red line.

Amodei stated that the company will challenge the decision in court.

It should also be mentioned that in the last twelve months, the Department of Defence has signed agreements worth up to USD 200 million each with several artificial intelligence players, including Anthropic, OpenAI and Google. Shortly after the decision to put Anthropic on the risk list, OpenAI announced that it had signed an agreement with the Pentagon. The move did not please users, who uninstalled ChatGPT from their devices en masse. The CEO, Sam Altman, stated that the company shares with the Pentagon the principle of human oversight of weapons systems and opposition to mass surveillance in the United States, but this seems to have had little effect on critics. Today, at least from a marketing point of view, Anthropic comes out on top, while OpenAI, which (it should be remembered) started as a non-profit, has lost ground in public approval.

Copyright reserved ©