Massacre of girls at Minab school: human error or artificial intelligence behind the Tomahawk missile strike
The Pentagon relied on outdated intelligence. New AI tools may also have played a role
from our correspondent Marco Valsania
NEW YORK - The investigation by US military authorities into the Minab primary school massacre in Iran has reached a preliminary, and chilling, conclusion: the missile that buried perhaps 175 victims, mostly girls, under the rubble on 28 February, at the start of the conflict, was a Tomahawk launched by the US. It struck the wrong target, either because of human error or because of the use of artificial intelligence tools designed to reinforce and accelerate Operation Epic Fury.
What is certain is that Minab, if the reconstruction is confirmed by the still-open investigations, is the most serious single 'mistake' causing a massacre of civilians attributed to the Pentagon in decades. With the force of tragedy, it casts heavy shadows over the already debated war preparations and strategies, opening up a crisis in Washington that is also political: the conclusion contradicts Donald Trump. The president had until now claimed that it was the Iranians themselves who carried out the attack. Defence Secretary Pete Hegseth had echoed him, assuring that the United States is the country taking the greatest precautions. Yesterday, Democratic opposition senators accused Hegseth of ignoring the risks to civilians and formally demanded that he immediately disclose what happened in Minab. The fear is that in the end censorship may prevail, to avoid embarrassment and scrutiny of a Pentagon shaken by purges and led by a secretary who boasts of breaking 'stupid rules of engagement' to win.
The preliminary report, according to accounts in the New York Times, reveals that the school was destroyed because Central Command, the military command engaged in the Middle East, relied on outdated intelligence provided by the Defense Intelligence Agency (DIA), the Pentagon's intelligence service. Inexplicably, that intelligence was never verified, a procedure that traditionally takes place at several levels and can draw on hundreds of analysts and military experts.
On who failed to verify the target, and why, the report currently sheds no light. The DIA passed the school to CentCom labelled with the target code of a military objective. Also under scrutiny is a separate agency, the National Geospatial-Intelligence Agency, which analyses satellite images of potential targets and is used precisely to update data considered old or dubious. Such debacles are not new, but they are relatively rare: in 1999, during the Kosovo conflict, the CIA used unreliable maps to bomb the Chinese embassy, mistaken for a barracks, killing three people.
This time, however, a far more hi-tech hypothesis is also in the spotlight: targeting through artificial intelligence may have played a role. The Pentagon has acknowledged using AI in the conflict, and a controversial AI-driven multiplication of targets to be eliminated had already been experimented with by Israel in Gaza. What's more, Anthropic's AI system, Claude, is integrated into the Geospatial Agency's Maven Smart System, which flags entities of intelligence interest. Anthropic recently broke with the Pentagon, denouncing inadequate safeguards on the use of AI for mass surveillance and autonomous arsenals.