War and artificial intelligence: the systemic risk when algorithms anticipate human decision-making
Artificial intelligence (AI) is rapidly transforming sectors from medicine to commerce, but one of the most critical and controversial areas of its deployment is the military. The use of AI in armed conflict raises issues that go far beyond operational efficiency: the risk of uncontrolled escalation, violations of international humanitarian law, the delegation of life-and-death decisions to automated systems and, ultimately, the disturbing prospect of excluding humans from military command-and-control chains altogether.
Currently, most AI-based military technologies are conceived as decision-support tools, not as fully autonomous systems that choose for themselves when and whom to strike. Artificial intelligence is mainly used to analyse vast amounts of data, identify possible targets, support logistics and accelerate decision-making processes beyond the speed any human being could sustain alone.
However, what is emerging in the conflict between Iran, Israel and the United States shows how subtle, and how dangerous, this distinction is becoming. The case of the Minab school, hit by a Tomahawk missile with hundreds of civilian casualties, illustrates how integrating automated analysis into operational decisions can amplify errors already present in the data. According to reconstructions, the military command relied on outdated intelligence, possibly supplemented or processed by AI-based analysis systems in order to speed up the operation. In a scenario where AI helps to generate, filter or classify targets, decisions are made faster, but the time available for human verification shrinks accordingly.
The problem is therefore not only technical but systemic. AI does not formally decide to strike a target, but it shapes the decision chain that leads to that choice. Tools such as the Maven Smart System, which aggregate data from satellites, sensors and communications, make it possible to identify patterns and suggest targets with unprecedented speed. But if the source data is incorrect or incomplete, AI risks amplifying the error on an industrial scale, making tragedies like the one in Minab more likely.
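To make this amplification mechanism concrete, here is a deliberately simplified, hypothetical sketch in Python: a toy simulation in which a fixed share of source intelligence records is wrong, an AI-assisted pipeline multiplies the volume of suggested targets, and compressed timelines lower the chance that a human reviewer catches each error. Every number and name in it is invented for illustration; it describes no real system.

```python
# Hypothetical illustration only: a toy model of how an automated
# target-suggestion pipeline can amplify upstream intelligence errors.
# All rates and volumes below are invented assumptions, not real figures.
import random

random.seed(42)

SOURCE_ERROR_RATE = 0.02      # assumed fraction of intelligence records that are wrong
DAILY_RECORDS_MANUAL = 50     # assumed volume a human-only cell processes per day
DAILY_RECORDS_AI = 5000       # assumed volume an AI-assisted pipeline processes per day
REVIEW_CATCH_RATE_SLOW = 0.9  # chance a reviewer catches an error with ample time
REVIEW_CATCH_RATE_FAST = 0.4  # chance under the compressed timelines AI enables

def erroneous_strikes(records_per_day: int, catch_rate: float) -> int:
    """Count suggested targets that are wrong AND slip past human review."""
    slipped = 0
    for _ in range(records_per_day):
        is_error = random.random() < SOURCE_ERROR_RATE
        caught = random.random() < catch_rate
        if is_error and not caught:
            slipped += 1
    return slipped

manual = erroneous_strikes(DAILY_RECORDS_MANUAL, REVIEW_CATCH_RATE_SLOW)
assisted = erroneous_strikes(DAILY_RECORDS_AI, REVIEW_CATCH_RATE_FAST)
print(f"human-only pipeline:  ~{manual} bad targets slip through per day")
print(f"AI-assisted pipeline: ~{assisted} bad targets slip through per day")
```

The per-record error rate is identical in both cases; what changes is the volume of suggestions and the time left for verification, and under these assumptions that alone is enough to multiply the absolute number of mistakes that reach the end of the chain.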
The most alarming aspect is that all of this is happening while a military-industrial complex centred on artificial intelligence is consolidating, one in which governments and large technology companies compete for strategic supremacy. Models developed by firms such as Anthropic or OpenAI, integrated into data-analysis platforms like those of Palantir Technologies, are increasingly used for intelligence, military planning and operational simulations. Although these officially remain support tools, the line between 'assisting' and 'determining' a military decision blurs quickly when it is AI that filters and organises the information on which an attack is based.


