Cybersecurity

War and artificial intelligence, the systemic risk: when algorithms anticipate human decision-making

by Pierluigi Paganini

Artificial intelligence (AI) is rapidly transforming numerous sectors, from medicine to commerce, but one of the most critical and controversial areas is the military. The use of AI in conflicts raises issues that go far beyond operational efficiency: risks of uncontrolled escalation, violations of international humanitarian law, delegation of life-and-death decisions to automated systems and, ultimately, the disturbing prospect of excluding humans altogether from military chains of command and control.

Currently, many AI-based military technologies are conceived as decision-support tools, not as fully autonomous systems that choose for themselves when and whom to strike. Artificial intelligence is mainly used to analyse huge amounts of data, identify possible targets, support logistics and accelerate decision-making processes that a human being alone would not be able to handle with the same speed.

However, what is emerging from the conflict between Iran, Israel and the United States shows how this distinction is becoming increasingly thin and potentially dangerous. The case of the Minab school, hit by a Tomahawk missile with hundreds of civilian casualties, illustrates how integrating automated analysis into operational decisions can amplify errors already present in the data. According to reconstructions, the military command relied on outdated intelligence, possibly supplemented or processed by AI-based analysis systems to speed up the operation. In a scenario where AI helps to generate, filter or classify targets, decisions are made faster, but the time for human verification shrinks accordingly.

The problem is therefore not only technical but systemic. AI does not formally decide to strike a target, but it influences the decision chain that leads to that choice. Tools such as the Maven Smart System, which aggregate data from satellites, sensors and communications, make it possible to identify patterns and suggest targets with unprecedented speed. But if the source data is incorrect or incomplete, AI risks amplifying the error on an industrial scale, making tragedies like the one in Minab more likely.

The most alarming aspect is that all this is taking place while a military-industrial complex centred on artificial intelligence is being consolidated, one in which governments and large technology companies compete for strategic supremacy. Systems developed by companies such as Anthropic or OpenAI, integrated into data analysis platforms from companies such as Palantir Technologies, are increasingly being used for intelligence, military planning and operational simulations. Although officially they remain support tools, the line between 'assisting' and 'determining' a military decision quickly becomes blurred when it is AI that filters and organises the information on which an attack is based.

In other words, we are not yet dealing with systems that autonomously decide to strike a target, but we are entering a phase in which war is progressively 'pre-decided' by algorithms that select and interpret data. And it is precisely this grey area, between supporting a decision and automating it, that today represents the real strategic, ethical and political risk of warfare in the age of artificial intelligence.

A report by the National Security Commission on Artificial Intelligence (NSCAI), a commission established by the US Congress, warns that the integration of AI into weapon systems raises 'significant questions with respect to the legality, security and ethics' of such systems, especially when they go beyond the role of a mere support tool and acquire increasing levels of autonomy.

The use of artificial intelligence in weapon systems is opening up deeply worrying scenarios, particularly with the development of Lethal Autonomous Weapon Systems (LAWS), weapons capable of selecting and striking a target without direct human intervention. Often referred to as 'killer robots', these technologies raise unprecedented ethical and security questions because they transfer life-and-death decisions to an algorithmic system.

One of the main risks concerns the opacity of the decision-making process. Many AI systems function as veritable 'black boxes': even their developers cannot always know exactly what decision the system will take in a specific context. This makes it extremely difficult to predict or explain the behaviour of an autonomous weapon in real situations. Moreover, systems trained in controlled environments may react unpredictably in complex scenarios such as the battlefield, with the real risk of hitting civilian targets or producing unintended effects.

Another crucial concern is the possible exclusion of humans from the military decision-making chain, the so-called kill chain. As the speed of automated combat systems increases, the time for human verification becomes ever shorter. Yet AI cannot reliably apply fundamental principles of international humanitarian law, such as the distinction between combatants and civilians or proportionality in the use of force. Without meaningful human verification, these principles risk being violated.

Finally, there is the danger of uncontrolled escalation of conflicts. Autonomous systems that react in fractions of a second may misinterpret signals or movements as threats and respond automatically, giving rise to very rapid conflict dynamics, sometimes referred to as flash wars. In such scenarios, humans may simply not have time to intervene and stop the escalation, increasing the risk of tragic mistakes and military crises that are difficult to contain.

The introduction of autonomous weapon systems based on artificial intelligence opens up one of the most disturbing dilemmas of modern warfare: who is responsible when a machine makes lethal decisions? If an automated system hits a civilian target or violates international humanitarian law, responsibility becomes difficult to assign. It could fall on the programmers who designed the algorithm, the military commanders who authorised its use, or the state that deployed the system. However, international law does not yet offer clear answers to these questions.

In recent years, the international community has begun to recognise the seriousness of the problem. At the United Nations, several countries have argued for binding rules on the military use of artificial intelligence, while political initiatives promote the principle of meaningful human oversight in attack decisions. Despite this, the major powers remain divided and negotiations are progressing slowly.

The only certainty is that technology will advance much faster than regulation. Unless clear limits are set, future conflicts may increasingly be driven by automated systems capable of reacting within seconds. In an extreme scenario, the integration of AI into strategic systems could even reduce traditional human restraints in military crises, paving the way for very rapid and potentially catastrophic escalations.

Conclusion

Artificial intelligence is no longer just a tool: it is an arbiter of human destiny. It promises efficiency and security, but in the hands of a few billionaires and corporations, driven by questionable ideologies, it becomes a weapon capable of rewriting our evolutionary history, establishing who counts and who is expendable. War is no longer decided by man: opaque algorithms select targets, accelerate conflicts and turn the chain of command into a game in which human judgement risks becoming an optional extra.

Peter Thiel, Alex Karp and other eminent figures on the global technology scene, a modern autocracy, shape not only technology but also the political and social future.

According to Thiel, politics is an obstacle to mankind's technological progress and should therefore be replaced by technocracy, i.e. a system of government in which decision-making power is entrusted to experts, technicians and scientists.

In philosophy, however, politics is not merely the art of governing: it is the study of how to organise common life for the collective good, analysing power, justice and governance. Are we really ready to sacrifice this age-old concept for the profit of a few and the evolution of a machine civilisation?

Thiel argues, correctly, that the global challenge will be a contest between the US and China for supremacy in AI. He brands those who hold back progress through over-regulation and environmentalism as the 'Antichrist', and claims that democracy and freedom are no longer compatible.

AI thus becomes an instrument of social and political control, where life and death, freedom and power, are decided by those who own the technology, not by law or ethics.

Unless global and binding limits are established, AI will accelerate conflicts, weaken the human capacity to mediate, and open the way to uncontrollable escalations. Wars could become 'flash wars', deadly decisions instantaneous, and the future a dystopia in which the few decide the fate of the many. Unchecked innovation risks turning into a silent apocalypse, putting the very survival of democracy at risk.

Pierluigi Paganini, CEO of Cyberhorus, director of the Unipegaso Cybersecurity Observatory and scientific coordinator of Sole 24 Ore Formazione

Copyright reserved ©