Between digital emancipation and the defence of rights
by Paolo Benanti
Artificial intelligence is rapidly becoming a decisive factor in the evolution of the cyber threat landscape. The numbers for the first half of 2025 leave no room for doubt: AI is no longer a potential weapon but a structural component of that landscape. In the first six months of the year, attacks based on artificial intelligence techniques rose by 47 per cent compared with the previous year, and by the end of 2025 AI-driven cyber incidents may exceed 28 million globally. This is what emerges from the new report AI Threat Landscape 2025, produced by the Cybersecurity Competence Centre of Maticmind and presented today, Thursday 16 October, at the Chamber of Deputies.
The average cost of an AI-enhanced data breach reached 5.72 million dollars in 2025, up 13 per cent year on year. SMEs spent 27 per cent more on incident response, while insurance payouts for AI-driven attacks increased by 22 per cent.
The Italian picture is no different: in the first half of 2025, almost 40 per cent of the approximately 900 serious cyber incidents recorded in the country directly involved generative artificial intelligence tools. Phishing and spear phishing remain the main modes of attack, but their effectiveness is amplified by the massive use of language models: more than 80 per cent of phishing e-mails and 91 per cent of spear-phishing campaigns today exploit LLMs, while 52 per cent of AI-based attacks use public models to generate malicious content or code.
Among the fastest-growing phenomena are deepfakes, whose prevalence is rising exponentially: from 500,000 cases in 2023, they are expected to exceed 8 million by the end of 2025. Already today, one in twenty failures in identity verification processes is attributable to synthetic content generated by artificial intelligence.

New areas
The report emphasises that the attack surface has expanded to new areas, encompassing not only infrastructure and data but also prompts, training datasets and the models themselves, rendering many traditional defences based on static rules ineffective. In addition, there is growing concern about so-called shadow AI, the uncontrolled use of artificial intelligence tools within organisations, which exposes companies and institutions to the risk of sensitive data exfiltration and loss of control over information flows.