Not only AI

Deepfakes and politics: the rules in Italy and Europe. Alarm after the latest episodes

From the AI-modified photo of the police officer in Turin to the Lithuanian case of a video dubbed with synthetic voices, manipulated content is entering the political arena

by Lorenzo Pace (Il Sole 24 Ore) and Ieva Kniukstiené (Elta, Lithuania)

The photo that came under scrutiny: officer Alessandro being helped by a colleague after the attack by a group of demonstrators, during the clashes that followed the march against the eviction of the Askatasuna social centre. Turin, 1 February 2026. ANSA/State Police press office

4' min read

Translated by AI
Versione italiana


We could say that the debate on the use of artificial intelligence in Italy has entered a second phase: no longer theoretical, confined to hypothetical cases, but grounded in concrete episodes. The latest is recent, from the end of January, and concerns a photograph: that of the officer injured in the clashes during the pro-Askatasuna demonstration in Turin, published directly by the police. The problem? It had been modified with AI, though not blatantly.

To spot the artificial 'hand', in fact, the video of the attack proved decisive: fact-checking sites noticed certain details in it and made clear that the image was not authentic. The affair had already turned political, coming in the very days when the final draft of the government's security decree, approved shortly afterwards in the Council of Ministers and containing measures in favour of police officers, was being finalised.


AI's touch extends to photos, videos and even audio, as happened with the synthetic voice, cloned with artificial-intelligence techniques, attributed to Defence Minister Guido Crosetto.

On the regulatory side, the European Union has initiated a structured response with the AI Act, which introduces transparency obligations for artificially generated content and stricter rules for high-risk uses. Its provisions, however, will take effect gradually rather than immediately. In the meantime, the Digital Services Act focuses mainly on the responsibilities of online platforms in moderating content, leaving the intentional political use of synthetic media by parties, candidates or organised groups partly uncovered.

This is why other initiatives have emerged in Italy. Pagella Politica and Facta, for instance, have proposed that parties make a public commitment not to use artificial intelligence to create or disseminate misleading content during election campaigns, inviting them to explicitly declare any use of AI tools in political communication. Although the proposal has no binding effect, it is intended to temporarily fill the regulatory gap and strengthen voter confidence, relying on reputational pressure rather than formal sanctions.


The appeal has spurred Parliament into action. A legislative process specifically on deepfakes is underway: a bill by the Democratic Party, expected in the Chamber of Deputies at the end of February 2026, aims to ban the creation and dissemination of videos, images or audio manipulated with artificial intelligence during election campaigns, including elections and referendums, when intended to deceive voters and influence the vote. The text amends the 1956 ordinary law regulating electoral propaganda and entrusts the Communications Regulatory Authority (Agcom) with supervising and removing illegal content, as well as the power to impose financial penalties on platforms that fail to comply.

The proposal fits into a national legal framework already updated in 2025 with the introduction into the Criminal Code of Article 612-quater, which punishes the unlawful dissemination of content generated or manipulated with AI without the consent of the person portrayed, with penalties of one to five years' imprisonment. That provision, however, has a general scope and does not specifically address deepfakes in electoral contests, which is why the PD bill introduces an ad hoc offence.

The aim is to intervene before fake content becomes definitively indistinguishable from the real thing, something increasingly difficult for citizens to detect. And to do it now, while identifying a fake video or photo is still possible, ahead of the upcoming elections.

The Lithuanian case, between regulatory gaps and technical tools

The debate is not confined to Italy. In Lithuania, the issue of synthetic content has already landed on the table of the media regulator. The president of the Lithuanian Radio and Television Commission (LRTK), Mantas Martišius, pointed out that national law does not precisely define under what conditions, and to whom, the identity of those who create or disseminate fake content must be disclosed. The problem, he explained, is also operational: the authority that monitors information flows has no direct powers to obtain data from the operators of social platforms. Only prosecutors and police can do so, and only in cases of serious crimes. This slows down reactions and makes it difficult to intervene before manipulated content goes viral.

Andrius Katinas, head of supervision of economic operators at the LRTK, emphasised that updating the rules must go hand in hand with strengthening technical tools, in particular Open Source Intelligence (Osint) activities. The Commission has been using Osint tools for years to investigate banned content and copyright infringements, but, according to Katinas, more widespread specialised expertise is needed, along with parallel investment in technology and training. For Martišius, it is essential that the national regulatory framework clearly establish obligations to identify the authors of fake content, so that platforms know a specific legal obligation exists when approached by the authorities.

In 2024, the LRTK conducted 41 investigations into illegal audiovisual content, including pirated services and fake propaganda video segments. Artificial intelligence and Osint tools were used in these activities to analyse the origin and dissemination of content, bot activity and recurring patterns.

Also in Lithuania, in 2024 the LRTK fined politician Eduardas Vaitkus, a former candidate for the presidency of the Republic, for unlawfully publishing, without consent, a recording of a broadcast by the public broadcaster LRT, translated and dubbed into Russian.

The video posted on YouTube had an additional element: the voices of the host and guests were identical or very similar to the originals, but spoke Russian. Moreover, the translated content did not always faithfully match the Lithuanian original, contributing to the construction of a divergent narrative. The video was not labelled as generated and dubbed with artificial intelligence, potentially also violating the platform's rules on mandatory labelling of synthetic content. In that case the sanction was imposed for copyright infringement, but the content also raised a disinformation concern under Lithuanian legislation on public information and the Seimas election law.

*This article is part of the European collaborative journalism project "Pulse"

Copyright reserved ©
