The Budget Law

IRS, artificial intelligence does not enter assessments

The fight against evasion. The Inland Revenue Agency sets limits on the use of AI: an absolute stop to systems running on freely available, non-validated software. Human verification will always be necessary

by Marco Mobili and Giovanni Parente


2' min read

Translated by AI
Italian version


The Inland Revenue Agency sets the course on the use of artificial intelligence. And it does so in the name of a guiding principle: even where its use is permitted, human supervision and verification of the results by the Agency's own staff will always be necessary. With the guidelines on internal policies commissioned by director Vincenzo Carbone, the Agency thus sets the rules for using the new tools following the enactment of the Italian AI law (Law 132/2025), in force since 10 October, with a view to managing the technological transition while safeguarding the privacy of taxpayers' sensitive data. Carbone himself has always stressed that artificial intelligence, if properly governed, can offer great opportunities in terms of optimising planning processes, allocating resources efficiently and automating repetitive, low-value-added tasks.

In this sense, the new policy confirms the ban on using generative AI systems on publicly available platforms, i.e. those not integrated into the Agency's systems, to produce any administrative act, from assessments to refunds.


Basically, as Carbone has repeatedly said, a safeguards-based approach is needed: 'No Big Brother taxation or anti-avoidance algorithm', let alone 'a machine that trawls through taxpayers or churns out assessments galore'.

A separate chapter, on the other hand, concerns the programmes developed by Sogei for risk analysis, whose artificial intelligence solutions are tested and fully compliant with the need to protect taxpayers' personal data, in line with EU provisions, including through pseudonymisation mechanisms. These, however, are applications aimed first and foremost at risk analysis, useful for selecting taxpayers with inconsistency profiles for further in-depth checks. In any case, they are used by only a very small number of qualified employees.

The guidelines then pick up a series of points that have also emerged in recent days from different quarters (see the motion of the Uncat tax lawyers at the forensic congress and the remarks of the Privacy Guarantor Pasquale Stanzione in a hearing before the supervisory commission on the Tax Registry). Attention is drawn to the fact that information provided by AI tools may contain errors, since the algorithms underlying artificial intelligence systems are statistical, not deterministic. Moreover, there is always a probability that the output is incorrect, irrelevant or inaccurate. The results produced can therefore never be used as they are: each operator must carry out a critical human review of the content and promptly verify any information and documents referenced in the answers. This is why the Revenue policy warns against possible 'all

Copyright reserved ©
