Interventions

AI, system fragilities and structured finance: governing change with measure

by Enrica Landolfi*

4 min read

Translated by AI
Versione italiana


In the last two years, artificial intelligence has gone from technological promise to operational tool on the desks of banks, funds and financial services companies. In structured finance - for the uninitiated, I am referring to the world of securitisations, CLOs, ABSs, structured notes and hedging derivatives - the impact is already visible, and it raises two questions for everyone operating in this sector: what can we expect in the coming years? And what must we learn to do so as not to be caught unprepared for the risks that this rapidly accelerating innovation may entail?

Currently, the most mature use of Generative AI in structured finance concerns document management and the analysis of the many contracts in a securitisation. Language models integrated with internal search systems make it possible to query thousands of pages in natural language, extract key clauses and generate summaries for risk committees or investors. But crucial activities such as portfolio selection, transaction monitoring and benchmark analysis also benefit from the new technology. In the derivatives market, ISDA, for instance, has published analyses on how to use GenAI in document processes, emphasising its potential while also highlighting the need for controls on data and accuracy. In the coming years - not many - this component will become infrastructure: no longer a pilot project, but an ordinary part of the life cycle of a structured transaction, from origination to reporting.


Even greater than that of Generative AI will be the impact of Agentic AI: systems capable not only of answering questions in natural language, but also of autonomously planning and triggering sequences of real actions throughout the operational cycle of a financial transaction. Combining Generative AI and Agentic AI, we can therefore expect three lines of evolution: the compression of structuring times; the continuous monitoring of portfolios; and the integration of data, models and narrative.

Such a contribution to efficiency is certainly welcome. However, we cannot hide the fact that its implementation could introduce certain system fragilities. Automated error on a large scale, first of all. And, most insidious of all, the 'plausible' error: an interpretation of contractual clauses or triggers that is wrong yet sounds credible.

This awareness is fuelling growing regulatory attention that, amid legitimate debates on the merits, will only increase as Generative AI is operationally joined by Agentic AI. The Financial Stability Board, among others, has highlighted the risks of excessive technology concentration, dependence on a few providers, and exposure to systemic vulnerabilities. In Europe, the regulatory framework is gradually becoming clearer: the European Banking Authority has reaffirmed the prudent approach envisaged by the AI Act; the European Central Bank is monitoring its use; and the European Securities and Markets Authority, for its part, has reiterated that ultimate responsibility remains with the intermediary even when, thanks to AI, the process is automated.

Then there is an even more structural issue that deserves our attention: call it the 'platform power' of AI. The artificial intelligence ecosystem is concentrating around a limited number of actors that control the cloud infrastructure, the computing capacity and the development of proprietary models. This evolution risks generating forms of technological dependence and conceals a systemic risk, especially if many financial institutions adopt the same tools.

The 'proprietisation' of models can lead to opacity about training data, update logic and the internal limits of systems, making audits more complex and increasing the risk of technological lock-in. Moreover, platforms occupy a privileged information position: they observe aggregate usage patterns and may develop competing services or strategic partnerships, generating conflicts of interest.

The adoption of AI, in short, does not eliminate accountability; it expands its perimeter. Institutions must treat platforms as critical suppliers, with audits, robust contractual clauses, multi-provider strategies and exit plans, and AI governance must be integrated into operational and model risk management frameworks. In the coming years, therefore, the decisive competence will not only be technical, which is obvious, but organisational and cultural. We will need to learn to verify outputs, separate proposal from decision, build robust governance, cultivate hybrid competencies and reduce dependence on single providers. Structured finance was born to manage complexity. Artificial intelligence promises to make it more efficient. But the faster and more automated the system becomes, the greater the responsibility of those who govern it, and the more central the awareness of the power concentrated in the platforms that enable it.

To summarise with a well-known Zen tale: if we do not want the tea to overflow from the cup, we must pour it with even more care than we have been accustomed to so far. Let us pour it, by all means, but without too much haste. Only then can we be wise. Or, for those who prefer not to rely on ancient wisdom, there is the maxim of Peter Drucker, one of the fathers of management science, who in The Effective Executive warned as early as 1967: "There is nothing so useless as doing efficiently that which should not be done at all."

* Senior advisor of P&G Sgr

Copyright reserved ©

