AI, system fragilities and structured finance: governing change with measure
by Enrica Landolfi*
In the last two years, artificial intelligence has gone from technological promise to an operational tool on the desks of banks, funds and financial services companies. In structured finance - for the uninitiated, I am referring to the world of securitisations, CLOs, ABSs, structured notes and hedging derivatives - the impact is already visible, and it raises a number of questions for everyone operating in this sector: what can we expect in the coming years? And what should we learn to do so as not to be caught unprepared by the risks that this rapidly accelerating innovation may entail?
Currently, the most mature use of Generative AI in structured finance concerns document management and the analysis of the multiple contracts of a securitisation. Language models integrated with internal search systems make it possible to query thousands of pages in natural language, extract key clauses and generate summaries for risk committees or investors. Crucial activities such as portfolio selection, transaction monitoring and benchmark analysis are also beginning to benefit from this new technology. In the derivatives market, ISDA, for instance, has published analyses on how to use GenAI in documentation processes, emphasising its potential while also highlighting the need for controls on data and its accuracy. In the coming years - not many - this component will become infrastructure: no longer a pilot project, but an ordinary part of the life cycle of a structured transaction, from origination to reporting.
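In practice, this kind of document querying is a retrieval step followed by a language model; the retrieval half can be sketched without any model at all. A minimal, hypothetical sketch follows - the clause texts, clause numbers and keyword scoring are invented for illustration, and a production system would use embeddings and an LLM rather than word overlap:

```python
# Minimal sketch of the retrieval step behind natural-language querying of
# transaction documents: score each clause against the query's keywords and
# return the best matches. Clause texts and numbers are invented examples.

def score(query: str, clause: str) -> int:
    """Count how many query keywords appear in the clause (case-insensitive)."""
    words = {w.lower() for w in query.split()}
    return sum(1 for w in words if w in clause.lower())

def retrieve(query: str, clauses: dict[str, str], top_k: int = 2) -> list[str]:
    """Return the ids of the top_k clauses most relevant to the query."""
    ranked = sorted(clauses, key=lambda cid: score(query, clauses[cid]), reverse=True)
    return ranked[:top_k]

# Hypothetical clause excerpts from a securitisation's transaction documents.
clauses = {
    "5.1": "Interest on the Class A Notes accrues at EURIBOR plus the margin.",
    "7.3": "A Trigger Event occurs if the cumulative default ratio exceeds 4%.",
    "9.2": "The servicer shall deliver monthly reports to the noteholders.",
}

print(retrieve("default ratio trigger", clauses, top_k=1))  # ['7.3']
```

The retrieved clause would then be passed to the language model to generate the summary or answer, which is why the accuracy controls mentioned above matter: the model can only be as reliable as what retrieval hands it.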
Even greater than that of Generative AI will be the impact of Agentic AI, i.e. systems capable not only of answering questions in natural language, but also of autonomously planning and triggering sequences of real actions throughout the operational cycle of a financial transaction. Combining Generative AI and Agentic AI, we can therefore expect three lines of evolution: the compression of structuring times; continuous monitoring of portfolios; and the integration of data, models and narrative.
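The continuous-monitoring line of evolution can be illustrated with a toy agent cycle: observe portfolio metrics, test them against the deal's triggers, and queue an action when one is breached. Everything below - the metric names, thresholds and actions - is invented for illustration, not taken from any real transaction:

```python
# Toy sketch of an agentic monitoring cycle for a structured transaction:
# observe metrics, test them against deal triggers, and plan an action for
# each breach. Metric names, thresholds and actions are all hypothetical.

TRIGGERS = {
    # metric name: (threshold, action to plan if the threshold is breached)
    "cumulative_default_ratio": (0.04, "notify risk committee; divert excess spread"),
    "delinquency_ratio": (0.06, "flag portfolio for enhanced reporting"),
}

def plan_actions(metrics: dict[str, float]) -> list[str]:
    """Return the actions an agent would queue for the breached triggers."""
    actions = []
    for name, (threshold, action) in TRIGGERS.items():
        if metrics.get(name, 0.0) > threshold:
            actions.append(f"{name} at {metrics[name]:.2%} > {threshold:.2%}: {action}")
    return actions

# One monitoring cycle on hypothetical month-end data.
month_end = {"cumulative_default_ratio": 0.045, "delinquency_ratio": 0.03}
for action in plan_actions(month_end):
    print(action)
```

The point of the sketch is the loop itself: an agentic system runs this cycle continuously rather than waiting for a human to request a report, which is precisely why the questions of control and responsibility discussed below become more pressing.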
Such a contribution to efficiency is certainly welcome. However, we cannot hide the fact that its implementation could introduce certain system fragilities: automated error on a large scale, first of all, and, most insidious of all, the 'plausible' error, i.e. an incorrect but credible-sounding interpretation of contractual clauses or triggers.
This awareness is driving growing regulatory attention which, albeit amid legitimate debates on the merits, will only increase as Generative AI is operationally joined by Agentic AI. The Financial Stability Board, among others, has highlighted the risks of excessive technological concentration, dependence on a few providers, and exposure to systemic vulnerabilities. In Europe, the regulatory framework is gradually becoming clearer: the European Banking Authority has reaffirmed the prudent approach envisaged by the AI Act; the European Central Bank is monitoring AI's use; and the European Securities and Markets Authority, for its part, has reiterated that ultimate responsibility remains with the intermediary even when, thanks to AI, the process is automated.


