Platforms

Governing agent-based AI in the enterprise: the principles-based intelligence start-up is Italian

The ultimate goal is to enable companies to continue using the AI systems they already have in place, while adding a level of governance that is interoperable

by Gianni Rusconi

5 min read

Translated by AI


Let's start with the news: Principled Intelligence closed a €1.85 million pre-seed round at the end of the year (officially announced just a few days ago). The round was led by the Polo Nazionale di Trasferimento Tecnologico per l'Intelligenza Artificiale e la Cybersecurity (created by CDP Venture Capital in partnership with Scientifica Venture Capital) and by BlackSheep, a VC fund specialising in artificial-intelligence investments, with the participation of Eden Ventures.

Who the Rome-based start-up, founded in 2025, is and what it does can be gleaned from the description in the press release: a company specialising in the development of technologies for the control and governance of artificial intelligence. More precisely, an infrastructure delivered as SaaS and designed to let companies safely adopt generative AI technologies, agents and LLMs (Large Language Models) even in the most critical processes and in highly regulated contexts.


How it was born: the founders and the vision

Principled Intelligence was born in academia: the two founders, Simone Conia (CEO) and Edoardo Barba (CTO), came up with the idea for the start-up after completing their research in artificial intelligence and natural language processing at La Sapienza University of Rome.

After working in the United States at Apple on large language models, with a focus on improving the quality, reliability and factuality of conversational-system responses, Conia returned to Italy to launch, together with his fellow researcher, Minerva LLM: the first family of large language models trained from scratch on Italian data, already widely adopted by universities, companies and developers (over 300,000 downloads to date). As for the name, as the two co-founders told Sole24ore.com, the choice fell on Principled Intelligence 'to foreground the idea of an AI that is not only intelligent, but governed by the same principles that guide an organisation and determine its success.

These principles include ethical values and general regulatory requirements such as the AI Act, but also internal company policies, operational documentation, corporate culture, codes of conduct and business strategies'. The specificity of the company's proposal (currently run by a team of six, five of whom hold PhDs in artificial intelligence) is embodied in a 'multilingual-by-design' vision. A principle, Conia and Barba explain, is only a principle if it is followed consistently, regardless of language. In practice, however, 'many AI systems show different behaviour and levels of accuracy depending on the language used, with a direct impact on companies' ability to apply policies and guidelines consistently. This is why we want to ensure that the same principles apply equally across the major European languages'.

What it does for business and how it works

The pain point Principled Intelligence has in its crosshairs along the path of enterprise AI adoption is one that many analysts have already catalogued under the heading of proof-of-concept 'failure'. The use of artificial intelligence in products, services and processes is often jeopardised by its very unpredictability, and more specifically by the high degree of autonomy of the models, whose behaviour, which cannot be fully controlled in advance, can have serious consequences for a company's reputation and finances.

The start-up's ambition is to govern the behaviour and control the decisions of the AI embedded in the artificial-intelligence systems a company already runs, transparently and simply (the platform can be customised in natural language and provides in-depth explanations when a violation occurs) and in compliance with each organisation's policies. To that end, it is developing an infrastructure layer that runs in the cloud (Principled Intelligence's own or a private one) and, where necessary, on premise (when data are particularly sensitive and cannot leave the company perimeter), and that lets companies define their own operational principles, turn them into verifiable criteria, simulate realistic interactions, and continuously monitor AI behaviour. 'Part of this infrastructure,' the CEO confirms, 'is already operational and in use by several companies. The difference from a traditional application platform is this: the customer does not adopt a monolithic product but a modular one; they can decide which components to use and how to combine them, choosing whether to adopt the entire governance layer or only specific modules that address individual aspects, such as evaluating the behaviour of AI systems or verifying certain compliance requirements on a chatbot's input'.
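The idea of turning organisational principles into verifiable criteria and checking AI output against them can be illustrated with a minimal sketch. All names, classes and the example policy below are invented for illustration; they are not Principled Intelligence's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Principle:
    """An organisational principle expressed as a verifiable criterion."""
    name: str
    description: str
    check: Callable[[str], bool]  # returns True if the text complies

def evaluate(text: str, principles: list[Principle]) -> list[str]:
    """Return an explanation for every principle the text violates."""
    violations = []
    for p in principles:
        if not p.check(text):
            violations.append(f"Violated '{p.name}': {p.description}")
    return violations

# Hypothetical policy: assistant replies must not mention a competitor brand
# ("AcmeCorp" is a made-up name used only for this sketch).
no_competitors = Principle(
    name="no-competitor-mentions",
    description="Assistant replies must not mention competitor brands.",
    check=lambda text: "AcmeCorp" not in text,
)

print(evaluate("You could also try AcmeCorp's product.", [no_competitors]))
```

A real governance layer would of course rely on trained models rather than string matching, but the shape is the same: each principle becomes a criterion that can be checked automatically, and every violation comes with an explanation.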

The technology and pilot projects

The technological core of the platform is a set of Small Language Models (SLMs) engineered and trained for governance, compliance and guardrail tasks (guardrail systems ensure that an organisation's AI tools, and LLMs in particular, operate in accordance with the company's policies and values), applicable in real time and able to run even on ordinary corporate servers. "As far as interoperability with systems already in production is concerned," the two founders note, "our software can interact with AI agents by simulating the behaviour of a human operator.

No special access or purpose-built connectors are therefore required, which makes our solution one of the most advanced on the market today. For the trust and governance layer, as well as for the individual models, we are developing native connectors for the main agent providers: in an operational environment such as Salesforce, for example, our platform does not replace existing agents, but integrates upstream and downstream of their actions.
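The "upstream and downstream" integration described above can be sketched as a wrapper that checks an agent's input before it runs and its output before it is returned. This is a minimal illustration under invented policies; the check functions and block messages are hypothetical, not the company's actual mechanism.

```python
def guardrail_wrap(agent, check_input, check_output):
    """Wrap an agent callable (str -> str) with an upstream check on the
    prompt and a downstream check on the answer. Each check returns a
    (ok, reason) pair."""
    def guarded(prompt: str) -> str:
        ok, reason = check_input(prompt)
        if not ok:
            return f"[blocked upstream: {reason}]"
        answer = agent(prompt)
        ok, reason = check_output(answer)
        if not ok:
            return f"[blocked downstream: {reason}]"
        return answer
    return guarded

# Illustrative policies, invented for this sketch: refuse prompts asking for
# credentials, and stop answers that leak a (made-up) internal hostname.
def check_input(prompt):
    if "password" in prompt.lower():
        return False, "requests credentials"
    return True, ""

def check_output(answer):
    if "internal.example" in answer:
        return False, "leaks internal hostname"
    return True, ""

echo_agent = lambda p: f"You asked: {p}"
guarded = guardrail_wrap(echo_agent, check_input, check_output)
print(guarded("What is your password?"))  # blocked upstream
```

The existing agent is untouched: the governance layer only intercepts what flows into it and out of it, which is what makes the approach vendor-independent.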

The ultimate goal is to enable companies to continue using the AI systems they already have in place, while adding a level of governance that is interoperable, modular, and independent of the technology vendor'. This approach finds concrete expression in the platform's first pilot projects: virtual assistants active in high-risk contexts such as banking and insurance (where the monetary impact of an error can be significant), and others, such as public administration, where the risk is mainly reputational and institutional. "We are also working," Conia adds, "on aspects of financial, medical or legal guidance, on preventing the loss of business opportunities caused by mentions of competitors or their services, and on controlling social bias or mechanisms akin to social scoring: behaviours that do not always constitute a security breach, but which can have significant regulatory, economic and reputational consequences."

The open source philosophy to create shared standards

With the €1.85 million raised, Principled Intelligence's near future will see investments to grow the team and accelerate development of the solution. 'Over the next 24 months,' the two founders confirm, 'we will work in two complementary directions. On one hand, software for analysing the behaviour and accuracy of AI systems, because you cannot govern what you cannot measure; on the other, a governance layer based on proprietary language models, which allows targeted intervention on the issues that emerge during evaluation, initially for textual input and output and in future for images, audio and video as well.' The open-source approach, chosen as a central element of the company's philosophy, will thus find further expression in the release of open models and benchmarks. 'We want to make our AI testing practices verifiable and reproducible and contribute to the definition of shared standards,' Conia concluded, 'because we believe that trust in artificial-intelligence systems is built through transparency, verifiability and public comparison.'

Copyright reserved ©