by Francesco Schiavone*.
The use of Artificial Intelligence (AI) in healthcare has opened up unprecedented possibilities in areas such as early diagnosis, personalisation of care and integrated patient management. However, the adoption of these technologies brings with it complex ethical and regulatory challenges, particularly concerning the security and privacy of patient data. For this reason, it is essential that artificial intelligence systems are transparent, controllable and adaptable to the context in which they are used. This project represents a significant scientific contribution to innovation in healthcare, in line with international research guidelines on accountability in the use of artificial intelligence.
Risk management requires an integrated framework combining regulatory, ethical and educational aspects. In this respect, the AI Act, which entered into force in August 2024, represents the first comprehensive regulation on artificial intelligence adopted by the European Union. The legislation aims to ensure the safe, ethical and responsible use of artificial intelligence, protecting the fundamental rights of citizens while promoting technological innovation. A central element of the AI Act is its risk-based approach, which classifies AI systems according to the level of risk they entail, culminating in an outright ban on systems deemed unacceptable on grounds of safety and fundamental rights.
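The tiered logic described above can be sketched in a few lines of code. The four risk tiers are those defined by the AI Act, but the mapping of example systems to tiers below is a simplified, hypothetical illustration, not a legal assessment.

```python
# Illustrative sketch of the AI Act's risk-based classification.
# The four tiers exist in the regulation; the example systems and the
# lookup table are hypothetical, for demonstration only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"           # e.g. social scoring by public authorities
    HIGH = "strict obligations"           # e.g. AI used as clinical decision support
    LIMITED = "transparency obligations"  # e.g. customer-facing chatbots
    MINIMAL = "no specific obligations"   # e.g. spam filters

# Hypothetical mapping used only to demonstrate the tiered approach.
EXAMPLE_SYSTEMS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "clinical_decision_support": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(system: str) -> RiskTier:
    """Return the illustrative risk tier for a named example system."""
    return EXAMPLE_SYSTEMS[system]

print(classify("clinical_decision_support").value)  # strict obligations
```

Under this scheme, a system like AI-based decision support in oncology would sit in the high-risk tier, subject to the regulation's strictest obligations.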
Within this context, the VIMASS Laboratory of the Parthenope University of Naples concluded the AIACT project (Artificial Intelligence Assessment: Classifying Transparent Systems), addressing one of the field's critical issues: the identification and mitigation of bias in clinical data, particularly in breast cancer clinical trials where AI is used as decision support. In these trials, the limited representativeness of the samples raises questions about the validity and fairness of the algorithms.
Building on an analysis of the scientific literature on artificial intelligence and algorithmic bias, the research project aimed to develop an artificial intelligence prototype for risk management and assessment, designed specifically for healthcare organisations and developers (computer scientists, engineers, clinicians).
The main objective is to provide stakeholders with an advanced simulation tool that generates an optimised dataset for clinical trials. This near-bias-free dataset is intended specifically for training artificial intelligence models in the healthcare sector.
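A minimal sketch of how such an optimised dataset might be produced is resampling a skewed sample so that each subgroup's share matches a target population share. This is a generic rebalancing technique, not the project's actual tool, and the records and target shares below are hypothetical.

```python
# Sketch: building a rebalanced training dataset by resampling each
# subgroup (with replacement) to match target population shares.
# Records, group labels and target shares are hypothetical.
import random

def rebalance(records, group_key, target_shares, size, seed=0):
    """Draw `size` records so that each subgroup's share in the output
    approximates its target share (oversampling small groups)."""
    rng = random.Random(seed)
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    out = []
    for group, share in target_shares.items():
        n = round(size * share)
        out.extend(rng.choices(by_group[group], k=n))
    rng.shuffle(out)
    return out

# Hypothetical, heavily skewed trial records.
records = ([{"group": "age<50"}] * 120
           + [{"group": "age50-69"}] * 300
           + [{"group": "age>=70"}] * 30)
target = {"age<50": 0.25, "age50-69": 0.45, "age>=70": 0.30}

balanced = rebalance(records, "group", target, size=1000)
share_70 = sum(r["group"] == "age>=70" for r in balanced) / len(balanced)
print(round(share_70, 2))  # 0.3
```

Oversampling with replacement is only one option; in practice, synthetic data generation or reweighting may be preferable when a subgroup is too small for duplicated records to add real information.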