Parthenope University

Artificial intelligence: how software can identify and correct errors

The VIMASS Laboratory designed an advanced simulation tool to generate an optimised dataset for breast cancer clinical trials

by Francesco Schiavone*

3' min read

The use of Artificial Intelligence (AI) in healthcare has opened up unprecedented perspectives in areas such as early diagnosis, personalisation of care and integrated patient management. However, the adoption of these technologies brings with it complex ethical and regulatory challenges, particularly with regard to the security and privacy of patient data. For this reason, it is essential that artificial intelligence systems are transparent, controllable and adaptable to the context in which they are used. This project represents a relevant scientific contribution in the field of innovation in healthcare, in line with international research guidelines on accountability related to the use of artificial intelligence.

Combining regulatory, ethical and educational aspects

Risk management requires an integrated framework combining regulatory, ethical and educational aspects. In this sense, the AI Act - which came into force in August 2024 - represents the first comprehensive regulation on artificial intelligence adopted by the European Union. This legislation aims to ensure the safe, ethical and responsible use of artificial intelligence, protecting the fundamental rights of citizens and promoting safe technological innovation. A central element of the AI Act is the risk-based approach, which provides for a classification of AI systems according to the level of risk they entail, up to the exclusion of those deemed unacceptable in terms of security and rights.

Identification and Mitigation of Bias in Clinical Data

Within this context, the VIMASS Laboratory of the University of Naples Parthenope concluded the AIACT project (Artificial Intelligence Assessment: Classifying Transparent Systems), which addresses one of the field's critical issues: the identification and mitigation of bias in clinical data, particularly in breast cancer clinical trials where AI is used as decision support. The limited representativeness of trial samples, in particular, raises questions about the validity and fairness of the algorithms.

Therefore, starting with an analysis of the scientific literature on Artificial Intelligence and algorithmic bias, the research project aimed to develop an artificial intelligence prototype for risk management and assessment, specifically designed for healthcare organisations and developers (computer scientists, engineers, clinicians).

Generating an optimised dataset for clinical trials

The main objective is to provide stakeholders with an advanced simulation tool designed to generate an optimised dataset for clinical trials. This near-bias-free dataset was developed specifically for training artificial intelligence models applied to the healthcare sector.

The prototype provides a detailed assessment of the level and type of bias generated during the software configuration phase, supporting the correction of biases in AI algorithms.

The software allows users to select homogeneous samples according to different factors (gender, age, ethnicity, comorbidity, etc.) and to modify these parameters according to their needs. In real time, the simulator reports the risk level of each field in the dataset, allowing prior analysis of potential bias and facilitating the development of more transparent, accountable and inclusive AI models.
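AIACT's internals are not public, but the per-field risk scoring described above can be illustrated with a minimal sketch. The snippet below assumes (as a hypothetical choice, not the project's actual metric) that bias per field is measured as the total variation distance between the observed category mix and a user-defined reference distribution; field names and the cohort are invented for illustration.

```python
from collections import Counter

def field_risk(records, field, reference):
    """Score how far a field's observed category mix deviates from a
    reference distribution, using total variation distance.
    Returns a value in [0, 1]: 0 = fully representative, 1 = maximal bias."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    observed = {k: counts.get(k, 0) / total for k in reference}
    return 0.5 * sum(abs(observed[k] - reference[k]) for k in reference)

def risk_report(records, references, threshold=0.2):
    """Per-field risk levels, flagging fields whose bias exceeds a threshold."""
    return {
        field: {
            "risk": round(field_risk(records, field, ref), 3),
            "flag": field_risk(records, field, ref) > threshold,
        }
        for field, ref in references.items()
    }

# Toy cohort: one age band and one ethnicity are over-represented.
cohort = (
    [{"age_band": "40-59", "ethnicity": "A"}] * 75
    + [{"age_band": "60+", "ethnicity": "A"}] * 20
    + [{"age_band": "60+", "ethnicity": "B"}] * 5
)
targets = {
    "age_band": {"40-59": 0.5, "60+": 0.5},
    "ethnicity": {"A": 0.7, "B": 0.3},
}
print(risk_report(cohort, targets))
# {'age_band': {'risk': 0.25, 'flag': True}, 'ethnicity': {'risk': 0.25, 'flag': True}}
```

A real tool would cover continuous fields, intersectional subgroups and correction (e.g. resampling) as well, but the same idea applies: score each field against a target population and surface the deviation before the dataset reaches model training.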

How to avoid wrong decisions and inappropriate treatment

The use of biased AI algorithms in clinical trials can generate incorrect decisions, inappropriate treatments and discriminatory healthcare policies. The project therefore aims to create fair clinical systems by promoting artificial intelligence that is not only technically advanced but also ethically responsible, capable of guaranteeing impartial and quality-oriented decisions.

Methodologically, the research integrated qualitative and quantitative approaches. Once the algorithm was developed, incorporating risk analysis tools and specific metrics, it was implemented in a prototype. Technological specifications were defined to enable the analysis and processing of the information required for bias reduction, in line with the AI Act and the relevant clinical, managerial and behavioural literatures.

The results of the project were presented at the University of Naples 'Parthenope', during the scientific workshop: 'Bias in Healthcare - The AIACT Project'. The event showed how the proposed solution can support developers and healthcare facilities in improving decision-making processes, optimising costs and increasing the quality of care provided to patients.

AIACT is therefore a valuable tool for research, analysis and decision-making based on transparent and accountable data.

*Full Professor in Management, University of Naples Parthenope.

Copyright reserved ©