Perfect but fake X-rays: how artificial intelligence deceives doctors

A fracture can be simulated so accurately as to be as believable as a real one, opening up delicate and potentially dangerous scenarios

by Maria Rita Montebelli

A patient's chest radiograph is displayed digitally on a screen at the Lady Reading government hospital in Peshawar, Pakistan, on 7 November 2023, the day before World Radiography Day. ANSA/BILAWAL ARBAB EPA

3 min read

Translated by AI
There is a tool in clinical practice that seems to make everything immediately obvious. The radiographic image that appears before our eyes tells a precise story: that of a fractured bone, an infection, a tumour. X-rays have always been among the most 'objective' pieces of evidence in medicine. Or at least they used to be, because today that apodictic certainty is beginning to waver.

Research published in the journal Radiology opens up a scenario that was unthinkable until recently: artificial intelligence can create X-ray images so realistic that even experienced radiologists can be fooled. And these are not minor retouches or obvious manipulations, but true clinical deepfakes which, in the wrong hands, could have truly serious consequences.

Generative artificial intelligence (the technology behind tools such as ChatGPT) is now able to create X-rays from simple textual descriptions (so-called prompts). No radiology lab, no machinery: just a description and the algorithm. Out of nowhere appears an anatomically plausible, coherent, credible X-ray image. And, above all, one indistinguishable from a real one.

The study in Radiology: human eye versus artificial intelligence

The newly published study involved 17 radiologists from different countries, who were shown hundreds of X-ray images: some authentic, others produced by AI. In the first phase, without being told that artificial images were present, the doctors recognised the fakes in only 41% of cases. Once informed that 'fakes' might be among the images, their eyes became more alert and critical, and the detection rate rose to about 75%: still not enough to eliminate the problem.

And unfortunately, the problem is not limited to 'human' attention or experience. Even artificial intelligence systems struggle to detect fake X-ray images: none managed to spot all the fakes, not even the algorithm that had generated them (in this case, ChatGPT).

The risk of health fraud and beyond

'The most disturbing fact,' stresses the study's first author, Dr Mickael Tordjman of the Icahn School of Medicine at Mount Sinai in New York, 'is that these images are not only realistic, they are also clinically plausible. A fracture can be simulated so precisely that it is as believable as a real one. And this opens up delicate and potentially dangerous scenarios.'

Because if an image can be created out of thin air at this level of quality, it can also be used or manipulated for illicit purposes: from falsifying medical records, to insurance fraud, to manipulating the results of scientific research. And the risk is not theoretical, because all of this is already technically possible.

For now, AI-generated images tend to be too perfect: bones appear too 'smooth', suspect symmetries abound, and certain details repeat with unnatural regularity. But these are subtle, fragmentary clues that require great attention and specific training to catch.

How to defend against fake images generated by AI

It is precisely on this point that the authors of the study insist: today the challenge is no longer just to interpret an image correctly, but to verify its authenticity. A demanding change that requires new tools, new skills and the creation of new rules.

Proposed solutions range from automatic deepfake-detection systems, to the compulsory use of watermarks (distinctive signs, such as a logo, stamp or semi-transparent text, superimposed on an image) for AI-generated images, to tighter control over the datasets used to train the algorithms. But above all, it is necessary to train doctors and teach them to live with this new, slippery reality.

The headline of an editorial accompanying the study sums up these concepts trenchantly: 'The democratisation of deception: seeing is no longer believing'. A phrase that, applied to the medical context, takes on particular weight.

Because when even trust in images falters, the issue is no longer merely technological: it touches safety, ethics and the doctor-patient relationship.

And so the ever-growing list of newly unlocked anxieties gains another entry. Even an X-ray, the clinical 'truth' par excellence, will from now on have to be viewed more carefully, and with a hint of suspicion.

Copyright reserved ©