Does our cat really jump on us if it sees us cutting a cat-shaped cake? And is it really possible to walk on water by tying lots of empty bottles to your feet? If a video we saw online planted a doubt, we know what this is about: it is now very hard to trust any image or video. If, on the other hand, the doubt never occurred to us, that is worse: it means we are easy victims of hoaxes and misinformation. No deception is as harmless as it may seem. At best, we have wasted time watching stunts, accidents and oddities, believing them to be real. At worst, we risk consequences in the real world. Ending up in the sea with all our clothes on, for example. Or being subjected to propaganda, believing, say, that civil rights lawyer Nekima Levy Armstrong really did cry during her arrest in Minnesota. That was another case of an AI-altered image. More serious still, the author of the fabrication was the US government.
The problem is that this invasion of fake AI videos was widely foreseen; the damage it could cause was also foreseen; and for years, remedies, both technical and regulatory, have been available. Too bad they have all failed, at least for now. Now, roughly three years after the advent of generative AI, it is possible to take stock, and there is even a prime suspect for the flop: the big social platforms, which have no interest in spreading labels or other systems to filter out 'AI slop', AI-generated content of dubious quality. AI is greatly increasing the video content available on social media, and with it audience engagement.
For years, the technical answer seemed simple: add a kind of nutrition label to digital content. This is the idea behind C2PA, the 'Content Credentials' standard promoted by companies such as Adobe, Microsoft, OpenAI, Meta, Google and many others, united in the Coalition for Content Provenance and Authenticity. According to the official Content Credentials website, more than 500 companies now participate.
C2PA extends the old Exif metadata of photographs: the content's history is recorded inside the file in a cryptographically signed form. The camera or camcorder writes who took the picture, with what device, at what time. The editing software adds the changes, including the use of generative tools. The file travels with this 'manifest', which, in theory, platforms should read in order to show the public a panel listing the author, the apps used, and any use of AI.
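To make the idea concrete, here is a deliberately simplified Python sketch of that chain of signed provenance records. It is an illustration of the concept only, not the real C2PA format: actual manifests use X.509 certificates and COSE signatures, while this toy uses a single shared HMAC key, and all field names here are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a device or application signing key.
# Real C2PA signs with per-device certificates, not a shared secret.
SIGNING_KEY = b"demo-key"

def sign(entry: dict) -> dict:
    """Append an HMAC signature over the entry's fields."""
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def verify(entry: dict) -> bool:
    """Recompute the signature and compare it to the stored one."""
    sig = entry.pop("signature")
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = sig
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# The camera writes the capture record; editing software appends its changes,
# including a flag for generative tools.
manifest = [
    sign({"actor": "Camera XYZ", "action": "captured",
          "time": "2025-01-10T09:30:00Z"}),
    sign({"actor": "Photo Editor", "action": "edited",
          "generative_ai": True}),
]

# A platform reading the manifest can check every step and flag AI use.
all_valid = all(verify(e) for e in manifest)
ai_used = any(e.get("generative_ai") for e in manifest)
print("manifest valid:", all_valid)  # prints "manifest valid: True"
print("AI used:", ai_used)           # prints "AI used: True"
```

Tampering with any entry after it is signed (say, deleting the `generative_ai` flag) would make `verify` fail for that record, which is what makes the 'nutrition label' trustworthy in principle.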
In recent years, some professional cameras, such as models from Sony, Nikon and Leica, have started to integrate Content Credentials at the source, and software such as Photoshop or Lightroom can already write C2PA manifests into files.