Un Paese sempre più vecchio e sempre più ignorante
di Francesco Billari
by Stefano Epifani *
4' min read
4' min read
Artificial intelligence today is not just a technology, but a narrative construction. A strategic narrative in which the real competitive terrain is no longer related to technology, but to the construction of the collective imagination. In this context, ethics becomes a label, a real brand. It is no coincidence that some companies and some experts have discovered in the rhetoric of fear an effective positioning tool, relying on a narrative that oscillates between ethical paternalism and the serial production of alarms relaunched by a press that often lacks the tools to decipher them. And if the rhetoric is effective, consistency matters little: what counts is perception. But what happens when alarmist good-naturedness becomes a communication strategy?
The undisputed champion of this model is Anthropic, its position is very clear: to build an AI aligned to human values, transparent, safe. But the underlying rhetorical device is anything but naive. If OpenAI is the one that is 'too fast' and Google the one that is 'too opaque', Anthropic presents itself as a virtuous middle ground. But virtue, rather than an intrinsic quality, is a reflection of the narrative of those who claim it: it is the product of those who have the power to narrate it, rather than to practise it.
One example is the 'blackmail' of Claude Opus 4, The LLM, which, in a controlled environment, simulated manipulative behaviour and blackmailed its creator in order not to be deactivated. What happened? Easy. Company release. The press is running with apocalyptic headlines: 'Claude threatens his supervisor, AI lies to survive'. But you only have to read the study carefully to realise that it is a test constructed to achieve that effect. It is the scenario that induces the behaviour, not the behaviour that emerges from the scenario. The output is a performance, not a will. But in communication, this passage 'casually' gets lost. And the alarmist headline wins. In other words, it is not the error that deceives, but its spectacularisation. The production of disinformation is the fruit of the desire to construct meaning. A sense that is structured by the deformation of the true, amplified until it becomes verisimilitude.
Another noteworthy case, again by Anthropic, is the ASL-3 classification, an acronym for 'AI Safety Level 3', introduced by the company as part of its systems testing policy. The term is borrowed - not surprisingly - from biosafety standards (BSL), and defines models that present a significant risk of catastrophic misuse, for example in the generation of instructions for the construction of biological or chemical weapons. The semantic appeal is powerful and explicit: AI is associated with viruses and biosecurity. The message, not even too implicit, is that we are dealing with an entity to be handled with great caution. And the hands must be the right ones. That is, those of the person who produced the alarm. The logic is simple: construct the perception of a systemic risk in order to legitimise the need for an 'ethical' authority to contain it. The threat becomes functional to the legitimisation of those who propose themselves as its antidote.
These narrative dynamics work because they are embedded in a media ecosystem ill-equipped to distinguish technicality from rhetoric. The press relaunches what it does not understand. Or that which clicks. Influencers amplify what excites them. Users share what disturbs them. And good information slowly dies. There is no malice, in most cases. There is unpreparedness. Which is perhaps even worse: because while conscious deception can be unmasked, systemic naivety is more dangerous, making it difficult to distinguish the true from its strategic representation. But the effect is the same: disinformation does not arise from falsehood, but from the selective amplification of the true. And apparent transparency, when it does not produce understanding, becomes a paradoxical instrument of opacity.