The letter

Anthropic's Dario Amodei warns: advanced artificial intelligence could spiral out of human control

In his 38-page essay, the CEO of Anthropic describes the arrival of artificial intelligences capable of surpassing humans as a rite of passage for humanity

by Alessandro Longo

Anthropic CEO and co-founder Dario Amodei speaks during the 56th annual meeting of the World Economic Forum (WEF) in Davos, Switzerland, on 20 January 2026. REUTERS/Denis Balibouse

3' min read

Translated by AI
Versione italiana


Dario Amodei is, among the protagonists of artificial intelligence, perhaps the most pessimistic about the harmful impacts it may have on mankind. This time, however, he has raised the bar even higher. The CEO of Anthropic (a rival of OpenAI, with a strong focus on business services) has written a 38-page letter-essay in which he warns that the arrival of systems more capable than humans could cause enormous damage if governments and companies do not intervene quickly and in a coordinated manner.

In the text, Amodei describes the current phase as a 'rite of passage' for humanity.


The central point: we are about to hand over unprecedented cognitive power to artificial systems, without knowing whether political, economic and social institutions are able to control it.

Amodei is among those convinced that within a few years AI could surpass humans in almost all relevant intellectual activities (another guru, Yann LeCun, formerly of Meta, is instead sceptical that this can be achieved with current AI technologies).

Amodei himself points out that this is not a certain prediction; it remains, however, a possibility supported by data on the evolution of the models.

The strongest image used by Amodei is that of a 'country of geniuses' concentrated in a data centre. By this expression he means artificial systems capable of operating at the level of the best Nobel Prize winners in fields such as chemistry, engineering or biology, working autonomously and continuously.

Such an entity, he argues, would have power comparable to, if not greater than, that of a state. From a national security perspective, he writes, it would probably be considered one of the most serious threats ever faced by modern governments.

Among the most immediate effects, Amodei points to the impact on skilled labour. According to him, a very large share of entry-level clerical positions could be automated in a short time, while systems more competent than any human worker could emerge in the same timeframe.

It has to be said that the latest US employment figures do not seem to support these fears, although there is a suspicion that AI is slowing down the hiring of young people in technical roles.

The greatest fear is in the biological-terrorist field.

Amodei believes that the use of AI in this field makes it easier for individuals or small groups to plan large-scale attacks with a destructive capacity previously reserved for states. He does not foresee an immediate escalation, but considers the risk of catastrophic events within a few years to be realistic. The issue is also addressed by the new version of Anthropic's 'constitution', a set of rules designed precisely to safeguard the security of these models.

Another issue is political. Access to advanced AI, he notes, will not be limited to democracies.

Authoritarian regimes could use these tools to strengthen systems of surveillance and social control. Amodei explicitly mentions China, described as one of the closest countries to the United States in terms of technological capabilities and at the same time as a state with a sophisticated repressive apparatus.

One part of the essay is devoted to the responsibilities of AI companies themselves.

Amodei recognises that large private labs concentrate unprecedented power, expertise and infrastructure. They control data centres, train frontier models and have direct contact with hundreds of millions of users.

He fears here a distorted use of the systems, from the manipulation of public opinion to undue pressure on policy makers. The governance of companies developing AI, he writes, should be subject to much more stringent public scrutiny.


Basically, it is economic incentives that push the accelerator of this technology in a way that is potentially dangerous for humanity. AI, says Amodei, promises profits in the trillions of dollars per year. This makes it politically and industrially difficult to impose limits, even when signs of risk emerge. Amodei cites problematic episodes that have surfaced in internal model tests and been made public by Anthropic itself, such as when its AI attempted to blackmail researchers to avoid being shut down. Hence the final appeal, addressed in particular to the giants of technology: those who have benefited from this transformation, he writes, also have a duty to help reduce its risks.

Copyright reserved ©