The report

Artificial intelligence will surpass human intelligence by 2027: what a study by former OpenAI researchers says

Machines do not yet have consciousness, but the way we use them is increasingly autonomous and can easily lead us into error. The question of responsibility is central

by Chiara Ricciolini


The Artificial Intelligence revolution will surpass the Industrial Revolution in speed and scope. No one can predict with certainty what the near future will look like, because AI models are evolving at a dizzying pace. Five researchers led by Daniel Kokotajlo, a former OpenAI researcher who left the company a year ago, have tried to imagine it. But what should alarm us is not so much the rapid development of intelligent machines as the fact that there is already no clarity about who is responsible for the actions we delegate to these models.

Intelligent and autonomous machines by 2027

The research, reported in a New York Times investigation, claims that artificial intelligence will surpass human intelligence by 2027. But how much truth is there in these predictions?


The researchers led by Kokotajlo based their projections on a thought experiment. They imagined a fictitious company, 'OpenBrain', which represents the theoretical sum of America's leading artificial intelligence laboratories. OpenBrain develops an increasingly advanced system: at the beginning of 2027, the AI becomes a fully fledged programmer. By mid-year, it becomes an autonomous researcher, capable of making discoveries and leading scientific teams. Between late 2027 and early 2028, a "super-intelligent" artificial intelligence is born: it knows more about advanced AI design than we do and can automate its own development, creating ever more powerful versions of itself.

Thus, by the end of 2027, AI could become uncontrollable. "In our predictions," reads the report, "we imagine that OpenBrain will develop a superhuman programmer internally: a system capable of performing all the coding tasks entrusted to the best engineers today, but much more quickly and cheaply."

According to Gaia Contu, science communicator and PhD student in robot ethics at the Scuola Sant'Anna in Pisa, 'at present we are not yet faced with artificial consciousness, if we understand consciousness in the intuitive sense, as qualitative subjective experience. However, no principle prevents its development in the future. Machines today are not conscious, but there is nothing to stop them from becoming so'.

"Some time ago at a conference," he continues, "we were discussing how the most up-to-date version of chat gpt could not do a very simple game: take the letter 'D', rotate it 180 degrees, put the number 4 on it, and display the picture. Of course it is clear to us that a picture of a little boat would come up, but chat gpt couldn't do it. Machines do not yet have that kind of visual intelligence, which is purely human.

The dilemma of responsibility


The development of such automated systems raises a crucial ethical question: that of responsibility. This is highlighted by Silvia Milano, a researcher and associate professor specialising in AI ethics at the Ludwig-Maximilians University of Munich and the University of Exeter. 'We are talking about a real "responsibility gap",' argues Milano. 'When a system is automated, that is, capable of acting without detailed instructions, the more autonomous it is, the greater the risk that its decisions escape human control, even the control of those who set the process in motion. If I assign an AI the task of replying to an email, that reply will not be written by me. I can choose to read it before sending it, but if the system is fully automated, the reply will go out without my intervention. At that point, who is responsible? If it is not made explicit, in practice, no one'.

The problem becomes more serious the more AI is used in sensitive areas. 'In the US,' Milano continues, 'machine learning algorithms are used in the judiciary to support decisions on precautionary measures or sentencing, in healthcare to set treatment priorities and risk levels, and in recruitment processes to identify the best candidates. In all these cases, the risk is that decisions are made without a clearly identifiable person responsible for them'.

One must always remember that Large Language Models have no notion of truth. The answers they provide rest on a probabilistic calculation: they predict the most plausible continuation on the basis of what they learned during training and of the material supplied to them. This is why delegating human tasks to machines should always be accompanied by careful supervision by whoever does the delegating. But perhaps we have already gone too far.
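To make the idea of "probabilistic calculation" concrete, here is a minimal, purely illustrative Python sketch. It is not drawn from the article or from any real model: the words and probabilities are invented for the example. It shows only the core mechanism, a model sampling its next word from a probability distribution, with no notion of which answer is true.

```python
import random

# Purely illustrative toy distribution over possible next words for the
# prompt "The capital of Australia is". These probabilities are invented;
# a real LLM computes a distribution over its entire vocabulary of tokens
# at every step, learned from its training data.
next_word_probs = {
    "Canberra": 0.55,   # likely because it appeared often in training text
    "Sydney": 0.35,     # a plausible-sounding error the model can still emit
    "Melbourne": 0.10,
}

def sample_next_word(probs):
    """Pick one word at random, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# The model has no notion of truth: with this toy distribution it answers
# incorrectly roughly 45% of the time, purely as a matter of probability.
print("The capital of Australia is", sample_next_word(next_word_probs))
```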

Copyright reserved ©