Products

ChatGPT prepares to predict user age: this is how OpenAI separates adults and minors

It is already active in the United States and will arrive in Europe shortly: OpenAI has switched on a system that tries to tell, from behaviour and not only from declared data, whether an adult or an under-18 is behind an account

by Alessandro Longo

3' min read

Translated by AI

OpenAI will automatically estimate users' ages on ChatGPT and, if it believes a user is underage, adapt the chatbot's behaviour in conversations and image generation. The announcement, made yesterday, already applies to US users; the system will be extended to Europeans 'in the coming weeks', in compliance with EU law, the company writes.

The system uses algorithms to infer whether an adult or someone under 18 is behind an account. The parameters analysed include how long the account has existed, when and how often it is used, how usage habits change over time, and the age declared at registration.


When the system estimates that an account may belong to someone under 18, ChatGPT automatically activates a set of additional protections. The experience is stripped of content deemed too sensitive for a young audience: descriptions of graphic violence, gory content, viral challenges that may encourage risky behaviour, sexual, romantic or violent role-play, explicit depictions of self-harm, and content promoting extreme beauty standards, unhealthy diets or body shaming.

In parallel, those who explicitly declare that they are under 18 at the time of registration receive a dedicated set of protections, in line with the principles OpenAI has codified for the use of models with under-18s.

If, on the other hand, an adult is mistakenly classified as a minor, they can contest the assessment. To restore full access, however, they must complete a selfie-based identity check run by Persona, an external provider specialising in identity verification. This check is not required to use ChatGPT in general, but it is the obligatory step for anyone who wants to exit the 'teen mode' applied by the system.

Alongside this, OpenAI also points to the parental control tools already available. Parents can, for instance, set time slots when the app cannot be used, limit certain features (such as memory, or the use of conversations to train models) and receive notifications if the AI detects signs of acute emotional distress in the child's requests.

This can also be read as a direct response to recent news stories and to legal and regulatory pressure concerning minors.

In September 2025, the suicide of a 16-year-old Californian boy, after months of conversations with ChatGPT, led to a civil lawsuit against OpenAI. At the time, the company announced exactly what is now arriving: an age verification and estimation system.

The flip side of this verification is that, by 2026, OpenAI will be able to offer an 'adult mode', as announced, to users identified as adults. In short: protecting minors on one side, enhancing the service for adults, and boosting engagement, on the other. Elon Musk's Grok is already more open than ChatGPT to 'adult' conversations, but without age verification.

In Europe, these measures fall within the scope of the General Data Protection Regulation and the Digital Services Act, which protects minors. In addition, the EU Commission and national authorities (in Italy, Agcom) have long been pushing for the adoption of age verification systems: tools that provide at least reasonable assurance that a service aimed only at adults is not used by minors, or that activate reinforced protections where minors are admitted. In Italy, such a system is already in force for access to the main porn sites.

Copyright reserved ©
