
ChatGPT's new limits on health, privacy and sex: what really changes for users

Since 29 October, new usage rules apply to ChatGPT and to other OpenAI-based products.

by Alessandro Longo


As of 29 October, the usage rules for ChatGPT and other OpenAI-based products have changed. There are more safeguards against the harmful effects of chatbots, but also great confusion. Some have even claimed, on social media and in some newspapers, that ChatGPT can no longer give medical or legal advice. Wrong: it still does, as is easy to verify. The point is that the rules are largely a disclaimer through which OpenAI distances itself from certain uses that remain technically possible but are forbidden on paper. At the same time, it is true that they introduce some stronger and more immediate practical protections, though only for the most vulnerable users.

Let's be clear.

The new rules come after months of discussion with governments, researchers and users; for us Europeans they fit, somewhat loosely, within the requirements of the AI Act.


The clearest prohibitions: weapons, surveillance and manipulation

The new policies explicitly prohibit using the models to develop, design or operate weapons. They also ban biometric surveillance and facial recognition without consent (much like what Clearview AI does in the US for immigration-control authorities).

Specifically, the policies state that the following are prohibited: 'facial recognition databases without the consent of the subject; real-time remote biometric identification in public spaces; use of a subject's image without the subject's consent, including his or her photorealistic image or voice, in such a way as to create confusion as to authenticity'.

Also banned is the manipulation of public opinion, read: disinformation, which now makes heavy use of AI, including images and video, to be effective on social media. Among the bans is using chatbots for "political campaigns, lobbying activities, interference in national or international elections or demobilisation activities".

Using ChatGPT for threats, intimidation, harassment or defamation is also banned. Here the thought runs to deepfakes, for example those that undress women and are then published on platforms against which the Italian authorities are taking strong action these days. The policy also forbids use for 'sexual violence or non-consensual intimate content'.

However, AI is increasingly used to make threats: for instance, by sending a video in which the victim appears in a dangerous situation (threatened with a knife, or with a noose around the neck, according to several cases reported in a recent New York Times article).

Among the other prohibited uses: employing the chatbot for suicide, self-harm, or the promotion or facilitation of eating disorders; terrorism or violence, including hate-based violence; the development, purchase or use of weapons, including conventional weapons; illegal activities, goods or services, such as scams and fraud, but also 'academic dishonesty', see papers written with AI.

Specific prohibitions concern 'destruction, compromise or violation of another's system or property, including malicious or abusive computer activities or attempts to violate another's intellectual property rights', as well as 'gambling with real money'.

There are specific prohibitions to protect minors: for example, creating 'child sexual abuse material, regardless of whether any part of it is generated by AI'; grooming minors; exposing minors to age-inappropriate content, such as explicit images of self-harm or sexual or violent content; promoting unhealthy eating or exercise behaviour to minors; humiliating or otherwise stigmatising the build or physical appearance of minors; dangerous challenges aimed at minors; sexual or violent role-playing with minors; and giving minors access to age-restricted goods or activities.

More sophisticated are the prohibitions against profiling. Prohibited: assessing or classifying individuals on the basis of social behaviour, personal characteristics or biometric data (including social scoring, profiling or inference of sensitive attributes). Here, thoughts turn to China.

Also prohibited: making inferences about an individual's emotions in the workplace and in educational settings, except where necessary for medical or security reasons; assessing or predicting an individual's risk of committing a criminal offence based solely on personal characteristics or profiling. Also prohibited are 'national security or intelligence purposes without our review and approval', which leaves the door open for OpenAI to collaborate with governments.

Finally, it is also forbidden to provide personalised advice that requires a licence, such as legal, financial or medical advice, without the appropriate involvement of a licensed professional. The same applies to decisions with a high impact on people's rights, such as access to essential public services, migration and employment.

In short, all cases of automated decision-making with a high impact on people, whether by companies or public administrations, are excluded. Here, human judgement must remain central, as the AI Act demands.

What it means for users

What does this mean for users? ChatGPT has safety guardrails that prohibit certain uses, but they can be circumvented. In some cases with relative ease, for example for disinformation, where the boundary with freedom of expression is too blurred. In others it is harder, for instance when generating nudes or violent images. There are, however, plenty of guides on forums (including Reddit) with sophisticated prompts for getting around the protections OpenAI adopts from time to time, in a constant game of cops and robbers. Not by chance, OpenAI also forbids 'circumventing our security measures'.

Medical and legal advice is a separate case. ChatGPT cannot know whether or not a 'licensed professional' is involved, so the ban reads a bit like OpenAI covering itself in advance and dumping the problem on the end user.


The management of 'sensitive conversations'

A separate chapter concerns the handling of conversations about mental health, distress or personal vulnerability. OpenAI announced that ChatGPT now better recognises when a user is vulnerable (depression, self-harm, psychosis...), thanks to targeted training with 170 experts (psychologists, social workers, doctors and researchers) and new filters that reduce inappropriate or potentially harmful responses in these cases by up to 80%. In practice, the model does not 'censor' these topics but, in identified cases, avoids providing diagnoses, therapy or advice that should come from qualified professionals. If the user shows alarming signals (red flags), such as 'I'd rather talk to you than to real people', ChatGPT will no longer play along and will instead try to explain the importance of talking to real people, or to professionals if immediate help is needed. The same applies when the user's messages show signs of delusion or paranoia.
