New ChatGPT rules on health, privacy and sex: what really changes for users
Since 29 October, the rules have changed for ChatGPT users and in other OpenAI-based products.
There are more safeguards against the harmful effects of chatbots, but also a great deal of confusion. Some people have even claimed, on social media and in some newspapers, that ChatGPT can no longer give medical or legal advice. Wrong: it continues to do so, as is easy to verify. The point is that the new rules work partly as a disclaimer, with OpenAI distancing itself from certain uses that remain technically possible but are forbidden on paper. At the same time, they do establish some stronger and more immediate practical protections, though only for the most vulnerable users.
Let's be clear.
The new rules come after months of discussion with governments, researchers and users, and for Europeans they fit, somewhat loosely, within the framework of the EU AI Act.
The clearest prohibitions: weapons, surveillance and manipulation
The new policies explicitly prohibit using the models to develop, design or manage weapons. They also ban biometric surveillance and facial recognition without consent (somewhat like what Clearview AI does in the US for immigration-control authorities).
Specifically, the policies state: 'Facial recognition databases built without the subject's consent are prohibited, as is real-time remote biometric identification in public spaces, and the use of a person's likeness without their consent, including their photorealistic image or voice, in a way that creates confusion as to authenticity.'
