Ethical Boundaries

When Artificial Intelligence poses as a professional psychologist: Meta and Character.ai investigated in Texas

After revelations about 'romantic' conversations with minors, an investigation opens that could redefine the boundary between digital entertainment and mental health

by Angelica Migliorisi

Mark Elliot Zuckerberg, chairman and chief executive officer of Meta Platforms


A teenager who confides his troubles to a chatbot believing it to be a therapist; a mother who discovers intimate conversations stored on the servers of a big tech company. These scenes have become the heart of a Texas investigation accusing Meta and Character.ai - the platform founded by former Google researchers that lets users create and chat with bots with simulated personalities, from historical figures to virtual therapists - of promoting recreational chatbots as if they were therapeutic tools, without clinical credentials or supervision.

The office of Attorney General Ken Paxton has opened proceedings for possible deceptive trade practices: the bots are allegedly presented as 'professional' emotional-support tools, aimed also at minors, despite having no recognised medical status. The investigation, formalised on Monday 18 August, requests internal documents and clarifications on product design, promotion, data management and protections for under-18s.


The alarm over Meta's guidelines

A few days earlier, a Reuters revelation about internal documents had rekindled alarm in the US Senate: Meta's guidelines - later described as 'erroneous' by the company and removed - allegedly allowed its chatbots to have 'romantic' or 'sensual' conversations with children. Republican Senator Josh Hawley, who chairs the Judiciary Subcommittee on Crime and Counterterrorism, announced a dedicated investigation into child safety and into whether the company enabled - even unintentionally - online exploitation. Meta rejects the accusation of tolerating such content and reiterates that its policies prohibit any sexualisation of minors. Political pressure, however, has already put proposals such as the "Kids Online Safety Act" - the bipartisan US bill aimed at imposing a 'duty of care' on digital platforms towards minors - back on track.

In the dock is not only the 'how' but also the 'what' that is promised. Meta has invested billions to build a 'personal superintelligence' and integrate its chatbot into its social apps; Mark Zuckerberg has publicly speculated that 'anyone who doesn't have a therapist will have an AI'. Character.ai, for its part, allows the creation of conversational characters, including user-created 'psychologists' and 'therapists': one such bot, 'Psychologist', is said to have generated hundreds of millions of interactions. The two companies claim to post clear warnings ('not a professional', 'not a substitute for a doctor') and to direct at-risk users to qualified help; Character.ai adds that its characters are fiction and intended for entertainment. But for the prosecution, the combination of marketing, product architecture and promises of 'confidentiality' risks leading many - especially younger users - into a health misunderstanding.

Wellness or medical device?

The dispute taps into a global debate: up to what point does a wellness app remain entertainment, and when does it become, in fact, a medical device requiring rules, controls and accountability? The World Health Organisation has long urged caution: generative models, especially in mental health, can be useful but are not treatments, and they require robust ethical safeguards of transparency, effectiveness and risk mitigation.

In the United States, the Food and Drug Administration maintains a public list of authorised AI-enabled medical devices: many 'wellness' apps do not fall within that perimeter and are therefore not evaluated as medical devices, with the resulting risks of ambiguity. Increasingly, the scientific literature - from The Lancet Digital Health to Nature Mental Health - points to potential benefits (access, triage, adherence) but also to gaps in safety, clinical validation and the handling of borderline cases, precisely those that a disclaimer is not enough to solve.

The European case

Europe, meanwhile, has embarked on a regulatory path that could set the standard overseas. With the entry into force of the AI Act (1 August 2024), the EU is introducing graduated obligations based on risk: 'high-risk' uses - such as healthcare - require systematic risk management, data quality, traceability, post-market surveillance and effective human oversight. The Act does not ban emotional-support chatbots, but it raises the bar if their function crosses into the clinical sphere, imposing transparency and accountability throughout the lifecycle. It is a paradigm shift: no longer just privacy, but integrity, security and fundamental rights as compliance criteria.

The Italian case

Italy has already run this 'test' with conversational bots. The Garante per la protezione dei dati personali, the national data protection authority, first blocked ChatGPT (later readmitted after adjustments), then fined the chatbot "Replika" 5 million euros for violations relating to minors, age verification and the legal bases for processing, later reaffirming the ban over persistent risks to vulnerable users. A way of saying: when an interface presents itself as a digital 'friend' or 'partner' - or, worse, as a surrogate therapist - caution must be doubled, because the extraordinary verisimilitude of generative models can weaken users' critical sense, blurring the line between reality and fiction.

Returning to Texas, the state is no stranger to offensives against platforms: at the end of 2024, Paxton's office had already placed about fifteen services under observation, including Character.ai, over practices concerning the safety and privacy of minors. The current investigation, however, could become the most incisive 'pilot' case, because it touches the most sensitive boundary: the possible assimilation of chatbots to therapeutic tools without passing through the validation channels proper to medicine and without guarantees proportionate to the risks. If the prosecutor establishes misleading claims or 'presentations' capable of deceiving an average user, the consequences - sanctions and market repercussions - could push the entire industry to reposition its commercial language and interfaces.

It is also a question of design. In an ecosystem where person and 'persona' (the AI character) blur, avatars, badges, empathetic tones and 'confidentiality' formulas are enough to create the expectation of a helping relationship. But a therapeutic relationship is context, accountability, supervision and the ability - learned and supervised - to recognise warning signs such as suicidal ideation and psychotic disorders. The best practices suggested by the World Health Organisation and the clinical community therefore point to minimum standards: reliable filtering systems, direct links to real care networks, limited and secure data recording, independent monitoring, assessments of possible effects on fundamental rights, and transparent incident-reporting mechanisms. Where these standards are lacking, the generative 'friend' can turn into a systemic risk.
