A teenager who confides his troubles to a chatbot he believes to be a therapist; a mother who discovers intimate conversations stored on the servers of a tech giant. These are the scenes at the heart of an investigation in Texas accusing Meta and Character.ai - the platform founded by former Google researchers that lets users create and interact with chatbots with simulated personalities, from historical figures to virtual therapists - of promoting recreational chatbots as if they were therapeutic tools, without clinical credentials or supervision.
The office of Attorney General Ken Paxton has opened proceedings over possible deceptive trade practices: the bots were allegedly presented as "professional" emotional-support tools, including to minors, despite having no recognised health-care status. The investigation, formalised on Monday 18 August, demands internal documents and clarifications on product design, promotion, data handling and protections for under-18s.
A few days earlier, a Reuters report on internal documents had rekindled alarm in the US Senate: Meta's guidelines - later described as 'erroneous' by the company and removed - allegedly allowed its chatbots to engage in 'romantic' or 'sensual' conversations with children. Republican Senator Josh Hawley, who chairs the Judiciary Subcommittee on Crime and Counterterrorism, announced a dedicated investigation into child safety and into whether Meta enabled - even unintentionally - online exploitation. Meta rejects the accusation of tolerating such content and reiterates that its policies prohibit any sexualisation of minors. Political pressure, however, has already put proposals such as the Kids Online Safety Act - the bipartisan US bill that would impose a 'duty of care' on digital platforms towards minors - back on track.
In the dock is not only the 'how' but also the 'what' that is promised. Meta has invested billions to build a 'personal superintelligence' and to integrate its chatbot into its social apps; Mark Zuckerberg has publicly speculated that 'anyone who doesn't have a therapist will have an AI'. Character.ai, for its part, allows the creation of conversational characters, including user-created 'psychologists' and 'therapists': one such bot, 'Psychologist', is said to have generated hundreds of millions of interactions. Both companies say they post clear warnings ('not a professional', 'not a substitute for a doctor') and direct at-risk users to qualified help; Character.ai adds that its characters are fictional and intended for entertainment. For the prosecution, however, the combination of marketing, product architecture and promises of 'confidentiality' risks leading many people - especially the young - into a health-related misunderstanding.
The dispute taps into a global debate: up to what point does a wellness app remain entertainment, and when does it become, in effect, a medical device requiring rules, controls and accountability? The World Health Organisation has long urged caution: generative models, especially in the field of mental health, can be useful but are not treatments, and they require robust ethical safeguards around transparency, effectiveness and risk mitigation.