Digital technology and adolescents

Doubts, answers, missing laws: protection for minors came late

The correspondence that emerged from the court records reveals an internal debate at Big Tech dating back to 2017

by Marisa Marraffino

At the trial, a flood of internal emails surfaced from employees worried about the wellbeing of users, about that of their own children, and about the scarce resources invested in teenagers' safety. Among them was the email of an executive in Meta's design department who wrote directly to Mark Zuckerberg, saying she would not be able to "sleep at night" if she did not share her concerns with him. One of her two teenage daughters had in fact been hospitalised twice for body dysmorphia, and, in her view, the social network should have eased the pressure of unattainable beauty standards in order to minimise the risks for minors.

'One day, looking back,' the woman wrote, 'I hope we can be proud of the decisions we made.' A sentence that sounds like a warning, while the 'Team Teens' created by Zuckerberg was busy investing in engineers, enlisted to keep teenagers glued to their screens and to devise new ways not to lose users.

The documents

In an email dated 1 April 2020, Zuckerberg himself called it 'patronising' to limit people's ability to present themselves as they wished, referring to the internal discussion on whether or not to keep the beauty filters on Instagram. 'I'm not sure,' he continued, 'that at this point it's easy to change the setting.' For Zuckerberg, there was no clear evidence of harm from the use of the filters or from the workings of his algorithms.

In some internal emails from 2018, studies were cited claiming that social media use had roughly the same effect on adolescent depression as 'eating potatoes'. An underestimation of the risk, in light of the latest rulings, that produced a dense correspondence revealing a heated internal debate as early as 2017 and an attempt to deflect outside concerns onto other fronts. Meta executives monitored the coverage coming out of major media outlets around the world and the legislation being passed, trying to settle on 'plausible' answers for their public statements.

As early as 2018, the research team was looking for scientific arguments to counter the narrative of "social media addiction", arguing that the causes should also be sought elsewhere, for example in social inequalities or individual health problems. A script that did not seem to hold up against much of the scientific evidence, or against criticism from Meta's own top management, one of whom wrote: 'The fact that we have age limits that are not respected makes it difficult to claim that we are doing everything possible.' Meta's wellbeing managers, whose teams received far smaller investments than the engineering ones, continually raised questions and doubts, showing that they cared about the problem but could not do enough: "Is there any way to immediately redirect a person to support if they search for or try to post self-harm content, instead of showing it to them?"; or: "We cannot put all the blame on the user without acknowledging our role in the problem". And again: 'The fact that we say we don't allow children under 13 access to our platforms, but have no way to enforce it, is simply indefensible.'

The lawmakers' delay

The dense correspondence, in light of the episodes of self-harm that have occurred in Italy as well, also lays bare the failure of legislators to regulate Big Tech from the outset. While lawmakers around the world stood by and watched, the platforms amplified their power, maximising revenues and reinvesting them in new technologies designed to keep us glued to a screen for as long as possible.

Only since 17 February 2024 has the Digital Services Act, the European regulation meant to protect users from the main large-scale risks, been fully applicable in Italy, while doctors and scholars from all over the world had been sounding the alarm about the safety of social media, especially for the youngest users, since 2017. As early as 2019, Instagram's head of public policy was pointing out that "the lack of proactive action in detecting accounts under the age of 13" would undermine "the general credibility" of the platform, and that a great deal of self-harm and suicide content continued to circulate despite being unsuitable for users under fifteen or sixteen. Internal discussions of the legislation and the medical research reported in the press went on.

The clear impression, reading the case files, is that the safety of adolescents was not a priority and that the risk was consistently underestimated, not only by the big players but also by legislators, who arrived late to the question of safety on social media, especially for the most vulnerable users.
