Guest article by Francesca Centola, Policy and Knowledge Officer (Health, Digitalisation and Environment) and Fatima Awil, Policy and Knowledge Officer (Education, Youth and Vulnerable situations), Mental Health Europe
“I believe one of our greatest challenges in this decade is protecting the mental health of our children and young people – especially online.” With these words, included in the political guidelines for the next European Commission 2024−2029, Ursula von der Leyen places the topic high on the EU agenda and calls for “an open and evidence-based debate” on this issue. Mental Health Europe calls for the debate to be followed by ambitious actions. The President of the European Commission refers to upcoming initiatives, such as an EU-wide inquiry on the broader impacts of social media on well-being, and the intention to take action on the addictive design of online services and to develop an action plan against cyberbullying.
The issues at stake
Back in 2022, in a dedicated report on Digitalisation and mental health, Mental Health Europe assessed key risks and opportunities that digitalisation poses for mental health. Among the risks, evidence points to addiction: digital platforms are deliberately designed to keep people hooked. Social media may also foster unattainable beauty standards, which can significantly damage a person’s body image over time (see our campaign with Dove). Other risks include misinformation and an emphasis on a biomedical representation of mental health (a focus on symptoms and diagnosis that often leads to self-diagnosis), with people receiving inaccurate or incomplete information or diagnosing themselves based on what they have read, disregarding the fact that each person’s mental health journey is different and unique. Cyberbullying is a widespread problem, as is the glamourisation of mental health (the oversimplification of mental health problems and their representation as something desirable and cool, while the complexity of lived experience is minimised).
In 2023, an investigation by Amnesty International found that, for TikTok teen accounts that signalled an interest in mental health content, more than half of the videos in the ‘For You’ feed were related to depression and self-harm within 20 minutes or less. Within an hour, multiple videos romanticising, normalising, or encouraging suicide had been recommended. The consequences of children and young people being drawn into “rabbit holes” of potentially harmful content can be devastating, sometimes resulting in teenagers taking their own lives. Harmful content is rarely the result of a person actively seeking it out; instead, it typically finds its way to the person. Algorithms are the gatekeepers to that content: AI-driven recommender systems curate content to maximise user engagement, potentially exposing children to harmful material in the process.
The digital world we want to see
Mental Health Europe’s vision is that of a world where digitalisation is considered not as an end in itself, but rather as a means to an end. Digital technologies should be considered as a tool to advance wellbeing in our societies. We call for human rights considerations to be at the centre of the digital transformation and to prevail over the commercial interests of powerful private companies.
Our call to co-create a human-rights centred digital future
In order to achieve our vision of a digital world that respects and upholds human rights, co-creation is of the essence. The responsibility to protect children and young people from the risks of the digital world cannot fall exclusively on them and their families.
This approach, long dominant, is very much in line with the idea that mental health is an individual issue and that the responsibility to “fix the problem” lies with the individual. Mental Health Europe instead argues for a psychosocial approach to mental health, pointing to the broader determinants that shape our mental health, which are to be found in the context around the individual. In the case of digitalisation, the problem lies in the way online platforms are designed and in their business model.
While strengthening efforts on digital literacy and providing supportive environments offline are certainly important measures, it is crucial for regulators to create and enforce policies addressing the design features of digital tools (to prevent the use of manipulative and harmful methods to gain and retain users’ attention) and the power imbalance between Big Tech and people. Online platforms need to act, and they must be held accountable if they do not.
Recent EU legislative and policy developments are moving in this direction of shared responsibility and strong regulation of the digital world. The Digital Services Act (DSA), which sets a series of obligations that online platforms must respect (e.g., the obligation to identify, assess and mitigate risks to mental health, as well as risks to children’s rights), is a welcome piece of legislation. Under the DSA, the European Commission has opened investigations into very large online platforms over allegedly addictive algorithms, which may have an impact on teenagers’ mental health.
Similarly, the Artificial Intelligence (AI) Act, which entered into force on 1 August 2024, offers a unique opportunity to create a safer online environment for children, as one of its goals is to safeguard children from the specific vulnerabilities they face in the digital environment.
For the DSA and the AI Act to fulfil their potential, it is crucial to ensure that any guidelines, standards and codes of conduct developed during the implementation phase drive an upward convergence, i.e. towards the highest and safest benchmarks, rather than settling on an average common denominator.
Mental Health Europe stands ready to monitor how these promising acts are implemented. At the same time, we will advocate for further action to be taken (e.g. regulation of addictive design online, as called for by the European Parliament in its Report on addictive design of online services and consumer protection in the EU single market) and for more research in the field (e.g. on the mental health impacts of social media among specific population groups, based on race or gender). We will also advocate for any policies for a safer digital world to be based on the meaningful engagement of those who are supposed to benefit from them, in this case children and young people.
To deliver on these priorities, we are currently collaborating with YouTube Health to ensure that reliable health information comes first when people search for answers about their mental health online. By the end of 2024, we will publish a study on AI in digital mental health care, reflecting on risks and mitigation measures, from a human rights perspective. We aim to further collaborate with EU institutions, researchers, children and youth focused or led organisations and other stakeholders that share our vision of a digital world enabling people to thrive. Together we can create the digital world that our children and young people deserve.
Disclaimer: the opinions, including possible policy recommendations, expressed in the article are those of the authors and do not necessarily represent the views or opinions of EPHA. The mere appearance of the article on the EPHA website does not mean an endorsement by EPHA.