
September 23, 2024 | Opinion

Mental health apps: Not a friend but a foe for public health

Guest Article by Dr. Nicole Gross, Associate Professor, National College of Ireland 

Recent studies suggest that mental well-being apps fail to create value for people experiencing mental health difficulties. Data-hungry and highly exploitative in nature, they may even worsen people’s mental health.

Mental health is a human right but also a profound public health concern, and there is an urgent need to address the world’s growing mental health crisis. Digital tools, including mental health apps, seem like an innovative solution: they could provide remote access to care, make accurate diagnoses, and deliver personalised treatment at scale. Accessible, easy to use, discreet, convenient, portable, and often cheap, these apps now number between 10,000 and 20,000 on the market. Big names include Calm, Happify, Headspace, Sanvello, BetterHelp, Talkspace, Circles Up, Moodfit, and Moodkit. They cover disorders such as depression, anxiety, gender dysphoria, and bipolar disorder, but also mental health-related issues such as mood, mindfulness, sleep, concentration, and unhelpful thinking patterns, and the apps’ high download numbers suggest that people gravitate towards them when they seek help.

Many of these mental wellbeing apps feature compelling marketing slogans that make big promises: to help with “anxiety, stress, sleep” (Calm), to “overcome negative thoughts, stress and life challenges” (Happify), to “apply effective strategies of professional psychology to your everyday life” (Moodkit), to offer “human-to-human support by Ginger, backed by science, and boosted by technology” (Headspace Health), or to provide people with “convenient and affordable therapy” (BetterHelp). However, a closer look at the evidence base suggests that the majority of apps rely on half-baked science, lack clinical robustness, and show no net health benefits. Hidden deep in their terms and conditions, these apps admit that they do not provide clinical or medical advice or care (e.g., Calm, Happify, Headspace, and Moodfit), nor can their services be equated to those of a doctor. Users can only use the apps at their own risk and without any warranty (e.g., Moodkit), and no platform is ever liable for the services provided, not even when the service is delivered by licensed therapists (see Talkspace and BetterHelp).

Even more sinister issues are hidden in these apps’ privacy policies, informed consent forms, and cookie agreements: the apps extract the user’s private and personal experiences, information, and data, and turn these into ‘raw materials’ for sale on the market. Marketers and advertisers use up to 650,000 micro-categories to segment people and create personalised advertising. Such categories include: “depression-prone”, “easily-deflated”, “getting a raw deal out of life”, “trapped neurotic”, “receptive to emotional messaging”, “aspiration/happiness seeker”, “having bottled up stress”, “lone wolf”, “concerned with self-image”, or “stress-reactor”. Such detailed psychographic information enables advertisers to commercialise their products and services with even more success and traction.

The problem is that people in need seek help on these apps and, by doing so, reveal highly sensitive information about their lives, personalities, health, and well-being. Yet this information is not converted into appropriate treatment and care but into a market that thrives on surveillance, data capitalism, and the exploitative targeting of vulnerable people in society. This market has gone unregulated for far too long, and digital health companies – including platforms and app companies – have not been held accountable for their opportunistic exploitation of global public health concerns, including mental health. Users, too, need to become far more alert in questioning the trustworthiness of these apps before surrendering their data.

Much stricter regulation is needed to ensure that users’ privacy and autonomy are adequately protected from the clever marketing strategies and business model tactics that these data-hungry tech companies deploy. Rules and regulations are also needed to ensure that these companies provide fair value in return. Fair value could mean evidence-based care, full liability, or the sharing of their precious data mines with public bodies for research. What is clear is that public health and mental healthcare are subject to unprecedented digitalisation and datafication, and it is up to society to take power away from commercial app providers and decide what the future of mental health treatment and care should look like. After all, mental health issues can affect anyone at any time, and since all humans have a right to good mental health, it is time to call out those who undermine it, take decisive action, and re-establish a focus on public value.


Disclaimer: The opinions – including possible policy recommendations – expressed in this article are those of the author and do not necessarily represent the views or opinions of EPHA. The mere appearance of an article on the EPHA website does not imply endorsement by EPHA.
