
June 2, 2021 | Opinion

Get the balance right: AI-based decision-making: the prospects and risks for health

Following the 2020 White Paper on Artificial Intelligence (AI) and three years of policymaking, the European Commission’s AI package was released at the end of April. It includes a Communication on fostering a European approach to AI, a proposal for a Regulation laying down harmonised rules (the so-called ‘AI Act’) and a revised coordinated plan on AI. The package is ambitious and comprehensive in scope, with a view to enabling an ‘accelerated development and deployment of advanced and trustworthy AI in Europe’ and reaping benefits in high impact sectors including robotics; mobility; climate and environment; law enforcement, migration and asylum; agriculture; the public sector and, crucially, healthcare. This firmly demonstrates the future value attached to making effective use of large data sets in almost all spheres of life.

Health is thus viewed as a key domain for AI development, and the Communication refers to the ways in which AI is supporting the fight against COVID-19, e.g. by predicting the spread of the disease, supporting diagnosis and vaccine/drug development. It also highlights that assistive AI solutions contribute to improving the autonomy of persons with disabilities and older persons. Achieving EU Green Deal objectives, inter alia through smart mobility, sustainable choices (e.g. renewable energy), precision farming and other uses will be another crucial objective of future AI deployment.

Notably, the Commission recognises the importance of ‘leveraging the opportunity provided by AI while addressing the associated risks’. In this regard, the AI Act is particularly important as it seeks to ‘enhance transparency and minimise risks to safety and fundamental rights’ prior to the introduction of AI systems in the EU, focusing on ‘high-risk’ AI cases. AI systems identified as potentially falling into the high-risk use category are listed in Annex 3 (in particular emphasising certain uses in employment, education, law enforcement, access to credit and public services), and harmonised rules pertaining to their design, development and use are proposed for them. There will also be strict restrictions regarding the use of biometric identification, in stark contrast to other parts of the world. In fact, an important objective is to set high standards for Europe, which could potentially be adopted globally.

It is encouraging that a lot of thought has been given to the definition of high-risk AI systems and the possible detrimental impacts of outcomes produced by them. The Communication states that ‘Certain characteristics of AI, such as the opacity of many algorithms that makes investigating causal relationships difficult, pose specific and potentially high risks to the safety and fundamental rights (…) As a consequence, it may become difficult to assess and prove whether someone has been unfairly disadvantaged (…)’.

The approach taken so far recognises the need for regulating specific cases while promoting a ‘proportionate and risk-based’ overall approach that does not stifle the innovation economy. But while certain health uses might presumably fall under Annex 3, the diversity of healthcare environments, services and providers makes it difficult to grasp the potential consequences of the different risk regimes. Likewise, the AI package does not address liability issues, which are to be covered by a follow-up initiative. The current public consultation on the development of AI within the European Health Data Space provides an opportunity to voice concrete concerns pertaining to the relationship between AI developers and healthcare professionals, and between the AI system, professionals and patients, and to flag ethical and patient-relevant issues.

From a public health perspective, caution is warranted in seeking to establish the right balance between benefiting from new insights gleaned from vast amounts of data and considering other factors that may not be readily captured by AI systems (e.g. the role of socio-economic determinants in shaping health status and limiting life choices). In practice, the difference between risk categories may not be straightforward. Increased reliance on machine-generated decision-making also carries risks because the people affected by its outcomes are real; individuals at the margins of society could be further disadvantaged. If not carefully checked, the conclusions drawn by biased AI systems could become even more problematic than the (equally biased) decisions of gatekeepers in the analogue world, not least due to the many preconceptions and assumptions of their designers and programmers. This could eradicate ‘exceptions to the rule’ (always possible in human contact) and treat any divergence from the norm as problematic from the outset.

Linking various sets of personal data whose real meaning is only revealed via contextual information could pose problems in healthcare and other public health domains. For example, the right to individual, unbiased asylum case assessments has already been weakened significantly by the introduction of ‘safe’ country lists, EU databases tracking migrants’ movements and return agreements. Fully automating life-saving decisions about asylum could further reduce migrants’ ability to narrate their stories and, in turn, have negative consequences for physical and mental health. In healthcare, vulnerable individuals experiencing access issues are commonly also affected by other difficulties (lack of ID documents and insurance, homelessness, unemployment, criminal records, etc.). AI-powered evidence reconfirming such ‘undesirable profiles’ time and again could fuel stigma, stereotyping and exclusion, and restrict fundamental rights without offering recourse.

The power of AI undeniably holds immense potential for making faster, more effective and more accurate decisions in the sectors outlined by the Commission, as well as in public health (e.g. pandemic surveillance and management). Clearly, it would be a wasted opportunity not to make use of technological progress – which is, ultimately, the result of human intelligence. Adopting a holistic public health perspective alerts us to the importance of the notion of human oversight, also featured in the AI package. In the end, AI can only serve its purpose of ‘working for people and being a force for good in society’ if data-powered decisions are able to mirror the messy nature of human reality – which is rarely as neatly structured as data.
