Following the release of the European Commission’s digital package in February 2020, the digital transformation of health and care received an unanticipated yet powerful push as a result of the COVID-19 crisis. For example, whereas many healthcare providers and patients previously shied away from online consultations, the physical distancing measures dictated by the Coronavirus have made tele-medicine and tele-health much more commonplace across Europe. In many instances, they became useful complements enabling continuity of care and access to basic healthcare services at a time when many services were closed or understaffed, and many patients were too afraid to leave the house.
The “lessons learnt” from the COVID-19 pandemic include the important message that European health systems did not have access to sufficiently large, comparable and high-quality data sets, which, among other factors, prevented a common response. The Commission’s White Paper on AI, drafted before the crisis, addresses some of the ethical questions Europe is facing in terms of balancing the potential of digitalisation in many areas against the need to draw limits and ensure that power is retained by people rather than algorithms.
Another lesson is the increased understanding that contact-tracing apps involve both technical and ethical complexities which cannot be solved overnight. Their introduction has been fraught with delays and problems in many countries, including Norway, the Netherlands, the UK and Germany. The commendable idea behind these apps is that they alert people who have come into contact with carriers of the Coronavirus. They commonly rely on wireless communication technology (e.g. Bluetooth) to capture information about the people a user has been in close contact with, which in turn makes it easy to notify them should any virus symptoms develop. Crucially, this is not done by accessing address books or phone numbers, but via unique identification numbers for app subscribers, to protect their privacy.
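The privacy-preserving matching idea described above can be sketched in a simplified toy model. This is purely illustrative: the class and method names are invented for this sketch, and real apps (such as those built on the Apple/Google exposure notification framework) use rotating cryptographic identifiers exchanged over Bluetooth rather than an in-memory simulation.

```python
import secrets

class Phone:
    """Toy model of a contact-tracing app on one device (illustrative only)."""

    def __init__(self):
        # A random identifier is broadcast instead of a phone number or name.
        self.my_id = secrets.token_hex(16)
        self.heard_ids = set()  # anonymous IDs overheard from nearby phones

    def encounter(self, other: "Phone") -> None:
        # During a close contact, both devices record each other's
        # anonymous identifier -- no address books or phone numbers involved.
        self.heard_ids.add(other.my_id)
        other.heard_ids.add(self.my_id)

    def check_exposure(self, published_positive_ids: set) -> bool:
        # Matching happens locally on the device: the central server only
        # publishes the IDs of confirmed carriers and never learns who met whom.
        return bool(self.heard_ids & published_positive_ids)

# Usage: three phones; Alice and Bob meet, Carol meets no one.
alice, bob, carol = Phone(), Phone(), Phone()
alice.encounter(bob)

# Alice later tests positive and consents to publishing her identifier.
published = {alice.my_id}
print(bob.check_exposure(published))    # Bob is alerted
print(carol.check_exposure(published))  # Carol is not
```

The key design choice mirrored here is decentralised matching: the list of positive identifiers is downloaded to every phone and compared against locally stored contacts, so no central database of social interactions ever exists.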
Whether we see them as indispensable tools in times of crisis or as the latest, potentially intrusive “Big Brother” type technology, track and trace apps demonstrate the growing tension between the public good and individuality in technologically advanced societies. They also confirm that reliance on multinational IT companies in healthcare is becoming more pervasive. For example, the UK government failed to launch its own, independent app due to a number of system issues, which forced it to enter into partnership with global tech giants Apple and Google, even if, ironically, the latter were deemed to offer better privacy protection. Although certain UK-specific features will be integrated into the NHS app, this episode shows how the infrastructures designed by technology firms largely dictate what is and is not possible for an app that aims to succeed.
Germany’s app, on the other hand, appears to have received a “thumbs up” from a number of consumer and data privacy organisations thus far: within days, it attracted 13 million downloads. However, the app cost a whopping 20 million EUR to develop, whereas in some other countries (Italy, Iceland) voluntary initiatives are guiding app development. As regional lockdowns are currently occurring in Germany owing to infections at food production sites, it remains to be seen whether the app will generate any significant public health returns on investment.
A European Commission Communication released in April offers a blueprint governments can follow: it lists recommended elements for a trustful and accountable use of such apps, including identifying who the data controller is, ensuring the individual remains in control, clarifying the legal base for installing the app and storing information on the user’s device, ensuring data minimisation and security, and limiting the purpose of the app.
The White Paper consultation did not specifically focus on healthcare, and it included a number of questions that were difficult to answer in the absence of a comprehensive and multi-faceted stakeholder dialogue on AI and the creation of an ecosystem of trust. Nevertheless, it specifically notes that the European approach to AI “supports the development and uptake of ethical and trustworthy AI across the EU”, with an explicit intention for it to be a “force for good in society”.

EPHA’s response to the White Paper consultation describes healthcare as a potentially high-risk AI application area, in which many issues (ranging from lack of human oversight to unclear liability and safety rules and poor-quality training data sets) could occur. EPHA therefore feels it is very important for technology providers, governments and specialised agencies to engage in a more transparent and inclusive dialogue with civil society and ordinary people about best- and worst-case scenarios, and the more realistic middle ground that will likely be established over the coming years in relation to a more effective use of data in healthcare. People deserve to better understand what data-driven technologies can offer in terms of improving services and generating better health outcomes for all, but they also deserve to know what could go wrong and why. It is never enough to state that “systems are secure”, given that even the most secure systems are prone to malicious cybersecurity threats. Just as Europe needs to be better prepared for future cross-border health threats, we need to be prepared for powerful and paralysing digital threats that can affect us all.