
By Louise Lamatsch, Policy Assistant, EPHA

Artificial Intelligence (AI) and other emerging digital technologies have the potential to build up public health capacity and deliver equitable solutions by improving screening, diagnosis, and treatment across many medical disciplines. In addition, AI can generate productivity gains and improve operational efficiency by providing more precise and complete information, better workflow management and more refined continuity of care. However, these technologies are still in their infancy. Their increased use within society and the healthcare sector requires sufficient safeguards and guidelines to minimise the risk of harm these technologies may cause to individuals.

Future research will need to focus even more on the vulnerability and liability gaps in AI, as well as on the adjustments that need to be made to existing EU legislation, such as the Product Liability Directive. As regards existing liability regimes for digital technologies, the tort law of EU Member States is largely non-harmonised, with the exceptions of product liability under Directive 85/374/EEC, liability for infringing data protection law under Article 82 of the General Data Protection Regulation (GDPR), and liability for infringing competition law under Directive 2014/104/EU. At national level, the laws of the Member States do not contain liability rules specifically applicable to damage resulting from the use of emerging digital technologies such as Artificial Intelligence. Adequate and complete liability regimes that keep pace with technological developments are crucially important for society, to ensure that damage or harm caused by emerging digital technologies does not leave victims totally or partially uncompensated.

As discussed at the recent workshop on a Civil Liability Regime for Artificial Intelligence in the European Parliament, organised by the S&D group and hosted by MEP Tiemo Wölken (Germany), the following key suggestions (among many others) could be included in future discussions and research on the adjustments that need to be made to existing EU liability regimes:

  • A human-centred approach should be adopted within the emerging use of digital technologies to deliver equitable solutions for European public health challenges.
  • A stronger focus should be put on the potential role of Artificial Intelligence in improving health and safety in the workplace. 
  • Laws adopted in relation to technological developments should focus on complexity, autonomy, and specificity.
  • Ensuring risk-based assessment and differentiating between high and low-risk applications (this still requires further clarification).

AI is a complex and difficult field, which requires better understanding, improved health literacy, and further research. Most importantly, existing liability laws do not necessarily have to be reinvented, but they will require modification and adjustment. The EU should, therefore, seek a balanced solution based on a harmonised and human-centred approach to AI, to ensure civil protection and a fair and safe environment. In the health sector, existing laws should be specifically adapted in the field of health and safety in the workplace. AI may affect not only employment and wages but also the way workers approach their work, and this could influence their well-being, including job satisfaction, stress, and health, in a variety of ways. AI is therefore associated not only with potential physical harm but also with mental harm. The integration of Big Data and AI technologies into health systems must be accompanied by appropriate legislation, rules and standards that protect the fundamental rights of individuals and address new ethical challenges. Emerging technologies are an area that still needs much discussion and research before we have a well-performing digital and AI-friendly European Union.