Artificial Intelligence and Civil Liability
STUDY
Requested by the JURI committee

Artificial Intelligence and Civil Liability

Policy Department for Citizens' Rights and Constitutional Affairs
Directorate-General for Internal Policies
PE 621.926 - July 2020

Abstract
This study – commissioned by Policy Department C at the request of the Committee on Legal Affairs – analyses the notion of AI technologies and the applicable legal framework for civil liability. It demonstrates how technology regulation should be technology-specific, and presents a Risk-Management Approach, under which the party best capable of controlling and managing a technology-related risk is held strictly liable, as a single entry point for litigation. It then applies this approach to four case studies in order to elaborate recommendations.

This document was requested by the European Parliament's Committee on Legal Affairs.

AUTHOR
Andrea BERTOLINI, Ph.D., LL.M. (Yale)
Assistant Professor of Private Law, Scuola Superiore Sant'Anna (Pisa)
Director of the Jean Monnet European Centre of Excellence on the Regulation of Robotics and AI (EURA)
www.eura.santannapisa.it
[email protected]

ADMINISTRATOR RESPONSIBLE
Giorgio MUSSA

EDITORIAL ASSISTANT
Sandrina MARCUZZO

LINGUISTIC VERSIONS
Original: EN

ABOUT THE EDITOR
Policy departments provide in-house and external expertise to support EP committees and other parliamentary bodies in shaping legislation and exercising democratic scrutiny over EU internal policies.
To contact the Policy Department or to subscribe for updates, please write to:
Policy Department for Citizens' Rights and Constitutional Affairs
European Parliament
B-1047 Brussels
Email: [email protected]

Manuscript completed in July 2020
© European Union, 2020

This document is available on the internet at: http://www.europarl.europa.eu/supporting-analyses

DISCLAIMER AND COPYRIGHT
The opinions expressed in this document are the sole responsibility of the authors and do not necessarily represent the official position of the European Parliament. Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.

CONTENTS

1.1. Seeking a definition of Artificial Intelligence
1.2. Non-technical definitions of AI
1.3. AI in the technical literature
1.3.1. AI for AI researchers
1.3.2. AI as a branch of computer science
1.4. Notion of AI for policy-making/regulatory purposes
1.5. Discussions and conclusions
2.1. The two possible interpretations of the notion of «electronic personhood»
2.1.1. Electronic personhood as the acknowledgment of individual rights of the artificial agent: radical inadmissibility
2.1.2. Electronic personhood as the equivalent of legal personhood
2.2. The functional dimension of the notion of legal person in modern legal systems: considerations derived from corporate law
2.3. A possible functional approach to the personhood of AI applications: the need for a Class-of-Applications-by-Class-of-Applications (CbC) approach
3.1. Ensuring product safety: product safety regulation
3.2. The relationship with product liability
3.3. The Product Liability Directive: an overview
3.4. The Product Liability Directive: an assessment
3.4.1. Notion of product and distinction with services: the issue of software
3.4.2. Notion of defect
3.4.3. Development risk defence
3.4.4. Causal nexus
3.4.5. Recoverable damages
3.5. Towards a reform of the Product Liability Directive
3.6. Ensuring a high standard of harmonization
4.1. Defining a European approach through three ideas
4.2. Going beyond MS' civil liability regulation: seeking uniformity
4.3. Going beyond the Product Liability Directive: applying technology neutrality correctly
4.4. Seeking legal innovation: advancing users' protection beyond the «functional equivalent» argument
4.5. An overview of the alternative approaches proposed at European level
4.6. The Expert Group's report
4.6.1. Some critical observations
4.6.2. The lack of a definition of advanced technologies
4.6.3. The distinction between low- and high-risk
4.6.4. Greater reliance on evidentiary rules over substantial ones, and on MS' legislation, leading to fragmentation
4.6.5. Logging by design
4.6.6. Safety rules
4.6.7. The relationship between producer's and operator's liability
4.6.8. The legal personality of the machine
4.6.9. Some overall considerations
4.7. Adopting a general liability rule for civil liability arising from the use of AI-based systems: critical remarks
4.7.1. The problems in elaborating a uniform definition of «AI-based applications», and its effect on legal certainty
4.7.2. Classifying applications, and the distinction between low- and high-risk
4.7.3. Avoiding victims' (under)compensation
4.7.4. Identifying a single entry point of litigation
4.7.5. The need for a narrow-tailored definition of the responsible party
4.7.6. Compensable damages
4.7.7. Final considerations
5.1. A Risk-Management Approach to civil liability
5.2. A Risk-Management Approach: theoretical considerations
5.3. A Risk-Management Approach: methodological considerations
5.4. Industrial Robots
5.4.1. Definition and description of the relevant features
5.4.2. Existing legal framework
5.4.3. Assessment and recommendations
5.5. Connected and automated driving
5.5.1. Definition and description of relevant features
5.5.2. Existing legal framework
5.5.3. Assessment and recommendations
5.6. Medical robots and diagnostic-assistive technologies in medical care
5.6.1. Definition and relevant features
5.6.2. Existing legal framework
5.6.3. Assessment and recommendations
5.7. Drones
5.7.1. Existing legal framework
5.7.2. Assessment and recommendations

LIST OF ABBREVIATIONS

AI  Artificial Intelligence
AI HLEG  High-Level Expert Group on Artificial Intelligence
AD  Automated Driving
ADS  Automated Driving Solutions
Art.  Article
BGB  German Civil Code
CAD  Connected and Automated Driving
CbC  Class-of-applications-by-Class-of-applications
Ch.  Chapter
CLRR  2017 European Parliament Resolution on Civil Law Rules on Robotics
CSGD  Consumer Sales and Guarantees Directive
DCIR  Draft Commission Implementing Regulation
EC  European Commission
EG  Expert Group
EU  European Union
GPSD  General Product Safety Directive
G&BP  Guidelines and Best Practices
hEN  Harmonized Standards
HFT  High-Frequency Trading
IR  Industrial Robots
ISO  International Organization for Standardization
LLC  Limited Liability Company
MID  Motor Insurance Directive
MS  Member States
MTOM  Maximum Take-Off Mass
OECD  Organisation for Economic Co-operation and Development
PLD  Product Liability Directive
RCA  Regulation (EU) 2018/1139
RMA  Risk-Management Approach
SAE  Society of Automotive Engineers
UA  Unmanned Aircraft
UK  United Kingdom
US  United States
WFD  Framework Directive 89/391/EEC

LIST OF TABLES

Table 1 - Non-technical definitions of AI
Table 2 - Policy-making definitions of AI

EXECUTIVE SUMMARY

The need for technology-specific regulation of different AI-based solutions

Regulating artificial intelligence requires defining it. Yet there is no consensus about what is to be understood by «AI». The layman's understanding of AI as machines and software with human-like capabilities and intelligence is far from accurate, and does not capture the reality of emerging technology. Indeed, only a small portion of AI research pursues that objective («general AI»), and is decades away from achieving it, while the vast majority of research aims at developing specific solutions, with well-defined functions, to be operated in given settings («light AI»).
Indeed, when used for a different purpose or in a different setting, the same algorithm or application might radically change its nature as well as its social relevance, and thus give rise to different regulatory needs. Facial recognition used to unlock a phone is not as problematic as facial recognition applied to mass surveillance. Since AI is a heterogeneous phenomenon, its regulation cannot be single and unitary, not even with respect to liability rules. Any attempt to regulate AI needs to be technology-specific. This is best understood by considering how pervasive AI is. Already today, and even more so in the future, it will be used in the most diverse fields of application, ranging from – but not limited to – consultancy (in the financial, legal and medical sectors), consumer products and services, mobility, online connectivity (including through platforms), energy production and distribution (e.g. smart grids), and care of frail individuals (the elderly, children, people with disabilities), to policing and the administration of justice. All those fields are separately regulated both by the Member States and – where relevant – by the European Union, most typically also with respect to liability rules. Indeed, medical malpractice, professional liability, intermediaries' liability, liability for things in custody, for high-risk activities, for the acts of children, for the circulation of vehicles, and for nuclear energy production are all addressed separately. It is not self-evident why the advent of AI should change such a consolidated approach, particularly considering that there appears to be no clear unifying trait among said AI-based applications.