Artificial Intelligence and Law Enforcement
STUDY
Requested by the LIBE committee

Artificial Intelligence and Law Enforcement
Impact on Fundamental Rights

Policy Department for Citizens' Rights and Constitutional Affairs
Directorate-General for Internal Policies
PE 656.295, July 2020
EN

Abstract
This study, requested by the LIBE Committee, examines the impact of Artificial Intelligence on fundamental rights in the field of law enforcement and criminal justice, from a European Union perspective. It presents the applicable legal framework (notably in relation to data protection), and analyses major trends and key policy discussions. The study also considers developments following the Covid-19 outbreak. It argues that the seriousness and scale of the challenges may require intervention at EU level.

This document was requested by the European Parliament's Committee on Civil Liberties, Justice and Home Affairs.

AUTHOR
Prof. Dr. Gloria GONZÁLEZ FUSTER, Vrije Universiteit Brussel (VUB)

ADMINISTRATOR RESPONSIBLE
Alessandro DAVOLI

EDITORIAL ASSISTANT
Ginka TSONEVA

LINGUISTIC VERSIONS
Original: EN

ABOUT THE EDITOR
Policy departments provide in-house and external expertise to support EP committees and other parliamentary bodies in shaping legislation and exercising democratic scrutiny over EU internal policies. To contact the Policy Department or to subscribe for updates, please write to: European Parliament, B-1047 Brussels. Email: [email protected]

Manuscript completed in July 2020
© European Union, 2020
This document is available on the internet at: http://www.europarl.europa.eu/supporting-analyses

DISCLAIMER AND COPYRIGHT
The opinions expressed in this document are the sole responsibility of the authors and do not necessarily represent the official position of the European Parliament. Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.

CONTENTS
LIST OF ABBREVIATIONS
LIST OF BOXES
LIST OF FIGURES
EXECUTIVE SUMMARY
1. GENERAL INFORMATION
1.1. Objectives and structure
1.2. Methodology
2. EU LEGAL FRAMEWORK
2.1. Primary law
2.2. Secondary law
2.2.1. Rules on the processing of personal data
2.2.2. Rules on the processing of non-personal data
3. AI IN LAW ENFORCEMENT AND CRIMINAL JUSTICE
3.1.
3.2. Facial recognition
3.3. AI and criminal justice
3.4. AI and borders
3.4.1. General trends
3.4.2. PNR
3.4.3. ETIAS
4. IMPACT ON FUNDAMENTAL RIGHTS
4.1. Privacy and data protection
4.2. Freedom of expression and information, and freedom of assembly and association
4.3. Non-discrimination
4.4. Right to an effective remedy and to a fair trial
4.5. Rights of the child
5. POLICY DEVELOPMENTS
5.1. Introduction
5.2. European Commission
5.3. European Parliament
5.4. The role of ethics
5.5. EU-funded support of AI
6. COVID-19 OUTBREAK RESPONSE
6.1. Data-driven responses
6.2. Law enforcement initiatives
7. POLICY RECOMMENDATIONS
7.1. Assess fully the relevance and gaps of the existing legal framework
7.2. Adapt initiatives to the specificities of the field
7.3. Consider the need for special rules for certain practices
7.4. Clarify the role of ethics
7.5. Reinforce trust in EU-funded AI efforts
7.6. Screen developments related to the Covid-19 outbreak
REFERENCES
ANNEX: A COMPARISON OF THE GDPR AND THE LED

LIST OF ABBREVIATIONS
ACLU  American Civil Liberties Union
ADM  Automated Decision Making
AI  Artificial Intelligence
CJEU  Court of Justice of the European Union
DAPIX  Working Party on Information Exchange and Data Protection
DPA  Data Protection Authority
EASO  European Asylum Support Office
EC  European Commission
ECHR  European Convention on Human Rights
ECRIS-TCN  European Criminal Records Information System for Third-Country Nationals
ECtHR  European Court of Human Rights
EDPB  European Data Protection Board
EDPS  European Data Protection Supervisor
EDRi  European Digital Rights Initiative
EES  Entry/Exit System
EIO  European Investigation Order
EPIC  Electronic Privacy Information Center
EPPO  European Public Prosecutor's Office
EU  European Union
FAT  Fairness, Accountability and Transparency
FIU  Financial Intelligence Unit
GDPR  General Data Protection Regulation
GSMA  Global System for Mobile Communications Association
GVM  Gangs Violence Matrix
ICO  Information Commissioner's Office
IT  Information Technology
JRC  Joint Research Centre
LED  Law Enforcement Directive
LFR  Live Facial Recognition
OECD  Organisation for Economic Co-operation and Development
PNR  Passenger Name Record
RBI  Remote Biometric Identification
SIS  Schengen Information System
UNICRI  United Nations Interregional Crime and Justice Research Institute
VIS  Visa Information System

LIST OF BOXES
Example 1: Real-time surveillance of public space
Example 2: Autonomous robots at the borders
Example 3: Pre-occurring crime prediction and prevention
Example 4: Intelligent dialogue-empowered bots for early terrorist detection

LIST OF FIGURES
Figure 1: EDPB work on police and justice
Figure 2: AI HLEG Framework for Trustworthy AI

EXECUTIVE SUMMARY

Background
Artificial Intelligence (AI) is high on the agenda of the European Union (EU). Discussions on its possible regulation have been particularly prominent since Ursula von der Leyen announced, already before her nomination as President of the European Commission (EC), a strong will to situate AI among the Union's top priorities. The endorsement of AI as an EU-level policy priority has been accompanied by a reflection on how to promote trust in AI technologies, and how to guarantee that AI systems do not compromise EU fundamental rights during their development or deployment. This reflection is not completely novel, but is entrenched in prior legal and policy debates on fundamental rights and, more generally, on the regulation of the processing of both personal and non-personal data.

Aim
This study aims to analyse the impact on EU fundamental rights of AI in the field of law enforcement and criminal justice. It approaches the subject from an EU law and policy perspective. It first succinctly presents the main features of the applicable legal framework. Second, it introduces recent and ongoing developments in the use of AI in law enforcement and criminal justice, as well as at the borders. It then discusses the impact of these trends on fundamental rights, and reviews relevant policy discussions around AI regulation. After this exploration, the study puts forward some concrete recommendations.

Findings
The field of law enforcement and criminal justice is particularly sensitive, as it touches upon core issues of the relation between the individual and the State.
There is a broad consensus that reliance on AI systems in this area can substantially affect EU fundamental rights and, more generally, democracy itself. The issue thus deserves special consideration in the context of any discussion on the future of trustworthy AI.

This study:

• Shows that the advent of AI in the field of law enforcement and criminal justice is already a reality, as AI systems are increasingly being adopted or considered. Such systems might take many different shapes, and occur at the crossroads of different sustained trends towards an increased use of data, algorithms and computational power; these developments are connected to pre-existing trends, some of which had previously been broadly framed under other labels;

• Documents that such developments, notably in relation to AI and criminal justice and to AI and borders (including a reflection on the European Travel Information and Authorisation System, ETIAS), have already generated significant controversies, for instance in litigation and in calls from civil society to better prevent or mitigate the associated risks, both in the EU and beyond;

• Points out that discussions around AI regulation in the EU have been deeply embedded in the EU Digital Single Market agenda, and that, generally speaking, although they might refer to the need to consider law enforcement and criminal justice specificities, such policy discussions are most often not based on a detailed review of, and do not take into account, the concrete applicable rules (and, especially, the applicable restrictions and derogations);

• Warns that such policy discussions suffer from a certain vagueness in relation to the safeguarding of fundamental rights, generating not only a lack of conceptual precision but, most importantly, uncertainty as to whether effective protection of fundamental rights will be delivered;

• Emphasises that the current EU data protection legal framework should not be assumed to offer sufficiently solid safeguards for individuals in light of the increased use of automated decision-making and profiling for law enforcement and criminal justice purposes, as:
  o the general safeguards provided by the General Data Protection Regulation (GDPR) do not necessarily apply when the processing is carried out for such purposes, as restrictions and derogations might be applicable;
  o the Law Enforcement Directive (LED), which might be the relevant applicable instrument, provides for safeguards that are similar to those of the GDPR, but nevertheless not exactly equivalent, and equally provides for possible restrictions and derogations;

• Reviews a variety of fundamental rights considerations