Understanding Algorithmic Decision-Making: Opportunities and Challenges
STUDY
Panel for the Future of Science and Technology
EPRS | European Parliamentary Research Service
Scientific Foresight Unit (STOA)
PE 624.261 – March 2019

While algorithms are hardly a recent invention, they are increasingly involved in systems used to support decision-making. These systems, known as algorithmic decision systems (ADS), often rely on the analysis of large amounts of personal data to infer correlations or, more generally, to derive information deemed useful for making decisions. The degree of human intervention in the decision-making may vary; humans may even be completely out of the loop in entirely automated systems. In many situations, the impact of the decision on people can be significant: access to credit, employment, medical treatment or judicial sentencing, among other things. Entrusting ADS to make or to influence such decisions raises a variety of ethical, political, legal and technical issues that must be analysed and addressed with great care. If they are neglected, the expected benefits of these systems may be negated by a variety of risks for individuals (discrimination, unfair practices, loss of autonomy, etc.), the economy (unfair practices, limited access to markets, etc.) and society as a whole (manipulation, threats to democracy, etc.). This study reviews the opportunities and risks related to the use of ADS. It presents policy options to reduce the risks and explains their limitations, and it sketches some ways to overcome these limitations so as to benefit from the tremendous possibilities of ADS while limiting the risks related to their use. Beyond providing an up-to-date and systematic review of the situation, the study gives precise definitions of a number of key terms and an analysis of their differences, to help clarify the debate. The main focus of the study is the technical aspects of ADS; however, to broaden the discussion, legal, ethical and social dimensions are also considered.

AUTHORS
This study has been written by Claude Castelluccia and Daniel Le Métayer (Institut national de recherche en informatique et en automatique - Inria) at the request of the Panel for the Future of Science and Technology (STOA) and managed by the Scientific Foresight Unit within the Directorate-General for Parliamentary Research Services (DG EPRS) of the Secretariat of the European Parliament.

ADMINISTRATOR RESPONSIBLE
Mihalis Kritikos, Scientific Foresight Unit (STOA)
To contact the publisher, please e-mail [email protected]

Acknowledgments
The authors would like to thank all who have helped in any way with this study, in particular Irene Maxwell and Clément Hénin for their careful reading of and useful comments on an earlier draft of this report.

LINGUISTIC VERSION
Original: EN
Manuscript completed in March 2019.

DISCLAIMER AND COPYRIGHT
This document is prepared for, and addressed to, the Members and staff of the European Parliament as background material to assist them in their parliamentary work. The content of the document is the sole responsibility of its author(s), and any opinions expressed herein should not be taken to represent an official position of the Parliament.
Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.

Brussels © European Union, 2019.
PE 624.261
ISBN: 978-92-846-3506-1
doi: 10.2861/536131
QA-06-18-337-EN-N
http://www.europarl.europa.eu/stoa (STOA website)
http://www.eprs.ep.parl.union.eu (intranet)
http://www.europarl.europa.eu/thinktank (internet)
http://epthinktank.eu (blog)

Executive Summary

Scope of the study: While algorithms are hardly a recent invention, they are increasingly involved in systems used to support decision-making. Known as algorithmic decision systems (ADS), these systems often rely on the analysis of large amounts of personal data to infer correlations or, more generally, to derive information deemed useful for making decisions. The degree of human intervention in the decision-making may vary; humans may even be completely out of the loop in entirely automated systems. In many situations, the impact of the decision on people can be significant: access to credit, employment, medical treatment or judicial sentencing, among other things. Entrusting ADS to make or to influence such decisions raises a variety of ethical, political, legal and technical issues that must be analysed and addressed with great care. If they are neglected, the expected benefits of these systems may be negated by a variety of risks for individuals (discrimination, unfair practices, loss of autonomy, etc.), the economy (unfair practices, limited access to markets, etc.) and society as a whole (manipulation, threats to democracy, etc.). This study reviews the opportunities and risks related to the use of ADS. It presents existing options to reduce these risks and explains their limitations, and it sketches some options to benefit from the tremendous possibilities of ADS while limiting the risks related to their use. Beyond providing an up-to-date and systematic review of the situation, the study gives precise definitions of a number of key terms and an analysis of their differences, to help clarify the debate. The main focus of the study is the technical aspects of ADS; however, to broaden the discussion, legal, ethical and social dimensions are also considered.

ADS opportunities and risks: The study discusses the benefits and risks related to the use of ADS for three categories of stakeholders: individuals, the private sector and the public sector. Risks may be intentional (e.g. designed to optimise the interests of the operator of the ADS), accidental (side-effects of the purpose of the ADS, with no such intent from the designer), or the consequence of ADS errors or inaccuracies (e.g. people wrongly included in blacklists or 'no fly' lists because of homonyms or inaccurate inferences).

Opportunities and risks of ADS for individuals: ADS may undermine the fundamental principles of equality, privacy, dignity, autonomy and free will, and may also pose risks to health, quality of life and physical integrity. That ADS can lead to discrimination has been extensively documented in many areas, such as the judicial system, credit scoring, targeted advertising and employment. Discrimination may result from different types of bias arising from the training data, from technical constraints, or from societal or individual biases.
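One common way to make such bias measurable is a demographic parity audit: comparing the rate of favourable decisions an ADS produces across groups. The following minimal sketch illustrates this; the records, group labels, decision semantics and choice of metric are illustrative assumptions for exposition, not data or methods from the study.

```python
# Minimal sketch of a bias audit for an ADS: compare positive-decision
# rates across groups (demographic parity). All data here is illustrative.
from collections import defaultdict

# Hypothetical (group, decision) records from an ADS, e.g. credit approvals,
# where 1 means a favourable decision and 0 an unfavourable one.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
for group, decision in records:
    counts[group][0] += decision
    counts[group][1] += 1

rates = {g: fav / total for g, (fav, total) in counts.items()}
disparity = max(rates.values()) - min(rates.values())

for g, r in sorted(rates.items()):
    print(f"{g}: favourable-decision rate = {r:.2f}")
print(f"demographic parity difference = {disparity:.2f}")
```

A large gap between group rates does not by itself prove unlawful discrimination, but such audits are a standard first step for detecting the biases described above, whether they originate in the training data or elsewhere.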
However, the risk of discrimination related to the use of ADS should be compared with the risk of discrimination without the use of ADS. Humans have their own sources of bias that can affect their decisions, and in some cases these could be detected or avoided using ADS.

The deployment of ADS may also threaten privacy and data protection in many different ways. The first is related to the massive collection of personal data required to train the algorithms. Even when no external attack has been carried out, the mere suspicion that one's personal data is being collected and possibly analysed can have a detrimental impact on people. For example, several studies have provided evidence of the chilling effect resulting from fear of online surveillance. Altogether, large-scale surveillance and scoring could narrow the range of possibilities and choices available to individuals, affecting their capacity for self-development and fulfilment. Scoring also raises the fear that humans are increasingly treated as numbers and reduced to their digital profiles. Reducing the complexity of a human personality to a number can be seen as a form of alienation and an offence to human dignity.

The opacity, or lack of transparency, of ADS is another primary source of risk for individuals. It opens the door to all kinds of manipulation and makes it difficult to challenge a decision based on the result of an ADS. This contradicts the rights of the defence and the principle of adversarial proceedings, under which parties in most legal systems have the right to examine all evidence and observations. The use of ADS in court also raises far-reaching questions about reliance on predictive scores to make legal decisions, in particular for sentencing. (A short sketch at the end of this summary illustrates, by contrast, what a transparent scoring rule can make visible to the person affected.)

Opportunities and risks for the public sector: ADS are currently being used by states and public agencies to provide new services or to improve existing ones in areas such as energy, education, healthcare, transportation, the judicial system and security. Examples of ADS applications in this context include predictive policing, smart metering, video protection and school enrolment. ADS can also contribute to improving the quality of healthcare, education and job-skill training. They are increasingly used in cyber-defence, to protect infrastructures and to support soldiers on the battlefield. ADS, or smart technologies in general, such as mobility management tools or water and energy management systems, can improve the efficiency of city management. They can also help make administrative decisions more efficient, transparent and accountable, provided however that they are themselves transparent and accountable. ADS also create new 'security vulnerabilities' that can be exploited by people with malicious intent.
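As promised above, the following sketch makes the transparency point concrete by contrast with opaque systems. It shows a deliberately simple, interpretable scoring rule in which every factor's contribution to the decision is explicit, so the person affected could inspect and contest it. The feature names, weights and threshold are purely illustrative assumptions, not drawn from the study or from any real ADS.

```python
# Minimal sketch of a transparent scoring rule: each factor's contribution
# to the decision is explicit, so the subject can inspect and challenge it.
# Feature names, weights and threshold are illustrative, not from any real ADS.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
THRESHOLD = 0.5

def score_and_explain(applicant: dict) -> None:
    # Compute each factor's signed contribution to the overall score.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "reject"
    print(f"decision: {decision} (score {total:.2f}, threshold {THRESHOLD})")
    # List factors from most to least influential, so the basis of the
    # decision is visible and contestable.
    for factor, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {factor}: {value:+.2f}")

score_and_explain({"income": 1.2, "debt_ratio": 0.9, "years_employed": 1.5})
```

Real ADS are usually far more complex than this linear rule, and the study's point stands precisely because such complexity tends to erode this kind of inspectability; the sketch merely shows the baseline of explanation that opacity takes away.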