The Ethics of AI Ethics: An Evaluation of Guidelines

Total pages: 16. File type: PDF, size: 1020 KB.

Dr. Thilo Hagendorff
University of Tuebingen
International Center for Ethics in the Sciences and Humanities
[email protected]

Abstract - Current advances in research, development and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a comprehensive evaluation, this paper analyzes and compares these guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, I also examine to what extent the respective ethical principles and values are implemented in the practice of research, development and application of AI systems – and how the effectiveness in the demands of AI ethics can be improved.

Keywords - artificial intelligence, machine learning, ethics, guidelines, implementation

1 Introduction

The current AI boom is accompanied by constant calls for applied ethics, which are meant to harness the “disruptive” potentials of new AI technologies. As a result, a whole body of ethical guidelines has been developed in recent years collecting principles, which technology developers should adhere to as far as possible. However, the critical question arises: Do those ethical guidelines have an actual impact on human decision-making in the field of AI and machine learning? The short answer is: No, most often not. This paper analyzes 21 of the major AI ethics guidelines and issues recommendations on how to overcome the relative ineffectiveness of these guidelines.

AI ethics – or ethics in general – lacks mechanisms to reinforce its own normative claims. Of course, the enforcement of ethical principles may involve reputational losses in the case of misconduct, or restrictions on memberships in certain professional bodies. Yet altogether, these mechanisms are rather weak and pose no eminent threat. Researchers, politicians, consultants, managers and activists have to deal with this essential weakness of ethics. However, it is also a reason why ethics is so appealing to many AI companies and institutions. When companies or research institutes formulate their own ethical guidelines, regularly incorporate ethical considerations into their public relations work, or adopt ethically motivated “self-commitments”, efforts to create a truly binding legal framework are continuously discouraged. Ethics guidelines of the AI industry serve to suggest to legislators that internal self-governance in science and industry is sufficient, and that no specific laws are necessary to mitigate possible technological risks and to eliminate scenarios of abuse (Calo 2017). And even when more concrete laws concerning AI systems are demanded, as recently done by Google (Google 2019), these demands remain relatively vague and superficial.

Science- or industry-led ethics guidelines, as well as other concepts of self-governance, may serve to pretend that accountability can be devolved from state authorities and democratic institutions upon the respective sectors of science or industry. Moreover, ethics can also simply serve the purpose of calming critical voices from the public, while simultaneously the criticized practices are maintained within the organization. The association “Partnership on AI” (2018), which brings together companies such as Amazon, Apple, Baidu, Facebook, Google, IBM and Intel, is exemplary in this context. Companies can highlight their membership in such associations whenever the notion of serious commitment to legal regulation of business activities needs to be stifled.

This prompts the question as to what extent ethical objectives are actually implemented and embedded in the development and application of AI, or whether merely good intentions are deployed. So far, some papers have been published on the subject of teaching ethics to data scientists (Garzcarek and Steuer 2019; Burton et al. 2017; Goldsmith and Burton 2017), but by and large very little to nothing has been written about the tangible implementation of ethical goals and values. In this paper, I address this question from a theoretical perspective. In a first step, 21 of the major guidelines of AI ethics will be analyzed and compared. I will also describe which issues they omit to mention. In a second step, I compare the principles formulated in the guidelines with the concrete practice of research and development of AI systems. In particular, I critically examine to what extent the principles have an effect. In a third and final step, I will work out ideas on how AI ethics can be transformed from a merely discursive phenomenon into concrete directions for action.

2 Guidelines in AI ethics

2.1 Method

Research in the field of AI ethics ranges from reflections on how ethical principles can be implemented in decision routines of autonomous machines (Anderson, M. and Anderson, S. Leigh 2015; Etzioni, A. and Etzioni, O. 2017; Yu, H. et al. 2018) over meta-studies about AI ethics (Vakkuri and Abrahamsson 2018; Prates, Avelar, and Lamb, Luis, C. 2018; Boddington 2017; Greene, Hoffman, and Stark 2019; Goldsmith and Burton 2017) or the empirical analysis on how trolley problems are solved (Awad et al. 2018) to reflections on specific problems (Eckersley ...).
Overview of the 21 AI ethics guidelines analyzed and, for each ethical issue, the number of guidelines that mention it.

Guidelines analyzed:
- The European Commission's High-Level Expert Group on Artificial Intelligence (Pekka et al. 2018)
- Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (Version for Public Discussion) (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2016)
- Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (First Edition) (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019)
- Montréal Declaration for Responsible Development of Artificial Intelligence (Abrassart et al. 2018)
- OECD Recommendation of the Council on Artificial Intelligence (Organisation for Economic Co-operation and Development 2019)
- Principles for Accountable Algorithms and a Social Impact Statement for Algorithms (Diakopoulos et al.)
- Report on the Future of Artificial Intelligence (Holdren et al. 2016)
- The Malicious Use of Artificial Intelligence (Brundage et al. 2018)
- Everyday Ethics for Artificial Intelligence (Cutler et al. 2018)
- DeepMind Ethics & Society Principles (DeepMind)
- Artificial Intelligence at Google (Google 2018)
- The Asilomar AI Principles (Future of Life Institute 2017)
- Microsoft AI principles (Microsoft Corporation 2019)
- ITI AI Policy Principles (Information Technology Industry Council 2017)
- AI Now 2016 Report (Crawford et al. 2016)
- AI Now 2017 Report (Campolo et al. 2017)
- AI Now 2018 Report (Whittaker et al. 2018)
- Beijing AI Principles (Beijing Academy of Artificial Intelligence 2019)
- Partnership on AI (Partnership on AI 2018)
- OpenAI Charter (OpenAI 2018)
- AI4People (Floridi et al. 2018)

Ethical issue and number of the 21 guidelines mentioning it:
privacy protection: 17
accountability: 17
fairness, non-discrimination, justice: 17
transparency, openness: 15
safety, cybersecurity: 15
common good, sustainability, well-being: 15
human oversight, control, auditing: 12
explainability, interpretability: 10
solidarity, inclusion, social cohesion: 10
science-policy link: 10
legislative framework, legal status of AI systems: 9
responsible/intensified research funding: 8
public awareness, education about AI and its risks: 8
future of employment: 8
dual-use problem, military, AI arms race: 7
field-specific deliberations (health, military, mobility etc.): 7
human autonomy: 7
diversity in the field of AI: 6
certification for AI products: 4
cultural differences in the ethically aligned design of AI systems: 2
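The counts above are a simple aggregation over a guideline-to-issues coding. As a purely illustrative aid (not the paper's actual coding data), a minimal Python sketch of that counting step could look as follows, assuming a hypothetical dictionary "coding" that maps each guideline to the set of issues it mentions:

    from collections import Counter

    # Hypothetical excerpt of a guideline -> issues coding; illustrative only.
    # The real analysis covers 21 guidelines and the issues listed above.
    coding = {
        "The Asilomar AI Principles": {"privacy protection", "accountability", "safety, cybersecurity"},
        "OECD Recommendation": {"privacy protection", "transparency, openness", "accountability"},
        "AI Now 2018 Report": {"fairness, non-discrimination, justice", "accountability"},
    }

    # For every issue, count in how many guidelines it is mentioned.
    mentions = Counter(issue for issues in coding.values() for issue in issues)

    for issue, count in mentions.most_common():
        print(f"{issue}: {count}")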
Recommended publications
  • Christian Social Ethics
    How should the protection of privacy, threatened by new technologies like radio frequency identification (RFID), be seen from a Judeo-Christian perspective? A dissertation by Erwin Walter Schmidt, submitted in accordance with the requirements for the degree of Master of Theology in the subject Theological Ethics at the UNIVERSITY OF SOUTH AFRICA. Supervisor: Prof Dr Puleng LenkaBula. Co-supervisor: Dr Dr Volker Kessler. November 2011. Summary: Radio Frequency Identification (RFID) is a new technology which allows people to identify objects automatically, but there is a suspicion that, if people are tracked, their privacy may be infringed. This raises questions about how far this technology is acceptable and how privacy should be protected. It has also initiated a discussion involving a wide range of technical, philosophical, political, social, cultural, and economical aspects. There is also a need to consider the ethical and theological perspectives. This dissertation takes all its relevant directions from a Judeo-Christian theological perspective. On one side the use of technology is considered, and on the other side the value of privacy, its infringements and protection are investigated. According to Jewish and Christian understanding, human dignity has to be respected, including the right to privacy. As a consequence, RFID may only be used for applications that do not infringe this right. This conclusion, however, is not limited to RFID; it will be relevant for other, future surveillance technologies as well. Key terms: radio frequency identification, privacy, human rights, right to privacy, technology, information and communication technology, privacy enhancing technology, Internet of Things, data protection, privacy impact assessment.
  • Pragmatism, Ethics, and Technology (Hans Radder, Free University of Amsterdam)
    Techné 7:3 (Spring 2004). Pragmatist Ethics for a Technological Culture presents a variety of essays on a significant and timely issue. The plan of the book is thoughtful. It comprises nine major chapters, each followed by a brief commentary. The volume is divided into four parts: technology and ethics, the status of pragmatism, pragmatism and practices, and discourse ethics and deliberative democracy. In addition, the introductory and concluding chapters by the editors help to connect the various contributions. Moreover, these chapters sketch an interesting programmatic approach for dealing with the ethical problems of our technological culture. The only complaint one might have about the book's composition is the lack of a subject and name index. In this essay, I will not present the usual summary review but instead offer some reflections on the three main concepts of the book (pragmatism, ethics and technology) and on the way these concepts have been explained, employed and related in the various contributions. My overall conclusion is that, although much can be learned from the book, it also falls short in some respects. The most important problem is that, appearances notwithstanding, the full significance of technology for our ethical problems is not being appropriately acknowledged. Pragmatism: Let me start with a preliminary issue. As do most of the authors, I think that it is appropriate to speak of a pragmatist, instead of a merely pragmatic, approach to ethics. As I see it, a pragmatist approach requires the commitment to engage in discursive explanation and argumentation, while a pragmatic approach suggests that these more theoretical activities may be omitted.
  • Librarianship and the Philosophy of Information
    Ken R. Herold (Systems Manager, Burke Library, Hamilton College, Clinton, NY 13323), "Librarianship and the Philosophy of Information," Library Philosophy and Practice Vol. 3, No. 2 (Spring 2001), ISSN 1522-0222; archived July 2005 at DigitalCommons@University of Nebraska - Lincoln, https://digitalcommons.unl.edu/libphilprac/27. "My purpose is to tell of bodies which have been transformed into shapes of a different kind." Ovid, Metamorphoses. Part I. Library Philosophy. Provocation: Information seems to be ubiquitous, diaphanous, a-categorical, discrete, a-dimensional, and knowing. Ubiquitous: Information is ever-present and pervasive in our technology and beyond in our thinking about the world, appearing to be a generic 'thing' arising from all of our contacts with each other and our environment, whether thought of in terms of communication or cognition. For librarians information is a universal concept, at its greatest extent total in content and comprehensive in scope, even though we may not agree that all information is library information. Diaphanous: Due to its virtuality, the manner in which information has the capacity to make an effect, information is freedom. In many aspects it exhibits a transparent quality, a window-like clarity as between source and patron in an ideal interface or a perfect exchange without bias.
  • Ordinary Technoethics
    Michel Puech, Philosophy, Paris-Sorbonne University, Paris, France. Email: [email protected]. Web site: http://michel.puech.free.fr. Abstract: From recent philosophy of technology emerges the need for an ethical assessment of the ordinary use of technological devices, in particular telephones, computers, and all kinds of digital artifacts. The usual method of academic ethics, which is a top-down deduction starting with metaethics and ending in applied ethics, appears to be largely unproductive for this task. It provides "ideal" advice, that is to say formal and often sterile. As in the opposition between "ordinary language" philosophy and "ideal language" philosophy, the ordinary requires attention and an ethical investigation of the complex and pervasive use of everyday technological devices. Some examples indicate how a bottom-up reinvention of the ethics of technology can help in numerous techno-philosophical predicaments, including ethical sustainability. This paper resists "Ideal Technoethics", which is implicit in mainstream academic applied ethics approaches and is currently favored by the bureaucratic implementation of ethics in public and private affairs. Instead, some trends in philosophy of technology emphasize the importance of ordinary technologically-laden behaviors. If we take this approach one step further, it leads to ordinary technoethics. In my take on ordinary technology, values are construed differently, starting from the importance of the ordinary use of technology (humble devices and focal familiar practices). The primacy of use in the history of the Internet provides a paradigm for the ordinary empowerment of users. What are the ethical consequences of this empowerment, and how is the average human being today equipped to address them in the innumerable micro-actions of ordinary life? Technoethics: Technoethics as a research and practice field is situated in between philosophy of technology and applied ethics.
  • New Perspectives on Technology, Values, and Ethics: Theoretical and Practical (Boston Studies in the Philosophy and History of Science)
    Boston Studies in the Philosophy and History of Science, Volume 315. Wenceslao J. Gonzalez (Editor), New Perspectives on Technology, Values, and Ethics: Theoretical and Practical. Series editors: Alisa Bokulich, Department of Philosophy, Boston University, Boston, MA, USA; Robert S. Cohen, Boston University, Watertown, MA, USA; Jürgen Renn, Max Planck Institute for the History of Science, Berlin, Germany; Kostas Gavroglu, University of Athens, Athens, Greece. The series Boston Studies in the Philosophy and History of Science was conceived in the broadest framework of interdisciplinary and international concerns. Natural scientists, mathematicians, social scientists and philosophers have contributed to the series, as have historians and sociologists of science, linguists, psychologists, physicians, and literary critics. The series has been able to include works by authors from many other countries around the world. The editors believe that the history and philosophy of science should itself be scientific, self-consciously critical, humane as well as rational, sceptical and undogmatic while also receptive to discussion of first principles. One of the aims of Boston Studies, therefore, is to develop collaboration among scientists, historians and philosophers. Boston Studies in the Philosophy and History of Science looks into and reflects on interactions between epistemological and historical dimensions in an effort to understand the scientific enterprise from every viewpoint.
  • Why Information Matters (Luciano Floridi)
    When we use a computer, its performance seems to degrade progressively. This is not a mere impression. Over the years of owning a particular machine, it will get sluggish. Sometimes this slowdown is caused by hardware faults, but more often the culprit is software: programs get more complicated, as more features are added and as old bugs are patched (or not), and greater demands are placed on resources by new programs running in the background. After a while, even rebooting the computer does not restore performance, and the only solution is to upgrade to a new machine. Philosophy can be a bit like a computer getting creakier. It starts well, dealing with significant and serious issues that matter to anyone. Yet, in time, it can get bloated and bogged down and slow. Philosophy begins to care less about philosophical questions than about philosophers’ questions, which then consume increasing amounts of intellectual attention. The problem with philosophers’ questions is not that they are impenetrable to outsiders — although they often are, like any internal game — but that whatever the answers turn out to be, assuming there are any, they do not matter, because nobody besides philosophers could care about the questions in the first place. This is an old problem. In the sixteenth century, the French scholar and doctor François Rabelais satirized scholastic philosophy in his Gargantua and Pantagruel. In a catalogue of 139 invented book titles that he attributes to the library of the Abbey of St. Victor, he lists such titles as “The Niddy-noddy of the Satchel-loaded Seekers, by Friar Blindfastatis” and “The Raver and idle Talker in cases of Conscience.” Centuries later, we seem to be back to the same problem.
  • The Logic of Being Informed (Luciano Floridi)
    Logique & Analyse 196 (2006). Abstract: One of the open problems in the philosophy of information is whether there is an information logic (IL), different from epistemic (EL) and doxastic logic (DL), which formalises the relation “a is informed that p” (Iap) satisfactorily. In this paper, the problem is solved by arguing that the axiom schemata of the normal modal logic (NML) KTB (also known as B or Br or Brouwer's system) are well suited to formalise the relation of “being informed”. After having shown that IL can be constructed as an informational reading of KTB, four consequences of a KTB-based IL are explored: information overload; the veridicality thesis (Iap → p); the relation between IL and EL; and the Kp → Bp principle or entailment property, according to which knowledge implies belief. Although these issues are discussed later in the article, they are the motivations behind the development of IL. Introduction: As anyone acquainted with modal logic (ML) knows, epistemic logic (EL) formalises the relation “a knows that p” (Kap), whereas doxastic logic (DL) formalises the relation “a believes that p” (Bap). One of the open problems in the philosophy of information (Floridi, 2004c) is whether there is also an information logic (IL), different from EL and from DL, that formalises the relation “a is informed that p” (Iap) equally well. The keyword here is “equally” not “well”. One may contend that EL and DL do not capture the relevant relations very well or even not well at all.
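    For reference, the KTB schemata mentioned in the abstract can be stated compactly. The LaTeX block below restates the standard axioms of the normal modal logic KTB under the informational reading of the operator (written I_a for "a is informed that"); it summarizes textbook modal logic, not the full construction of the paper.

        \[
        \begin{aligned}
        \text{(K)}\quad & I_a(p \rightarrow q) \rightarrow (I_a p \rightarrow I_a q)\\
        \text{(T)}\quad & I_a p \rightarrow p \qquad \text{(the veridicality thesis)}\\
        \text{(B)}\quad & p \rightarrow I_a \lnot I_a \lnot p
        \end{aligned}
        \]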
  • The Technological Mediation of Morality
    In this dissertation, Olya Kudina investigates the complex interactions between ethics and technology. Center stage to this is the phenomenon of “value dynamism” that explores how technologies co-shape the meaning of values that guide us through our lives and with which we evaluate these same technologies. The dissertation provides an encompassing view on value dynamism and the mediating role of technologies in it through empirical and philosophical investigations, as well as with attention to the larger fields of ethics, design and Technology Assessment. The Technological Mediation of Morality: Value Dynamism, and the Complex Interaction between Ethics and Technology. Dissertation to obtain the degree of doctor at the University of Twente, on the authority of the rector magnificus, prof.dr. T.T.M. Palstra, on account of the decision of the Doctorate Board, to be publicly defended on Friday the 17th of May 2019 at 14:45 hours by Olga Kudina, born on the 11th of January 1990 in Vinnytsia, Ukraine. Graduation committee: chairman/secretary Prof. dr. Th.A.J. Toonen (University of Twente); supervisor Prof. dr. ir. P.P.C.C. Verbeek (University of Twente); co-supervisor Dr. M. Nagenborg (University of Twente).
  • Ethics and the Systemic Character of Modern Technology
    Sytse Strijbos, Free University, Amsterdam. Phil & Tech 3:4 (Summer 1998). A distinguishing feature of today’s world is that technology has built the house in which humanity lives. More and more, our lives are lived within the confines of its walls. Yet this implies that technology entails far more than the material artifacts surrounding us. Technology is no longer simply a matter of objects in the hands of individuals; it has become a very complex system in which our everyday lives are embedded. The systemic character of modern technology confronts us with relatively new questions and dimensions of human responsibility. Hence this paper points out the need for exploring systems ethics as a new field of ethics essential for managing our technological world and for transforming it into a sane and healthy habitat for human life. Special attention is devoted to the introduction of information technology, which will continue unabated into coming decades and which is already changing our whole world of technology. Key words: technological system, systems ethics, social constructivism, technological determinism, information technology. 1. Introduction: A burning question for our time concerns responsibility for technology. Through technological intervention, people have succeeded in virtually eliminating a variety of natural threats. At least that is true for the prosperous part of the world, although here too the forces of nature can unexpectedly smash through the safety barriers of a technological society, as in the case of the Kobe earthquake (1995). The greatest threat to technological societies appears to arise, however, from within.
  • Ethics of Risk: Kristin Shrader-Frechette's Philosophical Critique of Risk Assessment
    Topi Heikkerö, Centre for Social Ethics, University of Helsinki, Helsinki, FINLAND. Email: [email protected]. Abstract: This paper addresses risk assessment from a philosophical point of view. It presents and critically reviews the work of Kristin Shrader-Frechette. It introduces the ethical, epistemological, and methodological issues related to risk assessment. The paper focuses on the ethical questions of justice in risk decisions. It opens by framing the relationship between ethics and technology in the modern world. Then the paper turns to a brief description of risk assessment as a central method in technological decision making. It proceeds to show how Shrader-Frechette analyzes ethical and political aspects of risk assessment. The central argumentation in her critique follows Rawlsian lines: distributive and participatory inequalities in creating technological constructions need to be justified. To clarify this requirement she formulates the Principle of Prima Facie Political Equity (PPFPE), which is her central tool in most of her ethical criticism, for instance, in relation to future generations: prima facie, all generations should be treated equally. Brief critical remarks conclude the paper. They touch upon placing Shrader-Frechette's project on the academic chart and her liberal individualist anthropology. Key words: ethics of technology, environmental justice, risk assessment, Kristin Shrader-Frechette. Introduction: Technological advance is seldom achieved without expenses. Pollution, draining the earth of resources, accidents, noise, and loss of pristine environments, for example, constitute the costs our societies pay for the well-being that is gained through applications of technological systems and devices.
  • Toward a Philosophy of Technology (Hans Jonas, The Hastings Center Report, Vol. 9, No. 1, Feb. 1979, pp. 34-43)
    Hans Jonas, "Toward a Philosophy of Technology," The Hastings Center Report, Vol. 9, No. 1 (Feb., 1979), pp. 34-43. Published by The Hastings Center. Stable URL: http://www.jstor.org/stable/3561700. Are there philosophical aspects to technology? Of course there are, as there are to all things of importance in human endeavor and destiny. Modern technology touches on almost everything vital to man's existence - material, mental, and spiritual. Indeed, what of man is not involved? The way he lives his life and looks at objects, his intercourse with the world and with his peers, his powers and modes of action, kinds of goals, states and changes of society, objectives and forms of politics (including warfare no less than welfare), the sense and quality of life, even man's fate and that of his ... the world furnished with them looks. A third, overarching theme is the moral side of technology as a burden on human responsibility, especially its long-term effects on the global condition of man and environment. This - my own main preoccupation over the past years - will only be touched upon. I. The Formal Dynamics of Technology: First some observations about technology's form as an abstract whole of movement. We are concerned with characteristics of modern technology and therefore ask first what distinguishes it formally from all previous technology.
  • Artificial Intelligence: from Ethics to Policy
    Artificial intelligence: From ethics to policy. Study, Panel for the Future of Science and Technology (STOA), EPRS | European Parliamentary Research Service, Scientific Foresight Unit, PE 641.507, June 2020. There is little doubt that artificial intelligence (AI) and machine learning (ML) will revolutionise public services. However, the power for positive change that AI provides simultaneously holds the potential for negative impacts on society. AI ethics work to uncover the variety of ethical issues resulting from the design, development, and deployment of AI. The question at the centre of all current work in AI ethics is: How can we move from AI ethics to specific policy and legislation for governing AI? Based on a framing of 'AI as a social experiment', this study arrives at policy options for public administrations and governmental organisations who are looking to deploy AI/ML solutions, as well as the private companies who are creating AI/ML solutions for use in the public arena. The reasons for targeting this application sector concern: the need for a high standard of transparency, respect for democratic values, and legitimacy. The policy options presented here chart a path towards accountability; procedures and decisions of an ethical nature are systematically logged prior to the deployment of an AI system. This logging is the first step in allowing ethics to play a crucial role in the implementation of AI for the public good. Authors: This study has been written by Dr Aimee van Wynsberghe of Delft University of Technology, co-director of the Foundation for Responsible Robotics, at the request of the Panel for the Future of Science and Technology (STOA), and managed by the Scientific Foresight Unit within the Directorate-General for Parliamentary Research Services (EPRS) of the Secretariat of the European Parliament.
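    The study's emphasis on systematically logging ethical procedures and decisions before deployment can be made concrete with a small data structure. The Python sketch below is a hypothetical illustration only; the EthicsLogEntry class and its field names are assumptions for the example, not part of the study.

        from dataclasses import dataclass, field
        from datetime import datetime, timezone
        import json

        @dataclass
        class EthicsLogEntry:
            """One logged ethical decision taken before an AI system is deployed (illustrative)."""
            system_name: str
            decision: str    # the ethical procedure or decision taken
            rationale: str   # why it was taken
            decided_by: str  # accountable person or board
            timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

        # Example: record one decision and serialize it for an audit trail.
        entry = EthicsLogEntry(
            system_name="benefits-eligibility-model",  # hypothetical public-sector system
            decision="Exclude postcode as an input feature",
            rationale="Postcode can act as a proxy for protected attributes",
            decided_by="Municipal AI ethics board",
        )
        print(json.dumps(entry.__dict__, indent=2))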