The Ethics of Artificial Intelligence: Issues and Initiatives


The Ethics of Artificial Intelligence: Issues and Initiatives

STUDY, Panel for the Future of Science and Technology
EPRS | European Parliamentary Research Service
Scientific Foresight Unit (STOA)
PE 634.452 – March 2020

This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies. It also reviews the guidelines and frameworks which countries and regions around the world have created to address them. It presents a comparison between the current main frameworks and the main ethical issues, and highlights gaps around the mechanisms of fair benefit-sharing; assigning of responsibility; exploitation of workers; energy demands in the context of environmental and climate changes; and more complex and less certain implications of AI, such as those regarding human relationships.

AUTHORS
This study has been drafted by Eleanor Bird, Jasmin Fox-Skelly, Nicola Jenner, Ruth Larbey, Emma Weitkamp and Alan Winfield from the Science Communication Unit at the University of the West of England, at the request of the Panel for the Future of Science and Technology (STOA), and managed by the Scientific Foresight Unit, within the Directorate-General for Parliamentary Research Services (EPRS) of the Secretariat of the European Parliament.

Acknowledgements
The authors would like to thank the following interviewees: John C. Havens (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS)) and Jack Stilgoe (Department of Science & Technology Studies, University College London).

ADMINISTRATOR RESPONSIBLE
Mihalis Kritikos, Scientific Foresight Unit (STOA)
To contact the publisher, please e-mail [email protected]

LINGUISTIC VERSION
Original: EN
Manuscript completed in March 2020.

DISCLAIMER AND COPYRIGHT
This document is prepared for, and addressed to, the Members and staff of the European Parliament as background material to assist them in their parliamentary work. The content of the document is the sole responsibility of its author(s), and any opinions expressed herein should not be taken to represent an official position of the Parliament. Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.

Brussels © European Union, 2020.
PE 634.452
ISBN: 978-92-846-5799-5
doi: 10.2861/6644
QA-01-19-779-EN-N

http://www.europarl.europa.eu/stoa (STOA website)
http://www.eprs.ep.parl.union.eu (intranet)
http://www.europarl.europa.eu/thinktank (internet)
http://epthinktank.eu (blog)

Executive summary

This report deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies. It also reviews the guidelines and frameworks that countries and regions around the world have created to address them.
It presents a comparison between the current main frameworks and the main ethical issues, and highlights gaps around mechanisms of fair benefit-sharing; assigning of responsibility; exploitation of workers; energy demands in the context of environmental and climate changes; and more complex and less certain implications of AI, such as those regarding human relationships.

Chapter 1 introduces the scope of the report and defines key terms. The report draws on the European Commission's definition of AI as 'systems that display intelligent behaviour'. Other key terms defined in this chapter include intelligence and how this is used in the context of AI and intelligent robots (i.e. robots with an embedded AI), as well as machine learning, artificial neural networks and deep learning, before moving on to consider definitions of morality and ethics and how these relate to AI.

In Chapter 2 the report maps the main ethical dilemmas and moral questions associated with the deployment of AI. The report begins by outlining a number of potential benefits that could arise from AI, as a context in which to situate ethical, social and legal considerations. Within the context of issues for society, the report considers the potential impacts of AI on the labour market, focusing on the likely impact on economic growth and productivity, the impact on the workforce, potential impacts on different demographics, including a worsening of the digital divide, and the consequences of the deployment of AI on the workplace. The report considers the potential impact of AI on inequality and how the benefits of AI could be shared within society, as well as issues concerning the concentration of AI technology within large internet companies and political stability. Other societal issues addressed in this chapter include privacy, human rights and dignity, bias, and issues for democracy.

Chapter 2 moves on to consider the impact of AI on human psychology, raising questions about the impact of AI on relationships, as in the case of intelligent robots taking on human social roles, such as nursing. Human-robot relationships may also affect human-human relationships in as yet unanticipated ways. This section also considers the question of personhood, and whether AI systems should have moral agency.

Impacts on the financial system are already being felt, with AI responsible for high trading volumes of equities. The report argues that, although markets are suited to automation, there are risks, including the use of AI for intentional market manipulation and collusion.

AI technology also poses questions for both civil and criminal law, particularly whether existing legal frameworks apply to decisions taken by AIs. Pressing legal issues include liability for tortious, criminal and contractual misconduct involving AI. While it may seem unlikely that AIs will be deemed to have sufficient autonomy and moral sense to be held liable themselves, they do raise questions about who is liable for which crime (or indeed whether human agents can avoid liability by claiming they did not know the AI could or would do such a thing). In addition to challenging questions around liability, AI could abet criminal activities, such as smuggling (e.g. by using unmanned vehicles), as well as harassment, torture, sexual offences, theft and fraud.
Self-driving autonomous cars are likely to raise issues in relation to product liability that could lead to more complex cases (currently insurers typically avoid lawsuits by determining which driver is at fault, unless a car defect is involved).

Large-scale deployment of AI could also have both positive and negative impacts on the environment. Negative impacts include increased use of natural resources, such as rare earth metals, pollution and waste, as well as energy consumption. However, AI could help with waste management and conservation, offering environmental benefits.

The potential impacts of AI are far-reaching, but realising them also requires trust from society. AI will need to be introduced in ways that build trust and understanding, and respect human and civil rights. This requires transparency, accountability, fairness and regulation.

Chapter 3 explores ethical initiatives in the field of AI. The chapter first outlines the ethical initiatives identified for this report, summarising their focus and, where possible, identifying funding sources. The harms and concerns tackled by these initiatives are then discussed in detail. The issues raised can be broadly aligned with the issues identified in Chapter 2 and can be split into questions around: human rights and well-being; emotional harm; accountability and responsibility; security, privacy, accessibility and transparency; safety and trust; social harm and social justice; lawfulness and justice; control and the ethical use (or misuse) of AI; environmental harm and sustainability; informed use; and existential risk.

All initiatives focus on human rights and well-being, arguing that AI must not affect basic and fundamental human rights. The IEEE initiative further recommends governance frameworks, standards and regulatory bodies to oversee the use of AI and ensure that human well-being is prioritised throughout the design phase. The Montreal Declaration argues that AI should encourage and support the growth and flourishing of human well-being.

Another prominent issue identified in these initiatives is concern about the impact of AI on the human emotional experience, including the ways in which AIs address cultural sensitivities (or fail to do so). Emotional harm is considered a particular risk in the case of intelligent robots with whom humans might form an intimate relationship. Emotional harm may also arise should AI be designed to emotionally manipulate users (though it is recognised that such nudging can also have positive impacts, e.g. on healthy eating). Several initiatives recognise that nudging requires particular ethical consideration.

The need for accountability is recognised by the initiatives, the majority of which focus on the need for AI to be auditable as a means of ensuring that manufacturers, designers and owners/operators of AI can be held responsible for harm caused. This also raises the question of autonomy and what that means in the context of AI. Within the initiatives there is a recognition that new
Recommended publications
  • A Randomized Experiment Using Absenteeism Information to “Nudge” Attendance
    February 2017. Making an Impact: A randomized experiment using absenteeism information to "nudge" attendance. Todd Rogers (Harvard Kennedy School), Teresa Duncan (ICF International), Tonya Wolford (School District of Philadelphia), John Ternovski and Shruthi Subramanyam (Harvard Kennedy School), Adrienne Reitano (School District of Philadelphia).

    Key findings: This randomized controlled trial, conducted in collaboration with the School District of Philadelphia, finds that a single postcard encouraging guardians to improve their student's attendance reduced absences by roughly 2.4 percent. Guardians received one of two types of message: one encouraging guardians to improve their student's attendance, or one that did the same but also included specific information about the student's attendance history. There was no statistically significant difference in absences between students according to which message their guardians received. The effect of the postcard did not differ between students in grades 1–8 and students in grades 9–12.

    U.S. Department of Education, Institute of Education Sciences: Thomas W. Brock, Commissioner for Education Research, Delegated the Duties of Director. National Center for Education Evaluation and Regional Assistance: Audrey Pendleton, Acting Commissioner; Elizabeth Eisner, Acting Associate Commissioner; Amy Johnson, Action Editor; Felicia Sanders, Project Officer. REL 2017–252.

    The National Center for Education Evaluation and Regional Assistance (NCEE) conducts unbiased large-scale evaluations of education programs and practices supported by federal funds; provides research-based technical assistance to educators and policymakers; and supports the synthesis and widespread dissemination of the results of research and evaluation throughout the United States.

    February 2017. This report was prepared for the Institute of Education Sciences (IES) under Contract ED-IES-12-C-0006 by Regional Educational Laboratory Mid-Atlantic, administered by ICF International.
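    As a rough illustration of the headline estimate, the short Python sketch below shows how a percent-reduction figure is computed from a two-arm comparison of mean absences. The group means are hypothetical placeholders chosen only to reproduce a figure of roughly 2.4 percent; they are not the study's data, and the study's actual analysis was more involved.

      # Percent reduction in mean absences, treatment vs control.
      # All numbers are hypothetical placeholders, not the study's data.

      def percent_reduction(mean_treatment: float, mean_control: float) -> float:
          """Percent reduction in mean absences relative to the control group."""
          return 100.0 * (mean_control - mean_treatment) / mean_control

      control_mean = 10.51    # mean absences, no-postcard group (assumed)
      treatment_mean = 10.26  # mean absences, postcard group (assumed)

      print(f"{percent_reduction(treatment_mean, control_mean):.1f}%")  # ~2.4%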
  • Artificial Intelligence and the Ethics of Self-Learning Robots Shannon Vallor Santa Clara University, [email protected]
    Santa Clara University, Scholar Commons, Philosophy, College of Arts & Sciences, 10-3-2017. Shannon Vallor (Santa Clara University, [email protected]) and George A. Bekey. Follow this and additional works at: http://scholarcommons.scu.edu/phi (Part of the Philosophy Commons).

    Recommended Citation: Vallor, S., & Bekey, G. A. (2017). Artificial Intelligence and the Ethics of Self-learning Robots. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot Ethics 2.0 (pp. 338–353). Oxford University Press.

    This material was originally published in Robot Ethics 2.0, edited by Patrick Lin, Keith Abney, and Ryan Jenkins, and has been reproduced by permission of Oxford University Press. For permission to reuse this material, please visit http://www.oup.co.uk/academic/rights/permissions. This book chapter is brought to you for free and open access by the College of Arts & Sciences at Scholar Commons. It has been accepted for inclusion in Philosophy by an authorized administrator of Scholar Commons. For more information, please contact [email protected].

    Chapter 22: Artificial Intelligence and the Ethics of Self-Learning Robots. Shannon Vallor and George A. Bekey.

    The convergence of robotics technology with the science of artificial intelligence (or AI) is rapidly enabling the development of robots that emulate a wide range of intelligent human behaviors. Recent advances in machine learning techniques have produced significant gains in the ability of artificial agents to perform or even excel in activities formerly thought to be the exclusive province of human intelligence, including abstract problem-solving, perceptual recognition, social interaction, and natural language use.
  • Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril
    Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. Michael Matheny, Sonoo Thadaney Israni, Mahnoor Ahmed, and Danielle Whicher, Editors. Washington, DC: NAM.EDU. Prepublication copy - uncorrected proofs.

    National Academy of Medicine, 500 Fifth Street, NW, Washington, DC 20001.

    NOTICE: This publication has undergone peer review according to procedures established by the National Academy of Medicine (NAM). Publication by the NAM signifies that it is the product of a carefully considered process and is a contribution worthy of public attention, but does not constitute endorsement of conclusions and recommendations by the NAM. The views presented in this publication are those of individual contributors and do not represent formal consensus positions of the authors' organizations; the NAM; or the National Academies of Sciences, Engineering, and Medicine.

    Library of Congress Cataloging-in-Publication Data to Come. Copyright 2019 by the National Academy of Sciences. All rights reserved. Printed in the United States of America.

    Suggested citation: Matheny, M., S. Thadaney Israni, M. Ahmed, and D. Whicher, Editors. 2019. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. NAM Special Publication. Washington, DC: National Academy of Medicine.

    "Knowing is not enough; we must apply. Willing is not enough; we must do." --Goethe

    ABOUT THE NATIONAL ACADEMY OF MEDICINE: The National Academy of Medicine is one of three Academies constituting the National Academies of Sciences, Engineering, and Medicine (the National Academies). The National Academies provide independent, objective analysis and advice to the nation and conduct other activities to solve complex problems and inform public policy decisions.
  • Apocalypse Now? Initial Lessons from the Covid-19 Pandemic for the Governance of Existential and Global Catastrophic Risks
    Journal of International Humanitarian Legal Studies 11 (2020) 295-310. brill.com/ihls. doi: 10.1163/18781527-01102004. © Koninklijke Brill NV, Leiden, 2020.

    Apocalypse Now? Initial Lessons from the Covid-19 Pandemic for the Governance of Existential and Global Catastrophic Risks. Hin-Yan Liu, Kristian Lauta and Matthijs Maas, Faculty of Law, University of Copenhagen, Copenhagen, Denmark. [email protected]; [email protected]; [email protected]

    Abstract: This paper explores the ongoing Covid-19 pandemic through the framework of existential risks – a class of extreme risks that threaten the entire future of humanity. In doing so, we tease out three lessons: (1) possible reasons underlying the limits and shortfalls of international law, international institutions and other actors which Covid-19 has revealed, and what they reveal about the resilience or fragility of institutional frameworks in the face of existential risks; (2) using Covid-19 to test and refine our prior 'Boring Apocalypses' model for understanding the interplay of hazards, vulnerabilities and exposures in facilitating a particular disaster, or magnifying its effects; and (3) extrapolating some possible futures for existential risk scholarship and governance.

    Keywords: Covid-19 – pandemics – existential risks – global catastrophic risks – boring apocalypses

    1 Introduction: Our First 'Brush' with Existential Risk?

    All too suddenly, yesterday's 'impossibilities' have turned into today's 'conditions'. The impossible has already happened, and quickly. The impact of the Covid-19 pandemic, both directly and as manifested through the far-reaching global societal responses to it, signals a jarring departure from even the recent past, and suggests that our futures will be profoundly different in its aftermath.
  • Nudge Me Right: Personalizing Online Security Nudges to People's Decision-Making Styles
    Computers in Human Behavior 109 (2020) 106347. Contents lists available at ScienceDirect. Journal homepage: http://www.elsevier.com/locate/comphumbeh. Full length article.

    Nudge me right: Personalizing online security nudges to people's decision-making styles. Eyal Peer (Federmann School of Public Policy, Hebrew University of Jerusalem, Israel), Serge Egelman (International Computer Science Institute, Berkeley, CA, USA; Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA), Marian Harbach (International Computer Science Institute, Berkeley, CA, USA), Nathan Malkin (Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA), Arunesh Mathur (Department of Computer Science, Princeton University, USA), Alisa Frik (International Computer Science Institute, Berkeley, CA, USA; Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA).

    Abstract: Nudges are simple and effective interventions that alter the architecture in which people make choices, in order to help them make decisions that could benefit themselves or society. For many years, researchers and practitioners have used online nudges to encourage users to choose stronger and safer passwords. However, the effects of such nudges have been limited to local maxima, because they are designed with the "average" person in mind, instead of being customized to different individuals. We present a novel approach that analyzes individual differences in traits of decision-making style and, based on this analysis, selects which nudge, from an array of online password nudges, would be the most effective for each user. In two large-scale online studies, we show that such personalized nudges can lead to considerably better outcomes, increasing nudges' effectiveness up to four times compared to administering "one-size-fits-all" nudges. We regard these novel findings as a proof-of-concept that should steer more researchers, practitioners and policy-makers to develop and apply more efforts that could guarantee that each user is nudged in a way most right for them.
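    The core idea, selecting a nudge per user rather than one-size-fits-all, can be sketched in a few lines of Python. The sketch below is a hypothetical illustration, not the authors' implementation: the style labels, questionnaire scoring, and nudge texts are all invented for the example.

      # Hypothetical sketch of personalized nudge selection (not the paper's
      # code): map a user's dominant decision-making style to one nudge from
      # an array of password nudges. Labels and texts are invented.

      NUDGES = {
          "rational":  "A computer could guess this password in under an hour.",
          "intuitive": "Most people feel safer after strengthening their password.",
          "avoidant":  "One small change now avoids a painful reset later.",
      }

      def dominant_style(scores: dict) -> str:
          """Return the style with the highest questionnaire score."""
          return max(scores, key=scores.get)

      def select_nudge(scores: dict) -> str:
          """Pick the nudge matched to the user's dominant style."""
          return NUDGES[dominant_style(scores)]

      # Example: questionnaire scores suggesting a mostly intuitive decider.
      print(select_nudge({"rational": 0.2, "intuitive": 0.7, "avoidant": 0.1}))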
  • Real Vs Fake Faces: Deepfakes and Face Morphing
    Graduate Theses, Dissertations, and Problem Reports, 2021. Real vs Fake Faces: DeepFakes and Face Morphing. Jacob L. Dameron, WVU, [email protected]. Follow this and additional works at: https://researchrepository.wvu.edu/etd (Part of the Signal Processing Commons).

    Recommended Citation: Dameron, Jacob L., "Real vs Fake Faces: DeepFakes and Face Morphing" (2021). Graduate Theses, Dissertations, and Problem Reports. 8059. https://researchrepository.wvu.edu/etd/8059

    This thesis is protected by copyright and/or related rights. It has been brought to you by The Research Repository @ WVU with permission from the rights-holder(s). You are free to use this thesis in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you must obtain permission from the rights-holder(s) directly, unless additional rights are indicated by a Creative Commons license in the record and/or on the work itself. This thesis has been accepted for inclusion in the WVU Graduate Theses, Dissertations, and Problem Reports collection by an authorized administrator of The Research Repository @ WVU. For more information, please contact [email protected].

    Thesis submitted to the Benjamin M. Statler College of Engineering and Mineral Resources at West Virginia University in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering. Committee: Xin Li, Ph.D., Chair; Natalia Schmid, Ph.D.; Matthew Valenti, Ph.D. Lane Department of Computer Science and Electrical Engineering, Morgantown, West Virginia, 2021.

    Keywords: DeepFakes, Face Morphing, Face Recognition, Facial Action Units, Generative Adversarial Networks, Image Processing, Classification.
  • Phillip Stillman, The Narrative of Human Extinction and the Logic of Ecosystem Management
    Phillip Stillman, "Two Nineteenth-Century Contributions to Climate Change Discourse: The Narrative of Human Extinction and the Logic of Ecosystem Management", p. 92.

    Consider the following passage from Oscar Wilde's "Decay of Lying" (1891):

    Where, if not from the Impressionists, do we get those wonderful brown fogs that come creeping down our streets, blurring the gas-lamps and changing the houses into monstrous shadows? [...] The extraordinary change that has taken place in the climate of London during the last ten years is entirely due to a particular school of Art. [...] For what is Nature? Nature is no great mother who has borne us. She is our creation. It is in our brain that she quickens to life. Things are because we see them, and what we see, and how we see it, depends on the Arts that have influenced us.[1]

    The speaker is Vivian, a self-consciously sophistical aesthete, and his argument is that "Nature" is the causal consequence of "Art." Not long ago, a literary critic might have quoted such a passage with unmitigated approbation: "Things are because we see them, and what we see, and how we see it, depends on the Arts that have influenced us." The post-structural resonance of that kind of claim is strong, and the Jamesonian tradition of treating art as a means through which ideology reproduces itself depends heavily on the conviction that between the knower and the known there must be some determining symbolic mediation.[2] Now, however, it is difficult not to hesitate over the assertion that the "extraordinary change that has taken place in the climate of London during the last ten years is entirely due to a particular school of Art." Now we tend to take referential claims about "Nature" very seriously, especially with regard to

    [1] Oscar Wilde, "The Decay of Lying," in The Artist as Critic: Critical Writings of Oscar Wilde, ed.
  • AI Computer Wraps Up 4-1 Victory Against Human Champion: Nature Reports from AlphaGo's Victory in Seoul
    The Go Files: AI computer wraps up 4-1 victory against human champion. Nature reports from AlphaGo's victory in Seoul. Tanguy Chouard, 15 March 2016, Seoul, South Korea. [Photo: Lee Sedol, who has lost 4-1 to AlphaGo. Credit: Google DeepMind.]

    Tanguy Chouard, an editor with Nature, saw Google-DeepMind's AI system AlphaGo defeat a human professional for the first time last year at the ancient board game Go. This week, he is watching top professional Lee Sedol take on AlphaGo, in Seoul, for a $1 million prize.

    It's all over at the Four Seasons Hotel in Seoul, where this morning AlphaGo wrapped up a 4-1 victory over Lee Sedol — incidentally, earning itself and its creators an honorary '9-dan professional' degree from the Korean Baduk Association. After winning the first three games, Google-DeepMind's computer looked impregnable. But the last two games may have revealed some weaknesses in its makeup.

    Game four totally changed the Go world's view on AlphaGo's dominance, because it made it clear that the computer can 'bug' — or at least play very poor moves when on the losing side. It was obvious that Lee felt under much less pressure than in game three. And he adopted a different style, one based on taking large amounts of territory early on rather than immediately going for 'street fighting', such as making threats to capture stones. This style – called 'amashi' – seems to have paid off, because on move 78, Lee produced a play that somehow slipped under AlphaGo's radar. David Silver, a scientist at DeepMind who has been leading the development of AlphaGo, said the program estimated its probability as 1 in 10,000.
  • Future of Research on Catastrophic and Existential Risk
    ORE Open Research Exeter. Title: Working together to face humanity's greatest threats: Introduction to The Future of Research in Catastrophic and Existential Risk. Authors: Currie, AM; Ó hÉigeartaigh, S. Journal: Futures. Deposited in ORE: 06 February 2019. This version available at http://hdl.handle.net/10871/35764

    Copyright and reuse: Open Research Exeter makes this work available in accordance with publisher policies. A note on versions: the version presented here may differ from the published version. If citing, you are advised to consult the published version for pagination, volume/issue and date of publication.

    Working together to face humanity's greatest threats: Introduction to The Future of Research on Catastrophic and Existential Risk. Adrian Currie & Seán Ó hÉigeartaigh. Penultimate version, forthcoming in Futures.

    Acknowledgements: We would like to thank the authors of the papers in the special issue, as well as the referees who provided such constructive and useful feedback. We are grateful to the team at the Centre for the Study of Existential Risk who organized the first Cambridge Conference on Catastrophic Risk, where many of the papers collected here were originally presented, and whose multi-disciplinary expertise was invaluable for making this special issue a reality. We'd like to thank Emma Bates, Simon Beard and Haydn Belfield for feedback on drafts. Ted Fuller, Futures' Editor-in-Chief, also provided invaluable guidance throughout. The conference, and a number of the publications in this issue, were made possible through the support of a grant from the Templeton World Charity Foundation (TWCF); the conference was also supported by a supplementary grant from the Future of Life Institute.
  • On the Complexities of Using New Visual Technologies to Communicate About Wildlife Conservation
    Verma A, Van der Wal R, Fischer A. Microscope and spectacle: On the complexities of using new visual technologies to communicate about wildlife conservation. Ambio 2015, 44(Suppl. 4), 648-660. DOI: http://dx.doi.org/10.1007/s13280-015-0715-z. Date deposited: 05/12/2016. Newcastle University ePrints - eprint.ncl.ac.uk

    Copyright: © The Author(s) 2015. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

    Microscope and spectacle: On the complexities of using new visual technologies to communicate about wildlife conservation. Audrey Verma, René van der Wal, Anke Fischer.

    Abstract: Wildlife conservation-related organisations increasingly employ new visual technologies in their science communication and public engagement efforts. Here, we examine the use of such technologies for wildlife conservation campaigns. We obtained empirical data from four UK-based organisations through semi-structured interviews and participant observation. [These organisations were] well-versed in crafting public outreach and awareness-raising activities. These range from unidirectional educational campaigns and advertising and branding projects, to citizen science research with varying degrees of public participation, the implementation of interactive media strategies, and the expansion of modes of interpretation to
  • An Ethical Framework for Smart Robots Mika Westerlund
    An Ethical Framework for Smart Robots. Mika Westerlund.

    "Never underestimate a droid." - Leia Organa, Star Wars: The Rise of Skywalker

    This article focuses on "roboethics" in the age of growing adoption of smart robots, which can now be seen as a new robotic "species". As autonomous AI systems, they can collaborate with humans and are capable of learning from their operating environment, experiences, and human behaviour feedback in human-machine interaction. This enables smart robots to improve their performance and capabilities. This conceptual article reviews key perspectives on roboethics and establishes a framework to illustrate its main ideas and features. Building on previous literature, roboethics has four major types of implications for smart robots: 1) smart robots as amoral and passive tools, 2) smart robots as recipients of ethical behaviour in society, 3) smart robots as moral and active agents, and 4) smart robots as ethical impact-makers in society. The study contributes to the current literature by suggesting that there are two underlying ethical and moral dimensions behind these perspectives, namely the "ethical agency of smart robots" and the "object of moral judgment", as well as what this could look like as smart robots become more widespread in society. The article concludes by suggesting how scientists and smart robot designers can benefit from the framework, discussing the limitations of the present study, and proposing avenues for future research.

    Introduction

    Robots are becoming increasingly prevalent in our daily, social, and professional lives, performing various work and household tasks, as well as operating driverless vehicles and public transportation systems (Leenes et al., 2017). [...] capabilities (Lichocki et al., 2011; Petersen, 2007). Hence, Lin et al. (2011) define a "robot" as an engineered machine that senses, thinks, and acts, thus being able to process information from sensors and other sources, such as an internal set of rules, either programmed or learned, that enables the machine to
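    Lin et al.'s "senses, thinks, and acts" definition maps naturally onto a minimal control loop. The Python sketch below is a toy illustration of that loop under assumed names (a simulated distance sensor, a hard-coded rule set, and a print-out actuator); it is not code from the article.

      # Toy sense-think-act loop illustrating Lin et al.'s (2011) robot
      # definition. Sensor, rules, and actuator are simulated stand-ins.
      import random

      def sense() -> float:
          """Read a simulated distance sensor (metres to nearest obstacle)."""
          return random.uniform(0.0, 2.0)

      def think(distance: float) -> str:
          """Apply an internal set of rules (programmed, not learned)."""
          return "stop" if distance < 0.5 else "advance"

      def act(command: str) -> None:
          """Drive a simulated actuator."""
          print(f"actuator: {command}")

      for _ in range(3):  # three cycles of the control loop
          act(think(sense()))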
  • Wise Leadership & AI 3
    Wise Leadership and AI, Chapter 3: Behind the Scenes of the Machines. What's Ahead for Artificial General Intelligence? By Dr. Peter Verhezen, with the Amrop Editorial Board.

    Putting the G in AI: 8 points. True, generalized intelligence will be achieved when computers can do or learn anything that a human can. At the highest level, this will mean that computers aren't just able to process the 'what', but understand the 'why' behind data: context, and cause-and-effect relationships. Even, someday, achieving consciousness. All of this will demand ethical and emotional intelligence.

    1. We underestimate ourselves. The human brain is amazingly general compared to any digital device yet developed. It processes bottom-up and top-down information, whereas AI (still) only works bottom-up, based on what it 'sees', working on specific, narrowly defined tasks. So, unlike humans, AI is not yet situationally aware, nuanced, or multi-dimensional.

    2. When can we expect AGI? Great minds do not think alike. Some eminent thinkers (and tech entrepreneurs) see true AGI as only a decade or two away. Others see it as science fiction: AI will more likely serve to amplify human intelligence, just as mechanical machines have amplified physical strength.

    3. AGI means moving from homo sapiens to homo deus. Reaching AGI has been described by the futurist Ray Kurzweil as 'singularity'. At this point, humans should progress to the 'trans-human' stage: cyber-humans (electronically enhanced) or neuro-augmented (bio-genetically enhanced).

    4. The real risk with AGI is not malice, but unguided brilliance. A super-intelligent machine will be fantastically good at meeting its goals.