Autonomous Vehicles Ethics & Law: Toward an Overlapping Consensus


RYAN JENKINS
SEPTEMBER 2016

About the Authors

Ryan Jenkins studies the moral dimensions of technologies with the potential to profoundly impact human life. He is an assistant professor of philosophy and a senior fellow at the Ethics & Emerging Sciences Group at California Polytechnic State University, San Luis Obispo. His interests include driverless cars, algorithms, autonomous weapons, and military ethics more broadly. His work has appeared publicly in Forbes, Slate and elsewhere, and he is currently co-editing two books on military ethics and robot ethics, both for Oxford University Press. At Cal Poly, Jenkins teaches courses in ethics, political philosophy, and the philosophy of technology, among others. He earned his B.A. in philosophy from Florida State University, Phi Beta Kappa, and his Ph.D. in philosophy from the University of Colorado Boulder.

About New America

New America is committed to renewing American politics, prosperity, and purpose in the Digital Age. We generate big ideas, bridge the gap between technology and policy, and curate broad public conversation. We combine the best of a policy research institute, technology laboratory, public forum, media platform, and a venture capital fund for ideas. We are a distinctive community of thinkers, writers, researchers, technologists, and community activists who believe deeply in the possibility of American renewal. Find out more at newamerica.org/our-story.

About the Digital Industries Initiative

The Digital Industries Initiative of New America brings together leading experts and policymakers from the private sector, government, universities, and other nonprofit institutions and the media, to analyze and debate the future of America's major economic sectors in the Digital Age. Each month the invitation-only Digital Industries Roundtable, hosted at New America's headquarters in Washington, D.C., features a discussion of the challenges of innovation in a different American industry. In addition, the Digital Industries Initiative publishes groundbreaking reports, hosts public events, and undertakes other activities at the dynamic intersection of technology, economics, and public policy.

Acknowledgments

A special thanks is due to Colin McCormick and New America fellow Levi Tillemann for comments on an earlier draft.

Cover image: DimiTVP/Wikimedia.

Contents

Abstract
Introduction
Crash Optimization
Overlapping Consensus: Ethics as an Engineering Problem
Proposals for Crash Optimization
Adjustable Ethics Settings
Insuring Autonomous Vehicles
Handing off to the Driver
Abuse
Far-term Issues
Next Steps
Works Cited
Notes

ABSTRACT

There is a clear presumptive case for the adoption of autonomous vehicles (AV). It is widely believed they will be safer than human-driven vehicles, better able to detect and avoid hazards and collisions with other drivers and pedestrians. However, it would be unreasonable to expect AV to be perfect. And unlike programming for much software and hardware, the conditions AV can be expected to face on the road are an "open set": we cannot exhaustively test every scenario, since we cannot predict every possible scenario. In light of this, we must think carefully about what requirements manufacturers should have to demonstrate before AV are allowed on the roads. This paper surveys the practical state of the art, technical limitations of AV, the problem of driver handoff, and the possibility of abuse with AV, such as other drivers playing "chicken" with AV. It considers AV from the legal, ethical, and manufacturing perspectives before arguing for an "overlapping consensus": AV that behave in ways that are morally justified, legally defensible, and technically possible. The paper closes by applying this lens to some possible ways that AV could behave in the event of a crash, offering tentative endorsements of some of these, and recommending a closer collaboration between industry and the academy.

This report was inspired by the Autonomous Vehicles & Ethics Workshop held at Stanford University in Palo Alto, California in September of 2015. The workshop was a closed, invitation-only meeting of about 30 participants. Participants included academics, including ethicists, psychologists, roboticists and mechanical engineers; insurance lawyers and legal experts; and representatives from the automotive industry and Silicon Valley. The conference was organized by Patrick Lin (California Polytechnic State University), Selina Pan (Stanford), and Chris Gerdes (Stanford), and was supported by funding from the US National Science Foundation, under award no. 1522240. The meeting was conducted under The Chatham House Rule, whereby participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s) may be revealed without their expressed consent. This report includes input and observations from the workshop's participants. Its interpretations of those remarks, substantive claims and recommendations, however, solely reflect the syntheses of its author, and do not necessarily reflect the views of the workshop's participants, organizers, or supporting organizations. A special thanks is due to Colin McCormick and New America fellow Levi Tillemann for comments on an earlier draft.

INTRODUCTION

There is a clear presumptive case for the adoption of autonomous vehicles (AV). It is widely believed they will be safer than vehicles driven by humans. For example, AV will not become sleepy, distracted, or angry behind the wheel, and will be better able to detect and avoid hazards and collisions with other drivers and pedestrians. Because car accidents kill around 30,000–35,000 people per year in the United States alone[1], and because around 94% of crashes are due to driver error[2], the case for AV from increased safety and lives saved is extremely compelling.
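To give a rough sense of the scale these figures imply, the short calculation below simply multiplies the cited fatality range by the cited driver-error share. It is a back-of-envelope illustration only: it assumes fatal crashes follow the same 94% driver-error split as crashes overall, and that AV would eliminate all such crashes while introducing no new fatal failure modes.

    # Illustrative upper bound on annual U.S. lives saved, using only the
    # figures cited in the text. Assumes fatal crashes mirror the 94%
    # driver-error share and that AV eliminate all of those crashes --
    # an optimistic ceiling, not a prediction.
    annual_road_deaths = (30_000, 35_000)   # approximate U.S. deaths per year
    driver_error_share = 0.94               # share of crashes due to driver error

    for deaths in annual_road_deaths:
        print(f"up to ~{deaths * driver_error_share:,.0f} of {deaths:,} deaths per year")
    # Prints roughly 28,200 and 32,900 -- i.e., "tens of thousands" of deaths.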
Even mildly optimistic predictions concerning AV show that they could provide significant benefits in terms of social costs of death or injury, as well as increased convenience and productivity time for the individual consumer.

However, it would be unreasonable to expect AV to be perfect. Software and hardware undergo continuous development, and their failures are sometimes catastrophic. Since there is nothing intrinsically different about the software and hardware to be used in AV, the same possibility for catastrophic failure exists—witness the failure of Tesla's autopilot system in May, 2016 (Tesla Motors, 2016). And unlike programming for much software and hardware, the set of conditions AV can be expected to face on the road is an "open set": manufacturers cannot exhaustively test every scenario, since they cannot predict every possible scenario. Manufacturers will be unable to ensure that AV are totally prepared to drive on their own, in all conditions and situations.

In light of this, stakeholders must think carefully about what requirements should be met before AV are allowed on the roads. What kind of discrimination capabilities should AV have before it's permissible to deploy them? Is it merely enough that AV be superior to human drivers? How should AV be programmed to behave in the event of a crash, and is it permissible for them to change the outcome of a crash by redirecting harm? Or should we be worried about people who are killed or injured by AV when they would not have been otherwise? These and other issues are explored below, synthesizing the perspectives of philosophers, lawyers, and manufacturers, in search of an overlapping consensus on the development and deployment of autonomous vehicles.

CRASH OPTIMIZATION

Sensing the World

Autonomous vehicles use a variety of technologies to detect the outside world, make sense of it, and make decisions based on that data. Manufacturers like Google and Tesla take different approaches to which sensors they use, how data from those sensors is synthesized, and how AV make decisions based on the data. A typical suite of technologies for sensing the world includes cameras, ultrasonic sensors (SONAR), RADAR, or LIDAR (light detection and ranging). For example, Tesla prefers to use a combination of cameras and RADAR over Google's LIDAR because LIDAR instruments are orders of magnitude more expensive, even though they offer a higher resolution representation of an AV's surroundings.[3] Moreover, cameras are vulnerable to some of the same failings as the human eye: they have difficulty seeing in direct sunlight, low-light conditions, or inclement weather.

AV take in this information and [...] implemented, behavior is continually reinforced or modified by subtle human nudges (Fehrenbacher, 2015), and can be fine-tuned with hard-coded inputs, for example, about the distance to keep between cars, the position to maintain in a lane, how long to wait after a stoplight turns green, and so on. With this suite of technologies, AV have already been shown to be able to reliably navigate roads in some situations.
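The "hard-coded inputs" mentioned above are easiest to picture as a small set of tunable parameters layered on top of whatever driving behavior has been learned. The sketch below illustrates that idea in Python; the parameter names, default values, and the check performed are hypothetical and do not correspond to any manufacturer's actual software.

    from dataclasses import dataclass

    @dataclass
    class DrivingParameters:
        """Hypothetical hand-tuned limits layered on top of learned behavior."""
        min_following_gap_s: float = 2.0   # time gap to keep behind the car ahead
        lane_center_offset_m: float = 0.0  # preferred lateral position in the lane
        green_light_delay_s: float = 1.0   # pause after a stoplight turns green

    def following_gap_ok(distance_to_lead_m: float, speed_m_s: float,
                         params: DrivingParameters) -> bool:
        """Return True if the current time gap meets the hard-coded minimum."""
        if speed_m_s <= 0:
            return True  # stationary: no following-gap constraint to enforce
        return (distance_to_lead_m / speed_m_s) >= params.min_following_gap_s

    # Example: 30 m behind the lead car at 20 m/s is a 1.5 s gap, closer than
    # the default 2.0 s minimum, so a planner would be nudged to slow down.
    print(following_gap_ok(30.0, 20.0, DrivingParameters()))  # False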
"No-win" Scenarios

Autonomous vehicles presumptively bring significant benefits: their increased computational power and reaction time allow them to avoid some accidents that a human driver could not. A clear primary goal for AV should be to avoid all collisions. If AV could successfully avoid most or all accidents that are caused by driver error, this could be expected to eliminate tens of thousands of deaths [...]
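Crash optimization, the title of this section, is often understood as choosing the least-bad maneuver once some collision has become unavoidable. One minimal way to make that idea concrete is as expected-harm minimization over a handful of candidate maneuvers, sketched below. The maneuvers, probabilities, and harm scores are invented for illustration, and how harm ought to be scored is precisely the ethical question at issue; nothing here reflects a proposal from the report itself.

    # Minimal sketch of crash optimization as expected-harm minimization.
    # All numbers are invented; the choice of harm metric is the ethical issue.
    candidate_maneuvers = {
        "brake_straight": [(0.7, 3.0), (0.3, 8.0)],   # (probability, harm score)
        "swerve_left":    [(0.9, 1.0), (0.1, 20.0)],
        "swerve_right":   [(0.5, 0.0), (0.5, 15.0)],
    }

    def expected_harm(outcomes):
        """Probability-weighted harm of one maneuver's possible outcomes."""
        return sum(p * harm for p, harm in outcomes)

    best = min(candidate_maneuvers, key=lambda m: expected_harm(candidate_maneuvers[m]))
    print(best, round(expected_harm(candidate_maneuvers[best]), 2))  # swerve_left 2.9
    # A different harm metric could easily reverse this ranking, which is why
    # crash optimization is an ethical question and not a purely technical one.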