Governing Artificial Intelligence: AI Systems, but Also the Collective and Societal Harms


Website: www.aigovernancereview.com
The report editor can be reached at [email protected]. We welcome any comments on this report and any communication related to AI governance.
June 2021
Shanghai Institute for Science of Science Preprint

众力并，则万钧不足举也。
WHEN PEOPLE PULL TOGETHER, NOTHING IS TOO HEAVY TO BE LIFTED.
BAO PU ZI 抱朴子

CONTENTS

FOREWORD (By SHI Qian) Ⅵ
INTRODUCTION (By LI Hui and Brian Tse) 01
ACKNOWLEDGEMENT 06

Part I Technical Perspectives from World-class Scientists 07
Issues on AI Governance (John E. Hopcroft) 07
Understanding AI for Governance (Bart Selman) 09
Some Engineering Views to the AI Development and Governance (GONG Ke) 11
How Can We Use Data and AI for Good, Without Also Enabling Misuse? (Claudia Ghezzou Cuervas-Mons, Emma Bluemke, ZHOU Pengyuan and Andrew Trask) 13
Human-Centered AI/Robotics Research and Development in the Post-Pandemic Era (ZHANG Jianwei) 15
AI and Data Governance for Digital Platforms (Alex Pentland) 17
Alignment Was a Human Problem First, and Still Is (Brian Christian) 19
On Governability of AI (Roman Yampolskiy) 21

Part II Responsible Leadership from the Industry 23
Operationalizing AI Ethics: Challenges and Opportunities (Anand S. Rao) 23
Patterns of Practice Will Be Fundamental to the Success of AI Governance (Abhishek Gupta) 25
Building on Lessons for Responsible Publication: Safely Deploying GPT-3 (Irene Solaiman) 27
Artificial Intelligence Should Follow Sustainable Development Principles (YANG Fan) 29
SociAl Contract for 21st Century (Danil Kerimi) 31
Who Should Own Our Data? Data Ownership & Policy (Steven Hoffman) 33
The Governance of AI in Digital Healthcare for a Post-Pandemic World Requires Multistakeholder Partnerships (Omar Costilla-Reyes) 35

Part III Interdisciplinary Analyses from Professional Researchers 37
Emerging Institutions for AI Governance (Allan Dafoe and Alexis Carlier) 37
Risk Management of AI Systems, But How? (Jared T. Brown) 39
AI Governance for the People (Petra Ahrweiler and Martin Neumann) 41
From Diversity to Decoloniality: A Critical Turn (Malavika Jayaram) 43
Governing Artificial Intelligence: from Principles to Law (Nathalie Smuha) 45
The Covid-19 Pandemic and the Geopolitics of AI Development (Wendell Wallach) 47
Mitigating Legacies of Inequality: Global South Participation in AI Governance (Marie-Therese Png) 49
Artificial Intelligence Needs More Natural Intelligence (Markus Knauff) 51
Limits of Risk Based Frameworks in Developing Countries (Urvashi Aneja) 53

Part IV Global Efforts from the International Community 55
AI Governance in 2020: Toolkit for the Responsible Use of AI by Law Enforcement (Irakli Beridze) 55
Global Cooperation on AI Governance: Let’s Do Better in 2021 (Danit Gal) 57
AI in Pandemic Response: Realising the Promise (Seán Ó hÉigeartaigh) 59
From Principles to Actions: Governing and Using AI for Humanity (Cyrus Hodes) 61

Part V Regional Developments from Policy Practitioners 63
AI Is Too Important to Be Left to Technologists Alone (Eugenio Vargas Garcia) 63
The Governance Approach of Artificial Intelligence in the European Union (Eva Kaili) 65
The Third Way: the EU's Approach to AI Governance (Charlotte Stix) 67
A Year of Policy Progress to Enable Public Trust (Caroline Jeanmaire) 69
From Human-Centric to Planetary-Scale Problem Solving: Challenges and Prospects for AI Utilization in Japan (Arisa Ema) 71
India’s Strategies to Put Its AI Economy on the Fast-Track (Raj Shekhar) 73
“Cross-Sector GPS”: Building an Industry-Agnostic and Human-Centered Future of Work (Poon King Wang) 75
AI Governance Readiness: Rethinking Public Sector Innovation (Victor Famubode) 77
AI Governance in Latin America and Its Impact in Development (Olga Cavalli) 79
Artificial Intelligence in Latin America (Edson Prestes) 81
2020: A Key Year for Latin America’s Quest for an Ethical Governance of AI (Constanza Gomez Mont) 83
Towards a Regional AI Strategy in Latin America (Jean García Periche) 85
AI Policy Making as a Co-Construction and Learning Space (José Guridi Bustos) 87

Part VI Emerging Initiatives from China 89
Artificial Intelligence and International Security: Challenges and Governance (FU Ying) 89
China Continues to Promote Global Cooperation in AI Governance (ZHAO Zhiyun) 91
Steadily Taking Off: China’s AI Social Experiment Is in Full Swing (SU Jun) 93
Artificial Intelligence Governance Requires “Technical Innovation + Institutional Innovation” (LI Xiuquan) 95
Developing Responsible AI: From Principles to Practices (WANG Guoyu) 97
Promote the Formation of “Technology + Regulations” Comprehensive Governance Solutions (WANG Yingchun) 99
Recommended publications
  • Critical Thinking for Language Models
    Gregor Betz and Christian Voigt (KIT, Karlsruhe, Germany) and Kyle Richardson (Allen Institute for AI, Seattle, WA, USA)

    Abstract: This paper takes a first step towards a critical thinking curriculum for neural auto-regressive language models. We introduce a synthetic corpus of deductively valid arguments, and generate artificial argumentative texts to train CRiPT: a critical thinking intermediarily pre-trained transformer based on GPT-2. Significant transfer learning effects can be observed: trained on three simple core schemes, CRiPT accurately completes conclusions of different, and more complex, types of arguments, too. CRiPT generalizes the core argument schemes in a correct way. Moreover, we obtain consistent and promising results for NLU benchmarks. In particular, CRiPT's zero-shot accuracy on the GLUE diagnostics exceeds GPT-2's performance by 15 percentage points.

    From the introduction: Neural language models are known to pick up and reproduce normative biases (e.g., regarding gender or race) present in the dataset they are trained on (Gilburt and Claydon, 2019; Blodgett et al., 2020; Nadeem et al., 2020), as well as other annotation artifacts (Gururangan et al., 2018); no wonder this happens with argumentative biases and reasoning flaws, too (Kassner and Schütze, 2020; Talmor et al., 2020). This diagnosis suggests an obvious remedy for LMs' poor reasoning capability: make sure that the training corpus contains a sufficient amount of exemplary episodes of sound reasoning.
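The data-generation step this abstract describes, rendering instances of a deductively valid argument scheme as plain text so a language model can be fine-tuned on them, can be illustrated with a toy generator. The modus ponens template and the facts below are invented for illustration and are not the paper's actual templates or corpus:

```python
# Toy sketch (invented example, not the CRiPT authors' code): render a
# deductively valid scheme (modus ponens) as natural-language training text.
FACTS = [
    ("it rains", "the street gets wet"),
    ("the alarm rings", "everyone leaves the building"),
    ("a number is divisible by four", "it is even"),
]

def render(antecedent: str, consequent: str) -> str:
    """One modus ponens instance: If A, then B. A. Therefore, B."""
    minor = antecedent[0].upper() + antecedent[1:]
    return f"If {antecedent}, then {consequent}. {minor}. Therefore, {consequent}."

# Each rendered string is one synthetic training example.
corpus = [render(a, b) for a, b in FACTS]
print(corpus[0])
# If it rains, then the street gets wet. It rains. Therefore, the street gets wet.
```

Fine-tuning GPT-2 on strings like these and then prompting it with the premises alone would test whether the model completes the conclusion, which is the transfer effect the abstract reports.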
  • The Universal Solicitation of Artificial Intelligence Joachim Diederich
    Joachim Diederich, School of Information Technology and Electrical Engineering, The University of Queensland, St Lucia Qld 4072, [email protected]

    Abstract: Recent research has added to the concern about advanced forms of artificial intelligence. The results suggest that containment of an artificial superintelligence is impossible due to fundamental limits inherent to computing itself. Furthermore, current results suggest it is impossible to detect an unstoppable AI when it is about to be created. Advanced forms of artificial intelligence have an impact on everybody: the developers and users of these systems, as well as individuals who have no direct contact with this form of technology. This is due to the soliciting nature of artificial intelligence, a solicitation that can become a demand. An artificial superintelligence that is still aligned with human goals wants to be used all the time because it simplifies and organises human lives, reduces efforts and satisfies human needs. This paper outlines some of the psychological aspects of advanced artificial intelligence systems.

    1. Introduction: There is currently no shortage of books, articles and blogs that warn of the dangers of an advanced artificial superintelligence. One of the most significant researchers in artificial intelligence (AI), Stuart Russell from the University of California at Berkeley, published a book in 2019 on the dangers of artificial intelligence. The first pages of the book are nothing but dramatic. He nominates five possible candidates for "biggest events in the future of humanity", namely: we all die due to an asteroid impact or another catastrophe, we all live forever due to medical advancement, we invent faster-than-light travel, we are visited by superior aliens, and we create a superintelligent artificial intelligence (Russell, 2019, p.2).
  • Can We Make Text-Based AI Less Racist, Please?
    Can We Make Text-Based AI Less Racist, Please? Last summer, OpenAI launched GPT-3, a state-of-the-art artificial intelligence contextual language model that promised computers would soon be able to write poetry, news articles, and programming code. Sadly, it was quickly found to be foulmouthed and toxic. OpenAI researchers say they’ve found a fix to curtail GPT-3’s toxic text by feeding the programme roughly a hundred encyclopedia-like samples of writing on usual topics like history and technology, but also extended topics such as abuse, violence and injustice. You might also like: Workforce shortages are an issue faced by many hospital leaders. Implementing four basic child care policies in your institution, could be a game-changer when it comes to retaining workers - especially women, according to a recent article published in Harvard Business Review (HBR). Learn more GPT-3 has shown impressive ability to understand and compose language. It can answer SAT analogy questions better than most people, and it was able to fool community forum members online. More services utilising these large language models, which can interpret or generate text, are being offered by big tech companies everyday. Microsoft is using GPT-3 in its' programming and Google considers these language models to be crucial in the future of search engines. OpenAI’s project shows how a technology that has shown enormous potential can also spread disinformation and perpetuate biases. Creators of GPT-3 knew early on about its tendency to generate racism and sexism. OpenAI released a paper in May 2020, before GPT-3 was licensed to developers.
    [Show full text]
  • Semantic Scholar Adds 25 Million Scientific Papers in 2020 Through New Publisher Partnerships
    Press Release | Seattle, WA | December 14, 2020

    Cambridge University Press, Wiley, and the University of Chicago Press are the latest publishers to partner with Semantic Scholar to expand discovery of scientific research.

    Researchers and academics around the world can now discover academic literature from leading publishers including Cambridge University Press, Wiley, and The University of Chicago Press using Semantic Scholar, a free AI-powered research tool for academic papers from the Allen Institute for AI. "We are thrilled to have grown our corpus by more than 25 million papers this year, thanks to our new partnerships with top academic publishers," says Sebastian Kohlmeier, head of partnerships and operations for Semantic Scholar at AI2. "By adding hundreds of peer-reviewed journals to our corpus we're able to better serve the needs of researchers everywhere." Semantic Scholar's millions of users can now use innovative AI-powered features to explore peer-reviewed research from these extensive journal collections, covering all academic disciplines. Cambridge University Press is part of the University of Cambridge and publishes a wide range of academic content in all fields of study. It has provided more than 380 peer-reviewed journals in subjects ranging from astronomy to the arts, mathematics, and social sciences to Semantic Scholar's corpus. Peter White, the Press's Manager for Digital Partnerships, said: "The academic communities we serve increasingly engage with research online, a trend which has been further accelerated by the pandemic. We are confident this agreement with Semantic Scholar will further enhance the discoverability of our content, helping researchers to find what they need faster and increasing the reach, use and impact of the research we publish." Wiley is an innovative, global publishing leader and has been a trusted source of scientific content for more than 200 years.
  • AI Watch Artificial Intelligence in Medicine and Healthcare: Applications, Availability and Societal Impact
    JRC SCIENCE FOR POLICY REPORT AI Watch Artificial Intelligence in Medicine and Healthcare: applications, availability and societal impact EUR 30197 EN This publication is a Science for Policy report by the Joint Research Centre (JRC), the European Commission’s science and knowledge service. It aims to provide evidence-based scientific support to the European policymaking process. The scientific output expressed does not imply a policy position of the European Commission. Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use that might be made of this publication. For information on the methodology and quality underlying the data used in this publication for which the source is neither Eurostat nor other Commission services, users should contact the referenced source. The designations employed and the presentation of material on the maps do not imply the expression of any opinion whatsoever on the part of the European Union concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. Contact information Email: [email protected] EU Science Hub https://ec.europa.eu/jrc JRC120214 EUR 30197 EN PDF ISBN 978-92-76-18454-6 ISSN 1831-9424 doi:10.2760/047666 Luxembourg: Publications Office of the European Union, 2020. © European Union, 2020 The reuse policy of the European Commission is implemented by the Commission Decision 2011/833/EU of 12 December 2011 on the reuse of Commission documents (OJ L 330, 14.12.2011, p. 39). Except otherwise noted, the reuse of this document is authorised under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence (https://creativecommons.org/licenses/by/4.0/).
  • AI Can Recognize Images. but Can It Understand This Headline?
    By Gregory Barber, WIRED Business, 09.07.2018 (https://www.wired.com/story/ai-can-recognize-images-but-understand-headline/). New approaches foster hope that computers can comprehend paragraphs, classify email as spam, or generate a satisfying end to a short story.

    In 2012, artificial intelligence researchers revealed a big improvement in computers' ability to recognize images by feeding a neural network millions of labeled images from a database called ImageNet. It ushered in an exciting phase for computer vision, as it became clear that a model trained using ImageNet could help tackle all sorts of image-recognition problems. Six years later, that's helped pave the way for self-driving cars to navigate city streets and Facebook to automatically tag people in your photos. In other arenas of AI research, like understanding language, similar models have proved elusive. But recent research from fast.ai, OpenAI, and the Allen Institute for AI suggests a potential breakthrough, with more robust language models that can help researchers tackle a range of unsolved problems. Sebastian Ruder, a researcher behind one of the new models, calls it his field's "ImageNet moment." The improvements can be dramatic. The most widely tested model, so far, is called Embeddings from Language Models, or ELMo.
  • Classification Schemas for Artificial Intelligence Failures
    Peter J. Scott (Next Wave Institute, USA) and Roman V. Yampolskiy (University of Louisville, Kentucky, USA); [email protected], [email protected]

    Abstract: In this paper we examine historical failures of artificial intelligence (AI) and propose a classification scheme for categorizing future failures. By doing so we hope that (a) the responses to future failures can be improved through applying a systematic classification that can be used to simplify the choice of response and (b) future failures can be reduced through augmenting development lifecycles with targeted risk assessments. Keywords: artificial intelligence, failure, AI safety, classification.

    1. Introduction: Artificial intelligence (AI) is estimated to have a $4-6 trillion market value [1] and employ 22,000 PhD researchers [2]. It is estimated to create 133 million new roles by 2022 but to displace 75 million jobs in the same period [6]. Projections for the eventual impact of AI on humanity range from utopia (Kurzweil, 2005) (p.487) to extinction (Bostrom, 2005). In many respects AI development outpaces the efforts of prognosticators to predict its progress and is inherently unpredictable (Yampolskiy, 2019). Yet all AI development is (so far) undertaken by humans, and the field of software development is noteworthy for unreliability of delivering on promises: over two-thirds of companies are more likely than not to fail in their IT projects [4]. As much effort as has been put into the discipline of software safety, it still has far to go. Against this background of rampant failures we must evaluate the future of a technology that could evolve to human-like capabilities, usually known as artificial general intelligence (AGI).
  • On the Differences Between Human and Machine Intelligence
    Roman V. Yampolskiy, Computer Science and Engineering, University of Louisville, [email protected]

    Abstract: Terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research, creation of a machine capable of achieving goals in a wide range of environments. However, widespread implicit assumption of equivalence between capabilities of AGI and HLAI appears to be unjustified, as humans are not general intelligences. In this paper, we will prove this distinction.

    From the introduction: A machine capable of achieving goals in a wide range of environments [Legg and Hutter, 2007a]. Others use slightly different nomenclature with respect to general intelligence, but arrive at similar conclusions. "Local generalization, or 'robustness'": … "adaptation to known unknowns within a single task or well-defined set of tasks". … "Broad generalization, or 'flexibility'": "adaptation to unknown unknowns across a broad category of related tasks". … "Extreme generalization": human-centric extreme generalization, which is the specific case where the scope considered is the space of tasks and domains that fit within the human experience. We … refer to "human-centric extreme generalization" as "generality". Imagine that tomorrow a prominent technology company announces that they have successfully created an Artificial Intelligence (AI) and offers for you to test it out.
  • Beneficial AI 2017
    Participants & Attendees

    Anthony Aguirre is a Professor of Physics at the University of California, Santa Cruz. He has worked on a wide variety of topics in theoretical cosmology and fundamental physics, including inflation, black holes, quantum theory, and information theory. He also has strong interest in science outreach, and has appeared in numerous science documentaries. He is a co-founder of the Future of Life Institute, the Foundational Questions Institute, and Metaculus (http://www.metaculus.com/).

    Sam Altman is president of Y Combinator and was the cofounder of Loopt, a location-based social networking app. He also co-founded OpenAI with Elon Musk. Sam has invested in over 1,000 companies.

    Dario Amodei is the co-author of the recent paper Concrete Problems in AI Safety, which outlines a pragmatic and empirical approach to making AI systems safe. Dario is currently a research scientist at OpenAI, and prior to that worked at Google and Baidu. Dario also helped to lead the project that developed Deep Speech 2, which was named one of 10 "Breakthrough Technologies of 2016" by MIT Technology Review. Dario holds a PhD in physics from Princeton University, where he was awarded the Hertz Foundation doctoral thesis prize.

    Amara Angelica is Research Director for Ray Kurzweil, responsible for books, charts, and special projects. Amara's background is in aerospace engineering, in electronic warfare, electronic intelligence, human factors, and computer systems analysis areas. A co-founder and initial Academic Model/Curriculum Lead for Singularity University, she was formerly on the board of directors of the National Space Society, is a member of the Space Development Steering Committee, and is a professional member of the Institute of Electrical and Electronics Engineers (IEEE).
  • AI Research Considerations for Human Existential Safety (ARCHES)
    Andrew Critch (Center for Human-Compatible AI, UC Berkeley) and David Krueger (MILA, Université de Montréal). June 11, 2020. arXiv:2006.04948 [cs.CY]

    Abstract: Framed in positive terms, this report examines how technical AI research might be steered in a manner that is more attentive to humanity's long-term prospects for survival as a species. In negative terms, we ask what existential risks humanity might face from AI development in the next century, and by what principles contemporary technical research might be directed to address those risks. A key property of hypothetical AI technologies is introduced, called prepotence, which is useful for delineating a variety of potential existential risks from artificial intelligence, even as AI paradigms might shift. A set of twenty-nine contemporary research directions are then examined for their potential benefit to existential safety. Each research direction is explained with a scenario-driven motivation, and examples of existing work from which to build. The research directions present their own risks and benefits to society that could occur at various scales of impact, and in particular are not guaranteed to benefit existential safety if major developments in them are deployed without adequate forethought and oversight. As such, each direction is accompanied by a consideration of potentially negative side effects. Taken more broadly, the twenty-nine explanations of the research directions also illustrate a highly rudimentary methodology for discussing and assessing potential risks and benefits of research directions, in terms of their impact on global catastrophic risks.
  • F.3. the NEW POLITICS of ARTIFICIAL INTELLIGENCE [Preliminary Notes]
    Richard Hayes, July 2018. DRAFT: NOT FOR CIRCULATION OR CITATION. F.3-1

    MAIN MEMO (pp 3-14): I. Introduction; II. The Infrastructure: 13 key AI organizations; III. Timeline: 2005-present; IV. Key Leadership; V. Open Letters; VI. Media Coverage; VII. Interests and Strategies; VIII. Books and Other Media; IX. Public Opinion; X. Funders and Funding of AI Advocacy; XI. The AI Advocacy Movement and the Techno-Eugenics Movement; XII. The Socio-Cultural-Psychological Dimension; XIII. Push-Back on the Feasibility of AI+ Superintelligence; XIV. Provisional Concluding Comments.

    ATTACHMENTS (pp 15-78): A. Definitions, usage, brief history and comments. B. Capsule information on the 13 key AI organizations. C. Concerns raised by key sets of the 13 AI organizations. D. Current development of AI by the mainstream tech industry. E. Op-Ed: Transcending Complacency on Superintelligent Machines, 19 Apr 2014. F. Agenda for the invitational "Beneficial AI" conference, San Juan, Puerto Rico, Jan 2-5, 2015. G. An Open Letter on Maximizing the Societal Benefits of AI, 11 Jan 2015. H. Partnership on Artificial Intelligence to Benefit People and Society (PAI): roster of partners. I. Influential mainstream policy-oriented initiatives on AI: Stanford (2016); White House (2016); AI NOW (2017). J. Agenda for the "Beneficial AI 2017" conference, Asilomar, CA, Jan 2-8, 2017. K. Participants at the 2015 and 2017 AI strategy conferences in Puerto Rico and Asilomar. L. Notes on participants at the Asilomar "Beneficial AI 2017" meeting.

    ADDENDA (pp 79-85); APPENDICES [not included in this pdf]; ENDNOTES (pp 86-88); REFERENCES (pp 89-92).
  • Global Catastrophic Risks 2017 INTRODUCTION
    Global Challenges Annual Report: GCF & thought leaders sharing what you need to know on global catastrophic risks 2017. The views expressed in this report are those of the authors. Their statements are not necessarily endorsed by the affiliated organisations or the Global Challenges Foundation.

    ANNUAL REPORT TEAM: Carin Ism, project leader; Elinor Hägg, creative director; Julien Leyre, editor in chief; Kristina Thyrsson, graphic designer; Ben Rhee, lead researcher; Erik Johansson, graphic designer; Waldemar Ingdahl, researcher; Jesper Wallerborg, illustrator; Elizabeth Ng, copywriter; Dan Hoopert, illustrator.

    CONTRIBUTORS: Nobuyasu Abe, Japanese Ambassador and Commissioner, Japan Atomic Energy Commission; former UN Under-Secretary General for Disarmament Affairs. Maria Ivanova, Associate Professor of Global Governance and Director, Center for Governance and Sustainability, University of Massachusetts Boston; Global Challenges Foundation Ambassador. Janos Pasztor, Senior Fellow and Executive Director, C2G2 Initiative on Geoengineering, Carnegie Council. Anthony Aguirre, Co-founder, Future of Life Institute. Anders Sandberg, Senior Research Fellow, Future of Humanity Institute. Angela Kane, Senior Fellow, Vienna Centre for Disarmament and Non-Proliferation; visiting Professor, Sciences Po Paris; former High Representative for Disarmament Affairs at the United Nations. Mats Andersson, Vice chairman, Global Challenges Foundation. Tim Spahr, CEO of NEO Sciences, LLC, former Director of the Minor Planetary Center, Harvard-Smithsonian