Our Final Invention: Artificial Intelligence and the End of the Human Era Free
Total Page:16
File Type:pdf, Size:1020Kb
Load more
Recommended publications
-
Artificial Intelligence: What Is It, Can It Be Leveraged, and What Are the Ramifications?
ARTIFICIAL INTELLIGENCE: WHAT IS IT, CAN IT BE LEVERAGED, AND WHAT ARE THE RAMIFICATIONS? Major James R. Barker JCSP 45 PCEMI 45 Service Paper Étude militaire Disclaimer Avertissement Opinions expressed remain those of the author and do Les opinons exprimées n’engagent que leurs auteurs et not represent Department of National Defence or ne reflètent aucunement des politiques du Ministère de Canadian Forces policy. This paper may not be used la Défense nationale ou des Forces canadiennes. Ce without written permission. papier ne peut être reproduit sans autorisation écrite © Her Majesty the Queen in Right of Canada, as represented by the © Sa Majesté la Reine du Chef du Canada, représentée par le Minister of National Defence, 2019. ministre de la Défense nationale, 2019. CANADIAN FORCES COLLEGE/COLLÈGE DES FORCES CANADIENNES JCSP 45 OCTOBER 15 2018 DS545 COMPONENT CAPABILITIES ARTIFICIAL INTELLIGENCE: WHAT IS IT, CAN IT BE LEVERAGED, AND WHAT ARE THE RAMIFICATIONS? By Major James R. Barker “This paper was written by a candidate « La présente étude a été rédigée par un attending the Canadian Force College in stagiaire du Collège des Forces fulfillment of one of the requirements of the canadiennes pour satisfaire à l’une des exigences du cours. L’étude est un Course of Studies. The paper is a scholastic document qui se rapporte au cours et document, and thus contains facts and Contient donc des faits et des opinions que opinions which the author alone considered seul l’auteur considère appropriés et appropriate and correct for the subject. It convenables au sujet. Elle ne reflète pas does not necessarily reflect the policy or the nécessairement la politique ou l’opinion opinion of any agency, including Government d’un organisme quelconque, y compris le of Canada and the Canadian Department of gouvernement du Canada et le ministère de la Défense nationale du Canada. -
Safety of Artificial Superintelligence
Environment. Technology. Resources. Rezekne, Latvia Proceedings of the 12th International Scientific and Practical Conference. Volume II, 180-183 Safety of Artificial Superintelligence Aleksejs Zorins Peter Grabusts Faculty of Engineering Faculty of Engineering Rezekne Academy of Technologies Rezekne Academy of Technologies Rezekne, Latvia Rezekne, Latvia [email protected] [email protected] Abstract—The paper analyses an important problem of to make a man happy it will do it with the fastest and cyber security from human safety perspective which is usu- cheapest (in terms of computational resources) without ally described as data and/or computer safety itself with- using a common sense (for example killing of all people out mentioning the human. There are numerous scientific will lead to the situation that no one is unhappy or the predictions of creation of artificial superintelligence, which decision to treat human with drugs will also make him could arise in the near future. That is why the strong neces- sity for protection of such a system from causing any farm happy etc.). Another issue is that we want computer to arises. This paper reviews approaches and methods already that we want, but due to bugs in the code computer will presented for solving this problem in a single article, analy- do what the code says, and in the case of superintelligent ses its results and provides future research directions. system, this may be a disaster. Keywords—Artificial Superintelligence, Cyber Security, The next sections of the paper will show the possible Singularity, Safety of Artificial Intelligence. solutions of making superintelligence safe to humanity. -
Copyrighted Material
Index affordances 241–2 apps 11, 27–33, 244–5 afterlife 194, 227, 261 Ariely, Dan 28 Agar, Nicholas 15, 162, 313 Aristotle 154, 255 Age of Spiritual Machines, The 47, 54 Arkin, Ronald 64 AGI (Artificial General Intelligence) Armstrong, Stuart 12, 311 61–86, 98–9, 245, 266–7, Arp, Hans/Jean 240–2, 246 298, 311 artificial realities, see virtual reality AGI Nanny 84, 311 Asimov, Isaac 45, 268, 274 aging 61, 207, 222, 225, 259, 315 avatars 73, 76, 201, 244–5, 248–9 AI (artificial intelligence) 3, 9, 12, 22, 30, 35–6, 38–44, 46–58, 72, 91, Banks, Iain M. 235 95, 99, 102, 114–15, 119, 146–7, Barabási, Albert-László 311 150, 153, 157, 198, 284, 305, Barrat, James 22 311, 315 Berger, T.W. 92 non-determinism problem 95–7 Bernal,J.D.264–5 predictions of 46–58, 311 biological naturalism 124–5, 128, 312 altruism 8 biological theories of consciousness 14, Amazon Elastic Compute Cloud 40–1 104–5, 121–6, 194, 312 Andreadis, Athena 223 Blackford, Russell 16–21, 318 androids 44, 131, 194, 197, 235, 244, Blade Runner 21 265, 268–70, 272–5, 315 Block, Ned 117, 265 animal experimentationCOPYRIGHTED 24, 279–81, Blue Brain MATERIAL Project 2, 180, 194, 196 286–8, 292 Bodington, James 16, 316 animal rights 281–2, 285–8, 316 body, attitudes to 4–5, 222–9, 314–15 Anissimov, Michael 12, 311 “Body by Design” 244, 246 Intelligence Unbound: The Future of Uploaded and Machine Minds, First Edition. Edited by Russell Blackford and Damien Broderick. -
Liability Law for Present and Future Robotics Technology Trevor N
Liability Law for Present and Future Robotics Technology Trevor N. White and Seth D. Baum Global Catastrophic Risk Institute http://gcrinstitute.org * http://sethbaum.com Published in Robot Ethics 2.0, edited by Patrick Lin, George Bekey, Keith Abney, and Ryan Jenkins, Oxford University Press, 2017, pages 66-79. This version 10 October 2017 Abstract Advances in robotics technology are causing major changes in manufacturing, transportation, medicine, and a number of other sectors. While many of these changes are beneficial, there will inevitably be some harms. Who or what is liable when a robot causes harm? This paper addresses how liability law can and should account for robots, including robots that exist today and robots that potentially could be built at some point in the near or distant future. Already, robots have been implicated in a variety of harms. However, current and near-future robots pose no significant challenge for liability law: they can be readily handled with existing liability law or minor variations thereof. We show this through examples from medical technology, drones, and consumer robotics. A greater challenge will arise if it becomes possible to build robots that merit legal personhood and thus can be held liable. Liability law for robot persons could draw on certain precedents, such as animal liability. However, legal innovations will be needed, in particular for determining which robots merit legal personhood. Finally, a major challenge comes from the possibility of future robots that could cause major global catastrophe. As with other global catastrophic risks, liability law could not apply, because there would be no post- catastrophe legal system to impose liability. -
Classification Schemas for Artificial Intelligence Failures
Journal XX (XXXX) XXXXXX https://doi.org/XXXX/XXXX Classification Schemas for Artificial Intelligence Failures Peter J. Scott1 and Roman V. Yampolskiy2 1 Next Wave Institute, USA 2 University of Louisville, Kentucky, USA [email protected], [email protected] Abstract In this paper we examine historical failures of artificial intelligence (AI) and propose a classification scheme for categorizing future failures. By doing so we hope that (a) the responses to future failures can be improved through applying a systematic classification that can be used to simplify the choice of response and (b) future failures can be reduced through augmenting development lifecycles with targeted risk assessments. Keywords: artificial intelligence, failure, AI safety, classification 1. Introduction Artificial intelligence (AI) is estimated to have a $4-6 trillion market value [1] and employ 22,000 PhD researchers [2]. It is estimated to create 133 million new roles by 2022 but to displace 75 million jobs in the same period [6]. Projections for the eventual impact of AI on humanity range from utopia (Kurzweil, 2005) (p.487) to extinction (Bostrom, 2005). In many respects AI development outpaces the efforts of prognosticators to predict its progress and is inherently unpredictable (Yampolskiy, 2019). Yet all AI development is (so far) undertaken by humans, and the field of software development is noteworthy for unreliability of delivering on promises: over two-thirds of companies are more likely than not to fail in their IT projects [4]. As much effort as has been put into the discipline of software safety, it still has far to go. Against this background of rampant failures we must evaluate the future of a technology that could evolve to human-like capabilities, usually known as artificial general intelligence (AGI). -
The Forthcoming Artificial Intelligence (AI) Revolution: Its Impact on Society
Futures 90 (2017) 46–60 Contents lists available at ScienceDirect Futures journal homepage: www.elsevier.com/locate/futures Review The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms Professor Spyros Makridakis Director Institute For the Future (IFF), University of Nicosia, Cyprus A R T I C L E I N F O A B S T R A C T Article history: Received 10 January 2017 The impact of the industrial and digital (information) revolutions has, undoubtedly, been Received in revised form 17 March 2017 substantial on practically all aspects of our society, life, firms and employment. Will the Accepted 28 March 2017 forthcoming AI revolution produce similar, far-reaching effects? By examining analogous Available online 3 April 2017 inventions of the industrial, digital and AI revolutions, this article claims that the latter is on target and that it would bring extensive changes that will also affect all aspects of our Keywords: society and life. In addition, its impact on firms and employment will be considerable, Artificial Intelligence (AI) resulting in richly interconnected organizations with decision making based on the Industrial revolution analysis and exploitation of “big” data and intensified, global competition among firms. Digital revolution People will be capable of buying goods and obtaining services from anywhere in the world AI revolution using the Internet, and exploiting the unlimited, additional benefits that will open through Impact of AI revolution the widespread usage of AI inventions. The paper concludes that significant competitive Benefits and dangers of AI technologies advantages will continue to accrue to those utilizing the Internet widely and willing to take entrepreneurial risks in order to turn innovative products/services into worldwide commercial success stories. -
Artificial Intelligence and Its Implications for Income Distribution
Artificial Intelligence and Its Implications ... The Economics of Artifi cial Intelligence National Bureau of Economic Research Conference Report The Economics of Artifi cial Intelligence: An Agenda Edited by Ajay Agrawal, Joshua Gans, and Avi Goldfarb The University of Chicago Press Chicago and London The University of Chicago Press, Chicago 60637 The University of Chicago Press, Ltd., London © 2019 by the National Bureau of Economic Research, Inc. All rights reserved. No part of this book may be used or reproduced in any manner whatsoever without written permission, except in the case of brief quotations in critical articles and reviews. For more information, contact the University of Chicago Press, 1427 E. 60th St., Chicago, IL 60637. Published 2019 Printed in the United States of America 28 27 26 25 24 23 22 21 20 19 1 2 3 4 5 ISBN-13: 978-0-226-61333-8 (cloth) ISBN-13: 978-0-226-61347-5 (e-book) DOI: https:// doi .org / 10 .7208 / chicago / 9780226613475 .001 .0001 Library of Congress Cataloging-in-Publication Data Names: Agrawal, Ajay, editor. | Gans, Joshua, 1968– editor. | Goldfarb, Avi, editor. Title: The economics of artifi cial intelligence : an agenda / Ajay Agrawal, Joshua Gans, and Avi Goldfarb, editors. Other titles: National Bureau of Economic Research conference report. Description: Chicago ; London : The University of Chicago Press, 2019. | Series: National Bureau of Economic Research conference report | Includes bibliographical references and index. Identifi ers: LCCN 2018037552 | ISBN 9780226613338 (cloth : alk. paper) | ISBN 9780226613475 (ebook) Subjects: LCSH: Artifi cial intelligence—Economic aspects. Classifi cation: LCC TA347.A78 E365 2019 | DDC 338.4/ 70063—dc23 LC record available at https:// lccn .loc .gov / 2018037552 ♾ This paper meets the requirements of ANSI/ NISO Z39.48-1992 (Permanence of Paper). -
Leakproofing the Singularity Artificial Intelligence Confinement Problem
Roman V.Yampolskiy Leakproofing the Singularity Artificial Intelligence Confinement Problem Abstract: This paper attempts to formalize and to address the ‘leakproofing’ of the Singularity problem presented by David Chalmers. The paper begins with the definition of the Artificial Intelli- gence Confinement Problem. After analysis of existing solutions and their shortcomings, a protocol is proposed aimed at making a more secure confinement environment which might delay potential negative effect from the technological singularity while allowing humanity to benefit from the superintelligence. Keywords: AI-Box, AI Confinement Problem, Hazardous Intelligent Software, Leakproof Singularity, Oracle AI. ‘I am the slave of the lamp’ Genie from Aladdin 1. Introduction With the likely development of superintelligent programs in the near future, many scientists have raised the issue of safety as it relates to such technology (Yudkowsky, 2008; Bostrom, 2006; Hibbard, 2005; Chalmers, 2010; Hall, 2000). A common theme in Artificial Intelli- gence (AI)1 safety research is the possibility of keeping a super- intelligent agent in a sealed hardware so as to prevent it from doing any harm to humankind. Such ideas originate with scientific Correspondence: Roman V. Yampolskiy, Department of Computer Engineering and Computer Sci- ence University of Louisville. Email: [email protected] [1] In this paper the term AI is used to represent superintelligence. Journal of Consciousness Studies, 19, No. 1–2, 2012, pp. 194–214 Copyright (c) Imprint Academic 2011 For personal use only -- not for reproduction LEAKPROOFING THE SINGULARITY 195 visionaries such as Eric Drexler, who has suggested confining transhuman machines so that their outputs could be studied and used safely (Drexler, 1986). -
Intelligence Explosion FAQ
MIRI MACHINE INTELLIGENCE RESEARCH INSTITUTE Intelligence Explosion FAQ Luke Muehlhauser Machine Intelligence Research Institute Abstract The Machine Intelligence Research Institute is one of the leading research institutes on intelligence explosion. Below are short answers to common questions we receive. Muehlhauser, Luke. 2013. “Intelligence Explosion FAQ.” First published 2011 as “Singularity FAQ.” Machine Intelligence Research Institute, Berkeley, CA Contents 1 Basics 1 1.1 What is an intelligence explosion? . 1 2 How Likely Is an Intelligence Explosion? 2 2.1 How is “intelligence” defined? . 2 2.2 What is greater-than-human intelligence? . 2 2.3 What is whole-brain emulation? . 3 2.4 What is biological cognitive enhancement? . 3 2.5 What are brain-computer interfaces? . 4 2.6 How could general intelligence be programmed into a machine? . 4 2.7 What is superintelligence? . 4 2.8 When will the intelligence explosion happen? . 5 2.9 Might an intelligence explosion never occur? . 6 3 Consequences of an Intelligence Explosion 7 3.1 Why would great intelligence produce great power? . 7 3.2 How could an intelligence explosion be useful? . 7 3.3 How might an intelligence explosion be dangerous? . 8 4 Friendly AI 9 4.1 What is Friendly AI? . 9 4.2 What can we expect the motivations of a superintelligent machine to be? 10 4.3 Can’t we just keep the superintelligence in a box, with no access to the Internet? . 11 4.4 Can’t we just program the superintelligence not to harm us? . 11 4.5 Can we program the superintelligence to maximize human pleasure or desire satisfaction? . -
Hello, World: Artificial Intelligence and Its Use in the Public Sector
Hello, World: Artificial intelligence and its use in the public sector Jamie Berryhill Kévin Kok Heang Rob Clogher Keegan McBride November 2019 | http://oe.cd/helloworld OECD Working Papers on Public Governance No. 36 Cover images are the European Parliament and the dome of Germany’s Reichstag building processed through Deep Learning algorithms to match the style of Van Gogh paintings. tw Hello, World: Artificial Intelligence and its Use in the Public Sector Authors: Jamie Berryhill, Kévin Kok Heang, Rob Clogher, Keegan McBride PUBE 2 This document and any map included herein are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. 1. Note by Tukey: The information in this document with reference to ‘Cyprus’ relates to the southern part of the island. There is no single authority representing both Turkish and Greek Cypriot people on the island. Turkey recognises the Turkish Republic of Northern Cyprus (TRNC). Until a lasting and equitable solution is found within the context of the United Nations, Turkey shall preserve its position concerning the ‘Cyprus issue’. 2. Note by all the European Union Member States of the OECD and the European Commission: The Republic of Cyprus is recognised by all members of the United Nations with the exception of Turkey. The information in this document relates to the area under the effective control of the Government of the Republic of Cyprus. HELLO, WORLD: ARTIFICIAL INTELLIGENCE AND ITS USE IN THE PUBLIC SECTOR © OECD 2019 3 Foreword Artificial Intelligence (AI) is an area of research and technology application that can have a significant impact on public policies and services in many ways. -
Our Final Invention: Artificial Intelligence and the End of the Human Era
Derrek Hopper Our Final Invention: Artificial Intelligence and the End of the Human Era Book Review Introduction In our attempt to create the ultimate intelligence, human beings risk catalyzing their own annihilation. At least according to documentary film maker James Barrat’s 2013 analysis of the dangers of Artificial Intelligence (AI). In his book, “Our Final Invention: Artificial Intelligence and the End of the Human Era”, Barrat explores the possible futures associated with the development of artificial superintelligence (ASI); some optimistic and other pessimistic. Barrat falls within the “pessimist” category of AI thinkers but his research leads him to interview and ponder the rosier outlook of some of AI’s most notable researchers. Throughout the book, Barrat finds ample opportunity to debate with optimists such as famed AI researcher Ray Kurzweil and challenges many of their assumptions about humanity’s ability to control an intelligence greater than our own. Our Final Invention covers three basic concepts regarding the hazards of AI. First, the inevitability of artificial general intelligence (AGI) and ASI. Second, the rate of change with self-improving intelligence. Third, our inability to understand an intelligence smarter than us and to “hard code” benevolence into it. Barrat fails to address major challenges underlying the assumptions of most AI researchers. Particularly, those regarding the nature of intelligence and consciousness, AI’s capacity for self-improvement, and the continued exponential growth of computer hardware. These concepts lay at the center of the predictions about our AI future and warrant further investigation. Derrek Hopper Busy Child Our Final Invention opens with a future scenario where an AGI becomes self-aware and is subsequently unplugged from the internet by its developers as a precaution. -
A Survey of Research Questions for Robust and Beneficial AI
A survey of research questions for robust and beneficial AI 1 Introduction Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents|systems that perceive and act in some environment. In this context, the criterion for intelligence is related to statistical and economic notions of rationality|colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic representations and statistical learning methods has led to a large degree of integration and cross-fertilization between AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems. As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is valuable to investigate how to reap its benefits while avoiding potential pitfalls.