Relying on the Kindness of Machines? The Security Threat of Artificial Agents


Team SCHAFT's robot, S-One, clears debris at DARPA's Robotics Challenge trials (DARPA/Raymond Sheh)

By Randy Eshelman and Douglas Derrick

Randy Eshelman is Deputy of the International Affairs and Policy Branch at U.S. Strategic Command. Dr. Douglas Derrick is Assistant Professor of Information Technology Innovation at the University of Nebraska at Omaha.

Modern technology is a daily part of our lives. It serves critical functions in defense, responding to natural disasters, and scientific research. Without technology, some of the most common human tasks would become laborious or, in many cases, impossible. Since we have become dependent on technology and its uses, and technology is becoming ever more capable, it is necessary that we consider the possibility of goal-driven, adaptive agents becoming an adversary instead of a tool. We define autonomous, adversarial-type technology as existing or yet-to-be-developed software, hardware, or architectures that deploy or are deployed to work against human interests or adversely impact human use of technology without human control or intervention.

Several well-known events over the last two decades that approach the concept of adversarial technology are the "I Love You" worm in 2000, the "Code Red" worm in 2001, the "My Doom" worm in 2004, and most recently, the "Heartbleed" security bug discovered in early 2014. Similarly, the targeted effects of Stuxnet in 2010 could meet some of the requirements of dangerous autonomous pseudo-intelligence. As shown in the table, these technologies have serious consequences for a variety of users and interests.

Table. Adversarial Technology Examples

| Adversarial Technology | Year | Financial Impact | Users Affected | Transmit Vector |
|---|---|---|---|---|
| "I Love You" | 2000 | $15 billion | 500,000 | Emailed itself to user contacts after opened |
| "Code Red" | 2001 | $2.6 billion | 1 million | Scanned Internet for Microsoft computers—attacked 100 IP addresses at a time |
| "My Doom" | 2004 | $38 billion | 2 million | Emailed itself to user contacts after opened |
| Stuxnet | 2010 | Unknown | Unclear | Attacked industrial control systems |
| "Heartbleed" | 2014 | Estimated tens of millions | Estimated at 2/3 of all Web servers | Open Secure Sockets Layer flaw exposes user data |

Sources: "Top 5 Computer Viruses of All Time," UKNorton.com, available at <http://uk.norton.com/top-5-viruses/promo>; "Update 1—Researchers Say Stuxnet Was Deployed Against Iran in 2007," Reuters, February 26, 2013, available at <www.reuters.com/article/2013/02/26/cyberwar-stuxnet-idUSL1N0BQ5ZW20130226>; Jim Finkle, "Big Tech Companies Offer Millions after Heartbleed Crisis," Reuters, April 24, 2014, available at <www.reuters.com/article/2014/04/24/us-cybercrime-heartbleed-idUSBREA3N13E20140424>.

While these and other intentional, human-instigated programming exploits caused a level of impact and reaction, the questions that this article addresses are these: What are the impacts if the adaptation were more capable? What if the technologies were not only of limited use but were also actively competing with us in some way? What if these agents' levels of sophistication rapidly exceeded that of their developers and thus the rest of humanity?

Science fiction movies have depicted several artificial intelligence (AI) "end-of-the-world" type scenarios, ranging from the misguided nuclear control system, "W.O.P.R.—War Operation Plan Response"—in the 1983 movie War Games, to the malicious Terminator robots controlled by Skynet in the series of similarly named movies. The latter depict what is widely characterized as the technological singularity, that is, when machine intelligence is significantly more advanced than that of human beings and is in direct competition with us.

The anthropomorphizing of these agents usually does make for box office success. But this is potentially hazardous from a policy perspective, as noted in the table. Hostile intent, human emotion, and political agendas were not required by the adversarial technologies themselves in order to impact users. Simple goals, as assigned by humans, were sufficient to considerably influence economies and defense departments across the globe.

Conversely, many nonfiction resources offer the alternative concept of a singularity—very advanced AI—benefiting humankind.1 Human life extension, rapid acceleration of nanotechnology development, and even interstellar travel are often named as some of the projected positives of super intelligent AI.2 However, other more wary sources do not paint such an optimistic outlook, at least not without significant controls emplaced.3 As Vernor Vinge (credited with coining the term technological singularity) warned, "Any intelligent machine [referring to AI] . . . would not be humankind's 'tool' any more than humans are the tools of rabbits or robins or chimpanzees."4

In this article, we offer a more pragmatic assessment. It provides common definitions related to AI and goal-driven agents. It then offers assumptions and provides an overview of what experts have published on the subject of AI. Finally, it summarizes examples of current efforts related to AI and concludes with a recommendation for engagement and possible actions for controls.
Definitions

Establishing definitions is basic to addressing risk appropriately. Below are generally accepted terms coupled with specific clarifications where appropriate.

• Artificial intelligence: The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decisionmaking, and translation between languages.
• Artificial general intelligence (AGI)/human-level intelligence/strong AI: These terms are grouped for the purposes of this article to mean "intelligence equal to that of human beings"5 and are referred to as AGI.
• Artificial super intelligence (ASI): "Intelligence greater than human level intelligence."6
• Autonomous agent: "Autonomy generally means that an agent operates without direct human (or other) intervention or guidance."7
• Autonomous system: "Systems in which the designer has not predetermined the responses to every condition."8
• Goal-driven agents: An autonomous agent and/or autonomous system with a goal or goals, possessing applicable sensors and effectors (see figure).
• Sensors: A variety of software or hardware receivers through which a machine or program receives input from its environment.
• Effectors: A variety of software or hardware outlets that a machine or program uses to impact its environment.

Figure. Goal-Driven Agent (the agent receives input from its environment through sensors and acts back on the environment through effectors)
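The sensor-agent-effector loop in the figure can be made concrete in a few lines of code. The sketch below is illustrative only: it assumes a toy one-dimensional environment, and every name in it (Environment, GoalDrivenAgent, sense, decide, apply) is an assumption of the sketch rather than part of any established framework. It simply shows an agent that reads its environment through sensors, compares the observation against its goal, and acts through effectors without further human direction.

```python
# Illustrative sketch of a goal-driven agent (all names are assumptions, not
# drawn from the article): sensors read the environment, the agent compares
# the observation against its goal, and effectors change the environment.

class Environment:
    """Toy one-dimensional world the agent can observe and change."""

    def __init__(self, agent_position: int = 0, target: int = 5):
        self.agent_position = agent_position
        self.target = target

    def sense(self) -> dict:
        # Sensor: input the machine receives from its environment.
        return {"position": self.agent_position, "target": self.target}

    def apply(self, action: int) -> None:
        # Effector: the agent's action impacts the environment.
        self.agent_position += action


class GoalDrivenAgent:
    """Autonomous agent that pursues an assigned goal without human intervention."""

    def __init__(self, goal):
        self.goal = goal  # a predicate over observations

    def decide(self, observation: dict) -> int:
        # No response is predetermined for every condition; the action is
        # derived from the goal and the current observation.
        if self.goal(observation):
            return 0  # goal satisfied, do nothing
        return 1 if observation["position"] < observation["target"] else -1


def run(agent: GoalDrivenAgent, env: Environment, max_steps: int = 20) -> None:
    """Sense-decide-act loop: the only human input is the initial goal."""
    for _ in range(max_steps):
        observation = env.sense()          # sensors
        action = agent.decide(observation)
        if action == 0:
            break
        env.apply(action)                  # effectors


if __name__ == "__main__":
    env = Environment()
    agent = GoalDrivenAgent(goal=lambda obs: obs["position"] == obs["target"])
    run(agent, env)
    print(env.sense())  # {'position': 5, 'target': 5}
```

Even at this toy scale, the property the article emphasizes is visible: once the goal is assigned, the agent selects and executes actions on its environment without any further human control or intervention.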
Currently, it is generally accepted that ASI, AGI, or even AI do not exist in any measurable way. In practice, however, there is no mechanism for knowing of the existence of such an entity until it is made known by the agent itself or by its "creator." To argue the general thesis of potentially harmful, goal-driven technologies, we need to make the following …

Example Assumptions

Defense and the Leading Edge of Technology

From the earliest days of warfare, those armies with the most revolutionary or advanced technology usually were the victors (barring leadership blunder or extraordinary motivations or conditions11). Critical to tribal, regional, national, or imperial survival, the pursuit of the newest advantage has driven technological invention. Over the millennia, this "wooden club-to-cyberspace operations" evolution has proved lethal for both combatants and noncombatants.

Gunpowder, for example, was not only an accidental invention but also illustrates an unsuccessful attempt to control technology once loosed. Chinese alchemists, searching for the secrets of eternal life—not an entirely dissimilar goal of some proponents of ASI research12—discovered the mixture of saltpeter, carbon, and sulfur in the 9th century. The Chinese tried, but failed, to keep gunpowder's secrets for themselves. The propagation of gunpowder and its combat effectiveness spread across Asia, Europe, and the rest of the world. The Byzantine Empire and its capital city of Constantinople, previously impervious to siege, fell victim to being on the wrong side of technology when the …

… dependence, and protection. The USCYBERCOM mission statement, in part, directs the command to "plan, coordinate, synchronize and conduct activities to operate and defend DoD information networks and conduct full spectrum military cyberspace operations."14 This is a daunting task given the reliance on systems and systems of systems and the efforts to exploit these systems by adversaries. This mission statement does imply a defense of networks and architectures, regardless of specific hostile agents. However, the current focus seems to have an anti-hacker (that is, human, nation-state, terror group) fixation. It does not, from a practical perspective, focus on artificial agent activities explicitly.

IT has allowed us to become more intelligent. At a minimum, it has enabled the diffusion of knowledge at paces never imagined. However, IT has also exposed us to real dangers such as personal financial or identity ruin or cyberspace attacks on our industrial control systems. It is plausible that a sufficiently resourced, goal-driven agent would leverage this technology to achieve its goal(s)—regardless of humankind's inevitable dissent.

Review of the Literature

Stephen Omohundro, …