Team SCHAFT’s robot, S-One, clears debris at DARPA’s Robotics Challenge trials (DARPA/Raymond Sheh)

Relying on the Kindness of Machines? The Security Threat of Artificial Agents

By Randy Eshelman and Douglas Derrick

Randy Eshelman is Deputy of the International Affairs and Policy Branch at U.S. Strategic Command. Dr. Douglas Derrick is Assistant Professor of Information Technology Innovation at the University of Nebraska at Omaha.

70 Commentary / The Security Threat of Artificial Agents JFQ 77, 2nd Quarter 2015

Modern technology is a daily part of our lives. It serves critical functions in defense, responding to natural disasters, and scientific research. Without technology, some of the most common human tasks would become laborious or, in many cases, impossible. Since we have become dependent on technology and its uses, and technology is becoming ever more capable, it is necessary that we consider the possibility of goal-driven, adaptive agents becoming an adversary instead of a tool.

We define autonomous, adversarial-type technology as existing or yet-to-be-developed software, hardware, or architectures that deploy, or are deployed, to work against human interests or to adversely impact human use of technology without human control or intervention.

Several well-known events over the last two decades approach this concept of adversarial technology: the “I Love You” worm in 2000, the “Code Red” worm in 2001, the “My Doom” worm in 2004, and most recently, the “Heartbleed” security bug discovered in early 2014. Similarly, the targeted effects of Stuxnet in 2010 could meet some of the requirements of dangerous autonomous pseudo-intelligence. As shown in the table, these technologies have had serious consequences for a variety of users and interests.

Table. Adversarial Technology Examples

| Adversarial Technology | Year | Financial Impact | Users Affected | Transmit Vector |
| “I Love You” | 2000 | $15 billion | 500,000 | Emailed itself to user contacts after opened |
| “Code Red” | 2001 | $2.6 billion | 1 million | Scanned Internet for Microsoft computers—attacked 100 IP addresses at a time |
| “My Doom” | 2004 | $38 billion | 2 million | Emailed itself to user contacts after opened |
| Stuxnet | 2010 | Unknown | Unclear | Attacked industrial control systems |
| “Heartbleed” | 2014 | Estimated tens of millions | Estimated at 2/3 of all Web servers | Open Secure Sockets Layer flaw exposes user data |

Sources: “Top 5 Computer Viruses of All Time,” UKNorton.com, available at <http://uk.norton.com/top-5-viruses/promo>; “Update 1—Researchers Say Stuxnet Was Deployed Against Iran in 2007,” Reuters, February 26, 2013, available at <www.reuters.com/article/2013/02/26/cyberwar-stuxnet-idUSL1N0BQ5ZW20130226>; Jim Finkle, “Big Tech Companies Offer Millions after Heartbleed Crisis,” Reuters, April 24, 2014, available at <www.reuters.com/article/2014/04/24/us-cybercrime-heartbleed-idUSBREA3N13E20140424>.

While these and other intentional, human-instigated programming exploits caused a level of impact and reaction, the questions this article addresses are these: What are the impacts if the adaptation were more capable? What if the technologies were not only of limited use but were also actively competing with us in some way? What if these agents’ levels of sophistication rapidly exceeded that of their developers, and thus the rest of humanity?

Science fiction movies have depicted several artificial intelligence (AI) “end-of-the-world” scenarios, ranging from the misguided nuclear control system “W.O.P.R.—War Operation Plan Response”—in the 1983 movie War Games—to the malicious Terminator robots controlled by Skynet in the series of similarly named movies. The latter depict what is widely characterized as the technological singularity, that is, when machine intelligence is significantly more advanced than that of human beings and is in direct competition with us.

The anthropomorphizing of these agents usually does make for box office success. But this is potentially hazardous from a policy perspective, as noted in the table: hostile intent, human emotion, and political agendas were not required by the adversarial technologies themselves in order to impact users. Simple goals, as assigned by humans, were sufficient to considerably influence economies and defense departments across the globe. Conversely, many nonfiction resources offer the alternative concept of a singularity—very advanced AI—benefiting humankind.1 Human life extension, rapid acceleration of nanotechnology development, and even interstellar travel are often named as some of the projected positives of super intelligent AI.2 However, other more wary sources do not paint such an optimistic outlook, at least not without significant controls emplaced.3 As Vernor Vinge (credited with coining the term technological singularity) warned, “Any intelligent machine [referring to AI] . . . would not be humankind’s ‘tool’ any more than humans are the tools of rabbits or robins or chimpanzees.”4

In this article, we offer a more pragmatic assessment. It provides common definitions related to AI and goal-driven agents. It then offers assumptions and provides an overview of what experts have published on the subject of AI. Finally, it summarizes examples of current efforts related to AI and concludes with a recommendation for engagement and possible actions for controls.

Definitions
Establishing definitions is basic to addressing risk appropriately. Below are generally accepted terms coupled with specific clarifications where appropriate.

• Artificial intelligence: The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decisionmaking, and translation between languages.
• Artificial general intelligence (AGI)/human-level intelligence/strong AI: These terms are grouped for the purposes of this article to mean “intelligence equal to that of human beings”5 and are referred to as AGI.
• Artificial super intelligence (ASI): “Intelligence greater than human level intelligence.”6
• Autonomous agent: “Autonomy generally means that an agent operates without direct human (or other) intervention or guidance.”7
• Autonomous system: “Systems in which the designer has not predetermined the responses to every condition.”8
• Goal-driven agents: An autonomous agent and/or autonomous system with a goal or goals, possessing applicable sensors and effectors (see figure).
• Sensors: A variety of software or hardware receivers through which a machine or program receives input from its environment.
• Effectors: A variety of software or hardware outlets that a machine or program uses to impact its environment.

Figure. Goal-Driven Agent (the agent receives input from its environment through sensors and impacts the environment through effectors)

Currently, it is generally accepted that ASI, AGI, or even AI do not exist in any measurable way. In practice, however, there is no mechanism for knowing of the existence of such an entity until it is made known by the agent itself or by its “creator.” To argue the general thesis of potentially harmful, goal-driven technologies, we need to make the following assumptions.

Example Assumptions

Defense and the Leading Edge of Technology
From the earliest days of warfare, those armies with the most revolutionary or advanced technology usually were the victors (barring leadership blunder or extraordinary motivations or conditions11). Critical to tribal, regional, national, or imperial survival, the pursuit of the newest advantage has driven technological invention. Over the millennia, this “wooden club-to-cyberspace operations” evolution has proved lethal for both combatants and noncombatants.

Gunpowder, for example, was not only an accidental invention but also illustrates an unsuccessful attempt to control technology once loosed. Chinese alchemists, searching for the secrets of eternal life—not an entirely dissimilar goal of some proponents of ASI research12—discovered the mixture of saltpeter, carbon, and sulfur in the 9th century. The Chinese tried, but failed, to keep gunpowder’s secrets for themselves. The propagation of gunpowder and its combat effectiveness spread across Asia, Europe, and the rest of the world. The Byzantine Empire and its capital city of Constantinople, previously impervious to siege, fell victim to being on the wrong side of technology when the …

… dependence, and protection. The USCYBERCOM mission statement, in part, directs the command to “plan, coordinate, synchronize and conduct activities to operate and defend DoD information networks and conduct full spectrum military cyberspace operations.”14 This is a daunting task given the reliance on systems and systems of systems and the efforts to exploit these systems by adversaries. This mission statement does imply a defense of networks and architectures, regardless of specific hostile agents. However, the current focus seems to have an anti-hacker (that is, human, nation-state, terror group) fixation. It does not, from a practical perspective, focus on artificial agent activities explicitly.

IT has allowed us to become more intelligent. At a minimum, it has enabled the diffusion of knowledge at paces never imagined. However, IT has also exposed us to real dangers such as personal financial or identity ruin or cyberspace attacks on our industrial control systems. It is plausible that a sufficiently resourced, goal-driven agent would leverage this technology to achieve its goal(s)—regardless of humankind’s inevitable dissent.

Review of the Literature
Stephen Omohundro,
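The goal-driven agent defined earlier—an autonomous agent with a goal, sensors through which it receives input from its environment, and effectors through which it impacts that environment—can be sketched in a few lines of Python. This is only a minimal illustration of those definitions, not anything from the article itself; every class, method, and variable name here is invented for the example.

```python
# Illustrative sketch (assumed names throughout): a goal-driven agent that
# senses its environment, compares the percept against its goal, and acts
# back on the environment until the goal is met -- with no human intervention
# inside the loop, matching the "autonomous agent" definition.

class Environment:
    """A toy environment: a single numeric state the agent can observe and change."""
    def __init__(self, state=0):
        self.state = state

class GoalDrivenAgent:
    def __init__(self, goal):
        self.goal = goal  # the condition the agent is driven to bring about

    def sense(self, env):
        # Sensor: receive input from the environment.
        return env.state

    def act(self, env):
        # Effector: impact the environment, one step toward the goal.
        percept = self.sense(env)
        if percept < self.goal:
            env.state += 1
        elif percept > self.goal:
            env.state -= 1

    def run(self, env, max_steps=100):
        # Autonomous loop: no human input between sensing and acting.
        for _ in range(max_steps):
            if self.sense(env) == self.goal:
                break
            self.act(env)
        return env.state

env = Environment(state=3)
agent = GoalDrivenAgent(goal=7)
print(agent.run(env))  # the agent drives the environment's state to its goal, 7
```

The point of the sketch is the article's, not the toy arithmetic: nothing in the loop requires hostile intent or emotion—only a goal, sensors, and effectors—yet the agent changes its environment until its goal is satisfied.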