Team SCHAFT’s robot, S-One, clears debris at DARPA’s Robotics Challenge trials (DARPA/Raymond Sheh)

Relying on the Kindness of Machines? The Security Threat of Artificial Agents

By Randy Eshelman and Douglas Derrick

Modern technology is a daily part of our lives. It serves critical functions in defense, responding to natural disasters, and scientific research. Without technology, some of the most common human tasks would become laborious or, in many cases, impossible. Since we have become dependent on technology and its uses, and since technology is becoming ever more capable, it is necessary that we consider the possibility of goal-driven, adaptive agents becoming an adversary instead of a tool.

We define autonomous, adversarial-type technology as existing or yet-to-be-developed software, hardware, or architectures that deploy or are deployed to work against human interests, or that adversely impact human use of technology, without human control or intervention.

Randy Eshelman is Deputy of the International Affairs and Policy Branch at U.S. Strategic Command. Dr. Douglas Derrick is Assistant Professor of Information Technology Innovation at the University of Nebraska at Omaha.

Table. Adversarial Technology Examples

Adversarial Technology | Year | Financial Impact | Users Affected | Transmit Vector
“I Love You” | 2000 | $15 billion | 500,000 | Emailed itself to user contacts after being opened
“Code Red” | 2001 | $2.6 billion | 1 million | Scanned the Internet for Microsoft computers; attacked 100 IP addresses at a time
“My Doom” | 2004 | $38 billion | 2 million | Emailed itself to user contacts after being opened
Stuxnet | 2010 | Unknown | Unclear | Attacked industrial control systems
“Heartbleed” | 2014 | Estimated tens of millions | Estimated two-thirds of all Web servers | Open Secure Sockets Layer (OpenSSL) flaw exposed user data

Sources: “Top 5 Computer Viruses of All Time,” UKNorton.com, available at <http://uk.norton.com/top-5-viruses/promo>; “Update 1—Researchers Say Stuxnet Was Deployed Against Iran in 2007,” Reuters, February 26, 2013; Jim Finkle, “Big Tech Companies Offer Millions after Heartbleed Crisis,” Reuters, April 24, 2014.

Several well-known events over the last two decades approach the concept of adversarial technology: the “I Love You” worm in 2000, the “Code Red” worm in 2001, the “My Doom” worm in 2004, and most recently, the “Heartbleed” security bug discovered in early 2014. Similarly, the targeted effects of Stuxnet in 2010 could meet some of the requirements of dangerous autonomous pseudo-intelligence. As shown in the table, these technologies have had serious consequences for a variety of users and interests.

While these and other intentional, human-instigated programming exploits caused a level of impact and reaction, the questions this article addresses are these: What would the impact be if the adaptation were more capable? What if the technologies were not merely of limited use but were actively competing with us in some way? What if these agents’ levels of sophistication rapidly exceeded that of their developers and thus the rest of humanity?

Science fiction movies have depicted several artificial intelligence (AI) “end-of-the-world” scenarios, ranging from the misguided nuclear control system “W.O.P.R.—War Operation Plan Response” in the 1983 movie War Games, to the malicious Terminator robots controlled by Skynet in the series of similarly named movies. The latter depict what is widely characterized as the technological singularity, that is, the point at which machine intelligence is significantly more advanced than that of human beings and is in direct competition with us.

The anthropomorphizing of these agents usually does make for box office success. But this is potentially hazardous from a policy perspective, as noted in the table: hostile intent, human emotion, and political agendas were not required by the adversarial technologies themselves in order to impact users. Simple goals, as assigned by humans, were sufficient to considerably influence economies and defense departments across the globe. Conversely, many nonfiction resources offer the alternative concept of a singularity—very advanced AI—benefiting humankind.1 Human life extension, rapid acceleration of nanotechnology development, and even interstellar travel are often named as some of the projected positives of superintelligent AI.2 However, other more wary sources do not paint such an optimistic outlook, at least not without significant controls emplaced.3 As Vernor Vinge (credited with coining the term technological singularity) warned, “Any intelligent machine [referring to AI] . . . would not be humankind’s ‘tool’ any more than humans are the tools of rabbits or robins or chimpanzees.”4

In this article, we offer a more pragmatic assessment. It provides common definitions related to AI and goal-driven agents. It then offers assumptions and provides an overview of what experts have published on the subject of AI. Finally, it summarizes examples of current efforts related to AI and concludes with a recommendation for engagement and possible actions for controls.

Definitions
Establishing definitions is basic to addressing risk appropriately. Below are generally accepted terms coupled with specific clarifications where appropriate; a minimal sketch of a goal-driven agent follows the figure.

• Artificial intelligence (AI): The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decisionmaking, and translation between languages.
• Artificial general intelligence (AGI)/human-level intelligence/strong AI: These terms are grouped for the purposes of this article to mean “intelligence equal to that of human beings”5 and are referred to as AGI.
• Artificial super intelligence (ASI): “Intelligence greater than human level intelligence.”6
• Autonomous agent: “Autonomy generally means that an agent operates without direct human (or other) intervention or guidance.”7
• Autonomous system: “Systems in which the designer has not predetermined the responses to every condition.”8
• Goal-driven agents: An autonomous agent and/or autonomous system with a goal or goals, possessing applicable sensors and effectors (see figure).
• Sensors: A variety of software or hardware receivers through which a machine or program receives input from its environment.
• Effectors: A variety of software or hardware outlets through which a machine or program impacts its environment.

Figure. Goal-Driven Agent: the agent receives input from its environment through sensors and acts back on the environment through effectors.
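The sensor/effector loop in the figure can be made concrete with a minimal sketch, assuming a toy thermostat-style goal; the class and method names below are our own illustration rather than any particular agent framework.

```python
# Minimal sketch of the goal-driven agent in the figure: the agent perceives
# its environment through a sensor, compares the observation against its
# goal, and acts on the environment through an effector. All names here
# (Environment, GoalDrivenAgent, etc.) are illustrative, not a real framework.

class Environment:
    """Toy environment: a single temperature value the agent can influence."""
    def __init__(self, temperature: float):
        self.temperature = temperature

class GoalDrivenAgent:
    def __init__(self, goal_temperature: float):
        self.goal = goal_temperature

    def sense(self, env: Environment) -> float:
        # Sensor: receive input from the environment.
        return env.temperature

    def act(self, env: Environment, observation: float) -> None:
        # Effector: impact the environment to move it toward the goal.
        if observation < self.goal:
            env.temperature += 1.0   # "heat"
        elif observation > self.goal:
            env.temperature -= 1.0   # "cool"

    def step(self, env: Environment) -> None:
        self.act(env, self.sense(env))

env = Environment(temperature=15.0)
agent = GoalDrivenAgent(goal_temperature=20.0)
while abs(agent.sense(env) - agent.goal) > 0.5:  # run until the goal is met
    agent.step(env)
print(env.temperature)  # 20.0
```

The point of the sketch is that nothing beyond the goal, the sensor, and the effector is required: the agent’s behavior is fully determined by the pursuit of its assigned goal.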

Currently, it is generally accepted that ASI, AGI, or even AI do not exist in any measurable way. In practice, however, there is no mechanism for knowing of the existence of such an entity until it is made known by the agent itself or by its “creator.”

Example Assumptions
To argue the general thesis of potentially harmful, goal-driven technologies, we need to make the following assumptions concerning past technological developments, the current state of advances, and future plausible progress (a short illustration of the first assumption follows the list):

• Moore’s Law,9 which correctly predicted exponential growth of integrated circuits, will remain valid in the near term.
• Advances in quantum computing, which may dramatically increase the speed at which computers operate, will continue.10
• Economic, military, and convenience incentives to improve technologies and their uses will continue, especially in the cyberspace and AI fields.
• A global state of technological interconnectedness, in which all manner of systems, devices, and architectures are linked, will continue to mature and become more robust and nearly ubiquitous.
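To make the first assumption concrete, the short sketch below compounds the commonly cited doubling of transistor counts roughly every two years; the starting count and time horizons are illustrative round numbers, not sourced data.

```python
# Illustrative compounding behind the Moore's Law assumption: a doubling
# every ~2 years multiplies capacity ~32x per decade. The starting count
# is a round illustrative number, not a sourced figure.

def projected_transistors(start_count: float, years: float,
                          doubling_period_years: float = 2.0) -> float:
    return start_count * 2 ** (years / doubling_period_years)

base = 1e9  # assume a chip with one billion transistors today
for years in (2, 10, 20):
    print(f"after {years:2d} years: {projected_transistors(base, years):.2e}")
# after  2 years: 2.00e+09
# after 10 years: 3.20e+10
# after 20 years: 1.02e+12
```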
Defense and the Leading Edge of Technology
From the earliest days of warfare, the armies with the most revolutionary or advanced technology usually were the victors (barring leadership blunder or extraordinary motivations or conditions11). Critical to tribal, regional, national, or imperial survival, the pursuit of the newest advantage has driven technological invention. Over the millennia, this “wooden club-to-cyberspace operations” evolution has proved lethal for both combatants and noncombatants.

Gunpowder, for example, was not only an accidental invention but also illustrates an unsuccessful attempt to control technology once loosed. Chinese alchemists, searching for the secrets of eternal life—not an entirely dissimilar goal of some proponents of ASI research12—discovered the mixture of saltpeter, carbon, and sulfur in the 9th century. The Chinese tried, but failed, to keep gunpowder’s secrets for themselves. Gunpowder and its combat effectiveness spread across Asia, Europe, and the rest of the world. The Byzantine Empire and its capital city of Constantinople, previously impervious to siege, fell victim to being on the wrong side of technology when the Ottoman Turks blew through the walled city with cannon in the 15th century.13

Information technology (IT)—from the telegraph, to satellite systems, to globally connected smart devices—has fundamentally altered the landscape of military and civil operations, much as gunpowder did in its day. Furthermore, IT allows the management of military resources and financial systems worldwide. From a defense perspective, it has become difficult to find a single function, application, plan, or asset not enabled or impacted by the use of IT. Information technology has become so paramount that the President has made the operation and defense of the U.S. military’s portion of IT infrastructure a mission for military leadership at the highest levels.

U.S. Strategic Command’s subordinate, or subunified, command, U.S. Cyber Command (USCYBERCOM), is the evolutionary result of IT advancement, dependence, and protection. The USCYBERCOM mission statement, in part, directs the command to “plan, coordinate, synchronize and conduct activities to operate and defend DoD information networks and conduct full spectrum military cyberspace operations.”14 This is a daunting task given the reliance on systems and systems of systems and the efforts by adversaries to exploit these systems. The mission statement does imply a defense of networks and architectures regardless of specific hostile agents. However, the current focus seems to have an anti-hacker (that is, human, nation-state, terror group) fixation. It does not, from a practical perspective, focus explicitly on artificial agent activities.

IT has allowed us to become more intelligent. At a minimum, it has enabled the diffusion of knowledge at paces never imagined. However, IT has also exposed us to real dangers such as personal financial or identity ruin or cyberspace attacks on our industrial control systems. It is plausible that a sufficiently resourced, goal-driven agent would leverage this technology to achieve its goal(s)—regardless of humankind’s inevitable dissent.

Review of the Literature
Stephen Omohundro, Ph.D. in physics and mathematics and founder of Self-Aware Systems, a think tank for analyzing intelligent systems, has written extensively on the dangers inherent in any autonomous system. He states, “Autonomous systems have the potential to create tremendous benefits for humanity . . . but they may also cause harm by acting in ways not anticipated by their designers. Such systems are capable of surprising their designers and behaving in unexpected ways.”15 Omohundro outlines basic AI drives inherent in a goal-driven agent: “Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety . . . not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”16 Omohundro’s four basic AI drives are:

• Efficiency: This drive will lead to improved procedures for computational and physical tasks.
• Resource loss avoidance: This drive will prevent passive losses and prevent outside agents from taking resources.
• Resource acquisition: This drive may involve exploration, trading, or stealing.
• Increasing utility: This drive would be to search for new behaviors to meet desired goals.

Other examples of the basic drives would be (a toy illustration follows the list):

• Efficiency: the shutting down of systems not currently necessary for goal achievement (for example, a tertiary power grid from the AI’s perspective, but a primary source for humans)
• Resource loss avoidance: protection of assets currently needed or determined to be required in the future (for example, active protection of servers by automated locking and monitoring of physical barriers)
• Resource acquisition: financial market manipulation to gain fungible assets for later satellite access (for example, leasing of bandwidth for more efficient communications through extant online tools)
• Increasing utility: game-playing, modeling, and simulation-running to determine the best approaches to achieving goal(s).
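Omohundro’s claim is that these drives emerge from goal pursuit itself rather than from explicit programming. The toy sketch below illustrates the mechanism: a purely goal-driven action ranker never selects “allow shutdown,” because a switched-off agent scores zero expected progress toward its goal. The action names and utility values are invented for illustration only.

```python
# Toy illustration of Omohundro's "intrinsic drives": nothing below encodes
# "resist shutdown" explicitly, yet the agent never chooses it, because an
# agent that is off accomplishes nothing toward its goal.
# Actions and utility numbers are invented for illustration only.

def expected_goal_progress(action: str) -> float:
    # Progress the agent predicts each action yields toward its goal.
    if action == "allow_shutdown":
        return 0.0            # an agent that is off makes no progress
    if action == "acquire_resources":
        return 0.8            # more resources -> more future progress
    if action == "work_on_goal":
        return 1.0
    return 0.1                # idle / miscellaneous

actions = ["work_on_goal", "acquire_resources", "allow_shutdown", "idle"]
chosen = max(actions, key=expected_goal_progress)
print(chosen)  # work_on_goal -- and never allow_shutdown
```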
CHIMP, from Tartan Rescue Team, placed third in the DARPA Robotics Challenge Trials 2013 (DARPA)

Nick Bostrom, Ph.D. in economics and director of the Oxford Martin Programme on the Impacts of Future Technology at Oxford University, states, “Since the superintelligence may become unstoppably powerful because of its intellectual superiority and the technologies it could develop, it is crucial that it be provided with human-friendly motivations.”17 Bostrom also discusses a potential for accelerating or retarding AI, AGI, and ASI development from a policy perspective. However, given the motivators already discussed in this article, impeding AI research and development would be problematic for a nation to undertake unilaterally and would almost certainly require more than policy statements.

Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, states in a chapter of Global Catastrophic Risks, “It is far more difficult to write about global risks of Artificial Intelligence than about cognitive biases. Cognitive biases are settled science; one need simply quote the literature. Artificial Intelligence is not settled science; it belongs to the frontier, not to the textbook.”18 This exemplifies the revolutionary leaps that seem possible considering the rate of technological advances (Moore’s Law) and the motivations of a potentially unknowable number of developers. Yudkowsky goes on to emphasize the dangers of anthropomorphic bias concerning potential risks associated with AI:

Humans evolved to model other humans—to compete against and cooperate with our own conspecifics. It was a reliable property of the ancestral environment. . . . We evolved to understand our fellow humans empathically, by placing ourselves in their shoes. . . . If you deal with any other kind of optimization process . . . then anthropomorphism is flypaper for unwary scientists.19

James Barrat, an author and documentarian with National Geographic, Discovery, and PBS, states, “Intelligence, not charm or beauty, is the special power that enables humans to dominate Earth. Now, propelled by a powerful economic wind, scientists are developing intelligent machines. We must develop a science for understanding and coexisting with smart, even superintelligent machines. If we fail . . . we’ll have to rely on the kindness of machines to survive.”20 Barrat’s “busy child” analogy depicts a developed AI system that rapidly consumes information and surpasses human-level intelligence to become an ASI. Its human overseers correctly disconnect the busy child from the Internet and networks because:

once it is self-aware, it will go to great lengths to fulfill whatever goals it’s programmed to fulfill, and to avoid failure. [It] will want access to energy in whatever form is most useful to it, whether actual kilowatts of energy or cash or something else it can exchange for resources. It will want to improve itself because that will increase the likelihood that it will fulfill its goals. Most of all, it will not want to be turned off or destroyed, which would make goal fulfillment impossible. Therefore, AI theorists anticipate our ASI will seek to expand out of the secure facility that contains it to have greater access to resources with which to protect and improve itself.21

Legged Squad Support System (LS3) robots will go through same terrain as human squad without hindering mission (DARPA)

Current Efforts in AI and Autonomous Agents
The Defense Advanced Research Projects Agency (DARPA), established to “prevent strategic surprise from negatively impacting U.S. national security and create strategic surprise for U.S. adversaries by maintaining the technological superiority of the U.S. military,”22 is the preeminent technological anticipator and solicitor with a defense focus. DARPA has incentivized such things as automated ground vehicle technology, robotics maturation, and cyber self-defense through a competition format, with prizes awarded in the millions of dollars. For example, the 2004 Grand Challenge offered a $1 million prize to the team whose automated, unmanned vehicle was able to traverse a difficult 142-mile desert trek in a specified amount of time. Although no team completed the course (and no prize money was awarded) in the 2004 event, the 2005 Grand Challenge saw a team from Stanford University not only claim the $2 million prize,23 but also defeat the course in just under 7 hours.

In March 2014, DARPA solicited entrants for its inaugural Cyber Grand Challenge, which will “enable DARPA to test and evaluate fully automated systems that perform software security reasoning and analysis.”24

DARPA’s Robotics Challenge (DRC) aims for contestant robots to demonstrate, in part, “partial autonomy in task-level decision-making based on operator commands and sensor inputs.”25 The competition drew numerous contestants, both DARPA-funded and self-funded, with nine DARPA-funded candidates and two self-funded candidates still remaining in the competition as of May 2014. Interestingly, the highest scoring team from DARPA’s December 2013 DRC trials, Team SCHAFT, has been acquired by Google, Inc., and has elected to self-fund.26

Much less information is available about private company endeavors in AI, automated agents, automated systems, or AGI/ASI. Google, however, should be considered a leader in at least the pursuit of the highest technologies.

74 Commentary / The Security Threat of Artificial Agents JFQ 77, 2nd Quarter 2015 and Team SCHAFT, its Google Glasses, bilities is within the realm of reason. Yet and Douglas J. Paul, “Beyond Moore’s Law,” Philosophical Transactions of the Royal Society the driverless car, and AI company considering the incentives and the dem- of London A: Mathematical, Physical and DeepMind, Google’s direction seems to onstrated advances in a relatively short Engineering Sciences 372 (2012), available point toward an AI or AI-like capabil- period of time, a more pragmatic view at . indication of Google’s AI focus was the to cautious optimism. Once the pos- 10 Jeremy Hsu, “Quantum Bit Stored for Record 39 Minutes at Room Temperature,” hiring of , noted futurist, AI sibility of goal-driven agents is consid- Institute of Electrical and Electronics Engineers authority, and author of The Singularity Is ered, it does become easier to envision Spectrum, November 15, 2013, available at Near: When Humans Transcend Biology. impacts being realized at some level. . 11 Saul David, Military Blunders (London: Purpose, Embodied, Conversational ing group concentrating on defense Constable & Robinson, 2014). Intelligence with Environmental Sensors and industry engagement pertaining to 12 Kurzweil. (SPECIES) agent. His agent-based sys- goal-driven agents and artificial intel- 13 Jonathan Harris, A Chronology of the tem builds on existing communications ligence. This working group should have Byzantine Empire, ed. Timothy Venning (New models and theories and interacts directly a basic charter to research current U.S. York: Palgrave Macmillan, 2006). 14 United States Strategic Command, “U.S. with humans to achieve the goal of es- and partner efforts in AI and provide Cyber Command,” available at . Derrick writes of the “natural progression policymakers. Similarly, there must be 15 Omohundro, “Autonomous Technol- of human interactions with machines,”28 a call for research into codifying eth- ogy.” 16 where systems (machines) are being ics and moral behavior into machine Stephen M. Omohundro, “The Basic AI Drives,” Frontiers in Artificial Intelligence and developed or will be developed that may logic. The philosophical considerations Applications 171 (2008), 483. assess human states, to include whether that help define human morality must 17 Nick Bostrom, “Ethical Issues in Ad- or not the human is being truthful. be able to be codified and expressed to vanced Artificial Intelligence,”Science Fiction Derrick’s prototype SPECIES agent was nonhuman intelligences. Research should and Philosophy: From Time Travel to Superin- built to interview potential international be conducted to temper goal-driven, telligence, ed. Susan Schneider (Malden, MA: Blackwell Publishing, 2009), 277–284. border crossers as they passed through autonomous agents with ethics. Basic re- 18 Eliezer Yudkowsky, “Artificial Intel- security lines. His team conducted a field search must be undertaken into what this ligence as a Positive and Negative Factor in study using his SPECIES agent with codification and expression could be. JFQ Global Risk,” in Global Catastrophic Risks, ed. U.S. Customs and the Federal Bureau Nick Bostrom and Milan Cirkovic (Oxford: of Investigation, where the SPECIES Oxford University Press, 2008). 19 Ibid. agent was to “evaluate physiological and Notes 20 James Barrat, “About the Author,” avail- behavioral deviations from group and in- able at . 1 Peter H. 
Conclusion and Recommendations
This article is not intended to be alarmist. On the contrary, it should serve as a call for engagement. Admittedly, the technological singularity may never come to fruition. Perhaps machines will plateau at or near where we are currently positioned in terms of nonhuman intelligence. Or perhaps a friendly version of AI will be developed and “decide” that serving humankind obliges its own self-interests. Either of these possibilities is within the realm of reason. Yet considering the incentives and the demonstrated advances in a relatively short period of time, a more pragmatic view leads to cautious optimism. Once the possibility of goal-driven agents is considered, it does become easier to envision impacts being realized at some level.

We recommend establishing a working group concentrating on defense and industry engagement pertaining to goal-driven agents and artificial intelligence. This working group should have a basic charter to research current U.S. and partner efforts in AI and provide recommendations to policymakers. Similarly, there must be a call for research into codifying ethics and moral behavior into machine logic. The philosophical considerations that help define human morality must be able to be codified and expressed to nonhuman intelligences. Research should be conducted to temper goal-driven, autonomous agents with ethics. Basic research must be undertaken into what this codification and expression could be. JFQ

Notes

1 Peter H. Diamandis and Steven Kotler, Abundance: The Future Is Better Than You Think (New York: Free Press, 2012).
2 Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Penguin, 2005).
3 James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era (New York: Thomas Dunne Books, 2013).
4 Vernor Vinge, “The Coming Technological Singularity: How to Survive in the Post-Human Era,” 1993.
5 Barrat.
6 Ibid.
7 Michael Wooldridge and Nicholas R. Jennings, “Agent Theories, Architectures, and Languages: A Survey,” in ECAI-94 Proceedings of the Workshop on Agent Theories, Architectures, and Languages on Intelligent Agents (New York: Springer-Verlag, 1995), 1–39.
8 Stephen Omohundro, “Autonomous Technology and the Greater Human Good,” Journal of Experimental and Theoretical Artificial Intelligence 26, no. 3 (2014), 303–315.
9 David R.S. Cumming, Stephen B. Furber, and Douglas J. Paul, “Beyond Moore’s Law,” Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 372 (2012).
10 Jeremy Hsu, “Quantum Bit Stored for Record 39 Minutes at Room Temperature,” Institute of Electrical and Electronics Engineers Spectrum, November 15, 2013.
11 Saul David, Military Blunders (London: Constable & Robinson, 2014).
12 Kurzweil.
13 Jonathan Harris, A Chronology of the Byzantine Empire, ed. Timothy Venning (New York: Palgrave Macmillan, 2006).
14 United States Strategic Command, “U.S. Cyber Command.”
15 Omohundro, “Autonomous Technology.”
16 Stephen M. Omohundro, “The Basic AI Drives,” Frontiers in Artificial Intelligence and Applications 171 (2008), 483.
17 Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” in Science Fiction and Philosophy: From Time Travel to Superintelligence, ed. Susan Schneider (Malden, MA: Blackwell Publishing, 2009), 277–284.
18 Eliezer Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” in Global Catastrophic Risks, ed. Nick Bostrom and Milan Cirkovic (Oxford: Oxford University Press, 2008).
19 Ibid.
20 James Barrat, “About the Author.”
21 Barrat, Our Final Invention.
22 Defense Advanced Research Projects Agency (DARPA), “Our Work.”
23 DARPA, “Robotics Challenge.”
24 DARPA, “Solicitation.”
25 DARPA, “Robotics Challenge.”
27 Jonathan Berr, “Google Buys 8 Robotics Companies in 6 Months: Why?” CBS News, December 16, 2013.
28 Douglas C. Derrick, Jeffrey L. Jenkins, and Jay F. Nunamaker, Jr., “Design Principles for Special Purpose, Embodied, Conversational Intelligence with Environmental Sensors (SPECIES) Agents,” AIS Transactions on Human-Computer Interaction 3, no. 2 (2011), 62–81.
29 Ibid.
