The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk Volume I Euro-Atlantic Perspectives Edited by Vincent Boulanin


SIPRI Policy Paper
May 2019
Stockholm International Peace Research Institute

SIPRI is an independent international institute dedicated to research into conflict, armaments, arms control and disarmament. Established in 1966, SIPRI provides data, analysis and recommendations, based on open sources, to policymakers, researchers, media and the interested public. The Governing Board is not responsible for the views expressed in the publications of the Institute.

GOVERNING BOARD
Ambassador Jan Eliasson, Chair (Sweden)
Dr Dewi Fortuna Anwar (Indonesia)
Dr Vladimir Baranovsky (Russia)
Espen Barth Eide (Norway)
Jean-Marie Guéhenno (France)
Dr Radha Kumar (India)
Dr Patricia Lewis (Ireland/United Kingdom)
Dr Jessica Tuchman Mathews (United States)

DIRECTOR
Dan Smith (United Kingdom)

Signalistgatan 9
SE-169 72 Solna, Sweden
Telephone: +46 8 655 9700
Email: [email protected]
Internet: www.sipri.org

Contents

Preface
Acknowledgements
Abbreviations
Executive Summary

Introduction

1. Introduction
Box 1.1. Key definitions

Part I. Demystifying artificial intelligence and its military implications

2. Artificial intelligence: A primer
I. What is AI?
II. Machine learning: A key enabler of the AI renaissance
III. Autonomy: A key by-product of the AI renaissance
IV. Conclusions
Box 2.1. Deep learning
Box 2.2. Generative adversarial networks
Box 2.3. Automatic, automated, autonomous
Figure 2.1. Approaches to the definition and categorization of autonomous systems

3. The state of artificial intelligence: An engineer's perspective on autonomous systems
I. Autonomy: A primer
II. Applications
III. Challenges
IV. Conclusions

4. Military applications of machine learning and autonomous systems
I. Autonomy in military systems
II. Military applications of machine learning
III. Conclusions
Figure 4.1. A schematic description of a generic autonomous system

Part II. Artificial intelligence and nuclear weapons and doctrines: Past, present and future

5. Cold war lessons for automation in nuclear weapon systems
I. Soviet and US use of automation in nuclear early warning and command and control
II. Automation and the Dead Hand
III. Where could autonomy be taking nuclear early warning and command and control?
IV. Conclusions

6. The future of machine learning and autonomy in nuclear weapon systems
I. Early warning and intelligence, surveillance and reconnaissance
II. Command and control
III. Nuclear weapon delivery
IV. Non-nuclear operations
V. Conclusions

7. Artificial intelligence and the modernization of US nuclear forces
I. US nuclear forces and modernization
II. Machine learning in nuclear weapons and related systems
III. Conclusions

8. Autonomy in Russian nuclear forces
I. Nuclear weapon developments during the cold war
II. Post-cold war developments in Russia
III. Conclusions
Box 8.1. Computers in missile defence and early warning
Box 8.2. The 1983 Petrov incident

Part III. Artificial intelligence, strategic stability and nuclear risk: Euro-Atlantic perspectives

9. Artificial intelligence and nuclear stability
I. AI and nuclear command and control
II. AI and nuclear delivery platforms
III. Conventional military uses of AI and nuclear escalation
IV. Conclusions

10. Military applications of artificial intelligence: Nuclear risk redux
I. AI and machine learning
II. AI, machine learning and autonomy in weapon systems
III. AI, machine learning and nuclear risk
IV. Conclusions

11. The destabilizing prospects of artificial intelligence for nuclear strategy, deterrence and stability
I. How AI could threaten nuclear stability
II. How AI could have an impact on nuclear deterrence
III. Conclusions

12. The impact of unmanned combat aerial vehicles on strategic stability
I. The comparative advantage of UCAVs
II. The state of UCAV technology and its adoption
III. The need for UCAVs to be autonomous

13. Autonomy and machine learning at the interface of nuclear weapons, computers and people
I. New technology brings new threats
II. New threats require new policy responses
III. Conclusions
Table 13.1. Unilateral policy responses to reduce nuclear risks from cyber threats

14. Mitigating the challenges of nuclear risk while ensuring the benefits of technology
I. Potential impacts of technological innovation on nuclear risk
II. Governing the risks posed by technological innovation
III. Using machine learning and distributed ledger technologies to support compliance and verification regimes
IV. Conclusions

Conclusions

15. Promises and perils of artificial intelligence for strategic stability and nuclear risk management: Euro-Atlantic perspectives
I. The promises and perils of the current AI renaissance
II. The impact of AI on nuclear weapons and doctrines: What can be said so far
III. Options for dealing with the risks

About the authors

Preface

The post-cold war global strategic landscape is currently in an extended process of being redrawn as a result of a number of different trends.
Most importantly, the underlying dynamics of world power are shifting with the economic, political and strategic rise of China, the reassertion under President Vladimir Putin of a great power role for Russia, and the disenchantment expressed by the current United States administration with the international institutions and arrangements that the USA had a big hand in creating. As a result, the binary Russian–US nuclear rivalry, a legacy of the old Soviet–US confrontation, is gradually being replaced by regional nuclear rivalries and strategic triangles. As the arms control framework that the Soviet Union and the USA created at the end of the cold war disintegrates, the commitment of the states with the largest nuclear arsenals to pursue stability through arms control, and potentially disarmament, is in doubt to an unprecedented degree.

On top of this comes the impact of new technological developments on armament dynamics. The world is undergoing a 'fourth industrial revolution', characterized by rapid and converging advances in multiple technologies, including artificial intelligence (AI), robotics, quantum technology, nanotechnology, biotechnology and digital fabrication. How these technologies will be utilized remains an open question. It is beyond dispute, however, that nuclear-armed states will seek to leverage them for their national security. The potential impact of these developments on strategic stability and nuclear risk has not yet been systematically documented and analyzed.

AI is deemed by many to be potentially the most transformative technology of the fourth industrial revolution. The SIPRI project 'Mapping the impact of machine learning and autonomy on strategic stability' is a first attempt to present a nuanced analysis of the impact that the exploitation of AI could have on the global strategic landscape, and of whether and how it might undermine international security.
This edited volume is the first major publication of this two-year research project; it will be followed by two more. The authors of this first volume are experts from Europe, Russia and the USA; the two succeeding volumes will bring together contributions from South Asian and East Asian experts. The result will be a wide-ranging compilation of regional perspectives on the impact that recent advances in AI could have on nuclear weapons and doctrines, strategic stability and nuclear risk. SIPRI commends this study to decision makers in the realms of arms control, defense and foreign affairs, to researchers and students in departments of politics, international relations and computer science, and to members of the general public who have a professional or personal interest in the subject.

Dan Smith
Director, SIPRI
Stockholm, May 2019

Acknowledgements

I would like to express my sincere gratitude to the Carnegie Corporation of New York for its generous financial support of the project. I am also indebted to all the experts who participated in the workshop that SIPRI organized on the topic on 22–23 May 2018 and who agreed to contribute to this volume. The essays that follow are, by and large, more developed versions of the presentations delivered at the workshop, taking into account the points made in subsequent discussions. The mix of perspectives achieved at this workshop is reflected in the different styles and substance of the chapters. The views expressed by the various authors are their own and should not be taken to reflect the views of either SIPRI or the Carnegie Corporation of New York. I wish to thank SIPRI colleagues Lora Saalman, Su Fei, Tytti Erästö and Sibylle Bauer for their comprehensive and constructive feedback. Finally, I would like to acknowledge the invaluable editorial work of David Cruickshank and the SIPRI Editorial Department.
Vincent Boulanin

Abbreviations

A2/AD  Anti-access/area-denial
AAV  Autonomous aerial vehicle
AGI  Artificial general intelligence
AI  Artificial intelligence
ASV  Autonomous surface vehicle
ATR  Automatic target recognition
BMD  Ballistic missile defence
C3I  Command, control, communications and intelligence
Recommended publications
  • Build a Self-Driving Network
    SELF-DRIVING NETWORKS: "LOOK, MA: NO HANDS". Kireeti Kompella, CTO, Engineering, Juniper Networks. ITU FG Net2030, Geneva, Oct 2019. © 2019 Juniper Networks, Juniper Public.

    VISION: THE DARPA GRAND CHALLENGE. Build a fully autonomous ground vehicle. Goal: drive a pre-defined 240 km course in the Mojave Desert along freeway I-15. Prize: $1 million. Result: 2004, fail (the best vehicle covered less than 12 km!); 2005, 5 of 23 vehicles completed the course; 2007, the "Urban Challenge": drive a 96 km urban course following traffic regulations and dealing with other cars; 6 cars completed it. Impact: programmers, not drivers; no cops, lawyers or witnesses; quadruple highway capacity. Open questions: glitches, insurance? Ethical self-driving cars?

    GRAND RESULT: THE SELF-DRIVING CAR (2009, 2014). No steering wheel, no pedals: a completely autonomous car. Not just an incremental improvement; this is a DISRUPTIVE change in automotive technology!

    THE NETWORKING GRAND CHALLENGE: BUILD A SELF-DRIVING NETWORK. Goal: Self-Discover, Self-Configure, Self-Monitor, Self-Correct, Auto-Detect Customers, Auto-Provision, Self-Diagnose, Self-Optimize, Self-Report. Impact: new skill sets required; new focus; BGP/IGP policies become AI policy; service config becomes service design; reactive becomes proactive; firewall rules become anomaly detection. Result: free people up to work at a higher level: new service design and "mash-ups". Possibilities: agile, even anticipatory service creation; fast, intelligent response to security breaches. CHALLENGE
  • Political Performance and the War on Terror
    The Shock and Awe of the Real: Political Performance and the War on Terror. Matt Jones. A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy, Centre for Drama, Theatre and Performance Studies, University of Toronto. © Copyright by Matt Jones 2020.

    Abstract: This dissertation offers a transnational study of theatre and performance that responded to the recent conflicts in Iraq, Afghanistan, Syria, and beyond. Looking at work by artists primarily from Arab and Middle Eastern diasporas working in the US, UK, Canada, and Europe, the study examines how modes of performance in live art, documentary theatre, and participatory performance respond to and comment on the power imbalances, racial formations, and political injustices of these conflicts. Many of these performances are characterized by a deliberate blurring of the distinctions between performance and reality. This has meant that playwrights crafted scripts from the real words of soldiers instead of writing plays; performance artists harmed their real bodies, replicating the violence of war; actors performed in public space; and media artists used new technology to connect audiences to real warzones. This embrace of the real contrasts with postmodern suspicion of hyper-reality, which characterized much political performance in the 1990s, and marks a shift in understandings of the relationship between performance and the real. These strategies allowed artists to contend with the way that war today is also a multimedia attack on the way that reality is constructed and perceived.
  • Artificial Intelligence in Health Care: the Hope, the Hype, the Promise, the Peril
    Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. Michael Matheny, Sonoo Thadaney Israni, Mahnoor Ahmed, and Danielle Whicher, Editors. Washington, DC: NAM.EDU. Prepublication copy, uncorrected proofs.

    NATIONAL ACADEMY OF MEDICINE, 500 Fifth Street, NW, Washington, DC 20001.

    NOTICE: This publication has undergone peer review according to procedures established by the National Academy of Medicine (NAM). Publication by the NAM signifies that it is the product of a carefully considered process and is a contribution worthy of public attention, but does not constitute endorsement of conclusions and recommendations by the NAM. The views presented in this publication are those of individual contributors and do not represent formal consensus positions of the authors' organizations, the NAM, or the National Academies of Sciences, Engineering, and Medicine.

    Library of Congress Cataloging-in-Publication Data to Come. Copyright 2019 by the National Academy of Sciences. All rights reserved. Printed in the United States of America.

    Suggested citation: Matheny, M., S. Thadaney Israni, M. Ahmed, and D. Whicher, Editors. 2019. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. NAM Special Publication. Washington, DC: National Academy of Medicine.

    "Knowing is not enough; we must apply. Willing is not enough; we must do." --Goethe

    ABOUT THE NATIONAL ACADEMY OF MEDICINE: The National Academy of Medicine is one of three Academies constituting the National Academies of Sciences, Engineering, and Medicine (the National Academies). The National Academies provide independent, objective analysis and advice to the nation and conduct other activities to solve complex problems and inform public policy decisions.
  • Report: the New Nuclear Arms Race
    The New Nuclear Arms Race: The Outlook for Avoiding Catastrophe. August 2020. By Akshai Vikram.

    Akshai Vikram is the Roger L. Hale Fellow at Ploughshares Fund, where he focuses on U.S. nuclear policy. A native of Louisville, Kentucky, Akshai previously worked as an opposition researcher for the Democratic National Committee and a campaign staffer for the Kentucky Democratic Party. He has written on U.S. nuclear policy and U.S.-Iran relations for outlets such as Inkstick Media, The National Interest, Defense One, and the Quincy Institute's Responsible Statecraft. Akshai holds an M.A. in International Economics and American Foreign Policy from the Johns Hopkins University SAIS as well as a B.A. in International Studies and Political Science from Johns Hopkins Baltimore. On a good day, he speaks Spanish, French, and Persian proficiently.

    Acknowledgements: This report was made possible by the strong support I received from the entire Ploughshares Fund network throughout my fellowship. Ploughshares Fund alumni Will Saetren, Geoff Wilson, and Catherine Killough were extremely kind in offering early advice on the report. From the Washington, D.C. office, Mary Kaszynski and Zack Brown offered many helpful edits and suggestions, while Joe Cirincione, Michelle Dover, and John Carl Baker provided much-needed encouragement and support throughout the process. From the San Francisco office, Will Lowry, Derek Zender, and Delfin Vigil were instrumental in finalizing this report. I would like to thank each and every one of them for their help. I would especially like to thank Tom Collina. Tom reviewed numerous drafts of this report, never running out of patience or constructive advice.
  • An Off-Road Autonomous Vehicle for DARPA's Grand Challenge
    2005 Journal of Field Robotics Special Issue on the DARPA Grand Challenge. MITRE Meteor: An Off-Road Autonomous Vehicle for DARPA's Grand Challenge. Robert Grabowski, Richard Weatherly, Robert Bolling, David Seidel, Michael Shadid, and Ann Jones. The MITRE Corporation, 7525 Colshire Drive, McLean VA, 22102. [email protected]

    Abstract: The MITRE Meteor team fielded an autonomous vehicle that competed in DARPA's 2005 Grand Challenge race. This paper describes the team's approach to building its robotic vehicle, the vehicle and components that let the vehicle see and act, and the computer software that made the vehicle autonomous. It presents how the team prepared for the race and how their vehicle performed.

    1 Introduction. In 2004, the Defense Advanced Research Projects Agency (DARPA) challenged developers of autonomous ground vehicles to build machines that could complete a 132-mile, off-road course. (Figure 1: The MITRE Meteor starting the finals of the 2005 DARPA Grand Challenge.) 195 teams applied; only 23 qualified to compete. Qualification included demonstrations to DARPA and a ten-day National Qualifying Event (NQE) in California. The race took place on October 8 and 9, 2005 in the Mojave Desert over a course containing gravel roads, dirt paths, switchbacks, open desert, dry lakebeds, mountain passes, and tunnels. The MITRE Corporation decided to compete in the Grand Challenge in September 2004 by sponsoring the Meteor team. They believed that MITRE's work programs and military sponsors would benefit from an understanding of the technologies that contribute to the DARPA Grand Challenge.
  • Real Vs Fake Faces: Deepfakes and Face Morphing
    Graduate Theses, Dissertations, and Problem Reports, 2021. Real vs Fake Faces: DeepFakes and Face Morphing. Jacob L. Dameron, WVU, [email protected]. Follow this and additional works at: https://researchrepository.wvu.edu/etd. Part of the Signal Processing Commons.

    Recommended Citation: Dameron, Jacob L., "Real vs Fake Faces: DeepFakes and Face Morphing" (2021). Graduate Theses, Dissertations, and Problem Reports. 8059. https://researchrepository.wvu.edu/etd/8059

    This Thesis is protected by copyright and/or related rights. It has been brought to you by The Research Repository @ WVU with permission from the rights-holder(s). You are free to use this Thesis in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you must obtain permission from the rights-holder(s) directly, unless additional rights are indicated by a Creative Commons license in the record and/or on the work itself. This Thesis has been accepted for inclusion in the WVU Graduate Theses, Dissertations, and Problem Reports collection by an authorized administrator of The Research Repository @ WVU. For more information, please contact [email protected].

    Real vs Fake Faces: DeepFakes and Face Morphing. Jacob Dameron. Thesis submitted to the Benjamin M. Statler College of Engineering and Mineral Resources at West Virginia University in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering. Committee: Xin Li, Ph.D., Chair; Natalia Schmid, Ph.D.; Matthew Valenti, Ph.D. Lane Department of Computer Science and Electrical Engineering, Morgantown, West Virginia, 2021. Keywords: DeepFakes, Face Morphing, Face Recognition, Facial Action Units, Generative Adversarial Networks, Image Processing, Classification.
  • AI Computer Wraps up 4-1 Victory Against Human Champion Nature Reports from Alphago's Victory in Seoul
    The Go Files: AI computer wraps up 4-1 victory against human champion. Nature reports from AlphaGo's victory in Seoul. Tanguy Chouard, 15 March 2016, Seoul, South Korea. (Photo: Google DeepMind. Lee Sedol, who has lost 4-1 to AlphaGo.)

    Tanguy Chouard, an editor with Nature, saw Google DeepMind's AI system AlphaGo defeat a human professional for the first time last year at the ancient board game Go. This week, he is watching top professional Lee Sedol take on AlphaGo, in Seoul, for a $1 million prize.

    It's all over at the Four Seasons Hotel in Seoul, where this morning AlphaGo wrapped up a 4-1 victory over Lee Sedol, incidentally earning itself and its creators an honorary '9-dan professional' degree from the Korean Baduk Association. After winning the first three games, Google DeepMind's computer looked impregnable. But the last two games may have revealed some weaknesses in its makeup. Game four totally changed the Go world's view of AlphaGo's dominance because it made it clear that the computer can 'bug', or at least play very poor moves when on the losing side. It was obvious that Lee felt under much less pressure than in game three. And he adopted a different style, one based on taking large amounts of territory early on rather than immediately going for 'street fighting' such as making threats to capture stones. This style, called 'amashi', seems to have paid off, because on move 78 Lee produced a play that somehow slipped under AlphaGo's radar. David Silver, a scientist at DeepMind who has been leading the development of AlphaGo, said the program estimated the probability of that move at 1 in 10,000.
  • Non-Independent Investment Research Disclaimer
    CONTENT
    03 2020 theme – Engines of Disruption
    04 Chips go cold in AI winter
    05 Stagflation rewards value over growth
    06 ECB folds and hikes rates
    07 In energy, green is not the new black
    08 South Africa gets electrocuted by ESKOM debt
    09 Trump announces America First Tax
    10 Sweden breaks bad
    11 Dems win clean sweep in 2020 election
    12 Hungary leaves the EU
    13 Asia launches digital reserve currency

    Outrageous Predictions 2020 – Engines of Disruption. Steen Jakobsen, Chief Investment Officer.

    Looking into the future is something of a fool's game, but it remains a useful exercise: helping prepare for what lies ahead by considering the full possible range of economic and political outcomes. While often viewed as plain fun, the principles behind the Outrageous Predictions resonate very strongly with our clients and the Saxo Group. Not just in terms of what could happen to the portfolios and wealth allocation, but also as an input to our respective areas of business, careers and life in general. This is the spirit of Outrageous Predictions: to create the very ...

    Already, there are a number of cracks in the system that show ... rent at twice the price of owning, a tax on the poor if ever there was one, and a driver of inequality. This risks leaving an entire generation without the savings needed to own their own house, typically the only major asset that many medium- and lower-income households will ever obtain. Thus we are denying the very economic mechanism that made the older ... and maybe even slightly normalise rates, while governments step into the breach with infrastructure and climate policy-linked spending.

    The list of Outrageous Predictions this year all play to the theme of disruption, because our current paradigm is simply at the end of the road. Not because we want it to end, but simply because extending ...
  • Cyber Warfare a “Nuclear Option”?
    CYBER WARFARE: A "NUCLEAR OPTION"? By Andrew F. Krepinevich, 2012. © 2012 Center for Strategic and Budgetary Assessments. All rights reserved.

    About the Center for Strategic and Budgetary Assessments: The Center for Strategic and Budgetary Assessments (CSBA) is an independent, nonpartisan policy research institute established to promote innovative thinking and debate about national security strategy and investment options. CSBA's goal is to enable policymakers to make informed decisions on matters of strategy, security policy and resource allocation. CSBA provides timely, impartial, and insightful analyses to senior decision makers in the executive and legislative branches, as well as to the media and the broader national security community. CSBA encourages thoughtful participation in the development of national security strategy and policy, and in the allocation of scarce human and capital resources. CSBA's analysis and outreach focus on key questions related to existing and emerging threats to US national security. Meeting these challenges will require transforming the national security establishment, and we are devoted to helping achieve this end.

    About the Author: Dr. Andrew F. Krepinevich, Jr. is the President of the Center for Strategic and Budgetary Assessments, which he joined following a 21-year career in the U.S. Army. He has served in the Department of Defense's Office of Net Assessment, on the personal staff of three secretaries of defense, the National Defense Panel, the Defense Science Board Task Force on Joint Experimentation, and the Defense Policy Board. He is the author of 7 Deadly Scenarios: A Military Futurist Explores War in the 21st Century and The Army and Vietnam.
  • Operationalizing Robotic and Autonomous Systems in Support of Multi-Domain Operations White Paper
    Operationalizing Robotic and Autonomous Systems in Support of Multi-Domain Operations. White Paper. Prepared by: Army Capabilities Integration Center, Future Warfare Division. 30 November 2018. Distribution unclassified; distribution is unlimited.

    Executive Summary: Robotic and Autonomous Systems (RAS) and artificial intelligence (AI) are fundamental to the future Joint Force realizing the full potential of Multi-Domain Operations (MDO 1.5). These systems, in particular AI, offer the ability to outmaneuver adversaries across domains, the electromagnetic (EM) spectrum, and the information environment. The employment of these systems during competition allows the Joint Force to understand the operational environment (OE) in real time, and thus better employ both manned and unmanned capabilities to defeat threat operations meant to destabilize a region, deter escalation of violence, and turn denied spaces into contested spaces. In the transition from competition to armed conflict, RAS and AI maneuver, fires, and intelligence, surveillance, and reconnaissance (ISR) capabilities provide the Joint Force with the ability to deny the enemy's efforts to seize positions of advantage. Improved sustainment throughput, combined with the ability to attack the enemy's anti-access/aerial denial networks, provides U.S. Forces the ability to seize positions of operational, strategic, and tactical advantage. Increased understanding through an AI-enabled joint Common Operating Picture (COP) allows U.S. Forces to orchestrate multi-domain effects to create windows of advantage. Post-conflict application of RAS and AI offers increased capacity to produce sustainable outcomes and the combat power to set conditions for deterrence. Developing an operational concept for RAS allows the Army to better understand the potential impact of those technologies on the nature and character of war.
  • Future of Research on Catastrophic and Existential Risk
    ORE Open Research Exeter. TITLE: Working together to face humanity's greatest threats: Introduction to The Future of Research in Catastrophic and Existential Risk. AUTHORS: Currie, AM; Ó hÉigeartaigh, S. JOURNAL: Futures. DEPOSITED IN ORE: 06 February 2019. This version available at http://hdl.handle.net/10871/35764.

    COPYRIGHT AND REUSE: Open Research Exeter makes this work available in accordance with publisher policies.

    A NOTE ON VERSIONS: The version presented here may differ from the published version. If citing, you are advised to consult the published version for pagination, volume/issue and date of publication.

    Working together to face humanity's greatest threats: Introduction to The Future of Research on Catastrophic and Existential Risk. Adrian Currie & Seán Ó hÉigeartaigh. Penultimate version, forthcoming in Futures.

    Acknowledgements: We would like to thank the authors of the papers in the special issue, as well as the referees who provided such constructive and useful feedback. We are grateful to the team at the Centre for the Study of Existential Risk who organized the first Cambridge Conference on Catastrophic Risk, where many of the papers collected here were originally presented, and whose multi-disciplinary expertise was invaluable for making this special issue a reality. We would like to thank Emma Bates, Simon Beard and Haydn Belfield for feedback on drafts. Ted Fuller, Futures' Editor-in-Chief, also provided invaluable guidance throughout. The conference, and a number of the publications in this issue, were made possible through the support of a grant from the Templeton World Charity Foundation (TWCF); the conference was also supported by a supplementary grant from the Future of Life Institute.
  • Loitering Munitions
    Loitering Munitions: The Soldiers' Hand-Held Cruise Missiles. Jerome Bilet, PhD.

    Loitering munitions (LMs) are low-cost guided precision munitions which can be maintained in a holding pattern in the air for a certain time and can rapidly attack land or sea non-line-of-sight (NLOS) targets. LMs are under the control of an operator who sees a real-time image of the target and its surrounding area, giving the capacity to control the exact time, attitude and direction of the attack on a static, re-locatable or moving target, including providing a contribution to the formal target identification and confirmation process. Whether labelled as hand-held cruise missiles, pocket artillery or a miniature air force, loitering munitions will be, and in some instances already are, part of the toolbox of the modern warfighter. This is a logical add-on to the way unmanned systems are becoming preponderant in contemporary warfare. There is no longer any need to demonstrate that unmanned systems are part of the everyday life of the warfighter, whether in the air, on the ground, or above or under the water. Unmanned aerial vehicles, the well-known UAVs, represent the largest subset of unmanned systems. A rather new subclass of UAVs are the weaponised unmanned air vehicles; loitering munitions are part of this family. This article focuses mainly on short-range man-portable loitering munitions used by small tactical units. Two main options exist to create a small weaponised UAV. The first option is to produce miniature munitions to be attached to existing standard ISR drones.