SIPRI Policy Paper

ARTIFICIAL INTELLIGENCE, STRATEGIC STABILITY AND NUCLEAR RISK

VINCENT BOULANIN, LORA SAALMAN, PETR TOPYCHKANOV, FEI SU AND MOA PELDÁN CARLSSON

June 2020

STOCKHOLM INTERNATIONAL PEACE RESEARCH INSTITUTE

SIPRI is an independent international institute dedicated to research into conflict, armaments, arms control and disarmament. Established in 1966, SIPRI provides data, analysis and recommendations, based on open sources, to policymakers, researchers, media and the interested public. The Governing Board is not responsible for the views expressed in the publications of the Institute.

GOVERNING BOARD
Ambassador Jan Eliasson, Chair (Sweden)
Dr Vladimir Baranovsky (Russia)
Espen Barth Eide (Norway)
Jean-Marie Guéhenno (France)
Dr Radha Kumar (India)
Ambassador Ramtane Lamamra (Algeria)
Dr Patricia Lewis (Ireland/United Kingdom)
Dr Jessica Tuchman Mathews (United States)

DIRECTOR
Dan Smith (United Kingdom)

Signalistgatan 9
SE-169 72 Solna, Sweden
Telephone: +46 8 655 9700
Email: [email protected]
Internet: www.sipri.org

Artificial Intelligence, Strategic Stability and Nuclear Risk
Vincent Boulanin, Lora Saalman, Petr Topychkanov, Fei Su and Moa Peldán Carlsson
June 2020

Contents

Preface v
Acknowledgements vi
Abbreviations vii
Executive Summary ix

1. Introduction 1
Box 1.1. Key definitions 6

2. Understanding the AI renaissance and its impact on nuclear weapons and related systems 7
I. Understanding the AI renaissance 7
II. AI and nuclear weapon systems: Past, present and future 18
Box 2.1. Automatic, automated, autonomous: The relationship between automation, autonomy and machine learning 15
Box 2.2. Historical cases of false alarms in early warning systems 20
Box 2.3. Dead Hand and Perimetr 22
Figure 2.1. A brief history of artificial intelligence 10
Figure 2.2. The benefits of machine learning 11
Figure 2.3. Approaches to the definition and categorization of autonomous systems 14
Figure 2.4. Benefits of autonomy 16
Figure 2.5. Foreseeable applications of AI in nuclear deterrence 24

3. AI and the military modernization plans of nuclear-armed states 31
I. The United States 33
II. Russia 44
III. The United Kingdom 52
IV. France 59
V. China 67
VI. India 78
VII. Pakistan 87
VIII. North Korea 93
Box 3.1. The artificial intelligence race 32
Figure 3.1. Recent policy developments related to artificial intelligence in the United States 34
Figure 3.2. Recent policy developments related to artificial intelligence in Russia 45
Figure 3.3. Recent policy developments related to artificial intelligence in the United Kingdom 53
Figure 3.4. Recent policy developments related to artificial intelligence in France 60
Figure 3.5. Recent policy developments related to artificial intelligence in China 68
Figure 3.6. Recent policy developments related to artificial intelligence in India 79
Figure 3.7. Recent policy developments related to artificial intelligence in Pakistan 88
Table 3.1. Applications of artificial intelligence of interest to the US Department of Defense 38
Table 3.2. State of adoption of artificial intelligence in the United States nuclear deterrence architecture 42
Table 3.3. State of adoption of artificial intelligence in the Russian nuclear deterrence architecture 50
Table 3.4. State of adoption of artificial intelligence in the British nuclear deterrence architecture 58
Table 3.5. State of adoption of artificial intelligence in the French nuclear deterrence architecture 66
Table 3.6. State of adoption of artificial intelligence in the Chinese nuclear deterrence architecture 77
Table 3.7. State of adoption of artificial intelligence in the Indian nuclear deterrence architecture 85
Table 3.8. State of adoption of artificial intelligence in the Pakistani nuclear deterrence architecture 92
Table 3.9. North Korean universities conducting research and studies in artificial intelligence 96
Table 3.10. State of adoption of artificial intelligence in the North Korean nuclear deterrence architecture 99

4. The positive and negative impacts of AI on strategic stability and nuclear risk 101
I. The impact on strategic stability and strategic relations 101
II. The impact on the likelihood of nuclear conflict: Foreseeable risk scenarios 113
Box 4.1. Machine learning and verification of nuclear arms control and disarmament: Opportunities and challenges 104

5. Mitigating the negative impacts of AI on strategic stability and nuclear risk 123
I. Mitigating risks: What, how and where 123
II. Possible technical and organizational measures for risk reduction 127
III. Possible policy measures for risk reduction 130
Figure 5.1. Risks and challenges posed by the use of artificial intelligence in nuclear weapons 124

6. Conclusions 136
I. Key findings 136
II. Recommendations 140
Figure 6.1. Possible risk reduction measures and how they can be implemented 142
Figure 6.2. Four key measures to deal with the negative impact of AI on strategic stability and nuclear risk 143

About the authors 144

Preface

The current period sees the post-cold war global strategic landscape in an extended process of redefinition. This is the result of a number of different trends. Most importantly, the underlying dynamics of world power have been shifting with the economic, political and military rise of China, the reassertion under President Vladimir Putin of a great power role for Russia, and the disenchantment expressed by the current United States administration with the international institutions and arrangements that the USA itself had a big hand in creating. As a result, the China–US rivalry has increasingly supplanted the Russian–US nuclear rivalry as the core binary confrontation of international politics. This pair of dyadic antagonisms is, moreover, supplemented by growing regional nuclear rivalries and strategic triangles in South Asia and the Middle East.

Against this increasingly toxic geopolitical background, the arms control framework created at the end of the cold war has deteriorated. Today, the commitment of the states with the largest nuclear arsenals to pursue stability through arms control and potentially disarmament is in doubt. The impact of coronavirus disease 2019 (COVID-19) is not yet clear but may well be a source of further unsettling developments.

All of this is the volatile backdrop to considering the consequences of new technological developments for armament dynamics. The world is going through a fourth industrial revolution, characterized by rapid advances in artificial intelligence (AI), robotics, quantum technology, nanotechnology, biotechnology and digital fabrication. The question of how these technologies will be used has not yet been answered in full detail. It is beyond dispute, however, that nuclear-armed states will seek to use these technologies for their national security. The SIPRI project ‘Mapping the impact of machine learning and autonomy on strategic stability’ set out to explore the potential effect of AI exploitation on strategic stability and nuclear risk.
The research team has used a region-by-region approach to analyse the impact that the exploitation of AI could have on the global strategic landscape. This report is the final publication of this two-year research project funded by the Carnegie Corporation of New York; it presents the key findings and recommendations of the SIPRI authors, derived from their research as well as from a series of regional and transregional workshops organized in Europe, East and South Asia and the USA. It follows and complements the trilogy of edited volumes that compile the perspectives of experts from these regions on the topic.

SIPRI commends this study to decision makers in the realms of arms control, defence and foreign affairs, to researchers and students in departments of politics, international relations and computer science, as well as to members of the general public who have a professional and personal interest in the subject.

Dan Smith
Director, SIPRI
Stockholm, June 2020

Acknowledgements

The authors would like to express their sincere gratitude to the Carnegie Corporation of New York for its generous financial support of the project. They are also indebted to all the experts who participated in the workshops and other events that SIPRI organized in Stockholm, Beijing, Colombo, New York, Geneva and Seoul. The content of this report reflects the contributions of this international group of experts. The authors also wish to thank the external reviewer, Erin Dumbacher, as well as SIPRI colleagues Sibylle Bauer, Mark Bromley, Tytti Erästö, Shannon Kile, Luc van de Goor, Pieter Wezeman and Siemon Wezeman for their comprehensive and constructive feedback. Finally, we would like to acknowledge the invaluable editorial work of David Cruickshank and the SIPRI editorial department. Responsibility for the views and information presented in this report lies entirely with the authors.

Abbreviations

A2/AD  Anti-access/area-denial
AGI  Artificial general intelligence
AI  Artificial intelligence
ATR  Automatic target recognition
AURA  Autonomous Unmanned Research Aircraft
CAIR  Centre for Artificial Intelligence and Robotics
CBM  Confidence-building measure
CCW  Certain Conventional Weapons Convention
CD  Conference on Disarmament
DARPA  Defense Advanced Research Projects Agency
DBN  Deep belief network
DCDC  Development, Concepts and Doctrine Centre
DGA  Direction générale de l’armement (directorate general of armaments of France)
DIU  Defense Innovation Unit
DOD  Department of Defense
DRDO  Defence Research and Development Organisation
GAN  Generative adversarial network
ICBM  Intercontinental ballistic missile