Optimising Peace Through a Universal Global Peace Treaty to Constrain the Risk of War from a Militarised Artificial Superintelligence
Working Paper

John Draper
Research Associate, Nonkilling Economics and Business Research Committee
Center for Global Nonkilling, Honolulu, Hawai'i
A United Nations-accredited NGO

Abstract

This paper argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised would constitute a catastrophic existential risk, either because the ASI might be employed by a nation-state to wage war for global domination, i.e., ASI-enabled warfare, or because the ASI might wage war on its own behalf to establish global domination, i.e., ASI-directed warfare. These risks are not mutually exclusive, in that the first can transition into the second. Presently, few states declare war on, or even wage war against, each other, in part due to the 1945 UN Charter, which states that Member States should "refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state", while allowing for UN Security Council-endorsed military measures and the "exercise of self-defense". In this theoretical ideal, wars are not declared; instead, 'international armed conflicts' occur. However, costly interstate conflicts, both 'hot' and 'cold', persist, for instance the Kashmir Conflict and the Korean War. Further, a 'New Cold War' between the AI superpowers, the United States and China, looms. An ASI-directed or ASI-enabled future conflict could trigger 'total war', including nuclear war, and is therefore high risk.
The risk reduction strategy we advocate draws on international relations and peacebuilding theory and optimises peace through a Universal Global Peace Treaty (UGPT), which could contribute to ending existing wars and preventing future wars through conforming instrumentalism. A critical juncture for establishing the UGPT is emerging: the treaty could be leveraged off a 'burning plasma' fusion reaction breakthrough, expected circa 2025 to 2035, as was attempted, unsuccessfully, with fission in 1946. While this strategy cannot readily cope with non-state actors, it could influence state actors, including those developing ASIs, or an ASI with agency, particularly if the ASI valued conforming instrumentalism.

Keywords: AI arms race, artificial superintelligence, conforming instrumentalism, existential risk, international relations, nonkilling, peace

We say we are for Peace. The world will not forget that we say this. We know how to save Peace. The world knows that we do. We, even we here, hold the power and have the responsibility. We shall nobly save, or meanly lose, the last, best hope of earth. The way is plain, peaceful, generous, just—a way which, if followed, the world will forever applaud.

Bernard Baruch, June 14, 1946, presenting to the United Nations Atomic Energy Commission the proposed lynchpin of the United Nations, whose failure led to the Cold War.

Introduction

The problem of an artificial superintelligence at war

While some maintain that creating an artificial general intelligence (AGI), i.e., human or above-human artificial intelligence (AI), is impossible, for instance on the Dreyfusian argument that computers do not grow up, belong to a culture, or act in the world (Fjelland, 2020), others believe a path to AGI is in sight (Goertzel & Pennachin, 2007; Wang & Goertzel, 2012).
Consequently, the international defence community is beginning to take seriously the national security risk posed by the development of AGI and its implications for international relations:

Start thinking about artificial superintelligence and engage with the community that has started thinking about actionable options in that part of the option space (also recognizing that this may open up new avenues for engaging with actors such as China and/or the Russian Federation). (De Spiegeleire, Maas, & Sweijs, 2017, p.107)

This is because the development of AGI by a single state could 'lock in' economic or military supremacy as a discrete 'end point' to economic and military competition in international politics: that state could then prevent a rival AGI from being developed and go on to establish global domination (Horowitz, 2018, p.54). The development of AI is a major factor in national security because AI can be militarized, can be employed in adversarial contexts, and can provide a decisive advantage in terms of economic, information, and military superiority (Allen & Chan, 2017; Babuta, Oswald & Janjeva, 2020; National Security Commission on Artificial Intelligence [NSCAI], 2021). As the United States' (US) NSCAI (2021, p.159) report states, "The impact of artificial intelligence (AI) on the world will extend far beyond narrow national security applications. The development of AI constitutes a new pillar of strategic competition, and it heightens the competition in existing pillars." The NSCAI report urges the US to attain a state of military AI readiness by 2025. AI is thus important for waging war, perhaps even decisive war, raising the spectre of the return of 'total war', i.e., industrial interstate war, which in its last major iteration, the Second World War, involved genocidal levels of killing (Markusen & Kopf, 2007).
Particularly for the US, AI technological supremacy, founded on economic power, is viewed as paramount to national security (NSCAI, 2021, p.7), especially with respect to relations with China:

China possesses the might, talent, and ambition to surpass the United States as the world's leader in AI in the next decade if current trends do not change. Simultaneously, AI is deepening the threat posed by cyber attacks and disinformation campaigns that Russia, China, and others are using to infiltrate our society, steal our data, and interfere in our democracy. The limited uses of AI-enabled attacks to date represent the tip of the iceberg.

Military AI is already potentially revolutionary in that it could "accelerate the pace of combat to a point in which machine actions surpass the rate of human decision making, potentially resulting in a loss of human control in warfare" (Congressional Research Service, 2019, p.37, citing Scharre, 2017). It is also unpredictable, in that "placing AI systems capable of inherently unpredictable actions in close proximity to an adversary's systems may result in inadvertent escalation or miscalculation" (Congressional Research Service, 2019, p.37, citing Altmann & Sauer, 2017). While Baum (2017) finds little evidence of military AGI projects, the 2021 NSCAI report envisages a massive ramping up of US military AI by 2025 and endorses a push towards more general AI, in a future marked by societal-scale, accelerated adversarial AI attacks and ubiquitous interstate AI warfare, including with autonomous systems, with conflict over matters such as intellectual property and technological leadership. It is inconceivable in the present geopolitical atmosphere that a successful American, Chinese, or Russian AGI project, of which dozens are in development, would not immediately be militarized on the pretext of national security and employed to maintain or secure technological supremacy.
In line with the general thinking of some previous researchers (e.g., Totschnig, 2019), we do not see developing an AGI as primarily a technological problem, but as a political one. However, where most authors stress the political problem of humanity's relationship with an AGI, this article focuses on the political challenge it poses for international relations in terms of war enabled or directed by one or more militarized AGIs. In this article, we apply Bostrom's (2002, p.25) Maxipok rule of thumb for moral action on existential risks, i.e., "Maximize the probability of an okay outcome, where an 'okay outcome' is any outcome that avoids existential disaster," to the specific risk of AGI-enabled or AGI-directed warfare. We stress that in simultaneously militarizing AI along nation-state lines and developing AGI, humanity is playing 'technology roulette'. Yet it is possible to imagine a way forward that in fact alleviates the security dilemma (Tang, 2009) that the development of ASI causes. Former Secretary of the Navy Richard Danzig (2018, p.21) notes of this risk, "If humanity comes to recognize that we now confront a great common threat from what we are creating, we can similarly open opportunities for coming together." In this cooperative spirit, we seek to constrain the existential risk through the stratagem of peacebuilding by treaty. In security terms, steps towards a peace treaty governing the development and deployment of an ASI, and potentially reaching out to a future ASI itself, are a form of reassurance: a probing communication designed both to signal benign intentions and to obtain information, via feedback on the signal, about another party's intent, as well as its resolve (Tang, 2010), i.e., resistance to a malignly directed or intrinsically malign ASI.
The article hypothesizes that the militarization of AI introduces the risk that an AGI, i.e., AI equal to human intelligence, or an artificial 'superintelligence' (ASI), i.e., AI greater than human intelligence (Bostrom, 2014), is weaponized, or weaponizes itself, and so presents a catastrophic risk to humanity. We then argue that this risk can be minimized,