THE AUTONOMOUS WEAPONS CONVENTION

By Elizabeth Herington

INTRODUCTION

With the dawn of Artificial Intelligence, killer robots are quickly going from science fiction to science fact. Autonomous weapons are already deployed on modern battlefields, and more are in development. Some of these systems are able to use lethal force; others help human soldiers in nonlethal ways. International negotiations on this issue have stalled: some countries see the potential benefits of taking humans off of the battlefield and using machines instead, while others are concerned about the risks involved in taking the decision to use lethal force away from humans. The UN has been able to negotiate agreements about the ethical and legal use of many types of weapons in the past, reducing the humanitarian consequences of war. At this conference, you will be working to make breakthroughs on this important issue and to develop an international consensus on the ethical use of this emerging technology.

[Image: Atlas, a bipedal humanoid robot currently being developed for the US Defense Advanced Research Projects Agency (DARPA). Source: Interesting Engineering]

EXPLANATION OF THE ISSUE

Historical Development of Autonomous Weapons

The history of autonomous weapons begins with that of remotely piloted military drones. These were used as early as WWI, when inventors devised small remote-controlled explosive devices. During WWII, the Germans developed a drone called “The Goliath” which carried explosives and looked like a small tank. The Soviet army developed a full-size remotely controlled tank.

Modern remotely controlled weapons are Unmanned Aerial Vehicles (UAVs). Many nations have invested heavily in these systems, which now perform military reconnaissance and targeted airstrikes. One example is the American MQ-9 Reaper, a remotely piloted drone which carries Hellfire missiles and has become a regular tool in the war on terror.

Since the advent of Artificial Intelligence, unmanned weapons have gone from remote-controlled to autonomous. Depending on the system, this means that these machines are able to pilot themselves, pick targets by themselves, and deploy force by themselves. These weapons are used for many different purposes: sentry duties, logistics, explosive ordnance disposal, reconnaissance and more. Israel uses an Unmanned Ground Vehicle called the Guardium for border security. It can drive itself over a pre-programmed route and generally operates alongside infantry forces. It is mostly intended for surveillance, but it does carry weapons. The Aegis air-defense system is a combat system used by the navies of several countries, including the United States, South Korea, and Norway. It automatically protects ships against hostile aircraft, ballistic missiles, and cruise missiles. It detects targets and engages them using interceptor missiles. In 2006, South Korea announced plans to install Samsung Techwin SGR-A1 sentry robots along the demilitarized zone. These robots are armed with machine guns and are capable of autonomous tracking and targeting. They can fire autonomously, but their actions can be overridden by a human supervisor. The future of autonomous weapons includes things like autonomous tanks, convoy vehicles, and fighter jets. These weapons will be able to select and engage targets without human intervention.

Previous UN Weapons Conventions

The UN has a long history of developing international treaties that regulate the use of certain kinds of weapons. These are typically geared to prohibiting or restricting the use of weapons which are excessively damaging or indiscriminate. One of these agreements that has recently received much press is the Chemical Weapons Convention. Adopted in 1992, it bans chemical weapons and requires their destruction. The treaty is enforced through inspections of the facilities of those party to the agreement. Other international agreements govern the possession and use of things like biological weapons, nuclear weapons, and some conventional weapons.


Scope of the Problem

If the UN fails to act to establish international norms on this issue, the development of autonomous weapons will continue to accelerate in many countries around the world. Some argue in favor of autonomous weapons, saying that they take humans out of harm’s way and may actually be more discerning about the use of force than humans would be. Others worry about the ethical implications of transferring responsibility for decisions about the use of force to machines, and about the possibility of their malfunctioning.

Arguments in Favor of Autonomous Weapons

Autonomous weapons have huge potential military advantages, which is why countries are investing heavily in them. Member states will be reluctant to give up these potential benefits. Pairing unmanned systems with humans acts as a force multiplier, meaning that fewer warfighters are needed and each warfighter is able to be more effective on missions. Autonomous weapons also reduce casualties by removing humans from the battlefield. These systems are better suited than humans for missions that are “dull, dirty or dangerous,” says the US Department of Defense. Dull missions are long reconnaissance or sentry missions. Dirty missions are those that may involve hazardous materials that humans should stay away from, and dangerous missions are those that directly put humans in harm’s way, like defusing bombs. For example, the US Army is looking to acquire autonomous vehicles for supply convoys - one of the deadliest missions in recent wars because of IED attacks.

Autonomous weapons may also be better suited than humans for piloting fighter jets. There are physiological constraints, such as being able to withstand high-G maneuvers, as well as mental constraints, like needing intense focus for long periods of time, that may make robots better suited for flying fighter jets than humans are.

They may also act more ethically than humans on the battlefield. Since they need not be programmed with a “self-preservation instinct,” they could be more willing to take risks than humans and less likely to use force unnecessarily out of fear. This means that they might use more restraint in targeting than human operators would.

Autonomous systems may also be better able to make judgments than humans, since their decisions would not be clouded by stress and fear. When overloaded with stress, human decision-making breaks down, leading soldiers to commit actions that they wouldn’t have under other circumstances.


In a world where many US soldiers are on trial for war crimes because of unethical decisions made in the “haze of war,” machines may be preferable.

The Institute of Electrical and Electronics Engineers (IEEE) argues that the technology to develop lethal autonomous weapons will soon be so readily available that the international community may not be able to prevent their development. Instead, countries will need to acquire them to protect themselves against their use - much like the “mutually assured destruction” doctrine of the Cold War. Many experts think that autonomous weapons will be the next nuclear weapons: the country that develops them first will have a significant battlefield advantage, and others will scramble to catch up. Nuclear weapons and mutually assured destruction have been credited with ushering in the longest period of relative peace the world has ever known; if autonomous weapons are the next nuclear weapons, they may also prevent wars.

Arguments Against Autonomous Weapons

An open letter from AI developers and researchers calls for a ban on the use of offensive autonomous weapons outside of human control. They argue that the international community should avert a global AI arms race before it begins: such a race would not be beneficial for humans, and might tarnish the reputation of AI, preventing it from being used in other beneficial applications. These experts also note that autonomous weaponry represents the “third revolution in warfare, after gunpowder and nuclear arms.”

One important way that autonomous weapons differ from nuclear arms is the way that they affect the threshold for going to war. Nuclear weapons, because of the principle of mutually assured destruction, increase the threshold for war, making it less likely that world leaders will decide that engaging in armed conflict with an adversary is worth the risk. Autonomous weapons may lower that threshold, since leaders will not need to put humans in danger in order to challenge an adversary militarily.

Similar to the IEEE’s argument that the technology for creating these weapons will soon be so readily available as to be uncontrollable, some argue that, if developed, these weapons will become so ubiquitous and cheap that terrorists, warlords and dictators will easily be able to acquire and misuse them. Therefore, they argue, the weapons should never be developed in the first place.

In 2013, a group of engineers and AI experts issued the “Scientists’ Call to Ban Autonomous Lethal Robots.” In it, they challenge the claim that machines will be more discriminate in the use of force than humans are, and question whether they would be able to follow the laws of war when deciding whether to engage a target.


They argue that there is no “clear scientific evidence that robot weapons have, or are likely to have in the foreseeable future, the functionality required for accurate target identification, situational awareness or decisions regarding the proportional use of force.” This is important because, in order for these weapons to follow established laws of war, they need to be able to distinguish between combatants and civilians and follow the “Rules of Engagement.” An inability to distinguish between civilians and soldiers may lead to unacceptable civilian casualties.

Another potential ethical issue is that autonomous weapons violate one of the principles of “jus in bello,” international humanitarian law, which states that there must be a human who can be held responsible for any civilian casualties. If civilians are killed by machines, then this principle is violated: flawed decisions made by AI could be attributed to flaws in its programming, or to flaws in its own autonomous decision-making, making it difficult to attribute those decisions to a human.

Others argue that the primary concern about autonomous weapons should not be ethics but reliability. Autonomous weapons do not make moral decisions - those who deploy them do. Therefore, concerns about their moral decision-making ability are unfounded; those who choose to use these weapons will be responsible for using them in an ethical manner. This is similar to the distinction between weapons that are illegal and the illegal use of legal weapon systems. Instead, these experts argue that the focus should be on whether these systems are reliable. If they are deployed in an ethical way but malfunction and cause collateral damage, that would be unacceptable. There is a history of such malfunctions: in 1988, a US Navy self-defense system detected and shot down what it identified as a hostile incoming aircraft. It turned out to be an Iranian passenger airliner, and all 290 people on board were killed.

UN Action

The UN has been relatively ineffective on this issue, despite many members of the UN staff pushing for action on an international agreement on the use of these weapons. Member countries, such as the US and Russia, have opposed legally binding measures to curb the use of autonomous weapons in previous UN negotiations on the topic. In 2013, the UN rapporteur on extrajudicial, summary or arbitrary executions recommended that member states issue a moratorium on the development of lethal autonomous robots until an international agreement on their use had been established. This recommendation was not implemented. In March of 2019, UN Secretary-General António Guterres said that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.” Many international organizations hope that the UN will soon take action on this issue.


Despite this, there has not been any international agreement on the use or development of autonomous military systems. There have been meetings to discuss the issue, but no action has been taken. At these meetings, states have failed to even agree on a definition of lethal autonomous weapons. This could be because of the opposition to these negotiations by some member countries.

IDEOLOGICAL VIEWPOINTS

US and Russian Opinion

So far, UN negotiations on this issue have been blocked primarily by a group of states led by the US and Russia. They argue that autonomous weapons can play a useful role in warfare while still obeying the Law of Armed Conflict. These countries think more research and discussion is necessary before any international action to restrict the use of autonomous weapons is adopted. “We believe it is premature to enter into negotiations on a legally binding instrument, a political declaration, a code of conduct, or other similar instrument,” said a US representative at a meeting to negotiate these issues. These countries refuse to even open negotiations on a new treaty governing these weapons.

This resistance comes from a hesitation to support any international measures that constrain individual countries’ freedom of maneuver. It also comes from state leaders’ belief that developing cutting-edge advanced technology is imperative in global geopolitical struggles. These countries are reluctant to constrain the development of these technologies, and they wish to explore the potential benefits of lethal autonomous weapon systems.

Chinese, Belgian and Peruvian Opinion

These states, along with others, have proposed negotiations on a new treaty to prevent the development and use of fully autonomous weapons (those that can act without any human oversight). This would create a new international law: a legally binding ban on the use and development of lethal autonomous weapons.

German and French Opinion

A group of states led primarily by Germany and France opposes a legally binding measure but supports a declaration of the necessity of maintaining human control over the decision to use deadly force. This is a weaker measure than a new international law, but it might be a step in that direction, providing a framework for future negotiations on a new treaty.


AREAS OF DEBATE

It’s important to note that this agreement need not be either an outright ban or a green light on the development and use of autonomous weapons. There are nuances within the argument - maybe autonomous military systems are allowed, but with caveats. These could include things like the kinds of missions they are allowed to carry out, or the degree of autonomy they could attain. Creative compromise will be needed to find an agreement which capitalizes on the benefits of military AI while avoiding the dangers. One important component of any of these agreements would be a clear definition of an autonomous weapon - something that the international community has so far been unable to agree upon.

[Image: Ripsaw, an autonomous tank currently in development by the US Army. Source: Interesting Engineering]

Ban on Autonomous Weapons

This proposal is the strongest of all of the policies outlined here and would constitute an outright ban on all autonomous military systems. The UN has instituted many such bans before, including bans on landmines and on chemical and biological weapons. These bans have largely been successful in eliminating these weapons from modern battlefields (with exceptions, which almost always provoke international condemnation).

A ban on lethal autonomous weapons could block several different activities related to these weapons, including research, development, production and deployment. Enforcing a ban on the research and development of autonomous weapons would be very difficult, since it would be difficult to detect violations of the treaty. Enforcing a ban on the production and deployment of these weapons would be much easier.

An outright ban on all kinds of autonomous weapon systems may go too far in banning technologies that could be useful in nonlethal or non-offensive ways. For example, autonomous self-defense systems would likely prevent more deaths than they would cause, autonomous convoy vehicles differ little from driverless cars, and bomb-defusing robots would keep humans out of harm’s way. However, allowing the development of autonomous weapon systems with caveats, such as limits on the missions they could perform or the level of autonomy they could attain, could still be dangerous. A semi-autonomous weapon could easily be converted into an autonomous weapon, and a self-defense system could easily be converted into an offensive system. This means that developing any of these weapons, even with safeguards, could lead to unintended consequences.

Political Perspectives on this Solution

Very few countries support a ban on all kinds of autonomous military technology.


Some of the strongest supporters, including international advocacy groups like The Campaign to Stop Killer Robots, may support something this stringent, but most states support something less strict. For example, the strongest UNSC supporters of a ban on autonomous weapons oppose fully autonomous lethal weapons - not all kinds of autonomous weapons.

Restricting the Degree of Autonomy

Many semi-autonomous systems are already in place, and many different levels of autonomy are already possible in machines. One approach to regulating the development and use of autonomous weapons could be to agree upon a level of autonomy that is allowable. A useful framework for understanding different levels of autonomy is whether or not there is a “human in the loop.” Human Rights Watch’s Bonnie Docherty developed a system with three categories, to which I will add one.

The first category is automated systems. These systems are capable of performing a predetermined set of actions in a particular, pre-determined pattern without human intervention. Systems like auto-reloading or autopilot qualify under this category.

The second category is “human in the loop” systems. These systems are only able to select targets and deliver force after receiving a command from a human. One example is Israel’s Iron Dome air-defense system, which detects and targets incoming rockets and aircraft, then sends that information to a human operator. The operator decides whether or not to intercept the incoming object with a missile.

The third category is “human on the loop” systems. These systems autonomously detect, target and deliver force, but are supervised by a human who can veto any of the machine’s decisions. One example is the US Navy’s Phalanx Close-In Weapon System. This system protects ships by detecting, tracking and using force against incoming missiles or aircraft. Human operators can override the weapon’s automation.

The fourth category is “human out of the loop” systems. These weapons are capable of selecting targets and delivering force without any human input or interaction. No such weapons are currently in use, but they are a subject of much concern, since they are already possible to develop.

An international agreement which mandated a certain level of allowable autonomy might capture some of the benefits of autonomous weapons while avoiding some of the ethical dilemmas. However, defining and agreeing upon that level of autonomy may be difficult. Additionally, as this technology develops, previously defined levels may become irrelevant.


As previously discussed, it would not be difficult to convert a system which previously required human input into one that is fully autonomous, making it easy for bad actors to use these weapons unethically once developed.

Political Perspectives on this Solution

US policy currently requires that there be a “human in the loop” making decisions before a weapon system uses force against a target, indicating that the US may be amenable to agreements on this front. However, it may also be reluctant to give up its ability to set its own rules on how autonomous it is comfortable with its weapons being. Previously proposed treaties on this issue have included language like “fully autonomous,” meaning that UNSC countries are willing to negotiate on this point.

One counterargument to eliminating fully autonomous weapons is that these weapons would be very effective in enforcing “red lines.” Human leaders are often unable to effectively deter their adversaries from crossing these “red lines” (such as the targeting of civilians, or the use of chemical weapons), since they often do not follow through on their commitments. Pre-programmed fully autonomous weapons, which humans could not prevent from carrying out their missions, would be a very effective deterrent in this case.

Restricting Potential Mission Sets

Another potential framework for regulating the development and use of military autonomy is restricting the kinds of missions that autonomous weapons are allowed to perform. One potential split is along offensive/defensive lines: systems meant for defensive purposes would be legal, but those meant for offense would not. Another potential split is along lethal/nonlethal lines: systems able to deliver lethal force would be illegal, but those not capable of lethal force would be legal. A split could also differentiate between autonomous military systems which aid warfighters in actions like supply convoys (in which no force at all is necessary) and those that are capable of force but are nonlethal (like systems that may be used to harm or distract an enemy). There are many different kinds of missions you may decide are legal or not legal.

For example, maybe an autonomous air-defense system is allowed to use force against incoming missiles, but not against incoming aircraft (which carry humans). Since there are so many different ways to split the many kinds of missions that autonomous systems could perform, there is ample room for compromise on this front.

Again, there is the danger of weapons being adapted for illegal uses. For example, it would not be difficult to retrofit a bomb-defusing robot with a machine gun, turning it from a nonlethal aid into a lethal autonomous weapon.

[Image: The DRDO Daksh, a military robot used by the Indian Army to handle and destroy hazardous materials. Source: Interesting Engineering]


Political Perspectives on this Solution

Very few countries object to the use of nonlethal autonomous weapons. However, obtaining agreement on finer points like offensive vs. defensive systems, or targeting humans vs. targeting infrastructure, may be harder.

Political Declaration

This is the solution proposed by France and Germany. The UN would issue a statement about the ethical use of autonomous weapons and the importance of maintaining human control over the decision to use lethal force. This declaration would be non-binding but would establish an international consensus and norms about the development and use of these weapons. This consensus could be built upon in the future to develop a new treaty governing autonomous weapons.

Further Discussions

This proposal is the one currently adopted by the international community. It may be too early to make a decision about autonomous weapons, since the technology is still in development. One particularly important question that has yet to be answered is whether machines will be able to attain a level of intelligence that allows them to distinguish between legitimate and illegitimate targets. This is a task that is difficult even for human soldiers, and it is not obvious whether machines would be better or worse at it. Countries like the US and Russia argue that the international community must wait until the development of this technology proceeds further before making decisions about it.

One advantage of this approach is that it prevents an important decision from being made based on conjecture. For example, if we assume that machines will be better than humans at distinguishing between combatants and civilians, and fully allow their use (or allow their use with some caveats), and this turns out not to be true, there would be an international law which allows dangerous, indiscriminate weapons. However, waiting until the technology is developed and implemented to regulate it may be too late. Once it is developed, these weapons will likely be cheap to mass-produce, and it may be too difficult to prevent their proliferation.

Political Perspectives on this Solution

Strong military countries, like the US, Russia, Israel and Australia, tend to support this argument. Others think that action must be taken now, not later.


CONCLUSION

The UN has the power to regulate the development and use of the emerging technology of autonomous military systems. These systems have many potential benefits to individual militaries, and the state which develops them first will have a significant battlefield advantage over its adversaries. This could lead to an international arms race if the UN does not act first. These weapons also raise important ethical questions about human responsibility and the use of lethal force. Some argue that these systems would be able to make judgments based on facts rather than fear and prejudice, and would therefore act more ethically in a warzone than humans do. Others argue that the possibility of “killing machines” is morally repugnant, and that it is unacceptable to transfer the responsibility for decisions involving the use of force from humans to machines.

[Image: The Guardium, an autonomous unmanned ground vehicle used by Israel to patrol the Gaza border. Source: Interesting Engineering]

At this conference, the UN Security Council will be revisiting this issue, one on which the UN has been unable to make progress in the past. Countries will need to balance their own desire for regional or international geopolitical control against the benefits of international cooperation. The UN has been able to settle issues like this before, and hopefully it will be able to come to a consensus, or at least make progress, at this round of negotiations.

GUIDE TO FURTHER RESEARCH

There are three main areas of research that will be helpful to you in understanding this topic and your role in negotiations at the conference.

First, understand existing international laws of war such as the Geneva Conventions and the Law of Armed Conflict. This will help you to understand which regulations are already agreed upon in the international community (and which autonomous weapons would be expected to abide by).

Second, focus on what your country’s perspective on this issue has been at previous UN negotiations. Do they support a ban, a declaration, or further research before making international agreements on the topic? Do they object only to fully autonomous weapons or to anything with an autonomous component? Also look into what definition of autonomy they use. Delve into the nuances of their stance on this issue.

Third, research what technology already exists and what is being developed. There is much more than what is included in this briefing, and it is helpful to know what is actually possible when creating regulations. Some of the technologies are truly amazing, and others are pretty terrifying.


Reading the sources cited in the bibliography will be a helpful start!

GLOSSARY

Biological Weapons – weapons which involve releasing toxins or infectious agents like bacteria, viruses and fungi.

Conventional Weapons – weapons which are not considered weapons of mass destruction (chemical, biological, nuclear and radiological weapons are considered WMD).

Demilitarized Zone – an area on the border of North and South Korea that incorporates territory on both sides of the cease-fire line established at the end of the Korean War. Military incidents between the two countries have occurred there.

Explosive Ordnance Disposal – the secure disarming and disposal of explosive weapons; bomb defusal.

High-G Maneuvers – fighter jets travel at such high speeds that some maneuvers, like hairpin turns, cause the pilot to experience forces that are several times the force of gravity. This can cause people to become light-headed, vomit or black out if unprepared.

Improvised Explosive Device (IED) – a simple bomb made by unauthorized or nonstate forces.

Jus in Bello – International humanitarian law which establishes guidelines for the ethical waging of war. This aims to limit the suffering caused by war.

Law of Armed Conflict – Regulations on the conduct of armed hostilities intended to prevent unnecessary suffering while not impeding the effective waging of war. It contains four principles: distinction, proportionality, military necessity, and unnecessary suffering.

Logistics – in this context, this means things like transporting people and equipment from one place to another.

Mutually Assured Destruction – a Cold War deterrence strategy that argues that if the use of force by one side would guarantee the destruction of both the attacker and the defender, then neither side would attack. Used to justify stockpiling nuclear weapons.

Rules of Engagement – a directive which specifies the circumstances and limitations under which military forces will engage with the enemy.

BIBLIOGRAPHY

“Autonomous Weapons That Kill Must Be Banned, Insists UN Chief | UN News.” United Nations, United Nations, 25 Mar. 2019, news.un.org/en/story/2019/03/1035381.

Busby, Mattha. “'Killer Robots' Ban Blocked by US and Russia at UN Meeting.” The Independent, Independent Digital News and Media, 3 Sept. 2018, www.independent.co.uk/life-style/gadgets-and-tech/news/killer-robots-un-meeting-autonomous-weapons-systems-campaigners-dismayed-a8519511.html.

“Computing Experts from 37 Countries Call for Ban on Killer Robots.” International Committee for Robot Arms Control, 2013, www.icrac.net/wp-content/uploads/2018/06/Scientist-Call_Press-Release.pdf.

Etzioni, Amitai, and Oren Etzioni. “Pros and Cons of Autonomous Weapons Systems.” Army University Press, 2017, www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/.

Evans, Hayley, and Natalie Salmanowitz. “Lethal Autonomous Weapons Systems: Recent Developments.” Lawfare, 11 Mar. 2019, www.lawfareblog.com/lethal-autonomous-weapons-systems-recent-developments.

Klare, Michael. “U.S., Russia Impede Steps to Ban 'Killer Robots'.” Arms Control Today, Arms Control Association, Oct. 2018, www.armscontrol.org/act/2018-10/news/us-russia-impede-steps-ban-%E2%80%98killer-robots%E2%80%99.

McFadden, Christopher. “A Brief History of Military Robots Including Autonomous Systems.” Interesting Engineering, 4 Dec. 2018, interestingengineering.com/a-brief-history-of-military-robots-including-autonomous-systems.


“Open Letter on Autonomous Weapons.” Future of Life Institute, 28 July 2015, futureoflife.org/open-letter-autonomous-weapons/.

Semeniuk, Ivan. “Scientists Call for Ban on Lethal, Autonomous Robots.” The Globe and Mail, 15 Feb. 2019, www.theglobeandmail.com/business/technology/science/article-scientists-call-for-ban-on-lethal-autonomous-robots/.

Tarantola, Andrew. “This Unmanned Patroller Guards Israeli Borders for Days on End.” Gizmodo, Gizmodo, 17 June 2013, gizmodo.com/this-unmanned-patroller-guards-israeli-borders-for-days-5943055.
