An Investigation into the Militarisation of Artificial Intelligence

A dissertation submitted in partial fulfilment of the requirements for the degree of Bachelor of Science (Honours) in Software Engineering

By Daniel Edwards

Department of Computing & Information Systems Cardiff School of Management

Cardiff Metropolitan University

April 2017

Declaration

I hereby declare that this dissertation entitled An Investigation into the Militarisation of Artificial Intelligence is entirely my own work, and it has never been submitted nor is it currently being submitted for any other degree.

Candidate: Daniel Edwards

Signature:

Date:

Ethics Approval ID Number: 2016D0316

Supervisor: Dr Simon Thorne

Signature:

Date:


Abstract

This dissertation investigates the public’s perception of Artificial Intelligence and of Artificial Intelligence with military applications. Major General Clive ‘Chip’ Chapman kindly gave his consent to be interviewed for the dissertation, giving his insight from an ex-forces point of view and sharing his own expertise on the matter.

The literature reviewed for the dissertation provides definitions of and research into the existing research areas of artificial intelligence, existing militarised technologies, and Artificial Intelligence within popular culture.

Primary research was undertaken in the form of an online survey to understand the public’s perception and knowledge of Artificial Intelligence and of Artificial Intelligence within the military. A second primary research method, an interview with Major General Clive Chapman, is used to evaluate the results gained from the survey.

An overall conclusion will then be drawn from the results gathered about the public’s perception of Artificial Intelligence within the Military.

Keywords: Artificial Intelligence, Militarisation, Perception, Drones, Counter Terrorism, Warfare


Acknowledgments

I would like to thank Dr Simon Thorne for being my supervisor and helping me through the project and four years at Cardiff Met.

I’d also like to thank Professor Tom Crick for standing in as my supervisor in Simon’s absence.

I would also like to thank all of the people who have kindly taken the time to provide data for the project.

Thanks to Major-General Clive Chapman for agreeing to be interviewed for the project and providing invaluable insight and information.

Finally, thank you to my entire family and to Charlotte for their continued support of my studies, through this dissertation and throughout my four years at Cardiff Metropolitan.


Contents

Declaration
Abstract
Acknowledgments
1.0 Introduction
1.1 Background
1.2 Aim
1.3 Objectives
1.4 Research Questions
1.5 Motivation for Topic
1.6 Structure of Dissertation
2.0 Literature Review
2.1 Introduction
2.2 What is Artificial Intelligence?
2.3 How is Artificial Intelligence classified?
2.3.1 Learning
2.3.2 Reasoning
2.3.3 Problem Solving
2.3.4 Perception
2.3.5 Language Understanding
2.3.6 Areas of Artificial Intelligence
2.4 Military Applications of Artificial Intelligence and Technology
2.4.1 DARPA – Defense Advanced Research Projects Agency
2.5 Ethics within Artificial Intelligence
2.5.1 The Investigatory Powers Act of 2016
2.6 United Nations: The Campaign to Stop Killer Robots
2.7 Artificial Intelligence within Popular Culture
2.7.1 Isaac Asimov 1950
2.7.2 HAL 9000 1968 - 1997
2.7.3 Skynet 1984 - Present
2.7.4 ARIIA 2008
2.7.5 Muse - Drones 2015
2.8 Literature Conclusion
3.0 Methodology
3.1 Introduction
3.1.1 Philosophies
3.1.2 Approaches
3.1.3 Strategies
3.1.4 Choices
3.1.5 Time Horizons
3.1.6 Techniques and Procedures
3.2 Research Methods
3.2.1 Primary Research
3.2.2 Secondary Research
3.3 Ethics Approval
4.0 Results and Discussion
4.1 Introduction
4.2 Demographic Results
4.3 Study Results
4.4 Conclusion of Results
5.0 Conclusion
5.1 Introduction
5.2 Objectives
5.3 Research Questions
5.4 Limitations
5.5 Future Research
5.6 Overall Conclusion
6.0 Appendix
6.1 Bibliography
6.2 Ethics Form
6.3 Devolved Ethics Approval Application Summary
6.4 Participant Information Sheet
6.5 Participant Consent Form
6.6 Semi Structured Interview Questions
6.7 Email to Major-General Chapman
6.8 Signed Participant Consent Form
6.9 Interview with Major-General Clive Chapman
6.9.1 CENTCOM Example
6.10 Online Survey Questions
6.11 Online Survey Results
Question 1
Question 2
Question 3
Question 4
Question 5
Question 6
Question 7
Question 8
Question 9
Question 10
Question 11
Question 13
Question 14
Question 15
Question 17
Question 18
Question 19
Question 21
Question 22
Question 23
Question 24
Question 26
Question 27
Question 28
Question 29


1.0 Introduction

Artificial Intelligence has been used extensively within mobile devices, video games and purchase prediction. It is also an up-and-coming technology for self-driving cars, pioneered by several large companies worldwide such as Tesla, Google, Uber and Nissan (Muoio, 2016). Many issues have been raised by the public regarding who is to blame in the event of a self-driving crash: the car manufacturer or the self-driving system? Sensors can fail, equipment can become damaged, and the system could be hacked. Users can also mistake autopilot for self-driving; this happened in China in August 2016 (Liberatore, 2016). It seems that even if users ‘think’ they have engaged the correct system, it remains possible that they have not.

Now, imagine if one of the aforementioned systems is applied to a military program, for example a drone mission: the drone is not controlled by a human and is simply programmed to explore a hostile environment and act on what it finds. Will the system interpret this information to execute hostiles based on intelligence? How will it determine collateral damage? These are the questions and issues this project will explore. One source of artificial intelligence research within the US military is the Defense Advanced Research Projects Agency (DARPA).

"A truly transformative capability requires visual intelligence, enabling these platforms to detect operationally significant activity and report on that activity so warfighters can focus on important events in a timely manner." (UPI, 2011)

Now, this one system can be applied to many applications and existing technologies. However, the article also states that “human analysts would have to interpret video from the platforms to detect operationally significant activities.” (UPI, 2011) This means that it is not a fully self-contained AI system and there will always be a human at the other end of a screen making the decisions to act on. The militarisation of Artificial Intelligence is potentially a harrowing and uncertain future with many technological, ethical and legal issues to cover.


1.1 Background

Artificial Intelligence is an ever-expanding and changing form of computing. It has changed the way companies operate, how consumers shop and how the public go about their daily lives. It is also everywhere: in our pockets, our computers and our governments. It is a technology that, used correctly, can solve many complex problems based on very little given information. It can also learn from these exercises to create a larger and more complex intelligence system. Artificial intelligence is becoming more prominent in the software industry as demand for analytics is driven by growth in both structured and unstructured data (Bloomberg Enterprise, 2016). This means the technology can be applied to any application needing analysis, whether the input is text or visual. The policies and programming exist; it is just a matter of applying them.

1.2 Aim

The aim of this project is to understand the public’s perception of the Militarisation of Artificial Intelligence and to evaluate the existing technologies that could potentially be weaponised.

1.3 Objectives

1. Review the relevant literature surrounding Artificial Intelligence within and outside the military.
2. Research Artificial Intelligence within Popular Culture.
3. Conduct a survey to collect the public’s understanding of Artificial Intelligence and their feelings about Artificial Intelligence being used within the Military.
4. Review the public’s perception of the Militarisation of Artificial Intelligence.

1.4 Research Questions

1. What are the public’s perceptions of Artificial Intelligence being used within the military?
2. What is the public’s understanding of the term Artificial Intelligence?
3. Is there potential for an Artificial Intelligence system within the military?


1.5 Motivation for Topic

As Artificial Intelligence is forever evolving into new markets and uses, it is a strong candidate for government applications. Kolton (2016) states that “2015 proved a watershed year for artificial intelligence (AI) systems. Such advanced computing innovations can power autonomous weapons that can identify and strike hostile targets. AI researchers have expressed serious concerns about the catastrophic consequences of such military applications.” It seems inevitable that such technologies will be used within the military in the not too distant future, and companies both private and public are already starting to develop new systems that could be used. It will be an interesting project to explore what is out there and what may come in the years ahead. AI has also seeped into the public’s daily lives without many people realising it. There are loose AIs within smartphones, and even supermarkets use AI to make purchase predictions for their customers. Artificial Intelligence is slowly becoming a larger part of everyone’s lives, and its integration into daily life and popular culture is only going to grow.

1.6 Structure of Dissertation

Section 1: Introduction

The introduction (section 1) will provide a broad overview of the dissertation, the aims, the objectives and the research questions that are going to be researched and answered by the end of the project.

Section 2: Literature Review

This section of the dissertation will define Artificial Intelligence and Militarisation, and then explore the literature surrounding the academic principles of AI. Also explored are some technologies developed by DARPA, the literature surrounding the ethics of Artificial Intelligence, and the United Nations. Finally, Artificial Intelligence within Popular Culture is discussed in depth, along with the effect it has had on society.

Section 3: Methodology


A methodology is outlined and discussed in depth. It has been written to give context to the tasks that will be undertaken to collect data, answer the research questions and achieve the aims set in section 1.

Section 4: Results and Discussion

This section will discuss the results that have been collected and compare them to other research that has been undertaken. The results will also be analysed alongside an interview with Major General Clive Chapman.

Section 5: Conclusion

The conclusion, the last chapter, will provide answers to the research questions and discuss whether the aims and objectives have been met. It will also outline any future research that could be undertaken.


2.0 Literature Review

2.1 Introduction

The following text will explore the literature surrounding artificial intelligence as a broad subject, including its classification, existing technologies and the research areas of AI. The literature review will also explore the military applications of robotics and artificial intelligence. Ethics within artificial intelligence will also be discussed and researched, alongside how artificial intelligence has grounded itself within popular culture. It is important here to understand the characteristics of the definition of militarise:

• to give a military character to
• to equip with military forces and defences
• to adapt for military use

(Merriam-webster.com, 2017)

Where Militarisation is concerned within this dissertation, it is meant in terms of adaptation for military use. For example, the Militarisation of Artificial Intelligence is the use of Artificial Intelligence within a military application.


2.2 What is Artificial Intelligence?

Artificial Intelligence

NOUN

[mass noun] The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. (artificial intelligence, n.d.)

Copeland states that “Artificial Intelligence (AI) is usually defined as the science of making computers do things that require intelligence when done by humans.” (Copeland, 2000)

It is a concept that requires minimal human input once created and can then ‘learn’ based on its functionality and outcomes. It is a concept that has been used and developed by some of the largest technology and software companies in the world. Turing (1950) proposed that machines ‘could’ think and answer questions based on their knowledge, and ‘think’ like a human (Turing, 1950). Of course, Alan Turing is probably most famous for his work during the Second World War at Bletchley Park and is considered the grandfather of modern-day computer science. His work on autonomous machines to solve problems paved the way for modern AI as we know it today.

AI was formally introduced in 1956, when the name was coined, although by then work had been under way for about five years (Russell and Norvig, 2016). Based on this knowledge, it is safe to say that artificial intelligence is nothing new. What is new, however, is the application of this knowledge and how the technology is used in the modern day. Google has Google Assistant, Apple has Siri and Microsoft (Windows) has Cortana. These ‘assistants’ are loosely named Artificial Intelligence Systems; however, they can learn from the user’s commands and speech and ‘think’ for themselves to a certain degree. Google Assistant can book the user a reservation with just a simple command; the user is not physically booking it themselves. The assistant is doing all the work based on its programming (Google Assistant – Your own personal Google, n.d.).

Turing (1950) also proposed the Turing Test as one approach to AI which was designed to provide a reasonable operational definition of intelligence. The computer would need to possess the following capabilities:

• Natural Language Processing, to enable it to communicate successfully in English (or some other human language)
• Knowledge Representation, to store information provided before or during the interrogation
• Automated Reasoning, to use the stored information to answer questions and to draw new conclusions
• Machine Learning, to adapt to new circumstances and to detect and extrapolate patterns

(Russell and Norvig, 2016)

There are also other approaches to AI, including the cognitive modelling approach, the laws of thought approach and the rational agent approach. All four approaches have been followed historically to create and test AI (Russell and Norvig, 2016).


2.3 How is Artificial Intelligence classified?

AI has concentrated on the following components (Copeland, 2000): learning, reasoning, problem solving, perception and language understanding. These are five key features that underpin a strong and successful Artificial Intelligence.

2.3.1 Learning

Without the system learning from its mistakes and successes it can never progress into a more intelligent system, thus making it redundant. This could be as simple as the system misunderstanding a command and the user correcting it manually, which prevents the system from making the same mistake again and increases its knowledge of the given problem. “Learning is distinguished into a number of different forms. The simplest is learning by trial-and-error.” (Copeland, 2000). The system would need to update itself for it to be considered fully artificial. Neural Networks have historically been used to ‘learn’ from existing data sets and are then applied to new data sets (Russell and Norvig, 2016).
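As a minimal illustration of the trial-and-error learning described above, the following hypothetical Python sketch lets a system repeatedly try actions, reinforce whichever ones succeed, and settle on the best one. The action names and success condition are invented for illustration only.

```python
import random

def trial_and_error(actions, succeeds, episodes=100):
    """Try actions at random and count which ones succeed."""
    scores = {action: 0 for action in actions}
    for _ in range(episodes):
        action = random.choice(actions)   # try something
        if succeeds(action):              # feedback from the environment
            scores[action] += 1           # reinforce what worked
    return max(scores, key=scores.get)    # best action found so far

# Hypothetical task: only "turn_right" avoids the obstacle.
best = trial_and_error(["turn_left", "turn_right", "reverse"],
                       succeeds=lambda a: a == "turn_right")
print(best)  # almost always "turn_right" after 100 trials
```

The same loop, scaled up, underlies the kind of learning the literature describes: mistakes cost nothing but time, and each correction narrows future behaviour.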

2.3.2 Reasoning

Copeland (2000) defines reasoning as follows: “To reason is to draw inferences appropriate to the situation in hand.” This leads us to understand that an AI can act based on live information and its environment. Russell and Norvig (2016) have also described the process of reasoning as one “in which we design agents that can form representations of the world, use a process of inference to derive new representations about the world, and use these new representations to deduce what to do”. This ultimately means the AI can decide what it sees and how to act upon that information.

2.3.3 Problem Solving

The above statement also relates to live problem solving that the system may encounter. As an example, data is given to a system and the command is to find x. The system will need to create its own algorithm to sort through the given data and find x without any input from the user. ALPHA, an AI developed by a University of Cincinnati doctoral graduate and Psibernetix, Inc., has recently been tested by a retired US Air Force Colonel. The tests resulted in Colonel Gene Lee being ‘shot out of the sky’ in the simulation, and the AI has beaten other similar intelligences (Reilly, 2016). This would suggest that the system could ‘think’ quicker than the Colonel and make faster decisions. This would also integrate machine learning. Luger (2008) explains that the success of machine learning programs “suggests the existence of a set of general learning principles that will allow the construction of programs with the ability to learn in realistic domains” (Luger, 2008).
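As a minimal sketch of the ‘find x’ task described above, assuming the system is free to choose its own strategy, the hypothetical code below first sorts the data and then applies a binary search rather than scanning blindly:

```python
def find_x(data, x):
    """Sort the data, then binary-search it for x."""
    data = sorted(data)               # the system picks its own strategy
    lo, hi = 0, len(data) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if data[mid] == x:
            return True
        if data[mid] < x:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return False

print(find_x([7, 3, 9, 1, 5], 9))  # -> True
```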

2.3.4 Perception

Perception can be explored as follows: “the environment is scanned by means of various sense-organs, real or artificial, and processes internal to the perceiver analyse the scene into objects and their features and relationships.” (Copeland, 2000). This allows the system to ‘see’ an environment and determine the most successful way of solving the given problem.

2.3.5 Language Understanding

Language understanding is required for the AI to operate in many different situations. This understanding could be of another spoken language, signs, road signs or signals. It could also mean understanding the user’s commands. Winograd (1972) describes a language-understanding system that accepts English orthography and punctuation as input and analyses it via a Grammar Analyser that can handle the basic units of the English language, such as clauses, noun groups and prepositional groups. The system then uses a Semantics unit to interpret sentences, and a Planner makes the deductions for the Answer (Winograd, 1972).
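A toy sketch of the staged pipeline Winograd describes (grammar analysis, then semantics, then planning) is given below. The two-word grammar, vocabulary and world model are hypothetical and vastly simpler than the original system:

```python
def grammar_analyser(sentence):
    """Split a command into a verb and an object noun group."""
    words = sentence.lower().rstrip("?.!").split()
    return {"verb": words[0], "object": " ".join(words[1:])}

def semantics(parse, world):
    """Map the parsed noun group onto the system's world model."""
    return world.get(parse["object"])

def planner(meaning):
    """Deduce the answer from the interpreted meaning."""
    return f"Object located at {meaning}" if meaning else "Unknown object"

world_model = {"red block": (2, 3), "blue pyramid": (5, 1)}
parse = grammar_analyser("Find red block")
print(planner(semantics(parse, world_model)))  # -> Object located at (2, 3)
```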


2.3.6 Areas of Artificial Intelligence

[Figure 1: Areas of Artificial Intelligence, adapted from (tutorialspoint, 2017). The diagram shows Artificial Intelligence branching into six areas (Gaming, Natural Language Processing, Expert Systems, Vision Systems, Speech Recognition and Intelligent Robots) with example technologies: Non Playable Characters, Mobile Assistants, Instrument Monitoring Systems, Self Driving Cars, Aircraft Commands and DARPA/Boston Dynamics robots.]

Figure 1 shows the umbrella areas of AI and some examples of the technology. This diagram could probably be broken down into further segments, but the above areas are what has been concentrated on in recent years.

2.3.6.1 Gaming

Non Playable Characters (NPCs) are what the gamer can interact with. They usually have set animations that are triggered by the user’s interactions; however, they can change based on these interactions. For example, if a user triggers a building collapse, the NPCs could run away or attack the gamer’s character based on their programming. They can also aid on quests, depending on the game, and have different built-in personas. Within shooting games, the NPCs must decide where and how to attack the player based on the player’s ability and decisions (Nareyek, 2004).

2.3.6.2 Natural Language Processing

Intelligent Personal Assistants (IPAs) or Digital Personal Assistants (DPAs) have been around for many years. Most modern smartphones come with a version of an assistant installed as standard; Cortana (Windows), Siri (Apple) and Google Assistant (Google/Android) are a few examples. They can be used to perform a task triggered by a vocal command. It is also argued that these assistants are not true forms of Artificial Intelligence, as they only draw on a knowledge bank of instructions and act upon them. They do, however, have access to the entire device and can create, modify and delete items within it. It has also been stated that IBM’s Watson is not deserving of the term Artificial Intelligence as it is just set algorithms connected to a database such as Google search (Hofstadter and Herkewitz, 2014).

2.3.6.3 Expert Systems

O’Keefe, Balci and Smith state that a system can be described as an expert system if it performs tasks to a level close to human expert performance (O’Keefe et al., 1986). As an example, an aircraft autopilot will keep an aircraft on course like the pilot can, alter altitude or follow waypoints that are programmed into the machine. Other examples include weather forecasting, stock market monitoring and medical diagnostic systems. An Expert System is made up of several components: the user, a graphic user interface, an inference engine and a knowledge base, and possibly a global database that the system is connected to (Smith, 1985).

[Figure 2: Expert System Example, adapted from (Sharma, 2013). The diagram shows the user interacting with a user interface, which passes queries to an inference engine backed by a knowledge base.]
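The components in Figure 2 can be sketched in a few lines of Python: a knowledge base of if-then rules and a forward-chaining inference engine that fires whichever rules match the current facts. The rules below are hypothetical, loosely styled on the autopilot example above:

```python
# Knowledge base: (conditions that must all hold, action to recommend).
knowledge_base = [
    ({"altitude_low", "terrain_rising"}, "climb"),
    ({"off_course"}, "correct_heading"),
    ({"fuel_low"}, "divert_to_alternate"),
]

def inference_engine(facts):
    """Fire every rule whose conditions are all present in the facts."""
    return [action for conditions, action in knowledge_base
            if conditions <= facts]

# The user interface would feed in instrument readings as facts
# and present the recommended actions back to the pilot.
print(inference_engine({"altitude_low", "terrain_rising", "off_course"}))
# -> ['climb', 'correct_heading']
```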


2.3.6.4 Vision Systems

This can be partially defined as pattern recognition, so that the system can draw conclusions from the patterns found (Alcock, 2004), and also as the system surveying its surroundings, as within drone technology. With the surroundings surveyed, the system can generate a binary image and make decisions based on those surroundings, which may include avoiding obstacles or climbing to new altitudes (a simple sketch of this step appears at the end of this section). Martinez-Gomez et al. (2014) describe the components needed to gather data for a vision system. These include tactile sensors (proximity sensing), wheel encoders (counting the number of wheel turns), a Global Positioning System, heading sensors (orientation), an inertial measurement unit (relative position) and digital cameras (to relay imaging back to a controller) (Martinez-Gomez et al., 2014). These devices are used within self-driving cars and drone technologies. Similar technology, dubbed ‘Computer Vision’, is used within sports such as football and tennis to help train athletes and provide statistics (Moeslund et al., 2014).

2.3.6.5 Speech Recognition

Similar to natural language processing, an example of a system using speech recognition is the Eurofighter Typhoon aircraft used by the Royal Air Force (UK). Using Direct Voice Input technology, the pilot can use his or her voice to provide an input to an aircraft system in order to obtain an action or information from that system (Eurofighter.com, 2008). This technology exists to aid the pilot’s concentration during potential combat. It also means the pilot does not need to take their eyes away from the view out of the cockpit. This does, however, place a lot of trust in the system.

2.3.6.6 Intelligent Robots

Robots have primarily been created for environments that are unsafe for humans, but now robots are being created to think for themselves. DARPA and Boston Dynamics have been helping create military robots to aid soldiers for many years, though these have been designed to help carry weight (Boston Dynamics, n.d.). They can, however, change their pace and gait based on the terrain they encounter. Arkin (1998) states that “an intelligent robot is a machine able to extract information from its environment and use knowledge about its world to move safely in a meaningful and purposive manner” (Arkin, 1998).
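To illustrate the binary-image step mentioned under Vision Systems (section 2.3.6.4), the hypothetical sketch below thresholds a grid of grayscale sensor readings so that each cell becomes either obstacle (1) or free space (0), which a drone could then use for simple obstacle avoidance:

```python
def to_binary_image(grayscale, threshold=128):
    """Mark every reading at or above the threshold as an obstacle."""
    return [[1 if pixel >= threshold else 0 for pixel in row]
            for row in grayscale]

# Hypothetical readings: the bright middle column is an obstacle ahead.
readings = [
    [10, 200, 15],
    [12, 220, 18],
    [ 9,  11, 14],
]
for row in to_binary_image(readings):
    print(row)
# [0, 1, 0] / [0, 1, 0] / [0, 0, 0]: steer left or right, not straight on.
```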


2.4 Military Applications of Artificial Intelligence and Technology

Like most other organisations around the world, the military has adopted technology within its operations. Agencies have been created by both the U.S.A. and the U.K. for the research and development of these technologies: in the U.S.A., the Defense Advanced Research Projects Agency; in the U.K., the Defence Science and Technology Laboratory.

2.4.1 DARPA – Defense Advanced Research Projects Agency

“For more than fifty years, DARPA has held to a singular and enduring mission: to make pivotal investments in breakthrough technologies for national security.” (Darpa.mil, n.d.)

DARPA is an agency of the U.S. Department of Defense tasked with developing new technologies for use by the military. Founded in 1958 as ARPA (at first without the ‘Defense’), the agency developed rocket programs which would eventually become the Saturn V Moon rocket (Darpa.mil, n.d.). Over the last 50 years its many projects have included weapons, vehicles and digital technologies.

2.4.1.1 Vehicle and Dismount Exploitation Radar

Vehicle and Dismount Exploitation Radar, known as VADER, was initiated by DARPA in 2011 and deployed to Unmanned Aerial Vehicles (UAVs) and small manned aircraft (Darpa.mil, n.d.). “Developed for DARPA by Northrop Grumman Electronic Systems, VADER provides synthetic aperture radar and ground moving-target indicator data to detect, localize and track vehicles and dismounts.” (Darpa.mil, n.d.) This would be used to track potential targets and possible threats without having manned operations in the area being observed.

2.4.1.2 Autonomous High-Altitude Refuelling

In 2007 DARPA partnered with NASA (National Aeronautics and Space Administration) to prove that high-performance aircraft could perform automated, unmanned refuelling from a conventional tanker. A human pilot was on board to monitor conditions, but this was the first step towards making this a possibility (Darpa.mil, n.d.). “During the program’s final test flight, two modified RQ-4 Global Hawk aircraft flew in close formation—100 feet or less between refueling probe and receiver drogue—for the majority of a 2.5-hour engagement at 44,800 feet. The successful test helped pave the way for future unmanned high-altitude long-endurance aircraft that can refuel in flight, expanding their mission capabilities and range” (Darpa.mil, n.d.). Coupling this technology with the VADER system means that an unmanned aircraft has the potential to fly for as long as it needs to, provided a fuelling unit can reach it.

2.4.1.3 Legged Squad Support System (LS3)

Alongside Boston Dynamics, DARPA is developing a “rough-terrain robot designed to go anywhere Marines and Soldiers go on foot, helping carry their load.” (LS3 - Legged Squad Support Systems, n.d.). This is currently an aid, but it has the potential to be used in dangerous climates, combat zones and other unsafe environments. The current iteration of the LS3 follows ‘its leader’ using computer vision, but it can also follow GPS on missions of up to 20 miles and up to 24 hours. Other similar technologies funded by DARPA include BigDog, Cheetah and LittleDog (Boston Dynamics, n.d.).

2.4.1.4 Broad Operational Language Technology (BOLT)

For DARPA, “SRI is creating technology to translate foreign languages in all genres, retrieve information from translated material, and enable bilingual communication via speech or text” (Broad Operational Language Technology (BOLT) Program, n.d.). This has been developed for use by soldiers in the field. However, it seems that this technology could be paired with drones to gain intelligence. It could also be used to intercept communications from possible threats in a foreign language, translate them into English and then determine the correct action required.

2.4.1.5 Vulture II

DARPA has an agreement with Boeing to develop and fly the SolarEagle aircraft for Vulture II (Haddox, 2016). It uses solar technology to remain in flight potentially for years, though as yet it has not been a commercial success. It is being designed to have the capability of providing network and satellite coverage to regions that cable cannot reach.


2.5 Ethics within Artificial Intelligence

With any artificial intelligence it is important to consider the ethics of and moral reasons for its use. In a phone, virtual assistants are used for ease and to further the functionality of the device. They also allow hands-free access to devices for disabled users.

“Artificial Intelligence has the promise of turning a multi-stage process (a process that requires that you are familiar with its functions and features and can physically interact with its interface) into a much less daunting one in which you simply have a chat with your device in the same way that you would with an attentive and obliging friend who is ever-ready to help” (Christopherson, 2016). This functionality could lend its ease of use to the military. If an AI is flying a hypothetical surveillance drone with no pilot at the remote control, those in command can plan the next move and simply speak to the system, for example: “Increase surveillance altitude to 10,000 feet and hold a standard holding pattern at 150 knots.”

This type of interactivity between humans and computers is not new and has theoretically been around since the 1950s (MIT Technology Review, 2016), but in modern times any device with Siri, Cortana or Google Assistant can understand the vast majority of instructions given to it, with the option to manually correct an instruction if it was misheard or misinterpreted. The new instruction is saved for future use in the instruction database.

With speech, it could be very easy to state a command that is not actually what was intended, thereby creating an issue. For example, who would be at fault: the controller, for potentially saying the wrong command, or the system, for misinterpreting it? There are other issues to consider; for instance, if a machine learns the tactics used by the military where the system is deployed, and the tactic it chooses causes mass loss of life, who is to blame then? It seems that a controller would always need to be in command to make a go/no-go decision in case the system malfunctions or makes bad decisions that could result in mass casualties.
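A minimal sketch of such a go/no-go safeguard follows; the command parser is a hypothetical stub standing in for real speech recognition. The system reads back its interpretation, and nothing executes without an explicit human ‘go’:

```python
def parse_command(utterance):
    """Hypothetical stub: pretend we parsed the spoken command."""
    return {"action": "set_altitude", "value_feet": 10_000}

def execute_with_confirmation(utterance, confirmation):
    """Read the interpretation back and require an explicit 'go'."""
    command = parse_command(utterance)
    print(f"Heard: {command}")
    if confirmation.strip().lower() != "go":   # the human stays in command
        return "Aborted by controller."
    return f"Executing {command['action']} -> {command['value_feet']} ft"

order = "Increase surveillance altitude to 10,000 feet"
print(execute_with_confirmation(order, "go"))      # executes
print(execute_with_confirmation(order, "no-go"))   # aborted
```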


2.5.1 The Investigatory Powers Act of 2016

“On Tuesday 29 November 2016, the Investigatory Powers Bill received Royal Assent and will now be known as the Investigatory Powers Act 2016. It will provide a new framework to govern the use and oversight of investigatory powers by law enforcement and the security and intelligence agencies.” (Gov.uk, 2016)

The Investigatory Powers Act of 2016 (IPA) is the updated successor of the Regulation of Investigatory Powers Act of 2000 (RIPA). The original legislation, RIPA, made several actions legal for government, including but not limited to the interception of communications and the acquisition and disclosure of communications data (Intelligencecommissioner.com, 2014). RIPA also provides the statutory basis for the permission and use of covert surveillance and covert human intelligence sources by the intelligence agencies and certain other public authorities (Intelligencecommissioner.com, 2014). This can be understood as the government having permission to collect the public’s communications data and even ‘snoop’ on individuals with no warrant.

In December 2016, a news article revealed that local authorities had used the Regulation of Investigatory Powers Act to follow people, including dog walkers, over five years (Asthana, 2016). The Guardian’s article explains that a mass freedom of information request found 186 local authorities had used the government’s Regulation of Investigatory Powers Act (RIPA) to gather evidence via secret listening devices, cameras and private detectives (Asthana, 2016). These surveillance operations spanned from the trivial (monitoring dog barking and following a suspect guilty of feeding pigeons) to the serious (illegal puppy farming, under-age firework sales and the sale of dangerous toys) (Asthana, 2016). Some of these surveillance exercises may seem trivial and unneeded, but they show that if power is given to a group with authority, it will be used in whatever way that group sees fit, regardless of the public’s view on the matter. The article does state that a spokesperson had come forward to say the law had since changed and RIPA could only be used if criminal activity is suspected (Asthana, 2016).


In 2016 the updated RIPA was signed and passed as the Investigatory Powers Act. “Introduced by Theresa May when she was home secretary, the bill legitimises the security services' surveillance powers while adding checks on their ability to gather information about citizens without a warrant. It introduces the need for a judge's sign-off on "the most intrusive powers", as well as a new Investigatory Powers Commissioner to monitor how they're used” (McGoogan, 2016). The new act focuses on the digital era that the United Kingdom is now in. For example, in the year 2000 the general public did not have access to smartphones and had only basic dial-up internet access in their homes, whereas with the introduction of the iPhone in 2007 (Webdesigner Depot, 2009), every phone company started developing and launching its own smartphones, with each new model having new features and more power than its predecessors. The smartphone revolution provided the world with access to the internet without needing a computer or a router. A mobile phone now has the power to access social media and servers, and to provide live geographical data to the relevant authorities if required. “Ensures powers are fit for the digital age” (Gov.uk, 2016); this quote supports the above statements. The updated Investigatory Powers Act will now allow government to track, trace and collect digital information on the public; the IPA was dubbed the Snoopers’ Charter before it was passed and made law. Burgess (2016) explained that over 100,000 people had signed a petition for the act to be repealed (Burgess, 2016). It is safe to say that the new, revised act was not welcomed by the British public. With this act now law, there is nothing stopping some of the stored data being used by prediction software to aid counter-terrorism. This may be seen as a step too far by the public and may contradict the Universal Declaration of Human Rights, which states under Article 11 that “Everyone charged with a penal offence has the right to be presumed innocent until proved guilty according to law in a public trial at which he has had all the guarantees necessary for his defence” (Un.org, 1948). This does state “charged with a penal offence”; however, it is closely followed by innocent until proven guilty, and the general public may argue that, whilst they may be innocent, why are they being ‘watched’ or ‘spied’ on?

2.6 United Nations: The Campaign to Stop Killer Robots

“Fully autonomous weapons, also known as "killer robots," would be able to select and engage targets without human intervention.”

(Human Rights Watch, 2017)

Sounding like a quote from a science fiction film, the above comes from Human Rights Watch, which published a campaign report titled Losing Humanity: The Case against Killer Robots (Human Rights Watch, 2012), in which the group defines three types of weapon loops that are applied to robotic weapons.

• Human-in-the-Loop Weapons: Robots that can select targets and deliver force only with a human command.
  o This would entail a human always controlling the robot and making the decision to fire.
• Human-on-the-Loop Weapons: Robots that can select targets and deliver force under the oversight of a human operator who can override the robots’ actions.
  o The robot can choose the target but can only act if the controller also makes that decision for the robot.
• Human-out-of-the-Loop Weapons: Robots that are capable of selecting targets and delivering force without any human input or interaction.

(Human Rights Watch, 2012)
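These three loop types can be expressed as explicit policy checks, so that a weapon system’s engagement logic is forced through the stated level of human involvement. The names and function below are illustrative only, not drawn from any real system:

```python
from enum import Enum

class LoopType(Enum):
    HUMAN_IN_THE_LOOP = 1      # a human must issue the fire command
    HUMAN_ON_THE_LOOP = 2      # the system proposes, a human may veto
    HUMAN_OUT_OF_THE_LOOP = 3  # no human involvement at all

def may_engage(loop, human_command=False, human_veto=False):
    if loop is LoopType.HUMAN_IN_THE_LOOP:
        return human_command   # nothing happens without an explicit order
    if loop is LoopType.HUMAN_ON_THE_LOOP:
        return not human_veto  # proceeds unless a human overrides it
    return True                # the case the campaign warns against

print(may_engage(LoopType.HUMAN_IN_THE_LOOP))                   # False
print(may_engage(LoopType.HUMAN_ON_THE_LOOP, human_veto=True))  # False
print(may_engage(LoopType.HUMAN_OUT_OF_THE_LOOP))               # True
```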

The worry is that more Human-out-of-the-Loop Weapons (Human Rights Watch, 2012) are going to be created, which will in turn take the jobs of skilled professionals within the military. It would also take responsibility away from the controller of the robotic weapon. With the U.S. Department of Defense publishing a directive titled Autonomy in Weapon Systems, it is clear that these systems are in development, though the directive does outline the testing phases (Autonomy in Weapon Systems, 2012). Regardless, there are campaigns to regulate and even stop the development of these machines for use within governments.

2.7 Artificial Intelligence within Popular Culture

AI and its potential have been explored over the years through the medium of film, television and graphic novels or comic books. Unfortunately, many of these portrayals involve a system of some kind being created for one purpose and then either being taken over by a hostile party or becoming self-aware and working towards its own agenda. Technologies have appeared in science fiction television before they were invented in reality, and this has been perceived as the series or films having ‘predicted’ the future. In reality, the devices shown, most notably in Star Trek: The Original Series (1966-1969), have inspired technology designers to create devices that achieve similar results. Rumour has it that Captain Kirk’s communicator inspired Motorola’s first mobile (cell) phone, and the Portable Access Devices (PADDs) resemble Apple’s iPads; both use touch interfaces and can display communication information (Saadia, 2016). With such devices and technologies littering our screens and societies, it is no wonder that designers have drawn on and created technology that resembles that of popular culture.

2.7.1 Isaac Asimov 1950

Asimov is famous for creating the Three Laws of Robotics, originally in 1950: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given to it by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws (Asimov, 2013). These laws come from the famous I, Robot collection, which was adapted into a film of the same name in 2004. These laws in theory protect a human, even if a human gives an order to harm another human, as that conflicts with the First Law. In the film, the robots must follow the same laws as set out by Asimov; however, the film does not follow the interconnecting stories of the book. It follows one robot named Sonny who breaks the Three Laws of Robotics by killing his creator. The overseeing Artificial Intelligence, V.I.K.I., develops her own sentience and logic, and as a result deduces that humanity is on a path to certain destruction; she therefore creates the Zeroth Law. This law states that she has to protect humanity from being harmed, clearly disobeying the First and Second Laws in order to achieve her goal. In turn, she reveals that she is planning to enslave and control humanity simply to protect it (IMDb, 2004).

2.7.2 HAL 9000 1968 - 1997

First seen in Stanley Kubrick’s 1968 film 2001: A Space Odyssey, HAL (Heuristically programmed ALgorithmic computer) is the sentient computer aboard Discovery 1 on a mission with a human crew (IMDb, n.d.). HAL is equipped with functions to perform lip reading and facial recognition and to interpret emotion and express its own emotions (2001: A Space Odyssey Wiki, n.d.). This essentially means that the system can communicate with the human crew and understand all that they say and do. The voice behind HAL 9000 is a soothing male voice; by today’s standards this is rather normal, as most mobile assistants have a calm timbre. This was possibly done to keep the eventual reveal of the system’s sinister nature a shock to viewers. HAL 9000 was an inaugural inductee of the 2003 Robot Hall of Fame; the Terminator T800 is also an inductee of the hall of fame for fictional and non-fictional robots (Robothalloffame.org, n.d.). This cements the HAL 9000’s impression on popular culture and science fiction media.

2.7.3 Skynet 1984 - Present

Skynet, a fictional artificial intelligence popularised by The Terminator franchise from 1984 onwards, is an example of a system that becomes self-aware and can act upon any information it sees and understands. It is the idea of the original film’s director, James Cameron, and is a fictitious computer program; within the fiction, however, it becomes a militarised system used to run military programs until it becomes too advanced. The franchise, which has covered four films, a ride at various Universal Studios across the globe, video games and comic books, revolves around a computer system created by Cyberdyne Systems that learns and comes to the conclusion that humankind is the enemy. The system researches time travel and sends various ‘Terminators’ back in time to prevent John Connor from existing, as he is the human who will stop Skynet in the future (Terminator Wiki, n.d.). This is a fictional device created to sell a franchise; however, people have since referred to new advanced AI as ‘Skynet’.

2.7.4 ARIIA 2008

The Autonomous Reconnaissance Intelligence Integration Analyst (Lewinski, 2009), from the 2008 film Eagle Eye, is an artificial intelligence built by the United States Department of Defense. Its interface, like that of most modern ‘assistants’, is an audio user interface. This allows easy communication with the system and allows Aria (the system’s user name) to reply in the calming human voice of actress Julianne Moore (IMDb, n.d.). The system was set up to collect all the data of every US citizen. It can then analyse its collected database of human activity and identify patterns, which are used to track potential threats. Data is collected from mobile phones, ATMs and anything else connected to a network that Aria can see. Based on the United States bombing Iraq, Aria decides that humans are now a target, and the system sets itself a mission to fix the ‘corrupted’ government system by assassinating high-profile government officials. The two protagonists of the film are chosen by the computer to carry out its plan (ARIIA the AI project of the film EAGLE EYE, 2009).

2.7.5 Muse - Drones 2015

The 2015 album Drones from rock band Muse explores the modern warfare of drone technologies and the various fallouts of the technology. “The next step in drones is going to be autonomous drones, which actually make ‘kill’ decisions themselves; there will be no humans involved.” (Bellamy and Menyes, 2015). The song Reapers makes heavy reference to Reaper and Hawk drones carrying out ‘kill missions’ by remote control, with little or no remorse for the actions carried out on a computer screen.

“You kill by remote control
The world is on your side
You’ve got reapers and hawks babe
Now I am radicalized
Drones!”

(Bellamy, Howard and Wolstenholme, 2015)

As humans, it is our instinct to strive to create the next best technology, and nine times out of ten inspiration is taken from the media around us. If the majority of AI seen in films and television ‘go rogue’ or have a hidden agenda, it should be taken as a warning that it could happen. There are no current reports or public accounts of this happening, but that does not mean it will not or cannot happen.


2.8 Literature Conclusion

Artificial Intelligence is not a new concept, but it continues to gather momentum and create new technology every year. There are set groups that can classify types of AI, but some technologies can be misclassified to a certain degree, such as Intelligent Personal Assistants (IPAs) or Digital Personal Assistants (DPAs). Much of the theory of AI stems from the 1980s and broadly remains unchanged; it is the application of the theory that changes. AI has also found itself at the centre of many films and books over the years, and this trend continues to grow within popular culture. With AI now a dominant force in computing, it will only continue to advance in use and application.


3.0 Methodology

3.1 Introduction

In order to fully research the Militarisation of Artificial Intelligence, a research methodology is required to understand how data will be collected and analysed, and how conclusions can be drawn from the research.

Figure 3: Research Onion

Adapted from (Saunders et al., 2009)

Figure 3 shows Saunders, Lewis and Thornhill’s ‘Research Onion’, which is used to display the overarching knowledge areas surrounding primary research methods. The theory is that the researcher cannot simply adopt one method in order to gain the knowledge required to answer the research question(s) (Saunders et al., 2009).


3.1.1 Philosophies

A positivist approach will be used, as statements such as the research questions in section 1.4 can be researched. The research undertaken will then test these hypotheses and answer the research questions, which may lead to further research (Saunders et al., 2009). Positivism strongly adheres to the view that factual knowledge is gained through observation and is dependable as it has been collected first hand (Dudovskiy, 2016).

There are five main principles of a positivism philosophy:

1. There are no differences in the logic of inquiry across sciences.

2. The research should aim to explain and predict.

3. Research should be empirically observable via human senses. Inductive reasoning should be used to develop statements to be tested during the research process.

4. Science is not the same as common sense, and common sense should not be allowed to bias the research findings.

5. Science must be value-free and it should be judged only by logic. (Dudovskiy, 2016).

In order to draw conclusions under a positivist philosophy, the results must be discussed without bias.

3.1.2 Approaches

An inductive approach, also known as inductive reasoning, will be undertaken. It takes the observations from the research, and theories are then created from the data (Dudovskiy, 2016). This approach is the opposite of deductive reasoning; Dr Deborah Gabriel explains the difference as follows: “A deductive approach usually begins with a hypothesis, whilst an inductive approach will usually use research questions to narrow the scope of study” (Gabriel, 2013).

3.1.3 Strategies

There are several strategies to choose from in this layer, so a hybrid strategy will be adopted in order to achieve the objectives and answer the research questions given in sections 1.3 and 1.4 respectively. A survey, in the form of an online survey using the platform Survey Monkey, will be used to answer the who, what, where, how much and how questions posed (Saunders et al., 2009). This data is then analysed so that patterns can be seen and conclusions drawn. However, case studies will also be examined to understand the surrounding literature of the topic. These studies will also aid comparison with the survey results when analysing patterns and parallels.

3.1.4 Choices

Mixed methods are chosen within this layer. This allows for both quantitative and qualitative data collection and analysis techniques (Saunders et al., 2009). It also means that any type of analysis can be undertaken without limitation.

3.1.5 Time Horizons

The survey highlighted in section 3.1.3 will take a cross-sectional approach: a ‘snapshot’ of each participant’s knowledge at the time the survey is submitted. The contrast to cross-sectional is a longitudinal study, which follows participants over a longer period of time. That would allow participants to learn about and research new ideas and technologies they may not have previously known. This project does not allow for the time frame required for a longitudinal study (Saunders et al., 2009).

3.1.6 Techniques and Procedures

The core of Saunders, Lewis and Thornhill’s ‘Research Onion’ is data collection and data analysis, which can now take place after deciding on the various research methods. Of course, data analysis can only take place once data collection has occurred. Trends and patterns can be drawn from the survey results, which will yield mainly numerical data to be displayed in graphic form for ease of viewing. The interview with Major-General Chapman will then be added to the analysis of the survey, noting where it agrees or disagrees with the results; it also aids the overall understanding of the topic.

3.2 Research Methods

This dissertation will utilise both primary and secondary research to understand what is meant by both militarisation and artificial intelligence. Dr Emma Smith recommends combining secondary research with primary data to produce higher-quality research outputs (Smith, 2011). One primary research method will be a semi-structured interview with Major-General Clive “Chip” Chapman, to understand the benefits of using unmanned vehicles and automated systems within warfare and the ethical issues this may involve. Another primary research method will be an online survey, to gauge the public’s perception and knowledge of AI and how the public feel about certain scenarios involving artificial intelligence and the military.

Secondary research is also undertaken to outline the definitions of artificial intelligence and of ‘militarisation’ in the context of the research. Secondary research is further utilised to explore the existing technologies used by the military, specifically technology developed by DARPA (Defense Advanced Research Projects Agency), the classification of artificial intelligence, and artificial intelligence within popular culture.

3.2.1 Primary Research

Currie defines primary research as “research that produces data that are only obtainable directly from an original source. In certain types of primary research, the researcher has direct contact with the original source of the data” (Currie, 2005). For this project it will include data obtained from the public and an interview with a professional who has first-hand experience of the military.


3.2.1.1 Semi-Structured Interview

In order to create a set of interview questions that will gain beneficial results, the guidelines outlined by Driscoll should be followed. These include choosing the correct participant, choosing a face-to-face or a virtual interview, choosing a suitable location, and deciding whether to record the interview or have it transcribed in real time (Lowe et al., 2011). Choosing this qualitative method of data collection will yield unbiased results from the interview that can be contrasted with the other research methods.

For the interview, Major-General Clive Chapman has been chosen. He has been the Senior British Military Advisor to US Central Command since September 2010 (Grenville, n.d.). Major-General Chapman has been chosen as an interviewee because of his wealth of experience within the military and his subsequent career in counter-terrorism and advisory roles to central government. The interview will be conducted face to face to achieve a thorough representation of his experience with technology and AI within the military.

The questions chosen should give a clear overview of the participant and why they have been chosen to take part; this should reflect their expertise in the given subject. In terms of this project, questions must relate to the military and technology. Interviewing Major-General Chapman will also provide a new perspective. The questions asked should not be closed, as the more detail that can be applied to the answers, the clearer the conclusion that can be drawn. The other benefit of interviewing someone with a vast history of experience in this area is that they can give examples over a number of years that may not be known to the researcher. Permission will be requested to record the interview via a mobile device and transcribe it at a later date. There are many benefits to having a recording of the interview, some of which are:

• It records a precise and unbiased account of the interview
• It allows the interviewer to concentrate on the answers being given
• It allows for a thorough review of the interview at a later date
• It provides a permanent record of what was said
• It allows general anecdotes to be added to the conversation


There are also some disadvantages to consider:

• When transcribing the interview, it may be difficult to hear exactly what was said
• Technical issues may interrupt proceedings
• Transcribing can be a lengthy process
• It may cause the interviewee to divide his/her attention between the interviewer and the recording device

(Currie, 2005)

For this research project, a recording will be requested, as the benefits outweigh the potential issues. If the recording is refused, then notes and dictation will be needed, which would make the interview a longer process.

3.2.1.2 Online Survey

Currie also states that another of the three methods of primary research is a survey (Currie, 2005). “A survey is a technique in which a sample of prospective respondents is selected from a population. The sample is then studied with a view to drawing inferences from their responses to the statements in a questionnaire, or the questions in a series of interviews” (Currie, 2005). The population for this project is defined as people aged 18-80 within the UK. Snowball sampling will be used with social networks to gain participants; Vogt describes snowballing as “A technique for finding research subjects. One subject gives the researcher the name of another subject, who in turn provides the name of a third, and so on” (Vogt, 2005). Using the social media platform Facebook, a web link can be created for the survey and shared amongst profiles to gain a wide audience. The survey will ask questions about the participant’s profession and technological competence. It will also pose questions about certain artificial intelligences and how they are used, and ask how comfortable the participant is with certain data being used and with AI being used for military programs such as drone missions. Using a Likert technique to assess people’s levels of agreement with a statement, an average response level can be generated (Currie, 2005). Once collected, this data can be analysed to draw a conclusion on the perception of AI being used within the Military and how comfortable the public is with its use.
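As a minimal sketch of how such Likert responses could be averaged, assume a five-point scale coded 1 (strongly disagree) to 5 (strongly agree); the question labels and scores below are hypothetical, not actual survey data:

```python
responses = {
    "AI should fly armed drones": [1, 2, 2, 3, 1, 2],
    "AI makes my phone more useful": [4, 5, 4, 3, 5, 4],
}

for question, scores in responses.items():
    mean = sum(scores) / len(scores)   # average response level
    print(f"{question}: {mean:.2f}")
# Means near 1 suggest broad disagreement; means near 5, broad agreement.
```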


Once the interview has taken place and the online survey results have been collated and analysed, the two methods can be compared for similarities. This may mean comparing Major-General Chapman’s views on the public’s perceptions of automated systems within the Military with a similar question asked in the survey.

All interview questions and online survey questions can be found in the appendix.

3.2.1.3 Research Limitations As mentioned in 3.2.2 the definition of population is not what would be generally consider as the general population of the UK. Due to time constraints and accessibility the web link will need to ‘shared’ around the social media platform in order to gain participants. Unfortunately, this does not give a country wide representation that would be preferred but it does give a wide range of age, profession and technological expertise. If given a longer time period to collect data the link could be used over other social media’s and potentially posted elsewhere other than Facebook. In order to gain guaranteed participants, the web link will be shown on two Facebook Profiles. This may however generate an age biased result based on the ages of the personal profiles that the link is being shared from. In this example, the two males are in the age groups 18-24 and 45-54. It would be expected to see a spike in these groups.

3.2.2 Secondary Research

In order to further the understanding of the topic of Artificial Intelligence within the military, secondary research is needed. This will include existing data in the public domain. Saunders, Lewis and Thornhill explain that secondary research falls into many categories, including written materials, non-written materials, area-based data, time-series data, censuses, continuous and regular surveys and ad hoc surveys (Saunders et al., 2009). For this project, journals, articles and government websites are likely to be a strong source of information and research. Cardiff Metropolitan University also offers library services such as MetSearch to access subscribed journals and research papers.


3.3 Ethics Approval

There are many ethical issues that may arise whilst carrying out primary research; therefore an Ethics Approval Form is required by Cardiff Metropolitan University. Once the form has been completed and accepted by the ethics committee, research can be carried out.

For ethics approval to be granted, several documents must be submitted. These include: a participant consent form, a participant information sheet, an email/letter to the interviewee requesting permission, and the interview questions to be asked.

The aforementioned documents can be found in the appendix.


4.0 Results and Discussion

4.1 Introduction

The section that follows will display and discuss the results of the online survey. Full questions and results can be found in the Appendix. The interview with Major-General Chapman will also be used to help interpret some of the results displayed; it may also contrast with what was collected. Where appropriate, graphical representation of the data will be used to aid the presentation of results. It is important to make the data as readable as possible to help identify patterns and trends. It is also important not to simply repeat the results (Azar, 2006), as this would just duplicate what is in the appendix. It is extremely important to organise the data in a way that it can easily be visualised meaningfully (Banning-Lover, 2014). Any data displayed must support conclusions and analysis, otherwise including it is superfluous. The data in section 4.2 will begin with the demographics of the participants, to outline the group surveyed, and the data displayed thereafter will relate more closely to the topic being researched.


4.2 Demographic Results

The online survey yielded 63 complete responses. This was achieved using the social media platform Facebook, as mentioned in section 3.2.1.2. Below are several graphs to explain the demographics of the contributors to the survey.

Figure 3: Question 2 Responses and UK Facebook Users by Age

UK Facebook Users by Age from (Statista, 2017)

Based on the personal account from which the survey’s web link was shared, the statistics shown in Figure 3 display a mixed and varied response by age. Understandably, the largest demographic was the 18 to 24 bracket, based on the users who could see the web link from the personal account of a sharer in their 20’s. Figure 3 also echoes the second account the link was shared from, which belonged to a male in his 50’s; this accounts for the spike in the 45 to 54 age category. Also displayed on the chart are the age demographics of UK Facebook users, for comparison with the age demographics collected. Because the unique web link was shared on two personal Facebook accounts, the age groups collected are biased towards the ages of the two users who shared the link, as outlined in section 3.2.1.3.


Figure 4: Question 6 Responses

For Question 6 a Likert scale was used to assess the participant’s technical ability. This relied heavily on the participant being honest and assessing themselves fairly. It is also a good example of the cross-sectional “snapshot” mentioned in 3.1.5: the participant has a certain level of technical expertise when they answer the question, but if the survey were sent around again in two to three months’ time, the average participant might have learnt new skills and score themselves higher. Figure 4 displays the results of the Likert scale, which gave participants a slider to rank their technical ability from 0 (No Technical Ability) to 10 (Expert). The Likert scale used can be seen in the appendix, section 6.9.

Based on the results displayed in Figure 4, the mode is 7, with 26 out of 63 (41%) choosing option 7, which sits within the higher half of the Likert scale. Research undertaken by the BBC in 2014 disclosed that 20% of UK adults lack the four basic online skills (BBC, 2014), defined as using a search engine, sending and receiving emails, completing online applications and accessing information online (BBC News, 2012).


Figure 5: Technical Ability based on Age Group

Another interesting display of data is Figure 5, which shows levels of technical ability grouped by the age range of the participants who chose them. This helps illustrate the mode answer of 7, with the majority of age groups choosing 7 at least once. The majority of the 45 to 54 age group chose 7, whereas the 18 to 24 group is spread between 3, 6, 7 and 8. This may be because knowledge comes with age and that demographic has been around technology longer than 18-24 year olds. It may also be a result of the bias coming from the Facebook profile of the 51-year-old sharer.
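A grouped display of this kind can be reproduced with a simple cross-tabulation. The sketch below uses the pandas library; the column names and the handful of example rows are illustrative assumptions rather than the real survey export.

import pandas as pd

# Hypothetical export: one row per respondent, with the Question 2
# age bracket and the Question 6 technical ability score (0-10).
df = pd.DataFrame({
    "age_group": ["18-24", "18-24", "45-54", "45-54", "45-54", "25-34"],
    "ability":   [6, 8, 7, 7, 7, 3],
})

# Count how often each age group chose each ability score,
# mirroring the grouped bars in Figure 5.
print(pd.crosstab(df["age_group"], df["ability"]))

# The per-group modal answer highlights the overall mode of 7.
print(df.groupby("age_group")["ability"].agg(lambda s: s.mode().iloc[0]))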


Figure 6: Question 5 Responses

Out of the 63 participants, the largest share (36.5%) of responses came from an ICT/Computing course or employment background. This may also back up the result of Figure 4, with a mode of 7 being chosen for technical ability. To conclude on the demographic results, the majority of the 63 respondents would class themselves as competently technical, with a mode answer of 7 for Question 6; this, however, may be because of the employment/study demographic these respondents come from.


4.3 Study Results

The following results relate directly to the topic of Artificial Intelligence and the military. The questions asked will help answer the research questions outlined in section 1.4.

Figure 7: Question 7 Responses

Question 7 was used to gauge how the participants judge their own understanding of “Artificial Intelligence”. Using a Likert scale once again, five options were presented to the respondent to choose their level of understanding. Given that 36.5% answered from an ICT/Computing background, it is no surprise that the majority of answers were ‘I know some about the term “Artificial Intelligence”’.

“16% of US jobs will be replaced, while the equivalent of 9% jobs will be created — a net loss of 7% of US jobs by 2025”

(Forrester, 2016)


The above statistic was published in June 2016 by Forrester. It is clear that AI is on the rise and more people are beginning to learn about the technology, as is also shown in Figure 7. This concern was also raised on three separate occasions in Question 29, “Do you have any other concerns regarding the potential use of AI, personal data, etc?”, which asked participants to voice their own concerns. The three responses in question are as follows:

A. Huge potential for job losses as AI takes on more roles such as driving.
B. The fear of jobs being replaced by AI.
C. Job losses, eventual dependency on these systems with no contingency for when they fail.

Figure 8: Question 8 Responses

In order to gain a thorough understanding of the public’s perception of an everyday artificial intelligence, a question about digital personal assistants was included within the survey. It generated a mostly positive result, with only 1.6% of respondents not knowing what they were. This could be down to the users of a device not knowing what is installed on it, or simply not making the connection between Siri or Cortana and a form of AI. As discussed in section 2.3.6.5, it is generally argued that mobile assistants are not true forms of AI and only exhibit features of true AI, such as Natural Language Processing and


Speech Recognition. Franklin and Graesser (1996) define an autonomous agent as “a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future” (Franklin and Graesser, 1996). Understandably, Siri, Cortana and Google Assistant could fit under that description. Whether these smart assistants are true forms of AI is a matter of personal opinion, but there is no denying that they display features of the technology; whether the programs ‘think’ for themselves is another argument.

Major General Chapman’s answer to “What’s your understanding of artificial intelligence and other autonomous technologies?” echoes the view that AI and autonomous technologies are completely different.

“I would say that autonomous technologies and AI are two different things because of course the Turing test is yet to be passed and the Loebner Prize is yet to be awarded so I don’t think I’ve encountered AI as you might describe it in that sort of sense (...)”

(Chapman, 2017)

Figure 9: Question 9 Responses


Figure 9 shows that only just over 50% of the respondents actually use the assistants on their phones/devices. Research conducted by Creative Strategies, Inc. showed that 21% of their 500-consumer panel had never used Siri, 34% had never used OK Google and 72% had never used Cortana (Milansei, 2016). Those statistics are interesting because, as Figure 8 shows, although the majority (69.8%) see the assistants as AI, only 57% actually use them. The Creative Strategies research returns higher non-usage figures, but that survey used a panel of 500 consumers.

Figure 10: Question 10 Responses

Chatbots are nothing new and there are many on the internet to use. A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) is an award-winning free natural language artificial intelligence chat robot created in Java back in 1998 (Alicebot.org, 2017). Another, more recent bot is Mitsuku, which won Most Humanlike Chat Bot in 2013 and again in 2016 (Mitsuku.com, 2016). Skype also allows for chatbots within conversations, for example a CaptionBot that can understand image content (Support.skype.com, 2017). These bots’ engines employ AIML (Artificial Intelligence Markup Language) to form responses to the questions and inputs given to the system (Alicebot.org, 2017). Chatbots are now being used on websites such as Sky, Amazon and eBay to help customers before they speak to a human. Much like the digital assistants discussed in 2.3.6, automated chatbots rely on a set of rules to interpret what the customer is asking and then decide how to provide the solution. From the responses in Figure 10, it is clear that the majority of people

surveyed (55.6%) agreed to some degree that these chatbots are a form of AI. Once again, some chatbots are genuine forms of artificial intelligence, but customer service website chatbots are not necessarily among them.
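To make the rule-based behaviour described above concrete, the sketch below shows one simple way a customer service bot could map inputs to canned responses. The patterns and replies are invented for illustration and are far cruder than a full AIML rule base.

import re

# Hypothetical pattern-to-template rules in the spirit of AIML: the
# bot returns the response of the first pattern that matches.
RULES = [
    (re.compile(r"\b(refund|money back)\b", re.I),
     "I can help with refunds. Do you have your order number?"),
    (re.compile(r"\b(delivery|shipping)\b", re.I),
     "Deliveries usually take 3-5 working days."),
    (re.compile(r"\b(human|agent)\b", re.I),
     "Connecting you to a customer service agent now."),
]

def reply(message: str) -> str:
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return "Sorry, I didn't understand. Could you rephrase that?"

print(reply("How long does delivery take?"))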

Figure 11: Question 11 Responses

In the UK, variable speed limits are used to ease congestion on busy motorways; these have since been named ‘Smart Motorways’ (Gov.uk, 2016).

“A smart motorway uses technology to actively manage the flow of traffic. The technology is controlled from a regional traffic control centre. The control centres monitor traffic carefully and can activate and change signs and speed limits. This helps keep the traffic flowing freely.”

(Gov.uk, 2016)

Taking the above statement into account, although it is ambiguous, it could be assumed that the final decision to lower a speed limit lies with a human. However, another statement, “Technology is used to monitor congestion levels and change the speed limit when needed to smooth the traffic flow” (Highways.gov.uk, 2017), explains that technology both monitors traffic and changes the speed limit. This shows autonomous control that could be classified under

an expert system as outlined in section 2.3.6.7. The results shown in Figure 11 are inconclusive as to whether the participants agree on the question. The largest segments were around the 27% mark, spread across somewhat agree, neither agree nor disagree and somewhat disagree. Because the question was answered broadly in the middle ground, it is safe to assume that the participants are not sure whether variable speed limiting software is AI or not.
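To illustrate the kind of rule-based control this implies, the sketch below encodes a variable speed limit decision as a chain of if-then rules, the classic shape of an expert system. The thresholds and inputs are invented for illustration and do not reflect how the real regional traffic control centres operate.

# A minimal expert-system-style sketch for a smart motorway;
# sensor fields and thresholds are purely illustrative assumptions.
def advised_speed_limit(vehicles_per_minute: int, average_speed_mph: float) -> int:
    """Return a variable speed limit in mph from simple traffic rules."""
    if vehicles_per_minute > 120 or average_speed_mph < 30:
        return 40   # heavy congestion: slow traffic to smooth the flow
    if vehicles_per_minute > 80 or average_speed_mph < 50:
        return 50   # building congestion
    if vehicles_per_minute > 50:
        return 60   # moderate traffic
    return 70       # free-flowing: national speed limit

print(advised_speed_limit(vehicles_per_minute=95, average_speed_mph=42))  # -> 50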

Figure 12: Question 12 Responses

Smart home technology is mostly seen within films and television, most notably J.A.R.V.I.S., who can be seen in the Marvel Cinematic Universe. The premise is an artificial intelligence that can control your home and interact with the people within it. In the real world, devices such as Amazon’s Echo and Echo Dot and Google’s Google Home come close to this. Based on a verbal command, the device can perform a number of different actions, for example controlling music, adjusting heating levels, finding recipes and even ordering food. Figure 12 shows a varied response: the mode answer was Somewhat Agree with 22 out of the 63 responses, but with no majority in any one segment the result is also inconclusive.


Figure 13: Question 13 Responses

As mentioned in section 2.3.6.1, gaming is an area of AI that is constantly being updated and researched. It would seem that 44.4% of all respondents agree that non-player characters (NPCs) are forms of AI. This would be accurate, as the literature suggests that the system the game runs on can make its own decisions based on the gamer’s inputs and decisions within the game.
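One common way such NPC decision making is implemented is as a finite state machine, in which the character switches between behaviours in response to the player’s actions. The sketch below is a minimal hypothetical example; the states and thresholds are illustrative only.

# A tiny finite-state-machine sketch of NPC behaviour.
def next_state(state: str, player_distance: float, npc_health: int) -> str:
    if npc_health < 20:
        return "flee"                   # self-preservation overrides all
    if state == "patrol" and player_distance < 10:
        return "attack"                 # player spotted nearby
    if state == "attack" and player_distance >= 10:
        return "patrol"                 # player escaped, resume patrol
    return state

state = "patrol"
for distance, health in [(30, 100), (8, 100), (8, 15)]:
    state = next_state(state, distance, health)
    print(state)                        # patrol, then attack, then flee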


Figure 14: Question 15 Responses

Relating to section 2.5.1, it was important to gauge the number of respondents who had heard of the Investigatory Powers Act 2016. The survey concluded that 63.5% are not aware of it. This aligns with an answer Major General Chapman gave in the interview.

How do you feel the Investigatory Powers Act 2016 is/was received by those who were aware of it?

“Only human rights lawyers and those who are really interested in the Investigatory Powers Act know anything about it (...)”

(Chapman, 2017)

This answer may be because the Regulation of Investigatory Powers Act has been around since the year 2000, and the 2016 reformation of the act could have been introduced without any publicity. In terms of surveillance, this new act can potentially track every UK citizen through their devices and use that information if there is suspected criminal activity.


It was also important to gather Major General Chapman’s view on public perceptions:

What are your thoughts about the wider public discussions regarding AI, big data, machine learning and algorithmic accountability?

“The public awareness of all these things is really really basic and I don’t think they really know that much about it, policy probably has yet to catch up with technology developments and its certainly not the other way around and one of the things in a sense is that there is no policy discussions any more about these things in a way that there should be (…)”

(Chapman, 2017)

It is important to note that much legislation is passed without the public knowing, and it is rarely discussed in a public forum; as Major General Chapman said, “policy probably has yet to catch up with technology developments (…)” (Chapman, 2017). In addition, Major General Chapman was asked:

Do you feel that the public should be made aware of said technologies which are to become more integrated with our lives rather than left to their own devices to discover it themselves?

“Well there are two aspects from the public perception, they have given up their right to privacy without knowing it, and it is there, they will become interested in it when , you know the internet of things becomes a big thing and the compromise that their data then becomes a big thing so I think the millennial generation are sort of interested in their privacy but don’t really understand that a mobile phone is not a mobile phone, a mobile phone is an intrusive surveillance device and it is to make phone calls but they seem to be happy with that and we’ll discuss this when we get onto


son of RIPA, they seem to be happy with that because they are trading privacy for convenience”

(Chapman, 2017)

Wang (2013) also discusses trading privacy for convenience, with examples such as using fingerprints to pay, talking to devices for recommendations and using Facebook to tell followers where the user is. All of this data is slowly consumed by the companies whose services the public uses.


Figure 15: Question 16 Responses

With DARPA creating various unmanned technologies, as explained in section 2.4, it is interesting to learn that 92% of the 63 respondents were aware of unmanned activity within the military.

Figure 16: Question 17 Responses

Matching Figure 15’s conclusive statistic with Figure 16’s, it seems that the majority are comfortable with unmanned aircraft being used. This question was left vague on purpose; it does not mention armed aircraft. This result may be a false

representation of the question asked, as Major General Chapman responded to a question about the public’s perception of unmanned aircraft as follows:

“I don’t think they’ve really ever considered it outside the arena of Amazon or the equivalent of Amazon delivering their order with timeliness and agility and again I think that plays in a sense to what we were saying earlier on about people liking convenient solutions, and if they see a convenient solution to speed of order, speed of delivery then they’ll think that that’s a positive thing, and I think they are cognitively getting used to that”

(Chapman, 2017)

That statement may be true for the broader public, but based on the demographic that responded (seen in section 4.2), the majority of responses came from the ICT/Computing sector. This leads to the assumption that these respondents are aware of the technology and have a sound technical knowledge with which to agree with the use of unmanned aircraft.


"The UK is being watched by a network of 1.85m CCTV cameras" (Lewis, 2011)

The above statistic was reported by The Guardian in 2011. It states that “The UK is being watched by a network of 1.85m CCTV cameras, the vast majority of which are run by private companies, according to the only large-scale audit of surveillance cameras ever conducted” (Lewis, 2011). Survey participants were asked “How aware were you of the above statistic?”

Figure 17: Question 18 Responses

This result also ties into the UK Government’s use of the Investigatory Powers Act 2016. It seems that over half of the respondents are aware that CCTV imaging takes place, but perhaps not to the extreme of one CCTV camera for every 32 people in the UK (Lewis, 2011). Such coverage would make it easy for the DSTL (Defence Science and Technology Laboratory) to create a system to track movement across the UK. Hypothetical, yes, but not science fiction.


Figure 18: Question 19 Responses

With 1.85m CCTV cameras across the UK (as of 2011), the general public largely have to accept their use, as they cannot avoid them. The largest segment of respondents is comfortable with CCTV images being used within a CCTV system. This question is open to interpretation, however, as it does not state what the image is used for.


Figure 19: Question 20 Responses

When a real purpose is included in the question, the responses become cloudier and less decisive than for Question 19. Perhaps this is down to the length of time the images would be held for; the question is, however, somewhat vague about the true purpose.

Figure 20: Question 21 Responses

With a new purpose for the imaging included in the question, it seems that more of the public agree with the images being used. All three questions (19, 20 & 21) convey different and inconclusive results and generated mixed feelings on the topic of CCTV imaging being used for various applications. A study carried out by Synectics concluded that 86% of those surveyed supported the use of CCTV, the main reason being that it helps prevent crime (CCTV in the UK, 2015). The same survey states that 80% of people in the UK feel they do not receive the right amount of information regarding CCTV (CCTV in the UK, 2015), which contrasts with Figure 17, as over half of the respondents were privy to the statistic on CCTV cameras in the UK. The survey also indicated that 76% of the public stated that CCTV in public should be used to prevent crime and anti-social behaviour and 65% said it should be used as a post-incident tool (CCTV in the UK, 2015), which would disagree with Figure 20.


Figure 21: Question 23 Responses

The response to having human interaction removed from flying is rather conclusive: 43 responses fell into a ‘not comfortable’ category. This could be because there would be no one to blame in an accident, and people generally prefer a human element of control. Perhaps it also has something to do with the potential for a weaponised drone to go rogue.

“They can be hacked, and of course any second one really is any malignant force that might have access to weaponising a drone (…)”

(Chapman, 2017)

With DARPA developing many unmanned technologies, such as Autonomous High-Altitude Refuelling (section 2.4.1.2), it is only a matter of time before the removal of human interaction becomes the next step in technological evolution.


Figure 22: Question 24 Responses

Similar to Question 23, the public are not enthusiastic about the human element being removed from munition strikes.

Figure 23: Question 25 Responses


What is interesting in Figure 23 is that no respondent singled out the AI alone for not checking the intelligence. The largest segment of responses included all of the parties given as options.

Major General Chapman was posed with a similar question.

Who would be held accountable if a drone strike was used with false intelligence?

“This is where you get into, there is interpretation of intelligence but targeting is a different thing and is based on using engagement criteria which you must follow just war and legality, so the main things on that, and I have given you a handout for future reference, is it proportional, is it necessary, is it legitimate, has combat immunity been taken into account, i.e. collateral damage so targeting is actually a precise art and can never be a science which plays into weaponeering and weaponeering is to do with to minimise collateral damage and those who might be in the CEP, combat engagement point, is what weapon do you use, what angle of attack to you use because all these things precisely play into the terminal effect at the target end, so mistakes can be made and again for future reference I have given you CENTCOM, central command in America which essentially runs op inherent resolve (…)”

(Chapman, 2017)

Major General Chapman makes reference to U.S. Central Command, which “directs and enables military operations and activities with allies and partners to increase regional security and stability in support of enduring U.S. interests” (Centcom.mil, 2017). The handout to which Major General Chapman refers outlines incidents where civilians were killed in coalition air strikes; the example he mentioned can be found in the appendix, section 6.8.1.


The United Nations has also recognised the Campaign to Stop Killer Robots (Un.org, 2017), started by Human Rights Watch. A report published by Human Rights Watch outlines three weapon types:

 Human-in-the-Loop Weapons: Robots that can select targets and deliver force only with a human command;
 Human-on-the-Loop Weapons: Robots that can select targets and deliver force under the oversight of a human operator who can override the robots’ actions; and
 Human-out-of-the-Loop Weapons: Robots that are capable of selecting targets and delivering force without any human input or interaction. (Human Rights Watch, 2012)

The types of strikes discussed in the above questions relate to Human-out-of-the-Loop Weapons.
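Purely to visualise where the human sits in each category, the taxonomy can be expressed as a simple control-flow check, as in the Python sketch below; the names and inputs are illustrative assumptions and do not describe any real weapon system.

from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "in"     # force only on a human command
    HUMAN_ON_THE_LOOP = "on"     # system acts, human may override
    HUMAN_OUT_OF_LOOP = "out"    # system acts with no human input

def may_engage(mode: Oversight, human_command: bool, human_override: bool) -> bool:
    if mode is Oversight.HUMAN_IN_THE_LOOP:
        return human_command         # nothing happens without a command
    if mode is Oversight.HUMAN_ON_THE_LOOP:
        return not human_override    # proceeds unless a human vetoes
    return True                      # out of the loop: no human gate

# An on-the-loop system is stopped by a human veto.
print(may_engage(Oversight.HUMAN_ON_THE_LOOP, human_command=False, human_override=True))  # False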

Figure 24: Question 26 Responses


Once again, not a single person singled out the AI as being solely responsible for incorrectly alerting forces. This suggests that regardless of how advanced a system may be, an element of human responsibility would always be included within the process. Major General Chapman was also asked a similar question regarding blame.

When would the blame move from this hypothetical Artificial Intelligence to the User or Controller?

“It is too far into the future to even contemplate that as a question in my view, the reason I say that, its having an understanding of targeting and how targeting works and the decision making on go and no go criteria which again comes back to the ethics of war and how you actually go through the decision making of that and after this I will show you a quick page of a book about targeting which may be of use on this regard I think.”

(Chapman, 2017)

The book Major General Chapman mentioned is a handwritten source of information that he himself created. It is a complex compilation of targeting criteria that must be met before a final decision can be made.

Vellanki (2016) states that online technology platforms should be held accountable for fake news, likely because items of information can go viral and spread quickly around the globe. If a digital system were created to alert forces, it would likely ingest its knowledge from the internet, so fake or false news is a real threat.


Figure 25: Question 27 Responses

When respondents were asked about the Defence Science and Technology Laboratory, also informally referred to as the British DARPA, it became clear that over 70% are not aware of it. This is interesting, as the DSTL has had up to £100,000 made available to fund research into how automation and machine intelligence can analyse data to enhance decision making in the defence and security sectors (Gov.uk, 2016). This money is being awarded to the best proposal for a project within one of the following areas:

 Machine Learning
 Artificial Intelligence
 Machine Intelligence
 Big Data
 Automation
 Predictive Analytics
 Automated planning
 Autonomous intelligence systems
 Cognitive science

(Gov.uk, 2016)


With the amount of money being invested in these technologies, it is a surprise that so few people have heard of the organisation. What is also interesting is that the predictive analytics area ties into Major General Chapman’s role:

What role does technology play in your job and career?

“At the moment I really am a commentator on patterns, trends and developments that impact on security. In my career in counter terrorism and in the counter terrorism world of course it was always called 2’s a pattern 3’s a trend and its picking up those trends increasingly in my final few years from data and meta data which leads us to finding bad guys”

(Chapman, 2017)

It is also interesting that the DSTL is challenging industry to design autonomous resupply systems for front-line troops (Gov.uk, 2017). This department of the Ministry of Defence is clearly forward thinking, and it is starting to look at the future of AI within the military.

4.4 Conclusion of Results

The results have shown that the public’s perception of Artificial Intelligence is relatively broad. The majority of answers came from an ICT/Computing background, so it could be argued that the answers come from a knowledgeable group of respondents. It is also clear that the public are not aware of the UK’s military research division, or of how large a part the DSTL will play within the M.O.D. in the coming decade. The results also show that the public are not at all comfortable with the prospect of a completely automated military system that could take lives in a war environment.


5.0 Conclusion

5.1 Introduction

This dissertation had an overarching aim of understanding the public’s perception of the militarisation of Artificial Intelligence and evaluating existing technologies that could potentially be weaponised. The project also needed to accomplish the objectives set in section 1.3, which included reviewing literature surrounding AI within and outside the military and researching the advantages and disadvantages of AI within the military. Research questions were also declared in section 1.4. The following conclusion will discuss whether the objectives were met and the research questions answered.

5.2 Objectives

 Review the literature and other relevant literature surrounding Artificial Intelligence within and outside the military

The academic resources surrounding AI outside of the military are vast. It is a subject that has been around since the 1950’s, with Turing proposing that machines ‘could’ think and answer questions based on their knowledge, and ‘think’ like a human (Turing, 1950); the theory of AI, however, precedes that. This theory from Turing has since advanced to machines and technologies being created to act like a human. With the areas of Artificial Intelligence (section 2.3.6) clearly defined, it seems that the majority of AIs can be classified and categorised. There are, however, some grey areas between certain forms of technology, most notably digital assistants such as Siri, Cortana and Google Assistant. There is also major development within DARPA regarding unmanned aircraft technologies that have the potential to fly unmanned for hours and possibly even days at a time. Artificial Intelligence as a core subject is continuing to develop and grow at an exponential rate and will only become more advanced and integrated within organisations and societies.

 Research Artificial Intelligence within Popular Culture

A comprehensive study of popular culture led to the discovery of films based on artificial intelligences. It also led to the discovery of books and music that make heavy reference to


AI. Some film references included The Terminator’s Skynet, I, Robot, HAL 9000 and ARIIA. All of the film AIs seemed to have a malicious intent to harm humans. The study also led to the conclusion that Isaac Asimov’s I, Robot (Asimov, 2013) changed perceptions of AI; Slocombe described certain scenes in the film as “indicative of the core ethical issues concerned with AI research: it denigrates AI as not being “moral” but merely a pattern of encoded behaviours” (Slocombe, 2016). AI has also become a strong subject within music. Muse explored the modern warfare of drone technologies and the various fallouts of the technology in their 2015 album, Drones. The single ‘Reapers’ “takes the war for control of your mind right to the frontline of the war against terror” (Haynes, 2015).

With Artificial Intelligence more visible and integrated into the public’s lives now than ever before, it is no surprise that designers may draw on what is seen within science fiction fantasy to create a non-science-fiction reality.

 Conduct a survey to collect the public’s understanding of Artificial Intelligence and their feelings of Artificial Intelligence being used within the Military

In section 2.1, the definition of militarisation was given as three possible meanings:

o to give a military character to
o to equip with military forces and defences
o to adapt for military use

(Merriam-webster.com, 2017)

From there on, militarisation was discussed as the adaptation for military use. The online survey yielded 63 complete responses for analysis within section 4. The survey gave major insight into the public’s knowledge and understanding of AI. As an example, Question 7 asked What is your level of knowledge/understanding of the term “Artificial Intelligence"?, to which the majority answered ‘I know some about the term “Artificial Intelligence”’; this was likely down to the 36.5% that answered from an


ICT/Computing background. It was also interesting to learn from Question 15 that 63.5% of the participants had not heard of the Investigatory Powers Act 2016.

This data, along with the 27 other questions asked, helped show that the overall understanding of AI outside of the military is competent. However, the public’s understanding of surveillance was lower than expected, and it seems that even though the public have given up their right to privacy without knowing it, they are not comfortable with their CCTV images being used within systems. An overwhelming level of discomfort was also uncovered by the survey when AI was introduced into military applications such as drone strikes and go/no-go situations.

 Review the public’s perception of the Militarisation of Artificial Intelligence.

The online survey suggests that the public are not comfortable with an AI system being in control of military operations without a human also having an input. This ties in with the Human-in-the-Loop Weapons discussed in section 2.6 (Human Rights Watch, 2012). Understandably, a lack of control is a worry to many people, and that seems to be why there is discomfort regarding AI being in control of life-or-death situations.

Full results can be found in the appendix, section 6.11.

5.3 Research Questions

 What are the public’s perceptions on Artificial Intelligence being used within the military?

There seems to be a negative perception of AI being used within the military. Respondents were allowed to choose the level of AI that was in control, and not having a human in control appears to be a very substantial worry. As well as the online survey, an interview with Major General Chip Chapman was undertaken to aid in evaluating the public’s perception.


 What is the public’s understanding of the term Artificial Intelligence?

The public’s understanding of Artificial Intelligence is rather strong. Many responses came from an ICT/Computing background, so the results may be slightly biased; however, this background is arguably an advantage when answering questions about whether an AI should or could be in control of military programs.

An interesting quote to help answer this question comes from the interview with Major General Chip Chapman regarding the public awareness of surveillance.

“They [the general public] generally act out of fear (my data is going to be lost), honour (not really relevant in this case), or interest and most people act out of self-interest”

(Chapman, 2017)

This relates to the research question because, if an AI system were created to monitor activity within the UK, the general public would not be aware that they can be surveilled.

 Is there a potential for an Artificial Intelligence system within the military?

It would appear that even if technology existed that could take into account morality, the laws of engagement and the art of targeting, the public would not be comfortable with the idea of a programmable device being in control of decisions that a human would usually have to make.

Major General Chapman, who has years of experience, also had a view on the matter of a controller not agreeing with a decision made by the AI:


“Well the simple answer is then it is not AI because the man/machine interface still pertains so there isn’t there’s going to be, there will always be this override”

(Chapman, 2017)

Major General Chapman also said that there could be potential for an AI system to help identify trends, though similar technology is already being used by GCHQ and MI5/6; the system will always need a human to interpret the outcome and make the decision (Chapman, 2017).

“You still need people to interpret that [data] and make those decisions so there isn’t that autonomous decision making and neither do I think there will be in the next 5/10 years in this field really” (Chapman, 2017)

5.4 Limitations

As mentioned in section 3.2.1.3, a number of research limitations were outlined. These included that the population being surveyed is unlikely to be a fair representation of the general public of the UK. As no location demographic was collected, no conclusion could be drawn on whether the sample fairly represented the general public; since the links to the survey were shared from two personal Facebook accounts, the responses, although coming from people across the UK, are likely to have been concentrated around the locations of the account holders. There were also some ambiguous questions within the online survey, leaving respondents to decide on their own interpretation of the question. If this research were to be repeated with a longer time frame, it would be ideal for the links to be shared more widely, with a broader spread across the UK. Another technique could be asking the public on a local high street to gain a better demographic yield. The interview with Major General Chapman was incredibly useful for understanding the ethics of war and what an AI system would have to understand in order to work as an expert system to a level that a human could trust; in hindsight, however, more questions similar to those

asked within the public survey should have been put to him, as a great deal of information was gathered but it was not as directly comparable as it could have been.

5.5 Future Research

As part of future research it would be interesting to speak with the DSTL, if possible, about their stance on the Campaign to Stop Killer Robots and how it could affect their research over the next decade. As mentioned in sections 3.2.1.3 and 5.4, the limitations should be considered and eliminated where possible. Issues like the representation of the general public could be addressed by posting the web link on internet forums, though this might also bring in unusable data. Given more time to complete the research, a greater spread of the public could be reached using paid survey responses. It would also be interesting to interview computing professionals on their stance on AI within the military.

5.6 Overall Conclusion

This dissertation was an investigation into the militarisation of Artificial Intelligence, which also examined the public’s perception of AI using a survey and an interview with an interviewee with a commendable record of military service and now a strong career in counter-terrorism commentary. From the primary and secondary research outlined in sections 3.2.1 and 3.2.2 respectively, a conclusion about the technologies and perceptions of AI could be drawn. The overall consensus of the survey’s results is that the majority of those who took part are not comfortable with a computer system running missions on its own, with no human counterpart to make the final decision. There is also a general worry about autonomy slowly taking jobs away from skilled professionals. It appears that there will not be any major advancement in the use of AI within the military any time soon, in the context given in the survey, but slow progress in autonomy is being made that could revolutionise autonomous warfare in the next decade.


6.0 Appendix

6.1 Bibliography

2001: A Space Odyssey Wiki. (n.d.). HAL 9000. [online] Available at: http://2001.wikia.com/wiki/HAL_9000 [Accessed 9 Feb. 2017].

Alcock, D. (2004). Artificial Intelligence in Machine Vision. [online] Vision Online. Available at: http://www.visiononline.org/vision-resources-details.cfm/vision-resources/Artificial- Intelligence-in-Machine-Vision/content_id/1235 [Accessed 18 Mar. 2017].

Alicebot.org. (2017). A. L. I. C. E. The Arificial Linguistic Internet Computer Entity. [online] Available at: http://www.alicebot.org/about.html [Accessed 6 Apr. 2017].

ARIIA the AI project of the film EAGLE EYE. (2009). [online] Tirthosayz.blogspot.co.uk. Available at: http://tirthosayz.blogspot.co.uk/2009/10/ariia-ai-project-of-film-eagle- eye.html [Accessed 20 Feb. 2017].

Arkin, R. (1998). Behavior-based robotics. 1st ed. Cambridge, Mass.: MIT Press.

artificial intelligence. (n.d.). In: English Oxford Living Dictionaries, 1st ed. [online] Available at: https://en.oxforddictionaries.com/definition/artificial_intelligence [Accessed 30 Dec. 2016].

Asimov, I. (2013). I, Robot. 1st ed. London: HarperVoyager.

Asthana, A. (2016). Revealed: British councils used Ripa to secretly spy on public. The Guardian. [online] Available at: https://www.theguardian.com/world/2016/dec/25/british- councils-used-investigatory-powers-ripa-to-secretly-spy-on-public [Accessed 2 Apr. 2017].

Azar, B. (2006). Discussing Your Findings. [online] http://www.apa.org. Available at: http://www.apa.org/gradpsych/2006/01/findings.aspx [Accessed 28 Mar. 2017].

Banning-Lover, R. (2014). How to make infographics: a beginner’s guide to data visualisation. [online] The Guardian. Available at: https://www.theguardian.com/global- development-professionals-network/2014/aug/28/interactive-infographics-development- data [Accessed 30 Mar. 2017].

BBC News. (2012). Millions in UK 'lack basic online skills'. [online] Available at: http://www.bbc.co.uk/news/technology-20236708 [Accessed 31 Mar. 2017].

BBC, (2014). BBC Basic Online Skills May 2014 research. [online] BBC Learning, p.2. Available at: http://downloads.bbc.co.uk/aboutthebbc/insidethebbc/whatwedo/learning/audienceresea rch/basic-online-skills-nov-2014.pdf [Accessed 31 Mar. 2017].

Bellamy, M. and Menyes, C. (2015). Muse's New Album 'Drones' Narrative Explained by Matt Bellamy: "It's a Metaphor for What It Is to Lose Modern Empathy". Music Times. [online] Available at: http://www.musictimes.com/articles/33045/20150325/muse-new-album-drones-meaning-matt-bellamy-interview.htm.

Bloomberg Enterprise. (2016). Rise of artificial intelligence & machine learning. [online] Available at: https://www.bloomberg.com/enterprise/blog/rise-of-artificial-intelligence- machine-learning/ [Accessed 17 Jan. 2017].

Boole, G. (1854). An investigation of the laws of thought on which are founded the mathematical theories of logic and probabilities. 1st ed. [Place of publication not identified]: [publisher not identified].

Boston Dynamics. (n.d.). [online] Bostondynamics.com. Available at: http://www.bostondynamics.com [Accessed 20 Feb. 2017].

Broad Operational Language Technology (BOLT) Program. (n.d.). [online] Sri.com. Available at: https://www.sri.com/work/projects/broad-operational-language-technology-bolt- program [Accessed 20 Feb. 2017].

Burgess, M. (2016). What is the IP Act and how will it affect you?. [online] WIRED UK. Available at: http://www.wired.co.uk/article/ip-bill-law-details-passed [Accessed 2 Apr. 2017].

CCTV in the UK. (2015). [ebook] Synectics. Available at: https://www.synecticsuk.com/images/pdfs/whitepapers/Synectics_PS_WP.pdf [Accessed 18 Apr. 2017].

Centcom.mil. (2016). November 9: Iraq and Syria civilian casualty assessments. [online] Available at: http://www.centcom.mil/DesktopModules/ArticleCS/Print.aspx?PortalId=6&ModuleId =1231&Article=1000893 [Accessed 8 Apr. 2017].

Centcom.mil. (2017). ABOUT US. [online] Available at: http://www.centcom.mil/ABOUT-US/ [Accessed 8 Apr. 2017].

Chapman, M. (2017). Interview with Major-General Clive Chapman.

Christopherson, R. (2016). Siri update makes AI work harder for disabled users. Ability Net. [online] Available at: https://www.abilitynet.org.uk/news-blogs/siri-update-makes-ai-work- harder-disabled-users [Accessed 27 Feb. 2017].

Copeland, J. (2000). What is Artificial Intelligence?. [online] AlanTuring.net. Available at: http://www.alanturing.net/turing_archive/pages/reference%20articles/what%20is%20ai.ht ml [Accessed 9 Jan. 2017].

Currie, D. (2005). Developing and applying study skills. 1st ed. London: Chartered Institute of Personnel and Development, p.89.

Darpa.mil. (n.d.). About DARPA. [online] Available at: http://www.darpa.mil/about- us/about-darpa [Accessed 26 Jan. 2017].

Darpa.mil. (n.d.). Autonomous High-Altitude Refueling. [online] Available at: http://www.darpa.mil/about-us/timeline/autonomous-highaltitude-refueling [Accessed 26 Jan. 2017].

Darpa.mil. (n.d.). Vehicle and Dismount Exploitation Radar. [online] Available at: http://www.darpa.mil/about-us/timeline/vehicle-and-dismount-exploitation-radar [Accessed 26 Jan. 2017].

Dudovskiy, J. (2016). Positivism. [online] Research Methodology. Available at: http://research-methodology.net/research-philosophy/positivism/ [Accessed 27 Mar. 2017].

Eurofighter.com. (2008). Eurofighter Typhoon | Direct Voice Input Technology. [online] Available at: https://www.eurofighter.com/news-and-events/2008/05/direct-voice-input- technology [Accessed 18 Mar. 2017].

Forrester, (2016). ROBOTS, AI WILL REPLACE 7% OF US JOBS BY 2025. [online] Forrester. Available at: https://www.forrester.com/Robots+AI+Will+Replace+7+Of+US+Jobs+By+2025/-/E-PRE9246 [Accessed 1 Apr. 2017].

Franklin, S. and Graesser, A. (1996). Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents. 1st ed. [ebook] Memphis: Institute for Intelligent Systems University of Memphis. Available at: http://www.inf.ufrgs.br/~alvares/CMP124SMA/IsItAnAgentOrJustAProgram.pdf [Accessed 3 Apr. 2017].

Google Assistant - Your own personal Google. (n.d.). Google Assistant - Your own personal Google. [online] Available at: https://assistant.google.com/ [Accessed 2 Jan. 2017].

Gov.uk. (2016). £100,000 for Research into Automation and Machine Intelligence. [online] Available at: https://www.gov.uk/government/news/100000-for-research-into-automation- and-machine-intelligence [Accessed 8 Apr. 2017].

Gov.uk. (2016). How to drive on a smart motorway. [online] Available at: https://www.gov.uk/guidance/how-to-drive-on-a-smart-motorway [Accessed 6 Apr. 2017].

Gov.uk. (2016). Investigatory Powers Act - GOV.UK. [online] Available at: https://www.gov.uk/government/collections/investigatory-powers-bill [Accessed 2 Apr. 2017].



Gov.uk. (2017). Autonomy on the front line: supplying Armed Forces on the battlefield. [online] Available at: https://www.gov.uk/government/news/autonomy-on-the-front-line- supplying-armed-forces-on-the-battlefield [Accessed 18 Apr. 2017].

Grenville, H. (n.d.). Roll Call MAJOR-GENERAL CLIVE CHAPMAN, CB BA {CHIP}. [online] Para Data. Available at: https://paradata.org.uk/people/clive-chapman [Accessed 11 Mar. 2017].

Haddox, C. (2016). Boeing Wins DARPA Vulture II Program. [online] Boeing. Available at: http://boeing.mediaroom.com/2010-09-16-Boeing-Wins-DARPA-Vulture-II-Program [Accessed 20 Feb. 2017].

Haynes, G. (2015). Muse Interview: On Modern Warfare, The Conspiracies That Drive New Album 'Drones' And Matt Bellamy's Night At The White House - NME. [online] NME. Available at: http://www.nme.com/features/muse-interview-on-modern-warfare-the- conspiracies-that-drive-new-album-drones-and-matt-bellamys-nigh-756715 [Accessed 11 Apr. 2017].

Hofstadter, D. and Herkewtiz, W. (2014). Why Watson and Siri Are Not Real AI, [online] Available at: http://www.popularmechanics.com/science/a3278/why-watson-and-siri-are- not-real-ai-16477207/ [Accessed 21 Mar. 2017].

Human Rights Watch, (2012). Losing Humanity: The Case against Killer Robots. [online] Human Rights Watch. Available at: https://www.hrw.org/report/2012/11/19/losing- humanity/case-against-killer-robots [Accessed 8 Apr. 2017].

Human Rights Watch. (2017). Killer Robots. [online] Available at: https://www.hrw.org/topic/arms/killer-robots [Accessed 10 Apr. 2017].

IMDb. (2004). I, Robot. [online] Available at: http://www.imdb.com/title/tt0343818/synopsis [Accessed 10 Apr. 2017].

IMDb. (n.d.). Eagle Eye (2008). [online] Available at: http://www.imdb.com/title/tt1059786/?ref_=ttfc_fc_tt [Accessed 20 Feb. 2017].

IMDb. (n.d.). HAL 9000 (Character). [online] Available at: http://www.imdb.com/character/ch0002900/bio [Accessed 9 Feb. 2017].

Intelligencecommissioner.com. (2014). The Regulation of Investigatory Powers Act 2000 (RIPA). [online] Available at: http://intelligencecommissioner.com/content.asp?id=11 [Accessed 2 Apr. 2017].

Kolton, M. (2016). The Inevitable Militarization of Artificial Intelligence. [online] Cyberdefensereview.org. Available at: http://www.cyberdefensereview.org/2016/02/08/the-inevitable-militarization-of-artificial-intelligence/ [Accessed 5 Jan. 2017].

Lewinski, J. (2009). HAL’s Pals: Top 10 Evil Computers. [online] WIRED. Available at: https://www.wired.com/2009/01/top-10-evil-com/ [Accessed 20 Feb. 2017].

Lewis, P. (2011). You're being watched: there's one CCTV camera for every 32 people in UK. The Guardian. [online] Available at: https://www.theguardian.com/uk/2011/mar/02/cctv- cameras-watching-surveillance [Accessed 7 Apr. 2017].

Liberatore, S. (2016). Tesla's autopilot is involved in ANOTHER crash. Daily Mail. [online] Available at: http://www.dailymail.co.uk/sciencetech/article-3732795/Self-driving-spotlight- China-sees-Tesla-autopilot-crash.html [Accessed 2 Jan. 2017].

Lowe, C., Zemliansky, P. and Driscoll, D. (2011). Writing Spaces: Readings on Writings, Vol. 2. 2nd ed. Anderson: Parlor Press, pp.160-164.

LS3 - Legged Squad Support Systems. (n.d.). [online] Bostondynamics.com. Available at: http://www.bostondynamics.com/robot_ls3.html [Accessed 20 Feb. 2017].

Luger, G. (2008). Artificial intelligence. 6th ed. Boston: Pearson/Addison-Wesley, p.28.

McGoogan, C. (2016). What is the Investigatory Powers Bill and what does it mean for my privacy?. The Telegraph. [online] Available at: http://www.telegraph.co.uk/technology/2016/11/29/investigatory-powers-bill-does-mean-privacy/ [Accessed 2 Apr. 2017].

Martinez-Gomez, J., Fernandez-Caballero, A., Garcia-Varea, I., Rodriguez, L. and Romero- Gonzalez, C. (2014). A Taxonomy of Vision Systems for Ground Mobile Robots. International Journal of Advanced Robotic Systems, 11(7), p.111.

Merriam-webster.com. (2017). Definition of MILITARIZE. [online] Available at: https://www.merriam-webster.com/dictionary/militarize [Accessed 8 Apr. 2017].

Milansei, C. (2016). Voice Assistant Anyone? Yes please, but not in public!. [online] Creative Strategies, Inc. Available at: http://creativestrategies.com/voice-assistant-anyone-yes- please-but-not-in-public/ [Accessed 3 Apr. 2017].

MIT Technology Review. (2016). Microsoft has built a machine that’s as good as humans at recognizing speech. [online] Available at: https://www.technologyreview.com/s/602714/first-computer-to-match-humans-in- conversational-speech-recognition/ [Accessed 27 Feb. 2017].

Mitsuku.com. (2016). Mitsuku Chatbot. [online] Available at: http://www.mitsuku.com/ [Accessed 19 Apr. 2017].

Nareyek, A. (2004). AI in Computer In Games. Queue, [online] (Volume 1, Issue 10). Available at: http://queue.acm.org/detail.cfm?id=971593 [Accessed 21 Mar. 2017].


O'Keefe, R., Balci, O. and Smith, E. (1986). Validation of expert system performance. 1st ed. Blacksburg (VA): Virginia Polytechnic Institute and State University. Department of Computer Science, pp.2-3.

Bellamy, M., Howard, D. and Wolstenholme, C. (2015). Reapers. [Lyrics] Warner/Chappell Music, Inc. Available at: https://play.google.com/music/preview/Tt4o5fjb43h6n5pp6bwee3eb3xu?lyrics=1&utm_source=google&utm_medium=search&utm_campaign=lyrics&pcampaignid=kp-lyrics [Accessed 20 Feb. 2017].

Reilly, M. (2016). Beyond Video Games: New Artificial Intelligence Beats Tactical Experts in Combat Simulation. [online] University of Cincinnati. Available at: http://magazine.uc.edu/editors_picks/recent_features/alpha.html [Accessed 11 Jan. 2017].

Robothalloffame.org. (n.d.). The Robot Hall of Fame - Powered by Carnegie Mellon University. [online] Available at: http://www.robothalloffame.org/inductees/03inductees/hal.html [Accessed 9 Feb. 2017].

Russell, S. and Norvig, P. (2016). Artificial intelligence. 3rd ed. Boston: Pearson, pp.3-30, 151.

Saadia, M. (2016). "Trekonomics". [online] Business Insider. Available at: http://uk.businessinsider.com/true-star-trek-tech-predictions-2016-5?r=US&IR=T/ [Accessed 8 Feb. 2017].

Saunders, M., Lewis, P. and Thornhill, A. (2009). Research Methods for Business Students. 5th ed. Harlow: Pearson Education, 2009, pp.100-125, 246-280.

Sharma, S. (2013). Expert system. [online] Slideshare.net. Available at: https://www.slideshare.net/sagar020790/expert-system-29148507 [Accessed 18 Apr. 2017].

Slocombe, W. (2016). What science fiction tells us about our trouble with AI. [online] The Conversation. Available at: https://theconversation.com/what-science-fiction-tells-us- about-our-trouble-with-ai-58543 [Accessed 11 Apr. 2017].

Smith, E. (2011). Combining Primary and Secondary Data: Opportunities and Obstacles. [online] Www2.le.ac.uk. Available at: http://www2.le.ac.uk/offices/red/rd/research- methods-and-methodologies/intrepid-researcher/methods/2010-11/combining-primary- and-secondary-data-opportunities-and-obstacles [Accessed 11 Mar. 2017].

Smith, R. (1985). Knowledge-Based Systems. 1st ed. [ebook] Available at: http://www.reidgsmith.com/Knowledge-Based_Systems_- _Concepts_Techniques_Examples_08-May-1985.pdf [Accessed 18 Apr. 2017].

Statista. (2017). UK Facebook users by age 2016. [online] Available at: https://www.statista.com/statistics/271348/facebook-users-in-the-united-kingdom-uk-by-age/ [Accessed 23 Mar. 2017].

Support.skype.com. (2017). What Skype bots are available?. [online] Available at: https://support.skype.com/en/faq/FA34655/what-skype-bots-are-available [Accessed 19 Apr. 2017].

Terminator Wiki. (n.d.). Skynet. [online] Available at: http://terminator.wikia.com/wiki/Skynet [Accessed 8 Feb. 2017].

Turing, A. (1950). Computing Machinery and Intelligence. Mind, 59(236), pp.433-460.

Un.org. (1948). Universal Declaration of Human Rights. [online] Available at: http://www.un.org/en/universal-declaration-human-rights/ [Accessed 2 Apr. 2017].

Un.org. (2017). Background on Lethal Autonomous Weapons Systems – UNODA. [online] Available at: https://www.un.org/disarmament/geneva/ccw/background-on-lethal- autonomous-weapons-systems/ [Accessed 8 Apr. 2017].

UPI, (2011). Military contracts for visual intel system. [online] Available at: http://www.upi.com/Business_News/Security-Industry/2011/01/05/Military-contracts-for- visual-intel-system/UPI-65861294229967/ [Accessed 5 Jan. 2011].

Vellanki, M. (2016). Online Tech Platforms Must Be Held Accountable For Fake News. [online] LinkedIn. Available at: https://www.linkedin.com/pulse/online-tech-platforms- must-held-accountable-fake-news-mahesh-vellanki [Accessed 18 Apr. 2017].

Vogt, W. (2005). Dictionary of statistics and methodology. 3rd ed. London: SAGE, p.300.

Wang, R. (2013). Beware Trading Privacy for Convenience. [online] Harvard Business Review. Available at: https://hbr.org/2013/06/beware-trading-privacy-for-con [Accessed 8 Apr. 2017].

Winograd, T. (1972). Understanding Natural Language. 1st ed. Edinburgh: Univ. Press, pp.1-7.

www.tutorialspoint.com. (n.d.). Artificial Intelligence Overview. [online] Available at: https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_overview.htm [Accessed 17 Mar. 2017].


6.2 Ethics Form

PART ONE

Name of applicant: Daniel Edwards
Supervisor (if student project): Dr Simon Thorne (Professor Tom Crick)
School / Unit: Cardiff School of Management
Student number (if applicable): st20018143
Programme enrolled on (if applicable): BSc (Hons) Software Engineering
Project Title: An Investigation into the Militarisation of Artificial Intelligence
Expected start date of data collection: 09/01/2016
Approximate duration of data collection: 6 Weeks
Funding Body (if applicable): N/A
Other researcher(s) working on the project: N/A

Will the study involve NHS patients or staff? No

Will the study involve human samples and/or human cell lines? No

Does your project fall entirely within one of the following categories:

Paper based, involving only documents in the public domain: No
Laboratory based, not involving human participants or human samples: No
Practice based not involving human participants (eg curatorial, practice audit): No
Compulsory projects in professional practice (eg Initial Teacher Education): No
A project for which external approval has been obtained (e.g., NHS): No

If you have answered YES to any of these questions, expand on your answer in the non-technical summary. No further information regarding your project is required.


If you have answered NO to all of these questions, you must complete Part 2 of this form

In no more than 150 words, give a non-technical summary of the project:

My project will be an investigation and analysis of Artificial Intelligence within the military. I will also compare and contrast people's views and concerns about the militarisation of Artificial Intelligence versus what is currently being developed within the military. I also want to explore drone technologies and their application within the military, and to explore 'What If' scenarios. I will interview Major-General Clive 'Chip' Chapman to provide military insight for my project.

DECLARATION: I confirm that this project conforms with the Cardiff Met Research Governance Framework

I confirm that I will abide by the Cardiff Met requirements regarding confidentiality and anonymity when conducting this project.

STUDENTS: I confirm that I will not disclose any information about this project without the prior approval of my supervisor.

Signature of the applicant: D.Edwards
Date: 16/11/2016

FOR STUDENT PROJECTS ONLY

Name of supervisor: Professor Tom Crick
Date: 16/11/2016

Signature of supervisor: Professor Tom Crick

Research Ethics Committee use only


Decision reached:
Project approved
Project approved in principle
Decision deferred
Project not approved
Project rejected

Project reference number:

Name:
Date:

Signature:

Details of any conditions upon which approval is dependent:

PART TWO

A RESEARCH DESIGN

A1 Will you be using an approved protocol in your project? No
A2 If yes, please state the name and code of the approved protocol to be used1: N/A
A3 Describe the research design to be used in your project:

I will be using a mix of both quantitative and qualitative research. I will conduct interviews with ex-armed-forces members and also use surveys to collect data. An interpretative research philosophy will be adopted, involving an inductive research strategy for gathering qualitative data. I will be interviewing a Major-General who is an ex Parachute Regiment (2 PARA) soldier. I will conduct this interview either in person or over a Skype/phone call. My survey will be created using an online tool, posted to my social media and distributed across my family connections to get a wide age range of approximately 50 participants. I am hoping that by using this technique the survey may be snowballed onto the participants' own social media. I will then compare and contrast these results to derive a view of AI within the military from the public's point of view using thematic analysis. I will keep data confidential, as the survey will not take in the participant's name. I will also have the reports of the survey downloaded to a password-protected computer that only I have access to. All participants will remain anonymous, and any data provided will not be traceable back to specific people.

A4 Will the project involve deceptive or covert research? No
A5 If yes, give a rationale for the use of deceptive or covert research: N/A
A6 Will the project have security sensitive implications? No
A7 If yes, please explain what they are and the measures that are proposed to address them: N/A

1 An Approved Protocol is one which has been approved by Cardiff Met to be used under supervision of designated members of staff; a list of approved protocols can be found on the Cardiff Met website.

B PREVIOUS EXPERIENCE

B1 What previous experience of research involving human participants relevant to this project do you have? N/A
B2 (Student project only) What previous experience of research involving human participants relevant to this project does your supervisor have? Professor Crick has over 10 years of experience of research involving human participants.

C POTENTIAL RISKS

C1 What potential risks do you foresee?
1. Missing the deadline
2. File corruption
3. Participants withdrawing

C2 How will you deal with the potential risks?
1. In order to make sure I submit by the deadline, I will create a research plan with my own deadlines to adhere to.
2. I will create regular backups of the various files I will be creating, so that I have a fallback if my main working document becomes corrupt.
3. As well as making the data I collect anonymous if the participant wishes, participants may also wish to withdraw, in which case I will work around this by using the other data I have collected. I will also destroy any data a withdrawn participant has provided.

When submitting your application you MUST attach a copy of the following:
- All information sheets
- Consent/assent form(s)

An exemplar information sheet and participant consent form are available from the Research section of the Cardiff Met website.


6.3 Devolved Ethics Approval Application Summary

Student Name: Daniel Edwards Student Number: 20018143

Module Name: I S Project Module Number: BCO6010

Prog Name: BSc (Hons) Software Engineering Supervisor Name: Simon Thorne/Tom Crick

To be completed by student and supervisor before submission to Ethics Approval Panel (Student Signature: Yes or N/A; Supervisor Signature: Yes or N/A):

Application for ethics approval: Student Yes; Supervisor Yes
Participant information sheet: Student N/A; Supervisor N/A
Participant consent form: Student Yes; Supervisor Yes
Pilot interview/s: Student Yes; Supervisor Yes
Pilot questionnaire/s: Student Yes; Supervisor Yes
Letter/s to participating organisation/s: Student N/A; Supervisor N/A
Confirmation of interviewee participation: Student Yes; Supervisor Yes

First Submission [ ] Resubmission [X]

Date: 08/12/2016

For use by the devolved ethics approval panel:

Panel Members (Name / Signature):

Module Leader, Chair: Dr Hilary Berger
Supervisor: Dr Simon Thorne/Tom Crick
Ethics Committee Rep: Dr Jason Williams
Date: 08/12/2016
Date of Reassessment:

Outcome:

Project Approved [X]
Chair's Action [ ]
Application Not Approved [ ]

Reference number issued: 2016D0316

Comments for projects not fully approved: None

The original to be retained by the module leader and a copy given to the student


6.4 Participant Information Sheet

PARTICIPANT INFORMATION SHEET

An Investigation into the Militarisation of Artificial Intelligence

Project summary

The purpose of this research project is to establish a public and professional perspective of the use and application of Artificial Intelligence technologies within the military. Your participation will enable the collection of data which will form part of a study being undertaken at Cardiff Metropolitan University.

Why have you been asked to participate?

You have been asked to participate because you fit the demographic profile of the population being studied (aged between 18 and 80) and because of your expertise in the area being studied. Your participation is entirely voluntary and you may withdraw at any time.

Project risks

The research involves the completion of a semi-structured interview. We are not seeking to collect any sensitive/personal data on you; this study is only concerned with your views and opinions based on the questions asked within the interview. We do not anticipate any significant risks associated with this study. However, if you do feel that any of the questions are inappropriate then you are free to refuse to answer a question or you are able to stop at any time. Furthermore, you can change your mind and withdraw from the study at any stage, with all of your responses being removed from the study.

How we protect your privacy

All the information you provide during this interview will be held in confidence and not passed to any third parties. Your personal details will be kept in a secure location by the research team. When we have finished the study and analysed all the information, the documentation used to gather the raw data will be destroyed except your signed consent form which will be held securely for five years. The recordings of the interview will also be securely held during the study and deleted/destroyed after five years.

YOU WILL BE OFFERED A COPY OF THIS INFORMATION SHEET TO KEEP

If you require any further information about this project then please contact:

Daniel Edwards
Cardiff Metropolitan University
Email: [email protected]
[email protected] (Supervisor)


6.5 Participant Consent Form

Cardiff Metropolitan University Ethics Committee

PARTICIPANT CONSENT FORM

Cardiff Metropolitan University Ethics Reference Number: 2016D0316
Participant Name: Major General Clive Chapman
Title of Project: An Investigation into the Militarisation of Artificial Intelligence
Name of Researcher: Daniel Edwards
______

Participant to complete this section: Please initial each box.

1. I confirm that I have read and understand the information sheet for the above study. I have had the opportunity to consider the information, ask questions and have had these answered satisfactorily. [ ]

2. I understand that my participation is voluntary and that I am free to withdraw at any time, without giving any reason. [ ]

3. I agree to take part in the above study. [ ]

4. I agree to the interview being recorded [ ]

5. I agree to my quotes being attributed to me: Yes [ ] No [ ]

6. I agree to my organisation being named in all publications: Yes [ ] No [ ]

______Signature of Participant Date 90

______Name of person taking consent Date

______Signature of person taking consent


6.6 Semi-Structured Interview Questions

Interview Questions
Cardiff Metropolitan University Ethics Reference Number: 2016D0316
Questions may be added/altered at time of interview.

About You
1. Please can you give me a brief history of your career?
   a. What is your current role?
2. What role does technology play in your job and career?
   a. What are your specific interests/expertise in this area?
3. What is your understanding of Artificial Intelligence and other autonomous technologies?
   a. Have you used or encountered AI in your recent roles/career?
   b. Can you tell me of any non-military AI systems you are aware of?
4. What are your thoughts about the wider public discussions regarding AI, big data, machine learning, algorithmic accountability, etc.?
5. How do you feel the Investigatory Powers Act 2016 is/was received by those who are aware of it?
   a. Has this had a positive effect within your role?
   b. Do you think the general public are digitally competent enough to make decisions about personal data and their use of technology?
   c. Do you think this will shift over the next few years?

AI in the Military
6. Are there any publicly known AI systems within the military you can discuss?
   a. Did/do you personally use any of these?
   b. Do you know of anything being developed currently you can discuss?
7. What do you think the benefits of using AI within the military are in a professional context?
8. What would you say the concerns are of using AI?
9. Do you think it can have an adverse effect on operations?
10. What happens if an AI makes a decision that a controlling operator does not agree with?
    a. What do you think the public would make of this?
    b. Do you think there would be protocols in place if something were to happen?
    c. Oversight/governance: is this still the theatre commander?
11. When does the blame move from the AI to the user/controller?


Drones
12. Can you tell me the benefits of unmanned aircraft from a professional point of view?
    a. And the negatives?
    b. Do you have any personal thoughts on the matter?
13. What do you think the public's perceptions of unmanned aircraft are?
14. Who would be held accountable if a drone strike was carried out on false intelligence?
15. Imagine the previous scenario was controlled purely by AI, with the pilot removed completely and the AI choosing the strike based on intelligence. Please answer that again.
16. Do you think that the 'fake news' that has been in the news recently would have an adverse effect on AI within the armed forces?

DSTL
17. DSTL (the Defence Science and Technology Laboratory) have created a hypothetical system that uses live CCTV images to identify possible targets and, if intelligence suggests so, the system can intervene as it sees fit. How would you personally feel if this was purely automated?
    a. And from a professional opinion?

Ethics
18. Can you think of any ethical issues you think the military may face if an AI was created to run in-theatre operations?
19. How would you feel if automated military systems were brought in that took away human command to a certain degree?
    a. Within piloting missions?
20. Do you think this is likely to happen?
21. What about working with other allied nations, e.g. the USA: their technologies and their potential impact in theatre?
22. Do you have any more comments to make on the subject?


6.7 Email to Major-General Chapman

Dear Mr Chapman,

I am currently concluding my studies at Cardiff Metropolitan University. As part of my BSc (Hons) Software Engineering I am required to carry out a dissertation. The title of that dissertation is: An Investigation into the Militarisation of Artificial Intelligence.

I am writing to ask if I could take a morning of your time to interview you regarding some of the subjects that will be discussed within the project. Your participation is completely voluntary and you are free to withdraw at any time, without giving any reason.

Your service within the UK Armed Forces and current position as a security analyst and media commentator would make you the perfect candidate for my research.

I look forward to your correspondence.

Yours sincerely,
Daniel Edwards
[email protected]

Supervisor: Dr Simon Thorne [email protected]


6.8 Signed Participant Consent Form


6.9 Interview with Major-General Clive Chapman

1. Please can you give me a brief history of your career?

I was an army officer for 33 years, concentrating in the last 10 years on counter terrorism and working closely with the security services and police to make sure we have a safe and secure environment, both in the UK and overseas. In my current role I am an independent security analyst and media commentator, concentrating on counter terrorism in the Middle East and all things in between.

2. What role does technology play in your job and career?

At the moment I really am a commentator on patterns, trends and developments that impact on security. In my career in counter terrorism, and in the counter terrorism world, of course, it was always said that two's a pattern and three's a trend, and it was picking up those trends, increasingly in my final few years from data and metadata, which leads us to finding bad guys.

3. What are your specific interests and expertise in this area?

I don't think I necessarily have specific expertise in that area, except for the research that I do to commentate sagely, and you use judgement to inform the public of the right things: the things I think are important or things I think they should know about outside the poor press, really.

4. What’s your understanding of artificial intelligence and other autonomous technologies?

I would say that autonomous technologies and AI are two different things, because of course the Turing test is yet to be passed and the Loebner Prize is yet to be awarded, so I don't think I've encountered AI as you might describe it in that sort of sense. What I have encountered is a proliferation of GPS or computer or space enabled systems; for example, there are 2,700 GPS or computer enabled systems in a US Brigade combat team. So I am not really sure the last part of the question is relevant, in the sense that sensor connectivity, analytics and platform user interfaces are all used in a lot of algorithmic decision aids, i.e. buying and selling on the stock exchange and all those sorts of things, but I don't think that's the same as AI, at all.

5. Could you tell me of any non-military AI systems that you are aware of or may use?

I don't think there are; I wouldn't describe them as AI systems, because we don't allow autonomous decision making and I don't think we have reached that level in terms of the knowledge of AI systems. So I wouldn't, for example, classify the machine that has recently beaten the world's best poker players (DeepStack, I think it was called) as an AI system; it's a system that deals in analytics on a very, very narrow spectrum, and that doesn't mean it is a thinking being with any consciousness. So we need to be slightly careful about looking at things that are programmed for a very specific function and saying that they have intelligence in a sense.

6. What are your thoughts about the wider public discussions regarding AI, big data, machine learning and algorithmic accountability?

The public awareness of all these things is really, really basic and I don't think they really know that much about it. Policy probably has yet to catch up with technology developments, and it's certainly not the other way around, and one of the things in a sense is that there are no policy discussions any more about these things in the way that there should be. So, for example, I would probably say the thing here is to quote Socrates, when I think he said, if it was Socrates, "strong minds discuss ideas, average minds discuss events, weak minds discuss people", and unfortunately most minds these days and most media discuss people; we are more interested in what Kim Kardashian is wearing or not wearing than we are in ideas. So all the difficult issues in the future to do with how we get to grips with the policy environment for these things are a really, really tricky area, because as I say technology is outpacing policy in this area, but it does need to be engaged with, and it will be interesting to see which Government department does that: is it BIS, is it, I don't know. It will be an interesting one to see.


7. Do you feel that the public should be made aware of such technologies, which are to become more integrated with our lives, rather than being left to their own devices to discover them themselves?

Well, there are two aspects from the public perception: they have given up their right to privacy without knowing it, and it is there. They will become interested in it when, you know, the internet of things becomes a big thing and the compromise of their data then becomes a big thing. So I think the millennial generation are sort of interested in their privacy but don't really understand that a mobile phone is not a mobile phone; a mobile phone is an intrusive surveillance device, and it is to make phone calls, but they seem to be happy with that, and we'll discuss this when we get onto son of RIPA. They seem to be happy with that because they are trading privacy for convenience.

8. How do you feel the Investigatory Powers Act 2016 is/was received by those who were aware of it?

Only human rights lawyers and those who are really interested in the Investigatory Powers Act know anything about it. Now, I've been on TV a number of times about this, and what was incredible to me was how it just dribbled into legislation without, again, any public debate, because it received Royal Assent in November of 2016. Now, again, this really comes down to what's your view on a safe and secure environment, and in terms of the Investigatory Powers Act, do you see the role of government within this as the big brother state, the protecting state or the libertarian state? Now, I would argue that the cursor is about right, because there is a big difference between bulk collection and bulk surveillance, and what most people don't realise is that the Investigatory Powers Act is to do with directed surveillance and you need to get a warrant to do that. And again, most people don't understand that most warrants are about serious organised crime; they are not to do with terrorism, and only a very small percentage, about 1%, actually are where the nexus between terrorism and serious organised crime comes together. So about 78-80% of warrants are to do with serious organised crime, 19% with national security activity, i.e. terrorism, and just that final one where the nexus comes together. So most people don't realise that, so the cursor is appropriate for the technical nature of the threat. Of course, those on the human rights side worry about this because they don't trust governments with data, but then I wouldn't trust anyone with data, because of course in terms of the whole architecture of the internet and everything that goes with it, in terms of networks and servers and all the rest of it, it was never really designed to be contested, and of course there are flaws in both the software and the hardware, so the attack vector is far easier than the defence vector, which is why you are always going to get hacking and people losing data and things like that. Now, what I'm really saying is that the public are not really digitally competent, and that's why, again, rather like the CIA leak this week, people think that there is this proliferation of zero day capabilities which are used by the security agencies, but that is absolutely not the case; it is phishing and password harvesting that are most used by the security services, because of what I would call PDPs (pretty dumb people), and that's what they rely on. This will shift as we get into the internet of things, because as I said people will become more aware of that relationship between their sensors and the analytics, platform and all that kind of stuff. And I guess also it's how we use language in this regard, because of course Google and Facebook are now kind of the lords of the manor and they manipulate the nation and data the same way as the security services do in the use of metadata, and of course most people in the general public would probably be horrified to know that a cookie's real name is persistent identifier.

9. Do you think the IPA has had a positive effect within your role of surveillance?

I don't think it makes any difference, because it really updated RIPA, and I used to be involved in the warrantry for RIPA, so the bar for the use of the IPA is quite high. The use of technology by malevolent people is getting slightly more cunning than it was 15/20 years ago, and of course metadata gives you the data of data and the context of everything, and we kill people based on metadata these days with drone strikes, which I know we will come onto later, and did for the first time in August 2015. But of course the data, the conversations, the emails and all that sort of stuff, which is directed surveillance, gives you everything else, and I don't think, again, that most people in the civilian domain understand the difference between, for example, cypher text and plain text. They don't understand that even if you've got encrypted stuff in the middle, which you can break into in various ways, you still have to send things in plain text and it still has to be received in plain text, so you have still got your control points and all the other things that go on, like where people are on GPS and all the other data that they are producing. And I don't think people realise, again, it's the same thing as I said about mobile phones, with cars: cars are computers that happen to drive you from A to B; not yours, it's too old, but new ones like mine. So I don't think this digital awareness is out there at the moment, and I think, as I said, with the proliferation of joining the dots with the internet of things and connecting things in everyone's household, that will become more so, but I think people have traded privacy for convenience, and I think most people in terms of liberal democracy don't worry about this too much. I worry about it, not from that perspective, but from the perspective of evil regimes also being able to do this: the ability to change things by being a dissident in malevolent regimes, i.e. if you're gay in Saudi Arabia or a woman in Saudi Arabia who's striving for human rights, you are just going to get caught most of the time, even if you use Telegram or something like that. A recent example is that Telegram has been broken in Iran, and there are 19 million users in Iran; not broken, but I mean someone inserted some sort of source code, or, due to again pretty dumb people doing things, therefore unlocked it. So your ability to be a dissident or change things is really, really going to be difficult in the future, and as I said that worries me, not from the UK perspective. That's that notion of if you've got nothing to hide there's nothing wrong; well, that's fine in our country but not in other countries.

10. Do you think that the public’s awareness of this will have to shift and develop in the next couple of years?

Well, I think it only shifts if something happens to them. So if there is a proliferation of leakage of data, because it is the data which is the important thing, and all this sort of hacking stuff, and if it affects them, then they make choices about shifting to a different provider, like the Talk Talk incident, or not using LinkedIn, all these sorts of things. But again that sort of erodes quite quickly, and people look at the economics of these things, and again this goes back to Thucydides, that people only act out of 1 or 2 of 3 impulses: they generally act out of fear (my data is going to be lost), honour (not really relevant in this case), or interest, and most people act out of self-interest. So if Talk Talk are breached, but 2 years later everyone's sort of forgotten about that and Talk Talk are offering great deals, people will forget about the data breaches and go for the economic self-interest most of the time. So yeah, long term conscious choices don't really exist in that sort of sense in the public mind. Of course, companies can be so reputationally damaged by some of these things that happen that it could have an impact, but you know we've seen Google lose lots of data, we've seen LinkedIn lose lots of data, and actually most companies lose lots of data or are subject to some sort of malware, albeit ransomware or spyware, on a kind of yearly basis. It's a kind of fact of life, and I don't think people really worry too much about it, to be honest.

11. Are there any publicly known AI systems within the military that you could discuss?

Again, that comes back to your definition of AI, and as I said before, there are lots of analytical tools out there to conduct link analysis, be it through GCHQ's sort of chaining of phones or whatever, and to produce, as we said, sort of metadata based on the data and data sets, and a lot of that is done by both MI5/6 and GCHQ. So it's this link analysis of different data sets at various start points, to be confirmed or otherwise by tactical tip-offs from the public, and of course it's only that which then leads to directed surveillance under the IPA. So, as we said before, data gives you the sort of record of conversations and emails, which is why we are asking the ISPs to collect that data and keep it for 12 months in case it needs to be used, but they are designed to answer particular types of problems to do with the counter terrorist sphere, and it's to find those data analytics, to do the data mining, to find the start points of counter terrorist investigations. So that's, as I said before, two's a pattern, three's a trend, and trying to find those patterns and looking for the needle in the haystack, really.

12. Do you think there would be a benefit of using an AI, in the context of a system that could think for itself to find said trends, rather than a human making this decision?

Yes, possibly, but the answer is yes because the cost drivers in most of these high level security organisations are actually pay and pensions, so it depends on the cost drivers: if you don't have to spend on pay and pensions then there could be a whole life cost saving, depending on what the R and D costs of coming up with some sort of system are. Now, the social costs of where those people go and what they do are a different issue; so, for example, there is a worry that you would push out all this manpower by the increasing use of, firstly, technology and, secondly and ultimately, AI, but of course that's nothing new. I mean, 40% of people in the UK were employed in agriculture in 1900 and now it's 2%. Now, of course, we have got to the point at the moment where sophisticated algorithms and cryptography will give you the link analysis between things, but you still need the people to make sense of that. So, for example, if you take the case of MI5, they are increasing from 4,000 people to 5,000 people in the next 5 years; so a lot of the technical systems are getting more elaborate and the computing power to do with link analysis and all that stuff is more powerful, but you still need people to interpret that and make those decisions. So there isn't that autonomous decision making, and neither do I think there will be in the next 5/10 years in this field, really. We will come onto that, I think, when we get to the sort of drone strikes, because it's not the metadata which then automatically leads to a weapon going in and taking someone out; it just doesn't work like that.

13. What would you say the concerns are of using AI?

In a sense, the concern of using AI is really that autonomous decision making of what is intelligence, but I think it is so far over the horizon that it's just not credible in the next, well, I'll say generation. You know, it can't understand language, it can't adapt to circumstances, it's got no background knowledge of the world, and I don't think there's any system on the horizon which is going to come up with that kind of solution. Again, it's that kind of thing of being very good at beating the poker player, very good at beating the chess player, but it's answering a particular one dimensional sort of question; it's not dealing with other things outside of that sort of paradigm, really.

14. Do you think this would have an adverse effect if the system were developed?

Again, it comes back, in my perspective, to what we would call a Clausewitzian trinity. Now, the Clausewitzian trinity is sort of a military thing: when you go campaigning, you can only campaign when 3 things in a triangle are in symmetry. That is, the defence forces (army, air force, navy, whatever) are on one side of the triangle, and the government and the people are on the other two; so if one of those is out of sync or loses faith in it, it doesn't work. So it only has an adverse effect on our operations if one of those says: we are using an autonomous system which has made really bad decisions, we look bad as a country or whatever, therefore we must stop this. Now, in a sense there's a flaw in that argument to begin with, because who's stopping it, if you're stopping it and someone allowed that autonomy to begin with in the first place? And that we will get onto again with the drone strike; I just don't see that occurring, in my kind of lifetime, in this sort of sphere.

15. What would happen if an AI made a decision that a controller does not agree with?

Well, the simple answer is that then it is not AI, because the man/machine interface still pertains, so there will always be this override. And you will even see this, which we will get onto in a minute: I don't know if you can see how secret they are, you see some on YouTube, I suspect not, but there have been occasions when large drones have been taken over by another, say, entity. So, you know, you can interfere with these things, we know this, and that's when you again will have to try and override this with the controller, but the controller might lose it because it has already been taken over by another being, and therefore you need an override or self-destruct mode or something like that. This has been particularly so, as I have seen, with Israeli drones being taken over by another power for a short period of time.

16. What do you think the public would make of this if that were to get to the public's eye?

Again, I don't think they'd worry. If that particular drone is weaponised and that weapon were therefore to be deployed by another state, of course that's a different order of things, because you would not only have to take over the drone, you would have to take over the launching of the weapon and the GPS guidance and the programming which goes with the weapon, so again there should be a number of checks and balances there. I mean, we are talking at the high end here, the big drones, not the ones you buy from Currys or something like that.

17. When would the blame move from this hypothetical Artificial Intelligence to the User or Controller?


It is too far into the future to even contemplate that as a question, in my view. The reason I say that, and we will come onto it again, is that it's to do with having an understanding of targeting and how targeting works, and the decision making on go and no go criteria, which again comes back to the ethics of war and how you actually go through the decision making of that. After this I will show you a quick page of a book about targeting which may be of use in this regard, I think.

18. Could you tell me the benefits of unmanned aircraft from a professional point of view?

Persistence; that is, they can stay up for a long time, longer than what we call an air breather. That also plays into loiter ability and gives you a greater ability to collect intelligence and therefore do time sensitive targeting, TST as we call it. You can go into high risk areas without having an impact on casualties, so that is important from a political perspective; potentially you have increased reach and reduced risk; you can have real time download, as I sort of alluded to, for intelligence analysts; and a really big thing in geopolitical terms is you have a reduced need for what they call ABO, Access, Basing and Overflight, because of course you are supposed to have permission to overfly countries and to base in them; it is a big, big thing. Caveats on that, so let me give you an example. You would generally think that the UK are best buddies with the Americans, which is generally true in the grand strategic thing, and of course everyone talks about the special relationship, overplayed, but, for example, in 1973 in the Yom Kippur War, the Israelis said they'd reached the third temple to the Americans, which was Nixon, and that meant: we are going to use nuclear weapons on the Arab armies who are invading, from Syria and Egypt, those two principally. Nixon said to them: we'd rather you don't do that, so we will give you anything you want from the military inventory of America, flyover artillery and Abrams tanks, and that was under a thing called Operation Nickel Grass. The Brits wouldn't allow the Americans to land or overfly the UK, even though we had a Conservative government, because they were terrified about losing favour with the Arabs as well, and about the price of a barrel of oil escalating to something horrendous, which it actually did, but that's by the by. So this was a big thing; it takes that out of the equation if it's high level. They can be sort of deniable, although of course the CIA have lost drones over Iran, for example, when they were doing some work there 2/3 years ago.


19. You’ve touched on them there slightly, and the negatives?

The first one is they can be hacked, and the second one really is any malignant force that might have access to weaponising a drone; so there are more positives than negatives, but again it's this integrity of your system and protecting it; it's back to the CIA, I guess, in terms of what that means, not the CIA in terms of the Central Intelligence Agency. So my personal thoughts on this matter, which is 12b, are that we will probably need some sort of regulatory and control regime for future crowded civilian air space, not necessarily military air space; that already goes on in military air space, and there are 2 ways that it can be controlled. Well, you do air space management either by procedural or positive control, and of course any military drone that's being used would be used within the cycle of what we would call the air tasking order; so there is a planning cycle which goes on for 72 hours out in front, in which drones are actually included, so they are not autonomous in the way that they just fly there; it's not done like that, it is very controlled, because of deconfliction of air space, because of course if you run into something else you'll have problems. So what I really am saying is that in the control of military air space the system is not a decider, and the go/no go criteria are still with the human. Now, a regulatory regime doesn't mean that everything is tickety-boo, because of course, for example, cars are regulated in the sense that you have to have road tax and insurance and all the rest; it doesn't mean a car can't be used as a vehicle bomb or IED. So just because you have regulation doesn't mean people won't break the regulations, and for control you can then employ, like all these things, jammers in both military and civilian drone technology. So, for example, in Iraq at the moment there's a thing called Drone Defender being used which just interferes with the signal; again, at the lowest level, you know, the stuff IS or Daesh are using is all command line-of-sight stuff, and you can just interfere with the signal and Bob's your uncle. So there are a lot of technical solutions if you want to stop this sort of stuff; for low level drones, in terms of a bespoke solution from a low level terrorist over Heathrow, there are a lot of technical solutions for how you could go about interfering outside of the regulatory regime if you needed to.

20. What do you think the public's perceptions are of unmanned aircraft?

I don't think they've really ever considered it outside the arena of Amazon, or the equivalent of Amazon, delivering their order with timeliness and agility, and again I think that plays, in a sense, into what we were saying earlier on about people liking convenient solutions; if they see a convenient solution for speed of order and speed of delivery then they'll think that that's a positive thing, and I think they are cognitively getting used to that, i.e. because of the idea of driverless cars, and I think they will get used to the notion of driverless planes, because from their perspective a drone is a driverless plane, and if it's going to deliver something to them then they'll think that's a good thing. Now, that's drones, and I don't know whether people would be content to be on a driverless aircraft in the way that they might be for a driverless car in the future, because of course the thing with all these things, and this may come into the driverless car thing ultimately, is that more people are psychologically worried about flying, because of the loss of control factor, than they are in a car; they are giving something up to someone they don't really know, which is why a lot of people are scared of flying. The thought, for a lot of people, of having a driverless plane to get from A to B, Heathrow to America, probably would terrify people at the moment, but I don't think most people would consider that, and on the small/medium drone side, it's that convenience thing which is the driver, I think, for most civilians/the public.

21. Would you be able to give an example/definition of what you mean by small, medium and large drones in the military sense?

Well, a large drone in a sense would be a Predator or a Reaper; a small drone would be something which is almost hand thrown, by a small tactical unit. Part of this is to do with range, duration, loiter time, persistence and all those things: a small drone would be a hand-held thing which may only have a range of 1k, 2k, it's that thing of seeing what is over the immediate hill, and a medium one would be somewhere between the two, between that and the very long loiter times of the Reapers and Predators. I'm not that up to speed with the ones in the middle.

22. Who would be held accountable if a drone strike was used with false intelligence?


This is where you get into: there is interpretation of intelligence, but targeting is a different thing, and is based on using engagement criteria which must follow just war theory and legality. So the main things on that, and I have given you a handout for future reference, are: is it proportional, is it necessary, is it legitimate, has combat immunity been taken into account, i.e. collateral damage. So targeting is actually a precise art, and can never be a science, which plays into weaponeering, and weaponeering, to minimise collateral damage and those who might be in the CEP, the combat engagement point, is about what weapon you use and what angle of attack you use, because all these things precisely play into the terminal effect at the target end. So mistakes can be made, and again for future reference I have given you CENTCOM, Central Command in America, which essentially runs Op Inherent Resolve, for example, at the moment, which is the war in the Middle East against Daesh, IS or whatever you want to call them. They have made mistakes, but they publish these, and I have sent these to Dan. So here what we call ROE, or rules of engagement, matters, and that's a set of riding instructions for command led operations, but again all these things to do with is it necessary, is it proportionate, play into the go/no go criteria. So if you had a bad guy in a particular place but he was surrounded by a crowd of 100 people, including women and children, that would probably be a no go and you wouldn't take him out, unless the benefit analysis was such that he was such a high value target that you would then need to take the humanitarian and political flak by wasting him and 100 people including wives and children, but I have not seen an example where that would actually occur. So mistakes are made; as I say, I have given you 4 pages of mistakes in the last year or so in CENTCOM's AOR, that's area of responsibility.

23. So, based on that, which will lead into 15: if the previous scenario was controlled purely by an AI, which you have yourself said is too far in the future, and the pilot was completely removed so it was a fully autonomous being, and the AI chose the strike based on the intelligence, who would then be accountable for that? Would you say it would be the system as a whole for choosing it, or the programming behind it, which are two separate things?

That's a really, really good question, but I still can't see at the moment a scenario where you would have the thinking being with all the various factors which would be programmable, in terms of the lines of code that you would need, to come up with that solution. And of course a classic example is the number of lines of code that were supposed to be inserted to make the intelligence systems on the Nimrod operational, which was a complete disaster and was pulled as a procurement system. So I just don't see it happening, because coding is a science and war is an art, and they always say there are 3 things which never change in war: 1, it's an art, not a science; 2, it's a clash of wills, and an autonomous system doesn't understand what conscious will is; and the third thing they say is that the causes of war don't change. So again, I just can't see that being handed over to something without some human interface to make the final go/no go decision on targeting.

24. So, adding to that, do you feel that a conscience plays a major part in that go/no go decision?

Absolutely. It can be wrong, but you've got to base everything on the best available evidence at the time. But I can't see us allowing it; there are too many criteria which would have to be considered in real time for that to be just handed over to a machine to make the decision, and that's why there's a lot of danger, I guess, in the same sort of thing, the automation of algorithms, as I said earlier on about buying and selling on stock market things; occasionally there are glitches in the software which mean you have ridiculous things like the collapse of the Yen, or whatever it was, about 3 months ago in Asia, or a rogue intelligence element to it.

25. Do you think that the false/fake news that's been in the news recently would have an adverse effect on AI within the Armed Forces, but also on humans in control within the Armed Forces?

I don't see it having an impact, but you have to understand, I guess, what fake news is, and here I'm referring to an article where I was in the paper a couple of weeks ago, where the article itself was completely different from an interview given on TV. So what is it? It is an inaccurate, sometimes sensationalist, report that is created to gain attention, mislead, deceive or damage a reputation, and that is different from misinformation, which is inaccurate because the reporter has confused facts. So fake news, of course, is created with the intent to manipulate something or someone, and it can obviously spread virally these days, and this is why, for example, you have the notion, which I often use on TV, that a media meme is a cultural packet of information which spreads virally in a non-linear fashion, because you have memetic engines these days which suit that sort of transmission: your social media, Twitter and Facebook and everything else. And people consume this sort of news because it is mostly, and most often, aligned with their world opinion; so, for example, as you've seen with the article I was in with the Daily Express, most people in their comments would not have read past the headline, so they would have made a judgement based on that, not based on the reality of the video. Now, that doesn't have an impact on AI, because of course the algorithms, it's 0s and 1s; it's not to do with any sense of false news, I don't think. What you could get is a rogue programmer who programs in lines of code which are false in terms of what they are supposed to do, but that happens all the time now in chips and everything else, and coding is not a precise science, and obviously we have a huge volume that comes out every month from Microsoft with all the patches that are needed, because it is a complex business and no-one is perfect, and this is why there are holes in everything and why the holes in software can be used for attack.

25a) I think my reasoning behind that question was, I suppose, my opinion of how an AI would soak up all this information on the internet and then make decisions based on that.

Well, information, disinformation and propaganda have always been part of war, and recently they have been talking about hybrid war, but that's just another one of these dish-of-the-day words that is being used, like suddenly we've got this damascene revelation, this new thing; it's nonsense, it has just replaced the dish of the day of 10/15 years ago, which was effects based operations, and we've always had effects based operations: what's the object of war, you know, what's the outcome, and that's exactly the same with this hybrid warfare. I would worry more about, and I have already touched on this, AI being corrupted by cyber intrusion altering the lines of code and things like that, because of the holes in either the network or software or whatever, because of course corrupting the data or information might alter the behaviour of the AI by utilising different algorithms and different responses. And of course AI will ultimately be manmade and can therefore be manipulated; it is as contestable a domain as any other. And what I can do is give you a map that I use about the contestable domain and how you can manipulate these ends; I originally got this from a lecture I attended in 2010, the Pinnacle course at the US war college, when some cyber type geek was lecturing on this and was the most impactful bloke that I've heard talk about the subject.

26. So DSTL, the Defence Science and Technology Laboratory, has created a hypothetical system that uses live CCTV imaging to identify possible targets, and if the intelligence suggests so, the system can intervene as it sees fit. How would you personally feel if this was purely automated?

I don't really understand what they are getting at there, in the sense that of course there are high grade cameras on any drone system at the moment, and that's what gives you your confirmatory intelligence feed, but that still doesn't lead to your automation of the attack, and again it has to come back to those criteria discussed in the previous questions about targeting. It will remain hypothetical because you still need that kill chain, as we call it; it still needs that human intervention to make the go/no go criteria; it would not be purely automated from that sort of perspective. Now, it's the same thing, for example, if you take the Brussels terrorist attack in 2016, the airport attack: there's a notion that in the future, in 3/5 years (this may be slightly wrong in terms of time frame), an airport, based on facial recognition software and other recognition criteria, will have you transiting through a tunnel of truth, and the tunnel of truth will be able to identify you in real time based on facial recognition software, and because you have given up a lot of your data through things like Facebook, and have had a lot of pictures in the public domain, if you are a bad guy it will be able to put your name straight to the fact that you were in this tunnel at any one time. But where would that lead? If you had, like, James Bond type automated weapons in the roof or walls of that airport, would it lead to that guy being taken out? Absolutely not; you would still need some sort of intervention to take him down, because of course, like all these things, there is potentially a ladder of escalation or de-escalation: is he a threat, why is he a threat at that time? You know, he might just have a t-shirt on and actually be going on a flight to somewhere, when you know from his history that he is a terrorist. So there is always going to be human intervention in that sense, I think. DSTL are very, very good; there was a bunch of scientists on a thing called SciAd, Scientific Advisors, in Northern Ireland when I was there, and they come up with all sorts of brilliant scientific solutions, but scientific solutions which are going to be legally compliant with the laws of war and all those other sorts of things.

27. Can you think of any ethical issues that the military may face if an AI was created, hypothetically, to run in-theatre operations?

Well, yes: it must understand the law of war for armed conflict, and of course that requires judgement, and as I said, the characteristic of war is that it's an art, not a science; it's not the mere servicing of targets, and that's where you can get sort of stuck on the use of AI, and why it must follow a sort of just war theory, which in a sense all comes from Thomas Aquinas; that is a Western model of course, and Daesh, IS or whatever you want to call them don't follow that sort of model. The other point about why it must follow that is the various things we mentioned in a previous answer about proportionality and necessity, but the other thing it's going to have to follow is distinction: is that really a military target, which can be distinguished from a civilian target, so that you don't have unnecessary collateral damage, and also those who have immunity in combat in the Western model. Children, women and those in civil industry are not deemed to be warlike or leading to warlike outcomes, i.e. they are protected, and this is why IS are so different, i.e. with behaviour like that priest in the church being killed last year, for example.

27a) To add to that: obviously computers can be programmed to follow a specific set of instructions, so do you think that the laws of war, for example, could just never be programmed in 1s and 0s because they are so subjective, maybe?

Yeah. So, for example, let's take a real example: in 1993 in West Belfast a car was coming towards some soldiers who were in a VCP, a vehicle checkpoint, and the car sped through, hit one of the soldiers and was fleeing on the other side. So, A, when would the car pose a threat, and B, when would the car no longer pose a threat? Now, you would have to have an automation, for example, of where the car was temporally and spatially at all times to try and automate that decision. Now, this did actually lead to a murder case: the soldier fired 6 rounds, I think it was; 5 rounds were deemed to be legal and proportionate, and the 6th round was deemed to be illegal and disproportionate because of the distance the car was away from the soldiers at that time, and therefore it was no longer posing a threat, and he was actually done for murder. Unless you've got that complete big, big data where all those things are linked in, you just couldn't make that decision. That could come, of course, I'm not saying that it couldn't, and again there's another thing I am involved in where we are attempting to do that, but you would need real time data on all those elements involved in that sort of decision making, for a soldier in time or for an AI system to do that. You can look the case up if you need to; it was a guy called Lee Clegg. So it would be an interesting one, how you would automate that; there were 2 people killed, I think, but just understanding the legal angle on why it was disproportionate and proportionate and all that.

28. How would you feel if an automated military system was brought in that took away human command to a certain degree?

I think it would potentially be disastrous, unless it was a high value target or a suicide mission scenario where you cannot envisage any outcome on decision points that might lead to an abort; i.e. this is a mission that you've got to do, therefore it can be completely automated, and there is no way that we can see any other outcome apart from the one outcome, that we need to go and do this, and in that scenario there may be a complete role for an AI system to go and do that.

29. How would you feel about that automated military system, without human command, within a piloting mission? Interpret that as you will.

Now, that could be one. So, for example, let's take one from, say, 3 years ago: let's say Iran were developing a nuclear capability and they had flooded an area around one of their key development sites with air defence systems, which make going in with air breathing systems (planes with pilots), well, high risk; the chances of being shot down would be higher, and in those circumstances putting an AI into a plane, if necessary with a payload, would be a credible scenario.

30. Do you think that is ever likely to happen?


Yes, it could happen. Now, of course, it doesn't need to happen, because what you really want is to launch a clever GPS guided stand-off bomb from so far away that you wouldn't have to get into that position. It is very difficult to do that because there is always a technical trade-off there, between the fuel you need to get the system there and the payload at the far end, like the V1/V2 rockets: you can only go so far based on how much fuel you have, and fuel is a weapon in itself.

31. Can you foresee any ethical issues if that were to happen with an automated system? Within the example you mentioned in Iran, is there an ethical issue which may arise because a computer did it rather than a human?

You would still need some sort of abort criteria, because of course who is to say that there wasn't a political amelioration in the last few minutes before you'd given the automated system the go? This is a crucial thing, and it is partly back to the Clausewitzian trinity, and it is partly due to what is not often understood, which is the difference between war, the political objective for which you fight, because it's always fought for a political purpose, and warfare, which is the act of fighting, and therefore the automation of systems which employ force. So there is this key difference between war and warfare, because of course even soldiers don't act on their own; they act for some sort of political purpose. Now, we don't very often get that clearly clarified by politicians, and you don't at the moment in Syria and Iraq. That is the difference, and the key, key thing is of course that a lot of politicians, when you are talking about high value targets, do get involved in the go/no go criteria. So, for example, for the raid on Bin Laden in Pakistan, the President and all the top political officials were in the Situation Room, because it wasn't a military decision to go in and get him; because it was invading someone else's sovereign territory, it was a political decision, and the buck stops with the politician in that sense. Another example of that, although it is not an AI system, is that if you have a rogue or renegade aircraft with hijackers in it over UK airspace, although it would potentially be shot down by the RAF, it is not for the RAF to make that decision; it is a political decision to engage it, because of the consequences of the outcome if it was to go into, wherever you want, Canary Wharf, the Houses of Parliament, etc.


31a) Because that is what I'm hypothetically suggesting: that there is this overseeing system, tracking all traffic and all data, which cannot necessarily predict or foresee, but which can make that decision for the political leaders.

Yes, that's an interesting one, and I do know a lot about this. In the scenario I just gave you, airspace is of course already automated, so you have a complete air picture over the corridors to the UK. It is when you deviate from the norm, when there is a deviation from a flight pattern, a loss of communications or erratic behaviour, that the QRA (quick reaction alert) aircraft are launched to interrogate the aircraft and try to get it to comply by visual and other signals; that is, if you can re-establish communications, and I mean digital or radio communications, but there are a number of visual indicators for it to comply. Otherwise you have the potential for it to be shot down. But of course, and this is where the danger comes in, an aircraft is just an aircraft going from A to B until the final moment of saying: actually, I'm not, I'm going into the Houses of Parliament. So when is it a threat and when is it no longer a threat, and how could you automate that? You couldn't automate it in the sense of saying there is a deviation from the norm, therefore that's a threat, we will shoot it down, particularly if it's got 350 or 400 people in it. So how would the AI make that decision?
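
The point being made, that deviation from the norm is observable but intent is not, can be made concrete with a toy rule set. In the hypothetical sketch below, every field name and threshold is invented for illustration; note that the most these rules can ever justify is an interrogation, because nothing in the observable data distinguishes a hijacked aircraft from one with a failed radio.

    from dataclasses import dataclass

    @dataclass
    class Track:
        off_route_km: float  # lateral deviation from the filed flight plan
        comms_lost: bool     # radio/digital communications unresponsive
        erratic: bool        # unexplained altitude or heading changes
        souls_on_board: int

    def assess(track: Track) -> str:
        anomalous = track.off_route_km > 20 or track.comms_lost or track.erratic
        if anomalous:
            # An anomaly justifies interrogation by QRA aircraft, never an
            # automated engagement: at this point in the data, a lost radio
            # and a hijacking look identical.
            return "LAUNCH_QRA_TO_INTERROGATE"
        return "MONITOR"

    print(assess(Track(off_route_km=35.0, comms_lost=True, erratic=False,
                       souls_on_board=380)))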

32. What about working with other Allied Nations such as the US: using their technologies, and the potential impact of those technologies in theatre?

The US are light years ahead and we are light years behind, so you get relegated to a flank if you have different capabilities and your equity is different. This comes down again to the fact that there are now five domains or environments of war: not just land, sea and air, but land, sea, air, cyber and space. The capability of America both defensively and offensively in cyber, and the capability of America in space, is far, far ahead of the UK. That doesn't mean, of course, that all the things I've said about ethics and war don't pertain to the Americans, because generally they do. We are catching up with them in that sort of policy space: the drone strike of August 2015, which I have given Dan a paper on, was really catching up on US policy, as was the other thing we call active self-defence, which was new in a sense, because we were never able to prove imminence if the threat was thousands of miles away. So, for example, if you take Pakistan, there were lots of rogue Al Qaeda guys who, from a UK perspective, we would probably like to have gone and killed, but we couldn't, because we couldn't legally prove imminence when they were at that range. That's the bit of the law which we seem to have said doesn't matter anymore, I think under the Attorney General's determination on this, which I haven't seen and which we won't see for 30 years. This is where active self-defence comes in: where an organisation has proven a pattern of activity which shows malign intent, you are therefore justified in going in and neutralising that threat, which means killing him, in a drone strike basically. That's in the paper which I have given you.

33. Do you have any more comments to make on the subject?

The other aspect to this where AI potentially comes in is battle damage assessment. Can you automate battle damage assessment, or do you need to send air-breathers over again to do it? It doesn't have to be aircraft: a satellite can be a component of battle damage assessment, because some of the optics on the systems you see from space are incredible. But with all this stuff, when we talk about intelligence, human intelligence is one of the components of the intelligence spectrum: we have OSINT, SIGINT, COMINT, HUMINT, all these various things. Human intelligence has always been more powerful than artificial intelligence. We have relied heavily in the last 10 to 15 years on technical intelligence, be it SIGINT or COMINT or whatever, but of all these forms of intelligence, the best thing you will ever have is human intelligence. Now, again, you can have deception from human intelligence, sourced from a single source, potentially saying there were weapons of mass destruction in Iraq, so we will bomb the bejesus out of them, like in 2003. But HUMINT is far more powerful, and of course it is humans who can use some of the systems that we need. You might need to infiltrate to extract data from bad guys, so of course it is humans that you might use to put malware into the systems of an opposed person, as with the suggestion in the latest WikiLeaks that the CIA did this, most of which I think was a bit bogus, to be honest; it doesn't really show us much. But that is what you can do, and it is really easy to do because of human psychology. I know there was a study done last year, in 2016, at the University of Illinois, where they put about 300 USB sticks around the campus, and an extraordinary number of people not only picked them up but plugged them into their machines, which activated malware. Again, that comes back to my point about pretty dumb people.

33a) I'm very aware of people doing that. In my old school we had a virus installed because someone did something similar: got a USB stick off a mate, plugged it in, and it infected our entire network.

And again that is an interesting one, because there has never been a cyber Pearl Harbor, and neither has there ever been a cyber Hiroshima; no one has actually died, as far as I am aware, from a cyber strike. Now, the use of data and metadata is different to that. Rather as your network recovered from the surprise of that virus, and probably recovered quite quickly, it is the same in war: although there might be shock, it is no more than surprise, and it diminishes over time. The question then is what your disaster recovery plan is, whether from your systems being infiltrated or whatever it might be.

33b) This comes back to public perception, which ties in to my chapter on AI in popular culture. A lot of the AIs that the public know about come from film and television, so I make a lot of reference to Terminator and Skynet: a system that is developed and then militarised, in the sense that the military are using it rather than it being weaponised, but because it is a learning algorithm it learns that humans are the problem, so it exterminates the humans. Obviously that is completely far-fetched, but the point I try to make is that all of these portrayals are out there and they are all negative; you have Skynet, you have HAL from 2001: A Space Odyssey, and that pushes people's perception of AI towards the negative. But do you also think that gives people something to strive towards, so that in the subconscious mind of a programmer it becomes: I am going to program something that learns to make up its own mind and becomes sentient?

I think people would really prefer, and you could argue this with wearables and stuff like that, and I don't know if you've seen the Black Mirror series, to have an artificial implant in their brains which does what Google does for you; I think people would see that as the way forward. There are one or two episodes where the character can call up the memory of anything, rather as Google can, because Google doesn't forget, but he then sees that his wife is having an affair, and he can only erase that by taking the implant out of his head. So yes, I think there is probably more mileage in that: making people smarter, in a sense.

33c) This is why it is hypothetical.

Yes, and there are ethical issues in that. Take the Saudi Arabians, for example: they have a big terrorist problem, and they were saying to the Brits the other day, why can't you microchip all your potential bad guys, because that's what we do with our hawks. The answer was: that might be true, but hawks don't have human rights. So again this is where the law plays on human rights. Of course, if you did that to a hawk there might be some animal rights people who would say it was wrong, so it's not quite the same.


6.9.1 CENTCOM Example

FOR IMMEDIATE RELEASE
November 9: Iraq and Syria civilian casualty assessments
Press Operations
Release No: 16-151
Nov. 9, 2016
Release Number 20161109-33

U.S. Central Command Releases

TAMPA, Fla. — After months of reviewing reports and databases to resolve cases where Coalition air strikes may have resulted in civilian casualties, U.S. Central Command has determined that over the past year 24 U.S. airstrikes in Iraq and Syria regrettably may have killed 64 civilians and injured 8 other civilians.

"We have teams who work full time, to prevent unintended civilian casualties," said Col. John J. Thomas, U.S. Central Command spokesperson.

"It's a key tenant of the counter-ISIL air campaign that we do not want to add to the tragedy of the situation by inflicting addition suffering. Sometimes civilians bear the brunt of military action but we do all we can to minimize those occurrences even at the cost of sometimes missing the chance to strike valid targets in real time," Thomas said.

Central Command thoroughly reviewed the facts and circumstances surrounding each report, officials said.

"The assessments determined that in each of these strikes the right processes were followed; each complied with Law of Armed Conflict and significant precautions were taken, despite the unfortunate outcome," he said.

118

Investigations were informed by the military's own records, combined with an exhaustive review of reports from outside sources, including news media reports, non-governmental organizations and other U.S. Government departments and agencies.

Short descriptions of the 24 resolved cases are listed below.

"In most every case, when we determined there may have been civilian casualties from one of our airstrikes, we are choosing to list the largest number of possible civilian casualties. In cases where we just don't have the investigative resources or evidence to determine precisely how many people may have died, we went with the worst-case number to ensure a full accounting," Thomas said.

1. November 20, 2015, near Dayr Az Zawr, Syria, against an ISIL tactical unit, it is assessed that five civilians were killed and three individuals were injured after the civilians entered the target area after the aircraft released its weapon.

2. March 5, 2016, near Mosul, Iraq, on a strike against an ISIL weapons production facility it is assessed that 10 civilians were killed.

3. March 24, 2016, near Qayyarah, Iraq, during a strike against an ISIL target, it is assessed that one civilian was killed.

4. April 1, 2016, near Raqqah, Syria, against an ISIL tactical unit, it is assessed that three civilians were killed after entering the target area after the aircraft released its weapon.

5. April 9, 2016, near Mosul, Iraq, during a strike against an ISIL tactical unit, it is assessed that one civilian was killed after entering the target area after the aircraft released its weapon.

6. April 30, 2016, near Mosul, Iraq, against ISIL military leadership, it is assessed that five civilians were killed after entering the target area after the aircraft released its weapon.

7. May 25, 2016, near Mosul, Iraq, during a strike against an ISIL tactical unit, it is assessed that one civilian was killed.

8. May 26, 2016, near Mosul, Iraq, during a strike against ISIL fighters, it is assessed that one civilian was killed after entering the target area after the aircraft released its weapon.

9. May 29, 2016, near Mosul, Iraq, during a strike against an ISIL weapons system, it is assessed that six civilians were killed.

10. June 15, 2016, near Kisik, Iraq, during a strike against an ISIL weapons storage facility, it is assessed that six civilians were killed.

11. June 15, 2016, near Mosul, Iraq, during another strike against ISIL targets, it is assessed that two individuals were injured after entering the target area after the aircraft released its weapon.

12. June 21, 2016, near Ar Raqqah, Syria, during a strike targeting an ISIL headquarters building, it is assessed that three individuals were killed after entering the target area after the aircraft released its weapon.

13. June 23, 2016, near Ar Raqqah, Syria, during a strike against an ISIL-held building, it is assessed that four civilians were killed after entering the target area after the aircraft released its weapon.

14. June 26, 2016, near Mosul, Iraq, during a strike against an ISIL target, it is assessed that one individual was injured after entering the target area after the aircraft released its weapon.

15. June 26, 2016, near Mosul, Iraq, during another strike against an ISIL target, it is assessed that one individual was injured after entering the target area after the aircraft released its weapon.

16. July 3, 2016, near Manbij, Syria, during a strike against an ISIL fighting position, it is assessed that four civilians were killed.

17. July 10, 2016, near Manbij, Syria, during a strike against an ISIL target, it is assessed that two civilians were killed.

18. July 14, 2016, near Qayyarah, Iraq, during a strike on an ISIL-held building, it is assessed that one civilian was killed.

19. July 31, 2016, near Manbij, Syria, during a strike against ISIL fighters, it is assessed that one civilian was injured after entering the target area after the aircraft released its weapon.

20. August 17, 2016, near Ar Raqqah, Syria, during a strike against an ISIL target, it is assessed that two civilians were killed.

21. August 20, 2016, near Manbij, Syria, during a strike against an ISIL artillery firing position, it is assessed that one civilian was killed after entering the target area after the aircraft released its weapon.

22. August 31, 2016, near Ramadi, Iraq, during a strike against an ISIL target, it is assessed that two civilians were killed.

23. September 7, 2016, near Dayr Az Zawr, Syria, during a strike against an ISIL oil collection point, it is assessed that one civilian was killed after entering the target area after the aircraft released its weapon.

24. September 10, 2016, near Ar Raqqah, Syria, during a strike against an ISIL target, it is assessed that five civilians were killed. (Centcom.mil, 2016)


6.10 Online Survey Questions

An Investigation into the Militarisation of Artificial Intelligence

This project has received the approval of the Cardiff School of Management's Ethics Committee.

I understand that my participation in this project will involve completing a questionnaire about the Militarisation of Artificial Intelligence, which will take approximately 15-20 minutes of my time.

I understand that participation in this study is entirely anonymous and voluntary, and that my name will not be used within the study. I can discuss any concerns with Daniel Edwards ([email protected]) or his supervisor ([email protected]).

I understand that any identifying information provided by me will be held confidentially, such that only the PI (Daniel Edwards) and the supervisor will have access to the data.

I understand that my data will be stored on password-protected computers and anonymised after completion of the survey, and that no one will be able to trace my information back to me. The raw data will be retained for five years, after which it will be deleted/destroyed.

If you are 18 years of age or over, understand the statement above and freely consent to participate in this study please tick the consent box to proceed.

* 1. I agree to continue

I agree


An Investigation into the Militarisation of Artificial Intelligence

About You

Please complete ALL of the following questions to the best of your ability, without the aid of any other information. Failure to complete ALL questions will result in your data not being used.

* 2. What is your age?

18 to 24

25 to 34

35 to 44

45 to 54

55 to 64

65 to 74

75 or older

Do Not Wish to Disclose

* 3. What is your gender?

Female

Male

Prefer not to say

* 4. Your Employment (Select all that apply)

Student

Full Time Employment

Part Time Employment

Retired

Unemployed

Do not wish to disclose

Other (please specify)

* 5. Employment/Course Sector


* 6. How would you rate your Technical Ability?

(Rating scale: No Technical Ability ... Neutral ... Expert)


An Investigation into the Militarisation of Artificial Intelligence

The Study

The following questions relate to the study in question.

Please complete the following questions to your best ability without the aid of any other information.

* 7. What is your level of knowledge/understanding of the term “Artificial Intelligence"?

I know lots about the term “Artificial Intelligence"

I know some about the term “Artificial Intelligence"

Neutral

I don't know that much about the term “Artificial Intelligence"

I know nothing about the term “Artificial Intelligence"

* 8. I see Siri, Cortana, Google Assistant, etc., as forms of AI

Strongly Agree

Somewhat Agree

Neither agree nor disagree

Somewhat Disagree

Strongly Disagree

I do not know what they are

* 9. Do you use any of the above mentioned programs?

Yes

No

* 10. I think that automated website help chats are forms of AI

Strongly Agree

Somewhat Agree

Neither agree nor disagree

Somewhat Disagree

Strongly Disagree

I do not know what they are

* 11. I think that automatic Motorway Speed Limit monitoring is a form of AI

Strongly Agree

Somewhat Agree

Neither agree nor disagree

Somewhat Disagree

Strongly Disagree

I do not know what they are

* 12. I think that Smart Home Technology (Hive, Amazon Echo) are forms of AI

Strongly Agree

Somewhat Agree

Neither agree nor disagree

Somewhat Disagree

Strongly Disagree

I do not know what that is

* 13. I think that non-controlled characters in a game are forms of AI

Strongly Agree

Somewhat Agree

Neither agree nor disagree

Somewhat Disagree

Strongly Disagree

I do not know what that is

* 14. I think that shopping recommendations (Purchase Predictions) are forms of AI

Strongly Agree

Somewhat Agree

Neither agree nor disagree

Somewhat Disagree

Strongly Disagree

I do not know what that is


* 15. Are you aware of the Investigatory Powers Act 2016?

Yes

No

* 16. Are you aware of Unmanned Aircraft being used within the Military?

Yes

No

I do not know what Unmanned Aircraft is

* 17. I am comfortable with Unmanned Aircraft being used within the Military

Strongly Agree

Somewhat Agree

Neither agree nor disagree

Somewhat Disagree

Strongly Disagree

"The UK is being watched by a network of 1.85m CCTV cameras"

(Lewis, 2011) https://www.theguardian.com/uk/2011/mar/02/cctv-cameras-watching-surveillance

* 18. How aware were you of the above statistic?

Extremely Aware

Very Aware

Somewhat Aware

Not so Aware

Not at all Aware

* 19. I am comfortable with potentially having my image used in CCTV systems

Strongly Agree

Somewhat Agree

Neither agree nor disagree

Somewhat Disagree

Strongly Disagree

* 20. I am comfortable with my information being stored indefinitely to help with criminal and/or national security purposes

Strongly Agree

Somewhat Agree

Neither agree nor disagree

Somewhat Disagree

Strongly Disagree

* 21. I am comfortable with a system in which previously stored CCTV images are used to target suspects committing criminal and/or national security offences

Strongly Agree

Somewhat Agree

Neither agree nor disagree

Somewhat Disagree

Strongly Disagree

* 22. Some ethical issues of CCTV monitoring may include privacy breaches, agreement to be recorded, image leaks and entrapment. Can you think of any more?


An Investigation into the Militarisation of Artificial Intelligence

Case Studies

The following are fictional ideas that have not occurred and have been created purely for this survey. Please answer the questions that follow each statement.

* 23. Drones have the ability to fly unmanned. How comfortable would you feel if the human interaction was removed completely from the act of flying?

Extremely comfortable

Very comfortable

Somewhat comfortable

Not so comfortable

Not at all comfortable

* 24. How comfortable would you feel if the human interaction was removed completely from the act of dropping munitions/strikes?

Extremely comfortable

Very comfortable

Somewhat comfortable

Not so comfortable

Not at all comfortable

* 25. If an Artificial Intelligence were in control of the unmanned drone, who do you think would be to blame if a strike were to go wrong?

The Military for not intervening

The intelligence that the AI was acting upon

The AI for not checking intelligence

The programming of the AI

All of the above


* 26. Hostile forces have been reported by a number of credible sources. The AI has decided to alert the required forces to intervene. Upon human inspection, the reports are found to be false. Who should be held accountable?

The Military for not checking the sources

The AI

The sources for false reporting

All of the above

* 27. Are you aware of what DSTL is?

Yes

No

I have heard of it


An Investigation into the Militarisation of Artificial Intelligence

Ethical Issues

* 28. How would you feel if automated military systems were brought in that took away human command to a certain degree?

Very Concerned

Somewhat Concerned

Neutral

Not that Concerned

Not at all Concerned

29. Do you have any other concerns regarding the potential use of AI, personal data, etc?

Thank you for taking the time to complete the questionnaire. Please submit your answers.

6.11 Online Survey Results

Question 1: I agree to continue

Answer Options                Response Percent    Response Count
I agree                       100.0%              63

Table 1: Question 1 Results

Question 2: What is your age?

Answer Options                Response Percent    Response Count
18 to 24                      30.2%               19
25 to 34                      12.7%               8
35 to 44                      9.5%                6
45 to 54                      28.6%               18
55 to 64                      15.9%               10
65 to 74                      3.2%                2
75 or older                   0.0%                0
Do Not Wish to Disclose       0.0%                0

Table 2: Question 2 Results
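
Each table reports a response percentage alongside a raw count out of the 63 respondents who consented in Table 1. As a sanity check, the percentages can be recomputed from the counts; the short Python sketch below does this for Table 2, using the labels and counts exactly as reported there.

    counts = {  # Table 2: "What is your age?"
        "18 to 24": 19, "25 to 34": 8, "35 to 44": 6, "45 to 54": 18,
        "55 to 64": 10, "65 to 74": 2, "75 or older": 0,
        "Do Not Wish to Disclose": 0,
    }
    total = sum(counts.values())
    assert total == 63  # matches the number of consents in Table 1

    for option, n in counts.items():
        print(f"{option:<24} {100 * n / total:5.1f}%  {n}")

Rounding to one decimal place reproduces the published percentages (30.2%, 12.7%, 9.5% and so on), confirming that the tables are internally consistent.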

Question 3: What is your gender?

Answer Options                Response Percent    Response Count
Female                        31.7%               20
Male                          68.3%               43
Prefer not to say             0.0%                0

Table 3: Question 3 Results

Question 4: Your Employment (Select all that apply)

Answer Options                Response Percent    Response Count
Student                       20.6%               13
Full Time Employment          57.1%               36
Part Time Employment          11.1%               7
Retired                       14.3%               9
Unemployed                    4.8%                3
Do not wish to disclose       0.0%                0
Other (please specify)        1.6%                1

Table 4: Question 4 Results


Question 5: Employment/Course Sector

Answer Options                  Response Percent    Response Count
Finance                         7.9%                5
ICT/Computing                   36.5%               23
Voluntary/Charity               0.0%                0
Retail                          3.2%                2
Sales                           0.0%                0
Energy and Utilities            3.2%                2
Public Sector                   9.5%                6
Healthcare                      9.5%                6
Creative Arts and Design        0.0%                0
Engineering and Manufacturing   4.8%                3
Hospitality                     1.6%                1
Law                             1.6%                1
Tourism                         0.0%                0
Property and Construction       1.6%                1
Recruitment and HR              0.0%                0
Other                           20.6%               13

Table 5: Question 5 Results

Question 6: How would you rate your Technical Ability?

[Responses were collected on a rating scale from "No Technical Ability" through "Neutral" to "Expert"; the results chart is not reproduced here.]

Question 7: What is your level of knowledge/understanding of the term "Artificial Intelligence"?

Answer Options                                                     Response Percent    Response Count
I know lots about the term "Artificial Intelligence"              20.6%               13
I know some about the term "Artificial Intelligence"              63.5%               40
Neutral                                                            6.3%                4
I don't know that much about the term "Artificial Intelligence"   6.3%                4
I know nothing about the term "Artificial Intelligence"           3.2%                2

Table 6: Question 7 Results

Question 8: I see Siri, Cortana, Google Assistant, etc., as forms of AI

Answer Options                Response Percent    Response Count
Strongly Agree                12.7%               8
Somewhat Agree                57.1%               36
Neither agree nor disagree    12.7%               8
Somewhat Disagree             12.7%               8
Strongly Disagree             3.2%                2
I do not know what they are   1.6%                1

Table 7: Question 8 Results

Question 9: Do you use any of the above mentioned programs?

Answer Options                Response Percent    Response Count
Yes                           57.1%               36
No                            42.9%               27

Table 8: Question 9 Results

Question 10: I think that automated website help chats are forms of AI

Answer Options                Response Percent    Response Count
Strongly Agree                14.3%               9
Somewhat Agree                41.3%               26
Neither agree nor disagree    15.9%               10
Somewhat Disagree             19.0%               12
Strongly Disagree             4.8%                3
I do not know what they are   4.8%                3

Table 9: Question 10 Results


Question 11: I think that automatic Motorway Speed Limit monitoring is a form of AI

Answer Options                Response Percent    Response Count
Strongly Agree                6.3%                4
Somewhat Agree                25.4%               16
Neither agree nor disagree    27.0%               17
Somewhat Disagree             27.0%               17
Strongly Disagree             7.9%                5
I do not know what they are   6.3%                4

Table 10: Question 11 Results

Question 12: I think that Smart Home Technology (Hive, Amazon Echo) are forms of AI

Answer Options                Response Percent    Response Count
Strongly Agree                12.7%               8
Somewhat Agree                34.9%               22
Neither agree nor disagree    23.8%               15
Somewhat Disagree             20.6%               13
Strongly Disagree             1.6%                1
I do not know what that is    6.3%                4

Table 11: Question 12 Results

Question 13: I think that non-controlled characters in a game are forms of AI

Answer Options                Response Percent    Response Count
Strongly Agree                11.1%               7
Somewhat Agree                33.3%               21
Neither agree nor disagree    15.9%               10
Somewhat Disagree             19.0%               12
Strongly Disagree             11.1%               7
I do not know what that is    9.5%                6

Table 12: Question 13 Results

Question 14: I think that shopping recommendations (Purchase Predictions) are forms of AI

Answer Options                Response Percent    Response Count
Strongly Agree                14.3%               9
Somewhat Agree                25.4%               16
Neither agree nor disagree    19.0%               12
Somewhat Disagree             28.6%               18
Strongly Disagree             12.7%               8
I do not know what that is    0.0%                0

Table 13: Question 14 Results

Question 15: Are you aware of the Investigatory Powers Act 2016?

Answer Options                Response Percent    Response Count
Yes                           36.5%               23
No                            63.5%               40

Table 14: Question 15 Results

Question 16: Are you aware of Unmanned Aircraft being used within the Military?

Answer Options                            Response Percent    Response Count
Yes                                       92.1%               58
No                                        7.9%                5
I do not know what Unmanned Aircraft is   0.0%                0

Table 15: Question 16 Results

Question 17: I am comfortable with Unmanned Aircraft being used within the Military

Answer Options                Response Percent    Response Count
Strongly Agree                27.0%               17
Somewhat Agree                39.7%               25
Neither agree nor disagree    15.9%               10
Somewhat Disagree             12.7%               8
Strongly Disagree             4.8%                3

Table 16: Question 17 Results


Question 18 "The UK is being watched by a network of 1.85m CCTV cameras" (Lewis, 2017) How aware were you of the above statistic?

Answer Options Response Response Percent Count Extremely Aware 14.3% 9 Very Aware 22.2% 14 Somewhat Aware 50.8% 32 Not so Aware 9.5% 6 Not at all Aware 3.2% 2 Table 17: Question 18 Results


Question 19: I am comfortable with potentially having my image used in CCTV systems

Answer Options                Response Percent    Response Count
Strongly Agree                11.1%               7
Somewhat Agree                38.1%               24
Neither agree nor disagree    22.2%               14
Somewhat Disagree             20.6%               13
Strongly Disagree             7.9%                5

Table 18: Question 19 Results

Question 20: I am comfortable with my information being stored indefinitely to help with criminal and/or national security purposes

Answer Options                Response Percent    Response Count
Strongly Agree                11.1%               7
Somewhat Agree                31.7%               20
Neither agree nor disagree    22.2%               14
Somewhat Disagree             14.3%               9
Strongly Disagree             20.6%               13

Table 19: Question 20 Results

Question 21: I am comfortable with a system in which previously stored CCTV images are used to target suspects committing criminal and/or national security offences

Answer Options                Response Percent    Response Count
Strongly Agree                23.8%               15
Somewhat Agree                41.3%               26
Neither agree nor disagree    19.0%               12
Somewhat Disagree             9.5%                6
Strongly Disagree             6.3%                4

Table 20: Question 21 Results

Question 22: Some ethical issues of CCTV monitoring may include privacy breaches, agreement to be recorded, image leaks and entrapment. Can you think of any more?

Witness Protection


Stalking, though slightly unlikely surely some people can be watched even if they aren't doing anything criminal? Maybe?

Wrong identification (twins, very similar looking)

Lack of freedom

Potential voyeuristic if people are watching us through CCTV

Accurate verification of individuals, malicious use (e.g. blackmail or doctoring of footage), access to footage if all held by same company/stored in same area.

Could be used as a solitary narrative which could cause other factors to be left out

A key misuse maybe Discrimination. Drone and body worn cameras are at higher risk of being stolen or even tampered with, there maybe more dependency on this video/audio as evidence. Systems like ANPR and Facial recognition could have increased reliance on them, but are these systems gameable, can someone with false plates arrange to be in London while they are elsewhere? Can disguises or anti-recognition makeup make the dependency on this data less reliable?

Stalking, control, protection

Misuse of data - not for the reason that it was recorded

Misinterpretation of image

Robberies

Invasion of civil liberties

Access of footage can allow stalking/paedophilia/abduction to become an easier issue

Incorrect person identification, miscarriage of justice.


Use of images for extortion, to imply guilt by association (by presence in the vicinity), embarrassment (posting on social media)?

Extended cross referencing of different sources used as basis for profile based investigation and detainment, or as a basis for arrest

Right to be forgotten

Altered footage may be carried out and present false data/information or at a minimum do not reflect the entire story i.e. an assault showing the outcome may delete the initial events that led to the outcome

Sensitive data, unregulated monitoring

Table 21: Question 22 Results

Question 23: Drones have the ability to fly unmanned. How comfortable would you feel if the human interaction was removed completely from the act of flying?

Answer Options                Response Percent    Response Count
Extremely comfortable         4.8%                3
Very comfortable              6.3%                4
Somewhat comfortable          20.6%               13
Not so comfortable            49.2%               31
Not at all comfortable        19.0%               12

Table 22: Question 23 Results

Question 24: How comfortable would you feel if the human interaction was removed completely from the act of dropping munitions/strikes?

Answer Options                Response Percent    Response Count
Extremely comfortable         4.8%                3
Very comfortable              0.0%                0
Somewhat comfortable          7.9%                5
Not so comfortable            39.7%               25
Not at all comfortable        47.6%               30

Table 23: Question 24 Results

Question 25: If an Artificial Intelligence were in control of the unmanned drone, who do you think would be to blame if a strike were to go wrong?

Answer Options                                  Response Percent    Response Count
The Military for not intervening                17.5%               11
The intelligence that the AI was acting upon    12.7%               8
The AI for not checking intelligence            0.0%                0
The programming of the AI                       7.9%                5
All of the above                                61.9%               39

Table 24: Question 25 Results

Question 26: Hostile forces have been reported by a number of credible sources. The AI has decided to alert the required forces to intervene. Upon human inspection, the reports are found to be false. Who should be held accountable?

Answer Options                                  Response Percent    Response Count
The Military for not checking the sources       31.7%               20
The AI                                          0.0%                0
The sources for false reporting                 20.6%               13
All of the above                                47.6%               30

Table 25: Question 26 Results

Question 27: Are you aware of what DSTL is?

Answer Options                Response Percent    Response Count
Yes                           3.2%                2
No                            74.6%               47
I have heard of it            22.2%               14

Table 26: Question 27 Results


Question 28: How would you feel if automated military systems were brought in that took away human command to a certain degree?

Answer Options                Response Percent    Response Count
Very Concerned                30.2%               19
Somewhat Concerned            47.6%               30
Neutral                       9.5%                6
Not that Concerned            12.7%               8
Not at all Concerned          0.0%                0

Table 27: Question 28 Results

Question 29: Do you have any other concerns regarding the potential use of AI, personal data, etc.?

There needs to be several layers of oversight and verification.

Huge potential for job losses as AI takes on more roles such as driving.

Lots can go wrong with AI and need to be standardised with the 3 rules of robotics are built in

The fear of jobs being replaced by AI

The development of the military stuff sounds really worrying. Not personal but my mum works in data stuff and she's so protective of her card details online and what data she puts online generally because data leaks worry her.

Only that there is an override by human available

Have you not seen the Terminator?

If programmed well, used well, then it could be a very helpful aspect of technology. However, there is always be people looking to exploit good nature, programs could be made to do potentially harmful things.

Use of tracking in every program and piece of technology to create personality profiles of people, currently used to predict purchasing options and such but potentially usable to profile behaviour and select individuals who are threatening to the interests of the profilers.

Things could all get a little Terminator up in here.

Data breaches

My concerns are more commercial than military. In the end it comes down to money and I dislike marketing that is targeted using my personal information without my consent.

Cyber security, hacking of other governments/countries data and misuse of intelligence for AI in military devices or domestic devices (eg. Drones) tampered with by terrorists. How can you be certain you have 'real' intelligence to build from rather than 'fake/tampered' data?

"always on" listening such as echo is a concern

Job losses, eventual dependency on these systems with no contingency for when they fail

At the end of the day those using the AI should be held accountable for AI decisions and actions

The impact of AI on the rights of a person - if AI develops to the point that AI could be considered a "person" what would that mean to personal rights

Privacy rights, protection of my data and how used including tracking of my activities via phone or CCTV.

I have strong reservations about false judgements being made by AI if used in an unsupervised manner. It is very unlikely that programming could be made perfect to cover all possible scenarios - a human executive function should remain in place at all times.

Only over the security of the storage and potential leaks.

Who would be blamed for mistakes


I don't feel I understand it enough to give a concern

I've seen terminator, everyone should be scared of AI :)

AI is a powerful tool and we must use it to our advantage, but I strongly believe that human commands, testing and data checks are always required.

On the basis that anything that can be abused will be, I am concerned at the amount of data relating to my life being amalgamated and "interpreted". Being familiar with Analytics, it already bothers me as to how much of my life can be understood from my purchasing history - add in my movements and I have no privacy/escape (whether i'm guilty or not) :-)

I am uneasy about a situation where AI has total control as "Asimov's laws" haven't yet been proven to be implementable. AI should be an assistance to life, no more.

Developers of AI should be mindful of how it could be used for unintended uses.

Misinterpretation of data can lead to serious errors and actions, although human oversight will not fully mitigate the risk it reduces it slightly. Assumptions made by rules based 'AI' tools can only base their decisions on known expected behaviours, outliers can seriously distort results

No concerns necessarily, however I do see AI as a way of human evolution.

The ability for human intervention to override the system if it fails

With the development of technology, in the military especially, there seem to be many disasters and unnecessary loss of human life, I am concerned that a similar occurrence would happen with the widespread introduction of AI and the reducing control of such systems.

If the enemy gets the technology and is less proactive with its quality controls

Table 28: Question 29 Results
