
OCTOBER 2019

Artificial intelligence and defense: What is at stake?

Renaud Bellais

RESEARCH PAPER

Pensée stratégique

Image : ©Daniil Peshkov


The Institute of Strategic and Defense Studies (IESD) is an academic research centre, created in 2018, that specialises in strategic studies. The IESD is supported by the Université de Lyon (UdL) and belongs to the Law School of Université Jean Moulin – Lyon III. The institute consists of a multi-disciplinary team of researchers (Law, Political Science, Management, Economics) and unites a network of experts, researchers, PhD students and master's students specialised in strategic studies.

The IESD is currently a candidate for selection as one of the Defense Ministry's (DGRIS) "National Centres of Defense Excellence", focusing on: "Interconnection of high strategic functions (air power, space, nuclear deterrence, missile defense): political and operational implications of high-intensity capability interactions in homogeneous spaces and contested commons."

Director of IESD: Olivier Zajec
Collection: Defense capabilities analysis

Website: https://iesd.univ-lyon3.fr/
Contact: [email protected]

IESD – Faculté de droit Université Jean Moulin – Lyon III 1C avenue des Frères Lumière – CS 78242 69372 LYON CEDEX 08


Renaud Bellais, « Artificial Intelligence and Defense: What is at Stake? », Note de recherche IESD, coll. « Analyse technico-capacitaire », n°1, October 2019.

Abstract

Artificial intelligence is currently a "hot topic" in defense matters. This new technology simultaneously creates fears and sets high expectations. Despite the large amount of literature written about AI and defense, it is difficult to distinguish its true impacts and estimate when they could occur. This policy paper aims to provide a comprehensive assessment, which goes beyond prejudices and misunderstandings. It underlines how AI constitutes a general-purpose technology with huge potential that could turn defense and warfare upside down. However, we must keep in mind its advantages and limits while considering its growth potential on different time scales. In addition, the military adoption of AI-based systems should not be taken for granted. The integration of technology is a complex process that ultimately affects how AI would change the art of war.

Résumé

Artificial intelligence is today one of the hottest topics for defense. This new technology gives rise both to many fears and to very high expectations. Despite an abundant literature on AI and defense, it is not easy to understand the real impacts of this relationship, or even to identify when they could produce concrete results. This research note aims to go beyond prejudices and exaggerations. It underlines that AI constitutes a versatile technology with immense potential that could transform defense and the forms of contemporary warfare. It is nevertheless necessary to keep in mind its advantages and limits, and to consider its potential evolution over several time scales. Moreover, the adoption of AI systems within the armed forces should not be taken for granted. Technological integration involves relatively complex adaptation mechanisms, which affect the way AI would actually transform the art of war.

About the author

Renaud Bellais is an associate researcher in economics at ENSTA Bretagne and at CESICE, Université Grenoble Alpes. He graduated from the Institut d'Études Politiques de Lille (1994), holds a PhD in economics from Université du Littoral (1998) and an accreditation to supervise research from Université Grenoble Alpes (2004). After serving as a lecturer at Université du Littoral, he worked at the French defense procurement agency (DGA) before joining EADS, now Airbus, and then MBDA in 2017. He has teaching commitments in universities and military academies.

The views expressed in this article reflect solely the author's opinion.


NOTE DE RECHERCHE SEPTEMBER 2019

Table of Contents

- Artificial intelligence and defense: What is at stake?
- Artificial Intelligence as a general-purpose technology
- A multi-level "game changer" for defense?
- Strategic dimensions
- Operational features of AI systems
- AI and effective autonomous systems
- Trusting AI-based systems, a technical challenge
- International regulations, a hindrance to AI deployment?
- Defense innovations: the human factor
- Conclusion


RENAUD BELLAIS Coll. Analyse techno-capacitaire

Artificial Intelligence and Defense: What is at Stake?

This Note aims at identifying how artificial intelligence (AI) can affect the creation of defense systems and military operations, as a means to communicate between defense stakeholders. It does not look at core technical issues, but only at conceptual and operational ones. It proposes a comprehensive review of related topics, stakes and issues, and relies solely on open sources.

There are many fears and prejudices related to the use of AI in defense systems. For instance, a coalition of NGOs launched the "Campaign to Stop Killer Robots" in 2012. However, it seems very improbable that systems like Skynet¹ or Ava² could soon be deployed. Even though the potential implementation of advanced AI (sometimes defined as Artificial General Intelligence) into defense systems appears as a very distant possibility for technical, architectural and military reasons explored here, it is indisputable that AI has achieved major progress over the past decade. Therefore, exploring its military potential constitutes a legitimate objective.

After decades of ups and downs, the so-called "AI winters", AI has clearly made progress in recent years because three major obstacles have been overcome since the early 2010s: high-performing computational power, massive data, and improved algorithms associated with deep learning (Cardon et al. 2018). The recent successes of AI have nurtured high expectations, in particular in the realm of defense capabilities, where techno-centrism remains, more than ever, a core feature.

In February 2019, for instance, the White House and the Pentagon unveiled their respective AI strategies. President Trump signed an executive order enacting the "American AI Initiative", which identifies AI as a priority for government research and development. The next day, the U.S. Department of Defense released its own AI strategy, centred on the newly created Joint Artificial Intelligence Center (JAIC). The DoD is poised to spend nearly $1 billion on artificial intelligence in Fiscal Year 2020, while DARPA has announced a multiyear $2 billion program to deliver game-changing AI solutions.

Simultaneously, AI has become a hotter and hotter topic for major arms-producing countries in Europe. Recent official statements on defense innovation, e.g. the French MoD's strategic innovation policy paper³, have increasingly brought the criticality of AI for defense matters to the forefront. Therefore, it is not surprising that AI was a major topic on the agenda of the informal meeting of EU defense ministers in August 2019. With the objective to forge new ties between the defense sector and the private tech industry in this field, these ministers met with members of the Global Tech Panel⁴, consisting of tech industry leaders, investors and civil society representatives.

As the interest of the defense community in AI applications is (once again) very strong all over the world, this Note aims at assessing the state of the art of AI, its effectiveness and growth potential, as well as its possible impacts on the way of war.

1. In the series of "Terminator" movies, Skynet is an artificial neural network-based conscious group mind and artificial general intelligence.
2. A female humanoid robot with artificial intelligence featuring in the 2014 British psychological science-fiction movie "Ex Machina", which is centred on the ability to pass the Turing test (but who is testing whom?).
3. Imaginer au-delà, Document d'orientation de l'innovation de défense [Looking Beyond, Defence Innovation Policy Paper]. Paris, French Ministry of Armed Forces, July 2019.
4. https://eeas.europa.eu/topics/global-tech-panel_en


Artificial Intelligence as a general-purpose technology

Since artificial intelligence emerged in the 1950s, innovators and researchers have filed applications for 340,000 AI-related inventions and published over 1.6 million scientific publications. The pace has significantly increased in recent years and resulted in major breakthroughs: over half of the identified AI-related inventions have been published since 2013 (WIPO 2019: 13). This surge certainly reveals an in-depth transformation, in which AI could become a game-changer throughout the economic world and society. For instance, the ratio of scientific papers to inventions has decreased from 8:1 in 2010 to 3:1 in 2016. This trend indicates a shift from theoretical research to the use of AI technologies in operational products and services.

Additionally, AI is not just another new technology. Its impacts go far beyond a given product or sector. Notwithstanding its intrinsic performances, AI has the potential to transform many activities and products in depth. Kevin Kelly, the founder of Wired magazine and a renowned futurologist, clearly sums up how we need to apprehend its potential: "Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize." (Kelly 2014)

Just as the technologies of the first and second industrial revolutions allowed the creation of machines replacing human physical labour, and as electricity connected the world, AI has the potential to provide capacities to replace human cognitive activities – at least for certain tasks. Scharre and Horowitz (2018: 3) note: "The integration of AI technologies across human society could also spark a process of cognitization analogous to the changes wrought by industrialization."

In the field of defense, Wunische (2018) proposed a similar assessment of its comprehensive integration into military capabilities: "AI is not a wholly revolutionary [technology] to be applied to the military domain, and it is merely the next logical step in the digitization and mechanization of the modern battlefield." Such pervasiveness constitutes a distinctive feature of AI and sets it apart from many technologies. Contrary to many defense-related technologies that have characterised arms production in the past, AI is not defense-specific. It presents two specific features:

- AI is not a unified, clearly identifiable technology (like nuclear fission), but a methodology to design algorithms for various purposes, based on various knowledge components.
- AI development is not limited to defense systems (contrary to stealth), even in the short run (like semiconductors or composite materials), but is applicable to inputs from all possible sectors and could provide benefit to almost all of them.

This constitutes a major difference when considering path-breaking defense technologies, which, at least for a certain time after having emerged, were limited to defense or to high-end applications close to defense, like space and high-performance computing (Bellais 1999). In fact, AI can be considered as a general-purpose technology (GPT), which is defined as being able to constitute an influential driver of long-run technological progress by representing "the invention of a method of invention", as Griliches explained in his seminal 1957 paper⁵. If AI is not fully a GPT for the time being, it is highly possible that it becomes one in the foreseeable future.

What does GPT stand for?

From an economic perspective, GPTs constitute core inventions that have the potential to significantly enhance productivity, performances or quality across a wide number of fields or sectors.

5. Griliches, Z. 1957. Hybrid Corn: An Exploration in the Economics of Technological Change. Econometrica 25(4), 501-522.


For instance, the invention of the combustion motor brought about enormous technological and organizational change across sectors as diverse as manufacturing, agriculture, retail, and residential construction (changes which go well beyond purely technical dimensions). GPTs usually meet three criteria that distinguish them from other innovations (Cockburn et al. 2017: 5):

- They have pervasive application across many sectors;
- They spawn further innovation in application sectors;
- They themselves are rapidly improving.

Thus, if we consider AI as a GPT, we can expect that many sectors will benefit from AI applications, especially since the emergence of deep learning thanks to multi-layered neural networks.

Defense stake

On the positive side, defense can leverage broader and massive investments from commercial and civilian stakeholders to mature this technology and nurture its own needs. This is especially true regarding broad-spectrum AI, where a defense investment would duplicate similar R&D efforts already engaged throughout the commercial world. Defense spending should rather focus on specific needs and applications, in particular at both ends of the R&D spectrum:

- At the lowest technology readiness levels (TRLs), exploratory research like DARPA's can help crack issues and topics for which commercial companies have neither incentives nor interest to invest, or do not possess the appropriate level of effort to deliver the expected advancements in a given timeframe.
- At the highest TRLs, defense can support the adaptation of component knowledge previously funded and matured by commercial companies for specific military needs.

On the negative side, it is unlikely that arms-producing countries would be able to keep AI from proliferating across sectors and, what is more challenging, across countries. Hence a strong concern regarding the ability of countries to invest in and master AI technology in order to keep pace with potential enemies. The US Department of Commerce insisted on such dual nature of AI through its request for comments in November 2018 regarding the emerging technologies that are essential to US national security. Some observers even perceive an "AI arms race" between China and the United States that goes far beyond defense topics.

In addition, AI could be considered as a GPT in the field of defense too. AI will not become a defense capability per se, but will act as an enabler for developing several capabilities. AI as a GPT can also improve the systemic effectiveness of several existing capabilities as well as the whole defense architecture, notably by reducing the "fog of war" through better intelligence, coordination and information (what the Chinese call "intelligentized war") and by increasing the operational added value of assets and soldiers.

A multi-level "game changer" for defense?

The previously underlined characteristics of AI are the reason why major military powers expect that AI is likely to change the balance of power in their favour. As Stuart Russell⁶ notes, "technologies have reached a point at which the deployment of such systems is — practically if not legally — feasible within years, not decades. The stakes are high: LAWS⁷ have been described as the third revolution in warfare, after gunpowder and nuclear arms." (Russell 2015: 415) The main drivers for developing AI-based defense applications are strategic, operational, and budgetary.

6. Professor of Computer Science, University of California, Berkeley.
7. Lethal autonomous weapon systems.


Strategic dimensions

The United States places AI at the highest strategic priority for its defense transformation (notably since the launch of the Third Offset Strategy a few years ago). Leveraging AI aims at avoiding a level playing field with contesting military powers, since China and Russia are catching up with regard to major defense capabilities.

When he was Deputy Secretary of Defense, Bob Work regularly underlined the critical role of AI, considering it as part of both the solution and the issue. The emergence of defense AI could be identified as another "Sputnik moment" in international relations.

A year ago, Eric Schmidt⁸ summarised the American assessment of AI's potential: "AI has the power to affect every corner of DoD, from personnel and logistics to acquisition and multi-domain operations; and to create and sustain the asymmetric advantage required to outpace our adversaries. In the long run, AI will profoundly affect military strategy in the 21st century."⁹ Trying to maintain a significant military edge is the reason why the United States massively invests in AI.

On the opposite side, China, Russia and other countries expect to overcome the obvious military superiority of the United States, in both quantitative and qualitative dimensions, by leveraging AI to bridge the gap, to "offset" or even to undermine the current military advantages of the United States (and its allies). AI is perceived as a game-changer that could turn international relations upside down, requiring a sustained defense effort in this field from all major players, including Europeans, to maintain global stability (Bellais 2019).

The Chinese People's Liberation Army aims at achieving an advantage in the course of the ongoing Revolution in Military Affairs. Among emerging technologies, AI appears as a critical dimension. Chinese military strategists anticipate a transformation in the form and character of warfare and conflicts, since they expect an evolution from today's "informatized" (信息化) warfare to tomorrow's "intelligentized" (智能化) warfare (Kania 2019b). AI is particularly identified as a means to leapfrog current American capabilities in order to invert the balance of power with the United States (Allen 2019).

Indeed, major military powers consider AI as a means to dramatically increase the potential of their existing defense capabilities. They could thus avoid entering into a technology-based arms race with the United States, which could lead them to military and economic exhaustion (and to the fate of the USSR). Indeed, AI can improve not only the performances of individual capabilities but also, and above all, the effectiveness of armed forces thanks to systemic effects on military operations.

Operational dimensions

Military planners believe that AI and the resulting greater autonomy of systems could enable defense systems to achieve greater speed, accuracy, persistence, reach and coordination on the battlefield. For instance, former Deputy Secretary of Defense Bob Work noted: "Learning machines (…) literally will operate at the speed of light. So when you are operating against a cyber-attack or an electronic warfare attack, or attacks against your space architecture, or missiles that are coming screaming in at you at Mach 6, you're going to have a learning machine that helps you solve that problem right away."¹⁰

8. Former chairman of Google and Alphabet, Eric Schmidt was appointed by Secretary of Defense Ashton Carter as chairman of the DoD Innovation Advisory Board in 2016.
9. Statement of Dr. Eric Schmidt, House Armed Services Committee, U.S. House of Representatives, April 17th, 2018, https://docs.house.gov/meetings/AS/AS00/20180417/108132/HHRG-115-AS00-Wstate-SchmidtE-20180417.pdf
10. Deputy Secretary of Defense Bob Work's speech at the Reagan Defense Forum: The Third Offset Strategy, Reagan Presidential Library, Simi Valley, California, November 7, 2015 (as delivered), https://www.defense.gov/Newsroom/Speeches/Speech/Article/628246/reagan-defense-forum-the-third-offset-strategy/


AI is likely to improve intelligence by processing large batches of information through big data in order to speed up threat assessment (thus reducing the "fog of war") and to reshuffle in-theatre resources to optimise the effectiveness of operations.

AI can help develop autonomous systems that overcome the physiological and psychological limits of human beings (stress, tiredness, low concentration, etc.). "Automation could improve safety and reliability in nuclear operations in some cases," notes Horowitz¹¹. "Simple and repetitive tasks where human fatigue, anger and distraction could interfere are ripe for the use of algorithms." (Boulanin 2019: 83)

Additionally, AI can improve to some extent the interpretation of massive flows of data (imagery, sonar, signal intercepts, videos…). The quantity of images and data is well beyond the capacity of existing analyst communities (Feickert et al. 2018). This lack of resources creates huge backlogs of data, which cannot be processed in due time to provide the expected level of support to boots on the ground. Due to the inability to recruit enough analysts and to the cost of dealing with such quantities, most of the collected data can never be reviewed.

Moreover, due to the heterogeneity of data streaming in from several kinds of platforms and sensors, it is quite often impossible to exploit such data directly. Oftentimes, it must be adapted, integrated or fused to ensure accurate, comprehensive situational awareness. These operations still rely heavily on human operators, inducing delays, mistakes, labour shortages and high costs.

AI is likely to improve the interoperability between soldiers and manned systems, on one side, and unmanned systems, on the other. In the medium and, more certainly, the long run, man-machine teaming will help concentrate soldiers on missions that human beings can best achieve.

Nevertheless, aside from the impressive performances of AI, we should take into consideration the fact that current AI is not able to perform all cognitive tasks; the most advanced ones still require a human operator. AI can support operations to alleviate the burden on human beings, not fully substitute for them.

Budgetary dimensions

AI can also improve the cost-effectiveness of operations and missions. Deploying AI throughout defense constitutes a means to relieve budget constraints without compromising the level of ambition.

In addition, AI can level the playing field even more for contesting powers, since they can expect to achieve a balance of power despite lower military expenditures. Per se, this impact represents a true game-changer, since countries leveraging AI would no longer need to engage in an arms race from a budgetary perspective.

This is particularly useful for countries like China or Russia, which cannot invest as much as the Pentagon in their defense. Rather than dramatically increasing their military spending, they can leverage AI to magnify their defense resources. For instance, it can be less expensive to develop an AI-based anti-access/area-denial air defense than to try to develop an aircraft that can match the performances of F-22s and F-35s (that is, falling into the "Top Gun" syndrome).

Autonomous and automated systems are believed to provide opportunities for reducing the operating costs of weapon systems and the need for soldiers, in particular by decreasing the ratio of deployed soldiers per square kilometre. Thanks to man-machine teaming, AI can favour a more efficient use of the workforce and reduce the likelihood of casualties. It could help armed forces cover larger areas with a limited number of soldiers and platforms.
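The adapt-integrate-fuse step mentioned above can be made concrete with a toy numerical sketch (our illustration of the general principle, not an algorithm from any fielded system): two heterogeneous sensors estimate the same quantity, and an inverse-variance weighted average combines them into a single, more precise estimate.

```python
# Toy sketch of multi-sensor data fusion (illustrative only).
# Two heterogeneous sensors estimate the same target range; fusing them by
# inverse-variance weighting yields an estimate more precise than either input.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Inverse-variance weighted fusion of two independent estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always smaller than var_a and var_b
    return fused, fused_var

# A noisy radar track and a more precise electro-optical track (range in km):
fused, fused_var = fuse(10.4, 0.09, 10.1, 0.01)
print(round(fused, 2))   # 10.13 -> pulled towards the more reliable sensor
print(fused_var < 0.01)  # True  -> fusion reduces overall uncertainty
```

The same weighting logic underpins classical estimators such as the Kalman filter; the point here is only that exploiting heterogeneous sensor inputs requires putting them on a common, calibrated footing first, which is precisely the step that still relies heavily on human operators.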

11. Horowitz, M. Artificial intelligence and nuclear stability. In: Boulanin (ed. 2019), 79-83.


Norbert Wiener (1894-1964)

Founding father of the field of cybernetics, a field of research at the root of issues related to Artificial Intelligence.


Operational features of AI systems

There is a biased understanding of what AI systems can and cannot do. We must depart from phantasms as well as prejudices that prevent many people from really understanding what we can expect from AI. Its performances must be assessed with regard to the complexity of tasks and the heterogeneity of the environment in which operations take place.

Up to now, most AI systems are specialised, meaning they are dedicated to one or several tasks that have been clearly defined ex ante. They perform very well at specific tasks in a given, homogeneous environment. These specialised AI systems must be distinguished from artificial general intelligence (AGI). AGI would constitute the ground for fully autonomous systems, but it does not exist in operational form today (and may never become a reality).

Specialised AI

Given the state of the art, specialised AI (also referred to as "weak AI") is the only kind of operational AI technology today. Specialised AI can be defined as Perception AI, which is based on a probabilistic, quasi-mechanical approach. A system with specialised AI can manage complex decisions based on reasoning and previously analysed sets of data, but it needs to be trained and pre-programmed for specific applications.

In "The Organization of Behavior" (1949), Donald Hebb suggested that learning specifically involves strengthening certain patterns of neural activity by increasing the probability (weight) of induced neuron firing between the associated connections.

If task-oriented AI systems can deliver impressive results, the latter depend heavily on the quality of data and the homogeneity of the environment. Biased or even simply too heterogeneous data can mislead AI processing. Unexpected or impure environments can create edge effects that dramatically diminish the effectiveness of otherwise performant AI systems.

Such systems cannot be considered comparable to the human brain, since they have no capability to "think" beyond the scope of their programming. Marsh (2019) puts into relief that our understanding of AI is marred and that there are too many anthropomorphic representations that mislead people on the true abilities of such algorithms. AI systems do not "learn" in a human sense. "Intelligence measures a system's ability to determine the best course of action to achieve its goals in a wide range of environments." (Scharre and Horowitz 2018: 4)

The human brain continuously triages information, that is, "natural data" from its ever-changing, noisy and unpredictable surroundings, in real time, without any assistance. Current AI is far from presenting such an ability, since it can only evolve within a predefined representation of the world (Cardon et al. 2018).

Rather, as John Borrie¹² underlines, AI systems are conceived in order to recursively improve their ability to successfully complete pattern recognition or matching tasks based on sets of data (which usually need to be carefully curated by human beings first). Such AI applications deal with issues when they are manageable through a tree-structure analysis (response function). In other words, specialised AI succeeds when it is subjected to a vectorised world. In Yann LeCun's own wording, the ambition of the designers of connectionist machines consists in "placing the world into a vector (world2vec)" (Cardon et al. 2018).

Today, specialised AI appears very optimal for carrying out a given task in a given context. Nevertheless, its effectiveness requires a structured environment, homogeneous data, a cooperative context… many criteria that are hardly met in most military operations.
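Hebb's principle quoted above — connections strengthen when the neurons they link are active together — can be sketched in a few lines (a deliberately simplified illustration; the learning rate and activation values are our own assumptions, not Hebb's):

```python
# Minimal sketch of Hebbian learning: "neurons that fire together wire together".
# The weight w of a connection grows in proportion to the correlated activity
# of the pre-synaptic neuron (x) and the post-synaptic neuron (y).

def hebbian_update(w: float, x: float, y: float, eta: float = 0.1) -> float:
    """Return the updated connection weight: w + eta * x * y."""
    return w + eta * x * y

w = 0.0
for _ in range(5):                   # repeated correlated firing (x = y = 1)...
    w = hebbian_update(w, x=1.0, y=1.0)
print(round(w, 2))                   # 0.5 -> the connection has strengthened
print(hebbian_update(w, x=1.0, y=0.0) == w)  # True -> no correlation, no change
```

This correlation-driven weight adjustment is the ancestor of the training rules used in today's connectionist (neural network) systems, which is why the quality and curation of the data driving those correlations matter so much.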

12. Borrie, J. Cold war lessons for automation in nuclear weapon systems. In: Boulanin (ed. 2019), 41-52.


AI systems need to be more adaptive in order to operate safely and reliably in complex, dynamic and adversarial environments, so as to become fully supportive of military operations. New validation and verification procedures must be developed for systems that are adaptive or capable of learning.

Artificial general intelligence

In the AI community, the concept of artificial general intelligence (AGI), or "strong AI", refers to a general-purpose AI that would be as smart as, or even smarter than, human beings. An AGI-based system would be able to understand the world by itself through a comprehensive representation, not defined ex ante but created by the AI itself. Thus, it could develop its own meaning for the environment it encounters and its own ways to achieve its objectives (as opposed to delimited tasks). What defines AGI is learning and reasoning abilities across multiple domains, with broad autonomy (Kuncic 2019).

AGI represents the theoretical objective of achieving an effective and efficient symbolic AI, which relies on a truly cognitive approach coming closer to the core features of human intelligence. In 1957, Allen Newell and Herbert Simon summed up this top-down approach in what they called "the physical symbol system hypothesis". This hypothesis states that processing structures of symbols is sufficient, in principle, to produce artificial intelligence in a digital computer and that, moreover, human intelligence is the result of the same type of symbolic manipulations.

If such ambitious (and still theoretical) objectives can be achieved, AGI would overcome the identified limits of Perception AI, especially for systems operating in complex, heterogeneous and changing environments – which characterises the essence of military missions and operations.

However, for the time being, AGI remains in the realm of science fiction or of theoretical possibility. Even though it was the ideal of the first-wave AI designers, this approach encountered many theoretical, technical and operational stalemates and resulted in what is often called an "AI winter". AI research was able to escape from this by abandoning these ambitious objectives and by exploring non-linear solutions through what some researchers¹³ called the "deep learning conspiracy" (Cardon et al. 2018).

A real substitute for human intelligence does not exist today, and we are far away from it. We do not even know whether it is possible, especially if the objective consists in creating a "conscious system". As the neurosurgeon Henry Marsh explains, the complexity of the brain resides in the way in which the nerve cells are connected. "Nerve cells are not, in fact, simple electrical switches (…) Neuronal networks are dynamic, they are constantly changing. They weaken or strengthen in accordance with how much traffic is passing through them (…) Furthermore, neurons come in a wide variety of shapes and sizes (…) Neural networks only resemble brain networks in a very loose way." (Marsh 2019: 1-2)

Machine intelligence

The Oxford mathematician Marcus du Sautoy takes a different perspective. He argues that AI can compete with human intelligence as long as we consider that being creative consists of "super-smart synthesis" rather than a flash of inspiration or genius. Indeed, much of creativity takes place in a continuum because it is exploratory, combinatorial and transformational (even though the latter can be erratic and thus unpredictable through AI).

While this approach does not lead to an AI system comparable to the human brain, this perspective holds out the prospect of very advanced AI technology. It is important to take into consideration the intrinsic limits of AI. Bob Work prefers to qualify these algorithms as "machine intelligence" rather than artificial intelligence, which can be a misleading terminology. This is the reason why synthetic intelligence represents a possible alternative approach to trying to improve deep learning techniques.

13. Yann LeCun, Yoshua Bengio and Geoffrey Hinton (Nature 521, 436-444, May 28th, 2015).


As Kuncic (2019) notes: "Decision-making is a distinguishing point of difference between AI and synthetic intelligence. Because AI is [algorithm-]based, decisions are made in a deterministic way based on hard-wired sequential instructions. This is crucial for reliable number-crunching computation. In contrast, decisions made by synthetic intelligence are less predictable, reflecting the non-digital nature of the underlying network. For this reason, synthetic intelligence could never replace AI. Indeed, even the human brain is incapable of replicating AI-driven computation. But what synthetic intelligence could deliver is machines whose responses to unpredictable environmental cues are both reason-based and flexible — like humans".

One should keep in mind that AI research considers that hybridisation between connectionist and symbolic AIs could deliver the most promising approach for developing advanced operational solutions. As Andrew Ng noted, "we still have a long way to go in the field. For example, a toddler can usually recognize a cat after just one encounter, but a computer still needs more than one example to learn. We need to find ways to train computers on training datasets as small as 100, or even 10 […] Effective 'unsupervised learning' – learning without labelled data – remains a holy grail of AI." (WIPO 2019: 8) AI's potential remains important, but developing it is no easy task, and we should remain aware of its possible limits.

AI and effective autonomous systems

AI is intrinsically linked to autonomous systems, nurturing prejudices and phantasms. However, when considering the current maturity of AI and its foreseeable evolutions, what can be delivered is far from what science fiction describes in movies like "Terminator" or "Screamers"¹⁴.

Autonomy can be defined as the ability of a machine to execute a task, or tasks, without human input, using interactions of computer programming with its environment. Once activated, an autonomous system is expected to perform some tasks or functions on its own. Even though AI can help increase the autonomy of defense systems, technology is currently insufficient to achieve fully autonomous systems.

Levels of autonomy

Paul Scharre¹⁵ divides degrees of autonomy into three categories:

- The human-machine C2 relationship;
- The sophistication of the system's decision-making process;
- The types of decisions or functions being made autonomous.

AI systems that require human interaction at some stage to execute a task can be referred to as semi-autonomous. This can be associated with automatic tasks, e.g. mobility (homing, following, navigation, take-off and landing), intelligence, interoperability or health management. This dimension reflects the development of autonomy for specific functions of weapon systems, which are delegated to an AI application to relieve the human operator, rather than the development of autonomous systems as a whole.

AI systems that can operate independently but under the oversight of a human being, who can intervene if something goes wrong (e.g. a malfunction or system failure) or who keeps ultimate control over the delivery of force, correspond to automated tasks.

Automatic and automated tasks can discharge human beings from operating specific dimensions of the system and let them focus their cognitive abilities on more essential or appropriate tasks.

14 "Screamers" is a 1995 dystopian science-fiction movie, based on Philip K. Dick's 1953 short story "Second Variety", where humans face AI-based, self-evolving robots.
15 Center for a New American Security.


NOTE DE RECHERCHE SEPTEMBER 2019

Systems that operate completely on their own and where humans are not in a position to intervene during a mission are considered as fully autonomous. Here, AI helps to carry out a mission rather than a task.

The place of human operators

This classification regarding the degree of autonomy shows that the level of implication of human beings in the implementation of tasks or the functioning of systems can vary. A standard classification of human interaction and supervision is defined by three categories:

Human-in-the-loop systems can select and deliver action or force only with a human command.

Human-on-the-loop systems can select and deliver action or force under the oversight of a human operator, who can override the system's actions.

Human-out-of-the-loop systems are capable of selecting targets and delivering force without any human input or interaction.

Even though a fully autonomous system may be possible, such a conception is not necessarily compatible with military doctrine and how armed forces want to operate. The concept of "sliding autonomy" is sometimes used to refer to systems able to switch back and forth between automated and autonomous functioning, depending on the complexity of the mission, external operating environments and legal/regulatory frames.

From this classification, it appears that some functions in defense systems can become autonomous without presenting significant ethical, legal, operational or strategic risks (e.g. navigation or countering foes' defense systems), while others raise greater concerns and question international regulation (e.g. targeting, objective management or determination). One difficulty regarding weapon systems can result from the fact that different levels of autonomy are likely to be implemented in the same capability, at least with regard to the first two levels.
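The three categories of human supervision can be summarised in a minimal decision sketch. This is purely illustrative: the names (HumanRole, may_deliver_force) are invented for this note, and a real command-and-control chain is obviously far richer than a single boolean gate.

```python
from enum import Enum, auto

class HumanRole(Enum):
    IN_THE_LOOP = auto()      # human must authorise each engagement
    ON_THE_LOOP = auto()      # system acts unless the human overrides
    OUT_OF_THE_LOOP = auto()  # no human input or interaction

def may_deliver_force(role: HumanRole,
                      human_authorised: bool,
                      human_override: bool) -> bool:
    """Return True if the system is allowed to deliver force."""
    if role is HumanRole.IN_THE_LOOP:
        return human_authorised      # positive human command required
    if role is HumanRole.ON_THE_LOOP:
        return not human_override    # human can only veto
    return True                      # out of the loop: fully autonomous

# A human-in-the-loop system with no authorisation must hold fire,
# while a human-on-the-loop system acts unless it is vetoed.
assert may_deliver_force(HumanRole.IN_THE_LOOP, False, False) is False
assert may_deliver_force(HumanRole.ON_THE_LOOP, False, False) is True
```

The sketch also makes the paper's point about mixed levels concrete: nothing prevents one platform from running navigation out of the loop while keeping targeting in the loop.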

Stratus Augmented Reality Interface for soldier – MBDA

© MBDA Master Image



On-board vs off-board AI systems?

While the Chinese speak about "algorithm wars", one major challenge is to combine AI software and dedicated hardware to improve its effectiveness. This software-hardware combination appears essential to deliver the highest performances of AI. Thus, one question pops up: Where should the hardware layer be placed? Embedded into a deployed capability or at a node inside the defense architecture? This question is far from trivial, but it is neglected most of the time.

Embedded AI

The embeddedness of AI raises the question of related hardware. Indeed, processing capacities are essential to deliver expected performances from AI applications. This is the reason why China tries to develop its domestic microprocessor industrial base, and why the United States wants to restrict the export of AI-specialised chips. The criticality of this dimension is well understood by Chinese decision-makers, since the focus of China's national AI strategy now includes the build-up of a comprehensive domestic semiconductor industry (Allen 2019: 16-20).

On-board AI capacities lead to challenges in terms of design, energy, system architecture… and significant costs, even though operational benefits can represent a game-changer. The design of the actuators and end-effectors will affect the hardiness, endurance and cost of the systems. This option can favour more autonomous systems, able to pursue their missions despite weakened or disrupted communications.

Due to the required computing capacities and network-based effectiveness, there is no obvious solution as to where to position AI capabilities in the military architecture. If NGOs fear the emergence of "Terminator-like" systems, this would require embedding sophisticated AI capacities inside a given platform. Nevertheless, this approach would need very advanced AI systems that do not exist today and that are quite complex to design.

Network-based AI

An alternative solution consists in developing off-board AI applications. A network-based AI supporting distributed capabilities can deliver higher effectiveness and eventually efficiency, since it creates the opportunity to gather the critical mass of resources (processing power, comprehensive capacities, data gathering and fusion, etc.) with reduced constraints in terms of space or energy for a given capability. In this perspective, the Chinese expect to benefit greatly from quantum computing to exploit the potential of networked AI and to achieve higher speed and accuracy in managing military operations.

A network-based AI approach could also reduce the risk that biased data trick a given platform, due to limited data access or the lack of cross-checking databases, and thereby prevent misinterpretation. Such more powerful AI solutions are nevertheless highly dependent on resilient, dense and secured communication networks to deliver their output to in-theatre systems. Any network-based AI requires relying on advanced [encryption] and cybersecurity (the more complex and interconnected systems become, the more vulnerabilities they present to [cyberattacks]) as well as a high-performing communication architecture.

These different features question the conception of defense systems and organisation with regard to desired outcomes. One can expect that an effective approach would consist in combining embedded and network-based layers of AI. In addition, an effective use of AI requires access to a strong telecommunications infrastructure in order to manage huge flows of data. This is the reason why the mastering of 5G capacities has become a strategic issue. 5G networks are likely to constitute a critical resource for achieving the full potential of defense-related AI.

One reason why distributed AI could prevail over on-board AI results from the huge quantity of energy that AI needs. As Julia (2019) remarks, DeepMind consumes more than 440,000 watts to play the game of go when a human brain only needs about 20 watts. Even though one can expect to develop less energy-intensive microprocessors than today's GPUs16, remote capacities linked to deployed platforms could achieve a good balance between the self-governance of deployed capabilities and several technical constraints (unit cost, size, stealth, embedded energy, etc.).
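The combination of embedded and network-based layers of AI can be sketched as a simple fallback pattern: prefer the more powerful off-board AI when the datalink is up, and degrade gracefully to the smaller on-board model when communications are disrupted. All the names below (classify, remote_model, onboard_model) are invented for illustration, not taken from any real system.

```python
from typing import Callable, Optional

def classify(sensor_data: bytes,
             remote_model: Callable[[bytes], Optional[str]],
             onboard_model: Callable[[bytes], str]) -> str:
    """Query the off-board (network-based) AI first; if the link is
    jammed or the remote answer is unusable, fall back to the embedded
    model so the platform can pursue its mission autonomously."""
    try:
        label = remote_model(sensor_data)   # network round-trip, may fail
        if label is not None:
            return label
    except ConnectionError:
        pass                                # communications disrupted
    return onboard_model(sensor_data)       # autonomous degraded mode

# Stubs standing in for real inference services:
def jammed_remote(_: bytes) -> Optional[str]:
    raise ConnectionError("datalink disrupted")

def working_remote(_: bytes) -> Optional[str]:
    return "remote-label"

def onboard(_: bytes) -> str:
    return "onboard-label"

print(classify(b"\x01", working_remote, onboard))  # prints remote-label
print(classify(b"\x01", jammed_remote, onboard))   # prints onboard-label
```

The design choice mirrors the paper's argument: the off-board layer brings the critical mass of computing resources, while the embedded layer preserves mission continuity when the network is contested.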



Resilient defense architecture

One major stake consists in increasing the resilience of AI systems against adversarial capabilities, in particular when the latter are based on AI too.

Chinese strategists have fully understood that this is a key feature for future capabilities. For instance, Li Minghai (researcher at the PLA's National Defense University) underlines that China expects to achieve military superiority by switching from systems confrontation to algorithms competition. This approach requires achieving superiority in algorithms: "In future warfare, the force that enjoys algorithm superiority will be able to rapidly and accurately predict the development of the battlefield situation, thus coming up with the best combat-fighting methods and achieving the war objective of 'prevailing before battle starts'." (Gertz 2019)

Freedberg (2019) describes a new emerging field known as "adversarial AI". The objective consists in training AI to confront adversarial forces through dynamic virtual battle experiments. For instance, generative adversarial networks place two machine-learning systems together in a virtual cage, each pushing the other to evolve more sophisticated algorithms over thousands of rounds. Generative adversarial networks confront one side with fake data that the other system constantly generates, so that it can learn to detect counterfeits. What ensues is a kind of Darwinian contest, a survival of the fittest in which duelling AIs replicate millions of years of evolution on fast forward.
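The adversarial duel described above can be reduced to a toy numeric sketch. Everything here is invented for illustration (the values, the update rules, the player names): a "generator" produces a counterfeit value, a "detector" keeps moving its acceptance threshold to tell real from fake, and each round pushes the other to improve until counterfeits pass for real.

```python
# Toy adversarial duel, loosely in the spirit of a GAN (illustrative only).
REAL_SIGNAL = 10.0   # stand-in for "real data" the detector must recognise
gen = 0.0            # generator's counterfeit value, starts far from reality
thr = 5.0            # detector accepts values >= thr as "real"
LR = 0.1             # learning rate for both players

for _ in range(500):
    fake = gen
    # Detector's move: place the threshold midway between real and fake.
    thr += LR * ((REAL_SIGNAL + fake) / 2 - thr)
    # Generator's move: if the counterfeit is rejected, creep toward the bar.
    if fake < thr:
        gen += LR * (thr - fake)

# After enough rounds both converge: counterfeits pass for real.
print(round(gen, 2), round(thr, 2))  # both values end up close to 10.0
```

Real generative adversarial networks replace these scalar updates with gradient steps on two neural networks, but the co-evolutionary dynamic is the same.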

Such virtual tournaments allow identifying weaknesses and optimising against them. This constitutes the only way to keep pace with potential enemies in the AI arms race. For instance, Chinese researchers Shen Shoulin and Zhang Guoning clearly explain that China wants to acquire capacities to "take the cognitive initiative and damage or interfere with the [cognition] of the enemy based on the speed and quality of the cognitive confrontation" (Eastwood 2019).

The emergence of adversarial AI could favour a combination of different layers of AI, both on board and off board, to increase the resilience of capabilities and the whole defense architecture.

Trusting AI-based systems, a technical challenge

AI-based systems are elaborated around a theoretical view of the world that is often a simplification of physical reality in terms of mathematical models17. An autonomous system always has a model of its universe, or design space, which mathematically describes the system's relationship to its environment. The latter relies on the interpretation of signals from its sensors. Therefore, a system is intended to act effectively and efficiently only within its design space. Its behaviour outside of the initially defined design space is likely to become unpredictable, since the system has no description of its environment to which it can refer and on which it can rely.

This is the reason why AI differs from human intelligence, and such a conceptual approach leads to two main issues. First, the mathematical model relies on estimated parameters, which may vary with time, be insufficiently precise or provide incomplete information, notably because the operational domain is only partially observable. Second, AI is very likely to access incomplete information. Thus, autonomous systems face difficulty dealing with real-life environments.

16 Graphics processing units.
17 The mathematical model is a description of the interaction between the system and its universe.



Issues of mathematical modelling

As Dimitri Scheftelowitsch18 explains, "even given perfect [sensors], perfect situation awareness, perfect modelling and perfect computational decision-making capabilities, the actions of an autonomous system are defined by its goal statement." (Boulanin 2019: 29) The effective functioning of a given AI system requires that such a goal can be identified and formulated in machine-readable terms.

These issues linked to the mathematical modelling and implementation of AI are even more problematic regarding defense capabilities because of specific dimensions:
- It is difficult to perfectly model all possible environments in which a given capability could be deployed (no pre-recorded data) and to predict their evolutions once the battle has started (difficult to perform real-life measurements).
- Adversaries are likely to introduce biases in AI analysis by providing erroneous data in order to trick its algorithm (even if the environment was a priori predictable).
- They will try to disrupt communications so as to prevent the system from confirming its own assessment by benchmarking it against other friendly defense systems.
- Goals are hard to define ex ante and are likely to change in the course of operations because of interactions with (friendly and adversarial) platforms inside the theatre and because of the dynamics of battle.

Learning and black boxes

Thus, using AI for autonomous systems leads to promoting self-learning, which has become possible thanks to neural networks and deep learning and has contributed to greatly improved performances of AI over recent years. However, these systems tend to operate as black boxes.

While the algorithm is known for such AI systems, the process through which inputs result in an outcome is not necessarily observable or explainable on mathematical grounds. As Vincent Boulanin19 notes: "The lack of transparency and explainability of these systems in turn creates a fundamental problem of predictability. A machine learning system might fail in ways that were unthinkable to humans because the engineers do not have a full understanding of its inner working. In the context of weapon systems (…) it makes more complex the task of identifying the source of a problem and attributing responsibility when something goes wrong." (Boulanin 2019: 20)

This feature leads to many questions regarding how much trust we can have in such AI applications and how predictable their behaviour can be.

This limit of today's AI applications led DARPA to launch dedicated programmes that would allow the extensive use of AI applications for defense needs: Trustworthy AI and Explainable AI. In April 2019, the independent high-level expert group on AI set up by the European Commission published a report entitled "Ethics Guidelines for Trustworthy AI" as a starting point for the discussion.

Explainability and observability

If one considers that the AI systems for autonomous cars face major challenges in terms of validation and verification, it appears even tougher for defense capacities. Autonomous cars would travel within a quite predictable space (thanks to 3D mapping) and with "engagement rules" that could be modelled up to a certain point. This is not that easy when dealing with defense capabilities. In addition, armed forces have very high standards for validation and verification procedures, since safety and reliability cannot fail when soldiers are actually fighting.

18 Scheftelowitsch, D. The state of artificial intelligence: An engineer's perspective on autonomous systems. In: Boulanin (ed. 2019), 26-31.
19 Boulanin, V. Artificial intelligence: A primer. In: Boulanin (ed. 2019), 13-25.



Specialised AI is very effective at managing a given task. It can analyse through a probabilistic approach, implementing routines. However, its behaviour cannot be fully understood, since results sometimes rely on parameters differing from what a human being would use, usually uncovered once the AI application delivers unexpected [results] – revealing ex post the limits of the system's trustworthiness.

The learning curve of an AI system is contingent on a given environment with quite homogeneous parameters. As Boulanin and Verbruggen (2017: 15) note: "A computer's lack of contextual understanding derives from the fact that it remains complex for engineers to represent in a model the abstract relationship between objects and people in the real world."

If the environment in which an AI system is deployed diverges from these ex ante criteria, its effectiveness and thus trustworthiness are bound to decrease rapidly. We can fear that enemy forces will try to trick or defeat it through erroneous data, inducing biases in the system's assessment, understanding and/or behaviour. These potential limits of current specialised AI systems would significantly reduce their benefit for armed forces.

Additionally, when a learning process is involved, one can wonder if it is necessary to freeze its evolution once it enters into service. Indeed, unless a system's learning process is stopped when being deployed, it is difficult to predict how it would behave, as it might learn or do things beyond what we can expect from it.

Thus, the validity of such systems is based on the guarantee that a system does what we expect from it, only what we expect, and all we expect.

Validation and verification

Nevertheless, observability should be distinguished from predictability and explicability. In a true black box, the algorithm is not observable. If it is observable, we can explain its functioning (on a mathematical basis). In fact, one can consider two different processes:
- Formal verification is achieved at the mathematical level, which is meant to cover all cases.
- Testing is carried out at the empirical level, which involves only a limited number of representative cases.

One should keep in mind that [reproducibility] is the ground for a large share of science. The ability to reproduce a process is key, and it is different from a mathematical demonstration. This perspective could open an alternative approach to validate some systems, but it requires reaching a critical mass of experimentations. Is it achievable in the field of defense?

The verification and validation process raises a difficult methodological issue, since there are not yet well-established methods to verify and test machine-learning systems. Even though deep learning can lead to a black box, patterns are not explainable per se but parameters are known. So one can consider that verifiability is possible, notably thanks to a sufficient number of experimentations (still to be defined). This leads to defining the appropriate process to experiment with a system enough to be able to trust it. However, this remains a very difficult research issue up to now.

While commercial industries can expect to implement such a verification and validation process (based on huge volumes, acceptability of failure, low-risk environments…), it appears more complex to do so in the field of defense, ceteris paribus. In defense, there is a limited recurrence of implementation (limited fleets, low rotation rate, limited deployments, etc.) and the acceptable rate of failure is much lower for any kind of capability (hence higher costs than for comparable civilian systems).

In a longer perspective, technological advancement could provide AI technology able to explain by itself the process and resulting decisions it delivers. Such self-assessment would constitute an alternative approach to observability. However, this seems to represent a long-run prospect today.
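The contrast between formal verification (covering all cases) and empirical testing (a limited sample of representative cases) can be made concrete on a deliberately tiny, invented example, where the input space is small enough to enumerate exhaustively, precisely the luxury that machine-learning systems deployed in open environments do not offer. The function and the safety property below are illustrative assumptions, not drawn from the paper.

```python
import itertools
import random

def sat_add(a: int, b: int) -> int:
    """Saturating 8-bit addition: the result is clamped at 255."""
    return min(a + b, 255)

def holds(a: int, b: int) -> bool:
    # Safety property: the output stays in range and never drops below
    # either input (for inputs already in [0, 255]).
    r = sat_add(a, b)
    return 0 <= r <= 255 and r >= max(a, b)

# Formal-style verification by exhaustion: every case of the design
# space is covered, so the property is proven over the whole domain.
assert all(holds(a, b) for a, b in itertools.product(range(256), repeat=2))

# Empirical testing: a limited number of representative cases, which
# builds confidence but proves nothing for inputs that were not tried.
random.seed(1)
sample = [(random.randrange(256), random.randrange(256)) for _ in range(100)]
assert all(holds(a, b) for a, b in sample)
```

For a deep network with high-dimensional, continuous inputs, the first approach is infeasible, which is exactly why the paper asks whether a sufficient critical mass of experimentations can substitute for it.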



AI systems must also be able to identify fake or erroneous information that aims at generating dysfunctional behaviours. This requires adaptive systems, which is far from the case today. This is the reason why in 2018 DARPA launched a $2bn investment programme called "AI Next" in order to reach a new level of advanced artificial intelligence. This programme expects to develop more adaptive reasoning and, hopefully, an ability of algorithms to determine subjective phenomena, as Valerie Browning, director of DARPA's Defense Sciences Office, recently explained in an interview with Nextgov20.

C2 – Command and Control – USA

© Inez Lawson

20 Corrigan, J. 2019. Inside DARPA's Ambitious 'AI Next' Program. March 10th. https://www.defenseone.com/technology/2019/03/inside-pentagons-big-plans-develop-trustworthy-artificial-intelligence/155427/


International regulations, a hindrance to AI deployment?

Ongoing discussions on regulating autonomous systems result from increasing normative pressures from civil society against the conception, production or use of autonomous weapon systems. We should keep in mind that the quest for international regulation regarding AI and autonomous systems does not come out of nowhere. Since the Geneva Conventions in 1949, an international law has emerged and been consolidated to supervise the features and use of weapons.

In particular, Article 36 of the 1977 Additional Protocol I to the Geneva Conventions requires States to conduct a legal review of all new weapons, means and methods of warfare in order to determine whether their employment is prohibited by international law.

On-going discussions on how to regulate LAWS

Since 2014, the governance of lethal autonomous weapon systems (LAWS) has been discussed internationally under the framework of the 1980 United Nations Convention on Certain Conventional Weapons (CCW), which regulates weapons that may be deemed to have an excessively injurious or indiscriminate effect. Representatives from about 80 countries have been meeting on lethal autonomous weapon systems.

In 2017, these meetings were upgraded from informal "Meetings of Experts" to a formal Group of Governmental Experts (GGE). The GGE invites experts from civil society to partake in the deliberations alongside members of national delegations.

Despite several years of debates, the GGE has not yet succeeded in producing any specific policy recommendation. A consensus has emerged on two points: appropriate levels of human control must be maintained over any LAWS; and LAWS must remain subject to International Humanitarian Law (IHL). However, "the mechanics of applying both terms remain contentious (e.g., does IHL categorically ban LAWS?), and the limited scope of agreement provides no basis for further action" (Liu and Moodie 2019: 1).

By November 2019, States parties have to decide how they will continue to discuss the issue:
- Keeping a Group of Governmental Experts (GGE) with a similar exploratory mandate;
- Entering into negotiations on a new treaty to regulate killer robots;
- Discontinuing discussions.

This intergovernmental process resulted at least partially from the "Campaign to Stop Killer Robots", which aims at achieving a U.N. treaty banning the development, production and use of fully autonomous lethal weapons. This coalition of NGOs tries to lobby the ongoing international negotiations, to influence national policies and to raise awareness among public opinion. They use tactics that had succeeded in achieving international treaties to ban land mines and cluster munitions (for the record, outside the United Nations framework).

Implementing International Humanitarian Law

IHL already includes a number of obligations that restrict the use of autonomous targeting capabilities. It also requires military command to maintain, in most circumstances, some form of human control or oversight over the weapon system's behaviour. This requirement is implicit, hence the call of NGOs for a new legal instrument. IHL puts limits on how an attack may be conducted through the principles of distinction, proportionality and precaution.

AI appears sensitive when it could be used to manage lethal tasks, notably identification, tracking, prioritization, selection of targets and, possibly, target engagement. Autonomous targeting may be lawful in some specific circumstances, e.g. against distinct material military targets, in an unpopulated, remote and predictable environment or in good weather conditions.

Even though these tasks are somehow lawful in a segregated space and against an uncluttered background, it appears crystal clear that most engagements do not meet these criteria.



Target identification can be difficult and theatres are messier and messier, with a high probability of misidentifying real targets.

AI-based systems can implement certain rules and principles of international law. They may distinguish between targets, if these are clearly identified as military ones, but they cannot act in a proportionate way. Even when such implementation is possible, this occurs only in a very crude manner for the time being. Fiott and Lindstrom (2018: 7) note: "Should AI-systems eventually act beyond the intended boundaries set by humans, the issue of accountability would emerge and conflict situations might deteriorate even further as a result."

Even before authorising or banning LAWS, the key stake consists in defining a lethal autonomous weapon system. Up to now, there are several competing definitions (Boulanin and Verbruggen 2017, Liu and Moodie 2019). For instance, the Vatican looks at "a weapon system able to identify, select and fire on a target without human intervention" (close to automated systems). Alternatively, France proposes a more restricted perimeter through three criteria: full autonomy, no human supervision at all, and the ability to self-adapt to the environment, to target and to fire with lethal effect (focusing on truly autonomous systems).

Depending on the selected definition, the scope of IHL and eventually of a possible ban can vary significantly. This is the reason why it is important to be involved in ongoing discussions, to avoid unwanted restrictions or bans that could include much more than solely autonomous defense systems.

Ethics and companies' involvement

As AI technologies advance, some academics and private companies might choose to "opt out" of working with DOD on AI applications that they might view as opposed to their own values (Feickert et al. 2018; Sayler 2019). This was already the case when several employees petitioned Google to exit DOD's Project Maven in May 2018. Actually, this decision was coherent with Google's decision to cancel existing government contracts for two companies it had acquired—Boston Dynamics and Schaft—and to prohibit future government work for DeepMind, an AI software start-up that Google acquired.

Nevertheless, not all companies share this approach. Simultaneously to Google's decision, several companies pledged to continue supporting DOD initiatives. Amazon CEO Jeff Bezos even underlined that "if big tech companies are going to turn their back on the U.S. Department of Defense, this country is going to be in trouble", defending government contracts amid a wave of employee protests and adding: "We are going to continue to support the DOD and I think we should." (Tiku 2018)

In addition, while armed forces expect that ethics will prevail (e.g., as defined through Weinbaum's grounds for an AI code of conduct), one cannot exclude that, facing a possible defeat and overwhelmed by despair, some armed forces could be tempted to choose rogue behaviours. History provides many examples, especially since the late 19th century, where emerging technologies have served military purposes even if such use was already banned, e.g. machine guns, submarines and chemical weapons.

The prohibition of militarising a technology at national or international level is not sufficient to ultimately prevent armed forces from using it. As Wunische (2018) underlines, "the single determinant of its use, or non-use, on the battlefield is the result of something much more straightforward—states will use it if it's effective on the battlefield."

In addition, even though armed forces sometimes resist new technologies, this positioning usually only postpones their adoption in the medium and long run. As Price (1995: 73) observed, "Throughout history, numerous weapons have provoked cries of moral protest upon their introduction as novel technologies of warfare. However, as examples such as the longbow, crossbow, firearms, explosive shells, and submarines demonstrate, the dominant pattern has been for such moral qualms to disappear over time as these innovations became incorporated into the standard techniques of war."



Defense innovations: the human factor

Historian Joel Mokyr's works21 provide at least one major lesson: nothing should be taken for granted when considering the adoption of innovations that have the potential to radically transform defense. His analysis sounds a cautionary note. Ever since the dawn of the first Industrial Revolution three centuries ago, both the pessimists and the optimists concerning innovation and technology have almost always been proven wrong in a long-run perspective.

In fact, the adoption and diffusion of innovation do not rely only on technical dimensions (the maturity of a given technology and the availability of complementary ones, if one considers the technical system strictly speaking). Most of the time, technical issues and difficulties can be overcome, to some extent at least and through appropriate efforts22.

However, the technical feasibility of a technology or its improving performances does not necessarily lead to its desirability and spreading, since political, economic, operational, sociological and ethical factors come into play, either stimulating adoption or hindering it. As Oh et al. (2019) remark: "History is replete with examples of militaries that possessed certain technologies but failed to incorporate them organizationally to succeed in battle."

Grounds for an AI Code of Conduct for the Pentagon

Source: Weinbaum, C. 2019. Here's What an AI Code of Conduct for the Pentagon Might Look Like. June 21st.

21 For instance, Mokyr, J., Vickers, C., Ziebarth, N. 2015. The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different? Journal of Economic Perspectives, 29(3), 31-50.
22 The history of missile systems in Germany from the inter-war period to the end of World War II provides a good illustration of such ability to get over challenges in terms of both science and experimentations. See, for instance, Michael J. Neufeld's The Rocket and the Reich: Peenemünde and the Coming of the Ballistic Missile Era (Smithsonian Books, 2013).


No doctrine, no capability

No new system can have military value as long as armed forces have not conceived a corresponding doctrine to understand its military value and the ways they would integrate such a system inside missions and operations. A given system should be understandable for military uses and acceptable with regard to how military forces want to carry out their duties and missions.

None of the previously mentioned technical and conceptual problems is unsolvable per se in the long run. However, as Dimitri Scheftelowitsch23 explains, they require a deep understanding of the application domain and the mathematical foundations of current solution methods, their capabilities and limitations, and an integration of domain and technical knowledge into military doctrine. In fact, this ultimate dimension appears much more critical to the deployment of AI into defense capabilities than technical issues. The design of an appropriate doctrine is a sine qua non condition to give a specific technology its full military potential.

The use of AI in military operations is logical if and only if armed forces are able to understand what added value it can provide and how military operations need to be reshuffled so that AI can deliver its full benefits. Such a process can only take place if armed forces accept to integrate AI and develop a sound doctrine to go along with new capabilities. Military history demonstrates that such a process is far from being granted. What is technically feasible does not automatically determine what will be accepted or developed. There is no teleological process creating an automatic runoff from technology to defense.

Even when a doctrine is available, institutional resistance can also persist, notably when soldiers consider that new capabilities are incompatible with the operational paradigm they are used to. Up to now, it appears that armed forces often lack trust in the safety and reliability of autonomous systems. Moreover, some servicemen understand the development of certain autonomous capabilities as a direct threat to their role, status, positioning or professional ethics.

Compatibility with military ethos

As Bob Work underlined: "A general AI system sets its own goals and can change them. No one in the Department of Defense is saying we ought to go toward those type of weapons." Even Project Maven, against which Google employees petitioned their company to remove itself for ethical reasons, focuses on merging human and machine intelligence in a way that improves human decision-making. It does not aim at substituting a machine decision for a human one.

Resistance results, for a large part, from a lack of trust in substituting AI-based solutions for existing tools, since decision-makers fear downgrading the effectiveness of their current defense capabilities. For instance, the Deputy Director for CIA technology development, Dawn Meyerriecks, expressed concern about the willingness of senior political and military leaders to accept AI-generated analysis, arguing that the defense establishment's risk-averse culture may pose greater challenges to future competitiveness than the pace of adversary technology development (Sayler 2019: 18).

In its AI Strategy Review for 2018, the US Department of Defense only mentions AI in weapon systems in relation to keeping humans in the loop for any kind of lethal engagement. This report emphasises many other applications where AI would provide benefits to the operational environment, such as "improving situational awareness and decision-making, increasing the safety of operating equipment, implementing predictive maintenance and supply, and streamlining business processes."

23 Scheftelowitsch, D. The state of artificial intelligence: An engineer’s perspective on autonomous systems. In: Boulanin (ed. 2019), 26-31.


NOTE DE RECHERCHE SEPTEMBER 2019

While being one of the fiercest proponents of AI in operational deployment, Bob Work underlined: "We are not talking about Skynets and we're not talking about Terminators (…) We're looking for narrow AI systems that can compose courses of action to accomplish the tasks that the machine is given and it can choose among the courses of action." (Boyd 2018) AI should complement, and not substitute for, human fighters. This is the reason why he talks about machine intelligence and algorithmic warfare rather than artificial intelligence. Bob Work even added: "A general AI system sets its own goals and can change them. No one in the Department of Defense is saying we ought to go toward those type of weapons." (op. cit.)

The EU's positioning is quite similar. The partial agreement for a Regulation establishing a European Defense Fund purports that the EDF shall not finance activities that involve the development of lethal autonomous weapons without the possibility for meaningful human control over the selection and engagement decisions when carrying out strikes against humans. Thus, discussions around the development of AI technologies for integration in weapons systems will directly affect the future regulatory framework for AI and defense technologies within the EU.

Fiott and Lindstrom (2018: 3) provide a comprehensive assessment of armed forces' position when they note that AI "can easily be seen as a stand-alone capability or technology when in reality it should be regarded as a strategic enabler. It is therefore more accurate to speak of AI-enabled cyber-defense, AI-supported supply chain management or AI-ready unmanned and robotic systems."

AI and autonomous systems are currently seen as a means to make human beings more powerful, not to substitute for them – even in the Chinese perspective. Bob Work underlined that "When people hear me talk about this [autonomy], they immediately start to think of Skynet and Terminator, I think more in terms of Iron Man […] A machine to assist a human, where a human is still in control in all matters, but the machine makes the human much more powerful and more capable."24

Man-machine teaming at stake

Maintaining a safe and meaningful interaction between soldiers and AI-based systems appears increasingly challenging as the level of system autonomy increases. Trust is essential to avoid jeopardising the life of soldiers deployed alongside AI-based systems. Additionally, the differential in data management between AI systems and human beings can lead to mismatches in terms of decisions. As Boulanin and Verbruggen (2017: 67) underline, "The more autonomous a process is, the harder it is for human operators to react to a problem correctly and in a timely manner."

This constitutes a challenge for the hybrid deployment of forces, which requires good man-machine teaming. The latter is a long-standing issue in computer science, and a fundamental dimension of successfully implementing AI into any kind of system.

According to Sayler (2019: 30-31), human-machine interaction issues that may be challenged by insufficient explainability in a military context include three dimensions:

Goal Alignment. Human beings and autonomous systems must have a common understanding of the objective. As they encounter an evolving environment, goals are bound to change. Thus, human beings and systems must adjust simultaneously and they must base such adjustment on a shared picture of their environment.

24 Quoted by Freedberg (2016).


RENAUD BELLAIS Coll. Analyse techno-capacitaire

Task Alignment. Both must understand the boundaries of each other's decision space, especially when goals change.

Human-Machine Interface. Due to the requirement for timely decisions in many military AI applications, traditional machine interfaces may slow down performance, but real-time human-machine coordination is essential to build trust.

As long as man-machine teaming issues are not solved, one can expect only a limited embeddedness of AI applications in defense capacities. This issue is far more complex than one might expect. Indeed, even in a segregated space like an industrial plant, the development of cobotics ("collaborative robots") raises many difficulties in having human operators and machines operate closely together.

These issues reach another level when dealing with defense capabilities that are deployed in moving, heterogeneous and hostile environments… Many uncertainties remain today as to whether AI-based capabilities can work in harmony with soldiers. If this is not the case, AI could become a source of trouble and reduce the effectiveness of military operations, or even result in casualties among friendly forces.

The effectiveness of military operations appears as the ultimate criterion when deciding which capabilities can fulfil operational requirements. Since alternative solutions exist, and as long as their benefit equals or exceeds the performance and reliability of AI-based systems, it is likely that armed forces will favour the conservative approach in terms of capabilities (Bellais 1999).

Reshuffling the organisation of armed forces

The integration of AI into defense capabilities is likely to modify the organisation of armed forces. Ryan (2018: 20) provides an illustration of such transformative impact: "a highly capable and sustainable land combat battlegroup in 2030 may consist of as few as 250–300 human soldiers and several thousand robotic systems of various sizes and functions. By the same token, many functions of artillery and combat engineer units, currently undertaken by humans, might be better done by robots in human-robot teams. This has the potential to reduce the size of these types of units by hundreds of combat arms personnel. This approach could free up personnel for redeployment into areas where the art of war demands leadership and creativity-enabling intelligence functions; training and education; planning; and, most importantly, command and leadership."

One should consider that relying heavily on AI-based systems also constitutes a challenge for the organisation of armed forces. Indeed, if capabilities can manage their tasks and missions autonomously, then there is no use for soldiers and, even more so, officers in the loop. However, the institutional evolution of armed forces is fundamentally based on the chain of command.

In his Principles of War, General Foch noted that "in an army, there are only subordinate units"25. Such hierarchical organisation remains at the heart of armed forces today. As Caplain (2019: 11) explains: "From the political decision to the implementation order down to the last soldier, armed forces rely on an operational command structure whose role consists in conceiving, implementing and eventually conducting the military manoeuvre." Since the 18th century, the chain of command has been the quintessential dimension of military organisations, with staffs and the "modèle divisionnaire" inspired by Guibert and fully implemented by Napoleon.

This does not mean that decentralised, autonomous operations did not exist. There are several instances of the latter – especially in the German armed forces during World War I and II (e.g. Stoßtruppen). However, even though they can deliver good outcomes in the short run, loosely

25 Ferdinand Foch. 1911. Des principes de la guerre. Paris: Berger-Levrault, 3rd edition, 93.



coordinated operations are unlikely to deliver the best outcomes in the long run, and thus to fulfil political decision-makers' expectations.

Indeed, the hierarchical organisation of armed forces also results from the need to articulate three levels of operations: tactical, operative and strategic. Sometimes a decision can appear irrelevant at the tactical level while providing high added value at the operative one, which cannot be identified as such from a very local perspective. The ultimate outcome does not result from the juxtaposition of effects delivered by each unit locally, but from the merger of all outcomes through a comprehensive approach at the global level. If boots on the ground can find it difficult to appreciate the reasons why they implement some actions, the ability to carry out apparently irrational operations based on beyond-line-of-sight information is all the more inaccessible to AI systems.

Therefore, the more autonomous decisions (unmanned or manned) are at the tactical level, the more fragile and loose the articulation between these three levels of war can be. The very military concept of "command and control" speaks for itself when we try to capture the core organisational principles of armed forces: understand, conceive, and command…

New mind-sets, structures, and processes

Despite the digitalisation of armed forces, the command structure has not weakened but, on the contrary, been reinforced through the RMA. This is due to the ability to access information in near-real time in most situations; in addition, this information loop gives political decision-makers the possibility to prevent soldiers on the ground from engaging in actions with side effects and political hazards (e.g. civilian casualties). While soldiers and officers have access to much more information, they paradoxically have a lower margin of manoeuvre to make decisions at their own level due to this obsession with control.

Therefore, digitalisation tends to increase the importance of the chain of command rather than to favour a more decentralised management of operations. This trend seems contradictory to the deployment of more autonomous systems, even though AI would give them such ability. Artificial intelligence can help manage the complexity of military operations, but it is improbable that it could replace the hierarchical organisation of armed forces, especially at staff levels.

As Caplain (2019: 79-80) notes, AI can play a decisive role in future command structures, notably in order to gather and exploit huge volumes of information through decision-making tools. If some observers are already speaking about another revolution in military affairs, we should be cautious and consider the potential resistance of existing structures to changes, especially when they turn existing sources and structures of power upside down. "Changing any military's doctrine," as Toffler and Toffler (1993: 62) point out, "is like trying to stop a tank by throwing marshmallows at it. The military, like any huge bureaucracy, resists innovation – especially if the change implies the downgrading of certain units and the need to learn new skills and to transcend service rivalries."

As Oh et al. (2019) underline, "military institutions have a deserved reputation for, at best, only a half-hearted embrace of disruptive technologies. To beat historical precedent, the joint force must take appropriate best practices from all sources, whether from inside or outside of government, to alter organizational mindsets, structures, and processes to better incorporate AI." Their point of view is all the more interesting since the authors of this paper are all U.S. military officers and graduates of the U.S. Army War College resident class of 2019.

Perhaps the biggest challenge for any company, as well as for the armed forces, in incorporating AI is its organisational culture. "A company's culture can resist disruptive technology, especially if it threatens core models and processes that are engrained in the company's […]. The workforce may view these models as sacrosanct because they have led to successes in the past." (Oh et al. 2019) Initiatives that impact core business models take longer to incorporate fully (if they even succeed) since they meet the strongest cultural resistance inside organisations.

Such resistance from military structures to game-changing innovations goes beyond the



United States. In China, Xi Jinping has placed military intelligentization and the integration of AI into defense capabilities as a top priority on the political agenda. Additionally, China concentrates important AI-related R&D efforts. However, a military deployment of AI solutions is likely to take some time:

"The inclusion of discussion of competition in artificial intelligence in this authoritative textbook reflects a further formalization of the PLA's strategic thinking on the importance of military intelligentization (…) However, these [concepts] must enter the PLA's 'operational regulations' in order to act as a basis for future military operations and training, and the PLA's process of transforming concepts into doctrine requires a more formal process of evaluation and authoritative assessment, including on the basis of ideological considerations. In this regard, it would be premature to say that the PLA has a formal doctrine or framework of firm policies established for questions of autonomy and artificial intelligence." (Kania 2019b: 7)

The transformative impacts of AI require in-depth changes in national defense for it to deliver its full military potential. In China, these transformations could be impeded by the fact that the People's Liberation Army is a Party army, not a nation's one. Xi Jinping has consistently reiterated that the PLA must adhere to the Party's "absolute leadership" to ensure its complete obedience. The imperatives of capability and controllability could constrain and condition the Chinese approach to AI in the field of defense. The Chinese situation illustrates, by its extreme features, the decisive impact of non-technical factors on the possible integration of AI-based capabilities into armed forces.

Indeed, an AI-centric evolution of defense capacities and capabilities would put the organisation of armed forces upside down. Due to the role of officers in armed forces, sociological resistance can limit the deployment of AI-based autonomous systems in quantitative and qualitative terms. More decentralised operations of armed forces could provide benefits (e.g. in the short run and in a very thick "fog of war"), but history proves the added value of the operative and strategic levels, for which the role of officers and the chain of command is of utmost importance.

Therefore, a slow implementation of AI into defense capabilities is likely to result from:
- A lack of trust among soldiers in unproven technology (learning curve)
- A priority given to proven solutions over theoretically superior ones (risk mitigation)
- The shortfall of added value from these solutions
- The inertia of organisational setups that prevents optimising the use of new capabilities
- Inappropriate doctrines to give military/operational meaning to these capabilities
- High switching costs (depreciating existing assets, training of soldiers, creating appropriate infrastructure, support and maintenance…)
- The slow adoption by (potentially) competing countries

In addition, history proves that, in the absence of conflict or rapidly intensifying tensions, the introduction and adoption of military innovations can be slow compared to the relative maturity of related technologies (e.g. radars, aircraft carriers or missile systems in the interwar period, or the key systems of Andrew Marshall's "revolution in military affairs")26.

A slower integration than expected?

We should keep in mind that, in many countries and even in the United States, armed forces and the

26 See, for instance, Military Innovation in the Interwar Period. Edited by Williamson Murray and Allan R. Millett. Cambridge, Cambridge University Press, 1996.


MoD have been more comfortable investing in AI for back-office initiatives rather than in capabilities that truly affect how wars are fought. Regarding the United States, Oh et al. (2019) state: "Current AI pilot projects have focused on areas like preventive maintenance and humanitarian assistance. The military has not yet invested in areas where AI can impact core warfighting functions like command and control. Such initiatives will likely challenge many cultural norms such as the role of the commander in decision making or appropriate levels of automation."

Armed forces are quite conservative when deciding what kinds of capabilities to deploy, due to hysteresis effects, institutional latency and mistrust of disruptions. This is the reason why we can expect the deployment of AI-based systems to be slower than expected, but with punctuated diffusion rather than unpredictable tipping points, resulting in a non-linear adoption that requires sustained monitoring. In addition, these dynamics of diffusion can gear up in case of emergency, for instance because of increased international tensions (in order to change the level playing field) or of a new arms race (with AI as a general-purpose technology for defense capabilities).

Conclusion

Many expectations and fears result from the new wave of AI developments that has been occurring over the past decade. Even though AI solutions appear more and more capable, this Note underlines the technical and non-technical parameters that could promote or hinder the integration of AI solutions into defense capabilities.

One should keep in mind that, while keeping human beings in the loop or on the loop constitutes the favoured option of armed forces, the fate of ongoing battles could push some belligerents to ultimately rely on autonomous AI capabilities, as Paul Scharre27 underlines. Decision-makers would then be confronted with Faustian dilemmas related to the costs of victory, the fate of battles, and the ways and means to win.

Relying too much on AI would prevent human decision-makers from stopping damaging or destructive engagements. This requires a thorough analysis of possible consequences in the medium and long run, as well as a systemic understanding of resulting interactions, especially on the architecture of defense. For instance, it is important to avoid any automaticity of decisions that could put capabilities out of control, as in the 1983 movie "War Games", through domino effects or misunderstandings.

The adoption of AI solutions should be based on a global cost-benefit assessment in terms of national defense and of the ability to keep war under control. As Scharre and Horowitz (2018: 14) underline: "In national security settings, unintended interactions could occur by AI systems trying to gain a competitive advantage on one another and taking actions that could be destructive or counterproductive. In settings where machines interact at superhuman speeds, such as in cyberspace or electronic warfare, these interactions could lead to harmful consequences before human users can adequately respond."

27 Scharre, P. 2018. Army of none: Autonomous weapons and the future of war. New York: W.W. Norton & Company.



Bibliography

Allen, G.C. 2019. Understanding China's AI strategy: Clues to Chinese strategic thinking on artificial intelligence and national security. Washington, DC: Center for a New American Security, February.
Bellais, R. 1999. Production d'armes et puissance des nations. Paris: L'Harmattan.
Bellais, R. 2019. Le rôle indispensable de l'Union européenne comme acteur de la stabilité internationale. Défense nationale, 821, June, 30-36.
Bendett, S. 2018. Here's How the Russian Military Is Organizing to Develop AI. July 20th. https://www.defenseone.com/ideas/2018/07/russian-militarys-ai-development-roadmap/149900/
Boulanin, V. (ed.) 2019. The impact of artificial intelligence on strategic stability and nuclear risk. Volume 1: Euro-Atlantic perspectives. Solna: SIPRI.
Boulanin, V., Verbruggen, M. 2017. Mapping the development of autonomy in weapon systems. Solna: SIPRI.
Boyd, A. 2018. Pentagon Doesn't Want Real Artificial Intelligence In War, Former Official Says. Nextgov, October 30th. https://www.nextgov.com/emerging-tech/2018/10/pentagon-doesnt-want-real-artificial-intelligence-war-former-official-says/152418/
Caplain, S. 2019. La fourmilière du général. Le commandement opérationnel face aux enjeux de haute intensité. Focus stratégique 89, Paris: Institut Français des Relations Internationales, June.
Cardon, D., Cointet, J.P., Mazières, A. 2018. La revanche des neurones: L'invention des machines inductives et la controverse de l'intelligence artificielle. Réseaux 211, 2018/5, 173-220.
Cockburn, I., Henderson, R., Stern, S. 2017. The Impact of Artificial Intelligence on Innovation: An Exploratory Analysis. Paper prepared for the NBER Conference on Research Issues in Artificial Intelligence, Toronto, September.
Copeland, B.J. 2019. Artificial intelligence. Encyclopaedia Britannica, April 11th. https://www.britannica.com/technology/artificial-intelligence (viewed on April 17th, 2019).
Corrigan, J. 2019. Inside DARPA's Ambitious 'AI Next' Program. NextGov, March 10th. https://www.defenseone.com/technology/2019/03/inside-pentagons-big-plans-develop-trustworthy-artificial-intelligence/155427/
DoD. 2019. Summary of the 2018 Department of Defense Artificial Intelligence Strategy. Washington, DC: U.S. Department of Defense.
du Sautoy, M. 2019. The Creativity Code: How AI is learning to write, paint and think. London: Fourth Estate.
Eastwood, B. 2019. A smarter battlefield? PLA concepts for "intelligent operations" begin to take shape. Washington, DC: The Jamestown Foundation, February 15th. https://jamestown.org/program/a-smarter-battlefield-pla-concepts-for-intelligent-operations-begin-to-take-shape
Feickert, A., Kapp, L., Elsea, J., Harris, L. 2018. U.S. Ground Forces Robotics and Autonomous Systems (RAS) and Artificial Intelligence (AI): Considerations for Congress. Washington, DC: Congressional Research Service, November 1st. https://crsreports.congress.gov/product/pdf/R/R45392
Fiott, D., Lindstrom, G. 2018. Artificial Intelligence: What implications for EU security and defense? Brief Issue No. 10/2018. Paris: European Union Institute for Security Studies, November.
Freedberg, S. 2016. Iron Man, not Terminator: the Pentagon's sci-fi inspirations. Breaking Defense, May 3rd. https://breakingdefense.com/2016/05/iron-man-not-terminator-the-pentagons-sci-fi-inspirations/
Freedberg, S. 2019. Attacking Artificial Intelligence: How To Trick The Enemy. Breaking Defense, February 6th. https://breakingdefense.com/2019/02/attacking-artificial-intelligence-how-to-trick-the-enemy/
Gassilloud, T., Becht, O. 2018. Gagner la guerre numérique, nouvel enjeu de souveraineté. Information report No. 996. Paris: National Assembly.



Gertz, B. 2019. China developing battlefield AI for high-technology warfare. Washington Times, January 30th. https://m.washingtontimes.com/news/2019/jan/30/chinas-military-outlines-artificial-intelligence-p/
Glass, P. 2019. Here's the Key Innovation in DARPA AI Project: Ethics From the Start. Defense One, March 15th. https://www.defenseone.com/technology/2019/03/heres-key-innovation-darpas-ai-project-ethics-start/155589/
Gorriez, F. 2017. Vers une interdiction des systèmes d'armes létaux autonomes ? Cahiers de la pensée Mili-Terre 47, June 30th.
Kania, E. 2019a. Learning Without Fighting: New Developments in PLA Artificial Intelligence War-Gaming. China Brief 19(7), Washington, DC: The Jamestown Foundation, April 9th, 15-19.
Kania, E. 2019b. Chinese Military Innovation in Artificial Intelligence. Testimony before the U.S.-China Economic and Security Review Commission Hearing on Trade, Technology, and Military-Civil Fusion, Washington, DC: Center for a New American Security, June 7th. https://s3.amazonaws.com/files.cnas.org/documents/June-7-Hearing_Panel-1_Elsa-Kania_Chinese-Military-Innovation-in-Artificial-Intelligence.pdf?mtime=20190617115242
Kelly, K. 2014. The Three Breakthroughs That Have Finally Unleashed AI on the World. Wired, October 27th. https://www.wired.com/2014/10/future-of-artificial-intelligence/
Kuncic, Z. 2019. In search of smarter machines. Financial Times, August 3rd/4th, page FTWeekend 18.
Liu, Z., Moodie, M. 2019. International Discussions Concerning Lethal Autonomous Weapon Systems. Washington, DC: Congressional Research Service, August 16th. https://fas.org/sgp/crs/weapons/IF11294.pdf
Mallat, S. 2018. Inaugural lecture on the data sciences chair. Paris: Collège de France, January 11th.
Marsh, H. 2019. Can we ever build a mind? Financial Times, January 12th, pages FTWeekend 1-2.
Oh, P., Spahr, T., Chase, C., Abadie, A. 2019. Incorporating artificial intelligence: Lessons from the private sector. June 19th. https://warroom.armywarcollege.edu/articles/incorporating-artificial-intelligence-private-sector/ (consulted on June 21st, 2019).
Petersson, H., Wood, A. 2019. Possible political and legal consequences of the ethical debate on Artificial Intelligence. Scoping Paper. Brussels: ASD, April.
Price, R. 1995. A Genealogy of the Chemical Weapons Taboo. International Organization 49(1), 73-103.
Russell, S. 2015. Take a stand on AI weapons. Nature 521(7553), May 28th, 415-416.
Ryan, M. 2018. Human-Machine Teaming for Future Ground Forces. Washington, DC: Center for Strategic and Budgetary Assessments.
Sayler, K. 2019. Artificial Intelligence and National Security. Washington, DC: Congressional Research Service, January 30th. https://crsreports.congress.gov/product/pdf/R/R45178
Scharre, P. 2015. The opportunity and challenge of autonomous systems. In: A. Williams and P. Scharre (eds.). Autonomous Systems: Issues for Defense Policymakers. Norfolk: NATO.
Scharre, P., Horowitz, M. 2018. Artificial intelligence: What Every Policymaker Needs to Know. Washington, DC: Center for a New American Security, June.
Schmidt, E. 2018. Statement before the House Armed Services Committee. Washington: U.S. House of Representatives, April 17th. https://es.ndu.edu/Portals/75/Documents/HHRG-115-AS00-Wstate-SchmidtE-20180417.pdf
Sherman, J. 2019. Don't Call It an 'Arms Race': US-China AI Competition Is Not Winner-Takes-All. March 7th. https://www.defenseone.com/ideas/2019/03/dont-call-it-arms-race-us-china-ai-competition-not-winner-takes-all/155378/
Tiku, N. 2018. Amazon's Jeff Bezos Says Tech Companies Should Work with the Pentagon. Wired, October 15th. https://www.wired.com/story/amazons-jeff-bezos-says-tech-companies-should-work-with-the-pentagon/
Toffler, A., Toffler, H. 1993. War and anti-war: Survival at the dawn of the 21st century. Boston: Little, Brown and Company.



Villani, C. 2018. Donner un sens à l'intelligence artificielle: Pour une stratégie nationale et européenne. Report to the French Prime Minister, Paris, March.
Wang, H., Dotson, J. 2019. The "Algorithm Game" and Its Implications for Chinese War Control. China Brief 19(7), Washington, DC: The Jamestown Foundation, 19-24.
WIPO. 2019. WIPO Technology Trends 2019: Artificial Intelligence. Geneva: World Intellectual Property Organization.
Work, B. 2015. Deputy Secretary of Defense Bob Work's speech at the Reagan Defense Forum: The Third Offset Strategy. Reagan Presidential Library, Simi Valley, California, November 7th (as delivered). https://www.defense.gov/Newsroom/Speeches/Speech/Article/628246/reagan-defense-forum-the-third-offset-strategy/
Wunische, A. 2018. AI Weapons Are Here to Stay. The National Interest, August 7th. https://nationalinterest.org/feature/ai-weapons-are-here-stay-27862


Contact : [email protected]

Site : https://iesd.univ-lyon3.fr/

IESD – Faculté de droit Université Jean Moulin – Lyon III 1C avenue des Frères Lumière – CS 78242 69372 LYON CEDEX 08