The Seven Tools of Causal Inference with Reflections on Machine Learning


Forthcoming, Communications of the Association for Computing Machinery. TECHNICAL REPORT R-481, November 2018.

JUDEA PEARL, UCLA Computer Science Department, USA

ACM Reference Format: Judea Pearl. 2018. The Seven Tools of Causal Inference with Reflections on Machine Learning. 1, 1 (November 2018), 6 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn

Author's address: Judea Pearl, UCLA Computer Science Department, 4532 Boelter Hall, Los Angeles, California, 90095-1596, USA, [email protected]. © 2018 Association for Computing Machinery. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published at https://doi.org/10.1145/nnnnnnn.nnnnnnn.

The dramatic success in machine learning has led to an explosion of AI applications and increasing expectations for autonomous systems that exhibit human-level intelligence. These expectations, however, have met with fundamental obstacles that cut across many application areas. One such obstacle is adaptability, or robustness. Machine learning researchers have noted that current systems lack the capability of recognizing or reacting to new circumstances they have not been specifically programmed or trained for. Intensive theoretical and experimental efforts toward "transfer learning," "domain adaptation," and "lifelong learning" [Chen and Liu 2016] are reflective of this obstacle.

Another obstacle is explainability, that is, "machine learning models remain mostly black boxes" [Ribeiro et al. 2016], unable to explain the reasons behind their predictions or recommendations, thus eroding users' trust and impeding diagnosis and repair. See [Marcus 2018] and http://www.sciencemag.org/news/2018/05/ai-researchers-allege-machine-learning-alchemy.

A third obstacle concerns the understanding of cause-effect connections. This hallmark of human cognition [Lake et al. 2015; Pearl and Mackenzie 2018] is, in this author's opinion, a necessary (though not sufficient) ingredient for achieving human-level intelligence. This ingredient should allow computer systems to choreograph a parsimonious and modular representation of their environment, interrogate that representation, distort it by acts of imagination, and finally answer "What if?" kinds of questions. Examples are interventional questions: "What if I make it happen?" and retrospective or explanatory questions: "What if I had acted differently?" or "What if my flight had not been late?" Such questions cannot be articulated, let alone answered, by systems that operate in purely statistical mode, as do most learning machines today.

I postulate that all three obstacles mentioned above require equipping machines with causal modeling tools, in particular, causal diagrams and their associated logic. Advances in graphical and structural models have made counterfactuals computationally manageable and thus rendered causal reasoning a viable component in support of strong AI.

In the next section, I will describe a three-level hierarchy that restricts and governs inferences in causal reasoning. The final section summarizes how traditional impediments are circumvented using modern tools of causal inference. In particular, I will present seven tasks which are beyond the reach of associational learning systems and which have been accomplished using the tools of causal modeling.

THE THREE LAYER CAUSAL HIERARCHY

A useful insight unveiled by the theory of causal models is the classification of causal information in terms of the kind of questions that each class is capable of answering. The classification forms a 3-level hierarchy in the sense that questions at level i (i = 1, 2, 3) can only be answered if information from level j (j ≥ i) is available.

Figure 1 shows the 3-level hierarchy, together with the characteristic questions that can be answered at each level. The levels are titled 1. Association, 2. Intervention, and 3. Counterfactual. The names of these layers were chosen to emphasize their usage. We call the first level Association because it invokes purely statistical relationships, defined by the naked data. (Some other terms used in connection with this layer are "model-free," "model-blind," "black-box," or "data-centric." Darwiche [2017] used "function-fitting," for it amounts to fitting data by a complex function defined by the neural network architecture.) For instance, observing a customer who buys toothpaste makes it more likely that he/she buys floss; such an association can be inferred directly from the observed data using conditional expectation. Questions at this layer, because they require no causal information, are placed at the bottom level of the hierarchy. Answering these questions is the hallmark of current machine learning methods.
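To make the association layer concrete, the following minimal sketch (not from the paper; the purchase probabilities and the hidden "hygiene" trait are invented for illustration) shows that P(floss | toothpaste) is nothing more than a conditional frequency read off observational data, with no causal model involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic shopping records (illustrative only): a hidden "hygiene-minded"
# trait makes a customer more likely to buy both toothpaste and floss.
n = 100_000
hygiene = rng.random(n) < 0.4
toothpaste = rng.random(n) < np.where(hygiene, 0.9, 0.3)
floss = rng.random(n) < np.where(hygiene, 0.6, 0.1)

# Level-1 (association) quantity: P(floss | toothpaste), a conditional
# frequency computed directly from the observed data.
p_floss = floss.mean()
p_floss_given_toothpaste = floss[toothpaste].mean()
print(f"P(floss)              ~ {p_floss:.2f}")
print(f"P(floss | toothpaste) ~ {p_floss_given_toothpaste:.2f}")  # higher: seeing one purchase raises belief in the other
```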
The second level, Intervention, ranks higher than Association because it involves not just seeing what is, but changing what we see. A typical question at this level would be: What will happen if we double the price? Such questions cannot be answered from sales data alone, because they involve a change in customers' choices in reaction to the new pricing. These choices may differ substantially from those taken in previous price-raising situations (unless we replicate precisely the market conditions that existed when the price reached double its current value). Finally, the top level is called Counterfactuals, a mode of reasoning that goes back to the philosophers David Hume and John Stuart Mill, and which has been given computer-friendly semantics in the past two decades. A typical question in the counterfactual category is "What if I had acted differently?", thus necessitating retrospective reasoning.

Fig. 1. The Causal Hierarchy. Questions at level i can only be answered if information from level i or higher is available.

1. Association, P(y|x). Typical activity: Seeing. Typical questions: What is? How would seeing X change my belief in Y? Examples: What does a symptom tell me about a disease? What does a survey tell us about the election results?

2. Intervention, P(y|do(x), z). Typical activity: Doing, Intervening. Typical questions: What if? What if I do X? Examples: What if I take aspirin, will my headache be cured? What if we ban cigarettes?

3. Counterfactuals, P(y_x | x', y'). Typical activity: Imagining, Retrospection. Typical questions: Why? Was it X that caused Y? What if I had acted differently? Examples: Was it the aspirin that stopped my headache? Would Kennedy be alive had Oswald not shot him? What if I had not been smoking the past 2 years?

Counterfactuals are placed at the top of the hierarchy because they subsume interventional and associational questions. If we have a model that can answer counterfactual queries, we can also answer questions about interventions and observations. For example, the interventional question "What will happen if we double the price?" can be answered by asking the counterfactual question "What would happen had the price been twice its current value?" Likewise, associational questions can be answered once we can answer interventional questions; we simply ignore the action part and let observations take over. The translation does not work in the opposite direction: interventional questions cannot be answered from purely observational information (i.e., from statistical data alone). No counterfactual question involving retrospection can be answered from purely interventional information, such as that acquired from controlled experiments; we cannot re-run an experiment on subjects who were treated with a drug and see how they would have behaved had they not been given the drug. The hierarchy is therefore directional, with the top level being the most powerful one.

Counterfactuals are the building blocks of scientific thinking as well as legal and moral reasoning. In civil court, for example, the defendant is considered to be the culprit of an injury if, but for the defendant's action, it is more likely than not that the injury would not have occurred. The computational meaning of "but for" calls for comparing the real world to an alternative world in which the defendant's action did not take place.

Each layer in the hierarchy has a syntactic signature that characterizes the sentences admitted into that layer. For example, the association layer is characterized by conditional probability sentences, e.g., P(y|x) = p, stating that the probability of event Y = y, given that we observed event X = x, is equal to p. In large systems, such evidential sentences can be computed efficiently using Bayesian networks, or any number of machine learning techniques.

At the interventional layer we find sentences of the type P(y|do(x), z), which denotes "the probability of event Y = y given that we intervene and set the value of X to x and subsequently observe event Z = z." At the counterfactual level we find expressions of the type P(y_x | x', y'), which stand for "the probability that event Y = y would be observed had X been x, given that we actually observed X to be x' and Y to be y'." For example, the probability that Joe's salary would be y had he finished college, given that his actual salary is y' and that he had only two years of college. Such sentences can be computed only when we possess functional or structural equation models, or properties of such models [Pearl 2000, Chapter 7].

This hierarchy, and the formal restrictions it entails, explains why machine learning systems, based only on associations, are prevented from reasoning about (novel) actions, experiments, and causal explanations.
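On a fully specified structural equation model, counterfactual sentences P(y_x | x', y') of the kind just described can be evaluated by the standard three-step recipe of abduction (infer the individual's exogenous noise terms from the evidence), action (modify the equation for X), and prediction (recompute Y). The sketch below is illustrative only: the equations, coefficients, and the observed values for "Joe" are invented assumptions, not taken from the paper.

```python
# Toy structural equation model (coefficients invented for illustration):
#   education  E  (years of college, exogenous here)
#   experience X := 10 - E + u_x
#   salary     S := 20 + 2.5*E + 1.0*X + u_s          (in $1000s)
# Counterfactual query: what would Joe's salary S have been had E been 4
# (finished college), given that we observed E = 2, X = 9, S = 36?

def experience_eq(e, u_x): return 10 - e + u_x
def salary_eq(e, x, u_s):  return 20 + 2.5 * e + 1.0 * x + u_s

# Observed (factual) world for Joe.
e_obs, x_obs, s_obs = 2, 9.0, 36.0

# Step 1 -- Abduction: infer Joe's individual noise terms from the evidence.
u_x = x_obs - (10 - e_obs)                 # => 1.0
u_s = s_obs - (20 + 2.5 * e_obs + x_obs)   # => 2.0

# Step 2 -- Action: modify the model, setting do(E = 4).
e_cf = 4

# Step 3 -- Prediction: propagate the new value with the same noise terms.
x_cf = experience_eq(e_cf, u_x)
s_cf = salary_eq(e_cf, x_cf, u_s)
print(f"Factual salary:        {s_obs}")
print(f"Counterfactual salary: {s_cf}")   # the level-3 quantity S_{E=4}, given the observed evidence
```

Note that step 1 is exactly what statistical data alone cannot supply: it presupposes the structural equations, which is why such sentences sit above the interventional layer.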
THE SEVEN TOOLS OF CAUSAL INFERENCE (OR WHAT YOU CAN DO WITH A CAUSAL MODEL THAT YOU COULD NOT DO WITHOUT)

Consider the following five questions:

• How effective is a given treatment in preventing a disease?
• Was it the new tax break that caused our sales to go up?
• What are the annual health-care costs attributed to obesity?
• Can hiring records prove an employer guilty of sex discrimination?
• I am about to quit my job, but should I?

The common feature of these questions is that they are concerned with cause-and-effect relationships. We can recognize them through words such as "preventing," "cause," "attributed to," "discrimination," and "should I." Such words are common in everyday language, and our society constantly demands answers to such questions.
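Several of these questions (the effectiveness of a treatment, the effect of changing a price) live on level 2 of the hierarchy and, as argued above, cannot be answered from observational records alone. The following sketch, a toy simulation with invented numbers in which a hidden demand variable confounds price and sales, makes the gap between seeing a price and setting a price concrete.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Toy structural causal model (all numbers invented for illustration):
# hidden market demand drives both the price a seller posts and the sales.
demand = rng.normal(0.0, 1.0, n)

def price_eq(demand, noise):
    return 10.0 + 2.0 * demand + noise            # sellers charge more when demand is high

def sales_eq(price, demand, noise):
    return 50.0 - 1.5 * price + 4.0 * demand + noise

# Observational world: price is generated by its own structural equation.
price = price_eq(demand, rng.normal(0.0, 1.0, n))
sales = sales_eq(price, demand, rng.normal(0.0, 1.0, n))

# Level 1 (seeing): average sales among records where the price happened to be ~13.
seen = np.abs(price - 13.0) < 0.5
print("E[sales | price ~ 13]    :", round(sales[seen].mean(), 2))

# Level 2 (doing): replace the price equation by price := 13 for everyone,
# leaving demand and the sales mechanism untouched -- the do-operator.
sales_do = sales_eq(np.full(n, 13.0), demand, rng.normal(0.0, 1.0, n))
print("E[sales | do(price = 13)]:", round(sales_do.mean(), 2))

# The two numbers differ: observing a high price is evidence of high demand,
# whereas imposing a high price says nothing about demand.
```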