Artificial Intelligence as Structural Estimation: Economic Interpretations of Deep Blue, Bonanza, and AlphaGo*

Mitsuru Igami†

March 1, 2018

arXiv:1710.10967v3 [econ.EM] 1 Mar 2018

Abstract

Artificial intelligence (AI) has achieved superhuman performance in a growing number of tasks, but understanding and explaining AI remain challenging. This paper clarifies the connections between the machine-learning algorithms used to develop AIs and the econometrics of dynamic structural models, through case studies of three famous game AIs. Chess-playing Deep Blue is a calibrated value function, whereas shogi-playing Bonanza is a value function estimated via Rust's (1987) nested fixed-point method. AlphaGo's "supervised-learning policy network" is a deep-neural-network implementation of Hotz and Miller's (1993) conditional choice probability estimation; its "reinforcement-learning value network" is equivalent to Hotz, Miller, Sanders, and Smith's (1994) conditional choice simulation method. Relaxing these AIs' implicit econometric assumptions would improve their structural interpretability.

Keywords: Artificial intelligence, Conditional choice probability, Deep neural network, Dynamic game, Dynamic structural model, Simulation estimator.

JEL classifications: A12, C45, C57, C63, C73.

* First version: October 30, 2017. This paper benefited from seminar comments at Riken AIP, Georgetown, Tokyo, Osaka, Harvard, Johns Hopkins, and The Third Cambridge Area Economics and Computation Day conference at Microsoft Research New England, as well as conversations with Susan Athey, Xiaohong Chen, Jerry Hausman, Greg Lewis, Robert Miller, Yusuke Narita, Aviv Nevo, Anton Popov, John Rust, Takuo Sugaya, Elie Tamer, and Yosuke Yasuda.

† Yale Department of Economics and MIT Department of Economics. E-mail: [email protected].

1 Introduction

Artificial intelligence (AI) has achieved human-like performance in a growing number of tasks, such as visual recognition and natural language processing.[1] The classical games of chess, shogi (Japanese chess), and Go were once thought to be too complicated and intractable for AI, but computer scientists have overcome these challenges. In chess, IBM's computer system Deep Blue defeated Grandmaster Garry Kasparov in 1997. In shogi, a machine-learning-based program called Bonanza challenged (and was defeated by) Ryūō champion Akira Watanabe in 2007, but one of its successors (Ponanza) played against Meijin champion Amahiko Satoh and won in 2017. In Go, Google DeepMind developed AlphaGo, a deep-learning-based program, which beat the 2-dan European champion Fan Hui in 2015, the 9-dan (highest-rank) professional Lee Sedol in 2016, and the world's best player Ke Jie in 2017.

Despite such remarkable achievements, one lingering criticism of AI is its lack of transparency. Its internal mechanism seems like a black box to most people, including the human experts in the relevant tasks,[2] which raises concerns about accountability and responsibility. The desire to understand and explain the functioning of AI is not limited to the scientific community. For example, the US Department of Defense airs its concern that "the effectiveness of these systems is limited by the machine's current inability to explain their decisions and actions to human users," which led it to host the Explainable AI (XAI) program aimed at developing "understandable" and "trustworthy" machine learning.[3]

[1] The formal definition of AI seems contentious, partly because scholars have not agreed on the definition of intelligence in the first place. This paper follows a broad definition of AI as computer systems able to perform tasks that traditionally required human intelligence.

[2] For example, Yoshiharu Habu, the strongest shogi player in recent history, states that he does not understand certain board-evaluation functions of computer shogi programs (Habu and NHK [2017]).

[3] See https://www.darpa.mil/program/explainable-artificial-intelligence (accessed on October 17, 2017).

This paper examines three prominent game AIs in recent history: Deep Blue, Bonanza, and AlphaGo.
I have chosen to study this category of AIs because board games represent an archetypical task that has required human intelligence, including cognitive skills, decision-making, and problem-solving. They are also well-defined problems for which economic interpretations are more natural than for, say, visual recognition and natural language processing. The main finding from this paper's case studies is that these AIs' key components are mathematically equivalent to well-known econometric methods for estimating dynamic structural models.

Chess experts and IBM's engineers manually adjusted thousands of parameters in Deep Blue's "evaluation function," which quantifies the probability of eventually winning as a function of the current positions of the pieces (i.e., the state of the game) and can therefore be interpreted as an approximate value function. Deep Blue is a calibrated value function with a linear functional form (see the first code sketch below).

By contrast, the developer of Bonanza constructed a dataset of professional shogi games and used a discrete-choice regression and a backward-induction algorithm to determine the parameters of its value function. Hence, his method of "supervised learning" is equivalent to Rust's (1987) nested fixed-point (NFXP) algorithm, which combined a discrete-choice model with dynamic programming (DP) in the maximum likelihood estimation (MLE) framework. Bonanza is an empirical model of human shogi players estimated by this direct (or "full-solution") method (second sketch below).

Google DeepMind's AlphaGo (in its original version) embodies an alternative approach to estimating dynamic structural models: two-step estimation.[4] Its first component, the "supervised-learning (SL) policy network," predicts the moves of human experts as a function of the board state. It is an empirical policy function, built on a class of nonparametric basis functions (a deep neural network, or DNN), and is estimated by MLE using data from online Go games. Thus, the SL policy network is a DNN implementation of Hotz and Miller's (1993) first-stage conditional choice probability (CCP) estimation (third sketch below).

AlphaGo's value function, the "reinforcement-learning (RL) value network," is constructed by simulating many games through self-play of the SL policy network and estimating another DNN model that maps states to the probability of winning. This procedure is equivalent to the second-stage conditional choice simulation (CCS) estimation proposed by Hotz, Miller, Sanders, and Smith (1994) for single-agent DP and by Bajari, Benkard, and Levin (2007) for dynamic games (fourth sketch below).

Thus, these leading game AIs and the core algorithms behind their development turn out to be successful applications of the empirical methods used to implement dynamic structural models.

[4] This paper focuses on the original version of AlphaGo, published in 2016, and distinguishes it from its later version, "AlphaGo Zero," published in 2017. The latter version contains few econometric elements and is not an immediate subject of my case study, although I discuss some of its interesting features in section 5.
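To make the "calibrated value function" interpretation concrete, here is a minimal sketch. The feature names, weights, and numbers are hypothetical illustrations, not Deep Blue's actual parameters; the point is only that the evaluation is a hand-set linear combination of board features, with no data fitting involved.

```python
import numpy as np

# A minimal sketch of a calibrated linear value function in the spirit of
# Deep Blue's evaluation function. Features and weights are hypothetical.

# Hand-picked board features, phi(s): each maps a board state to a number.
FEATURE_NAMES = ["material_balance", "mobility", "king_safety", "pawn_structure"]

# "Calibration": domain experts set these weights by hand (no estimation).
THETA = np.array([1.00, 0.10, 0.50, 0.25])

def features(state: dict) -> np.ndarray:
    """Extract the feature vector phi(s) from a board state."""
    return np.array([state[name] for name in FEATURE_NAMES])

def evaluation(state: dict) -> float:
    """Linear evaluation V(s) = theta' phi(s): an approximate value function."""
    return float(THETA @ features(state))

# Example: White is up material but has an exposed king.
state = {"material_balance": 3.0, "mobility": 12.0,
         "king_safety": -2.0, "pawn_structure": 1.0}
print(evaluation(state))  # 3.0 + 1.2 - 1.0 + 0.25 = 3.45
```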
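The second sketch illustrates the nested structure that makes Bonanza's training equivalent to NFXP: an inner backward-induction ("search") step computes choice-specific values under candidate parameters theta, nested inside an outer maximum-likelihood loop over theta. The toy game, features, logit choice model, and grid-search optimizer are all illustrative assumptions, not Bonanza's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = [-1.0, 0.0, 1.0]

def transition(s, a):
    """Deterministic toy transition s' = f(s, a)."""
    return 0.9 * s + a

def phi(s):
    """Toy 'board features' of a (scalar) state."""
    return np.array([s, s ** 2])

def choice_values(s, theta, depth=2):
    """Inner step: depth-limited negamax search, evaluating leaves with
    the linear value function V(s; theta) = theta' phi(s)."""
    def negamax(state, d):
        if d == 0:
            return theta @ phi(state)
        return max(-negamax(transition(state, a), d - 1) for a in ACTIONS)
    return np.array([-negamax(transition(s, a), depth - 1) for a in ACTIONS])

def log_likelihood(theta, data):
    """Outer objective: logit log-likelihood of the observed expert moves."""
    ll = 0.0
    for s, chosen in data:
        v = choice_values(s, theta)
        v -= v.max()                      # numerical stabilization
        ll += v[chosen] - np.log(np.exp(v).sum())
    return ll

# Synthetic "expert" data generated under a known true theta.
THETA_TRUE = np.array([1.0, -0.5])
data = []
for s in rng.uniform(-2, 2, size=200):
    v = choice_values(s, THETA_TRUE)
    p = np.exp(v - v.max())
    data.append((s, rng.choice(len(ACTIONS), p=p / p.sum())))

# Outer loop: crude grid search (a stand-in for a proper MLE optimizer).
grid = [np.array([a, b]) for a in np.linspace(0.0, 2.0, 9)
        for b in np.linspace(-1.0, 0.0, 9)]
theta_hat = max(grid, key=lambda th: log_likelihood(th, data))
print("estimated theta:", theta_hat)   # should be near THETA_TRUE
```

The grid search stands in for the outer MLE optimization; Rust's NFXP nests a full dynamic-programming solution inside that loop, whereas chess and shogi programs truncate the backward induction with a depth-limited search.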
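The third sketch is a toy version of first-stage CCP estimation with a neural network: fit the conditional choice probabilities P(a | s) by maximum likelihood (softmax cross-entropy) on observed (state, expert move) pairs. The tiny one-hidden-layer architecture and the synthetic data are stand-ins; AlphaGo's SL policy network is a much deeper convolutional network trained on positions from online Go games.

```python
import numpy as np

rng = np.random.default_rng(0)
S_DIM, N_ACTIONS, HIDDEN, N = 4, 3, 16, 1000

# Synthetic "expert" data: moves drawn from a smooth true policy.
X = rng.normal(size=(N, S_DIM))                  # observed board states
Z = X @ rng.normal(size=(S_DIM, N_ACTIONS))      # true latent utilities
P_true = np.exp(Z - Z.max(axis=1, keepdims=True))
P_true /= P_true.sum(axis=1, keepdims=True)
y = np.array([rng.choice(N_ACTIONS, p=p) for p in P_true])  # expert moves

# One-hidden-layer policy network: state -> tanh layer -> softmax over moves.
W1 = rng.normal(scale=0.1, size=(S_DIM, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_ACTIONS))

def forward(X):
    H = np.tanh(X @ W1)
    L = H @ W2
    P = np.exp(L - L.max(axis=1, keepdims=True))
    return H, P / P.sum(axis=1, keepdims=True)

for step in range(1000):        # full-batch gradient ascent on log-likelihood
    H, P = forward(X)
    G = P.copy()
    G[np.arange(N), y] -= 1.0   # gradient of mean NLL w.r.t. the logits
    G /= N
    dW2 = H.T @ G
    dW1 = X.T @ ((G @ W2.T) * (1.0 - H ** 2))    # backprop through tanh
    W1 -= 0.5 * dW1
    W2 -= 0.5 * dW2

_, P_hat = forward(X)           # P_hat are the estimated CCPs, P(a | s)
print("mean NLL:", -np.log(P_hat[np.arange(N), y]).mean())
```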
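The last sketch illustrates the CCS logic of the RL value network: forward-simulate many games by "self-play" of the first-stage policy, record who eventually wins, and fit a flexible model of the winning probability as a function of the state. The toy game, the fixed logit policy, and the logistic value model are all illustrative assumptions, not AlphaGo's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = np.array([-1.0, 1.0])

def policy_probs(s):
    """First-stage policy P(a | s): a fixed logit rule standing in for the
    estimated SL policy network."""
    z = np.array([0.5 * s * a for a in ACTIONS])
    p = np.exp(z - z.max())
    return p / p.sum()

def simulate(s0, horizon=20):
    """Roll one game forward under the policy; return 1 if player 1 wins."""
    s = s0
    for _ in range(horizon):
        a = rng.choice(ACTIONS, p=policy_probs(s))
        s = 0.9 * s + a + 0.1 * rng.normal()     # toy state transition
    return 1.0 if s > 0 else 0.0

# Step 1: simulate outcomes from many sampled board states.
states = rng.uniform(-3.0, 3.0, size=2000)
wins = np.array([simulate(s) for s in states])

# Step 2: regress simulated outcomes on state features to obtain the value
# function V(s) = Pr(win | s); here, logistic regression by gradient ascent.
PHI = np.column_stack([np.ones_like(states), states])
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-PHI @ w))
    w += 0.1 * PHI.T @ (wins - p) / len(wins)

print("fitted value parameters:", w)
print("estimated Pr(win | s = 1):", 1.0 / (1.0 + np.exp(-(w[0] + w[1]))))
```

In AlphaGo, both steps use deep networks and vastly more simulation, but the estimator's logic is the same: simulate with the first-stage policy, then project outcomes onto states.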
After introducing basic notation in section 2, I describe the main components of Deep Blue, Bonanza, and AlphaGo in sections 3, 4, and 5, respectively, and explain their structural interpretations. Section 6 clarifies some of the implicit assumptions underlying these AIs, such as (the absence of) unobserved heterogeneity, strategic interactions, and the various constraints human players face in real games. Section 7 concludes by suggesting that relaxing some of these assumptions and explicitly incorporating more realistic features of the data-generating process could help make AIs both more human-like (if needed) and more amenable to structural interpretations.

Literature. This paper clarifies the equivalence between some of the algorithms for developing game AIs and the aforementioned econometric methods for estimating dynamic models. As such, the most closely related papers are Rust (1987), Hotz and Miller (1993), and Hotz, Miller, Sanders, and Smith (1994). The game AIs I analyze in this paper are probably the most successful (or at least the most popular) empirical applications of these methods. For a historical review of numerical methods for dynamic programming, see Rust (2017). At a higher level, the purpose of this paper is to clarify the connections between machine learning and econometrics in certain areas. Hence, the paper shares the spirit of, for example, Belloni, Chernozhukov, and Hansen (2014), Varian (2014), Athey (2017), and Mullainathan and Spiess (2017), among many others in the rapidly growing literature on data analysis at the intersection of computer science and economics.

2 Model

Rules. Chess, shogi, and Go belong to the same class of games, with two players ($i = 1, 2$), discrete time ($t = 1, 2, \ldots$), alternating moves (players 1 and 2 choose their actions, $a_t$, in odd and even periods, respectively), perfect information, and a deterministic state transition,

$$ s_{t+1} = f(s_t, a_t), \qquad (1) $$

where both the transition, $f(\cdot)$, and the initial state, $s_1$, are completely determined by the rules of each game.[5] The action space is finite and is defined by the rules as the set of "legal moves,"

$$ a_t \in \mathcal{A}(s_t). \qquad (2) $$

The state space is finite as well, and consists of four mutually exclusive subsets:

$$ s_t \in \mathcal{S} = \mathcal{S}^{\text{cont}} \sqcup \mathcal{S}^{\text{win}} \sqcup \mathcal{S}^{\text{loss}} \sqcup \mathcal{S}^{\text{draw}}, \qquad (3) $$

[5] This setup abstracts from the time constraints in official games because the developers of game AIs typically do not incorporate them at the data-analysis stage. Hence, $t$ represents turn-to-move,