Actor-Critic Fictitious Play in Simultaneous Move Multistage Games Julien Pérolat, Bilal Piot, Olivier Pietquin

Total Pages: 16

File Type: pdf, Size: 1020 KB

To cite this version: Julien Pérolat, Bilal Piot, Olivier Pietquin. Actor-Critic Fictitious Play in Simultaneous Move Multistage Games. AISTATS 2018 - 21st International Conference on Artificial Intelligence and Statistics, Apr 2018, Playa Blanca, Lanzarote, Canary Islands, Spain. HAL Id: hal-01724227, https://hal.inria.fr/hal-01724227, submitted on 6 Mar 2018. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Julien Pérolat¹, Bilal Piot¹, Olivier Pietquin¹ — Univ. Lille. ¹ Now with DeepMind, London (UK). Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS) 2018, Lanzarote, Spain. PMLR: Volume 84. Copyright 2018 by the author(s).

Abstract

Fictitious play is a game-theoretic iterative procedure meant to learn an equilibrium in normal form games. However, this algorithm requires that each player has full knowledge of other players' strategies. Using an architecture inspired by actor-critic algorithms, we build a stochastic approximation of the fictitious play process. This procedure is on-line, decentralized (an agent has no information on others' strategies and rewards) and applies to multistage games (a generalization of normal form games). In addition, we prove convergence of our method towards a Nash equilibrium in both the case of zero-sum two-player multistage games and the case of cooperative multistage games. We also provide empirical evidence of the soundness of our approach on the game of Alesia, with and without function approximation.

1 Introduction

Go, Chess, Checkers and Oshi-Zumo [10] are just a few examples of multistage games [7]. In these games, the interaction proceeds from stage to stage without looping back to a previously encountered situation. This model groups a broad class of multi-agent sequential decision processes where the interaction never returns to the same state. This work focuses on Multi-Agent Reinforcement Learning [11] (MARL) in multistage games. In this multi-agent environment, players evolve from state to state as a result of their mutual actions. During this interaction, all players receive a reward informing them on how good their action was when performed in the state they were in. The goal of MARL is to learn a strategy that maximally accumulates rewards over time. Whilst the problem is fairly well understood in single-agent RL, learning while independently interacting with other agents remains only superficially explored. The range of open questions in that area is so wide [31] that it is worth giving a precise definition of our goal. In this paper, we follow a prescriptive agenda: we intend to find a learning algorithm that provably converges to a Nash equilibrium in cooperative and in non-cooperative games. The goal is to find a strategy that can be executed independently by each player and that corresponds to a Nash equilibrium. Many, if not most, approaches to this problem consider a centralized learning procedure that produces an independent strategy for each player [21]. Centralized learning procedures are quite common and often perform better than decentralized ones [13], but they require synchronization between agents during learning, which is their main limitation. The agenda we follow in this paper is to propose a decentralized, on-line learning method that provably converges to a Nash equilibrium in self-play. Decentralized algorithms allow building identical, independent agents that rely on nothing but the observation of their own state and reward, with no central controller being required; on-line algorithms, on the other hand, allow learning while playing and do not require prior computation of possible strategies.

This agenda is a fertile ground of interaction between traditional RL and game theory. Indeed, RL aims at building autonomous agents that learn on-line in games against nature (where the environment is not interested in winning). For that reason, a wide variety of single-agent RL algorithms have been adapted to multi-agent problems. But several major issues prevent the direct use of standard RL in multi-agent systems. First, blindly applying single-agent RL in a decentralized fashion implies that, from each agent's point of view, the other agents are part of the environment. Such a hypothesis breaks the crucial RL assumption that the environment is (at least almost) stationary [22]. Second, it introduces partial observability, as each agent's knowledge is restricted to its own actions and rewards while its behavior should depend on others' strategies.

Decentralized procedures (unlike counterfactual regret minimization algorithms [34]) have been the topic of many studies in game theory, and many approaches have been proposed, from policy hill climbing methods [8, 2] to evolutionary dynamics [33, 1] (related work is detailed in Sec. 2). But those dynamics do not converge in all general-sum normal-form games and, moreover, there exists a three-player normal form game [15] for which no first-order uncoupled dynamics (i.e. most decentralized dynamics) can converge to a Nash equilibrium. Despite this counterexample, decentralized dynamics remain an important case to study, because building a central controller for a multi-agent system is not always possible, nor is observing the actions and rewards of every agent. Even if decentralized learning processes (as described in [15]) will never be guaranteed to converge in general, they should at least be guaranteed to converge in some interesting classes of games such as cooperative and zero-sum two-player games.

Fictitious play is a model-based process that learns Nash equilibria in normal form games. It has been widely studied, and the required assumptions have been weakened over time [23, 17] since the original article of Robinson [28]. It has been extended to extensive form games (game trees) and, to a lesser extent, to function approximation [16]. However, it is neither on-line nor decentralized, except for the work of [23], which focuses on normal form games, and [16], which has weak convergence guarantees and focuses on turn-taking imperfect information games. Fictitious play enjoys several convergence guarantees [17], which makes it a good candidate for learning in simultaneous move multistage games.

This paper contributes to filling a gap in the MARL literature by providing two online decentralized algorithms that converge to a Nash equilibrium in multistage games, both in the cooperative case and in the zero-sum two-player case. Those two cases used to be treated as different agendas since the seminal paper of Shoham et al. [31], and we expect our work to serve as a milestone to reconcile them, going further than normal form games [23, 17]. Our first contribution is to propose two novel on-line and decentralized algorithms inspired by actor-critic architectures: the first one performs an off-policy control step whilst the second relies on a policy evaluation step. Although the actor-critic architecture is popular for its success in solving (continuous action) RL domains, we choose this architecture for a different reason: our framework requires handling non-stationarity (because of the adaptation of the other players), which is another nice property of actor-critic architectures. Our algorithms are stochastic approximations of two dynamical systems that generalize the work of [23] and [17] on the fictitious play process from normal form games to multistage games [7].

In the following, we first outline related work (Sec. 2) and then describe the necessary background in both game theory and RL (Sec. 3) to introduce our first contribution, the two-timescale algorithms (Sec. 4). These algorithms are stochastic approximations of two continuous-time processes defined in Sec. 5. Then, we study (in Sec. 5) the asymptotic behavior of these continuous-time processes and show, as a second contribution, that they converge in self-play in cooperative games and in zero-sum two-player games. In Sec. 6, our third contribution proves that the algorithms are stochastic approximations of the two continuous-time processes. Finally, we perform an empirical evaluation (Sec. 7).

2 Related Work

Decentralized reinforcement learning in games has been studied widely in the case of normal form games and includes regret minimization approaches [9, 12] and stochastic approximation algorithms [23]. However, to our knowledge, none of these methods has been extended to independent reinforcement learning in Markov games, or in intermediate models such as MSGs, with convergence guarantees for both the cooperative and the zero-sum case. Finding a single independent RL algorithm addressing both cases has been treated as separate agendas since the seminal paper [31].

Q-Learning Like Algorithms: The adaptation of RL algorithms to the multi-agent setting was the first approach to address online learning in games. On-line algorithms like Q-learning [32] are often used in cooperative multi-agent learning environments but fail to learn a stationary strategy in simultaneous zero-sum two-player games. They fail in this setting because, in simultaneous zero-sum two-player games, it is not sufficient to use a greedy strategy to learn a Nash equilibrium. In [25], the Q-learning method is adapted to guarantee convergence in zero-sum two-player MGs.
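To make the flavour of the approach concrete, below is a minimal, self-contained sketch of a decentralized, two-timescale, fictitious-play-style actor-critic update on a zero-sum normal form game (matching pennies). It is only an illustration under our own simplifying assumptions (tabular critics, argmax best responses, hand-picked step sizes); it is not the paper's multistage-game algorithm.

```python
import numpy as np

# Minimal sketch of a decentralized, two-timescale "fictitious play with a critic"
# update on matching pennies. Illustration only; NOT the paper's multistage algorithm.
rng = np.random.default_rng(0)

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])   # payoff of player 0; player 1 receives the negative
n_actions = 2

Q = [np.zeros(n_actions), np.zeros(n_actions)]            # critics: per-action reward estimates
pi = [np.ones(n_actions) / n_actions for _ in range(2)]   # actors: slowly moving strategies

for t in range(1, 50_001):
    alpha = t ** -0.6   # fast (critic) step size
    beta = 1.0 / t      # slow (actor) step size

    # Each player samples an action from its own strategy only (no shared information).
    actions = [rng.choice(n_actions, p=pi[i]) for i in range(2)]
    r0 = A[actions[0], actions[1]]
    rewards = [r0, -r0]

    for i in range(2):
        # Critic step: stochastic estimate of the reward of the action just played.
        Q[i][actions[i]] += alpha * (rewards[i] - Q[i][actions[i]])
        # Actor step: move the strategy a little toward a best response to the critic.
        best_response = np.zeros(n_actions)
        best_response[np.argmax(Q[i])] = 1.0
        pi[i] = (1.0 - beta) * pi[i] + beta * best_response
        pi[i] /= pi[i].sum()   # guard against floating point drift

# In this zero-sum game the strategies should hover around the mixed equilibrium [0.5, 0.5].
print(np.round(pi[0], 3), np.round(pi[1], 3))
```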
Recommended publications
  • Learning and Equilibrium
    Learning and Equilibrium. Drew Fudenberg (Department of Economics, Harvard University, Cambridge, Massachusetts; email: [email protected]) and David K. Levine (Department of Economics, Washington University of St. Louis, St. Louis, Missouri; email: [email protected]). Annu. Rev. Econ. 2009. 1:385–419. First published online as a Review in Advance on June 11, 2009. doi: 10.1146/annurev.economics.050708.142930. Key Words: nonequilibrium dynamics, bounded rationality, Nash equilibrium, self-confirming equilibrium.

    Abstract: The theory of learning in games explores how, which, and what kind of equilibria might arise as a consequence of a long-run nonequilibrium process of learning, adaptation, and/or imitation. If agents' strategies are completely observed at the end of each round (and agents are randomly matched with a series of anonymous opponents), fairly simple rules perform well in terms of the agent's worst-case payoffs, and also guarantee that any steady state of the system must correspond to an equilibrium. If players do not observe the strategies chosen by their opponents (as in extensive-form games), then learning is consistent with steady states that are not Nash equilibria because players can maintain incorrect beliefs about off-path play. Beliefs can also be incorrect because of cognitive limitations and systematic inferential errors.
  • Improving Fictitious Play Reinforcement Learning with Expanding Models
    Improving Fictitious Play Reinforcement Learning with Expanding Models. Rong-Jun Qin¹,², Jing-Cheng Pang¹, Yang Yu¹,†. ¹ National Key Laboratory for Novel Software Technology, Nanjing University, China; ² Polixir. Emails: [email protected], [email protected], [email protected]. † To whom correspondence should be addressed.

    Abstract: Fictitious play with reinforcement learning is a general and effective framework for zero-sum games. However, using the current deep neural network models, the implementation of fictitious play faces crucial challenges. Neural network model training employs gradient descent approaches to update all connection weights, and thus it is easy to forget the old opponents after training to beat the new opponents. Existing approaches often maintain a pool of historical policy models to avoid the forgetting. However, learning to beat a pool in stochastic games, i.e., a wide distribution over policy models, is either sample-consuming or insufficient to exploit all models with a limited amount of samples. In this paper, we propose a learning process with neural fictitious play to alleviate the above issues. We train a single model as our policy model, which consists of sub-models and a selector. Every time the agent faces a new opponent, the model is expanded by adding a new sub-model, where only the new sub-model is updated instead of the whole model. At the same time, the selector is also updated to mix up the new sub-model with the previous ones at the state level, so that the model is maintained as a behavior strategy instead of a wide distribution over policy models.
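    As a rough structural sketch of the idea described in this excerpt (a single policy made of frozen per-opponent sub-models plus a selector that mixes them at decision time), here is a toy illustration; the class, the state-independent selector weights, and the callable sub-policies are our own simplifications, not the authors' implementation.

```python
import numpy as np

class ExpandingPolicy:
    """Toy container: frozen sub-policies plus selector weights that mix them."""

    def __init__(self, n_actions):
        self.n_actions = n_actions
        self.sub_models = []    # one frozen sub-policy per previously met opponent
        self.selector_w = []    # selector weights (state-independent here, for brevity)

    def add_sub_model(self, sub_policy):
        # When a new opponent appears, only this new sub-model (and the selector)
        # would be trained; earlier sub-models stay frozen.
        self.sub_models.append(sub_policy)
        self.selector_w.append(1.0)

    def action_probs(self, state):
        if not self.sub_models:
            return np.ones(self.n_actions) / self.n_actions
        w = np.asarray(self.selector_w)
        w = w / w.sum()
        mix = sum(wi * np.asarray(m(state)) for wi, m in zip(w, self.sub_models))
        return mix / mix.sum()

# Usage: each sub-policy is any callable mapping a state to an action distribution.
policy = ExpandingPolicy(n_actions=3)
policy.add_sub_model(lambda s: [0.6, 0.2, 0.2])   # hypothetical sub-policy vs. opponent 1
policy.add_sub_model(lambda s: [0.1, 0.1, 0.8])   # hypothetical sub-policy vs. opponent 2
print(policy.action_probs(state=None))            # mixed behaviour strategy
```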
  • Approximation Guarantees for Fictitious Play
    Approximation Guarantees for Fictitious Play. Vincent Conitzer.

    Abstract — Fictitious play is a simple, well-known, and often-used algorithm for playing (and, especially, learning to play) games. However, in general it does not converge to equilibrium; even when it does, we may not be able to run it to convergence. Still, we may obtain an approximate equilibrium. In this paper, we study the approximation properties that fictitious play obtains when it is run for a limited number of rounds. We show that if both players randomize uniformly over their actions in the first r rounds of fictitious play, then the result is an ε-equilibrium, where ε = (r + 1)/(2r). (Since we are examining only a constant number of pure strategies, we know that ε < 1/2 is impossible, due to a result of Feder et al.) We show that this bound is tight in the worst case; however, with an experiment on random games, we illustrate that fictitious play…

    …significant progress has been made in the computation of approximate Nash equilibria. It has been shown that for any ε, there is an ε-approximate equilibrium with support size O((log n)/ε²) [1], [22], so that we can find approximate equilibria by searching over all of these supports. More recently, Daskalakis et al. gave a very simple algorithm for finding a 1/2-approximate Nash equilibrium [12]; we will discuss this algorithm shortly. Feder et al. then showed a lower bound on the size of the supports that must be considered to be guaranteed to find an approximate equilibrium [16]; in particular, supports of constant sizes can give at best a 1/2-approximate Nash equilibrium.
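    To make the quoted guarantee concrete, here is a quick numeric check (ours, not from the paper) of how the bound ε = (r + 1)/(2r) decreases from 1 toward its 1/2 limit as the number of rounds r grows.

```python
# eps(r) = (r + 1) / (2 * r): equals 1 at r = 1 and tends to 1/2 as r grows.
for r in [1, 2, 5, 10, 100, 1000]:
    eps = (r + 1) / (2 * r)
    print(f"r = {r:4d}  ->  eps-equilibrium guarantee: {eps:.4f}")
```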
  • Modeling Human Learning in Games
    Modeling Human Learning in Games. Thesis by Norah Alghamdi, in partial fulfillment of the requirements for the degree of Master of Science, King Abdullah University of Science and Technology, Thuwal, Kingdom of Saudi Arabia, December 2020. Examination committee — Chairperson: Prof. Jeff S. Shamma; Members: Prof. Eric Feron, Prof. Meriem T. Laleg. © December 2020, Norah Alghamdi. All Rights Reserved.

    Abstract: Human-robot interaction is an important and broad area of study. To achieve successful interaction, we have to study human decision making rules. This work investigates human learning rules in games with the presence of intelligent decision makers. Particularly, we analyze human behavior in a congestion game. The game models traffic in a simple scenario where multiple vehicles share two roads. Ten vehicles are controlled by the human player, who decides how to distribute these vehicles over the two roads. There are a hundred simulated players, each controlling one vehicle. The game is repeated for many rounds, allowing the players to adapt and formulate a strategy, and after each round the cost of the roads and visual assistance are shown to the human player. The goal of all players is to minimize the total congestion experienced by the vehicles they control. In order to demonstrate our results, we first built a human player simulator using Fictitious Play and Regret Matching algorithms. Then, we showed the passivity property of these algorithms after adjusting the passivity condition to suit the discrete time formulation.
  • Lecture 10: Learning in Games Ramesh Johari May 9, 2007
    MS&E 336 Lecture 10: Learning in games. Ramesh Johari. May 9, 2007.

    This lecture introduces our study of learning in games. We first give a conceptual overview of the possible approaches to studying learning in repeated games; in particular, we distinguish between approaches that use a Bayesian model of the opponents, vs. nonparametric or "model-free" approaches to playing against the opponents. Our investigation will focus almost entirely on the second class of models, where the main results are closely tied to the study of regret minimization in online learning. We introduce two notions of regret minimization, and also consider the corresponding equilibrium notions. (Note that we focus attention on repeated games primarily because learning results in stochastic games are significantly weaker.)

    Throughout the lecture we consider a finite N-player game, where each player i has a finite pure action set A_i; let A = ∏_i A_i, and let A_{-i} = ∏_{j≠i} A_j. We let a_i denote a pure action for player i, and let s_i ∈ ∆(A_i) denote a mixed action for player i. We will typically view s_i as a vector in R^{A_i}, with s_i(a_i) equal to the probability that player i places on a_i. We let Π_i(a) denote the payoff to player i when the composite pure action vector is a, and by an abuse of notation also let Π_i(s) denote the expected payoff to player i when the composite mixed action vector is s. More generally, if q is a joint probability distribution on the set A, we let Π_i(q) = Σ_{a∈A} q(a) Π_i(a).
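    A tiny numeric example (ours, not from the lecture notes) of the payoff notation above: under independent mixed actions, Π_i(s) is a bilinear form in the players' mixed strategies, while Π_i(q) under a general joint distribution q is a probability-weighted sum over joint pure actions.

```python
import numpy as np

# Pi_1(a1, a2) for a hypothetical 2x2 game (player 1's payoffs only, for brevity).
payoff_1 = np.array([[3.0, 0.0],
                     [5.0, 1.0]])
s1 = np.array([0.25, 0.75])   # mixed action of player 1
s2 = np.array([0.60, 0.40])   # mixed action of player 2

# Independent mixing: Pi_1(s) = sum_{a1, a2} s1(a1) * s2(a2) * Pi_1(a1, a2) = s1^T payoff_1 s2.
print(s1 @ payoff_1 @ s2)

# General (possibly correlated) joint distribution q on A: Pi_1(q) = sum_a q(a) * Pi_1(a).
q = np.array([[0.4, 0.1],
              [0.2, 0.3]])    # q(a1, a2), entries sum to 1
print(np.sum(q * payoff_1))
```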
  • On the Convergence of Fictitious Play
    On the Convergence of Fictitious Play. Author(s): Vijay Krishna and Tomas Sjöström. Source: Mathematics of Operations Research, Vol. 23, No. 2 (May, 1998), pp. 479-511. Published by: INFORMS. Stable URL: http://www.jstor.org/stable/3690523. Accessed: 09/12/2010 03:21.
  • Information and Beliefs in a Repeated Normal-Form Game
    Information and Beliefs in a Repeated Normal-form Game. Dietmar Fehr (Technische Universität Berlin), Dorothea Kübler (Technische Universität Berlin & IZA), David Danz (Technische Universität Berlin). March 29, 2008. SFB 649 Discussion Paper 2008-026, SFB 649 "Economic Risk", Humboldt-Universität zu Berlin, Spandauer Straße 1, D-10178 Berlin, http://sfb649.wiwi.hu-berlin.de, ISSN 1860-5664. This research was supported by the Deutsche Forschungsgemeinschaft through the SFB 649 "Economic Risk".

    Abstract: We study beliefs and choices in a repeated normal-form game. In addition to a baseline treatment with common knowledge of the game structure and feedback about choices in the previous period, we run treatments (i) without feedback about previous play, (ii) with no information about the opponent's payoffs and (iii) with random matching. Using Stahl and Wilson's (1995) model of limited strategic reasoning, we classify behavior with regard to its strategic sophistication and consider its development over time. We use belief statements to track the consistency of subjects' actions and beliefs as well as the accuracy of their beliefs (relative to the opponent's true choice) over time. In the baseline treatment we observe more sophisticated play as well as more consistent and more accurate beliefs over time. We isolate feedback as the main driving force of such learning.
  • Fast Convergence of Fictitious Play for Diagonal Payoff Matrices (arXiv:1911.08418v3 [cs.GT], 15 Nov 2020)
    Fast Convergence of Fictitious Play for Diagonal Payoff Matrices. Jacob Abernethy*, Kevin A. Lai†, Andre Wibisono‡.

    Abstract: Fictitious Play (FP) is a simple and natural dynamic for repeated play in zero-sum games. Proposed by Brown in 1949, FP was shown to converge to a Nash Equilibrium by Robinson in 1951, albeit at a slow rate that may depend on the dimension of the problem. In 1959, Karlin conjectured that FP converges at the more natural rate of O(1/√t). However, Daskalakis and Pan disproved a version of this conjecture in 2014, showing that a slow rate can occur, although their result relies on adversarial tie-breaking. In this paper, we show that Karlin's conjecture is indeed correct for the class of diagonal payoff matrices, as long as ties are broken lexicographically. Specifically, we show that FP converges at a O(1/√t) rate in the case when the payoff matrix is diagonal.

    …later showed the same holds for non-zero-sum games [20]. Von Neumann's theorem is often stated in terms of the equivalence of a min-max versus a max-min: min_{x∈∆_n} max_{y∈∆_m} xᵀAy = max_{y∈∆_m} min_{x∈∆_n} xᵀAy. It is easy to check that the minimizer of the left hand side and the maximizer of the right exhibit the desired equilibrium pair. One of the earliest methods for computing Nash Equilibria in zero-sum games is fictitious play (FP), proposed by Brown [7, 8]. FP is perhaps the simplest dynamic one might envision for repeated play in a game: in each round, each player considers the empirical distri…
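    As an informal illustration of the setting (our own sketch with an arbitrary diagonal matrix and our own monitoring choices; not the paper's analysis or experiments), one can run fictitious play with lexicographic tie-breaking on a diagonal payoff matrix and watch the duality gap of the empirical strategies shrink roughly like 1/√t.

```python
import numpy as np

# Illustrative only: fictitious play with lexicographic tie-breaking (np.argmin /
# np.argmax pick the first optimizer) on the diagonal zero-sum game x^T A y,
# where x minimizes and y maximizes.
d = np.array([1.0, 2.0, 3.0])
A = np.diag(d)
n = len(d)

x_counts = np.zeros(n)
y_counts = np.zeros(n)
x_counts[0] = 1.0
y_counts[0] = 1.0   # start from the lexicographically smallest pure actions

for t in range(1, 20_001):
    x_bar = x_counts / x_counts.sum()   # empirical (time-averaged) strategies
    y_bar = y_counts / y_counts.sum()
    x_counts[np.argmin(A @ y_bar)] += 1.0    # x best-responds to y's empirical play
    y_counts[np.argmax(A.T @ x_bar)] += 1.0  # y best-responds to x's empirical play

x_bar = x_counts / x_counts.sum()
y_bar = y_counts / y_counts.sum()
gap = np.max(A.T @ x_bar) - np.min(A @ y_bar)   # duality gap of the empirical strategies
print("duality gap:", gap,
      " empirical value:", x_bar @ A @ y_bar,
      " exact value:", 1.0 / np.sum(1.0 / d))
```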
  • Chronology of Game Theory
    A Chronology of Game Theory, by Paul Walker, September 2012. http://www.econ.canterbury.ac.nz/personal_pages/paul_walker/g...

    0-500 AD: The Babylonian Talmud is the compilation of ancient law and tradition set down during the first five centuries A.D. which serves as the basis of Jewish religious, criminal and civil law. One problem discussed in the Talmud is the so-called marriage contract problem: a man has three wives whose marriage contracts specify that in the case of his death they receive 100, 200 and 300 respectively. The Talmud gives apparently contradictory recommendations. Where the man dies leaving an estate of only 100, the Talmud recommends equal division. However, if the estate is worth 300 it recommends proportional division (50, 100, 150), while for an estate of 200, its recommendation of (50, 75, 75) is a complete mystery. This particular Mishna has baffled Talmudic scholars for two millennia. In 1985, it was recognised that the Talmud anticipates the modern theory of cooperative games: each solution corresponds to the nucleolus of an appropriately defined game.

    1713: In a letter dated 13 November 1713, Francis Waldegrave provided the first known minimax mixed strategy solution to a two-person game. Waldegrave wrote the letter, about a two-person version of the card game le Her, to Pierre-Remond de Montmort, who in turn wrote to Nicolas Bernoulli, including in his letter a discussion of the Waldegrave solution.
  • A Choice Prediction Competition for Market Entry Games: an Introduction
    A Choice Prediction Competition for Market Entry Games: An Introduction. Ido Erev 1,*, Eyal Ert 2 and Alvin E. Roth 3,4. 1 Max Wertheimer Minerva Center for Cognitive Studies, Faculty of Industrial Engineering and Management, Technion, Haifa 32000, Israel; 2 Computer Laboratory for Experimental Research, Harvard Business School, Boston, MA 02163, USA; 3 Department of Economics, 308 Littauer, Harvard University, Cambridge, MA 02138, USA; 4 Harvard Business School, 441 Baker Library, Boston, MA 02163, USA. * Author to whom correspondence should be addressed; E-Mail: [email protected]. Games 2010, 1, 117-136; doi:10.3390/g1020117. Open access, ISSN 2073-4336, www.mdpi.com/journal/games. Received: 30 April 2010 / Accepted: 12 May 2010 / Published: 14 May 2010.

    Abstract: A choice prediction competition is organized that focuses on decisions from experience in market entry games (http://sites.google.com/site/gpredcomp/ and http://www.mdpi.com/si/games/predict-behavior/). The competition is based on two experiments: an estimation experiment, and a competition experiment. The two experiments use the same methods and subject pool, and examine games randomly selected from the same distribution. The current introductory paper presents the results of the estimation experiment, and clarifies the descriptive value of several baseline models. The experimental results reveal the robustness of eight behavioral tendencies that were documented in previous studies of market entry games and individual decisions from experience. The best baseline model (I-SAW) assumes reliance on small samples of experiences, and strong inertia when the recent results are not surprising.
  • Quantal Response Methods for Equilibrium Selection in 2×2 Coordination Games
    Quantal response methods for equilibrium selection in 2 × 2 coordination games. Boyu Zhang (Laboratory of Mathematics and Complex System, Ministry of Education, School of Mathematical Sciences, Beijing Normal University, Beijing 100875, PR China) and Josef Hofbauer (Department of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, A-1090, Vienna, Austria). Games and Economic Behavior 97 (2016) 19–31, www.elsevier.com/locate/geb. Received 2 May 2013; available online 21 March 2016. JEL classification: C61, C73, D58. Keywords: quantal response equilibrium, equilibrium selection, logit equilibrium, logarithmic game, punishment. © 2016 Elsevier Inc. All rights reserved.

    Abstract: The notion of quantal response equilibrium (QRE), introduced by McKelvey and Palfrey (1995), has been widely used to explain experimental data. In this paper, we use quantal response equilibrium as a homotopy method for equilibrium selection, and study this in detail for 2 × 2 bimatrix coordination games. We show that the risk dominant equilibrium need not be selected. In the logarithmic game, the limiting QRE is the Nash equilibrium with the larger sum of square root payoffs. Finally, we apply the quantal response methods to the mini public goods game with punishment. A cooperative equilibrium can be selected if punishment is strong enough.

    1. Introduction. Quantal response equilibrium (QRE) was introduced by McKelvey and Palfrey (1995) in the context of bounded rationality. In a QRE, players do not always choose best responses. Instead, they make decisions based on a probabilistic choice model and assume other players do so as well.
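    For intuition about the logit (quantal) response used in this line of work, here is a small fixed-point sketch on a 2 × 2 coordination game; the payoffs, the rationality parameter λ, and the damping are our own illustrative choices and are not taken from the paper.

```python
import numpy as np

# u1[a, b]: payoff of player 1 when player 1 plays a and player 2 plays b; u2 likewise.
u1 = np.array([[4.0, 0.0],
               [3.0, 2.0]])      # hypothetical stag-hunt style coordination payoffs
u2 = u1.T.copy()                 # symmetric game: player 2's payoff mirrors player 1's

def logit_response(values, lam):
    z = np.exp(lam * (values - values.max()))   # numerically stable softmax
    return z / z.sum()

lam = 2.0                        # rationality parameter; lam -> infinity recovers best responses
p = np.array([0.5, 0.5])         # player 1's mixed strategy
q = np.array([0.5, 0.5])         # player 2's mixed strategy

for _ in range(5000):
    p_next = logit_response(u1 @ q, lam)     # expected payoff of each action of player 1
    q_next = logit_response(u2.T @ p, lam)   # expected payoff of each action of player 2
    # Damped update to help the fixed-point iteration settle.
    p = 0.5 * p + 0.5 * p_next
    q = 0.5 * q + 0.5 * q_next

print("logit QRE at lambda = 2:", np.round(p, 4), np.round(q, 4))
```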
  • Learning in Games I
    LEARNING AND EQUILIBRIUM. Simons Institute Economics and Computation Boot Camp, UC Berkeley, August 2015. Drew Fudenberg.

    Today: static games, where each player takes a single action and actions are simultaneous. Tomorrow: extensive form games, with strategies as "complete contingent plans."

    Rationality (even common knowledge of rationality) is neither necessary nor sufficient for Nash equilibrium ("NE"). Not sufficient: in games with multiple NE, there is no reason for play to look like any of the equilibria unless there is a reason all players expect the same equilibrium. Not necessary, theoretically (the replicator dynamic can converge to NE) or empirically (convergence to an approximation of NE is seen in colonies of bacteria).

    Learning-theoretic explanation: equilibrium arises as the long-run outcome of a non-equilibrium adaptive process. Experimental play does converge to Nash equilibrium in a reasonable time frame in some games of interest to economists, including Cournot duopoly, "voluntary contribution" games, the "beauty contest" game, and the "double auctions" used to explain equilibrium prices. (Vesterlund et al. [2011], J. Pub. Econ., on two different voluntary contribution games where the Nash equilibrium is for both players to contribute 3; Gill and Prowse [2012] on a "beauty contest" game with NE = 0.)

    To understand how and when equilibrium arises, look at the long-run behavior of non-equilibrium dynamic processes. Many sorts of adjustment processes, including biological evolution, have been said to involve "learning" in a broad sense. And it can be hard to draw