Trading Platform, the Prisoners' Dilemma, Rock-Paper-Scissors
Recommended publications
-
Learning and Equilibrium
Learning and Equilibrium. Drew Fudenberg (Department of Economics, Harvard University; email: [email protected]) and David K. Levine (Department of Economics, Washington University in St. Louis; email: [email protected]). Annual Review of Economics 1:385–419, 2009; first published online as a Review in Advance on June 11, 2009; doi: 10.1146/annurev.economics.050708.142930. Key words: nonequilibrium dynamics, bounded rationality, Nash equilibrium, self-confirming equilibrium. Abstract: The theory of learning in games explores how, which, and what kind of equilibria might arise as a consequence of a long-run nonequilibrium process of learning, adaptation, and/or imitation. If agents' strategies are completely observed at the end of each round (and agents are randomly matched with a series of anonymous opponents), fairly simple rules perform well in terms of the agent's worst-case payoffs, and also guarantee that any steady state of the system must correspond to an equilibrium. If players do not observe the strategies chosen by their opponents (as in extensive-form games), then learning is consistent with steady states that are not Nash equilibria, because players can maintain incorrect beliefs about off-path play. Beliefs can also be incorrect because of cognitive limitations and systematic inferential errors. -
Improving Fictitious Play Reinforcement Learning with Expanding Models
Improving Fictitious Play Reinforcement Learning with Expanding Models. Rong-Jun Qin (1,2), Jing-Cheng Pang (1), Yang Yu (1, corresponding author). (1) National Key Laboratory for Novel Software Technology, Nanjing University, China; (2) Polixir. Emails: [email protected], [email protected], [email protected]. Abstract: Fictitious play with reinforcement learning is a general and effective framework for zero-sum games. However, with current deep neural network models, the implementation of fictitious play faces crucial challenges. Neural network training uses gradient descent to update all connection weights, so the model easily forgets old opponents after training to beat new ones. Existing approaches often maintain a pool of historical policy models to avoid this forgetting. However, learning to beat a pool in stochastic games, i.e., a wide distribution over policy models, is either sample-consuming or insufficient to exploit all models with a limited number of samples. In this paper, we propose a learning process with neural fictitious play to alleviate these issues. We train a single model as our policy model, which consists of sub-models and a selector. Every time it faces a new opponent, the model is expanded by adding a new sub-model, and only the new sub-model is updated instead of the whole model. At the same time, the selector is also updated to mix the new sub-model with the previous ones at the state level, so that the model is maintained as a behavior strategy instead of a wide distribution over policy models. -
Approximation Guarantees for Fictitious Play
Approximation Guarantees for Fictitious Play. Vincent Conitzer. Abstract: Fictitious play is a simple, well-known, and often-used algorithm for playing (and, especially, learning to play) games. However, in general it does not converge to equilibrium; even when it does, we may not be able to run it to convergence. Still, we may obtain an approximate equilibrium. In this paper, we study the approximation properties that fictitious play obtains when it is run for a limited number of rounds. We show that if both players randomize uniformly over their actions in the first r rounds of fictitious play, then the result is an ε-equilibrium, where ε = (r + 1)/(2r). (Since we are examining only a constant number of pure strategies, we know that ε < 1/2 is impossible, due to a result of Feder et al.) We show that this bound is tight in the worst case; however, with an experiment on random games, we illustrate that fictitious play ... From the introduction: ... significant progress has been made in the computation of approximate Nash equilibria. It has been shown that for any ε, there is an ε-approximate equilibrium with support size O((log n)/ε²) [1], [22], so that we can find approximate equilibria by searching over all of these supports. More recently, Daskalakis et al. gave a very simple algorithm for finding a 1/2-approximate Nash equilibrium [12]; we will discuss this algorithm shortly. Feder et al. then showed a lower bound on the size of the supports that must be considered to be guaranteed to find an approximate equilibrium [16]; in particular, supports of constant sizes can give at best a 1/2-approximate Nash equilibrium. -
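The ε in the guarantee above has a direct operational meaning that is easy to compute for any given strategy profile. The following is a minimal sketch (not the paper's code): ε is the largest payoff any player could gain by deviating to a pure best response.

```python
# Sketch: measuring how close a mixed-strategy profile is to Nash equilibrium
# in a two-player game. epsilon = max over players of
# (best-response payoff - actual payoff).

def expected_payoff(M, x, y):
    """Expected payoff x^T M y for row mixed strategy x, column mixed strategy y."""
    return sum(x[i] * M[i][j] * y[j] for i in range(len(x)) for j in range(len(y)))

def epsilon_of(A, B, x, y):
    """Largest gain any player gets by deviating to a pure best response.
    A: row player's payoff matrix, B: column player's payoff matrix."""
    row_br = max(expected_payoff(A, [1.0 if k == i else 0.0 for k in range(len(x))], y)
                 for i in range(len(x)))
    col_br = max(expected_payoff(B, x, [1.0 if k == j else 0.0 for k in range(len(y))])
                 for j in range(len(y)))
    return max(row_br - expected_payoff(A, x, y),
               col_br - expected_payoff(B, x, y))

# Matching pennies (zero-sum): uniform play is an exact equilibrium.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(epsilon_of(A, B, [0.5, 0.5], [0.5, 0.5]))  # 0.0
```

A profile is an ε-equilibrium exactly when this quantity is at most ε, which is the sense in which the (r + 1)/(2r) bound above is stated.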
Modeling Human Learning in Games
Modeling Human Learning in Games. Thesis by Norah Alghamdi, in partial fulfillment of the requirements for the degree of Master of Science, King Abdullah University of Science and Technology, Thuwal, Kingdom of Saudi Arabia, December 2020. Examination committee: Prof. Jeff S. Shamma (chairperson), Prof. Eric Feron, Prof. Meriem T. Laleg. Abstract: Human-robot interaction is an important and broad area of study. To achieve successful interaction, we have to study human decision-making rules. This work investigates human learning rules in games in the presence of intelligent decision makers. In particular, we analyze human behavior in a congestion game. The game models traffic in a simple scenario where multiple vehicles share two roads. Ten vehicles are controlled by the human player, who decides how to distribute these vehicles between the two roads. There are one hundred simulated players, each controlling one vehicle. The game is repeated for many rounds, allowing the players to adapt and formulate a strategy, and after each round the cost of the roads and visual assistance are shown to the human player. The goal of all players is to minimize the total congestion experienced by the vehicles they control. To demonstrate our results, we first built a human-player simulator using the Fictitious Play and Regret Matching algorithms. Then, we showed the passivity property of these algorithms after adjusting the passivity condition to suit the discrete-time formulation. -
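The Regret Matching rule used in the simulator above (due to Hart and Mas-Colell) is simple to sketch. This illustrative snippet, with rock-paper-scissors standing in for the thesis's congestion game, plays each action with probability proportional to its positive cumulative regret:

```python
import random

def regret_matching_step(cum_regret):
    """Pick an action with probability proportional to positive cumulative
    regret (uniformly at random if no regret is positive)."""
    positive = [max(r, 0.0) for r in cum_regret]
    total = sum(positive)
    if total <= 0:
        return random.randrange(len(cum_regret))
    pick = random.uniform(0, total)
    acc = 0.0
    for a, p in enumerate(positive):
        acc += p
        if pick <= acc:
            return a
    return len(cum_regret) - 1

# Rock-paper-scissors payoff for the row player: U[i][j]
U = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

random.seed(0)
regret = [0.0, 0.0, 0.0]      # row player's cumulative regrets
opp_regret = [0.0, 0.0, 0.0]  # column player's cumulative regrets (self-play)
counts = [0, 0, 0]
T = 20000
for t in range(T):
    a = regret_matching_step(regret)
    b = regret_matching_step(opp_regret)
    counts[a] += 1
    for k in range(3):  # regret vs. having always played fixed action k
        regret[k] += U[k][b] - U[a][b]
        opp_regret[k] += -U[a][k] - (-U[a][b])
print([c / T for c in counts])  # roughly [1/3, 1/3, 1/3]
```

In self-play the time-averaged behavior approaches the uniform mixed strategy, the unique equilibrium of rock-paper-scissors, which is the kind of long-run adaptation the thesis compares against human play.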
CS599: Algorithm Design in Strategic Settings Fall 2012 Lecture 2: Game Theory Preliminaries
CS599: Algorithm Design in Strategic Settings, Fall 2012. Lecture 2: Game Theory Preliminaries. Instructor: Shaddin Dughmi. Course website: http://www-bcf.usc.edu/~shaddin/cs599fa12. Outline: (1) games of complete information; (2) games of incomplete information (prior-free games, Bayesian games). Rock, Paper, Scissors is an example of the most basic type of game: the simultaneous-move, complete-information game. Players act simultaneously, and each player incurs a utility determined only by the players' (joint) actions; equivalently, the players' actions determine the "state of the world" or "outcome of the game." The payoff structure of the game, i.e. the map from action vectors to utility vectors, is common knowledge. Such a game is typically thought of as an n-dimensional matrix, indexed by a ∈ A, with entry (u1(a), ..., un(a)) [figure: generic normal-form matrix]. This representation is also useful for more general games, like sequential and incomplete-information games, but is less natural there. The standard mathematical representation of such games is the normal form: a game in normal form is a tuple (N, A, u), where N is a finite set of players, with n = |N| and N = {1, ..., n}; A = A1 × ... × An, where Ai is the set of actions of player i, and each a = (a1, ..., an) ∈ A is called an action profile; and u = (u1, ..., un), where ui : A → R is the utility function of player i. -
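The normal-form tuple (N, A, u) translates directly into code. A minimal sketch (not from the lecture notes), using Rock-Paper-Scissors as the running example:

```python
# A normal-form game (N, A, u) for Rock-Paper-Scissors: an action profile
# a = (a1, a2) determines both players' utilities, and the game can be
# viewed as a matrix with entry (u1(a), u2(a)) for each profile.

ACTIONS = ["Rock", "Paper", "Scissors"]

def u1(a1, a2):
    """Row player's utility; the game is zero-sum, so u2 = -u1."""
    wins = {("Rock", "Scissors"), ("Paper", "Rock"), ("Scissors", "Paper")}
    if a1 == a2:
        return 0
    return 1 if (a1, a2) in wins else -1

# The matrix view: one utility vector per action profile.
matrix = {(a1, a2): (u1(a1, a2), -u1(a1, a2)) for a1 in ACTIONS for a2 in ACTIONS}
print(matrix[("Rock", "Scissors")])  # (1, -1)
```

Here N = {1, 2} is implicit in the two components of each profile, A is the Cartesian product of the two action lists, and `matrix` is the common-knowledge payoff structure.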
Chapter 16: Oligopoly and Game Theory. "Game theory is the study of how people behave in strategic situations. By 'strategic' we mean a situation in which each person, when deciding what actions to take, must consider how others might respond to that action." "Oligopoly is a market structure in which only a few sellers offer similar or identical products." As we saw last time, oligopoly differs from the two 'ideal' cases, perfect competition and monopoly. In the 'ideal' cases, the firm just has to figure out the environment (prices for the perfectly competitive firm, the demand curve for the monopolist) and select output to maximize profits. An oligopolist, on the other hand, also has to figure out the environment before computing the best output. "Figuring out the environment" when there are rival firms in your market means guessing (or inferring) what the rivals are doing and then choosing a "best response." This means that firms in oligopoly markets are playing a 'game' against each other; to understand how they might act, we need to understand how players play games. This is the role of game theory. Some concepts we will use: strategies, payoffs, sequential games, simultaneous games, best responses, equilibrium, dominated strategies, and dominant strategies. Strategies are the choices that a player is allowed to make. Examples: in game trees (sequential games), players choose paths or branches from roots or nodes; in matrix games, players choose rows or columns; in market games, players choose prices, quantities, or R&D levels; in Blackjack, players choose whether to stay or draw. -
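The "best response" idea above can be made concrete with a small simulation. This is an illustrative sketch, not from the chapter, assuming a Cournot duopoly with linear demand P = 120 - (q1 + q2) and zero marginal cost (numbers chosen for illustration): each firm repeatedly best-responds to its rival's last output.

```python
# Best-response dynamics in an assumed Cournot duopoly.
# With demand P = 120 - (q1 + q2) and zero cost, firm i's profit is
# q_i * (120 - q_i - q_j), maximized at q_i = (120 - q_j) / 2.

def best_response(q_other, a=120.0):
    return max(0.0, (a - q_other) / 2.0)

q1, q2 = 0.0, 0.0
for _ in range(50):
    q1, q2 = best_response(q2), best_response(q1)
print(round(q1, 3), round(q2, 3))  # converges to the Cournot quantity 40
```

Repeatedly "figuring out the environment" and best-responding drives both firms to the Cournot equilibrium output of 40 each, between the monopoly and competitive quantities.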
Lecture 10: Learning in Games Ramesh Johari May 9, 2007
MS&E 336, Lecture 10: Learning in Games. Ramesh Johari, May 9, 2007. This lecture introduces our study of learning in games. We first give a conceptual overview of the possible approaches to studying learning in repeated games; in particular, we distinguish between approaches that use a Bayesian model of the opponents vs. nonparametric or "model-free" approaches to playing against the opponents. Our investigation will focus almost entirely on the second class of models, where the main results are closely tied to the study of online regret minimization. We introduce two notions of regret minimization, and also consider the corresponding equilibrium notions. (Note that we focus attention on repeated games primarily because learning results in stochastic games are significantly weaker.) Throughout the lecture we consider a finite N-player game, where each player i has a finite pure action set Ai; let A = ∏i Ai, and let A−i = ∏j≠i Aj. We let ai denote a pure action for player i, and let si ∈ Δ(Ai) denote a mixed action for player i. We will typically view si as a vector in R^Ai, with si(ai) equal to the probability that player i places on ai. We let Πi(a) denote the payoff to player i when the composite pure action vector is a, and by an abuse of notation we also let Πi(s) denote the expected payoff to player i when the composite mixed action vector is s. More generally, if q is a joint probability distribution on the set A, we let Πi(q) = ∑_{a∈A} q(a) Πi(a). -
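The external-regret quantity central to these results is easy to compute from a play history. A minimal sketch (not from the lecture notes): compare the payoff actually accumulated against the best fixed action in hindsight.

```python
# External regret of a sequence of plays: the gap between the best fixed
# action in hindsight and the payoffs actually received.

def external_regret(payoffs, played):
    """payoffs[t][a]: payoff action a would have earned in round t;
    played[t]: action actually chosen in round t."""
    T = len(payoffs)
    n = len(payoffs[0])
    realized = sum(payoffs[t][played[t]] for t in range(T))
    best_fixed = max(sum(payoffs[t][a] for t in range(T)) for a in range(n))
    return best_fixed - realized

# Two actions over three rounds; action 1 was best in hindsight (total 2),
# and the realized sequence also earned 2, so regret is zero.
payoffs = [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
played = [0, 0, 1]
print(external_regret(payoffs, played))  # 0.0
```

A learning rule is no-regret when this quantity, divided by T, vanishes as T grows; the lecture's equilibrium notions characterize where the empirical play of such rules can converge.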
On the Convergence of Fictitious Play Author(S): Vijay Krishna and Tomas Sjöström Source: Mathematics of Operations Research, Vol
On the Convergence of Fictitious Play. Vijay Krishna and Tomas Sjöström. Mathematics of Operations Research, Vol. 23, No. 2 (May 1998), pp. 479–511. Published by INFORMS. Stable URL: http://www.jstor.org/stable/3690523. Accessed 09/12/2010. -
Information and Beliefs in a Repeated Normal-Form Game
Information and Beliefs in a Repeated Normal-form Game. Dietmar Fehr (Technische Universität Berlin), Dorothea Kübler (Technische Universität Berlin & IZA), David Danz (Technische Universität Berlin). SFB 649 Discussion Paper 2008-026, SFB 649 "Economic Risk", Humboldt-Universität zu Berlin, Spandauer Straße 1, D-10178 Berlin; http://sfb649.wiwi.hu-berlin.de; ISSN 1860-5664. March 29, 2008. This research was supported by the Deutsche Forschungsgemeinschaft through the SFB 649 "Economic Risk". Abstract: We study beliefs and choices in a repeated normal-form game. In addition to a baseline treatment with common knowledge of the game structure and feedback about choices in the previous period, we run treatments (i) without feedback about previous play, (ii) with no information about the opponent's payoffs, and (iii) with random matching. Using Stahl and Wilson's (1995) model of limited strategic reasoning, we classify behavior with regard to its strategic sophistication and consider its development over time. We use belief statements to track the consistency of subjects' actions and beliefs as well as the accuracy of their beliefs (relative to the opponent's true choice) over time. In the baseline treatment we observe more sophisticated play as well as more consistent and more accurate beliefs over time. We isolate feedback as the main driving force of such learning. -
Fast Convergence of Fictitious Play for Diagonal Payoff Matrices (arXiv:1911.08418v3 [cs.GT], 15 Nov 2020)
Fast Convergence of Fictitious Play for Diagonal Payoff Matrices. Jacob Abernethy, Kevin A. Lai, Andre Wibisono. Abstract: Fictitious Play (FP) is a simple and natural dynamic for repeated play in zero-sum games. Proposed by Brown in 1949, FP was shown to converge to a Nash Equilibrium by Robinson in 1951, albeit at a slow rate that may depend on the dimension of the problem. In 1959, Karlin conjectured that FP converges at the more natural rate of O(1/√t). However, Daskalakis and Pan disproved a version of this conjecture in 2014, showing that a slow rate can occur, although their result relies on adversarial tie-breaking. In this paper, we show that Karlin's conjecture is indeed correct for the class of diagonal payoff matrices, as long as ties are broken lexicographically. Specifically, we show that FP converges at a O(1/√t) rate in the case when the payoff matrix is diagonal. From the introduction: ... later showed the same holds for non-zero-sum games [20]. Von Neumann's theorem is often stated in terms of the equivalence of a min-max versus a max-min: min_{x∈Δn} max_{y∈Δm} xᵀAy = max_{y∈Δm} min_{x∈Δn} xᵀAy. It is easy to check that the minimizer of the left-hand side and the maximizer of the right exhibit the desired equilibrium pair. One of the earliest methods for computing Nash Equilibria in zero-sum games is fictitious play (FP), proposed by Brown [7, 8]. FP is perhaps the simplest dynamic one might envision for repeated play in a game: in each round, each player considers the empirical distribution ... -
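The dynamic is easy to simulate. A rough sketch (not the paper's code): run FP on a diagonal zero-sum game, break ties lexicographically (Python's `max`/`min` return the first optimum), and measure the duality gap of the time-averaged strategies.

```python
# Fictitious play on a zero-sum game given by the row player's payoff
# matrix A, returning the duality gap of the empirical strategies after
# T rounds. The gap is zero exactly at a minimax (Nash) pair.

def fictitious_play(A, T):
    n, m = len(A), len(A[0])
    x_counts, y_counts = [0.0] * n, [0.0] * m
    x_counts[0] = y_counts[0] = 1.0  # arbitrary (lexicographic) first actions
    for t in range(1, T):
        # Each player best-responds to the opponent's empirical play so far;
        # max/min break ties lexicographically (first best index wins).
        i = max(range(n), key=lambda r: sum(A[r][c] * y_counts[c] for c in range(m)))
        j = min(range(m), key=lambda c: sum(A[r][c] * x_counts[r] for r in range(n)))
        x_counts[i] += 1
        y_counts[j] += 1
    x = [v / T for v in x_counts]
    y = [v / T for v in y_counts]
    # duality gap: max_i (A y)_i - min_j (x^T A)_j
    Ay = [sum(A[r][c] * y[c] for c in range(m)) for r in range(n)]
    xA = [sum(x[r] * A[r][c] for r in range(n)) for c in range(m)]
    return max(Ay) - min(xA)

A = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]  # diagonal payoff matrix
print(fictitious_play(A, 100), fictitious_play(A, 10000))  # gap shrinks with t
```

For this diagonal game the equilibrium mixes actions inversely to the diagonal entries, and the measured gap should decay on the order the paper proves, roughly 1/√t.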
Guiding Mathematical Discovery How We Started a Math Circle
Guiding Mathematical Discovery: How We Started a Math Circle. Jackie Chan, Tenzin Kunsang, Elisa Loy, Fares Soufan, Taylor Yeracaris. Advised by Professor Deanna Haunsperger; illustrations by Elisa Loy. Carleton College, Mathematics and Statistics Department. Contents: About the Authors; Acknowledgments; Preface; Designing Circles; Leading Circles; Logistics & Classroom Management; The Circles: Shapes and Patterns (Penny Shapes; Polydrons; Knots and What-Not; Fractals; Tilings and Tessellations), Graphs and Trees (The Four Islands Problem (Königsberg Bridge Problem); Human Graphs; Map Coloring; Trees: Dots and Lines), Boards and Spatial Reasoning (Filling Grids; Gerrymandering Marcellusville; Pieces on a Chessboard), Games and Strategy (Game Strategies (Rock/Paper/Scissors); Game Strategies for Nim; Tic-Tac-Torus; SET; KenKen Puzzles), Logic and Probability (The Monty Hall Problem; Knights and Knaves; Sorting Algorithms), Counting and Combinations (The Handshake/High Five Problem; Anagrams; Ciphers; Counting Trains; But How Many Are There?), Numbers and Factors (Piles of Triangular Numbers; Cup Flips; Counting with Cups (The Josephus Problem); Water Cups; Guess What?); Additional Circle Ideas; Further Reading; Keyword Index. About the Authors: Jackie Chan is a senior computer science and mathematics major at Carleton College who has had an interest in teaching mathematics since an early age. Jackie's interest in mathematics education stems from his enjoyment of revealing the intuition behind mathematical concepts. -
Problem Set #8 Solutions: Introduction to Game Theory
Finance 30210. Solutions to Problem Set #8: Introduction to Game Theory. 1) Consider the following version of the prisoners' dilemma game, with payoffs listed as (Player One, Player Two): Cooperate/Cooperate gives ($10, $10); Cooperate/Cheat gives ($0, $12); Cheat/Cooperate gives ($12, $0); Cheat/Cheat gives ($5, $5). a) What is each player's dominant strategy? Explain the Nash equilibrium of the game. Start with player one: if player two chooses Cooperate, player one should choose Cheat ($12 versus $10); if player two chooses Cheat, player one should also Cheat ($5 versus $0). Therefore, the optimal strategy is to always cheat (for both players), and by symmetry the same holds for player two, so (Cheat, Cheat) is the only Nash equilibrium. b) Suppose that this game were played three times in a row. Is it possible for the cooperative equilibrium to occur? Explain. If this game is played multiple times, then we start at the end (the third playing of the game). At the last stage, this is like a one-shot game (there is no future), so on the last day the dominant strategy is for both players to cheat. But if both parties know that the other will cheat on day three, then there is no reason to cooperate on day two; and if both cheat on day two, then there is no incentive to cooperate on day one. 2) Consider the familiar "Rock, Paper, Scissors" game. Two players indicate either "Rock," "Paper," or "Scissors" simultaneously. The winner is determined by: Rock crushes scissors; Paper covers rock; Scissors cut paper. Indicate a -1 if you lose and +1 if you win.
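The dominance argument in part (a) can be mechanized. A small sketch (not part of the original solutions), assuming the payoff table above:

```python
# Checking for a dominant strategy in the prisoners' dilemma.
# Payoffs are (Player One, Player Two).

PAYOFFS = {
    ("Cooperate", "Cooperate"): (10, 10),
    ("Cooperate", "Cheat"):     (0, 12),
    ("Cheat", "Cooperate"):     (12, 0),
    ("Cheat", "Cheat"):         (5, 5),
}
ACTIONS = ["Cooperate", "Cheat"]

def dominant_for_row():
    """Return a row action that is a best response to every column action, if any."""
    for a in ACTIONS:
        if all(PAYOFFS[(a, b)][0] >= PAYOFFS[(other, b)][0]
               for b in ACTIONS for other in ACTIONS):
            return a
    return None

print(dominant_for_row())  # Cheat
```

Because the game is symmetric, the same check on the column payoffs also returns Cheat, confirming (Cheat, Cheat) as the unique Nash equilibrium.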