Reinforcement Learning from Self-Play in Imperfect-Information Games
Total Pages: 16
File Type: PDF, Size: 1020 KB

Recommended publications
Game Theory 10
Game Theory 10. Poker. Albert-Ludwigs-Universität Freiburg, Bernhard Nebel and Robert Mattmüller.

Motivation: the system Libratus played a poker tournament (heads-up no-limit Texas hold'em) from January 11 to 31, 2017 against four world-class poker players. Heads-up means one-on-one, i.e., a zero-sum game. No-limit means there is no limit on betting, only the stack the player has. In Texas hold'em, each player gets two private cards, then open cards are dealt: first three, then one, and finally another one; each player combines the best 5 cards. Betting takes place before the open cards are dealt and at the end: check, call, raise, or fold. Two teams played (reversing the dealt cards). Libratus won the tournament by more than 1.7 million US-$ (which neither the system nor the programming team got). The humans behind the scenes: professional player Jason Les and Prof. Tuomas Sandholm (CMU).

Kuhn Poker: a minimal form of heads-up poker, with only three cards: Jack, Queen, King. Each player is dealt one card and antes 1 chip (a forced bet at the beginning).
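To make the Kuhn poker rules above concrete, here is a minimal sketch of one hand (deal, ante, and the single betting round). It is an illustrative reconstruction, not code from the slides; the function names and the history encoding ("k", "b", "kb") are assumptions.

```python
import random

JACK, QUEEN, KING = 0, 1, 2   # card ranks in Kuhn poker

def play_hand(strategy1, strategy2):
    """Play one hand of Kuhn poker and return player 1's chip winnings.
    A strategy maps (own_card, history) to "check"/"bet" or "call"/"fold"."""
    deck = [JACK, QUEEN, KING]
    random.shuffle(deck)
    c1, c2 = deck[0], deck[1]                    # each player is dealt one private card
    # Both players ante 1 chip before acting.
    if strategy1(c1, "") == "check":
        if strategy2(c2, "k") == "check":        # check-check: showdown for the antes
            return 1 if c1 > c2 else -1
        if strategy1(c1, "kb") == "fold":        # check-bet: player 1 may call or fold
            return -1
        return 2 if c1 > c2 else -2              # called bet: 2 chips change hands
    if strategy2(c2, "b") == "fold":             # bet: player 2 may call or fold
        return 1
    return 2 if c1 > c2 else -2

# Toy strategies: always check/call vs. always bet/call.
passive = lambda card, hist: "check" if hist in ("", "k") else "call"
aggressive = lambda card, hist: "bet" if hist in ("", "k") else "call"
print(sum(play_hand(passive, aggressive) for _ in range(10000)) / 10000.0)
```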
Equilibrium Refinements
Equilibrium Refinements. Mihai Manea, MIT.

Sequential equilibrium: in many games information is imperfect and the only subgame is the original game, so subgame perfect equilibrium reduces to Nash equilibrium. Play starting at an information set can be analyzed as a separate subgame if we specify players' beliefs about which node they are at; based on those beliefs, we can test whether the continuation strategies form a Nash equilibrium. Sequential equilibrium (Kreps and Wilson 1982) is a way to derive plausible beliefs at every information set.

An example with incomplete information: Spence's (1973) job market signaling game. The worker knows her ability (productivity) and chooses a level of education; education is more costly for low-ability types. The firm observes the worker's education, but not her ability, and decides what wage to offer her. In the spirit of subgame perfection, the optimal wage should depend on the firm's beliefs about the worker's ability given the observed education. An equilibrium needs to specify contingent actions and beliefs. Beliefs should follow Bayes' rule on the equilibrium path. What about off-path beliefs?

An example with imperfect information (figure courtesy of The MIT Press, used with permission): (L, A) is a subgame perfect equilibrium. Is it plausible that player 2 plays A?

Assessments and sequential rationality: focus on extensive-form games of perfect recall with finitely many nodes. An assessment is a pair (σ, µ), where σ is a (behavior) strategy profile and µ = (µ(h) ∈ ∆(h))_{h ∈ H} is a system of beliefs. u_i(σ|_h, µ(h)) is player i's payoff when play begins at a node in h selected randomly according to µ(h), with subsequent play specified by σ.
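As a small illustration of beliefs following Bayes' rule on the equilibrium path in the signaling game above, the sketch below computes the firm's belief after observing high education. The prior and strategy probabilities are made-up numbers, not values from the slides.

```python
# Firm's posterior belief at the information set reached by the signal
# "high education", obtained from Bayes' rule (illustrative numbers only).
prior = {"high_ability": 0.4, "low_ability": 0.6}          # prior over worker types
p_high_educ = {"high_ability": 0.9, "low_ability": 0.2}    # P(high education | type)

def posterior(prior, likelihood):
    """Bayes' rule: P(type | signal) is proportional to P(type) * P(signal | type)."""
    joint = {t: prior[t] * likelihood[t] for t in prior}
    total = sum(joint.values())
    return {t: joint[t] / total for t in joint}

belief = posterior(prior, p_high_educ)
print(belief)   # the belief mu(h) the firm uses when choosing a sequentially rational wage
```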
An Essay on Assignment Games
An essay on assignment games. Degree project for the Degree in Mathematics (Grau de Matemàtiques), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, by Rubén Ureña Martínez; advisor: Dr. Javier Martínez de Albéniz, Dept. de Matemàtica Econòmica, Financera i Actuarial. Barcelona, January 2017.

Abstract: This degree project studies the main results on the bilateral assignment game. This is a part of cooperative game theory and models a market with indivisibilities and money. There are two sides of the market, let us say buyers and sellers, or workers and firms, such that when we match two agents from different sides, a profit is made. We show some good properties of the core of these games, such as its non-emptiness and its lattice structure. There are two outstanding points: the buyers-optimal core allocation and the sellers-optimal core allocation, in which all agents of one sector get their best possible outcome. We also study a related non-cooperative mechanism, an auction, to implement the buyers-optimal core allocation.
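The bilateral assignment game described in the abstract has a simple computational core: the worth of the grand coalition is the value of an optimal buyer-seller matching. The sketch below computes it with SciPy; the profit matrix is an illustrative example, not data from the project.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# profits[i, j]: profit made when buyer i is matched with seller j (made-up numbers)
profits = np.array([[5, 8, 2],
                    [7, 9, 6],
                    [1, 3, 0]])

rows, cols = linear_sum_assignment(profits, maximize=True)   # optimal matching
worth = profits[rows, cols].sum()

print(list(zip(rows, cols)))   # which buyer is matched with which seller
print(worth)                   # total profit that any core allocation must split
# A core allocation (u, v) gives payoff u_i to buyer i and v_j to seller j with
# u_i + v_j >= profits[i, j] for every pair and sum(u) + sum(v) == worth; the
# buyers-optimal and sellers-optimal allocations are two distinguished core points.
```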
Improving Fictitious Play Reinforcement Learning with Expanding Models
Improving Fictitious Play Reinforcement Learning with Expanding Models. Rong-Jun Qin (1,2), Jing-Cheng Pang (1), Yang Yu (1,†). (1) National Key Laboratory for Novel Software Technology, Nanjing University, China; (2) Polixir. Emails: [email protected], [email protected], [email protected]. † To whom correspondence should be addressed.

Abstract: Fictitious play with reinforcement learning is a general and effective framework for zero-sum games. However, using current deep neural network models, the implementation of fictitious play faces crucial challenges. Neural network training employs gradient descent to update all connection weights, and thus easily forgets old opponents after training to beat new opponents. Existing approaches often maintain a pool of historical policy models to avoid this forgetting. However, learning to beat a pool in stochastic games, i.e., a wide distribution over policy models, is either sample-consuming or insufficient to exploit all models with a limited amount of samples. In this paper, we propose a learning process with neural fictitious play to alleviate these issues. We train a single model as our policy model, which consists of sub-models and a selector. Every time the agent faces a new opponent, the model is expanded by adding a new sub-model, where only the new sub-model is updated instead of the whole model. At the same time, the selector is also updated to mix the new sub-model with the previous ones at the state level, so that the model is maintained as a behavior strategy instead of a wide distribution over policy models.
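The following is a minimal sketch of the expanding-model idea from the abstract: sub-models are frozen once trained, a new sub-model is added for each new opponent, and a selector mixes them at the state level so the whole object remains a behavior strategy. Class and method names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class ExpandingPolicy:
    """A behavior strategy built from frozen sub-models plus a per-state selector
    (illustrative sketch of the expanding-model idea, not the paper's code)."""

    def __init__(self):
        self.sub_models = []    # one frozen sub-policy per previously met opponent
        self.selector = None    # maps a state to mixing weights over sub-models

    def expand(self, new_sub_model, new_selector):
        """When a new opponent appears: append a sub-model and replace the selector.
        Only the new sub-model and the selector would be trained; old ones stay fixed."""
        self.sub_models.append(new_sub_model)
        self.selector = new_selector

    def action_probs(self, state):
        weights = np.asarray(self.selector(state))                # (num_sub_models,)
        probs = np.stack([m(state) for m in self.sub_models])     # (num_sub_models, n_actions)
        return weights @ probs                                    # state-level mixture

# Toy usage with two hand-written sub-policies over two actions.
pi = ExpandingPolicy()
pi.expand(lambda s: np.array([0.5, 0.5]), lambda s: np.array([1.0]))
pi.expand(lambda s: np.array([0.9, 0.1]), lambda s: np.array([0.3, 0.7]))
print(pi.action_probs(state=None))    # 0.3*[0.5, 0.5] + 0.7*[0.9, 0.1]
```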
Lecture Notes
Graduate Game Theory: lecture notes by Omer Tamuz, California Institute of Technology, 2018.

Acknowledgments: These lecture notes are partially adapted from Osborne and Rubinstein [29], Maschler, Solan and Zamir [23], lecture notes by Federico Echenique, and slides by Daron Acemoglu and Asu Ozdaglar. I am indebted to Seo Young (Silvia) Kim and Zhuofang Li for their help in finding and correcting many errors. Any comments or suggestions are welcome.

Contents (excerpt): 1. Extensive form games with perfect information: 1.1 Tic-Tac-Toe; 1.2 The Sweet Fifteen Game; 1.3 Chess; 1.4 Definition of extensive form games with perfect information; 1.5 The ultimatum game; 1.6 Equilibria; 1.7 The centipede game; 1.8 Subgames and subgame perfect equilibria; 1.9 The dollar auction; 1.10 Backward induction, Kuhn's Theorem and a proof of Zermelo's Theorem. 2. Strategic form games: 2.1 Definition; 2.2 Nash equilibria; 2.3 Classical examples; 2.4 Dominated strategies; 2.5 Repeated elimination of dominated strategies; 2.6 Dominant strategies ...
Using Counterfactual Regret Minimization to Create Competitive Multiplayer Poker Agents
Using Counterfactual Regret Minimization to Create Competitive Multiplayer Poker Agents. Nick Abou Risk and Duane Szafron, Department of Computing Science, University of Alberta, Edmonton, AB, 780-492-5468; [email protected], [email protected].

Abstract: Games are used to evaluate and advance Multiagent and Artificial Intelligence techniques. Most of these games are deterministic with perfect information (e.g. Chess and Checkers). A deterministic game has no chance element and in a perfect information game, all information is visible to all players. However, many real-world scenarios with competing agents are stochastic (non-deterministic) with imperfect information. For two-player zero-sum perfect recall games, a recent technique called Counterfactual Regret Minimization (CFR) computes strategies that are provably convergent to an ε-Nash equilibrium. A Nash equilibrium strategy is useful in two-player games since it maximizes its utility against a worst-case opponent.

... terminal choice node exists for each player's possible decisions. The directed edges leaving a choice node represent the legal actions for that player. Each terminal node contains the utilities of all players for one potential game result. Stochastic events (Backgammon dice rolls or Poker cards) are represented as special nodes, for a player called the chance player. The directed edges leaving a chance node represent the possible chance outcomes. Since extensive games can model a wide range of strategic decision-making scenarios, they have been used to study poker. Recent advances in computer poker have led to substantial advancements in solving general two-player zero-sum perfect recall extensive games. Koller and Pfeffer [10] used linear ...
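Since the abstract centers on Counterfactual Regret Minimization, a sketch of its basic building block, regret matching, may be useful; here it is run on rock-paper-scissors rather than an extensive-form poker game, and it is an illustrative reconstruction rather than the authors' code.

```python
import numpy as np

payoff = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)   # row player's utility for R, P, S

def regret_matching(regrets):
    """Play each action with probability proportional to its positive cumulative regret."""
    positive = np.maximum(regrets, 0.0)
    return positive / positive.sum() if positive.sum() > 0 else np.full(len(regrets), 1 / len(regrets))

regrets = np.zeros((2, 3))
strategy_sum = np.zeros((2, 3))
rng = np.random.default_rng(0)
for _ in range(20000):
    s = [regret_matching(regrets[0]), regret_matching(regrets[1])]
    strategy_sum += np.array(s)
    a0, a1 = rng.choice(3, p=s[0]), rng.choice(3, p=s[1])
    u0, u1 = payoff[a0, a1], -payoff[a0, a1]
    regrets[0] += payoff[:, a1] - u0      # regret of each action vs. what was earned
    regrets[1] += -payoff[a0, :] - u1

# Average strategies converge towards the (1/3, 1/3, 1/3) equilibrium of the zero-sum game.
print(strategy_sum / strategy_sum.sum(axis=1, keepdims=True))
```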
Characterizing Solution Concepts in Terms of Common Knowledge of Rationality
Characterizing Solution Concepts in Terms of Common Knowledge of Rationality. Joseph Y. Halpern, Computer Science Department, Cornell University, U.S.A., [email protected]; Yoram Moses, Department of Electrical Engineering, Technion—Israel Institute of Technology, 32000 Haifa, Israel, [email protected]. March 15, 2018. arXiv:1605.01236v1 [cs.GT].

Abstract: Characterizations of Nash equilibrium, correlated equilibrium, and rationalizability in terms of common knowledge of rationality are well known (Aumann 1987; Brandenburger and Dekel 1987). Analogous characterizations of sequential equilibrium, (trembling hand) perfect equilibrium, and quasi-perfect equilibrium in n-player games are obtained here, using results of Halpern (2009, 2013).

Funding footnotes: Halpern was supported in part by NSF grants CTC-0208535, ITR-0325453, and IIS-0534064, by ONR grant N00014-02-1-0455, by the DoD Multidisciplinary University Research Initiative (MURI) program administered by the ONR under grants N00014-01-1-0795 and N00014-04-1-0725, and by AFOSR grants F49620-02-1-0101 and FA9550-05-1-0055. Moses holds the Israel Pollak academic chair at the Technion; work supported in part by Israel Science Foundation grant 1520/11.

1 Introduction: Arguably, the major goal of epistemic game theory is to characterize solution concepts epistemically. Characterizations of the solution concepts that are most commonly used in strategic-form games, namely, Nash equilibrium, correlated equilibrium, and rationalizability, in terms of common knowledge of rationality are well known (Aumann 1987; Brandenburger and Dekel 1987). We show how to get analogous characterizations of sequential equilibrium (Kreps and Wilson 1982), (trembling hand) perfect equilibrium (Selten 1975), and quasi-perfect equilibrium (van Damme 1984) for arbitrary n-player games, using results of Halpern (2009, 2013).
Games Lectures.Key
Lecture 10: Imperfect Information. AI For Traditional Games, Prof. Nathan Sturtevant, Winter 2011.

Imperfect information: so far, all games we have developed solutions for have perfect information, with no hidden information such as individual cards. Hidden information is often represented as chance nodes; it could also be a play by one player that is hidden until the end of the game.

Example tree: a simple betting game (Kuhn Poker). Ante 1 chip; the first player can check/bet; the second player can bet/check or call/fold; if the second player bets, the first player can call/fold. What is the size of a game with imperfect information? It is a 2-player game with a 3-card deck and 1 card each, so 3 hands each / 6 total combinations. [Exercise: draw the top portion of the tree in class.]

Simple approach, Perfect-Info Monte-Carlo: we have good perfect-information solvers; how can we use them for imperfect information games? Sample all unknown information (e.g., a world); for each world, solve perfectly with alpha-beta; take the average best move. If there are too many worlds, sample a reasonable subset. Drawbacks of Monte-Carlo: there may be too many worlds to sample; the probabilities on worlds may be incorrect (world probabilities are based on previous actions in the game); actions may reveal information, and good probabilities are needed for information hiding; the program has no sense of information seeking/hiding moves; the analysis may be incorrect (see work by Frank and Basin). Two slides illustrate strategy fusion and non-locality with small game trees. Strengths of Monte-Carlo: simple to implement, relatively fast. Analysis of PIMC: "Understanding the Success of Perfect Information Monte Carlo Sampling in Game Tree Search", Jeffrey Long and Nathan R. ...
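The PIMC procedure summarized above (sample worlds, solve each with a perfect-information search, average the move values) is easy to state in code. This is a minimal sketch under assumed interfaces: sample_world and solve_world stand in for the game-specific sampler and the alpha-beta solver.

```python
import random
from collections import defaultdict

def pimc_choose_move(legal_moves, sample_world, solve_world, num_samples=100,
                     rng=random.Random(0)):
    """Perfect-Information Monte-Carlo (sketch): average each move's
    perfect-information value over sampled assignments of the hidden information."""
    totals = defaultdict(float)
    for _ in range(num_samples):
        world = sample_world(rng)                        # one sampled "world"
        for move in legal_moves:
            totals[move] += solve_world(world, move)     # e.g. an alpha-beta search
    return max(legal_moves, key=lambda m: totals[m] / num_samples)

# Toy usage: the opponent's hidden card is J, Q, or K; betting wins unless it is the K.
sample = lambda rng: rng.choice(["J", "Q", "K"])
value = lambda world, move: (1 if world != "K" else -1) if move == "bet" else 0
print(pimc_choose_move(["bet", "check"], sample, value))
```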
Doctoral Thesis: Three Essays on Game Theory
Universidad Carlos III de Madrid, doctoral thesis: Three Essays on Game Theory. Author: José Carlos González Pimienta. Advisors: Luis Carlos Corchón and Francesco De Sinopoli. Departamento de Economía, Getafe, July 2007. Dedication: to my parents.

Contents: List of Figures; Acknowledgments; Chapter 1, Introduction; Chapter 2, Conditions for Equivalence Between Sequentiality and Subgame Perfection (2.1 Introduction; 2.2 Notation and Terminology; 2.3 Definitions; 2.4 Results; 2.5 Examples; 2.6 Appendix: Notation and Terminology); Chapter 3, Undominated (and) Perfect Equilibria in Poisson Games (3.1 Introduction; 3.2 Preliminaries; 3.3 Dominated Strategies; 3.4 Perfection; 3.5 Undominated Perfect Equilibria); Chapter 4, Generic Determinacy of Nash Equilibrium in Network Formation Games (4.1 Introduction; 4.2 Preliminaries; 4.3 An Example; 4.4 The Result; 4.5 Remarks; 4.6 Appendix: Proof of Theorem 4.1); Bibliography.

List of Figures: 2.1 Notation and terminology of finite extensive games with perfect recall; 2.2 Extensive form where no information set is avoidable; 2.3 Extensive form where no information set is avoidable in its minimal subform; 2.4 Example of the use of the algorithm contained in the proof of Proposition 2.1 to generate a game where SPE(Γ) ≠ SQE(Γ); 2.5 Selten's horse, an example of the use of the algorithm contained in the proof of Proposition 2.1 to generate a game where SPE(Γ) ≠ SQE(Γ).
Successful Nash Equilibrium Agent for a 3-Player Imperfect-Information Game
Successful Nash Equilibrium Agent for a 3-Player Imperfect-Information Game. Sam Ganzfried, www.ganzfriedresearch.com.

Scope and applicability of game theory: strategic multiagent interactions occur in all fields. Economics and business: bidding in auctions, offers in negotiations. Political science/law: fair division of resources, e.g., divorce settlements. Biology/medicine: robust diabetes management (robustness against "adversarial" selection of parameters in an MDP). Computer science: theory, AI, PL, systems; national security (e.g., deploying officers to protect ports), cybersecurity (e.g., determining optimal thresholds against phishing attacks), internet phenomena (e.g., ad auctions).

Game theory background: rock-paper-scissors payoff matrix (row player's payoff listed first):

              rock      paper     scissors
  Rock        0, 0      -1, 1     1, -1
  Paper       1, -1     0, 0      -1, 1
  Scissors    -1, 1     1, -1     0, 0

Basic notions: players; actions (aka pure strategies); a strategy profile, e.g., (R, p); a utility function, e.g., u1(R, p) = -1, u2(R, p) = 1. Zero-sum game: the sum of payoffs is zero at each strategy profile, e.g., u1(R, p) + u2(R, p) = 0; this models purely adversarial settings. Mixed strategies: probability distributions over pure strategies, e.g., R with prob. 0.6, P with prob. 0.3, S with prob. 0.1. Best response (aka nemesis): any strategy that maximizes payoff against the opponent's strategy; if P2 plays (0.6, 0.3, 0.1) for r, p, s, then a best response for P1 is to play P with probability 1. Nash equilibrium: a strategy profile where all players simultaneously play a best response; this is the standard solution concept in game theory and is guaranteed to always exist in finite games [Nash 1950]. In Rock-Paper-Scissors, the unique equilibrium is for both players to select each pure strategy with probability 1/3. Theorem [Nash 1950]: every game in strategic form G, with a finite number of players and in which every player has a finite number of pure strategies, has an equilibrium in mixed strategies.
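As a companion to the rock-paper-scissors example above, this short sketch computes a best response to a mixed strategy and checks the equilibrium claim; the array layout and names are assumptions for illustration.

```python
import numpy as np

A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)   # row player's payoffs for Rock, Paper, Scissors
actions = ["Rock", "Paper", "Scissors"]

def best_response(opponent_mix):
    """Expected payoff of each pure strategy against the opponent's mix; any argmax is a best response."""
    expected = A @ opponent_mix
    return actions[int(np.argmax(expected))], expected

# If P2 plays (0.6, 0.3, 0.1) over r, p, s, the unique best response for P1 is Paper.
print(best_response(np.array([0.6, 0.3, 0.1])))

# At the equilibrium mix (1/3, 1/3, 1/3) every pure strategy earns 0, so no deviation gains.
print(A @ np.array([1/3, 1/3, 1/3]))
```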
Sequential Preference Revelation in Incomplete Information Settings
Online Appendix for "Sequential preference revelation in incomplete information settings". James Schummer (Department of Managerial Economics and Decision Sciences, Kellogg School of Management, Northwestern University, Evanston IL 60208; [email protected]) and Rodrigo A. Velez (Department of Economics, Texas A&M University, College Station, TX 77843; [email protected]). May 6, 2019.

1 Proof of Theorem 2. Theorem 2 is implied by the following result.

Theorem. Let f be a strategy-proof, non-bossy SCF, and fix a reporting order Λ ∈ ∆(Π). Suppose that at least one of the following conditions holds.
1. The prior has Cartesian support (µ ∈ M^Cartesian) and Λ is deterministic.
2. The prior has symmetric Cartesian support (µ ∈ M^symm-Cartesian) and f is weakly anonymous.
Then equilibria are preserved under deviations to truthful behavior: for any (σ, β) ∈ SE⟨Γ^{U^N}(Λ, f), µ⟩,
(i) for each i ∈ N, τ_i is sequentially rational for i with respect to σ_{-i} and β_i;
(ii) for each S ⊆ N, there is a belief system β′ such that ((σ_{-S}, τ_S), β′) ∈ SE⟨Γ^{U^N}(Λ, f), µ⟩.

Proof. Let f be strategy-proof and non-bossy, Λ ∈ ∆(Π), and µ ∈ M^Cartesian. Since the conditions of the two theorems are the same, we refer to arguments made in the proof of Theorem 1; in particular, all numbered equations referenced below appear in the paper. Fix an equilibrium (σ, β) ∈ SE⟨Γ^{U^N}(Λ, f), µ⟩ ...
Evolutionary Game Theory: a Renaissance
Evolutionary Game Theory: A Renaissance. Review article in Games, by Jonathan Newton, Institute of Economic Research, Kyoto University, Kyoto 606-8501, Japan; [email protected]. Received: 23 April 2018; accepted: 15 May 2018; published: 24 May 2018.

Abstract: Economic agents are not always rational or farsighted and can make decisions according to simple behavioral rules that vary according to situation and can be studied using the tools of evolutionary game theory. Furthermore, such behavioral rules are themselves subject to evolutionary forces. Paying particular attention to the work of young researchers, this essay surveys the progress made over the last decade towards understanding these phenomena, and discusses open research topics of importance to economics and the broader social sciences.

Keywords: evolution; game theory; dynamics; agency; assortativity; culture; distributed systems. JEL classification: C73.

Table of contents: 1. Introduction (Shadow of Nash Equilibrium; Renaissance; Structure of the Survey). 2. Agency—Who Makes Decisions? (Methodology; Implications; Evolution of Collective Agency; Links between Individual & Collective Agency). 3. Assortativity—With Whom Does Interaction Occur? (Assortativity & Preferences; Evolution of Assortativity; Generalized Matching; Conditional Dissociation; Network Formation). 4. Evolution of Behavior (Traits; Conventions—Culture in Society; Culture in Individuals; Culture in Individuals & Society). 5. Economic Applications (Macroeconomics, Market Selection & Finance; Industrial Organization). 6. The Evolutionary Nash Program (Recontracting & Nash Demand Games; TU Matching; NTU Matching; Bargaining Solutions & Coordination Games). 7. Behavioral Dynamics (Reinforcement Learning; Imitation; Best Experienced Payoff Dynamics; Best & Better Response; Continuous Strategy Sets; Completely Uncoupled). 8. General Methodology (Perturbed Dynamics; Further Stability Results; Further Convergence Results; Distributed Control; Software and Simulations). 9. Empirics.