CS711: Introduction to Game Theory and Mechanism Design


Teacher: Swaprava Nath

Extensive Form Games

Chocolate Division Game: Suppose a mother gives her elder son two (indivisible) chocolates to share between him and his younger sister. She also warns that if there is any dispute in the sharing, she will take the chocolates back and nobody will get anything. The brother can propose one of the following sharing options: (2-0) the brother gets both chocolates and the sister gets nothing, (1-1) each gets one, or (0-2) both chocolates go to the sister. After the brother proposes the sharing, his sister may "Accept" (A) the division or "Reject" (R) it.

[Game tree: the Brother moves first and chooses 2-0, 1-1, or 0-2; at each resulting node the Sister chooses A or R. The terminal payoffs (brother, sister) are (2, 0), (0, 0), (1, 1), (0, 0), (0, 2), (0, 0) at (2-0, A), (2-0, R), (1-1, A), (1-1, R), (0-2, A), (0-2, R) respectively.]

Representing the Chocolate Division Game

Players: N = {1 (brother), 2 (sister)}
Actions: A = {2-0, 1-1, 0-2, A, R}
Histories: H = {∅, (2-0), (1-1), (0-2), (2-0, A), (2-0, R), (1-1, A), (1-1, R), (0-2, A), (0-2, R)}
Terminal histories: Z = {(2-0, A), (2-0, R), (1-1, A), (1-1, R), (0-2, A), (0-2, R)}
Available actions: X(∅) = {2-0, 1-1, 0-2}; X(2-0) = X(1-1) = X(0-2) = {A, R}
Player function: P(∅) = 1; P(2-0) = P(1-1) = P(0-2) = 2
Utilities: u1(2-0, A) = 2, u1(1-1, A) = 1, u2(1-1, A) = 1, u2(0-2, A) = 2; u1(0-2, A) = u1(0-2, R) = u1(1-1, R) = u1(2-0, R) = 0; u2(0-2, R) = u2(1-1, R) = u2(2-0, R) = u2(2-0, A) = 0
Strategies: S1 = {2-0, 1-1, 0-2}; S2 = {A, R} × {A, R} × {A, R} = {AAA, AAR, ARA, ARR, RAA, RAR, RRA, RRR}, where a strategy of the sister specifies her response to each of the three possible proposals (in the order 2-0, 1-1, 0-2).

For the given example, the utility function can be expressed in strategic (normal) form as the following table (rows: brother's proposal; columns: sister's strategy; entries are (u1, u2)):

B \ S   AAA     AAR     ARA     ARR     RAA     RAR     RRA     RRR
2-0     (2,0)   (2,0)   (2,0)   (2,0)   (0,0)   (0,0)   (0,0)   (0,0)
1-1     (1,1)   (1,1)   (0,0)   (0,0)   (1,1)   (1,1)   (0,0)   (0,0)
0-2     (0,2)   (0,0)   (0,2)   (0,0)   (0,2)   (0,0)   (0,2)   (0,0)

Observe that there are many PSNEs in this game, some of which lead to quite non-intuitive solutions.
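The slides stop at this observation. As a quick sanity check, the sketch below (not part of the original lecture notes; the helper names and structure are mine) enumerates the strategic form above and lists all pure-strategy Nash equilibria, including non-intuitive profiles such as (2-0, RRR), where the sister's blanket rejection threat leaves both players with nothing.

# Minimal sketch (my own illustration): enumerate the strategic form of the
# Chocolate Division Game and list its pure-strategy Nash equilibria (PSNEs).

from itertools import product

proposals = ["2-0", "1-1", "0-2"]
responses = ["A", "R"]

# Outcome utilities (brother, sister) for each terminal history (proposal, response).
outcome = {
    ("2-0", "A"): (2, 0), ("2-0", "R"): (0, 0),
    ("1-1", "A"): (1, 1), ("1-1", "R"): (0, 0),
    ("0-2", "A"): (0, 2), ("0-2", "R"): (0, 0),
}

# Sister's strategies: one response for each possible proposal, e.g. "ARA".
sister_strategies = ["".join(t) for t in product(responses, repeat=3)]

def payoff(proposal, sister_strategy):
    """Payoff pair when the brother proposes `proposal` and the sister plays `sister_strategy`."""
    response = sister_strategy[proposals.index(proposal)]
    return outcome[(proposal, response)]

# A profile (p, s) is a PSNE if neither player can gain by a unilateral deviation.
psne = []
for p in proposals:
    for s in sister_strategies:
        u1, u2 = payoff(p, s)
        brother_ok = all(payoff(q, s)[0] <= u1 for q in proposals)
        sister_ok = all(payoff(p, t)[1] <= u2 for t in sister_strategies)
        if brother_ok and sister_ok:
            psne.append((p, s, (u1, u2)))

for eq in psne:
    print(eq)

Running it flags profiles such as (1-1, RAR) and (2-0, RRR) as equilibria; the rejection threats supporting them are not credible once the proposal is on the table, which is exactly the kind of non-intuitive solution the slide points to.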
Recommended publications
  • Game Theory 2: Extensive-Form Games and Subgame Perfection
    Game Theory 2: Extensive-Form Games and Subgame Perfection. Dynamics in Games: how should we think of strategic interactions that occur in sequence? Who moves when, and what can they do at different points in time? How do people react to different histories? Modeling Games with Dynamics: players; a player function (who moves when); terminal histories (possible paths through the game); and preferences over terminal histories. Strategies: a strategy is a complete contingent plan; player i's strategy specifies her action choice at each point at which she could be called on to make a choice. An Example: International Crises. Two countries (A and B) are competing over a piece of land that B occupies. Country A decides whether to make a demand. If Country A makes a demand, B can either acquiesce or fight a war. If A does not make a demand, B keeps the land and the game ends. A's best outcome is Demand followed by Acquiesce, and its worst outcome is Demand and War. B's best outcome is No Demand, and its worst outcome is Demand and War. A can choose Demand (D) or No Demand (ND); B can choose Fight a war (W) or Acquiesce (A). Preferences: uA(D, A) = 3 > uA(ND, A) = uA(ND, W) = 2 > uA(D, W) = 1, and uB(ND, A) = uB(ND, W) = 3 > uB(D, A) = 2 > uB(D, W) = 1. How can we represent this scenario as a game in strategic form? International Crisis Game, Nash equilibria (Country A chooses the row, D or ND; Country B chooses the column, W or A): (D, W) gives (1, 1); (D, A) gives (3, 2); (ND, W) gives (2, 3); (ND, A) gives (2, 3); marking best responses identifies (D, A) and (ND, W) as the pure Nash equilibria. Is there something funny here? Specifically, about (ND, W)?
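    The excerpt ends by questioning the equilibrium (ND, W). A small sketch (my own illustration, using the ordinal payoffs listed above) makes the point concrete: (ND, W) is a Nash equilibrium of the strategic form, but it is supported by a war threat B would never carry out, so backward induction on the sequential game selects (D, A) instead.

    # Sketch (assumed numeric payoffs consistent with the ordinal preferences above).
    u = {  # (A's move, B's response) -> (u_A, u_B); B's response is irrelevant after ND
        ("D", "A"): (3, 2), ("D", "W"): (1, 1),
        ("ND", "A"): (2, 3), ("ND", "W"): (2, 3),
    }
    A_moves, B_moves = ["D", "ND"], ["A", "W"]

    # Pure-strategy Nash equilibria of the strategic-form representation.
    nash = [(a, b) for a in A_moves for b in B_moves
            if all(u[(a2, b)][0] <= u[(a, b)][0] for a2 in A_moves)
            and all(u[(a, b2)][1] <= u[(a, b)][1] for b2 in B_moves)]
    print("Strategic-form NE:", nash)          # includes (ND, W), supported by B's threat

    # Backward induction on the sequential game: after D, B picks its best response;
    # anticipating that, A chooses between D and ND.
    b_after_d = max(B_moves, key=lambda b: u[("D", b)][1])       # B acquiesces
    a_choice = max(A_moves, key=lambda a: u[(a, b_after_d)][0])  # A demands
    print("Backward induction outcome:", (a_choice, b_after_d))  # (D, A): the war threat is not credible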
  • Best Experienced Payoff Dynamics and Cooperation in the Centipede Game
    Theoretical Economics 14 (2019), 1347–1385. Best experienced payoff dynamics and cooperation in the centipede game. William H. Sandholm, Department of Economics, University of Wisconsin; Segismundo S. Izquierdo, BioEcoUva, Department of Industrial Organization, Universidad de Valladolid; Luis R. Izquierdo, Department of Civil Engineering, Universidad de Burgos. We study population game dynamics under which each revising agent tests each of his strategies a fixed number of times, with each play of each strategy being against a newly drawn opponent, and chooses the strategy whose total payoff was highest. In the centipede game, these best experienced payoff dynamics lead to cooperative play. When strategies are tested once, play at the almost globally stable state is concentrated on the last few nodes of the game, with the proportions of agents playing each strategy being largely independent of the length of the game. Testing strategies many times leads to cyclical play. Keywords: evolutionary game theory, backward induction, centipede game, computational algebra. JEL classification: C72, C73. 1. Introduction. The discrepancy between the conclusions of backward induction reasoning and observed behavior in certain canonical extensive form games is a basic puzzle of game theory. The centipede game (Rosenthal (1981)), the finitely repeated prisoner's dilemma, and related examples can be viewed as models of relationships in which each participant has repeated opportunities to take costly actions that benefit his partner and in which there is a commonly known date at which the interaction will end. Experimental and anecdotal evidence suggests that cooperative behavior may persist until close to the end of the game.
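    The revision protocol described in this abstract is easy to prototype. Below is a toy agent-based sketch (my own code, not the authors'; the four-node centipede payoffs, the stop-node encoding of strategies, and the population sizes are illustrative assumptions) of best experienced payoff revision with each strategy tested once: a revising agent tries each of its strategies against a freshly drawn opponent and adopts the one with the highest realized payoff.

    import random

    # Centipede with 4 decision nodes: player 1 moves at nodes 1 and 3, player 2 at
    # nodes 2 and 4. A strategy is the first own node at which the agent stops
    # (5 = never stop). Payoffs below are an illustrative parameterization only.
    PAYOFFS = {1: (1, 0), 2: (0, 2), 3: (3, 1), 4: (2, 4), 5: (5, 3)}
    STRATS = {1: [1, 3, 5], 2: [2, 4, 5]}

    def play(s1, s2):
        """Payoffs when player 1 stops no later than s1 and player 2 no later than s2."""
        end = min(s1, s2)            # first node at which someone stops (5 = nobody stops)
        return PAYOFFS[end]

    def revise(role, opponents):
        """Best experienced payoff rule: test each strategy once against a freshly
        drawn opponent and keep the strategy with the highest realized payoff."""
        def experienced(s):
            opp = random.choice(opponents)
            u = play(s, opp) if role == 1 else play(opp, s)
            return u[0] if role == 1 else u[1]
        return max(STRATS[role], key=experienced)

    def simulate(n_agents=500, steps=20000, seed=0):
        random.seed(seed)
        pop = {r: [random.choice(STRATS[r]) for _ in range(n_agents)] for r in (1, 2)}
        for _ in range(steps):
            role = random.choice((1, 2))        # one randomly chosen agent revises per step
            i = random.randrange(n_agents)
            pop[role][i] = revise(role, pop[3 - role])
        return {r: {s: pop[r].count(s) / n_agents for s in STRATS[r]} for r in (1, 2)}

    if __name__ == "__main__":
        print(simulate())

    Whether play concentrates near the end of the game, as the paper reports for longer centipedes, depends on the payoff parameterization and the number of test trials; the sketch only illustrates the revision protocol itself.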
  • Lecture Notes
    GRADUATE GAME THEORY LECTURE NOTES BY OMER TAMUZ, California Institute of Technology, 2018. Acknowledgments: These lecture notes are partially adapted from Osborne and Rubinstein [29], Maschler, Solan and Zamir [23], lecture notes by Federico Echenique, and slides by Daron Acemoglu and Asu Ozdaglar. I am indebted to Seo Young (Silvia) Kim and Zhuofang Li for their help in finding and correcting many errors. Any comments or suggestions are welcome. Contents: 1 Extensive form games with perfect information (Tic-Tac-Toe; The Sweet Fifteen Game; Chess; Definition of extensive form games with perfect information; The ultimatum game; Equilibria; The centipede game; Subgames and subgame perfect equilibria; The dollar auction; Backward induction, Kuhn's Theorem and a proof of Zermelo's Theorem). 2 Strategic form games (Definition; Nash equilibria; Classical examples; Dominated strategies; Repeated elimination of dominated strategies; Dominant strategies).
  • Modeling Strategic Behavior1
    Modeling Strategic Behavior: A Graduate Introduction to Game Theory and Mechanism Design. George J. Mailath, Department of Economics, University of Pennsylvania; Research School of Economics, Australian National University. September 16, 2020, copyright by George J. Mailath. World Scientific Press published the November 02, 2018 version. Any significant corrections or changes are listed at the back. To Loretta. Preface: These notes are based on my lecture notes for Economics 703, a first-year graduate course that I have been teaching at the Economics Department, University of Pennsylvania, for many years. It is impossible to understand modern economics without knowledge of the basic tools of game theory and mechanism design. My goal in the course (and this book) is to teach those basic tools so that students can understand and appreciate the corpus of modern economic thought, and so contribute to it. A key theme in the course is the interplay between the formal development of the tools and their use in applications. At the same time, extensions of the results that are beyond the course, but important for context, are (briefly) discussed. While I provide more background verbally on many of the examples, I assume that students have seen some undergraduate game theory (such as covered in Osborne, 2004, Tadelis, 2013, and Watson, 2013). In addition, some exposure to intermediate microeconomics and decision making under uncertainty is helpful. Since these are lecture notes for an introductory course, I have not tried to attribute every result or model described. The result is a somewhat random pattern of citations and references.
  • Dynamic Games Under Bounded Rationality
    Munich Personal RePEc Archive. Dynamic Games under Bounded Rationality. Zhao, Guo, Southwest University for Nationalities, 8 March 2015. Online at https://mpra.ub.uni-muenchen.de/62688/ MPRA Paper No. 62688, posted 09 Mar 2015 08:52 UTC. Dynamic Games under Bounded Rationality, by ZHAO GUO. I propose a dynamic game model that is consistent with the paradigm of bounded rationality. Its main advantages over the traditional approach based on perfect rationality are that: (1) the strategy space is a chain-complete partially ordered set; (2) the response function is a certain order-preserving map on the strategy space; (3) the evolution of the economic system can be described by the dynamical system defined by the response function under iteration; (4) the existence of pure-strategy Nash equilibria can be guaranteed by fixed point theorems for ordered structures, rather than topological structures. This preference-response framework liberates economics from the utility concept, and constitutes a marriage of normal-form and extensive-form games. Among the common assumptions of classical existence theorems for competitive equilibrium, one is central: individuals are assumed to have perfect rationality, so as to maximize their utilities (payoffs in game-theoretic usage). With perfect rationality and perfect competition, the competitive equilibrium is completely determined, and the equilibrium depends only on the agents' goals and their environments. With perfect rationality and perfect competition, classical economic theory turns out to be a deductive theory that requires almost no contact with empirical data once its assumptions are accepted as axioms (see Simon 1959). Zhao: Southwest University for Nationalities, Chengdu 610041, China (e-mail: [email protected]).
  • Chemical Game Theory
    Chemical Game Theory. Jacob Kautzky, Group Meeting, February 26th, 2020. What is game theory? Game theory is the study of the ways in which interacting choices of rational agents produce outcomes with respect to the utilities of those agents. Why do we care about game theory? 11 Nobel prizes in economics: 1994 – "for their pioneering analysis of equilibria in the theory of non-cooperative games" (John Nash, Reinhard Selten, John Harsanyi); 2005 – "for having enhanced our understanding of conflict and cooperation through game-theory" (Robert Aumann, Thomas Schelling); 2007 – "for having laid the foundations of mechanism design theory" (Leonid Hurwicz, Eric Maskin, Roger Myerson); 2012 – "for the theory of stable allocations and the practice of market design" (Alvin Roth, Lloyd Shapley); 2014 – "for his analysis of market power and regulation" (Jean Tirole). Game theory touches mathematics, business, biology, engineering, sociology, philosophy, computer science, political science, and chemistry. Historical precursors: Plato, 5th Century BCE — initial insights into game theory can be seen in Plato's work, with theories on prisoner desertions; Cortez, 1517 — "burn the ships"; Shakespeare's Henry V, 1599 — Henry orders the French prisoners executed in front of the French army; Hobbes' Leviathan, 1651. The first mathematical theory of games was published in 1944 by John von Neumann and Oskar Morgenstern. Outline: Chemical Game Theory; Basics of Game Theory; Prisoners Dilemma; Battle of the Sexes; Rock Paper Scissors; Centipede Game; Iterated Prisoners Dilemma.
  • Contemporaneous Perfect Epsilon-Equilibria
    Games and Economic Behavior 53 (2005) 126–140, www.elsevier.com/locate/geb. Contemporaneous perfect epsilon-equilibria. George J. Mailath (University of Pennsylvania), Andrew Postlewaite (University of Pennsylvania), Larry Samuelson (University of Wisconsin). Received 8 January 2003; available online 18 July 2005. Abstract: We examine contemporaneous perfect ε-equilibria, in which a player's actions after every history, evaluated at the point of deviation from the equilibrium, must be within ε of a best response. This concept implies, but is stronger than, Radner's ex ante perfect ε-equilibrium. A strategy profile is a contemporaneous perfect ε-equilibrium of a game if it is a subgame perfect equilibrium in a perturbed game with nearly the same payoffs, with the converse holding for pure equilibria. © 2005 Elsevier Inc. All rights reserved. JEL classification: C70; C72; C73. Keywords: Epsilon equilibrium; Ex ante payoff; Multistage game; Subgame perfect equilibrium. 1. Introduction. Analyzing a game begins with the construction of a model specifying the strategies of the players and the resulting payoffs. For many games, one cannot be positive that the specified payoffs are precisely correct. For the model to be useful, one must hope that its equilibria are close to those of the real game whenever the payoff misspecification is small. To ensure that an equilibrium of the model is close to a Nash equilibrium of every possible game with nearly the same payoffs, the appropriate solution concept in the model must be chosen accordingly.
  • A Note on Stabilizing Cooperation in the Centipede Game
    Games (Article): A Note on Stabilizing Cooperation in the Centipede Game. Steven J. Brams (Department of Politics, New York University, 19 W. 4th Street, New York, NY 10012, USA) and D. Marc Kilgour (Department of Mathematics, Wilfrid Laurier University, Waterloo, ON N2L 3C5, Canada); correspondence: [email protected]. Received: 14 June 2020; Accepted: 18 August 2020; Published: 20 August 2020. Abstract: In the much-studied Centipede Game, which resembles the Iterated Prisoners' Dilemma, two players successively choose between (1) cooperating, by continuing play, or (2) defecting and terminating play. The subgame-perfect Nash equilibrium implies that play terminates on the first move, even though continuing play can benefit both players—but not if the rival defects immediately, which it has an incentive to do. We show that, without changing the structure of the game, interchanging the payoffs of the two players provides each with an incentive to cooperate whenever its turn comes up. The Nash equilibrium in the transformed Centipede Game, called the Reciprocity Game, is unique—unlike the Centipede Game, wherein there are several Nash equilibria. The Reciprocity Game can be implemented noncooperatively by adding, at the start of the Centipede Game, a move to exchange payoffs, which it is rational for the players to choose. What this interchange signifies, and its application to transforming an arms race into an arms-control treaty, are discussed. Keywords: centipede game; Prisoners' Dilemma; subgame-perfect equilibrium; payoff exchange. 1. Introduction. Since Rosenthal [1] introduced what has come to be called the Centipede Game, there has been controversy about what constitutes rational play in it, which we discuss in the next section.
  • Best Experienced Payoff Dynamics and Cooperation in the Centipede
    Best Experienced Payoff Dynamics and Cooperation in the Centipede Game∗. William H. Sandholm†, Segismundo S. Izquierdo‡, and Luis R. Izquierdo§. March 7, 2019. Abstract: We study population game dynamics under which each revising agent tests each of his strategies a fixed number of times, with each play of each strategy being against a newly drawn opponent, and chooses the strategy whose total payoff was highest. In the Centipede game, these best experienced payoff dynamics lead to cooperative play. When strategies are tested once, play at the almost globally stable state is concentrated on the last few nodes of the game, with the proportions of agents playing each strategy being largely independent of the length of the game. Testing strategies many times leads to cyclical play. ∗We thank Dan Friedman, Drew Fudenberg, Ken Judd, Panayotis Mertikopoulos, Erik Mohlin, Ignacio Monzón, Arthur Robson, Marzena Rostek, Ariel Rubinstein, Larry Samuelson, Ryoji Sawa, Andy Skrzypacz, Lones Smith, Mark Voorneveld, Marek Weretka, and especially Antonio Penta for helpful discussions and comments. Financial support from the U.S. National Science Foundation (SES-1458992 and SES-1728853), the U.S. Army Research Office (W911NF-17-1-0134 MSN201957), project ECO2017-83147-C2-2-P (MINECO/AEI/FEDER, UE), and the Spanish Ministerio de Educación, Cultura, y Deporte (PRX15/00362 and PRX16/00048) is gratefully acknowledged. †Department of Economics, University of Wisconsin, 1180 Observatory Drive, Madison, WI 53706, USA; e-mail: [email protected]; website: www.ssc.wisc.edu/~whs. ‡BioEcoUva Research Institute on Bioeconomy, Department of Industrial Organization, Universidad de Valladolid, Paseo del Cauce 59, 47011 Valladolid, Spain.
  • Evolutionary Game Theory: a Renaissance
    Games (Review): Evolutionary Game Theory: A Renaissance. Jonathan Newton, Institute of Economic Research, Kyoto University, Kyoto 606-8501, Japan; [email protected]. Received: 23 April 2018; Accepted: 15 May 2018; Published: 24 May 2018. Abstract: Economic agents are not always rational or farsighted and can make decisions according to simple behavioral rules that vary according to situation and can be studied using the tools of evolutionary game theory. Furthermore, such behavioral rules are themselves subject to evolutionary forces. Paying particular attention to the work of young researchers, this essay surveys the progress made over the last decade towards understanding these phenomena, and discusses open research topics of importance to economics and the broader social sciences. Keywords: evolution; game theory; dynamics; agency; assortativity; culture; distributed systems. JEL Classification: C73. Table of contents: 1 Introduction (Shadow of Nash Equilibrium; Renaissance; Structure of the Survey); 2 Agency—Who Makes Decisions? (Methodology; Implications; Evolution of Collective Agency; Links between Individual & Collective Agency); 3 Assortativity—With Whom Does Interaction Occur? (Assortativity & Preferences; Evolution of Assortativity; Generalized Matching; Conditional Dissociation; Network Formation); 4 Evolution of Behavior (Traits; Conventions—Culture in Society; Culture in Individuals; Culture in Individuals & Society); 5 Economic Applications (Macroeconomics, Market Selection & Finance; Industrial Organization); 6 The Evolutionary Nash Program (Recontracting & Nash Demand Games; TU Matching; NTU Matching; Bargaining Solutions & Coordination Games); 7 Behavioral Dynamics (Reinforcement Learning; Imitation; Best Experienced Payoff Dynamics; Best & Better Response; Continuous Strategy Sets; Completely Uncoupled); 8 General Methodology (Perturbed Dynamics; Further Stability Results; Further Convergence Results; Distributed Control; Software and Simulations); 9 Empirics.
  • Epistemic Game Theory∗
    Epistemic game theory∗. Eddie Dekel and Marciano Siniscalchi, Tel Aviv and Northwestern University, and Northwestern University. January 14, 2014. Contents: 1 Introduction and Motivation (1.1 Philosophy/Methodology); 2 Main ingredients (2.1 Notation; 2.2 Strategic-form games; 2.3 Belief hierarchies; 2.4 Type structures; 2.5 Rationality and belief; 2.6 Discussion: 2.6.1 State dependence and non-expected utility, 2.6.2 Elicitation, 2.6.3 Introspective beliefs, 2.6.4 Semantic/syntactic models); 3 Strategic games of complete information (3.1 Rationality and Common Belief in Rationality; 3.2 Discussion; 3.3 ∆-rationalizability); 4 Equilibrium Concepts (4.1 Introduction; 4.2 Subjective Correlated Equilibrium; 4.3 Objective correlated equilibrium; 4.4 Nash Equilibrium; 4.5 The book-of-play assumption; 4.6 Discussion: 4.6.1 Condition AI).
  • Subgame-Perfect Nash Equilibrium
    Chapter 11: Subgame-Perfect Nash Equilibrium. Backward induction is a powerful solution concept with some intuitive appeal. Unfortunately, it can be applied only to perfect information games with a finite horizon. Its intuition, however, can be extended beyond these games through subgame perfection. This chapter defines the concept of subgame-perfect equilibrium and illustrates how one can check whether a strategy profile is a subgame-perfect equilibrium. 11.1 Definition and Examples. An extensive-form game can contain a part that could be considered a smaller game in itself; such a smaller game that is embedded in a larger game is called a subgame. A main property of backward induction is that, when restricted to a subgame of the game, the equilibrium computed using backward induction remains an equilibrium (computed again via backward induction) of the subgame. Subgame perfection generalizes this notion to general dynamic games. Definition 11.1: A Nash equilibrium is said to be subgame perfect if and only if it is a Nash equilibrium in every subgame of the game. A subgame must be a well-defined game when it is considered separately. That is, it must contain an initial node, and all the moves and information sets from that node on must remain in the subgame. [Figure 11.1: A Centipede Game — player 1, then player 2, then player 1 each choose to stop or continue; stopping at the first, second, and third decision nodes yields payoffs (1,1), (0,4), and (3,3) respectively, and continuing past the last node yields (2,5).] Consider, for instance, the centipede game in Figure 11.1, where the equilibrium is drawn in thick lines. This game has three subgames. One of them is the subgame starting at player 1's last decision node, with outcomes (3,3) and (2,5); another is the subgame starting at player 2's decision node, with outcomes (0,4), (3,3), and (2,5); the third subgame is the game itself.
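    Below is a minimal sketch (my own illustration, not from the book; the node encoding is an assumption) of backward induction on the centipede game of Figure 11.1. For these payoffs each mover prefers to stop at every node, so the backward-induction outcome is (1, 1), consistent with the thick-line equilibrium described above.

    from typing import List, Tuple

    # Each node: (mover, payoff pair if the mover stops here); continuing past the
    # last node yields FINAL. Payoffs are taken from Figure 11.1.
    NODES: List[Tuple[int, Tuple[int, int]]] = [(1, (1, 1)), (2, (0, 4)), (1, (3, 3))]
    FINAL: Tuple[int, int] = (2, 5)

    def backward_induction():
        """Return the value of the game and each mover's choice, solving from the last node back."""
        value = FINAL                  # payoff if every remaining node is passed
        plan = []
        for mover, stop_payoff in reversed(NODES):
            # The mover compares stopping now with the value of continuing (stops when indifferent).
            if stop_payoff[mover - 1] >= value[mover - 1]:
                value, action = stop_payoff, "stop"
            else:
                action = "continue"
            plan.append(action)
        return value, list(reversed(plan))

    print(backward_induction())   # ((1, 1), ['stop', 'stop', 'stop']): play ends at the first node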