Scenario Analysis Normal Form Game

Total Pages: 16

File Type: PDF; Size: 1020 KB

Economics 3030, Chapter 10
Game Theory: Inside Oligopoly

Overview
I. Introduction to Game Theory
II. Simultaneous-Move, One-Shot Games
III. Infinitely Repeated Games
IV. Finitely Repeated Games
V. Multistage Games

Normal Form Game
• A Normal Form Game consists of:
  - Players
  - Strategies or feasible actions
  - Payoffs

A Normal Form Game

  Player 1 \ Player 2:    A        B        C
  a                       12,11    11,12    14,13
  b                       11,10    10,11    12,12
  c                       10,15    10,13    13,14

Normal Form Game: Scenario Analysis
• Suppose 1 thinks 2 will choose "A". Then 1 should choose "a".
  - Player 1's best response to "A" is "a".
• Suppose 1 thinks 2 will choose "B". Then 1 should choose "a".
  - Player 1's best response to "B" is "a".
• Similarly, if 1 thinks 2 will choose "C"...
  - Player 1's best response to "C" is "a".

Dominant Strategy
• Regardless of whether Player 2 chooses A, B, or C, Player 1 is better off choosing "a"!
• "a" is Player 1's dominant strategy (i.e., the strategy that results in the highest payoff regardless of the opponent's action).

Putting Yourself in Your Rival's Shoes
• What should Player 2 do?
  - Player 2 has no dominant strategy!
  - But Player 2 should reason that Player 1 will play "a".
  - Therefore Player 2 should choose "C".

The Outcome
• This outcome is called a Nash equilibrium (i.e., there is no way a player can unilaterally change strategies and be better off).
  - "a" is Player 1's best response to "C".
  - "C" is Player 2's best response to "a".

Key Insights
• Look for dominant strategies.
• Put yourself in your rival's shoes.

A Market-Share Game
• Two managers want to maximize market share.
• Strategies are pricing decisions.
• Simultaneous moves.
• One-shot game.

The Market-Share Game in Normal Form

  Manager 1 \ Manager 2:    P=$10    P=$5     P=$1
  P=$10                     .5,.5    .2,.8    .1,.9
  P=$5                      .8,.2    .5,.5    .2,.8
  P=$1                      .9,.1    .8,.2    .5,.5

Market-Share Game Equilibrium
• P=$1 is the dominant strategy for both managers, so (P=$1, P=$1) is the Nash equilibrium.

Key Insight
• Game theory can also be used to analyze situations where "payoffs" are non-monetary!
• It may be beneficial for all to cooperate and have "standards".

Examples of Coordination Games
• Product standards
  - size of floppy disks
  - size of CDs
  - VHS vs. Betamax
• National standards
  - electric current
  - traffic laws
  - etc.
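The best-response reasoning in the slides above is mechanical enough to check in a few lines of code. Below is a minimal sketch, not part of the original slides, that finds dominant strategies and pure-strategy Nash equilibria of a two-player normal form game; the function names and the dictionary encoding of the payoff matrices are my own. It is applied to the 3x3 game above and to the coordination game that appears immediately after this sketch.

```python
def best_responses(payoffs, player):
    """Map each opponent strategy to the set of best responses for `player`.

    `payoffs[(r, c)] = (u1, u2)` gives the row player's (Player 1) and the
    column player's (Player 2) payoffs at the profile (r, c).
    """
    rows = sorted({r for r, _ in payoffs})
    cols = sorted({c for _, c in payoffs})
    own, other = (rows, cols) if player == 1 else (cols, rows)
    br = {}
    for s_other in other:
        def u(s_own):
            key = (s_own, s_other) if player == 1 else (s_other, s_own)
            return payoffs[key][player - 1]
        best = max(u(s) for s in own)
        br[s_other] = {s for s in own if u(s) == best}
    return br


def dominant_strategy(payoffs, player):
    """A strategy that is a best response to every opponent strategy, if one exists."""
    common = set.intersection(*best_responses(payoffs, player).values())
    return next(iter(common)) if common else None


def pure_nash_equilibria(payoffs):
    """A profile is a Nash equilibrium if each strategy is a best response to the other."""
    br1, br2 = best_responses(payoffs, 1), best_responses(payoffs, 2)
    return [(r, c) for (r, c) in payoffs if r in br1[c] and c in br2[r]]


# The 3x3 game from the slides: rows a, b, c for Player 1; columns A, B, C for Player 2.
game = {
    ("a", "A"): (12, 11), ("a", "B"): (11, 12), ("a", "C"): (14, 13),
    ("b", "A"): (11, 10), ("b", "B"): (10, 11), ("b", "C"): (12, 12),
    ("c", "A"): (10, 15), ("c", "B"): (10, 13), ("c", "C"): (13, 14),
}
print(dominant_strategy(game, 1))   # 'a'  -- Player 1's dominant strategy
print(dominant_strategy(game, 2))   # None -- Player 2 has no dominant strategy
print(pure_nash_equilibria(game))   # [('a', 'C')]

# The coordination game (the $10 payoffs entered as 10): three Nash equilibria.
coordination = {
    ("1", "A"): (0, 0),   ("1", "B"): (0, 0),   ("1", "C"): (10, 10),
    ("2", "A"): (10, 10), ("2", "B"): (0, 0),   ("2", "C"): (0, 0),
    ("3", "A"): (0, 0),   ("3", "B"): (10, 10), ("3", "C"): (0, 0),
}
print(pure_nash_equilibria(coordination))  # [('1', 'C'), ('2', 'A'), ('3', 'B')]
```

Running it confirms that "a" is dominant for Player 1, that Player 2 has no dominant strategy, that (a, C) is the unique pure-strategy Nash equilibrium of the first game, and that the coordination game has three equilibria.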
A Coordination Game in Normal Form

  Player 1 \ Player 2:    A          B          C
  1                       0,0        0,0        $10,$10
  2                       $10,$10    0,0        0,0
  3                       0,0        $10,$10    0,0

A Coordination Problem: Three Nash Equilibria!
• (1, C), (2, A), and (3, B) are all Nash equilibria, each paying both players $10.

Key Insights
• Not all games are games of conflict.
• Communication can help solve coordination problems.
• Sequential moves can help solve coordination problems (i.e., let one player move first).

An Advertising Game
• Two firms' (Kellogg's and General Mills) managers want to maximize profits.
• Strategies consist of advertising campaigns.
• Simultaneous moves.
• One-shot interaction.
• Repeated interaction.

A One-Shot Advertising Game

  Kellogg's \ General Mills:    None     Moderate    High
  None                          12,12    1,20        -1,15
  Moderate                      20,1     6,6         0,9
  High                          15,-1    9,0         2,2

Equilibrium to the One-Shot Advertising Game
• Nash equilibrium: (High, High), with payoffs (2, 2).

Can collusion work if the game is repeated 2 times?
• No (by backwards induction).
• In period 2, the game is a one-shot game, so equilibrium entails High Advertising in the last period.
• This means period 1 is "really" the last period, since everyone knows what will happen in period 2.
• Equilibrium entails High Advertising by each firm in both periods.
• The same holds true if we repeat the game any known, finite number of times.

Can collusion work if firms play the game each year, forever?
• Consider the following "trigger strategy" by each firm:
  - "Don't advertise, provided the rival has not advertised in the past. If the rival ever advertises, 'punish' it by engaging in a high level of advertising forever after."
• In effect, each firm agrees to "cooperate" so long as the rival hasn't "cheated" in the past. "Cheating" triggers punishment in all future periods.

Suppose General Mills adopts this trigger strategy. Kellogg's profits?

  P_Cooperate = 12 + 12/(1+i) + 12/(1+i)^2 + 12/(1+i)^3 + ... = 12 + 12/i
  (12/i is the value of a perpetuity of $12 paid at the end of every year)

  P_Cheat = 20 + 2/(1+i) + 2/(1+i)^2 + 2/(1+i)^3 + ... = 20 + 2/i

Kellogg's Gain to Cheating
• P_Cheat - P_Cooperate = 20 + 2/i - (12 + 12/i) = 8 - 10/i
  - Suppose i = .05.
• P_Cheat - P_Cooperate = 8 - 10/.05 = 8 - 200 = -192
• It doesn't pay to deviate.
  - Collusion is a Nash equilibrium in the infinitely repeated game!

Benefits & Costs of Cheating
• P_Cheat - P_Cooperate = 8 - 10/i
  - 8 = immediate benefit (20 - 12 today)
  - 10/i = PV of future cost (12 - 2 forever after)
• If immediate benefit > PV of future cost, it pays to "cheat".
• If immediate benefit ≤ PV of future cost, it doesn't pay to "cheat".

Key Insight
• Collusion can be sustained as a Nash equilibrium when there is no certain "end" to a game.
• Doing so requires:
  - Ability to monitor actions of rivals
  - Ability (and reputation for) punishing defectors
  - Low interest rate
  - High probability of future interaction

Real World Examples of Collusion
1. Garbage Collection Industry
2. OPEC
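Before turning to those examples, the trigger-strategy arithmetic from the Kellogg's/General Mills game can be checked numerically. This is a rough sketch of my own (the helper name pv_stream is made up), confirming the closed forms P_Cooperate = 12 + 12/i and P_Cheat = 20 + 2/i and the break-even interest rate implied by the gain 8 - 10/i.

```python
def pv_stream(today, later, i, horizon=10_000):
    """Present value of `today` now plus `later` at the end of each future year,
    discounted at interest rate i (the infinite sum is truncated at `horizon`)."""
    return today + sum(later / (1 + i) ** t for t in range(1, horizon + 1))

i = 0.05
pv_cooperate = pv_stream(12, 12, i)  # both firms stick to "None": 12 every period
pv_cheat = pv_stream(20, 2, i)       # cheat once for 20, then (High, High) = 2 forever

print(round(pv_cooperate, 2))             # ~252.0, matching 12 + 12/i = 12 + 240
print(round(pv_cheat, 2))                 # ~60.0,  matching 20 + 2/i  = 20 + 40
print(round(pv_cheat - pv_cooperate, 2))  # ~-192.0, matching 8 - 10/i

# Cheating pays only if the immediate benefit (8) exceeds the PV of the future
# cost (10/i), i.e. only if i > 10/8 = 1.25, an implausibly high interest rate.
for rate in (0.05, 0.50, 1.25, 2.00):
    print(rate, round(8 - 10 / rate, 2))  # -192.0, -12.0, 0.0, 3.0
```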
1. Garbage Collection Industry
• Homogeneous products
• Bertrand oligopoly
• Identity of customers is known
• Identity of competitors is known

Normal Form Bertrand Game

  Firm 1 \ Firm 2:    Low Price    High Price
  Low Price           0,0          20,-1
  High Price          -1,20        15,15

One-Shot Bertrand (Nash) Equilibrium
• (Low Price, Low Price), with payoffs (0, 0).

Potential Repeated Game Equilibrium Outcome
• (High Price, High Price), with payoffs (15, 15).

2. OPEC
• Cartel founded in 1960 by Iran, Iraq, Kuwait, Saudi Arabia, and Venezuela.
• Currently has 11 members.
• "OPEC's objective is to co-ordinate and unify petroleum policies among Member Countries, in order to secure fair and stable prices for petroleum producers…" (www.opec.com)
• Cournot oligopoly.
• With no collusion: P_Competition < P_Cournot < P_Monopoly.

Current OPEC Members

Cournot Game in Normal Form

  Saudi Arabia \ Venezuela:    High Q    Med Q    Low Q
  High Q                       5,3       9,4      3,6
  Med Q                        6,7       12,10    20,8
  Low Q                        8,1       10,18    18,15

One-Shot Cournot (Nash) Equilibrium
• (Med Q, Med Q), with payoffs (12, 10).

Effect of Collusion on Oil Prices
[Figure: the world demand for oil; cutting output from "Medium" to "Low" raises the price from $15 to $30.]

Repeated Game Equilibrium (assuming a low interest rate)
• (Low Q, Low Q), with payoffs (18, 15); a numerical check of this condition follows these slides.

OPEC's Demise
[Figure: the real interest rate and the price of oil, 1970-1986, with the eras of low and high interest rates marked.]

Caveat
• Collusion is illegal in most countries.
• Firms are constantly being investigated by the Competition Bureau in Canada and brought to trial in Federal Court.
• OPEC isn't illegal; North American laws don't apply.

Simultaneous-Move Bargaining
• Management and a union are negotiating a wage increase.

The Bargaining Game
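Returning to the OPEC Cournot game above, the slides' "immediate benefit versus present value of future cost" test can be applied to the collusive outcome (Low Q, Low Q). The sketch below is my own illustration, using the payoffs from the matrix and assuming that a deviation is punished by permanent reversion to the one-shot Cournot equilibrium (Med Q, Med Q).

```python
# Payoffs (Saudi Arabia, Venezuela) taken from the Cournot matrix above.
collusive = (18, 15)       # (Low Q, Low Q), the collusive outcome
punishment = (12, 10)      # (Med Q, Med Q), the one-shot Cournot (Nash) equilibrium
best_deviation = (20, 18)  # best one-period payoff from switching to Med Q while the rival stays at Low Q

def collusion_sustainable(i):
    """Collusion holds if, for each country, the one-period gain from deviating
    is no larger than the present value of the per-period punishment loss."""
    for country in (0, 1):
        immediate_benefit = best_deviation[country] - collusive[country]
        pv_future_cost = (collusive[country] - punishment[country]) / i
        if immediate_benefit > pv_future_cost:
            return False
    return True

for i in (0.05, 1.5, 2.0):
    print(i, collusion_sustainable(i))  # True, True, False
# Saudi Arabia compares a gain of 20 - 18 = 2 with 6/i; Venezuela compares 18 - 15 = 3 with 5/i.
# The binding constraint is Venezuela's: collusion survives only if i <= 5/3, which is why the
# repeated-game equilibrium slide assumes a low interest rate.
```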
Recommended publications
  • Repeated Games
6.254: Game Theory with Engineering Applications, Lecture 15: Repeated Games. Asu Ozdaglar, MIT, April 1, 2010.

Outline: repeated games (perfect monitoring); the problem of cooperation; the finitely repeated prisoner's dilemma; infinitely repeated games and cooperation; folk theorems. Reference: Fudenberg and Tirole, Section 5.1.

Prisoners' Dilemma. How to sustain cooperation in society? Recall the prisoners' dilemma, which is the canonical game for understanding incentives for defecting instead of cooperating.

              Cooperate    Defect
  Cooperate     1, 1       −1, 2
  Defect        2, −1       0, 0

Recall that the strategy profile (D, D) is the unique NE. In fact, D strictly dominates C and thus (D, D) is the dominant equilibrium. In society, we have many situations of this form, but we often observe some amount of cooperation. Why?

Repeated Games. In many strategic situations, players interact repeatedly over time. Perhaps repetition of the same game might foster cooperation. By repeated games, we refer to a situation in which the same stage game (strategic form game) is played at each date for some duration of T periods. Such games are also sometimes called "supergames". We will assume that overall payoff is the sum of discounted payoffs at each stage. Future payoffs are discounted and are thus less valuable (e.g., money in the future is less valuable than money now because of positive interest rates; consumption in the future is less valuable than consumption now because of time preference). We will see in this lecture how repeated play of the same strategic game introduces new (desirable) equilibria by allowing players to condition their actions on the way their opponents played in the previous periods.
  • Repeated Games
Repeated games. Felix Munoz-Garcia, Strategy and Game Theory, Washington State University.

Repeated games are very usual in real life:
1. Treasury bill auctions (some of them are organized monthly, but some are even weekly),
2. Cournot competition is repeated over time by the same group of firms (firms simultaneously and independently decide how much to produce in every period).
3. The OPEC cartel is also repeated over time.

In addition, players' interaction in a repeated game can help us rationalize cooperation... in settings where such cooperation could not be sustained should players interact only once. We will therefore show that, when the game is repeated, we can sustain:
1. Players' cooperation in the Prisoner's Dilemma game,
2. Firms' collusion:
   - Setting high prices in the Bertrand game, or
   - Reducing individual production in the Cournot game.
3. But let's start with a more "unusual" example in which cooperation also emerged: trench warfare in World War I (Harrington, Ch. 13).

Trench warfare in World War I. Despite all the killing during that war, peace would occasionally flare up as the soldiers in opposing trenches would achieve a truce. Examples: the hour of 8:00-9:00 am was regarded as consecrated to "private business"; no shooting during meals; no firing artillery at the enemy's supply lines. One account in Harrington: after some shooting, a German soldier shouted out "We are very sorry about that; we hope no one was hurt. It is not our fault, it is that damned Prussian artillery." But... how was that cooperation achieved? We can assume that each soldier values killing the enemy, but places a greater value on not getting killed.
  • 2.4 Finitely Repeated Games
UC Berkeley Electronic Theses and Dissertations. Title: Three Essays on Dynamic Games. Permalink: https://escholarship.org/uc/item/5hm0m6qm. Author: Plan, Asaf. Publication Date: 2010. Peer reviewed | Thesis/dissertation.

Three Essays on Dynamic Games, by Asaf Plan. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Economics in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Matthew Rabin (Chair), Professor Robert M. Anderson, Professor Steven Tadelis. Spring 2010.

Abstract. Chapter 1: This chapter considers a new class of dynamic, two-player games, where a stage game is continuously repeated but each player can only move at random times that she privately observes. A player's move is an adjustment of her action in the stage game, for example, a duopolist's change of price. Each move is perfectly observed by both players, but a foregone opportunity to move, like a choice to leave one's price unchanged, would not be directly observed by the other player. Some adjustments may be constrained in equilibrium by moral hazard, no matter how patient the players are. For example, a duopolist would not jump up to the monopoly price absent costly incentives. These incentives are provided by strategies that condition on the random waiting times between moves; punishing a player for moving slowly, lest she silently choose not to move.
  • Spring 2017 Final Exam
Spring 2017 Final Exam. ECONS 424: Strategy and Game Theory. Tuesday May 2, 3:10 PM - 5:10 PM.

Directions: Complete 5 of the 6 questions on the exam. You will have a minimum of 2 hours to complete this final exam. No notes, books, or phones may be used during the exam. Write your name, answers and work clearly on the answer paper provided. Please ask me if you have any questions.

Problem 1 (20 Points) [From Lecture Slides]. Consider the following game between two competing auction houses, Christie's and Sotheby's. Each firm charges its customers a commission on the items sold. Customers view each of the auction houses as essentially identical. For this reason, whichever house charges the lowest commission will be the one most of the customers want to use. However, if they can cooperate they might be able to make more money without undercutting each other. The stage-game for the competition between the auction houses is shown below.

  Christie's \ Sotheby's:    7%       5%       2%
  7%                         7, 7     1, 10    -2, 3
  5%                         10, 1    4, 4     1, 2
  2%                         3, -2    2, 1     0, 0

(A) (2.5 points) List all pure strategy Nash equilibria of the stage-game.
(B) (2.5 points) List the preferred cooperative outcome of the stage-game and determine the best payoff a player gains by unilaterally deviating from this outcome.
(C) (5 points) Describe the Grim-trigger strategy for the infinitely repeated game between Sotheby's and Christie's.
(D) (10 points) If both auction houses have a common discount factor 0 ≤ δ ≤ 1, find the condition on this discount factor that will allow the Grim-trigger strategy to be sustained as a Subgame Perfect Nash equilibrium of the infinitely repeated game.
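For part (D), a sketch of the standard grim-trigger calculation (my working, not necessarily the exam's intended solution), assuming cooperation at (7%, 7%) pays 7 per period, the best one-shot deviation pays 10, and play reverts forever to the stage Nash equilibrium (5%, 5%), worth 4 per period:

```latex
% Grim trigger in the infinitely repeated auction-house game (assumed payoffs as above).
\[
  \underbrace{\frac{7}{1-\delta}}_{\text{cooperate forever}}
  \;\ge\;
  \underbrace{10 + \frac{4\delta}{1-\delta}}_{\text{deviate once, then be punished}}
  \;\Longleftrightarrow\;
  7 \ge 10(1-\delta) + 4\delta
  \;\Longleftrightarrow\;
  \delta \ge \tfrac{1}{2}.
\]
```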
  • Collusive Behaviour in Finite Repeated Games with Bonding
Collusive Behaviour in Finite Repeated Games with Bonding. Mukesh Eswaran (University of British Columbia) and Tracy R. Lewis (California Institute of Technology and University of British Columbia). Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, California 91125. Social Science Working Paper 466, February 1983.

Abstract: In finite repeated games, it is not possible to enforce collusive behaviour using deterrent strategies because of the "unravelling" of cooperative behaviour in the last period. This paper demonstrates that under certain conditions collusion among the players can be maintained if they can post a bond which they must forfeit if they defect from the cooperative mode. We show that the incentives to cooperate increase as the period of interaction grows in that the size of the bond required to deter defection becomes arbitrarily small as the number of periods in the game increases.

It is well known that it is possible (even with strictly positive discounting) to obtain collusive perfect equilibria in infinitely repeated games. However, only noncooperative perfect equilibria exist in finite games. Even though finite games may last for a long time, the cooperative behaviour of the players unravels in the final period of play: defection from the cooperative agreement is the dominant strategy in the last period, and backward induction renders noncooperative action the dominant strategy in all earlier periods. This phenomenon of unraveling is unsatisfactory for two reasons. First, it contradicts our intuition that cooperative
  • MS&E 246: Lecture 10 Repeated Games
MS&E 246: Lecture 10, Repeated Games. Ramesh Johari.

What is a repeated game? A repeated game is a dynamic game constructed by playing the same game over and over. It is a dynamic game of imperfect information.

This lecture: finitely repeated games; infinitely repeated games; trigger strategies; the folk theorem.

Stage game. At each stage, the same game is played: the stage game G. Assume G is a simultaneous-move game; in G, player i has action set A_i and payoff P_i(a_i, a_{-i}).

Finitely repeated games. G(K): G is repeated K times. Information sets: all players observe the outcome of each stage. What are strategies, payoffs, equilibria?

History and strategies. The period-t history is h_t = (a(0), ..., a(t-1)), where a(τ) is the action profile played at stage τ. A strategy s_i is a choice of stage-t action s_i(h_t) ∈ A_i for each history h_t, i.e. a_i(t) = s_i(h_t).

Payoffs. Assume payoff = sum of stage game payoffs.

Example: Prisoner's dilemma. Recall the Prisoner's dilemma:

  Player 2 \ Player 1:    defect    cooperate
  defect                  (1,1)     (4,0)
  cooperate               (0,4)     (2,2)

Example: Prisoner's dilemma. Two volunteers, five rounds, no communication allowed!

  Round:      1  2  3  4  5  Total
  Player 1:   1  1  1  1  1  5
  Player 2:   1  1  1  1  1  5

SPNE. Suppose a^NE is a stage game NE. Any such NE gives a SPNE: player i plays a_i^NE at every stage, regardless of history. Question: are there any other SPNE?

SPNE. How do we find SPNE of G(K)? Observe: the subgame starting after history h_t is identical to G(K - t).

SPNE: unique stage game NE. Suppose G has a unique NE a^NE. Then regardless of the period-K history h_K, the last stage has the unique NE a^NE. Hence, at a SPNE, s_i(h_K) = a_i^NE.

SPNE: backward induction. At stage K - 1, given s_{-i}(·), player i chooses s_i(h_{K-1}) to maximize P_i(s_i(h_{K-1}), s_{-i}(h_{K-1})) + P_i(s(h_K)), i.e. the payoff at stage K - 1 plus the payoff at stage K. We know that a^NE is played at the last stage, so the stage-K term is P_i(a^NE).
  • The Complexity of Nash Equilibria in Infinite Multiplayer Games
The Complexity of Nash Equilibria in Infinite Multiplayer Games. Michael Ummels ([email protected]). FOSSACS 2008.

Infinite Games. Let's play! [Figure: a game graph on six vertices, 1-6.] Play: π = 1, 2, 3, 6, 4, 2, 5, ... Note: no probabilistic vertices!

Winning conditions. Question: what is the payoff of a play? It is specified by a winning condition for each player:
- Büchi condition: given a set F of vertices, defines the set of all plays π that hit F infinitely often.
- Co-Büchi condition: given a set F of vertices, defines the set of all plays π that hit F only finitely often.
- Parity condition: given a priority function Ω : V → N, defines the set of all plays π such that the least priority occurring infinitely often is even.
A player receives payoff 1 if her winning condition is satisfied, otherwise 0. But we are not so much interested in the winner of a certain play as in the strategic behaviour that can occur.

The Classical Case. Two-player zero-sum games: games with two players where the winning conditions are complements of each other. (Pure) determinacy: a two-player zero-sum game is determined (in pure strategies) if one of the two players has a (pure) winning strategy. Theorem (Martin 1975): any two-player zero-sum game with a Borel winning condition is determined in pure strategies.
  • Repeated Games
Multi-agent learning: Repeated games. Gerard Vreeswijk, Intelligent Systems Group, Computer Science Department, Faculty of Sciences, Utrecht University, The Netherlands.

Repeated games: motivation.
1. Much interaction in multi-agent systems can be modelled through games.
2. Much learning in multi-agent systems can therefore be modelled through learning in games.
3. Learning in games usually takes place through the (gradual) adaptation of strategies (hence, behaviour) in a repeated game.
4. In most repeated games, one game (a.k.a. the stage game) is played repeatedly: a finite number of times, an indefinite (same: indeterminate) number of times, or an infinite number of times.
5. Therefore, familiarity with the basic concepts and results from the theory of repeated games is essential to understand multi-agent learning.

Plan for today.
- NE in normal form games that are repeated a finite number of times: the principle of backward induction.
- NE in normal form games that are repeated an indefinite number of times: the discount factor (which models the probability of continuation); the folk theorem (actually many folk theorems; repeated games generally do have infinitely many Nash equilibria); trigger strategies, on-path vs. off-path play, and the threat to "minmax" an opponent.

This presentation draws heavily on H. Peters (2008), Game Theory: A Multi-Leveled Approach, Springer, ISBN 978-3-540-69290-4, Ch. 8: Repeated games.
  • Chapter 10 Game Theory: Inside Oligopoly
Managerial Economics & Business Strategy, Chapter 10: Game Theory: Inside Oligopoly. McGraw-Hill/Irwin. Copyright © 2010 by the McGraw-Hill Companies, Inc. All rights reserved.

Overview
I. Introduction to Game Theory
II. Simultaneous-Move, One-Shot Games
III. Infinitely Repeated Games
IV. Finitely Repeated Games
V. Multistage Games

Game Environments
• Players' planned decisions are called strategies.
• Payoffs to players are the profits or losses resulting from strategies.
• Order of play is important:
  - Simultaneous-move game: each player makes decisions without knowledge of the other players' decisions.
  - Sequential-move game: one player observes its rival's move prior to selecting a strategy.
• Frequency of rival interaction:
  - One-shot game: the game is played once.
  - Repeated game: the game is played more than once, either a finite or infinite number of interactions.

Simultaneous-Move, One-Shot Games: Normal Form Game
• A Normal Form Game consists of:
  - A set of players i ∈ {1, 2, ..., n}, where n is a finite number.
  - Each player's strategy set or feasible actions, consisting of a finite number of strategies. Player 1's strategies are S1 = {a, b, c, ...}; Player 2's strategies are S2 = {A, B, C, ...}.
  - Payoffs, e.g. Player 1's payoff π1(a, B) = 11 and Player 2's payoff π2(b, C) = 12.

A Normal Form Game

  Player 1 \ Player 2:    A        B        C
  a                       12,11    11,12    14,13
  b                       11,10    10,11    12,12
  c                       10,15    10,13    13,14

Normal Form Game: Scenario Analysis
• Suppose 1 thinks 2 will choose "A". Then 1 should choose "a".
  • Repeated Games EC202 Lectures IX & X
Repeated Games. EC202 Lectures IX & X. Francesco Nava, London School of Economics, January 2011.

Summary: repeated games. Definitions: feasible payoffs, minmax, repeated game, stage game, trigger strategy. Main result: the folk theorem. Examples: the Prisoner's Dilemma.

Feasible Payoffs. Q: What payoffs are feasible in a strategic form game? A: A profile of payoffs is feasible in a strategic form game if it can be expressed as a weighted average of payoffs in the game.

Definition (Feasible Payoffs). A profile of payoffs {w_i}_{i∈N} is feasible in a strategic form game <N, {A_i, u_i}_{i∈N}> if there exists a distribution p over profiles of actions such that
  w_i = Σ_{a∈A} p(a) u_i(a)  for any i ∈ N.
Unfeasible payoffs cannot be outcomes of the game. Points on the north-east boundary of the feasible set are Pareto efficient.

Minmax. Q: What's the worst possible payoff that a player can achieve if he chooses according to his best response function? A: The minmax payoff.

Definition (Minmax). The (pure strategy) minmax payoff of player i ∈ N in a strategic form game <N, {A_i, u_i}_{i∈N}> is
  u_i = min_{a_{-i} ∈ A_{-i}} max_{a_i ∈ A_i} u_i(a_i, a_{-i}).
Mixed strategy minmax payoffs satisfy
  v_i = min_{s_{-i}} max_{s_i} u_i(s_i, s_{-i}).
The mixed strategy minmax is not higher than the pure strategy minmax.
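The pure-strategy minmax definition above can be computed directly by iterating over the payoff matrix. A minimal sketch of my own, with a made-up prisoner's dilemma as the example game (the payoff numbers are illustrative, not from the lecture):

```python
def pure_minmax(payoffs, player):
    """Pure-strategy minmax for `player` in a two-player game encoded as
    payoffs[(r, c)] = (u1, u2): the minimum, over the opponent's strategies,
    of the player's best-response payoff, i.e. min over a_-i of max over a_i of u_i."""
    rows = sorted({r for r, _ in payoffs})
    cols = sorted({c for _, c in payoffs})
    if player == 1:
        return min(max(payoffs[(r, c)][0] for r in rows) for c in cols)
    return min(max(payoffs[(r, c)][1] for c in cols) for r in rows)

# Illustrative prisoner's dilemma (hypothetical numbers), actions C and D.
pd = {
    ("C", "C"): (2, 2), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (1, 1),
}
print(pure_minmax(pd, 1), pure_minmax(pd, 2))  # 1 1 -- each player can be held down to the (D, D) payoff
```

In the folk theorem, feasible payoff profiles that give every player strictly more than this minmax value are the ones that can be sustained in the infinitely repeated game when players are sufficiently patient.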
  • Cooperation in a Repeated Public Goods Game with a Probabilistic Endpoint
W&M ScholarWorks: Undergraduate Honors Theses. Theses, Dissertations, & Master Projects, 5-2014.

Cooperation in a Repeated Public Goods Game with a Probabilistic Endpoint. Daniel M. Carlen, College of William and Mary.

Follow this and additional works at: https://scholarworks.wm.edu/honorstheses. Part of the Behavioral Economics Commons, Econometrics Commons, Economic History Commons, Economic Theory Commons, Other Economics Commons, Political Economy Commons, and the Social Statistics Commons.

Recommended Citation: Carlen, Daniel M., "Cooperation in a Repeated Public Goods Game with a Probabilistic Endpoint" (2014). Undergraduate Honors Theses. Paper 34. https://scholarworks.wm.edu/honorstheses/34

This Honors Thesis is brought to you for free and open access by the Theses, Dissertations, & Master Projects at W&M ScholarWorks. It has been accepted for inclusion in Undergraduate Honors Theses by an authorized administrator of W&M ScholarWorks. For more information, please contact [email protected].

Cooperation in a Repeated Public Goods Game with a Probabilistic Endpoint: a thesis submitted in partial fulfillment of the requirement for the degree of Bachelor of Arts in the Department of Economics from The College of William and Mary, by Daniel Marc Carlen. Accepted for Honors: Lisa Anderson (Economics), Co-Advisor; Rob Hicks (Economics), Co-Advisor; Christopher Freiman (Philosophy). Williamsburg, VA, April 11, 2014.

Acknowledgements. Professor Lisa Anderson, the single most important person throughout this project and my academic career. She is the most helpful and insightful thesis advisor that I could have ever expected to have at William and Mary, going far beyond the call of duty and always offering a helping hand.
  • Computing Correlated Equilibrium and Succinct Representation of Games
Algorithmic Game Theory: Computing Correlated Equilibrium and Succinct Representation of Games. Branislav Bošanský, Artificial Intelligence Center, Department of Computer Science, Faculty of Electrical Engineering, Czech Technical University in Prague. [email protected]. April 23, 2018.

Correlated Equilibrium: a probability distribution over pure strategy profiles p ∈ Δ(S) that recommends each player i to play the best response; for all s_i, s_i' ∈ S_i:
  Σ_{s_{-i} ∈ S_{-i}} p(s_i, s_{-i}) u_i(s_i, s_{-i}) ≥ Σ_{s_{-i} ∈ S_{-i}} p(s_i, s_{-i}) u_i(s_i', s_{-i}).

Coarse Correlated Equilibrium: a probability distribution over pure strategy profiles p ∈ Δ(S) that in expectation recommends each player i to play the best response; for all s_i ∈ S_i:
  Σ_{s' ∈ S} p(s') u_i(s') ≥ Σ_{s' ∈ S} p(s') u_i(s_i, s'_{-i}).

The solution concept describes situations with a correlation device present in the environment. Correlated equilibrium is closely related to learning in competitive scenarios. A (coarse) correlated equilibrium is often the result of a no-regret learning strategy in a game.

Computing a CE in normal-form games amounts to finding p satisfying, for all s_i, s_i' ∈ S_i:
  Σ_{s_{-i} ∈ S_{-i}} p(s_i, s_{-i}) u_i(s_i, s_{-i}) ≥ Σ_{s_{-i} ∈ S_{-i}} p(s_i, s_{-i}) u_i(s_i', s_{-i}).
Computation in succinct games: polymatrix games, congestion games, anonymous games, symmetric games, graphical games with a bounded tree-width.

Succinct Representations: a compact representation of a game with n = |N| players; we want to reduce the input from |S|^|N| to |S|^d, where d ≪ |N|. Which succinct representations are we going to talk about: congestion games (network congestion games, ...), polymatrix games (zero-sum polymatrix games), graphical games (action graph games).

Definition (Papadimitriou and Roughgarden, 2008). A succinct game G = (I, T, U) is defined, like all computational problems, in terms of a set of efficiently recognizable inputs I, and two polynomial algorithms T and U.
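The CE conditions above are linear in p, so a correlated equilibrium of a small game can be found with an off-the-shelf LP solver. The sketch below is my own construction, not code from the lecture: it maximizes the sum of expected payoffs subject to the CE constraints for a hypothetical 2x2 game of chicken.

```python
import numpy as np
from scipy.optimize import linprog

# Payoff matrices for a 2x2 "chicken" game (hypothetical numbers).
# Row player = player 1, column player = player 2; actions: 0 = Dare, 1 = Chicken.
U1 = np.array([[0, 7],
               [2, 6]])
U2 = np.array([[0, 2],
               [7, 6]])

n1, n2 = U1.shape
num_vars = n1 * n2                  # one probability p(s1, s2) per pure profile
idx = lambda s1, s2: s1 * n2 + s2   # flatten (s1, s2) into a variable index

A_ub, b_ub = [], []

# Player 1's CE constraints: for every recommended s1 and deviation t1,
# sum_{s2} p(s1, s2) * (U1[s1, s2] - U1[t1, s2]) >= 0, written as <= 0 for linprog.
for s1 in range(n1):
    for t1 in range(n1):
        if t1 == s1:
            continue
        row = np.zeros(num_vars)
        for s2 in range(n2):
            row[idx(s1, s2)] = -(U1[s1, s2] - U1[t1, s2])
        A_ub.append(row)
        b_ub.append(0.0)

# Player 2's CE constraints, symmetrically over columns.
for s2 in range(n2):
    for t2 in range(n2):
        if t2 == s2:
            continue
        row = np.zeros(num_vars)
        for s1 in range(n1):
            row[idx(s1, s2)] = -(U2[s1, s2] - U2[s1, t2])
        A_ub.append(row)
        b_ub.append(0.0)

# Probabilities sum to one; maximize total expected payoff (linprog minimizes, so negate).
A_eq = [np.ones(num_vars)]
b_eq = [1.0]
c = -np.array([U1[s1, s2] + U2[s1, s2] for s1 in range(n1) for s2 in range(n2)])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, 1)] * num_vars, method="highs")
print(res.x.reshape(n1, n2))  # a distribution over profiles satisfying the CE constraints
```

With these chicken payoffs the solver puts its weight on (Chicken, Chicken) and the two asymmetric profiles rather than on (Dare, Dare), the usual textbook illustration of a correlated equilibrium achieving higher total payoff than the game's mixed Nash equilibrium.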