Lecture 2: Alternative Equilibrium Concepts


Overview
1. Weaknesses of NE (Example 1: Centipede Game; Example 2: Matching Pennies)
2. Logit QRE (explaining the Centipede Game and Matching Pennies using logit QRE)
3. Impulse Balance Equilibrium
4. Action-Sampling Equilibrium
5. Comparing Equilibrium Concepts

Literature
Fey, M., McKelvey, R. D., and Palfrey, T. R. (1996): An Experimental Study of Constant-Sum Centipede Games, International Journal of Game Theory, 25, 269-287.
Goeree, J. K. and Holt, C. A. (2001): Ten Little Treasures of Game Theory and Ten Intuitive Contradictions, American Economic Review, 91(5), 1402-1422.
McKelvey, R. D., and Palfrey, T. R. (1992): An Experimental Study of the Centipede Game, Econometrica, 60(4), 803-836.
Selten, R. and Chmura, T. (2008): Stationary Concepts for Experimental 2x2-Games, American Economic Review, 98(3), 938-966.
Brunner, C., Camerer, C., and Goeree, J. (2011): Stationary Concepts for Experimental 2x2-Games: Comment, American Economic Review, 101, 1-14.

Nash Equilibrium
Nash Equilibrium (NE) assumes that agents have correct beliefs about other agents' behavior, and that agents best respond given their beliefs. These assumptions are often violated: unobservable variables (weather, temperature, framing, ...), cognitive limitations, lack of attention/willpower, social preferences, etc.

"Improve on NE?"
Critics of NE typically point out that it is not always congruent with reality. But is this the only criterion that a "good" theory has to satisfy? Stigler (1965) proposes three criteria to evaluate economic theories: Congruence with Reality, Tractability, and Generality.

Centipede Game: McKelvey and Palfrey (1992)
[Figures 1 and 2 of McKelvey and Palfrey (1992): the four-move and the six-move centipede game. The two piles start at $0.40 and $0.10; each Pass (P) doubles both piles, and Take (T) ends the game, with terminal payoffs growing up to $25.60/$6.40 in the six-move game.]
What is the unique subgame-perfect equilibrium of this game? Play T at all nodes.
How many pairs of subjects play T at the first node? 0.7%.

From McKelvey and Palfrey (1992, p. 806):
"[...] This enables us to estimate the number of subjects that actually behave in such a fashion, and to address the question as to whether the beliefs of subjects are on average correct. Our experiment can also be compared to the literature on repeated prisoner's dilemmas. This literature (see, e.g., Selten and Stoecker (1986) for a review) finds that experienced subjects exhibit a pattern of 'tacit cooperation' until shortly before the end of the game, when they start to adopt noncooperative behavior. Such behavior would be predicted by incomplete information models like that of Kreps et al. (1982). However, Selten and Stoecker also find that inexperienced subjects do not immediately adopt this pattern of play, but that it takes them some time to 'learn to cooperate.' Selten and Stoecker develop a learning theory model that is not based on optimizing behavior to account for such a learning phase. One could alternatively develop a model similar to the one used here, where in addition to incomplete information about the payoffs of others, all subjects have some chance of making errors, which decreases over time. If some other subjects might be making errors, then it could be in the interest of all subjects to take some time to learn to cooperate, since they can masquerade as slow learners. Thus, a natural analog of the model used here might offer an alternative explanation for the data in Selten and Stoecker.
2. EXPERIMENTAL DESIGN. Our budget is too constrained to use the payoffs proposed by Aumann. So we run a rather more modest version of the centipede game. In our laboratory games, we start with a total pot of $.50 divided into a large pile of $.40 and a small pile of $.10. Each time a player chooses to pass, both piles are multiplied by two. We consider both a two round (four move) and a three round (six move) version of the game. This leads to the extensive forms illustrated in Figures 1 and 2. In addition, we consider a version of the four move game in which all payoffs are quadrupled. This 'high payoff' condition therefore produced a payoff structure equivalent to the last four moves of the six move game."
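The overview lists logit QRE as the first alternative concept: instead of best responding exactly, each player chooses every action with a probability increasing in its expected payoff, governed by a precision parameter lambda (lambda = 0 gives uniform randomization; lambda going to infinity approaches best response). As a rough illustration, here is a minimal Python sketch of the logit fixed-point computation for a generic 2x2 game. The function name, the damping step, and the asymmetric matching-pennies payoffs are illustrative assumptions, not the parameterization used in the lecture or in Goeree and Holt (2001).

```python
# Minimal logit-QRE sketch for a 2-player bimatrix game.
# Hypothetical payoffs; damping is added only to stabilize the fixed-point iteration.
import numpy as np

def logit_qre(A, B, lam, iters=10_000, damp=0.5, tol=1e-10):
    """A[i, j]: row player's payoff, B[i, j]: column player's payoff."""
    p = np.full(A.shape[0], 1.0 / A.shape[0])   # row player's mixed strategy
    q = np.full(A.shape[1], 1.0 / A.shape[1])   # column player's mixed strategy
    for _ in range(iters):
        u_row = A @ q                            # expected payoff of each row against q
        u_col = B.T @ p                          # expected payoff of each column against p
        p_star = np.exp(lam * u_row); p_star /= p_star.sum()
        q_star = np.exp(lam * u_col); q_star /= q_star.sum()
        p_new = (1 - damp) * p + damp * p_star   # damped update
        q_new = (1 - damp) * q + damp * q_star
        if max(abs(p_new - p).max(), abs(q_new - q).max()) < tol:
            return p_new, q_new
        p, q = p_new, q_new
    return p, q

# Asymmetric matching pennies (illustrative): row wants to match, column wants to
# mismatch, and the row player's "match on Top" payoff is inflated.
A = np.array([[3.2, 0.0], [0.0, 0.8]])
B = np.array([[0.0, 0.8], [0.8, 0.0]])
for lam in (0.5, 2.0, 8.0):
    p, q = logit_qre(A, B, lam)
    print(f"lambda={lam}: P(row plays Top)={p[0]:.3f}, P(column plays Left)={q[0]:.3f}")
```

In this illustrative game the mixed Nash equilibrium keeps the row mixture pinned at 1/2 by the column player's symmetric payoffs, whereas the logit fixed point shifts the row player well above 1/2 on Top at moderate lambda. That kind of own-payoff sensitivity is the qualitative pattern QRE is used to capture in the experiments discussed in the lecture.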
Recommended publications
  • Game Theory 2: Extensive-Form Games and Subgame Perfection
    Game Theory 2: Extensive-Form Games and Subgame Perfection. Dynamics in Games: How should we think of strategic interactions that occur in sequence? Who moves when, and what can they do at different points in time? How do people react to different histories? Modeling Games with Dynamics: players; a player function (who moves when); terminal histories (possible paths through the game); preferences over terminal histories. Strategies: a strategy is a complete contingent plan; player i's strategy specifies her action choice at each point at which she could be called on to make a choice. An Example: International Crises. Two countries (A and B) are competing over a piece of land that B occupies. Country A decides whether to make a demand. If Country A makes a demand, B can either acquiesce or fight a war. If A does not make a demand, B keeps the land (game ends). A's best outcome is Demand followed by Acquiesce, and its worst outcome is Demand and War. B's best outcome is No Demand, and its worst outcome is Demand and War. A can choose Demand (D) or No Demand (ND); B can choose Fight a war (W) or Acquiesce (A). Preferences: uA(D, A) = 3 > uA(ND, A) = uA(ND, W) = 2 > uA(D, W) = 1, and uB(ND, A) = uB(ND, W) = 3 > uB(D, A) = 2 > uB(D, W) = 1. How can we represent this scenario as a game (in strategic form)? International Crisis Game: NE (best responses marked with a check):

                            Country B
                            W            A
        Country A    D      1, 1         3✓, 2✓
                     ND     2✓, 3✓       2, 3✓

    Is there something funny here? Specifically, what about (ND, W)?
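To make the NE table above concrete, here is a minimal sketch that enumerates the pure-strategy equilibria of the crisis game by checking mutual best responses. The payoff dictionary is transcribed from the preferences in the excerpt; the function name is just illustrative.

```python
# Sketch: find pure-strategy Nash equilibria of the 2x2 crisis game by
# checking that neither country can gain by a unilateral deviation.
import itertools

# u[(a_A, a_B)] = (payoff to Country A, payoff to Country B)
u = {("D", "A"): (3, 2), ("D", "W"): (1, 1),
     ("ND", "A"): (2, 3), ("ND", "W"): (2, 3)}

A_actions, B_actions = ("D", "ND"), ("W", "A")

def is_nash(a, b):
    uA, uB = u[(a, b)]
    best_A = all(u[(a2, b)][0] <= uA for a2 in A_actions)   # A cannot gain by deviating
    best_B = all(u[(a, b2)][1] <= uB for b2 in B_actions)   # B cannot gain by deviating
    return best_A and best_B

for a, b in itertools.product(A_actions, B_actions):
    if is_nash(a, b):
        print(f"Pure NE: A plays {a}, B plays {b}")   # prints (D, A) and (ND, W)
```

It reports both (D, A) and (ND, W); the latter is exactly the "funny" equilibrium, because it rests on B's non-credible threat to fight after a demand, which motivates subgame perfection.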
  • Lecture 4 Rationalizability & Nash Equilibrium Road
    Lecture 4: Rationalizability & Nash Equilibrium (14.12 Game Theory, Muhamet Yildiz). Road Map: 1. Strategies (completed) 2. Quiz 3. Dominance 4. Dominant-strategy equilibrium 5. Rationalizability 6. Nash Equilibrium. Strategy: a strategy of a player is a complete contingent plan, determining which action he will take at each information set at which he is to move (including the information sets that will not be reached according to this strategy). Matching pennies with perfect information, player 2's strategies: HH = Head if 1 plays Head, Head if 1 plays Tail; HT = Head if 1 plays Head, Tail if 1 plays Tail; TH = Tail if 1 plays Head, Head if 1 plays Tail; TT = Tail if 1 plays Head, Tail if 1 plays Tail. [Figures: the game trees for matching pennies with perfect and with imperfect information, with terminal payoffs (-1, 1) and (1, -1); the strategic form of the perfect-information game with player 1's strategies Head, Tail and player 2's strategies HH, HT, TH, TT; and a game with nature in which Nature chooses Head or Tail with probability 1/2 each and leaf payoffs are (5, 0), (2, 2), (3, 3), (0, -5).] Matching pennies with imperfect information, strategic form:

                     Head        Tail
        Head         -1, 1       1, -1
        Tail         1, -1       -1, 1

    Mixed Strategy. Definition: A mixed strategy of a player is a probability distribution over the set of his strategies. Pure strategies: S_i = {s_i1, s_i2, ..., s_ik}. A mixed strategy is a map σ_i : S_i → [0, 1] such that σ_i(s_i1) + σ_i(s_i2) + ... + σ_i(s_ik) = 1. If the other players play s_-i = (s_1, ..., s_{i-1}, s_{i+1}, ..., s_n), then the expected utility of playing σ_i is σ_i(s_i1) u_i(s_i1, s_-i) + σ_i(s_i2) u_i(s_i2, s_-i) + ... + σ_i(s_ik) u_i(s_ik, s_-i).
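The expected-utility formula at the end of the excerpt is just a probability-weighted sum over pure strategies. The small sketch below evaluates it for the matching-pennies payoffs shown in the excerpt's imperfect-information matrix (player 1 gets -1 on a match); the function names and the 50/50 mixture are illustrative assumptions.

```python
# Sketch of the expected-utility formula above: the payoff of a mixed strategy
# sigma_i against a fixed pure profile s_{-i} is the sigma_i-weighted average
# of pure-strategy payoffs.
def expected_utility(sigma_i, u_i, s_minus_i):
    """sigma_i: dict strategy -> probability; u_i(s_i, s_minus_i): payoff function."""
    return sum(prob * u_i(s_i, s_minus_i) for s_i, prob in sigma_i.items())

def u1(s1, s2):
    # Player 1's payoffs from the matrix in the excerpt: -1 on a match, +1 on a mismatch.
    return -1 if s1 == s2 else 1

sigma1 = {"Head": 0.5, "Tail": 0.5}
print(expected_utility(sigma1, u1, "Head"))   # 0.0: the 50/50 mix makes player 1 indifferent
```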
  • Best Experienced Payoff Dynamics and Cooperation in the Centipede Game
    Theoretical Economics 14 (2019), 1347–1385. Best experienced payoff dynamics and cooperation in the centipede game. William H. Sandholm, Department of Economics, University of Wisconsin; Segismundo S. Izquierdo, BioEcoUva, Department of Industrial Organization, Universidad de Valladolid; Luis R. Izquierdo, Department of Civil Engineering, Universidad de Burgos. We study population game dynamics under which each revising agent tests each of his strategies a fixed number of times, with each play of each strategy being against a newly drawn opponent, and chooses the strategy whose total payoff was highest. In the centipede game, these best experienced payoff dynamics lead to cooperative play. When strategies are tested once, play at the almost globally stable state is concentrated on the last few nodes of the game, with the proportions of agents playing each strategy being largely independent of the length of the game. Testing strategies many times leads to cyclical play. Keywords: evolutionary game theory, backward induction, centipede game, computational algebra. JEL classification: C72, C73. 1. Introduction. The discrepancy between the conclusions of backward induction reasoning and observed behavior in certain canonical extensive form games is a basic puzzle of game theory. The centipede game (Rosenthal (1981)), the finitely repeated prisoner's dilemma, and related examples can be viewed as models of relationships in which each participant has repeated opportunities to take costly actions that benefit his partner and in which there is a commonly known date at which the interaction will end. Experimental and anecdotal evidence suggests that cooperative behavior may persist until close to
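The dynamic described in the abstract is simple to state operationally: a revising agent tries each of his strategies kappa times, each trial against a freshly drawn opponent, and adopts the strategy with the highest total realized payoff. The sketch below implements one such revision step for a generic symmetric normal-form game; the 3-strategy payoff matrix and the population state are made-up placeholders (the paper itself studies the centipede game), and ties are broken here by lowest index rather than uniformly.

```python
# Minimal sketch of one "best experienced payoff" revision with kappa tests per strategy,
# following the verbal description in the abstract above.
import numpy as np

rng = np.random.default_rng(0)

def bep_revision(payoff, state, kappa=1):
    """payoff[i, j]: revising agent's payoff from strategy i against opponent strategy j.
    state: current distribution of strategies in the population.
    Each strategy is tested kappa times, each time against a freshly drawn opponent;
    the agent switches to the strategy with the highest total realized payoff."""
    n = payoff.shape[0]
    totals = np.empty(n)
    for i in range(n):
        opponents = rng.choice(n, size=kappa, p=state)   # a new opponent for every test
        totals[i] = payoff[i, opponents].sum()
    return int(np.argmax(totals))                        # ties broken by lowest index here

# Tiny illustrative 3-strategy game (hypothetical payoffs, not the centipede game).
payoff = np.array([[2.0, 0.0, 1.0],
                   [3.0, 1.0, 0.0],
                   [1.0, 2.0, 2.0]])
state = np.array([0.5, 0.3, 0.2])
print("revising agent switches to strategy", bep_revision(payoff, state, kappa=1))
```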
  • Equilibrium Refinements
    Equilibrium Refinements (Mihai Manea, MIT). Sequential Equilibrium: In many games information is imperfect and the only subgame is the original game, so subgame perfect equilibrium = Nash equilibrium. Play starting at an information set can be analyzed as a separate subgame if we specify players' beliefs about which node they are at. Based on the beliefs, we can test whether continuation strategies form a Nash equilibrium. Sequential equilibrium (Kreps and Wilson 1982) is a way to derive plausible beliefs at every information set. An Example with Incomplete Information: Spence's (1973) job market signaling game. The worker knows her ability (productivity) and chooses a level of education. Education is more costly for low ability types. The firm observes the worker's education, but not her ability. The firm decides what wage to offer her. In the spirit of subgame perfection, the optimal wage should depend on the firm's beliefs about the worker's ability given the observed education. An equilibrium needs to specify contingent actions and beliefs. Beliefs should follow Bayes' rule on the equilibrium path. What about off-path beliefs? An Example with Imperfect Information: [Figure: (L, A) is a subgame perfect equilibrium. Is it plausible that 2 plays A?] Assessments and Sequential Rationality: Focus on extensive-form games of perfect recall with finitely many nodes. An assessment is a pair (σ, µ), where σ is a (behavior) strategy profile and µ = (µ(h) ∈ ∆(h))_{h ∈ H} is a system of beliefs. u_i(σ|_h, µ(h)) is i's payoff when play begins at a node in h randomly selected according to µ(h), with subsequent play specified by σ.
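The requirement that "beliefs should follow Bayes' rule on the equilibrium path" can be written compactly. The display below is the standard textbook condition (not quoted from the slides) for an information set h that is reached with positive probability under the strategy profile σ, where P^σ(x) denotes the probability that σ reaches node x.

```latex
\mu(h)(x) \;=\; \frac{\mathrm{P}^{\sigma}(x)}{\sum_{x' \in h} \mathrm{P}^{\sigma}(x')}
\qquad \text{for every node } x \in h .
```

Off the path of play the denominator is zero, which is exactly why refinements such as sequential equilibrium must discipline off-path beliefs separately, as the excerpt's closing question suggests.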
  • Lecture Notes
    GRADUATE GAME THEORY LECTURE NOTES BY OMER TAMUZ California Institute of Technology 2018 Acknowledgments These lecture notes are partially adapted from Osborne and Rubinstein [29], Maschler, Solan and Zamir [23], lecture notes by Federico Echenique, and slides by Daron Acemoglu and Asu Ozdaglar. I am indebted to Seo Young (Silvia) Kim and Zhuofang Li for their help in finding and correcting many errors. Any comments or suggestions are welcome. 2 Contents 1 Extensive form games with perfect information 7 1.1 Tic-Tac-Toe ........................................ 7 1.2 The Sweet Fifteen Game ................................ 7 1.3 Chess ............................................ 7 1.4 Definition of extensive form games with perfect information ........... 10 1.5 The ultimatum game .................................. 10 1.6 Equilibria ......................................... 11 1.7 The centipede game ................................... 11 1.8 Subgames and subgame perfect equilibria ...................... 13 1.9 The dollar auction .................................... 14 1.10 Backward induction, Kuhn’s Theorem and a proof of Zermelo’s Theorem ... 15 2 Strategic form games 17 2.1 Definition ......................................... 17 2.2 Nash equilibria ...................................... 17 2.3 Classical examples .................................... 17 2.4 Dominated strategies .................................. 22 2.5 Repeated elimination of dominated strategies ................... 22 2.6 Dominant strategies ..................................
  • More Experiments on Game Theory
    More Experiments on Game Theory (Syngjoo Choi, Experimental Economics (ECON3020), Spring 2010). Playing Unpredictably: In many situations there is a strategic advantage associated with being unpredictable. Mixed (or Randomized) Strategy: In the penalty kick, Left (L) and Right (R) are the pure strategies of the kicker and the goalkeeper. A mixed strategy refers to a probabilistic mixture over pure strategies in which no single pure strategy is played all the time, e.g., kicking left half of the time and right the other half of the time. A penalty kick in a soccer game is one example of a game of pure conflict, essentially called a zero-sum game, in which one player's winning is the other's loss. Matching Pennies Games: In a matching pennies game, each player uncovers a penny showing either heads or tails. One player takes both coins if the pennies match; otherwise, the other takes both coins.

                     Left        Right
        Top          1, -1       -1, 1
        Bottom       -1, 1       1, -1

    Does there exist a Nash equilibrium involving the use of pure strategies? The use of pure strategies will result in a loss. What about each player playing heads half of the time?
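Here is a tiny sketch addressing the question at the end of the excerpt: if the opponent shows heads half of the time, both of a player's pure strategies earn the same expected payoff, so the 50/50 mixture is a best response and the profile of 50/50 mixes is a mixed-strategy Nash equilibrium. The payoff dictionary follows the reconstructed table above, in which the row player wins on a match; that convention is an assumption of this sketch.

```python
# Matching pennies, with the row player winning +1 on a match (illustrative convention).
u_row = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}

def expected_vs_mix(own, q_heads):
    """Row player's expected payoff from a pure strategy against a column player
    who shows heads with probability q_heads."""
    return q_heads * u_row[(own, "H")] + (1 - q_heads) * u_row[(own, "T")]

for own in ("H", "T"):
    print(own, expected_vs_mix(own, 0.5))   # both print 0.0: the row player is indifferent,
                                            # so mixing 50/50 is itself a best response
```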
  • Modeling Strategic Behavior1
    Modeling Strategic Behavior: A Graduate Introduction to Game Theory and Mechanism Design. George J. Mailath, Department of Economics, University of Pennsylvania; Research School of Economics, Australian National University. September 16, 2020, copyright by George J. Mailath. World Scientific Press published the November 02, 2018 version. Any significant corrections or changes are listed at the back. To Loretta. Preface: These notes are based on my lecture notes for Economics 703, a first-year graduate course that I have been teaching at the Economics Department, University of Pennsylvania, for many years. It is impossible to understand modern economics without knowledge of the basic tools of game theory and mechanism design. My goal in the course (and this book) is to teach those basic tools so that students can understand and appreciate the corpus of modern economic thought, and so contribute to it. A key theme in the course is the interplay between the formal development of the tools and their use in applications. At the same time, extensions of the results that are beyond the course, but important for context, are (briefly) discussed. While I provide more background verbally on many of the examples, I assume that students have seen some undergraduate game theory (such as covered in Osborne, 2004, Tadelis, 2013, and Watson, 2013). In addition, some exposure to intermediate microeconomics and decision making under uncertainty is helpful. Since these are lecture notes for an introductory course, I have not tried to attribute every result or model described. The result is a somewhat random pattern of citations and references.
  • 8. Maxmin and Minmax Strategies
    CMSC 474, Introduction to Game Theory. 8. Maxmin and Minmax Strategies (Mohammad T. Hajiaghayi, University of Maryland). Outline: Chapter 2 discussed two solution concepts, Pareto optimality and Nash equilibrium. Chapter 3 discusses several more: maxmin and minmax, dominant strategies, correlated equilibrium, trembling-hand perfect equilibrium, ε-Nash equilibrium, and evolutionarily stable strategies. Worst-Case Expected Utility: For agent i, the worst-case expected utility of a strategy s_i is the minimum over all possible combinations of strategies for the other agents: min_{s_-i} u_i(s_i, s_-i). Example: Battle of the Sexes.

                          Husband
                          Opera       Football
        Wife   Opera      2, 1        0, 0
               Football   0, 0        1, 2

    Wife's strategy s_w = {(p, Opera), (1 - p, Football)}; husband's strategy s_h = {(q, Opera), (1 - q, Football)}. We can write u_w(p, q) instead of u_w(s_w, s_h): u_w(p, q) = 2pq + (1 - p)(1 - q) = 3pq - p - q + 1. For any fixed p, u_w(p, q) is linear in q; e.g., if p = 1/2, then u_w(1/2, q) = (1/2)q + 1/2. Since 0 <= q <= 1, the min must be at q = 0 or q = 1; e.g., min_q ((1/2)q + 1/2) is at q = 0. So min_q u_w(p, q) = min(u_w(p, 0), u_w(p, 1)) = min(1 - p, 2p). Maxmin Strategies (also called maximin): A maxmin strategy for agent i is a strategy s_i that makes i's worst-case expected utility as high as possible: argmax_{s_i} min_{s_-i} u_i(s_i, s_-i). This isn't necessarily unique; often it is mixed. Agent i's maxmin value, or security level, is the maxmin strategy's worst-case expected utility: max_{s_i} min_{s_-i} u_i(s_i, s_-i). For 2 players it simplifies to max_{s_1} min_{s_2} u_1(s_1, s_2). Example: wife's and husband's strategies
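A quick numerical check of the worked example: the wife's worst-case payoff is min(1 - p, 2p), so maximizing it over p is a one-dimensional search. The grid-search sketch below should recover p = 1/3 and a security level of 2/3, which is what equating 1 - p and 2p gives; the grid size is arbitrary.

```python
# Numerical check of the maxmin computation above for the Battle of the Sexes.
import numpy as np

def u_wife(p, q):
    return 2 * p * q + (1 - p) * (1 - q)          # formula from the excerpt

ps = np.linspace(0.0, 1.0, 10_001)
worst_case = np.minimum(u_wife(ps, 0.0), u_wife(ps, 1.0))   # min over q is attained at q in {0, 1}
best = np.argmax(worst_case)
print(f"maxmin strategy: p ~ {ps[best]:.4f}, security level ~ {worst_case[best]:.4f}")
# expected output: p ~ 0.3333, security level ~ 0.6667
```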
  • Subgame Perfect (ε-)Equilibrium in Perfect Information Games
    Subgame perfect (ε-)equilibrium in perfect information games. János Flesch, April 15, 2016. Abstract: We discuss recent results on the existence and characterization of subgame perfect (ε-)equilibrium in perfect information games. The game. We consider games with perfect information and deterministic transitions. Such games can be given by a directed tree. In this tree, each node is associated with a player, who controls this node. The outgoing arcs at this node represent the actions available to this player at this node. We assume that each node has at least one successor, rather than having terminal nodes. Play of the game starts at the root. At any node z that play visits, the player who controls z has to choose one of the actions at z, which brings play to a next node. This induces an infinite path in the tree from the root, which we call a play. Depending on this play, each player receives a payoff. Note that these payoffs are fairly general. This setup encompasses the case when the actions induce instantaneous rewards which are then aggregated into a payoff, possibly by taking the total discounted sum or the long-term average. It also includes payoff functions considered in the literature of computer science (reachability games, etc.). Subgame-perfect (ε-)equilibrium. We focus on pure strategies for the players. A central solution concept in such games is subgame-perfect equilibrium, which is a strategy profile that induces a Nash equilibrium in every subgame, i.e. when starting at any node in the tree, no player has an incentive to deviate individually from his continuation strategy.
  • Subgame-Perfect Equilibria in Mean-Payoff Games
    Subgame-perfect Equilibria in Mean-payoff Games. Léonard Brice (Université Gustave Eiffel, France), Jean-François Raskin (Université Libre de Bruxelles, Belgium), Marie van den Bogaard (Université Gustave Eiffel, France). Abstract: In this paper, we provide an effective characterization of all the subgame-perfect equilibria in infinite duration games played on finite graphs with mean-payoff objectives. To this end, we introduce the notion of requirement, and the notion of negotiation function. We establish that the plays that are supported by SPEs are exactly those that are consistent with the least fixed point of the negotiation function. Finally, we show that the negotiation function is piecewise linear, and can be analyzed using the linear algebraic tool box. As a corollary, we prove the decidability of the SPE constrained existence problem, whose status was left open in the literature. 2012 ACM Subject Classification: Software and its engineering: Formal methods; Theory of computation: Logic and verification; Theory of computation: Solution concepts in game theory. Keywords and phrases: Games on graphs, subgame-perfect equilibria, mean-payoff objectives. Digital Object Identifier: 10.4230/LIPIcs... 1 Introduction. The notion of Nash equilibrium (NE) is one of the most important and most studied solution concepts in game theory. A profile of strategies is an NE when no rational player has an incentive to change their strategy unilaterally, i.e. while the other players keep their strategies. Thus an NE models a stable situation. Unfortunately, it is well known that, in sequential games, NEs suffer from the problem of non-credible threats, see e.g. [18]. In those games, some NE only exists when some players do not play rationally in subgames and so use non-credible threats to force the NE.
  • (501B) Problem Set 5. Bayesian Games Suggested Solutions by Tibor Heumann
    Dirk Bergemann, Department of Economics, Yale University. Microeconomic Theory (501b), Problem Set 5: Bayesian Games. Suggested Solutions by Tibor Heumann. 1. (Market for Lemons) Here I ask that you work out some of the details in perhaps the most famous of all information economics models. By contrast to the discussion in class, we give a complete formulation of the game. A seller is privately informed of the value v of the good that she sells to a buyer. The buyer's prior belief on v is uniformly distributed on [x, y] with 0 < x < y. The good is worth (3/2)v to the buyer. (a) Suppose the buyer proposes a price p and the seller either accepts or rejects p. If she accepts, the seller gets payoff p - v, and the buyer gets (3/2)v - p. If she rejects, the seller gets v, and the buyer gets nothing. Find the optimal offer that the buyer can make as a function of x and y. (b) Show that if the buyer and the seller are symmetrically informed (i.e. either both know v or neither party knows v), then trade takes place with probability 1. (c) Consider a simultaneous acceptance game in the model with private information as in part (a), where a price p is announced and then the buyer and the seller simultaneously accept or reject trade at price p. The payoffs are as in part (a). Find the p that maximizes the probability of trade. [SOLUTION] (a) We look for the subgame perfect equilibrium, thus we solve by backward induction.
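For part (a), taking the payoffs exactly as written in the excerpt (accepting gives the seller p - v, rejecting gives v), the seller accepts an offer p precisely when p - v >= v, i.e. when v <= p/2; if instead the seller's acceptance payoff were p rather than p - v, the threshold would be v <= p. The sketch below simply evaluates the buyer's expected payoff on a grid under the literal acceptance rule and searches over p; the particular x and y, the tie-breaking toward acceptance, and the grid sizes are illustrative assumptions rather than part of the problem.

```python
# Numerical sketch of the buyer's problem in part (a), under the acceptance rule
# implied by the stated payoffs: the seller accepts p exactly when v <= p/2.
import numpy as np

def buyer_expected_payoff(p, x, y, threshold, n=200_001):
    """Average of (3/2)v - p over accepting types, with v uniform on [x, y]."""
    v = np.linspace(x, y, n)
    return np.mean(np.where(v <= threshold(p), 1.5 * v - p, 0.0))

x, y = 1.0, 2.0                                   # arbitrary illustration
prices = np.linspace(0.0, 2 * y, 2001)
payoffs = [buyer_expected_payoff(p, x, y, threshold=lambda p: p / 2) for p in prices]
best = int(np.argmax(payoffs))
print(f"best offer p ~ {prices[best]:.3f}, buyer's expected payoff ~ {payoffs[best]:.4f}")
```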
  • Dynamic Games Under Bounded Rationality
    Munich Personal RePEc Archive. Dynamic Games under Bounded Rationality. Zhao, Guo (Southwest University for Nationalities), 8 March 2015. Online at https://mpra.ub.uni-muenchen.de/62688/ MPRA Paper No. 62688, posted 09 Mar 2015 08:52 UTC. Dynamic Games under Bounded Rationality, by ZHAO GUO. I propose a dynamic game model that is consistent with the paradigm of bounded rationality. Its main advantages over the traditional approach based on perfect rationality are that: (1) the strategy space is a chain-complete partially ordered set; (2) the response function is a certain order-preserving map on the strategy space; (3) the evolution of the economic system can be described by the dynamical system defined by the response function under iteration; (4) the existence of pure-strategy Nash equilibria can be guaranteed by fixed point theorems for ordered structures, rather than topological structures. This preference-response framework liberates economics from the utility concept, and constitutes a marriage of normal-form and extensive-form games. Among the common assumptions of classical existence theorems for competitive equilibrium, one is central. That is, individuals are assumed to have perfect rationality, so as to maximize their utilities (payoffs in game theoretic usage). With perfect rationality and perfect competition, the competitive equilibrium is completely determined, and the equilibrium depends only on their goals and their environments. With perfect rationality and perfect competition, the classical economic theory turns out to be a deductive theory that requires almost no contact with empirical data once its assumptions are accepted as axioms (see Simon 1959). Zhao: Southwest University for Nationalities, Chengdu 610041, China (e-mail: [email protected]).
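The abstract's key idea, that with an order-preserving response function equilibria can be found by iterating the response map rather than by topological fixed-point arguments, can be illustrated on a toy finite lattice. Everything in the sketch below is invented for illustration: a two-player game with strategies 0..K whose best responses happen to be increasing in the opponent's strategy, iterated from the least element of the lattice. Because the iterates form an increasing sequence in a finite set, the loop must stop at a fixed point, which is a pure-strategy equilibrium of this toy game.

```python
# Toy illustration: iterate a joint best-response map on a finite strategy lattice.
# The payoff function is hypothetical and chosen so that each player's best
# response is increasing in the opponent's strategy (order-preserving response map).
K = 10                                            # strategies are 0, 1, ..., K

def payoff(own, other):
    return own * (1 + 0.5 * other) - 0.25 * own ** 2

def best_response(other):
    return max(range(K + 1), key=lambda own: payoff(own, other))

profile = (0, 0)                                  # start at the least element of the lattice
while True:
    updated = (best_response(profile[1]), best_response(profile[0]))
    if updated == profile:                        # fixed point reached = pure-strategy NE
        break
    profile = updated
print("pure-strategy equilibrium found by iteration:", profile)
```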