Game Theory Lecture Notes


Lecture Notes for 1st Year Ph.D. Game Theory*
Navin Kartik†

1 Introduction

Game theory is a formal methodology and a set of techniques to study the interaction of rational agents in strategic settings. 'Rational' here means the standard thing in economics: maximizing over well-defined objectives; 'strategic' means that agents care not only about their own actions, but also about the actions taken by other agents. Note that decision theory (which you should have seen at least a bit of last term) is the study of how an individual makes decisions in non-strategic settings; hence game theory is sometimes also referred to as multi-person decision theory. The common terminology for the field comes from its putative applications to games such as poker, chess, etc.¹ However, the applications we are usually interested in have little directly to do with such games. In particular, these are what we call "zero-sum" games, in the sense that one player's loss is another player's gain; they are games of pure conflict. In economic applications, there is typically a mixture of conflict and cooperation motives.

1.1 A (Very!) Brief History

Modern game theory as a field owes much to the work of John von Neumann. In 1928, he wrote an important paper on two-person zero-sum games that contained the famous Minimax Theorem, which we'll see later on. In 1944, von Neumann and Oskar Morgenstern published their classic book, Theory of Games and Economic Behavior, which extended the work on zero-sum games and also started cooperative game theory. In the early 1950s, John Nash made his seminal contributions to non-zero-sum games and started bargaining theory. In 1957, Robert Luce and Howard Raiffa published their book, Games and Decisions: Introduction and Critical Survey, popularizing game theory. In 1967–1968, John Harsanyi formalized methods to study games of incomplete information, which was crucial for widening the scope of applications. In the 1970s, there was an explosion of theoretical and applied work in game theory, and the methodology was well along its way to its current status as a preeminent tool in not only economics but other social sciences too.

1.2 Non-cooperative Game Theory

Throughout this course, we will focus on noncooperative game theory, as opposed to cooperative game theory. All of game theory describes strategic settings by starting with the set of players, i.e., the decision-makers. The difference between noncooperative and cooperative game theory is that the former takes each player's individual actions as primitives, whereas the latter takes joint actions as primitives. That is, cooperative game theory assumes that binding agreements can be made by players within various groups and that players can communicate freely in order to do so. We will take the noncooperative viewpoint that each player acts as an individual, and the possibilities for agreements and communication must be explicitly modeled.

* Last updated: March 11, 2009. These notes draw upon various published and unpublished sources, including notes by Vince Crawford, David Miller, and particularly those by Doug Bernheim. Thanks to David Miller and former and current students for comments.
† [email protected]. Please feel free to send me suggestions, corrections, typos, etc.
¹ Ironically, game theory actually has limited prescriptive advice to offer on how to play either chess or poker. For example, we know that chess is "solvable" in a sense to be made precise later, but nobody actually knows what the solution is! This stems from the fact that chess is simply too complicated to "solve" (at present); this is of course why the best players are said to rely just as much on their intuition or feel as on logic, and can defeat powerful computers.
Except for brief discussions in Appendix A of Chapter 18 and parts of Chapter 22, Mas-Colell, Whinston, and Green (1995, hereafter MWG) does not deal with cooperative game theory either. For an excellent introduction, see Chapters 13–15 in Osborne and Rubinstein (1994).

2 Strategic Settings

A game is a description of a strategic environment. Informally, the description must specify who is playing, what the rules are, what the outcomes are depending on any set of actions, and how players value the various outcomes.

Example 1. (Matching Pennies version A) Two players, Anne and Bob. Simultaneously, each picks Heads or Tails. If they pick the same, Bob pays Anne $2; if they pick different, Anne pays Bob $2.

Example 2. (Matching Pennies version B) Two players, Anne and Bob. First, Anne picks either Heads or Tails. Upon observing her choice, Bob then picks either Heads or Tails. If they pick the same, Bob pays Anne $2; if they pick different, Anne pays Bob $2.

Example 3. (Matching Pennies version N) Two players, Anne and Bob. Simultaneously, they pick either Heads or Tails. If they pick different, then they each receive $0. If they pick the same, they wait for 15 minutes to see if it rains outside in that time. If it does, they each receive $2 (from God); if it does not rain, they each receive $0. Assume it rains with 50% chance.

In all the above examples, implicitly, players value money in the canonical way and are risk-neutral. Notice that Examples 1 and 2 are zero-sum games: whatever Anne wins, Bob loses, and vice-versa. Example 3 is not zero-sum, since they could both win $2. In fact, it is a particular kind of coordination game. Moreover, it is also a game that involves an action taken by "nature" (who decides whether it rains or not).

2.1 Extensive Form Representation

• Work through extensive form representation of the examples first.

Let us now be more precise about the description of a game.

Definition 1.
An extensive form game is defined by a tuple Γ_E = {X, A, I, p, α, H, H(·), ι, ρ, u} as follows:

1. A finite set of I players. Denote the set of players as I = {0, 1, ..., I}. Players 1, ..., I are the "real" players; player 0 is used as an "auxiliary" player, nature.

2. A set of nodes, X.²

3. A function p : X → X ∪ {∅} specifying a unique immediate predecessor of each node x, such that p(x) is the empty set for exactly one node, called the root node,³ x₀.

   (a) The immediate successors of node x are defined as s(x) = {y ∈ X : p(y) = x}.

   (b) By iterating the functions p and s, we can find all predecessors and successors of any node x, which we denote P(x) and S(x) respectively. We require that P(x) ∩ S(x) = ∅, i.e. no node is both a predecessor and a successor of any other node.

   (c) The set of terminal nodes is T = {x ∈ X : s(x) = ∅}.

4. A set of actions, A, and a function α : X \ {x₀} → A that specifies for each node x ≠ x₀ the action which leads to x from p(x). We require that α be such that if distinct x′, x′′ ∈ s(x), then α(x′) ≠ α(x′′). That is, from any node, each action leads to a unique successor. The set of available actions at any node x is denoted c(x) = {α(x′)}_{x′ ∈ s(x)}.

5. A collection of information sets, H, that forms a partition of X,⁴ and a function H : X → H that assigns each decision node into an information set. We require that if y ∈ P(x) then H(x) ≠ H(y); that is, no node can be in the same information set as any of its predecessors. We also require that c(x) = c(x′) if H(x) = H(x′); that is, two nodes in the same information set have the same set of available actions. It is therefore meaningful to write C(H) = {a ∈ A : a ∈ c(x) ∀x ∈ H} for any information set H ∈ H as the set of choices available at H.

6. A function ι : H → I assigning the player (possibly nature) to move at all the decision nodes in any information set. This defines a collection of information sets that any player i moves at, H_i ≡ {H ∈ H : i = ι(H)}.

7. For each H ∈ H₀, a probability distribution ρ(H) on the set C(H). This dictates nature's moves at each of its information sets.

8. u = (u₁, ..., u_I) is a vector of utility functions such that for each i = 1, ..., I, u_i : T → R is a (vNM) payoff function that represents (expected utility) preferences for i over terminal nodes.

² Nodes are typically drawn as small solid circles, but note fn. 3.
³ The root node is typically drawn as a small hollow circle.
⁴ Recall that a partition is a set of mutually exclusive and exhaustive subsets.

Keep in mind that when drawing game trees, we use dotted lines between nodes (Kreps) or ellipses around nodes (MWG) to indicate nodes that fall into the same information set.

• Work through examples of what the definition of an extensive form game rules out.

To avoid technical complications, we restrict the formal definition above to finite games, which satisfy the following property.

Assumption 1. The set of nodes, X, is finite.

Remark 1. If X is finite, then even if the set of actions, A, is infinite, there are only a finite number of relevant actions; hence, without loss of generality, we can take A as finite if X is finite.

At various points, we will study infinite games (where the number of nodes is infinite); the extension of the formal concept of a game to such cases will be intuitive and straightforward.
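As a concreteness check on Definition 1, the sketch below encodes Matching Pennies version A (Example 1) as an extensive-form tuple and verifies several of the definition's requirements, then computes the risk-neutral value of the rain lottery in version N (Example 3). This is a minimal illustration, not part of the notes: the node names, the dictionary-based encoding, and the helper `s()` are my own choices.

```python
# A minimal encoding of Matching Pennies version A (Example 1) as an
# extensive-form game, following the tuple in Definition 1. Node names
# and the dict-based encoding are illustrative, not from the notes.

# Nodes: root x0 (Anne moves), a_H/a_T (Bob moves after Anne's H or T),
# and four terminal nodes named by the pair of choices.
X = ["x0", "a_H", "a_T", "HH", "HT", "TH", "TT"]
p = {"x0": None, "a_H": "x0", "a_T": "x0",       # immediate predecessor p(x)
     "HH": "a_H", "HT": "a_H", "TH": "a_T", "TT": "a_T"}
alpha = {"a_H": "H", "a_T": "T",                 # action leading into each node
         "HH": "H", "HT": "T", "TH": "H", "TT": "T"}

def s(x):
    """Immediate successors s(x) = {y : p(y) = x}."""
    return {y for y in X if p[y] == x}

# Item 3: exactly one root node, the unique x with p(x) empty.
assert [x for x in X if p[x] is None] == ["x0"]

# Item 4: distinct successors of a node are reached by distinct actions.
for x in X:
    assert len({alpha[y] for y in s(x)}) == len(s(x))

# Item 5: moves are simultaneous, so Bob cannot distinguish a_H from a_T;
# the two nodes share one information set and must offer the same choices C(H).
H_bob = {"a_H", "a_T"}
assert len({frozenset(alpha[y] for y in s(x)) for x in H_bob}) == 1

# Terminal nodes T = {x : s(x) = empty} carry payoffs u = (u_Anne, u_Bob):
# Bob pays Anne $2 on a match, Anne pays Bob $2 on a mismatch.
u = {"HH": (2, -2), "HT": (-2, 2), "TH": (-2, 2), "TT": (2, -2)}
terminal = {x for x in X if not s(x)}
assert terminal == set(u)

# Version A is zero-sum: payoffs sum to zero at every terminal node.
assert all(ua + ub == 0 for ua, ub in u.values())

# In version N (Example 3), a matched pair leads to a move by nature
# (player 0): rain with probability rho = 1/2 giving ($2, $2), else ($0, $0).
# Risk-neutral players value this lottery at its expectation.
rho = 0.5
expected_match = rho * 2 + (1 - rho) * 0
print(expected_match)  # 1.0 -- both players gain, so version N is not zero-sum
```

The information-set check is the formal counterpart of drawing a dotted line between Bob's two nodes: since he cannot tell them apart, the definition forces both nodes to offer the same set of available actions.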