The Nash Equilibrium: A Perspective


PERSPECTIVE

The Nash equilibrium: A perspective

Charles A. Holt* and Alvin E. Roth

Department of Economics, University of Virginia, Charlottesville, VA 22904-4182; and Department of Economics and Harvard Business School, Harvard University, Cambridge, MA 02138

Edited by Vernon L. Smith, George Mason University, Fairfax, VA, and approved January 28, 2004 (received for review January 7, 2004)

In 1950, John Nash contributed a remarkable one-page PNAS article that defined and characterized a notion of equilibrium for n-person games. This notion, now called the "Nash equilibrium," has been widely applied and adapted in economics and other behavioral sciences. Indeed, game theory, with the Nash equilibrium as its centerpiece, is becoming the most prominent unifying theory of social science. In this perspective, we summarize the historical context and subsequent impact of Nash's contribution.

This Perspective is published as part of a series highlighting landmark papers published in PNAS. Read more about this classic PNAS article online at www.pnas.org/misc/classics.shtml.

This paper was submitted directly (Track II) to the PNAS office.

*To whom correspondence should be addressed. E-mail: [email protected].

†In a personal communication with one of the authors, Nash notes that von Neumann was a "European gentleman" but was not an enthusiastic supporter of Nash's approach.

© 2004 by The National Academy of Sciences of the USA. www.pnas.org/cgi/doi/10.1073/pnas.0308738101. PNAS, March 23, 2004, vol. 101, no. 12, 3999-4002.

In a brief 1950 communication to PNAS (1), John Forbes Nash formulated the notion of equilibrium that bears his name and that has revolutionized economics and parts of other sciences. Nash, a young mathematics graduate student at Princeton, was a part of the Camelot of game theory centered around von Neumann and Morgenstern. They had written Theory of Games and Economic Behavior (2) to expand economic analysis to allow economists to model the "rules of the game" that influence particular environments and to extend the scope of economic theory to include strategic small-group situations in which each person must try to anticipate others' actions. von Neumann and Morgenstern's definition of equilibrium for "noncooperative" games was largely confined to the special case of "two-person zero-sum" games, in which one person's gain is another's loss, so the payoffs always sum to zero (3). Nash proposed a notion of equilibrium that applied to a much wider class of games without restrictions on the payoff structure or number of players (1, 4, 5). von Neumann's reaction was polite but not enthusiastic.† Nevertheless, the Nash equilibrium, as it has become known, helped produce a revolution in the use of game theory in economics, and it was the contribution for which Nash was cited by the Nobel Prize committee at the time of his award, 44 years later.

Equilibrium Points in n-Person Games

The first part of the 1950 PNAS paper introduces the model of a game with n participants, or "players," who must each select a course of action, or "strategy":

"One may define a concept of an n-person game in which each player has a finite set of pure strategies and in which a definite set of payments to the n players corresponds to each n-tuple of pure strategies, one strategy being taken for each player." (ref. 1, p. 48)

The notion of a strategy is quite general, and it includes "mixed" strategies that are probability distributions over decisions, e.g., an inspector who audits on a random basis or a poker player who sometimes bluffs. Another interpretation of a mixed strategy is that of a population of randomly matched individuals in the role of each player of the game, some proportion of whom make each of a number of available choices. The idea of the Nash equilibrium is that a set of strategies, one for each player, would be stable if nobody has a unilateral incentive to deviate from their own strategy:

"Any n-tuple of strategies, one for each player, may be regarded as a point in the product space obtained by multiplying the n strategy spaces of the players. One such n-tuple counters another if the strategy of each player in the countering n-tuple yields the highest obtainable expectation for its player against the n − 1 strategies of the other players in the countered n-tuple. A self-countering n-tuple is called an equilibrium point." (ref. 1, p. 49)

That is, a Nash equilibrium is a set of strategies, one for each of the n players of a game, that has the property that each player's choice is his best response to the choices of the n − 1 other players. It would survive an announcement test: if all players announced their strategies simultaneously, nobody would want to reconsider. The Nash equilibrium has found many uses in economics, partly because it can be usefully interpreted in a number of ways.
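In standard modern notation (a restatement, not wording from the article or from Nash's note), the "self-countering" property quoted above reads as follows. For a game with players $i = 1, \dots, n$, strategy sets $S_i$, and payoff functions $u_i$, a profile $s^* = (s_1^*, \dots, s_n^*)$ is a Nash equilibrium if

$$u_i(s_i^*, s_{-i}^*) \;\ge\; u_i(s_i, s_{-i}^*) \qquad \text{for all } s_i \in S_i \text{ and all } i = 1, \dots, n,$$

where $s_{-i}^*$ denotes the strategies of the other $n - 1$ players. Equivalently, $s_i^* \in \arg\max_{s_i \in S_i} u_i(s_i, s_{-i}^*)$ for every player $i$, which is exactly the best-response and announcement-test formulation above.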
When the goal is to give advice to all of the players in a game (i.e., to advise each player what strategy to choose), any advice that was not an equilibrium would have the unsettling property that there would always be some player for whom the advice was bad, in the sense that, if all other players followed the parts of the advice directed to them, it would be better for some player to do differently than he was advised. If the advice is an equilibrium, however, this will not be the case, because the advice to each player is the best response to the advice given to the other players. This point of view is sometimes also used to derive predictions of what players would do, if they can be approximated as "perfectly rational" players who can all make whatever calculations are necessary and so are in the position of deriving the relevant advice for themselves.

When the goal is prediction rather than prescription, a Nash equilibrium can also be interpreted as a potential stable point of a dynamic adjustment process in which individuals adjust their behavior to that of the other players in the game, searching for strategy choices that will give them better results. This point of view has been productive in biology also: when mixed strategies are interpreted as the proportion of a population choosing each of a set of strategies, game payoffs are interpreted as the change in inclusive fitness that results from the play of the game, and the dynamics are interpreted as population dynamics (6, 7). No presumptions of rationality are made in this case, of course, but only of simple self-interested dynamics. This evolutionary approach has also been attractive to economists (e.g., ref. 8).
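The population-dynamics reading in the preceding paragraph is only described verbally in the article. The sketch below is not from the article: the payoff matrix is a hypothetical Hawk-Dove-style example chosen for illustration. It simulates discrete-time replicator dynamics for a symmetric two-strategy game and shows the population share settling at the game's mixed Nash equilibrium.

```python
# Illustrative sketch (not from the article): discrete-time replicator dynamics
# for a symmetric two-strategy game. Payoffs play the role of fitness; the share
# of a strategy grows when its payoff exceeds the population average.
# The payoff matrix is a hypothetical Hawk-Dove-style example.

A = [[0.0, 3.0],   # row 0: payoff of strategy 0 against strategies 0 and 1
     [1.0, 2.0]]   # row 1: payoff of strategy 1 against strategies 0 and 1

def step(x, dt=0.1):
    """One Euler step of replicator dynamics for the share x of strategy 0."""
    shares = [x, 1.0 - x]
    fitness = [sum(A[i][j] * shares[j] for j in range(2)) for i in range(2)]
    average = shares[0] * fitness[0] + shares[1] * fitness[1]
    return x + dt * x * (fitness[0] - average)

x = 0.9  # start with 90% of the population playing strategy 0
for _ in range(200):
    x = step(x)
print(f"long-run share of strategy 0: {x:.3f}")  # settles near 0.5, the mixed equilibrium
```

Here the equilibrium emerges from payoff-driven adjustment alone, with no rationality assumed, matching the article's point that only simple self-interested dynamics are needed.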
A third interpretation is that a Nash equilibrium is a self-enforcing agreement, that is, an (implicit or explicit) agreement that, once reached by the players, does not need any external means of enforcement, because it is in the self interest of each player to follow the agreement if the others do. Viewed in this way, the Nash equilibrium has helped to clarify a distinction sometimes still made between "cooperative" and "noncooperative" games, with cooperative games being those in which agreements can be enforced (e.g., through the courts), and noncooperative games being those in which no such enforcement mechanism exists, so that only equilibrium agreements are sustainable. One trend in modern game theory, often referred to as the "Nash program," is to erase this distinction by including any relevant enforcement mechanisms in the model of the game, so that all games can be modeled as noncooperative. Nash took initial steps in this direction in his early and influential model of bargaining as a cooperative game (9) and then as a noncooperative game (10).

Equilibrium and Social Dilemmas

The Nash equilibrium is useful not just when it is itself an accurate predictor of how people will behave in a game but also when it is not, because then it identifies situations in which there is a tension between individual incentives and other motivations. A class of problems that have received a good deal of study from this point of view is the family of "social dilemmas," in which there is a socially desirable action that is not a Nash equilibrium. Indeed, one of the first responses to Nash's definition of equilibrium gave rise to one of the best known models in the social sciences, the Prisoners' Dilemma. This model began life as a simple experiment conducted in January 1950 at the Rand Corporation by mathematicians Melvin Dresher and Merrill Flood, to demonstrate that the [...] is not an equilibrium, is going to be unstable in ways that can make cooperation difficult to maintain. This observation has been confirmed in many subsequent experiments on this and more general "social dilemmas" (see, e.g., refs. 21-23). You can put yourself into a social dilemma game by going to the link http://veconlab.econ.virginia.edu/tddemo.htm and playing against decisions retrieved from a database. This Traveler's Dilemma game is somewhat more complex than a prisoner's dilemma, in that the best decision is not independent of your beliefs about what strategy might be selected by the other player (24).

Design of Markets and Social Institutions

One of the ways in which research on dilemmas and other problems of collective [...]
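To make the social-dilemma structure described above concrete, here is a minimal sketch (not from the article; the payoff numbers are a standard hypothetical Prisoner's Dilemma) that applies the announcement test directly: it enumerates the strategy pairs and keeps those from which neither player gains by a unilateral deviation.

```python
# Illustrative sketch (not from the article): enumerate the pure-strategy Nash
# equilibria of a two-player game by checking unilateral deviations.
# The payoffs are a hypothetical Prisoner's Dilemma (C = cooperate, D = defect),
# listed as (row player's payoff, column player's payoff).

payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 4),
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),
}
strategies = ["C", "D"]

def is_nash(s1, s2):
    """True if neither player can gain by deviating unilaterally from (s1, s2)."""
    u1, u2 = payoffs[(s1, s2)]
    no_gain_1 = all(payoffs[(d, s2)][0] <= u1 for d in strategies)  # row player's deviations
    no_gain_2 = all(payoffs[(s1, d)][1] <= u2 for d in strategies)  # column player's deviations
    return no_gain_1 and no_gain_2

equilibria = [(a, b) for a in strategies for b in strategies if is_nash(a, b)]
print(equilibria)  # [('D', 'D')]: mutual defection, although (C, C) pays both players more
```

Only mutual defection survives the check, even though mutual cooperation gives both players a higher payoff, which is exactly the tension between individual incentives and socially desirable outcomes that the authors describe. The same deviation check, applied to the Traveler's Dilemma mentioned above, singles out the lowest allowable claim as the unique equilibrium.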