Strategy-Proofness and Responsiveness Imply Minimal Participation
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Labsi Working Papers
UNIVERSITY OF SIENA. S.N. O'Higgins, Arturo Palomba, Patrizia Sbriglia. Second Mover Advantage and Bertrand Dynamic Competition: An Experiment. May 2010. LABSI WORKING PAPERS N. 28/2010.

SECOND MOVER ADVANTAGE AND BERTRAND DYNAMIC COMPETITION: AN EXPERIMENT §
S.N. O'Higgins, University of Salerno, [email protected]
Arturo Palomba, University of Naples II, [email protected]
Patrizia Sbriglia §§, University of Naples II, [email protected]

Abstract: In this paper we provide an experimental test of a dynamic Bertrand duopolistic model, where firms move sequentially and their informational setting varies across different designs. Our experiment is composed of three treatments. In the first treatment, subjects receive information only on the costs and demand parameters and on the price choices of their opponent in the market in which they are positioned (matching is fixed); in the second and third treatments, subjects are also informed about the behaviour of players who are not directly operating in their market. Our aim is to study whether individual behaviour and the process of equilibrium convergence are affected by the specific informational setting adopted. In all treatments we selected students who had previously studied market games and industrial organization, conjecturing that the participants' specific expertise would decrease the chances of imitation in treatments II and III. However, our results prove the opposite: the extra information provided in treatments II and III strongly affects long-run convergence to the market equilibrium. In fact, whilst in the first session a high proportion of markets converge to the Nash-Bertrand symmetric solution, we observe that a high proportion of markets converge to more collusive outcomes in treatment II and to more competitive outcomes in treatment III.
1 Sequential Games
1 Sequential Games

We call games where players take turns moving "sequential games". Sequential games consist of the same elements as normal form games: there are players, rules, outcomes, and payoffs. However, sequential games have the added element that the history of play is now important, as players can make decisions conditional on what other players have done. Thus, if two people are playing a game of chess, the second mover is able to observe the first mover's initial move prior to making his own initial move. While it is possible to represent sequential games using the strategic (or matrix) form representation of the game, it is more instructive at first to represent sequential games using a game tree. In addition to the players, actions, outcomes, and payoffs, the game tree will provide a history of play, or a path of play. A very basic example of a sequential game is the Entrant-Incumbent game. The game is described as follows: Consider a game where there is an entrant and an incumbent. The entrant moves first and the incumbent observes the entrant's decision. The entrant can choose to either enter the market or remain out of the market. If the entrant remains out of the market then the game ends and the entrant receives a payoff of 0 while the incumbent receives a payoff of 2. If the entrant chooses to enter the market then the incumbent gets to make a choice. The incumbent chooses between fighting entry or accommodating entry. If the incumbent fights, the entrant receives a payoff of 3 while the incumbent receives a payoff of 1.
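Since the notes emphasize the game tree and the path of play, here is a minimal sketch in Python (not from the notes) showing one way to represent the Entrant-Incumbent tree and trace the path induced by a strategy profile. The payoff numbers are assumptions: the printed excerpt appears to have dropped minus signs, and the accommodate branch is cut off above, so the values used here are only illustrative.

```python
# A minimal sketch (not from the notes) of the Entrant-Incumbent game tree and the
# "path of play" induced by a strategy profile. Payoff numbers are assumptions: the
# excerpt's printed values appear to have lost minus signs in extraction, and the
# accommodate branch is cut off above, so (-3, -1) and (2, 1) are only illustrative.

# A decision node is (player, {action: child}); a leaf is (entrant payoff, incumbent payoff).
GAME_TREE = (
    "Entrant",
    {
        "Stay Out": (0, 2),
        "Enter": (
            "Incumbent",
            {
                "Fight": (-3, -1),      # printed text reads "3" and "1"; signs assumed
                "Accommodate": (2, 1),  # assumed; this branch is not shown in the excerpt
            },
        ),
    },
)

def path_of_play(node, strategy):
    """Follow the action prescribed by `strategy` (player -> action) at each decision
    node from the root down to a leaf; return the history of play and the payoffs."""
    history = []
    while isinstance(node[1], dict):     # decision nodes carry a dict of branches
        player, branches = node
        action = strategy[player]
        history.append((player, action))
        node = branches[action]
    return history, node                 # `node` is now a payoff leaf

history, payoffs = path_of_play(GAME_TREE, {"Entrant": "Enter", "Incumbent": "Accommodate"})
print(history, payoffs)  # [('Entrant', 'Enter'), ('Incumbent', 'Accommodate')] (2, 1)
```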
Notes on Sequential and Repeated Games
Notes on sequential and repeated games

1 Sequential Move Games

Thus far we have examined games in which players make moves simultaneously (or without observing what the other player has done). Using the normal (strategic) form representation of a game we can identify sets of strategies that are best responses to each other (Nash equilibria). We now focus on sequential games of complete information. We can still use the normal form representation to identify NE, but sequential games are richer than that because some players observe other players' decisions before they take action. The fact that some actions are observable may cause some NE of the normal form representation to be inconsistent with what one might think a player would do. Here is a simple game between an Entrant and an Incumbent. The Entrant moves first and the Incumbent observes the Entrant's action and then gets to make a choice. The Entrant has to decide whether or not he will enter a market. Thus, the Entrant's two strategies are "Enter" or "Stay Out". If the Entrant chooses "Stay Out" then the game ends. The payoffs for the Entrant and Incumbent will be 0 and 2 respectively. If the Entrant chooses "Enter" then the Incumbent gets to choose whether he will "Fight" or "Accommodate" entry. If the Incumbent chooses "Fight" then the Entrant receives 3 and the Incumbent receives 1. If the Incumbent chooses "Accommodate" then the Entrant receives 2 and the Incumbent receives 1. This game in normal form has columns for the Incumbent's strategies "Fight if Enter" and "Accommodate if Enter".
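To make the remark about normal-form NE concrete, here is a small sketch (Python, not from the notes) that enumerates the pure-strategy Nash equilibria of the entry game in strategic form. The payoff matrix is an assumption: it uses standard entry-game numbers with a negative fight payoff for the Entrant, since the printed values above appear to have lost their minus signs. The enumeration finds both (Enter, Accommodate if Enter) and (Stay Out, Fight if Enter); the latter rests on an incredible threat and is exactly the kind of NE the notes describe as inconsistent with what one might think a player would do.

```python
# Sketch (not from the notes): pure-strategy NE of the entry game in strategic form.
# Payoffs are assumptions chosen to keep the entry game non-degenerate; the excerpt's
# printed values likely lost minus signs in extraction.

# payoffs[(entrant_strategy, incumbent_strategy)] = (entrant payoff, incumbent payoff)
payoffs = {
    ("Enter",    "Fight if Enter"):       (-3, -1),
    ("Enter",    "Accommodate if Enter"): ( 2,  1),
    ("Stay Out", "Fight if Enter"):       ( 0,  2),
    ("Stay Out", "Accommodate if Enter"): ( 0,  2),
}
entrant_strats   = ["Enter", "Stay Out"]
incumbent_strats = ["Fight if Enter", "Accommodate if Enter"]

def is_nash(e, i):
    """True if neither player can gain by a unilateral deviation from (e, i)."""
    u_e, u_i = payoffs[(e, i)]
    best_e = all(u_e >= payoffs[(e2, i)][0] for e2 in entrant_strats)
    best_i = all(u_i >= payoffs[(e, i2)][1] for i2 in incumbent_strats)
    return best_e and best_i

print([(e, i) for e in entrant_strats for i in incumbent_strats if is_nash(e, i)])
# [('Enter', 'Accommodate if Enter'), ('Stay Out', 'Fight if Enter')]
```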
SEQUENTIAL GAMES WITH PERFECT INFORMATION: Example
SEQUENTIAL GAMES WITH PERFECT INFORMATION. Example 4.9 (page 105). Consider the sequential game given in Figure 4.9. We want to apply backward induction to the tree.

Vertex B is owned by player two, P2. The payoffs for P2 are 1 and 3, with 3 > 1, so the player picks R'. Thus, the payoffs at B become (0, 3). Next, vertex C is also owned by P2, with payoffs 1 and 0. Since 1 > 0, P2 picks L'', and the payoffs are (4, 1). Player one, P1, owns A; the choice of L gives a payoff of 0 and R gives a payoff of 4; 4 > 0, so P1 chooses R. The final payoffs are (4, 1).

We claim that this strategy profile, {R} for P1 and {R', L''} for P2, is a Nash equilibrium. Notice that the strategy profile gives a choice at each vertex. For the strategy {R', L''} fixed for P2, P1 has a maximal payoff by choosing {R}:

π1(R, {R', L''}) = 4 ≥ π1(L, {R', L''}) = 0.

In the same way, for the strategy {R} fixed for P1, P2 has a maximal payoff by choosing {R', L''}:

π2(R, {∗, L''}) = 1 ≥ π2(R, {∗, R''}) = 0,

where ∗ means choose either L' or R'. Since no change of choice by a player can increase that player's own payoff, the strategy profile is called a Nash equilibrium. Notice that the above strategy profile is also a Nash equilibrium on each branch of the game tree, namely starting at either B or starting at C.
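The backward-induction pass just described can be written out as a short sketch (Python, not from the book). The tree reproduces Figure 4.9 as far as the excerpt reveals it; P1's payoffs at the leaves reached by L' and R'' are not stated, so placeholder values are used. They affect neither P2's choices, which depend only on P2's own payoffs, nor the final outcome (4, 1).

```python
# Sketch of backward induction on the Figure 4.9 tree (two leaf payoffs are assumed).
# A decision node is ("name", owner_index, {action: subtree}); a leaf is a payoff pair (u1, u2).

P1_AT_B_LPRIME = 9   # not given in the excerpt; irrelevant to P2's choice at B
P1_AT_C_RPRIME = 9   # not given in the excerpt; irrelevant to P2's choice at C

tree = ("A", 0, {
    "L": ("B", 1, {"L'": (P1_AT_B_LPRIME, 1), "R'": (0, 3)}),
    "R": ("C", 1, {"L''": (4, 1), "R''": (P1_AT_C_RPRIME, 0)}),
})

def backward_induction(node, profile):
    """Return the payoff pair at `node`, recording each owner's optimal action in `profile`."""
    if len(node) == 2:               # leaf: (u1, u2)
        return node
    name, owner, branches = node     # decision node
    values = {a: backward_induction(sub, profile) for a, sub in branches.items()}
    best = max(values, key=lambda a: values[a][owner])   # the owner maximizes own payoff
    profile[name] = best
    return values[best]

profile = {}
print(backward_induction(tree, profile), profile)
# (4, 1) {'B': "R'", 'C': "L''", 'A': 'R'}
```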
Finitely Repeated Games
Repeated games 1: Finite repetition. Universidad Carlos III de Madrid.

Finitely repeated games
• A finitely repeated game is a dynamic game in which a simultaneous game (the stage game) is played finitely many times, and the result of each stage is observed before the next one is played.
• Example: play the prisoners' dilemma several times. The stage game is the simultaneous prisoners' dilemma game.

Results
• If the stage game (the simultaneous game) has only one NE, the repeated game has only one SPNE: in the SPNE, players play the strategies of that NE in each stage.
• If the stage game has 2 or more NE, one can find a SPNE where, at some stage, players play a strategy that is not part of a NE of the stage game.

The prisoners' dilemma repeated twice
• Two players play the same simultaneous game twice, at t = 1 and at t = 2.
• After the first time the game is played (after t = 1) the result is observed before playing the second time.
• The payoff in the repeated game is the sum of the payoffs in each stage (t = 1, t = 2).
• Which is the SPNE?

Stage game (Player 1 chooses the row, Player 2 the column):
        D       C
  D   1, 1    5, 0
  C   0, 5    4, 4

The prisoners' dilemma repeated twice: information sets? strategies? subgames?
[Figure from the slides: the game tree of the twice-repeated prisoners' dilemma. Each player has 5 information sets (labelled 1.1–1.5 and 2.1–2.5) and 2^5 strategies, e.g. (C, D, D, C, C); each terminal payoff is the sum of the two stage payoffs.]
Let's find the NE in the subgames.
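As a minimal sketch of the result stated above (Python, not from the slides): the stage game has a unique pure NE, (D, D), so the unique SPNE of the twice-repeated game plays (D, D) at both stages, for a total payoff of (2, 2).

```python
# Sketch: find the pure NE of the stage game, then the SPNE payoff of playing it twice.
A = ["D", "C"]  # actions
u = {  # stage-game payoffs (row player, column player), from the table above
    ("D", "D"): (1, 1), ("D", "C"): (5, 0),
    ("C", "D"): (0, 5), ("C", "C"): (4, 4),
}

def pure_nash(u, A):
    """All pure-strategy profiles where each player best-responds to the other."""
    return [(r, c) for r in A for c in A
            if u[(r, c)][0] >= max(u[(r2, c)][0] for r2 in A)
            and u[(r, c)][1] >= max(u[(r, c2)][1] for c2 in A)]

stage_ne = pure_nash(u, A)                 # [('D', 'D')]
r, c = stage_ne[0]
total = (2 * u[(r, c)][0], 2 * u[(r, c)][1])  # the unique SPNE repeats (D, D) twice
print(stage_ne, total)                     # [('D', 'D')] (2, 2)
```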
Implementation Theory*
Chapter 5: IMPLEMENTATION THEORY*
ERIC MASKIN, Institute for Advanced Study, Princeton, NJ, USA
TOMAS SJÖSTRÖM, Department of Economics, Pennsylvania State University, University Park, PA, USA

Contents:
Abstract 238; Keywords 238
1. Introduction 239
2. Definitions 245
3. Nash implementation 247: 3.1. Definitions 248; 3.2. Monotonicity and no veto power 248; 3.3. Necessary and sufficient conditions 250; 3.4. Weak implementation 254; 3.5. Strategy-proofness and rich domains of preferences 254; 3.6. Unrestricted domain of strict preferences 256; 3.7. Economic environments 257; 3.8. Two agent implementation 259
4. Implementation with complete information: further topics 260: 4.1. Refinements of Nash equilibrium 260; 4.2. Virtual implementation 264; 4.3. Mixed strategies 265; 4.4. Extensive form mechanisms 267; 4.5. Renegotiation 269; 4.6. The planner as a player 275
5. Bayesian implementation 276: 5.1. Definitions 276; 5.2. Closure 277; 5.3. Incentive compatibility 278; 5.4. Bayesian monotonicity 279; 5.5. Non-parametric, robust and fault tolerant implementation 281
6. Concluding remarks 281
References 282

* We are grateful to Sandeep Baliga, Luis Corchón, Matt Jackson, Byungchae Rhee, Ariel Rubinstein, Ilya Segal, Hannu Vartiainen, Masahiro Watabe, and two referees, for helpful comments.

Handbook of Social Choice and Welfare, Volume 1, edited by K.J. Arrow, A.K. Sen and K. Suzumura. © 2002 Elsevier Science B.V. All rights reserved.

Abstract: The implementation problem is the problem of designing a mechanism (game form) such that the equilibrium outcomes satisfy a criterion of social optimality embodied in a social choice rule.
Collusion Constrained Equilibrium
Theoretical Economics 13 (2018), 307–340. 1555-7561/20180307

Collusion constrained equilibrium
Rohan Dutta, Department of Economics, McGill University
David K. Levine, Department of Economics, European University Institute and Department of Economics, Washington University in Saint Louis
Salvatore Modica, Department of Economics, Università di Palermo

We study collusion within groups in noncooperative games. The primitives are the preferences of the players, their assignment to nonoverlapping groups, and the goals of the groups. Our notion of collusion is that a group coordinates the play of its members among different incentive compatible plans to best achieve its goals. Unfortunately, equilibria that meet this requirement need not exist. We instead introduce the weaker notion of collusion constrained equilibrium. This allows groups to put positive probability on alternatives that are suboptimal for the group in certain razor's edge cases where the set of incentive compatible plans changes discontinuously. These collusion constrained equilibria exist and are a subset of the correlated equilibria of the underlying game. We examine four perturbations of the underlying game. In each case, we show that equilibria in which groups choose the best alternative exist and that limits of these equilibria lead to collusion constrained equilibria. We also show that for a sufficiently broad class of perturbations, every collusion constrained equilibrium arises as such a limit. We give an application to a voter participation game that shows how collusion constraints may be socially costly.

Keywords: collusion, organization, group. JEL classification: C72, D70.

1. Introduction
As the literature on collective action (for example, Olson 1965) emphasizes, groups often behave collusively while the preferences of individual group members limit the possi-
Chapter 16 Oligopoly and Game Theory
Chapter 16 Oligopoly and Game Theory

"Game theory is the study of how people behave in strategic situations. By 'strategic' we mean a situation in which each person, when deciding what actions to take, must consider how others might respond to that action."

Oligopoly
• "Oligopoly is a market structure in which only a few sellers offer similar or identical products."
• As we saw last time, oligopoly differs from the two 'ideal' cases, perfect competition and monopoly.
• In the 'ideal' cases, the firm just has to figure out the environment (prices for the perfectly competitive firm, the demand curve for the monopolist) and select output to maximize profits.
• An oligopolist, on the other hand, also has to figure out the environment before computing the best output.

Oligopoly
• "Figuring out the environment" when there are rival firms in your market means guessing (or inferring) what the rivals are doing and then choosing a "best response".
• This means that firms in oligopoly markets are playing a 'game' against each other.
• To understand how they might act, we need to understand how players play games.
• This is the role of Game Theory.

Some Concepts We Will Use
• Strategies
• Payoffs
• Sequential Games
• Simultaneous Games
• Best Responses
• Equilibrium
• Dominated strategies
• Dominant Strategies

Strategies
• Strategies are the choices that a player is allowed to make. Examples:
– In game trees (sequential games), the players choose paths or branches from roots or nodes.
– In matrix games, players choose rows or columns.
– In market games, players choose prices, or quantities, or R and D levels.
– In Blackjack, players choose whether to stay or draw.
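To make "best response" and "dominant strategy" concrete, here is a small sketch in Python; the 2x2 pricing payoffs are made up for illustration and are not from the chapter. The code computes each firm's best response to every rival action and checks whether a dominant strategy exists.

```python
# Sketch: best responses and dominant strategies in a made-up 2x2 pricing game.
actions = ["High", "Low"]
# profits[(row_action, col_action)] = (row firm's profit, column firm's profit); illustrative numbers
profits = {
    ("High", "High"): (10, 10), ("High", "Low"): (2, 12),
    ("Low",  "High"): (12, 2),  ("Low",  "Low"): (5, 5),
}

def best_response(player, rival_action):
    """Actions maximizing `player`'s profit (0 = row firm, 1 = column firm) given the rival's action."""
    def payoff(a):
        profile = (a, rival_action) if player == 0 else (rival_action, a)
        return profits[profile][player]
    best = max(payoff(a) for a in actions)
    return [a for a in actions if payoff(a) == best]

def dominant_strategy(player):
    """An action that is a best response to every rival action, if one exists."""
    for a in actions:
        if all(a in best_response(player, r) for r in actions):
            return a
    return None

print({r: best_response(0, r) for r in actions})   # {'High': ['Low'], 'Low': ['Low']}
print(dominant_strategy(0), dominant_strategy(1))  # Low Low  (a prisoners'-dilemma-like outcome)
```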
Implementation and Strong Nash Equilibrium
Working paper, Department of Economics, Massachusetts Institute of Technology, 50 Memorial Drive, Cambridge, Mass. 02139. Digitized by the Internet Archive in 2011 with funding from Boston Library Consortium Member Libraries: http://www.archive.org/details/implementationstOOmask

IMPLEMENTATION AND STRONG NASH EQUILIBRIUM. Eric Maskin. Number 216, January 1978. I am grateful for the financial support of the National Science Foundation.

A social choice correspondence (SCC) is a mapping which associates each possible profile of individuals' preferences with a set of feasible alternatives (the set of f-optima). To implement an SCC, f, is to construct a game form g such that, for all preference profiles, the equilibrium set of g (with respect to some solution concept) coincides with the f-optimal set. In a recent study [1], I examined the general question of implementing social choice correspondences when Nash equilibrium is the solution concept. Nash equilibrium, of course, is a strictly noncooperative notion, and so it is natural to consider the extent to which the results carry over when coalitions can form. The cooperative counterpart of Nash is the strong equilibrium due to Aumann. Whereas Nash defines equilibrium in terms of deviations only by single individuals, Aumann's equilibrium incorporates deviations by every conceivable coalition. This paper considers implementation for strong equilibrium. The results of my previous paper were positive. If an SCC satisfies a monotonicity property and a much weaker requirement called no veto power, it can be implemented by Nash equilibrium.
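The definition of implementation can be illustrated with a tiny sketch (Python; the toy SCC and mechanism are my own illustrative choices, not from the paper): for every preference profile, compute the pure Nash equilibrium outcomes of the game form and check that they coincide with the f-optimal set. The toy SCC is a dictatorship by agent 1, implemented by the direct mechanism whose outcome is whatever agent 1 announces.

```python
# Sketch: checking that a game form g Nash-implements an SCC f on a toy domain.
from itertools import product

ALTS = ["a", "b"]                          # feasible alternatives
PROFILES = list(product(ALTS, repeat=2))   # profile = (agent 1's top choice, agent 2's top choice)

def f(profile):
    """Toy SCC: dictatorship by agent 1 (the f-optimal set is agent 1's top choice)."""
    return {profile[0]}

def g(messages):
    """Toy game form: each agent announces an alternative; the outcome is agent 1's announcement."""
    return messages[0]

def utility(agent, outcome, profile):
    return 1 if outcome == profile[agent] else 0   # strict preference for one's top choice

def nash_outcomes(profile):
    """Outcomes of all pure-strategy Nash equilibria of the game induced by g at this profile."""
    eq = set()
    for m in product(ALTS, repeat=2):
        ok = all(
            utility(i, g(m), profile)
            >= utility(i, g(tuple(d if j == i else m[j] for j in range(2))), profile)
            for i in range(2) for d in ALTS
        )
        if ok:
            eq.add(g(m))
    return eq

print(all(nash_outcomes(p) == f(p) for p in PROFILES))  # True: g implements f in Nash equilibrium
```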
Cooperation Spillovers in Coordination Games*
Cooperation Spillovers in Coordination Games*
Timothy N. Cason (a), Anya Savikhin (a), and Roman M. Sheremeta (b)
(a) Department of Economics, Krannert School of Management, Purdue University, 403 W. State St., West Lafayette, IN 47906-2056, U.S.A.
(b) Argyros School of Business and Economics, Chapman University, One University Drive, Orange, CA 92866, U.S.A.
November 2009

Abstract: Motivated by problems of coordination failure observed in weak-link games, we experimentally investigate behavioral spillovers for order-statistic coordination games. Subjects play the minimum- and median-effort coordination games simultaneously and sequentially. The results show the precedent for cooperative behavior spills over from the median game to the minimum game when the games are played sequentially. Moreover, spillover occurs even when group composition changes, although the effect is not as strong. We also find that the precedent for uncooperative behavior does not spill over from the minimum game to the median game. These findings suggest guidelines for increasing cooperative behavior within organizations.

JEL Classifications: C72, C91. Keywords: coordination, order-statistic games, experiments, cooperation, minimum game, behavioral spillover. Corresponding author: Timothy Cason, [email protected]

* We thank Yan Chen, David Cooper, John Duffy, Vai-Lam Mui, seminar participants at Purdue University, and participants at Economic Science Association conferences for helpful comments. Any remaining errors are ours.

1. Introduction
Coordination failure is often the reason for the inefficient performance of many groups, ranging from small firms to entire economies. When agents' actions have strategic interdependence, even when they succeed in coordinating they may be "trapped" in an equilibrium that is objectively inferior to other equilibria. Coordination failure and inefficient coordination has been an important theme across a variety of fields in economics, ranging from development and macroeconomics to mechanism design for overcoming moral hazard in teams.
Strong Nash Equilibria and Mixed Strategies
Strong Nash equilibria and mixed strategies
Eleonora Braggion (a), Nicola Gatti (b), Roberto Lucchetti (a), Tuomas Sandholm (c)
(a) Dipartimento di Matematica, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milano, Italy
(b) Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milano, Italy
(c) Computer Science Department, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA

Abstract: In this paper we consider strong Nash equilibria, in mixed strategies, for finite games. Any strong Nash equilibrium outcome is Pareto efficient for each coalition. First, we analyze the two-player setting. Our main result, in its simplest form, states that if a game has a strong Nash equilibrium with full support (that is, both players randomize among all pure strategies), then the game is strictly competitive. This means that all the outcomes of the game are Pareto efficient and lie on a straight line with negative slope. In order to get our result we use the indifference principle fulfilled by any Nash equilibrium, and the classical KKT conditions (in the vector setting), which are necessary conditions for Pareto efficiency. Our characterization enables us to design a strong-Nash-equilibrium-finding algorithm with complexity in Smoothed-P. So this problem, which Conitzer and Sandholm [Conitzer, V., Sandholm, T., 2008. New complexity results about Nash equilibria. Games Econ. Behav. 63, 621–641] proved to be computationally hard in the worst case, is generically easy. Hence, although the worst-case complexity of finding a strong Nash equilibrium is harder than that of finding a Nash equilibrium, once small perturbations are applied, finding a strong Nash equilibrium is easier than finding a Nash equilibrium.
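The geometric condition in the abstract can be checked mechanically. The sketch below (Python, my own illustration rather than the paper's algorithm) tests whether all payoff pairs of a two-player game lie on a single straight line with negative slope, which is the form of strict competitiveness the result describes.

```python
# Sketch: do all outcome payoff pairs (u1, u2) lie on one straight line with negative slope?
def on_negative_slope_line(outcomes, tol=1e-9):
    """outcomes: list of (u1, u2) payoff pairs, one per pure-strategy profile."""
    pts = sorted(set(outcomes))
    if len(pts) < 2:
        return True                  # degenerate: a single outcome trivially fits
    (x0, y0), (x1, y1) = pts[0], pts[-1]
    if abs(x1 - x0) < tol:
        return False                 # vertical line: slope undefined, not "negative slope"
    slope = (y1 - y0) / (x1 - x0)
    collinear = all(abs((y - y0) - slope * (x - x0)) < tol for x, y in pts)
    return collinear and slope < 0

# A constant-sum game (u1 + u2 = 4 at every outcome) passes; a coordination game does not.
print(on_negative_slope_line([(0, 4), (1, 3), (3, 1), (4, 0)]))  # True
print(on_negative_slope_line([(2, 2), (0, 0), (1, 1)]))          # False (slope is positive)
```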
570: Minimax Sample Complexity for Turn-Based Stochastic Game
Minimax Sample Complexity for Turn-based Stochastic Game
Qiwen Cui (1), Lin F. Yang (2)
(1) School of Mathematical Sciences, Peking University
(2) Electrical and Computer Engineering Department, University of California, Los Angeles

Abstract: The empirical success of multi-agent reinforcement learning is encouraging, while few theoretical guarantees have been revealed. In this work, we prove that the plug-in solver approach, probably the most natural reinforcement learning algorithm, achieves minimax sample complexity for turn-based stochastic games (TBSGs). Specifically, we perform planning in an empirical TBSG by utilizing a 'simulator' that allows sampling from arbitrary state-action pairs. We show that the empirical Nash equilibrium strategy is an approximate Nash equilibrium strategy in the true TBSG and give both problem-dependent and problem-independent bounds.

From the paper's introduction: …guarantees are rather rare due to the complex interaction between agents, which makes the problem considerably harder than single-agent reinforcement learning. This is also known as non-stationarity in MARL, which means that when multiple agents alter their strategies based on samples collected from previous strategies, the system becomes non-stationary for each agent and the improvement cannot be guaranteed. One fundamental question in MBRL is how to design efficient algorithms to overcome non-stationarity. A two-player turn-based stochastic game (TBSG) is a two-agent generalization of the Markov decision process (MDP), where two agents choose actions in turn and one agent wants to maximize the total reward while the other wants to minimize it. As a zero-sum game, a TBSG is known to have a Nash equilibrium strategy [Shapley, 1953], which means…
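The plug-in (model-based) idea can be sketched in a few lines of Python. This is a generic illustration of the approach under assumed toy dynamics, not the authors' algorithm or bounds: estimate the transition model from simulator samples, then solve the empirical TBSG by value iteration, with the maximizing and minimizing player each controlling the states they own; rewards are treated as known and deterministic to keep the sketch short.

```python
# Sketch of a plug-in (model-based) solver for a small turn-based stochastic game:
# 1) estimate the transition model from simulator samples, 2) run value iteration on it.
import random
from collections import Counter

random.seed(0)
S, A, GAMMA = 3, 2, 0.9
OWNER = [0, 1, 0]            # player moving at each state: 0 maximizes, 1 minimizes
# True (unknown) model, used only inside the simulator. All numbers are illustrative.
P = {(s, a): [1.0 / S] * S for s in range(S) for a in range(A)}
P[(0, 0)] = [0.7, 0.2, 0.1]
P[(1, 1)] = [0.1, 0.1, 0.8]
R = {(s, a): ((s + 1) * (1 if a == 0 else -1)) / 3.0 for s in range(S) for a in range(A)}

def simulator(s, a):
    """Generative model: draw one next state and return (reward, next_state)."""
    return R[(s, a)], random.choices(range(S), weights=P[(s, a)])[0]

# Step 1: empirical transition probabilities from N samples per state-action pair.
N = 1000
P_hat = {}
for s in range(S):
    for a in range(A):
        cnt = Counter(simulator(s, a)[1] for _ in range(N))
        P_hat[(s, a)] = [cnt[s2] / N for s2 in range(S)]

# Step 2: value iteration on the empirical TBSG; each state's owner maximizes or minimizes.
# Rewards are deterministic here, so they are taken as known rather than estimated.
V = [0.0] * S
for _ in range(300):
    def q(s, a):
        return R[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in enumerate(P_hat[(s, a)]))
    V = [(max if OWNER[s] == 0 else min)(q(s, a) for a in range(A)) for s in range(S)]

pi_hat = {s: (max if OWNER[s] == 0 else min)(range(A), key=lambda a: q(s, a)) for s in range(S)}
print([round(v, 2) for v in V], pi_hat)   # empirical equilibrium values and strategy
```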