Game Theory, On-Line Prediction and Boosting

Proceedings of the Ninth Annual Conference on Computational Learning Theory, 1996.

Yoav Freund and Robert E. Schapire
AT&T Laboratories, 600 Mountain Avenue, Murray Hill, NJ 07974-0636
{yoav, schapire}@research.att.com
(Home page: "http://www.research.att.com/orgs/ssr/people/uid", expected to change to "http://www.research.att.com/~uid" sometime in the near future, for uid in {yoav, schapire}.)

Abstract

We study the close connections between game theory, on-line prediction and boosting. After a brief review of game theory, we describe an algorithm for learning to play repeated games based on the on-line prediction methods of Littlestone and Warmuth. The analysis of this algorithm yields a simple proof of von Neumann's famous minmax theorem, as well as a provable method of approximately solving a game. We then show that the on-line prediction model is obtained by applying this game-playing algorithm to an appropriate choice of game and that boosting is obtained by applying the same algorithm to the "dual" of this game.

1 INTRODUCTION

The purpose of this paper is to bring out the close connections between game theory, on-line prediction and boosting. Briefly, game theory is the study of games and other interactions of various sorts. On-line prediction is a learning model in which an agent predicts the classification of a sequence of items and attempts to minimize the total number of prediction errors. Finally, boosting is a method of converting a "weak" learning algorithm, which performs only slightly better than random guessing, into one that performs extremely well. All three of these topics will be explained in more detail below. All have been studied extensively in the past. In this paper, the close relationship between these three seemingly unrelated topics will be brought out.

Here is an outline of the paper. We will begin with a review of game theory. Then we will describe an algorithm for learning to play repeated games based on the on-line prediction methods of Littlestone and Warmuth [15]. The analysis of this algorithm yields a new (as far as we know) and simple proof of von Neumann's famous minmax theorem, as well as a provable method of approximately solving a game. In the last part of the paper we show that the on-line prediction model is obtained by applying the game-playing algorithm to an appropriate choice of game and that boosting is obtained by applying the same algorithm to the "dual" of this game.

2 GAME THEORY

We begin with a review of basic game theory. Further background can be found in any introductory text on game theory; see for instance Fudenberg and Tirole [11].

We study two-person games in normal form. That is, each game is defined by a matrix M. There are two players called the row player and the column player. To play the game, the row player chooses a row i and, simultaneously, the column player chooses a column j. The selected entry M(i, j) is the loss suffered by the row player.

For instance, the loss matrix for the children's game "Rock, Paper, Scissors" is given by:

          R     P     S
    R    1/2    1     0
    P     0    1/2    1
    S     1     0    1/2

The row player's goal is to minimize its loss. Often, the goal of the column player is to maximize this loss, in which case the game is said to be "zero-sum." Most of our results are given in the context of a zero-sum game. However, our results also apply when no assumptions are made about the goal or strategy of the column player. We return to this point below.

For the sake of simplicity, we assume that all the losses are in the range [0, 1]. Simple scaling can be used to get more general results. Also, we restrict ourselves to the case where the number of choices available to each player is finite. However, most of the results translate, with very mild additional assumptions, to cases in which the number of choices is infinite. For a discussion of infinite matrix games see, for instance, Chapter 2 in Ferguson [3].

2.1 RANDOMIZED PLAY

As described above, the players choose a single row or column. Usually, this choice of play is allowed to be randomized. That is, the row player chooses a distribution P over the rows of M, and (simultaneously) the column player chooses a distribution Q over columns. The row player's expected loss is easily computed as

    \sum_{i,j} P(i) M(i, j) Q(j) = P^T M Q.

For ease of notation, we will often denote this quantity by M(P, Q), and refer to it simply as the loss (rather than expected loss). In addition, if the row player chooses a distribution P but the column player chooses a single column j, then the (expected) loss is \sum_i P(i) M(i, j), which we denote by M(P, j). The notation M(i, Q) is defined analogously.

Individual (deterministically chosen) rows and columns are called pure strategies. Randomized plays defined by distributions P and Q over rows and columns are called mixed strategies. The number of rows of the matrix M will be denoted by n.
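To make the notation concrete, here is a short sketch (my illustration, not part of the paper) that evaluates M(P, Q) = P^T M Q for the "Rock, Paper, Scissors" loss matrix above; the particular distributions P and Q are arbitrary choices.

```python
import numpy as np

# Loss matrix for "Rock, Paper, Scissors" (rows: row player's play, columns: opponent's play).
# M[i, j] is the row player's loss: 0 for a win, 1 for a loss, 1/2 for a tie.
M = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.5, 1.0],
              [1.0, 0.0, 0.5]])

def expected_loss(M, P, Q):
    """M(P, Q) = sum_{i,j} P(i) M(i, j) Q(j) = P^T M Q."""
    return P @ M @ Q

P = np.array([1/3, 1/3, 1/3])    # uniform mixed strategy for the row player
Q = np.array([0.5, 0.3, 0.2])    # some mixed strategy for the column player

print(expected_loss(M, P, Q))    # 0.5 -- the uniform strategy loses exactly 1/2 against any Q
print(M[0, 1])                   # 1.0 -- pure play: Rock against Paper costs the row player 1
```

The fact that the uniform strategy guarantees loss exactly 1/2 against every column strategy already hints at the minmax ideas of the next subsections.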
2.2 SEQUENTIAL PLAY

Up until now, we have assumed that the players choose their (pure or mixed) strategies simultaneously. Suppose now that instead play is sequential. That is, suppose that the column player chooses its strategy Q after the row player has chosen and announced its strategy P. Assume further that the column player's goal is to maximize the row player's loss (i.e., that the game is zero-sum). Then, given P, such a "worst-case" or "adversarial" column player will choose Q to maximize M(P, Q); that is, if the row player plays mixed strategy P, then its loss will be

    \max_Q M(P, Q).                                        (1)

(It is understood here and throughout the paper that \max_Q denotes maximum over all probability distributions over columns; similarly, \min_P will always denote minimum over all probability distributions over rows. These extrema exist because the set of distributions over a finite space is compact.)

Knowing this, the row player should choose P to minimize Eq. (1), so the row player's loss will be

    \min_P \max_Q M(P, Q).

A mixed strategy P* realizing this minimum is called a minmax strategy.

Suppose now that the column player plays first and the row player can choose its play with the benefit of knowing the column player's chosen strategy Q. Then, by a symmetric argument, the loss of the row player will be

    \max_Q \min_P M(P, Q),

and a Q* realizing the maximum is called a maxmin strategy.

2.3 THE MINMAX THEOREM

Intuitively, we expect the player who chooses its strategy last to have the advantage, since it plays knowing its opponent's strategy; that is, we expect

    \max_Q \min_P M(P, Q) \le \min_P \max_Q M(P, Q).       (2)

We might go on naively to conjecture that the advantage of playing last is strict for some games so that, at least in some cases, the inequality in Eq. (2) is strict. Surprisingly, it turns out not to matter which player plays first. Von Neumann's well-known minmax theorem states that the outcome is the same in either case, so that

    \max_Q \min_P M(P, Q) = \min_P \max_Q M(P, Q)          (3)

for every matrix M. The common value v of the two sides of the equality is called the value of the game M. A proof of the minmax theorem will be given in Section 2.5.

In words, Eq. (3) means that the row player has a (minmax) strategy P* such that, regardless of the strategy Q played by the column player, the loss suffered M(P*, Q) will be at most v. Symmetrically, it means that the column player has a (maxmin) strategy Q* such that, regardless of the strategy P played by the row player, the loss M(P, Q*) will be at least v. This means that the strategies P* and Q* are optimal in a strong sense.

Thus, classical game theory says that, given a (zero-sum) game M, one should play using a minmax strategy. Such a strategy can be computed using linear programming. However, there are a number of problems with this approach. For instance,

  • M may be unknown;
  • M may be so large that computing a minmax strategy using linear programming is infeasible;
  • the column player may not be truly adversarial and may behave in a manner that admits loss significantly smaller than the game value v.

Overcoming these difficulties in the one-shot game is hopeless. But suppose instead that we are playing the game repeatedly. Then it is natural to ask if one can learn to play well against the particular opponent that is being faced.
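For a concrete sense of the linear-programming route mentioned above, here is a minimal sketch (my own, under the assumption that M is known and small): minimize v subject to (P^T M)_j <= v for every column j, with P constrained to be a probability distribution. The helper name minmax_strategy and the use of scipy.optimize.linprog are illustrative choices, not anything prescribed by the paper.

```python
import numpy as np
from scipy.optimize import linprog

def minmax_strategy(M):
    """Return a minmax strategy P* and the game value v for loss matrix M.

    Linear program over x = (P(1), ..., P(n), v):
        minimize v
        subject to (P^T M)_j <= v for every column j,
                   sum_i P(i) = 1,  P(i) >= 0.
    """
    n, m = M.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                                             # objective: minimize v
    A_ub = np.hstack([M.T, -np.ones((m, 1))])               # (P^T M)_j - v <= 0 for each j
    b_ub = np.zeros(m)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])   # sum_i P(i) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]               # P >= 0, v unrestricted
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

M = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.5, 1.0],
              [1.0, 0.0, 0.5]])
P_star, v = minmax_strategy(M)
print(P_star, v)   # roughly [1/3, 1/3, 1/3] and v = 1/2 for "Rock, Paper, Scissors"
```

The dual of this linear program yields the column player's maxmin strategy Q*, one standard way of seeing how tightly linear programming and the minmax theorem are tied together. The bullet list above explains why the paper does not stop here: the program is useless when M is unknown or too large, and it ignores opponents that are not worst-case.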
2.4 REPEATED PLAY

Such a model of repeated play can be formalized as described below. To emphasize the roles of the two players, we refer to the row player as the learner and the column player as the environment.

Let M be a matrix, possibly unknown to the learner. The game is played repeatedly in a sequence of rounds. On round t = 1, ..., T:

1. the learner chooses mixed strategy P_t;
2. the environment chooses mixed strategy Q_t (which may be chosen with knowledge of P_t);
3. the learner is permitted to observe the loss M(i, Q_t) for each row i; this is the loss it would have suffered had it played using pure strategy i;
4. the learner suffers loss M(P_t, Q_t).
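To make the protocol concrete, here is a minimal sketch of the repeated-play loop paired with a multiplicative-weights learner in the spirit of the Littlestone–Warmuth method the paper builds on. The specific update rule, the parameter beta, and the purely adversarial environment are my illustrative assumptions, not the paper's exact algorithm or analysis.

```python
import numpy as np

def play_repeated_game(M, T=1000, beta=0.9):
    """Repeated play against loss matrix M by a multiplicative-weights learner.

    Each round follows the four-step protocol above: the learner plays P_t
    proportional to its weights, the environment responds (here: the single
    column maximizing the learner's expected loss), the learner observes
    M(i, Q_t) for every row i, and suffers loss M(P_t, Q_t).
    """
    n, m = M.shape
    weights = np.ones(n)
    total_loss = 0.0
    avg_P = np.zeros(n)
    for t in range(T):
        P = weights / weights.sum()        # step 1: learner's mixed strategy P_t
        j = int(np.argmax(P @ M))          # step 2: adversarial (pure) choice Q_t
        row_losses = M[:, j]               # step 3: observe M(i, Q_t) for every row i
        total_loss += P @ row_losses       # step 4: suffer loss M(P_t, Q_t)
        weights *= beta ** row_losses      # multiplicative-weights update
        avg_P += P
    return total_loss / T, avg_P / T

M = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.5, 1.0],
              [1.0, 0.0, 0.5]])
avg_loss, P_bar = play_repeated_game(M)
print(avg_loss)   # stays close to the game value 1/2
print(P_bar)      # the time-averaged strategy is close to the minmax strategy [1/3, 1/3, 1/3]
```

Averaging the learner's strategies over the T rounds gives an approximate minmax strategy, which is the sense in which the paper's analysis yields a provable method of approximately solving the game.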