Game Theory Through Examples



Game Theory Through Examples
Erich Prisner
Franklin University Switzerland
CLASSROOM RESOURCE MATERIALS
Published and Distributed by The Mathematical Association of America

© 2014 by the Mathematical Association of America, Inc.
Electronic edition ISBN 978-1-61444-115-1

Council on Publications and Communications: Frank Farris, Chair
Committee on Books: Fernando Gouvêa, Chair
Classroom Resource Materials Editorial Board: Susan G Staples, Editor; Michael Bardzell; Jennifer Bergner; Caren L Diefenderfer; Christopher Hallstrom; Cynthia J Huffman; Paul R Klingsberg; Brian Lins; Mary Morley; Philip P Mummert; Barbara E Reynolds; Darryl Yong

CLASSROOM RESOURCE MATERIALS

Classroom Resource Materials is intended to provide supplementary classroom material for students—laboratory exercises, projects, historical information, textbooks with unusual approaches for presenting mathematical ideas, career information, etc.

101 Careers in Mathematics, 3rd edition, edited by Andrew Sterrett
Archimedes: What Did He Do Besides Cry Eureka?, Sherman Stein
Calculus: An Active Approach with Projects, Stephen Hilbert, Diane Driscoll Schwartz, Stan Seltzer, John Maceli, and Eric Robinson
Calculus Mysteries and Thrillers, R. Grant Woods
Conjecture and Proof, Miklós Laczkovich
Counterexamples in Calculus, Sergiy Klymchuk
Creative Mathematics, H. S. Wall
Environmental Mathematics in the Classroom, edited by B. A. Fusaro and P. C. Kenschaft
Excursions in Classical Analysis: Pathways to Advanced Problem Solving and Undergraduate Research, Hongwei Chen
Explorations in Complex Analysis, Michael A. Brilleslyper, Michael J. Dorff, Jane M. McDougall, James S. Rolf, Lisbeth E. Schaubroeck, Richard L. Stankewitz, and Kenneth Stephenson
Exploratory Examples for Real Analysis, Joanne E. Snow and Kirk E. Weller
Exploring Advanced Euclidean Geometry with GeoGebra, Gerard A. Venema
Game Theory Through Examples, Erich Prisner
Geometry From Africa: Mathematical and Educational Explorations, Paulus Gerdes
Historical Modules for the Teaching and Learning of Mathematics (CD), edited by Victor Katz and Karen Dee Michalowicz
Identification Numbers and Check Digit Schemes, Joseph Kirtland
Interdisciplinary Lively Application Projects, edited by Chris Arney
Inverse Problems: Activities for Undergraduates, Charles W. Groetsch
Keeping it R.E.A.L.: Research Experiences for All Learners, Carla D. Martin and Anthony Tongen
Laboratory Experiences in Group Theory, Ellen Maycock Parker
Learn from the Masters, Frank Swetz, John Fauvel, Otto Bekken, Bengt Johansson, and Victor Katz
Math Made Visual: Creating Images for Understanding Mathematics, Claudi Alsina and Roger B. Nelsen
Mathematics Galore!: The First Five Years of the St. Marks Institute of Mathematics, James Tanton
Methods for Euclidean Geometry, Owen Byer, Felix Lazebnik, and Deirdre L. Smeltzer
Ordinary Differential Equations: A Brief Eclectic Tour, David A. Sánchez
Oval Track and Other Permutation Puzzles, John O. Kiltinen
Paradoxes and Sophisms in Calculus, Sergiy Klymchuk and Susan Staples
A Primer of Abstract Mathematics, Robert B. Ash
Proofs Without Words, Roger B. Nelsen
Proofs Without Words II, Roger B. Nelsen
Rediscovering Mathematics: You Do the Math, Shai Simonson
She Does Math!, edited by Marla Parker
Solve This: Math Activities for Students and Clubs, James S. Tanton
Student Manual for Mathematics for Business Decisions Part 1: Probability and Simulation, David Williamson, Marilou Mendel, Julie Tarr, and Deborah Yoklic
Student Manual for Mathematics for Business Decisions Part 2: Calculus and Optimization, David Williamson, Marilou Mendel, Julie Tarr, and Deborah Yoklic
Teaching Statistics Using Baseball, Jim Albert
Visual Group Theory, Nathan C. Carter
Which Numbers are Real?, Michael Henle
Writing Projects for Mathematics Courses: Crushed Clowns, Cars, and Coffee to Go, Annalisa Crannell, Gavin LaRose, Thomas Ratliff, and Elyn Rykken

MAA Service Center
P.O. Box 91112
Washington, DC 20090-1112
1-800-331-1MAA
FAX: 1-301-206-9789

Contents

Preface xvi

1 Theory 1: Introduction 1
1.1 What's a Game? 1
1.2 Game, Play, Move: Some Definitions 1
1.3 Classification of Games 2
Exercises 3

2 Theory 2: Simultaneous Games 4
2.1 Normal Form—Bimatrix Description 4
2.1.1 Two Players 4
2.1.2 Two Players, Zero-sum 5
2.1.3 Three or More Players 5
2.1.4 Symmetric Games 6
2.2 Which Option to Choose 6
2.2.1 Maximin Move and Security Level 6
2.2.2 Dominated Moves 7
2.2.3 Best Response 8
2.2.4 Nash Equilibria 9
2.3 Additional Topics 13
2.3.1 Best Response Digraphs 13
2.3.2 2-Player Zero-sum Symmetric Games 14
Exercises 15
Project 1: Reacting fast or slow 18

3 Example: Selecting a Class 19
3.1 Three Players, Two Classes 19
3.1.1 "I like you both" 19
3.1.2 Disliking the Rival 21
3.1.3 Outsider 21
3.2 Larger Cases 22
3.3 Assumptions 23
Exercises 23
Project 2 23
Project 3 24
Project 4 24

4 Example: Doctor Location Games 25
4.1 Doctor Location 25
4.1.1 An Example Graph 26
4.1.2 No (Pure) Nash Equilibrium? 27
4.1.3 How Good are the Nash Equilibria for the Public? 28
4.2 Trees 28
4.3 More than one Office (optional) 31
Exercises 31
Project 5: Doctor location on MOPs 33
Project 6 33
Project 7 33

5 Example: Restaurant Location Games 34
5.1 A First Graph 35
5.2 A Second Graph 36
5.3 Existence of Pure Nash Equilibria 37
5.4 More than one Restaurant (optional) 38
Exercises 39

6 Using Excel 42
6.1 Spreadsheet Programs like Excel 42
6.2 Two-Person Simultaneous Games 43
6.3 Three-Person Simultaneous Games 43
Exercises 43
Project 8: Simultaneous Quatro-Uno 44
Project 9: Restaurant Location Games 44
Project 10: 5 Knights 45
Project 11: 5 Cardinals 45

7 Example: Election I 47
7.1 First Example 47
7.2 Second Example 48
7.3 The General Model 50
7.4 Third Example 50
7.5 The Eight Cases 51
7.6 Voting Power Indices (optional) 51
Exercises 52

8 Theory 3: Sequential Games I: Perfect Information and no Randomness 53
8.1 Extensive Form: Game Tree and Game Digraph 53
8.2 Analyzing the Game: Backward Induction 56
8.2.1 Finite Games 56
8.2.2 The Procedure 57
8.2.3 Zermelo's Theorem 59
8.3 Additional Topics 59
8.3.1 Reality Check 59
8.3.2 Playing it Safe—Guaranteed Payoffs 61
8.3.3 Two-person Zero-sum Games 63
8.3.4 Breaking Ties 64
8.3.5 Existing Games 64
8.3.6 Greedy Strategies
Recommended publications
  • Game Theory 2: Extensive-Form Games and Subgame Perfection
Game Theory 2: Extensive-Form Games and Subgame Perfection

Dynamics in games: how should we think of strategic interactions that occur in sequence? Who moves when, and what can they do at different points in time? How do people react to different histories?

Modeling games with dynamics requires: players; a player function (who moves when); terminal histories (the possible paths through the game); and preferences over terminal histories.

A strategy is a complete contingent plan: player i's strategy specifies her action choice at each point at which she could be called on to make a choice.

An example: international crises. Two countries (A and B) are competing over a piece of land that B occupies. Country A decides whether to make a demand. If Country A makes a demand, B can either acquiesce or fight a war. If A does not make a demand, B keeps the land (the game ends). A's best outcome is Demand followed by Acquiesce, and its worst outcome is Demand and War. B's best outcome is No Demand, and its worst outcome is Demand and War.

A can choose Demand (D) or No Demand (ND); B can choose Fight a war (W) or Acquiesce (A). Preferences:

uA(D, A) = 3 > uA(ND, A) = uA(ND, W) = 2 > uA(D, W) = 1
uB(ND, A) = uB(ND, W) = 3 > uB(D, A) = 2 > uB(D, W) = 1

How can we represent this scenario as a game in strategic form? With best responses marked ✓:

                    Country B
                    W            A
Country A    D      1, 1         3✓, 2✓
             ND     2✓, 3✓       2, 3✓

Is there something funny here? Specifically, about the equilibrium (ND, W)?
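The best-response check sketched in these slides can be done mechanically. A minimal sketch, using payoff numbers that follow the preference orderings listed above:

```python
# Pure Nash equilibria of the strategic-form International Crisis game.
# Rows: Country A in {D, ND}; columns: Country B in {W, A(cquiesce)}.
# Payoff pairs (uA, uB) follow the preference orderings in the slides.
payoffs = {
    ("D", "W"): (1, 1),
    ("D", "A"): (3, 2),
    ("ND", "W"): (2, 3),
    ("ND", "A"): (2, 3),
}
A_moves, B_moves = ["D", "ND"], ["W", "A"]

def pure_nash(payoffs):
    eqs = []
    for a in A_moves:
        for b in B_moves:
            uA, uB = payoffs[(a, b)]
            # A profile is an equilibrium if neither player gains by deviating.
            best_a = all(uA >= payoffs[(a2, b)][0] for a2 in A_moves)
            best_b = all(uB >= payoffs[(a, b2)][1] for b2 in B_moves)
            if best_a and best_b:
                eqs.append((a, b))
    return eqs

print(pure_nash(payoffs))  # -> [('D', 'A'), ('ND', 'W')]
```

The output exhibits the "funny" profile the slides ask about: (ND, W) is a Nash equilibrium of the strategic form, even though B's threat to fight is not credible once A has actually made a demand.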
  • Learning and Equilibrium
Learning and Equilibrium
Drew Fudenberg (Department of Economics, Harvard University, Cambridge, Massachusetts; email: [email protected]) and David K. Levine (Department of Economics, Washington University of St. Louis, St. Louis, Missouri; email: [email protected])
Annu. Rev. Econ. 2009, 1:385–419. First published online as a Review in Advance on June 11, 2009. doi: 10.1146/annurev.economics.050708.142930. Copyright © 2009 by Annual Reviews.

Key words: nonequilibrium dynamics, bounded rationality, Nash equilibrium, self-confirming equilibrium

Abstract. The theory of learning in games explores how, which, and what kind of equilibria might arise as a consequence of a long-run nonequilibrium process of learning, adaptation, and/or imitation. If agents' strategies are completely observed at the end of each round (and agents are randomly matched with a series of anonymous opponents), fairly simple rules perform well in terms of the agent's worst-case payoffs, and also guarantee that any steady state of the system must correspond to an equilibrium. If players do not observe the strategies chosen by their opponents (as in extensive-form games), then learning is consistent with steady states that are not Nash equilibria, because players can maintain incorrect beliefs about off-path play. Beliefs can also be incorrect because of cognitive limitations and systematic inferential errors.
  • Best Experienced Payoff Dynamics and Cooperation in the Centipede Game
Theoretical Economics 14 (2019), 1347–1385. 1555-7561/20191347

Best experienced payoff dynamics and cooperation in the centipede game
William H. Sandholm (Department of Economics, University of Wisconsin), Segismundo S. Izquierdo (BioEcoUva, Department of Industrial Organization, Universidad de Valladolid), and Luis R. Izquierdo (Department of Civil Engineering, Universidad de Burgos)

We study population game dynamics under which each revising agent tests each of his strategies a fixed number of times, with each play of each strategy being against a newly drawn opponent, and chooses the strategy whose total payoff was highest. In the centipede game, these best experienced payoff dynamics lead to cooperative play. When strategies are tested once, play at the almost globally stable state is concentrated on the last few nodes of the game, with the proportions of agents playing each strategy being largely independent of the length of the game. Testing strategies many times leads to cyclical play.

Keywords: evolutionary game theory, backward induction, centipede game, computational algebra. JEL classification: C72, C73.

1. Introduction
The discrepancy between the conclusions of backward induction reasoning and observed behavior in certain canonical extensive form games is a basic puzzle of game theory. The centipede game (Rosenthal (1981)), the finitely repeated prisoner's dilemma, and related examples can be viewed as models of relationships in which each participant has repeated opportunities to take costly actions that benefit his partner and in which there is a commonly known date at which the interaction will end. Experimental and anecdotal evidence suggests that cooperative behavior may persist until close to
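The backward-induction benchmark that this paper's dynamics are contrasted with can be computed in a few lines. A minimal sketch of backward induction in a linear centipede-style game; the payoffs below are my own illustrative parameterization, not the paper's:

```python
# Backward induction in a linear centipede-style game.
# At node i (player i % 2 moves), "take" ends the game with payoffs pay[i];
# "pass" moves play on to node i + 1; passing at the last node yields final_pay.
def backward_induction(pay, final_pay):
    outcome = final_pay                      # value if everyone passes to the end
    plan = []
    for i in reversed(range(len(pay))):
        mover = i % 2
        take, keep_passing = pay[i], outcome
        if take[mover] >= keep_passing[mover]:
            outcome = take                   # mover prefers to stop here
            plan.append((i, "take"))
        else:
            plan.append((i, "pass"))
    return outcome, list(reversed(plan))

# Four-node example: the pot grows, but taking is always locally tempting.
pay = [(2, 0), (1, 3), (4, 2), (3, 5)]
final = (6, 4)
print(backward_induction(pay, final))
# -> ((2, 0), [(0, 'take'), (1, 'take'), (2, 'take'), (3, 'take')])
```

Backward induction unravels the game to immediate taking at the root, which is exactly the prediction that observed cooperative play contradicts.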
  • A Class of N-Player Colonel Blotto Games with Multidimensional Private Information
University of Zurich, Department of Economics, Working Paper Series. ISSN 1664-7041 (print), ISSN 1664-705X (online). Working Paper No. 336.

A Class of N-Player Colonel Blotto Games with Multidimensional Private Information
Christian Ewerhart (Department of Economics, University of Zurich) and Dan Kovenock (Economic Science Institute, Chapman University)
Revised version: February 1, 2021

Abstract. In this paper, we study N-player Colonel Blotto games with incomplete information about battlefield valuations. Such games arise in job markets, research and development, electoral competition, security analysis, and conflict resolution. For M ≥ N + 1 battlefields, we identify a Bayes-Nash equilibrium in which the resource allocation to a given battlefield is strictly monotone in the valuation of that battlefield. We also explore extensions such as heterogeneous budgets, the case M ≤ N, full-support type distributions, and network games.

Keywords. Colonel Blotto games · Private information · Bayes-Nash equilibrium · Generalized Dirichlet distributions · Networks

Corresponding author (Ewerhart): Schönberggasse 1, 8001 Zurich, Switzerland; [email protected]. Kovenock: [email protected].

1 Introduction
In a Colonel Blotto game, players simultaneously and independently allocate their endowments of a resource across a set of battlefields. The player that deploys the largest amount of the resource to a given battlefield scores a win and enjoys a gain in utility equivalent to her valuation of that battlefield. Thus, a player's utility corresponds to the sum of the valuations of all battlefields won by the player.
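The scoring rule described in the introduction is easy to sketch directly. The allocations and valuations below are made-up examples, and splitting ties evenly is my own convention (the paper does not prescribe it):

```python
# Score one profile of an N-player Colonel Blotto game: the player deploying
# the most resources to a battlefield wins it and earns her valuation of that
# field; her utility is the sum of valuations of the fields she wins.
def blotto_payoffs(allocations, valuations):
    n_players = len(allocations)
    n_fields = len(allocations[0])
    utils = [0.0] * n_players
    for f in range(n_fields):
        best = max(a[f] for a in allocations)
        winners = [p for p in range(n_players) if allocations[p][f] == best]
        for p in winners:
            utils[p] += valuations[p][f] / len(winners)  # split ties evenly
    return utils

# Three players, four battlefields, heterogeneous valuations.
alloc = [(4, 3, 2, 1), (1, 4, 4, 1), (2, 2, 2, 4)]
vals = [(1, 1, 1, 1), (2, 1, 1, 1), (1, 1, 1, 3)]
print(blotto_payoffs(alloc, vals))  # -> [1.0, 2.0, 3.0]
```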
  • Game Theory- Prisoners Dilemma Vs Battle of the Sexes EXCERPTS
Lesson 14. Game Theory. © 2010, 2011 Roberto Serrano and Allan M. Feldman. All rights reserved. Version C.

1. Introduction
In the last lesson we discussed duopoly markets in which two firms compete to sell a product. In such markets, the firms behave strategically; each firm must think about what the other firm is doing in order to decide what it should do itself. The theory of duopoly was originally developed in the 19th century, but it led to the theory of games in the 20th century. The first major book in game theory, published in 1944, was Theory of Games and Economic Behavior, by John von Neumann (1903–1957) and Oskar Morgenstern (1902–1977). We will return to the contributions of von Neumann and Morgenstern in Lesson 19, on uncertainty and expected utility. A group of people (or teams, firms, armies, countries) are in a game if their decision problems are interdependent, in the sense that the actions that all of them take influence the outcomes for everyone. Game theory is the study of games; it can also be called interactive decision theory. Many real-life interactions can be viewed as games. Obviously football, soccer, and baseball games are games. But so are the interactions of duopolists, the political campaigns between parties before an election, and the interactions of armed forces and countries. Even some interactions between animal or plant species in nature can be modeled as games. In fact, game theory has been used in many different fields in recent decades, including economics, political science, psychology, sociology, computer science, and biology. This brief lesson is not meant to replace a formal course in game theory; it is only an introduction.
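The excerpt's title pairs the Prisoner's Dilemma with the Battle of the Sexes. A minimal sketch finding the pure Nash equilibria of both; the payoff numbers are standard textbook choices, not taken from this lesson:

```python
# Pure Nash equilibria via best-response checks, for two classic 2x2 games.
def pure_nash(u):  # u[(r, c)] = (row player's payoff, column player's payoff)
    rows = sorted({r for r, _ in u})
    cols = sorted({c for _, c in u})
    return [(r, c) for r in rows for c in cols
            if u[(r, c)][0] == max(u[(r2, c)][0] for r2 in rows)
            and u[(r, c)][1] == max(u[(r, c2)][1] for c2 in cols)]

# Prisoner's Dilemma: mutual defection is the unique equilibrium,
# even though mutual cooperation pays both players more.
prisoners_dilemma = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                     ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# Battle of the Sexes: two pure equilibria, one favoring each player.
battle_of_sexes = {("Opera", "Opera"): (2, 1), ("Opera", "Fight"): (0, 0),
                   ("Fight", "Opera"): (0, 0), ("Fight", "Fight"): (1, 2)}

print(pure_nash(prisoners_dilemma))  # -> [('D', 'D')]
print(pure_nash(battle_of_sexes))    # -> [('Fight', 'Fight'), ('Opera', 'Opera')]
```

The contrast in the outputs is the point of the title: the dilemma has a single inefficient equilibrium, while the coordination game has two that the players rank oppositely.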
  • Campaigning Is Hard but Approximately Manageable
Computational Analyses of the Electoral College: Campaigning Is Hard But Approximately Manageable
Sina Dehghani (1), Hamed Saleh (2), Saeed Seddighin (3), Shang-Hua Teng (4) — 1 Institute for Research in Fundamental Sciences, 2 University of Maryland, 3 TTIC, 4 University of Southern California. [email protected], [email protected], [email protected], [email protected]

Abstract. In the classical discrete Colonel Blotto game—introduced by Borel in 1921—two colonels simultaneously distribute their troops across multiple battlefields. The winner of each battlefield is determined by a winner-take-all rule, independently of other battlefields. In the original formulation, each colonel's goal is to win as many battlefields as possible. The Blotto game and its extensions have been used in a wide range of applications, from political campaigns—exemplified by the U.S. presidential election—to marketing campaigns, from (innovative) technology competition to sports competition. Despite persistent efforts, efficient methods for finding the optimal strategies in Blotto games have been elusive for almost a century—due to exponential explosion in the organic solution space—until Ahmadinejad, Dehghani,

Electoral College, Political Campaigns, and Multi-Battleground Resource Allocation
The president of the United States is elected by the Electoral College, which consists of electors selected based on 51 concurrent elections. The number of electors each of the 50 states and D.C. can select is determined every 10 years by the United States Census. All but two states use a winner-take-all system, and in a typical election without any third-party candidate, the ticket that wins the majority of votes in the state earns the right to have its slate of electors from that state chosen to vote in the Electoral College. Thus, in principle, the team that receives the majority of electoral votes wins the race and its main candidate is elected as president.
  • Minimax TD-Learning with Neural Nets in a Markov Game
    Minimax TD-Learning with Neural Nets in a Markov Game Fredrik A. Dahl and Ole Martin Halck Norwegian Defence Research Establishment (FFI) P.O. Box 25, NO-2027 Kjeller, Norway {Fredrik-A.Dahl, Ole-Martin.Halck}@ffi.no Abstract. A minimax version of temporal difference learning (minimax TD- learning) is given, similar to minimax Q-learning. The algorithm is used to train a neural net to play Campaign, a two-player zero-sum game with imperfect information of the Markov game class. Two different evaluation criteria for evaluating game-playing agents are used, and their relation to game theory is shown. Also practical aspects of linear programming and fictitious play used for solving matrix games are discussed. 1 Introduction An important challenge to artificial intelligence (AI) in general, and machine learning in particular, is the development of agents that handle uncertainty in a rational way. This is particularly true when the uncertainty is connected with the behavior of other agents. Game theory is the branch of mathematics that deals with these problems, and indeed games have always been an important arena for testing and developing AI. However, almost all of this effort has gone into deterministic games like chess, go, othello and checkers. Although these are complex problem domains, uncertainty is not their major challenge. With the successful application of temporal difference learning, as defined by Sutton [1], to the dice game of backgammon by Tesauro [2], random games were included as a standard testing ground. But even backgammon features perfect information, which implies that both players always have the same information about the state of the game.
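The abstract mentions fictitious play as one of the practical tools for solving matrix games. A minimal sketch of discrete fictitious play on matching pennies, under the assumption that both players best-respond to the opponent's empirical move frequencies each round (one standard variant among several):

```python
# Fictitious play on a zero-sum matrix game (A holds the row player's payoffs).
# Each round, both players best-respond to the empirical frequency of the
# opponent's past moves; the empirical mixtures approach equilibrium play.
def fictitious_play(A, rounds=10000):
    n_rows, n_cols = len(A), len(A[0])
    row_counts = [0] * n_rows
    col_counts = [0] * n_cols
    r, c = 0, 0                       # arbitrary initial moves
    for _ in range(rounds):
        row_counts[r] += 1
        col_counts[c] += 1
        # Row best-responds to the column player's empirical mixture (maximize).
        r = max(range(n_rows),
                key=lambda i: sum(A[i][j] * col_counts[j] for j in range(n_cols)))
        # Column best-responds to the row player's empirical mixture (minimize).
        c = min(range(n_cols),
                key=lambda j: sum(A[i][j] * row_counts[i] for i in range(n_rows)))
    return ([x / rounds for x in row_counts], [x / rounds for x in col_counts])

matching_pennies = [[1, -1], [-1, 1]]
rows, cols = fictitious_play(matching_pennies)
print(rows, cols)  # both mixtures end up close to [0.5, 0.5]
```

For two-player zero-sum games the empirical frequencies are known to converge to an equilibrium mixture (here the unique (1/2, 1/2)), though actual play keeps cycling, which is why the abstract treats fictitious play and linear programming as complementary solution methods.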
  • Improving Fictitious Play Reinforcement Learning with Expanding Models
Improving Fictitious Play Reinforcement Learning with Expanding Models
Rong-Jun Qin (1,2), Jing-Cheng Pang (1), Yang Yu (1,†) — 1 National Key Laboratory for Novel Software Technology, Nanjing University, China; 2 Polixir. Emails: [email protected], [email protected], [email protected]. †To whom correspondence should be addressed.

Abstract. Fictitious play with reinforcement learning is a general and effective framework for zero-sum games. However, using the current deep neural network models, the implementation of fictitious play faces crucial challenges. Neural network model training employs gradient descent approaches to update all connection weights, and thus it is easy to forget the old opponents after training to beat the new opponents. Existing approaches often maintain a pool of historical policy models to avoid the forgetting. However, learning to beat a pool in stochastic games, i.e., a wide distribution over policy models, is either sample-consuming or insufficient to exploit all models with a limited amount of samples. In this paper, we propose a learning process with neural fictitious play to alleviate the above issues. We train a single model as our policy model, which consists of sub-models and a selector. Every time it faces a new opponent, the model is expanded by adding a new sub-model, where only the new sub-model is updated instead of the whole model. At the same time, the selector is also updated to mix up the new sub-model with the previous ones at the state level, so that the model is maintained as a behavior strategy instead of a wide distribution over policy models.
  • Lecture Notes
GRADUATE GAME THEORY LECTURE NOTES
by Omer Tamuz, California Institute of Technology, 2018

Acknowledgments: These lecture notes are partially adapted from Osborne and Rubinstein [29], Maschler, Solan and Zamir [23], lecture notes by Federico Echenique, and slides by Daron Acemoglu and Asu Ozdaglar. I am indebted to Seo Young (Silvia) Kim and Zhuofang Li for their help in finding and correcting many errors. Any comments or suggestions are welcome.

Contents
1 Extensive form games with perfect information 7
1.1 Tic-Tac-Toe 7
1.2 The Sweet Fifteen Game 7
1.3 Chess 7
1.4 Definition of extensive form games with perfect information 10
1.5 The ultimatum game 10
1.6 Equilibria 11
1.7 The centipede game 11
1.8 Subgames and subgame perfect equilibria 13
1.9 The dollar auction 14
1.10 Backward induction, Kuhn's Theorem and a proof of Zermelo's Theorem 15
2 Strategic form games 17
2.1 Definition 17
2.2 Nash equilibria 17
2.3 Classical examples 17
2.4 Dominated strategies 22
2.5 Repeated elimination of dominated strategies 22
2.6 Dominant strategies
  • Evolutionary Stable Strategy Application of Nash Equilibrium in Biology
GENERAL ARTICLE
Evolutionary Stable Strategy: Application of Nash Equilibrium in Biology
Jayanti Ray-Mukherjee and Shomen Mukherjee

Every behaviourally responsive animal (including us) makes decisions. These can be simple behavioural decisions such as where to feed, what to feed, how long to feed, decisions related to finding, choosing and competing for mates, or simply maintaining one's territory. All these are conflict situations between competing individuals, hence can be best understood using a game theory approach. Using some examples of classical games, we show how evolutionary game theory can help understand behavioural decisions of animals. Game theory (along with its cousin, optimality theory) continues to provide a strong conceptual and theoretical framework to ecologists for understanding the mechanisms by which species coexist.

About the authors: Jayanti Ray-Mukherjee is a faculty member in the School of Liberal Studies at Azim Premji University, Bengaluru. Jayanti is an experimental ecologist who studies mechanisms of species coexistence among plants; her research interests also include plant invasion ecology and habitat restoration. Shomen Mukherjee is a faculty member in the School of Liberal Studies at Azim Premji University, Bengaluru. He uses field experiments to study animal behaviour and ...

Most of you, at some point, might have seen two cats fighting. It is often accompanied with the cats facing each other with puffed up fur, arched back, ears back, tail twitching, with snarls, growls, and howls aimed at each other. But, if you notice closely, they often try to avoid physical contact, and spend most of their time in the above-mentioned behavioural displays. Biologists refer to this as a 'limited war' or conventional (ritualistic) strategy (not causing serious injury), as opposed to a dangerous (escalated) strategy (Figure 1) [1].
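The "limited war" versus escalated-fight contrast described here is conventionally modeled as the Hawk-Dove game. A sketch of its mixed evolutionarily stable strategy, with illustrative values V = 2 and C = 8 (my numbers, not the article's):

```python
# Hawk-Dove game: V is the value of the contested resource, C > V the cost of
# injury in an escalated fight. Payoffs: Hawk vs Hawk (V - C) / 2; Hawk vs
# Dove V; Dove vs Hawk 0; Dove vs Dove V / 2. The mixed ESS escalates
# (plays Hawk) with probability V / C.
def hawk_dove_ess(V, C):
    assert C > V > 0, "a mixed ESS requires injury cost above resource value"
    return V / C  # equilibrium probability of escalating

p = hawk_dove_ess(V=2, C=8)
print(p)  # -> 0.25

# Sanity check: at the ESS, Hawk and Dove earn the same expected payoff.
V, C = 2, 8
hawk_payoff = p * (V - C) / 2 + (1 - p) * V
dove_payoff = p * 0 + (1 - p) * V / 2
print(abs(hawk_payoff - dove_payoff) < 1e-12)  # -> True
```

The equal-payoff check is the defining property of a mixed equilibrium: when most of the population fights only ritualistically, escalation pays, and vice versa, so the population settles at the V/C mixture.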
  • Statistical Mechanics of Evolutionary Dynamics
Statistical Mechanics of Evolutionary Dynamics
Cumulative dissertation (Kumulative Dissertation) for the degree of Doctor, Faculty of Mathematics and Natural Sciences, Christian-Albrechts-Universität zu Kiel, submitted by Torsten Röhl, Kiel 2007.
Referee: Prof. Dr. H. G. Schuster. Co-referee: Prof. Dr. G. Pfister. Date of oral examination: 4 December 2007. Approved for printing: Kiel, 12 December 2007. Signed: Prof. Dr. J. Grotemeyer (Dean).

Contents
Abstract 1
Kurzfassung 2
1 Introduction 3
1.1 Motivation 3
1.1.1 Statistical Physics and Biological Systems 3
1.1.1.1 Statistical Physics and Evolutionary Dynamics 4
1.1.2 Outline 7
1.2 Game Theory 8
1.2.1 Classical Game Theory 8
1.2.1.1 The Matching Pennies Game 11
1.2.1.2 The Prisoner's Dilemma Game 13
1.2.2 From Classical Game Theory to Evolutionary Game Theory 16
1.3 Evolutionary Dynamics 18
1.3.1 Introduction 18
1.3.2 Replicator Dynamics 22
1.3.3 Games in Finite Populations 25
1.4 Survey of the Publications 29
1.4.1 Stochastic Gain in Population Dynamics 29
1.4.2 Stochastic Gain in Finite Populations 31
1.4.3 Impact of Fraud on the Mean-Field Dynamics of Cooperative Social Systems 36
2 Publications 41
2.1 Stochastic gain in population dynamics 42
2.2 Stochastic gain in finite populations 47
2.3 Impact of fraud on the mean-field dynamics of cooperative social systems 54
3 Discussion 63
3.1 Conclusions and Outlook 63
Bibliography 67
Curriculum Vitae 78
Selbständigkeitserklärung (declaration of authorship) 79

Abstract
Evolutionary dynamics is an essential component of a mathematical and computational approach to biology. In recent years the mathematical description of evolution has moved to a description of any kind of process where information is being reproduced in a natural environment. In this manner everything that lives is a product of evolutionary dynamics.
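Replicator dynamics, listed in the contents above, can be sketched in a few lines for a symmetric two-strategy game. The Hawk-Dove payoffs below (V = 2, C = 8) are illustrative choices, not taken from the dissertation:

```python
# Discrete-time replicator dynamics for a symmetric two-strategy game:
# a strategy's population share grows in proportion to how much its payoff
# against the current mix exceeds the population-average payoff.
def replicator_step(x, A, dt=0.01):
    # x: share of strategy 0; A: 2x2 payoff matrix (row strategy vs opponent).
    f0 = x * A[0][0] + (1 - x) * A[0][1]   # fitness of strategy 0
    f1 = x * A[1][0] + (1 - x) * A[1][1]   # fitness of strategy 1
    avg = x * f0 + (1 - x) * f1
    return x + dt * x * (f0 - avg)

# Hawk-Dove payoffs with V = 2, C = 8: interior rest point at x = V/C = 0.25.
A = [[-3.0, 2.0], [0.0, 1.0]]
x = 0.9
for _ in range(20000):
    x = replicator_step(x, A)
print(round(x, 3))  # -> 0.25
```

Starting from 90% Hawks, the population flows to the interior rest point, illustrating how the deterministic mean-field dynamics that the thesis builds on single out the evolutionarily stable mixture.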
  • Game Theory]: Basics of Game Theory
Artificial Intelligence Methods for Social Good
M2-1 [Game Theory]: Basics of Game Theory
08-537 (9-unit) and 08-737 (12-unit). Instructor: Fei Fang ([email protected]), Wean Hall 4126. 5/8/2018

Quiz 1 (recap: optimization). Given the coordinates of n residential areas in a city (assuming a 2-D plane), denoted x_1, ..., x_n, the government wants to find a location x minimizing the sum of (Euclidean) distances to all residential areas, to build a hospital. The optimization problem can be written as:
A: min_x Σ_i |x_i − x|
B: min_x Σ_i ‖x_i − x‖_2
C: min_x Σ_i ‖x_i − x‖_2^2
D: none of the above

From games to game theory: game theory is the study of mathematical models of conflict and cooperation between intelligent decision makers, used in economics, political science, etc.

Outline: basic concepts in games; basic solution concepts; computing Nash equilibrium; computing Strong Stackelberg Equilibrium.

Learning objectives:
- Understand the concepts of game, player, action, strategy, payoff, expected utility, and best response; dominant strategy, maxmin strategy, minmax strategy; Nash equilibrium; Stackelberg equilibrium and Strong Stackelberg Equilibrium.
- Describe the minimax theorem.
- Formulate the following problems as optimization problems: find NE in zero-sum games (LP); find SSE in two-player general-sum games (multiple LPs and MILP).
- Know how to find the method/algorithm/solver/package you can use for solving the games.
- Compute NE/SSE by hand or by calling a solver for small games.

Let's Play! Classical Games
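The quiz's sum-of-squared-distances objective is minimized by the mean of the points, while the sum-of-Euclidean-distances objective in the problem statement is minimized by the geometric median. A sketch contrasting the two, using Weiszfeld's iteration on made-up data:

```python
# Sum of squared distances is minimized by the mean; sum of Euclidean
# distances is minimized by the geometric median (Weiszfeld's iteration).
import math

def mean_point(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def geometric_median(pts, iters=200):
    x, y = mean_point(pts)  # start the iteration from the mean
    for _ in range(iters):
        num_x = num_y = denom = 0.0
        for (px, py) in pts:
            d = math.hypot(px - x, py - y)
            if d < 1e-12:       # iterate landed on a data point; keep it
                return (px, py)
            num_x += px / d
            num_y += py / d
            denom += 1 / d
        x, y = num_x / denom, num_y / denom
    return (x, y)

pts = [(0, 0), (0, 0), (0, 0), (10, 0)]
print(mean_point(pts))        # -> (2.5, 0.0)
print(geometric_median(pts))  # -> (0, 0): the median resists the outlier
```

The example shows why the two objectives differ: one distant residential area drags the mean a quarter of the way toward it, while the geometric median stays with the majority, which is the behavior the hospital-location problem asks for.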