Introduction to Game Theory


3a. More on Normal-Form Games
Dana Nau, University of Maryland

More Solution Concepts

Last time, we talked about several solution concepts:
• Pareto optimality
• Nash equilibrium
• Maximin and minimax
• Dominance
• Rationalizability
We'll continue with several more:
• Trembling-hand perfect equilibrium
• ε-Nash equilibrium
• Rationalizability
• Evolutionarily stable strategies

Trembling-Hand Perfect Equilibrium

A solution concept that's stricter than Nash equilibrium. "Trembling hand" means the equilibrium must be robust against slight errors, or "trembles," by the agents, i.e., small perturbations of their strategies.

Recall that a fully mixed strategy assigns every action a nonzero probability.

Let S = (s1, …, sn) be a mixed-strategy profile for a game G. S is a (trembling-hand) perfect equilibrium if there is a sequence of fully mixed strategy profiles S^0, S^1, … with the following properties:
• lim_{k→∞} S^k = S
• for each S^k = (s1^k, …, sn^k), every strategy si is a best response to the strategies S^k_{−i}
The details are complicated, and I won't discuss them.

ε-Nash Equilibrium

Another solution concept, reflecting the idea that agents might not change strategies if the gain would be very small.

Let ε > 0. A strategy profile S = (s1, …, sn) is an ε-Nash equilibrium if, for every agent i and for all strategies si′ ≠ si,

    ui(si, S−i) ≥ ui(si′, S−i) − ε

ε-Nash equilibria always exist: every Nash equilibrium is surrounded by a region of ε-Nash equilibria for any ε > 0.

This concept can be computationally useful:
• Algorithms to identify ε-Nash equilibria need consider only a finite set of mixed-strategy profiles, not the whole continuous space.
• Because of finite precision, computers generally find only ε-Nash equilibria, where ε is roughly the machine precision.

Problems with ε-Nash Equilibrium

For every Nash equilibrium, there are ε-Nash equilibria that approximate it, but the converse isn't true: there are ε-Nash equilibria that aren't close to any Nash equilibrium.

Example: the game shown in the slides (payoff matrix not reproduced here) has just one Nash equilibrium, (D, R). We can use strategy elimination to get it:
• D dominates U for agent 1
• on removing U, R dominates L for agent 2
(D, R) is also an ε-Nash equilibrium, but there's another ε-Nash equilibrium, (U, L). In this equilibrium, neither agent's payoff is within ε of the agent's payoff in a Nash equilibrium.

The problem: in the ε-Nash equilibrium (U, L), agent 1 can't gain more than ε by deviating, but if agent 1 deviates, agent 2 can gain more than ε by best-responding to agent 1's deviation.

Some ε-Nash equilibria are also very unlikely to arise. Agent 1 might not care about a gain of ε/2, but might reason as follows:
• agent 2 may expect agent 1 to play D, since it dominates U
• so agent 2 is likely to play R
• if agent 2 plays R, agent 1 does much better by playing D rather than U
In general, ε-approximation is much messier in games than in optimization problems.

Rationalizability

A strategy is rationalizable if a perfectly rational agent could justifiably play it against perfectly rational opponents. The formal definition is complicated. Informally, a strategy for agent i is rationalizable if it's a best response to some beliefs that agent i could have about the strategies that the other agents will take. But agent i's beliefs must take into account i's knowledge of the rationality of the others. This incorporates
• the other agents' knowledge of i's rationality,
• their knowledge of i's knowledge of their rationality,
• and so on, ad infinitum.
A rationalizable strategy profile is a strategy profile that consists only of rationalizable strategies.

Example: Matching Pennies

            Heads    Tails
    Heads   1, −1    −1, 1
    Tails   −1, 1    1, −1

Agent 1's pure strategy Heads is rationalizable. Let's look at the chain of beliefs:
• For agent 1, Heads is a best response to agent 2's pure strategy Heads, …
• … and believing that 2 would also play Heads is consistent with 2's rationality: 2 could believe that 1 would play Tails, to which 2's best response is Heads, …
• … and it would be rational for 2 to believe that 1 would play Tails: 2 could believe that 1 believed that 2 would play Tails, to which Tails is a best response, …

Strategies that aren't rationalizable

Prisoner's Dilemma:

            C        D
    C       3, 3     0, 5
    D       5, 0     1, 1

Strategy C isn't rationalizable for agent 1: it isn't a best response to any of agent 2's strategies.

In the 3×3 game we used earlier (not reproduced here), M is not a rationalizable strategy for agent 1. It is a best response to one of agent 2's strategies, namely R, but there's no belief that agent 2 could have about agent 1's strategy for which R would be a best response.

Comments

The formal definition of rationalizability is complicated because of the infinite regress, but we can say some intuitive things about rationalizable strategies:
• Nash equilibrium strategies are always rationalizable, so the set of rationalizable strategies (and strategy profiles) is always nonempty.
• In two-player games, rationalizable strategies are simply those that survive the iterated elimination of strictly dominated strategies.
• In n-agent games, this isn't so. Rather, rationalizable strategies are those that survive iterative removal of strategies that are never a best response to any strategy profile by the other agents.
Example: the p-beauty contest.

The p-Beauty Contest

At the start of my first class, I asked you to do the following:
• Choose a number in the range from 0 to 100.
• Write it on a piece of paper, along with your name.
• In a few minutes, I'll ask you to pass your papers to the front of the room.
After class, I'll compute the average of all of the numbers. The winner(s) will be whoever chose a number that's closest to 2/3 of the average. I'll announce the results in a subsequent class.

This game is famous among economists and game theorists. It's called the p-beauty contest; here I used p = 2/3.

Recall that in n-player games, rationalizable strategies are those that survive iterative removal of strategies that are never a best response to any strategy profile by the other agents. In the p-beauty contest, consider the strategy profile in which everyone else chooses 100. Every number in the interval [0, 100) is a best response to some such belief, and thus every number in the interval [0, 100) is rationalizable.

Nash Equilibrium for the p-Beauty Contest

Iteratively eliminate dominated strategies:
• All numbers are ≤ 100, so 2/3 × (average) < 67; hence any strategy that includes numbers ≥ 67 isn't a best response to any strategy profile, so eliminate it.
• The remaining strategies only include numbers < 67, so for every rationalizable strategy profile, 2/3 × (average) < 45; hence any strategy that includes numbers ≥ 45 isn't a best response to any strategy profile, so eliminate it.
• The remaining strategies only include numbers < 45, so for every rationalizable strategy profile, 2/3 × (average) < 30.
• … and so on.
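The iterated tightening of the upper bound described above can be sketched in a few lines of Python. This is a minimal illustration; the function name and the stopping tolerance are my own choices, not part of the slides.

```python
# Iterated elimination in the p-beauty contest (p = 2/3).
# No surviving number can exceed p times the largest possible average,
# so the upper bound on undominated choices shrinks by a factor of p each round.

def beauty_contest_bounds(p=2/3, start=100.0, tol=1e-6):
    """Return the sequence of upper bounds produced by iterated elimination."""
    bounds = [start]
    while bounds[-1] > tol:
        bounds.append(p * bounds[-1])   # best responses can't exceed p * (current max)
    return bounds

bounds = beauty_contest_bounds()
print([round(b, 1) for b in bounds[:4]])   # [100.0, 66.7, 44.4, 29.6]
```

The bounds match the slide's thresholds of 67, 45, and 30, and the sequence converges to 0, the unique survivor.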
The only strategy profile that survives elimination of dominated strategies is the one in which everybody chooses 0. Therefore this is the unique Nash equilibrium.

p-Beauty Contest Results

2/3 × (average) = 21; winner: Giovanni.

Another Example of p-Beauty Contest Results

Average = 32.93; 2/3 of the average = 21.95; winner: anonymous.

We aren't rational

Most of you didn't play Nash equilibrium strategies; we aren't game-theoretically rational agents. There's a huge literature on behavioral economics going back to about 1979, covering many cases where humans (or aggregations of humans) tend to make different decisions than the game-theoretically optimal ones. Daniel Kahneman received the 2002 Nobel Prize in Economics for his work on that topic.

Choosing "Irrational" Strategies

Why choose a non-equilibrium strategy?
• Limitations in reasoning ability:
  › didn't calculate the Nash equilibrium correctly
  › don't know how to calculate it
  › don't even know the concept
• Hidden payoffs (other things may be more important than winning):
  › want to be helpful
  › want to see what happens
  › want to create mischief
• Agent modeling (below)

Agent Modeling

A Nash equilibrium strategy is best for you if the other agents also use their Nash equilibrium strategies. In many cases, the other agents won't use Nash equilibrium strategies. If you can forecast their actions accurately, you may be able to do much better than the Nash equilibrium strategy. I'll say more about this in Session 9, on incomplete-information games.

Evolutionarily Stable Strategies

An evolutionarily stable strategy (ESS) is a mixed strategy that's "resistant to invasion" by new strategies. This concept comes from evolutionary biology: consider how various species' relative "fitness" causes their proportions of the population to grow or shrink. For us, an organism's fitness is its expected payoff from interacting with a random member of the population, and an organism's strategy is anything that might affect its fitness:
• size, aggressiveness, sensory abilities, intelligence, …
Suppose a small population of "invaders" playing a different strategy is added to a population. The original strategy is an ESS if it gets a higher payoff against the mixture of the new and old strategies than the invaders do.
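As a sketch of how the invasion condition can be checked numerically, here is Maynard Smith's standard ESS test applied to the classic Hawk-Dove game. The payoff values (V = 4, C = 6), the mutant list, and the helper names are illustrative assumptions, not taken from these slides.

```python
# Maynard Smith's ESS condition for a symmetric two-player game:
# s is an ESS if for every mutant t != s, either u(s,s) > u(t,s), or
# u(s,s) == u(t,s) and u(s,t) > u(t,t).

def u(p, q, A):
    """Expected payoff of mixed strategy p against mixed strategy q."""
    return sum(p[i] * A[i][j] * q[j]
               for i in range(len(p)) for j in range(len(q)))

def is_ess(s, mutants, A, eps=1e-9):
    """Test the ESS condition for s against a finite list of mutant strategies."""
    for t in mutants:
        if max(abs(a - b) for a, b in zip(s, t)) < eps:
            continue  # skip t == s
        us_s, ut_s = u(s, s, A), u(t, s, A)
        if ut_s > us_s + eps:
            return False      # mutant does strictly better against the incumbent
        if abs(ut_s - us_s) <= eps and u(t, t, A) >= u(s, t, A) - eps:
            return False      # tie against the incumbent, and mutant holds its own
    return True

# Hawk-Dove with V = 4 (resource value), C = 6 (cost of fighting); row payoffs:
A = [[-1, 4],   # Hawk vs (Hawk, Dove): (V-C)/2, V
     [ 0, 2]]   # Dove vs (Hawk, Dove): 0, V/2

hawk, dove, mixed = [1, 0], [0, 1], [2/3, 1/3]   # mixed plays Hawk with prob V/C
mutants = [hawk, dove, [0.5, 0.5]]
print(is_ess(hawk, mutants, A))   # False: Dove can invade a Hawk population
print(is_ess(mixed, mutants, A))  # True: the V/C mixture resists these invaders
```

Neither pure strategy is stable here, while the mixed strategy that plays Hawk with probability V/C satisfies the condition against every listed mutant, matching the textbook analysis of Hawk-Dove.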