Characterizing Solution Concepts in Games Using Knowledge-Based Programs


Characterizing Solution Concepts in Games Using Knowledge-Based Programs

Joseph Y. Halpern∗ — Computer Science Department, Cornell University, U.S.A. — e-mail: [email protected]
Yoram Moses — Department of Electrical Engineering, Technion—Israel Institute of Technology, 32000 Haifa, Israel — e-mail: [email protected]

∗Supported in part by NSF under grants CTC-0208535 and ITR-0325453, by ONR under grant N00014-02-1-0455, by the DoD Multidisciplinary University Research Initiative (MURI) program administered by the ONR under grants N00014-01-1-0795 and N00014-04-1-0725, and by AFOSR under grant F49620-02-1-0101.

Abstract

We show how solution concepts in games such as Nash equilibrium, correlated equilibrium, rationalizability, and sequential equilibrium can be given a uniform definition in terms of knowledge-based programs. Intuitively, all solution concepts are implementations of two knowledge-based programs, one appropriate for games represented in normal form, the other for games represented in extensive form. These knowledge-based programs can be viewed as embodying rationality. The representation works even if (a) information sets do not capture an agent's knowledge, (b) uncertainty is not represented by probability, or (c) the underlying game is not common knowledge.

[Figure 1: A game of imperfect recall. The figure itself did not survive extraction; it showed a game tree with a chance node x0 (moving left or right with probability .5 each), decision nodes x1 and x2 with moves S and B, an information set containing x3 and x4 with moves L and R, and terminal nodes z1–z5 with their payoffs.]

1 Introduction

Game theorists represent games in two standard ways: in normal form, where each agent simply chooses a strategy, and in extensive form, using game trees, where the agents make choices over time. An extensive-form representation has the advantage that it describes the dynamic structure of the game—it explicitly represents the sequence of decision problems encountered by agents. However, the extensive-form representation purports to do more than just describe the structure of the game; it also attempts to represent the information that players have in the game, by the use of information sets. Intuitively, an information set consists of a set of nodes in the game tree where a player has the same information. However, as Halpern [1997] has pointed out, information sets may not adequately represent a player's information.

Halpern makes this point by considering the following single-agent game of imperfect recall, originally presented by Piccione and Rubinstein [1997]: The game starts with nature moving either left or right, each with probability 1/2. The agent can then either stop the game (playing move S) and get a payoff of 2, or continue, by playing move B. If he continues, he gets a high payoff if he matches nature's move, and a low payoff otherwise. Although he originally knows nature's move, the information set that includes the nodes labeled x3 and x4 is intended to indicate that the player forgets whether nature moved left or right after moving B. Intuitively, when he is at the information set X, the agent is not supposed to know whether he is at x3 or at x4.

It is not hard to show that the strategy that maximizes expected utility chooses action S at node x1, action B at node x2, and action R at the information set X consisting of x3 and x4. Call this strategy f. Let f′ be the strategy of choosing action B at x1, action S at x2, and L at X. Piccione and Rubinstein argue that if node x1 is reached, the player should reconsider, and decide to switch from f to f′. As Halpern points out, this is indeed true, provided that the player knows at each stage of the game what strategy he is currently using. However, in that case, if the player is using f at the information set, then he knows that he is at node x4; if he has switched and is using f′, then he knows that he is at x3. So, in this setting, it is no longer the case that the player does not know whether he is at x3 or x4 in the information set; he can infer which state he is at from the strategy he is using.

In game theory, a strategy is taken to be a function from information sets to actions. The intuition behind this is that, since an agent cannot tell the nodes in an information set apart, he must do the same thing at all these nodes. But this example shows that if the agent has imperfect recall but can switch strategies, then he can arrange to do different things at different nodes in the same information set. As Halpern [1997] observes, '"situations that [an agent] cannot distinguish" and "nodes in the same information set" may be two quite different notions.' He suggests using the game tree to describe the structure of the game, and using the runs and systems framework [Fagin et al., 1995] to describe the agent's information. The idea is that an agent has an internal local state that describes all the information that he has. A strategy (or protocol in the language of [Fagin et al., 1995]) is a function from local states to actions. Protocols capture the intuition that what an agent does can depend only on what he knows. But now an agent's knowledge is represented by its local state, not by an information set. Different assumptions about what agents know (for example, whether they know their current strategies) are captured by running the same protocol in different contexts. If the information sets appropriately represent an agent's knowledge in a game, then we can identify local states with information sets. But, as the example above shows, we cannot do this in general.

A number of solution concepts have been considered in the game-theory literature, ranging from Nash equilibrium and correlated equilibrium to refinements of Nash equilibrium such as sequential equilibrium and weaker notions such as rationalizability (see [Osborne and Rubinstein, 1994] for an overview). The fact that game trees represent both the game and the players' information has proved critical in defining solution concepts in extensive-form games. Can we still represent solution concepts in a useful way using runs and systems to represent a player's information? As we show here, not only can we do this, but we can do it in a way that gives deeper insight into solution concepts. Indeed, all the standard solution concepts in the literature can be understood as instances of a single knowledge-based (kb) program [Fagin et al., 1995; 1997], which captures the underlying intuition that a player should make a best response, given her beliefs. The differences between solution concepts arise from running the kb program in different contexts.

In a kb program, a player's actions depend explicitly on the player's knowledge. For example, a kb program could have a test that says "If you don't know that Ann received the information, then send her a message", which can be written as

    if ¬Bi(Ann received info) then send Ann a message.

Roughly speaking, we want a kb program that says that if player i believes that she is about to perform strategy S (which we express with the formula doi(S)), and she believes that she would not do any better with another strategy, then she should indeed go ahead and run S. This test can be viewed as embodying rationality. There is a subtlety in expressing the statement "she would not do any better with another strategy". We express this by saying "if her expected utility, given that she will use strategy S, is x, then her expected utility if she were to use strategy S′ is at most x." The "if she were to use S′" is a counterfactual statement. She is planning to use strategy S, but is contemplating what would happen if she were to do something counter to fact, namely, to use S′. Counterfactuals have been the subject of intense study in the philosophy literature (see, for example, [Lewis, 1973; Stalnaker, 1968]) and, more recently, in the game theory literature (see, for example, [Aumann, 1995; Halpern, 2001; Samet, 1996]). We write the counterfactual "If A were the case then B would be true" as "A □→ B". Although this statement involves an "if ... then", the semantics of the counterfactual implication A □→ B is quite different from the material implication A ⇒ B. In particular, while A ⇒ B is true if A is false, A □→ B might not be.

With this background, consider the following kb program for player i:

    for each strategy S ∈ Si(Γ) do
        if Bi(doi(S) ∧ ∀x(EUi = x ⇒ ∧_{S′∈Si(Γ)} (doi(S′) □→ (EUi ≤ x)))) then S.

This kb program is meant to capture the intuition above. Intuitively, it says that if player i believes that she is about to perform strategy S and, if her expected utility is x, then if she were to perform another strategy S′, her expected utility would be no greater than x, then she should perform strategy S. Call this kb program EQNFΓ (with the individual instance for player i denoted by EQNFiΓ). As we show, if all players follow EQNFΓ, then they end up playing some type of equilibrium. Which type of equilibrium they play depends on the context. Due to space considerations, we focus on three examples in this abstract. If the players have a common prior on the joint strategies being used, and this common prior is such that players' beliefs are independent of the strategies they use, then they play a Nash equilibrium. Without this independence assumption, we get a correlated equilibrium. On the other hand, if players have possibly different priors on the space of strategies, then this kb program defines rationalizable strategies [Bernheim, 1984; Pearce, 1984].

IJCAI-07 1300
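The EQNF program is stated in terms of belief and counterfactuals, but in the probabilistic common-prior case with independence its test reduces to a familiar expected-utility comparison: the strategy a player intends to play must do at least as well as every alternative, holding the others fixed. The sketch below is not the paper's formal machinery — it replaces the counterfactual "if she were to play S′" with a direct payoff comparison, restricts attention to pure strategies, and uses a made-up 2×2 coordination game — but it shows how running the test for all players amounts to checking for a pure Nash equilibrium.

```python
from itertools import product

def best_response_test(payoffs, profile, i, strategies):
    """EQNF-style test for player i: the strategy i intends to play in
    `profile` must yield a payoff at least as high as any alternative S',
    holding the other players' strategies fixed (the counterfactual switch)."""
    current = payoffs(profile)[i]
    for s_alt in strategies[i]:
        alt_profile = profile[:i] + (s_alt,) + profile[i + 1:]
        if payoffs(alt_profile)[i] > current:
            return False  # some S' would do strictly better
    return True

def nash_profiles(payoffs, strategies):
    """Profiles at which every player passes the test: pure Nash equilibria."""
    return [p for p in product(*strategies)
            if all(best_response_test(payoffs, p, i, strategies)
                   for i in range(len(strategies)))]

# Hypothetical 2x2 coordination game (not from the paper): both players get
# 2 if they coordinate on 'a', 1 if they coordinate on 'b', 0 otherwise.
table = {('a', 'a'): (2, 2), ('b', 'b'): (1, 1),
         ('a', 'b'): (0, 0), ('b', 'a'): (0, 0)}
strategies = [('a', 'b'), ('a', 'b')]
print(nash_profiles(lambda p: table[p], strategies))
# → [('a', 'a'), ('b', 'b')]
```

Both coordination profiles pass the test, matching the paper's claim that under a common prior with independence, agents following EQNF play a Nash equilibrium.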
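The introduction notes that dropping the independence assumption on the common prior turns the EQNF condition into correlated equilibrium: the prior is then an arbitrary joint distribution over strategy profiles, and the test becomes "conditional on being told to play S, no deviation S′ has higher expected utility." A minimal sketch of that check, under stated assumptions — the game (the classic Chicken game) and the distribution are illustrative, not taken from the paper:

```python
def is_correlated_eq(dist, payoffs, strategies):
    """dist: a (possibly non-product) common prior over joint strategy
    profiles, as {profile: probability}. For each player i and each
    recommended strategy s with positive probability, obeying s must be a
    best response to the conditional distribution over the others given s."""
    n = len(strategies)
    for i in range(n):
        for s in strategies[i]:
            # Unnormalized conditional on i being recommended s; the
            # normalizer cancels in the comparison below.
            cond = {p: q for p, q in dist.items() if p[i] == s and q > 0}
            if not cond:
                continue

            def eu(dev):
                return sum(q * payoffs[p[:i] + (dev,) + p[i + 1:]][i]
                           for p, q in cond.items())

            if any(eu(dev) > eu(s) + 1e-9 for dev in strategies[i]):
                return False
    return True

# Hypothetical game of Chicken (D = dare, C = chicken):
payoffs = {('D', 'D'): (0, 0), ('D', 'C'): (7, 2),
           ('C', 'D'): (2, 7), ('C', 'C'): (6, 6)}
# A non-product common prior: the classic "traffic light" distribution,
# which is a correlated equilibrium but not a product of mixed strategies.
prior = {('D', 'C'): 1/3, ('C', 'D'): 1/3, ('C', 'C'): 1/3}
print(is_correlated_eq(prior, payoffs, [('D', 'C'), ('D', 'C')]))  # → True
```

With a product prior this condition collapses to the Nash best-response check, mirroring how the same kb program yields different solution concepts in different contexts.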