Microeconomics 3
Andreas Ortmann, Ph.D., room 332b
Summer 2006
(420 2) 240 05 117
[email protected]
http://home.cerge-ei.cz/ortmann

Office hours: See information on office door

Lectures 1 - 3 (May 9, 11, 12): Basic elements of non-cooperative games (extensive form and normal form representation of games, randomized choices); basic solution concepts for simultaneous-move and sequential-move games (dominance, iterated dominance, Nash); and randomized choices. Cooper & Van Huyck (JET 2003) [-> Friday] and Palacios-Huerta (RES 2003) [-> Friday]

Problem set # 1 ready this coming Friday (due at beginning of exercise session May 23), downloadable from http://home.cerge-ei.cz/ortmann/CourseMaterials.html.

Key readings:

MWG chapter 7 (all sections)

Assignments:

MWG chapter 8 (all sections)

Johnson, Camerer, et al. (JET 2002) [-> Thursday] and Bosch-Domenech, et al. (AER 2002) [-> Friday]

Remark 0: A game is a formal representation of a situation in which a number of individuals interact in a setting of strategic interdependence. That is, the success (or failure) of a player’s choices depends on the choices of other players. Typically, a player therefore has to form expectations about what other players will choose. Game theory is about what rational players might do in such situations. Decision theory can be conceptualized as a special case of game theory in which nature is the “player” one is matched with.

Remark 1:

Non-cooperative game theory (as opposed to cooperative game theory)

-> models interactive decision problems (this “strategic interaction” may reflect conflict or cooperation)
-> assumes rationality (but not necessarily self-interest) as primitive (“Nash program”)

-> comes in two basic forms:
- Deductive (or eductive, or prospective): assumes rationality and lots of knowledge (e.g., common knowledge = all players know the structure of the game, know that their rivals know it, know that their rivals know that they know it, and so on); assumes that people are forward-looking

- Inductive (or evolutive, or retrospective): assumes less rationality and knowledge (e.g., only the payoffs of own past actions); assumes that people are backward-looking

Note 1: MWG is all about non-cooperative deductive game theory (see also Kreps 1990, 1990a; Tirole 1988; Fudenberg & Tirole 1991), as opposed to

-> evolutionary and learning game theory (see Weibull 1995, Evolutionary Game Theory; Vega-Redondo 1996, Evolution, Games, and Economic Behavior; Samuelson 1997, Evolutionary Games and Equilibrium Selection; Fudenberg & Levine 1998, The Theory of Learning in Games; Evans & Honkapohja 2001, Learning and Expectations in Macroeconomics; Vega-Redondo 2003, Economics and the Theory of Games)

-> ACE (agent-based computational economics; see Riechmann 2001, Genetic Algorithm Learning and Evolutionary Games, Journal of Economic Dynamics and Control 25: 1019-1037)

-> “biological” game theory (Maynard Smith 1982, Evolution and the Theory of Games; Hofbauer & Sigmund 1984/1988, The Theory of Evolution and Dynamical Systems).

-> cooperative game theory (see von Neumann & Morgenstern, Theory of Games and Economic Behavior, second edition 1947).

Note 2: What is the relationship between the eductive and evolutive approaches?

A very fruitful area of research. One of the central results is that, under relatively weak conditions, one can define dynamical processes whose rest points are Nash equilibria (Friedman, Econometrica 1991; see also Friedman, Economic Journal 1996).

Another interesting result is that dynamical processes select among possibly many Nash equilibria (e.g., Van Huyck et al., International Journal of Game Theory 1995).

However, many questions remain open. One of the key open questions is how to define the appropriate mapping between a particular dynamic process/model function and the experimental design and implementation: What exactly is the right model function? Why are we justified in using continuous dynamics? (Shouldn’t we automatically use discrete dynamics if we model dynamics in experiments?) What is the appropriate step length? What is the appropriate level of noise? Etc.
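The evolutive approach can be made concrete with a discrete-time replicator dynamic. A minimal sketch, using a stag-hunt-like coordination game with illustrative payoffs of our own choosing (not from the lecture): rest points of the dynamic coincide with equilibria of the underlying game, and the dynamic selects one of the two pure Nash equilibria depending on the initial population share.

```python
# Discrete-time replicator dynamics for a symmetric 2x2 coordination
# game (illustrative payoffs, not from the lecture):
# u(Stag, Stag) = 3, u(Stag, Hare) = 0, u(Hare, Stag) = 2, u(Hare, Hare) = 1.
# The game has two pure Nash equilibria (all-Stag, all-Hare) and a
# mixed equilibrium at x = 1/2.

A = [[3.0, 0.0],   # payoffs to Stag against (Stag, Hare)
     [2.0, 1.0]]   # payoffs to Hare against (Stag, Hare)

def replicator_step(x):
    """One discrete replicator step; x = population share playing Stag."""
    f_stag = A[0][0] * x + A[0][1] * (1 - x)
    f_hare = A[1][0] * x + A[1][1] * (1 - x)
    f_avg = x * f_stag + (1 - x) * f_hare
    return x * f_stag / f_avg

def run(x0, steps=200):
    x = x0
    for _ in range(steps):
        x = replicator_step(x)
    return x

# Starting above the mixed equilibrium share 1/2, the population
# converges to all-Stag; starting below, to all-Hare.
print(round(run(0.6), 4))  # -> 1.0
print(round(run(0.4), 4))  # -> 0.0
```

Note the selection property discussed above: which Nash equilibrium the dynamic picks depends on the initial condition (and, in richer models, on step length and noise).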

Remark 2:

Two steps to analyze a game:

-> Describing/constructing (Remarks 3, 4)
-> Solving (Remark 7)

Remark 3:

How to describe a game (form):

Both the extensive form and the normal (strategic) form specify:

- a list of players (Who is involved?)
- a list of actions available to each player; in the normal form, actions are bundled into strategies (What can they do?)
- payoffs (= valuation of the outcomes)

The extensive form in addition makes the “rules” explicit (= order of moves, available information): Who moves when? What do they know when they move?

Note: MWG define a normal form game as a tuple with 3 elements (see Definition 7.D.2 of ΓN on p. 230); they define an extensive form game as a tuple with 9 elements (see their description of ΓE on p. 227). Note that their description mentions actions only implicitly (by way of outcomes and payoffs). Note also that the key difference between the two game forms lies in the “rules”.

Remark 4:

How to construct a game form (somewhat informally):

Extensive form: a collection of initial, decision, and terminal nodes and information sets (“game tree”: every “path” through the tree represents a unique sequence of moves by the players).
Normal (strategic) form: a collection of cells.

How to construct a game (somewhat informally):

Extensive form: add payoffs to the terminal nodes.
Normal (strategic) form: insert payoffs in the cells.

Note: Strategic form and extensive form are not bijective (several extensive forms can map into the same strategic form); the equivalence of the two game forms is in dispute (e.g., theoretically: Kreps 1990, 1990a; Kohlberg & Mertens, Econometrica 1986; empirically: McCabe, Smith, & LePore, PNAS 2000; Cooper & Van Huyck, JET 2003)

Remark 5:

Possible other classifications of games:

-> 2 players, more than two players
-> 2 actions, more than two actions
-> mistake-free actions, trembling-hand actions
-> symmetric, asymmetric (uneven numbers of actions, different actions, etc.)
-> pure strategies, mixed strategies
-> zero/constant-sum [often not written in bi-matrix format], variable-sum
-> perfect recall [= players don’t forget what they once knew], imperfect recall
-> perfect information, imperfect information (see MWG Def 7.C.1)
-> complete information, incomplete information (see MWG p. 253) (although it may be assumed that players have common knowledge of the distribution of actions available to agents; following Harsanyi, this allows us to re-interpret incomplete information as imperfect information)

-> one-shot, finitely or in(de)finitely repeated

In this course we deal mostly with games with 2 players, two actions for each player, and variable-sum payoffs.

Note: Rapoport and Guyer, in a deservedly famous paper, construct a taxonomy of 2x2 normal form games (one-shot games involving two players with two actions each). R&G show that there exist exactly 78 non-equivalent games.

Remark 6:

Examples of games [those marked with a star discussed in class]:

* B, C, D
A
*Matching pennies
*Paper-Rock-Scissors
Tic-Tac-Toe (Fig 7.C.2)
Hawk-Dove
*-> PDG (symm)
*-> Chicken
*Give-us-a-Break [= Let’s Make a Deal]
*Coordination [= Meeting in NY]
*Stag-hunt-2
Stag-hunt-n
Entry (Kahneman)
*Entry deterrence [Selten]
*Battle-of-the-Sexes
Public Good Provision
Nash demand game
*Dictator
*Ultimatum
*Trust
*Gift exchange game (asymm PDG) [Kreps]
*Take-It-or-Leave-It
*Centipede (Fig 9.B.8)
Alternating offers [Johnson et al]


Remark 7:

Solving (predicting what the likely outcome of the game will be if ... ): [For now we ignore the possibility that players will randomize in their action choices.]

Normal (strategic) form

-> Dominance (strict, weak)
- Weak dominance could cause problems; strict doesn’t.
-> Iterated dominance
- Requires rationality and common knowledge (also of other players’ rationality).
- Leads to rationalizability (Bernheim, Pearce).

-> Nash equilibrium
- Requires rationality and common knowledge and mutually correct expectations.
- Is silent on the selection of “the best” among several equilibria.
- Leads to refinements (selection theories) and evolutionary models.

Extensive form

-> Iterated dominance (the principle of sequential rationality)
- Requires rationality and common knowledge (also of other players’ rationality).
- Leads to backward induction.
- Leads to subgame perfection (Nash equilibria that are credible).

Definition 8.B.1 (strictly dominant strategies):

A strategy si ∈ Si is a strictly dominant strategy for player i in game ΓN = [I, {Si}, {ui(·)}] if for all s’i ≠ si, we have ui(si, s-i) > ui(s’i, s-i) for all s-i ∈ S-i.

Definition 8.B.2 (strictly dominated strategies):

A strategy si ∈ Si is strictly dominated for player i in game ΓN = [I, {Si}, {ui(·)}] if there exists another strategy s’i ∈ Si such that for all s-i ∈ S-i, ui(s’i, s-i) > ui(si, s-i). The strategy s’i is said to strictly dominate strategy si.
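Definitions 8.B.1 and 8.B.2 can be checked mechanically. A minimal sketch, using standard Prisoner’s Dilemma payoffs of our own choosing (not from the lecture):

```python
# Mechanical check of Definitions 8.B.1/8.B.2 in a bi-matrix game,
# using standard Prisoner's Dilemma payoffs of our own choosing
# (0 = Cooperate, 1 = Defect; u1[i][j]: row plays i, column plays j).

u1 = [[3, 0], [5, 1]]   # row player's payoffs
u2 = [[3, 5], [0, 1]]   # column player's payoffs

def strictly_dominates(u, s_prime, s):
    """True if row strategy s_prime strictly dominates s (Def 8.B.2)."""
    return all(u[s_prime][j] > u[s][j] for j in range(len(u[0])))

# Defect strictly dominates Cooperate for the row player ...
print(strictly_dominates(u1, 1, 0))  # -> True
# ... and for the column player (transpose u2 so that rows index the
# column player's own strategies):
u2_t = [list(r) for r in zip(*u2)]
print(strictly_dominates(u2_t, 1, 0))  # -> True
# Since Defect dominates the only other strategy, it is also the
# strictly dominant strategy of Definition 8.B.1 in this game.
```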

Definition 8.B.3a (weakly dominant strategies):

A strategy si ∈ Si is a weakly dominant strategy for player i in game ΓN = [I, {Si}, {ui(·)}] if for all s’i ≠ si, we have ui(si, s-i) ≥ ui(s’i, s-i) for all s-i ∈ S-i.

Definition 8.B.3b (weakly dominated strategies):

A strategy si ∈ Si is weakly dominated for player i in game ΓN = [I, {Si}, {ui(·)}] if there exists another strategy s’i ∈ Si such that for all s-i ∈ S-i, ui(s’i, s-i) ≥ ui(si, s-i), with strict inequality for some s-i. The strategy s’i is said to weakly dominate strategy si.

Note: Iteratively eliminating strictly dominated strategies has the nice property that the order of deletion does not affect the set of strategies that remain; that cannot be guaranteed when iteratively eliminating weakly dominated strategies.
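The order-dependence of weak-dominance deletion can be seen in a small sketch. The 2x3 game below uses illustrative payoffs of our own choosing (not from the lecture); deleting the column player’s weakly dominated strategies in different orders leaves different survivor sets.

```python
# Order-dependence of iteratively deleting weakly dominated
# strategies, in a 2x3 game with illustrative payoffs (both players
# happen to get the same numbers):
#
#            L        M        R
#      T   (1,1)    (1,1)    (0,0)
#      B   (0,0)    (1,1)    (1,1)

u1 = {('T', 'L'): 1, ('T', 'M'): 1, ('T', 'R'): 0,
      ('B', 'L'): 0, ('B', 'M'): 1, ('B', 'R'): 1}
u2 = dict(u1)  # column player's payoffs coincide in this example

def weakly_dominated(payoff, own, opp, s):
    """True if s is weakly dominated by some other strategy in `own`:
    never worse against any opponent strategy, strictly better against
    at least one.  payoff(a, b): own action a, opponent action b."""
    return any(all(payoff(s2, b) >= payoff(s, b) for b in opp) and
               any(payoff(s2, b) > payoff(s, b) for b in opp)
               for s2 in own if s2 != s)

row = lambda r, c: u1[(r, c)]
col = lambda c, r: u2[(r, c)]

# M weakly dominates both L and R for the column player:
print(weakly_dominated(col, ['L', 'M', 'R'], ['T', 'B'], 'L'))  # -> True
print(weakly_dominated(col, ['L', 'M', 'R'], ['T', 'B'], 'R'))  # -> True

# Order 1: delete L first; then B weakly dominates T ...
print(weakly_dominated(row, ['T', 'B'], ['M', 'R'], 'T'))  # -> True
# Order 2: delete R first; then T weakly dominates B instead.
print(weakly_dominated(row, ['T', 'B'], ['L', 'M'], 'B'))  # -> True
# Survivors: {B} x {M, R} under order 1, but {T} x {L, M} under order 2.
```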

[Application of these concepts to some of the sample games.]

Definition 8.D.1 (Nash equilibrium):

A strategy profile s = (s1, ..., sI) constitutes a Nash equilibrium of game ΓN = [I, {Si}, {ui(·)}] if for every i = 1, ..., I, ui(si, s-i) ≥ ui(s’i, s-i) for all s’i ∈ Si.

Note: In a Nash equilibrium each player’s strategy choice is a best response to the strategies actually played by the other players. We’ll have more to say about this at a later point.

[Application of these concepts to some of the sample games.]
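For finite games, Definition 8.D.1 can be checked by brute force: a profile is a Nash equilibrium iff each player’s strategy is a best response to the others’. A minimal sketch, using Battle-of-the-Sexes and Matching Pennies payoffs of our own choosing:

```python
# Brute-force search for pure strategy Nash equilibria (Def 8.D.1)
# in a bi-matrix game; payoffs below are illustrative choices.

from itertools import product

def pure_nash(u1, u2):
    """Return all (i, j) such that i is a best response to j and
    j is a best response to i."""
    n, m = len(u1), len(u1[0])
    return [(i, j)
            for i, j in product(range(n), range(m))
            if u1[i][j] >= max(u1[k][j] for k in range(n))
            and u2[i][j] >= max(u2[i][l] for l in range(m))]

# Battle of the Sexes (0 = Opera, 1 = Football): two pure equilibria.
u1 = [[2, 0], [0, 1]]
u2 = [[1, 0], [0, 2]]
print(pure_nash(u1, u2))  # -> [(0, 0), (1, 1)]

# Matching Pennies: no pure strategy equilibrium.
print(pure_nash([[1, -1], [-1, 1]], [[-1, 1], [1, -1]]))  # -> []
```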

Definition (subgame):

A subgame of an extensive form game ΓE is a subset of the game having the following properties:

(i) It begins with an information set containing a single decision node, contains all the decision nodes that are successors (both immediate and later) of this node, and contains only these nodes.
(ii) If decision node x is in the subgame, then every x’ ∈ H(x) is also in the subgame, where H(x) is the information set that contains decision node x.

Note 1: The game as a whole is a subgame, as may be some strict subsets of the game.

Note 2: In a finite game of perfect information, every decision node initiates a subgame.

Definition 9.B.1 (subgame perfect Nash equilibrium):

A strategy profile s = (s1, ..., sI) in an I-player extensive form game ΓE is a subgame perfect Nash equilibrium (SPNE) if it induces a Nash equilibrium in every subgame of ΓE.
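In finite games of perfect information, an SPNE can be computed by backward induction. A minimal sketch for an entry-deterrence game with illustrative payoffs of our own choosing (entrant stays Out: (0, 2); enters and incumbent Fights: (-1, -1); enters and incumbent Accommodates: (1, 1)):

```python
# Backward induction on a finite perfect-information game tree.
# Player 0 = entrant, player 1 = incumbent; payoffs are illustrative.

def backward_induction(node):
    """node is ('payoff', (u0, u1)) or ('decide', player, label, {action: subtree}).
    Returns (payoffs, plan), where plan maps each decision node's label
    to the action chosen there.  For simplicity the plan keeps only the
    chosen branch's sub-plan; in this small game (whose only unchosen
    branch is terminal) that is already the full SPNE."""
    if node[0] == 'payoff':
        return node[1], {}
    _, player, label, actions = node
    best_action, best_pay, chosen_subplan = None, None, {}
    for a, subtree in actions.items():
        pay, subplan = backward_induction(subtree)
        if best_pay is None or pay[player] > best_pay[player]:
            best_action, best_pay, chosen_subplan = a, pay, subplan
    plan = dict(chosen_subplan)
    plan[label] = best_action
    return best_pay, plan

game = ('decide', 0, 'entrant',
        {'Out': ('payoff', (0, 2)),
         'In': ('decide', 1, 'incumbent',
                {'Fight': ('payoff', (-1, -1)),
                 'Accommodate': ('payoff', (1, 1))})})

payoffs, plan = backward_induction(game)
print(payoffs, plan)  # -> (1, 1) {'incumbent': 'Accommodate', 'entrant': 'In'}
```

The incumbent’s threat to Fight is not credible (it yields -1 instead of 1 once entry has occurred), so the entrant enters: exactly the subgame-perfection logic of the definition above.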

[Application of these concepts to some of the sample games, namely TOL, Centipede, Entry deterrence, examples in Cooper & Van Huyck JET 2003.]

Definition 8.D.1 (Nash equilibrium, pure strategies):

A strategy profile s = (s1, ..., sI) constitutes a Nash equilibrium of game ΓN = [I, {Si}, {ui(·)}] if for every i = 1, ..., I, ui(si, s-i) ≥ ui(s’i, s-i) for all s’i ∈ Si.

Note: In a Nash equilibrium each player’s strategy choice is a best response to the strategies actually played by the other players. [We’ll get back to the notion of best response presently when discussing rationalizable strategies.]

[How to determine a pure strategy Nash equilibrium: Illustrated in class with several examples]

Definition 7.E.1 (mixed strategy):

Given player i’s (finite) pure strategy set Si, a mixed strategy for player i, σi: Si -> [0, 1], assigns to each pure strategy si ∈ Si a probability σi(si) ≥ 0 that it will be played, where Σsi∈Si σi(si) = 1.

Note: A mixed strategy represents the convexification, or mixed extension, of Si. In fact, this mixed extension spans a simplex whose vertices are the pure strategies that support the mixed strategy. Note that this implies that pure strategies can be thought of as degenerate mixed strategies. [For more formal details see MWG p. 232.]

Definition 8.D.2 (Nash equilibrium, mixed strategies):

A mixed strategy profile σ = (σ1, ..., σI) constitutes a Nash equilibrium of game ΓN = [I, {ΔSi}, {ui(·)}] if for every i = 1, ..., I, ui(σi, σ-i) ≥ ui(σ’i, σ-i) for all σ’i ∈ ΔSi.

[How to compute a mixed strategy Nash equilibrium: Bishop & Cannings (1978). Illustrated in class with Matching Pennies and 2.]
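For 2x2 games the computation reduces to an indifference condition: each player mixes so that the other player is indifferent between her two pure strategies. A minimal sketch of that standard calculation (our own, not Bishop & Cannings’ method), assuming an interior equilibrium exists:

```python
# Interior mixed equilibrium of a 2x2 game via indifference:
# the row player's mixing probability p is chosen so that the COLUMN
# player is indifferent between her two columns, and vice versa.

def mixed_2x2(u1, u2):
    """Return (p, q): p = prob. row plays strategy 0, q = prob. column
    plays strategy 0, assuming an interior equilibrium exists."""
    # p solves: p*u2[0][0] + (1-p)*u2[1][0] = p*u2[0][1] + (1-p)*u2[1][1]
    p = (u2[1][1] - u2[1][0]) / (u2[0][0] - u2[1][0] - u2[0][1] + u2[1][1])
    # q solves: q*u1[0][0] + (1-q)*u1[0][1] = q*u1[1][0] + (1-q)*u1[1][1]
    q = (u1[1][1] - u1[0][1]) / (u1[0][0] - u1[0][1] - u1[1][0] + u1[1][1])
    return p, q

# Matching Pennies: both players randomize 50:50.
print(mixed_2x2([[1, -1], [-1, 1]], [[-1, 1], [1, -1]]))  # -> (0.5, 0.5)
```

Applying the same function to Battle-of-the-Sexes payoffs (2,1)/(0,0)/(0,0)/(1,2) gives (2/3, 1/3), which is the kind of computation the problem set asks for.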

Note 1: Mixed strategies can be dominant strategies. Analyze the following decision situation and determine a payoff-maximizing strategy:

        P        R        S        D
P    15,15    75,0     0,75     5,25
R    0,75     15,15    75,0     5,25
S    75,0     0,75     15,15    5,25
D    25,5     25,5     25,5     0,0

Your strategy? ...... [Note: You may choose mixtures of pure strategies.]

Let M in PRSD denote the mixture {1/3, 1/3, 1/3, 0}. Verify that the resulting compound lottery is strategically equivalent to the outcome MM in the following game MD:

        M        D
M    30,30     5,25
D    25,5      0,0

Note that D is strictly dominated by M and therefore there are no beliefs that can “rationalize” playing D when the numbers in the bi-matrix denote utility.
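The reduction from PRSD to MD is easy to verify numerically. A sketch (assuming the PRSD payoff matrix above; only the row player’s payoffs are needed, since the game is symmetric):

```python
# Numerical check of Note 1: in PRSD, the mixture M = (1/3, 1/3, 1/3, 0)
# reproduces the reduced MD bi-matrix, in which D is strictly dominated.

# Row player's payoffs in PRSD (rows and columns ordered P, R, S, D):
U = [[15, 75,  0, 5],
     [ 0, 15, 75, 5],
     [75,  0, 15, 5],
     [25, 25, 25, 0]]

def expected(mix_row, mix_col):
    """Row player's expected payoff when both players mix."""
    return sum(mix_row[i] * mix_col[j] * U[i][j]
               for i in range(4) for j in range(4))

M = [1/3, 1/3, 1/3, 0]
D = [0, 0, 0, 1]

# Reduced MD payoffs for the row player: M vs M = 30, M vs D = 5,
# D vs M = 25, D vs D = 0 -- so M strictly dominates D (30 > 25, 5 > 0).
print([round(expected(a, b)) for a in (M, D) for b in (M, D)])
# -> [30, 5, 25, 0]
```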

Note 2: Computing mixed strategies is also necessary for computing reaction curves (reaction correspondences).

[Illustrated in class for the game of Matching Pennies. The problem set will require you to compute the mixed strategy equilibria for the games of Chicken and Battle of the Sexes.]

Existence propositions and classification results. And rationalizations.

Proposition 8.D.2 (existence):

Every game ΓN = [I, {ΔSi}, {ui(·)}] in which the sets S1, ..., SI have a finite number of elements has a mixed strategy Nash equilibrium.

Note 1: This result can be generalized to games in which strategies can be modeled as continuous variables. See Proposition 8.D.3 in the textbook for details. For proofs of both propositions, see MWG pp. 260-1.

Note 2: For 2x2 normal form games it can be shown (e.g., Eichberger, Haller, & Milne, Journal of Economic Behavior and Organization 1993) that there are essentially three large classes of games according to their equilibrium configuration:
- the first contains those games with no pure and one mixed Nash equilibrium
- the second contains those games with two pure and one mixed Nash equilibrium
- the third contains those games with exactly one pure and no mixed Nash equilibrium

There is a residual class of games that may have either two pure Nash equilibria (one of which contains dominated strategies) or a continuum of mixed strategy equilibria containing at least one pure strategy equilibrium.

This residual class is “non-generic in the space of payoff parameters”, i.e., such games are very unlikely to be encountered.
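The pure-equilibrium counts behind the three generic classes can be checked mechanically on standard representatives (payoffs below are our own illustrative choices): Matching Pennies for the first class, Battle of the Sexes for the second, the Prisoner’s Dilemma for the third.

```python
# Pure strategy equilibrium counts for representatives of the three
# generic classes of 2x2 games (illustrative payoff choices).

def pure_equilibria(u1, u2):
    """All cells (i, j) where each strategy is a best response."""
    return [(i, j)
            for i in range(2) for j in range(2)
            if u1[i][j] == max(u1[k][j] for k in range(2))
            and u2[i][j] == max(u2[i][l] for l in range(2))]

games = {
    'matching_pennies':  ([[1, -1], [-1, 1]], [[-1, 1], [1, -1]]),
    'battle_of_sexes':   ([[2, 0], [0, 1]],   [[1, 0], [0, 2]]),
    'prisoners_dilemma': ([[3, 0], [5, 1]],   [[3, 5], [0, 1]]),
}
for name, (u1, u2) in games.items():
    print(name, len(pure_equilibria(u1, u2)))
# -> matching_pennies 0
#    battle_of_sexes 2
#    prisoners_dilemma 1
```

The mixed counts complete the picture: Matching Pennies has one mixed equilibrium, Battle of the Sexes adds one mixed equilibrium to its two pure ones, and the Prisoner’s Dilemma has none.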

Proposition 9.B.2 (existence):

Every finite game of perfect information ΓE has a pure strategy subgame perfect Nash equilibrium (computable through backward induction). Moreover, if no player has the same payoffs at any two terminal nodes, then there is a unique subgame perfect Nash equilibrium.

Note: MWG discuss five “rationalizations” of Nash equilibrium. Essentially, there are two competing explanations: first (MWG iv), Nash equilibrium as the self-enforcing outcome of an unspecified negotiation process; second (MWG v), Nash equilibrium as the stable outcome of an unspecified learning process. [The second rationalization is of particular interest as it has motivated the evolutive approach.]