Section Note 3
Ariella Kahn-Lang and Guthrie Gray-Lobe∗
February 13th, 2013

∗ Thanks to previous years' TFs for making their section materials available for adaptation.

Agenda

1. Game Trees
2. Writing out Strategies and the Strategy Space
3. Backward Induction

1 Game Trees

Let's review the basic elements of game trees (extensive form games). Remember, we can write any game in either its normal (matrix) form or its extensive (tree) form - which representation we use is mainly a question of which solution concept we want to implement. Game trees are particularly useful for representing dynamic (sequential) games, primarily because they easily allow us to implement the simple solution concept of backwards induction (see later).

1.1 Dynamic Games vs. Simultaneous Games

Many important economic applications of game theory cannot be described by a simultaneous move game. Recall that a simultaneous game is one where all players choose their actions without any information about the actions chosen by other players. A dynamic game, in contrast, is one where at least one player at some point in the game chooses his action with some information about actions chosen previously. While dynamic games are often sequential over time, what matters is not the timing but the information a player holds about previous moves in the game, because this is what enables them to condition their strategy on previous moves. For this reason, the 'order' in which we draw a game tree matters only in getting the flow of information correct. In simultaneous games, we can draw the game tree either way round and it represents the same game.

1.2 Strategies in Dynamic Games

In a simultaneous game, all players choose their actions without any information about the actions chosen by other players. Therefore, their choices cannot be contingent on what the other players are doing. In dynamic games, players may have information about others' past actions. In that case, they can make their choice contingent on what they know about those past actions.

A strategy is a complete contingent plan that specifies how the player will act in every possible distinguishable circumstance in which she might be called upon to play. The following description from Fudenberg and Tirole can be very useful:

• "A pure strategy is a book of instructions, where each page tells how to play at a particular information set.
• The strategy space Sᵢ for player i is like a library of these books.
• A mixed strategy is a probability measure over books - i.e. a random way of making a selection from the library."

1.3 Information Sets

An information set allows us to demarcate within the game the distinguishable circumstances a player can find themselves in. An information set is a set of nodes (we draw an oval around them) such that:

1. The same player moves at each of those nodes.
2. The player has the same information and the same available actions at those nodes.

An information set indicates how much a player knows about past moves when it is his turn to move. All the nodes in an information set are indistinguishable for the player: he knows that he is in that information set, but he cannot tell in which particular node. Players can distinguish where they are between separate information sets. If the information set is a singleton node, a player who is called on to make a move at this point must know exactly where they are in the game, and therefore all the moves made in the game up to that point (the game's history).

Note: we will always assume perfect recall - players do not forget what they once knew (their/others' actions).

1.3.1 Example: Converting from the Normal Form to the Extensive Form

2 Writing out Strategies and the Strategy Space

Since a player can only specify distinct actions where they can condition on the other players' actions, a strategy for a player has to tell us the action they would take at each and every information set (each set of distinguishable circumstances). This gives us our complete contingent plan.

To find the strategy space for a player:

1. To identify the number of elements in a player's strategy: count the number of information sets of that player.
2. To get a template for a strategy: just pick one action (any) for each information set.
3. To describe the strategy space: list all possible combinations of actions across the information sets.
4. To calculate the number of possible strategies: multiply the number of actions across all information sets. E.g. if there are three actions at each of two information sets, the number of possible strategies is 3² = 9. (A short sketch of this counting appears below.)
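To make step 4 concrete, here is a minimal Python sketch; the two information sets and their action labels are hypothetical, chosen to match the 3² = 9 count above. A pure strategy is one element of the Cartesian product of the action sets:

```python
from itertools import product

# Hypothetical example: a player with two information sets and
# three available actions at each (the 3 * 3 = 9 count above).
actions_at_info_sets = [
    ["L", "M", "R"],   # actions at information set 1
    ["l", "m", "r"],   # actions at information set 2
]

# A pure strategy picks one action per information set, so the
# strategy space is the Cartesian product of the action sets.
strategy_space = list(product(*actions_at_info_sets))

print(len(strategy_space))   # 9 strategies: 3 * 3
print(strategy_space[0])     # ('L', 'l'): play L at set 1, l at set 2
```

In the Fudenberg-Tirole metaphor, each tuple in strategy_space is one book of instructions, and the list itself is the player's library.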
2.1 Example: Converting from the Extensive Form to the Normal Form

2.2 The 'Path' of Play

A particular path through a game tree traces actions from the starting node to a terminal node, implying a particular payoff profile. Note that each strategy profile implies a specific path of play. Indeed, multiple strategy profiles might imply the same path of play. Crucially, a strategy profile typically specifies more actions than those on the path of play - it also specifies what would happen 'off' the path of play, if different actions had taken us down different branches in earlier moves of the game. This completeness, which might seem like a redundancy at first, is essential to analyzing dynamic games.

When the path of play corresponds to a Nash equilibrium set of strategies, we often describe the actions along the path of play as being 'on the equilibrium path'. Actions specified within a strategy that are not on the path of play are described as 'off the equilibrium path'.

3 Backward Induction

What solution concept is appropriate to dynamic games? The basic idea of best responding is still of great value, so we can still use Nash equilibrium. Once we have identified the strategy space for each player and converted to the normal form, our typical technique of underlining best responses remains valid and will identify all Nash equilibria. However, an alternative technique - backward induction - can be applied to the game tree (extensive) form. This has two potential benefits: first, it's intuitive and doesn't require us to convert to the normal form. Second, it is actually more powerful in refining and narrowing the set of Nash equilibria, in a way that will be made clear next week. The main constraint on backward induction is that we can only use it in finite games of perfect information.

3.1 Perfect Information

The following are interchangeable descriptions of a game of perfect information:

• Each player knows exactly where they are in the game tree at all times.
• All players fully observe all the actions of the players that precede them.
• Every information set for every player is a singleton decision node.

3.2 Applying Backward Induction

1. Start from the last set of decision nodes.
2. For each decision node, identify the action giving the highest payoff to the player moving there, and replace the decision node with a terminal node labeled with that winning payoff profile.
3. Continue working backwards through the game tree until all branches have collapsed to the starting node.
4. The remaining payoff is the payoff of a Nash equilibrium of the game.
5. The optimal choices made at all decision nodes - including those 'off the equilibrium path' - constitute the strategies in this Nash equilibrium. (A sketch mechanizing these steps appears below.)
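These five steps are mechanical, which makes them easy to express in code. The sketch below assumes a hypothetical entry game (Player 0 chooses Out or In; after In, Player 1 chooses Fight or Accommodate) encoded as nested dictionaries, with terminal nodes as payoff tuples; none of these names come from the notes:

```python
# A terminal node is a payoff tuple; a decision node names the mover
# and maps each available action to a subtree.
tree = {"name": "entrant", "player": 0, "actions": {
    "Out": (0, 2),
    "In": {"name": "incumbent", "player": 1, "actions": {
        "Fight": (-1, -1),
        "Accommodate": (1, 1),
    }},
}}

def backward_induction(node):
    """Return (equilibrium payoffs, optimal action at every decision node)."""
    if isinstance(node, tuple):          # terminal node: nothing left to choose
        return node, {}
    plan, best = {}, None
    for action, subtree in node["actions"].items():
        payoffs, subplan = backward_induction(subtree)  # collapse from the bottom up
        plan.update(subplan)
        # Keep the branch with the highest payoff for the mover (step 2).
        if best is None or payoffs[node["player"]] > best[1][node["player"]]:
            best = (action, payoffs)
    plan[node["name"]] = best[0]
    return best[1], plan

payoffs, plan = backward_induction(tree)
print(payoffs)  # (1, 1)
print(plan)     # {'incumbent': 'Accommodate', 'entrant': 'In'}
```

Note the strict inequality in the comparison: with ties in payoffs it silently keeps the action encountered first, which is exactly the case where Zermelo's theorem (next) loses uniqueness.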
3.3 Zermelo's Theorem

Every finite sequential game of perfect information has a unique backward induction solution if there are no ties in payoffs (in other words, ∀i ∈ I, player i cannot get the same payoff from two different actions). If there are ties in payoffs, a backward induction solution exists, but it need not be unique.

In win-lose-draw type zero-sum games of perfect information, there is a unique backward induction payoff, but not a unique equilibrium (different strategies lead to the same payoffs).

• Tic-tac-toe: draw
• Connect Four: first-mover wins (solved in 1988)
• Checkers: draw (solved in 2007)
• Chess: almost certainly a draw or a first-mover win (this is a very difficult backward induction problem to solve)

3.4 Example: Divide-the-Dollar/Ultimatum Game

Assume that the minimum unit in the divide-the-dollar game is 1 cent. Player 1 makes an offer to Player 2, who can accept or reject. If Player 2 accepts, the dollar is divided as in the offer. If Player 2 rejects, neither player receives anything. The strategy space for Player 1 is S₁ = {0, 1, ..., 100}, where s ∈ S₁ is the amount offered to Player 2 (the offeree). Player 2 can either "accept" or "reject" the offer.

1. To test your understanding of strategies, do you see that Player 2's strategy is a 101-dimensional vector (one response to each possible offer)? Further, do you see that Player 2 (the offeree) therefore has 2¹⁰¹ pure strategies?
2. Backward inducting, what is Player 2's best response if offered x > 0 by Player 1?
3. What is Player 1's best offer given how Player 2 will respond to any offer x?
4. Do you see that offering the minimum to Player 2 (x∗ = 1) and "accept" by Player 2 is the backward-induction Nash equilibrium? This is what will happen on the equilibrium path. What must be the offeree's actions off the equilibrium path in this unique equilibrium? Use backward induction to get this result.
5. Interestingly, do you see that any non-zero distribution of the dollar can be sustained as a Nash equilibrium, even though it does not survive backwards induction? Check the following strategies (a sketch verifying both claims follows the list):
(a) Player 2's strategy: "accept" if x ≥ x∗ and "reject" otherwise.
(b) Player 1's strategy: offer x∗.
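To check your answers to questions 2-5, here is a minimal Python sketch; the split x_star = 40 is an arbitrary hypothetical choice, and the tie at a zero offer is broken toward rejection, following the text:

```python
OFFERS = range(101)  # Player 1 offers x cents to Player 2, x in {0, 1, ..., 100}

def payoffs(x, accept):
    # Player 1 keeps 100 - x if the offer x is accepted; rejection gives (0, 0).
    return (100 - x, x) if accept else (0, 0)

# Questions 2-4: backward induction. Player 2 accepts any x > 0; at x = 0
# she is indifferent, and we take her to reject there.
accepts = {x: x > 0 for x in OFFERS}
best_offer = max(OFFERS, key=lambda x: payoffs(x, accepts[x])[0])
print(best_offer)  # 1 -- Player 1 offers the minimum that Player 2 accepts

# Question 5: the threshold strategies (a) and (b) sustain the split x_star
# as a Nash equilibrium, even though it fails backward induction.
x_star = 40  # hypothetical non-zero split

def respond(x):
    return x >= x_star  # strategy (a): accept iff x >= x_star

# Player 1 cannot gain by offering anything other than x_star ...
assert max(OFFERS, key=lambda x: payoffs(x, respond(x))[0]) == x_star
# ... and Player 2 cannot gain by changing her response to the offer made.
assert payoffs(x_star, True)[1] >= payoffs(x_star, False)[1]
```

The assertions pass for any x_star from 1 to 99: the threat to reject offers below x_star sustains the split, even though carrying out that threat would give Player 2 zero - which is exactly why these equilibria do not survive backward induction.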