An Efficient Optimal-Equilibrium Algorithm for Two-Player Game Trees

Michael L. Littman, Dept. of Computer Science, Rutgers University, Piscataway, NJ 08854
Nishkam Ravi, Dept. of Computer Science, Rutgers University, Piscataway, NJ 08854
Arjun Talwar, Dept. of Mathematics, Stanford University, Stanford, CA 94309
Martin Zinkevich, Dept. of Computing Science, University of Alberta, Edmonton, Alberta, Canada T6G 2E8

Abstract

Two-player complete-information game trees are perhaps the simplest possible setting for studying general-sum games and the computational problem of finding equilibria. These games admit a simple bottom-up algorithm for finding subgame perfect Nash equilibria efficiently. However, such an algorithm can fail to identify optimal equilibria, such as those that maximize social welfare. The reason is that, counterintuitively, probabilistic action choices are sometimes needed to achieve maximum payoffs. We provide a novel polynomial-time algorithm for this problem that explicitly reasons about stochastic decisions and demonstrate its use in an example card game.

1 Introduction

Game-tree search for zero-sum games has been a staple of AI research since its earliest days. Recently, research on general-sum games has intensified as a way of reasoning about more complex agent interactions (Kearns et al. 2001). In this paper, we treat the problem of finding optimal Nash equilibria in general-sum two-player game trees.

1.1 Problem Definition

A game-tree problem, as studied in this paper, is specified by a tree with n nodes. A leaf node has no children, but it does have a payoff vector R(i) ∈ ℝ². Each non-leaf node i has an associated player T(i) ∈ {1, 2} who controls play in that node. Except for the root node, each node has a parent node P(i) ∈ {1, ..., n}. Play begins at the root of the tree. When playing node i, Player T(i) chooses an action. We define the set of actions available from node i as A(i) = {j | P(j) = i}; Player T(i) selects from among the child nodes of i. When a leaf node i is reached, the players receive their payoffs in the form of a payoff vector R(i). Specifically, Player x receives the xth component of the payoff vector, R(i)_x.

In the small example in Figure 1, node numbers are written above the nodes and leaves are marked with rectangles. Non-leaf nodes contain the player number for the player who controls that node and leaf nodes contain their payoff vectors. Here, play begins with a decision by Player 2 at node 1. Player 2 can choose between the two children of the root, node 2 and node 3. If Player 2 selects node 2 as his action, Player 1 gets to make a choice between node 4 and node 5. If Player 1 chooses the left child, node 4, a leaf is reached. From node 4, Player 1 receives a payoff of 2 and Player 2 receives a payoff of 3, and the game ends.

[Figure 1: A small game tree that demonstrates the challenge of finding optimal equilibria. Node 1 (controlled by Player 2) has children node 2 (controlled by Player 1) and leaf node 3 with payoff [1000, 4]; node 2 has children leaf node 4 with payoff [2, 3] and leaf node 5 with payoff [2, 100].]

1.2 Equilibria

A joint strategy for a game tree is a selection, at each node of the game tree, of a probability distribution over actions. We can write a joint strategy as a function π that maps each node of the tree to a probability distribution over its children. Returning to Figure 1, one joint strategy is π(1)_2 = 1/2, π(1)_3 = 1/2 (choose node 2 from node 1 with probability 1/2 and node 3 with probability 1/2), and π(2)_5 = 1 (always choose node 5 from node 2).
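To make the setup concrete, the following sketch (my own, not part of the paper) encodes the game tree of Figure 1 and the joint strategy above in Python; the dictionary-based representation and the names T, R, children, and policy are illustrative assumptions rather than a data format used in the paper.

    # Illustrative encoding of the Figure 1 game tree (assumed representation).
    # T[i]: player controlling non-leaf node i; R[i]: payoff vector at leaf i,
    # whose x-th component is Player x's payoff; children[i]: the action set A(i).
    T = {1: 2, 2: 1}
    R = {3: (1000, 4), 4: (2, 3), 5: (2, 100)}
    children = {1: [2, 3], 2: [4, 5], 3: [], 4: [], 5: []}

    # The example joint strategy: pi(1)_2 = pi(1)_3 = 1/2 and pi(2)_5 = 1.
    policy = {1: {2: 0.5, 3: 0.5}, 2: {5: 1.0}}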
The value of a strategy π is the expected payoff vector if strategy π is followed. It can be defined recursively as follows. In the context of a strategy π, define V^π(i) to be the value for the game defined by the subtree rooted at node i. For a leaf node, V^π(i) = R(i). Otherwise,

    V^\pi(i) = \sum_{j \in A(i)} \pi(i)_j \, V^\pi(j).    (1)

The sum, here, is taken over each component of the payoff vector separately. Equation 1 just tells us that once we calculate the values for each child node, we can find the values of each parent, and consequently the root, by working our way up the tree.

The example strategy above has a value of [501, 52] because, starting from node 2, Player 1 chooses node 5, resulting in payoffs of [2, 100]. Starting from node 1, Player 2 randomizes over node 2 and node 3, resulting in the expected payoffs given.

A joint strategy π is a subgame perfect Nash equilibrium (Osborne and Rubinstein 1994), or equilibrium for short, if there is no i such that T(i) can increase her value V^π(i)_{T(i)} at i by changing the strategy π(i) at i. Specifically, let π be a strategy for a game, and i be a node of the game tree. We say that strategy π is locally optimal at node i if

    V^\pi(i)_{T(i)} \ge \max_{j \in A(i)} V^\pi(j)_{T(i)}.    (2)

Equation 2 simply states that the player controlling node i (T(i)) can't improve her payoff by selecting some other action choice at node i. Strategy π is an equilibrium if it is locally optimal at every node.

A subgame at node i of a game tree is the game obtained by retaining only the subtree rooted at node i. An equilibrium strategy for a game is also an equilibrium strategy for all subgames of the game.

Note that the example strategy given above is not an equilibrium because Player 2 can increase his payoff to 100 by choosing node 2 deterministically from node 1.

The joint strategy π_A(1)_2 = 1, π_A(2)_5 = 1, is an equilibrium, however, as Player 1 cannot improve her value of 2 by changing her decision at node 2 and Player 2 cannot improve his value of 100 by changing his decision at node 1.

The definition of an equilibrium can be turned directly into an algorithm for computing an equilibrium. Define V(i) = R(i) for every leaf node i. For non-leaf nodes, V(i) = V(j*), where i = P(j*) and j* ∈ A(i) is chosen such that V(j*)_{T(i)} ≥ max_{j ∈ A(i)} V(j)_{T(i)}. In words, V(i) is the value vector of the child of node i that has the largest payoff for the player controlling node i (T(i)). Such a player is happy with its choice at that node since no other choice will improve its expected payoff.

Given the V function, the corresponding equilibrium strategy is π(i)_j = 1, where j ∈ A(i) and V(j) = V(i); at each node, the strategy chooses the child whose value vector matches that of the node. The anyNash algorithm returns this strategy, π.

By construction, the strategy produced by anyNash satisfies the conditions for being an equilibrium expressed in Equation 2. Specifically, at each node i the strategy chooses a child node that results in the best possible (local) outcome for the player who controls node i.
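The value recursion of Equation 1 and the anyNash construction just described can be summarized in a short backward-induction sketch. This is my own illustration rather than the paper's pseudocode, and it assumes the illustrative T, R, children, and policy encoding from the earlier sketch.

    # Sketch of Equation 1 and of the anyNash backward induction described above,
    # assuming the T, R, children, policy dictionaries from the previous sketch.

    def expected_value(i, policy, R, children):
        """V^pi(i): expected payoff vector of the subtree rooted at i under pi."""
        if not children[i]:                      # leaf node: V^pi(i) = R(i)
            return R[i]
        v = [0.0, 0.0]
        for j, prob in policy[i].items():        # weighted sum over children (Eq. 1)
            vj = expected_value(j, policy, R, children)
            v = [v[0] + prob * vj[0], v[1] + prob * vj[1]]
        return tuple(v)

    def any_nash(i, T, R, children):
        """Return (V(i), strategy) for the subtree rooted at i by backward induction."""
        if not children[i]:
            return R[i], {}
        strategy = {}
        best_child, best_value = None, None
        for j in children[i]:
            vj, sub = any_nash(j, T, R, children)
            strategy.update(sub)
            # keep the child that is best for the player controlling node i
            if best_value is None or vj[T[i] - 1] > best_value[T[i] - 1]:
                best_child, best_value = j, vj
        strategy[i] = {best_child: 1.0}          # deterministic choice; ties broken arbitrarily
        return best_value, strategy

    # For Figure 1: expected_value(1, policy, R, children) gives (501.0, 52.0), the
    # value [501, 52] computed above. any_nash(1, T, R, children) returns the value
    # (1000, 4) and the strategy of equilibrium B (defined in the next section),
    # because the tie at node 2 (both children give Player 1 a payoff of 2) happens
    # to be broken in favor of node 4; a different arbitrary tie-break would yield
    # equilibrium A instead.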
1.3 Optimal Equilibria

Some games, like the example game in Figure 1, have multiple equilibria. The situation arises initially when a player has a choice of actions that results in identical payoffs for that player. In fact, if anyNash does not encounter any ties as it proceeds up the game tree, it finds the unique equilibrium.

When there is a choice of equilibria, anyNash chooses one arbitrarily. However, some equilibria are more desirable than others. We use the term optimal equilibrium to refer to an equilibrium that is the most desirable among the possible equilibria.

There are many possible kinds of desirable equilibria and we focus on four representative types here. If V_x is the value to Player x for playing the equilibrium, we can seek the equilibrium that is the:

1. social optimum: maximize the sum of values over all players, Σ_x V_x.
2. fairest: maximize the minimum payoff, min_x V_x.
3. single maximum: maximize the maximum payoff over all players, max_x V_x.
4. best for y: maximize the payoff to some specific Player y, V_y.

Greenwald and Hall (2003) refer to these concepts as utilitarian, egalitarian, republican, and libertarian, respectively. They are also explicitly treated by Conitzer and Sandholm (2003) in the bimatrix setting.

The previous section described an equilibrium π_A for the example game with a value of [2, 100]. A second equilibrium is π_B(1)_3 = 1 and π_B(2)_4 = 1. The value of this equilibrium is [1000, 4]. Equilibrium B is the social optimum (1004 > 102), the fairest (4 > 2), the single maximum (1000 > 100), and the best for Player 1 (1000 > 2). Equilibrium A, however, is the best for Player 2 (100 > 4).

Note that this game has an infinitely large set of equilibria. In particular, consider a strategy π such that π(1)_2 = 1, π(2)_4 = α, and π(2)_5 = 1 − α (for 0 ≤ α ≤ 1). Since T(2) = 1 and Player 1 is indifferent to the outcomes at the children of node 2, the strategy restricted to the subgame rooted at node 2 is an equilibrium of that subgame, regardless of the value of α.

[...] nodes bridge the gap to richer models such as stochastic games (Shapley 1953, Condon 1992). A general game tree can also include information sets. An information set is a set of nodes controlled by the same player that the player cannot distinguish between [...]
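As a closing illustration of the four criteria, here is a small sketch that scores the two equilibria A and B compared in Section 1.3 by their value vectors. It is my own example, not part of the paper's algorithm: the candidates dictionary and the helper pick_equilibrium are illustrative names, and summarizing each equilibrium by its value vector alone is an assumption of the sketch.

    # Sketch: selecting among candidate equilibria by the four criteria above,
    # given only their value vectors (component x is Player x's value).
    candidates = {"A": (2, 100), "B": (1000, 4)}     # the two equilibria discussed above

    criteria = {
        "social optimum": lambda v: sum(v),          # utilitarian
        "fairest": lambda v: min(v),                 # egalitarian
        "single maximum": lambda v: max(v),          # republican
        "best for Player 1": lambda v: v[0],         # libertarian, y = 1
        "best for Player 2": lambda v: v[1],         # libertarian, y = 2
    }

    def pick_equilibrium(candidates, score):
        """Return the name of the candidate whose value vector maximizes `score`."""
        return max(candidates, key=lambda name: score(candidates[name]))

    for name, score in criteria.items():
        print(name, "->", pick_equilibrium(candidates, score))
    # Prints B for the first four criteria and A for "best for Player 2", matching
    # the comparisons in the text (1004 > 102, 4 > 2, 1000 > 100, 1000 > 2, 100 > 4).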
