
Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence

Pruning Game Tree by Rollouts

Bojun Huang
Microsoft Research
[email protected]

Copyright © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

In this paper we show that the α-β algorithm and its successor MT-SSS*, as two classic minimax search algorithms, can be implemented as rollout algorithms, a generic algorithmic paradigm widely used in many domains. Specifically, we define a family of rollout algorithms, in which the rollout policy is restricted to select successor nodes only from a subset of the children list. We show that any rollout policy in this family (either deterministic or randomized) is guaranteed to evaluate the game tree correctly with a finite number of rollouts. Moreover, we identify simple rollout policies in this family that "implement" α-β and MT-SSS*. Specifically, given any game tree, the rollout algorithms with these particular policies always visit the same set of leaf nodes in the same order as α-β and MT-SSS*, respectively. Our results suggest that traditional pruning techniques and the recent Monte Carlo Tree Search algorithms, as two competing approaches for game tree evaluation, may be unified under the rollout paradigm.

Introduction

Game tree evaluation formulates a logic process for making optimal worst-case plans in sequential decision making problems, a research topic usually benchmarked by two-player board games. Historically, classic game-tree search algorithms like α-β and its successors have successfully demonstrated human-champion-level performance in tactical games like Chess, but their computational complexity grows exponentially with the depth of the game tree. In order to reduce the size of the game tree under consideration, these traditional game-tree search algorithms have to be complemented by domain-specific evaluation functions, a task that is very difficult in some domains, such as GO and General Game Playing.

Recently, rollout algorithms have been introduced as a new paradigm for evaluating large game trees, partially because of their independence of domain-specific evaluation functions. In general, the rollout algorithm is a generic algorithmic paradigm that has been widely used in many domains, such as combinatorial optimization (Bertsekas, Tsitsiklis, and Wu 1997) (Glover and Taillard 1993), stochastic optimization (Bertsekas and Castanon 1999), planning in Markov Decision Processes (Péret and Garcia 2004) (Kocsis and Szepesvári 2006), and game playing (Tesauro and Galperin 1996) (Abramson 1990). In the context of game tree evaluation, a rollout is a process that simulates the game from the current state (the root of the game tree) to a terminating state (a leaf node), following a certain rollout policy that determines each move of the rollout in the state space. A rollout algorithm is a sequence of rollout processes, where information obtained from one rollout process can be utilized to reinforce the policies of subsequent rollouts, such that the long-term performance of the algorithm may converge towards the (near-)optimal policy.

In particular, a specific class of rollout algorithms, called Monte Carlo Tree Search (MCTS), has substantially advanced the state of the art in Computer GO (Gelly et al. 2012) and General Game Playing (Finnsson and Björnsson 2008). The key idea of MCTS is to use the average outcome of rollouts to approximate the minimax value of a game tree, which is shown to be effective in dynamic games (Coulom 2006). On the other hand, however, people have also observed that existing MCTS algorithms appear to lack the capability of making "narrow lines" of tactical plans as traditional game-tree search algorithms do. For example, Ramanujan, Sabharwal, and Selman (2012) show that UCT, a popular rollout algorithm in game playing, is prone to being misled by over-estimated nodes in the tree, and is thus often trapped in sub-optimal solutions in some situations.

As a result, researchers have been trying to combine MCTS algorithms with traditional game-tree search algorithms, in order to design unified algorithms that share the merits of both sides (Baier and Winands 2013) (Lanctot et al. 2013) (Coulom 2006). Unfortunately, one of the major difficulties for the unification is that most traditional α-β-like algorithms are based on minimax search, which seems to be a different paradigm from rollout.

In this paper, we show that two classic game-tree search algorithms, α-β and MT-SSS*, can be implemented as rollout algorithms. This observation offers a new perspective for understanding these traditional game-tree search algorithms, one that unifies them with their modern "competitors" under the generic framework of rollout. Specifically,

• We define a broad family of rollout algorithms, and prove that any algorithm in this family is guaranteed to correctly evaluate game trees by visiting each leaf node at most once. The correctness guarantee is policy-oblivious: it applies to an arbitrary rollout policy in the family, either deterministic or probabilistic.

• We then identify two simple rollout policies in this family that implement the classic minimax search algorithms α-β and MT-SSS* under our rollout framework.
We prove that given any game tree, the rollout algorithms with these particular policies always visit the same set of leaf nodes in the same order as α-β and an "augmented" version of MT-SSS*, respectively.

• As a by-product, the "augmented" versions of these classic algorithms identified in our equivalence analysis are guaranteed to outprune their original versions.

Preliminaries

Game tree model

A game tree T is defined by a tuple (S, C, V), where S is a finite state space, C(·) is the successor function that defines an ordered children list C(s) for each state s ∈ S, and V(·) is the value function that defines a minimax value for each state s ∈ S.

We assume that C(·) imposes a tree topology over the state space, which means there is a single state that does not appear in any children list; this state is identified as the root node of the tree. A state s is a leaf node if its children list is empty (i.e. C(s) = ∅); otherwise it is an internal node.

The value function V(·) of a given game tree can be specified by labeling each state s ∈ S as either MAX or MIN, and further associating each leaf-node state s with a deterministic reward R(s). The minimax value V(s) of each state s ∈ S is then defined as

    V(s) = R(s)                     if s is Leaf;
           max_{c ∈ C(s)} V(c)      if s is Internal & MAX;    (1)
           min_{c ∈ C(s)} V(c)      if s is Internal & MIN.

To align with previous work, we assume that the reward R(s) for any s ranges over a finite set of integers, and we use the symbols +∞ and −∞ to denote a finite upper bound and a finite lower bound that are larger and smaller than any possible value of R(s), respectively.

We remark that such a "whole-space" storage only serves as a conceptually unified interface that simplifies the presentation and analysis in this paper. In practice, we do not need to allocate physical memory for nodes with the trivial bound [−∞, +∞]. For example, the storage is physically empty if the algorithm does not access the storage at all. Such a storage can be easily implemented based on standard data structures such as a transposition table (Zobrist 1970).

Depth-first algorithms and α-β pruning

Observe that Eq. (1) implies that there must exist at least one leaf node in the tree that has the same minimax value as the root. We call any such leaf node a critical leaf. A game-tree search algorithm computes V(root) by searching for a critical leaf in the given game tree.

Depth-first algorithms are a specific class of game-tree search algorithms that compute V(root) through a depth-first search, evaluating the leaf nodes strictly from left to right under the order induced by the successor function C(·). It is well known that the α-β algorithm is the optimal depth-first algorithm in the sense that, for any game tree, no depth-first algorithm can correctly compute V(root) by evaluating fewer leaf nodes than α-β does (Pearl 1984).
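As a concrete sketch of the model above — the `Node` class and its field names are illustrative assumptions of this example, not taken from the paper — the minimax recursion of Eq. (1) can be written directly:

```python
# Illustrative sketch of the game tree model (S, C, V); the Node class and
# its field names are assumptions of this example, not from the paper.

class Node:
    def __init__(self, label=None, reward=None, children=None):
        self.label = label              # "MAX" or "MIN" for internal nodes
        self.reward = reward            # deterministic reward R(s), leaves only
        self.children = children or []  # ordered children list C(s)

    def is_leaf(self):
        # s is a leaf node iff its children list is empty, i.e. C(s) = ∅
        return not self.children

def minimax_value(s):
    # Direct implementation of the recursion in Eq. (1)
    if s.is_leaf():
        return s.reward
    values = [minimax_value(c) for c in s.children]
    return max(values) if s.label == "MAX" else min(values)
```

For example, a MAX root over two MIN children covering the leaves {3, 5} and {2, 9} evaluates to max(min(3, 5), min(2, 9)) = 3.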
The key idea of the α-β algorithm is to maintain an open interval (α, β) when visiting any tree node s, such that a critical leaf can be located in the subtree under s only if V(s) ∈ (α, β). In other words, whenever we know that the value of s falls outside the interval (α, β), we can skip over all leaf nodes under s without compromising correctness. Algorithm 1 gives the pseudocode of the α-β algorithm, which follows Figure 2 in (Plaat et al. 1996). Given a tree node s and an open interval (α, β), the alphabeta procedure returns a value g, which equals the exact value of V(s) if α < g < β, but only encodes an upper bound of V(s) if g ≤ α (a situation called fail-low), or a lower bound of V(s) if g ≥ β (a situation called fail-high).

Note that Algorithm 1 accesses the external storage at Line 4 and Line 22, which is unnecessary because the algorithm never visits a tree node more than once. Indeed, the basic version of the α-β algorithm does not require the external storage at all.
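The (α, β) window logic can be illustrated with a minimal fail-soft sketch. This is not the paper's Algorithm 1 (which follows Plaat et al. 1996 and interacts with an external storage); the nested-tuple tree encoding below is an assumption of this example, with an integer for a leaf and ("MAX"|"MIN", children) for an internal node:

```python
# Minimal fail-soft α-β sketch; NOT the paper's Algorithm 1 (no external
# storage). Tree encoding (an assumption of this example): a leaf is an
# integer reward R(s); an internal node is ("MAX", [...]) or ("MIN", [...]).

def alphabeta(s, alpha, beta):
    """Return g: the exact V(s) if alpha < g < beta; an upper bound on V(s)
    if g <= alpha (fail-low); a lower bound on V(s) if g >= beta (fail-high)."""
    if isinstance(s, int):          # leaf node: return its reward R(s)
        return s
    kind, children = s
    if kind == "MAX":
        g = float("-inf")
        for c in children:
            g = max(g, alphabeta(c, max(alpha, g), beta))
            if g >= beta:           # fail-high: prune the remaining children
                break
        return g
    g = float("inf")                # "MIN" node, symmetric
    for c in children:
        g = min(g, alphabeta(c, alpha, min(beta, g)))
        if g <= alpha:              # fail-low: prune the remaining children
            break
    return g

tree = ("MAX", [("MIN", [3, 5]), ("MIN", [2, 9])])   # V(root) = 3
```

With the full window (−∞, +∞) the call returns the exact minimax value; with a window that excludes V(root) it returns only a bound, and it is exactly this license to return a bound that lets the procedure skip leaves.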