
Monte-Carlo Tree Search

Mark H.M. Winands
Department of Knowledge Engineering, Maastricht University, Maastricht, The Netherlands
[email protected]

Key words: Adversarial Search, Monte-Carlo Sampling, Game Tree

Synonyms: Monte Carlo Tree Search, MCTS, UCT

1 Definition

Monte-Carlo Tree Search (MCTS) [14, 21] is a best-first search method that does not require a positional evaluation function. It is based on a randomized exploration of the search space. Using the results of previous explorations, the algorithm gradually builds up a game tree in memory, and successively becomes better at accurately estimating the values of the most promising moves.

MCTS consists of four strategic steps, repeated as long as there is time left [11]. The steps, outlined in Fig. 1, are as follows. (1) In the selection step the tree is traversed from the root node downwards until a state is chosen that has not yet been stored in the tree. (2) Next, in the play-out step moves are chosen in self-play until the end of the game is reached. (3) Subsequently, in the expansion step one or more states encountered along the play-out are added to the tree. (4) Finally, in the backpropagation step, the game result r is propagated back along the previously traversed path up to the root node, where the node statistics are updated accordingly.

2 Structure of MCTS

MCTS usually starts with a tree containing only the root node. The tree is gradually grown by executing the selection, play-out, expansion, and backpropagation steps. Such an iteration is called a full simulation. After a certain number of simulations, a move is chosen to be played in the actual game. This final move selection is based on the highest score or, alternatively, on the number of times a move has been sampled. The detailed structure of MCTS is discussed by explaining the four steps below.

Fig. 1. Outline of Monte-Carlo Tree Search. The four steps are repeated a number of times: in the selection step a selection strategy is used to traverse the tree; in the play-out step one simulated game is played; in the expansion step one or more nodes are created; and in the backpropagation step the result is propagated back in the tree.

2.1 Selection

Selection chooses a child to be searched based on previous information. It controls the balance between exploitation and exploration. On the one hand, the task consists of selecting the move that leads to the best results so far (exploitation). On the other hand, the less promising moves still have to be tried, due to the uncertainty of the simulations (exploration).

Several selection strategies [8] have been suggested for MCTS, such as BAST, EXP3, and UCB1-Tuned, but the most popular one is based on the UCB1 algorithm [3] and is called UCT (Upper Confidence Bounds applied to Trees) [21]. UCT works as follows. Let I be the set of nodes immediately reachable from the current node p. The selection strategy selects the child b of node p that satisfies Formula 1:

b \in \operatorname*{argmax}_{i \in I} \left( v_i + C \times \sqrt{\frac{\ln n_p}{n_i}} \right) \qquad (1)

where v_i is the value of node i, n_i is the visit count of i, and n_p is the visit count of p. C is a constant parameter, which can be tuned experimentally (e.g., C = 0.4). The value of v_i should lie in the range [0, 1]. In case a child has not been stored in the tree or has not been visited yet, a default value is assumed, for example the maximum value that a node could obtain by sampling (i.e., v_max = 1).
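As an illustration of Formula 1, the following minimal Python sketch selects a child with UCT. The Node structure, the treatment of unvisited children, and the parameter value are assumptions made for the example, not prescribed by the entry.

```python
import math
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    visits: int = 0                               # n_i, visit count
    value: float = 0.0                            # v_i, average result in [0, 1]
    children: List["Node"] = field(default_factory=list)


def uct_select(parent: Node, c: float = 0.4) -> Node:
    """Pick the child maximizing v_i + C * sqrt(ln(n_p) / n_i) (Formula 1)."""

    def uct_score(child: Node) -> float:
        if child.visits == 0:
            # Unvisited child: the text suggests an optimistic default
            # (v_max = 1); returning an infinite score is one simple way to
            # make sure every child is tried at least once.
            return float("inf")
        return child.value + c * math.sqrt(math.log(parent.visits) / child.visits)

    return max(parent.children, key=uct_score)
```

With values v_i normalized to [0, 1] and C around 0.4, the square-root term keeps rarely visited children competitive until enough simulations have been played through them.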
2.2 Play-out

When in the selection step a state is chosen that has not been stored in the tree, the play-out starts. Moves are selected in self-play until the end of the game is reached. This task might consist of playing plain random moves or, better, semi-random moves chosen according to a simulation strategy. Smart simulation strategies have the potential to improve the level of play significantly. The main idea is to play interesting moves based on heuristics. In the literature this play-out step is sometimes called the roll-out or simulation.

2.3 Expansion

Expansion is the procedure that decides whether nodes are added to the tree. The following standard expansion strategy is sufficient in most cases: one node is added per simulation [14]. The added leaf node L corresponds to the first state encountered during the traversal that was not already stored. This saves memory and reduces the level of play only slightly.

2.4 Backpropagation

Backpropagation is the procedure that propagates the result r of a simulated game t back from the leaf node L, through the previously traversed nodes, all the way up to the root. If a game is won, the result for player j is scored as r_{t,j} = 1, in the case of a loss as r_{t,j} = 0, and for a draw as r_{t,j} = 0.5. To deal with multi-player games, the result is backpropagated as a tuple of size N, where N is the number of players. For instance, if Player 1 and Player 3 both reach a winning condition in a 3-player game, then the result r is returned as the tuple (1/2, 0, 1/2). Propagating the values back in the tree is performed similarly to max^n [31].

To compute the value v_i of a node i, a backpropagation strategy is applied. Usually, it is calculated by taking the average of the results of all simulated games made through this node [14], i.e., v_i = R_{i,j} / n_i, where j is the player to move in its parent node p and R_{i,j} = \sum_t r_{t,j} is the cumulative score of all the simulations.

3 MCTS Enhancements

Over the past years, several enhancements have been developed to improve the performance of MCTS [8]. First, there are many ways to improve the selection step of MCTS. The major challenge is how to choose a promising node when the number of simulations is still low. Domain-independent techniques that only use information gathered during the simulations are Transposition Tables, Rapid Action Value Estimation (RAVE), and Progressive History [12, 18, 24]. Techniques that rely on hand-coded domain knowledge are, for instance, Move Groups, Prior Knowledge, Progressive Bias, and Progressive Widening/Unpruning [11, 12, 18]. The heuristic knowledge used may consist of move patterns and even static board evaluators. When a couple of these enhancements are successfully incorporated, the C parameter of UCT usually becomes very small or even zero.

Next, the play-outs require a simulation strategy in order to be accurate. Moves are chosen based on computationally light knowledge only [18] (e.g., patterns, capture potential, and proximity to the last move). Adding computationally intensive, heavy heuristic knowledge in the play-outs (such as a 1- or 2-ply search using a full board evaluator) has been beneficial in a few games such as Chinese Checkers and Lines of Action. When domain knowledge is not readily available, there exist various domain-independent techniques to enhance the quality of the play-outs, including the Move Average Sampling Technique (MAST), the Last-Good-Reply policy, and N-Grams [32]. The principle of these techniques is that moves that are good in one situation are likely to be good in other situations as well.
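To give a concrete impression of such a domain-independent play-out enhancement, the sketch below implements an epsilon-greedy policy in the spirit of MAST: a global average result is kept per move, and during a play-out the move with the best average is preferred over a purely random choice. The game-state interface (is_terminal, legal_moves, play, result) and all names and parameter values are assumptions made for this example only.

```python
import random
from collections import defaultdict


class MoveStats:
    """Global per-move statistics in the spirit of MAST (illustrative)."""

    def __init__(self):
        self.total = defaultdict(float)   # cumulative result per move
        self.count = defaultdict(int)     # number of play-outs containing the move

    def average(self, move) -> float:
        # Unseen moves get a neutral prior of 0.5 (an assumption of this sketch).
        return self.total[move] / self.count[move] if self.count[move] else 0.5

    def update(self, moves, result: float) -> None:
        for move in moves:
            self.total[move] += result
            self.count[move] += 1


def mast_playout(state, stats: MoveStats, epsilon: float = 0.1) -> float:
    """Play one simulated game with an epsilon-greedy, MAST-like move choice."""
    played = []
    while not state.is_terminal():
        moves = state.legal_moves()
        if random.random() < epsilon:
            move = random.choice(moves)            # exploration: random move
        else:
            move = max(moves, key=stats.average)   # exploitation: best global average
        state = state.play(move)
        played.append(move)
    result = state.result()                        # assumed to lie in [0, 1]
    # In a two- or multi-player game the statistics would normally be kept per
    # player; a single table is used here only to keep the sketch short.
    stats.update(played, result)
    return result
```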
The basic version of MCTS converges to the game-theoretic value, but is unable to prove it. The MCTS-Solver technique [34] is able to prove the game-theoretic value of a state with a binary outcome (i.e., win or loss). It labels terminal states in the search tree as a win or a loss and backpropagates the game-theoretic result in a max^n way [24]; a minimal sketch of this labeling rule is given at the end of this entry. For games with multiple outcomes (e.g., win, loss, or draw) the technique has been extended to Score Bounded Monte-Carlo Tree Search [9].

Finally, to utilize the full potential of a multi-core machine, parallelization has to be applied in an MCTS program. There exist three different parallelization techniques for MCTS: (1) root parallelization, (2) leaf parallelization, and (3) tree parallelization [10]. In root parallelization, each thread has its own MCTS tree. When the allotted search time is up, the results of the different trees are combined. In leaf parallelization, one tree is traversed using a single thread. Subsequently, starting from the leaf node, play-outs are executed in parallel, one for each available thread. Once all threads have finished, the results are backpropagated. When using tree parallelization, one tree is shared, in which all threads operate independently. For shared-memory systems, tree parallelization is the natural approach, as it takes full advantage of the available bandwidth to communicate simulation results [16].

4 Historical Background

Classic search algorithms such as A*, αβ search, or Expectimax require an evaluator that assigns heuristic values to the leaf nodes in the tree. The 15-Puzzle and the board games Backgammon, Chess, and Checkers are instances where this approach has led to world-class performance. However, for some domains constructing a strong static heuristic evaluation function has been a rather difficult or even an infeasible task.

Replacing such an evaluation function with Monte-Carlo sampling was proposed in the early 1990s. Abramson [1] experimented with these so-called Monte-Carlo evaluations in the games of Tic-tac-toe, Othello, and Chess. In 1993 Bernd Brügmann was the first to use Monte-Carlo evaluations in his 9×9 Go program Gobble. In the following years the technique was incorporated in stochastic games such as Backgammon [33] and imperfect-information games such as Bridge [19], Poker [5], and Scrabble [30].
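The following sketch, referred to in the MCTS Enhancements section, illustrates the win/loss labeling used by MCTS-Solver in its simplest two-player (negamax-style) form. The Node class, its fields, and the constants are illustrative assumptions; the handling of draws and the multi-player max^n propagation mentioned above are omitted.

```python
from dataclasses import dataclass, field
from typing import List

# Game-theoretic labels from the point of view of the player to move at a node.
WIN, LOSS, UNKNOWN = 1, -1, 0


@dataclass
class Node:
    children: List["Node"] = field(default_factory=list)
    proven: int = UNKNOWN  # proven game-theoretic value, if any


def update_proof(node: Node) -> None:
    """Label a node once enough of its children carry proven values.

    The player to move at `node` has a proven WIN if at least one child is a
    proven LOSS for the opponent; the node is a proven LOSS only if every
    child is a proven WIN for the opponent.
    """
    if any(child.proven == LOSS for child in node.children):
        node.proven = WIN
    elif node.children and all(child.proven == WIN for child in node.children):
        node.proven = LOSS
    # Otherwise the node stays UNKNOWN and keeps its ordinary Monte-Carlo
    # average; simulations through it continue as usual.
```

Once a node is proven in this way, the solver can return its exact value instead of sampling further through it, which is how the game-theoretic value of the root can eventually be established.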