
Deep Reinforcement Learning in Continuous Action Spaces: a Case Study in the Game of Simulated Curling

Kyowoon Lee*1, Sol-A Kim*1, Jaesik Choi1, Seong-Whan Lee2

*Equal contribution. 1Department of Computer Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea. 2Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea. Correspondence to: Jaesik Choi <[email protected]>.

Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).

Abstract

Many real-world applications of reinforcement learning require an agent to select optimal actions from continuous spaces. Recently, deep neural networks have successfully been applied to games with discrete action spaces. However, deep neural networks for discrete actions are not suitable for devising strategies for games in which a very small change in an action can dramatically affect the outcome. In this paper, we present a new self-play reinforcement learning framework that incorporates a kernel-regression-based search algorithm, which enables search in continuous action spaces. Without any hand-crafted features, our network is trained by supervised learning followed by self-play reinforcement learning with a high-fidelity simulator for the Olympic sport of curling. The program trained under our framework outperforms existing programs equipped with several hand-crafted features and won an international digital curling competition.

1. Introduction

Learning good strategies from large continuous action spaces is important for many real-world problems, including learning robotic manipulation and playing games with physical objects. In particular, when an autonomous agent interacts with physical objects, it is often necessary to handle large continuous action spaces.

Reinforcement learning methods have been extensively applied to build intelligent agents that can play games such as chess (Campbell et al., 2002), checkers (Schaeffer et al., 1992), and Othello (Buro, 1999). Recently, deep convolutional neural networks (CNNs) (LeCun & Bengio, 1998) have achieved super-human performance in deterministic games with perfect information, such as Atari games (Mnih et al., 2015) and Go (Silver et al., 2016; 2017). In the latter game, board positions are passed through the convolutional layers as a 19-by-19-square image. These CNNs effectively reduce the depth and breadth of the search tree by evaluating positions with a value network and by sampling actions with a policy network. However, in a continuous action space, the space needs to be discretized, and deterministic discretization would introduce a strong bias into policy evaluation and policy improvement. Thus, such deep CNNs are not directly applicable to large, non-convex continuous action spaces.

To solve this issue, we conduct a policy search with an efficient stochastic continuous action search on top of policy samples generated from a deep CNN. Our deep CNN still discretizes the state space and the action space. However, in the stochastic continuous action search, we lift the restriction of deterministic discretization and conduct a local search procedure in a physical simulator with continuous action samples. In this way, the benefits of both deep neural networks (i.e., learning the global structure) and physical simulators (i.e., finding precise continuous actions) can be realized.

More specifically, we design a deep CNN called the policy-value network, which outputs a probability distribution over actions and the expected reward for a given input state. The policy-value network is jointly trained to find an optimal policy and to estimate the reward of a given input instance. During supervised training, the policy subnetwork is learned directly from the moves of a reference program in each simulated game. The value subnetwork is learned using d-depth simulation and bootstrapping of the prediction to handle the high variance of the reward obtained from a sequence of stochastic moves. The network is then trained further from games of self-play, using kernel regression to handle continuous spaces and actions precisely. This process allows actions in the continuous domain to be explored and adjusts the policy and the value in consideration of the uncertainty of execution.

Figure 1. The architecture of our policy-value network. As input, a feature map (Table 2 in the supplementary material) is provided from the state information. During the convolutional operations, the layers' width and height are fixed at 32x32 (the discretized positions of the stones) without pooling. The details of the layers are provided in Figure 2. We train the policy and the value functions in a unified network. The output of the policy head is the probability distribution over actions. The output of the value head is the probability distribution over the final scores [-8, 8].
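To make the interface described in Figure 1 concrete, the following is a minimal sketch of a policy-value network with that input/output structure: a convolutional trunk that keeps the 32x32 resolution without pooling, a policy head over discretized actions, and a value head over the 17 possible final scores in [-8, 8]. The number of input feature planes, the channel widths, the number of layers, and the assumption of two curl directions per grid cell are illustrative guesses rather than the configuration used in the paper, and PyTorch is used here only for illustration.

```python
# Sketch of a unified policy-value network (illustrative sizes, not the paper's).
import torch
import torch.nn as nn

class PolicyValueNet(nn.Module):
    def __init__(self, in_planes: int = 8, width: int = 64, n_blocks: int = 6):
        super().__init__()
        convs = [nn.Conv2d(in_planes, width, 3, padding=1), nn.ReLU()]
        for _ in range(n_blocks):
            convs += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        self.trunk = nn.Sequential(*convs)          # 32x32 resolution preserved, no pooling
        self.policy_head = nn.Conv2d(width, 2, 1)   # 2 curl directions per cell (assumed)
        self.value_head = nn.Sequential(
            nn.Flatten(), nn.Linear(width * 32 * 32, 256), nn.ReLU(),
            nn.Linear(256, 17))                     # final scores -8..8 -> 17 bins

    def forward(self, x):
        h = self.trunk(x)
        p = self.policy_head(h).flatten(1)          # logits over 32*32*2 discretized actions
        v = self.value_head(h)                      # logits over final scores
        return torch.log_softmax(p, dim=1), torch.log_softmax(v, dim=1)

# Usage: a batch of one feature map with 8 assumed input planes on the 32x32 grid.
net = PolicyValueNet()
log_pi, log_score = net(torch.zeros(1, 8, 32, 32))
print(log_pi.shape, log_score.shape)  # torch.Size([1, 2048]) torch.Size([1, 17])
```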
We verify our framework with the sport of curling. Curling, often called chess on ice, has been viewed as the most intellectually challenging Olympic sport due to its large action space and complicated strategies. Typically, curling players deliver a stone into a large playing area of about 5 m by 30 m, yet the precise interactions among stones occur at scales of typically less than 10 cm. When discretized, the play area is divided into a 50x300 grid. Asymmetric uncertainty is added to the final location to which a stone is delivered.

The program trained under our framework outperforms the state-of-the-art digital curling programs AyumuGAT'17 (Ohto & Tanaka, 2017) and Jiritsukun'17 (Yamamoto et al., 2015). Our program also won the Game AI Tournament (GAT-2018) (Ito).

2. Related Work

In the game of Go, AlphaGo Lee, the successor of AlphaGo Fan (Silver et al., 2016), defeated Lee Sedol, the winner of 18 international titles. Although Go has a finite, discrete action space, the depth of play creates complex branches. Based on the moves of human experts, two neural networks in AlphaGo Lee are trained for the policy and value functions. AlphaGo Lee uses Monte Carlo tree search (MCTS) for policy improvement.

AlphaGo Zero (Silver et al., 2017), which is trained via self-play without any hand-crafted knowledge, has demonstrated a significant improvement in performance. AlphaGo Zero is expected to win more than 99.999% of games against AlphaGo Lee (their difference in Elo rating is greater than 2,000). AlphaGo Zero uses a unified neural network for the policy and value functions to train the networks faster.

In the domain of curling, several algorithms have been proposed. As a way of dealing with the continuous action space, game tree search methods (Yamamoto et al., 2015) discretize the continuous action space. The evaluation functions are designed based on domain knowledge and the rules of the game. Taking the given execution uncertainty into account, the action value is calculated as the average of the values of neighboring actions.

An MCTS method called KR-UCT has been successfully applied to continuous action spaces (Yee et al., 2016). KR-UCT performs effective selection and expansion of nodes using neighborhood information, estimating rewards with kernel regression (KR) and kernel density estimation (KDE) in continuous action spaces. Given an action, the upper confidence bound (Lai & Robbins, 1985) of its reward is estimated based on the values of nearby actions. KR-UCT can be regarded as a specialized exploration of pseudo-count based approaches (Bellemare et al., 2016).
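The KR-UCT formulas themselves are not reproduced in this excerpt, but the idea of sharing information between nearby continuous actions can be sketched as follows: kernel regression smooths the observed values of previously tried actions, kernel density estimation provides an effective visit count, and the two are combined into an upper confidence bound. The Gaussian kernel, bandwidth, and exploration constant below are illustrative assumptions, not the exact estimator of Yee et al. (2016).

```python
# Hedged sketch of a kernel-regression UCB over continuous actions.
import numpy as np

def gaussian_kernel(a, b, bandwidth=0.1):
    """Similarity between two actions in a continuous action space (assumed kernel)."""
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / (2 * bandwidth ** 2))

def kernel_ucb(candidate, tried_actions, mean_values, visit_counts, c=1.0):
    """UCB score for `candidate`, borrowing statistics from nearby tried actions."""
    weights = np.array([gaussian_kernel(candidate, b) for b in tried_actions])
    density = np.dot(weights, visit_counts)                                    # KDE: effective visits
    value = np.dot(weights * visit_counts, mean_values) / max(density, 1e-8)   # KR: smoothed value
    total = np.sum(visit_counts)
    return value + c * np.sqrt(np.log(total + 1) / (density + 1e-8))

# Usage: three previously simulated (x, y) aim points with their statistics.
tried = [(0.0, 0.0), (0.1, 0.0), (0.5, 0.5)]
means = np.array([0.2, 0.4, -0.1])
visits = np.array([10, 5, 2])
print(kernel_ucb((0.05, 0.0), tried, means, visits))
```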
To handle continuous action spaces in the bandit problem, several algorithms have been proposed. For example, hierarchical optimistic optimization (HOO) (Bubeck et al., 2008) starts by creating a cover tree and recursively divides the action space into smaller candidate ranges at each depth. A node in the cover tree is treated as an arm of a sequential bandit problem. The most promising node is exploited to produce estimates of finer granularity, and regions that have not been sampled sufficiently are explored further.

An analysis of the dynamics of curling is important for building an accurate digital curling program. For example, the friction coefficients between the curling stones and the ice sheet have been analyzed (Lozowski et al., 2015), while pebbles, the small frozen droplets of water across the play sheet, have also been taken into account (Maeno, 2014). Unfortunately, modeling the changes in friction on the ice surface is not yet possible. Thus, in general, digital curling simulators assume a fixed friction coefficient with noise generated from a predefined function (Ito & Kitasei, 2015; Yee et al., 2016; Ahmad et al., 2016).

The physical behavior of the stones has been modeled using physics simulation engines such as Box2D (Parberry, 2013), Unity3D (Jackson, 2015), and Chipmunk 2D. Important parameters, including the friction coefficients and noise generation functions, are trained from games between professional players (Yee et al., 2016; Ito & Kitasei, 2015; Heo & Kim, 2013). In this paper, we use the same parameters used in a digital curling competition (Ito & Kitasei, 2015).

3. Background

POLICY EVALUATION: LEARNING VALUE FUNCTIONS

The value function predicts the outcome from state s of games played using policy p for both players,

    v(s) = \mathbb{E}[\, r(s) \mid s_t = s,\; a_t \sim p \,]    (3)

The value function is approximated by the value estimator vθ(s) with parameters θ. The value estimator is trained on state-reward pairs (s, r(s)) using stochastic gradient descent to minimize the mean squared error (MSE) between the predicted regression value vθ(s) and the corresponding outcome r(s),

    \Delta\theta \propto \frac{\partial v_\theta(s)}{\partial \theta}\,\bigl(r(s) - v_\theta(s)\bigr)    (4)
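As a concrete reading of Eq. (4), the sketch below performs the stated update with stochastic gradient descent on the squared error between vθ(s) and r(s): minimizing (r(s) − vθ(s))²/2 yields the step θ ← θ + α (r(s) − vθ(s)) ∂vθ(s)/∂θ. A linear value estimator over hand-built state features is assumed purely for illustration; in the paper the estimator is the value head of the deep CNN.

```python
# Minimal sketch of the Eq. (4) update with an assumed linear value estimator.
import numpy as np

def sgd_value_update(theta, features, outcome, lr=0.01):
    """One MSE step: theta <- theta + lr * (r(s) - v_theta(s)) * d v_theta(s) / d theta."""
    v = np.dot(theta, features)   # v_theta(s) for a linear estimator
    grad = features               # d v_theta(s) / d theta
    return theta + lr * (outcome - v) * grad

# Usage with two hypothetical (state features, observed final reward) pairs.
theta = np.zeros(4)
for s_feat, r in [(np.array([1.0, 0.0, 2.0, 1.0]), 1.0),
                  (np.array([0.0, 1.0, 1.0, 0.0]), -2.0)]:
    theta = sgd_value_update(theta, s_feat, r)
print(theta)
```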
3.2. Monte Carlo Tree Search

A Monte Carlo tree search (MCTS) (Browne et al., 2012; Coulom, 2007a; Kocsis & Szepesvári, 2006) is a tree search algorithm for finite-horizon decision processes.
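The excerpt ends here. For reference, the following is a generic UCT-style MCTS loop over discrete actions (selection, expansion, simulation, backpropagation); it is a textbook sketch, not the continuous-action search developed in this paper, and the helpers legal_actions, step, and rollout are assumed to be supplied by the environment.

```python
# Generic UCT-style MCTS skeleton (textbook sketch, discrete actions).
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.total = {}, 0, 0.0

def uct_search(root_state, legal_actions, step, rollout, n_iter=1000, c=1.4):
    root = Node(root_state)
    for _ in range(n_iter):
        node = root
        # Selection: descend while fully expanded, maximizing the UCB1 score.
        while node.children and len(node.children) == len(legal_actions(node.state)):
            node = max(node.children.values(),
                       key=lambda ch: ch.total / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # Expansion: add one untried action, if any remain.
        untried = [a for a in legal_actions(node.state) if a not in node.children]
        if untried:
            a = random.choice(untried)
            node.children[a] = Node(step(node.state, a), parent=node)
            node = node.children[a]
        # Simulation and backpropagation.
        reward = rollout(node.state)
        while node is not None:
            node.visits += 1
            node.total += reward
            node = node.parent
    # Return the most-visited action at the root.
    return max(root.children, key=lambda a: root.children[a].visits)
```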