Efficient Reinforcement Learning for StarCraft by Abstract Forward Models and Transfer Learning

Ruo-Ze Liu, Haifeng Guo, Xiaozhong Ji, Yang Yu, Member, IEEE, Zhen-Jia Pang, Zitai Xiao, Yuzhou Wu, Tong Lu, Member, IEEE

arXiv:1903.00715v3 [cs.LG] 30 Jan 2021

Y. Yu and T. Lu are the corresponding authors. They are with the National Key Laboratory for Novel Software Technology, Nanjing University (e-mail: [email protected], [email protected]). R.-Z. Liu, H. Guo, X. Ji, Z.-J. Pang, Z. Xiao, and Y. Wu were students at Nanjing University. Manuscript received April 19, 2020; revised December 31, 2020.

Abstract—Injecting human knowledge is an effective way to accelerate reinforcement learning (RL), yet such methods remain underexplored. This paper presents our finding that an abstract forward model, which we call a Thought-game (TG), combined with transfer learning is an effective approach. We take StarCraft II as the study environment. With the help of a designed TG, the agent learns a 99% win-rate on a 64×64 map against the Level-7 built-in AI using only 1.08 hours on a single commercial machine. We also show that the TG method is less restrictive than previously thought: it works with roughly designed TGs and remains useful when the environment changes. Compared with previous model-based RL, TG is more effective. We further present a TG hypothesis that describes the influence of the fidelity level of a TG. For real games whose state and action spaces differ from those of the TG, we propose a novel XfrNet, whose usefulness is validated while achieving a 90% win-rate against the cheating Level-10 AI. We argue that the TG method may shed light on further studies of efficient RL with human knowledge.

Index Terms—Reinforcement learning, StarCraft.

I. INTRODUCTION

In recent years, the combination of reinforcement learning (RL) [1] and deep learning (DL) [2], known as deep reinforcement learning (DRL), has received increasing attention, particularly in learning to play games [3]. The combination of DRL and Monte-Carlo tree search has conquered the game of Go [4]. After that, some researchers shifted their attention to real-time strategy (RTS) games; for example, RL has been applied to StarCraft I (SC1) [5] and II (SC2) [6], and to Dota2. However, these approaches require huge computing resources and long training times: TStarBot [7] uses 3,840 CPU cores and OpenAIdota2 utilizes 128,400. Also notice that the training time of OpenAIdota2 is measured in months, and full training in [6] takes 44 days. Such costs make RL approaches impractical for real-world tasks.

Meanwhile, it has been widely recognized that injecting human knowledge is an effective way to improve the efficiency of RL. Behavior cloning [8], one kind of imitation learning (IL), is an often-adopted approach to boost the starting performance of RL [6]. It should not be ignored that human knowledge has also been utilized in choosing the policy model structure, designing the reward function, and deciding the hyper-parameters of the learning algorithm. However, the investigation of effective knowledge injection methods is an overlooked issue and deserves more exploration.

This paper investigates, from the perspective of model-based RL (MB-RL), a knowledge injection method based on designing an abstracted and simplified model of the environment. Such models are called forward models [9] or world models [10] in the previous literature. Different from them, the model we investigate here is hand-designed; moreover, it need not be as faithful to the original environment as possible. Hence, our model can be seen as an abstract forward model. In this paper, we give it the simple name Thought-game (TG). The name draws on the thinking mode of human players in RTS games: after playing some games, players build models of the game in their minds, and these mental models can be used to pre-train their game abilities before they return to the real game. Previously, it has been believed that an accurately reconstructed model is helpful for accelerating RL. On the one hand, however, learning an accurate model can be quite tricky; on the other hand, human experts often have a wealth of experience that can abstractly outline the environment. It is therefore interesting to investigate whether an abstracted and drastically simplified environment model constructed from human experience can help RL.

Fig. 1: Thought-Game Architecture. $S_m$ and $S_s$ are the state spaces of the TG and the RG respectively, and $A_m$ and $A_s$ are the action spaces, while $\pi_\phi$ and $\pi_\theta$ are the policies. The dashed line on the right means that the weight parameters of $\pi_\theta$ are initialized to those of $\pi_\phi$ before fine-tune training.

We take SC2 as the study environment, which is a challenging game for RL. We design a TG and propose a training algorithm called TTG (Training with TG). As described on the right of Fig. 1, we first train the agent with an automated curriculum reinforcement learning (ACRL) algorithm in the TG, and then use the transfer learning (TL) algorithm fine-tune [11] to transfer it to the real game (RG) for further training.
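To make this two-stage procedure concrete, the following is a minimal Python sketch of the TTG control flow rather than the authors' implementation; the agent and environment interfaces, the collect_rollouts and win_rate helpers, the curriculum rule of raising the opponent difficulty once a win-rate threshold is passed, and the PyTorch-style weight copying are all illustrative assumptions.

# Minimal sketch of TTG (Training with TG): ACRL in the Thought-game, then
# fine-tuning in the real game. All interfaces below are illustrative
# assumptions, not the authors' code.

def train_with_tg(agent, thought_game, real_game, collect_rollouts, win_rate,
                  tg_iters=1000, rg_iters=200, curriculum_threshold=0.9):
    # Stage 1: automated curriculum RL (ACRL) in the hand-designed Thought-game,
    # raising the opponent difficulty whenever the agent clears the threshold.
    difficulty = 1
    for _ in range(tg_iters):
        batch = collect_rollouts(agent.policy_tg, thought_game, difficulty)
        agent.update(batch)                           # any policy-gradient update
        if win_rate(batch) > curriculum_threshold:
            difficulty = min(difficulty + 1, thought_game.max_difficulty)

    # Stage 2: initialize the real-game policy with the TG policy's weights
    # (the dashed line in Fig. 1; this direct copy assumes the TG and RG share
    # state and action spaces), then continue training in the real game.
    agent.policy_rg.load_state_dict(agent.policy_tg.state_dict())
    for _ in range(rg_iters):
        agent.update(collect_rollouts(agent.policy_rg, real_game, difficulty))
    return agent

The essential point is only the ordering: cheap, fast interaction with the TG comes first, and expensive interaction with the RG is used only for fine-tuning.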
We observe the unexpected result that, even though the TG is over-simplified and unrealistic, it drastically accelerates training. The agent learns a 99% win-rate on a 64×64 map against the most difficult non-cheating level-7 (L7) built-in AI in 1.08 hours, which is nearly 100 times faster than previous approaches. We also find that the TG is robust and helpful in more situations: it can be used to train agents of different races in SC2, which require different strategies, and it can be adapted when the environment changes. We also show that TG is more effective than previous MB-RL algorithms on SC2. We then present a TG hypothesis that gives the influence of different fidelity levels of TG.

Real games may have state and action spaces unequal to those of the TG, and transferring a policy to them then faces many training difficulties. We therefore propose a novel deep neural network (DNN) structure called XfrNet (“Xfr” stands for “transfer”) and validate its usefulness by experiments. Finally, we summarize the useful tricks for training agents on SC2. By applying the above improvements, our agent achieves good results at the three difficulty levels above 7 (all of which are cheating difficulties, meaning the built-in AI cheats to gain a large advantage over our agent): our win-rates against difficulties 8, 9, and 10 reach 95%, 94%, and 90%, respectively.

We argue that another contribution of TG is to improve the common pattern of applying RL to real-world tasks, which often proceeds as follows: first, a simulator for the real task is built; then, RL algorithms are applied in the simulator; finally, the trained policy is transferred to the real task. The motivation for such a process is that simulators are often lightweight and fast. However, the purpose of a simulator is to be as similar to the real task as possible, not to facilitate RL training; in contrast, TG is built for RL and can therefore provide much faster training.

Fig. 2: The improved pattern by introducing TG.

In summary, the main contributions of this paper include the following:
• A novel XfrNet for TG is proposed to handle the cases of transferring to an RG with unequal state and action spaces.
• We suggest a TG hypothesis that shows the influence of different fidelity levels of TG.
• TG bridges the gap between the traditional simulator and RL, which facilitates training on the real task.

II. BACKGROUND

In this section, we first present the definitions of RL and model-based RL. Then we discuss TL and curriculum learning. Finally, previous studies on SC1 and SC2 are given.

A. RL and Model-based RL

RL handles the sequential decision-making problem, which can be formulated as a Markov decision process (MDP) represented as a 6-tuple $\langle S, A, P(\cdot \mid s, a), R(\cdot), \gamma, T \rangle$. At each time step $t$ of an episode, the agent observes a state $s_t \in S$ and selects an action $a_t \in A$ according to a policy $\pi : s_t \rightarrow a_t$. The agent obtains a reward $r_t = R(s_t, a_t)$, and the environment transits to the next state $s_{t+1} \sim P(\cdot \mid s_t, a_t)$. $T$ is the maximum number of time steps in one episode. The return $G_t = \sum_{k=t}^{T} \gamma^{k-t} r_k$ is the discounted sum of future rewards, where $\gamma$ is the discount factor. The value function $V_\pi(s_t) = \mathbb{E}_\pi(G_t \mid s_t)$ is the expected return from $s_t$ while following policy $\pi$. The goal of RL is to find $\arg\max_\pi \mathbb{E}(V_\pi(s_0))$.
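As a small worked example of the return defined above (not code from the paper), the short Python function below computes $G_t$ for every step of a finished episode with a sparse win/loss reward.

# Compute the discounted return G_t = sum_{k=t}^{T} gamma^(k-t) * r_k for each
# step of one finished episode, working backwards so that G_t = r_t + gamma * G_{t+1}.

def discounted_returns(rewards, gamma=0.99):
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: a sparse +1 reward only at the final (winning) step of a 5-step episode.
print(discounted_returns([0.0, 0.0, 0.0, 0.0, 1.0]))
# -> approximately [0.9606, 0.9703, 0.9801, 0.99, 1.0]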
Transition functions $P(\cdot \mid s, a)$ (also called dynamics functions) are unknown in many MDPs. Depending on whether the transition function is used, RL can therefore be divided into model-free RL [3], [12], [13] and MB-RL [10], [14]. The advantage of model-free RL is that it does not need to learn the transition function, but its disadvantage is low sample efficiency. In contrast, MB-RL methods tend to be more sample-efficient, but they need a model.

B. TL and curriculum learning

Transfer learning [15] allows knowledge to be transferred from a source domain to a target domain, assuming similarities between them. TL has been used in RL [16], [17].
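As a minimal illustration of the fine-tune style of TL used in TTG (initializing the target-domain policy with the source-domain weights, as indicated by the dashed line in Fig. 1), the Python sketch below assumes two PyTorch policies with identical architectures; the layer sizes and learning rate are illustrative assumptions rather than values from the paper.

import torch
import torch.nn as nn

# Source-domain (TG) policy and target-domain (RG) policy with the same
# architecture; the shapes below are placeholders for illustration only.
policy_src = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 8))
policy_tgt = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 8))

# Fine-tune transfer: copy the source weights into the target policy, then
# continue RL training in the target domain, typically with a smaller
# learning rate than was used in the source domain.
policy_tgt.load_state_dict(policy_src.state_dict())
optimizer = torch.optim.Adam(policy_tgt.parameters(), lr=1e-4)
# ... target-domain updates would follow, stepping `optimizer` on `policy_tgt`.

When the state or action spaces of the TG and the RG differ, such a direct weight copy is no longer possible; that is the case the proposed XfrNet is designed to handle.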
