DEEP REINFORCEMENT LEARNING

Yuxi Li ([email protected])

ABSTRACT

We discuss deep reinforcement learning in an overview style. We draw a big picture, filled with details. We discuss six core elements, six important mechanisms, and twelve applications, focusing on contemporary work, in historical contexts. We start with background on artificial intelligence, machine learning, deep learning, and reinforcement learning (RL), with resources. Next we discuss RL core elements, including value function, policy, reward, model, exploration vs. exploitation, and representation. Then we discuss important mechanisms for RL, including attention and memory, unsupervised learning, hierarchical RL, multi-agent RL, relational RL, and learning to learn. After that, we discuss RL applications, including games, robotics, natural language processing (NLP), computer vision, finance, business management, healthcare, education, energy, transportation, computer systems, and science, engineering, and art. Finally, we summarize briefly, discuss challenges and opportunities, and close with an epilogue. [1]

Keywords: deep reinforcement learning, deep RL, algorithm, architecture, application, artificial intelligence, machine learning, deep learning, reinforcement learning, value function, policy, reward, model, exploration vs. exploitation, representation, attention, memory, unsupervised learning, hierarchical RL, multi-agent RL, relational RL, learning to learn, games, robotics, computer vision, natural language processing, finance, business management, healthcare, education, energy, transportation, computer systems, science, engineering, art

arXiv:1810.06339v1 [cs.LG] 15 Oct 2018

[1] Work in progress. Under review for Morgan & Claypool: Synthesis Lectures in Artificial Intelligence and Machine Learning. This manuscript is based on our previous deep RL overview (Li, 2017). It benefits from discussions with and comments from many people. Acknowledgements will appear in a later version.

CONTENTS

1 Introduction
2 Background
  2.1 Artificial Intelligence
  2.2 Machine Learning
  2.3 Deep Learning
  2.4 Reinforcement Learning
    2.4.1 Problem Setup
    2.4.2 Value Function
    2.4.3 Exploration vs. Exploitation
    2.4.4 Dynamic Programming
    2.4.5 Monte Carlo
    2.4.6 Temporal Difference Learning
    2.4.7 Multi-step Bootstrapping
    2.4.8 Model-based RL
    2.4.9 Function Approximation
    2.4.10 Policy Optimization
    2.4.11 Deep RL
    2.4.12 Brief Summary
  2.5 Resources

Part I: Core Elements

3 Value Function
  3.1 Deep Q-Learning
  3.2 Distributional Value Function
  3.3 General Value Function
4 Policy
  4.1 Policy Gradient
  4.2 Actor-Critic
  4.3 Trust Region Methods
  4.4 Policy Gradient with Off-Policy Learning
  4.5 Benchmark Results
5 Reward
6 Model
7 Exploration vs. Exploitation
8 Representation
  8.1 Classics
  8.2 Neural Networks
  8.3 Knowledge and Reasoning

Part II: Important Mechanisms

9 Attention and Memory
10 Unsupervised Learning
11 Hierarchical RL
12 Multi-Agent RL
13 Relational RL
14 Learning to Learn
  14.1 Few/One/Zero-Shot Learning
  14.2 Transfer/Multi-task Learning
  14.3 Learning to Optimize
  14.4 Learning to Reinforcement Learn
  14.5 Learning Combinatorial Optimization
  14.6 AutoML

Part III: Applications

15 Games
  15.1 Board Games
  15.2 Card Games
  15.3 Video Games
16 Robotics
  16.1 Sim-to-Real
  16.2 Imitation Learning
  16.3 Value-based Learning
  16.4 Policy-based Learning
  16.5 Model-based Learning
  16.6 Autonomous Driving Vehicles
17 Natural Language Processing
  17.1 Sequence Generation
  17.2 Machine Translation
  17.3 Dialogue Systems
18 Computer Vision
  18.1 Recognition
  18.2 Motion Analysis
  18.3 Scene Understanding
  18.4 Integration with NLP
  18.5 Visual Control
  18.6 Interactive Perception
19 Finance and Business Management
  19.1 Option Pricing
  19.2 Portfolio Optimization
  19.3 Business Management
20 More Applications
  20.1 Healthcare
  20.2 Education
  20.3 Energy
  20.4 Transportation
  20.5 Computer Systems
  20.6 Science, Engineering and Art
    20.6.1 Chemistry
    20.6.2 Mathematics
    20.6.3 Music
21 Discussions
  21.1 Brief Summary
  21.2 Challenges and Opportunities
  21.3 Epilogue

1 INTRODUCTION

Reinforcement learning (RL) is about an agent interacting with an environment, learning an optimal policy by trial and error, for sequential decision-making problems in a wide range of fields in the natural sciences, social sciences, and engineering (Sutton and Barto, 1998; 2018; Bertsekas and Tsitsiklis, 1996; Bertsekas, 2012; Szepesvári, 2010; Powell, 2011).

The integration of reinforcement learning and neural networks has a long history (Sutton and Barto, 2018; Bertsekas and Tsitsiklis, 1996; Schmidhuber, 2015). With the recent exciting achievements of deep learning (LeCun et al., 2015; Goodfellow et al., 2016), benefiting from big data, powerful computation, new algorithmic techniques, mature software packages and architectures, and strong financial support, we have been witnessing a renaissance of reinforcement learning (Krakovsky, 2016), especially the combination of deep neural networks and reinforcement learning, i.e., deep reinforcement learning (deep RL). [2]

[2] We choose to abbreviate deep reinforcement learning as "deep RL", since it is a branch of reinforcement learning, in particular using deep neural networks in reinforcement learning, and "RL" is a well-established abbreviation for reinforcement learning.
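To make the interaction loop concrete, the following minimal sketch shows the trial-and-error cycle described above: the agent observes a state, selects an action, receives a reward and the next state, and updates itself. This is an illustrative sketch only; the Gym-style reset/step environment interface and the placeholder RandomAgent are assumptions for the example, not part of this text.

import random

class RandomAgent:
    """Placeholder agent: ignores the state and acts uniformly at random.
    A learning agent would instead maintain a value function or a policy."""
    def __init__(self, actions):
        self.actions = actions

    def act(self, state):
        return random.choice(self.actions)

    def learn(self, state, action, reward, next_state, done):
        pass  # a learning agent updates its value function or policy here

def run_episode(env, agent):
    """One episode of the agent-environment interaction loop.
    Assumes a Gym-like environment with reset() and step(action)."""
    state = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = agent.act(state)
        next_state, reward, done = env.step(action)
        agent.learn(state, action, reward, next_state, done)
        total_reward += reward
        state = next_state
    return total_reward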
Deep learning, or deep neural networks, has been prevailing in reinforcement learning in the last several years, in games, robotics, natural language processing, etc. We have been witnessing breakthroughs, like deep Q-network (DQN) (Mnih et al., 2015), AlphaGo (Silver et al., 2016a; 2017), and DeepStack (Moravčík et al., 2017); each of them represents a big family of problems and a large number of applications. DQN (Mnih et al., 2015) is for single-player games, and single-agent control in general. DQN has implications for most algorithms and applications in RL, and it ignited this round of popularity of deep reinforcement learning. AlphaGo (Silver et al., 2016a; 2017) is for two-player perfect-information zero-sum games. AlphaGo achieved a phenomenal result on a very hard problem and set a landmark in AI. The success of AlphaGo influences similar games directly, and AlphaZero (Silver et al., 2017) has already achieved significant successes on chess and Shogi. The techniques underlying AlphaGo (Silver et al., 2016a) and AlphaGo Zero (Silver et al., 2017), namely, deep learning, reinforcement learning, Monte Carlo tree search (MCTS), and self-play, will have wider and further implications and applications. As recommended by the AlphaGo authors in their papers (Silver et al., 2016a; 2017), the following applications are worth further investigation: general game playing (in particular, video games), classical planning, partially observed planning, scheduling, constraint satisfaction, robotics, industrial control, online recommendation systems, protein folding, reducing energy consumption, and searching for revolutionary new materials.

DeepStack (Moravčík et al., 2017) is for two-player imperfect-information zero-sum games, a family of problems that are inherently hard to solve. DeepStack, similar to AlphaGo, also makes an extraordinary achievement on a hard problem, and sets a milestone in AI. It will have rich implications and wide applications, e.g., in defending strategic resources and in robust decision making for medical treatment recommendations (Moravčík et al., 2017).

We also see novel algorithms, architectures, and applications, like asynchronous methods (Mnih et al., 2016), trust region methods (Schulman et al., 2015; Schulman et al., 2017b; Nachum et al., 2018), deterministic policy gradient (Silver et al., 2014; Lillicrap et al., 2016), combining policy gradient with off-policy RL (Nachum et al., 2017; 2018; Haarnoja et al., 2018), interpolated policy gradient (Gu et al., 2017), unsupervised reinforcement and auxiliary learning (Jaderberg et al., 2017; Mirowski et al., 2017), hindsight experience replay (Andrychowicz et al., 2017), differentiable neural computer (Graves et al., 2016), neural architecture design (Zoph and Le, 2017), guided policy search (Levine et al., 2016), generative adversarial imitation learning (Ho and Ermon, 2016), multi-agent games (Jaderberg et al., 2018), hard-exploration Atari games (Aytar et al., 2018), StarCraft II (Sun et al., 2018; Pang et al., 2018), chemical syntheses planning (Segler et al., 2018), character animation (Peng et al., 2018a), dexterous robots (OpenAI, 2018), OpenAI Five for Dota 2, etc.

Creativity will push the frontiers of deep RL further with respect to core elements, important mechanisms, and applications, seemingly without a boundary. RL probably helps whenever a problem can be regarded as, or transformed into, a sequential decision-making problem.

Why has deep learning been helping reinforcement learning make so many and such enormous achievements? Representation learning with deep learning enables automatic feature engineering and end-to-end learning through gradient descent, so that reliance on domain knowledge is significantly reduced or even removed for some problems. Feature engineering used to be done manually, and is usually time-consuming, over-specified, and incomplete. Deep, distributed representations exploit the hierarchical composition of factors in data to combat the exponential challenges of the curse of dimensionality.
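As a minimal illustration of such end-to-end learning, the sketch below trains a small neural network Q-function by gradient descent toward the Q-learning target r + gamma * max_a' Q(s', a'), in the spirit of DQN (Mnih et al., 2015). It is a sketch under stated assumptions, not the published implementation: the network size, hyperparameters, and synthetic batch of transitions are made up for the example, and DQN's experience replay and separate target network are omitted for brevity.

import torch
import torch.nn as nn

state_dim, n_actions, gamma = 4, 2, 0.99

# A small multilayer perceptron maps a raw state vector to one Q-value per
# action; gradient descent learns the features end to end, with no manual
# feature engineering.
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def q_learning_step(states, actions, rewards, next_states, dones):
    """One gradient step toward the target r + gamma * max_a' Q(s', a')."""
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():  # targets are held fixed during the update
        targets = rewards + gamma * q_net(next_states).max(dim=1).values * (1.0 - dones)
    loss = nn.functional.mse_loss(q_sa, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# A synthetic batch of transitions (s, a, r, s', done), purely illustrative;
# in practice these would be sampled from a replay buffer of real experience.
batch = 32
loss = q_learning_step(
    torch.randn(batch, state_dim),        # states
    torch.randint(n_actions, (batch,)),   # actions
    torch.randn(batch),                   # rewards
    torch.randn(batch, state_dim),        # next states
    torch.zeros(batch),                   # done flags
)
print(f"TD loss on one synthetic batch: {loss:.4f}")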