
The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)

Improving Sample Efficiency in Model-Free Reinforcement Learning from Images

Denis Yarats,¹,² Amy Zhang,²,³,⁴ Ilya Kostrikov,¹ Brandon Amos,² Joelle Pineau,²,³,⁴ Rob Fergus¹
¹New York University  ²Facebook AI Research  ³McGill University  ⁴MILA
[email protected]

Abstract

Training an agent to solve control tasks directly from high-dimensional images with model-free reinforcement learning (RL) has proven difficult. A promising approach is to learn a latent representation together with the control policy. However, fitting a high-capacity encoder using a scarce reward signal is sample inefficient and leads to poor performance. Prior work has shown that auxiliary losses, such as image reconstruction, can aid efficient representation learning. However, incorporating a reconstruction loss into an off-policy learning algorithm often leads to training instability. We explore the underlying reasons and identify variational autoencoders, used by previous investigations, as the cause of the divergence. Following these findings, we propose effective techniques to improve training stability. This results in a simple approach capable of matching state-of-the-art model-free and model-based algorithms on MuJoCo control tasks. Furthermore, our approach demonstrates robustness to observational noise, surpassing existing approaches in this setting. Code, results, and videos are anonymously available at https://sites.google.com/view/sac-ae/home.

1 Introduction

Cameras are a convenient and inexpensive way to acquire state information, especially in complex, unstructured environments, where effective control requires access to the proprioceptive state of the underlying dynamics. Thus, having effective RL approaches that can utilize pixels as input would potentially enable solutions for a wide range of real-world applications, for example robotics.

The challenge is to efficiently learn a mapping from pixels to an appropriate representation for control using only a sparse reward signal. Although deep convolutional encoders can learn good representations (upon which a policy can be trained), they require large amounts of training data. As existing reinforcement learning approaches already have poor sample complexity, this makes direct use of pixel-based inputs prohibitively slow. For example, model-free methods on Atari (Bellemare et al. 2013) and DeepMind Control (DMC) (Tassa et al. 2018) take tens of millions of steps (Mnih et al. 2013; Barth-Maron et al. 2018), which is impractical in many applications, especially robotics.

Some natural solutions to improve sample efficiency are i) to use off-policy methods and ii) to add an auxiliary task with an unsupervised objective. Off-policy methods enable more efficient sample re-use, while the simplest auxiliary task is an autoencoder with a pixel reconstruction objective. Prior work has attempted to learn state representations from pixels with autoencoders using a two-step training procedure, where the representation is first trained via the autoencoder, and then either a policy is learned on top of the fixed representation (Lange and Riedmiller 2010; Munk, Kober, and Babuska 2016; Higgins et al. 2017b; Zhang, Satija, and Pineau 2018; Nair et al. 2018; Dwibedi et al. 2018) or the representation is used for planning (Mattner, Lange, and Riedmiller 2012; Finn et al. 2015). This allows for additional stability in optimization by circumventing dueling training objectives, but leads to sub-optimal policies. Other work utilizes continual model-free learning with an auxiliary reconstruction signal in an on-policy manner (Jaderberg et al. 2017; Shelhamer et al. 2016). However, these methods either do not report results on learning representations and a policy jointly in the off-policy setting, or note that it performs poorly (Shelhamer et al. 2016).

We revisit the concept of adding an autoencoder to model-free RL approaches, with a focus on off-policy algorithms. We perform a sequence of careful experiments to understand why previous approaches did not work well. We confirm that a pixel reconstruction loss is vital for learning a good representation, specifically when trained jointly, but requires careful design choices to succeed. Based on these findings, we recommend a simple and effective autoencoder-based off-policy method that can be trained end-to-end. We believe this to be the first model-free off-policy approach to train the latent state representation and policy jointly and match performance with state-of-the-art model-based methods¹ (Hafner et al. 2018; Lee et al. 2019) on many challenging control tasks. In addition, we demonstrate robustness to observational noise and outperform prior methods in this more practical setup.

This paper makes three main contributions: (i) a methodical study of the issues involved with combining autoencoders with model-free RL in the off-policy setting that advises a successful variant we call SAC+AE; (ii) a demonstration of the robustness of our model-free approach over model-based methods on tasks with noisy observations; and (iii) an open-source PyTorch implementation of our simple and effective algorithm for researchers and practitioners to build upon.

¹We define model-based methods as those that train a dynamics model. By this definition, SLAC (Lee et al. 2019) is a model-based method.
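To make the end-to-end recipe described above concrete, the following is a minimal PyTorch-style sketch (not the paper's released SAC+AE implementation) of a convolutional encoder shared between an off-policy critic and a pixel decoder, updated jointly from a replayed batch with a critic loss plus a reconstruction loss. All names and sizes here (PixelEncoder, PixelDecoder, joint_update, the 84x84 input, the 50-dimensional latent) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelEncoder(nn.Module):
    """Maps an 84x84 RGB observation to a low-dimensional latent vector."""

    def __init__(self, latent_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
        )
        self.fc = nn.Linear(32 * 39 * 39, latent_dim)

    def forward(self, obs):
        h = self.conv(obs).flatten(start_dim=1)
        return torch.tanh(self.fc(h))


class PixelDecoder(nn.Module):
    """Reconstructs the observation from the latent vector."""

    def __init__(self, latent_dim=50):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 32 * 39 * 39)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(32, 32, 3, stride=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 3, stride=2, output_padding=1),
        )

    def forward(self, z):
        h = F.relu(self.fc(z)).view(-1, 32, 39, 39)
        return self.deconv(h)


class Critic(nn.Module):
    """Q-function defined on top of the shared latent representation."""

    def __init__(self, latent_dim=50, action_dim=6):
        super().__init__()
        self.q = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, z, action):
        return self.q(torch.cat([z, action], dim=-1))


encoder, decoder, critic = PixelEncoder(), PixelDecoder(), Critic()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()) + list(critic.parameters()),
    lr=1e-3,
)


def joint_update(obs, action, target_q, recon_weight=1.0):
    """One gradient step on the critic loss plus the pixel reconstruction loss."""
    z = encoder(obs)
    critic_loss = F.mse_loss(critic(z, action), target_q)
    recon_loss = F.mse_loss(decoder(z), obs)        # auxiliary reconstruction objective
    loss = critic_loss + recon_weight * recon_loss  # both losses update the shared encoder
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Dummy replayed batch of 8 transitions; target_q would come from a Bellman backup.
obs = torch.rand(8, 3, 84, 84)
action = torch.rand(8, 6)
target_q = torch.rand(8, 1)
print(joint_update(obs, action, target_q))

A complete agent would additionally maintain an actor, target networks, and the Bellman backup that produces target_q; those pieces are omitted here to keep the focus on the shared encoder receiving gradients from both objectives.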
2 Related Work

Efficient learning from high-dimensional pixel observations has been a problem of paramount importance for model-free RL. While some impressive progress has been made applying model-free RL to domains with simple dynamics and discrete action spaces (Mnih et al. 2013), attempts to scale these approaches to complex continuous control environments have largely been unsuccessful, both in simulation and the real world. A glaring issue is that the RL signal is much sparser than in supervised learning, which leads to sample inefficiency, and higher-dimensional observation spaces such as pixels worsen this problem.

One approach to alleviate this problem is training with auxiliary losses. Early work (Lange and Riedmiller 2010) explores using deep autoencoders to learn feature spaces in visual reinforcement learning; crucially, Lange and Riedmiller (2010) propose to recompute features for all collected experiences after each update of the autoencoder, rendering this approach impractical to scale to more complicated domains. Moreover, this method has only been demonstrated on toy problems. Alternatively, Finn et al. (2015) apply deep autoencoder pretraining to real-world robots in a way that does not require iterative re-training, improving upon the computational complexity of earlier methods. However, in this work the linear policy is trained separately from the autoencoder, which we find does not perform as well as end-to-end methods.

Shelhamer et al. (2016) employ auxiliary losses to enhance the performance of A3C (Mnih et al. 2016) on Atari. They recommend a multi-task setting and learning dynamics and reward to find a good representation, which relies on the assumption that the dynamics in the task are easy to learn and useful for learning a good policy. To prevent instabilities in learning, Shelhamer et al. (2016) pre-train the agent on randomly collected transitions and then perform joint optimization of the policy and auxiliary losses. Importantly, the learning is done completely on-policy: the policy loss is computed from rollouts while the auxiliary losses use sampled transitions. Moreover, auxiliary tasks of this kind rely on strong inductive biases that assume that dramatically changing pixel values and textures are correlated with good exploration and reward. Unfortunately, such highly task-specific auxiliary objectives are unlikely to scale to real-world applications.

Generic pixel reconstruction is explored in Higgins et al. (2017b) and Nair et al. (2018), where the authors use a beta variational autoencoder (β-VAE) (Kingma and Welling 2013; Higgins et al. 2017a) and attempt to perform joint representation learning, but find it hard to train, thus reverting to the alternating training procedure (Lange and Riedmiller 2010; Finn et al. 2015).
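For reference, the β-VAE objective mentioned above is commonly written as follows; the encoder q_φ(z|x), decoder p_θ(x|z), and prior p(z) notation is standard but is not defined in this excerpt:

\[
\mathcal{L}_{\beta\text{-VAE}}(\theta, \phi; x) \;=\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; \beta \, D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big),
\]

which is maximized with respect to θ and φ. Setting β = 1 recovers the standard VAE of Kingma and Welling (2013), while β > 1 increases the weight of the KL term and more strongly regularizes the latent space (Higgins et al. 2017a).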
There has been more success in using model learning methods on images, such as Hafner et al. (2018) and Lee et al. (2019). These methods use a world-model approach (Ha and Schmidhuber 2018), learning a representation space using a latent dynamics loss and a pixel decoder loss to ground it in the original observation space. These model-based reinforcement learning methods often show improved sample efficiency, but with the additional complexity of balancing various auxiliary losses, such as a dynamics loss, a reward loss, and a decoder loss, in addition to the original policy and value optimizations. These proposed methods are correspondingly brittle to hyperparameter settings and difficult to reproduce, as they balance multiple training objectives.

3 Background

3.1 Markov Decision Process

A fully observable Markov decision process (MDP) can be described as $\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, P, R, \gamma \rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $P(s_{t+1} \mid s_t, a_t)$ is the transition probability distribution, $R(s_t, a_t)$ is the reward function, and $\gamma$ is the discount factor (Bellman 1957). An agent starts in an initial state $s_1$ sampled from a fixed distribution $p(s_1)$; then at each timestep $t$ it takes an action $a_t \in \mathcal{A}$ from a state $s_t \in \mathcal{S}$ and moves to a next state $s_{t+1} \sim P(\cdot \mid s_t, a_t)$. After each action the agent receives a reward $r_t = R(s_t, a_t)$. We consider episodic environments with the length fixed to $T$. The goal of standard RL is to learn a policy $\pi(a_t \mid s_t)$ that maximizes the agent's expected cumulative reward $\sum_{t=1}^{T} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}[r_t]$, where $\rho_\pi$ is the discounted state-action visitation distribution of $\pi$, also known as its occupancy measure. An important modification (Ziebart et al. 2008) augments this objective with an entropy term $\mathcal{H}(\pi(\cdot \mid s_t))$ to encourage exploration and robustness to noise.
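Written out explicitly, the entropy-augmented objective referenced above is the maximum-entropy RL objective; the temperature $\alpha$ weighting the entropy bonus follows the usual soft actor-critic convention and is an assumption not introduced in this excerpt:

\[
\max_\pi \; \sum_{t=1}^{T} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\Big[\, r_t \;+\; \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big].
\]

With $\alpha = 0$ this reduces to the standard RL objective stated above.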