
Automatic Data Augmentation for Generalization in Reinforcement Learning

Roberta Raileanu 1   Max Goldstein 1   Denis Yarats 1 2   Ilya Kostrikov 1   Rob Fergus 1

1 Department of Computer Science, New York University, New York, USA   2 Facebook AI Research, New York, USA. Correspondence to: Roberta Raileanu <[email protected]>.

Accepted at the Workshop on Inductive Biases, Invariances and Generalization, International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by the author(s).

Abstract

Deep reinforcement learning (RL) agents often fail to generalize to unseen scenarios, even when they are trained on many instances of semantically similar environments. Data augmentation has recently been shown to improve the sample efficiency and generalization of RL agents. However, different tasks tend to benefit from different kinds of data augmentation. In this paper, we compare three approaches for automatically finding an appropriate augmentation. These are combined with two novel regularization terms for the policy and value function, required to make the use of data augmentation theoretically sound for certain actor-critic algorithms. We evaluate our approach on the Procgen benchmark, which consists of 16 procedurally-generated environments, and show that it improves test performance by ∼40%.

1. Introduction

Generalization remains one of the key challenges of deep reinforcement learning (RL). A series of recent studies show that RL agents fail to generalize to new environments (Farebrother et al., 2018; Gamrian & Goldberg, 2019; Zhang et al., 2018; Cobbe et al., 2018; 2019; Song et al., 2020). This indicates that current RL methods memorize specific trajectories rather than learning transferable skills. To improve generalization in RL, several strategies have been proposed, such as the use of regularization (Farebrother et al., 2018; Zhang et al., 2018; Cobbe et al., 2018; Igl et al., 2019), data augmentation (Cobbe et al., 2018; Lee et al., 2020; Kostrikov et al., 2020; Laskin et al., 2020), or representation learning (Igl et al., 2019; Lee et al., 2020).

In this paper, we show that a naive application of data augmentation can lead to both theoretical and practical problems with standard RL algorithms. As a solution, we propose Data-regularized Actor-Critic, or DrAC, a new algorithm that enables the use of data augmentation with actor-critic algorithms in a theoretically sound way. Specifically, we introduce two regularization terms which constrain the agent's policy and value function to be invariant to various state transformations. Empirically, this method allows the agent to learn useful behaviors in settings in which a naive use of data augmentation completely fails or converges to a sub-optimal policy.

The current use of data augmentation in RL relies on expert knowledge to pick an appropriate augmentation (Lee et al., 2020; Kostrikov et al., 2020) or separately evaluates a large number of transformations to find the best one (Cobbe et al., 2018; Laskin et al., 2020). In this paper, we propose three methods for automatically finding a good augmentation for a given task. The first two automatically find an effective augmentation from a fixed set, using either the upper confidence bound algorithm (UCB, Auer (2002)) (UCB-DrAC) or meta-learning (Wang et al., 2016) (RL2-DrAC). The third method directly meta-learns the weights of a convolutional network, without access to predefined transformations (Meta-DrAC). Figure 1 gives an overview of UCB-DrAC.

We evaluate our method on the Procgen generalization benchmark (Cobbe et al., 2019), which consists of 16 procedurally-generated environments with visual observations. Our results show that UCB-DrAC is the most effective for automatically finding an augmentation, and is comparable to or better than using DrAC with the best augmentation from a given set. UCB-DrAC also outperforms baselines specifically designed to improve generalization in RL (Igl et al., 2019; Lee et al., 2020; Laskin et al., 2020) on both train and test environments.

Figure 1. Overview of UCB-DrAC. A UCB bandit selects an image transformation (e.g. random-conv) and applies it to observations. The augmented and original observations are passed to a modified actor-critic RL agent (DrAC), which uses them to improve generalization.

2. Proximal Policy Optimization

Proximal Policy Optimization (PPO, Schulman et al. (2017)) is an actor-critic algorithm that learns a policy πθ and a value function Vθ with the goal of finding an optimal policy for a given MDP. PPO alternates between sampling data through interaction with the environment and maximizing a clipped surrogate objective function. One component of the PPO objective is the policy gradient term, which is estimated using importance sampling:

$$\sum_{a \in \mathcal{A}} \pi_\theta(a \mid s)\, \hat{A}_{\theta_{\mathrm{old}}}(s, a) = \mathbb{E}_{a \sim \pi_{\theta_{\mathrm{old}}}}\left[ \frac{\pi_\theta(a \mid s)}{\pi_{\theta_{\mathrm{old}}}(a \mid s)}\, \hat{A}_{\theta_{\mathrm{old}}}(s, a) \right] \tag{1}$$

where Â(·) is an estimate of the advantage function, πθold is the behavior policy used to collect trajectories, and πθ is the policy we want to optimize.
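To make the importance-sampling term above concrete, the following is a minimal sketch of PPO's clipped surrogate loss in PyTorch. The function and argument names (`ppo_policy_loss`, `new_log_probs`, `old_log_probs`, `advantages`, `clip_eps`) are illustrative, not the paper's implementation.

```python
import torch

def ppo_policy_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Sketch of PPO's clipped surrogate objective.

    new_log_probs: log pi_theta(a|s) under the current policy.
    old_log_probs: log pi_theta_old(a|s) from the behavior policy that
                   collected the trajectories (detached, no gradient).
    advantages:    advantage estimates A_hat_{theta_old}(s, a).
    """
    # Importance sampling ratio pi_theta(a|s) / pi_theta_old(a|s) from Eq. (1).
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the clipped surrogate; return its negative for a minimizer.
    return -torch.min(unclipped, clipped).mean()
```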
3. Automatic Data Augmentation for RL

3.1. Data Augmentation in RL

Image augmentation has been successfully applied in computer vision for improving generalization on object classification tasks (Simard et al., 2003; Cireşan et al., 2011; Ciregan et al., 2012; Krizhevsky et al., 2012). However, as noted by Kostrikov et al. (2020), these tasks are invariant to certain image transformations such as rotations or flips. In contrast, RL tasks are not always invariant to such augmentations (Kostrikov et al., 2020).

If transformations are naively applied to (some of the) observations in PPO's buffer, as done in Cobbe et al. (2018); Laskin et al. (2020), the PPO objective changes and equation (1) is replaced by

$$\sum_{a \in \mathcal{A}} \pi_\theta(a \mid s)\, \hat{A}_{\theta_{\mathrm{old}}}(s, a) = \mathbb{E}_{a \sim \pi_{\theta_{\mathrm{old}}}}\left[ \frac{\pi_\theta(a \mid f(s))}{\pi_{\theta_{\mathrm{old}}}(a \mid s)}\, \hat{A}_{\theta_{\mathrm{old}}}(s, a) \right] \tag{2}$$

where f : S × H → S is the image transformation. However, the right-hand side of the above equation is not a sound estimate of the left-hand side because πθ(a|f(s)) ≠ πθ(a|s), since nothing constrains πθ(a|f(s)) to be close to πθ(a|s). Moreover, one can define certain transformations f(·) that result in an arbitrarily large ratio πθ(a|f(s)) / πθ(a|s).

3.2. Policy and Value Function Regularization

Following Kostrikov et al. (2020), we define an optimality-invariant state transformation f : S × H → S as a mapping that preserves both the agent's policy π and its value function V such that V(s) = V(f(s, ν)) and π(a|s) = π(a|f(s, ν)), ∀s ∈ S, ν ∈ H, where ν are the parameters of f(·), drawn from the set of all possible parameters H.

To ensure that the policy and value function are invariant to these transformations of the input state, we propose an additional loss term for regularizing the policy,

$$G_\pi = \mathrm{KL}\left[ \pi_\theta(a \mid f(s, \nu)) \,\|\, \pi(a \mid s) \right] \tag{3}$$

as well as an extra loss term for regularizing the value function,

$$G_V = \left( V_\theta(f(s, \nu)) - V(s) \right)^2 \tag{4}$$

Thus, our data-regularized actor-critic method, or DrAC, maximizes the following objective:

$$J_{\mathrm{DrAC}} = J_{\mathrm{PPO}} - \alpha_r \left( G_\pi + G_V \right) \tag{5}$$

where αr is the weight of the regularization term.

The use of Gπ and GV ensures that the agent's policy and value function are invariant to the transformations induced by various augmentations. Particular transformations can be used to impose certain inductive biases relevant for the task (e.g. invariance with respect to colors or textures). In addition, Gπ and GV can be added to the objective of any actor-critic algorithm with a discrete stochastic policy (e.g. A3C (Mnih et al., 2013), TRPO (Schulman et al., 2015), SAC (Haarnoja et al., 2018), IMPALA (Espeholt et al., 2018)) without any other changes.

Note that when using DrAC, as opposed to the method proposed by Laskin et al. (2020), we still use the correct importance sampling estimate of the left-hand side objective in equation (1). This is because the transformed observations f(s) are only used to compute the regularization losses Gπ and GV, and are not used for the main PPO objective. Hence, DrAC benefits from the regularizing effect of using data augmentation, while mitigating the adverse effects it has on the RL objective.
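The sketch below illustrates how the two regularizers in equations (3) and (4) could be computed in PyTorch. It assumes an actor-critic module whose forward pass returns a categorical policy distribution and a value estimate, and it treats the policy and value on the original observations as fixed targets; the names `agent`, `augment`, and `nu` are illustrative assumptions rather than the paper's code.

```python
import torch
import torch.nn.functional as F

def drac_regularizers(agent, obs, augment, nu=None):
    """Sketch of the DrAC regularization terms G_pi (Eq. 3) and G_V (Eq. 4).

    agent:   actor-critic module; agent(obs) -> (Categorical distribution, value).
    obs:     batch of original observations from PPO's buffer.
    augment: transformation f(s, nu) applied to the observations.
    """
    # Policy and value on the original observations serve as fixed targets.
    with torch.no_grad():
        dist, value = agent(obs)
    aug_dist, aug_value = agent(augment(obs, nu))

    # G_pi: KL between the policy on augmented and on original observations.
    g_pi = torch.distributions.kl_divergence(aug_dist, dist).mean()
    # G_V: squared error between value estimates on augmented and original obs.
    g_v = F.mse_loss(aug_value, value)
    return g_pi, g_v

# DrAC objective (Eq. 5): maximize J_PPO - alpha_r * (G_pi + G_V),
# i.e. add alpha_r * (g_pi + g_v) to the PPO loss being minimized.
```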
3.3. Automatic Data Augmentation

Since different tasks benefit from different kinds of data augmentation, we would like to design a method that can automatically find an effective augmentation for any given task. In this section, we describe three approaches for doing this. In all of them, the augmentation learner is trained at the same time as the agent learns to solve the task using DrAC. Hence, the distribution of rewards varies significantly as the agent improves, making the problem highly nonstationary.

Upper Confidence Bound. The problem of selecting a data augmentation from a given set can be formulated as a multi-armed bandit problem, where the action space is the set of available transformations F = {f1, ..., fn}. A popular algorithm for multi-armed bandit problems is the Upper Confidence Bound or UCB (Auer, 2002), which selects actions according to the following policy (a minimal implementation sketch is given at the end of this section):

$$f_t = \arg\max_{f \in \mathcal{F}} \left[ Q_t(f) + c \sqrt{\frac{\log(t)}{N_t(f)}} \right]$$

where Qt(f) is an estimate of the return obtained after training with augmentation f, Nt(f) is the number of times transformation f has been selected before time step t, and c is the exploration coefficient. We refer to this approach as UCB-DrAC.

Meta-Learning the Selection of an Augmentation. Alternatively, the selection of an augmentation from the given set can be meta-learned using RL2 (Wang et al., 2016). We refer to this approach as RL2-DrAC.

Meta-Learning the Weights of an Augmentation. Our third approach directly meta-learns the weights of a convolutional network that is applied to the observations to produce a perturbed image (with the same dimensions as the original observation). We meta-learn the weights of this network using an approach similar to the one proposed by Finn et al. (2017), which we implement using the higher library (Grefenstette et al., 2019). For each agent update, we also perform a meta-update of the transformation function by splitting PPO's buffer into training and validation sets. We refer to this approach as Meta-DrAC.
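As a concrete illustration of the UCB selection rule above, here is a minimal sketch of a bandit over a fixed set of augmentations. The class name, the running-mean estimate of Qt(f), and the default exploration coefficient are illustrative choices under the stated assumptions, not the exact UCB-DrAC implementation.

```python
import math

class UCBAugmentationSelector:
    """Sketch of a UCB bandit over a fixed set of augmentations F = {f_1, ..., f_n}."""

    def __init__(self, augmentations, c=0.1):
        self.augmentations = augmentations          # the candidate transformations
        self.c = c                                  # exploration coefficient
        self.counts = [0] * len(augmentations)      # N_t(f)
        self.q_values = [0.0] * len(augmentations)  # running estimate of Q_t(f)
        self.t = 0

    def select(self):
        """Return the index of the augmentation chosen by the UCB rule."""
        self.t += 1
        # Try every augmentation once before applying the UCB rule.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        ucb = [
            q + self.c * math.sqrt(math.log(self.t) / n)
            for q, n in zip(self.q_values, self.counts)
        ]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, index, episode_return):
        """Update the running mean return obtained after using augmentation f_index."""
        self.counts[index] += 1
        n = self.counts[index]
        self.q_values[index] += (episode_return - self.q_values[index]) / n
```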
4. Experiments

See Appendix A for a full description of the baselines, experimental setup, and hyperparameters used.