Hierarchical Reinforcement Learning Via Advantage-Weighted Information Maximization

Published as a conference paper at ICLR 2019

Takayuki Osa, University of Tokyo, Tokyo, Japan; RIKEN AIP, Tokyo, Japan ([email protected])
Voot Tangkaratt, RIKEN AIP, Tokyo, Japan ([email protected])
Masashi Sugiyama, RIKEN AIP, Tokyo, Japan; University of Tokyo, Tokyo, Japan ([email protected])

ABSTRACT

Real-world tasks are often highly structured. Hierarchical reinforcement learning (HRL) has attracted research interest as an approach for leveraging the hierarchical structure of a given task in reinforcement learning (RL). However, identifying the hierarchical policy structure that enhances the performance of RL is not a trivial task. In this paper, we propose an HRL method that learns a latent variable of a hierarchical policy using mutual information maximization. Our approach can be interpreted as a way to learn a discrete and latent representation of the state-action space. To learn option policies that correspond to modes of the advantage function, we introduce advantage-weighted importance sampling. In our HRL method, the gating policy learns to select option policies based on an option-value function, and these option policies are optimized based on the deterministic policy gradient method. This framework is derived by leveraging the analogy between a monolithic policy in standard RL and a hierarchical policy in HRL by using a deterministic option policy. Experimental results indicate that our HRL approach can learn a diversity of options and that it can enhance the performance of RL in continuous control tasks.

1 INTRODUCTION

Reinforcement learning (RL) has been successfully applied to a variety of tasks, including board games (Silver et al., 2016), robotic manipulation tasks (Levine et al., 2016), and video games (Mnih et al., 2015). Hierarchical reinforcement learning (HRL) is a type of RL that leverages the hierarchical structure of a given task by learning a hierarchical policy (Sutton et al., 1999; Dietterich, 2000). Past studies in this field have shown that HRL can solve challenging tasks in the video game domain (Vezhnevets et al., 2017; Bacon et al., 2017) and in robotic manipulation (Daniel et al., 2016; Osa et al., 2018b). In HRL, lower-level policies, which are often referred to as option policies, learn different behavior/control patterns, and the upper-level policy, which is often referred to as the gating policy, learns to select option policies. Recent studies have developed HRL methods using deep learning (Goodfellow et al., 2016) and have shown that HRL can yield impressive performance for complex tasks (Bacon et al., 2017; Frans et al., 2018; Vezhnevets et al., 2017; Haarnoja et al., 2018a). However, identifying the hierarchical policy structure that yields efficient learning is not a trivial task, since the problem involves learning a sufficient variety of types of behavior to solve a given task.

In this study, we present an HRL method based on mutual information (MI) maximization with advantage-weighted importance, which we refer to as adInfoHRL. We formulate the problem of learning a latent variable in a hierarchical policy as one of learning discrete and interpretable representations of states and actions. Ideally, each option policy should be located at separate modes of the advantage function. To estimate the latent variable that corresponds to modes of the advantage function, we introduce advantage-weighted importance weights.
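To give a rough, hedged sense of what such importance weights do, the short Python sketch below assigns each sampled state-action pair a weight that grows with its estimated advantage, so that pairs near modes of the advantage function dominate a weighted objective. The exponential transform and the temperature `beta` are illustrative assumptions made here for the sketch, not the weighting scheme actually derived in this paper.

```python
import numpy as np

def advantage_weights(advantages, beta=1.0):
    """Illustrative advantage-based importance weights (an assumption).

    The exact weighting used by adInfoHRL is not defined in this section;
    here we simply take weights proportional to the exponentiated advantage,
    normalized over the batch, as a stand-in.
    """
    # Subtract the max before exponentiating for numerical stability.
    w = np.exp(beta * (advantages - advantages.max()))
    return w / w.sum()

# Pairs with larger estimated advantage receive larger weight.
print(advantage_weights(np.array([-1.0, 0.5, 2.0])))
```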
Our approach can be viewed as dividing the state-action space based on an information maximization criterion, and it learns option policies corresponding to each region of the state-action space. We derive adInfoHRL as an HRL method based on deterministic option policies that are trained with an extension of the deterministic policy gradient (Silver et al., 2014; Fujimoto et al., 2018). The contributions of this paper are twofold:

1. We propose learning a latent variable of a hierarchical policy as a discrete and hidden representation of the state-action space. To learn option policies that correspond to the modes of the advantage function, we introduce advantage-weighted importance.

2. We propose an HRL method in which the option policies are optimized based on the deterministic policy gradient and the gating policy selects the option that maximizes the expected return.

The experimental results show that our proposed method, adInfoHRL, can learn a diversity of options on continuous control tasks. Moreover, our approach can improve the performance of TD3 on tasks such as the Walker2d and Ant tasks in OpenAI Gym with the MuJoCo simulator.

2 BACKGROUND

In this section, we formulate the HRL problem addressed in this paper and describe methods related to our proposal.

2.1 HIERARCHICAL REINFORCEMENT LEARNING

We consider tasks that can be modeled as a Markov decision process (MDP), consisting of a state space $\mathcal{S}$, an action space $\mathcal{A}$, a reward function $r: \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$, an initial state distribution $\rho(s_0)$, and a transition probability $p(s_{t+1} \mid s_t, a_t)$ that defines the probability of transitioning from state $s_t$ and action $a_t$ at time $t$ to next state $s_{t+1}$. The return is defined as $R_t = \sum_{i=t}^{T} \gamma^{i-t} r(s_i, a_i)$, where $\gamma$ is a discount factor, and the policy $\pi(a|s)$ is defined as the density of action $a$ given state $s$. Let $d^{\pi}(s) = \sum_{t=0}^{T} \gamma^{t} p(s_t = s)$ denote the discounted visitation frequency induced by the policy $\pi$. The goal of reinforcement learning is to learn a policy that maximizes the expected return $J(\pi) = \mathbb{E}_{s_0, a_0, \ldots}[R_0]$, where $s_0 \sim \rho(s_0)$, $a \sim \pi$, and $s_{t+1} \sim p(s_{t+1} \mid s_t, a_t)$. By defining the Q-function as $Q^{\pi}(s, a) = \mathbb{E}_{s_0, a_0, \ldots}[R_t \mid s_t = s, a_t = a]$, the objective function of reinforcement learning can be rewritten as follows:

    $J(\pi) = \iint d^{\pi}(s)\, \pi(a|s)\, Q^{\pi}(s, a)\, \mathrm{d}a\, \mathrm{d}s.$    (1)

Herein, we consider the hierarchical policy $\pi(a|s) = \sum_{o \in \mathcal{O}} \pi(o|s)\, \pi(a|s, o)$, where $o$ is the latent variable and $\mathcal{O}$ is the set of possible values of $o$. Many existing HRL methods employ a policy structure of this form (Frans et al., 2018; Vezhnevets et al., 2017; Bacon et al., 2017; Florensa et al., 2017; Daniel et al., 2016). In general, the latent variable $o$ can be discrete (Frans et al., 2018; Bacon et al., 2017; Florensa et al., 2017; Daniel et al., 2016; Osa & Sugiyama, 2018) or continuous (Vezhnevets et al., 2017). $\pi(o|s)$ is often referred to as a gating policy (Daniel et al., 2016; Osa & Sugiyama, 2018), policy over options (Bacon et al., 2017), or manager (Vezhnevets et al., 2017). Likewise, $\pi(a|s, o)$ is often referred to as an option policy (Osa & Sugiyama, 2018), sub-policy (Daniel et al., 2016), or worker (Vezhnevets et al., 2017). In HRL, the objective function is given by

    $J(\pi) = \iint d^{\pi}(s) \sum_{o \in \mathcal{O}} \pi(o|s)\, \pi(a|s, o)\, Q^{\pi}(s, a)\, \mathrm{d}a\, \mathrm{d}s.$    (2)

As discussed in the literature on inverse RL (Ziebart, 2010), multiple policies can yield equivalent expected returns. This indicates that there exist multiple solutions for the latent variable $o$ that maximize the expected return. To obtain a preferable solution for $o$, we need to impose additional constraints in HRL.
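As a concrete illustration of the hierarchical policy in Eq. (2), the minimal Python sketch below samples an action by first drawing an option $o$ from the gating policy $\pi(o|s)$ and then an action from the selected option policy $\pi(a|s, o)$. The linear-softmax gating and Gaussian option policies are illustrative assumptions for the sketch, not the architectures used in this paper.

```python
import numpy as np

# Toy dimensions; in practice the policies would be neural networks.
n_options, state_dim, action_dim = 4, 3, 2
rng = np.random.default_rng(0)

def gating_policy(s, logits_w):
    """pi(o|s): softmax over options, linear in the state (an assumption)."""
    logits = logits_w @ s                      # shape: (n_options,)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def option_policy(s, o, mu_w, sigma=0.1):
    """pi(a|s, o): Gaussian around a per-option deterministic mean mu_o(s)."""
    mean = mu_w[o] @ s                         # shape: (action_dim,)
    return mean + sigma * rng.normal(size=action_dim)

def sample_action(s, logits_w, mu_w):
    """Sample from pi(a|s) = sum_o pi(o|s) pi(a|s, o): first o, then a."""
    p_o = gating_policy(s, logits_w)
    o = rng.choice(n_options, p=p_o)
    return option_policy(s, o, mu_w), o

# Example usage with random parameters.
logits_w = rng.normal(size=(n_options, state_dim))
mu_w = rng.normal(size=(n_options, action_dim, state_dim))
a, o = sample_action(rng.normal(size=state_dim), logits_w, mu_w)
print("selected option:", o, "action:", a)
```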
Although prior work has employed regularizers (Bacon et al., 2017) and constraints (Daniel et al., 2016) to obtain diverse option policies, how to learn a latent variable $o$ that improves the sample efficiency of the learning process remains unclear. In this study, we propose to learn the latent variable by maximizing the MI between latent variables and state-action pairs.

2.2 DETERMINISTIC POLICY GRADIENT

The deterministic policy gradient (DPG) algorithm was developed by Silver et al. (2014) for learning a monolithic deterministic policy $\mu_{\theta}(s): \mathcal{S} \mapsto \mathcal{A}$. In off-policy RL, the objective is to maximize the expectation of the return, averaged over the state distribution induced by a behavior policy $\beta(a|s)$:

    $J(\pi) = \iint d^{\beta}(s)\, \pi(a|s)\, Q^{\pi}(s, a)\, \mathrm{d}a\, \mathrm{d}s.$    (3)

When the policy is deterministic, the objective becomes $J(\pi) = \int d^{\beta}(s)\, Q^{\pi}\bigl(s, \mu_{\theta}(s)\bigr)\, \mathrm{d}s$. Silver et al. (2014) have shown that the gradient of a deterministic policy is given by

    $\nabla_{\theta} \mathbb{E}_{s \sim d^{\beta}(s)}\bigl[Q^{\pi}(s, a)\bigr] = \mathbb{E}_{s \sim d^{\beta}(s)}\Bigl[\nabla_{\theta} \mu_{\theta}(s)\, \nabla_{a} Q^{\pi}(s, a)\big|_{a = \mu_{\theta}(s)}\Bigr].$    (4)

The DPG algorithm has been extended to the deep deterministic policy gradient (DDPG) for continuous control problems that require neural network policies (Lillicrap et al., 2016). The Twin Delayed Deep Deterministic policy gradient algorithm (TD3) proposed by Fujimoto et al. (2018) is a variant of DDPG that outperforms state-of-the-art on-policy methods such as TRPO (Schulman et al., 2017a) and PPO (Schulman et al., 2017b) in certain domains. We extend this deterministic policy gradient to learn a hierarchical policy.

2.3 REPRESENTATION LEARNING VIA INFORMATION MAXIMIZATION

Recent studies such as those by Chen et al. (2016), Hu et al. (2017), and Li et al. (2017) have shown that an interpretable representation can be learned by maximizing MI. Given a dataset $X = (x_1, \ldots, x_n)$, regularized information maximization (RIM) proposed by Gomes et al. (2010) involves learning a conditional model $\hat{p}(y \mid x; \eta)$ with parameter vector $\eta$ that predicts a label $y$. The objective of RIM is to minimize

    $\ell(\eta) - \lambda I_{\eta}(x; y),$    (5)

where $\ell(\eta)$ is a regularization term, $I_{\eta}(x; y)$ is the MI, and $\lambda$ is a coefficient. The MI can be decomposed as $I_{\eta}(x; y) = H(y) - H(y \mid x)$, where $H(y)$ is the entropy and $H(y \mid x)$ the conditional entropy. Increasing $H(y)$ encourages the labels to be uniformly distributed, while decreasing $H(y \mid x)$ encourages clear cluster assignments.
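To make the decomposition $I_{\eta}(x; y) = H(y) - H(y \mid x)$ concrete, the sketch below estimates both entropy terms from a batch of predicted label distributions and assembles the RIM objective of Eq. (5). The regularizer passed in as `reg` and the coefficient value are placeholders; their concrete forms are not fixed in this section.

```python
import numpy as np

def mutual_information(p_y_given_x, eps=1e-12):
    """Empirical MI term used in RIM: I(x; y) = H(y) - H(y|x).

    p_y_given_x: array of shape (n_samples, n_labels); each row is the
    predicted label distribution p(y | x_i) of the conditional model.
    """
    p_y = p_y_given_x.mean(axis=0)                 # marginal label distribution
    h_y = -np.sum(p_y * np.log(p_y + eps))         # H(y): favors balanced labels
    h_y_given_x = -np.mean(                        # H(y|x): favors confident assignments
        np.sum(p_y_given_x * np.log(p_y_given_x + eps), axis=1)
    )
    return h_y - h_y_given_x

def rim_objective(p_y_given_x, reg, lam=1.0):
    """Objective (5): minimize l(eta) - lambda * I(x; y).

    `reg` stands in for the regularization term l(eta), e.g. an L2 penalty
    on the model parameters (the concrete choice is an assumption here).
    """
    return reg - lam * mutual_information(p_y_given_x)

# Example: confident, balanced cluster assignments yield high MI
# (approaching log(3) ~ 1.10 as the rows become one-hot).
probs = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05],
                  [0.05, 0.05, 0.90]])
print(mutual_information(probs))
```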
