Hybrid Actor-Critic Reinforcement Learning in Parameterized Action Space

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19)

Zhou Fan*, Rui Su*, Weinan Zhang† and Yong Yu
Shanghai Jiao Tong University

*Equal contribution. †Corresponding author.

Abstract

In this paper we propose a hybrid architecture of actor-critic algorithms for reinforcement learning in parameterized action space, which consists of multiple parallel sub-actor networks that decompose the structured action space into simpler action spaces, together with a critic network that guides the training of all sub-actor networks. While this paper mainly focuses on parameterized action space, the proposed architecture, which we call hybrid actor-critic, can be extended to more general action spaces that have a hierarchical structure. We present an instance of the hybrid actor-critic architecture based on proximal policy optimization (PPO), which we refer to as hybrid proximal policy optimization (H-PPO). Our experiments test H-PPO on a collection of tasks with parameterized action space, where H-PPO demonstrates superior performance over previous methods for reinforcement learning with parameterized actions.

[Figure 1: Illustration of a parameterized action space. A discrete action selection layer (actions 1-4) sits above a continuous parameter selection layer, where each discrete action has its own continuous parameter set.]

1 Introduction

Reinforcement learning (RL) has achieved impressive performance on a wide range of tasks including game playing, robotics and natural language processing. Most of the recent exciting achievements are obtained by the combination of deep learning and reinforcement learning, known as deep reinforcement learning [Mnih et al., 2013]. In game-playing domains, the deep Q-network (DQN) [Mnih et al., 2013] is capable of learning control policies directly from high-dimensional sensory input in Atari games, and AlphaGo [Silver et al., 2016] has defeated world champions in the game of Go and can achieve superhuman performance even without human knowledge for training [Silver et al., 2017]. Robotics is also a significant application area of RL, where RL enables a robot to autonomously learn sophisticated behaviors through interactions with its environment [Kober et al., 2013].

In the general setup of RL, an agent interacts with an environment in the following way: at each time step t, it observes (either fully or partially) a state s_t and takes an action a_t, then receives a reward signal r_t as well as the next state s_{t+1}. The action a_t is selected by the agent from its action space A. The type of the action space is an important characteristic of an RL problem, and problems with different types of action space are usually solved with different algorithms. A typical RL setup comes with either a discrete action space or a continuous one, and most RL algorithms are designed for one of these two types: the agent selects its actions from a finite set of discrete actions if the action space is discrete, or from a single continuous space if the action space is continuous. However, an action space can also have a hierarchical structure instead of being a flat set. The most common class of structured action space is the parameterized action space, where a parameterized action is a discrete action parameterized by a continuous real-valued vector [Masson et al., 2016]. With a parameterized action space, the agent not only selects an action from a discrete set, but also selects the parameter to use with that action from the continuous parameter set of that action.

Figure 1 shows an example of a parameterized action space. The hierarchically structured action space contains four types of discrete actions (shown in blue), and every discrete action has a continuous parameter space (marked with rounded rectangles in grey). In this example, the discrete action with index 2 is not parameterized; it can be viewed as a special case in which the parameter space of discrete action 2 has only one element. Parameterized action spaces naturally model scenarios where there are different categories of continuous actions, and many games as well as real-world tasks have a parameterized action space. For example, in the Half Field Offense (HFO) domain [Hausknecht et al., 2016], a subtask based on the RoboCup 2D simulation platform, the agent may choose the discrete action Kick and specify its real-valued parameters (power and direction). Moreover, parameterized actions naturally arise in robotics, where the action space can be constructed so that a set of meta-actions defines the higher-level selection of actions and every meta-action is controlled by fine-grained parameters [Kober et al., 2013].
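To make the structure in Figure 1 concrete, here is a minimal sketch of one way a parameterized action space could be represented in code. The HFO-style action names and parameter bounds are illustrative assumptions, not values taken from the paper.

```python
import random
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ParameterizedActionSpace:
    """A discrete set of actions, each with its own continuous parameter box."""
    # One (low, high) bound per continuous parameter dimension, per discrete action.
    # An empty list means the discrete action takes no parameters, like action 2 in Figure 1.
    param_bounds: List[List[Tuple[float, float]]]

    @property
    def num_discrete_actions(self) -> int:
        return len(self.param_bounds)

    def sample(self) -> Tuple[int, List[float]]:
        """Uniformly sample a (discrete action, continuous parameter vector) pair."""
        k = random.randrange(self.num_discrete_actions)
        params = [random.uniform(low, high) for low, high in self.param_bounds[k]]
        return k, params


# Illustrative HFO-like space (action names and bounds are assumptions):
# Kick(power, direction), Dash(power, direction), Turn(direction), and one
# non-parameterized action, mirroring the four actions of Figure 1.
hfo_like_space = ParameterizedActionSpace(param_bounds=[
    [(0.0, 100.0), (-180.0, 180.0)],  # Kick: power, direction
    [(0.0, 100.0), (-180.0, 180.0)],  # Dash: power, direction
    [(-180.0, 180.0)],                # Turn: direction
    [],                               # non-parameterized action
])

discrete_action, continuous_params = hfo_like_space.sample()
```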
In addition to parameterized action spaces, action spaces may have more general hierarchical structures. For example, the parameters of the different actions are discretized in some game environments such as the StarCraft II Learning Environment [Vinyals et al., 2017]. The action space may also be manually constructed to have a hierarchical structure of more than two layers, a technique often used to reduce the size of an extremely large action space, with OpenAI Five on Dota 2 as a remarkable example. While it is intractable to choose an action directly from a set that contains millions of discrete actions, the problem can be tackled by constructing a hierarchically structured action space from a hierarchical taxonomy of actions. As shown in Figure 2, such an action space has a tree structure of multi-layer classifications of actions, so that each action-selection node has only a small number of branches. Note that this tree structure can have more than two layers, and its external nodes can be continuous action selections instead of discrete branchings. In this view, the parameterized action space is a special case of hierarchical action space with a discrete layer followed by a continuous layer.

[Figure 2: A hierarchically structured action space, with layer-1, layer-2 and layer-3 action selections followed by further layers of branching nodes.]

In this work, we propose a hybrid architecture of actor-critic algorithms for RL in parameterized action space. It is based on the original actor-critic architecture [Konda and Tsitsiklis, 2000], but contains multiple parallel sub-actor networks instead of a single actor, so that each layer of the action selection is handled by its own sub-actor, together with one global critic network that updates the policy parameters of all sub-actor networks. Moreover, the hybrid actor-critic architecture is flexible with respect to the structure of the action space, so it can also be generalized to other hierarchically structured action spaces. Specifically, we present an instance of the hybrid actor-critic architecture based on proximal policy optimization (PPO) [Schulman et al., 2017], which we call hybrid proximal policy optimization (H-PPO), and we show that H-PPO outperforms previous methods on a collection of tasks with parameterized action space.

The rest of this paper is organized as follows: Section 2 introduces related work in the parameterized action space domain, the subsequent sections describe the proposed architecture and report experiments and results, and conclusion and future work are presented in Section 5.

2 Related Work

Parameterized action spaces and other hierarchical action spaces are more difficult to handle in RL than purely discrete or continuous action spaces, for the following reasons. First, the action space has a hierarchical structure, which makes selecting an action more complicated than choosing one element from a flat set of actions. Second, a parameterized action space involves both discrete action selection and continuous parameter selection, while most RL models are designed for either discrete action spaces or continuous action spaces only.

2.1 RL Methods for Discrete Action Space and Continuous Action Space

The Q-learning algorithm [Watkins and Dayan, 1992] is a value-based method which updates the Q-function using the Bellman equation

    Q(s, a) = \mathbb{E}_{r_t, s_{t+1}} [ r_t + \gamma \max_{a' \in A} Q(s_{t+1}, a') \mid s_t = s, a_t = a ].    (1)

In the domain of discrete action spaces, the deep Q-network (DQN) [Mnih et al., 2013] adopts this framework and uses a deep neural network to approximate the Q-function. Several variants of DQN are also widely used in discrete action spaces, including asynchronous DQN [Mnih et al., 2016], double DQN [Hasselt et al., 2016] and dueling DQN [Wang et al., 2016].
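As a concrete reference for the Bellman backup in equation (1), the following is a minimal DQN-style temporal-difference loss in PyTorch. It is a generic sketch under assumed network sizes and a separately maintained target network, not an implementation from the paper.

```python
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Small MLP mapping a state to one Q-value per discrete action."""

    def __init__(self, state_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def dqn_loss(q_net: QNetwork, target_net: QNetwork, batch, gamma: float = 0.99) -> torch.Tensor:
    """One-step TD loss following the Bellman backup of equation (1).

    `batch` is assumed to hold tensors (states, actions, rewards, next_states, dones),
    with integer action indices and 0/1 done flags.
    """
    states, actions, rewards, next_states, dones = batch
    # Q(s_t, a_t) for the actions that were actually taken.
    q_taken = q_net(states).gather(1, actions.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # max_{a'} Q(s_{t+1}, a'), evaluated with a slowly-updated target network.
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * next_q
    return nn.functional.mse_loss(q_taken, targets)
```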
Policy gradient methods [Sutton et al., 2000] form another class of RL algorithms, which optimize a stochastic policy π_θ parameterized by θ to maximize the expected policy value J(π_θ). The gradient of the stochastic policy is given by the policy gradient theorem [Sutton et al., 2000] as

    \nabla_\theta J(\pi_\theta) = \mathbb{E}_{s, a} [ \nabla_\theta \log \pi_\theta(a \mid s) \, Q^{\pi_\theta}(s, a) ].    (2)

As an alternative, the policy gradient can also be computed with the advantage function A^{π_θ}(s, a) as

    \nabla_\theta J(\pi_\theta) = \mathbb{E}_{s, a} [ \nabla_\theta \log \pi_\theta(a \mid s) \, A^{\pi_\theta}(s, a) ].    (3)

Similarly, in continuous action spaces, the deterministic policy gradient (DPG) algorithm [Silver et al., 2014] and the DDPG algorithm [Lillicrap et al., 2016] optimize a deterministic policy μ_θ parameterized by θ based on the deterministic policy gradient theorem [Silver et al., 2014] as

    \nabla_\theta J(\mu_\theta) = \mathbb{E}_{s} [ \nabla_\theta \mu_\theta(s) \, \nabla_a Q^{\mu_\theta}(s, a) |_{a = \mu_\theta(s)} ].    (4)

Building on policy gradient methods, trust region policy optimization (TRPO) [Schulman et al., 2015] and proximal policy optimization (PPO) [Schulman et al., 2017] improve the optimization procedure to achieve better performance.
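Because H-PPO builds on PPO, it is worth recalling PPO's clipped surrogate objective, which replaces the raw advantage-weighted gradient of equation (3) with a probability-ratio term clipped to keep the new policy close to the old one. The sketch below is a generic PyTorch version of that loss (the standard PPO formulation, not code from the paper); the log-probabilities and advantage estimates are assumed to come from a rollout buffer.

```python
import torch


def ppo_clip_loss(log_probs: torch.Tensor,
                  old_log_probs: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    """Negative PPO clipped surrogate objective (a quantity to minimize).

    log_probs:     log pi_theta(a_t | s_t) under the current policy
    old_log_probs: the same log-probabilities recorded when the data was collected
    advantages:    advantage estimates A_t for the sampled state-action pairs
    """
    ratio = torch.exp(log_probs - old_log_probs)  # r_t(theta) = pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```

In a hybrid actor-critic setup, one plausible way to use this loss is to compute it separately for the discrete sub-actor and the continuous sub-actor, with advantages supplied by the shared critic.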

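Finally, to tie the pieces together, here is a minimal, hypothetical sketch of a hybrid actor-critic network in the spirit of the architecture described in Section 1: one sub-actor head selects the discrete action, another sub-actor head outputs continuous parameters, and a single critic estimates the state value. The shared encoder, layer sizes and Gaussian parameter distribution are assumptions made for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn


class HybridActorCritic(nn.Module):
    """Illustrative hybrid actor-critic for a parameterized action space.

    A discrete sub-actor picks which action type to execute, a continuous
    sub-actor proposes a parameter vector (here: all actions' parameters
    concatenated, with the executed action using only its own slice), and
    one critic scores the state. The shared encoder and the Gaussian
    parameterization are assumptions made for compactness.
    """

    def __init__(self, state_dim: int, num_discrete: int, param_dim: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh())
        self.discrete_actor = nn.Linear(hidden, num_discrete)  # logits over discrete actions
        self.continuous_actor = nn.Linear(hidden, param_dim)   # means of the parameter Gaussian
        self.log_std = nn.Parameter(torch.zeros(param_dim))    # state-independent std (assumption)
        self.critic = nn.Linear(hidden, 1)                     # state-value estimate

    def forward(self, state: torch.Tensor):
        h = self.encoder(state)
        discrete_dist = torch.distributions.Categorical(logits=self.discrete_actor(h))
        continuous_dist = torch.distributions.Normal(self.continuous_actor(h), self.log_std.exp())
        value = self.critic(h).squeeze(-1)
        return discrete_dist, continuous_dist, value


# Acting: one forward pass yields both the discrete choice and its parameters.
# Sizes match the illustrative space above: 4 discrete actions, 5 parameters in total.
model = HybridActorCritic(state_dim=10, num_discrete=4, param_dim=5)
d_dist, c_dist, value = model(torch.randn(1, 10))
action_type = d_dist.sample()    # which discrete action to execute
action_params = c_dist.sample()  # continuous parameters (slice per chosen action)
```

During a PPO-style update, the log-probabilities of the sampled discrete action and of the sampled parameters can be plugged into a clipped loss like the one above, with advantages computed from the critic's value estimates.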