SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning

Kimin Lee (1), Michael Laskin (1), Aravind Srinivas (1), Pieter Abbeel (1)
(1) University of California, Berkeley. Correspondence to: Kimin Lee <[email protected]>.
Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 2021. Copyright 2021 by the author(s).

Abstract

Off-policy deep reinforcement learning (RL) has been successful in a range of challenging domains. However, standard off-policy RL algorithms can suffer from several issues, such as instability in Q-learning and balancing exploration and exploitation. To mitigate these issues, we present SUNRISE, a simple unified ensemble method that is compatible with various off-policy RL algorithms. SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration. By enforcing diversity between agents using Bootstrap with random initialization, we show that these different ideas are largely orthogonal and can be fruitfully integrated, further improving the performance of existing off-policy RL algorithms, such as Soft Actor-Critic and Rainbow DQN, for both continuous and discrete control tasks on both low-dimensional and high-dimensional environments. Our training code is available at https://github.com/pokaxpoka/sunrise.

1. Introduction

Model-free reinforcement learning (RL) with high-capacity function approximators, such as deep neural networks (DNNs), has been used to solve a variety of sequential decision-making problems, including board games (Silver et al., 2017; 2018), video games (Mnih et al., 2015; Vinyals et al., 2019), and robotic manipulation (Kalashnikov et al., 2018). It has been well established that these successes are highly sample-inefficient (Kaiser et al., 2020). Recently, a lot of progress has been made toward more sample-efficient model-free RL algorithms through improvements in off-policy learning, both in discrete and continuous domains (Fujimoto et al., 2018; Haarnoja et al., 2018; Hessel et al., 2018; Amos et al., 2020). However, there are still substantial challenges when training off-policy RL algorithms. First, Q-learning often converges to sub-optimal solutions due to error propagation in the Bellman backup, i.e., errors induced in the target value can lead to an increase in the overall error of the Q-function (Kumar et al., 2019; 2020). Second, it is hard to balance exploration and exploitation, which is necessary for efficient RL (Chen et al., 2017; Osband et al., 2016a) (see Section 2 for further details).

One way to address these issues with off-policy RL algorithms is to use ensemble methods, which combine multiple models of the value function and/or policy (Chen et al., 2017; Lan et al., 2020; Osband et al., 2016a; Wiering & Van Hasselt, 2008). For example, double Q-learning (Hasselt, 2010; Van Hasselt et al., 2016) addressed value overestimation by maintaining two independent estimators of the action values, and was later extended to continuous control tasks in TD3 (Fujimoto et al., 2018). Bootstrapped DQN (Osband et al., 2016a) leveraged an ensemble of Q-functions for more effective exploration, and Chen et al. (2017) further improved it by adapting upper-confidence bound algorithms (Audibert et al., 2009; Auer et al., 2002) based on uncertainty estimates from ensembles. However, most prior works have studied the various axes of improvement from ensemble methods in isolation and have ignored the error propagation aspect.

In this paper, we present SUNRISE, a simple unified ensemble method that is compatible with most modern off-policy RL algorithms, such as Q-learning and actor-critic algorithms. Our main idea is to reweight sample transitions based on uncertainty estimates from a Q-ensemble. Because prediction errors can be characterized by uncertainty estimates from ensembles (i.e., the variance of predictions), as shown in Figure 1(b), we find that the proposed method significantly improves the signal-to-noise ratio in the Q-updates and stabilizes the learning process. Additionally, for efficient exploration, we define an upper-confidence bound (UCB) based on the mean and variance of the Q-functions, similar to Chen et al. (2017), and introduce an inference method that selects actions with the highest UCB. This inference method can encourage exploration by providing a bonus for visiting unseen state-action pairs, where ensembles produce high uncertainty, i.e., high variance (see Figure 1(b)). By enforcing diversity between agents using Bootstrap with random initialization (Osband et al., 2016a), we find that these different ideas can be fruitfully integrated, and that they are largely complementary (see Figure 1(a)).
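To make these two ingredients concrete, the sketch below implements them in plain NumPy. It is an illustrative reading of the description above, not the authors' implementation: the shifted-sigmoid weighting, the temperature, and the UCB coefficient lam are assumptions made for this sketch, and the random arrays stand in for the predictions of N independently trained Q-networks.

```python
# Illustrative sketch of the two SUNRISE ingredients described above.
# Not the authors' implementation: the weighting function, temperature,
# and UCB coefficient `lam` are assumptions made for this sketch.
import numpy as np


def ensemble_stats(q_values):
    """Mean and standard deviation across the N ensemble members (axis 0)."""
    return q_values.mean(axis=0), q_values.std(axis=0)


def bellman_weights(target_q_std, temperature=10.0):
    """Down-weight targets with high ensemble uncertainty.

    A shifted sigmoid keeps the weights in (0.5, 1.0]; the exact form and
    temperature are choices made for this sketch.
    """
    return 1.0 / (1.0 + np.exp(target_q_std * temperature)) + 0.5


def weighted_bellman_error(q_pred, reward, target_q_mean, target_q_std, gamma=0.99):
    """Per-transition squared Bellman error, re-weighted by target uncertainty."""
    target = reward + gamma * target_q_mean
    return bellman_weights(target_q_std) * (q_pred - target) ** 2


def ucb_action(q_per_action, lam=1.0):
    """Select the action with the highest upper-confidence bound, mean + lam * std."""
    mean, std = ensemble_stats(q_per_action)
    return int(np.argmax(mean + lam * std))


rng = np.random.default_rng(0)

# (a) Weighted Bellman backup for a batch of 3 transitions: the ensemble's
# estimates of Q(s', a') determine how much each target is trusted.
target_q = rng.normal(size=(5, 3))
mean, std = ensemble_stats(target_q)
print("weighted Bellman errors:", weighted_bellman_error(np.zeros(3), np.ones(3), mean, std))

# (b) UCB action selection over 4 discrete actions for a single state.
print("UCB action:", ucb_action(rng.normal(size=(5, 4))))
```

Transitions whose target values the ensemble members disagree on contribute less to the update, which is what stabilizes learning, while the standard-deviation term in the UCB provides the exploration bonus for rarely visited state-action pairs.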
We demonstrate the effectiveness of the proposed method using Soft Actor-Critic (SAC; Haarnoja et al., 2018) for continuous control benchmarks (specifically, OpenAI Gym (Brockman et al., 2016) and the DeepMind Control Suite (Tassa et al., 2018)) and Rainbow DQN (Hessel et al., 2018) for discrete control benchmarks (specifically, Atari games (Bellemare et al., 2013)). In our experiments, SUNRISE consistently improves the performance of existing off-policy RL methods. Furthermore, we find that the proposed weighted Bellman backups yield improvements in environments with noisy rewards, which have a low signal-to-noise ratio.

[Figure 1. (a) SUNRISE, actor-critic version: N actors and critics are trained with Bootstrap with random initialization, weighted Bellman backups, and UCB-based action selection over a shared replay buffer. (b) Uncertainty estimates.]

2. Related Work

Off-policy RL algorithms. Recently, various off-policy RL algorithms have provided large gains in sample-efficiency by reusing past experiences (Fujimoto et al., 2018; Haarnoja et al., 2018; Hessel et al., 2018). Rainbow DQN (Hessel et al., 2018) achieved state-of-the-art performance on Atari games (Bellemare et al., 2013) by combining several techniques, such as double Q-learning (Van Hasselt et al., 2016) and distributional DQN (Bellemare et al., 2017). For continuous control tasks, SAC (Haarnoja et al., 2018) achieved state-of-the-art sample-efficiency results by incorporating the maximum entropy framework. Our ensemble method brings orthogonal benefits and is complementary to and compatible with these existing state-of-the-art algorithms.

Stabilizing Q-learning. It has been empirically observed that instability in Q-learning can be caused by applying the Bellman backup on the learned value function (Hasselt, 2010; Van Hasselt et al., 2016; Fujimoto et al., 2018; Song et al., 2019; Kim et al., 2019; Kumar et al., 2019; 2020). Following the principle of double Q-learning (Hasselt, 2010; Van Hasselt et al., 2016), the twin-Q trick (Fujimoto et al., 2018) was proposed to handle the overestimation of value functions for continuous control tasks. Song et al. (2019) and Kim et al. (2019) proposed replacing the max operator with Softmax and Mellowmax, respectively, to reduce the overestimation error. Recently, Kumar et al. (2020) handled the error propagation issue by reweighting the Bellman backup based on cumulative Bellman errors. Our method differs in that it provides an alternative that also utilizes ensembles to estimate uncertainty and yields more stable, higher-signal-to-noise backups.

Ensemble methods in RL. Ensemble methods have been studied for different purposes in RL (Wiering & Van Hasselt, 2008; Osband et al., 2016a; Anschel et al., 2017; Agarwal et al., 2020; Lan et al., 2020). Chua et al. (2018) showed that modeling errors in model-based RL can be reduced using an ensemble of dynamics models, and Kurutach et al. (2018) accelerated policy learning by generating imagined experiences from an ensemble of dynamics models. For efficient exploration, Osband et al. (2016a) and Chen et al. (2017) also leveraged an ensemble of Q-functions. However, most prior works have studied the various axes of improvement from ensemble methods in isolation, whereas we propose a unified framework that handles various issues in off-policy RL algorithms.

Exploration in RL. To balance exploration and exploitation, several methods have been proposed, such as maximum entropy frameworks (Ziebart, 2010; Haarnoja et al., 2018), exploration bonus rewards (Bellemare et al., 2016; Houthooft et al., 2016; Pathak et al., 2017; Choi et al., 2019), and randomization (Osband et al., 2016a;b). Despite the success of these exploration methods, a potential drawback is that agents can focus on irrelevant aspects of the environment because these methods do not depend on the rewards. To handle this issue, Chen et al. (2017) proposed an exploration strategy that considers both the best estimates (i.e., mean) and the uncertainty (i.e., variance) of the Q-functions for discrete control tasks. We further extend this strategy to continuous control tasks and show that it can be combined with other techniques.
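Since the strategy of Chen et al. (2017) is defined for discrete actions, extending it to continuous control requires scoring a finite set of candidate actions. The sketch below is a toy illustration rather than the paper's procedure: it samples candidates (in an actor-critic setup like Figure 1(a) these would naturally come from the ensemble of policies; a uniform sampler stands in here), scores each by the ensemble mean plus lam times the ensemble standard deviation, and acts with the best-scoring candidate.

```python
# Toy sketch of UCB-style action selection for continuous actions. The quadratic
# critics, the uniform candidate sampler, and `lam` are placeholders used only
# to make the selection rule concrete.
import numpy as np

rng = np.random.default_rng(1)


def make_toy_critic(seed):
    """Stand-in for one trained critic: Q(s, a) = -||a - c||^2 for a fixed c."""
    c = np.random.default_rng(seed).uniform(-1.0, 1.0, size=2)
    return lambda state, action: -float(np.sum((action - c) ** 2))


critic_ensemble = [make_toy_critic(i) for i in range(5)]


def sample_candidates(state, num_candidates=10):
    """Stand-in for drawing one candidate action from each ensemble policy."""
    return rng.uniform(-1.0, 1.0, size=(num_candidates, 2))


def select_action(state, lam=1.0):
    """Score each candidate by mean(Q) + lam * std(Q) over the critic ensemble."""
    candidates = sample_candidates(state)
    scores = []
    for a in candidates:
        qs = np.array([q(state, a) for q in critic_ensemble])
        scores.append(qs.mean() + lam * qs.std())
    return candidates[int(np.argmax(scores))]


print("selected action:", select_action(state=np.zeros(3)))
```

Because candidates are scored rather than exhaustively enumerated, the same mean-plus-variance rule applies whether the action space is discrete or continuous.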
3. Background

Reinforcement learning. We consider a standard RL framework in which an agent interacts with an environment in discrete time. Formally, at each timestep t, the agent receives a state s_t from the environment and chooses an action a_t based on its policy $\pi$. The environment returns a reward r_t and the agent transitions to the next state s_{t+1}. The return $R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k}$ is the total accumulated reward from timestep t, with a discount factor $\gamma \in [0, 1)$. RL then maximizes the expected return.

Soft Actor-Critic. SAC (Haarnoja et al., 2018) is an off-policy actor-critic method based on the maximum entropy RL framework (Ziebart, 2010), which encourages robustness to noise and exploration by maximizing a weighted objective of the reward and the policy entropy (see Appendix A for further details). To update the parameters, SAC alternates between a soft policy evaluation step and a soft policy improvement step. At the soft policy evaluation step, a soft Q-function, modeled as a neural network with parameters $\theta$, is updated by minimizing a soft Bellman residual.
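Concretely, the soft Bellman residual minimized at the soft policy evaluation step of standard SAC can be written as below; the replay buffer $\mathcal{B}$, target parameters $\bar{\theta}$, policy parameters $\phi$, and entropy temperature $\alpha$ follow the usual SAC conventions and are not defined in this excerpt.

$$\mathcal{L}_{Q}(\theta) = \mathbb{E}_{(s_t, a_t, r_t, s_{t+1}) \sim \mathcal{B}} \Big[ \big( Q_{\theta}(s_t, a_t) - r_t - \gamma \, \mathbb{E}_{a_{t+1} \sim \pi_{\phi}} \big[ Q_{\bar{\theta}}(s_{t+1}, a_{t+1}) - \alpha \log \pi_{\phi}(a_{t+1} \mid s_{t+1}) \big] \big)^2 \Big]$$

Minimizing this residual with respect to $\theta$ constitutes the soft policy evaluation step referenced above; at the subsequent soft policy improvement step, the policy parameters $\phi$ are updated to maximize the entropy-regularized expected value of $Q_{\theta}$.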
