Towards Tractable Optimism in Model-Based Reinforcement Learning

Aldo Pacchiano*1, Philip Ball*2, Jack Parker-Holder*2, Krzysztof Choromanski3, Stephen Roberts2
1 UC Berkeley, 2 University of Oxford, 3 Google Brain Robotics. *Equal Contribution.
Correspondence to: [email protected], {ball, jackph}@robots.ox.ac.uk
Accepted for the 37th Conference on Uncertainty in Artificial Intelligence (UAI 2021).

Abstract

The principle of optimism in the face of uncertainty is prevalent throughout sequential decision making problems such as multi-armed bandits and reinforcement learning (RL). To be successful, an optimistic RL algorithm must over-estimate the true value function (optimism) but not by so much that it is inaccurate (estimation error). In the tabular setting, many state-of-the-art methods produce the required optimism through approaches which are intractable when scaling to deep RL. We re-interpret these scalable optimistic model-based algorithms as solving a tractable noise augmented MDP. This formulation achieves a competitive regret bound: $\tilde{\mathcal{O}}(|\mathcal{S}|H\sqrt{|\mathcal{A}|T})$ when augmenting using Gaussian noise, where $T$ is the total number of environment steps. We also explore how this trade-off changes in the deep RL setting, where we show empirically that estimation error is significantly more troublesome. However, we also show that if this error is reduced, optimistic model-based RL algorithms can match state-of-the-art performance in continuous control problems.

1 INTRODUCTION

Reinforcement Learning (RL, Sutton and Barto [1998]) considers the problem of an agent taking sequential actions in an uncertain environment to maximize some notion of reward. Model-based reinforcement learning (MBRL) algorithms typically approach this problem by building a "world model" [Sutton, 1991], which can be used to simulate the true environment. This facilitates efficient learning, since the agent no longer needs to query the true environment for experience, and instead plans in the world model. In order to learn a world model that accurately represents the dynamics of the environment, the agent must collect data that is rich in experiences [Sekar et al., 2020]. However, for faster convergence, data collection must also be performed efficiently, wasting as few samples as possible [Ball et al., 2020]. Thus, the effectiveness of MBRL algorithms hinges on the exploration-exploitation dilemma.

This dilemma has been studied extensively in the tabular RL setting, which considers Markov Decision Processes (MDPs) with finite states and actions. Optimism in the face of uncertainty (OFU) [Audibert et al., 2007, Kocsis and Szepesvári, 2006] is a principle that emerged first from the Multi-Arm Bandit literature, where actions having both large expected rewards (exploitation) and high uncertainty (exploration) are prioritized. OFU is a crucial component of several state-of-the-art algorithms in this setting [Silver et al., 2016], although its success has thus far failed to scale to larger settings.

However, in the field of deep RL, many of these theoretical advances have been overlooked in favor of heuristics [Burda et al., 2019a], or simple dithering based approaches for exploration [Mnih et al., 2013]. There are two potential reasons for this. First, many of the theoretically motivated OFU algorithms are intractable in larger settings. For example, UCRL2 [Jaksch et al., 2010], a canonical optimistic RL algorithm, requires the computation of an analytic uncertainty envelope around the MDP, which is infeasible for continuous MDPs. Despite its many extensions [Filippi et al., 2010, Jaksch et al., 2010, Fruit et al., 2018, Azar et al., 2017b, Bartlett and Tewari, 2012, Tossou et al., 2019], none address generalizing the techniques to continuous (or even large discrete) MDPs.

Second, OFU algorithms must strike a fine balance in what we call the Optimism Decomposition. That is, they need to be optimistic enough to upper bound the true value function, while maintaining low estimation error. Theoretically motivated OFU algorithms predominantly focus on the former. However, when moving to the deep RL setting, several sources of noise make estimation error a thorn in the side of optimistic approaches. We show that an optimistic algorithm can fixate on exploiting the least accurate models, which causes the majority of experience the agent learns from to be worthless, or even harmful for performance.

In this paper we seek to address both of these issues, paving the way for OFU-inspired algorithms to gain prominence in the deep RL setting. We make two contributions:

Making provably efficient algorithms tractable. Our first contribution is to introduce a new perspective on existing tabular RL algorithms such as UCRL2. We show that a comparable regret bound can be achieved by being optimistic with respect to a noise augmented MDP, where the noise is scaled according to the amount of data collected during learning. We propose several mechanisms to inject noise, including count-scaled Gaussian noise and the variance from a bootstrap mechanism. Since the latter technique is used in many prominent state-of-the-art deep MBRL algorithms [Kurutach et al., 2018, Janner et al., 2019, Chua et al., 2018, Ball et al., 2020], we have all the ingredients we need to scale to that paradigm.
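To give a flavour of what count-scaled noise augmentation can look like in the tabular case, the sketch below perturbs an empirical model with zero-mean Gaussian noise whose magnitude shrinks with the visit count $N(s,a)$. It is a minimal illustration rather than the construction analysed in the paper: the function name, the `scale` constant and the exact $1/\sqrt{N}$ scaling are all assumptions made for this example.

```python
import numpy as np

def noise_augmented_model(P_hat, r_hat, counts, scale=1.0, rng=None):
    """Perturb an empirical tabular model with count-scaled Gaussian noise.

    P_hat:  (S, A, S) empirical transition estimates
    r_hat:  (S, A)    empirical mean rewards
    counts: (S, A)    visit counts N(s, a)

    The 1/sqrt(N) scaling is an illustrative choice: rarely visited
    state-action pairs receive larger perturbations, so some sampled
    models will be optimistic exactly where uncertainty is highest.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = scale / np.sqrt(np.maximum(counts, 1.0))          # (S, A)

    # Noisy rewards: one Gaussian draw per (s, a).
    r_tilde = r_hat + sigma * rng.standard_normal(r_hat.shape)

    # Noisy dynamics: perturb each row; entries need not sum to one,
    # consistent with treating the perturbed dynamics as a signed measure.
    P_tilde = P_hat + sigma[..., None] * rng.standard_normal(P_hat.shape)
    return P_tilde, r_tilde
```

Sampling several such perturbed models yields the kind of discrete model cloud discussed in Section 2 below.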
Addressing model estimation error in the deep RL paradigm. We empirically explore the Optimism Decomposition in the deep RL setting, and introduce a new approach to reduce the likelihood that the weakest models will be exploited. We show that we can indeed produce optimism with low model error, and thus match state of the art MBRL performance.

The rest of the paper is structured as follows: 1) We begin with background and related work, where we formally introduce the Optimism Decomposition; 2) In Section 3 we introduce noise augmented MDPs, and draw connections with existing algorithms; 3) We next provide our main theoretical results, followed by empirical verification in the tabular setting; 4) We rigorously evaluate the Optimism Decomposition in the deep RL setting, demonstrating the scalability of our approach; 5) We conclude and discuss some of the exciting future directions we hope to explore.

2 BACKGROUND AND RELATED WORK

In this paper we study a sequential interaction between a learner and a finite horizon MDP $\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, H, r, P_0)$, where $\mathcal{S}$ denotes the state space, $\mathcal{A}$ the actions, $P$ its dynamics, $H$ its episode horizon, $r \in \mathbb{R}^{|\mathcal{S}| \times |\mathcal{A}|}$ the rewards and $P_0$ the initial state distribution. For any state action pair $(s,a)$, we call $r(s,a)$ their true reward, which we assume to be a random variable in $[0,1]$. $P$ represents the dynamics and defines the distribution over the next states, i.e., $s' \sim P(\cdot \mid s,a)$. The learner does not have access to the true reward and dynamics, and must instead approximate these through estimators. For a state action pair $(s,a)$, we denote the average reward estimator as $\hat{r}_k(s,a) \in \mathbb{R}$ and the average dynamics estimator as $\hat{P}_k(s,a) \in \Delta^{|\mathcal{S}|-1}$, where the index $k$ refers to the episode. When training, the learner collects dynamics tuples during its interactions with $\mathcal{M}$, which in turn it uses during each round $k$ to produce a policy $\pi_k$ and an approximate MDP $\mathcal{M}_k = (\mathcal{S}, \mathcal{A}, \tilde{P}, H, \tilde{r}, P_0)$. In our theoretical results we will allow $\tilde{P}(s,a)$ to be a signed measure whose entries do not sum to one. This is purely a semantic device that renders the exposition of our work easier and more general, and in no way affects the feasibility of our algorithms and arguments.

For any policy $\pi$, let $V(\pi)$ be the (scalar) value of $\pi$ and let $\tilde{V}_k(\pi)$ be the value of $\pi$ operating in the approximate MDP $\mathcal{M}_k$. We define $\mathbb{E}_\pi$ as the expectation under the dynamics of the true MDP $\mathcal{M}$ using policy $\pi$ (analogously $\tilde{\mathbb{E}}_\pi$ as the expectation under $\mathcal{M}_k$). The true and approximate value functions for a policy $\pi$ are defined as follows:

$$V(\pi) = \mathbb{E}_\pi\left[\sum_{h=0}^{H-1} r(s_h, a_h)\right], \qquad \tilde{V}_k(\pi) = \tilde{\mathbb{E}}_\pi\left[\sum_{h=0}^{H-1} \tilde{r}_k(s_h, a_h)\right].$$
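In the tabular case both $V(\pi)$ and $\tilde{V}_k(\pi)$ can be computed by backward induction over the horizon. The following sketch (with assumed array shapes; not code from the paper) evaluates a fixed stationary policy under whichever model it is given, so the same routine serves for the true MDP and for an approximate $\mathcal{M}_k$.

```python
import numpy as np

def policy_value(P, r, pi, H, P0):
    """Finite-horizon value of a stationary policy by backward induction.

    P:  (S, A, S) transition model (true P or an approximate P~)
    r:  (S, A)    rewards (true r or r~_k)
    pi: (S, A)    policy, pi[s, a] = probability of action a in state s
    H:  horizon; P0: (S,) initial state distribution
    Returns the scalar value E[ sum_{h=0}^{H-1} r(s_h, a_h) ].
    """
    S, A = r.shape
    V = np.zeros(S)                       # V^H(pi) = 0
    for _ in range(H):                    # h = H-1, ..., 0
        Q = r + P @ V                     # Q[s, a] = r(s, a) + E_{s'}[V(s')]
        V = (pi * Q).sum(axis=1)          # V^h(pi)[s]
    return float(P0 @ V)
```

Calling this with $(P, r)$ gives $V(\pi)$, while calling it with $(\tilde{P}, \tilde{r}_k)$ gives $\tilde{V}_k(\pi)$.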
We will evaluate our method using regret, the difference between the value of the optimal policy and the value of the policies it executed. Formally, in the episodic RL setting the regret of an agent using policies $\{\pi_k\}_{k=1}^{K}$ is (where $K$ is the number of episodes and $T = KH$):

$$R(T) = \sum_{k=1}^{K} \left[ V(\pi^\star) - V(\pi_k) \right],$$

where $\pi^\star$ denotes the optimal policy for $\mathcal{M}$, and $V(\pi_k)$ is $\pi_k$'s true value function. Furthermore, for each $h \in \{1, \ldots, H\}$ we call $\mathbf{V}^h(\pi) \in \mathbb{R}^{|\mathcal{S}|}$ the value vector satisfying $\mathbf{V}^h(\pi)[s] = \mathbb{E}_\pi\left[\sum_{h'=h}^{H-1} r(s_{h'}, a_{h'}) \mid s_h = s\right]$; similarly we define $\tilde{\mathbf{V}}_k^h(\pi) \in \mathbb{R}^{|\mathcal{S}|}$ as $\tilde{\mathbf{V}}_k^h(\pi)[s] = \tilde{\mathbb{E}}_\pi\left[\sum_{h'=h}^{H-1} \tilde{r}_k(s_{h'}, a_{h'}) \mid s_h = s\right]$, where $\mathbf{V}^H(\pi)[s] = 0$. Bold represents a vector-valued quantity.

The principle of optimism in the face of uncertainty (OFU) is used to address the exploration-exploitation dilemma in sequential decision making processes by performing both simultaneously. In RL, "model based" OFU algorithms [Jaksch et al., 2010, Fruit et al., 2018, Tossou et al., 2019] proceed as follows: at the beginning of each episode $k$ a learner selects an approximate MDP $\mathcal{M}_k$ from a model cloud, together with a policy $\pi_k$ whose approximate value function $\tilde{V}_k(\pi_k)$ is optimistic, that is, it overestimates the optimal policy's true value function $V(\pi^\star)$. Our approach follows the same paradigm, but instead of using a continuum of models as in [Jaksch et al., 2010, Azar et al., 2017a] we allow the model cloud from which $\mathcal{M}_k$ is selected to be a discrete set.
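As a concrete, purely illustrative rendering of this selection step, the sketch below plans in every member of a discrete model cloud (for instance one built from the noise-augmented models sketched earlier, or from a bootstrapped ensemble) and keeps the most optimistic value/policy pair. The helper names and the value-iteration planner are assumptions for this example, not the paper's implementation.

```python
import numpy as np

def finite_horizon_vi(P, r, H, P0):
    """Optimal value and greedy (non-stationary) policy in a tabular model."""
    S, A = r.shape
    V = np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = r + P @ V                    # (S, A) one-step lookahead
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return float(P0 @ V), policy

def optimistic_selection(model_cloud, H, P0):
    """Pick the (value, policy) pair with the largest approximate value.

    model_cloud: iterable of (P_tilde, r_tilde) pairs, e.g. noise-augmented
    or bootstrapped models; a discrete stand-in for UCRL2's continuum.
    """
    return max(
        (finite_horizon_vi(P, r, H, P0) for P, r in model_cloud),
        key=lambda value_policy: value_policy[0],
    )  # (optimistic value estimate, policy to execute this episode)
```

In UCRL2-style methods this maximisation ranges over a continuum of plausible models, which is part of what makes them hard to scale; restricting the cloud to a finite collection is what keeps the selection step tractable.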
