
Proceedings of the 2007 IEEE Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL 2007)

Reinforcement learning by backpropagation through an LSTM model/critic

Bram Bakker
Intelligent Systems Laboratory Amsterdam, Informatics Institute, University of Amsterdam
Kruislaan 403, 1098 SJ Amsterdam, The Netherlands
Email: [email protected]

Abstract— This paper describes backpropagation through an LSTM recurrent neural network model/critic, for reinforcement learning tasks in partially observable domains. This combines the advantage of LSTM's strength at learning long-term temporal dependencies to infer states in partially observable tasks, with the advantage of being able to learn high-dimensional and/or continuous actions with backpropagation's focused credit assignment mechanism.

I. INTRODUCTION

There are multiple reasons why neural networks (NNs) are used for approximate dynamic programming or reinforcement learning (RL) problems. Probably the most common reason is that the particular RL problem under consideration has a large and/or continuous state space, and that generalization over the state space is required after sampling only parts of the state space. Likewise, the RL problem may have a large and/or continuous action space, such that the actions can no longer be enumerated as in standard RL (such as standard table-based Q-learning), but instead a high-dimensional and possibly continuous-valued action vector is considered. A third reason may be partial observability, or the problem of hidden state, which means that the observations (sensor readings) provide only noisy, limited, or ambiguous information about the underlying (Markov) state.

This paper focuses on the difficult case where all three issues play a role: a large and/or continuous state space, a large and/or continuous action space, and partial observability. We describe how this case may be handled with backpropagation through a model (BTM), where the model is a recurrent neural network (RNN). In particular, we use the Long Short-Term Memory (LSTM) architecture, because it has in previous work been successful even in hard hidden state problems.

The next section describes different approaches to combining RL with NNs. Section III describes LSTM and backpropagation through LSTM. Section IV describes some initial experimental results. Section V, finally, presents conclusions.

II. COMBINING RL WITH NNS

Different approaches to combining RL with NNs can be taken (see [17], [3] for overviews), and the approach depends to a considerable extent on the reason to use neural networks. In the following, first the basics of RL as well as some notation are discussed. Next, approaches for completely observable RL problems are discussed, in which feedforward neural networks can be used. Finally, approaches for partially observable RL problems are discussed, in which recurrent neural networks can be used, and the approach described in this paper, backpropagation through an LSTM model/critic, is introduced.

A. Reinforcement learning

In RL, the learning system (agent, robot, controller) interacts with an environment and attempts to optimize an objective function based on scalar reward (or cost) signals (see [15] for an overview of RL, using the same notation as the one used here). Its actions a (controls) modify its state in the environment s, and also lead to immediate rewards r. Observations o provide the learning system with information about s.

Generally, the objective of RL (in the discrete time case) is to determine a policy $\pi$ which at time $t$ selects an action $a_t$ given the state $s_t$ and which maximizes the expected discounted future cumulative reward $r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \ldots = \sum_i \gamma^i r_{t+i}$, where $\gamma \in [0, 1]$ is a factor which discounts future rewards. Many RL algorithms estimate a utility or value function, which approximates this measure of expected discounted future cumulative reward, and which may be defined over the state space (V(s)) or the state-action space (Q(s, a)).
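As a concrete illustration of this objective (a minimal sketch, not taken from the paper; the reward sequence and discount factor below are hypothetical), the discounted cumulative reward for a finite episode can be computed as follows:

```python
# Minimal sketch (not from the paper): the discounted cumulative reward
# r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ... for a finite reward sequence.

def discounted_return(rewards, gamma=0.9):
    """Sum of rewards, each discounted by gamma raised to its delay."""
    total = 0.0
    for i, r in enumerate(rewards):
        total += (gamma ** i) * r
    return total

if __name__ == "__main__":
    # Hypothetical rewards received by the agent from time t onwards.
    rewards = [0.0, 0.0, 1.0, 0.0, -0.5]
    print(discounted_return(rewards, gamma=0.9))  # 0.9**2 * 1.0 + 0.9**4 * -0.5 = 0.48195
```

A value function V(s) or Q(s, a) estimates the expectation of exactly this quantity, conditioned on the current state (and, for Q, the current action).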
If a discrete set of states and actions is considered, and the observations o are equivalent to states s, the problem can be formulated as a Markov Decision Process (MDP). If there is hidden state or partial observability, i.e. the observations are not equivalent to states but contain partial or incomplete information about the state, the problem can be formulated as a Partially Observable Markov Decision Process (POMDP). In this paper we consider the more general case of possibly continuous states, observations, and actions, as well as the issue of partial observability (but we do not consider continuous time).

B. Completely observable RL problems

Direct value function approximation. If the state space is large and/or continuous but completely observable, and the action space is relatively small, discrete, and enumerable, an NN such as a multilayer feedforward neural network may be used as a function approximator, to approximate the value function of a temporal difference-based reinforcement learning algorithm (see figure 1). The input vector contains state information; the output represents values of the RL value function. This is the approach taken, for example, in Tesauro's well-known world class backgammon player [16]. In this approach a single output unit may be used to compute values, and the network may be recomputed for different possible actions (as in Tesauro's work). If a value function over the state-action space (as in Q-learning) rather than the state space is to be computed, the inputs to the network may include a code for the different actions, or different output units may represent the values of different actions, or multiple networks may represent different actions. A variation of Q-learning may be used for training the network. In that case, the desired value $d_k$ at time $t$ for the corresponding output unit $k$ is:

$d_k(t) = r(t) + \gamma \max_a Q(s(t+1), a)$    (1)

where r(t) is the immediate reward achieved after executing action a(t) in state s(t) and reaching state s(t+1), Q(s, a) is the state-action value function representing cumulative discounted reward (estimated by the network), and $\gamma$ is a discount factor. After learning, in each state the best action can then be selected by simply taking the action which leads to the highest value computed by the network(s).

Fig. 1. Direct value function approximation approach for completely observable problems: a feedforward neural network is used as function approximator for an RL value function.
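To make Eq. (1) concrete, here is a minimal numpy sketch (an illustrative assumption, not the paper's implementation) of a small feedforward Q-network with one output unit per discrete action, trained towards the target of Eq. (1); the layer sizes, learning rate, and transition below are placeholders:

```python
import numpy as np

# Illustrative sketch (not the paper's code): a one-hidden-layer feedforward
# network with one output unit per discrete action, trained towards the
# Q-learning target of Eq. (1): d_k(t) = r(t) + gamma * max_a Q(s(t+1), a).

rng = np.random.default_rng(0)
STATE_DIM, N_HIDDEN, N_ACTIONS = 4, 16, 3          # placeholder sizes
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, STATE_DIM))
W2 = rng.normal(scale=0.1, size=(N_ACTIONS, N_HIDDEN))

def q_values(s):
    """Forward pass: tanh hidden layer, linear outputs Q(s, a) for all a."""
    h = np.tanh(W1 @ s)
    return W2 @ h, h

def td_update(s, a, r, s_next, gamma=0.95, lr=0.01):
    """One gradient step pushing Q(s, a) towards the target of Eq. (1)."""
    global W1, W2
    q, h = q_values(s)
    q_next, _ = q_values(s_next)
    target = r + gamma * np.max(q_next)            # d_k(t) of Eq. (1)
    err = target - q[a]                            # TD error for output unit a
    # Backpropagate the squared-error gradient for the selected action only;
    # the target is treated as a constant, as in standard Q-learning.
    dh = err * W2[a] * (1.0 - h ** 2)              # gradient at the hidden layer
    W2[a] += lr * err * h
    W1 += lr * np.outer(dh, s)

# Hypothetical transition (s, a, r, s'):
s, s_next = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
td_update(s, a=1, r=1.0, s_next=s_next)
```

After training, the greedy action in a state s is simply np.argmax(q_values(s)[0]), mirroring the action selection described above.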
Actor-critic system. If the state space is large and completely observable, but the action space is large and/or continuous, then an actor-critic architecture (see figure 2) may be used in which, as above, a network approximates a value function. This network is called the critic. Now, however, there is also a second network, called the actor. Like the critic, the actor receives state information as input. Its output is a vector coding for the action (or for action probabilities). The actor is trained using "internal rewards" from the critic, e.g. positive (negative) internal rewards when its actions lead to positive (negative) temporal difference errors for the critic.

Fig. 2. Actor-critic approach for completely observable problems: one feedforward neural network, the critic, is used as function approximator for an RL value function; another, the actor, is used to learn the policy.

Backpropagation through a model/critic. One disadvantage of standard actor-critic systems is that the search for the best action vector is undirected and can therefore be slow and fail to converge to good solutions, as it is based on the actor performing a kind of stochastic hillclimbing in the action space. The backpropagation through a model/critic (BTM) approach can remedy this, as it exploits backpropagation's directed credit assignment mechanism to learn the action (see figure 3). The idea is that rather than using an NN to directly represent an RL value function, an NN is used to learn a model of the environment that predicts next observations and rewards based on current observations and actions. This network is typically a recurrent neural network. Stochastic RL problems can be dealt with as well, if the outputs of the model are interpreted as probabilities.

Additionally or alternatively, the model network may learn to predict a value function, based on a standard temporal difference-based reinforcement learning algorithm. This network is then called the model/critic [17].

A separate neural network, the actor network, learns mappings from its input vector to an action vector, which is connected to the input side of the model. In the actor-learning phase of BTM, the weights of the model are fixed, and errors are backpropagated through the model that reflect not prediction errors (the model is assumed to be accurate or already learned), but differences between desired outcomes in the environment and actual outcomes. These errors are then propagated further into the actor network, where they are used to adjust the actor's weights.

Fig. 3. Backpropagation through a model/critic approach for completely observable problems: one neural network is used as a predictive model of the environment. In addition, this network may estimate a value function. Another neural network, the actor, learns the policy based on errors backpropagated through the (frozen) model/critic (dotted lines).

Fig. 4. [Partially observable counterparts of the approaches in Figs. 1-3: recurrent neural networks infer the state s from observations o (and actions a) to provide values V(s)/Q(s,a), internal rewards, and a critic for an FFNN actor; the caption is truncated in the source.]
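The following toy numpy sketch illustrates the actor-learning phase of BTM described above, but only as a hedged simplification: it uses linear networks instead of the paper's (recurrent) model/critic, and the dimensions, learning rate, and desired outcome are made-up placeholders. The point is only the mechanics: the error between a desired and a predicted outcome is backpropagated through the frozen model/critic, through the action input, and into the actor's weights.

```python
import numpy as np

# Toy sketch of BTM's actor-learning phase (a simplification with linear
# networks; the paper's model/critic is a recurrent network such as LSTM).

rng = np.random.default_rng(1)
STATE_DIM, ACTION_DIM = 3, 2                       # placeholder sizes

# Frozen model/critic: predicts a scalar outcome/value from (state, action).
w_critic_s = rng.normal(size=STATE_DIM)            # fixed during actor learning
w_critic_a = rng.normal(size=ACTION_DIM)           # fixed during actor learning

# Actor: linear mapping from the state to a continuous action vector.
W_actor = rng.normal(scale=0.1, size=(ACTION_DIM, STATE_DIM))

def predicted_outcome(s, a):
    """Frozen model/critic's prediction for taking action a in state s."""
    return w_critic_s @ s + w_critic_a @ a

def actor_step(s, desired, lr=0.05):
    """Backpropagate (desired - predicted) through the frozen model/critic,
    through the action input, into the actor's weights (model unchanged)."""
    global W_actor
    a = W_actor @ s                                # actor proposes an action
    err = desired - predicted_outcome(s, a)        # not a prediction error
    # Gradient of 0.5*err**2 w.r.t. the action, through the frozen critic:
    dE_da = -err * w_critic_a
    # Chain through the actor's linear map; only the actor is updated.
    W_actor -= lr * np.outer(dE_da, s)
    return a, err

s = rng.normal(size=STATE_DIM)
for _ in range(20):
    action, error = actor_step(s, desired=1.0)     # hypothetical desired outcome
print("action:", action, "remaining error:", error)
```

With the model/critic frozen, the actor's update is directed by the gradient through the model rather than by stochastic exploration in action space, which is the contrast with the actor-critic scheme drawn above.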