Reinforcement Learning and Function Approximation∗

Marina Irodova and Robert H. Sloan
[email protected] and [email protected]
Department of Computer Science
University of Illinois at Chicago
Chicago, IL 60607-7053, USA

∗ This material is based upon work supported by the National Science Foundation under Grant Nos. CCR-0100036 and CCF-0431059.
Copyright © 2005, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Relational reinforcement learning combines traditional reinforcement learning with a strong emphasis on a relational (rather than attribute-value) representation. Earlier work used relational reinforcement learning on a learning version of the classic Blocks World planning problem (a version where the learner does not know what the result of taking an action will be). "Structural" learning results have been obtained, such as learning in a mixed 3–5 block environment and being able to perform in a 3 or 10 block environment. Here we instead take a function approximation approach to reinforcement learning for this same problem. We obtain similar learning accuracies, with much better running times, allowing us to consider much larger problem sizes. For instance, we can train on a mix of 3–7 blocks and then perform well on worlds with 100–800 blocks, using less running time than the relational method required for 3–10 blocks.

Introduction

Traditional Reinforcement Learning (RL) is learning from interaction with an environment, in particular, learning from the consequences of actions chosen by the learner (see, e.g., Mitchell 1997; Kaelbling, Littman, & Moore 1996). The goal is to choose the best action for the current state. More precisely, the task of RL is to use observed rewards to learn an optimal (or almost optimal) policy for the environment. RL tasks are challenging because often the learner will execute a long sequence of actions and receive reward 0 for every action but the last one in the sequence.

(Dzeroski, Raedt, & Driessens 2001) introduced Relational Reinforcement Learning (RRL). RRL seeks to improve on RL for problems with a relational structure by generalizing RL to relationally represented states and actions (Tadepalli, Givan, & Driessens 2004b). Traditionally, RL has used a straightforward attribute-value representation, especially for the common Q-learning strategy. For instance, Dzeroski et al. showed how, using RRL, one could learn on a group of different small-size problem instances and perform well on more complex, or simply larger, problem instances having similar structure. Using a traditional attribute-value representation for RL, this would be impossible.

Dzeroski et al. and later (Gärtner, Driessens, & Ramon 2003) used a learning version of the classic Blocks World planning problem to test their ideas, using a Q-learning approach to RL. In this learning version, introduced originally by (Langley 1996), the task is finding the optimal policy in the blocks world, where, unlike the planning version, the effects of the actions are not known to the agent (until after the agent takes the action and observes its new state). We use this same test problem in our work here. We will be comparing our learning rates especially to Gärtner et al., who improved on the learning rates of Dzeroski et al. by using a clever new approach for RRL: graph kernels (Gärtner, Flach, & Wrobel 2003) and Gaussian learning.

Q-learning is a common model-free strategy for RL; it was used for RRL by both Dzeroski et al. and Gärtner et al., and we also use it. Q-learning maps every state-action pair to a real number, the Q-value, which tells how good that action is in that state. For small domains this mapping can be represented explicitly (Mitchell 1997); for large domains, such as a 10-block blocks world (which has tens of millions of states), it is infeasible. Even if memory were not a constraint, it would be infeasible to train on all those state-action pairs. RRL is one approach to this problem.

Here, instead, we take a function approximation approach by representing each state as a small fixed number of features that is independent of the number of blocks. Our main goal is to get similar learning accuracy results with (much) better running time than RRL. This allows us to perform in extremely large state spaces. Thus we can demonstrate tremendous generalization: training on 3–7 block instances and performing on instances with up to 800 blocks. The earlier works never considered more than 10 blocks, and really could not, given their running times.

There has been relatively little work on using function approximation for model-free RL in general, and Q-learning in particular. The idea is sketched in (Russell & Norvig 2003) and is used by (Stone & Veloso 1999) (in a setting with other complicating factors). (Of course, the use of function approximation in machine learning in general is not novel at all.)

The paper is organized as follows. First, we talk about the problem specification, briefly describing RL and Q-learning. Then we introduce the Blocks World problem in RL and explain the representation of the symbolic features for this domain. In the next section, we present our experiments and results. Finally, we devote a short section to additional related work, and then we conclude.

Problem Specification

Following (Dzeroski, Raedt, & Driessens 2001; Gärtner, Driessens, & Ramon 2003), we consider the following form of RL task for the learning agent.

What is given:
• Set S of possible states.
• Set A of possible actions.
• Unknown transition function δ : S × A → S.
• Reward function r, which is 1 in goal states and 0 elsewhere. In the problem considered here, the goal states are also terminal.

What to find:
• An optimal policy π∗ : S → A, for selecting the next action a_t based on the current observed state s_t.

A common approach is to choose the policy that produces the greatest cumulative reward over time. The usual way to define it is

    V^π(s_t) = Σ_{i=0}^∞ γ^i r_{t+i},

where γ ∈ [0, 1) is a discount factor. In general, the sequence of rewards r_{t+i} is obtained by starting at state s_t and iteratively using the policy π to choose actions via the rule a_t = π(s_t). For the goal-state type of reward, in one episode a single reward of 1 is obtained at the end of the episode, and the objective is to get to that goal state using as short an action sequence as possible. Formally, we require the agent to learn a policy π that maximizes the cumulative reward V^π. This policy will be called optimal and denoted by π∗.
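To make the objective concrete, here is a minimal Python sketch of the discounted return for a goal-reaching reward sequence. The reward lists and the discount value 0.9 are illustrative assumptions, not values taken from the paper.

```python
GAMMA = 0.9  # discount factor gamma in [0, 1); illustrative value

def discounted_return(rewards, gamma=GAMMA):
    """V = sum over i of gamma**i * r_{t+i} for a reward sequence observed from s_t."""
    return sum((gamma ** i) * r for i, r in enumerate(rewards))

# With the goal-state reward used here (0 on every step, 1 on reaching the goal),
# a shorter episode earns a larger return: gamma**(k-1) for an episode of k steps.
print(discounted_return([0, 0, 0, 1]))  # 0.9**3 ≈ 0.729
print(discounted_return([0, 1]))        # 0.9**1 = 0.9
```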
Q-learning

Q-learning is a very common form of RL. In Q-learning, the agent learns an action-value function, or Q-function, giving the value of taking a given action in a given state. Q-learning always selects the action that maximizes the sum of the immediate reward and the value of the immediate successor state. The Q-function for the policy π is

    Q(s, a) = r(s, a) + γ V^π(δ(s, a)).

Therefore, the optimal policy π∗ in terms of the Q-function is

    π∗(s) = argmax_a Q(s, a).

The update rule for the Q-function is defined as follows:

    Q(s, a) = Q(s, a) + α (r + γ max_{a'} Q(s', a') − Q(s, a)),

which is calculated when action a is executed in state s, leading to the state s'. Here 0 ≤ α ≤ 1 is a learning rate parameter and γ is the discount factor.
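The update rule can be illustrated with a minimal tabular sketch. The dictionary-based Q-table, the argument conventions, and the parameter values below are our own illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

# Q-table mapping (state, action) pairs to Q-values; states and actions are
# assumed hashable.  Parameter values are illustrative, not the paper's.
Q = defaultdict(float)
ALPHA = 0.1   # learning rate alpha, 0 <= alpha <= 1
GAMMA = 0.9   # discount factor gamma in [0, 1)

def q_update(s, a, r, s_next, legal_actions_next):
    """One tabular update: Q(s,a) = Q(s,a) + alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max((Q[(s_next, a2)] for a2 in legal_actions_next), default=0.0)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
```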
RL Function Approximation and Feature-Based Method

It may be very difficult in general to learn a Q-function perfectly. We often expect learning algorithms to obtain only some approximation to the target function. We will use function approximation: we will learn a representation of the Q-function as a linear combination of features, where the features describe a state. In other words, we translate state s into the set of features f_1, ..., f_n, where n is the number of features.

A somewhat novel aspect of our approach is that we consider a small set of symbolic, feature-based actions, and we learn a distinct Q-function for each of these actions. Thus we have the set of Q-functions

    Q^a(s, a) = θ^a_1 f_1 + ··· + θ^a_n f_n                                        (1)

for each symbolic action a. The update rule is

    θ^a_k = θ^a_k + α [r + γ max_{a'} Q^{a'}(s', a') − Q^a(s, a)] dQ^a(s, a)/dθ^a_k        (2)

In Algorithm 1, we sketch the Q-learning algorithm we used; it generally follows (Mitchell 1997).

Algorithm 1 The RL Algorithm
  initialize all thetas to 0
  repeat
    Generate a starting state s_0
    i = 0
    repeat
      Choose an action a_i, using the policy obtained from the current values of thetas
      Execute action a_i, observe r_i and s_{i+1}
      i = i + 1
    until s_i is terminal (i.e., a goal state)
    for j = i − 1 down to 0 do
      update the values of thetas for all taken actions using (2)
    end for
  until no more episodes

Commonly in Q-learning, the training actions are selected probabilistically. In state s, our probability of choosing action a_i is

    Pr(a_i | s) = T^{Q(s, a_i)} / Σ_{a_k legal in s} T^{Q(s, a_k)}                        (3)

The parameter T, called the "temperature," controls the exploration versus exploitation tradeoff. Larger values of T will give higher probabilities to those actions currently believed to be good and will favor exploitation of what the agent has learned. On the other hand, lower values of T will assign higher probabilities to the actions with small Q-values, forcing exploration.
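As a concrete reading of equations (1)–(3), the sketch below keeps one weight vector per symbolic action, selects actions with the temperature-based rule, and applies the gradient update. The feature encoding, action representation, and parameter values are placeholders we introduce for illustration; the paper's actual symbolic Blocks World features are described later in the paper.

```python
import random
from collections import defaultdict

N_FEATURES = 5                        # n, the fixed number of state features (placeholder)
ALPHA, GAMMA, TEMP = 0.1, 0.9, 2.0    # illustrative learning rate, discount, temperature T

# One weight vector theta^a per symbolic action a, as in equation (1).
theta = defaultdict(lambda: [0.0] * N_FEATURES)

def features(state):
    """Translate state s into features f_1..f_n.  Here we simply assume the caller
    already supplies a length-n numeric tuple; the paper's symbolic Blocks World
    features would be computed here instead."""
    return state

def q_value(state, action):
    """Q^a(s, a) = theta^a_1 f_1 + ... + theta^a_n f_n  -- equation (1)."""
    return sum(w * f for w, f in zip(theta[action], features(state)))

def boltzmann_choice(state, legal_actions):
    """Choose an action with probability proportional to T**Q(s, a)  -- equation (3)."""
    weights = [TEMP ** q_value(state, a) for a in legal_actions]
    return random.choices(legal_actions, weights=weights, k=1)[0]

def update(state, action, reward, next_state, next_legal_actions):
    """Gradient update of theta^a  -- equation (2).  For the linear form,
    dQ^a(s, a)/dtheta^a_k is just the feature value f_k."""
    f = features(state)
    best_next = max((q_value(next_state, a2) for a2 in next_legal_actions), default=0.0)
    td_error = reward + GAMMA * best_next - q_value(state, action)
    for k in range(N_FEATURES):
        theta[action][k] += ALPHA * td_error * f[k]
```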

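A possible shape for the episode loop of Algorithm 1, reusing q_value, boltzmann_choice, and update from the previous sketch; the environment object and its methods (generate_start_state, legal_actions, step, is_goal) are hypothetical stand-ins for a Blocks World simulator.

```python
def run_episode(env):
    """One episode in the style of Algorithm 1, using the functions sketched above.
    `env` is a hypothetical environment exposing: generate_start_state(),
    legal_actions(s), step(s, a) -> (r, s_next), and is_goal(s)."""
    s = env.generate_start_state()
    trajectory = []                                   # (s_i, a_i, r_i, s_{i+1})
    while not env.is_goal(s):
        a = boltzmann_choice(s, env.legal_actions(s))
        r, s_next = env.step(s, a)
        trajectory.append((s, a, r, s_next))
        s = s_next
    # Walk the episode backwards (j = i-1 down to 0), updating thetas with (2).
    for s_j, a_j, r_j, s_next in reversed(trajectory):
        next_actions = [] if env.is_goal(s_next) else env.legal_actions(s_next)
        update(s_j, a_j, r_j, s_next, next_actions)
```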