Reinforcement Learning: Chapter 1

1 Introduction

The idea that we learn by interacting with our environment is probably the first to occur to us when we think about the nature of learning. When an infant plays, waves its arms, or looks about, it has no explicit teacher, but it does have a direct sensorimotor connection to its environment. Exercising this connection produces a wealth of information about cause and effect, about the consequences of actions, and about what to do in order to achieve goals. Throughout our lives, such interactions are undoubtedly a major source of knowledge about our environment and ourselves. Whether we are learning to drive a car or to hold a conversation, we are acutely aware of how our environment responds to what we do, and we seek to influence what happens through our behavior. Learning from interaction is a foundational idea underlying nearly all theories of learning and intelligence.

In this book we explore a computational approach to learning from interaction. Rather than directly theorizing about how people or animals learn, we explore idealized learning situations and evaluate the effectiveness of various learning methods. That is, we adopt the perspective of an artificial intelligence researcher or engineer. We explore designs for machines that are effective in solving learning problems of scientific or economic interest, evaluating the designs through mathematical analysis or computational experiments. The approach we explore, called reinforcement learning, is much more focused on goal-directed learning from interaction than are other approaches to machine learning.

1.1 Reinforcement Learning

Reinforcement learning is learning what to do, that is, how to map situations to actions so as to maximize a numerical reward signal. The learner is not told which actions to take, as in most forms of machine learning, but instead must discover which actions yield the most reward by trying them. In the most interesting and challenging cases, actions may affect not only the immediate reward but also the next situation and, through that, all subsequent rewards. These two characteristics, trial-and-error search and delayed reward, are the two most important distinguishing features of reinforcement learning.

Reinforcement learning is defined not by characterizing learning methods, but by characterizing a learning problem. Any method that is well suited to solving that problem, we consider to be a reinforcement learning method. A full specification of the reinforcement learning problem in terms of optimal control of Markov decision processes must wait until Chapter 3, but the basic idea is simply to capture the most important aspects of the real problem facing a learning agent interacting with its environment to achieve a goal. Clearly, such an agent must be able to sense the state of the environment to some extent and must be able to take actions that affect the state. The agent also must have a goal or goals relating to the state of the environment. The formulation is intended to include just these three aspects, sensation, action, and goal, in their simplest possible forms without trivializing any of them.
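
The sensation-action-goal loop described above can be made concrete in a few lines of code before the formal treatment in Chapter 3. Below is a minimal sketch in Python; the `Environment` and `Agent` classes are hypothetical placeholders rather than any particular library's API, and the two-state environment is invented purely so the loop runs.

```python
# A minimal sketch of the reinforcement learning interaction loop.
# "Environment" and "Agent" are hypothetical stand-ins: any concrete
# problem and learning method from later chapters fits this shape.

import random


class Environment:
    """A trivial two-state environment, invented only to make the loop runnable."""

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # The action affects the immediate reward and the next state,
        # and through the next state, all subsequent rewards.
        reward = 1.0 if action == self.state else 0.0
        self.state = random.choice([0, 1])
        return self.state, reward


class Agent:
    """A placeholder agent that acts at random; learning methods come later."""

    def act(self, state):
        return random.choice([0, 1])

    def learn(self, state, action, reward, next_state):
        pass  # a real agent would improve its situation-to-action mapping here


env, agent = Environment(), Agent()
state = env.reset()
total_reward = 0.0
for t in range(100):
    action = agent.act(state)               # sensation -> action
    next_state, reward = env.step(action)   # action -> reward, next sensation
    agent.learn(state, action, reward, next_state)
    state = next_state
    total_reward += reward
print("total reward:", total_reward)
```

In this framing, the methods of later chapters are different ways of filling in `act` and `learn` so that the total reward grows over time.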

Reinforcement learning is different from supervised learning, the kind of learning studied in most current research in machine learning, statistical pattern recognition, and artificial neural networks. Supervised learning is learning from examples provided by a knowledgeable external supervisor. This is an important kind of learning, but alone it is not adequate for learning from interaction. In interactive problems it is often impractical to obtain examples of desired behavior that are both correct and representative of all the situations in which the agent has to act. In uncharted territory, where one would expect learning to be most beneficial, an agent must be able to learn from its own experience.

One of the challenges that arise in reinforcement learning and not in other kinds of learning is the trade-off between exploration and exploitation. To obtain a lot of reward, a reinforcement learning agent must prefer actions that it has tried in the past and found to be effective in producing reward. But to discover such actions, it has to try actions that it has not selected before. The agent has to exploit what it already knows in order to obtain reward, but it also has to explore in order to make better action selections in the future. The dilemma is that neither exploration nor exploitation can be pursued exclusively without failing at the task. The agent must try a variety of actions and progressively favor those that appear to be best. On a stochastic task, each action must be tried many times to gain a reliable estimate of its expected reward. The exploration-exploitation dilemma has been intensively studied by mathematicians for many decades (see Chapter 2; one simple balancing rule is sketched in code below). For now, we simply note that the entire issue of balancing exploration and exploitation does not even arise in supervised learning as it is usually defined.

Another key feature of reinforcement learning is that it explicitly considers the whole problem of a goal-directed agent interacting with an uncertain environment. This is in contrast with many approaches that consider subproblems without addressing how they might fit into a larger picture. For example, we have mentioned that much of machine learning research is concerned with supervised learning without explicitly specifying how such an ability would finally be useful. Other researchers have developed theories of planning with general goals, but without considering planning's role in real-time decision-making, or the question of where the predictive models necessary for planning would come from. Although these approaches have yielded many useful results, their focus on isolated subproblems is a significant limitation.

Reinforcement learning takes the opposite tack, starting with a complete, interactive, goal-seeking agent. All reinforcement learning agents have explicit goals, can sense aspects of their environments, and can choose actions to influence their environments. Moreover, it is usually assumed from the beginning that the agent has to operate despite significant uncertainty about the environment it faces. When reinforcement learning involves planning, it has to address the interplay between planning and real-time action selection, as well as the question of how environmental models are acquired and improved. When reinforcement learning involves supervised learning, it does so for specific reasons that determine which capabilities are critical and which are not.
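
To illustrate the exploration-exploitation balance mentioned above, here is a minimal sketch of epsilon-greedy action selection, one of the simple methods studied in Chapter 2: with small probability epsilon the agent explores by choosing an action at random, and otherwise it exploits the action with the highest estimated reward. The reward probabilities below are invented for illustration and would be unknown to a real agent.

```python
# A minimal epsilon-greedy sketch for a stochastic task with three actions.
# The true reward probabilities are invented and hidden from the agent,
# which estimates each action's expected reward by a sample average.

import random

true_reward_prob = [0.2, 0.5, 0.8]   # hypothetical, unknown to the agent
n_actions = len(true_reward_prob)
estimates = [0.0] * n_actions        # estimated expected reward per action
counts = [0] * n_actions             # how often each action has been tried
epsilon = 0.1                        # fraction of steps spent exploring

for step in range(10000):
    if random.random() < epsilon:
        action = random.randrange(n_actions)       # explore: try anything
    else:
        action = estimates.index(max(estimates))   # exploit: best estimate so far
    reward = 1.0 if random.random() < true_reward_prob[action] else 0.0
    counts[action] += 1
    # Incremental sample-average update of this action's estimated reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("estimated rewards:", [round(q, 2) for q in estimates])
```

Because the task is stochastic, each action must be tried many times before its estimate becomes reliable, which is why exploration is never switched off entirely.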

For learning research to make progress, important subproblems have to be isolated and studied, but they should be subproblems that play clear roles in complete, interactive, goal-seeking agents, even if all the details of the complete agent cannot yet be filled in.

One of the larger trends of which reinforcement learning is a part is that toward greater contact between artificial intelligence and other engineering disciplines. Not all that long ago, artificial intelligence was viewed as almost entirely separate from control theory and statistics. It had to do with logic and symbols, not numbers. Artificial intelligence was large LISP programs, not linear algebra, differential equations, or statistics. Over the last decades this view has gradually eroded. Modern artificial intelligence researchers accept statistical and control algorithms, for example, as relevant competing methods or simply as tools of their trade. The previously ignored areas lying between artificial intelligence and conventional engineering are now among the most active, including new fields such as neural networks, intelligent control, and our topic, reinforcement learning. In reinforcement learning we extend ideas from optimal control theory and stochastic approximation to address the broader and more ambitious goals of artificial intelligence.

1.2 Examples

A good way to understand reinforcement learning is to consider some of the examples and possible applications that have guided its development.

A master chess player makes a move. The choice is informed both by planning (anticipating possible replies and counterreplies) and by immediate, intuitive judgments of the desirability of particular positions and moves.

An adaptive controller adjusts parameters of a petroleum refinery's operation in real time. The controller optimizes the yield/cost/quality trade-off on the basis of specified marginal costs without sticking strictly to the set points originally suggested by engineers.

A gazelle calf struggles to its feet minutes after being born. Half an hour later it is running at 20 miles per hour.

A mobile robot decides whether it should enter a new room in search of more trash to collect or start trying to find its way back to its battery recharging station. It makes its decision based on how quickly and easily it has been able to find the recharger in the past.

Phil prepares his breakfast. Closely examined, even this apparently mundane activity reveals a complex web of conditional behavior and interlocking goal-subgoal relationships: walking to the cupboard, opening it, selecting a cereal box, then reaching for, grasping, and retrieving the box. Other complex, tuned, interactive sequences of behavior are required to obtain a bowl, spoon, and milk jug.
