Constructing States for Reinforcement Learning

M. M. Hassan Mahmud    [email protected]
School of Computer Science, The Australian National University, Canberra, 0200 ACT, Australia

Appearing in Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 2010. Copyright 2010 by the author(s)/owner(s).

Abstract

POMDPs are the models of choice for reinforcement learning (RL) tasks where the environment cannot be observed directly. In many applications we need to learn the POMDP structure and parameters from experience, and this is considered to be a difficult problem. In this paper we address this issue by modeling the hidden environment with a novel class of models that are less expressive, but easier to learn and plan with, than POMDPs. We call these models deterministic Markov models (DMMs); they are the deterministic-probabilistic finite automata from learning theory, extended with actions to the sequential (rather than i.i.d.) setting. Conceptually, we extend the Utile Suffix Memory method of McCallum to handle long term memory. We describe DMMs, give Bayesian algorithms for learning and planning with them, and present experimental results on some standard POMDP tasks and on tasks designed to illustrate their efficacy.

1. Introduction

In this paper we derive a method to estimate the hidden structure of environments in general reinforcement learning problems. In such problems, at each discrete time step the agent takes an action and in turn receives just an observation and a reward. The goal of the agent is to take actions (i.e. plan) to maximize its future time-averaged discounted rewards (Bertsekas & Shreve, 1996). Clearly, we need to impose some structure on the environment to solve this problem.

The most popular approach in machine learning to do this is to assume that the environment is a partially observable Markov decision process (POMDP), which is (essentially) a hidden Markov model with actions. Given the model (structure and parameters) of a POMDP, there exist effective heuristic algorithms for planning (see the survey (Ross et al., 2008)), although exact planning is undecidable in general (Madani et al., 2003). However, in many important problems the POMDP model is not available a priori and has to be learned from experience. While there are some promising new approaches (e.g. (Doshi-Velez, 2009) using HDP-POMDPs), this problem is as yet unsolved (and in fact NP-hard even under severe constraints (Sabbadin et al., 2007)).

One way to bypass this difficult learning problem is to consider simpler environment models. In particular, in this paper we assume that each history deterministically maps to one of finitely many states and that this state is a sufficient statistic of the history (McCallum, 1995; Shalizi & Klinkner, 2004; Hutter, 2009). Given this history-state map the environment becomes an MDP, which can then be used to plan, so the learning problem now is to learn this map. Indeed, the well known USM algorithm (McCallum, 1995) used Prediction Suffix Trees (Ron et al., 1994) for these maps (each history is mapped to exactly one leaf/state) and was quite successful in benchmark POMDP domains. However, PSTs lack long term memory and had difficulty with noisy environments, and so USM was for the most part not followed up on. In our work we consider a Bayesian setup and replace PSTs with finite state machines, endowing the agent with long term memory. The resulting model is a proper subclass of POMDPs, but hopefully maintains the computational simplicity and efficiency that comes with considering deterministic history-state maps.
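As a concrete illustration of such a history-state map (an illustrative sketch, not the algorithm developed in this paper), the Python snippet below collapses each history to its k most recent action-observation pairs, a cruder fixed-depth variant of the suffix-based states that USM derives from prediction suffix trees; the function name and the parameter k are ours.

```python
# Minimal sketch of a deterministic history-to-state map: the "state" is simply
# the most recent k action-observation pairs, so every history maps to exactly
# one of finitely many states and planning can proceed on the induced MDP.
from typing import Sequence, Tuple

def suffix_state(history: Sequence[Tuple[str, str]], k: int = 2) -> Tuple[Tuple[str, str], ...]:
    """Map a history of (action, observation) pairs to its length-k suffix."""
    return tuple(history[-k:])

# Two histories that share the same recent suffix land in the same state.
h1 = [("a", "o"), ("a", "o'"), ("a", "o''")]
h2 = [("a", "o'"), ("a", "o''")]
assert suffix_state(h1) == suffix_state(h2)  # both map to (("a", "o'"), ("a", "o''"))
```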
We note that the belief states of POMDPs are also deterministic functions of the history. But this state space is infinite, and so POMDP model-learning algorithms try to estimate the hidden states instead (see for instance (Doshi-Velez, 2009)). As a result, these methods are quite different from algorithms using deterministic history-state maps. Other notable methods for learning the environment model include PSRs (Littman et al., 2002); unfortunately we lack space and do not discuss these further. Superficially, finite state controllers for POMDPs (Kaelbling et al., 1998) seem closely related to our work, but these are not quite model learning algorithms. They are (powerful) planning algorithms that assume a hidden but known environment model (at the very least, implicitly, an estimate of the number of hidden states).

We now proceed as follows. We define a general RL environment and then our model, the deterministic Markov model (DMM), and show how to use it to model the environment, infer the state given a history and compute the optimal policy. We then describe our Bayesian inference framework and derive a Bayesian, heuristic Monte-Carlo style algorithm for model learning and planning. Finally, we describe experiments on standard benchmark POMDP domains and on some novel domains to illustrate the advantage of our method over USM. Due to lack of space, formal proofs of our results are given in (Mahmud, 2010); our focus here is on motivating our approach via discussion and experiments.

2. Modeling Environments

To recap, we model a general RL environment by our model, the DMM, and then use the MDP derived from the DMM to plan for the problem. In the following we introduce notation (Sect. 2.1), define a general RL environment (Sect. 2.2), define our model, the DMM (Sect. 2.3), and show how DMMs can model the environment and construct the requisite MDP to plan with (Sect. 2.4). We then describe the DMM inference criterion and the learning algorithm in Sect. 3 and 4.

2.1. Preliminaries

E_{P(x)}[f(x)] denotes the expectation of the function f with respect to the distribution P. We let A be a finite set of actions, O a finite set of observations and R ⊂ ℝ a finite set of rewards. We set H := (A × O)* to be the set of histories and γ ∈ [0, 1) to be a fixed discount rate. We now need some notation for sequences over finite alphabets. x_{0:n} will denote a string of length n + 1 and x_{<n} ≡ x_{0:n-1} will denote a string of length n. x_{i:j} will denote elements i to j inclusive, while x_i will denote the i-th element of the sequence. The indices will often be time indices (but not always). If there are two strings x_{0:n} and y_{0:n}, we will use xy_{0:n} to denote the interleaved sequence x_0 y_0 x_1 y_1 ... x_n y_n; we will use xy_i and xy_{i:j} to denote x_i y_i and x_i y_i ... x_j y_j respectively. Finally, λ will denote the empty string. As an example, each element of H is of the form ao_{0:n}, and λ denotes the empty history.

2.2. General RL Environments

A general RL environment, denoted by grle, is defined by a tuple (A, R, O, RO, γ) where all the quantities except RO are as defined above. RO defines the dynamics of the environment: at step t, when action a is applied, the next reward-observation pair is selected according to the probability RO(ro | h, a), where h = ao_{0:t-1} is the history before step t. We write the marginals of RO over R and O as R and O respectively.

Figure 1. A DMM with three states s_0, s_1, s_2, A = {a}, R := {0, 1} and O := {o, o', o''}. The edges are labeled with the z ∈ A × O that cause the transition, along with the probability of that transition (parameters φ_{s,a}). The reward distribution for each state appears above the state (parameters θ_{s,a}).

The actions at each step are chosen using a policy π : H → A; the value function of π is defined as

    V^π(h) = E_{R(r|h,a)}[r] + γ E_{O(o|h,a)}[V^π(hao)]                          (1)

where a := π(h). The goal in RL problems is to learn the optimal policy π* which satisfies V^{π*}(h) ≥ V^π(h) for every policy π and history h. In particular, the value function of this policy is given by

    V^{π*}(h) := V*(h) := max_a { E_{R(r|h,a)}[r] + γ E_{O(o|h,a)}[V*(hao)] }     (2)

The existence of these functions follows via standard results (Bertsekas & Shreve, 1996). For the sequel we fix a particular RL environment and denote it by grle := (A, R, O, RO, γ).
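Equations (1) and (2) are defined over histories and so cannot be iterated directly; but once a history-state map reduces the environment to an MDP over finitely many states, V* can be computed by standard value iteration. The sketch below is a generic illustration under that assumption; the transition table P, reward table R and the toy two-state example are hypothetical placeholders, not quantities from the paper.

```python
# Value iteration on the MDP induced by a finite state set.
# P[s][a] is a list of (probability, next_state) pairs; R[s][a] is the expected reward.
def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-8):
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            q = [R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a]) for a in actions]
            new_v = max(q)                     # Bellman optimality backup, cf. Eq. (2)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:
            return V

# Tiny hypothetical usage: two states, one action, deterministic transitions.
states, actions = ["s0", "s1"], ["a"]
P = {"s0": {"a": [(1.0, "s1")]}, "s1": {"a": [(1.0, "s0")]}}
R = {"s0": {"a": 1.0}, "s1": {"a": 0.0}}
V = value_iteration(states, actions, P, R, gamma=0.5)  # V["s0"] ≈ 1.33, V["s1"] ≈ 0.67
```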
2.3. Deterministic Markov Models

Our model is a graphical model and as such is defined by two components -- a structure and associated parameters/probabilities (see Fig. 1). We use the term DMM to refer to the structure, since during Bayesian learning we marginalize out the parameters, leaving us with the marginal likelihood of just the structure; we use the term 'model' to refer to the DMM together with its parameters.

The DMM. A DMM ξ is a standard deterministic finite state automaton (Hopcroft et al., 2006) and is defined by the tuple (q◦, S, Σ, δ). Here, q◦ is the start state, S is the set of states, Σ = A × O is the edge-label alphabet and δ : S × Σ → S is the transition function. When ξ is at state s, it transitions to the state s' = δ(s, z) on input z ∈ Σ. In particular, modeling the details of O can be counterproductive: for example, in a domain with a million states where O is different at each state but R is the same everywhere, it would be a bad idea to try to learn O.
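The following minimal Python sketch (ours, not the paper's implementation) encodes a DMM's structure as a transition table and shows how the state reached after any history is computed deterministically by following the labelled edges from q◦; the particular two-state example is hypothetical.

```python
# A DMM structure: start state q◦ and transition function δ over Σ = A × O,
# so the state after any history is a deterministic function of that history.
class DMM:
    def __init__(self, start_state, delta):
        self.start_state = start_state   # q◦
        self.delta = delta               # dict: (state, (action, observation)) -> state

    def state_of(self, history):
        """Follow the labelled edges from q◦ along the (action, observation) pairs."""
        s = self.start_state
        for z in history:                # z = (a, o) ∈ Σ
            s = self.delta[(s, z)]
        return s

# Hypothetical two-state example with A = {"a"}, O = {"o", "o'"}:
delta = {
    ("s0", ("a", "o")): "s0",
    ("s0", ("a", "o'")): "s1",
    ("s1", ("a", "o")): "s0",
    ("s1", ("a", "o'")): "s1",
}
dmm = DMM("s0", delta)
print(dmm.state_of([("a", "o"), ("a", "o'"), ("a", "o'")]))  # -> "s1"
```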
