On Q-Learning Convergence for Non-Markov Decision Processes

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18)

Sultan Javed Majeed (1), Marcus Hutter (2)
(1, 2) Research School of Computer Science, Australian National University, Australia
(1) http://www.sultan.pk, (2) http://www.hutter1.net

Abstract

Temporal-difference (TD) learning is an attractive, computationally efficient framework for model-free reinforcement learning. Q-learning is one of the most widely used TD learning techniques; it enables an agent to learn the optimal action-value function, i.e. the Q-value function. Contrary to its widespread use, Q-learning has only been proven to converge on Markov Decision Processes (MDPs) and Q-uniform abstractions of finite-state MDPs. On the other hand, most real-world problems are inherently non-Markovian: the full true state of the environment is not revealed by recent observations. In this paper, we investigate the behavior of Q-learning when applied to non-MDP and non-ergodic domains which may have infinitely many underlying states. We prove that the convergence guarantee of Q-learning can be extended to a class of such non-MDP problems, in particular, to some non-stationary domains. We show that state-uniformity of the optimal Q-value function is a necessary and sufficient condition for Q-learning to converge even in the case of infinitely many internal states.

1 Introduction

Temporal-difference learning [Sutton, 1988] is a well-celebrated model-free learning framework in machine learning. In TD, an agent learns the optimal action-value function (also known as the Q-value function) of the underlying problem without explicitly building or learning a model of the environment. The agent can learn the optimal behavior from the learned Q-value function: the optimal action maximizes the Q-value function. It is generally assumed that the environment is Markovian and ergodic for a TD agent to converge [Tsitsiklis, 1994; Bertsekas and Tsitsiklis, 1995].

TD agents, apart from a few restrictive cases (see Section 5 for exceptions), are not proven to learn non-Markovian environments (throughout, we use "learn a domain" in the sense of learning to act optimally, not learning a model of the domain's dynamics), whereas most real-world problems are inherently non-Markovian: the full true state of the environment is not revealed by the last observation, and the set of true states can be infinite, as is effectively the case in non-stationary domains. Therefore, it is important to know whether an agent performs well in such non-Markovian domains in order to handle a broad range of real-world problems. In this work, we investigate the convergence of one of the most widely used TD learning algorithms, Q-learning [Watkins and Dayan, 1992]. Q-learning has been shown to converge in MDP domains [Tsitsiklis, 1994; Bertsekas and Tsitsiklis, 1995], whereas there are empirical observations that Q-learning sometimes also works in some non-MDP domains [Sutton and Barto, 1984]. The first non-MDP convergence result for Q-learning was reported by Li et al. [2006] for environments that are Q-uniform abstractions of finite-state MDPs. Recent results on Extreme State Aggregation (ESA) [Hutter, 2016] indicate that under some conditions there exists a deterministic, near-optimal policy for non-MDP environments which are not required to be abstractions of any finite-state MDP. These positive results motivated this work to extend the non-MDP convergence proof of Q-learning to a larger class of infinite-internal-state non-MDPs.

The most popular extension of the MDP is the finite-state partially observable Markov decision process (POMDP). In a POMDP the environment has a hidden true state, and the observations from the environment generally do not reveal the true state. Therefore, the agent either has to keep a full interaction history, estimate the true state, or maintain a belief over the possible true states. In our formulation, we use an even more general class of processes, the history-based decision process (HDP): a history-based process is equivalent to an infinite-state POMDP [Leike, 2016]. We provide a simple proof of Q-learning convergence for a class of domains that encompasses significantly more domains than MDP and intersects with the POMDP and HDP classes. We name this class Q-value uniform Decision Process (QDP) and show that Q-learning converges in QDPs. Moreover, we show that QDP is the largest class where Q-learning can converge, i.e. QDP provides the necessary and sufficient conditions for Q-learning convergence.

Apart from a few toy problems, it is always a leap of faith to treat real-world problems as MDPs. An MDP model of the underlying true environment is implicitly assumed even for model-free algorithms. Our result helps to relax this assumption: rather than assuming the domain to be a finite-state MDP, we can suppose it to be a QDP, which is a much weaker implicit assumption. The positive result of this paper can be interpreted in a couple of ways: (a) as discussed above, it provides theoretical grounds for Q-learning to be applicable in a much broader class of environments, or (b) if the agent has access to a QDP aggregation map as a potential model of the true environment, or has a companion map-learning/estimation algorithm to build such a model, then this combination of the aggregation map with Q-learning converges. Learning such maps is an interesting topic, but it is beyond the scope of this work.

The rest of the paper is structured as follows. In Section 2 we set up the framework. Section 3 drafts the QDP class. Section 4 gives a preview of our main convergence result. Section 5 places our work in the context of the literature. Section 6 contains the proof of the main results. In Section 7 we numerically evaluate Q-learning on a few non-MDP toy domains. Section 8 concludes the paper.
2 Setup

We use the general history-based agent-environment reinforcement learning framework [Hutter, 2005; Hutter, 2016]. The agent and the environment interact in cycles. At the beginning of a cycle $t \in \mathbb{N}$ the agent takes an action $a_t$ from a finite action-space $\mathcal{A}$. The environment dispenses a real-valued reward $r_{t+1}$ from a set $\mathcal{R} \subset \mathbb{R}$ and an observation $o_{t+1}$ from a finite set of observations $\mathcal{O}$. However, in our setup, we assume that the agent does not directly use this observation, e.g. because $\mathcal{O}$ may be too large to learn from. The agent has access to a map/model $\phi$ of the environment that takes in the observation, the reward and the previous interaction history, and provides the same reward $r_{t+1}$ and a mapped state $s_{t+1}$ from a finite set of states $\mathcal{S}$; and the cycle repeats. Formally, this agent-environment interaction generates a growing history $h_{t+1} := h_t a_t o_{t+1} r_{t+1}$ from a set of histories $\mathcal{H}_t := (\mathcal{A} \times \mathcal{O} \times \mathcal{R})^t$. The set of all finite histories is denoted by $\mathcal{H} := \bigcup_t \mathcal{H}_t$. The map $\phi$ is assumed to be a surjective mapping from $\mathcal{H}$ to $\mathcal{S}$. We use $\epsilon$ to denote the empty history and $:=$ to express an equality by definition. In general (non-MDPs), at any time-instant $t$ the transition probability to the next observation $o' := o_{t+1}$ and reward $r' := r_{t+1}$ is a function of the history-action pair $(h, a) := (h_t, a_t)$, and not of the state-action pair alone.

Through the map $\phi$, the agent experiences the agent-environment interaction as an action-state-reward sequence $(a_t, s_t, r_t)_{t \in \mathbb{N}}$. We call it a state-process induced by the map $\phi$.
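To make the interaction cycle concrete, the following is a minimal Python sketch of the loop above. It is not from the paper; the function names (interact, step, policy) and the representation of a history as a tuple of (action, observation, reward) triples are illustrative assumptions.

```python
# Minimal sketch of the history-based interaction cycle (illustrative, not from the paper).
from typing import Callable, List, Tuple

Action, Obs, Reward, State = int, int, float, int
History = Tuple[Tuple[Action, Obs, Reward], ...]   # an element of H_t = (A x O x R)^t

def interact(step: Callable[[History, Action], Tuple[Obs, Reward]],
             phi: Callable[[History], State],
             policy: Callable[[State], Action],
             cycles: int) -> List[Tuple[Action, State, Reward]]:
    """Run the agent-environment loop and return the induced action-state-reward sequence."""
    h: History = ()                      # empty history
    trace: List[Tuple[Action, State, Reward]] = []
    for _ in range(cycles):
        s = phi(h)                       # the agent only sees the mapped state s_t = phi(h_t)
        a = policy(s)                    # act on the abstract state
        o, r = step(h, a)                # the environment may depend on the whole history h_t
        h = h + ((a, o, r),)             # h_{t+1} := h_t a_t o_{t+1} r_{t+1}
        trace.append((a, phi(h), r))     # the agent's view of the transition
    return trace
```

A map $\phi$ that returns, say, only the last observation would recover the usual state-aggregation setting.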
Definition 2 (State-process) For a history $h$ that is mapped to a state $s$, a state-process $p_h$ is a stochastic mapping from a state-action pair with the fixed state $s$ to state-reward pairs. Formally, $p_h : \{s\} \times \mathcal{A} \rightsquigarrow \mathcal{S} \times \mathcal{R}$.

The relationship between the underlying HDP and the induced state-process for an $s = \phi(h)$ is formally defined as:

$p_h(s'r' \mid sa) := \sum_{o' : \phi(hao'r') = s'} P(o'r' \mid ha).$   (2)

We denote the action-value function of the state-process by $q$, and the optimal Q-value function is given by $q^*$:

$q^*(s, a, h) := \sum_{s'r'} p_h(s'r' \mid sa) \left[ r' + \gamma \max_{\tilde{a}} q^*(s', \tilde{a}, h) \right].$   (3)

It is clear that $p_h(s'r' \mid sa)$ may not be the same as $p_{\dot{h}}(s'r' \mid sa)$ for two histories $h$ and $\dot{h}$ mapped to the same state $s$. If the state-process is an MDP, then $p_h$ is independent of history and so is $q^*$, and convergence of Q-learning follows from this MDP condition [Bertsekas and Tsitsiklis, 1995]. However, we do not assume such a condition and go beyond MDP mappings. We later show, by constructing examples, that $q^*$ can be made independent of history while the state-process is still history dependent, i.e. non-MDP.

Now we formally define Q-learning: at each time-step $t$ the agent maintains an action-value function estimate $q_t$. The agent in a state $s := s_t$ takes an action $a := a_t$ and receives a reward $r := r_{t+1}$ and the next state $s' := s_{t+1}$. Then the agent performs an action-value update to the $(s, a)$-estimate with the following Q-learning update rule:

$q_{t+1}(s, a) := q_t(s, a) + \alpha_t(s, a) \bigl( r + \gamma \max_{\tilde{a}} q_t(s', \tilde{a}) - q_t(s, a) \bigr),$   (4)

where $(\alpha_t)_{t \in \mathbb{N}}$ is a learning rate sequence.
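As a concrete reading of Eq. (2): the induced state-process simply sums the HDP probabilities of all observation-reward pairs that $\phi$ sends to the same next state. The sketch below assumes the HDP kernel $P(\cdot \mid h,a)$ is available as an explicit dictionary, which is only realistic for toy problems; the names are illustrative, not from the paper.

```python
# Illustrative computation of the induced state-process p_h (Eq. 2); assumed names.
from collections import defaultdict
from typing import Callable, Dict, Tuple

History = Tuple[Tuple[int, int, float], ...]        # as in the interaction sketch above

def induced_state_process(P_ha: Dict[Tuple[int, float], float],
                          phi: Callable[[History], int],
                          h: History, a: int) -> Dict[Tuple[int, float], float]:
    """p_h(s'r' | sa): marginalise P(o'r' | ha) over all o' with phi(h a o' r') = s'."""
    p_h: Dict[Tuple[int, float], float] = defaultdict(float)
    for (o_next, r_next), prob in P_ha.items():      # P_ha[(o', r')] = P(o'r' | ha)
        s_next = phi(h + ((a, o_next, r_next),))     # s' = phi(h a o' r')
        p_h[(s_next, r_next)] += prob
    return dict(p_h)
```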
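Update rule (4) is ordinary tabular Q-learning applied to the abstract states. The sketch below is one standard implementation; the class name and the $1/(1 + N(s,a))$ learning-rate schedule are assumptions chosen for illustration (a common Robbins-Monro-style choice), not prescribed by the paper.

```python
# Tabular Q-learning on abstract states, implementing update rule (4); illustrative sketch.
from collections import defaultdict
from typing import Hashable, Iterable

class TabularQLearner:
    def __init__(self, actions: Iterable[Hashable], gamma: float = 0.99):
        self.q = defaultdict(float)        # q_t(s, a), initialised to zero
        self.visits = defaultdict(int)     # visit counts N(s, a)
        self.actions = list(actions)
        self.gamma = gamma

    def alpha(self, s, a) -> float:
        # Assumed schedule: alpha_t(s, a) = 1 / (1 + N(s, a)).
        return 1.0 / (1.0 + self.visits[(s, a)])

    def update(self, s, a, r, s_next) -> None:
        """q_{t+1}(s,a) := q_t(s,a) + alpha_t(s,a) * (r + gamma * max_a' q_t(s',a') - q_t(s,a))."""
        target = r + self.gamma * max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha(s, a) * (target - self.q[(s, a)])
        self.visits[(s, a)] += 1

    def greedy_action(self, s) -> Hashable:
        # The learned behaviour: the optimal action maximises the Q-value function.
        return max(self.actions, key=lambda b: self.q[(s, b)])
```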

3 Q-Value Uniform Decision Process (QDP)

In this section we formulate a class of environments called Q-value uniform decision processes, i.e.
