Automatic Feature Selection for Model-Based Reinforcement Learning in Factored MDPs

Mark Kroon and Shimon Whiteson
Informatics Institute, University of Amsterdam
Science Park 107, 1098 XG Amsterdam, The Netherlands
{m.kroon,s.a.whiteson}@uva.nl

Abstract—Feature selection is an important challenge in machine learning. Unfortunately, most methods for automating feature selection are designed for supervised learning tasks and are thus either inapplicable or impractical for reinforcement learning. This paper presents a new approach to feature selection specifically designed for the challenges of reinforcement learning. In our method, the agent learns a model, represented as a dynamic Bayesian network, of a factored Markov decision process, deduces a minimal feature set from this network, and efficiently computes a policy on this feature set using dynamic programming methods. Experiments in a stock-trading benchmark task demonstrate that this approach can reliably deduce minimal feature sets and that doing so can substantially improve performance and reduce the computational expense of planning.

Keywords—Reinforcement learning; feature selection; factored MDPs

I. INTRODUCTION

In many machine learning problems, the set of features with which the data can be described is quite large. Feature selection is the process of determining which subset of these features should be included in order to generate the best performance. Doing so correctly can be critical to the success of the learner. Clearly, excluding important features can limit the quality of the resulting solution. At the same time, including superfluous features can lead to slower learning or weaker final performance, since the number of potential solutions grows exponentially with respect to the number of features. In addition, using a non-minimal feature set may incur other costs, e.g., the expense of purchasing and installing unnecessary sensors on a robot.

This paper considers the problem of feature selection in reinforcement learning (RL) problems [1], in which an agent seeks a control policy for an unknown environment given only a scalar reward signal as feedback.

The need for methods that automate feature selection in RL is great, as most existing feature selection methods assume a supervised setting, in which the goal is to approximate a function given example input-output pairs. As a result, they cannot be directly applied to RL problems. For example, filter methods [2], [3] rely on the examples' labels, which are not available in RL, to filter out irrelevant features as a preprocessing step. Wrapper methods [4], [5], which conduct a 'meta-search' for feature sets that maximize the learner's performance, do not use labeled data but are nonetheless impractical in RL since the cost of evaluating candidate feature sets affects not just the computational complexity but also the sample complexity [6], i.e., the number of interactions with the environment required for learning. Since RL methods are typically evaluated in an on-line setting, minimizing sample complexity is essential to performance.

In this paper, we propose a new approach to feature selection designed to achieve greater sample efficiency. To do so, we employ a model-based [7], [8] approach, wherein the samples gathered by the agent are used to approximate a model of the environment from which a policy is computed using planning methods such as dynamic programming. Assuming the problem is a factored Markov decision process (MDP), this model can be represented as a dynamic Bayesian network (DBN) that describes rewards and state transitions as a stochastic function of the agent's current state and action.

The main idea is to use the conditional independencies shown in such a DBN, whose structure and weights can be learned with recently developed methods [9], [10], to deduce the minimal feature set. In particular, we propose and compare several ways to use the DBN to compute a feature set that not only excludes irrelevant features but also minimizes the size of the feature set in cases when multiple DBNs are consistent with the data.

We study the performance of this method in several variations of the stock-trading domain [9]. First, we assess the ability of the method to deduce the right feature set assuming a correct model has already been learned. Then, we consider the performance of the complete algorithm in two RL settings where feature selection can be beneficial. The results show that the proposed method can reliably deduce minimal feature sets and that doing so can substantially improve performance and reduce the computational expense of planning.

II. REINFORCEMENT LEARNING IN FACTORED MDPS

A Markov decision process (MDP) can be defined as a 5-tuple ⟨S, A, T, R, γ⟩, where S is a finite set of states, A is a finite set of actions and T is a transition model T : S × A → P_S, where P_S is a probability distribution over S. T(s'|s, a) denotes the probability of transitioning to state s' after taking action a in state s. R is a reward function R : S × A → ℝ and γ is a discount factor on the summed sequence of rewards. The agent's behavior is determined by its policy π : S → A, which has an associated value function V^π : S → ℝ representing the expected discounted future reward starting from a given state and following π. The optimal policy π* can be derived from the optimal value function V*. In a model-based approach to RL [7], [8], first a model is learned from the environment (i.e., T and R) and then the optimal value function can be computed using a planning technique like value iteration [11].
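For concreteness, the sketch below shows tabular value iteration for a flat MDP whose transition and reward models are already known and stored as NumPy arrays; the array layout, variable names, and toy parameter values are illustrative assumptions of this example, not the paper's implementation.

```python
import numpy as np

def value_iteration(T, R, gamma=0.95, theta=1e-6):
    """Compute an optimal value function and greedy policy for a known MDP.

    T: array of shape (|S|, |A|, |S|) with T[s, a, s'] = P(s' | s, a)
    R: array of shape (|S|, |A|) holding expected immediate rewards
    gamma: discount factor; theta: convergence threshold on the value update
    """
    n_states, n_actions, _ = T.shape
    V = np.zeros(n_states)
    while True:
        # Q(s, a) = R(s, a) + gamma * sum_{s'} T(s'|s, a) * V(s')
        Q = R + gamma * (T @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < theta:
            break
        V = V_new
    policy = Q.argmax(axis=1)  # greedy policy with respect to the final Q
    return V_new, policy
```

In the model-based setting described above, T and R would simply be replaced by the estimates learned from the agent's samples before planning.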
Some model-based methods, such as the Rmax algorithm [12], are guaranteed to find a near-optimal policy in polynomial time. Rmax ensures efficient exploration of the environment by distinguishing between states that have been visited often enough to accurately estimate the transition function (i.e., they are marked as "known") and those that have not. The algorithm guides the agent towards unknown states by giving transitions to them an artificial bonus value in the model. Planning on this model then yields a policy that favors exploration in those unknown states.

In many MDPs, the state space is spanned by a number of factors X = {X_1, ..., X_n}, where each factor X_i takes on values in some finite domain D(X_i). In a factored MDP, each state s ∈ S is represented as a vector s = ⟨s(1), ..., s(n)⟩, where s(i) ∈ D(X_i). The number of states grows exponentially in the number of factors. If not all factors of a state influence all factors at the next timestep, the transition function can be represented succinctly with a dynamic Bayesian network (DBN) [13].

In a typical formulation, there is one DBN for each action a ∈ A. Each DBN consists of a two-layer graph in which the layers represent the states at time t and t+1. The nodes in the graph represent factors and the edges denote dependency relations given the action a. Pa_a(X_i) denotes the parents of the i-th factor for action a, the set of factors in the previous timestep that directly influence this particular factor. As in previous literature [14], [15], [9], we assume that a factor X_i at time t+1 is conditionally independent of all other factors at time t+1 given the factors at time t. The transition function can now be described as:

    T(s'|s, a) = ∏_i P_i(s'(i) | u_i, a)

where u_i is the setting of the factors in Pa_a(X_i) and P_i(·|u_i, a) is a probability distribution over D(X_i) defined by a conditional probability table (CPT).
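To illustrate this factorization, the sketch below evaluates T(s'|s, a) for a single action's DBN as a product of per-factor CPT lookups. The dictionary-based representation of parent sets and CPTs, and the two-factor toy model, are assumptions made for this example rather than the paper's data structures.

```python
from typing import Dict, List, Tuple

# Hypothetical DBN representation for one action a:
# parents[i] -> indices of Pa_a(X_i) among the previous-timestep factors
# cpts[i]    -> maps (u_i, next_value) to P_i(next_value | u_i, a)
Parents = Dict[int, List[int]]
CPTs = Dict[int, Dict[Tuple[Tuple[int, ...], int], float]]

def transition_prob(s: Tuple[int, ...], s_next: Tuple[int, ...],
                    parents: Parents, cpts: CPTs) -> float:
    """Return T(s'|s, a) = prod_i P_i(s'(i) | u_i, a) for one action's DBN."""
    prob = 1.0
    for i, next_value in enumerate(s_next):
        u_i = tuple(s[j] for j in parents[i])        # setting of Pa_a(X_i)
        prob *= cpts[i].get((u_i, next_value), 0.0)  # CPT lookup for factor i
    return prob

# Toy usage: two binary factors; X_0' depends only on X_0, X_1' on both factors.
parents = {0: [0], 1: [0, 1]}
cpts = {
    0: {((0,), 0): 0.9, ((0,), 1): 0.1, ((1,), 0): 0.2, ((1,), 1): 0.8},
    1: {((0, 0), 0): 0.7, ((0, 0), 1): 0.3, ((0, 1), 0): 0.4, ((0, 1), 1): 0.6,
        ((1, 0), 0): 0.5, ((1, 0), 1): 0.5, ((1, 1), 0): 0.1, ((1, 1), 1): 0.9},
}
print(transition_prob((0, 1), (0, 1), parents, cpts))  # 0.9 * 0.6 = 0.54
```

Because each factor's CPT is indexed only by the setting of its parents u_i, the size of the model grows with the number of parent configurations rather than with the full state space.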
To learn such a factored model online, we combine Rmax with a KWIK learner. The resulting method, which we call KWIK-Factored-Rmax, is shown in Algorithm 1. Here, the KWIK model learner K is an implementation of the outline provided by Li et al., and is comparable with the recent Met-Rmax implementation introduced by Diuk et al. [17].

K is instantiated with accuracy parameters ε and δ and learns a model from samples of the form ⟨s, a, r, s'⟩ obtained by the agent. Specifically, it learns both the structure and CPT entries of DBNs for the transition and reward functions. When queried for a prediction, K(s, a) may return ⊥ if it has not observed enough examples to estimate the reward or next state resulting from taking action a in state s. Otherwise, it returns T̂(s'|s, a), an estimate of the transition function, and R̂(s, a), an estimate of the reward function, such that |T̂(s'|s, a) − T(s'|s, a)| ≤ ε and |R̂(s, a) − R(s, a)| ≤ ε, both with probability 1 − δ. Internally, the algorithm learns a dependency structure for each feature X_i separately by considering different hypothetical parent sets Pa_a(X_i) with up to k parent nodes. The parent sets that result in high prediction errors are discarded. We refer to [10] for more details about KWIK learning.

Algorithm 1 KWIK-Factored-Rmax(ε, δ, θ)
  Inputs: ε, δ: accuracy thresholds for the KWIK model learner
          θ: accuracy threshold for value iteration
  Initialize KWIK model learner K with parameters ε and δ.
  for all (s, a) ∈ S × A do
      Q(s, a) ← Rmax                    // initialize action-value function
  for each timestep t do
      Let s denote the state at t.
      Take action a = argmax_{a' ∈ A} Q(s, a').
      Let s' be the next state after taking action a in state s.
      Let K observe ⟨s, a, r, s'⟩.
      repeat
          Δ ← 0
          for all (s, a) ∈ S × A do
              if K(s, a) = ⊥ then
                  Q(s, a) ← Rmax
              else                      // otherwise perform value iteration update
                  ⟨T̂, R̂⟩ ← K(s, a)
                  v ← Q(s, a)
                  Q(s, a) ← R̂(s, a) + γ Σ_{s'} T̂(s'|s, a) max_{a'} Q(s', a')
                  Δ ← max(Δ, |v − Q(s, a)|)
      until Δ < θ

III. THE FEATURE SELECTION PROBLEM FOR RL

The goal of feature selection is to improve the learner's
