Fast Feature Selection for Linear Value Function Approximation


Bahram Behzadian, Soheil Gharatappeh, Marek Petrik
Department of Computer Science, University of New Hampshire, Durham, NH 03824 USA
{bahram, soheil, [email protected]}

Proceedings of the Twenty-Ninth International Conference on Automated Planning and Scheduling (ICAPS 2019)

Abstract

Linear value function approximation is a standard approach to solving reinforcement learning problems with large state spaces. Since designing good approximation features is difficult, automatic feature selection is an important research topic. We propose a new method for feature selection that is based on a low-rank factorization of the transition matrix. Our approach derives features directly from high-dimensional raw inputs, such as image data. The method is easy to implement using SVD, and our experiments show that it is faster and more stable than alternative methods.

1 Introduction

Reinforcement learning (RL) methods typically use value function approximation to solve problems with large state spaces (Sutton and Barto 1998; Szepesvári 2010). The approximation makes it possible to generalize from a small number of samples to the entire state space. Perhaps the most common methods for value function approximation are neural networks and linear methods. Neural networks offer unparalleled expressibility in complex problems, but linear methods remain popular due to their simplicity, interpretability, ease of use, and low sample and computational complexity.

This work focuses on batch reinforcement learning (Lange, Gabel, and Riedmiller 2012). In batch RL, all domain samples are provided in advance as a batch, and it is impossible or difficult to gather additional samples. This is common in many practical domains. In medical applications, for example, it is usually too dangerous and expensive to run additional tests, and in ecological applications, it may take an entire growing season to obtain a new batch of samples.

Overfitting is a particularly difficult challenge in practical deployments of batch RL. Detecting that the solution overfits the available data can be complex. Using a regular test set does not work in RL because of the difference between the sampling policy and the optimized policy. Also, off-policy policy evaluation remains difficult in large problems (Jiang and Li 2015). As a result, a solution that overfits the training batch is often discovered only after it has been deployed and real damage has been done.

With linear approximation, overfitting occurs more easily when too many features are used. In this paper, we present Fast Feature Selection (FFS), a new method that can effectively reduce the number of features in batch RL. To avoid confusion, we use the term raw features to refer to the natural features of a given problem. They could, for example, be the individual pixel values in video games or particular geographic observations in geospatial applications. Raw features are usually numerous, but each feature alone has a low predictive value. FFS constructs (rather than selects) a small set of useful features that are linear combinations of the provided raw features. The constructed features are designed to be used in concert with LSTD, LSPI, and other related batch RL methods.

FFS reduces the number of features by computing a low-rank approximation of the transition matrix after it is compressed using the available raw features. Low-rank matrix approximation and completion gained popularity from their use in collaborative filtering (Murphy 2012), but they have also been applied to reinforcement learning and other machine learning domains (Ong 2015; Cheng, Asamov, and Powell 2017; Rendle, Freudenthaler, and Schmidt-Thieme 2010). None of this prior work, however, computes a low-rank approximation of the compressed transition matrix.
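To make the low-rank construction concrete, the Python/NumPy sketch below illustrates one way the idea can be realized: compress an estimated transition model with the raw features and build k new features from the top singular vectors of the compressed model. The function name, the choice to decompose [r_A | P_A], and the assumption of an explicitly estimated model are illustrative only; this is not the algorithm as specified later in the paper.

```python
import numpy as np

def ffs_like_features(Phi_raw, P_hat, r_hat, k):
    """Illustrative sketch: build k features as linear combinations of raw features.

    Phi_raw : (n_states, d) raw feature matrix
    P_hat   : (n_states, n_states) estimated transition matrix for the policy
    r_hat   : (n_states,) estimated reward vector for the policy
    """
    # Compress the model with the raw features: P_A = Phi^+ P Phi, r_A = Phi^+ r
    Phi_pinv = np.linalg.pinv(Phi_raw)
    P_A = Phi_pinv @ P_hat @ Phi_raw           # (d, d)
    r_A = Phi_pinv @ r_hat                     # (d,)

    # Low-rank structure: top-k left singular vectors of [r_A | P_A]
    U, s, _ = np.linalg.svd(np.column_stack([r_A, P_A]), full_matrices=False)
    U_k = U[:, :k]                             # (d, k) linear map from raw to new features

    # New features are linear combinations of the raw features
    Phi_new = Phi_raw @ U_k                    # (n_states, k)
    return Phi_new, s                          # the singular values s can guide the choice of k
```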
Several feature selection methods for reducing overfitting in RL have been proposed previously, but none of them explicitly target problems with low-rank (compressed) transition probabilities. $\ell_1$ regularization, popularized by the LASSO, has been used successfully in reinforcement learning (Kolter and Ng 2009; Petrik et al. 2010; Le, Kumaraswamy, and White 2017). $\ell_1$ regularization assumes that only a few of the features are sufficient to obtain a good approximation. This is not a reasonable assumption when individual raw features are of a low quality.

Proto-value functions (Mahadevan and Maggioni 2007) use the spectral decomposition of the transition probability matrix or of a related random walk. Although the spectrum of a matrix is closely related to its rank, eigenvector-based methods provide weak approximation guarantees even when the majority of the eigenvalues are zero (Petrik 2007). BEBFs and Krylov methods are other techniques that work well when the characteristic polynomial of the transition probability matrix is of a small degree (Parr et al. 2007; Petrik 2007); this property is unrelated to the matrix rank.

The closest prior method to FFS is LFD (Song et al. 2016). LFD works by computing 1) a linear encoder that maps the raw features of a state to a small-dimensional space and 2) a linear decoder that maps the small-dimensional representation back to the raw features. While LFD was not introduced as a low-rank approximation technique, we show that, similarly to FFS, it introduces no additional error when the matrix of transition probabilities is low-rank. LFD, unfortunately, has several limitations. It involves solving a non-convex optimization problem, is difficult to analyze, and provides no guidance for deciding on the right number of features to use.

As the main contribution, this paper proposes and analyzes FFS both theoretically and empirically. We derive new bounds that relate the singular values of the transition probability matrix to the approximation error. As a secondary contribution, we provide a new interpretation of LFD as a type of low-rank approximation method. We argue that FFS improves on LFD in terms of providing fast and predictable solutions, similar or better practical performance, and guidance on how many features should be selected.

The remainder of the paper is organized as follows. Section 2 summarizes the relevant properties of linear value function approximation in Markov decision processes. Section 3 describes FFS and new bounds that relate singular values of the compressed transition probability matrix to the approximation error. Section 4 then compares FFS with other feature construction algorithms, and, finally, the empirical evaluation in Section 5 indicates that FFS is a promising feature selection method.

2 Linear Value Function Approximation

In this section, we summarize the background on linear value function approximation and feature construction. We consider a reinforcement learning problem formulated as a Markov decision process (MDP) with states $\mathcal{S}$, actions $\mathcal{A}$, transition probabilities $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$, and rewards $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ (Puterman 2005). The value of $P(s, a, s')$ denotes the probability of transitioning to state $s'$ after taking an action $a$ in a state $s$. The objective is to compute a stationary policy $\pi$ that maximizes the expected $\gamma$-discounted infinite-horizon return. It is well known that the value function $v^\pi$ for a policy $\pi$ must satisfy the Bellman optimality condition (e.g., Puterman (2005)):

$$v^\pi = r^\pi + \gamma P^\pi v^\pi, \qquad (1)$$

where $P^\pi$ and $r^\pi$ are the matrix of transition probabilities and the vector of rewards, respectively, for the policy $\pi$.

Linear value function approximation represents the value function as a linear combination of features, $v^\pi \approx \Phi w$, for some vector $w = (w_1, \ldots, w_k)$ of scalar weights that quantify the importance of features. Here, $\Phi$ is the feature matrix of dimensions $|\mathcal{S}| \times k$; the columns of this matrix are the features $\phi_i$.

Numerous algorithms for computing linear value approximation have been proposed (Sutton and Barto 1998; Lagoudakis and Parr 2003; Szepesvári 2010). We focus on fixed-point methods that compute the unique vector of weights $w^\pi_\Phi$ that satisfies the projected version of the Bellman equation (1):

$$w^\pi_\Phi = \Phi^+ \left( r^\pi + \gamma P^\pi \Phi w^\pi_\Phi \right), \qquad (2)$$

where $\Phi^+$ is the Moore-Penrose pseudo-inverse of $\Phi$, and $\Phi^+ = (\Phi^\top \Phi)^{-1} \Phi^\top$ when the columns of $\Phi$ are linearly independent (e.g., Golub and Van Loan (2013)). This equation follows by applying the orthogonal projection operator $\Phi (\Phi^\top \Phi)^{-1} \Phi^\top$ to both sides of (1).
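For concreteness, a minimal NumPy sketch of computing this fixed point follows. It assumes $P^\pi$ and $r^\pi$ are given explicitly, which is a simplification; in batch RL these quantities are estimated from samples (e.g., by LSTD). The function name is illustrative.

```python
import numpy as np

def projected_bellman_fixed_point(Phi, P, r, gamma):
    """Solve w = Phi^+ (r + gamma * P @ Phi @ w), i.e., equation (2)."""
    Phi_pinv = np.linalg.pinv(Phi)
    # Rearranged: (I - gamma * Phi^+ P Phi) w = Phi^+ r
    A = np.eye(Phi.shape[1]) - gamma * Phi_pinv @ P @ Phi
    b = Phi_pinv @ r
    return np.linalg.solve(A, b)
```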
The following insight will be important when describing the FFS method. The fixed-point solution to (2) can be interpreted as the value function of an MDP with a linearly compressed transition matrix $P^\pi_\Phi$ and reward vector $r^\pi_\Phi$ (Parr et al. 2008; Szepesvári 2010):

$$P^\pi_\Phi = (\Phi^\top \Phi)^{-1} \Phi^\top P^\pi \Phi = \Phi^+ P^\pi \Phi, \qquad r^\pi_\Phi = (\Phi^\top \Phi)^{-1} \Phi^\top r^\pi = \Phi^+ r^\pi. \qquad (3)$$

The weights $w^\pi_\Phi$ in (2) are equal to the value function of this compressed MDP. That is, $w^\pi_\Phi$ satisfies the Bellman equation for the compressed MDP:

$$w^\pi_\Phi = r^\pi_\Phi + \gamma P^\pi_\Phi w^\pi_\Phi. \qquad (4)$$

In order to construct good features, it is essential to be able to determine their quality in terms of whether they can express a good approximate value function. The performance loss of a policy computed using, for example, approximate policy iteration can be bounded as a function of the Bellman error (e.g., Williams and Baird (1993)). To motivate FFS, we use the following result, which shows that the Bellman error can be decomposed into the error in 1) the compressed rewards and 2) the compressed transition probabilities.

Theorem 1 (Song et al. 2016). Given a policy $\pi$ and features $\Phi$, the Bellman error of the value function $v = \Phi w^\pi_\Phi$ satisfies:

$$\mathrm{BE}_\Phi = \underbrace{(r^\pi - \Phi r^\pi_\Phi)}_{\Delta^\pi_r} + \gamma \underbrace{(P^\pi \Phi - \Phi P^\pi_\Phi)}_{\Delta^\pi_P} \, w^\pi_\Phi.$$

We seek to construct a basis that minimizes both $\|\Delta^\pi_r\|_2$ and $\|\Delta^\pi_P\|_2$. These terms can be used to bound the $\ell_2$ norm of the Bellman error as:

$$\|\mathrm{BE}_\Phi\|_2 \le \|\Delta^\pi_r\|_2 + \gamma \|\Delta^\pi_P\|_2 \|w^\pi_\Phi\|_2 \le \cdots \qquad (5)$$
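The decomposition in Theorem 1 and the bound in (5) can be checked numerically. The sketch below assumes explicit access to $P^\pi$ and $r^\pi$ and interprets $\|\Delta^\pi_P\|_2$ as the spectral norm, which is what makes the triangle-inequality step valid; function and variable names are illustrative.

```python
import numpy as np

def bellman_error_decomposition(Phi, P, r, gamma):
    """Illustrate Theorem 1 and the bound (5) for given features Phi and model (P, r)."""
    Phi_pinv = np.linalg.pinv(Phi)
    P_Phi = Phi_pinv @ P @ Phi                 # compressed transitions, eq. (3)
    r_Phi = Phi_pinv @ r                       # compressed rewards, eq. (3)

    # Value function of the compressed MDP, eq. (4): w = r_Phi + gamma * P_Phi @ w
    w = np.linalg.solve(np.eye(Phi.shape[1]) - gamma * P_Phi, r_Phi)

    # Bellman error decomposition from Theorem 1
    delta_r = r - Phi @ r_Phi                  # reward error term
    delta_P = P @ Phi - Phi @ P_Phi            # transition error term
    be = delta_r + gamma * delta_P @ w

    # Triangle inequality gives the bound in (5)
    lhs = np.linalg.norm(be)
    rhs = np.linalg.norm(delta_r) + gamma * np.linalg.norm(delta_P, 2) * np.linalg.norm(w)
    return lhs, rhs                            # lhs <= rhs always holds
```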
