Concrete Autoencoders: Differentiable Feature Selection and Reconstruction

Abubakar Abid*¹, Muhammed Fatih Balin*², James Zou¹ ³ ⁴

*Equal contribution. ¹Department of Electrical Engineering, Stanford University, Stanford, United States. ²Department of Computer Engineering, Bogazici University, Istanbul, Turkey. ³Department of Biomedical Data Sciences, Stanford University, Stanford, United States. ⁴Chan-Zuckerberg Biohub, San Francisco, United States. Correspondence to: Abubakar Abid <[email protected]>.

Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s).

Abstract

We introduce the concrete autoencoder, an end-to-end differentiable method for global feature selection, which efficiently identifies a subset of the most informative features and simultaneously learns a neural network to reconstruct the input data from the selected features. Our method is unsupervised, and is based on using a concrete selector layer as the encoder and a standard neural network as the decoder. During the training phase, the temperature of the concrete selector layer is gradually decreased, which encourages a user-specified number of discrete features to be learned; at test time, the selected features can be used with the decoder network to reconstruct the remaining input features. We evaluate concrete autoencoders on a variety of datasets, where they significantly outperform state-of-the-art methods for feature selection and data reconstruction. In particular, on a large-scale gene expression dataset, the concrete autoencoder selects a small subset of genes whose expression levels can be used to impute the expression levels of the remaining genes; in doing so, it improves on the current widely-used expert-curated L1000 landmark genes, potentially reducing measurement costs by 20%. The concrete autoencoder can be implemented by adding just a few lines of code to a standard autoencoder, and the code for the algorithm and experiments is publicly available.
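The abstract notes that the concrete autoencoder can be implemented by adding just a few lines of code to a standard autoencoder. As a rough illustration of what the concrete selector layer computes, the sketch below implements one forward pass of the Concrete (Gumbel-softmax) relaxation in plain NumPy. It is a minimal sketch, not the authors' reference implementation (linked in Section 1); the function name `sample_concrete`, the shapes, and the fixed temperature are assumptions chosen for this example, and in the actual method the logits are trained end-to-end with a decoder while the temperature is annealed.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_concrete(logits, temperature):
    """Sample a relaxed one-hot selection vector for each selector neuron.

    logits: (k, d) learnable parameters, one row per feature to select.
    Returns a (k, d) matrix whose rows approach one-hot vectors as the
    temperature goes to 0 (Maddison et al., 2016).
    """
    u = rng.uniform(low=1e-9, high=1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))                      # Gumbel(0, 1) noise
    scores = (logits + gumbel) / temperature
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    expd = np.exp(scores)
    return expd / expd.sum(axis=1, keepdims=True)     # row-wise softmax

# Selecting k = 20 of d = 784 input features (e.g. MNIST pixels):
k, d = 20, 784
logits = rng.normal(scale=0.01, size=(k, d))          # trained by SGD in practice
x = rng.uniform(size=(32, d))                         # a dummy minibatch
selection = sample_concrete(logits, temperature=0.1)  # (k, d), rows near one-hot
encoded = x @ selection.T                             # (32, k): ~k selected features
```

At test time, the argmax of each row of the logits gives a discrete feature index, so the layer reduces to ordinary feature selection.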
1. Introduction

High-dimensional datasets often pose a challenge for machine learning algorithms. Feature selection methods aim to reduce the dimensionality of data by identifying the subset of relevant features in a dataset. A large number of algorithms have been proposed for feature selection in both supervised and unsupervised settings (Kohavi & John, 1997; Wang et al., 2013). These methods provide insight into the relationship between features in complex data and can simplify the process of training downstream models. Feature selection is particularly useful if the data with the full set of features is expensive or difficult to collect, as it can eliminate the need to measure irrelevant or redundant features.

As a motivating example, consider a dataset that consists of the expression of various genes across tissue samples. Such "omics" measurements are increasingly carried out to fully characterize biological samples at the individual and single-cell level (Bock et al., 2016; Huang et al., 2017). Yet it remains expensive to conduct all of the biological assays that are needed to characterize such samples. It is natural to ask: what are the most important features in this dataset? Are there redundant features that do not need to be measured?

The idea of measuring only a subset of biological features and then reconstructing the remaining features is not new; in fact, this line of thinking has motivated the identification of the landmark genes, also known as the L1000 genes, which are a small subset of the over 20,000 human genes. The expression levels of the L1000 genes are strongly correlated with the expression levels of other genes, and thus this subset can be measured cheaply and then used to impute the remaining gene expression levels (Lamb et al., 2006).

The problem of feature selection is different from the more general problem of dimensionality reduction. Standard techniques for dimensionality reduction, such as principal components analysis (Hotelling, 1933) and autoencoders (Hinton & Salakhutdinov, 2006), are able to represent data with fewer dimensions while preserving maximal variance or minimizing reconstruction loss. However, such methods do not select a set of features present in the original dataset, and thus cannot be directly used to eliminate redundant features and reduce experimental costs. We emphasize the unsupervised nature of this problem: specific prediction tasks may not be known ahead of time, and thus it is important to develop methods that can identify a subset of features while allowing imputation of the remaining features with minimal distortion for arbitrary downstream tasks.

In this paper, we propose a new end-to-end method to perform feature subset selection and imputation that leverages the power of deep autoencoders for discrete feature selection. Our method, the concrete autoencoder, uses a relaxation of the discrete distribution, the Concrete distribution (Maddison et al., 2016), and the reparametrization trick (Kingma & Welling, 2013) to differentiate through an arbitrary (e.g. reconstruction) loss and select input features to minimize this loss. A visual example of results from our method is shown in Fig. 1, where the concrete autoencoder selects the 20 most informative pixels (out of a total of 784) on the MNIST dataset and reconstructs the original images with high accuracy. We test concrete autoencoders on a variety of datasets, and find that they generally outperform state-of-the-art methods for feature selection and data reconstruction. We have made the code for our algorithm and experiments available in a public repository (https://github.com/mfbalin/Concrete-Autoencoders).

Figure 1. Demonstrating concrete autoencoders on the MNIST dataset. Here, we show the results of using concrete autoencoders to select, in an unsupervised manner, the k = 20 most informative pixels of images in the MNIST dataset. (a) The 20 selected features (out of the 784 pixels) are shown in white. (b) A sample of input images from the MNIST dataset, with the top 2 rows being training images and the bottom 3 rows being test images. (c) The same input images with only the selected features shown as colored dots. (d) The reconstructed versions of the images, using only the 20 selected pixels, show that the digit is generally identified correctly and that some stylistic features, such as the swirl in the digit "2", are captured. (cf. figures in Appendix A, which show the results of applying concrete autoencoders to individual classes of digits.)

Related Works

Feature selection methods are generally divided into three classes: filter, wrapper, and embedded methods. Filter methods rank the features of the dataset using statistical tests, such as the variance, in order to select features that individually maximize the desired criteria (Battiti, 1994; Duda et al., 2012). Filter methods generally do not consider interactions between features and do not provide a way to impute the remaining features; one must train a separate algorithm to do so. Wrapper methods select subsets of features that maximize an objective function optimized over the choice of input features using a black-box optimization method, such as sequential search or genetic algorithms (Kohavi & John, 1997; Goldberg & Holland, 1988). Since wrapper methods evaluate subsets of features, they are able to detect potential relationships between features, but usually at the expense of increased computation time. Embedded methods also consider relationships between features but generally do so more efficiently, as they incorporate feature selection into the learning phase of another algorithm. A well-known example is the Lasso (Tibshirani, 1996), which can be used to select features for regression by varying the strength of ℓ1 regularization.
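As a concrete illustration of the Lasso-based selection mentioned above (a toy sketch, not an experiment from the paper), the snippet below uses scikit-learn on synthetic data in which only two features carry signal; the dataset, the alpha value, and the planted coefficients are assumptions chosen for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
# Only features 3 and 17 actually drive the response in this toy example.
y = 2.0 * X[:, 3] - 1.5 * X[:, 17] + 0.1 * rng.normal(size=200)

# Stronger l1 regularization (larger alpha) shrinks more coefficients
# exactly to zero, leaving a sparse set of selected features.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(selected)  # expected to recover features {3, 17}
```

Increasing alpha zeroes out more coefficients, trading predictive fidelity for a smaller feature set; this is the regularization-based selection that the methods below generalize to the unsupervised setting.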
Many embedded unsupervised feature selection algorithms use regularization as the means to select discrete features. The popular UDFS algorithm (Yang et al., 2011) incorporates ℓ2,1 regularization on a set of weights applied to the input to select features most useful for local discriminative analysis. Similarly, the MCFS algorithm (Cai et al., 2010) uses regularization to solve for the features which preserve the clustering structure in the data. The recently-proposed AEFS algorithm (Han et al., 2017) also uses ℓ2,1 regularization on the weights of the encoder that maps the input data to a latent space, and optimizes these weights for their ability to reconstruct the original input.

In this paper, we select discrete features using an embedded method but without resorting to regularization. Rather, we use a relaxation of discrete random variables, the Concrete distribution (Maddison et al., 2016), which allows a low-variance estimate of the gradient through discrete stochastic nodes. By using Concrete random variables, we can directly parametrize the selection of input features and differentiate through the parameters. As we show through experiments in Section 4, this leads to lower reconstruction errors on real-world datasets compared to the aforementioned regularization-based methods. Additional related works can be found in Appendix B.

2. Problem Formulation

We now describe the problem of global feature selection. Although global feature selection is relevant for both unsupervised and supervised settings, [...]

3. Proposed Method

The concrete autoencoder is an adaptation of the standard autoencoder (Hinton & Salakhutdinov, 2006) [...]
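To complement the discussion of Concrete random variables above, the short demo below shows why annealing the temperature drives the relaxed selection weights toward a one-hot, i.e. discrete, choice of feature. It is illustrative only: the logits are hypothetical, and the stochastic Gumbel term of the full Concrete sample is omitted for clarity.

```python
import numpy as np

def relaxed_selection(logits, temperature):
    """Temperature-scaled softmax over candidate input features."""
    scores = logits / temperature
    scores -= scores.max()          # numerical stability
    expd = np.exp(scores)
    return expd / expd.sum()

logits = np.array([2.0, 1.5, 0.3, -1.0])  # hypothetical learned scores, d = 4

for t in [10.0, 1.0, 0.1, 0.01]:
    print(t, np.round(relaxed_selection(logits, t), 3))
# At high temperature the weights are nearly uniform (a soft mixture of
# features); as the temperature decreases, the weights concentrate on the
# argmax feature, so each selector neuron converges to a single discrete
# input feature, which can be read off directly at test time.
```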
