
Efficient Wrapper Feature Selection using Autoencoder and Model Based Elimination

Sharan Ramjee, Student Member, IEEE, and Aly El Gamal, Senior Member, IEEE

S. Ramjee and A. El Gamal are with the Department of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA. Email: {sramjee, elgamala}@purdue.edu.

Abstract—We propose a computationally efficient wrapper feature selection method - called Autoencoder and Model Based Elimination of features using Relevance and Redundancy scores (AMBER) - that uses a single ranker model along with autoencoders to perform greedy backward elimination of features. The ranker model is used to prioritize the removal of features that are not critical to the classification task, while the autoencoders are used to prioritize the elimination of correlated features. We demonstrate the superior feature selection ability of AMBER on 4 well known datasets corresponding to different domain applications via comparing the accuracies with other computationally efficient state-of-the-art feature selection techniques.

I. INTRODUCTION

Feature selection is a preprocessing technique that ranks the significance of features to eliminate features that are insignificant to the task at hand. As examined by [1], it is a powerful tool to alleviate the curse of dimensionality, reduce training time and increase the accuracy of learning algorithms, as well as to improve data comprehensibility. For classification problems, [2] divides feature selection problems into two types: (a) given a fixed $k \leq d$, where $d$ is the total number of features, find the $k$ features that lead to the least classification error, and (b) given a maximum expected classification error, find the smallest possible $k$. In this paper, we will be focusing on problems of type (a). [2] formalizes this type of feature selection problem as follows. Given a function $y = f(x, \alpha)$, find a mapping of data $x \mapsto (x * \sigma)$, $\sigma \in \{0, 1\}^d$, along with the parameters $\alpha$ for the function $f$ that lead to the minimization of

$$\tau(\sigma, \alpha) = \int V(y, f((x * \sigma), \alpha)) \, dP(x, y), \quad (1)$$

subject to $\|\sigma\|_0 = k$, where the distribution $P(x, y)$ - which determines how samples are generated - is unknown and can be inferred only from the training set, $x * \sigma = (x_1\sigma_1, \ldots, x_d\sigma_d)$ is an elementwise product, $V(\cdot, \cdot)$ is a loss function, and $\|\cdot\|_0$ is the $L_0$-norm.

Feature selection algorithms are of three types: Filter, Wrapper, and Embedded methods. Filters rely on intrinsic characteristics of the data to measure feature importance, while wrappers iteratively measure the learning performance of a classifier to rank feature importance. [3] asserts that although filters are more computationally efficient than wrappers, the features selected by filters are not as good. Embedded methods use the structure of learning algorithms to embed feature selection into the underlying model, reconciling the efficiency advantage of filters with the learning algorithm interaction advantage of wrappers. As examined by [4], embedded methods are model dependent because they perform feature selection during the training of the learning algorithm. This serves as a motivation for the use of wrapper methods, which are not model dependent. [2] defines wrapper methods as an exploration of the feature space, where the saliency of subsets of features is ranked using the estimated accuracy of a learning algorithm. Hence, $\tau(\sigma, \alpha)$ in (1) can be approximated by minimizing

$$\tau_{wrap}(\sigma, \alpha) = \min_{\sigma} \tau_{alg}(\sigma), \quad (2)$$

subject to $\sigma \in \{0, 1\}^d$, where $\tau_{alg}$ is a classifier having estimates of $\alpha$. Wrapper methods can further be divided into three types: Exhaustive Search Wrappers, Random Search Wrappers, and Heuristic Search Wrappers. We will focus on Heuristic Search Wrappers, which iteratively select or eliminate one feature at each iteration, because unlike Exhaustive Search Wrappers they are more computationally efficient, and unlike Random Search Wrappers they have deterministic guarantees on the set of selected salient features, as illustrated in [5].

A. Motivation

1) Relevance and Redundancy: We hypothesize that the saliency of features is determined by two factors: Relevance and Redundancy. Irrelevant features are insignificant because their direct removal does not result in a drop in classification accuracy, while redundant features are insignificant because they are linearly or non-linearly dependent on other features and can be inferred - or approximated - from them as long as these other features are not removed. As detailed by [6], one does not necessarily imply the other. Filter methods are better at identifying redundant features while wrapper methods are better at identifying irrelevant features, and this highlights the power of embedded methods, as they utilize aspects of both in feature selection, as mentioned in [7]. Since most wrapper methods do not take advantage of filter method based identification of redundant features, there is a need to incorporate a filter based technique for identifying redundant features into wrapper methods, which we address using autoencoders.

2) Training the Classifier only once: Wrapper methods often have a significantly high computational complexity because the classifier needs to be trained for every considered feature set at every iteration. For greedy backward elimination wrappers, the removal of one out of $d$ features requires removing each feature separately, training the classifier with the remaining $d - 1$ features, and testing its performance on the validation set. The feature whose removal results in the highest classification accuracy is then removed. This is the procedure followed by most backward feature selection algorithms, such as the Recursive Feature Elimination (RFE) method proposed by [8]. For iterative greedy elimination of $k$ features from a set of $d$ features, the classifier has to be trained $\sum_{i=1}^{k}(d - i + 1)$ times, which poses a practical limitation when the number of features is large. Also, the saliency of the selected features is governed by how good the classifier that ranks the features is, and as such, we need to use state-of-the-art classifiers for ranking the features (CNNs for image data, etc.). These models are often complex and thus consume a lot of training time, which implies a trade-off between speed and the saliency of selected features. We address this issue by training the feature ranker model only once.
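To make the retraining cost in the preceding paragraph concrete, the following is a minimal sketch of a naive greedy backward elimination wrapper in the style of RFE. It is illustrative only - not the authors' implementation - and assumes a scikit-learn classifier and dataset; the function name naive_backward_elimination and all hyperparameters are hypothetical.

```python
# Illustrative sketch only (not the authors' code): a naive greedy backward
# elimination wrapper, showing why the classifier must be retrained
# sum_{i=1..k} (d - i + 1) times to eliminate k out of d features.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def naive_backward_elimination(X_tr, y_tr, X_va, y_va, k):
    """Greedily remove k features, retraining the classifier for every candidate removal."""
    remaining = list(range(X_tr.shape[1]))
    for _ in range(k):
        best_acc, worst_feature = -1.0, None
        for f in remaining:  # one full retraining per surviving feature
            subset = [j for j in remaining if j != f]
            clf = LogisticRegression(max_iter=1000).fit(X_tr[:, subset], y_tr)
            acc = clf.score(X_va[:, subset], y_va)
            if acc > best_acc:
                best_acc, worst_feature = acc, f
        remaining.remove(worst_feature)  # drop the feature whose removal hurts accuracy least
    return remaining


X, y = load_digits(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
kept = naive_backward_elimination(X_tr, y_tr, X_va, y_va, k=4)
print(f"{len(kept)} features kept after 4 eliminations")
```

Even in this small example (d = 64 features of the digits dataset, k = 4 eliminations), the classifier is trained 64 + 63 + 62 + 61 = 250 times; this is the overhead that training the ranker model only once is meant to avoid.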
II. STATE OF THE ART

We now describe top-notch efficient feature selection methods that we will be comparing our proposed method to. With the exception of FQI, the implementations of these methods can be found in the scikit-feature package created by [3].

Fisher Score encourages selection of features that have similar values within the same class and distinct values across different classes. A precise definition is available in [9].

Conditional Mutual Information Maximization (CMIM) is a fast feature selection method proposed in [10] and [11] that iteratively selects features while maximizing the empirical Shannon mutual information function between the feature being selected and class labels, given already selected features. A precise definition is available in [3].

Efficient and Robust Feature Selection (RFS) is an efficient feature selection method proposed by [12] that exploits the noise robustness property of the joint $\ell_{2,1}$-norm loss function, by applying the $\ell_{2,1}$-norm minimization on both the loss function and its associated regularization function. A precise definition is available in [3]. The value of the regularization coefficient for our experiments was chosen by performing RFS on a wide range of values and picking the value that led to the highest accuracy on the validation set.

Feature Quality Index (FQI) is a feature selection method [...]

[...] by [15], [16], and [17]. Similar to FQI, we measure the relevance of each feature by setting the input to the neuron corresponding to that feature to 0. This essentially means that the input neuron is dead because all the weights/synapses from that neuron to the next layer will not have an impact on the output of the neural network. Since more salient features possess weights of higher magnitude, these weights influence the output to a greater extent and setting their values to 0 in the input will result in a greater loss in the output layer. This is the basis of the Weight Based Analysis feature selection methods outlined by [18]. We further note that we normalize the training set before training by setting the mean of each feature to 0 and the variance to 1, so that our simulation of feature removal is effectively setting the feature to its mean value for all training examples. To summarize, the pre-trained neural network ranker model prioritizes the removal of features that are irrelevant to the classification task by simulating the removal of each feature. Features whose removal results in a lower loss are less relevant. We will refer to the loss value of this ranker model as a feature's Relevance Score.
B. Autoencoders Reveal Non-Linear Correlations

The weights connected to less salient features can possess high magnitudes when these features are redundant in the presence of other salient features, as described in Sec. I-A1. Hence, we use a filter based technique that is independent of a learning algorithm to detect these redundant features. We experimented with methods like PCA, as detailed by [19], and correlation coefficients, as detailed by [20], but these methods revealed only linear correlations in the data. We hence introduced autoencoders into the proposed method because they reveal non-linear correlations, as examined by [21], [22], and [23]. To eliminate one feature from a set of $k$ features, we train the autoencoder with one dense hidden layer consisting of $k - 1$ neurons using the normalized training set. We note that this hidden layer can also be convolutional, LSTM, or of other types, depending on the data we are dealing with. To evaluate a feature, we set its corresponding values in the training set to 0 and pass the set into the autoencoder.
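The excerpt ends before stating how the resulting loss is used, so the sketch below stops at computing a per-feature reconstruction loss: an autoencoder with a single dense hidden layer of $k - 1$ neurons is trained on the normalized set, and each feature is evaluated by zeroing its column and passing the set through. Reconstructing against the unmasked data, the layer choices, and the function name autoencoder_feature_losses are assumptions of this sketch, not details taken from the paper.

```python
# Illustrative sketch: single-hidden-layer autoencoder with k - 1 neurons,
# probing each feature by zeroing its (standardized) values. Reconstructing
# against the original, unmasked data is an assumption of this sketch.
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import StandardScaler


def autoencoder_feature_losses(X_train, epochs=50):
    X = StandardScaler().fit_transform(X_train)  # normalized training set
    k = X.shape[1]
    autoencoder = tf.keras.Sequential([
        tf.keras.layers.Dense(k - 1, activation="relu", input_shape=(k,)),  # k - 1 hidden neurons
        tf.keras.layers.Dense(k),  # linear output layer reconstructs all k features
    ])
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(X, X, epochs=epochs, verbose=0)

    losses = np.zeros(k)
    for j in range(k):               # evaluate feature j
        X_masked = X.copy()
        X_masked[:, j] = 0.0         # zero out feature j's values
        losses[j] = autoencoder.evaluate(X_masked, X, verbose=0)  # reconstruction loss
    return losses
```

Intuitively, a feature whose zeroing leaves the reconstruction loss low can be inferred from the remaining features, which is redundancy in the sense of Sec. I-A1; this is the kind of non-linear correlation the autoencoder is meant to expose.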