Dynamic Feature Scaling for Online Learning of Binary Classifiers

Danushka Bollegala
University of Liverpool, Liverpool, United Kingdom

May 30, 2017

Abstract

Scaling feature values is an important step in numerous machine learning tasks. Different features can have different value ranges, and some form of feature scaling is often required in order to learn an accurate classifier. However, feature scaling is typically conducted as a preprocessing task prior to learning. This is problematic in an online setting for two reasons. First, it might not be possible to accurately determine the value range of a feature at the initial stages of learning, when we have observed only a handful of training instances. Second, the distribution of data can change over time, which renders obsolete any feature scaling performed in a preprocessing step. We propose a simple but effective method to dynamically scale features at train time, thereby quickly adapting to any changes in the data stream. We compare the proposed dynamic feature scaling method against more complex methods for estimating scaling parameters using several benchmark datasets for classification. Our proposed feature scaling method consistently outperforms more complex methods on all of the benchmark datasets and improves the classification accuracy of a state-of-the-art online classification algorithm.

1 Introduction

Machine learning algorithms require train and test instances to be represented using a set of features. For example, in supervised document classification [9], a document is often represented as a vector of its words, and the value of a feature is set to the number of times the word corresponding to the feature occurs in that document. However, different features occupy different value ranges, and one must often scale the feature values before any supervised classifier is trained. In our example of document classification, there are both highly frequent words (e.g. stop words) and extremely rare words. Often, the relative difference of a feature value is more informative than its absolute value. Consequently, feature scaling has been shown to improve the performance of classification algorithms.

Typically, feature values are scaled to a standard range in a preprocessing step before the scaled features are used in the subsequent learning task (see the sketch below). However, this preprocessing approach to feature value scaling is problematic for several reasons. First, feature scaling is often done in an unsupervised manner, without consulting the labels assigned to the training instances. Although this is the only option in unsupervised learning tasks such as document clustering, in supervised learning tasks such as document classification, where we do have access to the label information, we can use the labels for feature scaling as well. Second, it is not possible to perform feature scaling as a preprocessing step in the one-pass online learning setting, in which we are allowed to traverse the set of training instances only once. Learning from extremely large datasets such as Twitter streams, or Web-scale learning, calls for algorithms that require only a single pass over the set of training instances. In such scenarios it is not possible to scale the feature values beforehand using statistics computed from the entire training set. Third, even if we pre-compute scaling parameters for a feature, those values might become obsolete in an online learning setting in which the statistical properties of the training instances vary over time. For example, a Twitter text stream regarding a particular keyword might change over time, and scaling factors computed using old data might not be appropriate for the new data.
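As a point of contrast with the dynamic approach developed in this paper, the conventional preprocessing step can be pictured as follows. This is a minimal sketch assuming z-score standardisation, one common choice that the paper does not specifically prescribe, and the function name batch_standardise is ours. Note that the mean and standard deviation are computed once from the full training matrix, which is exactly what the three problems above call into question.

import numpy as np

def batch_standardise(X):
    """Conventional preprocessing-time scaling (hypothetical sketch):
    requires the entire training matrix X (n_instances x n_features)
    up front, which a one-pass stream never provides."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std[std == 0.0] = 1.0  # avoid division by zero for constant features
    return (X - mean) / std

In an online setting there is no such complete matrix X to begin with; the scaling parameters must instead be estimated incrementally as instances arrive.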
We study the problem of dynamically scaling feature values at run time for online learning. The term dynamic feature scaling is used in this paper to refer to the practice of scaling feature values at run time, as opposed to performing feature scaling as a preprocessing step prior to learning. We focus on binary classifiers as a specific example. However, we note that the proposed method can be easily extended to multi-class classifiers; as shown later in our experiments, we evaluate the proposed feature scaling methods on both binary and multi-class classification datasets. We propose two main approaches for dynamic feature scaling in this paper: (a) Unsupervised Dynamic Feature Scaling (Section 3), in which we do not consider the label information assigned to the training instances for feature scaling, and (b) Supervised Dynamic Feature Scaling (Section 4), in which we do consider the label information assigned to the training instances for feature scaling.

All algorithms we propose in this paper can be trained under the one-pass online learning (OPOL) setting, where only a single training instance is provided at a time and only the scale parameters and feature weights are stored in memory (a concrete sketch of this setting is given at the end of Section 1.1). This enables the proposed method to (a) efficiently adapt to the varying statistics in the data stream, (b) compute the optimal feature scales such that the likelihood of the training data under the trained model is maximised, and (c) train from large datasets where batch learning is impossible because of memory requirements.

We evaluate the proposed methods in combination with different online learning algorithms using nine benchmark datasets for binary and multi-class classification. Our experimental results show that, interestingly, the much simpler unsupervised dynamic feature scaling method consistently improves all of the online binary classification algorithms we compare, including the state-of-the-art classifier of [9].

1.1 Potential Applications of OPOL

OPOL algorithms in general, and the supervised/unsupervised feature scaling methods we study in this paper in particular, can be applied to various problems and under different configurations. Next, we describe some of those applications.

Learning from data streams: Data streams are one of the main sources of data in machine learning and data mining. For example, we might have a sensor that continuously monitors a particular variable, such as the temperature in a room, and transmits the readings to a database. Here, what we have is a continuous stream of (possibly) real numbers flowing in the form of a data stream. Other examples of data streams include the timeline of a Twitter user, stock prices, foreign currency exchange rates, etc. We would like to learn to predict a particular event using the given stream. For example, in the case of our temperature sensor, this could be predicting whether there is a danger of an explosion in a room that contains highly flammable goods. OPOL is particularly relevant in such stream learning settings because we cannot wait until we have collected the entire dataset to perform feature scaling. The data stream flows in continuously without interruption. Therefore, we must perform any scaling of features dynamically.

Domain adaptation in data streams: In typical supervised machine learning, we assume that the train and test data are independently and identically distributed (i.i.d.) samples from the same distribution. However, in domain adaptation [6] the test data is assumed to come from a different distribution than the train data. Under such learning conditions, the parameters we learn from the train data might no longer be suitable for the test data. For example, in cross-domain sentiment classification [4], we are required to learn a sentiment classifier from the labelled and unlabelled data in a source domain, such as a set of reviews on books, and apply the trained classification model to classify sentiment in a different target domain, such as reviews on movies. If we simply apply a model trained on a source domain to a different target domain without any adaptation, the performance is usually poor. Dynamic feature scaling is useful in the domain adaptation setting because we might not have sufficient data from the target domain beforehand to compute scales for features that appear only in the target domain, and hence are not seen during training.

BigData: If the training dataset is extremely large, as in the so-called BigData learning settings, then even if we have the entire dataset stored locally prior to learning, we might not be able to traverse the dataset multiple times because of the time and/or space complexity of the learning algorithm. In such situations, our only option is to use OPOL. We will have to scale the features simultaneously as we perform training, because given the size of the dataset we might not be able to run two passes over it: one for scaling features and another for online learning.

Different learning settings: Although we discuss feature scaling in online binary classification settings, the feature scaling methods discussed in this paper can be easily extended to a wide range of learning settings such as multi-class classification, regression, and learning to rank [5, 28, 29]. For example, we show the performance of different feature scaling methods when applied to binary and multi-class classification datasets in our experiments later in Section 5. In particular, the unsupervised feature scaling method we describe can be applied with any classification algorithm, giving a diverse range of combinations.
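To make the OPOL setting concrete, the following Python sketch shows one plausible realisation of unsupervised dynamic feature scaling: each incoming instance is standardised with per-feature running mean and variance estimates (maintained by Welford's method) and then used for a single stochastic gradient step of a logistic-loss binary classifier. The names OnlineScaler and train_one_pass, and the pairing of running standardisation with logistic regression, are illustrative assumptions rather than the paper's exact method, which is defined in its Sections 3 and 4.

import math
from collections import defaultdict

class OnlineScaler:
    """Per-feature running mean and variance (Welford's method) for
    one-pass standardisation. Hypothetical sketch, not the paper's
    exact scaling formulation."""

    def __init__(self):
        self.cnt = defaultdict(int)     # observations per feature
        self.mean = defaultdict(float)  # running mean per feature
        self.m2 = defaultdict(float)    # running sum of squared deviations

    def update(self, x):
        """Fold one instance x (dict: feature -> value) into the statistics.
        Features absent from x are treated as missing, a simplification."""
        for f, v in x.items():
            self.cnt[f] += 1
            delta = v - self.mean[f]
            self.mean[f] += delta / self.cnt[f]
            self.m2[f] += delta * (v - self.mean[f])

    def scale(self, x):
        """Standardise x using the statistics accumulated so far."""
        z = {}
        for f, v in x.items():
            c = self.cnt[f]
            sd = math.sqrt(self.m2[f] / c) if c > 0 else 0.0
            z[f] = (v - self.mean[f]) / sd if sd > 0.0 else 0.0
        return z

def train_one_pass(stream, eta=0.1):
    """One-pass logistic-regression SGD over a stream of (x, y) pairs,
    y in {0, 1}. Only the scale parameters and the weight vector are
    held in memory, as the OPOL setting requires."""
    scaler = OnlineScaler()
    w = defaultdict(float)
    for x, y in stream:
        scaler.update(x)     # adapt the scale parameters first ...
        z = scaler.scale(x)  # ... then standardise the instance
        s = sum(w[f] * v for f, v in z.items())
        s = max(-30.0, min(30.0, s))    # guard against exp overflow
        p = 1.0 / (1.0 + math.exp(-s))  # predicted P(y = 1 | x)
        for f, v in z.items():
            w[f] += eta * (y - p) * v   # gradient step on the log-likelihood
    return w

For example, passing a list of (feature-dict, label) pairs such as [({"buy": 3.0}, 1), ({"buy": 1.0}, 0), ...] through train_one_pass completes a single traversal and returns the learned weights. Because the statistics are updated before each instance is scaled, the scales keep adapting as the data distribution drifts, which is precisely what a fixed preprocessing step cannot offer.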
2 Related Work

Online learning has received much attention lately because of the necessity to learn from large training datasets, such as query logs in a web search engine [26], web-scale document classification or clustering [23], and sentiment analysis on social media [18, 14]. Online learning toolkits that can efficiently learn from large datasets, such as Vowpal Wabbit and OLL (Online Learning Library), are publicly available. Online learning approaches are more attractive than their batch learning counterparts when the training data involved is massive, for two main reasons.
