Sentiment Classification on Customer Feedback Data: Noisy Data, Large Feature Vectors, and the Role of Linguistic Analysis

Michael Gamon
Microsoft Research
One Microsoft Way
Redmond, WA 98052
[email protected]

Abstract

We demonstrate that it is possible to perform automatic sentiment classification in the very noisy domain of customer feedback data. We show that by using large feature vectors in combination with feature reduction, we can train linear support vector machines that achieve high classification accuracy on data that present classification challenges even for a human annotator. We also show that, surprisingly, the addition of deep linguistic analysis features to a set of surface-level word n-gram features contributes consistently to classification accuracy in this domain.

1 Introduction

Software companies typically receive high volumes of electronic customer feedback every day, some of it in the form of elicited surveys, some of it in the form of unsolicited comments, suggestions, and criticism. In order to react to that feedback quickly, and to direct it to the appropriate channels inside the company, it is desirable to provide intelligent and automatic classification of the feedback along two dimensions:

What is the feedback about?
Is the feedback positive or negative?

The first question is addressed by text mining tools. Automatic sentiment classification addresses the second question. Text mining tools can help make large quantities of feedback more manageable by splitting them into clusters based on keywords or topics. Sentiment analysis, which is the focus of this paper, adds a second dimension to the analysis: it makes it possible to focus the text mining on areas in need of improvement (negative feedback) or on areas of success (positive feedback).

Sentiment classification is a special case of text categorization, where the criterion of classification is the attitude expressed in the text, rather than the "content" or topic. Faced with the task of having to automatically classify a piece of text as expressing positive or negative sentiment, a reasonable first approach would consist of paying special attention to words that tend to express a positive or negative attitude. Pang et al. (2002) have demonstrated, however, that this is not as straightforward as one may think, given that sentiment is often expressed in more subtle and indirect ways.

The literature on sentiment classification can be divided into approaches that rely on semantic resources, such as a sentiment or affect lexicon (Nasukawa and Yi 2003, Subasic and Huettner 2001) or a large-scale knowledge base (Liu et al. 2003), on the one hand, and approaches that try to learn patterns directly from tagged data, without additional resources (Dave et al. 2003, Pang et al. 2002), on the other. Much research is also being directed at acquiring affect lexica automatically (Turney 2002, Turney and Littman 2002).

There is also a considerable amount of research on classification of text as "subjective" or "objective" (Wiebe et al. 2001, Yu and Hatzivassiloglou 2003), a task that is not relevant for the processing of very brief pieces of direct customer feedback.

In many studies, research on sentiment classification is conducted on review-type data, such as movie or restaurant reviews. These data often consist of relatively well-formed, coherent, and at least paragraph-length pieces of text. The results we present in this paper are based on customer feedback data from web surveys, which, as we will discuss below, are particularly noisy and fragmentary.

For our purpose of automatic classification of customer feedback, we decided to use machine learning directly on the customer feedback, instead of relying on additional semantic resources of any kind. This decision was motivated by practical considerations: first, the customer feedback data we are facing are often very short and sometimes very incoherent, which makes it seem unlikely that a detailed semantic resource would be of particular help; second, we believe that an appropriately chosen machine-learning technique will be able to draw its own conclusions from the distribution of lexical elements in a piece of feedback.

We conducted our sentiment classification experiments using support vector machines. Support vector machines (SVMs) have a good track record in text classification (Joachims 1998, Dumais et al. 1998): they can be trained using a large number of features, and both training and classification for linear SVMs are fast with optimized learning algorithms. For our experiments we use John Platt's Sequential Minimal Optimization (SMO) tool (Platt 1999). In the absence of any evidence that would suggest a more complicated kernel function, such as a polynomial or an RBF kernel, we decided to train linear SVMs for our classification task (see also the results in Joachims 1998).

The procedure, as is standard in supervised machine learning tasks, consists of training a classifier on pretagged training data and then evaluating its performance on a held-out set of test data.

The two main questions we wanted to assess with our experiments are:

1. Which features and feature sets are relevant for sentiment classification on customer feedback?
2. What is the maximum classification accuracy that can be achieved on this data set?

2 Data

Our data consists of 11399 feedback items from a Global Support Services survey and 29485 feedback items from a Knowledge Base survey, for a total of 40884 items. We excluded pieces of feedback without any verbatim from the data.

Along with the verbatim, customers provided a numeric satisfaction score on a scale from 1 (not satisfied) to 4 (very satisfied) for each piece of feedback. The numeric score served as the target tag in our experiments, making it unnecessary to perform any costly human evaluation and tagging. The distribution of items across numerical scores is given in Table 1.

Category               1      2      3      4
Number of documents    8596   9060   14573  8655

Table 1: Number of documents in each satisfaction category

The data is extremely noisy: a human evaluation of a random set of 200 pieces of feedback could assign a positive or negative sentiment to only 117 items (58.5%); the rest were either balanced (16 cases, or 8%), expressed no sentiment (50 cases, or 25%), or were too incoherent or random to be classified (17 cases, or 8.5%). Amongst the 117 classifiable cases, the human evaluator assigned the category "positive" to 26 cases (22.2%) and the category "negative" to 91 cases (77.8%).

After automatic sentence breaking into one sentence per line, the individual files contained an average of 2.56 lines. For our experiments we split the data 90/10 into training and held-out test data, and we performed 10-fold cross validation for each of the experiments reported in this paper.

For each of the various classification tasks, we trained a linear SVM using the standard settings of the SMO tool, and calculated accuracy, precision, and recall numbers on the held-out test data, averaging them across the 10 folds.
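To make this setup concrete, the following is a minimal sketch of an equivalent pipeline using scikit-learn's LinearSVC in place of the original SMO tool; the toy feedback strings and variable names are illustrative placeholders, not the paper's actual corpus.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.model_selection import cross_validate
    from sklearn.svm import LinearSVC

    # Toy stand-ins for the verbatim feedback and its 1-4 satisfaction
    # scores, which serve directly as target tags.
    feedback_texts = ["the product keeps crashing", "very helpful article"] * 10
    satisfaction_scores = [1, 4] * 10

    # Binary presence/absence word features (cf. Section 3.1).
    X = CountVectorizer(binary=True).fit_transform(feedback_texts)

    # A linear SVM, mirroring the paper's kernel choice; 10-fold cross
    # validation reproduces the repeated 90/10 train/test division, and
    # accuracy, precision, and recall are averaged across the folds.
    results = cross_validate(
        LinearSVC(), X, satisfaction_scores, cv=10,
        scoring=("accuracy", "precision_macro", "recall_macro"),
    )
    print({k: v.mean() for k, v in results.items() if k.startswith("test_")})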
3 Features

3.1 Feature vectors

We experimented with a range of different feature sets. Most importantly, we wanted to establish whether we would gain any significant advantage in the sentiment classification task by using features based on deep linguistic analysis, or whether surface-based features would suffice. Previous results in authorship attribution and style classification experiments had indicated that linguistic features contribute to the overall accuracy of the classifiers, although our null hypothesis, based on a review of the relevant literature for sentiment classification, was that we would not gain much by using these features. The surface features we used were lemma unigrams, lemma bigrams, and lemma trigrams.

For the linguistic features, we performed a linguistic analysis of the data with the NLPWin natural language processing system developed in Microsoft Research (an overview can be found in Heidorn 2000). NLPWin provides us with a phrase structure tree and a logical form for each string, from which we can extract an additional set of features:

• part-of-speech trigrams
• constituent-specific length measures (length of sentence, clauses, adverbial/adjectival phrases, and noun phrases)
• constituent structure in the form of context-free phrase structure patterns for each constituent in a parse tree, for example DECL::NP VERB NP (a declarative sentence consisting of a noun phrase, a verbal head, and a second noun phrase)
• part-of-speech information coupled with semantic relations (e.g. "Verb - Subject - Noun", indicating a nominal subject to a verbal predicate)
• logical form features provided by NLPWin, such as transitivity of a predicate, tense information, etc.

For each of these features, except for the length features, we extract a binary value corresponding to the presence or absence of that feature in a given document. Using binary values for presence/absence, as opposed to frequency values, is motivated by the rather extreme brevity of these documents.
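As an illustration of the surface features, here is a small sketch of n-gram presence extraction. It uses lowercased surface tokens where the paper uses NLPWin lemmas, so the lemmatization step is only approximated.

    def ngram_presence(tokens, max_n=3):
        # Collect every unigram, bigram, and trigram in the token
        # sequence; each n-gram becomes a binary feature that is 1 if
        # present in the document and 0 otherwise.
        grams = set()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                grams.add(" ".join(tokens[i:i + n]))
        return grams

    # The paper extracts n-grams over lemmas produced by NLPWin; plain
    # whitespace tokenization is used here only as a stand-in.
    features = ngram_presence("the download page would not load".lower().split())

Recording only presence, and discarding frequency, reflects the brevity argument above: in documents averaging 2.56 sentences, counts carry little extra information.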
3.2 Feature reduction

Feature reduction is an important part of optimizing the performance of a (linear) classifier: as a starting point, the feature vector is reduced to a size that does not exceed the number of training cases. Further reduction of vector size can lead to more improvements if the features are noisy or redundant. Reducing the number of features in the feature vector can be done in two different ways:

4 Results

Given the four different rankings associated by users with their feedback, we experimented with two distinct classification scenarios:

1. classification of documents as belonging to category 1 versus category 4
2. classification of documents as belonging to categories 1 or 2 on the one hand, and 3 or 4 on the other

Two additional scenarios can be envisioned. In the first, two classifiers ("1 versus 2/3/4" and "4 versus 1/2/3") would be trained and their votes combined, either through weighted probability voting or through other classifier combination methods (Dietterich 1997).
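The following sketch shows how the two scenarios map the four satisfaction categories onto binary training labels, together with one hypothetical instantiation of the envisioned weighted probability vote; the paper only outlines that combination, so the rule and weights below are placeholders, not its actual method.

    def scenario_one(items):
        # "1 versus 4": keep only documents from the extreme categories.
        return [(text, "neg" if score == 1 else "pos")
                for text, score in items if score in (1, 4)]

    def scenario_two(items):
        # "{1, 2} versus {3, 4}": all documents are used.
        return [(text, "neg" if score <= 2 else "pos")
                for text, score in items]

    def weighted_vote(p_neg, p_pos, w_neg=0.5, w_pos=0.5):
        # One hypothetical combination rule for the envisioned
        # "1 vs 2/3/4" and "4 vs 1/2/3" classifiers: compare their
        # weighted probability outputs (the weights are placeholders).
        return "pos" if w_pos * p_pos >= w_neg * p_neg else "neg"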
