Learning Word Vectors for Sentiment Analysis

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts
Stanford University, Stanford, CA 94305
[amaas, rdaly, ptpham, yuze, ang, cgpotts]@stanford.edu

Abstract

Unsupervised vector-based approaches to semantics can model rich lexical meanings, but they largely fail to capture sentiment information that is central to many word meanings and important for a wide range of NLP tasks. We present a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term–document information as well as rich sentiment content. The proposed model can leverage both continuous and multi-dimensional sentiment information as well as non-sentiment annotations. We instantiate the model to utilize the document-level sentiment polarity annotations present in many online documents (e.g. star ratings). We evaluate the model using small, widely used sentiment and subjectivity corpora and find it out-performs several previously introduced methods for sentiment classification. We also introduce a large dataset of movie reviews to serve as a more robust benchmark for work in this area.

1 Introduction

Word representations are a critical component of many natural language processing systems. It is common to represent words as indices in a vocabulary, but this fails to capture the rich relational structure of the lexicon. Vector-based models do much better in this regard. They encode continuous similarities between words as distance or angle between word vectors in a high-dimensional space. The general approach has proven useful in tasks such as word sense disambiguation, named entity recognition, part of speech tagging, and document retrieval (Turney and Pantel, 2010; Collobert and Weston, 2008; Turian et al., 2010).

In this paper, we present a model to capture both semantic and sentiment similarities among words. The semantic component of our model learns word vectors via an unsupervised probabilistic model of documents. However, in keeping with linguistic and cognitive research arguing that expressive content and descriptive semantic content are distinct (Kaplan, 1999; Jay, 2000; Potts, 2007), we find that this basic model misses crucial sentiment information. For example, while it learns that wonderful and amazing are semantically close, it doesn't capture the fact that these are both very strong positive sentiment words, at the opposite end of the spectrum from terrible and awful.

Thus, we extend the model with a supervised sentiment component that is capable of embracing many social and attitudinal aspects of meaning (Wilson et al., 2004; Alm et al., 2005; Andreevskaia and Bergler, 2006; Pang and Lee, 2005; Goldberg and Zhu, 2006; Snyder and Barzilay, 2007). This component of the model uses the vector representation of words to predict the sentiment annotations on contexts in which the words appear. This causes words expressing similar sentiment to have similar vector representations. The full objective function of the model thus learns semantic vectors that are imbued with nuanced sentiment information. In our experiments, we show how the model can leverage document-level sentiment annotations of a sort that are abundant online in the form of consumer reviews for movies, products, etc.
The technique is sufficiently general to work also with continuous and multi-dimensional notions of sentiment as well as non-sentiment annotations (e.g., political affiliation, speaker commitment).

After presenting the model in detail, we provide illustrative examples of the vectors it learns, and then we systematically evaluate the approach on document-level and sentence-level classification tasks. Our experiments involve the small, widely used sentiment and subjectivity corpora of Pang and Lee (2004), which permits us to make comparisons with a number of related approaches and published results. We also show that this dataset contains many correlations between examples in the training and testing sets. This leads us to evaluate on, and make publicly available, a large dataset of informal movie reviews from the Internet Movie Database (IMDB).

2 Related work

The model we present in the next section draws inspiration from prior work on both probabilistic topic modeling and vector-spaced models for word meanings.

Latent Dirichlet Allocation (LDA; Blei et al., 2003) is a probabilistic document model that assumes each document is a mixture of latent topics. For each latent topic T, the model learns a conditional distribution p(w|T) for the probability that word w occurs in T. One can obtain a k-dimensional vector representation of words by first training a k-topic model and then filling the matrix with the p(w|T) values (normalized to unit length). The result is a word–topic matrix in which the rows are taken to represent word meanings. However, because the emphasis in LDA is on modeling topics, not word meanings, there is no guarantee that the row (word) vectors are sensible as points in a k-dimensional space. Indeed, we show in section 4 that using LDA in this way does not deliver robust word vectors. The semantic component of our model shares its probabilistic foundation with LDA, but is factored in a manner designed to discover word vectors rather than latent topics. Some recent work introduces extensions of LDA to capture sentiment in addition to topical information (Li et al., 2010; Lin and He, 2009; Boyd-Graber and Resnik, 2010). Like LDA, these methods focus on modeling sentiment-imbued topics rather than embedding words in a vector space.

Vector space models (VSMs) seek to model words directly (Turney and Pantel, 2010). Latent Semantic Analysis (LSA), perhaps the best known VSM, explicitly learns semantic word vectors by applying singular value decomposition (SVD) to factor a term–document co-occurrence matrix. It is typical to weight and normalize the matrix values prior to SVD. To obtain a k-dimensional representation for a given word, only the entries corresponding to the k largest singular values are taken from the word's basis in the factored matrix. Such matrix factorization-based approaches are extremely successful in practice, but they force the researcher to make a number of design choices (weighting, normalization, dimensionality reduction algorithm) with little theoretical guidance to suggest which to prefer.

Using term frequency (tf) and inverse document frequency (idf) weighting to transform the values in a VSM often increases the performance of retrieval and categorization systems. Delta idf weighting (Martineau and Finin, 2009) is a supervised variant of idf weighting in which the idf calculation is done for each document class and then one value is subtracted from the other. Martineau and Finin present evidence that this weighting helps with sentiment classification, and Paltoglou and Thelwall (2010) systematically explore a number of weighting schemes in the context of sentiment analysis. The success of delta idf weighting in previous work suggests that incorporating sentiment information into VSM values via supervised methods is helpful for sentiment analysis. We adopt this insight, but we are able to incorporate it directly into our model's objective function. (Section 4 compares our approach with a representative sample of such weighting schemes.)
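As a concrete illustration of the VSM ingredients discussed above, the sketch below builds a term–document count matrix, applies one common idf weighting, keeps the k largest singular directions as LSA-style word vectors, and computes a delta-idf style supervised weight per term. This is an illustrative sketch, not code from the paper; the whitespace tokenization, the +1 smoothing, and all helper names are assumptions made for the example.

```python
# Illustrative sketch of two VSM building blocks mentioned in the text:
# LSA word vectors from an idf-weighted term-document matrix, and a
# delta-idf style weight (idf per class, one subtracted from the other).
import numpy as np

def term_document_matrix(docs, vocab):
    """|V| x |D| co-occurrence counts, with naive whitespace tokenization."""
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(vocab), len(docs)))
    for j, doc in enumerate(docs):
        for w in doc.split():
            if w in index:
                X[index[w], j] += 1
    return X

def lsa_word_vectors(docs, k=2):
    """k-dimensional word vectors: idf-weight the counts, then truncate the SVD."""
    vocab = sorted({w for d in docs for w in d.split()})
    X = term_document_matrix(docs, vocab)
    df = (X > 0).sum(axis=1, keepdims=True)            # document frequency per term
    X = X * np.log(len(docs) / df)                     # one common idf weighting
    U, S, _ = np.linalg.svd(X, full_matrices=False)    # X = U S V^T
    return vocab, U[:, :k] * S[:k]                     # keep k largest singular directions

def delta_idf(pos_docs, neg_docs, vocab):
    """Delta-idf in the spirit of Martineau and Finin (2009): idf on the
    positive class minus idf on the negative class (+1 smoothing assumed)."""
    df_pos = (term_document_matrix(pos_docs, vocab) > 0).sum(axis=1)
    df_neg = (term_document_matrix(neg_docs, vocab) > 0).sum(axis=1)
    return (np.log(len(pos_docs) / (1.0 + df_pos))
            - np.log(len(neg_docs) / (1.0 + df_neg)))

vocab, W = lsa_word_vectors(["a wonderful amazing movie", "a terrible awful movie"], k=2)
```

Each design choice in this pipeline (weighting scheme, smoothing, truncation rank) is exactly the kind of decision that, as noted above, matrix factorization approaches leave to the practitioner with little theoretical guidance.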
3 Our Model

To capture semantic similarities among words, we derive a probabilistic model of documents which learns word representations. This component does not require labeled data, and shares its foundation with probabilistic topic models such as LDA. The sentiment component of our model uses sentiment annotations to constrain words expressing similar sentiment to have similar representations. We can efficiently learn parameters for the joint objective function using alternating maximization.

3.1 Capturing Semantic Similarities

We build a probabilistic model of a document using a continuous mixture distribution over words indexed by a multi-dimensional random variable θ. We assume words in a document are conditionally independent given the mixture variable θ. We assign a probability to a document d using a joint distribution over the document and θ. The model assumes each word w_i ∈ d is conditionally independent of the other words given θ. The probability of a document is thus

p(d) = \int p(d, \theta) \, d\theta = \int p(\theta) \prod_{i=1}^{N} p(w_i \mid \theta) \, d\theta,   (1)

where N is the number of words in d and w_i is the ith word in d. We use a Gaussian prior on θ.

We define the conditional distribution p(w_i | θ) using a log-linear model with parameters R and b. The energy function uses a word representation matrix R ∈ R^{β×|V|} where each word w (represented as a one-hot vector) in the vocabulary V has a β-dimensional vector representation φ_w = R_w corresponding to that word's column in R. The probability the model assigns to a word is then related to how closely its representation vector φ_w matches the scaling direction of θ. This idea is similar to the word vector inner product used in the log-bilinear language model of Mnih and Hinton (2007).

Equation 1 resembles the probabilistic model of LDA (Blei et al., 2003), which models documents as mixtures of latent topics. One could view the entries of a word vector φ as that word's association strength with respect to each latent topic dimension. The random variable θ then defines a weighting over topics. However, our model does not attempt to model individual topics, but instead directly models word probabilities conditioned on the topic mixture variable θ. Because of the log-linear formulation of the conditional distribution, θ is a vector in R^β and not restricted to the unit simplex as it is in LDA.

We now derive maximum likelihood learning for this model when given a set of unlabeled documents D. In maximum likelihood learning we maximize the probability of the observed data given the model parameters. We assume documents d_k ∈ D are i.i.d. samples. Thus the learning problem becomes

\max_{R, b} \; p(D; R, b) = \prod_{d_k \in D} \int p(\theta) \prod_{i=1}^{N_k} p(w_i \mid \theta; R, b) \, d\theta.   (5)
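The extracted text above cuts off before the paper's explicit equations for p(w | θ), so the following is only a minimal sketch of one natural reading of the log-linear model it describes: a softmax whose score for word w is θᵀφ_w + b_w, with φ_w the column of R for w and b_w a per-word bias. All names, dimensions, and the exact parameterization here are assumptions, not the paper's released code.

```python
# Minimal sketch of a softmax reading of the log-linear conditional p(w | theta):
# score_w = theta^T phi_w + b_w, so words whose vectors align with theta
# receive higher probability. Illustrative only.
import numpy as np

beta, V = 50, 5000                                  # latent dimension, vocabulary size (assumed)
rng = np.random.default_rng(0)
R = 0.01 * rng.standard_normal((beta, V))           # word representation matrix; phi_w = R[:, w]
b = np.zeros(V)                                     # per-word bias (e.g., for frequency effects)

def word_distribution(theta):
    """p(w | theta; R, b) for every w in V, via a numerically stable softmax."""
    scores = theta @ R + b                          # score_w = theta^T phi_w + b_w
    scores -= scores.max()                          # stabilize before exponentiating
    p = np.exp(scores)
    return p / p.sum()

theta = rng.standard_normal(beta)                   # one draw of the document mixture variable
p = word_distribution(theta)                        # a distribution over the vocabulary
```

Under this reading, maximizing equation (5) amounts to choosing R and b so that, after integrating over θ, the observed words of each document are assigned high probability, which is what pushes semantically related words toward similar columns of R.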
