Modeling Word Burstiness Using the Dirichlet Distribution

Rasmus E. Madsen  [email protected]
Department of Informatics and Mathematical Modelling, Technical University of Denmark

David Kauchak  [email protected]
Charles Elkan  [email protected]
Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92092-0144

Abstract

Multinomial distributions are often used to model text documents. However, they do not capture well the phenomenon that words in a document tend to appear in bursts: if a word appears once, it is more likely to appear again. In this paper, we propose the Dirichlet compound multinomial model (DCM) as an alternative to the multinomial. The DCM model has one additional degree of freedom, which allows it to capture burstiness. We show experimentally that the DCM is substantially better than the multinomial at modeling text data, measured by perplexity. We also show using three standard document collections that the DCM leads to better classification than the multinomial model. DCM performance is comparable to that obtained with multiple heuristic changes to the multinomial model.

1. Introduction

Document classification is the task of identifying what topic(s) a document concerns. Generative approaches to classification are popular since they are relatively easy to interpret and can be trained quickly. With these approaches, the key problem is to develop a probabilistic model that represents the data well. Unfortunately, for text classification too little attention has been devoted to this task. Instead, a generic multinomial model is typically used.

Recent work (Rennie et al., 2003) has pointed out a number of deficiencies of the multinomial model, and suggested heuristics to improve its performance. In this paper, we follow an alternative path. We present a different probabilistic model that, without any heuristic changes, is far better suited for representing a class of text documents.

As most researchers do, we represent an individual document as a vector of word counts (Salton et al., 1975). This bag-of-words representation loses semantic information, but it simplifies further processing. The usual next simplification is the assumption that documents are generated by repeatedly drawing words from a fixed distribution. Under this assumption, word emissions are independent given the class, i.e. the naive Bayes property holds. This property is not valid (Lewis, 1998), but naive Bayes models remain popular (McCallum & Nigam, 1998; Sebastiani, 2002) because they are fast and easy to implement, they can be fitted even with limited training data, and they do yield accurate classification when heuristics are applied (Jones, 1972; Rennie et al., 2003).

The central problem with the naive Bayes assumption is that words tend to appear in bursts, as opposed to being emitted independently (Church & Gale, 1995; Katz, 1996). Rennie et al. (2003) address this issue by log-normalizing counts, reducing the impact of burstiness on the likelihood of a document. Teevan and Karger (2003) empirically search for a model that fits documents well within an exponential family of models, while Jansche (2003) proposes a zero-inflated mixture model.

In this paper we go further. We show that the multinomial model is appropriate for common words, but not for other words. The distributions of counts produced by multinomials are fundamentally different from the count distributions of natural text. Zipf's law (Zipf, 1949) states that the probability p_i of occurrence of an event follows a power law p_i ≈ i^{-a}, where i is the rank of the event and a is a parameter.

Appearing in Proceedings of the 22nd International Conference on Machine Learning, Bonn, Germany, 2005. Copyright 2005 by the author(s)/owner(s).
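Zipf's law as stated above is easy to check numerically. A minimal sketch, assuming a synthetic rank-probability distribution rather than either of the paper's corpora (the vocabulary size and exponent are illustrative):

```python
def zipf_probs(W, a=1.0):
    """Normalized Zipf probabilities p_i proportional to i**(-a) over ranks 1..W."""
    weights = [i ** (-a) for i in range(1, W + 1)]
    Z = sum(weights)  # normalizer (generalized harmonic number)
    return [w / Z for w in weights]

p = zipf_probs(50000)
# With exponent a = 1, the rank-1 word is exactly 10x as probable
# as the rank-10 word, since the normalizers cancel in the ratio.
ratio = p[0] / p[9]
```

The power-law decay of `p` contrasts with the exponential decay of per-word counts under a multinomial, which is the gap the paper exploits.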
The most famous example of Zipf's law is that the frequency of an English word, as a function of the word's rank, follows a power law with exponent close to minus one.

We propose to model a collection of text documents with a Dirichlet distribution (Minka, 2003). The Dirichlet distribution can be interpreted in two ways for this purpose, either as a bag-of-scaled-documents or as a bag-of-bags-of-words. We show below that the latter approach works well.

Dirichlet distributions have been used previously to model text, but our approach is fundamentally different. In the LDA approach (Blei et al., 2003) the Dirichlet is a distribution over topics, while each topic is modeled in the usual way as a multinomial distribution over words. In our approach, each topic, i.e. each class of documents, is modeled in a novel way by a Dirichlet distribution instead of by a multinomial. Our approach is therefore complementary to the LDA and related approaches.

Figure 1. Count probabilities of common, average and rare words in the industry sector corpus. The figure shows, for example, that the probability that a given rare word occurs exactly 10 times in a document is 10^{-6}. A ripple effect is seen for common words because no vocabulary pruning is done, so certain HTML keywords such as "font" or "table" occur an even number of times in beginning and ending tags.

2. Multinomial modeling of text

When using a multinomial distribution to model text, the multinomial specifies the probability of observing a given vector of word counts, where the probability θ_w of emitting word w is subject to the constraints Σ_w θ_w = 1 and θ_w > 0 for all w. The probability of a document x represented as a vector of counts x_w is

    p(x | θ) = \frac{n!}{\prod_{w=1}^{W} x_w!} \prod_{w=1}^{W} θ_w^{x_w}

where x_w is the number of times word w appears in the document, W is the size of the vocabulary, and n = Σ_w x_w.

The multinomial distribution is different for each different document length n. This is not a problem when learning the parameters; it is possible to generalize over documents with different lengths. The maximum likelihood parameter estimates θ̂ are

    θ̂_w = \frac{\sum_{d=1}^{D} x_{dw}}{\sum_{w'=1}^{W} \sum_{d=1}^{D} x_{dw'}}

where d is the document number and D is the number of documents. These estimates depend only on the fraction of times a given word appears in the entire corpus.

When the multinomial model is used to generate a document, the distribution of the number of emissions (i.e. count) of an individual word is a binomial:

    p(x_w | θ) = \binom{n}{x_w} θ_w^{x_w} (1 - θ_w)^{n - x_w}.    (1)

Equation (1) shows that it is unlikely under the multinomial model for a word to occur many times in a document, because the single-word count distribution decays exponentially. Figure 2 shows the average word count probabilities from ten synthetic corpora generated from a multinomial model trained on the industry sector corpus. Each synthetic corpus was generated so its documents have the same length distribution as documents in the industry sector corpus.

The multinomial captures the burstiness of common words, but the burstiness of average and rare words is not modeled correctly. This is a major deficiency in the multinomial model since rare and average words represent 99% of the vocabulary and 29% of emissions and, more importantly, these words are key features for classification.

Figure 2. Count probabilities for a maximum likelihood multinomial model, trained with the industry sector corpus.

3. The burstiness phenomenon

The term "burstiness" (Church & Gale, 1995; Katz, 1996) describes the behavior of a rare word appearing many times in a single document. Because of the large number of possible words, most words do not appear in a given document. However, if a word does appear once, it is much more likely to appear again, i.e. words appear in bursts. To illustrate this behavior, the probability that a given word occurs in a document exactly x times is shown in Figure 1 for the industry sector corpus. Words have been split into three categories based on how often they appear in the corpus. The categories are "common," "average," and "rare." The common words are the 500 most frequent words; they represent 1% of the words in the vocabulary and 71% of the emissions. The average words are the next 5000 most common words; they represent 10% of the vocabulary and 21% of the emissions. The rare words are the rest of the vocabulary (50,030 words) and account for 8% of the emissions.

A few things should be noted about Figure 1. Not surprisingly, common words are more probable than average words, which are more probable than rare words. Interestingly, though, the curves for the three categories of words are close to parallel and have similar decay rates. Even though average and rare words are less likely to appear, once a word has appeared, the probability that it will occur multiple times is similar across all words.

Figure 3. Simplex of possible count vectors using the multinomial bag-of-words model with parameters 0.4375, 0.25, 0.3125 and n = 50.

The Dirichlet distribution is

    p(θ | α) = \frac{Γ(\sum_{w=1}^{W} α_w)}{\prod_{w=1}^{W} Γ(α_w)} \prod_{w=1}^{W} θ_w^{α_w - 1}

where θ is a vector in the W-dimensional probability simplex, i.e. Σ_w θ_w = 1. The α vector entries are the parameters of the Dirichlet. When modeling text, the θ vector represents a document, making the model have the form data^parameter. This form makes Dirichlet models qualitatively similar to Zipf distributions, where the parameter is an exponent. In contrast, the multinomial model has the form parameter^data. The Dirichlet distribution can be rewritten in exponential family form as

    \log p(θ | α) = \sum_{w=1}^{W} (α_w - 1) \log θ_w + \log Γ\Big(\sum_{w=1}^{W} α_w\Big) - \sum_{w=1}^{W} \log Γ(α_w)
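The exponential decay that Equation (1) imposes on a single word's count can be seen directly with the binomial pmf. A minimal sketch, assuming an illustrative word probability and document length rather than estimates from the industry sector corpus:

```python
from math import comb

def binom_pmf(k, n, theta):
    """P(word count = k) in a document of n independent draws, as in Equation (1)."""
    return comb(n, k) * theta**k * (1 - theta) ** (n - k)

theta, n = 1e-4, 500  # a rare word in a 500-word document (illustrative values)
p1 = binom_pmf(1, n, theta)
p2 = binom_pmf(2, n, theta)
p3 = binom_pmf(3, n, theta)
# Each additional occurrence costs roughly a factor of n*theta/k ≈ 0.05/k,
# so the multinomial makes bursts of a rare word exponentially unlikely,
# unlike the near-parallel empirical curves in Figure 1.
```

This geometric fall-off is exactly the mismatch with real text that Figure 1 versus Figure 2 illustrates.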
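The generative contrast between the two models can also be simulated: drawing a fresh θ per document from a Dirichlet and then sampling words (the DCM's generative process) produces bursty counts, while sampling every document from one shared multinomial does not. A sketch under assumed illustrative settings (the vocabulary size, document length, and α values are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
W, n_docs, doc_len = 1000, 2000, 100
alpha = np.full(W, 0.01)  # small alpha -> sparse per-document theta, hence bursts

# Multinomial baseline: every document shares one theta (the Dirichlet mean).
theta_mean = alpha / alpha.sum()
multi_counts = rng.multinomial(doc_len, theta_mean, size=n_docs)

# DCM: draw theta per document from the Dirichlet, then draw words from it.
dcm_counts = np.vstack([
    rng.multinomial(doc_len, rng.dirichlet(alpha)) for _ in range(n_docs)
])

def repeat_rate(counts):
    """P(count >= 2 | count >= 1): once a word appears, how often does it repeat?"""
    appeared = counts >= 1
    return (counts >= 2)[appeared].mean()
```

Under these settings the DCM's repeat rate is far higher than the shared-multinomial's, mirroring the burstiness that Section 3 documents empirically.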
