
Pachinko Allocation: DAG-Structured Mixture Models of Topic Correlations

Wei Li [email protected]
Andrew McCallum [email protected]
University of Massachusetts, Dept. of Computer Science

Abstract

Latent Dirichlet allocation (LDA) and other related topic models are increasingly popular tools for summarization and manifold discovery in discrete data. However, LDA does not capture correlations between topics. In this paper, we introduce the pachinko allocation model (PAM), which captures arbitrary, nested, and possibly sparse correlations between topics using a directed acyclic graph (DAG). The leaves of the DAG represent individual words in the vocabulary, while each interior node represents a correlation among its children, which may be words or other interior nodes (topics). PAM provides a flexible alternative to recent work by Blei and Lafferty (2006), which captures correlations only between pairs of topics. Using text data from newsgroups, historic NIPS proceedings and other research paper corpora, we show improved performance of PAM in document classification, likelihood of held-out data, the ability to support finer-grained topics, and topical keyword coherence.

1. Introduction

Statistical topic models have been successfully used to analyze large amounts of textual information in many tasks, including language modeling, document classification, information retrieval, document summarization and data mining. Given a collection of textual documents, parameter estimation in these models discovers a low-dimensional set of multinomial word distributions called "topics". Mixtures of these topics give high likelihood to the training data, and the highest-probability words in each topic provide keywords that briefly summarize the themes in the text collection. In addition to textual data (including news articles, research papers and email), topic models have also been applied to images, biological findings and other non-textual multi-dimensional discrete data.

Latent Dirichlet allocation (LDA) (Blei et al., 2003) is a widely used topic model, often applied to textual data, and the basis for many variants. LDA represents each document as a mixture of topics, where each topic is a multinomial distribution over words in a vocabulary. To generate a document, LDA first samples a per-document multinomial distribution over topics from a Dirichlet distribution. Then it repeatedly samples a topic from this multinomial and samples a word from that topic.
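To make this generative process concrete, the following minimal Python sketch draws a toy document from an LDA-style model. The vocabulary, number of topics, and symmetric Dirichlet hyperparameters are arbitrary placeholders for illustration, not values used in the paper, and the sketch covers only generation, not inference.

    import numpy as np

    rng = np.random.default_rng(0)

    vocab = ["cook", "oven", "doctor", "diet", "policy", "dose"]  # toy vocabulary
    K = 3        # number of topics (placeholder)
    alpha = 0.1  # symmetric Dirichlet prior over per-document topic proportions
    beta = 0.01  # symmetric Dirichlet prior over per-topic word distributions

    # Each topic is a multinomial distribution over the vocabulary.
    topics = rng.dirichlet([beta] * len(vocab), size=K)

    def generate_document(length):
        # 1. Sample a per-document multinomial over topics from a Dirichlet.
        theta = rng.dirichlet([alpha] * K)
        words = []
        for _ in range(length):
            # 2. Repeatedly sample a topic, then a word from that topic.
            z = rng.choice(K, p=theta)
            words.append(vocab[rng.choice(len(vocab), p=topics[z])])
        return words

    print(generate_document(10))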
The topics discovered by LDA capture correlations among words, but LDA does not explicitly model correlations among topics. This limitation arises because the topic proportions in each document are sampled from a single Dirichlet distribution. As a result, LDA has difficulty modeling data in which some topics co-occur more frequently than others. However, topic correlations are common in real-world text data, and ignoring them limits LDA's ability to predict new data with high likelihood. Ignoring topic correlations also hampers LDA's ability to discover a large number of fine-grained, tightly coherent topics: because LDA can combine arbitrary sets of topics, it is reluctant to form highly specific ones, for which some combinations would be "nonsensical".

Motivated by the desire to build accurate models that can discover large numbers of fine-grained topics, we are interested in topic models that capture topic correlations.

Teh et al. (2005) propose hierarchical Dirichlet processes (HDP) to model groups of data that have a pre-defined hierarchical structure. Each pre-defined group is associated with a Dirichlet process, whose base measure is sampled from a higher-level Dirichlet process. HDP can capture topic correlations defined by this nested data structure; however, it does not automatically discover such correlations from unstructured data. A simple version of HDP does not use a hierarchy over pre-defined groups of data, and can be viewed as an extension of LDA that integrates over (or alternatively selects) the appropriate number of topics.

An alternative model that not only represents topic correlations, but also learns them, is the correlated topic model (CTM) (Blei & Lafferty, 2006). It is similar to LDA, except that rather than drawing topic mixture proportions from a Dirichlet, it draws them from a logistic normal distribution, whose parameters include a covariance matrix in which each entry specifies the correlation between a pair of topics. Thus in CTM topics are not independent; note, however, that only pairwise correlations are modeled, and the number of parameters in the covariance matrix grows as the square of the number of topics.

In this paper, we introduce the pachinko allocation model (PAM), which uses a directed acyclic graph (DAG) structure to represent and learn arbitrary, nested, and possibly sparse topic correlations. In PAM, the concept of a topic is extended to distributions not only over words, but also over other topics. The model structure consists of an arbitrary DAG, in which each leaf node is associated with a word in the vocabulary, and each non-leaf "interior" node corresponds to a topic, having a distribution over its children. An interior node whose children are all leaves would correspond to a traditional LDA topic. But some interior nodes may also have children that are other topics, thus representing a mixture over topics. With many such nodes, PAM therefore captures not only correlations among words (as in LDA), but also correlations among topics themselves.

For example, consider a document collection that discusses four topics: cooking, health, insurance and drugs. The cooking topic co-occurs often with health, while health, insurance and drugs are often discussed together. A DAG can describe this kind of correlation. Four nodes for the four topics form one level that is directly connected to the words. There are two additional nodes at a higher level, where one is the parent of cooking and health, and the other is the parent of health, insurance and drugs.

In PAM each interior node's distribution over its children could be parameterized arbitrarily. In the remainder of this paper, however, as in LDA, we use a Dirichlet, parameterized by a vector with the same dimension as the number of children. Thus, here, a PAM model consists of a DAG in which each interior node contains a Dirichlet distribution over its children. To generate a document from this model, we first sample a multinomial from each Dirichlet. Then, to generate each word of the document, we begin at the root of the DAG, sampling one of its children according to the root's multinomial, and so on, sampling children down the DAG until we reach a leaf, which yields a word. The model is named for pachinko machines, a game popular in Japan, in which metal balls bounce down around a complex collection of pins until they land in various bins at the bottom.
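The following minimal Python sketch makes this per-document sampling procedure concrete, using a small hand-built DAG modeled on the cooking/health/insurance/drugs example above. The node names, toy vocabulary, and uniform Dirichlet parameters are illustrative placeholders, not part of the model definition.

    import numpy as np

    rng = np.random.default_rng(0)

    # Each interior node maps to its children; strings not in the dict are leaf words.
    dag = {
        "root":      ["super1", "super2"],
        "super1":    ["cooking", "health"],             # parent of cooking and health
        "super2":    ["health", "insurance", "drugs"],  # parent of health, insurance and drugs
        "cooking":   ["oven", "recipe", "bake"],
        "health":    ["doctor", "diet", "exercise"],
        "insurance": ["policy", "claim", "premium"],
        "drugs":     ["dose", "prescription", "pharmacy"],
    }
    # One Dirichlet per interior node, over that node's children.
    alpha = {node: [1.0] * len(children) for node, children in dag.items()}

    def generate_document(length):
        # For each document, sample a multinomial over children from every node's Dirichlet.
        theta = {node: rng.dirichlet(alpha[node]) for node in dag}
        words = []
        for _ in range(length):
            node = "root"
            # Walk down the DAG, sampling a child at each interior node, until reaching a leaf.
            while node in dag:
                node = rng.choice(dag[node], p=theta[node])
            words.append(str(node))  # the leaf reached is the emitted word
        return words

    print(generate_document(8))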
Note that the DAG structure in PAM is extremely flexible. It could be a simple tree (hierarchy), or an arbitrary DAG, with cross-connected edges and edges that skip levels. The nodes can be fully or sparsely connected. The structure could be fixed beforehand or learned from the data. It is easy to see that LDA can be viewed as a special case of PAM: the DAG corresponding to LDA is a three-level hierarchy consisting of one root at the top, a set of topics in the middle and a word vocabulary at the bottom. The root is fully connected to all the topics, and each topic is fully connected to all the words. (LDA represents topics as multinomial distributions over words, which can be seen as Dirichlet distributions with variance 0.)

Figure 1. Model structures for four topic models. (a) Dirichlet Multinomial: for each document, a multinomial distribution over words is sampled from a single Dirichlet. (b) LDA: this model samples a multinomial over topics for each document, and then generates words from the topics. (c) Four-Level PAM: a four-level hierarchy consisting of a root, a set of super-topics, a set of sub-topics and a word vocabulary; both the root and the super-topics are associated with Dirichlet distributions, from which we sample multinomials over their children for each document. (d) PAM: an arbitrary DAG structure to encode the topic correlations; each interior node is considered a topic and associated with a Dirichlet distribution.

In this paper we present experimental results demonstrating PAM's improved performance over LDA in three different tasks: topical word coherence assessed by human judges, likelihood on held-out test data, and document classification accuracy. We also show a favorable comparison versus CTM and HDP on held-out data likelihood.

2. The Model

Denote the topic nodes of the DAG (including the root) by T = {t1, t2, ..., ts}, and associate with each topic ti a Dirichlet distribution gi(αi) over its children, where αi has one component per child. To generate a document d, PAM proceeds as follows:

1. Sample θ_{t1}^{(d)}, θ_{t2}^{(d)}, ..., θ_{ts}^{(d)} from g1(α1), g2(α2), ..., gs(αs), where θ_{ti}^{(d)} is a multinomial distribution of topic ti over its children.

2. For each word w in the document,
   • Sample a topic path z_w of length L_w: <z_{w1}, z_{w2}, ..., z_{wL_w}>. z_{w1} is always the root, and z_{w2} through z_{wL_w} are topic nodes in T. z_{wi} is a child of z_{w(i-1)} and is sampled according to the multinomial distribution θ_{z_{w(i-1)}}^{(d)}.
   • Sample word w from θ_{z_{wL_w}}^{(d)}.

Following this process, the joint probability of generating a document d, the topic assignments z^{(d)} and the multinomial distributions θ^{(d)} is the product of the probabilities of the sampled multinomials and of every sampling step along each word's topic path.
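Written out in the notation above, this factorization takes the following form (a reconstruction implied directly by steps 1 and 2; the inner product starts at i = 2 because z_{w1} is fixed to the root):

    P(d, z^{(d)}, \theta^{(d)} \mid \alpha)
      = \prod_{i=1}^{s} P\big(\theta_{t_i}^{(d)} \mid \alpha_i\big)
        \prod_{w} \Big( \prod_{i=2}^{L_w} P\big(z_{wi} \mid \theta_{z_{w(i-1)}}^{(d)}\big) \Big)
        P\big(w \mid \theta_{z_{w L_w}}^{(d)}\big)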