A Continuous-Time Model of Topic Co-occurrence Trends

Wei Li, Xuerui Wang and Andrew McCallum
Department of Computer Science
University of Massachusetts
140 Governors Drive
Amherst, MA 01003-9264

Copyright © 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Recent work in statistical topic models has investigated richer structures to capture either temporal or inter-topic correlations. This paper introduces a topic model that combines the advantages of two recently proposed models: (1) the Pachinko Allocation model (PAM), which captures arbitrary topic correlations with a directed acyclic graph (DAG), and (2) the Topics over Time model (TOT), which captures time-localized shifts in topic prevalence with a continuous distribution over timestamps. Our model can thus capture not only temporal patterns in individual topics, but also the temporal patterns in their co-occurrences. We present results on a research paper corpus, showing interesting correlations among topics and their changes over time.

Introduction

The increasing amount of data on the Web has heightened demand for methods that extract structured information from unstructured text. A recent study shows that there were over 11.5 billion pages on the World Wide Web as of January 2005 (Gulli & Signorini 2005), mostly in unstructured data. Lately, studies in information extraction and synthesis have led to more advanced techniques to automatically organize raw data into structured forms. For example, the Automatic Content Extraction (ACE) program (http://www.itl.nist.gov/iad/894.01/tests/ace/) has added a new Event Detection and Characterization (EDC) task. Compared to entity and relation extraction, EDC extracts more complex structures that involve many named entities and multiple relations.

While ACE-style techniques provide detailed structures, they are also brittle. State-of-the-art named entity recognition and relation extraction techniques are both errorful, and when combined, they yield even lower accuracies for event extraction. Furthermore, they require a significant amount of labeled training data. In this paper, we explore an alternative approach that models topical events. Instead of extracting individual entities and relations for specific events, we capture the rise and fall of themes, ideas and their co-occurrence patterns over time. Similar to Topic Detection and Tracking (TDT) (Allan et al. 1998), which extracts related content from a stream of newswire stories, we discover topics from documents and study their changes over time. While this approach is less detailed and structured, it is more robust and does not require supervised training. In addition to providing a coarse-grained view of events in time, these models could also be used as a filter, or as an input to later, more detailed traditional processing methods.

Statistical topic models such as Latent Dirichlet allocation (LDA) (Blei, Ng, & Jordan 2003) have been shown to be effective tools in topic extraction and analysis. Although these models do not capture the dynamic properties of topics, many of the large data sets to which they are applied are collected over time. Topics rise and fall in prominence; they split apart; they merge to form new topics; words change their correlations. More importantly, topic co-occurrences also change significantly over time, and time-sensitive patterns can be learned from them as well.

Some previous work has performed post-hoc analysis to study the dynamical behaviors of topics: discovering topics without the use of timestamps and then projecting their occurrence counts into discretized time (Griffiths & Steyvers 2004). However, this misses the opportunity for time to improve topic discovery. A more systematic approach is the state-transition-based methods (Blei & Lafferty 2006) using the Markov assumption. Recently, we proposed a simple new Topics over Time (TOT) model that uses temporal information in topic models (Wang & McCallum 2006), representing the timestamps of documents as observed continuous variables. A significant difference between TOT and previous work with similar goals is that TOT does not discretize time and does not make Markov assumptions over state transitions in time. Each topic is associated with a continuous distribution over time, and topics are responsible for generating both observed timestamps and words.

To generate the words in a document, TOT follows the same procedure as Latent Dirichlet Allocation (LDA) (Blei, Ng, & Jordan 2003). Each document is represented as a mixture of topics, and each topic is a multinomial distribution over a word vocabulary. The mixture components in the documents are sampled from a single Dirichlet distribution. Therefore, TOT focuses on modeling individual topics and their changes over time.

However, in real-world text data, topics are often correlated. That is, some topics are more likely to co-occur than others. In order to model this correlation, we have recently introduced the Pachinko Allocation model (PAM), an extension to LDA that uses a directed acyclic graph (DAG) to capture arbitrary topic correlations. Each leaf in the DAG is associated with a word in the vocabulary, and each interior node represents a correlation among its children, which are either leaves or other interior nodes. Therefore, PAM captures not only correlations among words, as LDA does, but also correlations among topics themselves. For each topic, PAM parameterizes a Dirichlet distribution over its children.

Although PAM captures topic correlations, and TOT captures the distribution of topics over time, neither captures the phenomenon that the correlations among topics evolve over time. In this paper, we combine PAM and TOT to model the temporal aspects and dynamic correlations of topics extracted from large text collections. To generate a document, we first draw a multinomial distribution from each Dirichlet associated with the topics. Then, for each word in the document, we sample a topic path in the DAG based on these multinomials. Each topic on the path generates a timestamp for this word based on a per-topic Beta distribution over time.

With this combined approach, we can discover not only how topics are correlated, but also when such correlations occur or disappear. In contrast to other work that models trajectories of individual topics over time, PAMTOT topics and their meanings are modeled as constant. PAMTOT captures the changes in topic co-occurrences, not the changes in the word distribution of each topic.

The Model

In this section, we first present the Pachinko Allocation model (PAM), which captures individual topics and their correlations from a static point of view. Then we introduce a combined approach of PAM and Topics over Time (TOT) for modeling the evolution of topics. We are especially interested in the timelines of topic correlations. While PAM allows arbitrary DAGs, we will focus on a special four-level hierarchical structure and describe the corresponding inference algorithm and parameter estimation method.

Pachinko Allocation Model

The notation for the Pachinko Allocation model is summarized below.

V = {x_1, x_2, ..., x_v}: a word vocabulary.

S = {y_1, y_2, ..., y_s}: a set of topic nodes. Each of them captures some correlation among words or topics. Note that there is a special node called the root r. It has no incoming links and every topic path starts from it.

D: a DAG that consists of the nodes in V and S. The topic nodes occupy the interior levels and the leaves are words.

G = {g_1(\alpha_1), g_2(\alpha_2), ..., g_s(\alpha_s)}: g_i, parameterized by \alpha_i, is a Dirichlet distribution associated with topic y_i.

To generate a document d, PAM proceeds as follows:

1. Sample \theta^{(d)}_{y_1}, \theta^{(d)}_{y_2}, ..., \theta^{(d)}_{y_s} from g_1(\alpha_1), g_2(\alpha_2), ..., g_s(\alpha_s). Each \theta^{(d)}_{y_i} is a multinomial distribution of topic y_i over its children.

2. For each word w in the document,
   • Sample a topic path z_w of L_w topics: <z_{w1}, z_{w2}, ..., z_{wL_w}>. z_{w1} is always the root, and z_{w2} through z_{wL_w} are topic nodes in S. z_{wi} is a child of z_{w(i-1)} and is sampled from the multinomial distribution \theta^{(d)}_{z_{w(i-1)}}.
   • Sample word w from \theta^{(d)}_{z_{wL_w}}.

Following this process, the joint probability of generating a document d, the topic assignments z^{(d)} and the multinomial distributions \theta^{(d)} is

P(d, z^{(d)}, \theta^{(d)} \mid \alpha) = \prod_{i=1}^{s} P(\theta^{(d)}_{y_i} \mid \alpha_i) \times \prod_{w} \Big( \prod_{i=2}^{L_w} P(z_{wi} \mid \theta^{(d)}_{z_{w(i-1)}}) \Big) P(w \mid \theta^{(d)}_{z_{wL_w}})

Combining PAM and TOT

Now we introduce a combined approach of PAM and Topics over Time (TOT), which captures the evolution of topics and their correlations. Each document is associated with one timestamp, but for the convenience of inference (Wang & McCallum 2006), we consider it to be shared by all the words in the document. In order to generate both words and their timestamps, we modify the generative process in PAM as follows. For each word w in a document d, we still sample a topic path z_w based on the multinomial distributions \theta^{(d)}_{y_1}, \theta^{(d)}_{y_2}, ..., \theta^{(d)}_{y_s}. Simultaneously, we also sample a timestamp t_{wi} from each topic z_{wi} on the path based on the corresponding Beta distribution Beta(\psi_{z_{wi}}), where \psi_z denotes the parameters of the Beta distribution for topic z.

Now the joint probability of generating a document d, the topic assignments z^{(d)}, the timestamps t^{(d)} and the multinomial distributions \theta^{(d)} is

P(d, z^{(d)}, t^{(d)}, \theta^{(d)} \mid \alpha, \Psi) = \prod_{i=1}^{s} P(\theta^{(d)}_{y_i} \mid \alpha_i) \times \prod_{w} \Big( \prod_{i=1}^{L_w} P(t_{wi} \mid \psi_{z_{wi}}) \Big) \Big( \prod_{i=2}^{L_w} P(z_{wi} \mid \theta^{(d)}_{z_{w(i-1)}}) \Big) P(w \mid \theta^{(d)}_{z_{wL_w}})

Integrating out \theta^{(d)} and summing over z^{(d)}, we obtain the marginal probability of a document and its timestamps:

P(d, t^{(d)} \mid \alpha, \Psi) = \int \prod_{i=1}^{s} P(\theta^{(d)}_{y_i} \mid \alpha_i) \times \prod_{w} \sum_{z_w} \Big( \prod_{i=1}^{L_w} P(t_{wi} \mid \psi_{z_{wi}}) \Big) \Big( \prod_{i=2}^{L_w} P(z_{wi} \mid \theta^{(d)}_{z_{w(i-1)}}) \Big) P(w \mid \theta^{(d)}_{z_{wL_w}}) \, d\theta^{(d)}
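As a concrete illustration, the generative process above can be sketched in Python for the special four-level hierarchy (root, super-topics, sub-topics, words). This is a minimal sketch, not the paper's implementation: all sizes, hyperparameter values, and Beta parameters below are illustrative assumptions, and for brevity only the sub-topic on each path draws the word's timestamp, whereas in the full model every topic on the path generates one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes for a four-level DAG: root -> super-topics -> sub-topics -> words.
# These values are assumptions for the sketch, not values from the paper.
n_super, n_sub, vocab_size = 3, 5, 20

alpha_root = np.ones(n_super)              # Dirichlet prior at the root
alpha_super = np.ones((n_super, n_sub))    # one Dirichlet per super-topic
phi = rng.dirichlet(np.ones(vocab_size), size=n_sub)  # word multinomial per sub-topic

# Per-topic Beta parameters psi over normalized time in [0, 1]:
# e.g. (2, 5) peaks early in the timespan, (5, 2) peaks late.
psi_sub = np.array([[2.0, 5.0], [5.0, 2.0], [1.0, 1.0], [3.0, 3.0], [4.0, 1.5]])

def generate_document(n_words):
    """Generate (word, timestamp) pairs for one document under the sketch."""
    # 1. Draw a multinomial from each Dirichlet associated with the topics.
    theta_root = rng.dirichlet(alpha_root)
    theta_super = np.array([rng.dirichlet(a) for a in alpha_super])
    words, times = [], []
    for _ in range(n_words):
        # 2. Sample a topic path: root -> super-topic -> sub-topic.
        z_super = rng.choice(n_super, p=theta_root)
        z_sub = rng.choice(n_sub, p=theta_super[z_super])
        # 3. The sub-topic on the path generates a timestamp from its Beta
        #    distribution (a simplification of the per-path-topic timestamps).
        t = rng.beta(*psi_sub[z_sub])
        # 4. Sample the word from the sub-topic's multinomial over the vocabulary.
        w = rng.choice(vocab_size, p=phi[z_sub])
        words.append(w)
        times.append(t)
    return words, times

words, times = generate_document(50)
```

Because the Beta density is defined on [0, 1], document timestamps would be normalized to that range before fitting, as in TOT; words drawn from early-peaked topics then tend to carry small timestamp values.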
