Supervised Language Modeling for Temporal Resolution of Texts

Abhimanu Kumar (Dept. of Computer Science), Matthew Lease (School of Information), Jason Baldridge (Department of Linguistics)
University of Texas at Austin
[email protected]  [email protected]  [email protected]

ABSTRACT

We investigate temporal resolution of documents, such as determining the date of publication of a story based on its text. We describe and evaluate a model that builds histograms encoding the probability of different temporal periods for a document. We construct histograms based on the Kullback-Leibler divergence between the language model for a test document and supervised language models for each interval. Initial results indicate this language modeling approach is effective for predicting the dates of publication of short stories, which contain few explicit mentions of years.

Categories and Subject Descriptors

H.3.1 [Content Analysis and Indexing]

General Terms

Algorithms, Design, Experimentation, Performance

Keywords

time, temporal information, text mining, document dating

1. INTRODUCTION

There is a long tradition of work on temporal analysis of texts. In computational linguistics, the primary focus has been on the fine-grained level of temporal ordering of individual events necessary for deep natural language understanding [1, 24]. In Information Retrieval (IR), research has investigated time-sensitive document ranking [9, 17], time-based organization and presentation of search results [2], how queries and documents change over time [16], etc. This paper describes temporal analysis of documents using methods rooted in both computational linguistics and IR.

While accurate extraction and resolution of explicit mentions of time (absolute or relative) is clearly important [2], our primary focus here is the challenge of learning implicit temporal distributions over natural language use at large. As a concrete example, consider the word wireless. Introduced in the early 20th century to refer to radios, it fell into disuse and then its meaning shifted later that century to describe any form of communication without cables (e.g. internet access). As such, the word wireless embodies implicit time cues, a notion we might generalize by inferring its complete temporal distribution as well as that of all other words. By harnessing many such implicit cues in combination across a document, we might further infer a unique temporal distribution for the overall document.

As in prior document dating studies, we partition the timeline (and document collection) to infer a unique language model (LM) underlying each time period [10, 14]. While prior work considered texts from the past 10-20 years, our work is more historically oriented, predicting publication dates for historical works of fiction. After estimating a similar LM underlying each document [20], we measure similarity between each document's LM and each time period's LM, yielding a distribution over the timeline for each document [10, 14]. In addition to document dating applications, this distribution has potential to inform work in computational humanities, specifically scholars' understandings of how a work was influenced by or reflects different time periods.

We predict publication dates of short stories from the Gutenberg project (http://www.gutenberg.org) published between 1798 and 2008. Gutenberg stories are labeled by publication year. Our inference task is then to predict the publication year given the story's text. These short works of fiction typically use relatively few explicit temporal expressions or mentions of real-life named entities. We refer to the stories as documents, with Gutenberg defining a document collection c consisting of N documents: c = d_{1:N}. Our best model achieves a median error of 23 years from the true publication dates.

2. RELATED WORK

Annotation and corpora. Recent years have brought increased interest in creating new, richly annotated corpora for training and evaluating time-sensitive models. TimeBank [21] and Wikiwars [19] are great exemplars of such work. They have been used for tasks like modeling event structure (e.g. the work of [7] on TimeBank).

Document dating. Prior work on LM-based document dating [10, 14] partitioned the timeline (and document collection) to infer a unique LM underlying each time period. An LM underlying each document [20] was also estimated and used to measure similarity vs. each time period's LM, yielding a distribution over the timeline for each document. However, while prior work focused on the past 10-20 years, our work is more historically oriented, modeling the timeline from the present day back to the 18th century.

Foundational work by de Jong et al. [10] considered Dutch newspaper articles from 1999-2005 and compared language models using the normalised log-likelihood ratio measure (NLLR), a variant of KL-divergence. Linear and Dirichlet smoothing were applied, apparently to the partition LMs but not the document LMs. They also distinguish between the output granularity of time (to be predicted) and the granularity of time modeled. Kanhabua et al. [14] extended de Jong et al.'s model with notions of temporal entropy, use of search term trends from Google Zeitgeist, and semantic preprocessing. Temporal entropy weights terms differently based on how well a term distinguishes between time partitions and how important a term is to a specific partition. Semantic techniques included part-of-speech tagging, collocation extraction, word sense disambiguation, and concept extraction. They created a time-labeled document collection by downloading web pages from the Internet Archive spanning an eight-year period. In follow-on work inferring temporal properties of queries [15], they used the New York Times annotated corpus, with articles spanning 1987-2007.

IR applications. IR research has investigated time-sensitive query interpretation and document ranking [17, 9], time-based organization and presentation of search results [2], how queries and documents change over time [16], etc. One of the first LM-based temporal approaches, by Li and Croft [17], used explicit document dates to estimate a more informative document prior. More recent work by Dakka et al. [9] automatically identifies important time intervals likely to be of interest for a query and similarly integrates knowledge of document publication date into the ranking function. The most relevant work to ours is that by Alonso et al. [2], who provide valuable background on motivation, overview, and discussion of temporal analysis in IR. Using explicit temporal metadata and expressions, they create temporal document profiles to cluster documents and create timelines for exploring search results.

Time-sensitive topic modeling. There has been a variety of work on time-based topic analysis of texts in recent years, such as Dynamic Topic Models [6]. Subsequent work [5] proposes probabilistic time series models to analyze the time evolution of topics in a large document collection. They take a sequential collection of documents from a particular area, e.g. news articles, and determine how topics evolve over time: topics appearing and disappearing, new topics emerging, and older ones fading away. [25] provide a model to evaluate variations in the occurrence of topics in large corpora over a period of time.

The finest temporal granularity we consider in this work is a single year, though the methods we describe can in principle be used with units of finer granularity such as days, weeks, or months.

A chronon is an atomic interval x upon which a discrete timeline is constructed [2]. In this paper, a chronon consists of δ years, where δ is a tunable parameter. Given δ, the timeline T_δ is decomposed into a sequence of n contiguous, non-overlapping chronons x = x_{1:n}, where n = Δ/δ.

We model the affinity between each chronon x and a document d by estimating the discrete distribution P(x|d), a document-specific normalized histogram over the timeline of chronons x_{1:n}. In the next section, we use P(x|d) to infer affinity between d and different chronons as well as longer granules. We describe an LM-based approach for inferring affinity between chronon x and document d as a function of model divergence between latent unigram distributions P(w|d) and P(w|x) (similar to [10, 14]).

We model the likelihood of each chronon x for a given document d. By forming a unique "pseudo-document" d^x associated with each chronon x, we estimate Θ_x from d^x and estimate P(x|d) by comparing the similarity of Θ_d and Θ_x [10, 14]. Whereas prior work measured similarity via NLLR (§2), we compute the unnormalized likelihood of some x given d via standard (inverse) KL-divergence D(Θ_d || Θ_x)^{-1} and normalize in straightforward fashion over all chronons x_{1:n}:

    P(x|d) = D(Θ_d || Θ_x)^{-1} / Σ_{x'} D(Θ_d || Θ_{x'})^{-1}    (1)

We generate the "pseudo-document" d^x for each chronon x by including all training documents whose labeled span overlaps x.

As in prior work [10, 14, 28], we adopt Dirichlet smoothing to regularize partition LMs. However, rather than adopt a single fixed prior for all documents and chronons, we instead define document-chronon specific priors. Let V_{d^x} ∪ V_d denote the document-chronon specific vocabulary for some collection document d_i and pseudo-document d^x. For each document-chronon pair, we define the prior to be a uniform distribution over this specific vocabulary.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
CIKM'11, October 24-28, 2011, Glasgow, Scotland, UK.
Copyright 2011 ACM 978-1-4503-0717-8/11/10 ...$10.00.
