Detection and Geo-Temporal Tracking of Important Topics in News Texts

Erik Michael Leal Wennberg
[email protected]

ABSTRACT
In our current society, newswire documents are in constant development and their volume grows ever more rapidly. Due to the overwhelming diversity of concerns of each population, it is interesting to discover, within a certain topic of interest, where and when its important events took place.

This thesis develops a new approach to detect and track important events over time and space, by analyzing the topics of a collection of newswire documents. The approach combines the collection's associated topics (manually assigned topics, or topics automatically generated with a probabilistic topic model) with the spatial and temporal metadata of each document, in order to analyze the collection's topics over time with time series, as well as over space with geographic maps displaying the geographic distribution of each topic.

By examining each topic's spatial and temporal distributions, it was possible to correlate a topic's spatial and temporal trends with the occurrence of important events. Several experiments conducted on a large collection of newswire documents showed that the proposed approach can effectively detect and track important events over time and space.

Keywords
Newswire Documents, Important Events, Topic Modeling, Geo-temporal Topic Analysis, Latent Dirichlet Allocation

1. INTRODUCTION
In a world of constant change, news is in permanent development and an overwhelming amount of information is constantly distributed worldwide. Due to the wide diversity of interests of each population, it has become an enormous challenge to discover where and when important events took place, and thus to discover the focus of interest of a given population at a given moment in time. The information available in the different newswire documents depends on the concerns shown by individuals or groups, which can be related to a particular time and space of interest. This relation can answer many interesting questions, such as: "Which are the main topics of concern in a geographic region?", "How does a topic evolve over time and/or space?" and "What is the geographic distribution of a topic?".

This thesis attempts to answer these questions by performing a geo-temporal topic analysis on a large collection of newswire documents, in order to extract relevant spatial and temporal information related to events. The thesis mainly focuses on finding a correlation between the spatial and temporal topic trends and the occurrence of important events, in order to detect and track these events over time and space.

This paper is organized as follows: Section 2 presents some fundamental concepts. Section 3 presents the related work. Section 4 describes in detail the different tasks involved in the proposed approach, as well as the software used and its respective configuration. Section 5 describes the evaluation methodology and discusses the obtained results, and finally Section 6 presents the conclusions and future work derived from this thesis.

2. FUNDAMENTAL CONCEPTS
This thesis is framed within Topic Detection and Tracking (TDT), an area of information retrieval that aims to (i) understand whether a document introduces a new topic or whether its topic was already referred to in previous documents (i.e., First Story Detection), and (ii) recognize topics as they are described over documents (i.e., Topic Tracking) [1]. Automatic approaches to TDT can be very useful for analyzing a collection of news reports.

In brief, first story detection consists of recognizing, in an incoming stream of documents, which are the ones that describe a new topic.
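As a concrete illustration, first story detection by similarity thresholding (comparing each incoming document against every past document) can be sketched as follows. This is a minimal sketch only: it assumes raw term-frequency vectors, cosine similarity, and an arbitrary threshold of 0.5, whereas a real system would typically use TF-IDF weighting and a tuned threshold.

```python
import math

def term_frequencies(text):
    """Bag-of-words representation: term -> occurrence count, order ignored."""
    freqs = {}
    for term in text.lower().split():
        freqs[term] = freqs.get(term, 0) + 1
    return freqs

def cosine(u, v):
    """Cosine similarity between two sparse term-frequency vectors (dicts)."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

def is_first_story(incoming, past_documents, threshold=0.5):
    """A document describes a new topic ('first story') only if it does not
    exceed the similarity threshold with any previously seen document."""
    v = term_frequencies(incoming)
    return all(cosine(v, term_frequencies(d)) <= threshold
               for d in past_documents)

past = ["the soccer player scored a goal",
        "parliament debates the new budget"]
print(is_first_story("the player scored another goal", past))  # prints False
print(is_first_story("volcano erupts in iceland", past))       # prints True
```

The threshold controls the trade-off between flagging genuinely new stories and flagging mere paraphrases of old ones; because novelty is judged only against previously seen documents, the decision is inherently relative.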
This task is often addressed by measuring the similarity between the incoming document and every document that appeared in the past. If the incoming document exceeds a similarity threshold with any one of the previous documents, it is considered to represent an old topic; otherwise, it is considered to be about a new topic. It should be noted that the concept of a new topic is relative, since a system can only base its decision on the previously analyzed documents.

Topic tracking consists of detecting a known topic in an incoming stream of documents. Each incoming document can be assigned a score reflecting the match between its contents and each of the considered topics. The best-matching topic can also be obtained, and documents can be retrieved or filtered on the basis of their topic matches. A document's score can be computed by measuring the similarity between its textual contents and a training set of documents for the topic.

In the context of TDT, as in most text mining and information retrieval tasks, documents are characterized by their keywords, also known as terms. This characterization can be made through different models, such as the bag-of-words model or topic distribution models.

In the bag-of-words model, the only information considered is the frequency of each term, ignoring word order [10]. Typically, this model stores the frequency of each term in document vectors. Each component of these vectors corresponds to the occurrence frequency of a term, and the dimensionality of the vector corresponds to the number of terms in the collection. If a term does not occur in a document, its value is zero. Additionally, these values can be computed in different ways, depending on the weight given to different terms; a common approach is the term-frequency inverse document frequency (TF-IDF) heuristic [10]. The similarity of documents represented in the bag-of-words model can be computed through vector matching operations such as the cosine similarity [10].

A topic distribution model represents documents as mixtures of topics, attributing the creation of each term to one of the document's topics. In natural language, words can be polysemous, meaning that a word can have multiple senses; consequently, the same word can belong to more than one topic. Two of the most widely used topic distribution models are the probabilistic Latent Semantic Analysis (pLSA) model and the Latent Dirichlet Allocation (LDA) model [13]. The similarity of documents represented through topic distribution models can also be computed through the cosine similarity (i.e., treating documents as vectors of topics) or through divergence functions such as the Kullback-Leibler divergence [10].

In the context of TDT, a topic is generally viewed as a set of interconnected terms, all related to the same concept. For example, the terms soccer, football, player and goal could be considered as belonging to the same topic. This view is aligned with that of probabilistic topic models, such as LDA and pLSA, which represent topics as mixtures of words (i.e., probability distributions over terms). However, several previous works on TDT have reformulated the notion of topic into that of event, since news reports often describe a unique happening at some point in space and time.

3. RELATED WORK
This section presents some of the most relevant related work for this thesis.

3.1 Latent Dirichlet Allocation
Latent Dirichlet Allocation (LDA) is a generative probabilistic model for collections of discrete data, such as text documents [2]. LDA represents documents as random mixtures of latent topics, where each topic is characterized by a distribution over words.

The LDA model, illustrated in Figure 1, assumes the following generative process for each document d in a corpus C:

1. For a document j, the model picks a value for the multinomial parameter θj, drawing the topic mixture θj = [θj1 ... θjk]^T over the k topics from the Dirichlet distribution with parameter α = [α1 ... αk]^T. The probability density function associated with a Dirichlet distribution returns the belief that the probabilities of k rival events are xi, given that each event has been observed αi − 1 times.

2. For a word i in document j, a topic label zji is sampled from the discrete multinomial distribution zji ∼ Multinomial(θj).

3. The value wji of word i in document j is sampled from the discrete multinomial distribution over the vocabulary associated with topic zji (i.e., the row of the matrix β corresponding to topic zji).

For simplicity, let us assume (i) that the dimensionality k of the Dirichlet distribution (i.e., the number of topics) is known and fixed, and (ii) that the word probabilities are parameterized by a k × V matrix β, where βij = p(w^j = 1 | z^i = 1).

The LDA generative process can be summarized in the following expression:

p(θ, z, w | α, β) = p(θ | α) ∏_{n=1}^{N} p(zn | θ) p(wn | zn, β),   (1)

In this formula, p(θ, z, w | α, β) represents the joint distribution of a topic mixture θ, a set of N topic assignments z, and a set of N words w, given the parameters α and β, where p(zn | θ) is simply θi for the unique i such that zn^i = 1.

Since LDA models the words in documents under the bag-of-words assumption (i.e., word order is not important and word occurrences are independent), it posits that the words are independent and identically distributed, conditioned on the latent parameters of the distribution. Thus, the words are generated by topics, and those topics are infinitely exchangeable within a document.

Although there is a main inferential problem that must be solved in order to use the LDA model, the hidden variables
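The generative process described in steps 1–3 above can be simulated directly with standard-library sampling. The sketch below uses a toy two-topic model whose vocabulary, β rows, and α values are all invented for illustration; a Dirichlet draw is obtained by normalizing independent Gamma variates.

```python
import random

random.seed(0)

def sample_dirichlet(alpha):
    """Sample from Dirichlet(alpha) by normalizing independent Gamma draws."""
    draws = [random.gammavariate(a, 1.0) for a in alpha]
    total = sum(draws)
    return [d / total for d in draws]

# Toy setting: k = 2 topics, each a hand-fixed multinomial over a small vocabulary.
vocabulary = ["soccer", "player", "goal", "budget", "tax", "vote"]
beta = [
    [0.4, 0.3, 0.3, 0.0, 0.0, 0.0],   # topic 0: sports
    [0.0, 0.0, 0.0, 0.4, 0.3, 0.3],   # topic 1: politics
]
alpha = [0.5, 0.5]                     # symmetric Dirichlet prior over topics

def generate_document(n_words):
    # Step 1: draw the document's topic mixture theta ~ Dirichlet(alpha).
    theta = sample_dirichlet(alpha)
    words = []
    for _ in range(n_words):
        # Step 2: draw a topic label z ~ Multinomial(theta).
        z = random.choices(range(len(beta)), weights=theta)[0]
        # Step 3: draw the word w from the row of beta for topic z.
        w = random.choices(vocabulary, weights=beta[z])[0]
        words.append(w)
    return words

print(generate_document(8))
```

Because α is small, each sampled θ tends to concentrate on one topic, so a generated document mostly mixes words from a single row of β, mirroring the exchangeability argument above.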
