Scalable Topical Phrase Mining from Text Corpora


Ahmed El-Kishky†, Yanglei Song†, Chi Wang§, Clare R. Voss‡, Jiawei Han†
†Department of Computer Science, The University of Illinois at Urbana-Champaign, Urbana, IL, USA
§Microsoft Research, Redmond, WA, USA
‡Computational & Information Sciences Directorate, Army Research Laboratory, Adelphi, MD, USA
{elkishk2, ysong44, hanj}@illinois.edu, [email protected], [email protected]

ABSTRACT

While most topic modeling algorithms model text corpora with unigrams, human interpretation often relies on inherent grouping of terms into phrases. As such, we consider the problem of discovering topical phrases of mixed lengths. Existing work either performs post-processing on the results of unigram-based topic models, or utilizes complex n-gram-discovery topic models. These methods generally produce low-quality topical phrases or suffer from poor scalability on even moderately-sized datasets. We propose a different approach that is both computationally efficient and effective. Our solution combines a novel phrase mining framework to segment a document into single- and multi-word phrases, and a new topic model that operates on the induced document partition. Our approach discovers high-quality topical phrases with negligible extra cost over the bag-of-words topic model in a variety of datasets including research publication titles, abstracts, reviews, and news articles.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/. Obtain permission prior to any use beyond those covered by the license. Contact copyright holder by emailing [email protected]. Articles from this volume were invited to present their results at the 41st International Conference on Very Large Data Bases, August 31st - September 4th 2015, Kohala Coast, Hawaii.
Proceedings of the VLDB Endowment, Vol. 8, No. 3. Copyright 2014 VLDB Endowment 2150-8097/14/11.

1. INTRODUCTION

In recent years, topic modeling has become a popular method for discovering the abstract `topics' that underlie a collection of documents. A topic is typically modeled as a multinomial distribution over terms, and frequent terms related by a common theme are expected to have a large probability in a topic multinomial. When latent topic multinomials are inferred, it is of interest to visualize these topics in order to facilitate human interpretation and exploration of the large amounts of unorganized text often found within text corpora. In addition, visualization provides a qualitative method of validating the inferred topic model [4]. A list of the most probable unigrams is often used to describe individual topics, yet these unigrams often provide a hard-to-interpret or ambiguous representation of the topic. Augmenting unigrams with a list of probable phrases provides a more intuitively understandable and accurate description of a topic. This can be seen in the term/phrase visualization of an information retrieval topic in Table 1.

  Terms        Phrases
  search       information retrieval
  web          social networks
  retrieval    web search
  information  search engine
  based        support vector machine
  model        information extraction
  document     web page
  query        question answering
  text         text classification
  social       collaborative filtering
  user         topic model

Table 1: Visualization of the topic of Information Retrieval, automatically constructed by ToPMine from titles of computer science papers published in DBLP (20Conf dataset).

While topic models have clear application in facilitating understanding, organization, and exploration in large text collections such as those found in full-text databases, difficulty in interpretation and scalability issues have hindered adoption. Several attempts have been made to address the prevalent deficiency in visualizing topics using unigrams. These methods generally attempt to infer phrases and topics simultaneously by creating a complex generative mechanism. The resultant models can directly output phrases and their latent topic assignment. Two such methods are Topical N-Grams and PD-LDA [27, 16]. While it is appealing to incorporate the phrase-finding element in the topical clustering process, these methods often suffer from high complexity, and overall demonstrate poor scalability outside small datasets.

Other methods apply a post-processing step to unigram-based topic models [2, 6]. These methods assume that all words in a phrase will be assigned to a common topic, which, however, is not guaranteed by the topic model.

We propose a new methodology, ToPMine, that demonstrates both scalability and interpretability compared to other topical phrase methods. Because language exhibits the principle of non-compositionality, where a phrase's meaning is not derivable from its constituent words, phrases are decomposed under the `bag-of-words' assumption and a phrase's meaning may be lost [25]. Our insight is that phrases need to be systematically assigned to topics. This insight motivates our partitioning of a document into phrases, then using these phrases as constraints to ensure that all words in a phrase are placed in the same topic.

We perform topic modeling on phrases by first mining phrases, segmenting each document into single- and multi-word phrases, and then using the constraints from segmentation in our topic modeling. First, to address the scalability issue, we develop an efficient phrase mining technique to extract frequent significant phrases and segment the text simultaneously. It uses frequent phrase mining and a statistical significance measure to segment the text while simultaneously filtering out false candidate phrases. Second, to ensure a systematic method of assigning latent topics to phrases, we propose a simple but effective topic model. By restricting all constituent terms within a phrase to share the same latent topic, we can assign a phrase the topic of its constituent words.

Example 1. By frequent phrase mining and context-specific statistical significance ranking, the following titles can be segmented as follows:

Title 1. [Mining frequent patterns] without candidate generation: a [frequent pattern] tree approach.
Title 2. [Frequent pattern mining]: current status and future directions.

The tokens grouped together by [] are constrained to share the same topic assignment.

Our ToPMine method has the following advantages.

• Our phrase mining algorithm efficiently extracts candidate phrases and the aggregate statistics needed to prune them. Requiring no domain knowledge or specific linguistic rulesets, our method is purely data-driven.

• Our method allows for efficient and accurate filtering of false candidate phrases. In Title 1 of Example 1, after merging `frequent' and `pattern', we only need to test whether `frequent pattern tree' is a significant phrase in order to determine whether to keep `frequent pattern' as a phrase in this title.

• Segmentation induces a `bag-of-phrases' representation for documents. We incorporate this as a constraint into our topic model, eliminating the need for additional latent variables to find the phrases. The model complexity is reduced and the conformity of topic assignments within each phrase is maintained.

We state and analyze the problem in Section 2, followed by our proposed solution in Section 3. We present the key components of our solution, phrase mining and phrase-constrained topic modeling, in Sections 4 and 5. In Section 6, we review the related work. Then we evaluate the proposed solution in Section 7, and conclude in Section 9.

2. PROBLEM DEFINITION

The input is a corpus of D documents, where the d-th document is a sequence of N_d tokens: w_{d,i}, i = 1, ..., N_d. Let N = Σ_{d=1}^{D} N_d. Each topic k is represented by a probability distribution φ_k over words: φ_{k,x} = p(x | k) ∈ [0, 1] is the probability of seeing the word x in topic k, and Σ_{x=1}^{V} φ_{k,x} = 1. For example, in a topic about the database research area, the probability of seeing "database", "system" and "query" is high, and the probability of seeing "speech", "handwriting" and "animation" is low. This characterization is advantageous in statistical modeling of text, but is weak in human interpretability. Unigrams may be ambiguous, especially across specific topics. For example, the word "model" can mean different things depending on the topic: a model could be a member of the fashion industry or part of a phrase such as "topic model". Using phrases helps avoid this ambiguity.

Definition 1. We formally define phrases and other necessary notation and terminology as follows:

• A phrase is a sequence of contiguous tokens: P = {w_{d,i}, ..., w_{d,i+n}}, n > 0.

• A partition over the d-th document is a sequence of phrases: (P_{d,1}, ..., P_{d,G_d}), G_d ≥ 1, such that the concatenation of the phrase instances is the original document.

In Example 1, we can see the importance of word proximity in phrase recognition. As such, we place a contiguity restriction on our phrases. To illustrate an induced partition upon a text segment, note that the concatenation of all single- and multi-word phrases in Title 1 yields an ordered sequence of tokens representing the original title.

2.1 Desired Properties

We outline the desired properties of a topical phrase mining algorithm as follows:

• The lists of phrases demonstrate a coherent topic.
• The phrases extracted are valid and human-interpretable.
• Each phrase is assigned a topic in a principled manner.
• The overall method is computationally efficient and of comparable complexity to LDA.
• The topic model demonstrates similar perplexity to LDA.

In addition to the above requirements for the system as a whole, we specify the requirements of a topic-representative phrase. When designing our phrase mining framework, we ensure that our phrase-mining and phrase-construction algorithms naturally validate candidate phrases on three qualities that constitute human-interpretability.

1. Frequency: The most important quality when judging whether a phrase relays important information regarding a topic is its frequency of use within the topic. A phrase
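To make the frequency criterion concrete, frequent contiguous phrases can be counted with an Apriori-style pass over the corpus: a length-n phrase can be frequent only if its length-(n-1) prefix is frequent, so each pass only counts candidates that extend a surviving shorter phrase. The sketch below is our own illustration under that downward-closure assumption, not the paper's implementation; the function name and the `min_support` threshold are hypothetical.

```python
from collections import Counter

def mine_frequent_phrases(documents, min_support):
    """Apriori-style frequent contiguous phrase mining (illustrative sketch).

    A length-n phrase can be frequent only if its length-(n-1) prefix is
    frequent (downward closure), so each pass over the corpus only counts
    n-grams that extend a surviving (n-1)-gram.
    """
    frequent = {}     # phrase tuple -> corpus count
    prev_pass = None  # survivors of the previous pass
    n = 1
    while True:
        counts = Counter()
        for doc in documents:
            for i in range(len(doc) - n + 1):
                gram = tuple(doc[i:i + n])
                # prune candidates whose length-(n-1) prefix was infrequent
                if n == 1 or gram[:-1] in prev_pass:
                    counts[gram] += 1
        survivors = {g: c for g, c in counts.items() if c >= min_support}
        if not survivors:
            return frequent
        frequent.update(survivors)
        prev_pass = survivors
        n += 1
```

For instance, on three toy titles containing "frequent pattern" the miner keeps "frequent pattern" and "frequent pattern mining" but discards the singleton "pattern tree".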
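The segmentation of Example 1 can be pictured as bottom-up merging: start from unigram phrases and repeatedly merge the adjacent pair whose merged phrase is most significant, stopping when no merge is significant. The sketch below is our own simplification: the significance score (observed count minus an independence-based expected count, normalized by the square root of the observed count) and the `threshold` value are illustrative stand-ins for the paper's context-specific statistical significance ranking.

```python
import math

def significance(count_merged, count_a, count_b, total_tokens):
    # Compare the observed frequency of the merged phrase with its expected
    # frequency if the two sub-phrases occurred independently; normalize by
    # an approximate standard deviation (sqrt of the observed count).
    expected = count_a * count_b / total_tokens
    return (count_merged - expected) / math.sqrt(count_merged)

def segment(tokens, counts, total_tokens, threshold=1.0):
    # Bottom-up agglomerative segmentation: begin with unigram phrases and
    # repeatedly merge the adjacent pair with the highest significance score,
    # stopping once no pair exceeds the threshold.
    phrases = [(t,) for t in tokens]
    while len(phrases) > 1:
        best_i, best_score = None, threshold
        for i in range(len(phrases) - 1):
            merged = phrases[i] + phrases[i + 1]
            if merged not in counts:
                continue  # never observed as a candidate phrase
            score = significance(counts[merged], counts[phrases[i]],
                                 counts[phrases[i + 1]], total_tokens)
            if score > best_score:
                best_i, best_score = i, score
        if best_i is None:
            break
        phrases[best_i:best_i + 2] = [phrases[best_i] + phrases[best_i + 1]]
    return phrases
```

With toy counts, "frequent" and "pattern" merge first, and only then is the longer candidate "frequent pattern mining" tested, mirroring the incremental test described for Title 1.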
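Finally, the bag-of-phrases constraint means a single topic is drawn per phrase, with every constituent word contributing evidence. The sketch below computes such a phrase-level topic posterior; it is a simplified illustration of a phrase-constrained Gibbs-style update, not the paper's exact sampler, and the hyperparameters `alpha`, `beta` and the count structures are our own assumptions.

```python
def phrase_topic_posterior(phrase, doc_topic, topic_word, topic_total,
                           vocab_size, alpha=0.5, beta=0.01):
    """Posterior over topics for an entire phrase (illustrative sketch).

    All tokens in the phrase share one topic, so each word contributes a
    multiplicative smoothed topic-word factor to topic k's weight, while
    the document-topic count contributes once per phrase.
    """
    num_topics = len(topic_total)
    weights = []
    for k in range(num_topics):
        w = doc_topic[k] + alpha
        for j, word in enumerate(phrase):
            # denominator grows with j as words are notionally drawn in turn
            w *= (topic_word[k].get(word, 0) + beta) / \
                 (topic_total[k] + vocab_size * beta + j)
        weights.append(w)
    z = sum(weights)
    return [w / z for w in weights]
```

Drawing a topic from this distribution assigns the whole phrase, e.g. "topic model", to one topic, which is exactly the conformity that post-processing unigram-based topic models cannot guarantee.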
