Document Language Models, Query Models, and Risk Minimization for Information Retrieval

John Lafferty and Chengxiang Zhai
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213

ABSTRACT

We present a framework for information retrieval that combines document models and query models using a probabilistic ranking function based on Bayesian decision theory. The framework suggests an operational retrieval model that extends recent developments in the language modeling approach to information retrieval. A language model for each document is estimated, as well as a language model for each query, and the retrieval problem is cast in terms of risk minimization. The query language model can be exploited to model user preferences, the context of a query, synonymy and word senses. While recent work has incorporated word translation models for this purpose, we introduce a new method using Markov chains defined on a set of documents to estimate the query models. The Markov chain method has connections to algorithms from link analysis and social networks. The new approach is evaluated on TREC collections and compared to the basic language modeling approach and vector space models together with query expansion using Rocchio. Significant improvements are obtained over standard query expansion methods for strong baseline TF-IDF systems, with the greatest improvements attained for short queries on Web data.

1. INTRODUCTION

The language modeling approach to information retrieval has recently been proposed as a new alternative to traditional vector space models and other probabilistic models. In the use of language modeling by Ponte and Croft [17], a unigram language model is estimated for each document, and the likelihood of the query according to this model is used to score the document for ranking. Miller et al. [15] smooth the document language model with a background model using hidden Markov model techniques, and demonstrate good performance on TREC benchmarks. Berger and Lafferty [1] use methods from statistical machine translation to incorporate synonymy into the document language model, achieving effects similar to query expansion in more standard approaches to IR. The relative simplicity and effectiveness of the language modeling approach, together with the fact that it leverages statistical methods that have been developed in speech recognition and other areas, makes it an attractive framework in which to develop new text retrieval methodology.

In this paper we motivate the language modeling approach from a general probabilistic retrieval framework based on risk minimization. This framework not only covers the classical probabilistic retrieval models as special cases, but also suggests an extension of the existing language modeling approach to retrieval that involves estimating both document language models and query language models and comparing the models using the Kullback-Leibler divergence. In the case where the query language model is concentrated on the actual query terms, this reduces to the ranking method employed by Ponte and Croft [17] and others. We also introduce a novel method for estimating an expanded query language model, which may assign probability to words that are not in the original query. The essence of the new method is a Markov chain word translation model that can be computed based on a set of documents. The Markov chain method is a very general method for expanding either a query model or a document model. As a translation model, it addresses several basic shortcomings of the translation models used by Berger and Lafferty [1], as described in Section 4. The query models explored in this paper are quite simple, but in general, the role of the query model is to incorporate knowledge of the user and the context of an information need into the retrieval model.

The paper is organized as follows. In Section 2 we discuss the language modeling approach to IR, and briefly review previous work in this direction. In Section 3 we present the risk minimization retrieval framework and our extension to the language modeling approach that incorporates both query and document language models. Section 4 presents the idea of using Markov chains on documents and words to expand document and query models, and gives several examples. This technique requires various collection statistics to be calculated, and we explain in Section 5 how these can be calculated at index time. A series of experiments to evaluate these methods is presented in Section 6, where we attempt to compare directly to state-of-the-art ranking functions and weighting schemes. Conclusions and the contributions of this work are summarized in Section 8.

SIGIR'01, September 9-12, 2001, New Orleans, Louisiana, USA. Copyright 2001 ACM 1-58113-331-6/01/0009.

2. THE LANGUAGE MODELING APPROACH

In the language modeling approach to information retrieval, a multinomial model p(w | d) over terms is estimated for each document d in the collection C to be indexed and searched. This model is used to assign a likelihood to a user's query q = (q_1, q_2, ..., q_m). In the simplest case, each query term is assumed to be independent of the other query terms, so that the query likelihood is given by p(q | d) = ∏_{i=1}^{m} p(q_i | d). After the specification of a document prior p(d), the a posteriori probability of a document is given by

    p(d | q) ∝ p(q | d) p(d)

and is used to rank the documents in the collection C.

Just as in the use of language models for speech recognition, language models for information retrieval must be "smoothed," so that non-zero probability can be assigned to query terms that do not appear in a given document. One of the simplest ways in which a document language model can be smoothed is by linear interpolation with a background collection model p(w | C):

    p_λ(w | d) = λ p(w | d) + (1 − λ) p(w | C)        (1)

Miller et al. [15] view this smoothed model as coming from a simple 2-state hidden Markov model, and train the parameter λ using maximum likelihood estimation. One of the main effects of this type of smoothing is robust estimation of common, content-free words that are typically treated as "stop words" in many IR systems.

A potentially more significant and effective kind of smoothing is what may be referred to as semantic smoothing, where synonym and word sense information is incorporated into the models. With proper semantic smoothing, a document that contains the term w = automobile may be retrieved to answer a query that includes the term q = car, even if this query term is not present in the document. Semantic smoothing effects are achieved in more standard approaches to IR using query expansion and relevance and pseudo-relevance feedback techniques. The development of a well-motivated framework for semantic smoothing is one of the important unresolved problems in the language modeling approach.

The translation models of Berger and Lafferty [1] are a step toward semantic smoothing, but they have several shortcomings. First, the translation models are difficult to train, as no significant amount of relevance judgments will be available to estimate them on actual user data. Because of this, Berger and Lafferty generate an artificial collection of "synthetic" data for training. Second, the application of translation models to ranking is inefficient, as the model involves a sum over all terms in the document. Third, the translation probabilities are context-independent, and are therefore unable to directly incorporate word-sense information and context into the language models.

In the following section we present a formal retrieval framework based on risk minimization and derive an extension of the language modeling approach just described, which may ultimately be better suited to semantic smoothing to model the user's information need. In Section 4 we then present a technique for expanding document and query models that addresses some of the shortcomings of the translation models as used in [1].

3. A RISK MINIMIZATION RETRIEVAL FRAMEWORK

In an interactive retrieval system, the basic action of the system can be regarded as presenting a document or a sequence of documents to the user. Intuitively, the choice of which documents to present should be based on some notion of utility. In this section we formalize this intuition by presenting a framework for the retrieval process based on Bayesian decision theory.

We view a query as being the output of some probabilistic process associated with the user U, and similarly, we view a document as being the output of some probabilistic process associated with an author or document source S. A query (document) is the result of choosing a model, and then generating the query (document) using that model. A set of documents is the result of generating each document independently, possibly from a different model. (The independence assumption is not essential, and is made here only to simplify the presentation.) The query model could, in principle, encode detailed knowledge about a user's information need and the context in which they make their query. Similarly, the document model could encode complex information about a document and its source or author.

More formally, let θ_Q denote the parameters of a query model, and let θ_D denote the parameters of a document model.
