Query Expansion with Locally-Trained Word Embeddings


Query Expansion with Locally-Trained Word Embeddings

Fernando Diaz, Bhaskar Mitra, Nick Craswell (Microsoft)
[email protected], [email protected], [email protected]
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 367–377, Berlin, Germany, August 7-12, 2016. © 2016 Association for Computational Linguistics.

Abstract

Continuous space word embeddings have received a great deal of attention in the natural language processing and machine learning communities for their ability to model term similarity and other relationships. We study the use of term relatedness in the context of query expansion for ad hoc information retrieval. We demonstrate that word embeddings such as word2vec and GloVe, when trained globally, underperform corpus and query specific embeddings for retrieval tasks. These results suggest that other tasks benefiting from global embeddings may also benefit from local embeddings.

1 Introduction

Continuous space embeddings such as word2vec (Mikolov et al., 2013b) or GloVe (Pennington et al., 2014a) project terms in a vocabulary to a dense, lower dimensional space. Recent results in the natural language processing community demonstrate the effectiveness of these methods for analogy and word similarity tasks. In general, these approaches provide global representations of words; each word has a fixed representation, regardless of any discourse context. While a global representation provides some advantages, language use can vary dramatically by topic. For example, ambiguous terms can easily be disambiguated given local information in immediately surrounding words (Harris, 1954; Yarowsky, 1993). The window-based training of word2vec style algorithms exploits this distributional property.

A global word embedding, even when trained using local windows, risks capturing only coarse representations of those topics dominant in the corpus. While a particular embedding may be appropriate for a specific word within a sentence-length context globally, it may be entirely inappropriate within a specific topic. Gale et al. refer to this as the 'one sense per discourse' property (Gale et al., 1992). Previous work by Yarowsky demonstrates that this property can be successfully combined with information from nearby terms for word sense disambiguation (Yarowsky, 1995). Our work extends this approach to word2vec-style training in the context of word similarity.

For many tasks that require topic-specific linguistic analysis, we argue that topic-specific representations should outperform global representations. Indeed, it is difficult to imagine a natural language processing task that would not benefit from an understanding of the local topical structure. Our work focuses on query expansion, an information retrieval task where we can study different lexical similarity methods with an extrinsic evaluation metric (i.e. retrieval metrics). Recent work has demonstrated that similarity based on global word embeddings can be used to outperform classic pseudo-relevance feedback techniques (Sordoni et al., 2014; ALMasri et al., 2016).

We propose that embeddings be learned on topically-constrained corpora, instead of large topically-unconstrained corpora. In a retrieval scenario, this amounts to retraining an embedding on documents related to the topic of the query. We present local embeddings which capture the nuances of topic-specific language better than global embeddings. There is substantial evidence that global methods underperform local methods for information retrieval tasks such as query expansion (Xu and Croft, 1996), latent semantic analysis (Hull, 1994; Schütze et al., 1995; Singhal et al., 1997), cluster-based retrieval (Tombros and van Rijsbergen, 2001; Tombros et al., 2002; Willett, 1985), and term clustering (Attar and Fraenkel, 1977). We demonstrate that the same holds true when using word embeddings for text retrieval.

2 Motivation

For the purpose of motivating our approach, we will restrict ourselves to word2vec although other methods behave similarly (Levy and Goldberg, 2014). These algorithms involve discriminatively training a neural network to predict a word given a small set of context words. More formally, given a target word w and observed context c, the instance loss is defined as,

ℓ(w, c) = log σ(φ(w) · ψ(c)) + η E_{w̄ ∼ θ_C}[log σ(−φ(w) · ψ(w̄))]

where φ : V → R^k projects a term into a k-dimensional embedding space, ψ : V^m → R^k projects a set of m terms into a k-dimensional embedding space, and w̄ is a randomly sampled 'negative' context. The parameter η controls the sampling of random negative terms. These matrices are estimated over a set of contexts sampled from a large corpus and minimize the expected loss,

L_c = E_{w,c ∼ p_c}[ℓ(w, c)]    (1)

where p_c is the distribution of word-context pairs in the training corpus and can be estimated from corpus statistics.

While using corpus statistics may make sense absent any other information, oftentimes we know that our analysis will be topically constrained. For example, we might be analyzing the 'sports' documents in a collection. The language in this domain is more specialized and the distribution over word-context pairs is unlikely to be similar to p_c(w, c). In fact, prior work in information retrieval suggests that documents on subtopics in a collection have very different unigram distributions compared to the whole corpus (Cronen-Townsend et al., 2002). Let p_t(w, c) be the probability of observing a word-context pair conditioned on the topic t. The expected loss under this distribution is (Shimodaira, 2000),

L_t = E_{w,c ∼ p_c}[ (p_t(w, c) / p_c(w, c)) ℓ(w, c) ]    (2)

In general, if our corpus consists of sufficiently diverse data (e.g. Wikipedia), the support of p_t(w, c) is much smaller than and contained in that of p_c(w, c). The loss, ℓ, of a context that occurs more frequently in the topic will be amplified by the importance weight ω = p_t(w, c) / p_c(w, c). Because topics require specialized language, this is likely to occur; at the same time, these contexts are likely to be underemphasized in training a model according to Equation 1.

In order to quantify this, we took a topic from a TREC ad hoc retrieval collection (see Section 5 for details) and computed the importance weight for each term occurring in the set of on-topic documents. The histogram of weights ω is presented in Figure 1. While larger probabilities are expected since the size of a topic-constrained vocabulary is smaller, there are a non-trivial number of terms with much larger importance weights. If the loss, ℓ(w), of a word2vec embedding is worse for these words with low p_c(w), then we expect these errors to be exacerbated for the topic.

Figure 1: Importance weights for terms occurring in documents related to 'argentina pegging dollar' relative to frequency in gigaword. (Histogram of term counts over log(weight).)
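To make the importance-weight argument concrete, the following is a small illustrative sketch of our own, not code from the paper: it estimates the term distributions p_t and p_c from tokenized on-topic and corpus documents and computes the per-term weight ω(w) = p_t(w) / p_c(w), mirroring the quantification described above. The toy documents and function names are assumptions for illustration only.

```python
from collections import Counter


def term_distribution(docs):
    """Maximum-likelihood unigram distribution over a list of tokenized documents."""
    counts = Counter(token for doc in docs for token in doc)
    total = sum(counts.values())
    return {term: count / total for term, count in counts.items()}


def importance_weights(topic_docs, corpus_docs):
    """Per-term importance weight p_t(w) / p_c(w) for terms seen in the on-topic documents."""
    p_t = term_distribution(topic_docs)
    p_c = term_distribution(corpus_docs)
    # Terms unseen in the corpus sample are skipped here rather than smoothed.
    return {w: p_t[w] / p_c[w] for w in p_t if w in p_c}


if __name__ == "__main__":
    corpus = [
        ["the", "bank", "raised", "interest", "rates"],
        ["the", "river", "bank", "flooded", "in", "spring"],
        ["argentina", "pegged", "the", "peso", "to", "the", "dollar"],
    ]
    on_topic = [
        ["argentina", "pegged", "the", "peso", "to", "the", "dollar"],
        ["the", "peso", "peg", "tied", "argentina", "to", "the", "dollar"],
    ]
    weights = importance_weights(on_topic, corpus)
    for term, w in sorted(weights.items(), key=lambda kv: kv[1], reverse=True)[:5]:
        print(f"{term}\t{w:.2f}")
```

Terms that dominate the on-topic documents but are rare in the corpus at large receive the largest weights, which is exactly the situation the paper argues a globally trained objective underweights.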
Of course, these highly weighted terms may have a low value for p_t(w) but a very high value relative to the corpus. We can adjust the weights by considering the pointwise Kullback-Leibler divergence for each word w,

D_w(p_t ‖ p_c) = p_t(w) log( p_t(w) / p_c(w) )    (3)

Words which have a much higher value of p_t(w) than p_c(w) and have a high absolute value of p_t(w) will have high pointwise KL divergence. Figure 2 shows the divergences for the top 100 most frequent terms in p_t(w). The higher ranked terms (i.e. good query expansion candidates) tend to have much higher probabilities than found in p_c(w). If the loss on those words is large, this may result in poor embeddings for the most important words for the topic.

Figure 2: Pointwise Kullback-Leibler divergence for terms occurring in documents related to 'argentina pegging dollar' relative to frequency in gigaword. (KL divergence plotted against term rank.)

A dramatic change in distribution between the corpus and the topic has implications for performance precisely because of the objective used by word2vec (i.e. Equation 1). The training emphasizes word-context pairs occurring with high frequency in the corpus. We will demonstrate that, even with heuristic downsampling of frequent terms in word2vec, these techniques result in inferior performance for specific topics.

Thus far, we have sketched out why using the corpus distribution for a specific topic may result in undesirable outcomes. To illustrate this, we trained two word2vec models: the first on the large, generic Gigaword corpus and the second on a topically-constrained subset of the gigaword. We present the most similar terms to 'cut' using both a global embedding and a topic-specific embedding in Figure 3. In this case, the topic is 'gasoline tax'. As we can see, the 'tax cut' sense of 'cut' is emphasized in the topic-specific embedding.

Figure 3: Terms similar to 'cut' for a word2vec model trained on a general news corpus (global) and another trained only on documents related to 'gasoline tax' (local).

  global     local
  cutting    tax
  squeeze    deficit
  reduce     vote
  slash      budget
  reduction  reduction
  spend      house
  lower      bill
  halve      plan
  soften     spend
  freeze     billion

3 Local Word Embeddings

The previous section described several reasons why a global embedding may result in overgeneral word embeddings. In order to perform topic-specific training, we need a set of topic-specific documents. In information retrieval scenarios users rarely provide the system with examples of topic-specific documents, instead providing a small set of keywords.

Fortunately, we can use information retrieval techniques to generate a query-specific set of topical documents. Specifically, we adopt a language modeling approach to do so (Croft and Lafferty, 2003). In this retrieval model, each document is represented as a max-
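Section 3 is cut off in this excerpt, but it goes on to build the topic-specific training set by retrieving documents for the query with a language-modeling retrieval model. The sketch below is our own minimal end-to-end illustration of that idea, not the authors' implementation: it ranks a tokenized corpus by Dirichlet-smoothed query likelihood, trains word2vec only on the top-ranked documents (assuming gensim 4.x is installed), and reads expansion candidates off the local neighbourhood of a query term. All parameter values are placeholders.

```python
import math
from collections import Counter

from gensim.models import Word2Vec  # assumes gensim 4.x


def query_likelihood(query, doc, coll_counts, coll_len, mu=2000.0):
    """Dirichlet-smoothed query-likelihood score log p(q | d) for one document."""
    tf = Counter(doc)
    score = 0.0
    for term in query:
        p_coll = coll_counts.get(term, 0) / coll_len
        if p_coll == 0.0:
            continue  # ignore out-of-collection query terms in this toy sketch
        score += math.log((tf.get(term, 0) + mu * p_coll) / (len(doc) + mu))
    return score


def train_local_embedding(query, corpus, k=50, **w2v_params):
    """Retrieve the top-k documents for the query, then train word2vec on them only."""
    coll_counts = Counter(t for doc in corpus for t in doc)
    coll_len = sum(coll_counts.values())
    ranked = sorted(
        corpus,
        key=lambda d: query_likelihood(query, d, coll_counts, coll_len),
        reverse=True,
    )
    return Word2Vec(sentences=ranked[:k], sg=1, min_count=1, **w2v_params)


# Usage sketch (assumes `corpus` is a list of tokenized documents):
#   model = train_local_embedding(["gasoline", "tax"], corpus, vector_size=100, window=5)
#   print(model.wv.most_similar("cut", topn=10))   # local expansion candidates
```

The design point is simply that the word2vec objective is optimized over the retrieved, topically-constrained documents instead of the whole collection, so the nearest neighbours reflect topic-specific usage as in Figure 3.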
Recommended publications
  • A Query Suggestion Method Combining TF-IDF and Jaccard Coefficient for Interactive Web Search
    www.sciedu.ca/air Artificial Intelligence Research 2015, Vol. 4, No. 2. ORIGINAL RESEARCH. A query suggestion method combining TF-IDF and Jaccard Coefficient for interactive web search. Suthira Plansangket, John Q Gan. School of Computer Science and Electronic Engineering, University of Essex, United Kingdom. Received: May 10, 2015; Accepted: July 27, 2015; Online Published: August 6, 2015. DOI: 10.5430/air.v4n2p119 URL: http://dx.doi.org/10.5430/air.v4n2p119

    Abstract: This paper proposes a query suggestion method combining two ranked retrieval methods: TF-IDF and Jaccard coefficient. Four performance criteria plus user evaluation have been adopted to evaluate this combined method in terms of ranking and relevance from different perspectives. Two experiments have been conducted using eighty carefully designed test queries related to eight topics. One experiment aims to evaluate the quality of the query suggestions generated by the proposed method, and the other aims to evaluate the improvement of the relevance of returned documents in interactive web search when the query suggestions are used, so as to evaluate the effectiveness of the developed method. The experimental results show that the method developed in this paper is the best method for query suggestion among the methods evaluated, significantly outperforming the most popularly used TF-IDF method. In addition, the query suggestions generated by the proposed method significantly improve the relevance of returned documents in interactive web search in terms of increasing the precision or the number of highly relevant documents.

    Key Words: Query suggestion, Query expansion, Information retrieval, Search engine, Performance evaluation
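The abstract above does not spell out how the two rankings are combined; the sketch below is one plausible reading of our own rather than the paper's formula: score each candidate suggestion by the TF-IDF weight of the query terms it contains and by the Jaccard coefficient of the two term sets, then rank by a normalized sum.

```python
import math
from collections import Counter


def tfidf_score(query, candidate, candidates):
    """Sum of TF-IDF weights of the query terms within one candidate suggestion."""
    n = len(candidates)
    tf = Counter(candidate)
    score = 0.0
    for term in set(query):
        df = sum(1 for c in candidates if term in c)
        if df and tf[term]:
            score += (tf[term] / len(candidate)) * math.log(n / df)
    return score


def jaccard(query, candidate):
    """Jaccard coefficient between the query's and the candidate's term sets."""
    q, c = set(query), set(candidate)
    return len(q & c) / len(q | c) if q | c else 0.0


def rank_suggestions(query, candidates):
    """Rank candidate suggestions by an (assumed) equal-weight sum of both scores."""
    tfidf = [tfidf_score(query, c, candidates) for c in candidates]
    jac = [jaccard(query, c) for c in candidates]
    max_tfidf = max(tfidf) if max(tfidf) > 0 else 1.0
    scored = sorted(
        zip(candidates, tfidf, jac),
        key=lambda x: x[1] / max_tfidf + x[2],
        reverse=True,
    )
    return [c for c, _, _ in scored]


if __name__ == "__main__":
    query = ["query", "expansion"]
    candidates = [["query", "expansion", "methods"],
                  ["query", "suggestion"],
                  ["word", "embeddings"]]
    print(rank_suggestions(query, candidates))
```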
  • Concept-Based Interactive Query Expansion
    Concept-Based Interactive Query Expansion. Bruno M. Fonseca, Paulo Golgher, Bruno Pôssas, Berthier Ribeiro-Neto, Nivio Ziviani. [email protected] [email protected] [email protected] [email protected] [email protected]

    ABSTRACT: Despite the recent advances in search quality, the fast increase in the size of the Web collection has introduced new challenges for Web ranking algorithms. In fact, there are still many situations in which the users are presented with imprecise or very poor results. One of the key difficulties is the fact that users usually submit very short and ambiguous queries, and they do not fully specify their information needs. That is, it is necessary to improve the query formation process if better answers are to be provided. In this work we propose a novel concept-based query expansion technique, which allows disambiguating queries submitted to search engines. The concepts are extracted by analyzing and locating cycles in a special type of query relations graph. This is a directed graph built from

    1. INTRODUCTION: The Web is an innovation that has modified the way we learn, work and live. The novelty lies not only on the freedom to publish, but also in the almost universal communication facilities. It marks the beginning of a new era, of a new society, started by what we may call the information revolution. In these new times, the volume of information that can be accessed at low cost and high convenience is mind boggling. To illustrate, Google advertised indexing more than 8 billion Web pages in 2004.
  • Xu: an Automated Query Expansion and Optimization Tool
    Xu: An Automated Query Expansion and Optimization Tool. Morgan Gallant, Haruna Isah, Farhana Zulkernine (School of Computing, Queen's University, Kingston, ON, Canada); Shahzad Khan (Gnowit Inc., Ottawa, ON, Canada). [email protected] [email protected] [email protected] [email protected]

    Abstract— The exponential growth of information on the Internet is a big challenge for information retrieval systems towards generating relevant results. Novel approaches are required to reformat or expand user queries to generate a satisfactory response and increase recall and precision. Query expansion (QE) is a technique to broaden users' queries by introducing additional tokens or phrases based on some semantic similarity metrics. The tradeoff is the added computational complexity to find semantically similar words and a possible increase in noise in information retrieval. Despite several research efforts on this topic, QE has not yet been

    A core issue in IR and search applications is the evaluation of search results. The emphasis is on users and their information needs. The users of an IR system such as a search engine, are the ultimate judges of the quality of the results [2]. There are many reasons why a search result may not meet user expectations. Sometimes the query expression may be too short to dictate what the user is looking for or may not be well formulated [5]. In most cases, the user's original query is not sufficient to retrieve the information that the user is looking for [6].
  • A New Query Expansion Method Based on Query Logs Mining
    International Journal on Asian Language Processing, 19 (1): 1-12. A new query expansion method based on query logs mining. Zhu Kunpeng, Wang Xiaolong, Liu Yuanchao. School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China. Email: {kpzhu, wangxl, ycliu}@insun.hit.edu.cn

    Abstract: Query expansion has long been suggested as an effective way to improve the performance of information retrieval systems by adding additional relevant terms to the original queries. However, most previous research has been limited to extracting new terms from a subset of relevant documents, and has not exploited the information about user interactions. In this paper, we propose a method for automatic query expansion based on user interactions recorded in query logs. The central idea is to extract correlations among queries by analyzing the common documents the users selected for them, so that the expanded terms come from the associated queries rather than from the relevant documents. In particular, we argue that queries should be dealt with in different ways according to their ambiguity degrees, which can be calculated from the log information. We verify this method on a large-scale query log collection, and the experimental results show that the method makes good use of the knowledge of user interactions and can remarkably improve search performance.

    Keywords: Query expansion, log mining, information retrieval, search engine

    1. Introduction: With the rapid growth of information on the World Wide Web, more and more users need search engine technology to help them exploit such an extremely valuable resource. Although many search engine systems have been successfully deployed, the current search systems are still far from optimal because of using simple keywords to search and rank relevant documents.
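A minimal sketch of the central idea described in the abstract above: treat two queries as associated when users clicked overlapping sets of documents for them, and draw expansion terms from the associated queries rather than from documents. The click-log layout (a dict from query string to the set of clicked document ids) and the overlap threshold are illustrative assumptions of ours, not details taken from the paper.

```python
def related_queries(target, click_log, min_overlap=0.3):
    """Queries whose clicked-document sets overlap the target query's clicks.

    click_log maps a query string to the set of document ids clicked for it.
    """
    target_docs = click_log.get(target, set())
    related = []
    for query, docs in click_log.items():
        if query == target or not target_docs or not docs:
            continue
        overlap = len(target_docs & docs) / len(target_docs | docs)
        if overlap >= min_overlap:
            related.append((overlap, query))
    return [q for _, q in sorted(related, reverse=True)]


def expansion_terms(target, click_log):
    """Terms that occur in associated queries but not in the original query."""
    seen = set(target.split())
    terms = []
    for query in related_queries(target, click_log):
        for term in query.split():
            if term not in seen:
                terms.append(term)
                seen.add(term)
    return terms


if __name__ == "__main__":
    log = {
        "jaguar speed": {1, 2, 3},
        "jaguar car top speed": {2, 3, 4},
        "jaguar animal": {7, 8},
    }
    print(expansion_terms("jaguar speed", log))   # e.g. ['car', 'top']
```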
  • Query Expansion Techniques
    Query Expansion Techniques. Ashish Kankaria, Indian Institute of Technology Bombay, Mumbai. [email protected]

    1. NEED OF QUERY EXPANSION: The Information Retrieval system described above works very well if the user is able to convey his information need in the form of a query. But a query is seldom complete. The query provided by the user is often unstructured and incomplete. An incomplete query hinders a search engine from satisfying the user's information need. In practice we need some representation which can correctly and, more importantly, completely express the user's information need.

    Figure 1 explains the need for query expansion. Consider an input query "Sachin Tendulkar". As search engine developers we would expect that the user wants documents related to the cricketer Sachin Tendulkar. Consider a corpus with 2 documents. The first document is an informative page about Sachin Tendulkar which contains the query terms, whereas the second document is a blog on Tendulkar which has various adjectives related to Sachin Tendulkar, like "master blaster" or "God of cricket", but does not contain the query terms.

    • Creating a dictionary of expansion terms for each term, and then looking up the dictionary for expansion

    2. EXTERNAL RESOURCE BASED QUERY EXPANSION: In these approaches, the query is expanded using some external resource like WordNet, lexical dictionaries or thesauri. These dictionaries are built manually and contain mappings of terms to their relevant terms. These techniques involve looking up such resources and adding the related terms to the query. Following are some of the external resource based query expansion techniques. [5]

    2.1 Thesaurus based expansion: A thesaurus is a data structure that lists words grouped together according to similarity of meaning (containing synonyms and sometimes antonyms), in contrast to a dictionary
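As a concrete instance of the thesaurus-based expansion described in the excerpt above, the sketch below uses WordNet through NLTK as the external resource (it assumes nltk is installed and the wordnet data has been downloaded) and appends a few synonyms for each query term. A manually built domain thesaurus, as described above, would simply replace the WordNet lookup.

```python
from nltk.corpus import wordnet as wn  # requires: import nltk; nltk.download("wordnet")


def thesaurus_expand(query_terms, max_synonyms_per_term=3):
    """Expand each query term with a few WordNet synonyms."""
    expanded = list(query_terms)
    for term in query_terms:
        synonyms = []
        for synset in wn.synsets(term):
            for lemma in synset.lemma_names():
                candidate = lemma.replace("_", " ").lower()
                if candidate != term and candidate not in synonyms:
                    synonyms.append(candidate)
        expanded.extend(synonyms[:max_synonyms_per_term])
    return expanded


# Example: thesaurus_expand(["cricket", "batsman"]) adds generic WordNet synonyms;
# a domain thesaurus could instead map a query like "Sachin Tendulkar" to
# related phrases such as "master blaster" if it contained that entry.
```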
  • Learning for Efficient Supervised Query Expansion Via Two-Stage Feature Selection
    Learning for Efficient Supervised Query Expansion via Two-stage Feature Selection. Zhiwei Zhang (Dept of CS, Purdue University, IN, USA), Qifan Wang (Google Inc, Mountain View, USA), Luo Si (Purdue University / Alibaba Group Inc, USA), Jianfeng Gao (Microsoft Research, Redmond, WA, USA). {zhan1187,lsi}@purdue.edu, [email protected], [email protected]

    ABSTRACT: Query expansion (QE) is a well known technique to improve retrieval effectiveness, which expands original queries with extra terms that are predicted to be relevant. A recent trend in the literature is Supervised Query Expansion (SQE), where supervised learning is introduced to better select expansion terms. However, an important but neglected issue for SQE is its efficiency, as applying SQE in retrieval can be much more time-consuming than applying Unsupervised Query Expansion (UQE) algorithms. In this paper, we point out that the cost of SQE mainly comes from term feature extraction, and propose a Two-stage Feature Selection framework (TFS) to address this problem.

    ... predicted to be relevant [26]. It is hoped that these expanded terms can capture user's true intent that is missed in original query, thus improving the final retrieval effectiveness. In the past decades, various applications [26, 6] have proved its value. Unsupervised QE (UQE) algorithms used to be the mainstream in the QE literature. Many famous algorithms, such as relevance model (RM) [21] and thesaurus based methods [29], have been widely applied. However, recent studies [5, 22] showed that a large portion of expansion terms selected by UQE algorithms are noisy or even harmful, which limits their performance. Supervised Query Expansion (SQE) is
  • CEQE: Contextualized Embeddings for Query Expansion
    CEQE: Contextualized Embeddings for Query Expansion. Shahrzad Naseri (University of Massachusetts Amherst), Jeffrey Dalton (University of Glasgow), Andrew Yates (Max Planck Institute for Informatics), and James Allan (University of Massachusetts Amherst). [email protected] [email protected] [email protected] [email protected]

    Abstract. In this work we leverage recent advances in context-sensitive language models to improve the task of query expansion. Contextualized word representation models, such as ELMo and BERT, are rapidly replacing static embedding models. We propose a new model, Contextualized Embeddings for Query Expansion (CEQE), that utilizes query-focused contextualized embedding vectors. We study the behavior of contextual representations generated for query expansion in ad-hoc document retrieval. We conduct our experiments on probabilistic retrieval models as well as in combination with neural ranking models. We evaluate CEQE on two standard TREC collections: Robust and Deep Learning. We find that CEQE outperforms static embedding-based expansion methods on multiple collections (by up to 18% on Robust and 31% on Deep Learning on average precision) and also improves over proven probabilistic pseudo-relevance feedback (PRF) models. We further find that multiple passes of expansion and reranking result in continued gains in effectiveness with CEQE-based approaches outperforming other approaches. The final model incorporating neural and CEQE-based expansion score achieves gains of up to 5% in P@20 and 2% in AP on Robust over the state-of-the-art transformer-based re-ranking model, Birch.

    1 Introduction: Recently there is a significant shift in text processing from high-dimensional word-based representations to ones based on continuous low-dimensional vectors.
  • A Survey of Automatic Query Expansion in Information Retrieval
    A Survey of Automatic Query Expansion in Information Retrieval. CLAUDIO CARPINETO and GIOVANNI ROMANO, Fondazione Ugo Bordoni

    The relative ineffectiveness of information retrieval systems is largely caused by the inaccuracy with which a query formed by a few keywords models the actual user information need. One well known method to overcome this limitation is automatic query expansion (AQE), whereby the user's original query is augmented by new features with a similar meaning. AQE has a long history in the information retrieval community but it is only in the last years that it has reached a level of scientific and experimental maturity, especially in laboratory settings such as TREC. This survey presents a unified view of a large number of recent approaches to AQE that leverage various data sources and employ very different principles and techniques. The following questions are addressed. Why is query expansion so important to improve search effectiveness? What are the main steps involved in the design and implementation of an AQE component? What approaches to AQE are available and how do they compare? Which issues must still be resolved before AQE becomes a standard component of large operational information retrieval systems (e.g., search engines)?

    Categories and Subject Descriptors: H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval—Query formulation; H.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing

    General Terms: Algorithms, Experimentation, Measurement, Performance

    Additional Key Words and Phrases: Query expansion, query refinement, search, word associations, pseudo-relevance feedback, document ranking

    ACM Reference Format: Carpineto, C. and Romano, G.
  • Joining Automatic Query Expansion Based on Thesaurus and Word Sense Disambiguation Using Wordnet
    Int. J. Computer Applications in Technology, Vol. 33, No. 4, 2008 271 Joining automatic query expansion based on thesaurus and word sense disambiguation using WordNet Francisco João Pinto* and Antonio Fariña Martínez Department of Computer Science, University of A Coruña, Campus de Elviña s/n, A Coruña, 15071, Spain E-mail: [email protected] E-mail: [email protected] *Corresponding author Carme Fernández Pérez-Sanjulián Department of Galician-Portuguese, French, and Linguistics, University of A Coruña, Campus da Zapateira s/n, A Coruña, 15071, Spain E-mail: [email protected] Abstract: The selection of the most appropriate sense of an ambiguous word in a certain context is one of the main problems in Information Retrieval (IR). For this task, it is usually necessary to count on a semantic source, that is, linguistic resources like dictionaries, thesaurus, etc. Using a methodology based on simulation under a vector space model, we show that the use of automatic query expansion and disambiguation of the sense of the words permits to improve retrieval effectiveness. As shown in our experiments, query expansion is not able by itself to improve retrieval. However, when it is combined with Word Sense Disambiguation (WSD), that is, when the correct meaning of a word is chosen from among all its possible variations, it leads to effectiveness improvements. Keywords: automatic query expansion; thesaurus; disambiguation; WordNet. Reference to this paper should be made as follows: Pinto, F.J., Martínez, A.F. and Pérez-Sanjulián, C.F. (2008) ‘Joining automatic query expansion based on thesaurus and word sense disambiguation using WordNet’, Int. J.
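To make the combination of query expansion and word sense disambiguation described above concrete, here is a small sketch of our own (not the paper's vector-space simulation): it uses NLTK's Lesk implementation to pick a WordNet sense for each query term from the surrounding query words, and only then adds synonyms of that chosen sense. It assumes nltk and its wordnet data are available.

```python
from nltk.wsd import lesk  # simple knowledge-based WSD; requires the wordnet data


def expand_with_wsd(query_terms, max_synonyms_per_term=3):
    """Disambiguate each term against the rest of the query (Lesk), then expand
    only with synonyms of the chosen sense rather than of every sense."""
    expanded = list(query_terms)
    for term in query_terms:
        sense = lesk(query_terms, term)  # may be None for unknown words
        if sense is None:
            continue
        synonyms = [l.replace("_", " ").lower() for l in sense.lemma_names()]
        expanded.extend(s for s in synonyms[:max_synonyms_per_term] if s != term)
    return expanded


# Example: in the query ["river", "bank", "erosion"], Lesk should prefer a
# geographical sense of "bank", so financial synonyms are not added.
```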
  • Query Expansion Techniques for Information Retrieval: a Survey
    This article has been accepted and published in the journal "Information Processing & Management". Content may change prior to final publication. DOI: https://doi.org/10.1016/j.ipm.2019.05.009, 0020-0255/ (c) 2019 Elsevier Inc. All rights reserved. Query Expansion Techniques for Information Retrieval: a Survey. Hiteshwar Kumar Azad, Akshay Deepak

    Abstract: With the ever increasing size of the web, relevant information extraction on the Internet with a query formed by a few keywords has become a big challenge. Query Expansion (QE) plays a crucial role in improving searches on the Internet. Here, the user's initial query is reformulated by adding additional meaningful terms with similar significance. QE, as part of information retrieval (IR), has long attracted researchers' attention. It has become very influential in the fields of personalized social documents, question answering, cross-language IR, information filtering and multimedia IR. Research in QE has gained further prominence because of IR dedicated conferences such as TREC (Text REtrieval Conference) and CLEF (Conference and Labs of the Evaluation Forum). This paper surveys QE techniques in IR from 1960 to 2017 with respect to core techniques, data sources used, weighting and ranking methodologies, user participation and applications, bringing out similarities and differences.

    Keywords: Query expansion, Query reformulation, Information retrieval, Internet search

    1 Introduction: There is a huge amount of data available on the Internet, and it is growing exponentially. This unconstrained information-growth has not been accompanied by a corresponding technical advancement in the approaches for extracting relevant information [191].
  • Cross-Language IR
    Cross-Language IR (many slides courtesy of James Allan, UMass; Jimmy Lin, University of Maryland; Paul Clough and Mark Stevenson, University of Sheffield, UK)

    What is Cross-Lingual Retrieval?
    • Accepting questions in one language (e.g. English) and retrieving information in a variety of other languages. "Questions" may be typical Web queries or full questions in a cross-lingual question answering (QA) system; "information" could be news articles, text fragments or passages, factual answers, audio broadcasts, written documents, images, etc.
    • Searching distributed, unstructured, heterogeneous, multilingual data.
    • Often combined with summarization, translation, and discovery technology.

    Current Approaches to CLIR
    • The typical approach is to translate the query, use monolingual search engines, then combine the answers; other approaches use machine translation of documents, or translation into an interlingua.
    • Translation ambiguity is a major issue: there are multiple translations for each word, query expansion is often used as part of the solution, and translation probabilities are required for some approaches.
    • Significant language resources are required: bilingual dictionaries, parallel corpora, "comparable" corpora, MT systems.

    Two Approaches
    • Query translation: translate the English query into a Chinese query, search the Chinese document collection, then translate the retrieved results back into English.
    • Document translation: translate the entire document collection into English, then search the collection in English.
    • Translate both?

    Query Translation (diagram: retrieval from a Chinese document collection)
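The query-translation approach outlined in the slides above can be illustrated with a toy pipeline. The tiny bilingual dictionary and the term-overlap "search engine" below are stand-ins of our own; real systems would use the resources the slides list (bilingual dictionaries, parallel corpora, MT systems) and a proper retrieval model.

```python
# Toy dictionary-based query translation followed by monolingual retrieval.
EN_TO_DE = {
    "election": ["wahl"],
    "results": ["ergebnis", "resultat"],
}


def translate_query(query_en, dictionary):
    """Replace each English term with all its dictionary translations.

    Translation ambiguity is kept (all candidates are added) rather than resolved,
    which is one reason query expansion is often folded into CLIR pipelines."""
    translated = []
    for term in query_en:
        translated.extend(dictionary.get(term, [term]))  # keep untranslatable terms as-is
    return translated


def search(query, collection):
    """Rank documents by the number of query terms they contain (toy monolingual search)."""
    return sorted(collection, key=lambda doc: len(set(query) & set(doc)), reverse=True)


if __name__ == "__main__":
    german_docs = [
        ["wahl", "ergebnis", "2021"],
        ["fussball", "resultat", "bundesliga"],
    ]
    query_de = translate_query(["election", "results"], EN_TO_DE)
    print(search(query_de, german_docs))
```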