
A re-examination of IR techniques in QA system

Yi Chang, Hongbo Xu, Shuo Bai
Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080
[email protected], [email protected], [email protected]

Abstract

The performance of Information Retrieval in Question Answering systems has not been satisfactory in our experience with the TREC QA Track. In this article, we conduct a comparative study to re-examine IR techniques for document retrieval and sentence level retrieval respectively. Our study shows that: 1) query reformulation should be a necessary step to achieve better retrieval performance; 2) the techniques for document retrieval are also effective in sentence level retrieval, and a single sentence is the appropriate retrieval granularity.

1 Introduction

Information Retrieval (IR) and Information Extraction (IE) are generally regarded as the two key techniques in Natural Language Question Answering (QA) systems that return exact answers. IE techniques are incorporated to identify the exact answer, while IR techniques are used to narrow the search space that IE will process; that is, the output of IR is the input of IE.

According to the TREC QA overviews (Voorhees, 2001; Voorhees, 2002), most current question answering systems rely on document retrieval to provide documents or passages that are likely to contain the answer to a question. Since document-oriented information retrieval techniques are relatively mature while IE techniques are still developing, most current research has focused on answer extraction (Moldovan et al., 2002; Soubbotin et al., 2001; Srihari and Li, 1999). There has been little detailed investigation into how IR performance impacts overall QA system performance. Clarke et al. (2000) proposed a passage retrieval technique based on passage length and term weights. Tellex et al. (2003) made a quantitative evaluation of various passage retrieval algorithms for QA. Monz (2003) compared the effectiveness of some common document retrieval techniques when used in QA. Roberts and Gaizauskas (2003) used coverage and answer redundancy to evaluate a variety of passage retrieval approaches on TREC QA questions. In most current research, the granularity of information retrieval in QA is the passage or the document. What the potential of IR in QA is, and what the most appropriate retrieval granularity is, still need to be explored thoroughly.

We have built our QA system on the cooperation of IE and IR. According to our scores and ranks at the past several TREC conferences, although we are making progress each year, the results are still far from satisfactory. As our recent study shows, IR causes much more loss than IE. Therefore, we re-examine two important questions that have so far been overlooked:

· Is a question a good query for retrieval in QA?

· Are the techniques for document retrieval effective in sentence level retrieval?

In this paper, we compare several alternative IR techniques; all our experiments are based on the TREC 2003 QA AQUAINT corpus. To make a thorough analysis, we focus on questions with short, fact-based answers, called Factoid questions in TREC QA.

In Section 2, we describe our system architecture and evaluate the performance of each module. In Section 3, through a comparison of four document retrieval methods, we identify what limits our retrieval performance. We then present in Section 4 the results of four sentence level retrieval methods, and in Section 5 we investigate different retrieval granularities. Finally, Section 6 summarizes the conclusions.

2 System Description

Our system for answering Factoid questions contains five major modules, namely the Question Analyzing Module, the Document Retrieval Engine, the Sentence Level Retrieval Module, the Entity Recognizing Module and the Answer Selecting Module. Figure 2.1 illustrates the architecture.

In this paper, a Bi-sentence means two consecutive sentences, with no overlap between two consecutive Bi-sentences; a phrase means a sequence of keywords or a single keyword in a question, where a keyword is a word in the question that is not in the stop-word list. The sketch below makes these definitions concrete.
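The following is a minimal illustration only; the sentence splitter, tokenizer and toy stop-word list are simplified stand-ins for the components our system actually uses.

```python
import re

# Toy stop-word list; the real system loads its own (assumption).
STOP_WORDS = {"a", "an", "the", "is", "was", "of", "in", "on", "to",
              "who", "what", "when", "where", "which", "how", "did"}

def split_sentences(text):
    """Naive splitter on sentence-final punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def bi_sentences(text):
    """Group consecutive sentences into non-overlapping pairs (Bi-sentences)."""
    sents = split_sentences(text)
    # A step of 2 guarantees no overlap between consecutive Bi-sentences.
    return [" ".join(sents[i:i + 2]) for i in range(0, len(sents), 2)]

def keywords(question):
    """A keyword is a question word that is not in the stop-word list."""
    tokens = re.findall(r"[A-Za-z0-9']+", question.lower())
    return [t for t in tokens if t not in STOP_WORDS]

print(keywords("Who invented the SMART retrieval system?"))
# -> ['invented', 'smart', 'retrieval', 'system']
```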
To answer each question, the Question Analyzing Module makes use of NLP techniques to identify the right type of information that the question requires. The Question Analyzing Module also preprocesses the question and builds a query for further retrieval. The Document Retrieval Engine uses the question to retrieve relevant documents and selects the top ranked ones. Since the selected documents still contain too much information, the Sentence Level Retrieval Module matches the question against the selected documents to retrieve relevant Bi-sentences, and selects the top ranked Bi-sentences. The Entity Recognizing Module identifies candidate entities in the selected Bi-sentences, and the Answer Selecting Module chooses the answer by a voting method.

In our TREC 2003 runs, we incorporated PRISE and the Multilevel retrieval method, selecting 50 documents and 20 Bi-sentences. As our recent study shows, among the 383 Factoid questions whose answer is not NIL, there are only 275 questions whose answer can be found in the top 50 relevant documents, and only 132 of those 275 questions whose answer can be extracted from the top 20 relevant Bi-sentences. All our statistics are based on examination of both the answer and the corresponding Document ID.

Figure 2.2 illustrates the loss distribution of each module in our QA system. It is striking that more than sixty percent of the loss is caused by IR, while we used to take it for granted that IE is the bottleneck of a QA system. We should therefore re-examine the IR techniques in the QA system, and we consider document retrieval and sentence level retrieval respectively.

3 Document Retrieval Methods

Is a question a good query? If the answer is YES, the retrieval algorithm will determine the performance of document retrieval in a QA system: once we implement a document retrieval system with high performance, we should obtain a satisfactory result. To explore this, we retrieve relevant documents for each question from the AQUAINT document set with four IR systems: PRISE, basic SMART, Enhanced-SMART and Enhanced-SMART with pseudo-relevant feedback. In these experiments, we use the question itself as the query for the IR system.

3.1 PRISE

The NIST PRISE is a full text retrieval system based on Tfidf weighting, which uses the fast and space-efficient indexing and searching algorithms proposed by Harman and Candela (1989). To facilitate those participants in the TREC QA Track who do not have ready access to a document retrieval system, NIST also publishes a ranked document list retrieved by PRISE, in which the top 1000 ranked documents per question are provided.

3.2 Basic SMART

SMART (Salton, 1971) is a well-known and effective information retrieval system based on the Vector Space Model (VSM), which was invented by Salton in the 1960s. We run experiments with basic SMART and compare its performance with PRISE to make the evaluation complete.

3.3 Enhanced-SMART (E-SMART)

According to the study of Xu and Yang (2002), the classical weighting methods used in basic SMART, such as lnc-ltc, do not behave well in TREC. In the TREC 2002 Web Track, Yang improved the traditional Lnu-Ltu weighting method. Following the general hypothesis that the relevance of a document should be independent of its length, Yang reconsidered the normalization of the query's and the document's term weights; when the improved weighting was incorporated into the basic SMART system, the performance of document retrieval was greatly improved. In the TREC 2002 Web Topic Distillation Task, this method proved to be very effective and efficient. Inspired by this success in the Web Track, we studied the performance of the Enhanced-SMART system in the TREC QA Track.

3.4 Enhanced-SMART with pseudo-relevant feedback (E-SMART-FB)

We think one of the difficulties of IR in a QA system is that the number of keywords in a question is too limited. It is natural, then, to expand the query with pseudo-relevant feedback methods.

Pseudo-relevant feedback is an effective way to make up for the lack of keywords in a query. In our pseudo-relevant feedback, several keywords with the highest term frequency in the top k ranked documents returned by a first retrieval are added to the initial query, and then we make a second retrieval, as the sketch below illustrates.
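This sketch of the feedback loop assumes a generic search(query, n) callable standing in for the Enhanced-SMART engine; the helper names and parameter values (k, m, n_final) are illustrative, not the system's actual settings.

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9']+", text.lower())

def prf_retrieve(question, search, k=5, m=3, stop_words=frozenset(), n_final=50):
    """Expand the question with the m most frequent terms from the top-k
    documents of a first retrieval, then retrieve again.
    `search(query, n)` is assumed to return the texts of the top-n documents."""
    # First retrieval, using the question itself as the query.
    first_pass = search(question, k)

    # Term frequencies over the top-k documents.
    counts = Counter()
    for doc in first_pass:
        counts.update(t for t in tokenize(doc) if t not in stop_words)

    # Keep the most frequent terms that are not already in the question.
    seen = set(tokenize(question))
    expansion = [t for t, _ in counts.most_common() if t not in seen][:m]

    # Second retrieval with the expanded query.
    expanded = question + " " + " ".join(expansion)
    return search(expanded, n_final)
```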
In experiments on the Web Track after TREC 2002, we corrected a bug in the SMART feedback component. Our experiments show that when pseudo-relevant feedback is introduced into Enhanced-SMART, its evaluation performance on the Web Topic Distillation Task can outperform that of any other system.

To explore the potential of document retrieval in the TREC QA Track, we also run QA document retrieval experiments with Enhanced-SMART with pseudo-relevant feedback.

3.5 Performance Comparison

Figure 3.1 shows the comparison of the four document retrieval methods. The X coordinate is the number of top ranked documents selected for the statistics; the Y coordinate is the number of questions that could be correctly answered, by which we mean that the exact answers to these questions are contained in the selected documents.

According to Figure 3.1, the results of the Enhanced-SMART retrieval technique, which proved to be one of the best in the TREC Web Track, are almost the same as those of PRISE; basic SMART performs worst, and the performances of the other three methods are similar.
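For concreteness, here is a hedged sketch of the statistic behind Figure 3.1: for each cutoff n, it counts the questions whose answer string, together with the corresponding Document ID, appears in the top n retrieved documents. The data layout is an assumption for illustration, not our actual evaluation code.

```python
def coverage(results, gold, cutoffs=(5, 10, 20, 30, 40, 50)):
    """For each cutoff n, count the questions whose answer is contained
    in the top-n retrieved documents.

    results: {qid: [(doc_id, doc_text), ...]}       ranked retrieval output
    gold:    {qid: [(answer_string, doc_id), ...]}  judged answer/doc pairs
    """
    curve = {}
    for n in cutoffs:
        hits = 0
        for qid, ranked in results.items():
            # A question counts as covered only when both the answer string
            # and the matching Document ID appear among the top-n documents.
            hits += any(ans.lower() in text.lower() and did == gold_did
                        for did, text in ranked[:n]
                        for ans, gold_did in gold.get(qid, []))
        curve[n] = hits
    return curve
```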