
Answering Questions from Multiple Documents – the Role of Multi-Document Summarization

Pinaki Bhaskar
Department of Computer Science & Engineering, Jadavpur University, Kolkata – 700032, India
[email protected]

Abstract

Ongoing research work on Question Answering using multi-document summarization is described. The system has two main sub-modules: document retrieval and multi-document summarization. We first preprocess the documents and then index them using Nutch with a named entity (NE) field. Stop words are removed from each question, NEs are tagged, the remaining question words are stemmed, and the resulting query is used to retrieve the ten most relevant documents. A document graph-based, query-focused multi-document summarizer is then used, with the question words as the query. A document graph is constructed in which the nodes are sentences of the documents and the edge scores reflect the correlation measure between the nodes. The system clusters similar texts from the graph using this edge score. Each cluster gets a weight and has a cluster center, and question-dependent weights are added to the corresponding cluster score. The top two ranked sentences of each cluster are identified in order, compressed, and then fused into a single sentence. The compressed and fused sentences are included in the output summary, limited to 500 words, which is presented as the answer. The system was tested on the INEX QA track data sets from 2011 to 2013 and achieved the best readability score.

1 Introduction

With the explosion of information on the Internet, natural language Question Answering (QA) is recognized as a capability with great potential. Traditionally, QA has attracted many AI researchers, but most QA systems developed so far are toy systems or games confined to laboratories and to very restricted domains. Several recent conferences and workshops have focused on aspects of QA research. Starting in 1999, the Text Retrieval Conference (TREC, http://trec.nist.gov/) has sponsored a question-answering track, which evaluates systems that answer factual questions by consulting the documents of the TREC corpus. A number of systems in this evaluation have successfully combined information retrieval and natural language processing techniques. More recently, the Conference and Labs of the Evaluation Forum (CLEF, http://www.clef-initiative.eu/) has been organizing a QA lab since 2010. INEX (the Initiative for the Evaluation of XML Retrieval, https://inex.mmci.uni-saarland.de/) has also started a Question Answering track. INEX 2011 designed a QA track (SanJuan et al., 2011) to stimulate research for real-world applications. Its Question Answering (QA) task is contextualizing tweets, i.e., answering questions of the form "what is this tweet about?" The INEX 2012 Tweet Contextualization (TC) track gives QA research a new direction by fusing IR and summarization with QA. The first task is to identify the most relevant documents, for which a focused IR system is needed. The second task is to extract the most relevant passages from those retrieved documents, for which an automatic summarizer is needed. The overall task thus involves tweet analysis, passage and/or XML element retrieval, and construction of the answer, more specifically the summarization of the tweet topic.

Automatic text summarization (Jezek and Steinberger, 2008) has become an important and timely tool for assisting and interpreting text information in today's fast-growing information age. Abstractive summarization (Hahn and Romacker, 2001; Erkan and Radev, 2004) attempts to develop an understanding of the main concepts in a document and then express those concepts in clear natural language. Extractive summarization (Kyoomarsi et al., 2008) formulates summaries by extracting key text segments (sentences or passages) from the text, based on statistical analysis of individual or mixed surface-level features such as word/phrase frequency, location, or cue words, to locate the sentences to be extracted. Our approach is based on extractive summarization.

In this paper, we describe a hybrid Question Answering system combining document retrieval and multi-document summarization. The document retrieval is based on the Nutch (http://nutch.apache.org/) architecture, and the multi-document summarization system is based on graph construction, clustering, sentence compression and fusion, and sentence ordering. The sentence scoring and ranking approach of Bhaskar and Bandyopadhyay (2010a, 2010b) has been followed. The proposed system was run on three years of INEX QA track data, from 2011 to 2013.
2 Related Work

Recent trends show that a hybrid approach to question answering (QA) using Information Retrieval (IR) can improve the performance of a QA system. Schiffman et al. (2007) successfully used IR methods in a QA system. Rodrigo et al. (2010) removed incorrect answers of a QA system using an IR engine. Pakray et al. (2010) used an IR system in QA, and Pakray et al. (2011) proposed an efficient hybrid QA system using IR.

Tombros and Sanderson (1998) present an investigation into the utility of document summarization in the context of IR, more specifically the application of so-called query-biased summaries: summaries customized to reflect the information need expressed in a query. Employed in the retrieved document list displayed after retrieval took place, the summaries' utility was evaluated in a task-based environment by measuring users' speed and accuracy in identifying relevant documents.

A lot of research work has been done in the domain of both query-dependent and query-independent summarization. MEAD (Radev et al., 2004) is a centroid-based multi-document summarizer, which generates summaries using cluster centroids produced by a topic detection and tracking system. NeATS (Lin and Hovy, 2002) selects important content using sentence position, term frequency, topic signature and term clustering. XDoX (Hardy et al., 2002) identifies the most salient themes within the document set by passage clustering and then composes an extraction summary, which reflects these main themes. Graph-based methods have also been proposed for generating summaries. A document graph-based, query-focused multi-document summarization system has been described by Paladhi et al. (2008) and Bhaskar and Bandyopadhyay (2010a, 2010b).

In the present work, we have used the IR system as described by Pakray et al. (2010, 2011) and Bhaskar et al. (2011), and the automatic summarization system as discussed by Bhaskar and Bandyopadhyay (2010a, 2010b) and Bhaskar et al. (2011).

3 System Architecture

This section gives an overview of the framework of the current INEX system. The system has two major sub-systems: the Focused IR system and the Automatic Summarization system. The Focused IR system has been developed on the basic architecture of Nutch, which uses the architecture of Lucene (http://lucene.apache.org/). Nutch is an open source search engine that supports monolingual Information Retrieval in English. The higher-level architecture of the combined Tweet Contextualization system, comprising Focused IR and Automatic Summarization, is shown in Figure 1.

Figure 1. Higher-level system architecture

4 Document Retrieval

4.1 Document Parsing and Indexing

Web documents are full of noise mixed with the original content, which makes it very difficult to identify and separate the noise from the actual content. The INEX 2012 corpus had some noise in the documents, and the documents are in XML-tagged format, so the documents first had to be preprocessed. The document structure is checked and reformatted according to the system requirements.

XML Parser: The corpus was in XML format. All the XML test data has been parsed before indexing using our XML parser, which extracts the title of each document along with its paragraphs.
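For illustration, the title and paragraph extraction can be written with the standard Java DOM API. The following is a minimal sketch; the element names "title" and "p" and the file path are assumptions, since the exact INEX markup is not reproduced here.

    import java.io.File;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    public class XmlDocParser {
        public static void main(String[] args) throws Exception {
            DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
            // parse one corpus file (path is illustrative)
            Document doc = builder.parse(new File("corpus/sample.xml"));

            // extract the document title (assumed element name)
            String title = doc.getElementsByTagName("title")
                              .item(0).getTextContent();

            // collect the text of every paragraph (assumed element name "p")
            StringBuilder body = new StringBuilder();
            NodeList paragraphs = doc.getElementsByTagName("p");
            for (int i = 0; i < paragraphs.getLength(); i++) {
                body.append(paragraphs.item(i).getTextContent()).append("\n");
            }
            System.out.println(title);
            System.out.println(body);
        }
    }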
Noise Removal: The corpus has some noise as well as some special symbols that are not necessary for our system. A list of noise and special symbols (e.g., stray quotation marks, "&", "'''", multiple spaces) was initially developed manually by looking at a number of documents; the list is then used to automatically remove such symbols from the documents.
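A minimal sketch of such a cleanup pass follows; the entries in the symbol list are illustrative examples only, since the full hand-built list is not enumerated in this paper.

    import java.util.Arrays;
    import java.util.List;

    public class NoiseCleaner {
        // hand-built list of noise symbols (illustrative entries only)
        private static final List<String> NOISE =
            Arrays.asList("&quot;", "&amp;", "'''");

        public static String clean(String text) {
            // drop every listed symbol
            for (String symbol : NOISE) {
                text = text.replace(symbol, " ");
            }
            // collapse the multiple spaces left behind
            return text.replaceAll("\\s+", " ").trim();
        }
    }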
Named Entity Recognizer (NER): After cleaning the corpus, the named entity recognizer identifies all the named entities (NE) in the documents and tags them according to their types; these tags are indexed during document indexing.
Document Indexing: After parsing, the documents are indexed using Lucene, an open source indexer.
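The indexing step can be sketched with the classic Lucene 3.x API on which Nutch builds. The field names, and the choice to hold the NE tags as a space-separated string in a dedicated field, are assumptions made for illustration.

    import java.io.File;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class DocIndexer {
        public static void index(String title, String content, String neTags)
                throws Exception {
            Directory dir = FSDirectory.open(new File("index"));
            IndexWriterConfig cfg = new IndexWriterConfig(
                    Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36));
            IndexWriter writer = new IndexWriter(dir, cfg);

            Document doc = new Document();
            doc.add(new Field("title", title,
                    Field.Store.YES, Field.Index.ANALYZED));
            doc.add(new Field("content", content,
                    Field.Store.YES, Field.Index.ANALYZED));
            // named entities found by the NER step, in their own field
            doc.add(new Field("ne", neTags,
                    Field.Store.YES, Field.Index.ANALYZED));

            writer.addDocument(doc);
            writer.close();
        }
    }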
4.2 Question Parsing and Query Formation

Stop words are removed from each question and the named entities in it are tagged. The standard Porter stemming algorithm is used to stem the remaining question tokens, and queries are then formed from the stemmed tokens.
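One convenient way to obtain both stop-word removal and Porter stemming is Lucene's EnglishAnalyzer, whose analysis chain ends in a Porter stem filter. The paper does not state which implementation was used, so the following is only a sketch under that assumption.

    import java.io.StringReader;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.en.EnglishAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.util.Version;

    public class QueryBuilder {
        // turn a question into stemmed, stop-word-free query tokens
        public static List<String> tokens(String question) throws Exception {
            EnglishAnalyzer analyzer = new EnglishAnalyzer(Version.LUCENE_36);
            TokenStream ts = analyzer.tokenStream("content",
                    new StringReader(question));
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            List<String> out = new ArrayList<String>();
            ts.reset();
            while (ts.incrementToken()) {
                out.add(term.toString());   // stemmed, stop words dropped
            }
            ts.end();
            ts.close();
            return out;
        }
    }

For example, for the question "What is the history of the Red Cross?", the analyzer drops "is", "the" and "of" and stems "history" to "histori"; the system's own stop list could additionally drop question words such as "what".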
4.3 Document Retrieval

After searching each query in the Lucene index, a set of retrieved documents in ranked order is received for each question. All queries are first fired with the AND operator. If at least ten documents are retrieved with the AND operator, the query is removed from the query list and need not be searched again; otherwise the query is fired again with the OR operator, which retrieves at least ten documents for each query. Documents retrieved with the AND operator are always ranked higher than those retrieved with the OR operator. The top-ranked ten relevant documents for each question are then considered for multi-document summarization. Document retrieval is the most crucial part of this system: we take only the top-ranked ten documents, assuming that these are the most relevant to the question from which the query was generated.
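A sketch of the AND-then-OR strategy over the Lucene index is given below; again the field name "content" and the use of the classic 3.x API are assumptions.

    import java.io.File;
    import java.util.List;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.store.FSDirectory;

    public class Retriever {
        // fire the query with AND first; fall back to OR if fewer
        // than ten documents come back
        public static ScoreDoc[] retrieve(List<String> tokens)
                throws Exception {
            IndexSearcher searcher = new IndexSearcher(
                    IndexReader.open(FSDirectory.open(new File("index"))));
            ScoreDoc[] hits = search(searcher, tokens,
                    BooleanClause.Occur.MUST);          // AND search
            if (hits.length < 10) {
                hits = search(searcher, tokens,
                        BooleanClause.Occur.SHOULD);    // OR search
            }
            return hits;  // the top ten go to the summarizer
        }

        private static ScoreDoc[] search(IndexSearcher searcher,
                List<String> tokens, BooleanClause.Occur occur)
                throws Exception {
            BooleanQuery query = new BooleanQuery();
            for (String t : tokens) {
                query.add(new TermQuery(new Term("content", t)), occur);
            }
            return searcher.search(query, 10).scoreDocs;
        }
    }

The rule that AND hits always outrank OR hits is implicit in this sketch, since OR is only used when AND returns too few documents; a fuller implementation would merge the two result lists with the AND hits first.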
5 Multi-Document Summarization