
A Question Answering System Developed as a Project in a Natural Language Processing Course*

W. Wang, J. Auer, R. Parasuraman, I. Zubarev, D. Brandyberry, and M. P. Harper
Purdue University
West Lafayette, IN 47907
{wang28,jauer,pram,dbrandyb,harper}@ecn.purdue.edu and [email protected]

* We would like to thank the Deep Read group for giving us access to their test bed.

Abstract

This paper describes the Question Answering System constructed during a one-semester graduate-level course on Natural Language Processing (NLP). We hypothesized that by using a combination of syntactic and semantic features and machine learning techniques, we could improve the accuracy of question answering on the test set of the Remedia corpus over the reported levels. The approach, although novel, was not entirely successful in the time frame of the course.

1 Introduction

This paper describes a preliminary reading comprehension system constructed as a semester-long project for a natural language processing course. This was the first exposure to this material for all but one student, and so much of the semester was spent learning about and constructing the tools that would be needed to attack this comprehensive problem. The course was structured around the project of building a question answering system following the HumSent evaluation as used by the Deep Read system (Hirschman et al., 1999). The Deep Read reading comprehension prototype system (Hirschman et al., 1999) achieves a level of 36% of the answers correct using a bag-of-words approach together with limited linguistic processing. Since the average number of sentences per passage is 19.41, this performance is much better than chance (i.e., 5%). We hypothesized that by using a combination of syntactic and semantic features and machine learning techniques, we could improve the accuracy of question answering on the test set of the Remedia corpus over these reported levels.

2 System Description

The overall architecture of our system is depicted in Figure 1. The sentences of a story and its five questions (who, what, where, when, and why) are first preprocessed and tagged by the Brill part-of-speech (POS) tagger distributed with the Deep Read system. The tagged text is then passed to the Name Identification Module, which updates the tags of named entities with semantic information and, when appropriate, gender. The Partial Parser Module then takes this updated text and breaks it into phrases while attempting to lexically disambiguate the text. The Pronoun Resolution Module is consulted by the parser in order to resolve pronouns before the partially parsed sentences and questions are passed to the Sentence-to-Question Comparison Module.

[Figure 1: The architecture for our question answering system. Plain text (story and questions) flows through POS tagging, name identification, and partial parsing (supported by a lexicon, WordNet, grammar rules, and pronoun resolution) into sentence-to-question comparison; the comparison output feeds a rule-based classifier, a neuro-fuzzy net, a neural network, and a genetic algorithm, whose candidate answers were to be combined by voting to select the answer with the highest score.]
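To make the data flow concrete, the following minimal Python sketch mirrors Figure 1. It is purely illustrative: every function is a trivial stand-in for the corresponding module, and a plain word-overlap score substitutes for the comparison features and classifiers described below.

# A minimal, runnable sketch of the Figure 1 data flow. Every module is
# reduced to a trivial stand-in; only the order of processing follows the
# paper, and none of these function bodies reflect the actual system.

def pos_tag(text):                   # stand-in for the Brill POS tagger
    return [(w, "NN") for w in text.split()]

def identify_names(tagged):          # stand-in for Name Identification
    return tagged

def partial_parse(tagged):           # stand-in for the Partial Parser
    return {"words": tagged}

def resolve_pronouns(parsed_list):   # stand-in for Pronoun Resolution
    return parsed_list

def compare(parsed_sent, parsed_q):  # stand-in for Sentence-to-Question
    s = {w.lower() for w, _ in parsed_sent["words"]}  # Comparison; plain
    q = {w.lower() for w, _ in parsed_q["words"]}     # word overlap here
    return len(s & q)

def answer(story_sentences, question):
    parsed = resolve_pronouns(
        [partial_parse(identify_names(pos_tag(s))) for s in story_sentences])
    parsed_q = partial_parse(identify_names(pos_tag(question)))
    scores = [compare(p, parsed_q) for p in parsed]
    # HumSent evaluation: the system returns a whole sentence as its answer.
    return story_sentences[scores.index(max(scores))]

print(answer(["The club is for boys who are under 12 years old.",
              "They are called Cub Scouts."],
             "Who are called Cub Scouts?"))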
The Comparison Module determines how strongly the phrases of a sentence are related to those of a question, and this information is passed to several modules which attempt to learn which features of the comparison are most important for identifying whether a sentence is a strong answer candidate. We intended to set up a voting scheme among the various modules; however, this part of the work has not been completed (as indicated by the dashed lines in Figure 1).

Our system, like Deep Read, uses as the development set 28 stories from grade 2 and 27 from grade 5, each with five short answer questions (who, what, when, where, and why), and 60 stories with questions from grades 3 and 4 for testing.[1] We will refer to the development and testing data as the Remedia corpus. The following example shows the information added to a plain text sentence as it progresses through each module of the system we have created. Each module is described in more detail in the following sections.

[1] There were differences between the Deep Read electronic version of the passages and the Remedia published passages. We used the electronic passages.

[Figure 2: Processing an example sentence for matching with a question in our system. The sentence "They are called Cub Scouts." from story Rm3-5 (the answer to question 1; the preceding story sentence is "The club is for boys who are under 12 years old.") is shown after each stage: POS tagging, name identification, initial partial parsing, pronoun resolution ("they" is resolved to "boys"), feature updating, and bracketing of the sentence and of the question "Who are Cub Scouts?". Each phrase carries features such as TYPE (NP or VP), ID, LABEL (subject, aux, mvb, object), BASE, AGR, TENSE, VOICE, GENDER, SEM_TYPE, and PRONREF.]
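The per-phrase annotations in Figure 2 can be read as small feature structures. The sketch below is only an illustration of such a structure: the field names are transcribed from the figure's labels, but the class itself is a reconstruction, since the system's actual data representation is not specified.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Phrase:
    type: str                        # NP, VP, or PP
    id: int                          # phrase index within the sentence
    label: str                       # role, e.g. subject, aux, mvb, object
    base: str                        # base (lemma) form
    agr: str                         # agreement, e.g. 3p
    tense: Optional[str] = None      # e.g. present, pastp (verb phrases)
    voice: Optional[str] = None      # active or passive (verb phrases)
    gender: Optional[str] = None     # supplied by name identification
    sem_type: Optional[str] = None   # semantic type, e.g. person, equate
    pronref: Optional[str] = None    # antecedent after pronoun resolution

# The subject of "They are called Cub Scouts." after pronoun resolution:
they = Phrase("NP", 1, "subject", "they", "3p",
              gender="male", sem_type="person", pronref="boys")
print(they)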
2.1 Name Identification Module

The Name Identification Module expects as input a file that has been tagged by the Brill tagger distributed with the Deep Read system. The most important named entities in the Remedia corpus are the names of people and the names of places. To distinguish between these two types, we created dictionaries for names of people and for names of places. The first and last name dictionaries were derived from the files at http://www.census.gov/genealogy/names/. First names had an associated gender feature; names that could be either male or female included gender frequency. Place names were extracted from atlases and other references, and included names of countries, major cities and capital cities, major attractions and parks, continents, etc. WordNet was also consulted because of its coverage of place names. There are 5,165 first name entries, 88,798 last name entries, and 1,086 place name entries in the dictionaries used by this module.

The module looks up possible names to decide whether a word is a person's name or a location. If it cannot find the word in the dictionaries, it then looks at the POS tags provided in the input file to determine whether or not it is a proper noun. Heuristics (e.g., looking at titles like Mr. or word endings like -ville) are then applied to decide the semantic type of the proper noun; if the type cannot be determined, the module returns both person and location as its type. The accuracy of the Name Identification Module on the testing set was 79.6%; adjusted to take incorrect tagging into account, it was 83.6%.

2.2 Partial Parser Module

The Partial Parser Module follows sequentially after the Name Identification Module. Its input is the set of story sentences and questions, in which the words are tagged with POS tags and the names are marked with type and gender information. Initially pronouns have not been resolved; the partial parser provides segmented text with rich lexical information and role labels directly to the Pronoun Resolution Module. After pronoun resolution, the segmented text with resolved pronouns is returned to the partial parser so that it can update the feature values corresponding to the pronouns. Finally, the partial parser provides bracketed text to the Comparison Module, which extracts the features that are used to construct the modules for answering questions.

The Partial Parser Module uses information in a lexicon and a grammar to produce the partial parses. The lexicon and the parser are detailed in the next two subsections.

2.2.1 The Lexicon

There were two methods we used to construct the lexicon: an open lexicon, which includes all words from the development set along with all determiners, pronouns, prepositions, particles, and conjunctions [...]. A script was created to semi-automate the construction of the lexicon from information extracted from previously existing dictionaries and from WordNet.

2.2.2 The Partial Parser

The parser segments each sentence into either a noun phrase (NP), a verb phrase (VP), or a prepositional phrase (PP), each with various feature sets.
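As a rough illustration of this style of segmentation only: the tag groupings below are simplified stand-ins for the actual grammar rules, which are not listed here, and unlike the real parser this toy version does not split the verb group into separate aux and main-verb phrases as in Figure 2.

# Toy NP/VP/PP segmenter over (word, POS) pairs with Penn Treebank tags.

NP_TAGS = {"DT", "JJ", "NN", "NNS", "NNP", "NNPS", "PRP"}
VP_TAGS = {"MD", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "RB"}
PP_TAGS = {"IN", "TO"}

def phrase_type(tag):
    for ptype, tags in (("NP", NP_TAGS), ("VP", VP_TAGS), ("PP", PP_TAGS)):
        if tag in tags:
            return ptype
    return None

def segment(tagged):
    """Group consecutive words whose tags map to the same phrase type."""
    phrases, current, ctype = [], [], None
    for word, tag in tagged:
        ptype = phrase_type(tag)
        if ptype != ctype and current:
            phrases.append((ctype, current))
            current = []
        current.append(word)
        ctype = ptype
    if current:
        phrases.append((ctype, current))
    return phrases

tagged = [("They", "PRP"), ("are", "VBP"), ("called", "VBN"),
          ("Cub", "NNP"), ("Scouts", "NNS")]
print(segment(tagged))
# [('NP', ['They']), ('VP', ['are', 'called']), ('NP', ['Cub', 'Scouts'])]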