Statistical NLP
Following largely from Chris Manning's slides, which include Spring 2010 slides originally borrowed from Sanda Harabagiu, ISI, and Nicholas Kushmerick.

Lecture 24: Question Answering

Dan Klein – UC Berkeley

Question Answering

- Ask general comprehension questions of a document collection
- Can be really easy: "What's the capital of Wyoming?"
- Can be harder: "How many US states' capitals are also their largest cities?"
- Can be open ended: "What are the main issues in the global warming debate?"
- SOTA: Can do factoids, even when text isn't a perfect match

People want to ask questions? Examples of search queries

- More than search:
  - who invented surf music?
  - how to make stink bombs
  - where are the snowdens of yesteryear?
  - which english translation of the bible is used in official catholic liturgies?
  - how to do clayart
  - how to copy psx
  - how tall is the sears tower?
  - how can i find someone in texas
  - where can i find information on puritan religion?
  - what are the 7 wonders of the world
  - how can i eliminate stress
  - What vacuum cleaner does Consumers Guide recommend
- Around 10–15% of query logs

AskJeeves (Classic)

- Probably the most hyped example of "question answering"
- It largely did pattern matching to match your question to their own database of questions
  - If that works, you get the human-curated answers to that known question (which are presumably good)
  - If that fails, it falls back to regular web search
- A potentially interesting middle ground, but not full QA

A Brief (Academic) History

- Question answering is not a new research area
- Question answering systems can be found in many areas of NLP research, including:
  - Natural language database systems
    - A lot of early NLP work on these
  - Spoken dialog systems
    - Currently very active and commercially relevant
- The focus on open-domain QA is new
  - MURAX (Kupiec 1993): Encyclopedia answers
  - Hirschman: Reading comprehension tests
  - TREC QA competition: 1999–

Question Answering at TREC

- Question answering competition at TREC consists of answering a set of 500 fact-based questions, e.g., "When was Mozart born?"
- For the first three years systems were allowed to return 5 ranked answer snippets (50/250 bytes) to each question
  - IR think
  - Mean Reciprocal Rank (MRR) scoring: 1, 0.5, 0.33, 0.25, 0.2, 0 for the first correct answer at rank 1, 2, 3, 4, 5, 6+ (see the sketch below)
  - Mainly Named Entity answers (person, place, date, …)
- From 2002 the systems are only allowed to return a single exact answer, and the notion of confidence has been introduced

The TREC Document Collection

- One recent round: news articles from:
  - AP newswire, 1998-2000
  - New York Times newswire, 1998-2000
  - Xinhua News Agency newswire, 1996-2000
- In total 1,033,461 documents in the collection; 3GB of text
- While small in some sense, still too much text to process using advanced NLP techniques (on the fly at least)
- Systems usually have an initial information retrieval pass followed by advanced processing
- Many supplement this text with use of the web and other knowledge bases
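The MRR scoring above recurs in the results later on, so here is a minimal sketch of how it is computed; the function name and the use of None for "no correct answer in the top 5" are conventions of this illustration, not part of the TREC specification.

    def mrr(ranks):
        # Mean Reciprocal Rank: each question contributes 1/rank of its first
        # correct answer (1, 0.5, 0.33, ...), or 0 if no correct answer
        # appears among the top 5 returned snippets.
        scores = [1.0 / r if r is not None and r <= 5 else 0.0 for r in ranks]
        return sum(scores) / len(scores)

    # Correct answer found at rank 1 for one question, rank 3 for another,
    # and not at all for a third:
    print(mrr([1, 3, None]))   # (1 + 0.33 + 0) / 3 = 0.44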

Sample TREC questions

1. Who is the author of the book, "The Iron Lady: A Biography of Margaret Thatcher"?
2. What was the monetary value of the Nobel Peace Prize in 1989?
3. What does the Peugeot company manufacture?
4. How much did Mercury spend on advertising in 1993?
5. What is the name of the managing director of Apricot Computer?
6. Why did David Koresh ask the FBI for a word processor?
7. What debts did Qintex group leave?
8. What is the name of the rare neurological disease with symptoms such as: involuntary movements (tics), swearing, and incoherent vocalizations (grunts, shouts, etc.)?

Top Performing Systems

- Currently the best performing systems at TREC can answer approximately 70% of the questions
- Approaches and successes have varied a fair deal
  - Knowledge-rich approaches, using a vast array of NLP techniques, stole the show in 2000 and 2001, and still do well
    - Notably Harabagiu, Moldovan et al. – SMU/UTD/LCC
  - AskMSR system stressed how much could be achieved by very simple methods with enough text (and now various copycats)
  - Middle ground is to use a large collection of surface matching patterns (ISI)

Webclopedia Architecture

Ravichandran and Hovy 2002: Learning Surface Patterns

- Use of Characteristic Phrases
- "When was <NAME> born"
  - Typical answers:
    - "Mozart was born in 1756."
    - "Gandhi (1869-1948)..."
  - Suggests phrases like:
    - "<NAME> was born in <BIRTHDATE>"
    - "<NAME> ( <BIRTHDATE> -"
  - As regular expressions, these can help locate the correct answer

Use Pattern Learning

- Example: Start with "Mozart 1756"
- Results:
  - "The great composer Mozart (1756-1791) achieved fame at a young age"
  - "Mozart (1756-1791) was a genius"
  - "The whole world would always be indebted to the great music of Mozart (1756-1791)"
- Longest matching substring for all 3 sentences is "Mozart (1756-1791)"
- Suffix tree would extract "Mozart (1756-1791)" as an output, with score of 3 (see the sketch below)
- Reminiscent of IE pattern learning
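To make the extraction step concrete, here is a small brute-force sketch of the same idea (an actual suffix tree would do this far more efficiently); the function name and the 4-character minimum are arbitrary choices for the illustration.

    from collections import Counter

    def common_substrings(sentences, min_len=4):
        # For every substring of the first sentence, count how many of the
        # sentences contain it (a brute-force stand-in for the generalized
        # suffix tree used by Ravichandran and Hovy).
        counts = Counter()
        first = sentences[0]
        for i in range(len(first)):
            for j in range(i + min_len, len(first) + 1):
                sub = first[i:j]
                counts[sub] = sum(1 for s in sentences if sub in s)
        return counts

    sentences = [
        "The great composer Mozart (1756-1791) achieved fame at a young age",
        "Mozart (1756-1791) was a genius",
        "The whole world would always be indebted to the great music of Mozart (1756-1791)",
    ]
    counts = common_substrings(sentences)
    # Longest substring shared by all three sentences, with its score:
    best = max((s for s, c in counts.items() if c == len(sentences)), key=len)
    print(repr(best), counts[best])   # 'Mozart (1756-1791)' 3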

Pattern Learning (cont.)

- Repeat with different examples of the same question type
  - "Gandhi 1869", "Newton 1642", etc.
- Some patterns learned for BIRTHDATE
  - a. born in <ANSWER>, <NAME>
  - b. <NAME> was born on <ANSWER>,
  - c. <NAME> ( <ANSWER> -
  - d. <NAME> ( <ANSWER> - )

Experiments (R+H, 2002)

- 6 different question types
  - from Webclopedia QA Typology (Hovy et al., 2002a)
    - BIRTHDATE
    - LOCATION
    - INVENTOR
    - DISCOVERER
    - DEFINITION
    - WHY-FAMOUS

Experiments: Pattern Precision

- BIRTHDATE table:
  - 1.0  <NAME> ( <ANSWER> - )
  - 0.85 <NAME> was born on <ANSWER>,
  - 0.6  <NAME> was born in <ANSWER>
  - 0.59 <NAME> was born <ANSWER>
  - 0.53 <ANSWER> <NAME> was born
  - 0.50 - <NAME> ( <ANSWER>
  - 0.36 <NAME> ( <ANSWER> -
- INVENTOR
  - 1.0  <ANSWER> invents <NAME>
  - 1.0  the <NAME> was invented by <ANSWER>
  - 1.0  <ANSWER> invented the <NAME> in

Experiments (cont.)

- WHY-FAMOUS
  - 1.0  <ANSWER> <NAME> called
  - 1.0  laureate <ANSWER> <NAME>
  - 0.71 <NAME> is the <ANSWER> of
- LOCATION
  - 1.0  <ANSWER>'s <NAME>
  - 1.0  regional : <ANSWER> : <NAME>
  - 0.92 near <NAME> in <ANSWER>
- Depending on question type, get high MRR (0.6–0.9), with higher results from use of the Web than the TREC QA collection
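As a rough sketch of how such precision figures can be computed, the snippet below plugs known (name, answer) pairs into a pattern and measures how often the answer slot actually contains the known answer; the toy corpus, the 20-character answer window, and the helper name are assumptions made for illustration, not R+H's exact procedure.

    import re

    def pattern_precision(pattern, examples, corpus):
        # Ca / Co: Co counts matches of the pattern with the question term
        # plugged in; Ca counts matches whose answer slot contains the known
        # answer.  The pattern uses <NAME> and <ANSWER> placeholders.
        ca = co = 0
        for name, answer in examples:
            regex = re.escape(pattern)
            regex = regex.replace(re.escape("<NAME>"), re.escape(name))
            regex = regex.replace(re.escape("<ANSWER>"), r"(.{1,20}?)")
            regex = regex.replace(re.escape(" "), r"\s*")  # tolerate spacing
            for sentence in corpus:
                for m in re.finditer(regex, sentence):
                    co += 1
                    if answer in m.group(1):
                        ca += 1
        return ca / co if co else 0.0

    corpus = [
        "Mozart (1756-1791) was a genius",
        "Gandhi (1869-1948) led India to independence",
        "Newton (see the Leibniz - Clarke letters) is often cited",  # false hit
    ]
    examples = [("Mozart", "1756"), ("Gandhi", "1869"), ("Newton", "1642")]
    # The Newton sentence matches the pattern but not the answer, so 2/3:
    print(pattern_precision("<NAME> ( <ANSWER> -", examples, corpus))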

Shortcomings & Extensions

- Need for POS &/or semantic types
  - "Where are the Rocky Mountains?"
  - "Denver's new airport, topped with white fiberglass cones in imitation of the Rocky Mountains in the background, continues to lie empty"
  - <NAME> in <ANSWER>
  - NE tagger &/or ontology could enable the system to determine that "background" is not a location

Shortcomings... (cont.)

- Long distance dependencies
  - "Where is London?"
  - "London, which has one of the busiest airports in the world, lies on the banks of the river Thames"
  - would require a pattern like: <NAME>, (<any_word>)*, lies on <ANSWER>
  - But: the abundance and variety of Web data helps the system find an instance of its patterns without losing answers to long distance dependencies
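As an illustration of what such a gap-tolerant pattern might look like when written out as an actual regular expression (the specific regex is an assumption for this example, not the authors' pattern language):

    import re

    sentence = ("London, which has one of the busiest airports in the world, "
                "lies on the banks of the river Thames")

    # "<NAME> ... lies on <ANSWER>": allow any run of words (e.g. a relative
    # clause) between the question term and the anchor phrase.
    match = re.search(r"London\b.*?\blies on (.+)", sentence)
    if match:
        print(match.group(1))   # "the banks of the river Thames"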

Shortcomings... (cont.)

- Their system uses only one anchor word
  - Doesn't work for Q types requiring multiple words from the question to be in the answer
    - "In which county does the city of Long Beach lie?"
    - "Long Beach is situated in Los Angeles County"
    - required pattern: <Q_TERM_1> is situated in <ANSWER> <Q_TERM_2>
- Does not use case
  - "What is a micron?"
  - "...a spokesman for Micron, a maker of semiconductors, said SIMMs are..."

AskMSR

- Web Question Answering: Is More Always Better?
  - Dumais, Banko, Brill, Lin, Ng (Microsoft, MIT, Berkeley)
- Q: "Where is the Louvre located?"
- Want "Paris" or "France" or "75058 Paris Cedex 01" or a map
- Don't just want URLs

AskMSR: Shallow approach

- In what year did Abraham Lincoln die?
- Ignore hard documents and find easy ones

AskMSR: Details

[System architecture diagram: steps 1-5 of the pipeline, detailed on the following slides]

Step 1: Rewrite queries

- Intuition: The user's question is often syntactically quite close to sentences that contain the answer
  - Where is the Louvre Museum located?
  - The Louvre Museum is located in Paris
  - Who created the character of Scrooge?
  - Charles Dickens created the character of Scrooge.

Query Rewriting: Variations

- Classify question into seven categories
  - Who is/was/are/were...?
  - When is/did/will/are/were...?
  - Where is/are/were...?
- a. Category-specific transformation rules, e.g. "For Where questions, move 'is' to all possible locations"
  - "Where is the Louvre Museum located"
    → "is the Louvre Museum located"
    → "the is Louvre Museum located"
    → "the Louvre is Museum located"
    → "the Louvre Museum is located"
    → "the Louvre Museum located is"
  - Nonsense, but who cares? It's only a few more queries
- b. Expected answer "Datatype" (e.g., Date, Person, Location, ...)
  - When was the French Revolution? → DATE
- Hand-crafted classification/rewrite/datatype rules (Could they be automatically learned?)
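A minimal sketch of the "move 'is' to all possible locations" rewrite for Where-questions; the weight-5 heuristic for the natural declarative form anticipates the next slide and is a simplification for this example, not the actual AskMSR rule set.

    def where_rewrites(question):
        # For a "Where is X located?" style question, move "is" to every
        # position, mirroring the category-specific rewrite rule above.
        # Returns (rewrite, weight) pairs.
        words = question.rstrip("?").split()
        assert words[0].lower() == "where" and words[1] == "is"
        rest = words[2:]        # e.g. ['the', 'Louvre', 'Museum', 'located']
        rewrites = [" ".join(rest[:i] + ["is"] + rest[i:])
                    for i in range(len(rest) + 1)]
        # The natural declarative order ("... is located") gets a higher weight;
        # the real system uses hand-tuned weights per rewrite.
        return [(r, 5 if r.endswith("is located") else 1) for r in rewrites]

    for rewrite, weight in where_rewrites("Where is the Louvre Museum located?"):
        print(weight, '"%s"' % rewrite)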

Query Rewriting: Weights

- One wrinkle: Some query rewrites are more reliable than others
- Where is the Louvre Museum located?
  - Weight 5: +"the Louvre Museum is located" (if we get a match, it's probably right)
  - Weight 1: +Louvre +Museum +located (lots of non-answers could come back too)

Step 2: Query

- Send all rewrites to a search engine
- Retrieve top N answers (100?)
- For speed, rely just on the search engine's "snippets", not the full text of the actual document

Step 3: Mining N-Grams

- Simple: Enumerate all N-grams (N=1,2,3 say) in all retrieved snippets
- Weight of an n-gram: occurrence count, each occurrence weighted by the "reliability" (weight) of the rewrite that fetched the document (see the sketch below)
- Example: "Who created the character of Scrooge?"
  - Dickens - 117
  - Christmas Carol - 78
  - Charles Dickens - 75
  - Disney - 72
  - Carl Banks - 54
  - A Christmas - 41
  - Christmas Carol - 45
  - Uncle - 31

Step 4: Filtering N-Grams

- Each question type is associated with one or more "data-type filters" = regular expressions
  - When... → Date
  - Where... → Location
  - What...
  - Who... → Person
- Boost score of n-grams that do match the regexp
- Lower score of n-grams that don't match the regexp
- Details omitted from paper...
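A toy version of steps 3 and 4 together, assuming each snippet arrives tagged with the weight of the rewrite that fetched it; the capitalized-name regexp is a crude stand-in for the real data-type filters, and there is no stopword handling.

    import re
    from collections import Counter

    def mine_ngrams(snippets, max_n=3):
        # snippets: list of (text, rewrite_weight) pairs.
        scores = Counter()
        for text, weight in snippets:
            tokens = re.findall(r"\w+", text)
            for n in range(1, max_n + 1):
                for i in range(len(tokens) - n + 1):
                    scores[" ".join(tokens[i:i + n])] += weight
        # "Who ..." question => person filter: boost name-like n-grams,
        # leave everything else unchanged.
        person_re = re.compile(r"^(?:[A-Z][a-z]+)(?: [A-Z][a-z]+)*$")
        for ng in scores:
            scores[ng] *= 3 if person_re.match(ng) else 1
        return scores

    snippets = [
        ("Charles Dickens created the character of Scrooge", 5),
        ("Scrooge, the miser in Dickens' A Christmas Carol", 1),
    ]
    for ng, score in mine_ngrams(snippets).most_common(5):
        print(score, ng)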

Step 5: Tiling the Answers

- Scores:
  - 20 Charles Dickens
  - 15 Dickens
  - 10 Mr Charles
- Tile the highest-scoring n-gram with overlapping n-grams; merged n-grams take the combined score, and the old n-grams are discarded
  - Score 45: Mr Charles Dickens
- Repeat, until no more overlap (see the sketch below)

Results

- Standard TREC contest test-bed: ~1M documents; 900 questions
- Technique doesn't do too well (though would have placed in top 9 of ~30 participants!)
  - MRR = 0.262 (i.e., right answer ranked about #4-#5 on average)
  - Why? Because it relies on the redundancy of the Web
- Using the Web as a whole, not just TREC's 1M documents... MRR = 0.42 (i.e., on average, the right answer is ranked about #2-#3)
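A sketch of the tiling step on the example above; the greedy merge-on-word-overlap rule is one interpretation of the slide, not the exact AskMSR procedure.

    def tile(candidates):
        # candidates: list of (n-gram, score) pairs.
        def overlap_merge(a, b):
            # Merge b onto the end of a if a suffix of a equals a prefix of b.
            aw, bw = a.split(), b.split()
            for k in range(min(len(aw), len(bw)), 0, -1):
                if aw[-k:] == bw[:k]:
                    return " ".join(aw + bw[k:])
            return None

        candidates = dict(candidates)      # n-gram -> score
        merged = True
        while merged:
            merged = False
            best = max(candidates, key=candidates.get)
            for other in list(candidates):
                if other == best:
                    continue
                combo = overlap_merge(other, best) or overlap_merge(best, other)
                if combo:
                    # Merged n-gram takes the combined score; discard the old ones.
                    score = candidates.pop(best) + candidates.pop(other)
                    candidates[combo] = score
                    merged = True
                    break
        return candidates

    print(tile([("Charles Dickens", 20), ("Dickens", 15), ("Mr Charles", 10)]))
    # -> {'Mr Charles Dickens': 45}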

Issues

- In many scenarios (e.g., monitoring an individual's email...) we only have a small set of documents

- Works best/only for "Trivial Pursuit"-style fact-based questions

- Limited/brittle repertoire of
  - question categories
  - answer data types/filters
  - query rewriting rules

LCC: Harabagiu, Moldovan et al.

Value from Sophisticated NLP: Pasca and Harabagiu (2001)

- Good IR is needed: SMART paragraph retrieval
- Large taxonomy of question types and expected answer types is crucial
- Statistical parser used to parse questions and relevant text for answers, and to build KB
- Query expansion loops (morphological, lexical synonyms, and semantic relations) important
- Answer ranking by simple ML method

Abductive inference

- System attempts inference to justify an answer (often following lexical chains)
- Their inference is a kind of funny middle ground between logic and pattern matching
- But quite effective: 30% improvement
- Q: When was the internal combustion engine invented?
- A: The first internal-combustion engine was built in 1867.
  - invent -> create_mentally -> create -> build
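The kind of chain shown above can be explored directly in WordNet; the sketch below (assuming NLTK and its WordNet data are installed) simply prints the hypernym chains for the verbs "invent" and "build" so the lexical-chain idea is visible. LCC's abductive prover works over WordNet glosses as well as relations, so the raw hypernym paths here need not reproduce the exact chain on the slide.

    # Requires: pip install nltk, then nltk.download('wordnet')
    from nltk.corpus import wordnet as wn

    for word in ("invent", "build"):
        synset = wn.synsets(word, pos=wn.VERB)[0]   # first (most common) sense
        chain = [synset]
        while chain[-1].hypernyms():                # follow hypernym links upward
            chain.append(chain[-1].hypernyms()[0])
        print(word, ":", " -> ".join(s.name() for s in chain))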

Question Answering Example

- How hot does the inside of an active volcano get?
  - get(TEMPERATURE, inside(volcano(active)))
- "lava fragments belched out of the mountain were as hot as 300 degrees Fahrenheit"
  - fragments(lava, TEMPERATURE(degrees(300)), belched(out, mountain))
- volcano ISA mountain
- lava ISPARTOF volcano; lava inside volcano
- fragments of lava HAVEPROPERTIESOF lava
- The needed semantic information is in WordNet definitions, and was successfully translated into a form that was used for rough 'proofs'

Example of Complex Question

How have thefts impacted on the safety of Russia's nuclear navy, and has the theft problem been increased or reduced over time?

- Need for domain knowledge:
  - To what degree do different thefts put nuclear or radioactive materials at risk?
- Question decomposition:
  - Definition questions:
    - What is meant by nuclear navy?
    - What does 'impact' mean?
    - How does one define the increase or decrease of a problem?
  - Factoid questions:
    - What is the number of thefts that are likely to be reported?
    - What sort of items have been stolen?
  - Alternative questions:
    - What is meant by Russia? Only Russia, or also former Soviet facilities in non-Russian republics?
