CHAPTER 25
Question Answering
Speech and Language Processing. Daniel Jurafsky & James H. Martin. Copyright © 2019. All rights reserved. Draft of October 2, 2019.

The quest for knowledge is deeply human, and so it is not surprising that practically as soon as there were computers we were asking them questions. By the early 1960s, systems used the two major paradigms of question answering (information-retrieval-based and knowledge-based) to answer questions about baseball statistics or scientific facts. Even imaginary computers got into the act. Deep Thought, the computer that Douglas Adams invented in The Hitchhiker's Guide to the Galaxy, managed to answer "the Ultimate Question Of Life, The Universe, and Everything".[1] In 2011, IBM's Watson question-answering system won the TV game-show Jeopardy! using a hybrid architecture that surpassed humans at answering questions like

    WILLIAM WILKINSON'S "AN ACCOUNT OF THE PRINCIPALITIES OF WALLACHIA AND MOLDOVIA" INSPIRED THIS AUTHOR'S MOST FAMOUS NOVEL[2]

Most question answering systems focus on factoid questions, questions that can be answered with simple facts expressed in short texts. The answers to the questions below can be expressed by a personal name, temporal expression, or location:

(25.1) Who founded Virgin Airlines?
(25.2) What is the average age of the onset of autism?
(25.3) Where is Apple Computer based?

In this chapter we describe the two major paradigms for factoid question answering. Information-retrieval or IR-based question answering relies on the vast quantities of textual information on the web or in collections like PubMed. Given a user question, information retrieval techniques first find relevant documents and passages. Then systems (feature-based, neural, or both) use reading comprehension algorithms to read these retrieved documents or passages and draw an answer directly from spans of text.

In the second paradigm, knowledge-based question answering, a system instead builds a semantic representation of the query, mapping What states border Texas? to the logical representation λx.state(x) ∧ borders(x, texas), or When was Ada Lovelace born? to the gapped relation birth-year(Ada Lovelace, ?x). These meaning representations are then used to query databases of facts.

Finally, large industrial systems like the DeepQA system in IBM's Watson are often hybrids, using both text datasets and structured knowledge bases to answer questions. DeepQA finds many candidate answers in both knowledge bases and in textual sources, and then scores each candidate answer using knowledge sources like geospatial databases, taxonomical classification, or other textual sources.

We describe IR-based approaches (including neural reading comprehension systems) in the next section, followed by sections on knowledge-based systems, on Watson DeepQA, and a discussion of evaluation.

[1] The answer was 42, but unfortunately the details of the question were never revealed.
[2] The answer, of course, is "Who is Bram Stoker", and the novel was Dracula.

25.1 IR-based Factoid Question Answering

The goal of information retrieval based question answering is to answer a user's question by finding short text segments on the web or some other collection of documents. Figure 25.1 shows some sample factoid questions and their answers.

Question                                          Answer
Where is the Louvre Museum located?               in Paris, France
What's the abbreviation for limited partnership?  L.P.
What are the names of Odin's ravens?              Huginn and Muninn
What currency is used in China?                   the yuan
What kind of nuts are used in marzipan?           almonds
What instrument does Max Roach play?              drums
What's the official language of Algeria?          Arabic
How many pounds are there in a stone?             14
Figure 25.1 Some sample factoid questions and their answers.

Figure 25.2 shows the three phases of an IR-based factoid question-answering system: question processing, passage retrieval and ranking, and answer extraction.

[Figure 25.2: Architecture diagram. IR-based factoid question answering has three stages: question processing (query formulation and answer type detection), document and passage retrieval (document retrieval over an indexed collection, then passage retrieval over the relevant documents), and answer extraction.]

25.1.1 Question Processing

The main goal of the question-processing phase is to extract the query: the keywords passed to the IR system to match potential documents. Some systems additionally extract further information such as:

• answer type: the entity type (person, location, time, etc.) of the answer.
• focus: the string of words in the question that is likely to be replaced by the answer in any answer string found.
• question type: is this a definition question, a math question, a list question?

For example, for the question Which US state capital has the largest population? the query processing might produce:

query: "US state capital has the largest population"
answer type: city
focus: state capital

In the next two sections we summarize the two most commonly used tasks, query formulation and answer type detection.
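The output of question processing can be thought of as a small structured record. Here is a minimal sketch in Python, assuming a hypothetical QuestionAnalysis container (the class and field names are illustrative, not from the text):

```python
from dataclasses import dataclass

@dataclass
class QuestionAnalysis:
    """Hypothetical container for the question-processing outputs
    described above: query, answer type, and focus."""
    query: str          # keywords passed to the IR system
    answer_type: str    # expected entity type of the answer
    focus: str          # span likely to be replaced by the answer

# The example analysis from the text:
analysis = QuestionAnalysis(
    query="US state capital has the largest population",
    answer_type="city",
    focus="state capital",
)
print(analysis)
```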
25.1.2 Query Formulation

Query formulation is the task of creating a query (a list of tokens) to send to an information retrieval system to retrieve documents that might contain answer strings.

For question answering from the web, we can simply pass the entire question to the web search engine, at most perhaps leaving out the question word (where, when, etc.). For question answering from smaller sets of documents like corporate information pages or Wikipedia, we still use an IR engine to index and search our documents, generally using standard tf-idf cosine matching, but we might need to do more processing. For example, for searching Wikipedia, it helps to compute tf-idf over bigrams rather than unigrams in the query and document (Chen et al., 2017). Or we might need to do query expansion, since while on the web the answer to a question might appear in many different forms, one of which will probably match the question, in smaller document sets an answer might appear only once. Query expansion methods can add query terms in hopes of matching the particular form of the answer as it appears, like adding morphological variants of the content words in the question, or synonyms from a thesaurus.

A query formulation approach that is sometimes used for questioning the web is to apply query reformulation rules to the query. The rules rephrase the question to make it look like a substring of possible declarative answers. The question "when was the laser invented?" might be reformulated as "the laser was invented"; the question "where is the Valley of the Kings?" as "the Valley of the Kings is located in". Here are some sample handwritten reformulation rules from Lin (2007):

(25.4) wh-word did A verb B → ... A verb+ed B
(25.5) Where is A → A is located in
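As a concrete illustration, reformulation rules of this kind can be written as regular-expression rewrites. The following is a minimal sketch covering the two example questions above; the rule set and function name are illustrative, not the actual rules from Lin (2007):

```python
import re

# Illustrative reformulation rules in the spirit of (25.4) and (25.5):
# each pattern rewrites a question into a substring of a likely
# declarative answer. Real systems use many more rules.
RULES = [
    # "when was A verb+ed?" -> "A was verb+ed"
    (re.compile(r"^when was (?P<a>.+) (?P<v>\w+ed)\?$", re.I), r"\g<a> was \g<v>"),
    # "where is A?" -> "A is located in"
    (re.compile(r"^where is (?P<a>.+)\?$", re.I), r"\g<a> is located in"),
]

def reformulate(question: str) -> str:
    """Return the first matching rewrite, or the question unchanged."""
    for pattern, template in RULES:
        if pattern.match(question):
            return pattern.sub(template, question)
    return question

print(reformulate("when was the laser invented?"))
# -> "the laser was invented"
print(reformulate("where is the Valley of the Kings?"))
# -> "the Valley of the Kings is located in"
```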
25.1.3 Answer Types

Some systems make use of question classification, the task of finding the answer type, the named-entity category of the answer. A question like "Who founded Virgin Airlines?" expects an answer of type PERSON. A question like "What Canadian city has the largest population?" expects an answer of type CITY. If we know that the answer type for a question is a person, we can avoid examining every sentence in the document collection, instead focusing on sentences mentioning people.

While answer types might just be the named entities like PERSON, LOCATION, and ORGANIZATION described in Chapter 18, we can also use a larger hierarchical set of answer types called an answer type taxonomy. Such taxonomies can be built automatically, from resources like WordNet (Harabagiu et al. 2000, Pasca 2003), or they can be designed by hand. Figure 25.4 shows one such hand-built ontology, the Li and Roth (2005) tagset; a subset is also shown in Fig. 25.3. In this hierarchical tagset, each question can be labeled with a coarse-grained tag like HUMAN or a fine-grained tag like HUMAN:DESCRIPTION, HUMAN:GROUP, HUMAN:IND, and so on. The HUMAN:DESCRIPTION type is often called a BIOGRAPHY question because the answer is required to give a brief biography of the person rather than just a name.

[Figure 25.3: A subset of the Li and Roth (2005) answer types: coarse classes ABBREVIATION, DESCRIPTION, ENTITY, HUMAN, LOCATION, and NUMERIC, with fine-grained subtypes such as abbreviation and expression; definition and reason; animal, currency, and food; group, individual, and title; city, country, and state; and date, distance, money, percent, and size.]

Question classifiers can be built by hand-writing rules like the following rule from Hovy et al. (2002) for detecting the answer type BIOGRAPHY:

(25.6) who {is | was | are | were} PERSON

Most question classifiers, however, are based on supervised learning, trained on databases of questions that have been hand-labeled with an answer type (Li and Roth, 2002). Either feature-based or neural methods can be used. Feature-based methods rely on words in the questions and their embeddings, the part-of-speech of each word, and named entities in the questions. Often, a single word in the question gives extra information about the answer type, and its identity is used as a feature. This word is sometimes called the answer type word or question headword, and may be defined as the headword of the first NP after the question's wh-word; headwords are marked in the following examples:

(25.7) Which *city* in China has the largest number of foreign financial companies?
(25.8) What is the state *flower* of California?

In general, question classification accuracies are relatively high on easy question types like PERSON, LOCATION, and TIME questions; detecting REASON and DESCRIPTION questions can be much harder.
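A minimal sketch of such a hand-written rule-based classifier follows, assuming an upstream named-entity tagger has already replaced person mentions with the placeholder token PERSON. Rule (25.6) is from the text; the remaining rules and all names are illustrative additions:

```python
import re

# Hand-written answer-type rules, checked in order. The first rule
# implements (25.6); the rest are illustrative, not from the text.
ANSWER_TYPE_RULES = [
    (re.compile(r"^who (is|was|are|were) PERSON\b", re.I), "BIOGRAPHY"),
    (re.compile(r"^(who|whom)\b", re.I), "PERSON"),
    (re.compile(r"^where\b", re.I), "LOCATION"),
    (re.compile(r"^when\b", re.I), "TIME"),
    (re.compile(r"^how (many|much)\b", re.I), "NUMERIC"),
]

def classify(question: str) -> str:
    """Return the answer type assigned by the first matching rule."""
    for pattern, answer_type in ANSWER_TYPE_RULES:
        if pattern.search(question):
            return answer_type
    return "ENTITY"  # default coarse class when no rule fires

print(classify("who was PERSON"))                         # BIOGRAPHY
print(classify("Where is the Valley of the Kings?"))      # LOCATION
print(classify("How many pounds are there in a stone?"))  # NUMERIC
```

Note that rule order matters: the more specific BIOGRAPHY pattern must precede the generic who-PERSON rule, or it would never fire.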
