
Robust Textual Inference via Learning and Abductive Reasoning

Rajat Raina, Andrew Y. Ng and Christopher D. Manning
Computer Science Department
Stanford University
Stanford, CA 94305

Abstract

We present a system for textual inference (the task of inferring whether a sentence follows from another text) that uses learning and a logical-formula semantic representation of the text. More precisely, our system begins by parsing and then transforming sentences into a logical formula-like representation similar to the one used by (Harabagiu et al., 2000). An abductive theorem prover then tries to find the minimum-cost set of assumptions necessary to show that one statement follows from the other. These costs reflect how likely different assumptions are, and are learned automatically using information from syntactic/semantic features and from linguistic resources such as WordNet. If one sentence follows from the other given only highly plausible, low-cost assumptions, then we conclude that it can be inferred. Our approach can be viewed as combining statistical machine learning and classical logical reasoning, in the hope of marrying the robustness and scalability of learning with the preciseness and elegance of logical theorem proving. We give experimental results from the recent PASCAL RTE 2005 challenge competition on recognizing textual inferences, where a system using this inference algorithm achieved the highest confidence-weighted score.

Introduction

The performance of many text processing applications would improve substantially if they were able to reason with natural language representations. These applications typically use information specified in natural language (e.g., broad-coverage question answering), and process input/output in natural language (e.g., document summarization).

Several semantic reasoning tasks crop up within these applications. For example, in an automatic question answering system, a question is given and we might require the system to find a piece of text from which the answer can be inferred. In a document summarization system, we might stipulate that the summary should not contain any sentences that can be inferred from the rest of the summary. These semantic reasoning tasks are largely addressed one application at a time, despite the many intersections.

There have been recent suggestions to isolate the core reasoning aspects and formulate a generic task that could potentially be useful for many of the above text processing applications (PASCAL Recognizing Textual Entailment Challenge 2005). Along these lines, the basic task we consider is to decide whether the meaning of one natural language fragment can be approximately inferred from another. The most general case of natural language reasoning might require rich commonsense reasoning, which is known to be hard. Our focus is on inferences that can be made using language features and semantics, possibly with some world knowledge.

In this paper, we present an abductive inference algorithm that performs such semantic reasoning. More precisely, it learns how to combine information from several knowledge sources to decide which "assumptions" are plausible during its reasoning process. Thus, our system is capable of performing flexible semantic reasoning, given just a training set of inferable sentences (see Table 1 for examples).

Even beyond natural language inference, our methods can be used for robust inference with other logical representations. Logical inference is well known to be brittle and to have poor coverage, especially when it uses axioms that must be manually tweaked; this limits its applicability to real-world tasks. However, by using abduction and learning, we combine the elegance and preciseness of the logical formalism with the robustness and scalability of a statistically trained system.
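As a toy illustration of what "learning to combine information from several knowledge sources" might look like, the Python sketch below scores a single candidate assumption as a weighted sum of its features. The feature names and weights are hypothetical placeholders of our own, not the cost model the paper actually describes; in this picture, learning "good" costs amounts to fitting the weights from labeled entailed/not-entailed examples.

```python
from typing import Dict

def assumption_cost(features: Dict[str, float], weights: Dict[str, float]) -> float:
    """Cost of one candidate assumption as a weighted sum of its features
    (lower cost = more plausible). Purely illustrative."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

# Hypothetical weights; a learning algorithm would fit these from labeled
# entailed / not-entailed examples.
weights = {"bias": 2.0, "wordnet_hypernym": -1.5, "same_stem": -2.0}

# "convertible" => "car" is a WordNet hypernym step, so the assumption is cheap.
print(assumption_cost({"bias": 1.0, "wordnet_hypernym": 1.0}, weights))  # 0.5
# An assumption with no supporting evidence only pays the bias, so it is costly.
print(assumption_cost({"bias": 1.0}, weights))                           # 2.0
```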
Problem definition

We describe our algorithms on the task of "recognizing textual entailment" (PASCAL Recognizing Textual Entailment Challenge 2005), but the ideas extend easily to the general case of textual inference.

Each example in this domain consists of two parts: one or more text sentences and a hypothesis sentence. The task is to identify whether the hypothesis sentence can be plausibly inferred ("entailed") from the text sentences. Throughout this section, we use the following as our running example:

TEXT: Bob purchased an old convertible.
HYPOTHESIS: Bob bought an old car.

Actual examples in the task are significantly more complex; see the examples in Table 1.

Class | Text | Hypothesis | Entailed
Comparable Documents | A Filipino hostage in Iraq was released. | A Filipino hostage was freed in Iraq. | Yes
Information Extraction | Global oil prices clung near their highest levels in at least 21 years yesterday after Russian oil giant Yukos was ordered to stop sales, ... | Oil prices rise. | Yes
Information Retrieval | Spain pulled its 1,300 troops from Iraq last month. | Spain sends troops to Iraq. | No
Machine Translation | The economy created 228,000 new jobs after a disappointing 112,000 in June. | The economy created 228,000 jobs after disappointing the 112,000 in June. | No
Paraphrase Acquisition | Clinton is a very charismatic person. | Clinton is articulate. | Yes
Question Answering | VCU School of the Arts in Qatar is located in Doha, the capital city of Qatar. | Qatar is located in Doha. | No
Reading Comprehension | Eating lots of foods that are a good source of fiber may keep your blood glucose from rising fast after you eat. | Fiber improves blood sugar control. | Yes

Table 1: Some illustrative examples from different classes of the PASCAL RTE dataset (PASCAL Recognizing Textual Entailment Challenge 2005).

Many successful text applications rely only on "keyword" (or phrasal) counting. But information retrieval techniques that use only word counts (such as bag-of-words representations) are not well suited to such semantic reasoning tasks. For example, a successful system must differentiate between the hypothesis above and a similar-looking one which cannot be inferred:

HYPOTHESIS: Old Bob bought a car.
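To see why a bag-of-words representation cannot separate these two hypotheses, the short Python sketch below (our own illustration, not part of the described system) compares their content-word counts with a toy predicate-argument encoding in the spirit of the logical representation introduced next; the tuple encoding and variable names are invented for the example.

```python
from collections import Counter

def bag_of_words(sentence, stopwords=frozenset({"a", "an", "the"})):
    """Content-word counts, ignoring word order and attachment."""
    return Counter(w.strip(".").lower() for w in sentence.split()
                   if w.lower() not in stopwords)

# Identical bags of words, even though only the first is entailed by the text.
print(bag_of_words("Bob bought an old car.") ==
      bag_of_words("Old Bob bought a car."))        # True

# A predicate-argument encoding does distinguish them: "old" modifies
# the car (variable Y) in one hypothesis and Bob (variable X) in the other.
hyp_entailed     = {("Bob", "X"), ("car", "Y"), ("old", "Y"), ("bought", "Z", "X", "Y")}
hyp_not_entailed = {("Bob", "X"), ("car", "Y"), ("old", "X"), ("bought", "Z", "X", "Y")}
print(hyp_entailed == hyp_not_entailed)             # False
```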
Thus, rather than relying on keyword counting, we use a significantly richer representation for the syntax of the sentence, and augment it with semantic annotations. We choose a logical formula-like representation similar to (Harabagiu, Pasca, & Maiorano, 2000; Moldovan et al., 2003), which allows us to pose the textual inference problem as one of "approximate" (or, more formally, abductive) logical inference. For example, the sentences above become:

(∃ A,B,C) Bob(A) ∧ convertible(B) ∧ old(B) ∧ purchased(C,A,B)
(∃ X,Y,Z) Bob(X) ∧ car(Y) ∧ old(Y) ∧ bought(Z,X,Y)

In our notation, C and Z are event variables that denote the events of purchasing and buying, respectively. With this representation, the hypothesis is entailed by the text if and only if it can be logically proved from the latter.

However, the main obstacle to using such logical inferences is that most non-trivial entailments require making certain "leap-of-faith" assumptions. Thus, in order to correctly infer the entailment above, one must either know or assume that "a convertible is a car". Building on (Hobbs et al., 1993), we propose an algorithm in which each "assumption" is associated with a cost, and a hypothesis is plausible if it has a simple, low-cost proof.

Instead of representing all possible leaps-of-faith as explicit logical axioms ("rules"), we take the view that these leaps-of-faith correspond to assumptions about the real world and have some degree of plausibility; we assign a cost to each assumption to quantify its plausibility. The cost of an assumption is computed using a cost model that integrates features of the assumption drawn from many knowledge sources. We believe this approach is more robust and scalable than one which requires axioms that are always true (Moldovan et al., 2003). Given this cost model, we can then use a search procedure to find the minimum-cost proof, and judge entailment based on this cost.

One of the challenges is to obtain a cost model for a large set of assumptions. Such costs have typically been manually tuned in previous work, or have been left out completely. We describe a learning algorithm that automatically discovers "good" costs given a training set of labeled examples (as in Table 1).

Representation

In this section, we sketch how our logical representation is derived from raw English text.

The first step in processing a sentence is to construct a syntactic dependency graph. The sentence is parsed using the parser in (Klein & Manning, 2003). Hand-written rules are used to find the heads of all nodes in this parse tree (these rules are an adaptation of those in Collins, 1999). This procedure implicitly discovers syntactic relationships between the words in the sentence, because when a word is chosen as the head, its siblings must be syntactic modifiers of that word. These relationships can be laid out as a dependency graph, where each node is a word/phrase and the links represent particular dependencies.

[Figure 1: Procedure to discover syntactic dependencies, illustrated on the parse tree for an example sentence ("Bob purchased an old convertible"). Heads are marked on nonterminal nodes in parentheses. The dotted lines show the dependencies discovered.]

Given these logical formulas, our abductive theorem prover searches for a proof of the hypothesis from the text. A proof by resolution refutation proceeds by adding the negation of the goal logical formula to a knowledge base consisting of the given axioms, and then deriving a null clause through successive resolution steps. This corresponds to justifying (i.e., "proving") the goal by deriving a contradiction from its negation. Thus, to use the method of resolution refutation, we construct a knowledge base with the text logical formula and the negation of the hypothesis logical formula; finding a proof then corresponds to deriving a null clause from this knowledge base.
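The full system carries out this search with a resolution-style abductive prover over the logical forms. Purely as an illustration of the minimum-cost idea (and not of the actual prover), the Python sketch below greedily matches each hypothesis literal to the cheapest supporting text literal, with a hand-made assumption-cost table standing in for the learned cost model and an arbitrary decision threshold.

```python
# Illustrative only: literals are (predicate, argument-tuple) pairs; argument
# structure and unification are ignored to keep the sketch short.
TEXT = [("Bob", ("A",)), ("convertible", ("B",)), ("old", ("B",)),
        ("purchased", ("C", "A", "B"))]
HYP  = [("Bob", ("X",)), ("car", ("Y",)), ("old", ("Y",)),
        ("bought", ("Z", "X", "Y"))]

# Hypothetical per-assumption costs, standing in for costs computed by a
# learned model from WordNet and other features (lower = more plausible).
ASSUMPTION_COST = {
    ("convertible", "car"): 0.25,   # hyponym -> hypernym: cheap assumption
    ("purchased", "bought"): 0.125, # near-synonyms: cheap assumption
}

def match_cost(text_pred, hyp_pred):
    """Cost of assuming the text predicate supports the hypothesis predicate."""
    if text_pred == hyp_pred:
        return 0.0
    return ASSUMPTION_COST.get((text_pred, hyp_pred), float("inf"))

def proof_cost(text, hyp):
    """Total cost of the cheapest assumptions covering every hypothesis literal."""
    return sum(min(match_cost(t_pred, h_pred) for t_pred, _ in text)
               for h_pred, _ in hyp)

THRESHOLD = 1.0  # illustrative decision threshold
cost = proof_cost(TEXT, HYP)
print(cost, "-> entailed" if cost <= THRESHOLD else "-> not entailed")  # 0.375 -> entailed
```

In the real system, the prover must also unify argument variables (so that, for example, "old" has to attach to the car rather than to Bob), and the per-assumption costs come from the learned cost model rather than a fixed table.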