Open-Domain Commonsense Reasoning Using Discourse Relations from a Corpus of Weblog Stories

Matt Gerber
Department of Computer Science, Michigan State University
[email protected]

Andrew S. Gordon and Kenji Sagae
Institute for Creative Technologies, University of Southern California
{gordon,sagae}@ict.usc.edu

(Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading, pages 43-51, Los Angeles, California, June 2010. (c) 2010 Association for Computational Linguistics)

Abstract

We present a method of extracting open-domain commonsense knowledge by applying discourse parsing to a large corpus of personal stories written by Internet authors. We demonstrate the use of a linear-time, joint syntax/discourse dependency parser for this purpose, and we show how the extracted discourse relations can be used to generate open-domain textual inferences. Our evaluations of the discourse parser and inference models show some success, but also identify a number of interesting directions for future work.

1 Introduction

The acquisition of open-domain knowledge in support of commonsense reasoning has long been a bottleneck within artificial intelligence. Such reasoning supports fundamental tasks such as textual entailment (Giampiccolo et al., 2008), automated question answering (Clark et al., 2008), and narrative comprehension (Graesser et al., 1994). These tasks, when conducted in open domains, require vast amounts of commonsense knowledge pertaining to states, events, and their causal and temporal relationships. Manually created resources such as FrameNet (Baker et al., 1998), WordNet (Fellbaum, 1998), and Cyc (Lenat, 1995) encode many aspects of commonsense knowledge; however, coverage of causal and temporal relationships remains low for many domains.

Gordon and Swanson (2008) argued that the commonsense tasks of prediction, explanation, and imagination (collectively called envisionment) can be supported by knowledge mined from a large corpus of personal stories written by Internet weblog authors.[1] Gordon and Swanson (2008) identified three primary obstacles to such an approach. First, stories must be distinguished from other weblog content (e.g., lists, recipes, and reviews). Second, stories must be analyzed in order to extract the implicit commonsense knowledge that they contain. Third, inference mechanisms must be developed that use the extracted knowledge to perform the core envisionment tasks listed above.

In the current paper, we present an approach to open-domain commonsense inference that addresses each of the three obstacles identified by Gordon and Swanson (2008). We built on the work of Gordon and Swanson (2009), who describe a classification-based approach to the task of story identification. The authors' system produced a corpus of approximately one million personal stories, which we used as a starting point. We applied efficient discourse parsing techniques to this corpus as a means of extracting causal and temporal relationships. Furthermore, we developed methods that use the extracted knowledge to generate textual inferences for descriptions of states and events. This work resulted in an end-to-end prototype system capable of generating open-domain, commonsense inferences using a repository of knowledge extracted from unstructured weblog text. We focused on identifying strengths and weaknesses of the system in an effort to guide future work.

We structure our presentation as follows: in Section 2, we present previous research that has investigated the use of large web corpora for natural language processing (NLP) tasks. In Section 3, we describe an efficient method of automatically parsing weblog stories for discourse structure. In Section 4, we present a set of inference mechanisms that use the extracted discourse relations to generate open-domain textual inferences. We conclude, in Section 5, with insights into story-based envisionment that we hope will guide future work in this area.

[1] We follow Gordon and Swanson (2009) in defining a story to be a "textual discourse that describes a specific series of causally related events in the past, spanning a period of time of minutes, hours, or days, where the author or a close associate is among the participants."

2 Related work

Researchers have made many attempts to use the massive amount of linguistic content created by users of the World Wide Web. Progress and challenges in this area have spawned multiple workshops (e.g., those described by Gurevych and Zesch (2009) and Evert et al. (2008)) that specifically target the use of content that is collaboratively created by Internet users. Of particular relevance to the present work is the weblog corpus developed by Burton et al. (2009), which was used for the data challenge portion of the International Conference on Weblogs and Social Media (ICWSM). The ICWSM weblog corpus (referred to here as Spinn3r) is freely available and comprises tens of millions of weblog entries posted between August 1st, 2008 and October 1st, 2008.

Gordon et al. (2009) describe an approach to knowledge extraction over the Spinn3r corpus using techniques described by Schubert and Tong (2003). In this approach, logical propositions (known as factoids) are constructed via approximate interpretation of syntactic analyses. As an example, the system identified a factoid glossed as "doors to a room may be opened". Gordon et al. (2009) found that the extracted factoids cover roughly half of the factoids present in the corresponding Wikipedia[2] articles. We used a subset of the Spinn3r corpus in our work, but focused on discourse analyses of entire texts instead of syntactic analyses of single sentences. Our goal was to extract general causal and temporal propositions instead of the fine-grained properties expressed by many factoids extracted by Gordon et al. (2009).

Clark and Harrison (2009) pursued large-scale extraction of knowledge from text using a syntax-based approach that was also inspired by the work of Schubert and Tong (2003). The authors showed how the extracted knowledge tuples can be used to improve syntactic parsing and textual entailment recognition. Bar-Haim et al. (2009) present an efficient method of performing inference with such knowledge.

Our work is also related to the work of Persing and Ng (2009), in which the authors developed a semi-supervised method of identifying the causes of events described in aviation safety reports. Similarly, our system extracts causal (as well as temporal) knowledge; however, it does this in an open domain and does not place limitations on the types of causes to be identified. This greatly increases the complexity of the inference task, and our results exhibit a corresponding degradation; however, our evaluations provide important insights into the task.

[2] http://en.wikipedia.org

3 Discourse parsing a corpus of stories

Gordon and Swanson (2009) developed a supervised classification-based approach for identifying personal stories within the Spinn3r corpus. Their method achieved 75% precision on the binary task of predicting story versus non-story on a held-out subset of the Spinn3r corpus. The extracted "story corpus" comprises 960,098 personal stories written by weblog users. Due to its large size and broad domain coverage, the story corpus offers unique opportunities to NLP researchers. For example, Swanson and Gordon (2008) showed how the corpus can be used to support open-domain collaborative story writing.[3]

As described by Gordon and Swanson (2008), story identification is just the first step towards commonsense reasoning using personal stories. We addressed the second step - knowledge extraction - by parsing the corpus using a Rhetorical Structure Theory (Carlson and Marcu, 2001) parser based on the one described by Sagae (2009). The parser performs joint syntactic and discourse dependency parsing using a stack-based, shift-reduce algorithm with runtime that is linear in the input length. This lightweight approach is very efficient; however, it may not be quite as accurate as more complex, chart-based approaches (e.g., the approach of Charniak and Johnson (2005) for syntactic parsing).

[3] The system (called SayAnything) is available at http://sayanything.ict.usc.edu

We trained the discourse parser over the causal and temporal relations contained in the RST corpus. Examples of these relations are shown below:

(1) [cause Packages often get buried in the load] [result and are delivered late.]

(2) [before Three months after she arrived in L.A.] [after she spent $120 she didn't have.]

The RST corpus defines many fine-grained relations that capture causal and temporal properties. For example, the corpus differentiates between result and reason for causation and temporal-after and temporal-before for temporal order. In order to increase the amount of available training data, we collapsed all causal and temporal relations into two general relations, causes and precedes. This step required normalization of asymmetric relations such as temporal-before and temporal-after.

We evaluated the discourse parser using the following accuracy metrics:

Discourse unit accuracy: For each reference discourse unit, we located the predicted discourse unit with the highest overlap. Accuracy for the predicted discourse unit is equal to the percentage word overlap between the reference and predicted discourse units.

Argument identification accuracy: For each discourse unit of a predicted discourse relation, we located the reference discourse unit with the highest overlap. Accuracy is equal to the percentage of times that a reference discourse relation (of any type) holds between the reference discourse units that overlap most with the predicted discourse units.

Argument classification accuracy: For the subset of instances in which a reference discourse relation holds between the units that overlap most with the predicted discourse units, accuracy is equal to the percentage of times that the predicted discourse relation matches the reference discourse relation.

Complete accuracy: For each predicted discourse relation, accuracy is equal to the percentage word overlap with a reference discourse relation.
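To illustrate why the stack-based, shift-reduce strategy described above runs in time linear in the input length, the following is a minimal sketch of generic arc-standard shift-reduce dependency parsing: each of the n input tokens is shifted exactly once and removed by exactly one reduce, so at most 2n transitions occur. This is not the authors' joint syntax/discourse parser; the `decide` policy stands in for a learned classifier, and all names here are illustrative assumptions.

```python
# Sketch of arc-standard shift-reduce dependency parsing (illustrative only).
# Each token is shifted once and reduced once, so runtime is linear in input
# length. decide() is a stand-in for the parser's learned action classifier.

def parse(tokens, decide):
    stack, arcs = [], []
    buffer = list(tokens)
    while buffer or len(stack) > 1:
        action = decide(stack, buffer)
        if action == "shift":
            stack.append(buffer.pop(0))          # move next token onto the stack
        elif action == "left-arc":               # stack[-1] becomes head of stack[-2]
            head = stack[-1]
            dep = stack.pop(-2)
            arcs.append((head, dep))
        else:                                    # "right-arc": stack[-2] -> stack[-1]
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

# Toy policy: shift everything, then attach right-branching.
def right_branching(stack, buffer):
    return "shift" if buffer else "right-arc"

arcs = parse(["I", "saw", "her"], right_branching)
```

With the toy policy, "I saw her" yields the head-dependent arcs ("saw", "her") and ("I", "saw"); in the real system the same transition machinery would be driven by classifier decisions over both syntactic and discourse units.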
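The collapsing and normalization step (mapping fine-grained RST relations onto the two general relations, and flipping asymmetric relations so that arguments share a single direction convention) can be sketched as follows. The mapping table, direction flags, and function name are assumptions for illustration, not the authors' actual implementation.

```python
# Illustrative sketch of collapsing fine-grained causal/temporal RST relations
# into the two general relations, causes and precedes. The swap flag records
# whether the arguments must be reversed so that every normalized instance
# reads "arg1 causes/precedes arg2". The table entries are assumptions.

COLLAPSE = {
    "result":          ("causes",   False),  # [cause][result]: arg1 causes arg2
    "reason":          ("causes",   True),   # assumed reversed direction
    "temporal-before": ("precedes", False),  # arg1 happens before arg2
    "temporal-after":  ("precedes", True),   # arg1 happens after arg2: swap
}

def collapse_relation(relation, arg1, arg2):
    """Return (general_relation, first_unit, second_unit), or None when the
    relation is neither causal nor temporal."""
    mapped = COLLAPSE.get(relation)
    if mapped is None:
        return None
    general, swap = mapped
    return (general, arg2, arg1) if swap else (general, arg1, arg2)
```

For instance, a temporal-after instance is rewritten as a precedes instance with its arguments swapped, which is the normalization of asymmetric relations the text describes.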
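The overlap-based matching that underlies all four metrics can be sketched as below. Whitespace tokenization and the function names are simplifying assumptions; the paper does not specify these details.

```python
# Sketch of the word-overlap matching used by the evaluation metrics above.
# Tokenization by lowercased whitespace splitting is an assumption made for
# illustration.

def word_overlap(unit_a, unit_b):
    """Percentage of unit_a's tokens that also appear in unit_b."""
    tokens_a = unit_a.lower().split()
    tokens_b = set(unit_b.lower().split())
    if not tokens_a:
        return 0.0
    return 100.0 * sum(t in tokens_b for t in tokens_a) / len(tokens_a)

def best_match(unit, candidates):
    """Return the candidate discourse unit with the highest word overlap."""
    return max(candidates, key=lambda c: word_overlap(unit, c))
```

Under this sketch, discourse unit accuracy is word_overlap between a reference unit and its best-matching predicted unit, while argument identification then asks whether any reference relation holds between the best matches of a predicted relation's two arguments.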
