University of Pennsylvania
Institute for Research in Cognitive Science
IRCS Technical Reports Series
Technical Report No. IRCS-98-23

Modality in Dialogue: Planning, Pragmatics and Computation

Matthew Stone

September 1998

Abstract

Natural language generation (NLG) is first and foremost a reasoning task. In this reasoning, a system plans a communicative act that will signal key facts about the domain to the hearer. In generating action descriptions, this reasoning draws on characterizations both of the causal properties of the domain and the states of knowledge of the participants in the conversation. This dissertation shows how such characterizations can be specified declaratively and accessed efficiently in NLG.

The heart of this dissertation is a study of logical statements about knowledge and action in modal logic. By investigating the proof-theory of modal logic from a logic programming point of view, I show how many kinds of modal statements can be seen as straightforward instructions for computationally manageable search, just as Prolog clauses can. These modal statements provide sufficient expressive resources for an NLG system to represent the effects of actions in the world or to model an addressee whose knowledge in some respects exceeds and in other respects falls short of its own.
To illustrate the use of such statements, I describe how the SPUD sentence planner exploits a modal knowledge base to assess the interpretation of a sentence as it is constructed incrementally.

Contents

1 Introduction
1.1 Reasoning in NLG
1.2 The Results of the Dissertation
1.3 Advice to Readers

I Modal Deduction

2 Modal Logic as a Modular Language
2.1 An Informal Survey of Modularity in Modal Logic
2.2 Modal Logic in a Nutshell
2.3 Proofs in Modal Logic and Modular Search
2.4 Summary

3 Logic Programming and Modal Logic
3.1 Modal Logic Programming in the Abstract
3.2 Modality and Modularity in Design
3.3 Exploiting Modularity for Search Control
3.4 Formalities
3.5 Summary

4 Constraints for Possible Worlds
4.1 The Problem
4.2 □-only Logic and Variable Introduction
4.3 Constructing Trees from Constraints
4.4 Summary

5 Evaluating Modal Deduction Methods
5.1 A Case Study in Deduction Efficiency
5.2 Logical Specifications and Reports of Success and Failure
5.3 Summary

II Modal Knowledge Representation

6 Action and Deliberation, Ontology and Search
6.1 The structure and use of plans
6.2 The temporal and inferential ontology of planning
6.3 Planning and proof search
6.4 Branching time, proof theory and validation
6.5 Examples and problems
6.6 Summary

7 Action and Knowledge
7.1 Choice and Future Reasons to Act
7.2 Logical Foundations
7.3 A New Abductive Presentation of Planning
7.4 Key Examples
7.5 Summary

8 Modular Specifications of Knowledge
8.1 Motivating Hierarchy
8.2 Hierarchy and Causality
8.3 Knowledge, Hierarchy and Choice
8.4 Modality, Modularity and Disjunction
8.5 Conclusion

III Modal Logic for NLG

9 Modal Logic and Reasoning in NLG
9.1 An Overview of SPUD
9.2 A Modal Perspective on Conversational State
9.3 Worked Examples
9.4 Summary

10 Conclusion
10.1 Overview of Results
10.2 Issues and Problems
10.3 Closing Statement

Acknowledgments

As befits its protracted and formative creation, this work deeply reflects not only its author but also the environment which allowed it at last to flourish. From its defining moments, this research was stimulated and refined by Mark Steedman and his students—Beryl Hoffman, Nobo Komagata, Michael Niv, Charlie Ortiz, Scott Prevost and Mike White—whose shared interest in knowledge representation and natural language generation made for an intellectual environment at once supportive and challenging. As it developed, the research enjoyed the infectious enthusiasm of close collaborators, particularly on projects related to SPUD, TAG and generation—first Christy Doran, and later including Daniel Hardt, Aravind Joshi, Martha Palmer, Vijay Shankar, Bonnie Webber, Tonia Bleam, Julie Bourne and Gann Bierner. I have sought, and received, the sage attention of a range of experts on specialized questions—Justine Cassell, Robin Clark, Dale Miller, Leora Morgenstern, Ellen Prince and Tandy Warnow. And now, even the final assembly of this document has been honed by the close readings and comments provided by Mark Steedman, Aravind Joshi, Rich Thomason, Bonnie Webber and Scott Weinstein. Add the broader camaraderie and intellectual exchange among students in CIS and across IRCS and even Penn—those involved know who they are—and you may, like me, also suspect that nothing like this dissertation would have been possible anywhere else in the world.
So I'm thankful to have gotten to Penn: for that I owe Pauline Jacobson, whose cultivation of my semantic interests proved a definitive pointer in this direction—and, of course, my first and foremost academic influences, Mom and Dad, who taught me how to take ideas seriously and precisely, and how to write. And I'm thankful to have been able to stay: for the financial support, thanks to NSF and IRCS graduate fellowships, and a number of other supporting grants, including NSF grant IRI95-04372, ARPA grant N66001-94-C6043, and ARO grant DAAH04-94-G0426; for the emotional support (no less necessary), thanks to Doug DeCarlo.

1 Introduction

Natural language generation (NLG) promises to provide an exciting technology for improving the way computer systems communicate their results to users. Using NLG, systems can customize the information they present to the user and the context. The Migraine system [Carenini et al., 1994], for example, takes the user's history and diagnosis into account to provide medical information that more precisely matches the user's needs. The ILEX system [Mellish et al., 1998], meanwhile, exploits the opportunities raised by previous dialogue to pack important and interesting information into an interactive, on-line museum tour. Using NLG, computer expertise can also be deployed in new settings, particularly using speech. For instance, the TraumAid system [Gertner and Webber, 1998] issues concise natural language critiques of treatment plans in the emergency room during the initial management of chest trauma. The Computer fix-it shop system [Biermann et al., 1993] uses voice dialogue over phone lines to give a computer technician in the field convenient remote access to an expert fault-diagnosis system.

When NLG succeeds in systems such as these, two factors are at work. On the one hand, the system must have substantive and correct knowledge about its domain, so that users can benefit from the information the system provides. On the other, the system must communicate that knowledge in a concise and natural form, so that users can understand the information easily, without being distracted from their other tasks and concerns. These two requirements apparently conflict: the more detailed and rich the system's representations of the domain, the more interesting and valuable NLG becomes—but also the further removed those representations are, in content and organization, from natural linguistic messages.

Because of this gap, NLG is first and foremost a reasoning task. In this reasoning, a system plans a communicative act that will signal key facts about the domain to the hearer. This reasoning thus draws on characterizations both of the causal properties of the domain and the states of knowledge of the participants in the conversation. This dissertation shows how such characterizations can be specified declaratively and accessed efficiently in generating NL action descriptions.
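The abstract claims that many modal statements can be treated as straightforward instructions for computationally manageable search, much as Prolog clauses are. The following sketch is a rough, hypothetical illustration of that flavor only; it is not the proof-theoretic machinery developed in Part I and not SPUD, and its context names, visibility relation and example clauses are assumptions introduced purely for illustration. It reads clauses tagged with epistemic contexts as a modular logic program and answers goals by backward chaining within the visible modules.

```python
# A minimal, illustrative sketch (not the dissertation's actual system) of the
# idea that modal statements can be read as instructions for goal-directed,
# Prolog-style search: each clause is tagged with a modal context, and a goal
# raised in one context may only consult clauses visible from that context,
# so the modal operators partition the program into modules.
# All context names, the visibility table and the example clauses below are
# hypothetical, introduced only for this illustration.

from typing import Dict, List, Tuple

Clause = Tuple[str, List[str]]      # (head, body): head holds if every body goal holds
Program = Dict[str, List[Clause]]   # modal context -> clauses asserted under that context

# Which contexts a goal raised in a given context may draw clauses from.
# Both hearer models see the shared (mutually known) context.
VISIBLE: Dict[str, List[str]] = {
    "shared":  ["shared"],
    "knows_A": ["knows_A", "shared"],
    "knows_B": ["knows_B", "shared"],
}

def prove(program: Program, context: str, goal: str) -> bool:
    """Backward chaining restricted to clauses visible from `context`."""
    for ctx in VISIBLE[context]:
        for head, body in program.get(ctx, []):
            if head == goal and all(prove(program, context, sub) for sub in body):
                return True
    return False

if __name__ == "__main__":
    program: Program = {
        # A fact in the shared context, available to every participant.
        "shared":  [("door_open", [])],
        # A rule only agent A has: if the door is open, the room is accessible.
        "knows_A": [("room_accessible", ["door_open"])],
    }
    print(prove(program, "knows_A", "room_accessible"))   # True: A can derive it
    print(prove(program, "knows_B", "room_accessible"))   # False: B lacks the rule
```

In the dissertation proper this module structure is obtained proof-theoretically for genuine modal logics rather than by an ad hoc visibility table, but the sketch conveys why tagging statements with modal operators can make search more manageable rather than less.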