Towards More Flexible and Common-Sensical Reasoning about Beliefs
From: AAAI Technical Report SS-95-05. Compilation copyright © 1995, AAAI (www.aaai.org). All rights reserved.

Gees C. Stein and John A. Barnden
Computing Research Lab & Computer Science Dept.
New Mexico State University
Las Cruces, NM 88003-8001
gstein@crl.nmsu.edu, jbarnden@crl.nmsu.edu

Abstract

Some important research problems within reasoning about beliefs are how to deal with incomplete, inaccurate or uncertain beliefs, when to ascribe beliefs, and how to make reasoning about beliefs more common-sensical. We present two systems that attack such problems. The first, called CaseMent, uses case-based reasoning to reason about beliefs. It appears that using cases as opposed to rules has advantages with respect to reusability, context-sensitivity of reasoning, belief ascription and introspection. Although the program is still in an initial stage, the first results from the existing implementation look promising. The second system, ATT-Meta, addresses the fact that metaphorical descriptions of mental states play an important role in the coherence of mundane discourse (including stories). ATT-Meta's reasoning module has an advanced prototype implementation. It incorporates powerful facilities for defeasible and uncertain reasoning. Both systems use simulative reasoning and thereby reduce the amount of explicit self-knowledge they need in ascribing, to other agents, beliefs and reasonings similar to their own.

Introduction

The call for papers addressed the issues of what is needed for story understanding, the way mental states should be represented, the reusability of representations, introspection, and the type/amount of explicit self-knowledge. In this paper two systems will be discussed that reason about beliefs and address certain aspects of these issues. The two systems are based on different reasoning techniques and focus on different problems of reasoning about beliefs. The first system uses case-based reasoning in order to grapple with some common-sensical aspects of reasoning about beliefs. The second system focuses on other common-sensical aspects in concentrating on discourse contexts that contain metaphorical descriptions of mental states.

Although both systems are currently confined to reasoning about beliefs as opposed to other mental states, we plan to expand to other mental states in the future.

Section 2 discusses those issues from the workshop's call for papers that our systems address to a significant extent. Section 3 discusses the general technique of case-based reasoning, which is the basis of our first system, CaseMent. Section 4 sketches CaseMent itself, and section 5 shows why case-based reasoning helps with some of the issues of section 2. Section 6 sketches our ATT-Meta system, and then section 7 discusses its benefits with regard to the issues of section 2. Section 8 concludes, and briefly points out how CaseMent and ATT-Meta could be brought together in the future.

Belief-Reasoning Issues Addressed

In this section we discuss aspects of several of the crucial issues mentioned in the call for papers for the workshop. These aspects are addressed by one or both of the systems we will be describing.

Kinds of Reasoning Done by People

Classical logics rely on deductive inference, and most systems that reason about beliefs do the same (Creary, 1979; Haas, 1986; Hadley, 1988; Lakemeyer, 1991; Levesque, 1984; Maida et al., 1991). In particular, believers are implicitly or explicitly cast as performing deduction. However, people use not only deduction but also induction, abduction, analogy-based reasoning and other forms of plausible reasoning. They have the ability to infer general rules to explain some particular examples (induction) or to generate plausible explanations for observed facts (abduction --- see, for example, Hobbs et al., 1993). While deduction produces logically sound results, induction and abduction produce plausible results, which need not be true. Ideally, systems that reason about beliefs should be able to ascribe abduction, deduction, induction, analogy-based reasoning etc. to people.

Incomplete, Incorrect or Uncertain Information

Ideally, reasoning happens with all the relevant information available, and with all the information being correct and free of uncertainties. In reality people often have to reason with incomplete, incorrect and uncertain information. They generate defeasible conclusions: conclusions that seem likely to be true but might later turn out to be false. A system that reasons about beliefs has to deal with these issues both at the system level and within a belief space. Because agents might have incomplete or uncertain beliefs, it is not enough for the system itself to know how to reason with its own incomplete, uncertain beliefs; it has to ascribe such reasoning to other agents as well. Only a minority of research on belief has endowed systems with this sort of capability. One example is the work of Cravo & Martins (1993), on a version of SNePS that uses default reasoning.
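The following is a minimal Python sketch, not taken from CaseMent, ATT-Meta or SNePS, of what defeasible conclusions "at the system level and within a belief space" can amount to: a default conclusion is adopted only while nothing firmly contradicts it, and the same machinery can be reused inside a nested belief space for another agent. All class and function names here are illustrative assumptions.

def negate(p):
    # String-level negation convention used only for this sketch.
    return p[4:] if p.startswith("not ") else "not " + p

class BeliefSpace:
    """Propositions believed by one agent; some are defeasible defaults."""

    def __init__(self, owner):
        self.owner = owner
        self.firm = set()        # firmly held propositions
        self.defeasible = set()  # default conclusions, open to retraction

    def add_fact(self, p):
        """Add a firm proposition and retract any default it contradicts."""
        self.firm.add(p)
        self.defeasible.discard(negate(p))

    def apply_default(self, condition, conclusion):
        """Adopt 'conclusion' defeasibly if 'condition' is believed and
        nothing firm contradicts the conclusion."""
        if condition in self.firm | self.defeasible and negate(conclusion) not in self.firm:
            self.defeasible.add(conclusion)

    def holds(self, p):
        return p in self.firm or p in self.defeasible

system = BeliefSpace("system")
system.add_fact("Tweety is a bird")
system.apply_default("Tweety is a bird", "Tweety flies")
print(system.holds("Tweety flies"))   # True: a plausible, not certain, conclusion
system.add_fact("not Tweety flies")   # firm contrary information arrives
print(system.holds("Tweety flies"))   # False: the default has been withdrawn

john = BeliefSpace("John")            # the same machinery inside a belief space for John
john.add_fact("Tweety is a bird")
john.apply_default("Tweety is a bird", "Tweety flies")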
Shades of Belief

Most systems do not deal with different kinds of beliefs. However, beliefs can be of different varieties or "shades". For example, they can be held with different degrees of consciousness, they can be more or less active (i.e., more or less capable of affecting reasoning and action), and they can be momentary or long-lasting. The particular shade of a belief can be implied by its context. Take for instance "Helga believes that Werner is intelligent," in a context where Helga is not just in love with Werner but really worships him, and where Werner is not in fact intelligent. Then, plausibly, Helga's belief is a kind of "wishful thinking" belief: something halfway between wishing and believing. Contrast this with "Mary believes olive oil is very healthy," given that Mary has done a lot of research on how olive oil affects health and that Mary is a well-respected scientist. Mary's belief is presumably not wishful thinking. Other mental states also come in various shades. For instance, hoping involves a context-sensitive mixture of expectation and desire.

Levesque (1984) and Lakemeyer (1991) make a distinction between "explicit" and "implicit" beliefs, mainly to solve the logical omniscience problem. It is common in philosophy to distinguish "tacit" beliefs as a special case (see Richard (1990) for discussion). However, these distinctions only scratch the surface of the issue of shades of belief (or of other mental states).

Ascribing Beliefs

To be able to reason about other agents' beliefs a system has to make assumptions about what the agents believe. Normally, relatively few if any of an agent's beliefs are known with certainty, and the system has to "ascribe" beliefs: assume that the agent holds particular beliefs.

People do a similar thing: they can interact even with someone totally unknown by assuming that this person has certain beliefs. People often appear to make a default assumption that other people have beliefs similar to their own. By default, everybody is expected to believe that the earth is round. At the same time a person might have certain beliefs that he or she does not ascribe to others: an astronomer will not generally assume that non-astronomers have the same beliefs about stars as (s)he does.

A system has to know which of its beliefs it can ascribe to other agents and [...]

(ii) if other people similar to P believe B, then ascribing B to P is reasonable.

(iii) if other people similar to P believe something similar in subject matter to B, then ascribing B to P is reasonable.

Thus, analogy might be a good way for a system to improve its belief ascription. Of course, a suitable notion of similarity would need to be developed.
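As a purely illustrative rendering of these ideas, the Python sketch below ascribes the system's own non-specialist beliefs to a target agent by default and adds beliefs held by sufficiently similar agents (heuristic (ii)); heuristic (iii) would additionally need a similarity measure over belief contents, which is omitted. The function names, the similarity measure and the threshold are assumptions, not part of CaseMent or ATT-Meta.

from typing import Callable, Dict, Set

def ascribe(
    own_beliefs: Set[str],
    specialist: Set[str],                     # beliefs not ascribed by default
    known_believers: Dict[str, Set[str]],     # agent -> beliefs we already know they hold
    target: str,
    similarity: Callable[[str, str], float],  # placeholder similarity between agents
    threshold: float = 0.8,
) -> Set[str]:
    """Return the set of beliefs tentatively ascribed to `target`."""
    # Default assumption: the target shares our beliefs, except specialist ones.
    ascribed = set(own_beliefs) - specialist
    # Heuristic (ii): if agents similar to the target hold a belief, ascribe it too.
    for other, beliefs in known_believers.items():
        if other != target and similarity(other, target) >= threshold:
            ascribed |= beliefs
    return ascribed

# Example: an astronomer system does not ascribe its technical beliefs by default.
beliefs = {"the earth is round", "Betelgeuse is a red supergiant"}
specialist = {"Betelgeuse is a red supergiant"}
peers = {"fellow_astronomer": {"Betelgeuse is a red supergiant"}}
same_profession = lambda a, b: 1.0 if "astronomer" in a and "astronomer" in b else 0.0

print(ascribe(beliefs, specialist, peers, "neighbour", same_profession))
# -> {'the earth is round'}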
Introspection: Knowledge of Own Mental Processes

It is useful for an intelligent system to be able to view other agents as reasoning in much the same way as it itself reasons. This view is a useful default, in the absence of specific evidence about how another agent is reasoning. Now, a particularly effective way of embodying this strategy in a system is simulative reasoning. This technique has often been suggested in AI, under various names (see, e.g., Attardi & Simi, 1994; Ballim & Wilks, 1991; Chalupsky, 1993; Creary, 1979; Dinsmore, 1991; Haas, 1986; Konolige, 1986; Moore, 1973); it is also closely related to the Simulation Theory in philosophy and psychology (Davies & Stone, in press).

Simulative reasoning works as follows. When reasoning about another agent X, the system in effect pretends it is agent X by using X's beliefs as if they were its own, and applying its own reasoning mechanism to them. Results of such pretences are then wrapped inside an "X believes that" layer. Simulative reasoning has various advantages over the alternative methods that would otherwise be needed (such as reasoning explicitly about inference steps taken by X). One salient benefit is that it does not require the system to have explicit information about its own reasoning abilities in order to be able to ascribe them to other agents. This advantage is especially strong in a system that has a variety of reasoning methods --- e.g., induction, abduction, analogy-based reasoning, as well as deduction --- or where an individual method is procedurally very complex --- e.g., analogy-based reasoning. (See Haas, 1986, and Barnden, in press, for further discussion and other advantages.) Under simulative reasoning, the system's reasoning methods are automatically and implicitly ascribed to other agents.
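The following is a minimal sketch of simulative reasoning as just described, assuming a trivial forward-chaining rule engine standing in for the system's own reasoning mechanism; it is not the CaseMent or ATT-Meta implementation. The point it illustrates is that the same inference procedure is simply re-run on X's beliefs and the results are wrapped in a "Believes(X, ...)" layer, so no explicit description of that procedure is needed.

def forward_chain(facts, rules):
    """The system's own (deliberately simple) reasoning mechanism:
    rules are (premises, conclusion) pairs applied to closure."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def simulate(agent, agents_beliefs, rules):
    """Pretend to be `agent`: run our own reasoner on the agent's beliefs,
    then report each conclusion inside a 'Believes(agent, ...)' wrapper."""
    conclusions = forward_chain(agents_beliefs[agent], rules)
    return {("Believes", agent, p) for p in conclusions}

# Example: the system's own rules are implicitly ascribed to John.
rules = [(("rain",), "ground wet"), (("ground wet",), "shoes muddy")]
agents_beliefs = {"John": {"rain"}}
for b in sorted(simulate("John", agents_beliefs, rules)):
    print(b)
# ('Believes', 'John', 'ground wet'), ('Believes', 'John', 'rain'), ...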