

Abduction as Belief Revision: A Model of Preferred Explanations

Craig Boutilier and Verónica Becher
Department of Computer Science, University of British Columbia
Vancouver, British Columbia, Canada V6T 1Z2
email: cebly,[email protected]

Abstract

We propose a natural model of abduction based on the revision of the epistemic state of an agent. We require that explanations be sufficient to induce belief in an observation in a manner that adequately accounts for factual and hypothetical observations. Our model will generate explanations that nonmonotonically predict an observation, thus generalizing most current accounts, which require some deductive relationship between explanation and observation. It also provides a natural preference ordering on explanations, defined in terms of normality or plausibility. We reconstruct the Theorist system in our framework, and show how it can be extended to accommodate our predictive explanations and semantic preferences on explanations.

1 Introduction

A number of different approaches to abduction have been proposed in the AI literature that model the concept of abduction as some sort of deductive relation between an explanation and the explanandum, the "observation" it purports to explain (e.g., Hempel's (1966) deductive-nomological explanations). Theories of this type are, unfortunately, bound to the unrelenting nature of deductive inference. There are two directions in which such theories must be generalized. First, we should not require that an explanation deductively entail its observation (even relative to some background theory). There are very few explanations that do not admit exceptions. Second, while there may be many competing explanations for a particular observation, certain of these may be relatively implausible. Thus we require some notion of preference to choose among these potential explanations.

Both of these problems can be addressed using, for example, probabilistic information (Hempel 1966; de Kleer and Williams 1987; Poole 1991; Pearl 1988): we might simply require that an explanation render the observation sufficiently probable and that most likely explanations be preferred. Explanations might thus be nonmonotonic in the sense that α may explain β, but α ∧ γ may not (e.g., P(β|α) may be sufficiently high while P(β|α ∧ γ) may not). There have been proposals to address these issues in a more qualitative manner using "logic-based" frameworks also. Peirce (see Rescher (1978)) discusses the "plausibility" of explanations, as do Quine and Ullian (1970). Consistency-based diagnosis (Reiter 1987; de Kleer, Mackworth and Reiter 1990) uses abnormality assumptions to capture the context dependence of explanations; and preferred explanations are those that minimize abnormalities. Poole's (1989) assumption-based framework captures some of these ideas by explicitly introducing a set of default assumptions to account for the nonmonotonicity of explanations.

We propose a semantic framework for abduction that captures the spirit of probabilistic proposals, but in a qualitative fashion, and in such a way that existing logic-based proposals can be represented as well. Our account will take as central subjunctive conditionals of the form A ⇒ B, which can be interpreted as asserting that, if an agent were to believe A, it would also believe B. This is the cornerstone of our notion of explanation: if believing A is sufficient to induce belief in B, then A explains B. This determines a strong, predictive sense of explanation. Semantically, such conditionals are interpreted relative to an ordering of plausibility or normality over worlds. Our conditional logic, described in earlier work as a representation of belief revision and default reasoning (Boutilier 1991; 1992b; 1992c), has the desired nonmonotonicity and induces a natural preference ordering on sentences (hence explanations). In the next section we describe our conditional logics and the necessary logical preliminaries. In Section 3, we discuss the concept of explanation, its epistemic nature, and its definition in our framework. We also introduce the notion of preferred explanations, showing how the same conditional information used to represent the defeasibility of explanations induces a natural preference ordering. To demonstrate the expressive power of our model, in Section 4 we show how Poole's Theorist framework (and Brewka's (1989) extension) can be captured in our logics. This reconstruction explains semantically the non-predictive and paraconsistent nature of explanations in Theorist. It also illustrates the correct manner in which to augment Theorist with a notion of predictive explanation and how one should capture semantic preferences on explanations. These two abilities have until now been unexplored in this canonical abductive framework. We conclude by describing directions for future research, and how consistency-based diagnosis also fits in our system.

2 Conditionals and Belief Revision

The problem of revising a knowledge base or belief set when new information is learned has been well-studied in AI. One of the most influential theories of belief revision is the AGM theory (Alchourrón, Gärdenfors and Makinson 1985; Gärdenfors 1988). If we take an agent to have a (deductively closed) belief set K, adding new information A to K is problematic if K ⊢ ¬A. Intuitively, certain beliefs in K must be retracted before A can be accepted. The AGM theory provides a set of constraints on acceptable belief revision functions *. Roughly, using K*_A to denote the belief set that results from revising K by A, the postulates constrain the relationship between K, A, and K*_A.

Semantically, we capture revision using models in which possible worlds are ordered according to their plausibility; in a CO-model this ordering is connected, so the worlds form a totally ordered set of clusters of equally plausible worlds. The bimodal language L_B is interpreted over such models with the following truth conditions: $\Diamond\alpha$ ($\overleftarrow{\Diamond}\alpha$) is true at a world if α holds at some more plausible (less plausible) world; $\overleftrightarrow{\Box}\alpha$ ($\overleftrightarrow{\Diamond}\alpha$) holds iff α holds at all (some) worlds, whether more or less plausible.

The modal logic CT4O is a weaker version of CO, where we weaken the condition of connectedness to be simple reflexivity. This logic is based on models whose structure is that of a partially-ordered set of clusters (see Figure 1(a)). Both logics can be extended by requiring that the set of worlds in a model include every propositional valuation over the propositional language; the resulting logics are denoted CO* and CT4O*.

[Figure 1: (a) a CT4O-model and (b) a CO-model. Worlds are arranged from less to more plausible; the shaded regions are min(A).]

The conditional A ⇒ B holds in a model just when B is true at every most plausible A-world; we denote this set of worlds min(A).¹ Both models in Figure 1 satisfy A ⇒ B, since B holds at each world in the shaded regions, min(A), of the models. Using the Ramsey test for acceptance of conditionals (Stalnaker 1968), we equate B ∈ K*_A with M ⊨ A ⇒ B. Indeed, for both models we have that K*_A = Cn(A, B). If the model in question is a CO*-model then this characterization of revision is equivalent to the AGM model (Boutilier 1992b).

¹We assume, for simplicity, that such a (limiting) set exists for each α ∈ L_CPL, though the following technical developments do not require this (Boutilier 1992b).
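To make these truth conditions concrete, the following minimal sketch (ours, not part of the paper's formulation) represents a small CO-style model in Python: worlds are propositional valuations with a plausibility rank, A ⇒ B is checked by testing B at the most plausible A-worlds, and revision is read off via the Ramsey test. The atoms and the ranking are hypothetical.

```python
# Illustrative sketch only: a finite, totally ranked set of worlds standing in
# for a CO-model.  Rank 0 = most plausible.  The atoms and ranking are made up.
from itertools import product

ATOMS = ("A", "B")

def all_worlds():
    """Every truth assignment over ATOMS, each represented as a dict."""
    return [dict(zip(ATOMS, vals)) for vals in product([False, True], repeat=len(ATOMS))]

# Hypothetical plausibility ranking: the agent currently believes ~A and B.
def rank(w):
    if not w["A"] and w["B"]:
        return 0          # the agent's epistemically possible worlds ||K||
    if w["A"] and w["B"]:
        return 1
    return 2

def min_worlds(prop):
    """min(prop): the most plausible worlds satisfying prop (assumed satisfiable)."""
    sat = [w for w in all_worlds() if prop(w)]
    best = min(rank(w) for w in sat)
    return [w for w in sat if rank(w) == best]

def conditional(a, b):
    """M |= a => b  iff  b holds at every world in min(a)."""
    return all(b(w) for w in min_worlds(a))

def in_revised_set(a, query):
    """Ramsey test: query is in K*_a exactly when a => query holds in M."""
    return conditional(a, query)

A = lambda w: w["A"]
B = lambda w: w["B"]

print(conditional(A, B))        # True: the most plausible A-worlds satisfy B
print(in_revised_set(A, B))     # True: B belongs to K*_A
print(in_revised_set(A, lambda w: not w["B"]))   # False
```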

Simply using CT4O*, the model satisfies all AGM postulates (Gärdenfors 1988) but the eighth. Properties of this conditional logic are described in Boutilier (1990; 1991).

We briefly describe the contraction of K by ¬A in this semantic framework. To retract belief in ¬A, we adopt the belief state determined by the set of worlds ||K|| ∪ min(A). The belief set K⁻_¬A does not contain ¬A, and this operation captures the AGM model of contraction. In Figure 1(a), K⁻_¬A = Cn(B), while in Figure 1(b), K⁻_¬A = Cn(A ⊃ B).

A key distinction between CT4O- and CO-models is illustrated in Figure 1: in a CO-model, all worlds in min(A) must be equally plausible, while in CT4O this need not be the case. Indeed, the CT4O-model shown has two maximally plausible sets of A-worlds (the shaded regions), yet these are incomparable. We denote the set of such incomparable subsets of min(A) by PI(A), so that min(A) = ∪PI(A).² Taking each such subset to be a plausible revised state of affairs rather than their union, we can define a weaker notion of revision using the following connective. It reflects the intuition that at some element of PI(A), C holds:

$(A \rightarrow C) \equiv_{df} \overleftrightarrow{\Box}\neg A \vee \overleftrightarrow{\Diamond}(A \wedge \Box(A \supset C))$

The model in Figure 1(a) shows the distinction: it satisfies neither A ⇒ C nor A ⇒ ¬C, but both A → C and A → ¬C. There is a set of comparable most plausible A-worlds that satisfies C and one that satisfies ¬C. Notice that this connective is paraconsistent in the sense that both C and ¬C may be "derivable" from A, but C ∧ ¬C is not. However, → and ⇒ are equivalent in CO, since min(A) must lie within a single cluster.

Finally, we define the plausibility of a proposition. A is at least as plausible as B just when, for every B-world w, there is some A-world that is at least as plausible as w. This is expressed in L_B as $\overleftrightarrow{\Box}(B \supset \Diamond A)$. If A is (strictly) more plausible than B, then as we move away from ||K|| we will find an A-world before a B-world; thus, A is qualitatively "more likely" than B. In each model in Figure 1, A ∧ B is more plausible than A ∧ ¬B.

²PI(A) = {min(A) ∩ C : C is a cluster}.
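The following sketch (again ours, with invented world names and an invented partial ordering) illustrates the connectives just defined on a small CT4O-style model: the most plausible A-worlds split into two incomparable clusters, so neither A ⇒ C nor A ⇒ ¬C holds while both A → C and A → ¬C do, and the qualitative plausibility comparison is checked directly from its definition.

```python
# Illustrative sketch only (our own encoding, not the paper's): a CT4O-style
# model in which min(A) splits into two incomparable clusters, so => fails for
# both C and ~C while the weaker -> holds for both.  Names and ordering are
# hypothetical.
from itertools import product

ATOMS = ("A", "C")
WORLDS = {
    "w1": {"A": True,  "C": True},
    "w2": {"A": True,  "C": False},
    "w3": {"A": False, "C": True},
    "w4": {"A": False, "C": False},
}

# (u, v) in LEQ means "u is at least as plausible as v".  w3 and w4 form the
# most plausible cluster; w1 and w2 are each strictly less plausible and
# incomparable to one another (a partial, not total, preorder).
LEQ = {(u, u) for u in WORLDS}
LEQ |= {("w3", "w4"), ("w4", "w3")}
LEQ |= {(u, v) for u in ("w3", "w4") for v in ("w1", "w2")}

def leq(u, v):
    return (u, v) in LEQ

def min_worlds(prop):
    """min(prop): prop-worlds with no strictly more plausible prop-world."""
    sat = [u for u, w in WORLDS.items() if prop(w)]
    return [u for u in sat
            if not any(leq(v, u) and not leq(u, v) for v in sat)]

def clusters(ws):
    """PI: split a set of worlds into mutually comparable groups."""
    out = []
    for u in ws:
        for c in out:
            if all(leq(u, v) and leq(v, u) for v in c):
                c.append(u)
                break
        else:
            out.append([u])
    return out

def strong(a, c):
    """A => C: C holds at every world in min(A)."""
    return all(c(WORLDS[u]) for u in min_worlds(a))

def weak(a, c):
    """A -> C: C holds throughout some element of PI(A)."""
    return any(all(c(WORLDS[u]) for u in cl) for cl in clusters(min_worlds(a)))

def at_least_as_plausible(a, b):
    """Every b-world is matched by a weakly more plausible a-world."""
    a_ws = [u for u, w in WORLDS.items() if a(w)]
    b_ws = [u for u, w in WORLDS.items() if b(w)]
    return all(any(leq(u, v) for u in a_ws) for v in b_ws)

A = lambda w: w["A"]
C = lambda w: w["C"]
notC = lambda w: not w["C"]

print(strong(A, C), strong(A, notC))   # False False
print(weak(A, C), weak(A, notC))       # True True  (paraconsistent behaviour)
print(at_least_as_plausible(C, A))     # True: a most plausible world satisfies C
```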
3 Epistemic Explanations

Often explanations are postulated relative to some background theory, which together with the explanation entails the observation. Our notion of explanation will be somewhat different than the usual ones. We define an explanation relative to the epistemic state of some agent (or program). An agent's beliefs and judgements of plausibility will be crucial in its evaluation of what counts as a valid explanation (see Gärdenfors (1988)). We assume a deductively closed belief set K along with some set of conditionals that represent the revision policies of the agent. These conditionals may represent statements of normality or simply subjunctives (below).

There are two types of sentences that we may wish to explain: beliefs and non-beliefs. If β is a belief held by the agent, it requires a factual explanation, some other belief α that might have caused the agent to accept β. This type of explanation is clearly crucial in most reasoning applications. An intelligent program will provide conclusions of various types to a user; but a user should expect a program to be able to explain how it reached such a "belief," to justify its reasoning. The explanation should clearly be given in terms of other (perhaps more fundamental) beliefs held by the program. This applies to advice-systems, intelligent databases, tutorial systems, or a robot that must explain its actions.

A second type of explanation is hypothetical. Even if β is not believed, we may want a hypothetical explanation for it, some new belief the agent could adopt that would be sufficient to ensure belief in β. This counterfactual reading turns out to be quite important in AI, for instance, in diagnosis tasks (see below), planning, and so on (Ginsberg 1986). For example, if A explains B in this sense, it may be that ensuring A will bring about B. If α is to count as an explanation of β in this case, we must insist that α is also not believed. If it were, it would hardly make sense as a predictive explanation, for the agent has already adopted belief in α without committing to β. This leads us to the following condition on epistemic explanations: if α is an explanation for β, then α and β must have the same epistemic status for the agent. In other words, α ∈ K iff β ∈ K, and ¬α ∈ K iff ¬β ∈ K.³

Since our explanations are to be predictive, there has to be some sense in which α is sufficient to cause acceptance of β. On our interpretation of conditionals (using the Ramsey test), this is the case just when the agent accepts the conditional α ⇒ β. So for α to count as an explanation of β (in this predictive sense, at least) this conditional relation must hold.⁴ In other words, if the explanation were believed, so too would the observation.

Unfortunately, this conditional is vacuously satisfied when β is believed, once we adopt the requirement that α be believed too. Any α ∈ K is such that α ⇒ β; but surely arbitrary beliefs cannot count as explanations. To determine an explanation for some β ∈ K, we want to (hypothetically) suspend belief in β and, relative to this new belief state, evaluate the conditional α ⇒ β.

³This is at odds with one prevailing view of explanation, which takes only non-beliefs to be valid explanations: to offer a current belief α as an explanation is uninformative; abduction should be an "inference process" allowing the derivation of new beliefs. We take a somewhat different view, assuming that observations are not (usually) accepted into a belief set until some explanation is found and accepted. In the context of its other beliefs, β is unexpected. An explanation relieves this dissonance when it is accepted (Gärdenfors 1988). After this process both explanation and observation are believed. Thus, the abductive process should be understood in terms of hypothetical explanations: when it is realized what could have caused belief in an (unexpected) observation, both observation and explanation are incorporated. Factual explanations are retrospective in the sense that they (should) describe "historically" what explanation was actually adopted for a certain belief. In (Becher and Boutilier 1993) we explore a weakening of this condition on epistemic status. Preferences on explanations (see below) then play a large role in ruling out any explanation whose epistemic status differs from that of the observation.

⁴See below for a discussion of non-predictive explanations.

This hypothetical belief state should simply be the contraction of K by β. The contracted belief set K⁻_β is constructed as described in the last section. We can think of it as the set of beliefs held by the agent before it came to accept β.⁵ In general, the conditionals an agent accepts relative to the contracted set need not bear a strong relation to those in the original set. Fortunately, we are only interested in those conditionals α ⇒ β where α ∈ K. The AGM contraction operation ensures that ¬α ∉ K⁻_β. This means that we can determine the truth of α ⇒ β relative to K⁻_β.

[Figure 2: worlds arranged from less to more plausible.]

Explanations defined in this way are defeasible: wet grass (W) is explained by rain (R); but R together with C (the lawn is covered) does not explain wet grass. The preferred explanations of β are those α such that no other explanation of β is more plausible.
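To illustrate this defeasibility, the following sketch (ours) encodes a hypothetical ranking for the wet grass atoms R, C and W and tests candidate explanations against the two conditions discussed above, same epistemic status and α ⇒ β. For a belief that is already held one would first contract by it, as just described; the sketch sidesteps that step by starting from a state in which W is not yet believed.

```python
# Illustrative sketch only: R = rain, C = lawn covered, W = wet grass.  Rain
# normally wets the grass, but rain on a covered lawn normally does not, so
# R => W holds while R & C => W fails.  The ranking is hypothetical.
from itertools import product

ATOMS = ("R", "C", "W")

def all_worlds():
    return [dict(zip(ATOMS, v)) for v in product([False, True], repeat=len(ATOMS))]

# Hypothetical plausibility ranking (0 = most plausible).  The agent believes
# only that the lawn is uncovered (~C) and is agnostic about R and W.
def rank(w):
    r, c, wet = w["R"], w["C"], w["W"]
    if not c and not (r and not wet):
        return 0
    if not wet:
        return 1          # remaining dry-grass worlds
    return 2              # wet grass under a covered lawn: least plausible

def believes(prop):
    """prop is in K iff it holds at every most plausible (rank-0) world."""
    return all(prop(w) for w in all_worlds() if rank(w) == 0)

def min_worlds(prop):
    sat = [w for w in all_worlds() if prop(w)]
    best = min(rank(w) for w in sat)
    return [w for w in sat if rank(w) == best]

def conditional(a, b):
    """a => b: b holds at all most plausible a-worlds."""
    return all(b(w) for w in min_worlds(a))

def neg(p):
    return lambda w: not p(w)

def explains(alpha, beta):
    """Same epistemic status as beta, and alpha => beta."""
    same_status = (believes(alpha) == believes(beta)
                   and believes(neg(alpha)) == believes(neg(beta)))
    return same_status and conditional(alpha, beta)

R = lambda w: w["R"]
C = lambda w: w["C"]
W = lambda w: w["W"]
RC = lambda w: w["R"] and w["C"]

print(explains(R, W))            # True:  rain explains wet grass
print(explains(RC, W))           # False: rain on a covered lawn does not
print(conditional(RC, neg(W)))   # True:  it predicts dry grass instead
```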

Pragmatics: We note that β is always an explanation for itself. Indeed, semantically β is as good as any other explanation, for if one is convinced of this trivial explanation, one is surely convinced of the proposition to be explained. There are many circumstances in which such an explanation is reasonable (for instance, explaining the value of a root node in a causal network); otherwise we would require infinite regress or circular explanations.

The undesirability of such trivial explanations, in certain circumstances, is not due to a lack of predictive power or plausibility, but rather to their uninformative nature. We think it might be useful to rule out trivial explanations as a matter of the pragmatics of explanation rather than semantics, much like Gricean maxims (but see also Levesque (1989)). But we note that in many cases trivial (or overly specific) explanations may be desirable. We discuss this and other pragmatic issues (e.g., irrelevance) in the full paper (Becher and Boutilier 1993). We note that in typical approaches to diagnosis this problem does not arise. Diagnoses are usually selected from a pre-determined set of conjectures or component failures. This can be seen as simply another form of pragmatic filtering, and can be applied to our model of abduction (see below).

We should point out that we do not require a complete semantic model M to determine explanations. For a given incomplete theory, one can simply use the derivable conditionals to determine derivable explanations and preferences. This paper simply concentrates on the semantics of this process. All conditions on explanations can be tested as object-level queries on an incomplete KB. However, should one have in mind a complete ordering of plausibility (as in the next section), these can usually be represented as a compact object-level theory as well (Boutilier 1991).

Other issues arise with this semantic notion of explanation. Consider the wet grass example, and the following conditionals: R ⇒ W, S ⇒ W and S ∧ R ⇒ W (note that the third does not follow from the others). We may be in a situation where rain is preferred to sprinkler as an explanation for wet grass (it is more likely). But we might be in a situation where R and S are equally plausible explanations.⁹ We might then have W → (S ≡ ¬R). That is, S and R are the only plausible "causes" for W (and are mutually exclusive). Notice that S ≡ ¬R is a preferred explanation for W, as is S ∨ R. We say α is a covering explanation for β iff α is a preferred explanation such that β ⇒ α. Such an α represents all preferred explanations for β¹⁰ (either my sprinkler was on or it rained, but it's unlikely that my sprinkler was on in the rain).

If we want to represent the fact, say, that the failure of fewer components is more plausible than more failures, we simply rank worlds accordingly. Preferred explanations of β are those that predict β and presume as few faults as possible.⁷ We can characterize preferred explanations by appealing to their "believability" given β:

Proposition 3 α is a preferred explanation for β iff M ⊨ ¬(β → ¬α).

In the next section, we discuss the role of → further. This approach to preferred explanations is very general, and is completely determined by the conditionals (or defaults) held by an agent.⁸ We needn't restrict the ordering to, say, counting component failures. It can be used to represent any notion of typicality, normality or plausibility required. For instance, we might use this model of abduction in scene interpretation to "explain" the occurrence of various image objects by the presence of actual scene objects (Reiter and Mackworth 1989). Preferred explanations are those that match the data best. However, we can also introduce an extra level of preference to capture preferred interpretations, those scenes that are most likely in a given domain among those with the best fit.

4 Reconstructing Theorist

Poole's (1989) Theorist system is an assumption-based model of explanation and prediction where observations are explained (or predicted) by adopting certain hypotheses that, together with known facts, entail these observations. We illustrate the naturalness and generality of our abductive framework by recasting Theorist in our model. It shows why Theorist explanations are paraconsistent and non-predictive, how they can be made predictive, and how a natural account of preferred explanation can be introduced to Theorist (and Brewka's (1989) extension of it). Our presentation of Theorist will be somewhat more general than that found in (Poole 1989), but unchanged in essential detail.

We assume the existence of a set 𝒟 of defaults, a set of propositional formulae taken to be "expectations," or facts that normally hold (Boutilier 1992c). We assume 𝒟 is consistent.¹¹ Given a fixed set of defaults, we are interested in what follows from a given (known) finite set of facts ℱ; we use F to denote its conjunction. A scenario for ℱ is any subset D of 𝒟 such that ℱ ∪ D is consistent. An extension of ℱ is any maximal scenario. An explanation of β given ℱ is any α such that {α} ∪ ℱ ∪ D ⊨ β for some scenario D of {α} ∪ ℱ.¹² Finally, β is predicted given ℱ iff ℱ ∪ D ⊨ β for each extension D of ℱ.

⁷In consistency-based systems, explanations usually do not predict an observation without adequate fault models (more on this in the concluding section).
⁸Direct statements of belief, relative plausibility, integrity constraints, etc. in L_B may also be in an agent's KB.
⁹We can ensure that R ∧ S is less likely, e.g., by asserting S ⇒ ¬R and R ⇒ ¬S.
¹⁰Space limitations preclude a full discussion (see (Becher and Boutilier 1993)), but we might think of a covering explanation as the disjunction of all likely causes of β in a causal network (Pearl 1988). We are currently investigating causal explanations in our conditional framework and how a theory might be used to derive causal influences (Lewis 1973; Goldszmidt and Pearl 1992).
¹¹Nothing crucial depends on this however.
¹²Theorist explanations are usually drawn from a given set of conjectures, but this is not crucial.
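These definitions can be read off directly by brute force over propositional valuations. The following sketch (ours) does so for a two-default theory, using the key/battery defaults discussed below; formulas are encoded as Python predicates, and consistency and entailment are checked by enumeration.

```python
# Illustrative sketch only: the Theorist definitions just given, read off by
# brute force over propositional valuations.  Formulas are Python predicates;
# the two defaults are the key/battery defaults used below in the text.
from itertools import product, combinations

ATOMS = ("T", "B", "S")            # turn the key, battery dead, car starts

def valuations():
    return [dict(zip(ATOMS, v)) for v in product([False, True], repeat=len(ATOMS))]

def consistent(fmls):
    """Some valuation satisfies every formula in fmls."""
    return any(all(f(v) for f in fmls) for v in valuations())

def entails(fmls, goal):
    """goal holds in every valuation satisfying fmls."""
    return all(goal(v) for v in valuations() if all(f(v) for f in fmls))

DEFAULTS = [lambda v: (not v["T"]) or v["S"],                     # T -> S
            lambda v: not (v["T"] and v["B"]) or not v["S"]]      # T & B -> ~S

def scenarios(facts):
    """Subsets of DEFAULTS consistent with the facts."""
    return [list(sub) for r in range(len(DEFAULTS) + 1)
            for sub in combinations(DEFAULTS, r)
            if consistent(facts + list(sub))]

def extensions(facts):
    """Maximal scenarios of the facts."""
    scs = scenarios(facts)
    return [s for s in scs
            if not any(set(map(id, s)) < set(map(id, t)) for t in scs)]

def theorist_explains(alpha, beta, facts):
    """Some scenario of facts + {alpha}, together with them, entails beta."""
    return any(entails(facts + [alpha] + s, beta)
               for s in scenarios(facts + [alpha]))

def predicted(beta, facts):
    """beta follows from every extension of the facts."""
    return all(entails(facts + e, beta) for e in extensions(facts))

B = lambda v: v["B"]
T = lambda v: v["T"]
S = lambda v: v["S"]
notS = lambda v: not v["S"]
F = [B]                              # the battery is dead

print(theorist_explains(T, S, F))    # True:  T explains S given B
print(theorist_explains(T, notS, F)) # True:  T also explains ~S given B
print(predicted(S, F + [T]))         # False: S is not predicted from {B, T}
print(predicted(notS, F + [T]))      # False: neither is ~S
```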

In the definition of prediction in Theorist, we find an implicit notion of plausibility: we expect some maximal subset of defaults, consistent with ℱ, to hold. Worlds that violate more defaults are thus less plausible than those that violate fewer. We define a CT4O*-model that reflects this.

Definition For a fixed set of defaults 𝒟 and a possible world (valuation) w, the violation set for w is defined as V(w) = {d ∈ 𝒟 : w ⊨ ¬d}. The Theorist model for 𝒟 is M_D = ⟨W, ≤, φ⟩, where W and φ are as usual, and ≤ orders worlds according to their violation sets, so that worlds violating fewer defaults are more plausible.

Taking those α-worlds that satisfy as many defaults as possible to be the most plausible or typical α-worlds, it is clear that a Theorist explanation of β requires only that there exists some set of defaults that, together with α, entails β. This means that α might explain both β and ¬β. Such explanations are in a sense paraconsistent, for α cannot usually be used to explain the conjunction β ∧ ¬β. Furthermore, such explanations are not predictive: if α explains contradictory sentences, how can it be thought to predict either? Consider a set of defaults in Theorist which assert that my car will start (S) when I turn the key (T), unless my battery is dead (B): 𝒟 = {T ⊃ S, T ∧ B ⊃ ¬S}. The Theorist model M_D is shown in Figure 3. Suppose our set of facts ℱ has a single element B. When asked to explain S, Theorist will offer T. When asked to explain ¬S, Theorist will again offer T. If I want my car to start I should turn the key, and if I do not want my car to start I should turn the key. There is certainly something unsatisfying about such a notion of explanation. Such explanations do, however, correspond precisely to weak explanations in CT4O using →.

Theorem 5 α is a Theorist explanation of β given ℱ iff M_D ⊨ α ∧ F → β.

[Figure 3: A Theorist model.]

This illustrates the conditional and defeasible semantic underpinnings of Theorist's weak (paraconsistent) explanations in the conditional framework. In our model, the notion of predictive explanation seems much more natural. In the Theorist model above, there is a possibility that T ∧ B gives S and a possibility that T ∧ B gives ¬S. Therefore, T (given B) explains neither possibility. One cannot use the explanation to ensure belief in the "observation" S. We can use our notion of predictive explanation to extend Theorist with this capability. Clearly, predictive explanations in the Theorist model M_D give us:

Definition α is a predictive explanation for β given ℱ iff β is predicted (in the Theorist sense) given ℱ ∪ {α}.

Theorem 6 α is a predictive explanation for β given ℱ iff M_D ⊨ α ∧ F ⇒ β (i.e., iff ℱ ∪ D ∪ {α} ⊨ β for each extension D of ℱ ∪ {α}).

So the notion of preference defined for our concept of epistemic explanations induces a preference in Theorist for predictive explanations that are consistent with the greatest subsets of defaults; that is, those explanations that are most plausible or most normal (see Konolige (1992), who proposes a similar notion).

This embedding into CT4O provides a compelling semantic account of Theorist in terms of plausibility and belief revision. But it also shows directions in which Theorist can be naturally extended, in particular with predictive explanations and with preferences on semantic explanations, notions that have largely been ignored in assumption-based explanation.
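The following sketch (ours) builds a Theorist model for the car example by ordering worlds by set inclusion of their violation sets, which is one natural way to realize the definition above, and checks the weak and strong conditionals of Theorems 5 and 6: given B, T weakly "explains" both S and ¬S, but predictively explains neither.

```python
# Illustrative sketch only: a Theorist model M_D for the car example, with
# worlds ordered by set inclusion of their violation sets (one natural reading
# of the definition above).  The weak connective -> asks whether the consequent
# holds throughout some maximally plausible cluster of antecedent-worlds; =>
# requires it at all most plausible antecedent-worlds.
from itertools import product

ATOMS = ("T", "B", "S")
DEFAULTS = {"T->S":    lambda v: (not v["T"]) or v["S"],
            "T&B->~S": lambda v: not (v["T"] and v["B"]) or not v["S"]}

def worlds():
    return [dict(zip(ATOMS, vals)) for vals in product([False, True], repeat=len(ATOMS))]

def violations(w):
    """V(w): the defaults falsified by valuation w."""
    return frozenset(name for name, d in DEFAULTS.items() if not d(w))

def leq(u, w):
    """u is at least as plausible as w: it violates no more than w does."""
    return violations(u) <= violations(w)

def min_worlds(prop):
    """Most plausible prop-worlds under the violation-set ordering."""
    sat = [w for w in worlds() if prop(w)]
    return [w for w in sat
            if not any(leq(u, w) and not leq(w, u) for u in sat)]

def strong(a, c):
    """M_D |= a => c."""
    return all(c(w) for w in min_worlds(a))

def weak(a, c):
    """M_D |= a -> c: c holds throughout some cluster of min(a)."""
    clusters = {}
    for w in min_worlds(a):
        clusters.setdefault(violations(w), []).append(w)
    return any(all(c(w) for w in cl) for cl in clusters.values())

TB = lambda v: v["T"] and v["B"]     # alpha & F with alpha = T and F = {B}
S = lambda v: v["S"]
notS = lambda v: not v["S"]

print(weak(TB, S), weak(TB, notS))      # True True:   T weakly explains both
print(strong(TB, S), strong(TB, notS))  # False False: it predicts neither
```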

In (Becher and Boutilier 1993) we show how these ideas apply to Brewka's (1989) prioritized extension of Theorist by ordering worlds in such a way that the prioritization relation among defaults is accounted for. If we have a prioritized default theory 𝒟 = D₁ ∪ ⋯ ∪ Dₙ, we still cluster worlds according to the defaults they violate; but should w violate fewer high-priority defaults than v, even if it violates more low-priority defaults, w is considered more plausible than v. This too results in a CT4O*-model; and prediction, (weak and predictive) explanation, and preference on explanations are all definable in the same fashion as with Theorist. We also show that priorities on defaults, as proposed by Brewka, simply prune away certain weak explanations and make others preferred (possibly adding predictive explanations). For instance, the counterintuitive explanation T above, for S given B, is pruned away if we require that the default T ⊃ S be given lower priority than the default T ∧ B ⊃ ¬S. A model for such a prioritized theory simply makes the world TBS less plausible than TB¬S. We note, however, that such priorities need not be provided explicitly if the Theorist model is abandoned and defaults are expressed directly as conditionals. This preference is derivable in CT4O from the conditionals T ⇒ S and T ∧ B ⇒ ¬S automatically.

5 Concluding Remarks

We have proposed a notion of epistemic explanation based on belief revision, and preferences over these explanations using the concept of plausibility. We have shown how Theorist can be captured in this framework. In (Becher and Boutilier 1993), we show how this model can be axiomatized. We can also capture consistency-based diagnosis in our framework, though it does not usually require that explanations be predictive in the sense we describe. Instead, consistency-based diagnosis is characterized in terms of "might" counterfactuals, or excuses that make an observation plausible, rather than likely (Becher and Boutilier 1993). Of course, fault models describing how failures are manifested in system behavior make explanations more predictive, in our strong sense. However, the key feature of this approach is not its ability to represent existing models of diagnosis, but its ability to infer explanations, whether factual or hypothetical, from existing conditional (or default) knowledge. We are also investigating the role of causal explanations in abduction, and how one might distinguish causal from non-causal explanations using only conditional information.

Acknowledgements: Thanks to David Poole for helpful comments. This research was supported by NSERC Research Grant OGP0121843.

References

Alchourrón, C., Gärdenfors, P., and Makinson, D. 1985. On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic, 50:510-530.

Becher, V. and Boutilier, C. 1993. Epistemic explanations. Technical report, University of British Columbia, Vancouver. Forthcoming.

Boutilier, C. 1991. Inaccessible worlds and irrelevance: Preliminary report. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pages 413-418, Sydney.

Boutilier, C. 1992a. Conditional logics for default reasoning and belief revision. Technical Report KRR-TR-92-1, University of Toronto, Toronto. Ph.D. thesis.

Boutilier, C. 1992b. A logic for revision and subjunctive queries. In Proceedings of the Tenth National Conference on Artificial Intelligence, pages 609-615, San Jose.

Boutilier, C. 1992c. Normative, subjunctive and autoepistemic defaults: Adopting the Ramsey test. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, pages 685-696, Cambridge.

Brewka, G. 1989. Preferred subtheories: An extended logical framework for default reasoning. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 1043-1048, Detroit.

de Kleer, J., Mackworth, A. K., and Reiter, R. 1990. Characterizing diagnoses. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 324-330, Boston.

de Kleer, J. and Williams, B. C. 1987. Diagnosing multiple faults. Artificial Intelligence, 32:97-130.

Gärdenfors, P. 1988. Knowledge in Flux: Modeling the Dynamics of Epistemic States. MIT Press, Cambridge.

Ginsberg, M. L. 1986. Counterfactuals. Artificial Intelligence, 30(1):35-79.

Goldszmidt, M. and Pearl, J. 1992. Rank-based systems: A simple approach to belief revision, belief update, and reasoning about evidence and actions. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, pages 661-672, Cambridge.

Hempel, C. G. 1966. Philosophy of Natural Science. Prentice-Hall, Englewood Cliffs, NJ.

Konolige, K. 1992. Using default and causal reasoning in diagnosis. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, pages 509-520, Cambridge.

Levesque, H. J. 1989. A knowledge level account of abduction. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 1061-1067, Detroit.

Lewis, D. 1973. Causation. Journal of Philosophy, 70:556-567.

Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo.

Poole, D. 1989. Explanation and prediction: An architecture for default and abductive reasoning. Computational Intelligence, 5:97-110.

Poole, D. 1991. Representing diagnostic knowledge for probabilistic Horn abduction. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pages 1129-1135, Sydney.

Quine, W. and Ullian, J. 1970. The Web of Belief. Random House, New York.

Reiter, R. 1987. A theory of diagnosis from first principles. Artificial Intelligence, 32:57-95.

Reiter, R. and Mackworth, A. K. 1989. A logical framework for depiction and image interpretation. Artificial Intelligence, 41:125-155.

Rescher, N. 1978. Peirce's Philosophy of Science: Critical Studies in his Theory of Induction and Scientific Method. University of Notre Dame Press, Notre Dame.

Stalnaker, R. C. 1968. A theory of conditionals. In Harper, W., Stalnaker, R., and Pearce, G., editors, Ifs, pages 41-55. D. Reidel, Dordrecht, 1981.
