Causality in the Mind: Estimating Contextual and Conjunctive Power
This excerpt from Explanation and Cognition (Frank C. Keil and Robert A. Wilson, editors; © 2000 The MIT Press) is provided in screen-viewable form for personal use only by members of MIT CogNet. Unauthorized use or dissemination of this information is expressly forbidden. If you have any questions about this material, please contact [email protected].

9 Causality in the Mind: Estimating Contextual and Conjunctive Causal Power

Patricia W. Cheng

I would like to argue that humans, and perhaps all species capable of flexibly adaptive goal-directed actions, are born with a conviction that they use for inferring cause-and-effect relations in the world. This conviction—that entities and events may have causal powers with respect to other entities or events—provides a simple framework that enables reasoners to incrementally construct a picture of causal relations in a complex world. In the reasoner's mind, causal powers are invariant properties of relations that allow the prediction of the consequences of actions regardless of the context in which the action occurs, with "context" being the background causes of an effect (those other than the candidate causes) that happen to occur in a situation. The reasoner's goal is to infer these powers. The causal power scheme, however, is coherent only if the inferred powers can be tested in contexts other than the one in which they are inferred. When predictions based on simple causal powers fail, reasoners may be motivated to evaluate conjunctive causal power. All conventional statistical measures of independence, none of which is based on a causal power analysis, contradict measures of conjunctive causal power (Novick and Cheng 1999).

As Hume (1739) noted, because causal relations are neither deducible nor observable, such relations (unless innately known) must be inferred from observable events—the ultimate source of information about the world. (Observable input includes introspection as well as external sensory input.)
This constraint poses a problem for all accounts of causal induction, the process by which reasoners come to know causal relations among entities or events: observable characteristics typical of causation do not always imply causation. One salient example of this type is covariation—a relation between variations in the values of a candidate cause and those in the effect (for critiques of the covariation view of untutored causal induction, see Cheng 1993, 1997; and Glymour and Cheng 1998). For example, the fact that a rooster's crowing covaries with sunrise (the sun rises more often soon after a rooster on a farm crows than at other times of the day when the rooster does not crow) does not imply that the crowing causes sunrise. Nor does the absence of covariation between a candidate cause and an effect, even when alternative causes are controlled, imply the lack of causal power of the candidate. Consider an experiment testing whether a candidate cause prevents an effect. Suppose alternative causes of the effect are controlled, and the effect occurs neither in the presence of the candidate cause nor in its absence, so that there is no covariation between the candidate cause and the effect. For example, suppose headaches never occur in patients who are given a potential drug for relieving headaches (those in the experimental condition of an experiment) or in patients who are given a placebo (those in the control condition). The reasoner would not be able to conclude from this absence of covariation that the candidate cause does not prevent the effect—that the drug does not relieve headaches.
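The headache example can be made concrete with a small simulation (the numbers and function names here are hypothetical illustrations, not from the chapter): even a drug whose true preventive power is 0.9 produces no covariation at all when the background rate of the effect is zero.

```python
import random

random.seed(1)

def trial(drug_given, base_rate=0.0, preventive_power=0.9):
    """One patient: a headache arises from background causes with
    probability base_rate; the drug (if given) stops an arising headache
    with probability preventive_power. All numbers are hypothetical."""
    headache = random.random() < base_rate
    if headache and drug_given and random.random() < preventive_power:
        headache = False
    return headache

n = 10_000
p_treat = sum(trial(True) for _ in range(n)) / n    # P(headache | drug)
p_control = sum(trial(False) for _ in range(n)) / n  # P(headache | placebo)
print(p_treat - p_control)  # 0.0: no covariation, yet the drug's power is 0.9
```

With a base rate of zero, the drug never gets a chance to act, so the observed contrast is exactly zero regardless of the drug's true strength.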
9.1 Scope

This chapter concerns the discovery of causal relations involving candidate causes and effects that are represented by binary variables (or by other types of variables that can be recoded into that form); in particular, for variables representing candidate causes, those for which the two values respectively indicate a potentially causal state (typically, the presence of a factor) and a noncausal state (typically, the absence of the factor). An example of a causal question within this domain is, does striking a match cause it to light? All analyses presented here address causal strength (the magnitude of the causal relation), separating it from statistical reliability (reason for confidence in the estimated magnitude) by assuming the latter. This chapter is consistent with, but does not address, the influence of prior causal knowledge on subsequent causal judgments (e.g., as in the rooster example; for an account of such judgments, see Lien and Cheng forthcoming; for the debate regarding whether such influence contradicts the covariation view, see Ahn and Kalish, chap. 8, this volume; Bullock, Gelman, and Baillargeon 1982; Cheng 1993; Glymour, chap. 7, this volume; Glymour and Cheng 1998; Harré and Madden 1975; Koslowski 1996; Shultz 1982; and White 1995).

9.2 The Power PC Theory

To solve the problem of inferring causation from observable input, I proposed the power PC theory (Cheng 1997)—a causal power theory of the probabilistic contrast model (Cheng and Novick 1990)—according to which reasoners interpret covariation between two variables, one perceived to occur before the other, in terms of a probably innate a priori framework that postulates the possible existence of general types of causes (cf. Kant 1781).
In this framework, covariation (a function defined in terms of observable events) is to causal power (an unobservable entity) as a scientist's observed regularity is to that scientist's theory explaining the regularity. The theory adopts Cartwright's proposal (1989) that causal powers can be expressed as probabilities with which causes influence the occurrence of an effect. According to this theory, to evaluate the causal power of a candidate cause i to influence effect e, reasoners make a distinction between i and the composite of (known and unknown) causes of e alternative to i, which I label a, and they explain covariation defined in terms of observable frequencies by the unobservable powers of i and a. For example, when e occurs at least as often after i occurs as when i does not occur, so that

ΔP_i = P(e|i) − P(e|ī) ≥ 0,

reasoners entertain whether i produces e; P(e|i) is the probability of e given that i occurs, and P(e|ī) is that probability given that i does not occur. Both conditional probabilities are directly estimable from observable frequencies, and ΔP_i is the probabilistic contrast between i and e, a measure of covariation. To evaluate whether i produces e, reasoners assume that i may produce e with some probability, and explain P(e|i) by the probability of the union of two events: (1) e produced by i; and (2) e produced by a if a occurs in the presence of i. That is, they reason that when i is present, e can be produced by i, or by a if a occurs in the presence of i. Likewise, they explain P(e|ī) by how often e is produced by a alone when a occurs in the absence of i.
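Because both conditional probabilities are directly estimable from observed frequencies, so is ΔP_i. A minimal Python sketch (the counts are illustrative, not from the chapter):

```python
def delta_p(n_e_with_i, n_with_i, n_e_without_i, n_without_i):
    """Probabilistic contrast ΔP_i = P(e|i) - P(e|ī), estimated from
    observed counts: occurrences of e out of trials with i present,
    and occurrences of e out of trials with i absent."""
    p_e_given_i = n_e_with_i / n_with_i
    p_e_given_not_i = n_e_without_i / n_without_i
    return p_e_given_i - p_e_given_not_i

# Hypothetical data: e occurs in 16 of 20 trials with i present,
# and in 4 of 20 trials with i absent.
print(delta_p(16, 20, 4, 20))  # ≈ 0.6
```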
For situations in which ΔP_i ≤ 0 (e occurs at most as often after i occurs as when i does not occur), there are analogous explanations for evaluating preventive causal power. The only assumption that differs in the preventive case is that reasoners assume that i may prevent e, rather than produce it, with some probability. This difference implies that when they evaluate whether i prevents e, they explain P(e|i) by the probability of the intersection of two events: (1) e produced by a if a occurs in the presence of i; and (2) e not stopped by i. That is, they reason that when i is present, e occurs only if it is both produced by a and not stopped by i. These explanations yield equations that result in the estimation of the power of i in terms of observable frequencies under a set of assumptions, and account for a diverse range of intuitions and experimental psychological findings that are inexplicable by previous accounts (see Buehner and Cheng 1997; Cheng 1997; Wu and Cheng 1999). Such phenomena include some intuitive principles of experimental design.

The assumptions underlying these explanations of covariation in terms of causes producing or preventing effects are

1. i and a influence e independently;
2. a could produce e but not prevent it;
3. the causal powers of i and a are independent of their occurrences (e.g., the probability of a both occurring and producing e is the product of the probability of a occurring and the power of a); and
4. e does not occur unless it is caused.

When a and i do not occur independently (i.e., when there is confounding), then ΔP_i may be greater than, less than, or equal to the causal power of i, and therefore does not allow any inference about i. (See chapter appendix A for a specification of the conditions under which ΔP_i is greater or less than the power of i.) But (Cheng 1997), if

5.
a and i do occur independently (i.e., when there is no confounding),

then equation 9.1 gives an estimate of q_i, the generative power of i, when ΔP_i ≥ 0:

q_i = ΔP_i / (1 − P(e|ī)),   (9.1)

and equation 9.2 gives an estimate of p_i, the preventive power of i, when ΔP_i ≤ 0:

p_i = −ΔP_i / P(e|ī).   (9.2)

Note that the right-hand sides (RHSs) of equations 9.1 and 9.2 require the observation of i and e only, implying that q_i and p_i can be estimated without observing a.