Authors' Response

The uncertain reasoner: Bayes, logic, and rationality
doi:10.1017/S0140525X0900051X

Mike Oaksford (a) and Nick Chater (b)
(a) School of Psychology, Birkbeck College London, London, WC1E 7HX, United Kingdom; (b) Division of Psychology and Language Sciences & ESRC Centre for Economic Learning and Social Evolution, University College London, London, WC1E 6BT, United Kingdom.
www.bbk.ac.uk/psyc/staff/academic/moaksford
www.psychol.ucl.ac.uk/people/profiles/chater_nick.htm

Abstract: Human cognition requires coping with a complex and uncertain world. This suggests that dealing with uncertainty may be the central challenge for human reasoning. In Bayesian Rationality we argue that probability theory, the calculus of uncertainty, is the right framework in which to understand everyday reasoning. We also argue that probability theory explains behavior, even on experimental tasks that have been designed to probe people's logical reasoning abilities. Most commentators agree on the centrality of uncertainty; some suggest that there is a residual role for logic in understanding reasoning; and others put forward alternative formalisms for uncertain reasoning, or raise specific technical, methodological, or empirical challenges. In responding to these points, we aim to clarify the scope and limits of probability and logic in cognitive science; explore the meaning of the "rational" explanation of cognition; and re-evaluate the empirical case for Bayesian rationality.

R1. Introduction

Bayesian Rationality (Oaksford & Chater 2007, henceforth BR) proposed that human reasoning should be understood in probabilistic, not logical, terms. In Part I, we discussed arguments from the philosophy of science, artificial intelligence, and cognitive psychology, which indicate that the vast majority of cognitive problems (outside mathematics) involve uncertain, rather than deductively certain, reasoning. Moreover, we argued that probability theory (the calculus for uncertain reasoning) is a more plausible framework than logic (the calculus for certain reasoning) for modeling both cognition in general, and commonsense reasoning in particular. In Part II, we considered a strong test of this approach, asking whether the probabilistic framework can capture human reasoning performance even on paradigmatically "logical" tasks, such as syllogistic reasoning or conditional inference.

The structure of this response is as follows. In section R2, we reflect on the ubiquity of uncertainty and address the theoretical attempts to preserve logic as a separate and core reasoning process. In section R3, we compare and evaluate Bayesian and logic-based approaches to human reasoning about uncertainty. Section R4 focuses on the methodology of rational analysis (Anderson 1990; 1991a; Oaksford & Chater 1998b) and its relationship to more traditional algorithmic and neuroscientific approaches. Section R5 discusses a variety of specific issues in the empirical data from the psychology of reasoning, and the modeling of that data. Finally, section R6 concludes the case for a "Bayesian turn" in the brain and cognitive sciences in general, and for the understanding of human reasoning in particular.
R2. The ubiquity of uncertainty: Distinctions that might preserve logic

Many commentators suggest ways to preserve a role for logic as a separate and core component in an account of human reasoning, despite the challenge provided by uncertainty (Allott & Uchida, Evans, Politzer & Bonnefon). We argue that logic does have an important role in modeling cognition; but we argue against the existence of cognitive processes dedicated to logical reasoning.

R2.1. Rationality 1 versus Rationality 2

Evans suggests that a distinction should be drawn between two types of rationality (Evans & Over 1996a). Rationality 1 relates to implicit, possibly associative, processes, operating over world knowledge, which Evans also terms "ecological rationality." This type of rationality arises from System 1 in Evans and Over's (2004) Dual Process Theory (see also Evans & Frankish, in press; Sloman 1996; Stanovich & West 2000). Rationality 2 involves explicitly following normative rules, and is the type of rationality achieved by Evans and Over's (2004) System 2. System 2 processes are logical, rule-governed, and conscious. Moreover, Evans has argued for a crucial asymmetry between the systems. It requires cognitive effort to ignore System 1, and to use System 2 for logical inference: that is, to infer only what follows from the structure of the given premises.

The fundamental problem with this Dual Process view is that these two systems must interact – and if the systems obey fundamentally different principles, it is not clear how this is possible. Consider the familiar example of inferring that Tweety flies from the general claim that birds fly and the fact that Tweety is a bird. On the Dual Process view, this inference could be drawn logically, by System 2, from the given premises, on the assumption that birds fly is a true universal generalization; System 1, by contrast, might tentatively draw this conclusion by defeasible, associative processes, drawing on general knowledge. But a lack of synchrony between the two systems, presumed to operate by different rational standards, threatens to cause inferential chaos. Consider, for example, what happens when we entertain the possibility that Tweety is an ostrich. If System 2 works according to logical principles, the clash of two rules threatens contradiction: we know that birds fly, but that ostriches do not. To escape contradiction, one of the premises must be rejected: most naturally, birds fly will be rejected as false. But we now have two unpalatable possibilities. On the one hand, suppose that this retraction is not transferred to general knowledge and hence is not assimilated by System 1. Then the two systems will have contradictory beliefs (moreover, if System 2 reasoning cannot modify general knowledge, its purpose seems unclear). On the other hand, if birds fly is retracted from world knowledge, along with other defeasible generalizations, then almost all of general knowledge will be stripped away (as BR notes, generalizations outside mathematics are typically defeasible), leading System 1 into inferential paralysis.
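The difficulty turns on how the generalization birds fly is read. As a minimal illustration (the formalization and the particular probability values below are chosen purely for exposition, and are not taken from BR): read as universal generalizations, the bird and ostrich rules are jointly inconsistent with the facts about Tweety,

\[
\forall x\,(\mathit{bird}(x)\rightarrow \mathit{flies}(x)),\qquad
\forall x\,(\mathit{ostrich}(x)\rightarrow \neg\mathit{flies}(x)),\qquad
\mathit{bird}(\mathit{tweety})\wedge\mathit{ostrich}(\mathit{tweety})\ \vdash\ \bot ,
\]

so, on the logical reading, consistency can only be restored by withdrawing a premise. On the probabilistic reading, the same knowledge is carried by conditional probabilities, for example

\[
P(\mathit{flies}\mid\mathit{bird})=0.9,
\qquad
P(\mathit{flies}\mid\mathit{bird}\wedge\mathit{ostrich})=0.01,
\]

which are jointly coherent provided that ostriches are sufficiently rare among birds (with these values, P(ostrich | bird) can be at most about 0.1). Learning that Tweety is an ostrich simply makes the more specific conditional probability the relevant one; no generalization needs to be retracted.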
The centrality of logic for a putative System 2 is also brought into doubt by considering that one of its main functions is to consciously propose and evaluate arguments. Yet argumentation, that is, the attempt to persuade oneself or others of a controversial proposition, is uniformly agreed not to be a matter of formal logic (Walton 1989), although aspects of argumentation may naturally be modeled using probability theory (Hahn & Oaksford 2007). Thus, perhaps the core human activity for which a logic-based System 2 is invoked may, ironically, be better explained in probabilistic terms.

People can, of course, be trained to ignore some aspects of linguistic input and concentrate on others – for example, in the extreme, they can learn to translate natural language statements into predicate logic (ignoring further aspects of their content) and employ logical methods to determine what follows. But, for the psychology of reasoning, this observation is no more significant than the fact that people can learn the rules of chess. (The problem of system interaction and control is now reflected in Evans [2007], who moots the possibility of a tri-process theory.)

R2.2. The split between semantics and pragmatics

Grice's (1975) theory of conversational implicature originally attempted to split off a "stripped down" logic-based natural language semantics from the complex, knowledge-rich processes of pragmatic interpretation involved in inferring a speaker's intentions. In this way, he aimed to retain a logical core to semantics, despite apparently striking and ubiquitous clashes between the dictates of formal logic and people's intuitions about meaning and inference.

Within this type of framework, Allott & Uchida attempt to preserve the truth of potentially defeasible conditionals (if it's a bird, then it flies, or, as above, birds fly) despite the ready availability of counterexamples. They suggest that this conditional is true in one model, but not in the model that is considered when an additional premise giving a counterexample is added (e.g., when we consider the possibility that Tweety is an ostrich). But in classical logic, only an inference that holds in all models is deductively valid, by definition. Thus, accepting that