

CHAPTER 22
On the Distinction Between Rationality and Intelligence: Implications for Understanding Individual Differences in Reasoning

Keith E. Stanovich

Abstract
A concern for individual differences has been missing from the Great Rationality Debate in cognitive science—the debate about how much irrationality to attribute to human cognition. There are individual differences in rational thinking that are less than perfectly correlated with individual differences in intelligence because intelligence and rationality occupy different conceptual locations in models of cognition. A tripartite extension of currently popular dual-process theories is presented in this chapter that illustrates how intelligence and rationality are theoretically separate concepts. The chapter concludes by showing how this tripartite model of mind, taken in the context of studies of individual differences, can help to resolve the Great Rationality Debate.

Key Words: rationality, intelligence, reasoning, individual differences

Introduction

In psychology and among the lay public alike, assessments of intelligence and tests of cognitive ability are taken to be the sine qua non of good thinking. Critics of these instruments often point out that IQ tests fail to assess many domains of psychological functioning that are essential. For example, many largely noncognitive domains such as socioemotional abilities, creativity (Smith & Ward, Chapter 23), empathy, and interpersonal skills are almost entirely unassessed by tests of cognitive ability. However, even these common critiques of intelligence tests often contain the unstated assumption that although intelligence tests miss certain key noncognitive areas, they encompass most of what is important cognitively. In this chapter, I will challenge this tacit assumption by arguing that certain very important classes of individual differences in thinking are ignored if only intelligence-related variance is the focus. Many of these classes of individual differences that are missing from IQ tests are those relating to rational thought (Chater & Oaksford, Chapter 2). In this chapter, I will illustrate how a comprehensive assessment of individual differences in reasoning skills will necessitate the theoretical and empirical differentiation of the concepts of intelligence and rational thinking.

The discussion in this chapter will begin by showing how differing definitions of rationality frame what is known as the Great Rationality Debate in cognitive science. That debate concerns how much irrationality to attribute to human cognition. I will argue that a concern for individual differences has been missing from this debate because we failed to appreciate that there are individual differences in rational thought as well as intelligence. This is easier to appreciate when we realize that intelligence and rationality occupy somewhat different conceptual locations within most models of cognition. Thus, I present a generic model of the mind that is an extension of currently popular dual-process theories (Evans, Chapter 8), and I situate both intelligence and rationality within this model and show how they dissociate, both conceptually and empirically.


I conclude the chapter by showing how this tripartite model of mind, taken in the context of studies of individual differences, can help to resolve the Great Rationality Debate.

The Concept of Rational Thought in Cognitive Science

The term rationality has a strong and a weak sense. The strong sense of the term is the one used in cognitive science, and it will be the one used throughout this chapter. However, a weaker sense of the term has sometimes influenced—and hence confused—arguments in the so-called Great Rationality Debate in cognitive science. The influence of the weak sense of the term has also impeded investigation into individual differences in rational thought.

Dictionary definitions of rationality tend to be of the weak sort—often seeming quite lame and unspecific ("the state or quality of being in accord with reason"). The meaning of rationality in modern cognitive science (the strong sense) is, in contrast, much more specific and prescriptive than this. The weak definitions of rationality derive from a categorical notion of rationality tracing to Aristotle (humans as the only animals who base actions on reason). As de Sousa (2007) has pointed out, such a notion of rationality as "based on reason" has as its opposite not irrationality but arationality (outside the domain of reason). Aristotle's characterization is categorical—the behavior of entities is either based on thought or it is not. Animals are either rational or arational. In this conception, humans are rational, but other animals are not. There is no room for individual differences in rational thinking among humans in this view.

In its stronger sense (the sense employed in most of cognitive science and in this chapter) rational thought is a normative notion (Chater & Oaksford, Chapter 2; Griffiths, Tenenbaum, & Kemp, Chapter 3). Its opposite is irrationality, not arationality. Normative models of optimal judgment and decision making define perfect rationality in the noncategorical view employed in cognitive science. Rationality (and irrationality) comes in degrees defined by the distance of the thought or behavior from the optimum defined by a normative model. De Sousa (2007) points out that the notion of rationality in Aristotle's sense cannot be normative, but in the strong sense of cognitive science, it is. Other animals may be arational, but only humans can be irrational. As de Sousa (2007) puts it, "if human beings can indeed be described as rational animals, it is precisely in virtue of the fact that humans, of all the animals, are the only ones capable of irrational thoughts and actions" (p. 7). Hurley and Nudds (2006) make a similar point when they argue that, for a strong sense of the term: "ironically, rationality requires the possibility that the animal might err. It can't be automatically right, no matter what it does . . . . when we say that an agent has acted rationally, we imply that it would have been a mistake in some sense to have acted in certain different ways. It can't be the case that anything the agent might do would count as rational. This is normativity in a quite weak sense" (p. 2). The weak sense they are referring to is an Aristotelian (categorical) sense, and no cognitive scientist is using rationality in this sense when claiming that an experiment has demonstrated human irrationality.

When a cognitive scientist terms a behavior irrational, he or she means that the behavior departs from the optimum prescribed by a particular normative model. The scientist is not implying that no thought or reason was behind the behavior. Some of the hostility that has been engendered by experimental claims of human irrationality no doubt derives from a (perhaps tacit) influence of the Aristotelian view—the thought that cognitive psychologists are saying that certain people are somehow less than human when they are said to behave irrationally. But in using the strong sense of the term rationality, most cognitive scientists are saying no such thing.¹

Some of the heat in the Great Rationality Debate is no doubt caused by reactions to the term irrationality being applied to humans. As mentioned, lingering associations with the Aristotelian categorical view make charges of irrationality sound more cutting than they actually are when in the context of cognitive science research. When we find a behavioral pattern that is less than optimally rational, we could easily say that it is "less than perfectly rational" rather than that it is irrational—with no loss of meaning. Perhaps if this had been the habit in the literature, the rationality debate in cognitive science would not have become so heated. Such an emphasis also highlights the theme of this chapter—that there are indeed individual differences in rational thought and that understanding the nature of these differences might have important theoretical implications.

Cognitive scientists recognize two types of rationality: epistemic and instrumental. Epistemic rationality concerns how well beliefs map onto the actual structure of the world.


It is sometimes called theoretical rationality or evidential rationality (see Audi, 1993, 2001; Foley, 1987; Harman, 1995; Manktelow, 2004; Over, 2004).

The simplest definition of instrumental rationality is as follows: behaving in the world so that you get exactly what you most want, given the resources (physical and mental) available to you. Somewhat more technically, we could characterize instrumental rationality as the optimization of the individual's goal fulfillment. Economists and cognitive scientists have refined the notion of optimization of goal fulfillment into the technical notion of expected utility. The model of rational judgment used by decision scientists (Chater & Oaksford, Chapter 2; LeBoeuf & Shafir, Chapter 16) is one in which a person chooses options based on which option has the largest expected utility² (see Baron, 2008; Dawes, 1998; Hastie & Dawes, 2010; Wu, Zhang, & Gonzalez, 2004). One of the fundamental advances in the history of modern decision science was the demonstration that if people's preferences follow certain patterns (the so-called axioms of choice—things like transitivity and freedom from certain kinds of context effects), then they are behaving as if they are maximizing utility; they are acting to get what they most want (Edwards, 1954; Gilboa, 2010; Jeffrey, 1983; Luce & Raiffa, 1957; Savage, 1954; von Neumann & Morgenstern, 1944). This is what makes people's degrees of rationality measurable by the experimental methods of cognitive science. Although it is difficult to assess utility directly, it is much easier to assess whether one of the axioms of rational choice is being violated. This is much like our judgments at a sporting event, where, for example, it might be difficult to discern whether a quarterback has put the ball perfectly on the money, but it is not difficult at all to detect a bad throw.
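The two claims just made, that utility itself is hard to assess directly while violations of the choice axioms are easy to detect, can be illustrated with a short computational sketch. The Python code below is purely illustrative and is not drawn from the chapter: the gambles, the job-offer preferences, and all function names are invented for the example. The first function evaluates the expected-utility formula EU(a) = sum over outcomes of p * U(o); the second merely checks a person's stated pairwise preferences for a transitivity cycle, which requires no utility function at all.

```python
from itertools import permutations

def expected_utility(option):
    """Expected utility: sum of probability * utility over an option's outcomes."""
    return sum(p * u for p, u in option)

# Hypothetical gambles, each a list of (probability, utility) pairs.
gamble_a = [(0.5, 100), (0.5, 0)]   # EU = 50
gamble_b = [(1.0, 45)]              # EU = 45
print(expected_utility(gamble_a), expected_utility(gamble_b))

def transitivity_violations(preferences):
    """Given stated pairwise preferences as (x, y) pairs meaning 'x is preferred to y',
    return all triples (a, b, c) where a > b and b > c but c > a (a cycle)."""
    items = {x for pair in preferences for x in pair}
    return [
        (a, b, c)
        for a, b, c in permutations(items, 3)
        if (a, b) in preferences and (b, c) in preferences and (c, a) in preferences
    ]

# A hypothetical respondent's stated preferences over three job offers.
stated = {("job1", "job2"), ("job2", "job3"), ("job3", "job1")}
print(transitivity_violations(stated))  # non-empty list: a preference cycle is detected
```

In the spirit of the sporting analogy, computing the first quantity requires knowing the whole utility function, whereas the preference cycle in the second is detectable from the observed choices alone.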


In fact, in many domains of life this is often the case as well. It is often difficult to specify what the very best response might be, but performance errors are much easier to spot. Essayist Neil Postman (1988) has argued, for instance, that educators and other advocates of good thinking might adopt a stance more similar to that of physicians or attorneys. He points out that doctors would find it hard to define "perfect health" but, despite this, they are quite good at spotting disease. Likewise, lawyers are much better at spotting injustice and lack of citizenship than defining "perfect justice" or ideal citizenship. Postman argues that, like physicians and attorneys, educators might best focus on instances of poor thinking, which are much easier to identify, as opposed to trying to define ideal thinking. The literature on the psychology of rationality has followed this in that the empirical literature has focused on identifying thinking errors, just as physicians focus on disease. Degrees of rationality can be assessed in terms of the number and severity of such cognitive biases that individuals display. Conversely, failure to display a cognitive bias becomes a measure of rational thought.

The Great Rationality Debate in Cognitive Science

A substantial research literature—one comprising literally hundreds of empirical studies conducted over several decades—has firmly established that people's responses sometimes deviate from the performance considered normative on many reasoning tasks. For example, people assess probabilities incorrectly, they test hypotheses inefficiently, they violate the axioms of utility theory, they do not properly calibrate degrees of belief, their choices are affected by irrelevant context, they ignore the alternative hypothesis when evaluating data, and they display numerous other information processing biases (Baron, 2008; Bazerman & Moore, 2008; Evans, 2007; Gilovich, Griffin, & Kahneman, 2002; Kahneman & Tversky, 2000; Shafir & LeBoeuf, 2002; Stanovich, 2009b, 2011). Demonstrating that descriptive accounts of human behavior diverged from normative models was a main theme of the heuristics and biases research program inaugurated by Kahneman and Tversky in the early 1970s (Kahneman & Tversky, 1972, 1973; Tversky & Kahneman, 1974).

Researchers working in the heuristics and biases tradition tend to be so-called Meliorists (see Bishop & Trout, 2005; Doherty, 2003; Larrick, 2004; Stanovich, 1999, 2004). They assume that human reasoning is not as good as it could be, and that thinking could be improved. The dictionary definition of meliorism is "the doctrine that the world tends to become better or may be made better by human effort." Thus, a Meliorist is one who feels that education and the provision of information could help make people more rational—could help them more efficiently further their goals and bring their beliefs more in line with the actual state of the world.³ Stated this way, Meliorism seems to be an optimistic doctrine, and in one sense it is. But this optimistic part of the Meliorist message derives from the fact that Meliorists see a large gap between normative models of rational responding and descriptive models of what people actually do. Emphasizing the gap, of course, entails that Meliorists will be attributing a good deal of irrationality to human cognition.

Over the last two decades, an alternative interpretation of the findings from the heuristics and biases research program has been championed. Contributing to this alternative interpretation have been evolutionary psychologists, adaptationist modelers, and ecological theorists (Anderson, 1990; Cosmides & Tooby, 1996; Gigerenzer, 2007; Marewski, Gaissmaier, & Gigerenzer, 2010; Oaksford & Chater, 2007). They have reinterpreted the modal response in most of the classic heuristics and biases experiments as indicating an optimal information processing adaptation on the part of the subjects. It is argued by these investigators that the research in the heuristics and biases tradition has not demonstrated human irrationality at all. This group of theorists—who argue that an assumption of maximal human rationality is the proper default position to take—have been termed the Panglossians (Stanovich, 1999). This position posits no difference between descriptive and normative models of performance because human performance is actually normative.

The contrasting positions of the Panglossians and Meliorists define the differing poles in what has been termed the Great Rationality Debate in cognitive science—the debate about how much irrationality to attribute to human cognition. This debate has generated a very substantial literature of often heated arguments (Cohen, 1981; Doherty, 2003; Edwards & von Winterfeldt, 1986; Evans & Over, 1996, 2010; Gigerenzer, 1996; Jungermann, 1986; Kahneman & Tversky, 1983, 1996; Koehler, 1996; Koehler & James, 2009, 2010; Krueger & Funder, 2004; Kuhberger, 2002; Lee, 2006; Samuels & Stich, 2004; Stanovich, 1999, 2004, 2010; Stanovich & West, 2000; Stein, 1996; Stich, 1990; Vranas, 2000). Tetlock and Mellers (2002) have noted that "the debate over human rationality is a high-stakes controversy that mixes primordial political and psychological prejudices in combustible combinations" (p. 97). The great debate about human rationality is a "high-stakes controversy" because it involves nothing less than the models of human nature that underlie economics, moral philosophy, and the personal theories (folk theories) we use to understand the behavior of other humans. For example, a very influential part of the Panglossian camp is represented by the mainstream of the discipline of economics, which is notable for using strong rationality assumptions as fundamental tools.

Evolution Does Not Guarantee Human Rationality

An Aristotelian view of rationality has no room for individual differences in rational thinking between humans. In this view, humans are the unique animals who act based on reason. Thus, all humans are rational—and all are equally so. However, once we move from this view to the normative conception of rationality, we open up room for individual differences. The maximizing notions of rational thought and action in decision science potentially array individuals on a continuum based on the distance of their behavior from the normative model.

We might ask, however, whether—aside from holding an Aristotelian view—there is any other reason to be a Panglossian. One assumption that often draws people to a Panglossian view of human rationality is the thought that evolution would have guaranteed that our cognition is fully rational. This is a mistaken view. There are a number of reasons why evolution would not be expected to guarantee perfect human rationality. One reason is that rationality is defined in terms of maximization (for example, in the case of instrumental rationality, maximizing the expected utility of actions). In contrast to maximization, natural selection works on a "better than" principle. As Dawkins (1982) puts it, "Natural selection chooses the better of present available alternatives . . . . The animal that results is not the most perfect design conceivable, nor is it merely good enough to scrape by. It is the product of a historical sequence of changes, each one of which represented, at best, the better of the alternatives that happened to be around at the time" (p. 46). In short, the variation and selective retention logic of evolution "designs" (Dennett, 1995) for the reproductive advantage of one organism over the next, not for the optimality of any one characteristic (including rationality). It has been said that evolution should be described as the survival of the fitter rather than as the survival of the fittest.

Evolution proceeds to increase the reproductive fitness of genes, not to increase the rationality of humans (Stanovich, 2004; Stanovich & West, 2003). Increases in fitness do not always entail increases in rationality. Take, for example, the domain of beliefs. Beliefs need not always track the world with maximum accuracy in order for fitness to increase (Stich, 1990). Thus, evolution does not guarantee perfect epistemic rationality.


For example, evolution might fail to select out epistemic mechanisms of high accuracy when they are costly in terms of organismic resources (for example, in terms of memory, energy, or attention). An additional reason that belief-forming mechanisms might not be maximally truth preserving is that:

a very cautious, risk-aversive inferential strategy—one that leaps to the conclusion that danger is present on very slight evidence—will typically lead to false beliefs more often, and true ones less often, than a less hair-trigger one that waits for more evidence before rendering a judgment. Nonetheless, the unreliable, error-prone, risk-aversive strategy may well be favored by natural selection. For natural selection does not care about truth; it cares only about reproductive success. (Stich, 1990, p. 62)

It is likewise in the domain of goals and desires. As has become clear from recent research on the topic of affective forecasting, people are remarkably bad at making choices that make themselves happy (Gilbert, 2006; Kahneman et al., 2006; Wilson & Gilbert, 2005). This should be no surprise. The reason we have pleasure circuits in our brains is to encourage us to do things (survive and reproduce, help kin) that propagate our genes. The pleasure centers were not designed to maximize the amount of time we are happy.

The instrumental rationality of humans is not guaranteed by evolution for two further reasons. First, many genetic goals that have been lodged in our brain no longer serve our ends because the environment has changed (Richerson & Boyd, 2005). For example, thousands of years ago, humans needed as much fat as they could get in order to survive. More fat meant longer survival, and because few humans survived beyond their reproductive years, longevity translated directly into more opportunities for gene replication. In short, our mechanisms for storing and utilizing energy evolved in times when fat preservation was efficacious. These mechanisms no longer serve the goals of people in our modern technological society where there is a McDonald's on practically every corner—the goals underlying these mechanisms have become detached from their evolutionary context.

Finally, rational standards for assessing human behavior are social and cultural products that are preserved and stored independently of the genes. The development of probability theory, concepts of empiricism, logic, and scientific thinking throughout the centuries have provided humans with conceptual tools to aid in the formation and revision of belief and in their reasoning about action (Chater & Oaksford, Chapter 2; Griffiths et al., Chapter 3; Dunbar & Klahr, Chapter 35). They represent the cultural achievements that foster greater human rationality (Thagard & Nisbett, 1983). As societies evolve, they produce more of the cultural tools of rationality and these tools become more widespread in the population. Thus, the cultural evolution of rational standards (Thagard & Nisbett, 1983) is apt to occur at a pace markedly faster than that of human evolution—providing ample opportunity for mental strategies of utility maximization to dissociate from local genetic fitness maximization. In summary, a consideration of our evolutionary history should not lead one to a Panglossian view of human rationality.

A reconciliation of the views of the Panglossians and Meliorists is possible, however, if we take three scientific steps. First, we must consider data patterns long ignored in the heuristics and biases literature—individual differences on rational thinking tasks. Second, we must understand the empirical patterns obtained through the lens of a modified dual-process theory (Evans, Chapter 8) and of evolutionary theory. Third, we must distinguish the concepts of rationality and intelligence in cognitive theory. Subsequent sections of this chapter develop each of these points.

Individual Differences in the Great Rationality Debate

Dozens of empirical studies have shown that there are few tasks in the heuristics and biases literature where all untutored laypersons give the same response. What has largely been ignored is that—although the average person in the classic heuristics and biases experiments might well display an overconfidence effect, underutilize base rates, ignore P(D/~H), violate the axioms of utility theory, choose P and Q in the selection task, commit the conjunction fallacy, and so on—on each of these tasks, some people give the standard normative response (Bruine de Bruin, Parker, & Fischhoff, 2007; Cokely & Kelley, 2009; Del Missier, Mantyla, & Bruine de Bruin, 2010; Dohmen et al., 2009; Finucane & Gullion, 2010; Frederick, 2005; Klaczynski, 2001; Oechssler, Roider, & Schmitz, 2009; Stanovich & West, 1998b, 1998c, 1999, 2000, 2008b; West et al., 2008). What has been ignored in the Great Rationality Debate is individual differences.
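As a concrete illustration of one item in the list above, underutilizing base rates, here is a minimal worked example of the Bayesian calculation that the minority of normative responders are, in effect, performing. The numbers are the conventional ones from the well-known taxicab problem in the heuristics and biases literature and are used here only for illustration; they are not data reported in this chapter.

```python
def posterior(prior, hit_rate, false_alarm_rate):
    """Bayes' theorem for a binary hypothesis:
    P(H|D) = P(D|H) P(H) / [P(D|H) P(H) + P(D|~H) P(~H)]."""
    numerator = hit_rate * prior
    denominator = numerator + false_alarm_rate * (1.0 - prior)
    return numerator / denominator

# Taxicab problem: 15% of cabs are Blue (the base rate), and the witness
# correctly identifies a cab's color 80% of the time (erring 20% of the time).
p_blue_given_report = posterior(prior=0.15, hit_rate=0.80, false_alarm_rate=0.20)
print(round(p_blue_given_report, 2))  # 0.41
```

Respondents who neglect the 15% base rate typically answer with something close to the witness's 80% accuracy, whereas the Bayesian posterior is only about .41.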


For example, in knowledge calibration studies, although the mean performance level of the entire sample may be represented by a calibration curve that indicates overconfidence, almost always some people do display near perfect calibration. Likewise, in probabilistic assessment, while the majority of subjects might well ignore the noncausal base-rate evidence, a minority of subjects often makes use of this information in exactly the way prescribed by Bayes' theorem. A few people even respond correctly on the notoriously difficult abstract selection task (Evans, Newstead, & Byrne, 1993; Stanovich & West, 1998a, 2008b).

In short, some people give the response traditionally considered normative, and others do not. There is variability in responding on all of these tasks. So when Panglossians and heuristics and biases researchers argue about the normative appropriateness of a particular response, whoever eventually prevails in the dispute—both sides have been guilty of glossing over individual differences. In short, it is incorrect to imply that people uniformly display a particular rational or irrational response pattern. A particular experiment might instead be said to show that the average person, or perhaps the modal person, displays optimal or suboptimal thinking. Other people, often a minority to be sure, display the opposite style of thinking.

In light of these empirical data, it is puzzling that Panglossians would presumably accept the existence of individual differences in intelligence but not rationality. This is possible, however, if intelligence and rationality are two different things conceptually. In the remainder of this chapter, I will show that the Panglossians are correct in one of their assumptions but incorrect in another. Conceptually, intelligence and rational thinking are indeed two different things. But—contra Panglossian assumptions—the latter as well as the former displays substantial individual differences.

Discussions of intelligence often go off the rails at the very beginning by failing to set the concept within a general context of cognitive functioning—thus inviting the default assumption that intelligence is the central feature of the mind. I will try to preclude this natural default by outlining a model of the mind and then placing intelligence within it. The generic models of the mind developed by cognitive scientists often give short shrift to a question that the public is intensely interested in: How and why do people differ from each other in their thinking? In an attempt to answer that question, I am going to present a gross model of the mind that is true to modern cognitive science but that emphasizes individual differences in ways that are somewhat new. The model builds on a current consensus view of cognition termed dual-process theory (see Evans, Chapter 8, for a more detailed discussion).

From Dual-Process Theory to a Tripartite Model of Mind

The idea that the brain is composed of many different subsystems (see Aunger & Curtis, 2008) has recurred in conceptualizations in many different disciplines—from the society of mind view in artificial intelligence (Minsky, 1985); to Freudian analogies (Ainslie, 1982); to discussions of the concept of multiple selves in philosophy, economics, and decision science (Ainslie, 2001; Schelling, 1984). In fact, the notion of many different systems in the brain is by no means new. Plato (1945) argued that "we may call that part of the soul whereby it reflects, rational; and the other, with which it feels hunger and thirst and is distracted by sexual passion and all the other desires, we will call irrational appetite, associated with pleasure in the replenishment of certain wants" (p. 137). What is new, however, is that cognitive scientists are beginning to understand the biology and cognitive structure of these systems (Evans & Frankish, 2009; see Morrison & Knowlton, Chapter 6; Green & Dunbar, Chapter 7) and are beginning to posit some testable speculations about their evolutionary and experiential origins. I will build on the current consensus that the functioning of the brain can be characterized by two different types of cognition having somewhat different functions and different strengths and weaknesses. There is a wide variety of evidence that has converged on the conclusion that some type of dual-process notion is needed in a diverse set of specialty areas not limited to cognitive psychology, social psychology, naturalistic philosophy, decision theory, and clinical psychology (Evans, 2003, 2008, 2010; Frankish, 2004; Lieberman, 2007, 2009; Schneider & Chein, 2003; Sloman, 1996, 2002; Smith & DeCoster, 2000; Stanovich, 1999). Evolutionary theorizing and neurophysiological work also have supported a dual-process conception (Camerer, Loewenstein, & Prelec, 2005; Frank, Cohen, & Sanfey, 2009; Lieberman, 2009; McClure, Laibson, Loewenstein, & Cohen, 2004; Prado & Noveck, 2007; Reber, 1993; Toates, 2005, 2006). In fact, a dual-process view was implicit within the early writings in the groundbreaking heuristics and biases research program. As Kahneman (2000) notes, "Tversky and I always thought of the heuristics and biases approach as a two-process theory" (p. 682).


Just how ubiquitous dual-process models are in psychology and related fields is illustrated in Table 22.1, which lists a variety of such theories that have appeared during the last couple of decades. Some common terms for the dual processes are listed in Table 22.1. My purpose here is not to adjudicate the differences among these models. Instead, I will gloss over the differences and start with a model that emphasizes the family resemblances. Evans (Chapter 8) provides a much more nuanced discussion.

Table 22.1 Some Alternative Terms for Type 1 and Type 2 Processing Used by Various Theorists

Theorist    Type 1    Type 2

Bargh & Chartrand (1999) Automatic processing Conscious processing

Bazerman, Tenbrunsel, & Wade-Benzoni (1998) Want self Should self

Bickerton (1995) Online thinking Offline thinking

Brainerd & Reyna (2001) Gist processing Analytic processing

Chaiken et al. (1989) Heuristic processing Systematic processing

Evans (1984, 1989) Heuristic processing Analytic processing

Evans & Over (1996) Tacit thought processes Explicit thought processes

Evans & Wason (1976); Wason & Evans (1975) Type 1 processes Type 2 processes

Fodor (1983) Modular processes Central processes

Gawronski & Bodenhausen (2006) Associative processes Propositional processes

Haidt (2001) Intuitive system Reasoning system

Johnson-Laird (1983) Implicit inferences Explicit inferences

Kahneman & Frederick (2002, 2005) Intuition Reasoning

Lieberman (2003) Reflexive system Reflective system

Loewenstein (1996) Visceral factors Tastes

Metcalfe & Mischel (1999) Hot system Cool system

Norman & Shallice (1986) Contention scheduling Supervisory attentional system

Pollock (1991) Quick and inflexible modules Intellection

Posner & Snyder (1975) Automatic activation Conscious processing

Reber (1993) Implicit cognition Explicit learning

Shiffrin & Schneider (1977) Automatic processing Controlled processing

Sloman (1996) Associative system Rule-based system

Smith & DeCoster (2000) Associative processing Rule-based processing

Strack & Deutsch (2004) Impulsive system Reflective system

Thaler & Shefrin (1981) Doer Planner

Toates (2006) Stimulus-bound Higher order

Wilson (2002) Adaptive unconscious Conscious


The family resemblances extend to the names for the two classes of process. The terms heuristic and analytic are two of the oldest and most popular (see Evans, 1984, 1989). However, to attenuate the proliferation of nearly identical theories, I suggested the more generic terms System 1 and System 2 in a previous book (Stanovich, 1999). Although these terms have become popular, there is an infelicitousness to the System 1/System 2 terminology. Such terminology seems to connote that the two processes in dual-process theory map explicitly to two distinct brain systems. This is a stronger assumption than most theorists wish to make. Additionally, both Evans (2008, 2009, 2010; Chapter 8, this volume) and Stanovich (2004, 2011) have discussed how terms such as System 1 or heuristic system are really misnomers because they imply that what is being referred to is a singular system. In actuality, the term used should be plural because it refers to a set of systems in the brain that operate autonomously in response to their own triggering stimuli and are not under higher level cognitive control. I have suggested (Stanovich, 2004) the acronym TASS (standing for The Autonomous Set of Systems) to describe what is in actuality a heterogeneous set.

Using the acronym TASS was a step forward in clearing up some of the confusion surrounding autonomous processes. For similar reasons, Evans (2008, 2009, Chapter 8; see also Samuels, 2009) has suggested a terminology of Type 1 processing versus Type 2 processing. The Type 1/Type 2 terminology captures better than previous terminology that a dual-process theory is not necessarily a dual-system theory (see Evans, 2008, 2009, for an extensive discussion). For these reasons, I will rely most heavily on the Type 1/Type 2 terminology. An even earlier terminology due to Evans (1984, 1989)—heuristic versus analytic processing—will also be employed on occasions when it is felicitous.

The defining feature of Type 1 processing is its autonomy—the execution of Type 1 processes is mandatory when their triggering stimuli are encountered, and they are not dependent on input from high-level control systems. Autonomous processes have other correlated features—their execution is rapid, they do not put a heavy load on central processing capacity, they tend to operate in parallel without interfering with themselves or with Type 2 processing—but these other correlated features are not defining. Autonomous processes would include behavioral regulation by the emotions; the encapsulated modules for solving specific adaptive problems that have been posited by evolutionary psychologists; processes of implicit learning; and the automatic firing of overlearned associations. Type 1 processes conjoin the properties of automaticity, quasi-modularity, and heuristic processing as these constructs have been variously discussed in cognitive science (Barrett & Kurzban, 2006; Carruthers, 2006; Coltheart, 1999; Evans, 2008, 2009; Moors & De Houwer, 2006; Samuels, 2005, 2009; Shiffrin & Schneider, 1977; Sperber, 1994).

It is important to emphasize that Type 1 processing is not limited to modular subprocesses that meet all of the classic Fodorian (1983) criteria. Type 1 processing encompasses processes of unconscious implicit learning and conditioning. Also, many rules, stimulus discriminations, and decision-making principles that have been practiced to automaticity (e.g., Kahneman & Klein, 2009; Shiffrin & Schneider, 1977) are processed in a Type 1 manner. This learned information can sometimes be just as much a threat to rational behavior as are evolutionary modules that fire inappropriately in a modern environment. Rules learned to automaticity can be overgeneralized—they can autonomously trigger behavior when the situation is an exception to the class of events they are meant to cover (Arkes & Ayton, 1999; Hsee & Hastie, 2006).

Type 2 processing is nonautonomous. Type 2 processing contrasts with Type 1 processing on each of the correlated properties that define the latter. It is relatively slow and computationally expensive. Many Type 1 processes can operate at once in parallel, but Type 2 processing is largely serial. Type 2 processing is often language based, but it is not necessarily so. One of the most critical functions of Type 2 processing is to override Type 1 processing. This is sometimes necessary because autonomous processing has heuristic qualities. It is designed to get the response into the right ballpark when solving a problem or making a decision, but it is not designed for the type of fine-grained analysis called for in situations of unusual importance (financial decisions, fairness judgments, employment decisions, legal judgments, etc.). Heuristics depend on benign environments. In hostile environments, they can be costly (see Hilton, 2003; Over, 2000; Stanovich, 2004, 2009b). A benign environment means one that contains useful (that is, diagnostic) cues that can be exploited by various heuristics (for example, affect-triggering cues, vivid and salient stimulus components, convenient and accurate anchors). Additionally, for an environment to be classified as benign, it also must contain no other individuals who will adjust their behavior to exploit those relying only on heuristics.


In contrast, a hostile environment for heuristics is one in which there are few cues that are usable by heuristic processes or there are misleading cues (Kahneman & Klein, 2009). Another way that an environment can turn hostile for a heuristic processor is if other agents discern the simple cues that are being used and the other agents start to arrange the cues for their own advantage (for example, advertisements, or the deliberate design of supermarket floor space to maximize revenue).

All of the different kinds of Type 1 processing (processes of emotional regulation, Darwinian modules, associative and implicit learning processes) can produce responses that are irrational in a particular context if not overridden. For example, humans often act as cognitive misers (see Stanovich, 2009b) by engaging in attribute substitution (Kahneman & Frederick, 2002)—the substitution of an easy-to-evaluate characteristic for a harder one, even if the easier one is less accurate. For example, the cognitive miser will substitute the less effortful attributes of vividness or affect for the more effortful retrieval of relevant facts (Kahneman, 2003; Li & Chapman, 2009; Slovic & Peters, 2006; Wang, 2009). But when we are evaluating important risks—such as the risk of certain activities and environments for our children—we do not want to substitute vividness for careful thought about the situation. In such situations, we want to employ Type 2 override processing to block the attribute substitution of the cognitive miser.

To override Type 1 processing, Type 2 processing must display at least two related capabilities. One is the capability of interrupting Type 1 processing and suppressing its response tendencies. Type 2 processing thus involves inhibitory mechanisms of the type that have been the focus of work on executive functioning (Aron, 2008; Best, Miller, & Jones, 2009; Hasher, Lustig, & Zacks, 2007; Miyake et al., 2000; Zelazo, 2004). But the ability to suppress Type 1 processing gets the job only half done. Suppressing one response is not helpful unless there is a better response available to substitute for it. Where do these better responses come from? One answer is that they come from processes of hypothetical reasoning and cognitive simulation that are a unique aspect of Type 2 processing (Johnson-Laird, Chapter 9). When we reason hypothetically, we create temporary models of the world and test out actions (or alternative causes) in that simulated world. To reason hypothetically we must, however, have one critical cognitive capability—we must be able to prevent our representations of the real world from becoming confused with representations of imaginary situations. The so-called cognitive decoupling operations are the central feature of Type 2 processing that make this possible, and they have implications for how we conceptualize both intelligence and rationality.

In a much-cited article, Leslie (1987) modeled pretense by positing a so-called secondary representation (see Perner, 1991) that was a copy of the primary representation but that was decoupled from the world so that it could be manipulated—that is, be a mechanism for simulation. The important issue for our purposes is that decoupling secondary representations from the world and then maintaining the decoupling while simulation is carried out is a Type 2 processing operation. It is computationally taxing and greatly restricts the ability to conduct any other Type 2 operation simultaneously. In fact, decoupling operations might well be a major contributor to a distinctive Type 2 property: its seriality.

Figure 22.1 represents a preliminary model of mind, based on what has been outlined thus far, with one important addition. The addition stems from the fact that instructions to initiate override of Type 1 processing (and to initiate simulation activities) must be controlled by cognitive machinery at a higher level than the decoupling machinery itself. Type 2 processing needs to be understood in terms of two levels of cognitive control—what are termed in Figure 22.1 the algorithmic level and the reflective level. There I have presented the tripartite proposal in the spirit of Dan Dennett's (1996) book Kinds of Minds, where he used that title to suggest that within the brain of humans are control systems of very different types—different kinds of minds. I have labeled the traditional source of Type 1 processing as the autonomous mind but differentiated Type 2 processing into the algorithmic mind and the reflective mind. The autonomous mind can be overridden by algorithmic-level mechanisms, but override itself is initiated by higher level control. That is, the algorithmic level is conceptualized as subordinate to the higher level goal states and epistemic thinking dispositions of the reflective mind.
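To make the division of labor just described more tangible, the toy sketch below traces the control flow in code. It is not an implementation of the author's model; it is a minimal illustration under my own simplifying assumptions, with invented function and field names: the autonomous mind supplies a default response, the reflective mind decides whether the situation warrants interrupting that default, and the algorithmic mind carries out the decoupled simulation that produces the substitute response.

```python
def autonomous_response(stimulus):
    # Type 1: fast, automatic default (here just a placeholder lookup).
    return stimulus.get("salient_cue")

def reflective_mind_wants_override(stimulus):
    # Reflective level: a thinking disposition decides whether the situation
    # is important enough to interrupt the default response.
    return stimulus.get("high_stakes", False)

def algorithmic_simulation(stimulus):
    # Algorithmic level: decoupled simulation of an alternative response
    # (the computationally expensive Type 2 work; a deliberate placeholder here).
    return stimulus.get("considered_judgment")

def respond(stimulus):
    default = autonomous_response(stimulus)
    if reflective_mind_wants_override(stimulus):
        alternative = algorithmic_simulation(stimulus)  # simulate before overriding
        return alternative if alternative is not None else default
    return default

# A hypothetical high-stakes decision where the vivid cue and the considered judgment differ.
print(respond({"salient_cue": "act on vivid anecdote",
               "considered_judgment": "act on base rates",
               "high_stakes": True}))
```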


Figure 22.1 labels: Reflective Mind (individual differences in rational thinking dispositions); Algorithmic Mind (individual differences in fluid intelligence), associated with Type 2 processing; Autonomous Mind (few continuous individual differences), associated with Type 1 processing.

Fig. 22.1 The tripartite structure and the locus of individual differences.

Individual Differences Within the Tripartite Model of Mind

Psychometricians have long distinguished typical performance situations from optimal (sometimes termed maximal) performance situations (see Ackerman, 1994, 1996; Ackerman & Heggestad, 1997; Ackerman & Kanfer, 2004; see also Cronbach, 1949; Matthews, Zeidner, & Roberts, 2002; Sternberg, Grigorenko, & Zhang, 2008). Typical performance situations are unconstrained in that no overt instructions to maximize performance are given, and the task interpretation is determined to some extent by the participant. The goals to be pursued in the task are left somewhat open. The issue is what a person would typically do in such a situation, given few constraints. Typical performance measures are measures of the reflective mind—they assess in part goal prioritization and epistemic regulation. In contrast, optimal performance situations are those where the task interpretation is determined externally. The person performing the task is instructed to maximize performance. Thus, optimal performance measures examine questions of the efficiency of goal pursuit—they capture the processing efficiency of the algorithmic mind. All tests of intelligence or cognitive aptitude are optimal performance assessments, whereas measures of critical or rational thinking are often assessed under typical performance conditions.

The difference between the algorithmic mind and the reflective mind is captured in another well-established distinction in the measurement of individual differences: the distinction between cognitive ability and thinking dispositions. The former are, as just mentioned, measures of the efficiency of the algorithmic mind. The latter travel under a variety of names in psychology—thinking dispositions or cognitive styles being the two most popular. Many thinking dispositions concern beliefs, belief structure and, importantly, attitudes toward forming and changing beliefs. Other thinking dispositions that have been identified concern a person's goals and goal hierarchy. Examples of some thinking dispositions that have been investigated by psychologists are actively open-minded thinking, need for cognition (the tendency to think a lot), consideration of future consequences, need for closure, superstitious thinking, and dogmatism (Cacioppo et al., 1996; Kruglanski & Webster, 1996; Norris & Ennis, 1989; Schommer-Aikins, 2004; Stanovich, 1999, 2009b; Sternberg, 2003; Sternberg & Grigorenko, 1997; Strathman et al., 1994).


The types of cognitive propensities that these thinking disposition measures reflect are the tendency to collect information before making up one's mind, the tendency to seek various points of view before coming to a conclusion, the disposition to think extensively about a problem before responding, the tendency to calibrate the degree of strength of one's opinion to the degree of evidence available, the tendency to think about future consequences before taking action, the tendency to explicitly weigh pluses and minuses of situations before making a decision, and the tendency to seek nuance and avoid absolutism. In short, individual differences in thinking dispositions are assessing variation in people's goal management, epistemic values, and epistemic self-regulation—differences in the operation of the reflective mind (see Koedinger & Roll, Chapter 40). They are psychological characteristics that underpin rational thought and action.

The cognitive abilities assessed on intelligence tests are not of this type. They are not about high-level personal goals and their regulation, or about the tendency to change beliefs in the face of contrary evidence, or about how knowledge acquisition is internally regulated when not externally directed. People have indeed come up with definitions of intelligence that encompass such things. Theorists often define intelligence in ways that encompass rational action and belief but, nevertheless, the actual measures of intelligence in use assess only algorithmic-level cognitive capacity. No current intelligence test that is even moderately used in practice assesses rational thought or behavior (Stanovich, 2002, 2009b).

Figure 22.1 represents the classification of individual differences in the tripartite view. The broken horizontal line represents the location of the key distinction in older, dual-process views. Whereas the reflective and algorithmic minds are characterized by continuous individual differences and substantial variability, there are fewer continuous individual differences in the autonomous mind and less variability (see Kaufman et al., 2010, for a different view). Disruptions to the autonomous mind often reflect damage to cognitive modules that result in very discontinuous cognitive dysfunction such as autism or the agnosias and alexias (Anderson, 2005; Bermudez, 2001; Murphy & Stich, 2000).

Figure 22.1 identifies variation in fluid intelligence (Gf) with individual differences in the efficiency of processing of the algorithmic mind. Fluid intelligence is one component in the Cattell/Horn/Carroll (CHC) theory of intelligence (Carroll, 1993; Cattell, 1963, 1998; Horn & Cattell, 1967). Sometimes called the theory of fluid and crystallized intelligence (symbolized Gf/Gc theory), this theory posits that tests of mental ability tap, in addition to a general factor, a small number of broad factors, of which two are dominant (Geary, 2005; Horn & Noll, 1997; Taub & McGrew, 2004). Fluid intelligence (Gf) reflects reasoning abilities operating across a variety of domains—in particular, novel ones. It is measured by tasks of abstract reasoning such as figural analogies, Raven matrices, and series completion. Crystallized intelligence (Gc) reflects declarative knowledge acquired from acculturated learning experiences. It is measured by vocabulary tasks, verbal comprehension, and general knowledge measures. Ackerman (1996) discusses how the two dominant factors in the CHC theory reflect a long history of considering two aspects of intelligence: intelligence-as-process (Gf) and intelligence-as-knowledge (Gc).

I have argued that individual differences in fluid intelligence are a key indicator of the variability across individuals in the ability to sustain decoupling operations (Stanovich, 2001, 2009b). Increasingly it is becoming apparent that one of the critical mental operations being tapped by measures of fluid intelligence is the cognitive decoupling operation I have discussed in this chapter. This is becoming clear from converging work on executive function and working memory. Most measures of executive function and working memory are direct or indirect indicators of a person's ability to sustain decoupling operations (Duncan et al., 2008; Engle, 2002; Gray, Chabris, & Braver, 2003; Hasher, Lustig, & Zacks, 2007; Kane, 2003; Lepine, Barrouillet, & Camos, 2005; Salthouse, Atkinson, & Berish, 2003; Salthouse & Pink, 2008; Stanovich, 2011).

Figure 22.1 highlights an important sense in which rationality is a more encompassing construct than intelligence. As previously discussed, to be rational, a person must have well-calibrated beliefs and must act appropriately on those beliefs to achieve goals—both properties of the reflective mind. The person must, of course, have the algorithmic-level machinery that enables him or her to carry out the actions and to process the environment in a way that enables the correct beliefs to be fixed and the correct actions to be taken. Thus, individual differences in rational thought and action can arise because of individual differences in fluid intelligence (the algorithmic mind) or because of individual differences in thinking dispositions (the reflective mind).

The conceptualization in Figure 22.1 has several advantages. First, it conceptualizes intelligence in terms of what intelligence tests actually measure. IQ tests do not attempt to measure directly an aspect of epistemic or instrumental rationality, nor do they examine any thinking dispositions that relate to rationality. It is also clear from Figure 22.1 why rationality and intelligence can become dissociated.


Rational thinking depends on thinking dispositions as well as algorithmic efficiency. Thus, as long as variation in thinking dispositions is not perfectly correlated with fluid intelligence, there is the statistical possibility of dissociations between rationality and intelligence.

In fact, substantial empirical evidence indicates that individual differences in thinking dispositions and intelligence are far from perfectly correlated. Many different studies involving thousands of subjects (e.g., Ackerman & Heggestad, 1997; Austin & Deary, 2002; Baron, 1982; Bates & Shieles, 2003; Cacioppo et al., 1996; Eysenck, 1994; Goff & Ackerman, 1992; Kanazawa, 2004; Kokis et al., 2002; Zeidner & Matthews, 2000) have indicated that measures of intelligence display only moderate to weak correlations (usually less than .30) with some thinking dispositions (e.g., actively open-minded thinking, need for cognition) and near-zero correlations with others (e.g., conscientiousness, curiosity, diligence). Other important evidence supports the conceptual distinction made here between algorithmic cognitive capacity and thinking dispositions. For example, across a variety of tasks from the heuristics and biases literature, it has consistently been found that rational thinking dispositions will predict variance after the effects of general intelligence have been controlled.⁴

The functions of the different levels of control are illustrated more completely in Figure 22.2. There, it is clear that the override capacity itself is a property of the algorithmic mind and it is indicated by the arrow labeled A. However, previous dual-process theories have tended to ignore the higher level cognitive function that initiates the override function in the first place. This is a dispositional property of the reflective mind that is related to rationality. In the model in Figure 22.2, it corresponds to arrow B, which represents (in machine intelligence terms) the call to the algorithmic mind to override the Type 1 response by taking it offline. This is a different mental function than the override function itself (arrow A), and, as the evidence cited earlier indicates, the two functions are indexed by different types of individual differences.

Figure 22.2 represents another aspect of cognition somewhat neglected by previous dual-process theories. Specifically, the override function has loomed large in dual-process theory but less so the simulation process that computes the alternative response that makes the override worthwhile. Figure 22.2 explicitly represents the simulation function as well as the fact that the call to initiate simulation originates in the reflective mind.
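A brief methodological note on the finding cited above, that thinking dispositions predict rational-thinking performance even after general intelligence is controlled: the standard analysis is hierarchical regression. The sketch below shows its logic on purely synthetic data (the coefficients and scores are invented and are not taken from the studies cited): the variance explained by an intelligence measure alone is compared with the variance explained once a thinking-disposition measure is added.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic standardized scores: intelligence and a thinking disposition that are
# only modestly correlated, both contributing to a rational-thinking criterion.
intelligence = rng.standard_normal(n)
disposition = 0.3 * intelligence + rng.standard_normal(n)
rational_thinking = 0.4 * intelligence + 0.4 * disposition + rng.standard_normal(n)

def r_squared(predictors, criterion):
    """Proportion of variance in the criterion explained by a least-squares fit."""
    X = np.column_stack([np.ones(len(criterion))] + predictors)
    coefs, *_ = np.linalg.lstsq(X, criterion, rcond=None)
    residuals = criterion - X @ coefs
    return 1 - residuals.var() / criterion.var()

r2_intelligence = r_squared([intelligence], rational_thinking)
r2_both = r_squared([intelligence, disposition], rational_thinking)
print(f"R^2 with intelligence only: {r2_intelligence:.2f}")
print(f"R^2 adding the disposition: {r2_both:.2f} (increment = {r2_both - r2_intelligence:.2f})")
```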

Figure 22.2 labels: Reflective Mind, Algorithmic Mind, and Autonomous Mind, each issuing a response; arrows A. Override, B. Initiate Override, C. Decoupling, D. Initiate Simulation Via Decoupling, E. Serial Associative Cognition, F. Initiate Control Change in Serial Associative Cognition, G. Preattentive Processes; the algorithmic mind also directs response or attention.

Fig. 22.2 A more complete model of the tripartite structure.


The decoupling operation (indicated by arrow C) itself is carried out by the algorithmic mind and the call to initiate simulation (indicated by arrow D) by the reflective mind. Again, two different types of individual differences are associated with the initiation call and the decoupling operator—specifically, rational thinking dispositions with the former and fluid intelligence with the latter. Also represented is the fact that the higher levels of control receive inputs from the computations of the autonomous mind (arrow G) via so-called preattentive processes (Evans, 2006, 2007, 2008, 2009). The arrows labeled E and F reflect the decoupling and higher level control of a kind of Type 2 processing (serial associative cognition) that does not involve fully explicit cognitive simulation (see Stanovich, 2011).

Mindware in the Tripartite Model

Knowledge bases, both innate and derived from experience, also importantly bear on rationality. The term mindware was coined by Perkins (1995) to refer to the rules, knowledge, procedures, and strategies that a person can retrieve from memory in order to aid decision making and problem solving. Each of the levels in the tripartite model of mind has to access knowledge to carry out its operations, as illustrated in Figure 22.3. As the figure indicates, the reflective mind not only accesses general knowledge structures (Gc) but, importantly, accesses the person's opinions, beliefs, and reflectively acquired goal structure. The algorithmic mind accesses microstrategies for cognitive operations and production system rules for sequencing behaviors and thoughts. Finally, the autonomous mind accesses not only knowledge bases that are evolutionary adaptations, but it also retrieves information that has become tightly compiled and available to the autonomous mind due to overlearning and practice.

It is important to note that what is displayed in Figure 22.3 are the knowledge bases that are unique to each mind. Algorithmic- and reflective-level processes also receive inputs from the computations of the autonomous mind (see arrow G in Figure 22.2). The mindware available for retrieval, particularly that available to the reflective mind, is in part the product of past learning experiences. The knowledge structures available for retrieval by the reflective mind represent Gc, crystallized intelligence. Recall that Gf, fluid intelligence (intelligence-as-process), is already represented in Figure 22.2.

[Figure 22.3 schematic: knowledge structures accessed by each level. Reflective Mind: Beliefs, Goals, and General Knowledge (Gc). Algorithmic Mind: Strategies and Production Systems. Autonomous Mind: Encapsulated Knowledge Bases (ENB) and Tightly Compiled Learned Information (TCLI).]

Fig. 22.3 Knowledge structures in the tripartite framework.
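As a purely illustrative, schematic rendering of these knowledge bases in code (the entries are invented placeholders, not an inventory proposed in this chapter):

    # Hypothetical sketch of the knowledge bases unique to each mind in Figure 22.3.
    TRIPARTITE_KNOWLEDGE = {
        "reflective_mind": {
            "beliefs_and_goals": ["epistemic values", "reflectively acquired goals"],
            "general_knowledge_Gc": ["probability rules", "scientific-reasoning principles"],
        },
        "algorithmic_mind": {
            "microstrategies": ["means-ends analysis"],
            "production_rules": ["IF conflict detected THEN sustain decoupling"],
        },
        "autonomous_mind": {
            "ENB_encapsulated_knowledge": ["face recognition", "threat detection"],
            "TCLI_tightly_compiled_learned_info": ["overlearned arithmetic facts", "native-language syntax"],
        },
    }

    for mind, stores in TRIPARTITE_KNOWLEDGE.items():
        print(f"{mind}: {', '.join(stores)}")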


It is the general computational power of the algorithmic mind—importantly exemplified by the ability to sustain cognitive decoupling.

It is important to see how both of the major components of Gf/Gc theory miss critical aspects of rational thought. Fluid intelligence will, of course, have some relation to rationality because it indexes the computational power of the algorithmic mind to sustain decoupling. Because override and simulation are important operations for rational thought, Gf will definitely facilitate rational action in some situations. Nevertheless, the tendency to initiate override (arrow B in Fig. 22.2) and to initiate simulation activities (arrow D in Fig. 22.2) are both aspects of the reflective mind unassessed by intelligence tests, so the tests will miss these components of rationality.

The situation with respect to Gc is a little different. It is true that much of the mindware of rational thought would be classified as crystallized intelligence in the abstract. But is it the kind of crystallized knowledge that is specifically assessed on the tests? The answer is no. The mindware of rational thought is somewhat specialized mindware (see Stanovich, 2009b). It clusters in the domains of probabilistic reasoning (see Griffiths et al., Chapter 3), causal reasoning (see Cheng & Buehner, Chapter 12), and scientific reasoning (see Dunbar & Klahr, Chapter 35). In contrast, the crystallized knowledge assessed on IQ tests is deliberately designed to be nonspecialized. The designers of the tests, in order to make sure the sampling of Gc is fair and unbiased, explicitly attempt to broadly sample vocabulary, verbal comprehension domains, and general knowledge. The broad sampling ensures unbiasedness in the test, but it inevitably means that the specific knowledge bases critical to rationality will go unassessed. In short, Gc, as traditionally measured, does not assess individual differences in rationality, and Gf will do so only indirectly and to a mild extent. In short, as measures of rational thinking, IQ tests are radically incomplete.

The Requirements of Rational Thinking and Their Relation to Intelligence
Within the tripartite framework, rationality requires mental characteristics of three different types. Problems in rational thinking arise when cognitive capacity is insufficient to sustain autonomous system override, when the necessity of override is not recognized, or when simulation processes do not have access to the mindware necessary for the synthesis of a better response. The source of these problems, and their relation to intelligence, help to explain one data trend that has been uncovered—that some rational thinking problems show surprising degrees of dissociation from cognitive ability (Stanovich, 2009b, 2011; Stanovich & West, 2007, 2008a, 2008b; West, Toplak, & Stanovich, 2008). Myside bias, for example, is virtually independent of intelligence (Macpherson & Stanovich, 2007; Sá, Kelley, Ho, & Stanovich, 2005; Stanovich & West, 2007, 2008a, 2008b; Toplak & Stanovich, 2003). Individuals with higher IQs in a university sample are no less likely to process information from an egocentric perspective than are individuals with relatively lower IQs.

Irrational behavior can occur because the right mindware (cognitive rules, strategies, knowledge, and belief systems) is not available to use in decision making. We would expect to see a correlation with intelligence here because mindware gaps most often arise from lack of education or experience. Nevertheless, while it is true that more intelligent individuals learn more things than less intelligent individuals, much knowledge (and many thinking dispositions) relevant to rationality is picked up rather late in life. Explicit teaching of this mindware is not uniform in the school curriculum at any level. That such principles are taught very inconsistently means that some intelligent people may fail to learn these important aspects of critical thinking. In university samples, correlations with cognitive ability have been found to be roughly (in absolute magnitude) in the range of .20–.35 for probabilistic reasoning tasks and scientific reasoning tasks measuring a variety of rational principles (Bruine de Bruin, Parker, & Fischhoff, 2007; Kokis et al., 2002; Parker & Fischhoff, 2005; Sá, West, & Stanovich, 1999; Stanovich & West, 1998b, 1998c, 1998d, 1999, 2000, 2008b; Toplak & Stanovich, 2002). This is again a magnitude of correlation that allows for substantial discrepancies between intelligence and rationality. Intelligence is thus no inoculation against many of the sources of irrational thought. None of these sources are directly assessed on intelligence tests, and the processes that are tapped by IQ tests are not highly overlapping with the processes and knowledge that explain variation in rational thinking ability.

Conclusions and Future Directions
The many studies of individual differences on heuristics and biases tasks falsify the most explicit versions of the Panglossian view. People are not all


identically rational. There are individual differences on all of these tasks, and the individual differences are not the result of random performance errors (see Stanovich, 1999; Stein, 1996). Instead, they are systematic. If there are such systematic individual differences, it means that at least some people, some of the time, are irrational. Extreme Panglossianism cannot be sustained. However, there is another position in the debate that, like Panglossianism, serves to minimize the attribution of irrationality. Termed the Apologist position (Stanovich, 1999), this view takes very seriously the idea that humans have computational limitations that keep them from being fully rational.

Like the Meliorist, the Apologist accepts the empirical reality and nonspuriousness of normative/descriptive gaps, but the Apologist is much more hesitant to term them instances of irrationality. This is because the Apologist takes the position that to characterize a suboptimal behavior as irrational, it must be the case that the normative model is computable by the individual. If there are computational limitations affecting task performance, then the normative model may not be prescriptive, at least for individuals of low algorithmic capacity. Prescriptive models are usually viewed as specifying how processes of belief formation and decision making should be carried out, given the limitations of the human cognitive apparatus and the situational constraints (e.g., time pressure) with which the decision maker must deal (Baron, 2008). From the Apologist’s perspective, the descriptive model is quite close to the prescriptive model, and the descriptive/normative gap is attributed to a computational limitation. Although the Apologist admits that performance is suboptimal from the standpoint of the normative model, it is not irrational because there is no prescriptive/descriptive gap.

However, as demonstrated in some of the data patterns I have described in this chapter, the Apologist stratagem will not work for all of the irrational tendencies that have been uncovered in the heuristics and biases literature. This is because many biases are not very strongly correlated with measures of intelligence (algorithmic capacity). Additionally, there is reliable variance in rational thinking found even after cognitive ability is controlled, and that reliable variance is associated with thinking dispositions in theoretically predictable ways. These thinking dispositions reflect control features of the reflective mind that can lead to responses that are more or less rational. They are one of the main sources of the individual differences in rational thought that I have been exploring in this chapter. Such thinking dispositions vary systematically from individual to individual, and they are the source of what Meliorists consider the variance in the irrationalities in human cognition. Unlike the Panglossian (who assumes uniform rationality) or the Apologist (who minimizes such variability while not entirely denying it), the Meliorist is very accepting of the idea of variability in rational thought.

The Panglossian position in the Great Rationality Debate has obscured the existence of individual differences in rational thought and its underlying components. In particular, Panglossian philosophers have obscured the importance of the reflective mind. Philosophical treatments of rationality by Panglossians tend to have a common structure (see Cohen, 1981, for example). Such treatments tend to stress the importance of the competence/performance distinction and then proceed to allocate all of the truly important psychological mechanisms to the competence side of the dichotomy.

For example, Rescher (1988) argues that “to construe the data of these interesting experimental studies [of probabilistic reasoning] to mean that people are systematically programmed to fallacious processes of reasoning—rather than merely that they are inclined to a variety of (occasionally questionable) substantive suppositions—is a very questionable step . . . . While all (normal) people are to be credited with the capacity to reason, they frequently do not exercise it well” (p. 196). There are two parts to Rescher’s (1988) point here: the “systematically programmed” part and the “inclination toward questionable suppositions” part (or, as Rips (1994, p. 394) puts it, whether incorrect reasoning is “systematically programmed or just a peccadillo”). Rescher’s (1988) focus—like that of many who have dealt with the philosophical implications of the idea of human irrationality—is on the issue of how humans are “systematically programmed.” “Inclinations toward questionable suppositions” are only of interest to those in the philosophical debates as mechanisms that allow one to drive a wedge between competence and performance—thus maintaining a theory of near-optimal human rational competence in the face of a host of responses that seemingly defy explanation in terms of standard normative models.

Analogously to Rescher, Cohen (1982) argues that there really are only two factors affecting performance on rational thinking tasks: “normatively


correct mechanisms on the one side, and adventitious causes of error on the other” (p. 252). Not surprisingly given such a conceptualization, the processes contributing to error (“adventitious causes”) are of little interest to Cohen (1981, 1982). In his view, human performance arises from an intrinsic human competence that is impeccably rational, but responses occasionally deviate from normative correctness due to inattention, memory lapses, lack of motivation, and other fluctuating but basically unimportant causes (in Cohen’s view). There is nothing in such a conception that would motivate any interest in patterns of errors or individual differences in such errors.

One of the goals of this chapter is to reverse the figure and ground in the rationality debate, which has tended to be dominated by the particular way that philosophers frame the competence/performance distinction. From a psychological standpoint, there may be important implications in precisely the aspects of performance that have been backgrounded in this controversy (“adventitious causes,” “peccadillos”). That is, whatever the outcome of the disputes about how humans are “systematically programmed,” variation in the “inclination toward questionable suppositions” is of psychological interest as a topic of study in its own right. The research discussed in this chapter provides at least tentative indications that the “inclination toward questionable suppositions” has some degree of domain generality and that it is predicted by thinking dispositions that concern the epistemic and pragmatic goals of the individual and that are part of the reflective mind.

Johnson-Laird and Byrne (1993; see Johnson-Laird, 2006) articulate a view of rational thought that parses the competence/performance distinction much differently from that of Cohen (1981, 1982, 1986). It is a view that highlights the importance of the reflective mind and leaves room for individual differences in important components of cognition. At the heart of the rational competence that Johnson-Laird and Byrne (1993) attribute to humans is not perfect rationality but instead just one meta-principle: People are programmed to accept inferences as valid provided that they have constructed no mental model of the premises that contradicts the inference (see Johnson-Laird, Chapter 9). Inferences are categorized as false when a mental model is discovered that is contradictory. However, the search for contradictory models is “not governed by any systematic or comprehensive principles” (p. 178).

The key point in Johnson-Laird and Byrne’s (1993) account is that once an individual constructs a mental model from the premises, once the individual draws a new conclusion from the model, and once the individual begins the search for an alternative model of the premises that contradicts the conclusion, the individual “lacks any systematic method to make this search for counter-examples” (p. 205). Here is where Johnson-Laird and Byrne’s (1993) model could be modified to allow for the influence of thinking styles in ways that the impeccable competence view of Cohen (1981) does not. In this passage, Johnson-Laird and Byrne seem to be arguing that there are no systematic control features of the search process. But epistemically related thinking dispositions may in fact be reflecting just such control features.

Individual differences in the extensiveness of the search for contradictory models could arise from a variety of cognitive factors that, although they may not be completely systematic, may be far from “adventitious”—factors such as dispositions toward premature closure, cognitive confidence, reflectivity, dispositions toward confirmation bias, and ideational generativity. The decontextualizing requirement of many heuristics and biases tasks is a feature that is emphasized by many critics of that literature who, nevertheless, fail to see it as implying a research program for differential psychology. For example, I have argued that to contextualize a problem is such a ubiquitous reasoning style for human beings that it constitutes one of a very few so-called fundamental computational biases of information processing (Stanovich, 2003, 2004). Thus, it is not surprising that many people respond incorrectly when attempting a psychological task that is explicitly designed to require a decontextualized reasoning style (contrary-to-fact syllogisms, argument evaluation, etc.). But recall the empirically demonstrated variability on all of these tasks. The fact that some people do give the decontextualized response means that at least some people have available a larger repertoire of reasoning styles, allowing them to reason flexibly so as to override fundamental computational biases if the situation requires.

Another way of stressing the importance of individual differences in understanding the nature of rational thought is in terms of Dennett’s (1987) so-called intentional stance, which he marries to an assumption of idealized rationality. Dennett (1988) argues that we use the intentional stance for humans and dogs but not for lecterns because for the latter


“there is no predictive leverage gained by adopting the intentional stance” (p. 496). However, in several experiments discussed in this chapter, it has been shown that there is additional predictive leverage to be gained by relaxing the idealized rationality assumption of Dennett’s (1987, 1988) intentional stance and by positing measurable and systematic variation in intentional-level psychologies (that is, in the reflective mind). Knowledge about such individual differences in people’s intentional-level psychologies can be used to predict variance in the normative/descriptive gap displayed on many reasoning tasks. Consistent with the Meliorist conclusion that there can be individual differences in human rationality, the results show that there is variability in reasoning that cannot be accommodated within a model of perfect rational competence operating in the presence of performance errors and computational limitations.

Acknowledgments
This research was supported by grants from the Social Sciences and Humanities Research Council of Canada and the Canada Research Chairs program to Keith E. Stanovich. Laura Noveck and Keith Holyoak are thanked for their insightful comments on an earlier version of this chapter.

Notes
1. It should also be noted that in the view of rationality taken in this chapter, rationality is an intentional-level personal entity and not an algorithmic-level subpersonal one (Bermudez, 2001; Davies, 2000; Frankish, 2009; Stanovich, 1999, 2009a). A memory system in the human brain is not rational or irrational—it is merely efficient or inefficient (or of high or low capacity). Thus, subprocesses of the brain do not display rational or irrational properties per se, although they may contribute in one way or another to personal decisions or beliefs that could be characterized as such. Rationality concerns the actions of an entity in its environment that serve its goals. One of course could extrapolate the notion of environment to include the interior of the brain itself and then talk of a submodule that chose strategies rationally or not. This move creates two problems. First, what are the goals of this subpersonal entity—what are its interests that its rationality is trying to serve? This is unclear in the case of a subpersonal entity. Second, such a move regresses all the way down. We would need to talk of a neuron firing being either rational or irrational. As Oaksford and Chater (1998) put it, “the fact that a model is optimizing something does not mean that the model is a rational model. Optimality is not the same as rationality . . . . Stomachs may be well or poorly adapted to their function (digestion), but they have no beliefs, desires or knowledge, and hence the question of their rationality does not arise” (pp. 4 and 5).
2. The principle of maximizing expected value says that the action that a rational person should choose is the one with the highest expected value. Expected value is calculated by taking the objective value of each outcome and multiplying it by the probability of that outcome and then summing those products over all of the possible outcomes. Symbolically, the formula is as follows: Expected value = Σ pᵢvᵢ, where pᵢ is the probability of each outcome and vᵢ is the value of each outcome. The symbol Σ is the summation sign and simply means “add up all of the terms that follow.” The term utility refers to subjective value. Thus, the calculation of expected utility involves identical mathematics except that a subjective estimate of utility is substituted for the measure of objective value.
3. It is important to note that the Meliorist recognizes two different ways in which human decision-making performance might be improved. These might be termed cognitive change and environmental change. First, it might be possible to teach people better reasoning strategies and to have them learn rules of decision making that are helpful (see Stanovich, 2009b). These would represent instances of cognitive change. Additionally, however, research has shown that it is possible to change the environment so that natural human reasoning strategies will not lead to error (Gigerenzer, 2002; Milkman, Chugh, & Bazerman, 2009; Thaler & Sunstein, 2008). For example, choosing the right default values for a decision would be an example of an environmental change. In short, environmental alterations (as well as cognitive changes) can prevent rational thinking problems. Thus, in cases where teaching people the correct reasoning strategies might be difficult, it may well be easier to change the environment so that decision-making errors are less likely to occur.
4. Such empirical studies indicate that cognitive capacity and thinking dispositions measures are tapping separable variance. The converging evidence on the existence of this separable variance is growing (Bruine de Bruin, Parker, & Fischhoff, 2007; Finucane & Gullion, 2010; Klaczynski, Gordon, & Fauth, 1997; Klaczynski & Lavallee, 2005; Klaczynski & Robinson, 2000; Kokis et al., 2002; Macpherson & Stanovich, 2007; Newstead, Handley, Harley, Wright, & Farrelly, 2004; Parker & Fischhoff, 2005; Sá & Stanovich, 2001; Stanovich & West, 1997, 1998c, 2000; Toplak, Liu, Macpherson, Toneatto, & Stanovich, 2007; Toplak & Stanovich, 2002).

References
Ackerman, P. L. (1994). Intelligence, attention, and learning: Maximal and typical performance. In D. K. Detterman (Ed.), Current topics in human intelligence (Vol. 4, pp. 1–27). Norwood, NJ: Ablex.
Ackerman, P. L. (1996). A theory of adult development: Process, personality, interests, and knowledge. Intelligence, 22, 227–257.
Ackerman, P. L., & Heggestad, E. D. (1997). Intelligence, personality, and interests: Evidence for overlapping traits. Psychological Bulletin, 121, 219–245.
Ackerman, P. L., & Kanfer, R. (2004). Cognitive, affective, and conative aspects of adult intellect within a typical and maximal performance framework. In D. Y. Dai & R. J. Sternberg (Eds.), Motivation, emotion, and cognition: Integrative perspectives on intellectual functioning and development (pp. 119–141). Mahwah, NJ: Lawrence Erlbaum Associates.
Ainslie, G. (1982). A behavioral economic approach to the defence mechanisms: Freud’s energy theory revisited. Social Science Information, 21, 735–780.
Ainslie, G. (2001). Breakdown of will. Cambridge, England: Cambridge University Press.
Anderson, J. R. (1990). The adaptive character of thought. Hillsdale, NJ: Erlbaum.
Anderson, M. (2005). Marrying intelligence and cognition: A developmental view. In R. J. Sternberg & J. E. Pretz


(Eds.), Cognition and intelligence (pp. 268–287). New York: Carroll, J. B. (1993). Human cognitive abilities: A survey of factor- Cambridge University Press. analytic studies. Cambridge, England: Cambridge University Arkes, H. R., & Ayton, P. (1999). Th e sunk cost and Concorde Press. eff ects: Are humans less rational than lower animals? Carruthers, P. (2006). Th e architecture of the mind. New York: Psychological Bulletin, 125, 591–600. Oxford University Press. Aron, A. R. (2008). Progress in executive-function research: Cattell, R. B. (1963). Th eory for fl uid and crystallized intel- From tasks to functions to regions to networks. Current ligence: A critical experiment. Journal of Educational Directions in Psychological Science, 17, 124–129. Psychology, 54, 1–22. Audi, R. (1993). Th e structure of justifi cation. Cambridge, Cattell, R. B. (1998). Where is intelligence? Some answers from England: Cambridge University Press. the triadic theory. In J. J. McArdle & R. W. Woodcock (Eds.), Audi, R. (2001). Th e architecture of reason: Th e structure and Human cognitive abilities in theory and practice (29–38). substance of rationality. Oxford, England: Oxford University Mahwah, NJ: Erlbaum. Press. Chaiken, S., Liberman, A., & Eagly, A. H. (1989). Heuristic and Aunger, R., & Curtis, V. (2008). Kinds of behaviour. Biology and systematic information within and beyond the persuasion Philosophy, 23, 317–345. context. In J. S. Uleman & J. A. Bargh (Eds.), Unintended Austin, E. J., & Deary, I. J. (2002). Personality dispositions. thought (pp. 212–252). New York: Guilford Press. In R. J. Sternberg (Ed.), Why smart people can be so stupid Cohen, L. J. (1981). Can human irrationality be experimentally (pp. 187–211). New Haven, CT: Yale University Press. demonstrated? Behavioral and Brain Sciences, 4, 317–370. Bargh, J. A., & Chartrand, T. L. (1999). Th e unbearable automa- Cohen, L. J. (1982). Are people programmed to commit falla- ticity of being. American Psychologist, 54, 462–479. cies? Further thoughts about the interpretation of experi- Baron, J. (1982). Personality and intelligence. In R. J. Sternberg mental data on probability judgment. Journal for the Th eory (Ed.), Handbook of human intelligence (pp. 308–351). of Social Behavior, 12, 251–274. Cambridge, England: Cambridge University Press. Cohen, L. J. (1986). Th e dialogue of reason. Oxford, England: Baron, J. (2008). Th inking and deciding (4th ed.). New York: Oxford University Press. Cambridge University Press. Cokely, E. T., & Kelley, C. M. (2009). Cognitive abilities and Barrett, H. C., & Kurzban, R. (2006). Modularity in cognition: superior decision making under risk: A protocol analysis and Framing the debate. Psychological Review, 113, 628–647. process model evaluation. Judgment and Decision Making, 4, Bates, T. C., & Shieles, A. (2003). Crystallized intelligence as a 20–33. product of speed and drive for experience: the relationship Coltheart, M. (1999). Modularity and cognition. Trends in of inspection time and openness to g and Gc. Intelligence, Cognitive Sciences, 3, 115–120. 31, 275–287. Cosmides, L., & Tooby, J. (1996). Are humans good intuitive Bazerman, M., & Moore, D. A. (2008). Judgment in managerial statisticians after all? Rethinking some conclusions from the decision making. New York: John Wiley. literature on judgment under uncertainty. Cognition, 58, Bazerman, M., Tenbrunsel, A., & Wade-Benzoni, K. (1998). 1–73. Negotiating with yourself and losing: Understanding and Cronbach, L. J. (1949). 
Essentials of psychological testing. New managing confl icting internal preferences. Academy of York: Harper. Management Review, 23, 225–241. Davies, M. (2000). Interaction without reduction: Th e relation- Bermudez, J. L. (2001). Normativity and rationality in delusional ship between personal and sub-personal levels of description. psychiatric disorders. Mind and Language, 16, 457–493. Mind and Society, 1, 87–105. Best, J. R., Miller, P. H., & Jones, L. L. (2009). Executive func- Dawes, R. M. (1998). Behavioral decision making and judg- tions after age 5: Changes and correlates. Developmental ment. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), Th e Review, 29, 180–200. handbook of social psychology (Vol. 1, pp. 497–548). Boston, Bickerton, D. (1995). Language and human behavior. Seattle: MA: McGraw-Hill. University of Washington Press. Dawkins, R. (1982). Th e extended phenotype. New York: Oxford Bishop, M. A., & Trout, J. D. (2005). and the University Press. psychology of human judgment. Oxford, England: Oxford Del Missier, F., Mantyla, T., & Bruine de Bruin, W. (2010). University Press. Executive functions in decision making: An individual dif- Brainerd, C. J., & Reyna, V. F. (2001). Fuzzy-trace theory: Dual ferences approach. Th inking and Reasoning, 16, 69–97. processes in memory, reasoning, and cognitive neuroscience. Dennett, D. C. (1987). Th e intentional stance. Cambridge, MA: In H. W. Reese & R. Kail (Eds.), Advances in child develop- Th e MIT Press. ment and behavior (Vol. 28, pp. 41–100). San Diego, CA: Dennett, D. C. (1988). Precis of “Th e Intentional Stance”. Academic Press. Behavioral and Brain Sciences, 11, 493–544. Bruine de Bruin, W., Parker, A. M., & Fischhoff , B. (2007). Dennett, D. C. (1995). Darwin’s dangerous idea: Evolution and Individual diff erences in adult decision-making competence. the meanings of life. New York: Simon & Schuster. Journal of Personality and Social Psychology, 92, 938–956. Dennett, D. C. (1996). Kinds of minds: Toward an understanding Cacioppo, J. T., Petty, R. E., Feinstein, J., & Jarvis, W. (1996). of consciousness. New York: Basic Books. Dispositional diff erences in cognitive motivation: Th e life de Sousa, R. (2007). Why think? Evolution and the rational mind. and times of individuals varying in need for cognition. Oxford, England: Oxford University Press. Psychological Bulletin, 119, 197–253. Doherty, M. (2003). Optimists, pessimists, and realists. In S. L. Camerer, C., Loewenstein, G., & Prelec, D. (2005). Neuro- Schneider & J. Shanteau (Eds.), Emerging perspectives on economics: How neuroscience can inform economics. judgment and decision research (pp. 643–679). New York: Journal of Economic Literature, 34, 9–64. Cambridge University Press.


Dohmen, T., Falk, A., Huff man, D., Marklein, F., & Sunde, U. Frank, M. J., Cohen, M., & Sanfey, A. G. (2009). Multiple (2009). Biased probability judgment: Evidence of incidence systems in decision making. Current Directions in Psychological and relationship to economic outcomes from a representative Science, 18, 73–77. sample. Journal of Economic Behavior and Organization, 72, Frankish, K. (2004). Mind and supermind. Cambridge, England: 903–915. Cambridge University Press. Duncan, J., Parr, A., Woolgar, A., Th ompson, R., Bright, P., Cox, Frankish, K. (2009). Systems and levels: Dual-system theories S., . . . Nimmo-Smith, I. (2008). Goal neglect and Spearman’s and the personal-subpersonal distinction. In J. S. B. T. Evans g: Competing parts of a complex task. Journal of Experimental & K. Frankish (Eds.), In two minds: Dual processes and beyond Psychology-General, 137, 131–148. (pp. 89–107). Oxford, England: Oxford University Press. Edwards, W. (1954). Th e theory of decision making. Psychological Frederick, S. (2005). Cognitive refl ection and decision making. Bulletin, 51, 380–417. Journal of Economic Perspectives, 19, 25–42. Edwards, W., & von Winterfeldt, D. (1986). On cognitive illusions Gawronski, B., & Bodenhausen, G. V. (2006). Associative and and their implications. In H. R. Arkes & K. R. Hammond propositional processes in evaluation: An integrative review (Eds.), Judgment and decision making (pp. 642–679). of implicit and explicit attitude change. Psychological Bulletin, Cambridge, England: Cambridge University Press. 132, 692–731. Engle, R. W. (2002). Working memory capacity as executive Geary, D. C. (2005). Th e origin of the mind: Evolution of brain, attention. Current Directions in Psychological Science, 11, cognition, and general intelligence. Washington, DC: American 19–23. Psychological Association. Evans, J. St. B. T. (1984). Heuristic and analytic processes in Gigerenzer, G. (1996). On narrow norms and vague heuristics: A reasoning. British Journal of Psychology, 75, 451–468. reply to Kahneman and Tversky (1996). Psychological Review, Evans, J. St. B. T. (1989). Bias in human reasoning: Causes and 103, 592–596. consequences. Hove, England: Erlbaum. Gigerenzer, G. (2002). Calculated risks: How to know when num- Evans, J. St. B. T. (2003). In two minds: Dual-process accounts bers deceive you. New York: Simon & Schuster. of reasoning. Trends in Cognitive Sciences, 7, 454–459. Gigerenzer, G. (2007). Gut feelings: Th e intelligence of the uncon- Evans, J. St. B. T. (2006). Th e heuristic-analytic theory of rea- scious. New York: Viking Penguin. soning: Extension and evaluation. Psychonomic Bulletin and Gilbert, D. T. (2006). Stumbling on happiness. New York: Alfred Review, 13, 378–395. A. Knopf. Evans, J. St. B. T. (2007). Hypothetical thinking: Dual processes in Gilboa, I. (2010). Rational choice. Cambridge, MA: Th e MIT reasoning and judgment. New York: Psychology Press. Press. Evans, J. St. B. T. (2008). Dual-processing accounts of reasoning, Gilovich, T., Griffi n, D., & Kahneman, D. (Eds.). (2002). judgment and social cognition. Annual Review of Psychology, Heuristics and biases: Th e psychology of intuitive judgment. 59, 255–278. New York: Cambridge University Press. Evans, J. St. B. T. (2009). How many dual-process theories do we Goff , M., & Ackerman, P. L. (1992). Personality-intelligence need? One, two, or many? In J. Evans & K. Frankish (Eds.), relations: Assessment of typical intellectual engagement. In two minds: Dual processes and beyond (pp. 33–54). 
Oxford, Journal of Educational Psychology, 84, 537–552. England: Oxford University Press. Gray, J. R., Chabris, C. F., & Braver, T. S. (2003). Neural Evans, J. St. B. T. (2010). Th inking twice: Two minds in one brain. mechanisms of general fl uid intelligence. Nature Neuroscience, Oxford, England: Oxford University Press. 6, 316–322. Evans, J. St. B. T., & Frankish, K. (Eds.). (2009). In two Haidt, J. (2001). Th e emotional dog and its rational tail: A minds: Dual processes and beyond. Oxford, England: Oxford social intuitionist approach to moral judgment. Psychological University Press. Review, 108, 814–834. Evans, J. St. B. T., Newstead, S. E., & Byrne, R. M. J. (1993). Harman, G. (1995). Rationality. In E. E. Smith & D. N. Human reasoning: Th e psychology of deduction. Hove, England: Osherson (Eds.), Th inking (Vol. 3, pp. 175–211). Cambridge, Erlbaum. MA: Th e MIT Press. Evans, J. St. B. T., & Over, D. E. (1996). Rationality and reason- Hasher, L., Lustig, C., & Zacks, R. (2007). Inhibitory mecha- ing. Hove, England: Psychology Press. nisms and the control of attention. In A. Conway, C. Jarrold, Evans, J. St. B. T., & Over, D. E. (2010). Heuristic thinking and M. Kane, A. Miyake, & J. Towse (Eds.), Variation in working human intelligence: A commentary on Marewski, Gaissmaier memory (pp. 227–249). New York: Oxford University Press. and Gigerenzer. Cognitive Processing, 11, 171–175. Hastie, R., & Dawes, R. M. (2010). Rational choice in an uncer- Evans, J. St. B. T., & Wason, P. C. (1976). Rationalization in a tain world (2nd ed.). Th ousand Oaks, CA: . reasoning task. British Journal of Psychology, 67, 479–486. Hilton, D. J. (2003). Psychology and the fi nancial markets: Eysenck, H. J. (1994). Personality and intelligence: Psychometric Applications to understanding and remedying irrational and experimental approaches. In R. J. Sternberg & P. Ruzgis decision-making. In I. Brocas & J. D. Carrillo (Eds.), Th e (Eds.), Personality and intelligence (pp. 3–31). Cambridge, psychology of economic decisions. Vol. 1: Rationality and well- England: Cambridge University Press. being (pp. 273–297). Oxford, England: Oxford University Finucane, M. L., & Gullion, C. M. (2010). Developing a tool Press. for measuring the decision-making competence of older Horn, J. L., & Cattell, R. B. (1967). Age diff erences in fl uid and adults. Psychology and Aging, 25, 271–288. crystallized intelligence. Acta Psychologica, 26, 1–23. Fodor, J. A. (1983). Th e modularity of mind. Cambridge, MA: Horn, J. L., & Noll, J. (1997). Human cognitive capabilities: MIT University Press. Gf-Gc theory. In D. Flanagan, J. Genshaft, & P. Harrison Foley, R. (1987). Th e theory of epistemic rationality. Cambridge, (Eds.), Contemporary intellectual assessment: Th eories, tests, MA: Harvard University Press. and issues (pp. 53–91). New York: Guilford Press.


Hsee, C. K., & Hastie, R. (2006). Decision and experience: Why Klaczynski, P. A., Gordon, D. H., & Fauth, J. (1997). Goal- don’t we choose what makes us happy? Trends in Cognitive oriented critical reasoning and individual diff erences in Sciences, 10, 31–37. critical reasoning biases. Journal of Educational Psychology, Hurley, S., & Nudds, M. (2006). Th e questions of animal ratio- 89, 470–485. nality: Th eory and evidence. In S. Hurley & M. Nudds Klaczynski, P. A., & Lavallee, K. L. (2005). Domain-specifi c (Eds.), Rational animals? (pp. 1–83). Oxford, England: identity, epistemic regulation, and intellectual ability Oxford University Press. as predictors of belief-based reasoning: A dual-process Jeff rey, R. C. (1983). Th e logic of decision (Second Ed.). Chicago, perspective. Journal of Experimental Child Psychology, IL: University of Chicago Press. 92, 1–24. Johnson-Laird, P. N. (1983). Mental models. Cambridge, MA: Klaczynski, P. A., & Robinson, B. (2000). Personal theories, Harvard University Press. intellectual ability, and epistemological beliefs: Adult age dif- Johnson-Laird, P. N. (2006). How we reason. Oxford, England: ferences in everyday reasoning tasks. Psychology and Aging, Oxford University Press. 15, 400–416. Johnson-Laird, P. N., & Byrne, R. M. J. (1993). Models and Koehler, D. J., & James, G. (2009). Probability matching in deductive rationality. In K. Manktelow & D. Over (Eds.), choice under uncertainty: Intuition versus deliberation. Rationality: Psychological and philosophical perspectives (pp. Cognition, 113, 123–127. 177–210). London: Routledge. Koehler, D. J., & James, G. (2010). Probability matching and Jungermann, H. (1986). Th e two camps on rationality. In H. R. strategy availability. Memory and Cognition, 38, 667–676. Arkes & K. R. Hammond (Eds.), Judgment and decision Koehler, J. J. (1996). Th e base rate fallacy reconsidered: making (pp. 627–641). Cambridge, England: Cambridge Descriptive, normative and methodological challenges. University Press. Behavioral and Brain Sciences, 19, 1–53. Kahneman, D. (2000). A psychological point of view: Violations Kokis, J., Macpherson, R., Toplak, M., West, R. F., & Stanovich, of rational rules as a diagnostic of mental processes. Behavioral K. E. (2002). Heuristic and analytic processing: Age trends and Brain Sciences, 23, 681–683. and associations with cognitive ability and cognitive styles. Kahneman, D. (2003). A perspective on judgment and choice: Journal of Experimental Child Psychology, 83, 26–52. Mapping bounded rationality. American Psychologist, 58, Krueger, J., & Funder, D. C. (2004). Towards a balanced social 697–720. psychology: Causes, consequences and cures for the problem- Kahneman, D., & Frederick, S. (2002). Representativeness seeking approach to social cognition and behavior. Behavioral revisited: Attribute substitution in intuitive judgment. In T. and Brain Sciences, 27, 313–376. Gilovich, D. Griffi n, & D. Kahneman (Eds.), Heuristics and Kruglanski, A. W., & Webster, D. M. (1996). Motivated clos- biases: Th e psychology of intuitive judgment (pp. 49–81). New ing the mind: “Seizing” and “freezing”. Psychological Review, York: Cambridge University Press. 103, 263–283. Kahneman, D., & Frederick, S. (2005). A model of heuris- Kuhberger, A. (2002). Th e rationality of risky decisions: A chang- tic judgment. In K. J. Holyoak & R. G. Morrison (Eds.), ing message. Th eory and Psychology, 12, 427–452. Th e Cambridge handbook of thinking and reasoning (pp. Larrick, R. P. (2004). Debiasing. In D. J. Koehler & N. 
Harvey 267–293). New York: Cambridge University Press. (Eds.), Blackwell handbook of judgment and decision making Kahneman, D., & Klein, G. (2009). Conditions for intuitive (pp. 316–337). Malden, MA: Blackwell. expertise a failure to disagree. American Psychologist, 64, Lee, C. J. (2006). Gricean charity: Th e Gricean turn in psychol- 515–526. ogy. Philosophy of the Social Sciences, 36, 193–218. Kahneman, D., Krueger, A. B., Schkade, D., Schwarz, N., & Lepine, R., Barrouillet, P., & Camos, V. (2005). What makes Stone, A. (2006). Would you be happier if you were richer? working memory spans so predictive of high-level cognition? A focusing illusion. Science, 312, 1908–1910. Psychonomic Bulletin and Review, 12, 165–170. Kahneman, D., & Tversky, A. (1972). Subjective probability: Leslie, A. M. (1987). Pretense and representation: Th e origins of A judgment of representativeness. Cognitive Psychology, 3, “Th eory of Mind”. Psychological Review, 94, 412–426. 430–454. Li, M., & Chapman, G. B. (2009). “100% of anything looks Kahneman, D., & Tversky, A. (1973). On the psychology of pre- good:” Th e appeal of one hundred percent. Psychonomic diction. Psychological Review, 80, 237–251. Bulletin and Review, 16, 156–162. Kahneman, D., & Tversky, A. (1983). Can irrationality be intelli- Lieberman, M. D. (2003). Refl exive and refl ective judgment gently discussed? Behavioral and Brain Sciences, 6, 509–510. processes: A social cognitive neuroscience approach. In J. P. Kahneman, D., & Tversky, A. (1996). On the reality of cognitive Forgas, K. R. Williams, & W. von Hippel (Eds.), Social judg- illusions. Psychological Review, 103, 582–591. ments: Implicit and explicit processes (pp. 44–67). New York: Kahneman, D., & Tversky, A. (Eds.). (2000). Choices, values, and Cambridge University Press. frames. Cambridge, England: Cambridge University Press. Lieberman, M. D. (2007). Social cognitive neuroscience: A Kanazawa, S. (2004). General intelligence as a domain-specifi c review of core processes. Annual Review of Psychology, 58, adaptation. Psychological Review, 111, 512–523. 259–289. Kane, M. J. (2003). Th e intelligent brain in confl ict. Trends in Lieberman, M. D. (2009). What zombies can’t do: A social cog- Cognitive Sciences, 7, 375–377. nitive neuroscience approach to the irreducibility of refl ective Kaufman, S. B., DeYoung, C. G., Gray, J. R., Jimenez, L., Brown, consciousness. In J. St. B. T. Evans & K. Frankish (Eds.), In J., & Mackintosh, N. J. (2010). Implicit learning as an abil- two minds: Dual processes and beyond (pp. 293–316). Oxford, ity. Cognition, 116, 321–340. England: Oxford University Press. Klaczynski, P. A. (2001). Analytic and heuristic processing infl u- Loewenstein, G. F. (1996). Out of control: Visceral infl uences ences on adolescent reasoning and decision making. Child on behavior. Organizational Behavior and Human Decision Development, 72, 844–861. Processes, 65, 272–292.


Luce, R. D., & Raiff a, H. (1957). Games and decisions. New York: handbook of judgment and decision making (pp. 3–18). Wiley. Malden, MA: Blackwell. Macpherson, R., & Stanovich, K. E. (2007). Cognitive ability, Parker, A. M., & Fischhoff , B. (2005). Decision-making compe- thinking dispositions, and instructional set as predictors of tence: External validation through an individual diff erences critical thinking. Learning and Individual Diff erences, 17, approach. Journal of Behavioral Decision Making, 18, 1–27. 115–127. Perkins, D. N. (1995). Outsmarting IQ: Th e emerging science of Manktelow, K. I. (2004). Reasoning and rationality: Th e pure learnable intelligence. New York: Free Press. and the practical. In K. I. Manktelow & M. C. Chung (Eds.), Perner, J. (1991). Understanding the representational mind. Psychology of reasoning: Th eoretical and historical perspectives Cambridge, MA: MIT Press. (pp. 157–177). Hove, England: Psychology Press. Plato. (1945). Th e republic. (F. MacDonald Cornford, Trans.). Marewski, J. N., Gaissmaier, W., & Gigerenzer, G. (2010). New York: Oxford University Press. Good judgments do not require complex cognition. Current Pollock, J. L. (1995). Cognitive carpentry. Cambridge, MA: Th e Directions in Psychological Science, 11, 103–121. MIT Press. Matthews, G., Zeidner, M., & Roberts, R. D. (2002). Emotional Postman, N. (1988). Conscientious objections. New York: Vintage intelligence: Science and myth. Cambridge, MA: MIT Press. Books. McClure, S. M., Laibson, D. I., Loewenstein, G., & Cohen, Prado, J., & Noveck, I. A. (2007). Overcoming perceptual fea- J. D. (2004). Separate neural systems value immediate and tures in logical reasoning: A parametric functional magnetic delayed monetary rewards. Science, 306, 503–507. resonance imaging study. Journal of Cognitive Neuroscience, Metcalfe, J., & Mischel, W. (1999). A hot/cool-system analysis of 19, 642–657. delay of gratifi cation: Dynamics of will power. Psychological Reber, A. S. (1993). Implicit learning and tacit knowledge. New Review, 106, 3–19. York: Oxford University Press. Milkman, K. L., Chugh, D., & Bazerman, M. H. (2009). How Rescher, N. (1988). Rationality: A philosophical inquiry into the can decision making be improved? Perspectives on Psychological nature and rationale of reason. Oxford, England: Oxford Science, 4, 379–383. University Press. Minsky, M. L. (1985). Th e society of mind. New York: Simon Richerson, P. J., & Boyd, R. (2005). Not by genes alone: How and Schuster. culture transformed human evolution. Chicago, IL: University Miyake, A., Friedman, N., Emerson, M. J., & Witzki, A. H. of Chicago Press. (2000). Th e utility and diversity of executive functions and Rips, L. J. (1994). Th e psychology of proof. Cambridge, MA: MIT their contributions to complex “frontal lobe” tasks: A latent Press. variable analysis. Cognitive Psychology, 41, 49–100. Sá, W., Kelley, C., Ho, C., & Stanovich, K. E. (2005). Th inking Moors, A., & De Houwer, J. (2006). Automaticity: A theo- about personal theories: Individual diff erences in the coor- retical and conceptual analysis. Psychological Bulletin, 132, dination of theory and evidence. Personality and Individual 297–326. Diff erences, 38, 1149–1161. Murphy, D., & Stich, S. (2000). Darwin in the madhouse: Sá, W., & Stanovich, K. E. (2001). Th e domain specifi city and Evolutionary psychology and the classifi cation of mental dis- generality of mental contamination: Accuracy and projection orders. In P. Carruthers & A. 
Chamberlain (Eds.), Evolution in judgments of mental content. British Journal of Psychology, and the human mind: Modularity, language and meta-cognition 92, 281–302. (pp. 62–92). Cambridge, England: Cambridge University Sá, W., West, R. F., & Stanovich, K. E. (1999). Th e domain spec- Press. ifi city and generality of belief bias: Searching for a generaliz- Newstead, S. E., Handley, S. J., Harley, C., Wright, H., & able critical thinking skill. Journal of Educational Psychology, Farrelly, D. (2004). Individual diff erences in deductive rea- 91, 497–510. soning. Quarterly Journal of Experimental Psychology, 57A, Salthouse, T. A., Atkinson, T. M., & Berish, D. E. (2003). 33–60. Executive functioning as a potential mediator of age-related Norman, D. A., & Shallice, T. (1986). Attention to action: cognitive decline in normal adults. Journal of Experimental Willed and automatic control of behavior. In R. J. Davidson, Psychology: General, 132, 566–594. G. E. Schwartz, & D. Shapiro (Eds.), Consciousness and self- Salthouse, T. A., & Pink, J. E. (2008). Why is working memory regulation (pp. 1–18). New York: Plenum. related to fl uid intelligence? Psychonomic Bulletin and Review, Norris, S. P., & Ennis, R. H. (1989). Evaluating critical thinking. 15, 364–371. Pacifi c Grove, CA: Midwest Publications. Samuels, R. (2005). Th e complexity of cognition: Tractability Oaksford, M., & Chater, N. (1998). An introduction to rational arguments for massive modularity. In P. Carruthers, S. models of cognition. In M. Oaksford & N. Chater (Eds.), Laurence, & S. Stich (Eds.), Th e innate mind (pp. 107–121). Rational models of cognition (pp. 1–18). New York: Oxford Oxford, England: Oxford University Press. University Press. Samuels, R. (2009). Th e magical number two, plus or minus: Oaksford, M., & Chater, N. (2007). Bayesian rationality: Th e Dual-process theory as a theory of cognitive kinds. In J. S. probabilistic approach to human reasoning. Oxford, England: B. T. Evans & K. Frankish (Eds.), In two minds: Dual pro- Oxford University Press. cesses and beyond (pp. 129–146). Oxford, England: Oxford Oechssler, J., Roider, A., & Schmitz, P. W. (2009). Cognitive University Press. abilities and behavioral biases. Journal of Economic Behavior Samuels, R., & Stich, S. P. (2004). Rationality and psychology. and Organization, 72, 147–152. In A. R. Mele & P. Rawling (Eds.), Th e Oxford handbook Over, D. E. (2000). Ecological rationality and its heuristics. of rationality (pp. 279–300). Oxford, England: Oxford Th inking and Reasoning, 6, 182–192. University Press. Over, D. E. (2004). Rationality and the normative/descriptive Savage, L. J. (1954). Th e foundations of statistics. New York: distinction. In D. J. Koehler & N. Harvey (Eds.), Blackwell Wiley.


Schelling, T. C. (1984). Choice and consequence: Perspectives of Stanovich, K. E., & West, R. F. (1997). Reasoning indepen- an errant economist. Cambridge, MA: Harvard University dently of prior belief and individual diff erences in actively Press. open-minded thinking. Journal of Educational Psychology, 89, Schneider, W., & Chein, J. (2003). Controlled and automatic 342–357. processing: Behavior, theory, and biological processing. Stanovich, K. E., & West, R. F. (1998a). Cognitive ability Cognitive Science, 27, 525–559. and variation in selection task performance. Th inking and Schommer-Aikins, M. (2004). Explaining the epistemological Reasoning, 4, 193–230. belief system: Introducing the embedded systemic model Stanovich, K. E., & West, R. F. (1998b). Individual diff erences and coordinated research approach. Educational Psychologist, in framing and conjunction eff ects. Th inking and Reasoning, 39, 19–30. 4, 289–317. Shafi r, E., & LeBoeuf, R. A. (2002). Rationality. Annual Review Stanovich, K. E., & West, R. F. (1998c). Individual diff erences in of Psychology, 53, 491–517. rational thought. Journal of Experimental Psychology: General, Shiff rin, R. M., & Schneider, W. (1977). Controlled and auto- 127, 161–188. matic human information processing: II. Perceptual learn- Stanovich, K. E., & West, R. F. (1998d). Who uses base rates and ing, automatic attending, and a general theory. Psychological P(D/~H)? An analysis of individual diff erences. Memory and Review, 84, 127–190. Cognition, 26, 161–179. Sloman, S. A. (1996). Th e empirical case for two systems of rea- Stanovich, K. E., & West, R. F. (1999). Discrepancies between soning. Psychological Bulletin, 119, 3–22. normative and descriptive models of decision making and Sloman, S. A. (2002). Two systems of reasoning. In T. Gilovich, the understanding/acceptance principle. Cognitive Psychology, D. Griffi n, & D. Kahneman (Eds.), Heuristics and biases: Th e 38, 349–385. psychology of intuitive judgment (pp. 379–396). New York: Stanovich, K. E., & West, R. F. (2000). Individual diff erences in Cambridge University Press. reasoning: Implications for the rationality debate? Behavioral Slovic, P., & Peters, E. (2006). Risk perception and aff ect. Current and Brain Sciences, 23, 645–726. Directions in Psychological Science, 15, 322–325. Stanovich, K. E., & West, R. F. (2003). Evolutionary versus Smith, E. R., & DeCoster, J. (2000). Dual-process models in instrumental goals: How evolutionary psychology miscon- social and cognitive psychology: Conceptual integration and ceives human rationality. In D. E. Over (Ed.), Evolution and links to underlying memory systems. Personality and Social the psychology of thinking: Th e debate (pp. 171–230). Hove, Psychology Review, 4, 108–131. England and New York: Psychology Press. Sperber, D. (1994). Th e modularity of thought and the epidemi- Stanovich, K. E., & West, R. F. (2007). Natural myside bias is ology of representations. In L. A. Hirschfeld & S. A. Gelman independent of cognitive ability. Th inking and Reasoning, 13, (Eds.), Mapping the mind: Domain specifi city in cognition 225–247. and culture (pp. 39–67). Cambridge, England: Cambridge Stanovich, K. E., & West, R. F. (2008a). On the failure of intel- University Press. ligence to predict myside bias and one-sided bias. Th inking Stanovich, K. E. (1999). Who is rational? Studies of individual and Reasoning, 14, 129–167. diff erences in reasoning. Mahwah, NJ: Erlbaum. Stanovich, K. E., & West, R. F. (2008b). On the relative inde- Stanovich, K. E. 
(2001). in the study of intelli- pendence of thinking biases and cognitive ability. Journal of gence: Review of “Looking Down on Human Intelligence” Personality and Social Psychology, 94, 672–695. by Ian Deary. Trends in Cognitive Sciences, 5(2), 91–92. Stein, E. (1996). Without good reason: Th e rationality debate in Stanovich, K. E. (2002). Rationality, intelligence, and levels of philosophy and cognitive science. Oxford, England: Oxford analysis in cognitive science: Is dysrationalia possible? In University Press. R. J. Sternberg (Ed.), Why smart people can be so stupid (pp. Sternberg, R. J. (2003). Wisdom, intelligence, and creativity 124–158). New Haven, CT: Yale University Press. synthesized. Cambridge, England: Cambridge University Stanovich, K. E. (2003). Th e fundamental computational biases Press. of human cognition: Heuristics that (sometimes) impair Sternberg, R. J., & Grigorenko, E. L. (1997). Are cognitive styles decision making and problem solving. In J. E. Davidson still in style? American Psychologist, 52, 700–712. & R. J. Sternberg (Eds.), Th e psychology of problem solving Sternberg, R. J., Grigorenko, E. L., & Zhang, L. (2008). (pp. 291–342). New York: Cambridge University Press. Styles of learning and thinking matter in instruction and Stanovich, K. E. (2004). Th e robot’s rebellion: Finding meaning assessment. Perspectives on Psychological Science, 3, 486– in the age of Darwin. Chicago, IL: University of Chicago 506. Press. Stich, S. P. (1990). Th e fragmentation of reason. Cambridge, MA: Stanovich, K. E. (2009a). Distinguishing the refl ective, algo- MIT Press. rithmic, and autonomous minds: Is it time for a tri-process Strack, F., & Deutsch, R. (2004). Refl ective and impulsive deter- theory? In J. Evans & K. Frankish (Eds.), In two minds: Dual minants of social behavior. Personality and Social Psychology processes and beyond (pp. 55–88). Oxford, England: Oxford Review, 8, 220–247. University Press. Strathman, A., Gleicher, F., Boninger, D. S., & Scott Edwards, C. Stanovich, K. E. (2009b). What intelligence tests miss: Th e psy- (1994). Th e consideration of future consequences: Weighing chology of rational thought. New Haven, CT: Yale University immediate and distant outcomes of behavior. Journal of Press. Personality and Social Psychology, 66, 742–752. Stanovich, K. E. (2010). Decision making and rationality in the Taub, G. E., & McGrew, K. S. (2004). A confi rmatory factor modern world. New York: Oxford University Press. analysis of Cattell-Horn-Carroll theory and cross-age invari- Stanovich, K. E. (2011). Rationality and the refl ective mind. ance of the Woodcock-Johnson Tests of Cognitive Abilities New York: Oxford University Press. III. School Psychology Quarterly, 19, 72–87.


Tetlock, P. E., & Mellers, B. A. (2002). Th e great rationality von Neumann, J., & Morgenstern, O. (1944). Th e theory of games debate. Psychological Science, 13, 94–99. and economic behavior. Princeton, NJ: Princeton University Th agard, P., & Nisbett, R. E. (1983). Rationality and charity. Press. , 50, 250–267. Vranas, P. B. M. (2000). Gigerenzer’s normative critique of Th aler, R. H., & Shefrin, H. M. (1981). An economic theory of Kahneman and Tversky. Cognition, 76, 179–193. self-control. Journal of Political Economy, 89, 392–406. Wang, L. (2009). Money and fame: Vividness eff ects in the Th aler, R. H., & Sunstein, C. R. (2008). Nudge: Improving deci- National Basketball Association. Journal of Behavioral sions about health, wealth, and happiness. New Haven, CT: Decision Making, 22, 20–44. Yale University Press. Wason, P. C., & Evans, J. St. B. T. (1975). Dual processes in Toates, F. (2005). Evolutionary psychology: Towards a more reasoning? Cognition, 3, 141–154. integrative model. Biology and Philosophy, 20, 305–328. West, R. F., Toplak, M. E., & Stanovich, K. E. (2008). Heuristics Toates, F. (2006). A model of the hierarchy of behavior, cog- and biases as measures of critical thinking: Associations nition, and consciousness. Consciousness and Cognition, 15, with cognitive ability and thinking dispositions. Journal of 75–118. Educational Psychology, 100, 930–941. Toplak, M., Liu, E., Macpherson, R., Toneatto, T., & Stanovich, Wilson, T. D. (2002). Strangers to ourselves. Cambridge, MA: K. E. (2007). Th e reasoning skills and thinking dispositions Harvard University Press. of problem gamblers: A dual-process taxonomy. Journal of Wilson, T. D., & Gilbert, D. T. (2005). Aff ective forecasting: Behavioral Decision Making, 20, 103–124. Knowing what to want. Current Directions in Psychological Toplak, M. E., & Stanovich, K. E. (2002). Th e domain speci- Science, 14, 131–134. fi city and generality of disjunctive reasoning: Searching for Wu, G., Zhang, J., & Gonzalez, R. (2004). Decision under risk. a generalizable critical thinking skill. Journal of Educational In D. J. Koehler & N. Harvey (Eds.), Blackwell handbook of Psychology, 94, 197–209. judgment and decision making (pp. 399–423). Malden, MA: Toplak, M. E. & Stanovich, K. E. (2003). Associations between Blackwell Publishing. myside bias on an informal reasoning task and amount of Zeidner, M., & Matthews, G. (2000). Intelligence and per- post-secondary education. Applied Cognitive Psychology, 17, sonality. In R. J. Sternberg (Ed.), Handbook of intelligence 851–860. (pp. 581–610). New York: Cambridge University Press. Tversky, A., & Kahneman, D. (1974). Judgment under uncer- Zelazo, P. D. (2004). Th e development of conscious control in tainty: Heuristics and biases. Science, 185, 1124–1131. childhood. Trends in Cognitive Sciences, 8, 12–17.
