Chapter 13: Judgement and decision making

Judgement researchers address the question “How do people integrate multiple, incomplete, and sometimes conflicting cues to infer what is happening in the external world?” (Hastie, 2001). In contrast, decision making involves choosing among various options. Decision-making researchers address the question “How do people choose what action to take to achieve labile [changeable], sometimes conflicting goals in an uncertain world?” (Hastie, 2001).

There are close relationships between the areas of judgement and decision making. More specifically, decision-making research covers all of the processes involved in deciding on a course of action. In contrast, judgement research focuses mainly on those aspects of decision making concerned with estimating the likelihood of various events. In addition, judgements are evaluated in terms of their accuracy, whereas decisions are evaluated on the basis of their consequences.

Judgement research

We often change our opinion of the likelihood of something in the light of new information. Reverend Thomas Bayes provided a more precise way of thinking about this. Bayes’ theorem combines the relative probabilities of two hypotheses before the data are obtained (the prior odds) with the relative probabilities of obtaining the observed data under each hypothesis (the likelihood ratio) to give the relative probabilities of the hypotheses after the data (the posterior odds).

Bayes’ theorem can be written in odds form:

posterior odds = likelihood ratio × prior odds

that is, p(H_A | D) / p(H_B | D) = [p(D | H_A) / p(D | H_B)] × [p(H_A) / p(H_B)], where H_A and H_B are the two hypotheses and D is the observed data.

Kahneman and Tversky (1972) illustrated this with their taxi-cab problem.
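A worked version of the problem (using the commonly cited figures): 85% of the city’s cabs are Green and 15% are Blue; a witness identifies the cab involved in an accident as Blue; and tests show the witness is correct 80% of the time. Applying the odds form of Bayes’ theorem: prior odds = 0.15/0.85, likelihood ratio = 0.80/0.20 = 4, so posterior odds = 4 × (0.15/0.85) ≈ 0.71, which gives p(Blue | witness says “Blue”) = 0.12/0.29 ≈ 0.41. Despite the witness’s testimony, the cab is still more likely to be Green, because the base rate favours Green so strongly.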

INTERACTIVE EXERCISE: Taxi-cab problem

Evidence indicates that people often take less account of the prior odds (base-rate information) than Bayes’ theorem requires. Base-rate information was defined by Koehler (1996) as “the relative frequency with which an event occurs or an attribute is present in the population”. Kahneman and Tversky (1973) found evidence that people fail to take account of base-rate information.

CASE STUDY: Koehler (1996): the base rate reconsidered

Kahneman and Tversky argued that we rely on simple heuristics, or rules of thumb, because they are cognitively undemanding. They described the representativeness heuristic, in which events that are representative or typical of a class are assigned a high probability of occurrence. Kahneman and Tversky (1973) found participants often neglected base-rate information in favour of the representativeness heuristic.

The conjunction fallacy is the mistaken belief that the conjunction or combination of two events is more likely than one of the two events alone. It seems to involve the representativeness heuristic. Tversky and Kahneman (1983) studied it using the Linda problem: given a description of Linda, most participants ranked “feminist bank teller” as more probable than “bank teller”, which is impossible. Many people misinterpret the statement, “Linda is a bank teller”, as implying she is not active in the feminist movement (Manktelow, 2012). However, the conjunction fallacy is still found even when almost everything possible is done to ensure participants interpret the problem correctly (Sides et al., 2002).
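The conjunction rule that makes this ranking an error can be stated in one line: for any two events A and B, p(A and B) = p(A) × p(B | A), and since p(B | A) cannot exceed 1, p(A and B) can never exceed p(A). Whatever Linda is like, “feminist bank teller” cannot be more probable than “bank teller”.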

Base-rate information is sometimes both relevant and generally used. Krynski and Tenenbaum (2007) argued that we possess valuable causal knowledge that allows us to make accurate judgements using base-rate information in everyday life. In the laboratory, however, the judgement problems we confront often fail to provide such knowledge. When a problem is reformulated so that the base-rate information has a clear causal role, the reasonably full causal knowledge available to participants allows them to solve it.

Tversky and Kahneman (1974) studied the availability heuristic, which involves estimating the frequencies of events on the basis of how easy or difficult it is to retrieve relevant information from long-term memory. Lichtenstein et al. (1978) found causes of death attracting publicity (e.g., murder) were judged to be more likely than, for example, suicide, when the opposite is actually the case. Pachur et al. (2012a) suggested Lichtenstein et al.’s results may have been due to the affect heuristic, in which judgements are based on the feeling of dread. However, Oppenheimer (2004) provided convincing evidence that we do not always use the availability heuristic.

Work on heuristics has several limitations despite the experimental evidence for error-prone heuristic use. Tversky and Kahneman’s (1974) one-word definitions are vague and fail to make testable predictions. Theorising in this field has been limited (Fiedler & von Sydow, 2015). Error-prone judgements may reflect the information available to the judge rather than use of an incorrect heuristic. Finally, much of the research severely lacks ecological validity.

Judgement theories

Support theory was proposed by Tversky and Koehler (1994), based in part on the availability heuristic. The key assumption is that any given event will appear more or less likely depending on how it is described. A more explicit description of an event will typically be regarded as having a greater subjective probability because it:
• draws attention to less obvious aspects of the event;
• overcomes memory limitations.

Mandel (2005) found the overall estimated probability of a terrorist attack was greater when participants were presented with explicit possibilities than when they were not. Redelmeier et al. (1995) found this phenomenon in experts as well as non-experts.

Sloman et al. (2004) obtained findings directly opposite to those predicted by support theory: an explicit description can reduce subjective probability if it leads us to focus on low-probability causes. Redden and Frederick (2011) argued that providing an explicit description can also reduce subjective probability by making the event more effortful to comprehend. Support theory is oversimplified and cannot account for these findings.

Gigerenzer and Gaissmaier (2011) argued that heuristics are often very valuable. They focused on fast and frugal heuristics that involve rapid processing of relatively little information. One of the key fast and frugal heuristics is the take-the-best strategy, which has three components:
• Search rule – search cues in order of validity.
• Stopping rule – stop when a cue discriminates between the options.
• Decision rule – choose the option favoured by that cue.

The most researched example is the recognition heuristic: if one of two objects is recognised and the other is not, we infer that the recognised object has the higher value with respect to the criterion (Goldstein & Gigerenzer, 2002). Kruglanski and Gigerenzer (2011) argued that there is a two-step process in deciding which heuristic to use:
• First, the nature of the task and individual memory limit the number of available heuristics.
• Second, people select one of them based on the likely outcome of using it and its processing demands.
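A minimal sketch of the recognition heuristic and the take-the-best strategy, assuming hypothetical cue data (the cities, cue names and validities below are illustrative, not taken from the original studies):

# Illustrative sketch of two fast and frugal heuristics.
# All cue data below are hypothetical.

def recognition_heuristic(a, b, recognised):
    """If exactly one object is recognised, infer that it has the
    higher criterion value; otherwise the heuristic does not apply."""
    if (a in recognised) != (b in recognised):
        return a if a in recognised else b
    return None  # cannot discriminate

def take_the_best(a, b, cues):
    """Search cues in order of validity, stop at the first cue that
    discriminates, and choose the object that cue favours."""
    for cue in sorted(cues, key=lambda c: c["validity"], reverse=True):
        va, vb = cue["values"][a], cue["values"][b]
        if va != vb:               # stopping rule
            return a if va else b  # decision rule
    return None  # guess if no cue discriminates

# Hypothetical example: which city has the larger population?
recognised = {"Munich"}
cues = [
    {"name": "is_state_capital", "validity": 0.8,
     "values": {"Munich": True, "Herne": False}},
    {"name": "has_major_airport", "validity": 0.7,
     "values": {"Munich": True, "Herne": False}},
]
print(recognition_heuristic("Munich", "Herne", recognised))  # Munich
print(take_the_best("Munich", "Herne", cues))                # Munich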

WEBLINK: Todd and Gigerenzer RESEARCH ACTIVITIES 1 & 2: Smart heuristics

Goldstein and Gigerenzer (2002) carried out several experiments on the recognition heuristic and found usage rates of up to 90%. They also found American and German students performed less well when tested on cities in their own country than on cities in another country: they typically recognised both cities in a pair from their own country and so could not use the recognition heuristic. Pachur et al. (2012a) found in a meta-analysis a correlation of +0.64 between usage of the recognition heuristic and its validity, indicating that people tend to use the heuristic when it is likely to be accurate.

Richter and Späth (2006) found the recognition heuristic was often not used when participants had access to inconsistent information. Nevertheless, heuristics sometimes outperform judgements based on much more complex calculations. For example, Wübben and von Wangenheim (2008) considered how managers of clothes shops decide whether customers are active (i.e., likely to buy again) or inactive. The hiatus heuristic is a very simple strategy: only customers who have purchased fairly recently are deemed to be active.
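A sketch of the hiatus heuristic follows; the nine-month cutoff is illustrative, and in practice the appropriate hiatus would depend on the industry:

from datetime import date, timedelta

def is_active(last_purchase: date, today: date,
              hiatus_months: int = 9) -> bool:
    """Hiatus heuristic: a customer counts as active if their last
    purchase falls within the hiatus window (cutoff is illustrative)."""
    return today - last_purchase <= timedelta(days=30 * hiatus_months)

print(is_active(date(2024, 1, 10), date(2024, 6, 1)))  # True
print(is_active(date(2022, 1, 10), date(2024, 6, 1)))  # False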

There is accumulating evidence that the take-the-best strategy is used less often than is sometimes assumed. Newell et al. (2003) concluded that the take-the-best strategy was least likely to be used when the cost of obtaining information was low and the validities of the cues were unknown. Dieckmann and Rieskamp (2007) focused on information redundancy – simple strategies work best (and are more likely to be used) when environmental information is redundant.

There is good evidence that people often use fast and frugal heuristics, particularly when under time or cognitive pressure, and that these heuristics are fast and effective. However, the approach has several limitations:
• Too much emphasis has been placed on intuition, given that humans have a large capacity for logical reasoning (Evans & Over, 2010).
• Use of the recognition heuristic is more complex than assumed: people generally also consider why they recognise an object and only then decide whether to rely on recognition (Newell, 2011). Other heuristic use is also more complex than claimed.
• Far too little attention has been paid to the importance of the decision that has to be made.

Natural frequency hypothesis Gigerenzer and Hoffrage (1995) provided an influential theoretical approach to account for better results with frequency data than with percentages. The approach relies on the notion of natural sampling – the process of encountering instances in a population sequentially. Natural sampling happens in everyday life and may be the evolutionary basis for the human facility with frequencies. In most word problems, however, participants are simply provided with frequency information and do not have to grapple with the complexities of natural sampling.
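To illustrate the difference between formats (the figures are the commonly used mammography example, rounded): a probability format states that 1% of women have breast cancer, that the test detects it 80% of the time, and that it gives a false positive 9.6% of the time. The same information in natural frequencies: of 1,000 women, 10 have cancer; 8 of those 10 test positive; and about 95 of the 990 without cancer also test positive. The probability that a woman who tests positive actually has cancer can then simply be read off: 8 / (8 + 95) ≈ 8%.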

Judgement performance is better when problems are presented in the form of frequencies rather than probabilities or percentages. Fiedler (1988) used the Linda problem in standard and frequency versions and found performance was better in the frequency version. Hoffrage et al. (2000) found medical students paid little attention to base-rate information in the probability version of a problem, but performed better when given the frequency version. Fiedler et al. (2000) addressed the issue of how effectively people sample events: participants failed to take base rates into account even when allowed to make their own sampling choices.

It makes sense to argue that natural or objective sampling could enhance the accuracy of many of our judgements, and judgements based on frequency information are often superior. However, the natural sampling approach has limitations. The frequency version of a problem is generally no easier than the probability version when steps are taken to avoid making the underlying problem structure much more obvious in the former. There is often a yawning chasm between people’s actual sampling behaviour and the neat-and-tidy frequency data provided in laboratory experiments (e.g., Fiedler et al., 2000); indeed, truly natural sampling may not be possible at all (Fiedler, 2008). Finally, people sometimes find it harder to think in frequencies than in probabilities.

Dual-process theory Heuristics are extensively used because of their advantages in speed, robustness and reduced cognitive effort. However, complex cognitive processes are also sometimes used. Kahneman (2003) and De Neys (2012) proposed a dual-process model, according to which probability judgements depend on processing within two systems:
• System 1: an intuitive, automatic and immediate system (e.g., heuristics).
• System 2: an analytical, controlled and rule-governed system.

According to the model, System 1 rapidly generates intuitive answers that are monitored or evaluated by System 2.

Since System 2 is more cognitively demanding, processing in this route takes more time. De Neys (2006) found participants who used System 2 to solve the Linda problem took 40% longer than those who used only System 1. De Neys and Glumicic (2008) found participants took longer to produce answers to incongruent problems in which System 1 and 2 processes would produce different answers.

Pennycook and Thompson (2012) found evidence that, for some problems, neither system is used. Cognitive load led to increased use of whatever information was presented briefly, regardless of whether that information referred to base rates or personality. Thus, usage of System 1 and System 2 processing depends partly on how information is presented (Chun & Kruglanski, 2006).

There is evidence supporting the existence of two different processing systems, and the model also explains individual differences in judgement performance. However, the model has limitations:
• it is based on the assumption that most people rely almost exclusively on System 1;
• errors can also occur in System 2;
• many of its assumptions have been shown to be incorrect or oversimplified.

In making estimates, people often fail to take full account of base-rate information, and this is true even of experts. Base-rate information is more likely to be used if its causal relevance is clear. People often adopt biased sampling strategies, and are inaccurate even when using frequency data. One reason why people fail to make proper use of base-rate information is their reliance on the representativeness heuristic. Another commonly used rule of thumb is the availability heuristic. According to support theory, the subjective probability of an event increases as the description of the event becomes more explicit and detailed. The take-the-best and recognition heuristics are very simple rules of thumb that are often surprisingly accurate but are used less often than predicted theoretically. The dual-process model is a relatively successful account that explains individual differences in judgement performance.

Decision making under risk

Von Neumann and Morgenstern (1944) developed expected utility theory. They suggested we try to maximise utility (the subjective value we attach to an outcome). When we choose between simple options, we assess the expected utility or expected value of each one via the following formula:

Expected utility = (probability of a given outcome) × (utility of the outcome)
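For example (a minimal illustration with figures of our own), a gamble offering a 50% chance of winning £200 and a 50% chance of losing £100 has an expected value of (0.5 × £200) + (0.5 × −£100) = +£50, so on this account a rational decision maker should accept it.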

Von Neumann and Morgenstern (1944) made an important contribution by treating decisions as gambles. This approach was subsequently coupled with Savage’s (1954) mathematical approach, which led to subjective expected utility theory.

Tversky and Shafir (1992) and Kahneman and Tversky (1984) found that two-thirds of their participants refused to bet on a coin toss of just this kind, and that a majority preferred choices with the smaller expected gain and the larger expected loss! This suggests people’s decision making is often irrational.

Prospect theory We can distinguish between risky and risk-free decision making. Kahneman and Tversky (1979) developed an approach to risky decision making known as prospect theory. Two main assumptions of this theory are:
1. Individuals identify a reference point representing their current state.
2. Individuals are more sensitive to potential losses than to potential gains; this is loss aversion.
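A sketch of the value function at the heart of prospect theory. The parameter values (alpha = beta = 0.88, lambda = 2.25) are the commonly cited later estimates from Tversky and Kahneman’s work, used here for illustration rather than being part of the 1979 theory:

# Illustrative prospect-theory value function.
# Parameter values are commonly cited estimates, treated here as assumptions.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to the reference
    point: concave for gains, steeper (scaled by lam) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

print(value(100))   # ~57.5  : subjective value of a £100 gain
print(value(-100))  # ~-129.5: the same-sized loss looms much larger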

WEBLINK: Prospect theory

One of the main lines of research on prospect theory has involved the framing effect, in which decisions are influenced by irrelevant aspects of the situation. Tversky and Kahneman (1981) used the Asian disease problem to study this: participants chose between programmes to combat a disease expected to kill 600 people, with the same outcomes framed either in terms of lives saved or in terms of lives lost, and they were risk-averse in the gain frame but risk-seeking in the loss frame. According to prospect theory, framing effects should only be found when what is at stake has real value for the decision maker. However, when all the programmes were completely described (or all incompletely described), the framing effect disappeared (Mandel & Vartanian, 2011). Wang (1996) showed that social and moral factors not considered by prospect theory can influence performance on the Asian disease problem: participants’ concern about fairness was greater in a small-group than in a large-group context, and decisions were influenced by relationships between the participants, the social context and psychological factors.

The sunk-cost effect is “a tendency for people to pursue a course of action even after it has proved to be suboptimal, because resources have been invested in that course of action” (Braverman & Blumenthal-Barby, 2012, p. 186); in other words, additional resources are expended to justify some previous commitment. This may be partly due to the importance of justifying one’s actions. Baliga and Ely (2011) argued that individuals with complete memory of all the relevant information should be immune from the effect, because full information would prevent them from focusing excessively on the costs incurred so far.

Prospect theory predicts that people overweight the probability of rare events. Hertwig et al. (2004) compared decision making based on descriptions with decision making based on experience. People overweighted the probability of rare events when decisions were based on descriptions but underweighted it when decisions were based on experience. Hilbig and Glöckner (2011) argued that appropriate weighting of rare events can be produced by rapidly presenting participants with a large amount of detailed information (open sampling).

Suppose you were given the following choice:
1. 50% chance to win £1 (1.20 euros); 50% chance to lose £1 (1.20 euros).
2. 50% chance to win £5 (6 euros); 50% chance to lose £5 (6 euros).
According to prospect theory, people are loss averse and so should choose (1) because it reduces potential losses. In fact, however, the typical finding across several studies using similar choices is loss neutrality – overall, participants do not favour one option over the other unless the stakes are high (Yechiam & Hochman, 2013). This consistent finding is contrary to prospect theory.

Yechiam and Hochman (2013) put forward an attentional model. They argued:
1. Losses produce a brief increase in arousal.
2. This directs attention to on-task events and makes the individual more sensitive to task reinforcements (losses and gains).
3. These processes lead to loss neutrality when information about potential gains and losses is presented at the same time.
4. In contrast, loss aversion is typically found on tasks where an individual’s attention is directed to gains or losses rather than both together.

Most participants are risk averse, especially when the stakes are high (Brooks et al., 2009). Personality also influences individuals’ attitudes to risk: those high in self-esteem are more likely to prefer risky gambles than those low in self-esteem (Josephs et al., 1992), and Foster et al. (2011) found individuals high in narcissism engaged in riskier stock-market investing.

Prospect theory provides a more adequate account of decision making than previous approaches. The value function allows us to explain many phenomena – loss aversion, the sunk-cost effect and framing effects. However, prospect theory has various limitations:
• It lacks a detailed explicit rationale for the value function.
• It is oversimplified.
• The predicted overweighting of rare events sometimes fails to materialise.
• Loss aversion occurs less often than predicted by the theory.
• Individual differences and social and emotional factors are underexplored in the model.

WEBLINK: Thinking of making a risky decision? CASE STUDY: Full-fat milk?

According to prospect theory, people are typically much more sensitive to potential losses than to potential gains. As a result, they are much more willing to take risks to avoid losses. An example of this is the sunk-cost effect. There is support for prospect theory in the framing effect, in which decisions are influenced by irrelevant aspects of the situation.

Decision making: social and emotional factors

Emotional factors are important in decision making, given that winning and losing both have emotional consequences. Much research lies within neuroeconomics, in which cognitive neuroscience is used to increase understanding of decision making in the economic environment.

Kermer et al. (2006) found people often exaggerate how negative the impact of a loss is likely to be. This overestimation of the intensity and duration of negative emotional reactions to loss is known as the impact bias and has been found with predictions about losses such as losing one’s job or a romantic partner (Kermer et al., 2006). Schlösser et al. (2013) found the riskiness of decision making was predicted by both anticipated and immediate emotions. Giorgetta et al. (2013) found wins were experienced as rejoicing (after personal choices) or as elation (after computer choices), with elation being followed by riskier choices.

Weller et al. (2007) found patients with damage to the ventromedial prefrontal cortex had elevated risk-seeking behaviour with respect to both potential gains and losses. Patients with amygdala damage also showed elevated risk-seeking behaviour with potential gains but not potential losses. Sokol-Hessner et al. (2013) studied the effects of emotion on risky financial decision making. When participants were instructed to engage in emotion regulation (designed to reduce their emotional involvement in the task), they showed reduced loss aversion.

Ritov and Baron (1990) conducted a study demonstrating omission bias, which occurs when individuals prefer inaction to action due to emotional factors. In a similar study, British parents were asked questions about having their children vaccinated against various diseases (Brown et al., 2010): they were willing to accept a higher risk of their children catching a disease than of their children suffering adverse reactions to vaccination.

The status quo bias is when individuals repeat an initial choice over a series of decision situations, in spite of changes in their preferences. Nicolle et al. (2011) found a status quo bias on a difficult perceptual decision task.

Anderson (2003) proposed a rational-emotional model to account for the impact of emotions on decision making. Decision making is determined by rational factors based on inferences and outcome information, as well as experienced and anticipated emotion.

Social factors Tetlock (2002) emphasised the importance of social factors in his social functionalist approach. For example, people often behave like intuitive politicians in that they need to be able to justify their decisions to others.

Simonson and Staw (1992) found the tendency towards a sunk-cost effect was strongest in their high-accountability condition. However, Schwartz et al. (2004) found doctors showed a decision-making bias regardless of whether they were made accountable.

Decision making is sometimes influenced by social, emotional and moral considerations that are not included in prospect theory, as in omission bias. It is not clear within prospect theory why people are more sensitive to losses than to gains, nor why some people are more sensitive than others to losses. According to Tetlock’s social functionalist approach, decision making is strongly influenced by the social context. More specifically, when making decisions we often focus on accountability and on the need to justify ourselves to others.

Complex decision making

In life we are often confronted by very complex problems. There are two important differences between decision making in the laboratory and in the real world:
• Decision making generally has much more serious consequences in the real world.
• Laboratory-based decision makers typically make a single decision, whereas real-world decision making often involves a series of interrelated decisions.

According to multi-attribute utility theory (Wright, 1984), decision makers should go through the following stages:
• Identify attributes relevant to the decision.
• Decide how to weight those attributes.
• List all options under consideration.
• Rate each option on each attribute.
• Obtain a total utility (i.e., subjective desirability) for each option by summing its weighted attribute ratings, and select the option with the highest total.
A minimal sketch of this procedure is shown below.
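Assuming hypothetical attributes, weights and ratings (choosing between two flats, say), the calculation looks like this:

# Multi-attribute utility: weighted sum of attribute ratings.
# Attributes, weights and ratings are hypothetical.

weights = {"rent": 0.5, "location": 0.3, "size": 0.2}

options = {
    "flat_a": {"rent": 7, "location": 9, "size": 4},
    "flat_b": {"rent": 9, "location": 5, "size": 6},
}

def total_utility(ratings, weights):
    """Sum each attribute rating multiplied by that attribute's weight."""
    return sum(weights[attr] * r for attr, r in ratings.items())

for name, ratings in options.items():
    print(name, total_utility(ratings, weights))  # approx. 7.0 and 7.2

best = max(options, key=lambda o: total_utility(options[o], weights))
print(best)  # flat_b – the option with the highest weighted total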

In practice, people rarely adopt this approach because the procedure is complex and the set of relevant dimensions is not always clear or separate.

Simon (1957) proposed a more realistic approach to complex decision making. He distinguished between unbounded rationality and bounded rationality. In essence, bounded rationality means we are as rational as the constraints of the environment and the mind permit. In practical terms, bounded rationality leads to satisficing – choosing the first option that meets one’s minimum requirements rather than searching for the best possible option. Schwartz et al. (2002) distinguished between individuals who are satisficers and those who are maximisers (perfectionists). They found advantages to being a satisficer, such as being happier and more optimistic, with less regret and self-blame.

WEBLINK: Bounded rationality: a response to rational analysis

Kaplan et al. (2011) modified Tversky’s (1972) elimination-by-aspects theory, in which options are eliminated one attribute at a time whenever they fail to meet a minimum criterion on that attribute. In their two-stage theory, there is an initial stage resembling elimination by aspects, which reduces the options being considered to a manageable number. In the second stage, there are detailed comparisons of the patterns of attributes of the retained options.
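A sketch of the elimination stage, continuing the hypothetical flat-choice example (attribute order and cutoffs are illustrative):

# Elimination by aspects: consider attributes one at a time (most
# important first) and drop any option below the cutoff on that attribute.

minimums = [("rent", 6), ("location", 5)]  # hypothetical order and cutoffs

candidates = {
    "flat_a": {"rent": 7, "location": 9, "size": 4},
    "flat_b": {"rent": 9, "location": 4, "size": 6},
    "flat_c": {"rent": 5, "location": 8, "size": 8},
}

for attr, cutoff in minimums:
    candidates = {name: ratings for name, ratings in candidates.items()
                  if ratings[attr] >= cutoff}

print(list(candidates))  # ['flat_a'] – only flat_a survives both cuts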

Payne (1976) found participants used both simple and complex strategies in decision making. When there were many options, they used simple strategies such as satisficing or elimination by aspects; when fewer options remained, they often switched to a more complex strategy. Similar findings were obtained by Lenton and Stewart (2008) when single women made selections from a real dating website offering 4, 24 or 64 potential mates: unsurprisingly, the women shifted from complex to simple strategies as the number of potential mates increased.

Most theories assume a given individual’s assessment of the utility or preference (desirability × importance) of any given attribute remains constant. Simon et al. (2004) tested this assumption and found it to be incorrect. In a study by Svenson et al. (2009), advanced nursing students prioritised a male or a female patient for surgery because there were only sufficient resources for one operation. After they had made their decision, their memory for the objective facts (e.g., life expectancy without surgery, probability of surviving surgery) was distorted to increase the apparent support for that decision.

An important factor in poor decision making is selective exposure – the tendency to prefer information consistent with one’s beliefs over inconsistent information. Fischer and Greitemeyer (2010) proposed a model according to which increased selective exposure is predicted when defence motivation (the motivation to defend one’s existing position) is high.

In view of the artificiality of much laboratory research, there has been much interest in naturalistic decision making. Galotti (2002) put forward a theory of naturalistic decision making involving five phases:
• setting goals;
• gathering information;
• structuring the decision (i.e., listing options and the criteria for deciding among them);
• making a final choice;
• evaluating the decision.

Galotti (2007) found people consistently limited the amount of information (options and attributes) considered.

There are marked individual differences in the strategies used in naturalistic decision making. For example, Crossley and Highhouse (2005) studied the approaches taken by individuals searching for and choosing a job. They identified three distinct information-search strategies:
• Focused search – a focus on a small number of carefully selected potential employers.
• Exploratory search – taking into account several job options and making use of several information sources (e.g., friends, employment centres).
• Haphazard search – a rather non-strategic approach resembling trial and error.

An influential approach to understanding expert decision making is the recognition-primed decision model put forward by Klein (e.g., 1998, 2008). When a situation is perceived as familiar or typical, experts automatically match it to learned patterns of information; otherwise, effortful diagnostic thinking occurs. Klein (1998) reported evidence supporting this theory: experts typically rapidly categorised even a novel situation as an example of a familiar type of situation, and then simply retrieved the appropriate decision from long-term memory.

INTERACTIVE EXERCISE: Galotti (2002)

Unconscious thought theory It is often assumed that conscious thinking is more effective for complex decision making, whereas unconscious thinking is effective for simple decision making. However, Dijksterhuis and Nordgren (2006) argued for the opposite:
• Conscious thought is more limited in capacity than unconscious thought.
• Unconscious thought naturally weights the relative importance of the various attributes, whereas conscious thought often leads to suboptimal weighting.

According to unconscious thought theory, there is an interaction between mode of thought and complexity of decision making. Conscious thought is suited to simple decision making because of its limited capacity. Dijksterhuis et al. (2006) found that, when the decision was complex, participants selected the more desirable option more often in the unconscious thought condition. The opposite pattern was found for simple decisions. Mamede et al. (2010) argued that experts have much relevant, well-organised knowledge they can access effectively by conscious searching through long-term memory. As a result, conscious thought sometimes produces much better decision making than unconscious thought.

In real life we are often confronted by complex decisions for which not all attributes are immediately available or apparent, and we sometimes make irrational decisions. Simon proposed the notion of bounded rationality to characterise human complex decision making: we produce workable solutions using heuristics such as satisficing. This describes human complex decision making better than other approaches such as the multi-attribute utility approach or elimination-by-aspects theory. According to unconscious thought theory, unconscious thought is better suited than conscious thought to complex decision making because of capacity constraints. There is evidence that, under some circumstances, unconscious thought leads to more effective decision making than conscious thought. However, the usefulness of conscious thought should not be underestimated.

Additional references
Gigerenzer, G. & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102: 684–704.
Hastie, R. (2001). Problems for judgement and decision making. Annual Review of Psychology, 52: 653–83.
Koehler, J.J. (1996). The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges. Behavioral & Brain Sciences, 19: 1–17.
Ritov, I. & Baron, J. (1990). Reluctance to vaccinate: Omission bias and ambiguity. Journal of Behavioral Decision Making, 3: 263–77.