
Chapter 13: Judgement and decision making

Judgement researchers address the question “How do people integrate multiple, incomplete, and sometimes conflicting cues to infer what is happening in the external world?” (Hastie, 2001). In contrast, decision making involves choosing among various options. Decision-making researchers address the question “How do people choose what action to take to achieve labile [changeable], sometimes conflicting goals in an uncertain world?” (Hastie, 2001).

The two areas are closely related. Decision-making research covers all of the processes involved in deciding on a course of action, whereas judgement research focuses mainly on those aspects of decision making concerned with estimating the likelihood of various events. In addition, judgements are evaluated in terms of their accuracy, whereas decisions are evaluated on the basis of their consequences.

Judgement research

We often change our opinion of the likelihood of something in the light of new information. Reverend Thomas Bayes provided a precise way of thinking about this. Bayes’ theorem combines the relative probabilities of two hypotheses before any data are obtained (the prior odds) with the relative probabilities of obtaining the observed data under each hypothesis (the likelihood ratio) to give the relative probabilities of the hypotheses after the data are obtained (the posterior odds):

p(H_A | D) / p(H_B | D) = [p(D | H_A) / p(D | H_B)] × [p(H_A) / p(H_B)]

Kahneman and Tversky (1972) illustrated this with their taxi-cab problem.

INTERACTIVE EXERCISE: Taxi-cab problem

Evidence indicates that people often take less account of the prior odds (base-rate information) than Bayes’ theorem says they should. Base-rate information was defined by Koehler (1996) as “the relative frequency with which an event occurs or an attribute is present in the population”. Kahneman and Tversky (1973) found evidence that people fail to take account of base-rate information.
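Bayes’ theorem can be made concrete with the figures usually quoted for the taxi-cab problem: 85% of the city’s cabs are Green and 15% are Blue, and a witness who reports the cab as Blue identifies cab colour correctly 80% of the time. A minimal sketch in Python, assuming these standard figures:

```python
def posterior(prior_h, likelihood_h, prior_alt, likelihood_alt):
    """Posterior probability of hypothesis H, given the priors for H and the
    alternative hypothesis and the likelihood of the data under each."""
    return (prior_h * likelihood_h) / (
        prior_h * likelihood_h + prior_alt * likelihood_alt
    )

# H = "the cab was Blue"; data = "the witness says Blue".
# Base rates: 15% Blue, 85% Green; witness accuracy: 80%.
p_blue = posterior(prior_h=0.15, likelihood_h=0.80,
                   prior_alt=0.85, likelihood_alt=0.20)
print(round(p_blue, 2))  # 0.41 - a Green cab remains more likely
```

People who neglect the base rates typically answer 0.80; combining the prior odds with the witness evidence shows the correct answer is only about 0.41.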
CASE STUDY: Koehler (1996): the base rate fallacy reconsidered

Kahneman and Tversky argued that we rely on simple heuristics, or rules of thumb, because they are cognitively undemanding. One of these is the representativeness heuristic, in which events that are representative or typical of a class are assigned a high probability of occurrence. Kahneman and Tversky (1973) found participants often neglected base-rate information in favour of the representativeness heuristic.

The conjunction fallacy is the mistaken belief that the conjunction of two events is more likely than one of those events alone, and it seems to involve the representativeness heuristic. Tversky and Kahneman (1983) studied it with the Linda problem: given a description of Linda, most participants ranked “feminist bank teller” as more probable than either “bank teller” or “feminist”, which cannot be correct. Many people misinterpret the statement “Linda is a bank teller” as implying she is not active in the feminist movement (Manktelow, 2012). However, the conjunction fallacy is still found even when almost everything possible is done to ensure participants interpret the problem correctly (Sides et al., 2002).

Base-rate information is sometimes both relevant and generally used. Krynski and Tenenbaum (2007) argued that in everyday life we possess valuable causal knowledge that allows us to make accurate judgements using base-rate information, whereas the judgement problems confronted in the laboratory often fail to provide such knowledge. When a problem does give participants reasonably full causal knowledge, they can solve it.

Tversky and Kahneman (1974) also studied the availability heuristic, which involves estimating the frequencies of events on the basis of how easy or difficult it is to retrieve relevant information from long-term memory. Lichtenstein et al.
(1978) found causes of death attracting publicity (e.g., murder) were judged to be more likely than, for example, suicide, when the opposite is actually the case. Pachur et al. (2012a) suggested Lichtenstein et al.’s results may instead reflect the affect heuristic, in which judgements are based on the emotional reactions (e.g., dread) that events evoke. However, Oppenheimer (2004) provided convincing evidence that we do not always use the availability heuristic.

Despite the experimental evidence for error-prone heuristic use, much of the work on heuristics has been limited. Tversky and Kahneman’s (1974) one-word definitions are vague and fail to make testable predictions. There has been limited theorising in this field (Fiedler & von Sydow, 2015). Error-prone judgements may be based on the availability of information rather than on incorrect heuristic use. Finally, much of the research severely lacks ecological validity.

Judgement theories

Support theory was proposed by Tversky and Koehler (1994), based in part on the availability heuristic. Its key assumption is that any given event will appear more or less likely depending on how it is described. A more explicit description of an event will typically be regarded as having a greater subjective probability because it (a) draws attention to less obvious aspects of the event and (b) overcomes memory limitations. Consistent with this, Mandel (2005) found the overall estimated probability of a terrorist attack was greater when participants were presented with explicit possibilities than when they were not, and Redelmeier et al. (1995) found the phenomenon in experts as well as non-experts. However, Sloman et al. (2004) obtained findings directly opposite to those predicted by support theory: an explicit description can reduce subjective probability if it leads us to focus on low-probability causes. Redden and Frederick (2011) argued that providing an explicit description can also reduce subjective probability by making the event more effortful to comprehend.
Support theory is oversimplified and cannot account for these findings.

Fast and frugal heuristics

Gigerenzer and Gaissmaier (2011) argued that heuristics are often very valuable. They focused on fast and frugal heuristics, which involve rapid processing of relatively little information. One of the key fast and frugal heuristics is the take-the-best strategy, which has three components:

Search rule – search cues in order of validity.
Stopping rule – stop when a discriminatory cue is found.
Decision rule – choose the option favoured by that cue.

The most researched example is the recognition heuristic: if one of two objects is recognised and the other is not, we infer that the recognised object has the higher value with respect to the criterion (Goldstein & Gigerenzer, 2002). Kruglanski and Gigerenzer (2011) argued that there is a two-step process in deciding which heuristic to use. First, the nature of the task and individual memory limit the number of available heuristics. Second, people select one of them based on the likely outcome of using it and its processing demands.

WEBLINK: Todd and Gigerenzer
RESEARCH ACTIVITIES 1 & 2: Smart heuristics

Goldstein and Gigerenzer (2002) carried out several experiments on the recognition heuristic and found it was used on up to 90% of trials. They also found American and German students performed less well when tested on cities in their own country than on those in another country: they typically recognised both cities in a pair from their own country and so could not use the recognition heuristic. Pachur et al. (2012a) found in a meta-analysis a correlation of +0.64 between usage of the recognition heuristic and its validity, suggesting people rely on it most when it is likely to be accurate. Heuristics sometimes outperform judgements based on much more complex calculations. For example, Wübben and van Wangenheim (2008) considered how managers of clothes shops decide whether customers are active (i.e., likely to buy again) or inactive.
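The three components of the take-the-best strategy can be sketched in Python; the cues, validities, and city data below are invented purely for illustration:

```python
def take_the_best(obj_a, obj_b, cues):
    """Infer which of two objects scores higher on the criterion.
    cues: list of (cue_function, validity) pairs, where each cue_function
    maps an object name to True/False."""
    # Search rule: examine cues in descending order of validity.
    for cue, _validity in sorted(cues, key=lambda c: c[1], reverse=True):
        a, b = cue(obj_a), cue(obj_b)
        # Stopping rule: stop at the first cue that discriminates.
        if a != b:
            # Decision rule: choose the object favoured by that cue.
            return obj_a if a else obj_b
    return None  # no cue discriminates, so guess

# Hypothetical "which city is larger?" task with invented cue values.
cities = {
    "Aville": {"capital": False, "airport": True},
    "Beeton": {"capital": False, "airport": False},
}
cues = [
    (lambda c: cities[c]["capital"], 0.9),  # most valid cue, checked first
    (lambda c: cities[c]["airport"], 0.7),
]
print(take_the_best("Aville", "Beeton", cues))  # Aville (airport cue discriminates)
```

Note how frugal the strategy is: the capital cue fails to discriminate, so a single further cue settles the inference and all remaining information is ignored.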
The hiatus heuristic is a very simple strategy of this kind – only customers who have purchased fairly recently are deemed to be active.

However, Richter and Späth (2006) found the recognition heuristic was often not used when participants had access to inconsistent information, and there is accumulating evidence that the take-the-best strategy is used less often than is sometimes assumed. Newell et al. (2003) concluded that the take-the-best strategy was least likely to be used when the cost of obtaining information was low and the validities of the cues were unknown. Dieckmann and Rieskamp (2007) focused on information redundancy: simple strategies work best (and are more likely to be used) when environmental information is redundant.

There is good evidence that people often use fast and frugal heuristics, which are fast and effective, particularly when individuals are under time or cognitive pressure. The approach nevertheless has several limitations. Too much emphasis has been placed on intuition when humans have such a large capacity for logical reasoning (Evans & Over, 2010). Use of the recognition heuristic is more complex than assumed: people generally also consider why they recognise an object and only then decide whether to rely on recognition (Newell, 2011). The use of other heuristics is also more complex than claimed. Finally, far too little attention has been paid to the importance of the decision that has to be made.

Natural frequency hypothesis

Gigerenzer and Hoffrage (1995) provided an influential theoretical approach to account for the finding that judgements are better with frequency data than with percentages. The approach relies on the notion of natural sampling – the process of encountering instances in a population sequentially. Natural sampling happens in everyday life and may be the evolutionary basis for the human facility with frequencies. In most word problems, however, participants are simply provided with frequency information and do not have the opportunity to sample instances sequentially themselves.
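The advantage of natural frequencies can be illustrated by recasting the taxi-cab figures as counts (a sketch; the counts simply scale the standard probabilities to 100 cabs):

```python
# Taxi-cab problem as natural frequencies: imagine 100 cabs in total.
blue, green = 15, 85        # 15 Blue cabs, 85 Green cabs (base rates as counts)
blue_said_blue = 12         # 80% of the 15 Blue cabs correctly called "Blue"
green_said_blue = 17        # 20% of the 85 Green cabs wrongly called "Blue"

# With counts, Bayes' theorem reduces to a simple ratio:
p = blue_said_blue / (blue_said_blue + green_said_blue)
print(round(p, 2))  # 0.41 - the same answer, without any probability formulas
```

Gigerenzer and Hoffrage’s point is that this count format makes the base rates hard to ignore, whereas the percentage format invites base-rate neglect.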