Chapter 13: Judgement and Decision Making


Judgement researchers address the question "How do people integrate multiple, incomplete, and sometimes conflicting cues to infer what is happening in the external world?" (Hastie, 2001). Decision making, in contrast, involves choosing among various options. Decision-making researchers address the question "How do people choose what action to take to achieve labile [changeable], sometimes conflicting goals in an uncertain world?" (Hastie, 2001).

The two areas are closely related. Decision-making research covers all of the processes involved in deciding on a course of action, whereas judgement research focuses mainly on those aspects of decision making concerned with estimating the likelihood of various events. In addition, judgements are evaluated in terms of their accuracy, whereas decisions are evaluated on the basis of their consequences.

Judgement research

We often change our opinion of the likelihood of something in the light of new information. Reverend Thomas Bayes provided a more precise way of thinking about this. Bayes' theorem combines the relative probabilities of two hypotheses before any data are obtained (the prior odds) with the relative probabilities of obtaining the observed data under each hypothesis (the likelihood ratio) to yield the relative probabilities after the data are obtained (the posterior odds):

P(H1 | D) / P(H2 | D) = [P(H1) / P(H2)] × [P(D | H1) / P(D | H2)]

Kahneman and Tversky illustrated this with their taxi-cab problem (1972); a worked numerical version appears at the end of this section.

INTERACTIVE EXERCISE: Taxi-cab problem

Evidence indicates that people often take less account of the prior odds (base-rate information) than Bayes' theorem says they should. Base-rate information was defined by Koehler (1996) as "the relative frequency with which an event occurs or an attribute is present in the population". Kahneman and Tversky (1973) found evidence that people fail to take account of base-rate information.

CASE STUDY: Koehler (1996): the base rate fallacy reconsidered

Kahneman and Tversky argued that we rely on simple heuristics, or rules of thumb, because they are cognitively undemanding. One is the representativeness heuristic, in which events that are representative or typical of a class are assigned a high probability of occurrence. Kahneman and Tversky (1973) found that participants often neglected base-rate information in favour of the representativeness heuristic.

The conjunction fallacy is the mistaken belief that the conjunction of two events is more likely than one of those events alone; this is impossible, because P(A and B) can never exceed P(A). The fallacy seems to involve the representativeness heuristic. Tversky and Kahneman (1983) used the Linda problem: given a description of Linda, most participants ranked "feminist bank teller" as more probable than either "bank teller" or "feminist" alone, which is incorrect. Many people misinterpret the statement "Linda is a bank teller" as implying she is not active in the feminist movement (Manktelow, 2012). However, the conjunction fallacy is still found even when almost everything possible is done to ensure participants interpret the problem correctly (Sides et al., 2002).

Base-rate information is sometimes both relevant and generally used. Krynski and Tenenbaum (2007) argued that we possess valuable causal knowledge that allows us to make accurate judgements using base-rate information in everyday life. In the laboratory, however, the judgement problems we confront often fail to provide such knowledge.
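Returning to the taxi-cab problem and the odds form of Bayes' theorem introduced above, here is a minimal sketch of the calculation in Python, using the standard figures (85% of the city's cabs are Green, 15% are Blue, and a witness who identifies the cab as Blue is correct 80% of the time); the variable names are illustrative only.

```python
# Odds form of Bayes' theorem applied to the taxi-cab problem.
prior_blue = 0.15            # base rate: 15% of the city's cabs are Blue
prior_green = 0.85           # base rate: 85% are Green
p_say_blue_if_blue = 0.80    # witness says "Blue" when the cab really is Blue
p_say_blue_if_green = 0.20   # witness says "Blue" when the cab is Green

prior_odds = prior_blue / prior_green                        # ~0.176
likelihood_ratio = p_say_blue_if_blue / p_say_blue_if_green  # 4.0
posterior_odds = prior_odds * likelihood_ratio               # ~0.706

# Convert the posterior odds back into a probability.
p_blue_given_witness = posterior_odds / (1 + posterior_odds)
print(f"P(Blue | witness says Blue) = {p_blue_given_witness:.2f}")  # ~0.41
```

Although the witness is 80% reliable, the low base rate of Blue cabs pulls the posterior probability down to about 41%; people who answer "80%" are neglecting the prior odds, which is exactly the base-rate neglect discussed above.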
Krynski and Tenenbaum (2007) further argued that, when reasonably full causal knowledge is available to participants, it allows them to solve such problems.

Tversky and Kahneman (1974) also studied the availability heuristic, which involves estimating the frequencies of events on the basis of how easily relevant information can be retrieved from long-term memory. Lichtenstein et al. (1978) found that causes of death attracting publicity (e.g., murder) were judged to be more likely than, for example, suicide, when the opposite is actually the case. Pachur et al. (2012a) suggested Lichtenstein et al.'s results may instead have been due to the affect heuristic, in which judgements are based on the feeling of dread. Oppenheimer (2004), moreover, provided convincing evidence that we do not always use the availability heuristic.

Despite the experimental evidence for error-prone heuristic use, much of the work on heuristics has been limited. Tversky and Kahneman's (1974) brief definitions are vague and fail to make testable predictions. There has been limited theorising in this field (Fiedler & von Sydow, 2015). Error-prone judgements may reflect the availability of information rather than use of an incorrect heuristic. Finally, much of the research severely lacks ecological validity.

Judgement theories

Support theory was proposed by Tversky and Koehler (1994), based in part on the availability heuristic. Its key assumption is that any given event will appear more or less likely depending on how it is described. A more explicit description of an event will typically be regarded as having a greater subjective probability because it draws attention to less obvious aspects of the event and overcomes memory limitations. Mandel (2005) found the overall estimated probability of a terrorist attack was greater when participants were presented with explicit possibilities than when they were not, and Redelmeier et al. (1995) found this phenomenon in experts as well as non-experts. However, Sloman et al. (2004) obtained findings directly opposite to those predicted by support theory: an explicit description can reduce subjective probability if it leads us to focus on low-probability causes. Redden and Frederick (2011) argued that an explicit description can also reduce subjective probability by making an event more effortful to comprehend. Support theory is oversimplified and cannot account for these findings.

Gigerenzer and Gaissmaier (2011) argued that heuristics are often very valuable. They focused on fast and frugal heuristics, which involve rapid processing of relatively little information. One of the key fast and frugal heuristics is the take-the-best strategy (see the sketch at the end of this section), which has three components:

• Search rule – search cues in order of validity.
• Stopping rule – stop when a discriminatory cue is found.
• Decision rule – choose the outcome indicated by the discriminating cue.

The most researched example is the recognition heuristic: if one of two objects is recognised and the other is not, we infer that the recognised object has the higher value with respect to the criterion (Goldstein & Gigerenzer, 2002). Kruglanski and Gigerenzer (2011) argued that there is a two-step process in deciding which heuristic to use. First, the nature of the task and individual memory limit the number of available heuristics. Second, people select one of them based on the likely outcome of using it and its processing demands.

WEBLINK: Todd and Gigerenzer

RESEARCH ACTIVITIES 1 & 2: Smart heuristics

Goldstein and Gigerenzer (2002) carried out several experiments to study the recognition heuristic and found up to 90% usage.
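The take-the-best strategy described above can be captured in a few lines of code. This is a minimal sketch under stated assumptions, not Gigerenzer's own implementation: the cue names, cue ordering, and city data are invented for illustration, with recognition treated simply as the highest-validity cue.

```python
# Take-the-best: compare two objects cue by cue, in order of cue validity,
# stop at the first cue that discriminates, and decide on that cue alone.
# Cue values are True/False; None marks an unknown value.

def take_the_best(obj_a, obj_b, cues_by_validity):
    """Infer which object ('a' or 'b') is higher on the criterion, else 'guess'."""
    for cue in cues_by_validity:               # search rule: most valid cue first
        va, vb = obj_a.get(cue), obj_b.get(cue)
        if None not in (va, vb) and va != vb:  # stopping rule: first discriminating cue
            return 'a' if va else 'b'          # decision rule: decide on this cue alone
    return 'guess'                             # no cue discriminates

# Hypothetical example: which of two cities is larger?
cues = ['recognised', 'is_capital', 'has_top_football_team']  # assumed validity order
city_a = {'recognised': True, 'is_capital': False, 'has_top_football_team': True}
city_b = {'recognised': True, 'is_capital': False, 'has_top_football_team': False}
print(take_the_best(city_a, city_b, cues))  # -> 'a', decided by the third cue alone
```

When only one of the two objects is recognised, the first cue settles the choice immediately; that special case is one way of viewing the recognition heuristic within this scheme.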
They also found American and German students performed less well when tested on cities in their own country than on cities in another country: they typically recognised both cities in a pair from their own country and so could not use the recognition heuristic. Pachur et al. (2012a) found in a meta-analysis a correlation of +0.64 between usage of the recognition heuristic and its validity, suggesting that people rely on it most when it is likely to be accurate.

Heuristics sometimes outperform judgements based on much more complex calculations. For example, Wübben and von Wangenheim (2008) considered how managers of clothes shops decide whether customers are active (i.e., likely to buy again) or inactive. The hiatus heuristic is a very simple strategy: only customers who have purchased fairly recently are deemed to be active.

However, Richter and Späth (2006) found the recognition heuristic was often not used when participants had access to inconsistent information. There is also accumulating evidence that the take-the-best strategy is used less often than is sometimes assumed. Newell et al. (2003) concluded that the take-the-best strategy was least likely to be used when the cost of obtaining information was low and the validities of the cues were unknown. Dieckmann and Rieskamp (2007) focused on information redundancy: simple strategies work best (and are more likely to be used) when environmental information is redundant.

In sum, there is good evidence that people often use fast and frugal heuristics, and that these heuristics are fast and effective, particularly when individuals are under time or cognitive pressure. Nevertheless, the approach has several limitations. Too much emphasis has been placed on intuition, given humans' large capacity for logical reasoning (Evans & Over, 2010). Use of the recognition heuristic is more complex than assumed: people generally also consider why they recognise an object, and only then decide whether to use the recognition heuristic (Newell, 2011). The use of other heuristics is similarly more complex than claimed. Finally, far too little attention has been paid to the importance of the decision that has to be made.

Natural frequency hypothesis

Gigerenzer and Hoffrage (1995) provided an influential theoretical approach to account for the finding that people make better probability judgements with frequency data than with percentages. The approach relies on the notion of natural sampling: the process of encountering instances in a population sequentially. Natural sampling happens in everyday life and may be the evolutionary basis for the human facility with frequencies. In most word problems, however, participants are simply provided with the frequency information and do not have the opportunity to do any natural sampling themselves.
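To see why natural frequencies help, compare the same diagnostic problem in percentage format and in natural frequency format. The numbers below are invented round figures chosen for illustration (1% prevalence, 80% hit rate, 10% false-positive rate), not data from Gigerenzer and Hoffrage (1995).

```python
# Percentage format: solving the problem requires explicit use of Bayes' theorem.
p_disease = 0.01           # 1% prevalence
p_pos_if_disease = 0.80    # 80% hit rate
p_pos_if_healthy = 0.10    # 10% false-positive rate
p_disease_if_pos = (p_disease * p_pos_if_disease) / (
    p_disease * p_pos_if_disease + (1 - p_disease) * p_pos_if_healthy)

# Natural frequency format: imagine 1,000 people encountered one by one.
# 10 have the disease and 8 of them test positive; of the 990 healthy
# people, 99 test positive. The answer is just one count over another.
nf_answer = 8 / (8 + 99)

print(round(p_disease_if_pos, 3), round(nf_answer, 3))  # both ~0.075
```

The two formats are mathematically equivalent, but the frequency version replaces the multiplication and normalisation steps with simple counts, which is plausibly why participants find it so much easier.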