Heuristics and Biases in Project Management
Recommended publications
-
Ambiguity Aversion in Qualitative Contexts: A Vignette Study
Joshua P. White and Andrew Perfors, Melbourne School of Psychological Sciences, University of Melbourne

Abstract: Most studies of ambiguity aversion rely on experimental paradigms involving contrived monetary bets. Thus, the extent to which ambiguity aversion is evident outside of such contexts is largely unknown, particularly in those contexts which cannot easily be reduced to numerical terms. The present work seeks to understand whether ambiguity aversion occurs in a variety of different qualitative domains, such as work, family, love, friendship, exercise, study and health. We presented participants with 24 vignettes and measured the degree to which they preferred risk to ambiguity. In a separate study we asked participants for their prior probability estimates about the likely outcomes in the ambiguous events. Ambiguity aversion was observed in the vast majority of vignettes, but at different magnitudes. It was predicted by gain/loss direction but not by the prior probability estimates (with the interesting exception of the classic Ellsberg ‘urn’ scenario). Our results suggest that ambiguity aversion occurs in a wide variety of qualitative contexts, but to different degrees, and may not be generally driven by unfavourable prior probability estimates of ambiguous events.

Corresponding author: Joshua P. White ([email protected])

From the introduction: The world is replete with the unknown, yet people generally prefer some types of ‘unknown’ to others. Here, an important distinction exists between risk and ambiguity (what Knight called uncertainty). As defined by Knight (1921), risk is a measurable lack of certainty that can be represented by numerical probabilities (e.g., “there is a 50% chance that it will rain tomorrow”), while ambiguity is an unmeasurable lack of certainty (e.g., “there is an unknown probability that it will rain tomorrow”).
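The classic Ellsberg ‘urn’ scenario referenced above can be made concrete with a short simulation. This is an illustrative sketch, not the authors' materials: the urn composition follows Ellsberg's standard setup, and the uniform prior over compositions is an assumption made here for the demonstration.

```python
import random

def draw_from_urn(rng, n_red=30, n_other=60):
    """One Ellsberg urn: 30 red balls plus 60 balls that are black or
    yellow in an unknown proportion (modelled here with a uniform prior)."""
    n_black = rng.randint(0, n_other)  # the ambiguous composition
    balls = ["red"] * n_red + ["black"] * n_black + ["yellow"] * (n_other - n_black)
    return rng.choice(balls)

def win_rate(bet_color, trials=100_000, seed=0):
    """Empirical probability that a bet on `bet_color` pays off."""
    rng = random.Random(seed)
    return sum(draw_from_urn(rng) == bet_color for _ in range(trials)) / trials

# Both bets win about 1/3 of the time under the uniform prior, so a strict
# preference for the unambiguous red bet cannot be explained by expected
# value alone; that residual preference is ambiguity aversion.
print(f"risk bet (red):        {win_rate('red'):.3f}")
print(f"ambiguity bet (black): {win_rate('black'):.3f}")
```
-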
A Task-Based Taxonomy of Cognitive Biases for Information Visualization
Evanthia Dimara, Steven Franconeri, Catherine Plaisant, Anastasia Bezerianos, and Pierre Dragicevic

From the accompanying slides, the authors describe three kinds of limitations: the computer, the display, and the human. The human limitations come in two forms:
• Human vision has limitations (perceptual biases, such as magnitude estimation and color perception).
• Human reasoning has limitations (cognitive biases).

Cognitive biases are behaviors in which humans consistently behave irrationally. Pohl’s criteria, distilled:
• They are predictable and consistent.
• People are unaware they are doing them.
• They are not misunderstandings.

Biases surveyed include: Ambiguity effect, Anchoring or focalism, Anthropocentric thinking, Anthropomorphism or personification, Attentional bias, Attribute substitution, Automation bias, Availability heuristic, Availability cascade, Backfire effect, Bandwagon effect, Base rate fallacy or base rate neglect, Belief bias, Ben Franklin effect, Berkson's paradox, Bias blind spot, Choice-supportive bias, Clustering illusion, Compassion fade, Confirmation bias, Congruence bias, Conjunction fallacy, Conservatism (belief revision), Continued influence effect, Contrast effect, Courtesy bias, Curse of knowledge, Declinism, Decoy effect, Default effect, Denomination effect, Disposition effect, Distinction bias, Dread aversion, Dunning–Kruger effect, Duration neglect, Empathy gap, End-of-history illusion, Endowment effect, Exaggerated expectation, Experimenter's or expectation bias, …
-
Act-Sampling Bias and the Shrouding of Repeat Offending
Virginia Law Review Online, Volume 103 (December 2017), 94–102. Essay by Ian Ayres, Michael Chwe and Jessica Ladd.*

A college president needs to know how many sexual assaults on her campus are caused by repeat offenders. If repeat offenders are responsible for most sexual assaults, the institutional response should focus on identifying and removing perpetrators. But how many offenders are repeat offenders? Ideally, we would find all offenders and then see how many are repeat offenders. Short of this, we could observe a random sample of offenders. But in real life, we observe a sample of sexual assaults, not a sample of offenders. In this paper, we explain how drawing naive conclusions from “act sampling”—sampling people’s actions instead of sampling the population—can make us grossly underestimate the proportion of repeat actors. We call this “act-sampling bias.” This bias is especially severe when the sample of known acts is small, as in sexual assault, which is among the least likely of crimes to be reported.[1] In general, act sampling can bias our conclusions in unexpected ways. For example, if we use act sampling to collect a set of people, and then these people truthfully report whether they are repeat actors, we can overestimate the proportion of repeat actors.

* Ayres is the William K. Townsend Professor at Yale Law School; Chwe is Professor of Political Science at the University of California, Los Angeles; and Ladd is the Founder and CEO of Callisto.
[1] David Cantor et al., Report on the AAU Campus Climate Survey on Sexual Assault and Sexual Misconduct, Westat, at iv (2015), https://perma.cc/5BX7-GQPU [hereinafter AAU Survey] (noting that a relatively small percentage of the most serious offenses are reported); Nat’l Inst. …
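A small simulation makes the underestimation mechanism concrete: when only a small fraction of acts is observed, most repeat actors appear in the sample just once and are indistinguishable from one-time actors. This is a hypothetical toy model, not the authors' analysis; the population sizes and reporting rate are invented.

```python
import random
from collections import Counter

def act_sampling_demo(n_single=800, n_repeat=200, acts_per_repeat=5,
                      report_rate=0.10, seed=1):
    """Toy model: 20% of offenders are repeat offenders committing five acts
    each; every act is independently reported with probability `report_rate`."""
    rng = random.Random(seed)
    acts = [f"single-{i}" for i in range(n_single)]  # one act each
    for i in range(n_repeat):
        acts += [f"repeat-{i}"] * acts_per_repeat    # five acts each
    reported = Counter(a for a in acts if rng.random() < report_rate)

    # Naive act-sampling view: an offender only *looks* like a repeat
    # offender if two or more of their acts happen to land in the sample.
    naive = sum(n >= 2 for n in reported.values()) / len(reported)
    print(f"true repeat-offender share: {n_repeat / (n_single + n_repeat):.2f}")
    print(f"naive act-sampled estimate: {naive:.2f}")

act_sampling_demo()  # prints ~0.20 true vs ~0.10 naive
```
-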
Mitigating Cognitive Biases in Risk Identification: Practitioner Checklist for the Aerospace Sector
Debra L. Emmons, Thomas A. Mazzuchi, Shahram Sarkani, and Curtis E. Larsen

This research contributes an operational checklist for mitigating cognitive biases in the aerospace sector risk management process. The Risk Identification and Evaluation Bias Reduction Checklist includes steps for grounding the risk identification and evaluation activities in past project experiences through historical data, and emphasizes the importance of incorporating multiple methods and perspectives to guard against optimism and a singular, project-instantiation-focused view. The authors developed a survey to elicit subject matter expert judgment on the value of the checklist to support its use in government and industry as a risk management tool. The survey also provided insights on bias mitigation strategies and lessons learned. This checklist addresses the deficiency in the literature in providing operational steps for the practitioner to recognize and implement strategies for bias reduction in risk management in the aerospace sector.

DOI: https://doi.org/10.22594/dau.16-770.25.01
Keywords: Risk Management, Optimism Bias, Planning Fallacy, Cognitive Bias Reduction
http://www.dau.mil, January 2018

This article and its accompanying research contribute an operational Risk Identification and Evaluation Bias Reduction Checklist for cognitive bias mitigation in risk management for the aerospace sector. The checklist described herein offers a practical and implementable project management framework to help reduce biases in the aerospace sector and redress the cognitive limitations in the risk identification and analysis process. [Figure 1, “Research Approach,” not reproduced here.]
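The checklist's emphasis on grounding estimates in historical data is closely related to reference-class forecasting. The sketch below is a minimal illustration of that general idea; it is not the authors' checklist, and the overrun figures, function name, and percentile choice are all hypothetical.

```python
# Hypothetical cost-overrun fractions from comparable past projects.
past_overruns = [0.12, 0.35, 0.08, 0.50, 0.22, 0.41]

def reference_class_estimate(base_cost, overruns, percentile=0.8):
    """Adjust an optimistic single-project ('inside view') estimate by the
    empirical distribution of past overruns: a simple reference-class forecast."""
    ranked = sorted(overruns)
    uplift = ranked[int(percentile * (len(ranked) - 1))]
    return base_cost * (1 + uplift)

# A $10M inside-view estimate becomes $14.1M at the 80th percentile
# of historical overruns, guarding against optimism bias.
print(f"${reference_class_estimate(10_000_000, past_overruns):,.0f}")
```
-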
The Fallacy of Online Surveys: No Data Are Better Than Bad Data
Human Dimensions of Wildlife, 15:55–64, 2010. Copyright © Taylor & Francis Group, LLC. ISSN: 1087-1209 print / 1533-158X online. DOI: 10.1080/10871200903244250

Mark Damian Duda and Joanne L. Nobile, Responsive Management, Harrisonburg, Virginia, USA

Internet or online surveys have become attractive to fish and wildlife agencies as an economical way to measure constituents’ opinions and attitudes on a variety of issues. Online surveys, however, can have several drawbacks that affect the scientific validity of the data. We describe four basic problems that online surveys currently present to researchers and then discuss three research projects, conducted in collaboration with state fish and wildlife agencies, that illustrate these drawbacks. Each research project involved an online survey and/or a corresponding random telephone survey or non-response bias analysis. Systematic elimination of portions of the sample population in the online survey (i.e., the definition of bias) is demonstrated in each research project. One research project involved a closed population, which enabled a direct comparison of telephone and online results with the total population.

Keywords: Internet surveys, sample validity, SLOP surveys, public opinion, non-response bias

Introduction: Fish and wildlife and outdoor recreation professionals use public opinion and attitude surveys to facilitate understanding their constituents. When the surveys are scientifically valid and unbiased, this information is useful for organizational planning. Survey research, however, costs money.
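The closed-population comparison described above can be illustrated with a toy simulation: when a distinct part of the population is unreachable online and holds different views, the online estimate drifts from the truth no matter how many responses are collected. All population splits and support rates below are invented for the demonstration.

```python
import random

def online_survey_demo(seed=7):
    """Hypothetical closed population: 7,000 online-reachable constituents
    (60% support a proposal) and 3,000 offline constituents (30% support).
    An online survey draws only from the reachable group."""
    rng = random.Random(seed)
    online = [rng.random() < 0.60 for _ in range(7_000)]
    offline = [rng.random() < 0.30 for _ in range(3_000)]

    true_support = sum(online + offline) / 10_000
    online_estimate = sum(rng.sample(online, 500)) / 500  # offline group excluded

    print(f"true population support: {true_support:.2f}")    # ~0.51
    print(f"online-survey estimate:  {online_estimate:.2f}")  # ~0.60

online_survey_demo()
```
-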
Levy, Marc A., “Sampling Bias Does Not Exaggerate Climate-Conflict Claims,” Nature Climate Change 8,6 (442)
https://doi.org/10.1038/s41558-018-0170-5 (final pre-publication text)

To the Editor – In a recent Letter, Adams et al.1 argue that claims regarding climate-conflict links are overstated because of sampling bias. However, this conclusion rests on logical fallacies and conceptual misunderstanding. There is some sampling bias, but it does not have the claimed effect.

Suggesting that a more representative literature would generate a lower estimate of climate-conflict links is a case of begging the question. It only makes sense if one already accepts the conclusion that the links are overstated. Otherwise it is possible that more representative cases might lead to stronger estimates. In fact, correcting sampling bias generally tends to increase effect estimates.2,3

The authors’ claim that the literature’s disproportionate focus on Africa undermines sustainable development and climate adaptation rests on the same fallacy. What if the climate-conflict links are as strong as people think? It is far from obvious that acting as if they were not would somehow enhance development and adaptation. The authors offer no reasoning to support such a claim, and the notion that security and development are best addressed in concert is consistent with much political theory and practice.4,5,6

Conceptually, the authors apply a curious kind of “piling on” perspective in which each new paper somehow ratchets up the consensus view of a country’s climate-conflict links, without regard to methods or findings. Consider the papers cited as examples of how selecting cases on the conflict variable exaggerates the link.
-
10. Sample Bias, Bias of Selection and Double-Blind
10.1 Sample bias: In statistics, sampling bias is a bias in which a sample is collected in such a way that some members of the intended population are less likely to be included than others. It results in a biased sample, a non-random sample [1] of a population (or non-human factors) in which all individuals, or instances, were not equally likely to have been selected. [2] If this is not accounted for, results can be erroneously attributed to the phenomenon under study rather than to the method of sampling. Medical sources sometimes refer to sampling bias as ascertainment bias. [3][4] Ascertainment bias has basically the same definition [5][6] but is still sometimes classified as a separate type of bias.

Types of sampling bias: Selection from a specific real area. For example, a survey of high school students to measure teenage use of illegal drugs will be a biased sample because it does not include home-schooled students or dropouts. A sample is also biased if certain members are underrepresented or overrepresented relative to others in the population. For example, a “man on the street” interview which selects people who walk by a certain location is going to have an overrepresentation of healthy individuals, who are more likely to be out of the home than individuals with a chronic illness. This may be an extreme form of biased sampling, because certain members of the population are totally excluded from the sample (that is, they have zero probability of being selected).
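The “man on the street” example lends itself to a short sketch showing both the bias and one standard correction, inverse-inclusion-probability weighting (a Horvitz-Thompson style estimator, not something discussed in this excerpt). All inclusion probabilities and prevalence figures are illustrative.

```python
import random

def p_include(ill):
    """Inclusion probability: chronically ill people, being less often out
    of the home, are intercepted half as often in this toy model."""
    return 0.05 if ill else 0.10

def intercept_survey_demo(seed=3):
    """30% of the population is chronically ill; the intercept sample skews
    healthy, and reweighting by inverse inclusion probability corrects it."""
    rng = random.Random(seed)
    population = [rng.random() < 0.30 for _ in range(50_000)]  # True means ill

    sample = [ill for ill in population if rng.random() < p_include(ill)]
    naive = sum(sample) / len(sample)

    weights = [1 / p_include(ill) for ill in sample]
    corrected = sum(w for ill, w in zip(sample, weights) if ill) / sum(weights)

    print(f"naive sample prevalence: {naive:.2f}")      # ~0.18, biased low
    print(f"reweighted prevalence:   {corrected:.2f}")  # ~0.30

intercept_survey_demo()
```
-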
Communication Science to the Public
David M. Berube, North Carolina State University

▪ How we communicate. In The Age of American Unreason, Jacoby posited that anti-intellectualism trickled down from the top, fueled by faux-populist politicians striving to make themselves sound approachable rather than smart (Jacoby, 2008). Example: the average length of a sound bite by a presidential candidate in 1968 was 42.3 seconds. Two decades later, it was 9.8 seconds. Today, it’s just a touch over seven seconds and well on its way to being supplanted by 140/280-character Twitter bursts.

▪ Data framing. When asked if they truly believe what scientists tell them, only 36 percent of respondents said yes. Just 12 percent expressed strong confidence in the press to accurately report scientific findings.

▪ Role of the public. A study by two Princeton University researchers, Martin Gilens and Benjamin Page, released in fall 2014, tracked 1,800 U.S. policy changes between 1981 and 2002, and compared the outcome with the expressed preferences of median-income Americans, the affluent, business interests and powerful lobbies. They concluded that average citizens “have little or no independent influence” on policy in the U.S., while the rich and their hired mouthpieces routinely get their way. “The majority does not rule,” they wrote.

New anti-intellectualism: trends
▪ Anti-intellectualism and suspicion.
▪ Trump world – outsiders/insiders.
▪ Erasing/re-writing history – damnatio memoriae.
▪ False news.
▪ Infoxication (CC) and infobesity.
▪ Aggregators and managed reality.
▪ Affirmation and confirmation bias.
▪ Negotiating reality.
▪ New tribalism is mostly ideational, not political.
▪ Unspoken – guns, birth control, sexual harassment, race…

“The amount of technical information is doubling every two years.”
-
Understanding Outcome Bias
Andy Brownback (University of Arkansas) and Michael A. Kuhn (University of Oregon), January 31, 2019

Abstract: Disentangling effort and luck is critical when judging performance. In a principal-agent experiment, we demonstrate that principals’ judgments of agent effort are biased by luck, despite perfectly observing the agent’s effort. This bias can erode the power of incentives to stimulate effort when there is noise in the outcome process. As such, we explore two potential solutions to this “outcome bias”: control over irrelevant information about luck, and outsourcing judgment to independent third parties. We find that both are ineffective. When principals have control over information about luck, they do not avoid the information that biases their judgment. In contrast, when agents have control over the principal’s information about luck, agents strategically manipulate principals’ outcome bias to minimize their punishments. We also find that independent third parties are just as biased as principals. These findings indicate that outcome bias may be more widespread and pernicious than previously understood. They also suggest that outcome bias cannot be driven solely by disappointment nor by distributional preferences. Instead, we hypothesize that luck directly affects beliefs about agents even though effort is perfectly observed. We elicit the beliefs of third parties and principals and find that lucky agents are believed to exert more effort in general than identical, unlucky agents.

JEL Classification: C92, D63, D83
Keywords: Experiment, Reciprocity, Outcome Bias, Attribution Bias, Blame

Acknowledgments: We wish to thank Gary Charness, Ryan Oprea, Peter Kuhn, Joshua Miller, Justin Rao, and James Andreoni for valuable feedback.
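A stylized sketch of the mechanism: punishment should track effort, which is perfectly observed, but any weight placed on the noisy outcome produces outcome bias. The functional form and weights below are invented for illustration and do not reproduce the paper's experimental design.

```python
def punishment(effort, outcome, outcome_weight=0.5):
    """Stylized principal: punishment should depend only on (perfectly
    observed) effort; a positive weight on the noisy outcome is the bias."""
    effort_based = max(0.0, 1.0 - effort)     # low effort -> more punishment
    outcome_based = max(0.0, 1.0 - outcome)   # bad luck   -> more punishment
    return (1 - outcome_weight) * effort_based + outcome_weight * outcome_based

effort = 0.8  # identical, perfectly observed effort for both agents
print(f"lucky agent   (outcome 1.1): {punishment(effort, 1.1):.2f}")  # 0.10
print(f"unlucky agent (outcome 0.5): {punishment(effort, 0.5):.2f}")  # 0.35
```

With outcome_weight set to zero the two agents receive identical punishments; any positive weight opens a luck-driven gap, which is how noise in outcomes erodes the link between effort and reward.
-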
Moral Hindsight
Special Issue: Moral Agency. Research Article.

Nadine Fleischhut,1 Björn Meder,2 and Gerd Gigerenzer2,3
1 Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
2 Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin, Germany
3 Center for Adaptive Behavior and Cognition, Harding Center for Risk Literacy, Max Planck Institute for Human Development, Berlin, Germany

Abstract: How are judgments in moral dilemmas affected by uncertainty, as opposed to certainty? We tested the predictions of a consequentialist and deontological account using a hindsight paradigm. The key result is a hindsight effect in moral judgment. Participants in foresight, for whom the occurrence of negative side effects was uncertain, judged actions to be morally more permissible than participants in hindsight, who knew that negative side effects occurred. Conversely, when hindsight participants knew that no negative side effects occurred, they judged actions to be more permissible than participants in foresight. The second finding was a classical hindsight effect in probability estimates and a systematic relation between moral judgments and probability estimates. Importantly, while the hindsight effect in probability estimates was always present, a corresponding hindsight effect in moral judgments was only observed among “consequentialist” participants who indicated a cost-benefit trade-off as most important for their moral evaluation.

Keywords: moral judgment, hindsight effect, moral dilemma, uncertainty, consequentialism, deontological theories

From the introduction: Despite ongoing efforts, hunger crises still regularly occur all over the world. From 2014 to 2016, about 795 million people worldwide were undernourished, almost all of them in developing countries (FAO, IFAD, & WFP, 2015). […] deliberately in these dilemmas, where all consequences are presented as certain (Gigerenzer, 2010). Moreover, using dilemmas with certainty can result in a mismatch with […]
-
The Power of Cognitive Biases
By Anton Koger, Psychologist at ILI.DIGITAL

There are many non-business fields which can be transferred to the business world, be it the comparison to relationships, game elements, or bioengineering, as shown in this magazine. Psychology might be more obvious than others but certainly not less complex. Most products and services are developed to be used by humans. Thus, understanding humans is key to successfully developing a product or service.

Cognitive biases can roughly be divided into four categories: perception, self, memory, and appraisal. First, biases in the category of perception explain how and what kind of information people take in. In each situation, different cognitive illusions are relevant and more or less likely to predict human behavior. Taking cognitive biases into consideration massively helps to understand why people act in certain ways in given situations. Reacting and adapting to these biases makes it possible to improve situations for users, which results in a natural and easy interaction within these situations. Furthermore, biases not only change the way we see situations but also the way we act in them. This can massively help to design situations, guide attention, and avoid cognitive overload.

From the accompanying overview of memory and attention biases:
• We edit and reinforce some memories after the fact.
• We discard specifics to form generalities.
• We reduce events and lists to their key elements.
• We store memories differently based on how they were experienced.
• We notice things already primed in memory or repeated often.
• Bizarre, funny, or visually striking things stick out more.
• We notice when something has changed.
• We are drawn to details that confirm our own existing beliefs.
• We notice flaws in others more easily than in ourselves.
• We favor simple-looking options and complete information.
-
Cognitive Biases in Software Engineering: A Systematic Mapping Study
Rahul Mohanani, Iflaah Salman, Burak Turhan (Member, IEEE), Pilar Rodriguez and Paul Ralph

Abstract—One source of software project challenges and failures is the systematic errors introduced by human cognitive biases. Although extensively explored in cognitive psychology, investigations concerning cognitive biases have only recently gained popularity in software engineering research. This paper therefore systematically maps, aggregates and synthesizes the literature on cognitive biases in software engineering to generate a comprehensive body of knowledge, understand state-of-the-art research, and provide guidelines for future research and practice. Focusing on bias antecedents, effects and mitigation techniques, we identified 65 articles (published between 1990 and 2016), which investigate 37 cognitive biases. Despite strong and increasing interest, the results reveal a scarcity of research on mitigation techniques and poor theoretical foundations in understanding and interpreting cognitive biases. Although bias-related research has generated many new insights in the software engineering community, specific bias mitigation techniques are still needed for software professionals to overcome the deleterious effects of cognitive biases on their work.

Index Terms—Antecedents of cognitive bias, cognitive bias, debiasing, effects of cognitive bias, software engineering, systematic mapping.

1 INTRODUCTION
Cognitive biases are systematic deviations from optimal reasoning [1], [2]. In other words, they are recurring errors in thinking, or patterns of bad judgment observable in different people and contexts. A well-known example is confirmation bias—the tendency to pay more attention […]. No analogous review of SE research exists. The purpose of this study is therefore as follows. Purpose: to review, summarize and synthesize the current state of software engineering research involving cognitive biases.