Meta-Analysis in Vocational Behavior: A Systematic Review and Recommendations for Best Practices

Cort W. Rudolph, Saint Louis University
Kahea Chang, Saint Louis University
Rachel S. Rauvola, Saint Louis University
Hannes Zacher, Leipzig University

Note: This is a pre-print version of an in-press, accepted manuscript. Please cite as: Rudolph, C. W., Chang, K., Rauvola, R. S., & Zacher, H. (2020, in press). Meta-analysis in vocational behavior: A systematic review and recommendations for best practices. Journal of Vocational Behavior.

Author Note: Cort W. Rudolph, Kahea Chang, & Rachel S. Rauvola, Department of Psychology, Saint Louis University, St. Louis, MO (USA). Hannes Zacher, Institute of Psychology, Leipzig University, Leipzig, Germany. Correspondence concerning this article may be addressed to Cort W. Rudolph, Department of Psychology, Saint Louis University, St. Louis, MO (USA); e-mail: [email protected]

Abstract

Meta-analysis is a powerful tool for the synthesis of quantitative empirical research. Overall, the field of vocational behavior has benefited from the results of meta-analyses. Yet, there is still much to learn about how we can improve the quality of meta-analyses reported in this field of inquiry. In this paper, we systematically review all meta-analyses published in the Journal of Vocational Behavior (JVB) to date. We do so to address two related goals: First, based on guidance from various sources (e.g., the American Psychological Association's meta-analysis reporting standards; MARS), we introduce ten facets of meta-analysis that have particular bearing on statistical conclusion validity. Second, we systematically review meta-analyses published in JVB over the past 32 years, with a particular focus on the methods employed; this review informs a discussion of 19 associated "best practices" for researchers who are considering conducting a meta-analysis in the field of vocational behavior (or in related fields). Thus, this work serves as an important benchmark, indicating where we "have been" and where we "should go" with respect to conducting and reporting meta-analyses on vocational behavior topics.

Keywords: Systematic Review; Meta-Analysis; Methodology; Best Practices

Highlights:
- We review all meta-analyses published in the Journal of Vocational Behavior to date.
- We evaluate studies using ten criteria related to statistical conclusion validity.
- We derive 19 "best practices" for future meta-analyses in vocational behavior.

1. Introduction

The idea that quantitative research results can be statistically aggregated is not new. Over 100 years ago, Pearson (1904) empirically combined the results of multiple clinical studies of typhoid inoculations. Likewise, Fisher (1925) proposed a method for pooling p-values across null hypothesis significance tests. The introduction of more modern conceptualizations of "meta-analysis" as a research synthesis method is often attributed to Glass (1976), who is also credited with popularizing these ideas for the synthesis of psychological research.

The field of vocational behavior was an early adopter of meta-analysis. Indeed, over the past 32 years, the flagship outlet for such work, the Journal of Vocational Behavior (JVB), has published 68 meta-analyses (see Figure 1), on topics ranging from occupational wellbeing (Assouline & Meir, 1987) to vocational interest congruence (Nye, Su, Rounds, & Drasgow, 2017). Meta-analyses have arguably been impactful to the field of vocational behavior as a whole. For example, as of the writing of this manuscript, four of the 25 "most downloaded" JVB articles in the past 90 days (16%), and three of the 25 "most cited" articles since 2016 (12%), have been meta-analyses. Moreover, according to Web of Science citation counts, a meta-analysis of organizational commitment by Meyer, Stanley, Herscovitch, and Topolnytsky (2002) has been cited just over 2,100 times; the median citation count across all 68 meta-analyses published in JVB is 55 (M = 147.02, SD = 309.95; see also Figure 2).

In this manuscript, we systematically review and synthesize the entire corpus of meta-analytic articles that have been published in JVB. Systematic reviews are typically undertaken to synthesize the findings of primary empirical studies (e.g., Gough, Oliver, & Thomas, 2017). Our approach to this systematic review is somewhat different: instead of integrating the findings of meta-analyses published in JVB in a general sense, our primary focus is on the methods employed to conduct meta-analyses and on the structure used to report them.

Our goals for this systematic review are twofold: Our primary goal is to quantify the state of meta-analytic methods and to trace the development of meta-analytic methods applied to the study of vocational behavior phenomena over time, as published in JVB. We also aim to ascertain "gaps" that exist in the design, conduct, and presentation of meta-analytic studies published therein to date. Informed by the results of this systematic review, our second goal is to outline a set of "best practices" that are organized around the ten facets of our review and that guide the conduct and review of future meta-analyses in JVB, and for the field of vocational behavior more broadly defined (see Table 1). Thus, two overarching research questions guide our review: "How are meta-analyses published in JVB 'done'?" and "Do meta-analyses published in JVB conform to 'best practices'?"

To answer these questions, we organize our review around ten interrelated facets of the design and conduct of meta-analysis that have particular bearing on statistical conclusion validity (i.e., the extent to which the conclusions about meta-analyzed relationships are correct or "reasonable"; Shadish, Cook, & Campbell, 2002). These facets were derived from multiple sources. First, we consulted the American Psychological Association's (APA) meta-analysis reporting standards (MARS; APA, 2008, 2010), a comprehensive effort to establish criteria against which the scientific rigor of a meta-analysis can be judged. Second, we considered more recent suggestions for applying MARS standards specifically to meta-analyses in the organizational sciences (Kepes, McDaniel, Brannick, & Banks, 2013). Third, we referenced recent "best practice" recommendations for the conduct of meta-analyses (Siddaway, Wood, & Hedges, 2019). Finally, we triangulated advice from each of these three sources against contemporary reference books regarding the design and conduct of meta-analyses (Borenstein, Hedges, Higgins, & Rothstein, 2011; Cooper, Hedges, & Valentine, 2009; Schmidt & Hunter, 2015). In our online appendix (https://osf.io/pgujx/), we offer a "crosswalk" tying common advice across these multiple sources to the ten facets of meta-analysis and the 19 best practices we derive therefrom.

Importantly, the primary focus of our review is on the statistical methods involved in the conduct of meta-analyses, and not on the supporting methods involved in such reviews (for a comprehensive review of the literature search strategies that support systematic reviews and meta-analyses, see Harari, Parola, Hartwell, & Riegelman, 2020). Of note, our focus on the ten facets of meta-analysis is not designed to represent an exhaustive methodological summary and critique of every meta-analysis published in JVB to date. Rather, we focus on those ten facets of the design and conduct of meta-analysis that, if adopted prescriptively, would have the most "influence" on the broader applicability and impact of meta-analytic findings to the field as a whole. Moreover, our focus is on those facets of the meta-analytic process that are most actionable (i.e., those over which researchers have the most control in the design, conduct, and reporting of meta-analyses) and that can be readily translated into best practices. Table 1 summarizes these ten facets and the best practice recommendations that we offer as guidance for researchers seeking to conduct meta-analyses of vocational behavior topics, including relevant cautionary notes, related practical advice, and notes about additional readings and resources to guide such efforts. To begin our discussion, we next summarize two predominant traditions of meta-analysis (i.e., Hedges-Olkin and Schmidt-Hunter) and then introduce the ten facets of meta-analysis that guided our review.

2. Two Traditions of Meta-Analysis

The term "meta-analysis" refers to a process of systematically and quantitatively summarizing a body of literature. Generally speaking, meta-analyses are conducted to achieve a set of common goals. The overarching goal of any meta-analysis is to estimate average effects that are representative of population effects (e.g., population correlations, ρxy) based upon the cumulation of multiple sample effects (e.g., correlations from individual primary studies, rxy). Moreover, meta-analyses generally involve procedures for differentially weighting such sample effects to account for variability in the precision of such estimates (e.g., weighting each rxy by its respective sample size, n). Finally, meta-analyses typically provide estimates of the variability of effects from study to study (i.e., estimates of heterogeneity).
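To make the pooling, weighting, and heterogeneity logic of this paragraph concrete, the following is a minimal Python sketch of a sample-size-weighted ("bare-bones") meta-analysis of correlations in the spirit of the Schmidt-Hunter tradition. It is an illustration of the general procedure, not the procedure used in any particular JVB meta-analysis; the function name and input data are hypothetical, and it omits the artifact corrections (e.g., for unreliability or range restriction) that a full Schmidt-Hunter analysis would apply. A Hedges-Olkin analysis would instead typically Fisher z-transform each correlation and weight it by the inverse of its sampling variance.

```python
import numpy as np

def bare_bones_meta(r, n):
    """Sample-size-weighted ("bare-bones") meta-analysis of correlations.

    r : observed correlations r_xy from k primary studies
    n : the corresponding sample sizes
    """
    r, n = np.asarray(r, dtype=float), np.asarray(n, dtype=float)
    k = len(r)

    # Weighted mean correlation: each r_xy is weighted by its sample size,
    # so more precise sample estimates contribute more to the pooled effect.
    r_bar = np.sum(n * r) / np.sum(n)

    # Observed (weighted) variance of effects across studies.
    var_r = np.sum(n * (r - r_bar) ** 2) / np.sum(n)

    # Expected sampling-error variance, using the common shortcut based on
    # the average sample size: (1 - r_bar^2)^2 / (n_mean - 1).
    var_e = (1 - r_bar**2) ** 2 / (np.mean(n) - 1)

    # Residual variance: study-to-study heterogeneity beyond what sampling
    # error alone would produce (floored at zero).
    var_rho = max(var_r - var_e, 0.0)

    return {"k": k, "N": int(np.sum(n)), "mean_r": r_bar,
            "var_observed": var_r, "var_sampling_error": var_e,
            "var_residual": var_rho, "sd_rho": np.sqrt(var_rho)}

# Hypothetical example: five primary studies reporting r_xy and n.
print(bare_bones_meta(r=[0.21, 0.30, 0.12, 0.25, 0.18],
                      n=[120, 85, 240, 60, 150]))
```

Running the example pools k = 5 hypothetical studies (total N = 655) into a single weighted mean correlation and decomposes the observed between-study variance into a sampling-error component and a residual (heterogeneity) component.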