Errors in Probabilistic Reasoning and Judgment Biases

NBER WORKING PAPER SERIES

ERRORS IN PROBABILISTIC REASONING AND JUDGMENT BIASES

Daniel J. Benjamin

Working Paper 25200
http://www.nber.org/papers/w25200

NATIONAL BUREAU OF ECONOMIC RESEARCH
1050 Massachusetts Avenue
Cambridge, MA 02138
October 2018

This chapter will appear in the forthcoming Handbook of Behavioral Economics (eds. Doug Bernheim, Stefano DellaVigna, and David Laibson), Volume 2, Elsevier, 2019. For helpful comments, I am grateful to Andreas Aristidou, Nick Barberis, Pedro Bordalo, Colin Camerer, Christopher Chabris, Samantha Cherney, Bob Clemen, Gary Charness, Alexander Coutts, Chetan Dave, Juan Dubra, Craig Fox, Nicola Gennaioli, Tom Gilovich, David Grether, Zack Grossman, Ori Heffetz, Jon Kleinberg, Lawrence Jin, Annie Liang, Chuck Manski, Josh Miller, Don Moore, Ted O'Donoghue, Jeff Naecker, Collin Raymond, Alex Rees-Jones, Rebecca Royer, Josh Schwartzstein, Tali Sharot, Andrei Shleifer, Josh Tasoff, Richard Thaler, Joël van der Weele, George Wu, Basit Zafar, Chen Zhao, Daniel Zizzo, conference participants at the 2016 Stanford Institute for Theoretical Economics, and the editors of this Handbook, Doug Bernheim, Stefano DellaVigna, and David Laibson. I am grateful to Matthew Rabin for extremely valuable conversations about the topics in this chapter over many years. I thank Peter Bowers, Rebecca Royer, and especially Tushar Kundu for outstanding research assistance. The views expressed herein are those of the author and do not necessarily reflect the views of the National Bureau of Economic Research.

NBER working papers are circulated for discussion and comment purposes. They have not been peer-reviewed or been subject to the review by the NBER Board of Directors that accompanies official NBER publications.

© 2018 by Daniel J. Benjamin. All rights reserved. Short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source.
Errors in Probabilistic Reasoning and Judgment Biases
Daniel J. Benjamin
NBER Working Paper No. 25200
October 2018
JEL No. D03, D90

ABSTRACT

Errors in probabilistic reasoning have been the focus of much psychology research and are among the original topics of modern behavioral economics. This chapter reviews theory and evidence on this topic, with the goal of facilitating more systematic study of belief biases and their integration into economics. The chapter discusses biases in beliefs about random processes, biases in belief updating, the representativeness heuristic as a possible unifying theory, and interactions between biased belief updating and other features of the updating situation. Throughout, I aim to convey how much evidence there is for (and against) each putative bias, and I highlight when and how different biases may be related to each other. The chapter ends by drawing general lessons for when people update too much or too little, reflecting on modeling challenges, pointing to areas of economics to which the biases are relevant, and highlighting some possible directions for future work.

Daniel J. Benjamin
Center for Economics and Social Research
University of Southern California
635 Downie Way, Suite 312
Los Angeles, CA 90089-3332
and NBER
[email protected]

Table of Contents

Section 1. Introduction
Section 2. Biased Beliefs About Random Sequences
    2.A. The Gambler's Fallacy and the Law of Small Numbers
    2.B. The Hot-Hand Bias
    2.C. Additional Biases in Beliefs About Random Sequences
Section 3. Biased Beliefs About Sampling Distributions
    3.A. Partition Dependence
    3.B. Sample-Size Neglect and Non-Belief in the Law of Large Numbers
    3.C. Sampling-Distribution-Tails Diminishing Sensitivity
    3.D. Overweighting the Mean and the Fallacy of Large Numbers
    3.E. Sampling-Distribution Beliefs for Small Samples
    3.F. Summary and Comparison of Sequence Beliefs Versus Sampling-Distribution Beliefs
Section 4. Evidence on Belief Updating
    4.A. Conceptual Framework
    4.B. Evidence from Simultaneous Samples
    4.C. Evidence from Sequential Samples
Section 5. Theories of Biased Inference
    5.A. Biased Sampling-Distribution Beliefs
    5.B. Conservatism Bias
    5.C. Extreme-Belief Aversion
    5.D. Summary
Section 6. Base-Rate Neglect
Section 7. The Representativeness Heuristic
    7.A. Representativeness
    7.B. The Strength-Versus-Weight Theory of Biased Updating
    7.C. Economic Models of Representativeness
    7.D. Modeling Representativeness Versus Specific Biases
Section 8. Prior-Biased Inference
    8.A. Conceptual Framework
    8.B. Evidence and Models
Section 9. Preference-Biased Inference
    9.A. Conceptual Framework
    9.B. Evidence and Models
Section 10. Discussion
    10.A. When Do People Update Too Much or Too Little?
    10.B. Modeling Challenges
    10.C. Generalizability from the Lab to the Field
    10.D. Connecting With Other Areas of Economics
    10.E. Some Possible Directions For Future Research

Section 1. Introduction

Probabilistic beliefs are central to decision-making under risk. Therefore, systematic errors in probabilistic reasoning can matter for the many economic decisions that involve risk, including investing for retirement, purchasing insurance, starting a business, and searching for goods, jobs, or workers. This chapter reviews what psychologists and economists have learned about such systematic errors. At the cost of some precision, throughout this chapter I will use the term "belief biases" as shorthand for "errors in probabilistic reasoning." By "bias," in this chapter I will mean any deviation from correct reasoning about probabilities or Bayesian updating.1

This chapter's area of research—which is often called "judgment under uncertainty" or "heuristics and biases" in psychology—was introduced by the psychologist Ward Edwards and his students and colleagues in the 1960s (e.g., Phillips and Edwards, 1966). This topic was the starting point of the collaboration between Daniel Kahneman and Amos Tversky. Their seminal early papers (e.g., Tversky and Kahneman, 1971, 1974) jumpstarted an enormous literature in psychology and influenced thinking in many other disciplines, including economics. Despite so much work by psychologists and despite being one of the original topics of modern behavioral economics, to date belief biases have received less attention from behavioral economists than time, risk, and social preferences. Belief biases have also made

1 My use of the same term "bias" for all of these deviations is not meant to obscure the distinctions between them in terms of their psychological origins. For example,
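The Bayesian benchmark invoked in the definition of "bias" above can be made concrete with a small numerical sketch (illustrative only; the disease-testing numbers here are hypothetical and not from the chapter). It contrasts the correct Bayesian posterior with the answer produced by base-rate neglect, one of the biases the chapter reviews:

```python
# Illustrative sketch (not from the paper): Bayes' rule as the benchmark
# for "correct" updating, and one deviation from it (base-rate neglect).
# Hypothetical numbers: a condition with 1% prevalence and a test with
# 90% sensitivity and a 10% false-positive rate.

def bayes_posterior(prior, p_signal_given_h, p_signal_given_not_h):
    """P(H | signal) via Bayes' rule."""
    numerator = prior * p_signal_given_h
    denominator = numerator + (1 - prior) * p_signal_given_not_h
    return numerator / denominator

prior = 0.01      # base rate P(H)
sens = 0.90       # P(positive signal | H)
false_pos = 0.10  # P(positive signal | not H)

correct = bayes_posterior(prior, sens, false_pos)

# Base-rate neglect: the updater effectively ignores the 1% base rate
# and acts as if P(H) = 0.5, so the likelihoods dominate the posterior.
neglected = bayes_posterior(0.5, sens, false_pos)

print(round(correct, 3))    # 0.083
print(round(neglected, 3))  # 0.9
```

Under these hypothetical numbers, neglecting the base rate inflates the posterior from roughly 8% to 90%, which is the kind of systematic deviation from Bayesian updating the chapter calls a belief bias.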