Statistical Tips for Interpreting Scientific Claims


Statistical Tips for Interpreting Scientific Claims
Research Skills Seminar Series 2019 | CAHS Research Education Program
Mark Jones | Biostatistician, Telethon Kids Institute | [email protected]
On behalf of Dr Julie Marsh, Senior Research Fellow, Telethon Kids Institute
18 October 2019
Department of Child Health Research | Child and Adolescent Health Service
ResearchEducationProgram.org

© CAHS Research Education Program, Department of Child Health Research, Child and Adolescent Health Service, WA 2019. Copyright to this material produced by the CAHS Research Education Program, Department of Child Health Research, Child and Adolescent Health Service, Western Australia, is held under the provisions of the Copyright Act 1968 (C'wth Australia). Apart from any fair dealing for personal, academic, research or non-commercial use, no part may be reproduced without written permission. The Department of Child Health Research is under no obligation to grant this permission. Please acknowledge the CAHS Research Education Program, Department of Child Health Research, Child and Adolescent Health Service when reproducing or quoting material from this source.

CONTENTS:
1 PRESENTATION
2 ARTICLE: TWENTY TIPS FOR INTERPRETING SCIENTIFIC CLAIMS (SUTHERLAND, SPIEGELHALTER & BURGMAN, 2013)
3 ADDITIONAL RESOURCES – STATISTICAL TIPS FOR INTERPRETING SCIENTIFIC CLAIMS
3.1 STATISTICAL BIAS
3.2 CORRELATIONS
3.3 COGNITIVE BIAS
3.4 REFERENCES

The presentation draws on Sutherland, Spiegelhalter & Burgman (2013), "Twenty tips for interpreting scientific claims", Nature 503, 335–337.

The 20 Tips

The tips fall into three themes: study design, statistical insights, and human factors.

1. Differences and Chance Cause Variation
2. No Measurement is Exact
3. Bias is Rife
4. Bigger is Usually Better for Sample Size
5. Correlation Does Not Imply Causation
6. Regression to the Mean Can Mislead
7. Extrapolating Beyond the Data is Risky
8. Beware the Base-Rate Fallacy
9. Controls are Important
10. Randomisation Minimises Bias
11. Seek Replication
12. Scientists are Human
13. Significance is Significant
14. Separate No Effect from Non-Significance
15. Effect Size Matters
16. Study Relevance Limits Generalisation
17. Feelings Influence Risk Perception
18. Dependencies Change the Risks
19. Data Can Be Dredged, or Cherry Picked
20. Extreme Measurements May Mislead

Part 1: Study Design

Randomisation (minimises bias)

Randomisation avoids bias. The EXPERIMENTAL UNIT is the smallest 'thing' to which we allocate treatments. A simple random sample is drawn from the population, and each unit is then randomly allocated either to the treatment group (treatment) or to the control group (placebo). (Slide illustration source: Final Fantasy VII.) A minimal allocation sketch follows below.
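To make the allocation step concrete, here is a short Python sketch of simple randomisation into two equal arms. It is illustrative only: the function name `randomise`, the two-arm equal-allocation design and the participant labels are assumptions, not part of the seminar materials.

```python
import random

def randomise(units, seed=None):
    """Randomly allocate experimental units to two equal-sized arms."""
    rng = random.Random(seed)      # seeded so the allocation is reproducible
    shuffled = list(units)         # copy so the caller's list is untouched
    rng.shuffle(shuffled)          # every ordering is equally likely
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

arms = randomise([f"participant_{i:02d}" for i in range(20)], seed=2019)
print(arms["treatment"])           # 10 randomly chosen units get the treatment
print(arms["control"])             # the remaining 10 receive placebo
```

Because the allocation depends only on the random shuffle, no characteristic of the participants can systematically favour one arm, which is the sense in which randomisation minimises bias.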
Replication

Seek replication, not pseudoreplication. "Results consistent across many studies, replicated on independent populations, are more likely to be solid." "The results of several such experiments may be combined in a meta-analysis to provide an overarching view of the topic with potentially much greater statistical power than any of the individual studies."

Controls are Important

Generalisability (aka external validity)

Study relevance limits generalisation. "At its best a trial shows what can be accomplished with a medicine under careful observation and certain restricted conditions. The same results will not invariably or necessarily be observed when the medicine passes into general use." (Austin Bradford Hill, 1984)

Part 2: Statistical Insights

Sample Size (adopt the Goldilocks principle)

Bigger is usually better for sample size. Average efficacy can be more reliably and accurately estimated from a study with hundreds of participants than from a study with only a few. Basic analyses often assume that participants are homogeneous. The Goldilocks principle still applies: a sample can be too small, too big, or just right. (A simulation sketch below illustrates how precision improves with sample size.)

Differences and Chance Cause Variation

"The main challenge of research is teasing apart the importance of the process of interest from the innumerable other sources of variation."
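A short simulation can illustrate these two tips together: estimates from small studies scatter widely around the true value, and the scatter shrinks roughly as 1/sqrt(n). This is a sketch under assumed values (a true mean of 0.5 and unit standard deviation), not an analysis from the seminar.

```python
import random
import statistics

def empirical_se(n, true_mean=0.5, sd=1.0, reps=1000, seed=2019):
    """Spread of the sample mean across `reps` simulated studies of size n."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.gauss(true_mean, sd) for _ in range(n))
        for _ in range(reps)
    ]
    return statistics.stdev(means)   # empirical standard error

for n in (10, 100, 1000):
    print(n, round(empirical_se(n), 3))
# The spread of study estimates falls roughly as 1/sqrt(n):
# about 0.32 at n=10, 0.10 at n=100 and 0.03 at n=1000.
```

The same run also shows why chance causes variation: every simulated study of the same population returns a different estimate, and only the sample size controls how different.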
Hypotheses, Effects and Significance

Significance is significant; separate no effect from non-significance; effect size matters.

A hypothesis test is a statistical technique that assists with answering questions or claims about population parameters by sampling from the population of interest.

P-values give the probability that an observed association could have arisen solely as a result of chance in sampling, given the null hypothesis; the p-value depends on both sample size and variability. A p-value of 0.01 means there is a 1-in-100 probability that what looks like an effect of the treatment could have occurred randomly when in truth there was no effect at all.

This informal reading is tempered by three principles (echoing the American Statistical Association's 2016 statement on p-values):

1. P-values can indicate how incompatible the data are with a specified statistical model.
2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
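To ground the p-value discussion, the following sketch simulates the null hypothesis directly: it generates many two-group studies with no true effect and counts how often chance alone produces a group difference at least as large as an observed one. The observed difference of 0.5 and the group size of 50 are made-up illustrative numbers.

```python
import random
import statistics

def simulated_p_value(observed_diff, n_per_arm, sd=1.0, reps=10_000, seed=7):
    """Two-sided p-value by simulation: the probability, under a null
    hypothesis of no effect, of a group difference at least as extreme
    as `observed_diff`."""
    rng = random.Random(seed)
    extreme = 0
    for _ in range(reps):
        # Both arms are drawn from the same distribution: no true effect.
        a = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        b = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        diff = statistics.fmean(a) - statistics.fmean(b)
        if abs(diff) >= abs(observed_diff):
            extreme += 1
    return extreme / reps

# With 50 participants per arm, chance alone rarely produces a difference
# of 0.5, so the simulated p-value comes out small (around 0.01).
print(simulated_p_value(observed_diff=0.5, n_per_arm=50))
```

Note what the simulation does and does not deliver, in line with the principles above: it measures how surprising the observed difference would be if the null hypothesis were true, not the probability that the null hypothesis is true.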