The Existence of Publication Bias and Risk Factors for Its Occurrence


Kay Dickersin, PhD

Publication bias is the tendency on the parts of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strength of the study findings. Much of what has been learned about publication bias comes from the social sciences, less from the field of medicine. In medicine, three studies have provided direct evidence for this bias. Prevention of publication bias is important both from the scientific perspective (complete dissemination of knowledge) and from the perspective of those who combine results from a number of similar studies (meta-analysis). If treatment decisions are based on the published literature, then the literature must include all available data that is of acceptable quality. Currently, obtaining information regarding all studies undertaken in a given field is difficult, even impossible. Registration of clinical trials, and perhaps other types of studies, is the direction in which the scientific community should move. (JAMA. 1990;263:1385-1389)

"The human intellect ... is more moved and excited by affirmatives than by negatives."
Francis Bacon, 1621

EVER SINCE about 1450, the standard method of imparting information and of acquiring knowledge has been use of the written word. Even before Gutenberg's printing press, the benefits of writing things down were recognized (Moses did not rely on oral tradition to interpret and pass on the Ten Commandments correctly). The value of the written tradition is manifold: it preserves, as in the recording of historical information about families or communities; it provides a basis for common understanding, as in lawmaking; and it provides the vehicle by which we share information deemed to be important, as in reporting the news of the day or the latest scientific findings.

In artistic endeavors, reinterpretation of the written word, such as in translations, is acceptable, even welcome. In other areas, the inherent ambiguities of language lead to a constant struggle to decipher the meaning or intent of the written word. The US Constitution is a case in point. Science depends on accurate, clear, and precise wording in the descriptions of work performed and results obtained. It is imperative that there be only one possible interpretation of what is written. Moreover, to advance, science depends on complete reporting, both in terms of what experiments or studies were conducted and in terms of how an experiment or study was conducted. Practically, it is not possible or even desirable that every experiment or every element of an experiment be reported. Yet, there seem to be no established standards by which an investigator decides what is worth reporting: the decision to report one's findings and the manner in which they are reported are a matter of judgment.

The question of how and when study results are reported is of interest because of potential selection bias: given a set of characteristics about a study (design, operation, and outcome), could one predict the likelihood of publication? If one could, then that on which our "knowledge" is based, the published literature, is a biased representation of knowledge as a whole. If the characteristics that determine publication are related to study quality, then the selection bias incurred by studying only the published literature is acceptable, even desirable. If, on the other hand, the direction of study results or the statistical significance of the results is the reason for differential publication, the bias in terms of the information available to the scientific community may be considerable. The bias that is created when publication of study results is based on the direction or significance of the findings is called publication bias. This term seems to have been used first in the published scientific literature by Smith1 in 1980.

Even if publication bias exists, is it worth worrying about? In a scholarly sense, it is certainly worth worrying about. If one believes that judgments about medical treatment should be made using all good, available evidence, then one should insist that all evidence be made available. In reality, however, medical decisions have, to date, mainly been guided by the individual clinician's training and personal experience. Recently, there has been a change in the way decisions have been made. The rise of consensus conferences, decision analysis, expert systems, use of clinical trials as a basis for policy, and meta-analysis has propelled decisions regarding medical treatment toward a more scientific approach.

PUBLICATION BIAS

Historical Aspects

There seem to be no formal guidelines in science as to when study results should or should not be published. The decision as to what to include in a publication, and whether to publish at all, is largely personal, although dictated by the fashion of the times to a certain extent. When Robert Boyle, the chemist, published his experiments on air in 1660, he was credited with being the first to report the details of his experiments and the precautions necessary for their replication. This work ushered in a new type of report, one that described difficulties and errors. Thus, in the 1600s, 1700s, and early 1800s, the usual scientific report described not only the "positive" findings but also the "negative" or "nil" results.

Concerned about publication practices in the physical and life sciences, Boyle lamented in 1661 that scientists did not write up single results but felt compelled to refrain from publishing until they had a "system" worked out that they deemed worthy of formal presentation: "But the worst inconvenience of all is yet to be mentioned, and that is, ... That whilst this vanity of thinking men obliged to write either systems or nothing is in request, many excellent notions or experiments are, by sober and modest men, suppressed ...."2 Apparently, the notion of going to press only if one has something "big" to present is not modern at all.

By the mid-1800s, the style of scientific writing was in the process of changing to the terse, rather technical approach with which we are familiar. Limitations of time (as science began to move quite rapidly), journal space, the development of groups of scientists working together and forging a combined written document, the response to peer review, and economic dependence on a system that rewarded quick success were all factors that led to a change in scientific writing and publishing. The change in style that has taken place over the years is not inherently bad. The problem is whether the increased brevity has resulted in lost information and whether it represents biased reporting.

Evidence for Publication Bias

Perhaps as a result of the difficulties of designing studies to address the problem, more has been written to complain about publication bias than to report results of studies undertaken to evaluate it. Most research on publication bias has been done in the psychology and education fields.3-10

Sterling3 was probably the first to emphasize that the tendency to publish positive results and reject negative findings is a serious problem. He reviewed all articles published in four journals during 1 year (1955 or 1956) and ...

... another received a manuscript that described negative results. A third group was asked to evaluate a manuscript on the basis of the "Methods" section and relevance alone; no data were provided. The fourth and fifth groups received manuscripts with "mixed" results, with either a positive or negative "Discussion" section. The referees used a scale of 0 to 6 (low to high) to rate the manuscripts for five items: relevance, methods, data presentation, scientific contribution, and publication merit.

Table 1.—Manuscript Ratings for Same Manuscript With Varying Presentations of Results or Discussion10 (mean ratings, scale 0 to 6)

Presentation | No. of Referees | Methods | Data Presentation | Scientific Contribution | Publication Merit
Positive results | ... | 4.3 | ... | ... | ...
Negative results | ... | 2.4 | 2.6 | 2.4 | 1.8
Methods only | ... | 4.5 | 3.4 | ... | ...
Mixed results, positive discussion | 13 | 2.5 | 1.3 | 1.6 | 0.5
Mixed results, negative discussion | 14 | 2.7 | 2.0 | 1.7 | ...

... known to exist and its etiology well understood: "Investigators are more strongly motivated to offer positive results for publication rather than null results. Many journal editors select papers for publication on this very basis, some of them expecting to see P values less than 0.05. Published clinical trials are inevitably a positively biased selection."12

Information regarding publishing practices is not easily obtained or readily ...

Table 2.—Studies of Publication Bias in Medicine

Source, y | Subject | Index Source | Follow-up Method | Results
Simes, 1986 | Cancer trials | Cancer trials register | Publications and register | Published trials show increased efficacy of treatment
Dickersin et al, 1987 | Randomized controlled trials | File of randomized controlled trials | Questionnaire | Published trials favor test treatment more often
Sommer, 1987 | Menstrual cycle research | Society membership | Questionnaire | Published studies more often statistically significant
Chalmers et al, 1989 | Perinatal trials | ODPT* abstracts | ODPT full reports | Strength of results in abstract not associated with full publication

*ODPT indicates Oxford Database of Perinatal Trials.

From the Department of Epidemiology, The Johns Hopkins University School of Hygiene and Public Health, Baltimore, Md. Presented at The First International Congress on Peer Review in Biomedical Publication, Chicago, Ill, May 10-12, 1989. Reprint requests to Department of Ophthalmology, University of Maryland School of Medicine, 22 S Greene St, Baltimore, MD 21201 (Dr Dickersin).
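Dickersin's central claim, that a literature censored on the direction or significance of results misrepresents the underlying evidence, can be made concrete with a small simulation. The sketch below is illustrative only: the effect size, trial count, and censoring rule are invented, not taken from the article. It generates many hypothetical trials and compares the mean effect across all trials with the mean across "published" trials when only significant positive results reach print.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical scenario: 1000 small two-arm trials of one treatment whose
# true standardized effect is 0.10. Each trial's observed effect is noisy.
true_effect, n_per_arm, n_trials = 0.10, 50, 1000
se = np.sqrt(2.0 / n_per_arm)                     # SE of a standardized mean difference
observed = rng.normal(true_effect, se, n_trials)  # observed effect per trial

# Censoring rule standing in for publication bias: only trials with a
# statistically significant positive result (z > 1.96) get published.
published = (observed / se) > 1.96

print(f"true effect:            {true_effect:.2f}")
print(f"mean, all trials:       {observed.mean():.2f}")
print(f"mean, published trials: {observed[published].mean():.2f}")
print(f"fraction published:     {published.mean():.1%}")
```

With these numbers the published mean comes out several times larger than the true effect, which is exactly the distortion that a reader of the published literature, or a meta-analysis restricted to it, would inherit.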
Recommended publications
  • Pat Croskerry MD PhD
    Thinking (and the factors that influence it). Pat Croskerry MD PhD. Scottish Intensive Care Society, St Andrews, January 2011. [Figure: dual process model of medical decision making, in which recognized patterns feed a fast intuitive processor and unrecognized input is routed to analytical reasoning, with executive override, dysrationalia override, calibration, and repetition linking the two channels.] Orthodox Medical Decision Making (Analytical). Rational medical decision-making draws on: knowledge base; differential diagnosis; best evidence; reviews, meta-analysis; biostatistics; publication bias, citation bias; test selection and interpretation; Bayesian reasoning; hypothetico-deductive reasoning. "Cognitive thought is the tip of an enormous iceberg. It is the rule of thumb among cognitive scientists that unconscious thought is 95% of all thought ... this 95% below the surface of conscious awareness shapes and structures all conscious thought" (Lakoff and Johnson, 1999). Rational blind-spots: framing; context; ambient conditions; individual factors. Individual factors: knowledge; intellect; personality; critical thinking ability; decision making style; gender; ageing; circadian type; affective state; fatigue, sleep deprivation, sleep debt; cognitive load tolerance; susceptibility to group pressures; deference to authority. Intelligence: measurement of intelligence? IQ is the most widely used barometer of intellect and cognitive functioning; IQ is the strongest single predictor of job performance and success; IQ tests are highly correlated with each other; Population ...
  • Cognitive Biases in Software Engineering: a Systematic Mapping Study
    Cognitive Biases in Software Engineering: A Systematic Mapping Study. Rahul Mohanani, Iflaah Salman, Burak Turhan, Member, IEEE, Pilar Rodriguez and Paul Ralph. Abstract—One source of software project challenges and failures is the systematic errors introduced by human cognitive biases. Although extensively explored in cognitive psychology, investigations concerning cognitive biases have only recently gained popularity in software engineering research. This paper therefore systematically maps, aggregates and synthesizes the literature on cognitive biases in software engineering to generate a comprehensive body of knowledge, understand state-of-the-art research and provide guidelines for future research and practice. Focusing on bias antecedents, effects and mitigation techniques, we identified 65 articles (published between 1990 and 2016), which investigate 37 cognitive biases. Despite strong and increasing interest, the results reveal a scarcity of research on mitigation techniques and poor theoretical foundations in understanding and interpreting cognitive biases. Although bias-related research has generated many new insights in the software engineering community, specific bias mitigation techniques are still needed for software professionals to overcome the deleterious effects of cognitive biases on their work. Index Terms—Antecedents of cognitive bias, cognitive bias, debiasing, effects of cognitive bias, software engineering, systematic mapping. 1 INTRODUCTION. Cognitive biases are systematic deviations from optimal reasoning [1], [2]. In other words, they are recurring errors in thinking, or patterns of bad judgment observable in different people and contexts. A well-known example is confirmation bias, the tendency to pay more attention ... No analogous review of SE research exists. The purpose of this study is therefore as follows: Purpose: to review, summarize and synthesize the current state of software engineering research involving cognitive biases.
  • Opportunities for Selective Reporting of Harms in Randomized Clinical Trials: Selection Criteria for Nonsystematic Adverse Events
    Opportunities for selective reporting of harms in randomized clinical trials: Selection criteria for nonsystematic adverse events. Evan Mayo-Wilson ([email protected]), Johns Hopkins University Bloomberg School of Public Health, https://orcid.org/0000-0001-6126-2459; Nicole Fusco, Johns Hopkins University Bloomberg School of Public Health; Hwanhee Hong, Duke University; Tianjing Li, Johns Hopkins University Bloomberg School of Public Health; Joseph K. Canner, Johns Hopkins University School of Medicine; Kay Dickersin, Johns Hopkins University Bloomberg School of Public Health. Research Article. Keywords: Harms, adverse events, clinical trials, reporting bias, selective outcome reporting, data sharing, trial registration. Posted Date: February 5th, 2019. DOI: https://doi.org/10.21203/rs.2.268/v1. License: This work is licensed under a Creative Commons Attribution 4.0 International License. Version of Record: A version of this preprint was published on September 5th, 2019. See the published version at https://doi.org/10.1186/s13063-019-3581-3. Abstract. Background: Adverse events (AEs) in randomized clinical trials may be reported in multiple sources. Different methods for reporting adverse events across trials, or across sources for a single trial, may produce inconsistent and confusing information about the adverse events associated with interventions. Methods: We sought to compare the methods authors use to decide which AEs to include in a particular source (i.e., "selection criteria") and to determine how selection criteria could impact the AEs reported. We compared sources (e.g., journal articles, clinical study reports [CSRs]) of trials for two drug-indications: gabapentin for neuropathic pain and quetiapine for bipolar depression.
  • Why Too Many Political Science Findings Cannot
    www.ssoar.info. Why Too Many Political Science Findings Cannot be Trusted and What We Can Do About it: A Review of Meta-scientific Research and a Call for Institutional Reform. Wuttke, Alexander. Preprint / journal article. Suggested Citation: Wuttke, A. (2019). Why Too Many Political Science Findings Cannot be Trusted and What We Can Do About it: A Review of Meta-scientific Research and a Call for Institutional Reform. Politische Vierteljahresschrift, 60(1). https://doi.org/10.1007/s11615-018-0131-7. Terms of use: This document is made available under a CC BY-NC Licence (Attribution-NonCommercial). For more information see: https://creativecommons.org/licenses/by-nc/4.0. This version is citable under: https://nbn-resolving.org/urn:nbn:de:0168-ssoar-59909-5. Wuttke (2019): Credibility of Political Science Findings. Alexander Wuttke, University of Mannheim. 2019, Politische Vierteljahresschrift / German Political Science Quarterly 60(1): 1-22, DOI: 10.1007/s11615-018-0131-7. This is an uncorrected pre-print. Please cite the original article. Witnessing the ongoing "credibility revolutions" in other disciplines, political science, too, should engage in meta-scientific introspection. Theoretically, this commentary describes why scientists in academia's current incentive system work against their self-interest if they prioritize research credibility.
  • Systematic Reviews in Health Care: Meta-Analysis in Context
    © BMJ Publishing Group 2001. Chapter 4 © Crown copyright 2000. Chapter 24 © Crown copyright 1995, 2000. Chapters 25 and 26 © The Cochrane Collaboration 2000. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording and/or otherwise, without the prior written permission of the publishers. First published in 1995 by the BMJ Publishing Group, BMA House, Tavistock Square, London WC1H 9JR, www.bmjbooks.com. First edition 1995. Second impression 1997. Second edition 2001. British Library Cataloguing in Publication Data: A catalogue record for this book is available from the British Library. ISBN 0-7279-1488-X. Typeset by Phoenix Photosetting, Chatham, Kent. Printed and bound by MPG Books, Bodmin, Cornwall. Contents: Contributors; Foreword; Introduction; 1 Rationale, potentials, and promise of systematic reviews (Matthias Egger, George Davey Smith, Keith O'Rourke). Part I: Systematic reviews of controlled trials. 2 Principles of and procedures for systematic reviews (Matthias Egger, George Davey Smith); 3 Problems and limitations in conducting systematic reviews (Matthias Egger, Kay Dickersin, George Davey Smith); 4 Identifying randomised trials (Carol Lefebvre, Michael J Clarke); 5 Assessing the quality of randomised controlled trials (Peter Jüni, Douglas G Altman, Matthias Egger); 6 Obtaining individual patient data from randomised controlled trials (Michael J Clarke, Lesley A Stewart); 7 Assessing the quality of reports ...
  • Understanding the Replication Crisis As a Base Rate Fallacy
    Understanding the Replication Crisis as a Base-Rate Fallacy. PhilInBioMed, Université de Bordeaux, 6 June 2018. [email protected]. Introduction, the replication crisis: 52% of 1,576 scientists taking a survey conducted by the journal Nature agreed that there was a significant crisis of reproducibility; Amgen successfully replicated only 6 out of 53 studies in oncology; and then there is social psychology ... Introduction, the base rate fallacy: screening for a disease, which affects 1 in every 1,000 individuals, with a 95% accurate test; an individual S tests positive, no other risk factors; what is the probability that S has the disease? Of the Harvard medical students asked in 1978, 11 out of 60 got the correct answer. Base rate of disease = 1 in 1,000 = 0.1% (call this π); false positive rate = 5% (call this α); false positives among the 999 disease-free greatly outnumber the 1 true positive. From the base rate fallacy to the replication crisis, two types of error and accuracy: Type-I (false +ve) has error rate α and accuracy 1−α (the confidence level); Type-II (false −ve) has error rate β and accuracy 1−β (the power). Do not conflate the False Positive Report Probability (FPRP), Pr(S does not have the disease, given that S tests positive for the disease), with the false positive error rate α, Pr(S tests positive for the disease, given that S does not have the disease). Do not conflate Pr(the temperature will ...
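The screening question in the excerpt above has a short Bayesian answer. A minimal sketch, assuming "95% accurate" means both sensitivity and specificity are 0.95 (the excerpt only states the false positive rate α = 5% explicitly):

```python
# Bayes' theorem for the excerpt's screening example: prevalence 1/1000,
# "95% accurate" test, read here (an assumption) as sensitivity = 0.95
# and false positive rate alpha = 0.05.
prevalence = 0.001
sensitivity = 0.95  # P(test positive | disease)
alpha = 0.05        # P(test positive | no disease)

p_positive = sensitivity * prevalence + alpha * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive  # P(disease | test positive)

print(f"P(disease | positive test) = {ppv:.3f}")  # ~0.019, i.e. about 2%
```

The posterior of roughly 2%, not 95%, is the answer most of the 1978 Harvard sample missed: the 1-in-1,000 base rate dominates the test's accuracy, which is the same structure the excerpt maps onto statistically significant findings.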
  • Opportunities and Challenges of Using Systematic Reviews to Summarize
    Opportunities and challenges of using systematic reviews to summarize knowledge about "what works" in disease prevention & health promotion. Kay Dickersin, MA, PhD. NIH Office of Disease Prevention, Rockville, Maryland, July 25, 2016. Kay Dickersin's declaration of interests, grants and contracts from agencies: NIH-Cochrane Eyes and Vision; PCORI-Influence of multiple sources of data on meta-analysis; PCORI-Engagement of consumers; PCORI-Consumer Summit with G-I-N North America; AHRQ-Consumers United for Evidence-based Healthcare Conference Grant; FDA-Centers for Excellence in Regulatory Science Innovation (GC Alexander, PI). Reviews are necessary in health and healthcare: systematic reviews of existing research scientifically summarize "what works" at any point in time; reasons for summarizing what works vary (e.g., understanding priorities for research, pursuing answers where there are knowledge gaps, or setting guidelines for care). What is a systematic review? A review of existing knowledge that uses explicit, scientific methods. Systematic reviews may also combine results quantitatively ("meta-analysis"). [Figure: types of review articles, showing systematic reviews, systematic reviews with meta-analyses, and individual patient data (IPD) meta-analyses nested within all reviews, alongside reviews that are not systematic (traditional reviews).]
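The excerpt defines meta-analysis as the quantitative combination of results. A minimal fixed-effect, inverse-variance sketch of that idea follows; the study effects and standard errors are invented for illustration, and real reviews typically use more elaborate (e.g., random-effects) models:

```python
import numpy as np

# Fixed-effect (inverse-variance) pooling: each study is weighted by the
# inverse of its variance, so more precise studies count for more.
effects = np.array([0.30, 0.12, 0.25, 0.40])  # hypothetical study effect sizes
se      = np.array([0.15, 0.10, 0.20, 0.25])  # their standard errors

weights = 1.0 / se**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```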
  • A Meta-Meta-Analysis
    Journal of Intelligence Article Effect Sizes, Power, and Biases in Intelligence Research: A Meta-Meta-Analysis Michèle B. Nuijten 1,* , Marcel A. L. M. van Assen 1,2, Hilde E. M. Augusteijn 1, Elise A. V. Crompvoets 1 and Jelte M. Wicherts 1 1 Department of Methodology & Statistics, Tilburg School of Social and Behavioral Sciences, Tilburg University, Warandelaan 2, 5037 AB Tilburg, The Netherlands; [email protected] (M.A.L.M.v.A.); [email protected] (H.E.M.A.); [email protected] (E.A.V.C.); [email protected] (J.M.W.) 2 Section Sociology, Faculty of Social and Behavioral Sciences, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands * Correspondence: [email protected]; Tel.: +31-13-466-2053 Received: 7 May 2020; Accepted: 24 September 2020; Published: 2 October 2020 Abstract: In this meta-study, we analyzed 2442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson’s correlation of 0.26, and the median sample size was 60. Furthermore, across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics).
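The power figures in the excerpt above can be roughly reproduced for the paper's median sample size of 60 using the Fisher z approximation. This is an assumption-laden sketch, not the authors' method: they computed power per primary study across varying sample sizes, so only the small-effect figure should land near their reported 11.9%:

```python
from math import atanh, sqrt
from scipy.stats import norm

def power_pearson_r(r, n, alpha=0.05):
    """Approximate two-sided power to detect a correlation r with n pairs,
    via the Fisher z transformation (atanh(r) is ~normal, SD 1/sqrt(n-3))."""
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = atanh(r) * sqrt(n - 3)
    return norm.sf(z_crit - noncentrality) + norm.cdf(-z_crit - noncentrality)

# Cohen's small/medium/large correlation benchmarks at the median n of 60:
for label, r in [("small", 0.1), ("medium", 0.3), ("large", 0.5)]:
    print(f"{label:6s} r = {r}: power = {power_pearson_r(r, 60):.1%}")
```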
  • Protecting Against Researcher Bias in Secondary Data Analysis
    Protecting against researcher bias in secondary data analysis: Challenges and potential solutions. Jessie R. Baldwin1,2, PhD, Jean-Baptiste Pingault1,2, PhD, Tabea Schoeler1, PhD, Hannah M. Sallis3,4,5, PhD & Marcus R. Munafò3,4,6, PhD. 4,618 words; 2 tables; 1 figure. 1 Department of Clinical, Educational and Health Psychology, Division of Psychology and Language Sciences, University College London, London, UK. 2 Social, Genetic and Developmental Psychiatry Centre, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK. 3 MRC Integrative Epidemiology Unit at the University of Bristol, Bristol Medical School, University of Bristol, Bristol, UK. 4 School of Psychological Science, University of Bristol, Bristol, UK. 5 Centre for Academic Mental Health, Population Health Sciences, University of Bristol, Bristol, UK. 6 NIHR Biomedical Research Centre, University Hospitals Bristol NHS Foundation Trust and University of Bristol, Bristol, UK. Correspondence: Dr. Jessie R. Baldwin, Department of Clinical, Educational and Health Psychology, Division of Psychology and Language Sciences, University College London, London, WC1H 0AP, UK; [email protected]. Funding: J.R.B. is funded by a Wellcome Trust Sir Henry Wellcome fellowship (grant 215917/Z/19/Z). J.B.P. is supported by the Medical Research Foundation 2018 Emerging Leaders 1st Prize in Adolescent Mental Health (MRF-160-0002-ELP-PINGA). M.R.M. and H.M.S. work in a unit that receives funding from the University of Bristol and the UK Medical Research Council (MC_UU_00011/5, MC_UU_00011/7), and M.R.M. is also supported by the National Institute for Health Research (NIHR) Biomedical Research Centre at the University Hospitals Bristol National Health Service Foundation Trust and the University of Bristol.
  • Threats of a Replication Crisis in Empirical Computer Science Andy Cockburn, Pierre Dragicevic, Lonni Besançon, Carl Gutwin
    Threats of a replication crisis in empirical computer science. Andy Cockburn, Pierre Dragicevic, Lonni Besançon, Carl Gutwin. To cite this version: Andy Cockburn, Pierre Dragicevic, Lonni Besançon, Carl Gutwin. Threats of a replication crisis in empirical computer science. Communications of the ACM, Association for Computing Machinery, 2020, 63(8), pp. 70-79. 10.1145/3360311. hal-02907143. HAL Id: hal-02907143, https://hal.inria.fr/hal-02907143. Submitted on 27 Jul 2020. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Threats of a Replication Crisis in Empirical Computer Science. Andy Cockburn (University of Canterbury, New Zealand), Pierre Dragicevic (Inria, Université Paris-Saclay, France), Lonni Besançon (Linköping University, Sweden), Carl Gutwin (University of Saskatchewan, Canada). This is the authors' own version. The final version is available at https://doi.org/10.1145/3360311. Key insights: • Many areas of computer science research (e.g., performance analysis, software engineering, artificial intelligence, and human-computer interaction) validate research claims by using statistical significance as the standard of evidence. • A loss of confidence in statistically significant findings is plaguing other empirical disciplines, yet there has been relatively little debate of this issue and its associated "replication crisis" in computer science.
  • Publication Bias
    CHAPTER 30. Publication Bias. Contents: Introduction; The problem of missing studies; Methods for addressing bias; Illustrative example; The model; Getting a sense of the data; Is there evidence of any bias? Is the entire effect an artifact of bias? How much of an impact might the bias have? Summary of the findings for the illustrative example; Some important caveats; Small-study effects; Concluding remarks. INTRODUCTION. While a meta-analysis will yield a mathematically accurate synthesis of the studies included in the analysis, if these studies are a biased sample of all relevant studies, then the mean effect computed by the meta-analysis will reflect this bias. Several lines of evidence show that studies that report relatively high effect sizes are more likely to be published than studies that report lower effect sizes. Since published studies are more likely to find their way into a meta-analysis, any bias in the literature is likely to be reflected in the meta-analysis as well. This issue is generally known as publication bias. The problem of publication bias is not unique to systematic reviews. It affects the researcher who writes a narrative review and even the clinician who is searching a database for primary papers. Nevertheless, it has received more attention with regard to systematic reviews and meta-analyses, possibly because these are promoted as being more accurate than other approaches to synthesizing research. In this chapter we first discuss the reasons for publication bias and the evidence that it exists. Then we discuss a series of methods that have been developed to assess ... (Introduction to Meta-Analysis.)
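One standard answer to the chapter's question "Is there evidence of any bias?" is Egger's regression test for funnel plot asymmetry. The sketch below uses invented data (the chapter works its own illustrative example, not reproduced here):

```python
import numpy as np
import statsmodels.api as sm

# Invented per-study effects (e.g., log odds ratios) and standard errors.
effect = np.array([0.42, 0.35, 0.28, 0.51, 0.20, 0.61, 0.15, 0.44])
se     = np.array([0.21, 0.18, 0.12, 0.25, 0.10, 0.30, 0.08, 0.22])

# Egger's test: regress the standardized effect (z) on precision (1/se).
# With no small-study effects the intercept should be near zero; a large
# intercept suggests asymmetry consistent with publication bias.
z = effect / se
precision = 1.0 / se
fit = sm.OLS(z, sm.add_constant(precision)).fit()

print(f"Egger intercept = {fit.params[0]:.2f} (p = {fit.pvalues[0]:.3f})")
```

A significantly nonzero intercept indicates that small studies report systematically different effects than large ones, consistent with (though not proof of) publication bias.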
  • Mtg2016-Dickersin-References.Pdf
    References for NIH Office of Disease Prevention webinar, July 25, 2016. Kay Dickersin. Slides.
    1. Pai M, McCulloch M, Gorman JD, et al. Systematic reviews and meta-analyses: an illustrated, step-by-step guide. Natl Med J India 2004;17(2):86-95.
    2. Institute of Medicine. Finding what works in health care: standards for systematic reviews. March 23, 2011. Available at: http://www.iom.edu/Reports/2011/Finding-What-Works-in-Health-Care-Standards-for-Systematic-Reviews.aspx
    3. Institute of Medicine. Clinical practice guidelines we can trust. March 23, 2011. Available at: http://www.nationalacademies.org/hmd/Reports/2011/Clinical-Practice-Guidelines-We-Can-Trust.aspx
    4. Higgins JPT, Green S. Cochrane handbook for systematic reviews of interventions (version 5.1.0). Available at: http://www.cochrane-handbook.org/
    5. Dijkers M. KT Update (Vol. 4, No. 1, December 2015). Available at: http://ktdrr.org/products/update/v4n1
    6. Tricco A, Soobiah C, Antony J, Cogo E, MacDonald H, Lillie E, Tran J, D'Souza J, Hui W, Perrier L, Welch V, Horsley T, Straus SE, Kastner M. A scoping review identifies multiple emerging knowledge synthesis methods, but few studies operationalize the method. Journal of Clinical Epidemiology 2016;73:19-28. DOI: http://dx.doi.org/10.1016/j.jclinepi.2015.08.030
    7. Chandler J, Churchill R, Higgins J, Tovey D. Methodological standards for the conduct of new Cochrane Intervention Reviews. Version 2.2. 17 December 2012. Available at: http://www.editorial-unit.cochrane.org/sites/editorial-unit.cochrane.org/files/uploads/MECIR_conduct_standards%202.2%2017122012.pdf
    8. ...