Publication and Other Reporting Biases in Cognitive Sciences

Total Pages: 16

File Type: pdf, Size: 1020 KB

Publication and other reporting biases in cognitive sciences: detection, prevalence, and prevention

John P.A. Ioannidis (1,2,3,4), Marcus R. Munafò (5,6), Paolo Fusar-Poli (7), Brian A. Nosek (8), and Sean P. David (1)

1 Department of Medicine, Stanford University School of Medicine, Stanford, CA 94305, USA
2 Department of Health Research and Policy, Stanford University School of Medicine, Stanford, CA 94305, USA
3 Department of Statistics, Stanford University School of Humanities and Sciences, Stanford, CA 94305, USA
4 Meta-Research Innovation Center at Stanford (METRICS), Stanford, CA 94305, USA
5 MRC Integrative Epidemiology Unit, UK Centre for Tobacco and Alcohol Studies, University of Bristol, Bristol, UK
6 School of Experimental Psychology, University of Bristol, Bristol, UK
7 Department of Psychosis Studies, Institute of Psychiatry, King's College London, London, UK
8 Center for Open Science, and Department of Psychology, University of Virginia, VA, USA

Corresponding author: Ioannidis, J.P.A. ([email protected]). Keywords: publication bias; reporting bias; cognitive sciences; neuroscience; bias. © 2014 Published by Elsevier Ltd. http://dx.doi.org/10.1016/j.tics.2014.02.010

Abstract. Recent systematic reviews and empirical evaluations of the cognitive sciences literature suggest that publication and other reporting biases are prevalent across diverse domains of cognitive science. In this review, we summarize the various forms of publication and reporting biases and other questionable research practices, and overview the available methods for probing into their existence. We discuss the available empirical evidence for the presence of such biases across the neuroimaging, animal, other preclinical, psychological, clinical trials, and genetics literature in the cognitive sciences. We also highlight emerging solutions (from study design to data analyses and reporting) to prevent bias and improve fidelity in the field of cognitive science research.

Introduction

The promise of science is that the generation and evaluation of evidence is done transparently to promote an efficient, self-correcting process. However, multiple biases may produce inefficiency in knowledge building. In this review, we discuss the importance of publication and other reporting biases, and present some potential correctives that may reduce bias without disrupting innovation. Consideration of publication and other reporting biases is particularly timely for the cognitive sciences because the field is expanding rapidly. As such, preventing or remedying these biases will have a substantial impact on the efficient development of a credible corpus of published research.

Definitions of biases and relevance for cognitive sciences

The terms 'publication bias' and 'selective reporting bias' refer to the differential choice to publish studies or report particular results, respectively, depending on the nature or directionality of findings [1]. There are several forms of such biases in the literature [2], including: (i) study publication bias, where studies are less likely to be published when they reach nonstatistically significant findings; (ii) selective outcome reporting bias, where multiple outcomes are evaluated in a study and outcomes found to be nonsignificant are less likely to be published than are statistically significant ones; and (iii) selective analysis reporting bias, where certain data are analyzed using different analytical options and publication favors the more impressive, statistically significant variants of the results (Box 1).

Cognitive science uses a range of methods producing a multitude of measures and rich data sets. Given this quantity of data, the potential analyses available, and pressures to publish, the temptations to use arbitrary (instead of planned) statistical analyses and to mine data are substantial and may lead to questionable research practices.
Even extreme cases of fraud in the cognitive sciences, such as reporting of experiments that never took place, falsifying results, and confabulating data, have been recently reported [3–5]. Overt fraud is probably rare, but evidence of reporting biases in multiple domains of the cognitive sciences (discussed below) raises concern about the prevalence of questionable practices that occur via motivated reasoning without any intent to be fraudulent.

Box 1. Various publication and reporting biases

Study publication bias can distort meta-analyses and systematic reviews of a large number of published studies when authors are more likely to submit, and/or editors more likely to publish, studies with 'positive' (i.e., statistically significant) results, or to suppress 'negative' (i.e., nonsignificant) results.

Study publication bias is also called the 'file drawer problem' [88], whereby most studies with null effects are never submitted and are thereby left in the file drawer. However, study publication bias is probably also accentuated by peer reviewers and editors, even for 'negative' studies that are submitted by their authors for publication. Sometimes, studies with 'negative' results may be published, but with greater delay than studies with 'positive' results, causing the so-called 'time-lag bias' [89]. In the 'Proteus phenomenon', 'positive' results are published quickly and then there is a window of opportunity for the publication also of 'negative' results contradicting the original observation, which may be tantalizing and interesting to editors and investigators. The net effect is rapidly alternating extreme results [17,90].

Selective outcome or analysis reporting biases can occur in many forms, including selective reporting of some of the analyzed outcomes in a study when others exist; selective reporting of a specific outcome; and incomplete reporting of an outcome [91]. Examples of selective analysis reporting include selective reporting of subgroup analyses [92], reporting of per-protocol instead of intention-to-treat analyses [93], and relegation of primary outcomes to secondary outcomes [94], so that nonsignificant results for the pre-specified primary outcomes are subordinated to nonprimary positive results [8].

As a result of all these selective reporting biases, the published literature is likely to have an excess of statistically significant results. Even when associations and effects are genuinely non-null, the published results are likely to overestimate their magnitude [95,96].

Tests for publication and other reporting biases

Tests for single studies, specific topics, and wider disciplines

Explicit documentation of publication and other reporting biases requires availability of protocols, data, and results of primary analyses from conducted studies, so that these can be compared against the published literature. However, these are not often available. A few empirical studies have retrieved study protocols from authors, or trial data from submissions to the US Food and Drug Administration (FDA) [6–9]. These studies have shown that deviations from the analysis plan between protocols and published papers are common [6], and that effect sizes of drug interventions are larger in the published literature compared with the corresponding data from the same trials submitted to the FDA [7,8]. It is challenging, but more common, to detect these biases indirectly, through statistical tests applied to the published results.

Small-study effects

When smaller studies report larger effects compared with large studies, this may reflect publication or selective reporting biases, but alternative explanations exist (as reviewed elsewhere [10]). The sensitivity and specificity of these tests in real life is unknown, but simulation studies have evaluated their performance in different settings. Published recommendations suggest cautious use of such tests [10]. In brief, visual evaluations of inverted funnel plots without statistical testing are precarious [11]; some test variants have better type I and type II error properties than others [12,13]; and, for most meta-analyses, where there are a limited number of studies, the power of these tests is low [10].
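The small-study-effect tests referred to above are typically regression or correlation tests for funnel-plot asymmetry. As a minimal, hedged sketch of the idea (an Egger-style regression on invented example data, not the procedure of any study cited in the review), each study's standardized effect is regressed on its precision and the intercept is tested against zero:

```python
# Egger-style regression test for funnel-plot asymmetry (small-study effects).
# Illustrative sketch only; the study data below are invented for demonstration.
import numpy as np
from scipy import stats

# Hypothetical per-study effect estimates (e.g., standardized mean differences)
# and their standard errors; the smaller studies here report larger effects.
effects = np.array([0.45, 0.38, 0.52, 0.30, 0.15, 0.60, 0.25, 0.48])
se      = np.array([0.20, 0.18, 0.25, 0.12, 0.08, 0.30, 0.10, 0.22])

z_scores  = effects / se          # standardized effects
precision = 1.0 / se              # inverse standard errors

# OLS regression of standardized effect on precision; Egger's test asks whether
# the intercept differs from zero (symmetry would put it near zero).
X = np.column_stack([np.ones_like(precision), precision])
coef, _, _, _ = np.linalg.lstsq(X, z_scores, rcond=None)
fitted = X @ coef
df = len(effects) - 2
sigma2 = np.sum((z_scores - fitted) ** 2) / df
cov = sigma2 * np.linalg.inv(X.T @ X)
t_intercept = coef[0] / np.sqrt(cov[0, 0])
p_value = 2 * stats.t.sf(abs(t_intercept), df)

print(f"Egger intercept = {coef[0]:.3f}, t = {t_intercept:.2f}, p = {p_value:.3f}")
```

Even a clearly non-zero intercept is only suggestive: as the review stresses, funnel-plot asymmetry can arise for reasons other than reporting bias, and with only a handful of studies such tests have little power.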
Selection models

Selection model approaches evaluate whether the pattern of results that has accumulated from a number of studies suggests an underlying filtering process, such as the nonpublication of a given percentage of results that had not reached formal statistical significance [14–16]. These methods have been less widely applied than small-study effect tests, even though they may be more promising. One application of a selection model approach examined an entire discipline (Alzheimer's disease genetics) and found that selection forces may be different for first and/or discovery results versus subsequent replication and/or refutation results or late replication efforts [17]. Another method that probes for data 'fiddling' has been proposed [18] to identify selective reporting of extremely promising P values in a body of published results.

Excess significance

Excess significance testing evaluates whether the number of statistically significant results in a corpus of studies is too high, under some plausible assumptions about the magnitude of the true effect size [19]. The Ioannidis test [19] can be applied to meta-analyses of multiple studies and also to larger fields and disciplines where many meta-analyses are compiled. The number of expected studies with nominally statistically significant results is estimated by summing the calculated power of all the considered studies. Simulations …
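The excess-significance logic can be made concrete with a small sketch. The code below uses illustrative assumptions only (it is not the exact implementation behind reference [19]): it takes hypothetical effect estimates and standard errors, computes each study's power under a plausible true effect (here taken from the most precise study), sums the powers to get the expected number of significant results, and compares that expectation with the observed count:

```python
# Excess-significance check: is the observed count of significant studies larger
# than expected given each study's power under a plausible true effect?
# Minimal illustrative sketch with invented inputs.
import numpy as np
from scipy import stats

alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)

# Hypothetical effect estimates and standard errors for a set of studies.
effects = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.50, 0.33, 0.58])
se      = np.array([0.20, 0.26, 0.18, 0.28, 0.22, 0.24, 0.15, 0.27])

# Plausible true effect: here, the estimate from the most precise (largest) study.
theta = effects[np.argmin(se)]

# Power of a two-sided z-test for each study if the true effect is theta.
power = stats.norm.sf(z_crit - theta / se) + stats.norm.cdf(-z_crit - theta / se)

expected = power.sum()                                   # E, expected significant
observed = int(np.sum(np.abs(effects / se) > z_crit))    # O, observed significant

# Simple binomial comparison of O against the mean per-study power.
p_excess = stats.binom.sf(observed - 1, n=len(effects), p=power.mean())
print(f"Observed significant: {observed}, expected: {expected:.2f}, p = {p_excess:.3f}")
```

A plain binomial comparison stands in here for the formal test statistic; the key quantity either way is the gap between the observed number of 'positive' studies and the number that the summed powers would lead one to expect.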
Recommended publications
  • Pat Croskerry MD PhD
    Thinking (and the factors that influence it). Pat Croskerry MD PhD. Scottish Intensive Care Society, St Andrews, January 2011.
    [Slide diagram: dual process model of decision making; recognized input is handled by intuition (pattern recognition, repetition), unrecognized input by analytical reasoning, with executive override, dysrationalia override, and calibration feeding the output.]
    Medical decision making: intuitive versus analytical. Orthodox medical decision making is analytical. Rational medical decision-making draws on: • Knowledge base • Differential diagnosis • Best evidence • Reviews, meta-analysis • Biostatistics • Publication bias, citation bias • Test selection and interpretation • Bayesian reasoning • Hypothetico-deductive reasoning.
    'Cognitive thought is the tip of an enormous iceberg. It is the rule of thumb among cognitive scientists that unconscious thought is 95% of all thought – … this 95% below the surface of conscious awareness shapes and structures all conscious thought' (Lakoff and Johnson, 1999).
    Rational blind-spots: • Framing • Context • Ambient conditions • Individual factors.
    Individual factors: • Knowledge • Intellect • Personality • Critical thinking ability • Decision making style • Gender • Ageing • Circadian type • Affective state • Fatigue, sleep deprivation, sleep debt • Cognitive load tolerance • Susceptibility to group pressures • Deference to authority.
    Intelligence: • Measurement of intelligence? • IQ is the most widely used barometer of intellect and cognitive functioning • IQ is the strongest single predictor of job performance and success • IQ tests are highly correlated with each other • Population …
  • Bias and Fairness in NLP
    Bias and Fairness in NLP. Margaret Mitchell (Google Brain), Kai-Wei Chang (UCLA), Vicente Ordóñez Román (University of Virginia), Vinodkumar Prabhakaran (Google Brain).
    Tutorial outline: ● Part 1: Cognitive Biases / Data Biases / Bias Laundering ● Part 2: Bias in NLP and Mitigation Approaches ● Part 3: Building Fair and Robust Representations for Vision and Language ● Part 4: Conclusion and Discussion.
    "Bias Laundering": Cognitive Biases, Data Biases, and ML. Vinodkumar Prabhakaran and Margaret Mitchell (Google Brain). Further names credited on the slides: Timnit Gebru, Andrew Zaldivar, Emily Denton, Simone Wu, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Deb Raji, Adrian Benton, Brian Zhang, Dirk Hovy, Josh Lovejoy, Alex Beutel, Blake Lemoine, Hee Jung Ryu, Hartwig Adam, and Blaise Agüera y Arcas.
    What's in this tutorial: ● Motivation for fairness research in NLP ● How and why NLP models may be unfair ● Various types of NLP fairness issues and mitigation approaches ● What can/should we do?
    What's NOT in this tutorial: ● Definitive answers to fairness/ethical questions ● Prescriptive solutions to fix ML/NLP (un)fairness.
    What do you see? ● Bananas ● Stickers ● Dole Bananas ● Bananas at a store ● Bananas on shelves ● Bunches of bananas …
  • The Effects of the Intuitive Prosecutor Mindset on Person Memory
    THE EFFECTS OF THE INTUITIVE PROSECUTOR MINDSET ON PERSON MEMORY. Dissertation presented in partial fulfillment of the requirements for the degree Doctor of Philosophy in the Graduate School of The Ohio State University, by Richard J. Shakarchi, B.S., M.A. The Ohio State University, 2002. Dissertation Committee: Professor Marilynn Brewer (Adviser), Professor Gifford Weary, Professor Neal Johnson. Department of Psychology. Copyright by Richard J. Shakarchi 2002.
    ABSTRACT: The intuitive prosecutor metaphor of human judgment is a recent development within causal attribution research. The history of causal attribution is briefly reviewed, with an emphasis on explanations of attributional biases. The evolution of such explanations is traced through a purely-cognitive phase to the more modern acceptance of motivational explanations for attributional biases. Two examples are offered of how a motivational explanatory framework of attributional biases can account for broad patterns of information processing biases. The intuitive prosecutor metaphor is presented as a parallel explanatory framework for interpreting attributional biases, whose motivation is based on a threat to social order. The potential implications for person memory are discussed, and three hypotheses are developed: that intuitive prosecutors recall norm-violating information more consistently than non-intuitive prosecutors (H1); that this differential recall may be based on differential (biased) encoding of behavioral information (H2); and that this differential recall may also be based on biased retrieval of information from memory rather than the result of a reporting bias (H3). A first experiment is conducted to test the basic recall hypothesis (H1). A second experiment is conducted that employs accountability in order to test the second and third hypotheses.
  • Outcome Reporting Bias in COVID-19 mRNA Vaccine Clinical Trials
    Medicina (Perspective). Outcome Reporting Bias in COVID-19 mRNA Vaccine Clinical Trials. Ronald B. Brown, School of Public Health and Health Systems, University of Waterloo, Waterloo, ON N2L3G1, Canada; [email protected]
    Abstract: Relative risk reduction and absolute risk reduction measures in the evaluation of clinical trial data are poorly understood by health professionals and the public. The absence of reported absolute risk reduction in COVID-19 vaccine clinical trials can lead to outcome reporting bias that affects the interpretation of vaccine efficacy. The present article uses clinical epidemiologic tools to critically appraise reports of efficacy in Pfizer/BioNTech and Moderna COVID-19 mRNA vaccine clinical trials. Based on data reported by the manufacturer for the Pfizer/BioNTech vaccine BNT162b2, this critical appraisal shows: relative risk reduction, 95.1%; 95% CI, 90.0% to 97.6%; p = 0.016; absolute risk reduction, 0.7%; 95% CI, 0.59% to 0.83%; p < 0.000. For the Moderna vaccine mRNA-1273, the appraisal shows: relative risk reduction, 94.1%; 95% CI, 89.1% to 96.8%; p = 0.004; absolute risk reduction, 1.1%; 95% CI, 0.97% to 1.32%; p < 0.000. Unreported absolute risk reduction measures of 0.7% and 1.1% for the Pfizer/BioNTech and Moderna vaccines, respectively, are very much lower than the reported relative risk reduction measures. Reporting absolute risk reduction measures is essential to prevent outcome reporting bias in evaluation of COVID-19 vaccine efficacy. Keywords: mRNA vaccine; COVID-19 vaccine; vaccine efficacy; relative risk reduction; absolute risk reduction; number needed to vaccinate; outcome reporting bias; clinical epidemiology; critical appraisal; evidence-based medicine. Citation: Brown, R.B. …
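The abstract above hinges on the arithmetic distinction between relative and absolute risk reduction. A minimal sketch of that arithmetic follows; the event counts are invented for illustration and are not the actual Pfizer/BioNTech or Moderna trial data.

```python
# Relative vs. absolute risk reduction from two-arm trial counts.
# Counts are illustrative only, not taken from any real vaccine trial.
def risk_measures(events_vaccine, n_vaccine, events_placebo, n_placebo):
    risk_vaccine = events_vaccine / n_vaccine
    risk_placebo = events_placebo / n_placebo
    arr = risk_placebo - risk_vaccine     # absolute risk reduction
    rrr = arr / risk_placebo              # relative risk reduction
    nnv = 1 / arr                         # number needed to vaccinate
    return rrr, arr, nnv

# Hypothetical example: 8 cases among 18,000 vaccinated vs 160 among 18,000 placebo.
rrr, arr, nnv = risk_measures(8, 18_000, 160, 18_000)
print(f"RRR = {rrr:.1%}, ARR = {arr:.2%}, NNV = {nnv:.0f}")
# A large relative reduction (~95%) coexists with a small absolute reduction (<1%)
# whenever the baseline risk over the trial period is low, which is the reporting
# gap the abstract highlights.
```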
  • Working Memory, Cognitive Miserliness and Logic As Predictors of Performance on the Cognitive Reflection Test
    Working Memory, Cognitive Miserliness and Logic as Predictors of Performance on the Cognitive Reflection Test. Edward J. N. Stupple ([email protected]), Maggie Gale ([email protected]), and Christopher R. Richmond ([email protected]), Centre for Psychological Research, University of Derby, Kedleston Road, Derby, DE22 1GB.
    Abstract: The Cognitive Reflection Test (CRT) was devised to measure the inhibition of heuristic responses to favour analytic ones. Toplak, West and Stanovich (2011) demonstrated that the CRT was a powerful predictor of heuristics and biases task performance - proposing it as a metric of the cognitive miserliness central to dual process theories of thinking. This thesis was examined using reasoning response-times, normative responses from two reasoning tasks and working memory capacity (WMC) to predict individual differences in performance on the CRT. These data offered limited support for the view of miserliness as the primary factor in the CRT.
    … Most participants respond that the answer is 10 cents; however, a slower and more analytic approach to the problem reveals the correct answer to be 5 cents. The CRT has been a spectacular success, attracting more than 100 citations in 2012 alone (Scopus). This may be in part due to the ease of administration; with only three items and no requirement for expensive equipment, the practical advantages are considerable. There have, moreover, been numerous correlates of the CRT demonstrated, from a wide range of tasks in the heuristics and biases literature (Toplak et al., 2011) to risk aversion and SAT scores (Frederick, …
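The '10 cents versus 5 cents' discussion above refers to the familiar bat-and-ball item of the CRT (a bat and a ball cost $1.10 in total, and the bat costs $1.00 more than the ball); the item's wording is assumed here, since the excerpt begins after it. A short worked solution shows why the intuitive answer fails:

```latex
% Let x be the price of the ball in dollars; the bat then costs x + 1.00.
% The intuitive answer x = 0.10 makes the bat-ball difference only $0.90.
\[
x + (x + 1.00) = 1.10 \;\Longrightarrow\; 2x = 0.10 \;\Longrightarrow\; x = 0.05,
\]
% so the ball costs 5 cents and the bat $1.05.
```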
  • Blooming Where They're Planted: Closing Cognitive
    BLOOMING WHERE THEY'RE PLANTED: CLOSING COGNITIVE ACHIEVEMENT GAPS WITH NON-COGNITIVE SKILLS. Elaina Michele Sabatine. A dissertation submitted to the faculty of the University of North Carolina at Chapel Hill in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the School of Social Work. Chapel Hill, 2019. Approved by: Melissa Lippold, Amy Blank Wilson, Natasha Bowen, Din-Geng Chen, Jill Hamm. ©2019 Elaina Sabatine. ALL RIGHTS RESERVED.
    ABSTRACT. ELAINA SABATINE: Blooming Where They're Planted: Closing Cognitive Achievement Gaps With Non-Cognitive Skills (Under the direction of Dr. Melissa Lippold). For the last several decades, education reform has focused on closing achievement gaps between affluent, white students and their less privileged peers. One promising area for addressing achievement gaps is through promoting students' non-cognitive skills (e.g., self-discipline, persistence). In the area of non-cognitive skills, two interventions – growth mindset and stereotype threat – have been identified as promising strategies for increasing students' academic achievement and closing achievement gaps. This dissertation explores the use of growth mindset and stereotype threat strategies in the classroom, as a form of academic intervention. Paper 1 examines the extant evidence for growth mindset and stereotype threat interventions. Paper 1 used a systematic review and meta-analysis to identify and analyze 24 randomized controlled trials that tested growth mindset and stereotype threat interventions with middle and high school students over the course of one school year. Results from meta-analysis indicated small, positive effects for each intervention on students' GPAs. Findings highlight the influence of variation among studies and the need for additional research that more formally evaluates differences in study characteristics and the impact of study characteristics on intervention effects.
  • Cognitive Biases in Software Engineering: a Systematic Mapping Study
    Cognitive Biases in Software Engineering: A Systematic Mapping Study. Rahul Mohanani, Iflaah Salman, Burak Turhan, Member, IEEE, Pilar Rodriguez and Paul Ralph.
    Abstract—One source of software project challenges and failures is the systematic errors introduced by human cognitive biases. Although extensively explored in cognitive psychology, investigations concerning cognitive biases have only recently gained popularity in software engineering research. This paper therefore systematically maps, aggregates and synthesizes the literature on cognitive biases in software engineering to generate a comprehensive body of knowledge, understand state of the art research and provide guidelines for future research and practise. Focusing on bias antecedents, effects and mitigation techniques, we identified 65 articles (published between 1990 and 2016), which investigate 37 cognitive biases. Despite strong and increasing interest, the results reveal a scarcity of research on mitigation techniques and poor theoretical foundations in understanding and interpreting cognitive biases. Although bias-related research has generated many new insights in the software engineering community, specific bias mitigation techniques are still needed for software professionals to overcome the deleterious effects of cognitive biases on their work.
    Index Terms—antecedents of cognitive bias, cognitive bias, debiasing, effects of cognitive bias, software engineering, systematic mapping.
    1 INTRODUCTION. Cognitive biases are systematic deviations from optimal reasoning [1], [2]. In other words, they are recurring errors in thinking, or patterns of bad judgment observable in different people and contexts. A well-known example is confirmation bias—the tendency to pay more attention … No analogous review of SE research exists. The purpose of this study is therefore as follows. Purpose: to review, summarize and synthesize the current state of software engineering research involving cognitive biases.
  • Stereotypes, Role Models, and the Formation of Beliefs
    Stereotypes, Role Models, and the Formation of Beliefs. Alex Eble and Feng Hu*. September 2017.
    Abstract: Stereotypes can erroneously reduce children's beliefs about their own ability, negatively affecting effort in school and investment in skills. Because of dynamic complementarity in the formation of human capital, this may lead to large negative effects on later life outcomes. In this paper, we show evidence that role models, in the form of female math teachers, can counter the negative effects of gender-based stereotypes on girls' perceptions of their math ability. A simple model of investment in human capital under uncertainty predicts the greatest effects of role models accruing to girls who perceive themselves to be of low ability. We exploit random assignment of students to classes in Chinese middle schools to test this prediction, examining the effects of teacher-student gender match on students' perceived ability, aspirations, investment, and academic performance. We find that, for girls who perceive themselves to be of low ability, being assigned a female math teacher causes large gains in all of these outcomes. We see no effects of teacher-student gender match on children who do not perceive themselves to be of low math ability. We show evidence against the potential alternative explanations that differential teacher effort, skill, teaching methods, or differential attention to girls drive these effects.
    * Eble: Teachers College, Columbia University. Email: [email protected] Hu: School of Economics and Management, University of Science and Technology Beijing, [email protected]. The authors are grateful to Joe Cummins, John Friedman, Morgan Hardy, Asim Khwaja, Ilyana Kuziemko, Bentley MacCleod, Randy Reback, Jonah Rockoff, Judy Scott-Clayton, and Felipe Valencia for generous input, as well as seminar audiences at Columbia, Fordham, the 2017 IZA transatlantic meeting, and PacDev.
  • Why Too Many Political Science Findings Cannot
    www.ssoar.info
    Why Too Many Political Science Findings Cannot be Trusted and What We Can Do About it: A Review of Meta-scientific Research and a Call for Institutional Reform. Wuttke, Alexander. Preprint / journal article.
    Suggested citation: Wuttke, A. (2019). Why Too Many Political Science Findings Cannot be Trusted and What We Can Do About it: A Review of Meta-scientific Research and a Call for Institutional Reform. Politische Vierteljahresschrift, 60(1). https://doi.org/10.1007/s11615-018-0131-7
    Terms of use: This document is made available under a CC BY-NC Licence (Attribution-NonCommercial). For more information see: https://creativecommons.org/licenses/by-nc/4.0
    This version is citable under: https://nbn-resolving.org/urn:nbn:de:0168-ssoar-59909-5
    Wuttke (2019): Credibility of Political Science Findings. Alexander Wuttke, University of Mannheim. 2019, Politische Vierteljahresschrift / German Political Science Quarterly 60(1): 1-22, DOI: 10.1007/s11615-018-0131-7. This is an uncorrected pre-print; please cite the original article.
    Witnessing the ongoing "credibility revolutions" in other disciplines, political science too should engage in meta-scientific introspection. Theoretically, this commentary describes why scientists in academia's current incentive system work against their self-interest if they prioritize research credibility.
  • Understanding the Replication Crisis As a Base Rate Fallacy
    Understanding the Replication Crisis as a Base-Rate Fallacy. PhilInBioMed, Université de Bordeaux, 6 June 2018. [email protected]
    Introduction, the replication crisis: 52% of 1,576 scientists taking a survey conducted by the journal Nature agreed that there was a significant crisis of reproducibility; Amgen successfully replicated only 6 out of 53 studies in oncology; and then there is social psychology.
    Introduction, the base rate fallacy: screening for a disease, which affects 1 in every 1,000 individuals, with a 95% accurate test; an individual S tests positive, with no other risk factors; what is the probability that S has the disease? Of the Harvard medical students asked in 1978, 11 out of 60 got the correct answer. The base rate of disease is 1 in 1,000 = 0.1% (call this π) and the false positive rate is 5% (call this α), so the false positives among the 999 disease-free individuals greatly outnumber the 1 true positive.
    From the base rate fallacy to the replication crisis, two types of error and accuracy: a Type-I error (false positive) has error rate α and accuracy 1 − α (the confidence level); a Type-II error (false negative) has error rate β and accuracy 1 − β (the power).
    Do not conflate the False Positive Report Probability (FPRP), Pr(S does not have the disease, given that S tests positive for the disease), with the false positive error rate α, Pr(S tests positive for the disease, given that S does not have the disease). Do not conflate Pr(the temperature will …
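The screening question in the slides can be worked through numerically. Assuming '95% accurate' means both 95% sensitivity and 95% specificity (the excerpt does not pin this down, so it is an assumption), Bayes' rule gives the probability of disease given a positive test:

```python
# Positive predictive value for the screening example in the slides.
# Assumption: "95% accurate" is read as sensitivity = specificity = 0.95.
prevalence = 0.001           # the disease affects 1 in every 1,000 individuals
sensitivity = 0.95           # P(test positive | disease)
specificity = 0.95           # P(test negative | no disease)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {ppv:.3f}")   # roughly 0.019, i.e. about 2%
# The ~5% false positives among the 999 disease-free people swamp the single
# true positive, which is the base-rate point the talk carries over to
# interpreting "significant" findings in the published literature.
```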
  • The Existence of Publication Bias and Risk Factors for Its Occurrence
    The Existence of Publication Bias and Risk Factors for Its Occurrence. Kay Dickersin, PhD.
    Publication bias is the tendency on the parts of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strength of the study findings. Much of what has been learned about publication bias comes from the social sciences, less from the field of medicine. In medicine, three studies have provided direct evidence for this bias. Prevention of publication bias is important both from the scientific perspective (complete dissemination of knowledge) and from the perspective of those who combine results from a number of similar studies (meta-analysis). If treatment decisions are based on the published literature, then the literature must include all available data that is of acceptable quality. Currently, obtaining information regarding all studies undertaken in a given field is difficult, even impossible. Registration of clinical trials, and perhaps other types of studies, is the direction in which the scientific community should move. (JAMA. …)
    … the bias in terms of the information available to the scientific community may be considerable. The bias that is created when publication of study results is based on the direction or significance of the findings is called publication bias. This term seems to have been used first in the published scientific literature by Smith in 1980. Even if publication bias exists, is it worth worrying about? In a scholarly sense, it is certainly worth worrying about. If one believes that judgments about medical treatment should be …
  • A Meta-Meta-Analysis
    Journal of Intelligence, Article. Effect Sizes, Power, and Biases in Intelligence Research: A Meta-Meta-Analysis. Michèle B. Nuijten (1,*), Marcel A. L. M. van Assen (1,2), Hilde E. M. Augusteijn (1), Elise A. V. Crompvoets (1) and Jelte M. Wicherts (1).
    1 Department of Methodology & Statistics, Tilburg School of Social and Behavioral Sciences, Tilburg University, Warandelaan 2, 5037 AB Tilburg, The Netherlands; [email protected] (M.A.L.M.v.A.); [email protected] (H.E.M.A.); [email protected] (E.A.V.C.); [email protected] (J.M.W.)
    2 Section Sociology, Faculty of Social and Behavioral Sciences, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands
    * Correspondence: [email protected]; Tel.: +31-13-466-2053
    Received: 7 May 2020; Accepted: 24 September 2020; Published: 2 October 2020
    Abstract: In this meta-study, we analyzed 2442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson's correlation of 0.26, and the median sample size was 60. Furthermore, across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics).
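The power figures quoted in the abstract above can be approximated with a standard Fisher-z power calculation for Pearson correlations. The sketch below is only an illustrative approximation (not the authors' actual procedure), showing the power of a two-sided test at the reported median sample size of 60 for conventional small (r = 0.1), medium (r = 0.3), and large (r = 0.5) effects.

```python
# Approximate power to detect a Pearson correlation with a two-sided test,
# via the Fisher z transformation. Illustrative sketch, not the meta-study's code.
import numpy as np
from scipy import stats

def correlation_power(r, n, alpha=0.05):
    z_crit = stats.norm.ppf(1 - alpha / 2)
    fisher_z = np.arctanh(r)            # Fisher z of the assumed true correlation
    ncp = fisher_z * np.sqrt(n - 3)     # mean of the standardized test statistic
    return stats.norm.sf(z_crit - ncp) + stats.norm.cdf(-z_crit - ncp)

n_median = 60
for label, r in [("small", 0.1), ("medium", 0.3), ("large", 0.5)]:
    print(f"{label} effect (r = {r}): power = {correlation_power(r, n_median):.1%}")
# At n = 60 a small correlation is detected only about one time in ten, in line
# with the ~12% median power for small effects reported in the abstract; the
# medium and large figures differ from the abstract's medians because those are
# computed across primary studies with varying sample sizes.
```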