
Running head: CORRECTING FOR OUTCOME REPORTING BIAS

Correcting for outcome reporting bias in a meta-analysis: A meta-regression approach

Robbie C. M. van Aert & Jelte M. Wicherts

Department of Methodology and Statistics, Tilburg University

Author Note

We thank the members of the Meta-Research Center at Tilburg University for their feedback on an earlier version of this paper.

Correspondence concerning this article should be addressed to Robbie C. M. van Aert, P.O. Box 90153, 5000 LE Tilburg, the Netherlands. E-mail: [email protected]

Abstract

Outcome reporting bias (ORB) refers to the biasing effect caused by researchers selectively reporting outcomes based on their statistical significance. ORB leads to inflated effect size estimates in a meta-analysis if only the outcome with the largest effect size is reported. We propose a new method (CORB) to correct for ORB that includes an estimate of the variability of the outcomes' effect sizes as a moderator in a meta-regression model. An estimate of this variability can be computed by assuming a correlation among the outcomes. Results of a Monte-Carlo simulation study showed that effect sizes in meta-analyses may be severely overestimated if ORB is not corrected for. CORB accurately estimates effect size when the overestimation caused by ORB is largest. Applying the method to a meta-analysis on the effect of playing violent video games on aggression showed that the effect size estimate decreased when correcting for ORB. We recommend routinely applying methods to correct for ORB in any meta-analysis, and we provide annotated R code and functions to help researchers apply the CORB method.

Keywords: outcome reporting bias, meta-analysis, meta-regression, researcher degrees of freedom

Word count: 8839

Correcting for outcome reporting bias in a meta-analysis: A meta-regression approach

Introduction

There is ample evidence that findings reported in the psychological literature are biased. For example, the vast majority of published studies in psychology report statistically significant findings (Fanelli, 2010, 2012; Sterling, Rosenbaum, & Weinkam, 1995), while the low average statistical power of studies in the psychological literature implies that most studies should yield non-significant findings (Bakker, van Dijk, & Wicherts, 2012; Cohen, 1990). Moreover, 100 key findings in psychology were recently replicated in the Reproducibility Project: Psychology to study the replicability of psychological science (Open Science Collaboration, 2015), and the effect sizes of the replications were substantially smaller than those of the original studies (correlation coefficient 0.197 vs. 0.403).

The most prominent explanation for the overrepresentation of statistically significant effect sizes in the literature is the tendency of editors and reviewers to evaluate statistically significant studies (i.e., with a p-value below the α-level) more positively than non-significant studies, but researchers also appear less inclined to submit statistically non-significant studies for publication (Cooper, DeNeve, & Charlton, 1997; Coursol & Wagner, 1986). This failure to publish studies without a statistically significant effect size, also known as publication bias, is widely considered to create bias in the literature.
Additional sources of bias might emerge if researchers are motivated (or feel pressured by a publication system that is still strongly focused on statistical significance) to analyze their data in such a way that it yields a statistically significant effect size. Importantly, multiple analysis approaches are often valid and defensible (Steegen, Tuerlinckx, Gelman, & Vanpaemel, 2016). For instance, in a so-called many-analysts project, 29 analysis teams were asked to analyze the same data to answer the research question of whether referees in football are more likely to give dark-skinned players a red card than white-skinned players (Silberzahn & Uhlmann, 2015; Silberzahn et al., 2018). The results obtained by the analysis teams varied widely, with observed effect sizes (odds ratios) ranging from 0.89 to 2.93. Moreover, no single analysis approach was deemed the best, and multiple approaches were evaluated as defensible by the analysis teams, who peer reviewed each other's analyses.

The leeway researchers have to make decisions in the process of setting up a study, analyzing data, and reporting the results is often called researcher degrees of freedom, or p-hacking if this leeway is purposively used to obtain statistical significance (Simmons, Nelson, & Simonsohn, 2011; Wicherts et al., 2016). John, Loewenstein, and Prelec (2012) studied the self-admission rate and defensibility of 10 researcher degrees of freedom related to analyzing data in a sample of 2,000 psychologists employed at universities in the United States. The most frequently admitted researcher degree of freedom (63.4%) was "in a paper, failing to report all of a study's dependent measures", and the vast majority of researchers who admitted to it deemed this decision defensible (mean 1.84, standard deviation 0.39, on a scale ranging from 0 = not defensible to 2 = defensible). A replication of this study in Italy revealed that the prevalence of admitting to not reporting all dependent measures was somewhat lower, albeit still substantial (47.9%; Agnoli, Wicherts, Veldkamp, Albiero, & Cubelli, 2017).

Selectively reporting dependent measures biases the literature, especially if only statistically significant measures are reported. In medical research, this practice is referred to as outcome reporting bias or outcome switching. Outcome reporting bias (ORB) is defined as the bias caused by reporting of outcomes/dependent measures that "is driven by the significance and/or direction of the effect size" (Copas, Dwan, Kirkham, & Williamson, 2014). Publication bias is closely related to ORB, but publication bias refers to the suppression of an entire study from being published, whereas ORB refers to the suppression of outcomes within a study.

Direct evidence for ORB has mainly been obtained in the medical literature (e.g., Lancee, Lemmens, Kahn, Vinkers, & Luykx, 2017; Rankin et al., 2017; Wayant et al., 2017). A systematic review (Dwan et al., 2008; Dwan, Gamble, Williamson, & Kirkham, 2013) identified five articles that studied ORB (Chan, Hróbjartsson, Haahr, Gøtzsche, & Altman, 2004; Chan, Krleža-Jerić, Schmid, & Altman, 2004; Elm et al., 2008; Ghersi, 2006; Hahn, Williamson, & Hutton, 2002).
All five articles studied ORB by comparing the outcomes listed in study protocols with the outcomes actually reported in the final publication. The overarching conclusion of these five studies is that selective reporting of outcomes is prevalent and that statistically significant outcomes are more likely to be reported than non-significant outcomes. For example, Chan, Krleža-Jerić, Schmid, and Altman (2004) and Chan, Hróbjartsson, Haahr, Gøtzsche, and Altman (2004) studied ORB by comparing final publications with trial protocols (approved by Danish ethical committees or funded by the Canadian Institutes of Health Research) and concluded that 50% and 31% of efficacy outcomes and 65% and 59% of harm outcomes were not reported in the final publication in sufficient detail to be included in a meta-analysis. Moreover, the odds of statistically significant outcomes being reported in the final publication were more than twice as large as those of non-significant outcomes. Qualitative research revealed that common reasons for not reporting all outcomes are that the results were deemed uninteresting, that the sample size was too small for a particular outcome, and space limitations imposed by the journal (Smyth et al., 2011). Some researchers also indicated that they were unaware of the negative consequences of not reporting all outcomes, which is no surprise given the literature on hindsight bias combined with findings highlighting poor statistical intuitions (Bakker, Hartgerink, Wicherts, & van der Maas, 2016; Tversky & Kahneman, 1971).

Research on ORB is more limited in the psychological literature, most likely because of the common lack of transparent practices such as data sharing and preregistration (Hardwicke et al., n.d.) that would enable meta-scientific studies of ORB. Franco, Simonovits, and Malhotra (2016) compared the protocols of 32 psychology experiments with the final publications that ended up in the literature. In 72% of the final publications, fewer outcomes were reported than were listed in the protocol. LeBel et al. (2013) studied ORB by emailing corresponding authors of articles published in prominent psychology journals and asking them whether they had fully disclosed information about the included outcomes as well as data exclusions, sample size, and conditions. Between 20% and 87.2% of the authors indicated that they had not reported all outcomes in their final publication. O'Boyle, Gonzalez-Mule, and Banks (2017) compared hypotheses tested in dissertations with those in the corresponding publications. Their results also provide evidence for ORB: 44.9% of the hypotheses reported in dissertations were statistically significant, compared to 65.9% in the publications, implying that the results of hypothesis tests were selectively reported.

Multiple methods have been developed to correct for ORB in a meta-analysis (Bowden, Jackson, & Thompson, 2010; Hutton & Williamson, 2000; Jackson, Copas, & Sutton, 2005). The method developed by Copas and colleagues (Copas, Dwan, Kirkham, & Williamson, 2014; Copas, Marson, Williamson, & Kirkham, 2019) is the method recommended by the Outcome Reporting Bias in Trials (ORBIT) team.
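To make the meta-regression idea behind CORB concrete, the following is a minimal R sketch (the paper's accompanying code is in R). It is illustrative only and not the authors' implementation: the data set, the assumed correlation rho among a study's outcomes, and the proxy sqrt(vi * (1 - rho)) for the variability of the outcomes' effect sizes are assumptions made for this example; only the metafor package and its rma() function are existing software.

    library(metafor)  # widely used R package for meta-analysis

    # Hypothetical meta-analytic data: yi = reported effect size per study,
    # vi = its sampling variance (both invented for this illustration)
    dat <- data.frame(yi = c(0.42, 0.31, 0.55, 0.12, 0.48),
                      vi = c(0.020, 0.015, 0.030, 0.010, 0.025))

    # Assumed correlation among the outcomes within a study; this cannot be
    # estimated from reported data, so it is treated as a sensitivity parameter
    rho <- 0.5

    # Illustrative proxy for the variability of the outcomes' effect sizes:
    # the more the outcomes' estimates can differ, the more room ORB has
    dat$sd_out <- sqrt(dat$vi * (1 - rho))

    # Meta-regression with the variability estimate as moderator; the
    # intercept extrapolates to a study whose outcomes do not vary and can
    # be read as the ORB-corrected estimate of the mean effect
    res <- rma(yi, vi, mods = ~ sd_out, data = dat)
    coef(res)["intrcpt"]

The design mirrors regression-based corrections for publication bias such as PET, which regresses effect sizes on their standard errors; here the moderator instead targets the variability among outcomes that gives ORB its room to operate.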