
ADJUSTING FOR PUBLICATION BIAS IN JASP 1

Adjusting for Publication Bias in JASP — Selection Models and Robust Bayesian Meta-Analysis

František Bartoš1,2∗, Maximilian Maier1∗, & Eric-Jan Wagenmakers1

1 University of Amsterdam
2 Charles University, Prague
∗Both authors contributed equally.

Correspondence concerning this article should be addressed to: František Bartoš, University of Amsterdam, Department of Psychological Methods, Nieuwe Achtergracht 129-B, 1018 VZ Amsterdam, The Netherlands, [email protected]

Author Note
This project was supported in part by a Vici grant (#016.Vici.170.083) to EJW.

Abstract

Meta-analysis is essential for cumulative science, but its validity is compromised by publication bias. In order to mitigate the impact of publication bias, one may apply selection models, which estimate the degree to which non-significant studies are suppressed. Implemented in JASP, these methods allow researchers without programming experience to conduct state-of-the-art publication bias adjusted meta-analysis. In this tutorial, we demonstrate how to conduct a publication bias adjusted meta-analysis in JASP and interpret the results. First, we explain how frequentist selection models correct for publication bias. Second, we introduce Robust Bayesian Meta-Analysis (RoBMA), a Bayesian extension of the frequentist selection models. We illustrate the methodology with two data sets and discuss the interpretation of the results. In addition, we include example text to provide concrete guidance on reporting the meta-analytic results in an academic article. Finally, three tutorial videos are available at https://tinyurl.com/y4g2yodc.

Keywords: Selection Models, Robust Bayesian Meta-Analysis, Model-Averaging, Publication Bias

Adjusting for Publication Bias in JASP — Selection Models and Robust Bayesian Meta-Analysis

Meta-analyses are a powerful tool for evidence synthesis.
However, publication bias, the preferential publishing of significant studies, leads to an overestimation of effect sizes when accumulating evidence across a set of primary studies. Some researchers claim that most research findings might never be published but remain in researchers’ file drawers (e.g., Ioannidis, 2005; Rosenthal, 1979). Even if the true extent of publication bias were less severe than these researchers have suggested, it would remain a formidable threat to the validity of meta-analyses (Borenstein et al., 2009).

To alleviate this problem and explicitly account for publication bias, a variety of statistical methods have been proposed (e.g., Carter & McCullough, 2014; Duval & Tweedie, 2000; Egger et al., 1997; Iyengar & Greenhouse, 1988; Simonsohn et al., 2014; Stanley & Doucouliagos, 2017). However, simulations have shown that most methods perform poorly under heterogeneity (Carter et al., 2019; Maier et al., 2020; McShane et al., 2016; Renkewitz & Keiner, 2019) or do not provide meta-analytic estimates (Bartoš & Schimmack, 2020; Brunner & Schimmack, 2020). Heterogeneity occurs whenever the individual studies are not exact replications of each other and the true effect size varies across primary studies, which is usually the case in psychology (e.g., McShane et al., 2016). Under high heterogeneity, most tests for publication bias have either high false-positive rates or low power; in addition, the associated meta-analytic effect size estimates are biased or highly variable. An exception to this rule is selection models – these models provide an explicit account of a p-value based publication bias process and adjust the estimate of effect size accordingly. Selection models perform well even under high heterogeneity (Carter et al., 2019; Maier et al., 2020; McShane et al., 2016). However, despite their strong performance in simulations, selection models are rarely used in practice.
Their relative obscurity is arguably because selection models are considered overly complicated¹ and because they have, to the best of our knowledge, not yet been implemented in statistical software packages with a graphical user interface (GUI), limiting their accessibility for applied researchers.

¹ For instance, Rothstein et al. (2005, p. 172) remark that “Weight function models are complex and involve a substantial amount of computation. Thus they are unlikely to be used routinely in meta-analysis”.

To make selection models more readily available to applied researchers, we implemented these models in the open-source statistical program JASP (JASP Team, 2020), as part of the Meta-Analysis module. The implementation provides an intuitive graphical interface for the R packages weightr (Coburn et al., 2019) and RoBMA (Bartoš & Maier, 2020), which allow users to fit either frequentist or Bayesian selection models. Below we first provide a conceptual introduction to frequentist selection models and show how to fit these models using JASP. Second, we introduce a Bayesian selection method, Robust Bayesian Meta-Analysis (RoBMA; Maier et al., 2020), and show how it can overcome several limitations that are inherent to frequentist selection models. We explain how to interpret the results using two examples: a meta-analysis on the influence of “interracial dyads” on performance and observed behavior (Toosi et al., 2012) and a meta-analysis on acculturation mismatch (Lui, 2015). We also provide an example report of a results section that describes the application of both frequentist and Bayesian selection models to the meta-analysis on the influence of “interracial dyads”. Finally, we recorded tutorial videos to further facilitate the application of the implemented methods. The videos are available at https://tinyurl.com/y4g2yodc.

Frequentist Selection Models

Selection models use a weighted likelihood to account for studies that are missing due to publication bias. Selection models are well established among statisticians (e.g., Iyengar & Greenhouse, 1988; Larose & Dey, 1998; Vevea & Hedges, 1995) and can accommodate realistic assumptions regarding publication bias and heterogeneity. In selection models, analysts specify p-value intervals with different assumed publication probabilities, for example, “statistically significant” p-values (p < .05) versus non-significant p-values (p > .05). The models typically use maximum likelihood to obtain a bias-adjusted pooled point estimate by accounting for the relative publication probabilities in each interval (called weights) in the weighted likelihood function. Selection models can accommodate effect size heterogeneity by extending random-effects models (McShane et al., 2016; Rothstein et al., 2005; Vevea & Hedges, 1995, pp. 145-174).

Selection models can be specified flexibly in several ways. First, researchers can decide between one-sided and two-sided selection. One-sided selection means that only significant effects in the expected direction are more likely to be published. Commonly, significant positive effects are more likely to be published, although in some cases significant negative effect sizes might be more likely to be published instead. Researchers can specify the direction of selection flexibly. Two-sided selection means that the probability of publication does not depend on the direction of the effect; in other words, positive and negative effects have the same probability of being published, given that they fall in the same p-value interval.

Second, researchers may also specify different intervals for different publication probabilities.
For example, to account for the fact that marginally significant results (.05 < p < .10) are potentially more likely to be published than non-significant results, researchers could specify this as a third interval. Note that, when the observed effect is in the predicted direction, a marginally significant result using a two-sided test is significant using a one-sided test. Therefore, a two-sided selection process with different publication probabilities for significant versus “marginally significant” findings accommodates a one-sided selection process with publication probabilities depending on whether or not the p-value is statistically significant.

Example 1: Dyadic Interracial Interactions and Performance

Toosi et al. (2012) conducted a meta-analysis on the effect of “interracial” interactions on positive attitudes, negative affect, nonverbal behavior, and “objective measures of performance” (Toosi et al., 2012, p. 1). The meta-analysis compared dyadic same-race versus “interracial” interactions. A standard reanalysis confirms that “performance” was slightly better in same-race dyads than in different-race dyads, r = 0.070, 95% CI [0.023, 0.117], p = .004, τ (on Cohen’s d scale) = 0.289, 95% CI [0.173, 0.370].² Toosi et al. (2012) applied Egger’s regression (Egger et al., 1997) and reported a lack of funnel plot asymmetry, suggesting that the data set is not contaminated by publication bias. However, funnel plot based methods to assess publication bias have repeatedly been criticized for having low power and generating a high proportion of false positives, especially under heterogeneity (e.g., Lau et al., 2006; Maier et al., 2020). We therefore revisit the question of publication bias by reanalyzing this study using selection models in JASP.

² Our result is similar to that reported by Toosi et al. (2012), namely r = 0.070, 95% CI [0.03, 0.11], I² = 67.29%. For the re-analysis, we used the data set as recoded by Stanley et al. (2018), accessible at https://osf.io/2vfyj/files/.

Figure 1
Results from Toosi et al. (2012) Using the Default Settings of JASP Selection Models

Note. Screenshot from the JASP graphical user interface when analyzing the data of Toosi et al. (2012). The analysis settings are specified in the left panel and the associated output is shown in the right panel.
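To make the weighted-likelihood idea concrete, the following sketch simulates primary studies subject to a two-sided selection process (non-significant results published with reduced probability) and then recovers the mean effect by maximizing a two-interval weighted likelihood. This is an illustrative Python re-implementation under simplified assumptions, not the weightr or JASP code; all names and simulation settings (true effect 0.2, heterogeneity τ = 0.2, publication probability 0.3 for non-significant studies) are hypothetical.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)

# Simulate primary studies: true mean effect 0.2, heterogeneity tau = 0.2.
n_total = 2000
se = rng.uniform(0.05, 0.30, n_total)                 # per-study standard errors
y = rng.normal(0.2, np.sqrt(0.2**2 + se**2))          # observed effect estimates
p = 2 * stats.norm.sf(np.abs(y) / se)                 # two-sided p-values

# Two-sided selection: significant studies are always published,
# non-significant studies are published with probability 0.3.
published = (p < .05) | (rng.random(n_total) < 0.3)
y, se, p = y[published], se[published], p[published]

def neg_loglik(params):
    """Negative log-likelihood of a two-interval selection model:
    weight 1 for p < .05 and weight w (0 < w < 1) otherwise."""
    mu, log_tau, logit_w = params
    tau = np.exp(log_tau)
    w = 1.0 / (1.0 + np.exp(-logit_w))
    s = np.sqrt(tau**2 + se**2)                       # marginal SD of each estimate
    crit = stats.norm.ppf(0.975) * se                 # |y| cutoff for p < .05
    # Probability that a study drawn from N(mu, s^2) lands in the significant interval.
    p_sig = stats.norm.cdf(-crit, mu, s) + stats.norm.sf(crit, mu, s)
    norm_const = p_sig + w * (1.0 - p_sig)            # expected weight (normalizer)
    weight = np.where(p < .05, 1.0, w)
    return -np.sum(np.log(weight) + stats.norm.logpdf(y, mu, s) - np.log(norm_const))

# The unadjusted fixed-effect estimate is biased upward by the selection process.
naive = np.average(y, weights=1.0 / se**2)

fit = optimize.minimize(neg_loglik, x0=[naive, np.log(0.1), 0.0], method="Nelder-Mead")
mu_adj = fit.x[0]
print(f"naive estimate: {naive:.3f}, selection-model estimate: {mu_adj:.3f}")
```

Because significance here depends on whether |y| exceeds 1.96 × se, each study's density is reweighted by its interval weight and renormalized by the expected weight; maximizing this weighted likelihood pulls the pooled estimate back toward the true mean. The same mechanism, with more intervals and one-sided variants, underlies the Vevea–Hedges style models that JASP fits.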