Correcting for Outcome Reporting Bias in a Meta-Analysis: A Meta-Regression Approach


Robbie C. M. van Aert & Jelte M. Wicherts
Department of Methodology and Statistics, Tilburg University

Author Note

We thank the members of the Meta-Research Center at Tilburg University for their feedback on an earlier version of this paper. Correspondence concerning this article should be addressed to Robbie C. M. van Aert, P.O. Box 90153, 5000 LE Tilburg, the Netherlands. E-mail: [email protected]

Abstract

Outcome reporting bias (ORB) refers to the biasing effect that arises when researchers selectively report outcomes based on their statistical significance. ORB leads to inflated effect size estimates in meta-analysis if, because of ORB, only the outcome with the largest effect size is reported. We propose a new method (CORB) to correct for ORB that includes an estimate of the variability of the outcomes' effect sizes as a moderator in a meta-regression model. This variability can be estimated by assuming a correlation among the outcomes. Results of a Monte Carlo simulation study showed that effect size in meta-analyses may be severely overestimated if ORB is not corrected for, and that CORB accurately estimates effect size precisely where the overestimation caused by ORB is largest. Applying the method to a meta-analysis on the effect of playing violent video games on aggression showed that the effect size estimate decreased after correcting for ORB. We recommend routinely applying methods to correct for ORB in any meta-analysis, and we provide annotated R code and functions to help researchers apply the CORB method.

Keywords: outcome reporting bias, meta-analysis, meta-regression, researcher degrees of freedom

Word count: 8839
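To make the meta-regression idea in the abstract concrete, here is a minimal sketch in R of a CORB-style analysis. This is not the authors' own code: the toy data, the variable names (yi, vi, var_out), the assumed common correlation r among a study's outcomes, and the moderator construction vi * (1 - r) are all illustrative assumptions. The last of these rests on the standard result that, for exchangeable estimates with common variance vi and pairwise correlation r, the expected variance among them is vi * (1 - r). The sketch uses the metafor package and, in the spirit of regression-based adjustments, reads the meta-regression intercept as the effect size estimate once the ORB-related variability is accounted for.

```r
## Minimal sketch of a CORB-style meta-regression (illustrative, not the
## authors' code): an estimate of the variability of each study's
## outcomes' effect sizes enters as a moderator.
library(metafor)

dat <- data.frame(
  yi = c(0.42, 0.31, 0.55, 0.28, 0.47),      # toy observed effect sizes
  vi = c(0.020, 0.015, 0.030, 0.012, 0.025)  # toy sampling variances
)
r <- 0.5                         # assumed correlation among a study's outcomes
dat$var_out <- dat$vi * (1 - r)  # estimated variability of outcomes' effect sizes

res <- rma(yi, vi, mods = ~ var_out, data = dat, method = "REML")
summary(res)  # the intercept is read as the ORB-corrected effect size
```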
Introduction

There is ample evidence that findings reported in the psychological literature are biased. For example, the vast majority of published studies in psychology report statistically significant findings (Fanelli, 2010, 2012; Sterling, Rosenbaum, & Weinkam, 1995), whereas the low average statistical power of studies in the psychological literature implies that most studies should yield non-significant findings (Bakker, Dijk, & Wicherts, 2012; Cohen, 1990). Moreover, 100 key findings in psychology were recently replicated to study the replicability of psychological science in the Reproducibility Project: Psychology (Open Science Collaboration, 2015), and the effect sizes of the replications were substantially smaller than those of the original studies (correlation coefficient 0.197 vs. 0.403).

The most prominent explanation for the overrepresentation of statistically significant effect sizes in the literature is the tendency of editors and reviewers to evaluate statistically significant studies (with a p-value below the α-level) more positively than non-significant studies, but researchers also appear less inclined to submit statistically non-significant studies for publication (Cooper, DeNeve, & Charlton, 1997; Coursol & Wagner, 1986). The failure to publish studies without a statistically significant effect size, also known as publication bias, is widely considered to create bias in the literature. Additional sources of bias can emerge if researchers are motivated (or feel pressured by a publication system that is still strongly focused on statistical significance) to analyze their data in such a way that it yields a statistically significant effect size. Importantly, multiple analysis approaches are often valid and defensible (Steegen, Tuerlinckx, Gelman, & Vanpaemel, 2016). For instance, in a so-called many-analysts project, 29 analysis teams were asked to analyze the same data to answer the research question of whether referees in football are more likely to give dark-skinned players a red card than white-skinned players (Silberzahn & Uhlmann, 2015; Silberzahn et al., 2018). The results obtained by the analysis teams varied widely, with observed effect sizes (odds ratios) ranging from 0.89 to 2.93. Moreover, no single analysis approach was deemed the best, and multiple approaches were evaluated as defensible by the analysis teams, who peer reviewed each other's analyses.

The leeway researchers have in setting up a study, analyzing data, and reporting the results is often called researcher degrees of freedom, or p-hacking if this leeway is purposively used to obtain statistical significance (Simmons, Nelson, & Simonsohn, 2011; Wicherts et al., 2016). John, Loewenstein, and Prelec (2012) studied the self-admission rate and defensibility of 10 researcher degrees of freedom related to analyzing data in a sample of 2,000 psychologists employed at universities in the United States. The most admitted researcher degree of freedom (63.4%) was "in a paper, failing to report all of a study's dependent measures", and the vast majority of researchers who admitted to it deemed this decision defensible (mean 1.84, standard deviation 0.39, on a scale ranging from 0 = not defensible to 2 = defensible). A replication of this study in Italy revealed that the prevalence of admitting to not reporting all dependent measures was somewhat lower, albeit still substantial (47.9%; Agnoli, Wicherts, Veldkamp, Albiero, & Cubelli, 2017).

Selectively reporting dependent measures biases the literature, especially if only statistically significant measures are reported. In medical research, selectively reporting dependent measures is referred to as outcome reporting bias or outcome switching. Outcome reporting bias (ORB) is defined as the bias caused by reporting of outcomes/dependent measures that "is driven by the significance and/or direction of the effect size" (Copas, Dwan, Kirkham, & Williamson, 2014). Publication bias is closely related to ORB, but publication bias refers to the suppression of an entire study from publication, whereas ORB is the suppression of outcomes within a study.

Direct evidence for ORB has mainly been obtained in the literature on medical research (e.g., Lancee, Lemmens, Kahn, Vinkers, & Luykx, 2017; Rankin et al., 2017; Wayant et al., 2017).
A systematic review (Dwan et al., 2008; Dwan, Gamble, Williamson, & Kirkham, 2013) identified five articles that studied ORB (Chan, Hróbjartsson, Haahr, Gøtzsche, & Altman, 2004; Chan, Krleža-Jerić, Schmid, & Altman, 2004; Elm et al., 2008; Ghersi, 2006; Hahn, Williamson, & Hutton, 2002). All five studied ORB by comparing the outcomes listed in study protocols with the outcomes actually reported in the final publication. Their overarching conclusion is that selective reporting of outcomes is prevalent and that statistically significant outcomes are more likely to be reported than non-significant outcomes. For example, Chan, Krleža-Jerić, Schmid, and Altman (2004) and Chan, Hróbjartsson, Haahr, Gøtzsche, and Altman (2004) studied ORB by examining protocols approved by Danish ethical committees and protocols funded by the Canadian Institutes of Health Research, and concluded that 50% and 31% of efficacy outcomes and 65% and 59% of harm outcomes, respectively, were not reported in sufficient detail in the final publication to be included in a meta-analysis. Moreover, the odds of statistically significant outcomes being reported in the final publication were more than twice those of non-significant outcomes. Qualitative research revealed that common reasons for not reporting all outcomes are that the results are deemed uninteresting, that the sample size for a particular outcome is too small, and that journals impose space limitations (Smyth et al., 2011). Some researchers also indicated that they were unaware of the negative consequences of not reporting all outcomes, which is no surprise given the literature on hindsight bias combined with findings highlighting poor statistical intuitions (Bakker, Hartgerink, Wicherts, & Maas, 2016; Tversky & Kahneman, 1971).

Research on ORB in the psychological literature is more limited, most likely because of the common lack of transparent practices such as data sharing and preregistration (Hardwicke et al., n.d.) that would enable meta-scientific studies of ORB. Franco, Simonovits, and Malhotra (2016) compared the protocols of 32 psychology experiments with the final publications that ended up in the literature; 72% of the final publications reported fewer outcomes than were listed in the protocol. LeBel et al. (2013) studied ORB by emailing corresponding authors of articles published in prominent psychology journals and asking whether they had fully disclosed information about the included outcomes as well as data exclusions, sample size, and conditions. Between 20% and 87.2% of the authors indicated that they had not reported all outcomes in their final publication. O'Boyle, Gonzalez-Mule, and Banks (2017) compared hypotheses tested in dissertations with the corresponding publications. Their results also provide evidence for ORB: 44.9% of the hypotheses reported in dissertations were statistically significant, compared to 65.9% in the publications, implying that the results of hypothesis tests were selectively reported.

Multiple methods have been developed to correct for ORB in a meta-analysis (Bowden, Jackson, & Thompson, 2010; Hutton & Williamson, 2000; Jackson, Copas, & Sutton, 2005).
The method developed by Copas and colleagues (Copas, Dwan, Kirkham, & Williamson, 2014; Copas, Marson, Williamson, & Kirkham, 2019) is the method recommended by the Outcome Reporting Bias in Trials (ORBIT) team.
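Whatever correction method is used, the correlation among a study's outcomes is typically unknown, so it is natural to treat it as a sensitivity parameter. A brief, hedged continuation of the toy CORB-style sketch shown earlier (same illustrative data and variable names) re-fits the meta-regression for several assumed correlations and extracts the corrected estimate each time:

```r
## Sensitivity sketch, continuing the toy example above: vary the assumed
## correlation r and extract the intercept (the corrected effect size
## estimate, named "intrcpt" by metafor) for each value.
sapply(c(0.3, 0.5, 0.8), function(r) {
  dat$var_out <- dat$vi * (1 - r)
  coef(rma(yi, vi, mods = ~ var_out, data = dat))["intrcpt"]
})
```

If the corrected estimate is stable across plausible values of r, conclusions are less dependent on this untestable assumption.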