Is Psychology Suffering from a Replication Crisis? What Does “Failure to Replicate” Really Mean?


Scott E. Maxwell, University of Notre Dame
Michael Y. Lau, Teachers College, Columbia University
George S. Howard, University of Notre Dame

American Psychologist, September 2015, Vol. 70, No. 6, 487–498. © 2015 American Psychological Association. http://dx.doi.org/10.1037/a0039400

Author note: Scott E. Maxwell, Department of Psychology, University of Notre Dame; Michael Y. Lau, Department of Counseling and Clinical Psychology, Teachers College, Columbia University; George S. Howard, Department of Psychology, University of Notre Dame. We thank Melissa Maxwell Davis, David Funder, John Kruschke, Will Shadish, and Steve West for their valuable comments on an earlier version of this paper. Correspondence concerning this article should be addressed to Scott E. Maxwell, Department of Psychology, University of Notre Dame, Notre Dame, IN 46556. E-mail: [email protected]

Abstract: Psychology has recently been viewed as facing a replication crisis because efforts to replicate past study findings frequently do not show the same result. Often, the first study showed a statistically significant result but the replication does not. Questions then arise about whether the first study results were false positives, and whether the replication study correctly indicates that there is truly no effect after all. This article suggests these so-called failures to replicate may not be failures at all, but rather are the result of low statistical power in single replication studies, and the result of failure to appreciate the need for multiple replications in order to have enough power to identify true effects. We provide examples of these power problems and suggest some solutions using Bayesian statistics and meta-analysis. Although the need for multiple replication studies may frustrate those who would prefer quick answers to psychology’s alleged crisis, the large sample sizes typically needed to provide firm evidence will almost always require concerted efforts from multiple investigators. As a result, it remains to be seen how many of the recently claimed failures to replicate will be supported or instead may turn out to be artifacts of inadequate sample sizes and single study replications.

Keywords: false positive results, statistical power, meta-analysis, equivalence tests, Bayesian methods

Psychologists have recently become increasingly concerned about the likely overabundance of false positive results in the scientific literature. For example, Simmons, Nelson, and Simonsohn (2011) state that “In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not” (p. 1359). In a similar vein, Ioannidis (2005) concluded that for disciplines where statistical significance is a virtual prerequisite for publication, “most current published research findings are false” (p. 696). Such concerns led Pashler and Wagenmakers (2012) to conclude that there appears to be a “crisis of confidence in psychological science reflecting an unprecedented doubt among practitioners about the reliability of research findings in the field” (p. 528). Simmons et al. (2011) state that “a field known for publishing false positives loses its credibility” (p. 1359).

An initial reaction might be that psychology is immune to such concerns because published studies typically appear to control the probability of a Type I error (i.e., mistakenly reporting an effect when in reality no effect exists) at 5%. However, as a number of authors (e.g., Gelman & Loken, 2014; John, Loewenstein, & Prelec, 2012; Simmons, Nelson, & Simonsohn, 2011) have discussed, data analyses in psychology and other fields are often driven by the observed data. Data-driven analyses include but are not limited to noticing apparent patterns in the data and then testing them for significance, testing effects on multiple measures, testing effects on subgroups of participants, fitting multiple latent variable models, including or excluding various covariates, and stopping data collection once significant results have been obtained. Some of these practices may be entirely appropriate depending on the specific circumstances, but even at best the existence of such practices makes it difficult to evaluate the accuracy of a single published study because these practices typically increase the probability of obtaining a significant result. Gelman and Loken (2014) state that “Fisher offered the idea of p values as a means of protecting researchers from declaring truth based on patterns in noise. In an ironic twist, p values are now often manipulated to lend credence to noisy claims based on small samples” (p. 460).
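The inflation Gelman and Loken describe is easy to see in a short simulation. The sketch below is our illustration, not part of the original article: it assumes five independent outcome measures per study (n = 30 per group), a true null for every measure, and a researcher who reports a study as significant whenever any one measure reaches p < .05.

```python
# Illustrative simulation (not from the original article): testing several
# measures and reporting whichever is significant inflates the Type I error
# rate well beyond the nominal 5%. All numbers here are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_measures, n_per_group = 10_000, 5, 30

false_positives = 0
for _ in range(n_studies):
    p_values = []
    for _ in range(n_measures):
        control = rng.standard_normal(n_per_group)    # no true effect anywhere
        treatment = rng.standard_normal(n_per_group)
        p_values.append(stats.ttest_ind(control, treatment).pvalue)
    if min(p_values) < 0.05:                          # report the measure that "worked"
        false_positives += 1

print(f"Studies with at least one significant result: {false_positives / n_studies:.3f}")
# Roughly 1 - 0.95**5 = 0.226, not the advertised 0.05.
```

With five independent measures, the chance of at least one nominally significant result is about 23%; correlated measures or other flexible choices change the exact figure but not the direction.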
The question of whether a pattern seemingly identified in an original study is in fact more than just noise can often best be addressed by testing whether the pattern can be replicated in a new study, which has led to increased attention to the role of replication in psychological research. Moonesinghe, Khoury, and Janssens (2007) have shown that successful replications can greatly lower the risk of inflated false positive results. Both Moonesinghe et al. (2007, p. 218) and Simons (2014, p. 76) maintain that replication is “the cornerstone of science” because only replication can adjudicate whether a single study reporting an original result represents a true finding or a false positive result. Perspectives on Psychological Science devoted a special section to replicability in 2012 (Pashler & Wagenmakers, 2012). More recently, this journal has begun a new type of article, a Registered Replication Report (RRR). In an APS Observer column, Roediger (2012) stated that “By following the practice of both direct and systematic replication, of our own research and of others’ work, we would avoid the greatest problems we are now witnessing” (para. 19). Along these lines, collaborative efforts such as the Reproducibility Project (Open Science Collaboration, 2012) and the psychfiledrawer.org website, which provides an archive of replication studies, reflect systematic efforts to assess the extent to which original findings published in the literature are replicable and can be trusted.
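The logic behind this claim can be made concrete with a small Bayes’ rule calculation. The sketch below is our own back-of-the-envelope version, not Moonesinghe et al.’s model; the prior (10% of tested effects are real), power (.80), and alpha (.05) are all illustrative assumptions.

```python
# Illustrative Bayes' rule calculation (our sketch, not Moonesinghe et al.'s
# model): how much a significant replication raises the probability that a
# significant finding reflects a real effect. All inputs are assumptions.
ALPHA = 0.05   # Type I error rate of each study
POWER = 0.80   # probability each study detects a real effect
PRIOR = 0.10   # assumed share of tested effects that are real

def prob_real(k_significant: int) -> float:
    """P(effect is real | k independent significant studies)."""
    real = PRIOR * POWER**k_significant             # real effect, detected k times
    spurious = (1 - PRIOR) * ALPHA**k_significant   # null, k Type I errors in a row
    return real / (real + spurious)

print(f"One significant study:          {prob_real(1):.2f}")  # ~0.64
print(f"Plus a significant replication: {prob_real(2):.2f}")  # ~0.97
```

Under these assumptions, a single significant result still leaves roughly a one-in-three chance of a false positive, whereas one successful independent replication drives that risk below 5%, which is the sense in which only replication “adjudicates.”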
Several recent apparent replication failures have been widely publicized and have begun to cast doubt in some minds on the extent to which the field more broadly is beset with a preponderance of results that cannot be replicated. Most notably, various replication studies (e.g., Galak, LeBoeuf, Nelson, & Simmons, 2012; Ritchie, Wiseman, & French, 2012) apparently fail to confirm Bem’s (2011) highly publicized findings regarding the existence of psi. Another highly publicized example is the apparent failure of Doyen, Klein, Pichon, and Cleeremans (2012) and Pashler, Coburn, and Harris (2012) to replicate Bargh’s work on the influence of subtle priming on behavior. More generally, out of 14 replication attempts organized by […]. Although such concerns have arisen in several disciplines, much of the concern has focused on psychology. A 2012 article in The Chronicle of Higher Education, for example, raised the question, “Is Psychology About to Come Undone?” (Bartlett, 2012). More recently, a 2014 Chronicle of Higher Education article described the apparent crisis as “repligate” (Bartlett, 2014).

A particular replication may fail to confirm the results of an original study for a variety of reasons, some of which may include intentional differences in procedures, measures, or samples as in a conceptual replication (Cesario, 2014; Simons, 2014; Stroebe & Strack, 2014). Although conceptual replication studies can be very informative, they may not be able to identify false positive results in the published literature, because if the replication study fails to find an effect previously reported in a published study, any discrepancy in results may simply be due to procedural differences in the two studies. For this reason, there has been an increased emphasis recently on exact (or direct) replications. If exactly replicating the procedures of the original study fails to replicate the results, then it might seem reasonable to conclude that the results of the original study are in reality nothing more than a Type I error (i.e., mistakenly reporting an effect when, in reality, no effect exists). The primary purpose of our article is to explain why even an exact replication may fail to obtain findings consistent with the original study and yet the effect identified in the original study may very well be true despite these discrepant findings.

It might seem straightforward to decide whether a replication study is a success or a failure, at least from a narrow statistical perspective. Generally speaking, a published original study has in all likelihood demonstrated a statistically significant effect. In the current zeitgeist, a replication study is usually interpreted as successful if it also demonstrates a statistically significant effect. On the other hand, a replication study that fails to show statistical significance would typically be interpreted as a failure.¹ An immediate limitation of this perspective is that the replication study may have failed to produce a statistically significant result because it may have been underpowered. There is always some probability that a nonsignificant result may be a Type II error (i.e., failing to reject the null hypothesis even though it is false). However, this limitation seems to have an immediate solution, namely to design the replication study so as to have adequate statistical power and thus minimal risk of a Type II error. As Simons (2014) states, “If an effect is real and robust, any competent researcher should be able to obtain it when using the same procedures with adequate statistical power” (p. […]
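To see how demanding “adequate statistical power” can be in practice, consider a quick power calculation. This sketch is our illustration (the effect size d = 0.20 and the sample sizes are assumed numbers, not values from the article) and uses the statsmodels library:

```python
# Illustrative power calculation (our sketch; d = 0.20 and n = 50 per group
# are assumptions, not values from the article): a typical single replication
# can be badly underpowered for a small but real effect.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a two-sided two-sample t test, alpha = .05, n = 50 per group:
power = analysis.power(effect_size=0.20, nobs1=50, alpha=0.05, ratio=1.0)
print(f"Power with n = 50 per group: {power:.2f}")  # ~0.17

# Per-group sample size needed to reach 80% power for the same effect:
n_needed = math.ceil(analysis.solve_power(effect_size=0.20, power=0.80,
                                          alpha=0.05, ratio=1.0))
print(f"n per group for 80% power: {n_needed}")     # ~394
```

If the true effect is d = 0.20, a 50-per-group replication will be nonsignificant about five times out of six even though the effect is real, and reaching 80% power requires nearly 400 participants per group. This is precisely the ambiguity, and the appetite for large collaborative samples, that the remainder of the article develops.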