Researchers Overturn Landmark Study on the Replicability of Psychological Science
Recommended publications
- arXiv:2102.03380v2 [cs.AI] 29 Jun 2021
Reproducibility in Evolutionary Computation. Manuel López-Ibáñez (University of Málaga, Spain), Juergen Branke (University of Warwick, UK), Luís Paquete (University of Coimbra, CISUC, Department of Informatics Engineering, Portugal).

Abstract: Experimental studies are prevalent in Evolutionary Computation (EC), and concerns about the reproducibility and replicability of such studies have increased in recent times, reflecting similar concerns in other scientific fields. In this article, we discuss, within the context of EC, the different types of reproducibility and suggest a classification that refines the badge system of the Association for Computing Machinery (ACM) adopted by ACM Transactions on Evolutionary Learning and Optimization (https://dlnext.acm.org/journal/telo). We identify cultural and technical obstacles to reproducibility in the EC field. Finally, we provide guidelines and suggest tools that may help to overcome some of these reproducibility obstacles.

Additional Key Words and Phrases: Evolutionary Computation, Reproducibility, Empirical Study, Benchmarking

1 INTRODUCTION
As in many other fields of Computer Science, most of the published research in Evolutionary Computation (EC) relies on experiments to justify its conclusions. Until a mathematical proof is discovered, the ability to reach similar conclusions by repeating an experiment performed by other researchers is the only way a research community can reach a consensus on an empirical claim. From an engineering perspective, the assumption that experimental findings hold under similar conditions is essential for making sound decisions and predicting their outcomes when tackling a real-world problem. The "reproducibility crisis" refers to the realisation that many experimental findings described in peer-reviewed scientific publications cannot be reproduced because, e.g., they lack the necessary data, they lack enough details to repeat the experiment, or repeating the experiment leads to different conclusions.
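One practice in the spirit of the tools the article recommends, sketched minimally here (this example is not from the article itself): fix the random seed of every stochastic run and archive it together with the software environment, so that a reader can repeat the experiment under the same conditions. The `run_ea` function below is a hypothetical stand-in for a real EC experiment.

```python
import json
import platform
import random
import sys

def run_ea(seed, pop_size=50, generations=100):
    """Hypothetical EC run on a toy OneMax-style problem.
    The result is fully determined by the seed."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(32)] for _ in range(pop_size)]
    best = max(sum(ind) for ind in population)
    for _ in range(generations):
        ind = rng.choice(population)   # pick an individual
        i = rng.randrange(len(ind))    # flip one random bit
        ind[i] ^= 1
        best = max(best, sum(ind))
    return best

if __name__ == "__main__":
    seed = 12345  # fixed and reported, so the run is repeatable
    record = {
        "seed": seed,
        "best_fitness": run_ea(seed),
        "python": sys.version,
        "platform": platform.platform(),
    }
    # Archiving this record alongside the paper lets others re-run
    # the experiment and check that they reach the same result.
    print(json.dumps(record, indent=2))
```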
- Psychology as a Robust Science (Dr Amy Orben, Lent Term 2020)
When: Lent Term 2020; Wednesdays 2-4pm (Weeks 0-6; 15.1.2020 – 26.2.2020), Mondays 2-4pm (Week 7; 2.3.2020)
Where: Lecture Theatre, Experimental Psychology Department, Downing Site
Summary: Is psychology a robust science? To answer such a question, this course will encourage you to think critically about how psychological research is conducted and how conclusions are drawn. To enable you to truly understand how psychology functions as a science, however, this course will also need to discuss how psychologists are incentivised, how they publish, and how their beliefs influence the inferences they make. By engaging with such issues, this course will probe and challenge the basic features and functions of our discipline. We will uncover multiple methodological, statistical and systematic issues that could impair the robustness of scientific claims we encounter every day. We will discuss the controversy around psychology and the replicability of its results, while learning about new initiatives that are currently reinventing the basic foundations of our field. The course will equip you with some of the basic tools necessary to conduct robust psychological research fit for the 21st century. The course will be based on a mix of set readings, class discussions and lectures. Readings will include a diverse range of journal articles, reviews, editorials, blog posts, newspaper articles, commentaries, podcasts, videos, and tweets. No exams or papers will be set; but come along with a critical eye and a willingness to discuss some difficult and controversial issues.
Core readings: • Chris Chambers (2017).
Promoting an Open Research Culture
Promoting an open research culture Brian Nosek University of Virginia -- Center for Open Science http://briannosek.com/ -- http://cos.io/ The McGurk Effect Ba Ba? Da Da? Ga Ga? McGurk & MacDonald, 1976, Nature Adelson, 1995 Adelson, 1995 Norms Counternorms Communality Secrecy Open sharing Closed Norms Counternorms Communality Secrecy Open sharing Closed Universalism Particularlism Evaluate research on own merit Evaluate research by reputation Norms Counternorms Communality Secrecy Open sharing Closed Universalism Particularlism Evaluate research on own merit Evaluate research by reputation Disinterestedness Self-interestedness Motivated by knowledge and discovery Treat science as a competition Norms Counternorms Communality Secrecy Open sharing Closed Universalism Particularlism Evaluate research on own merit Evaluate research by reputation Disinterestedness Self-interestedness Motivated by knowledge and discovery Treat science as a competition Organized skepticism Organized dogmatism Consider all new evidence, even Invest career promoting one’s own against one’s prior work theories, findings Norms Counternorms Communality Secrecy Open sharing Closed Universalism Particularlism Evaluate research on own merit Evaluate research by reputation Disinterestedness Self-interestedness Motivated by knowledge and discovery Treat science as a competition Organized skepticism Organized dogmatism Consider all new evidence, even Invest career promoting one’s own against one’s prior work theories, findings Quality Quantity Anderson, Martinson, & DeVries, -
- David A. Reinhard
University of Massachusetts Amherst, Psychology of Peace and Violence Program, Department of Psychological and Brain Sciences, 135 Hicks Way, Tobin Hall, Room 627, Amherst, MA 01003. [email protected] | www.david-reinhard.com

Education:
• Ph.D., Social Psychology, University of Virginia, 2017. Dissertation: The Influence of Temporal Group Identities on Goal-Pursuit. Committee: Benjamin Converse (Chair), Gerald Clore, Shigehiro Oishi, Kieran O'Connor.
• M.A., Social Psychology, University of Virginia, 2014. Thesis: Activating Mental Representations of Rivals Increases Motivation. Committee: Benjamin Converse (Chair), Gerald Clore, Timothy Wilson.
• B.A., Psychology (High Honors), University of Michigan Honors Program, 2010. Thesis: The Influence of Weight Cues on Product Perceptions. Advisor: Norbert Schwarz.

Academic and Professional Appointments:
• Postdoctoral Research Associate, Psychology of Peace and Violence Program, Department of Psychology, University of Massachusetts Amherst, 2017-Present. PI: Bernhard Leidner.
• Research Associate (Lab Manager), Research Center for Group Dynamics, Institute for Social Research, University of Michigan, 2010-2011. PIs: Stephanie Brown, Sara Konrath.

Research Interests: Motivation and Goal Pursuit, Rivalry, Temporal Collective Identities, Social Cognition, Emotion

Grants and Fellowships:
• Society for the Study of Peace, Conflict, and Violence, APA Division 48, Small Grants Program for Peace Psychology ($500). "Destructive or Constructive International Conflict: Enemies or Rivals? An Empirical Investigation and Intervention in Israel and Iran". 2018.
• University of Massachusetts Amherst, Campus Climate Improvement Grants ($2,500). "Bridging the Great Divide: Democrat-Republican Rivalry and Its Consequences for Diversity, Disparity, and Inclusion on Campus." 2018.
• The Society for the Psychological Study of Social Issues, Grants-in-Aid ($1,998). "De-escalating Conflict in International Rivalries".
- Brian Nosek, University of Virginia. http://cos.io/ | http://briannosek.com/
False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Joseph P. Simmons (The Wharton School, University of Pennsylvania), Leif D. Nelson (Haas School of Business, University of California, Berkeley), and Uri Simonsohn (The Wharton School, University of Pennsylvania). General Article, Psychological Science XX(X), 1-8. DOI: 10.1177/0956797611417632

Abstract: In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.

Keywords: methodology, motivated reasoning, publication, disclosure. Received 3/17/11; revision accepted 5/23/11.

Our job as scientists is to discover truths about the world. We generate hypotheses, collect data, and examine whether or not the data are consistent with those hypotheses. Which control variables should be considered? Should specific measures be combined or transformed or both?
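The paper's simulations cover several researcher degrees of freedom; a minimal sketch of one of them, optional stopping, is below (parameter choices are assumptions for illustration, not the authors' code). Testing after 20 observations per cell and, if the result is not significant, adding 10 more and testing again pushes the false-positive rate above the nominal .05 even though no true effect exists.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n_initial=20, n_added=10, alpha=0.05):
    """One two-group study with NO true effect, analyzed with a peek:
    test at n_initial per group; if not significant, collect n_added
    more per group and test again."""
    a = rng.normal(size=n_initial)
    b = rng.normal(size=n_initial)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        return True
    a = np.concatenate([a, rng.normal(size=n_added)])
    b = np.concatenate([b, rng.normal(size=n_added)])
    _, p = stats.ttest_ind(a, b)
    return p < alpha

n_sims = 10_000
rate = sum(one_study() for _ in range(n_sims)) / n_sims
# The realized false-positive rate lands noticeably above .05
# (roughly .07-.08 in this scenario).
print(f"False-positive rate: {rate:.3f}")
```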
- "Fake Results": The Reproducibility Crisis in Research and Open Science Solutions
Andrée Rathemacher, University of Rhode Island, [email protected]. DigitalCommons@URI, Technical Services Faculty Presentations (2017). This work is licensed under a Creative Commons Attribution 4.0 License.

Recommended Citation: Rathemacher, Andrée, ""Fake Results": The Reproducibility Crisis in Research and Open Science Solutions" (2017). Technical Services Faculty Presentations, Paper 48. http://digitalcommons.uri.edu/lib_ts_presentations/48

"It can be proven that most claimed research findings are false." — John P. A. Ioannidis, 2005

Those are the words of John Ioannidis (yo-NEE-dees) in a highly-cited article from 2005. Now based at Stanford University, Ioannidis is a meta-scientist who conducts "research on research" with the goal of making improvements. Sources: Ioannidis, John P. A., "Why Most Published Research Findings Are False."
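The argument behind Ioannidis's claim is a base-rate calculation. Ignoring the bias and multiple-team effects his paper also models, if a fraction pi of tested hypotheses is true, tests have power 1 - beta, and the significance level is alpha, then the share of significant findings that are true (the positive predictive value) is pi(1 - beta) / (pi(1 - beta) + alpha(1 - pi)). A quick sketch with illustrative numbers:

```python
def ppv(prior, power=0.80, alpha=0.05):
    """Positive predictive value: the fraction of statistically
    significant findings that reflect a true effect."""
    true_pos = prior * power          # true hypotheses that test positive
    false_pos = (1 - prior) * alpha   # false hypotheses that test positive
    return true_pos / (true_pos + false_pos)

# In exploratory fields where few tested hypotheses are true,
# most significant findings are false even with no bias at all.
for prior in (0.50, 0.10, 0.01):
    print(f"prior = {prior:.2f} -> PPV = {ppv(prior):.2f}")
# prior = 0.50 -> PPV = 0.94
# prior = 0.10 -> PPV = 0.64
# prior = 0.01 -> PPV = 0.14
```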
- Why Replications Do Not Fix the Reproducibility Crisis: A Model and Evidence from a Large-Scale Vignette Experiment
Adam J. Berinsky, James N. Druckman, and Teppei Yamamoto. Department of Political Science, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139; Department of Political Science, Northwestern University, 601 University Place, Evanston, IL 60208. This manuscript was compiled on January 9, 2018.

Abstract: There has recently been a dramatic increase in concern about whether "most published research findings are false" (Ioannidis 2005). While the extent to which this is true in different disciplines remains debated, less contested is the presence of "publication bias," which occurs when publication decisions depend on factors beyond research quality, most notably the statistical significance of an effect. Evidence of this "file drawer problem" and related phenomena across fields abounds, suggesting that an emergent scientific consensus may represent false positives.

From the introduction: ... collective "file drawer" (4). When publication decisions depend on factors beyond research quality, the emergent scientific consensus may be skewed. Encouraging replication seems to be one way to correct a biased record of published research resulting from this file drawer problem (5–7). Yet, in the current landscape, one must also consider potential publication biases at the replication stage. While this was not an issue for OSC since they relied on over 250 scholars to replicate the ...
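A toy simulation of the file-drawer mechanism the excerpt describes (all numbers are assumptions for illustration, not from the manuscript): when only significant, directionally positive results reach print, the published record substantially overstates a small true effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_effect = 0.1   # small true standardized effect (assumed)
n = 30              # per-group sample size (assumed)
published = []

for _ in range(20_000):
    treat = rng.normal(true_effect, 1.0, size=n)
    ctrl = rng.normal(0.0, 1.0, size=n)
    _, p = stats.ttest_ind(treat, ctrl)
    est = treat.mean() - ctrl.mean()
    if p < 0.05 and est > 0:   # the file drawer: only these appear
        published.append(est)

print(f"true effect:         {true_effect:.2f}")
print(f"mean published est.: {np.mean(published):.2f}")  # ~0.6, inflated
```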
- 2020 Impact Report
Center for Open Science, Impact Report 2020: "Maximizing the impact of science together."

COS Mission: Our mission is to increase the openness, integrity, and reproducibility of research. But we don't do this alone. COS partners with stakeholders across the research community to advance the infrastructure, methods, norms, incentives, and policies shaping the future of research to achieve the greatest impact on improving credibility and accelerating discovery.

Letter from the Executive Director: "Show me" not "trust me": Science doesn't ask for trust, it earns trust with transparency.

The credibility of science has center stage in 2020. A raging pandemic. Partisan interests. Economic and health consequences. Misinformation everywhere. An amplified desire for certainty on what will happen and how to address it. In this climate, all public health and economic research will be politicized. All findings are understood through a political lens. When the findings are against partisan interests, the scientists are accused of reporting the outcomes they want and avoiding the ones they don't. When the findings are aligned with partisan ...

Science is trustworthy because it does not trust itself. Transparency is a replacement for trust. Transparency fosters self-correction when there are errors and increases confidence when there are not. Transparency is critical for maintaining science's credibility and earning public trust. The events of 2020 make clear the urgency and potential consequences of losing that credibility and trust.

The Center for Open Science is profoundly grateful for all of the collaborators, partners, and supporters who have helped advance its mission to increase openness, integrity, and reproducibility of research. Despite the practical, economic, and health challenges, 2020 was a remarkable year for open science.
- Maximizing the Reproducibility of Your Research
Open Science Collaboration (in press). Maximizing the reproducibility of your research. In S. O. Lilienfeld & I. D. Waldman (Eds.), Psychological Science Under Scrutiny: Recent Challenges and Proposed Solutions. New York, NY: Wiley.

Authors' Note: Preparation of this chapter was supported by the Center for Open Science and by a Veni Grant (016.145.049) awarded to Hans IJzerman. Correspondence can be addressed to Brian Nosek, [email protected]. Collaboration members: Alexander A. Aarts (Nuenen, The Netherlands), Frank A. Bosco (Virginia Commonwealth University), Katherine S. Button (University of Bristol), Joshua Carp (Center for Open Science), Susann Fiedler (Max Planck Institute for Research on Collective Goods), James G. Field (Virginia Commonwealth University), Roger Giner-Sorolla (University of Kent), Hans IJzerman (Tilburg University), Melissa Lewis (Center for Open Science), Marcus Munafò (University of Bristol), Brian A. Nosek (University of Virginia), Jason M. Prenoveau (Loyola University Maryland), Jeffrey R. Spies (Center for Open Science).

Commentators in this book and elsewhere describe evidence that modal scientific practices in design, analysis, and reporting are interfering with the credibility and veracity of the published literature (Begley & Ellis, 2012; Ioannidis, 2005; Miguel et al., 2014; Simmons, Nelson, & Simonsohn, 2011). The reproducibility of published findings is unknown (Open Science Collaboration, 2012a), but concern that it is lower than desirable is widespread.
- Research Idea and Literature Review on the Effect of Affective Forecasting on Emotional Experience
Sophie Liu, Waterloo Collegiate Institute, Waterloo, N2L 3P2, Canada. International Journal of New Developments in Engineering and Society, ISSN 2522-3488, Vol. 4, Issue 4: 123-132. DOI: 10.25236/IJNDES.040411

Abstract: According to the study by Timothy Wilson and Daniel Gilbert on affective forecasting (2003), people's predictions of their emotional reactions to future events can influence their actual emotional responses. The aim of this research idea is to further investigate the impact of affective forecasting on the intensity of affective experience by conducting two experiments in which participants would be divided into two groups based on their predictions of the intensity of their emotional experience of a fear-inducing event and a happiness-inducing event; the intensity of the actual emotional experience of the two groups would then be recorded and compared. Several related studies on affective forecasting are reviewed in the introduction section.

Keywords: Affective Forecasting, Affective Experience, Expectation Effect, Literature Review, Research Proposal

Introduction: Affective forecasting [1] is the prediction of one's future affect (feelings) when experiencing certain events. The concept was first investigated by Kahneman and Snell as "hedonic forecasts" when they examined its impact on decision making in 1990 [2]. In 2003, the psychologists Timothy Wilson and Daniel Gilbert gave it the name "affective forecasting" [1]. While earlier research focused on the emotional forecasts rather than the actual responses, other researchers (e.g., Baron, 1992; Gilbert, Driver-Linn, & Wilson, 2002; Gilbert & Wilson, 2000; Wilson & Gilbert, 2003; Loewenstein & Frederick, 1997) later moved on to examine the accuracy of forecasts by measuring both the predicted and the experienced emotional response.
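A sketch of how the proposed comparison might be analyzed (the data, group sizes, and effect here are invented purely for illustration): experienced intensity is compared between participants who predicted high versus low intensity, using an independent-samples t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical self-reported intensity (1-10 scale) of the actual
# emotional experience, grouped by participants' prior predictions.
high_predictors = rng.normal(7.0, 1.5, size=40).clip(1, 10)
low_predictors = rng.normal(6.0, 1.5, size=40).clip(1, 10)

t, p = stats.ttest_ind(high_predictors, low_predictors)
print(f"t = {t:.2f}, p = {p:.4f}")
# A significant difference in the predicted direction would be
# consistent with forecasts shaping the experience itself.
```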
- Intro to Science Studies I
PHIL 209A / SOCG 255A / HIGR 238 / COGR 225A, Fall 2017. Instructor: Kerry McKenzie, [email protected]. Seminars: Tuesday 9.30-12.20pm, HSS 3027. Office Hours: Wednesday 2-4pm, HSS 8088.

Overview: This course is a philosophically slanted introduction to Science Studies. Our central question is a conceptual one, whose relevance to Science Studies should speak for itself – namely, what is science, and what distinguishes science from other fields? In grappling with this question we'll familiarize ourselves with central works in the philosophy of science canon, and make glancing acquaintance with some more contemporary issues in scientific epistemology and metaphysics. But our guiding motif is a normative one: what, if anything, makes science entitled to the privileged status that it enjoys within culture?

In more detail: The 'question of demarcation' – that of what, if anything, makes what we call 'science' science – was a central preoccupation of many of the major 20th-century philosophers of science. While interest in this topic waned after the appearance of Larry Laudan's 'Demise of the Demarcation Problem' in 1983, the question of what separates science from 'pseudoscience' is now making something of a comeback. In this course, we will review the approaches to demarcation offered by the philosophers Popper, Kuhn, and Lakatos – all of which serve as concise introductions to the dominant themes of their work as a whole – and then examine some more contemporary approaches more centred on pragmatics and the philosophy of language. We will then consider how homeopathy – for most a pseudoscience par excellence – fares with regard to the criteria we'll have studied.
- David A. Reinhard
University of Massachusetts Amherst, Psychology of Peace and Violence Program, Department of Psychological and Brain Sciences, 135 Hicks Way, Tobin Hall, Room 627, Amherst, MA 01003. [email protected] | www.david-reinhard.com (another version of the CV excerpted above; Education, Appointments, and Research Interests as listed there)

Fellowships and Grants:
• University of Virginia Presidential Fellowship in Data Science ($28,000). "The role of emotions in political discourse". 2016-2017.
• The Society for Personality and Social Psychology, Small Research Grant ($1,500). "De-escalating Conflict in International Rivalries". 2018.

Publications:
• Clore, G.L. & Reinhard, D.A. (in press). Emotional intensity: It's the thought that counts. In R. Davidson, A. Shackman, A. Fox, & R. Lapate (Eds.), The Nature of Emotion: A volume of short essays addressing fundamental questions in emotion. Oxford: Oxford University Press.