http://cos.io/ | Brian Nosek, University of Virginia | http://briannosek.com/

Total Pages: 16

File Type: PDF, Size: 1020 KB

False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant
Joseph P. Simmons (1), Leif D. Nelson (2), and Uri Simonsohn (1)
(1) The Wharton School, University of Pennsylvania; (2) Haas School of Business, University of California, Berkeley
General Article. Psychological Science XX(X) 1-8. © The Author(s) 2011. Reprints and permission: sagepub.com/journalsPermissions.nav. DOI: 10.1177/0956797611417632. http://pss.sagepub.com

Abstract
In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.

Keywords: methodology, motivated reasoning, publication, disclosure
Received 3/17/11; Revision accepted 5/23/11

Our job as scientists is to discover truths about the world. We generate hypotheses, collect data, and examine whether or not the data are consistent with those hypotheses. Although we aspire to always be accurate, errors are inevitable.

Perhaps the most costly error is a false positive, the incorrect rejection of a null hypothesis. First, once they appear in the literature, false positives are particularly persistent. Because null results have many possible causes, failures to replicate previous findings are never conclusive. Furthermore, because it is uncommon for prestigious journals to publish null findings or exact replications, researchers have little incentive to even attempt them. Second, false positives waste resources: They inspire investment in fruitless research programs and can lead to ineffective policy changes. Finally, a field known for publishing false positives risks losing its credibility.

In this article, we show that despite the nominal endorsement of a maximum false-positive rate of 5% (i.e., p ≤ .05), current standards for disclosing details of data collection and analyses make false positives vastly more likely. In fact, it is unacceptably easy to publish "statistically significant" evidence consistent with any hypothesis.

The culprit is a construct we refer to as researcher degrees of freedom. In the course of collecting and analyzing data, researchers have many decisions to make: Should more data be collected? Should some observations be excluded? Which conditions should be combined and which ones compared? Which control variables should be considered? Should specific measures be combined or transformed or both?

It is rare, and sometimes impractical, for researchers to make all these decisions beforehand. Rather, it is common (and accepted practice) for researchers to explore various analytic alternatives, to search for a combination that yields "statistical significance," and to then report only what "worked." The problem, of course, is that the likelihood of at least one (of many) analyses producing a falsely positive finding at the 5% level is necessarily greater than 5%.

This exploratory behavior is not the by-product of malicious intent, but rather the result of two factors: (a) ambiguity in how best to make these decisions and (b) the researcher's desire to find a statistically significant result. A large literature documents that people are self-serving in their interpretation ...

Corresponding Authors:
Joseph P. Simmons, The Wharton School, University of Pennsylvania, 551 Jon M. Huntsman Hall, 3730 Walnut St., Philadelphia, PA 19104. E-mail: [email protected]
Leif D. Nelson, Haas School of Business, University of California, Berkeley, Berkeley, CA 94720-1900. E-mail: [email protected]
Uri Simonsohn, The Wharton School, University of Pennsylvania, 548 Jon M. Huntsman Hall, 3730 Walnut St., Philadelphia, PA 19104. E-mail: [email protected]

Electronic copy available at: http://ssrn.com/abstract=1850704
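The multiplicity claim is easy to verify with a back-of-the-envelope calculation. The sketch below is ours, not from the paper, and it assumes the k analyses are independent (real researcher degrees of freedom are correlated, which is why Simmons and colleagues rely on simulation instead):

```python
# Probability of at least one false positive across k independent tests of a
# true null hypothesis, each conducted at the 5% significance level.
alpha = 0.05
for k in (1, 2, 5, 10):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} analyses -> P(at least one false positive) = {p_any:.3f}")
# Output: 0.050, 0.098, 0.226, 0.401 (well above the nominal 5% for k > 1)
```

Even this simplified calculation shows the nominal 5% guarantee evaporating after a handful of undisclosed analytic alternatives; the paper's own simulations, which allow decisions to be combined, find higher rates still.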
Open access, freely available online

Essay
Why Most Published Research Findings Are False
John P. A. Ioannidis

Summary
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.

Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs, from clinical trials and traditional epidemiological studies [1-3] to the most modern molecular research [4,5]. There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims [6-8]. However, this should not be surprising. It can be proven that most claimed research findings are false. Here I will examine the key factors that influence this problem and some corollaries thereof.

Modeling the Framework for False Positive Findings
Several methodologists have pointed out [9-11] that the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05. Research is not most appropriately represented and summarized by p-values, but, unfortunately, there is a widespread notion that medical research articles should be interpreted based only on p-values. Research findings are defined here as any relationship reaching formal statistical significance, e.g., effective interventions, informative predictors, risk factors, or associations. "Negative" research is also very useful. "Negative" is actually a misnomer, and the misinterpretation is widespread. However, here we will target relationships that investigators claim exist, rather than null findings.

As has been shown previously, the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance [10,11]. Consider a 2 × 2 table in which research findings are compared against the gold standard of true relationships in a scientific field. In a research field both true and false hypotheses can be made about the presence of relationships. Let R be the ratio of the number of "true relationships" to "no relationships" among those tested in the field. R is characteristic of the field and can vary a lot depending on whether the field targets highly likely relationships or searches for only one or a few true relationships among thousands and millions of hypotheses that may be postulated. Let us also consider, for computational simplicity, circumscribed fields where either there is only one true relationship (among many that can be hypothesized) or the power is similar to find any of the several existing true relationships. The pre-study probability of a relationship being true is R/(R + 1). The probability of a study finding a true relationship reflects the power 1 − β (one minus the Type II error rate). The probability of claiming a relationship when none truly exists reflects the Type I error rate, α. Assuming that c relationships are being probed in the field, the expected values of the 2 × 2 table are given in Table 1. After a research finding has been claimed based on achieving formal statistical significance, the post-study probability that it is true is the positive predictive value, PPV. The PPV is also the complementary probability of what Wacholder et al. have called the false positive report probability [10]. According to the 2 × 2 table, one gets PPV = (1 − β)R/(R − βR + α). A research finding is thus ...

Citation: Ioannidis JPA (2005) Why most published research findings are false. PLoS Med 2(8): e124.
Copyright: © 2005 John P. A. Ioannidis. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abbreviation: PPV, positive predictive value
John P. A. Ioannidis is in the Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece, and Institute for Clinical Research and Health Policy Studies, Department of Medicine, Tufts-New England Medical Center, Tufts University School of Medicine, Boston, Massachusetts, United States of America. E-mail: [email protected]
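The closed-form PPV expression is simple enough to compute directly. A minimal sketch follows; the α, β, and R values are illustrative assumptions, not taken from the essay's tables:

```python
# Ioannidis's positive predictive value: the post-study probability that a
# claimed finding is true, given pre-study odds R of a true relationship,
# Type I error rate alpha, and Type II error rate beta (power = 1 - beta).
def ppv(R: float, alpha: float = 0.05, beta: float = 0.20) -> float:
    return (1 - beta) * R / (R - beta * R + alpha)

print(round(ppv(R=0.1), 3))  # 0.615: 80% power, 1 true relationship per 10 probed
print(round(ppv(R=1.0), 3))  # 0.941: 1-to-1 pre-study odds fare much better
```

A finding is more likely true than false only when (1 − β)R exceeds α; with R = 0.1 and 80% power, a "significant" result is true barely three times in five.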
Recommended publications
  • Psychology as a Robust Science, Dr Amy Orben, Lent Term 2020
    Psychology as a Robust Science, Dr Amy Orben, Lent Term 2020
    When: Lent Term 2020; Wednesdays 2-4pm (Weeks 0-6; 15.1.2020 – 26.2.2020), Mondays 2-4pm (Week 7; 2.3.2020)
    Where: Lecture Theatre, Experimental Psychology Department, Downing Site
    Summary: Is psychology a robust science? To answer such a question, this course will encourage you to think critically about how psychological research is conducted and how conclusions are drawn. To enable you to truly understand how psychology functions as a science, however, this course will also need to discuss how psychologists are incentivised, how they publish and how their beliefs influence the inferences they make. By engaging with such issues, this course will probe and challenge the basic features and functions of our discipline. We will uncover multiple methodological, statistical and systematic issues that could impair the robustness of scientific claims we encounter every day. We will discuss the controversy around psychology and the replicability of its results, while learning about new initiatives that are currently reinventing the basic foundations of our field. The course will equip you with some of the basic tools necessary to conduct robust psychological research fit for the 21st century. The course will be based on a mix of set readings, class discussions and lectures. Readings will include a diverse range of journal articles, reviews, editorials, blog posts, newspaper articles, commentaries, podcasts, videos, and tweets. No exams or papers will be set, but come along with a critical eye and a willingness to discuss some difficult and controversial issues.
    Core readings • Chris Chambers (2017).
  • Promoting an Open Research Culture
    Promoting an open research culture. Brian Nosek, University of Virginia -- Center for Open Science. http://briannosek.com/ -- http://cos.io/
    The McGurk Effect: Ba Ba? Da Da? Ga Ga? (McGurk & MacDonald, 1976, Nature). Adelson, 1995.
    Norms vs. counternorms (Anderson, Martinson, & DeVries):
    Communality (open sharing) vs. secrecy (closed)
    Universalism (evaluate research on its own merit) vs. particularism (evaluate research by reputation)
    Disinterestedness (motivated by knowledge and discovery) vs. self-interestedness (treat science as a competition)
    Organized skepticism (consider all new evidence, even against one's prior work) vs. organized dogmatism (invest career promoting one's own theories, findings)
    Quality vs. quantity
  • "Fake Results": The Reproducibility Crisis in Research and Open Science Solutions, Andrée Rathemacher, University of Rhode Island, [email protected]
    University of Rhode Island DigitalCommons@URI Technical Services Faculty Presentations Technical Services 2017
    "Fake Results": The Reproducibility Crisis in Research and Open Science Solutions. Andrée Rathemacher, University of Rhode Island, [email protected]
    Creative Commons License: This work is licensed under a Creative Commons Attribution 4.0 License. Follow this and additional works at: http://digitalcommons.uri.edu/lib_ts_presentations Part of the Scholarly Communication Commons, and the Scholarly Publishing Commons
    Recommended Citation: Rathemacher, Andrée, ""Fake Results": The Reproducibility Crisis in Research and Open Science Solutions" (2017). Technical Services Faculty Presentations. Paper 48. http://digitalcommons.uri.edu/lib_ts_presentations/48
    This Speech is brought to you for free and open access by the Technical Services at DigitalCommons@URI. It has been accepted for inclusion in Technical Services Faculty Presentations by an authorized administrator of DigitalCommons@URI. For more information, please contact [email protected].
    "Fake Results": The Reproducibility Crisis in Research and Open Science Solutions
    "It can be proven that most claimed research findings are false." — John P. A. Ioannidis, 2005
    Those are the words of John Ioannidis (yo-NEE-dees) in a highly-cited article from 2005. Now based at Stanford University, Ioannidis is a meta-scientist who conducts "research on research" with the goal of making improvements. Sources: Ioannidis, John P. A. "Why Most
  • 2020 Impact Report
    Center for Open Science IMPACT REPORT 2020. Maximizing the impact of science together.
    COS Mission: Our mission is to increase the openness, integrity, and reproducibility of research. But we don't do this alone. COS partners with stakeholders across the research community to advance the infrastructure, methods, norms, incentives, and policies shaping the future of research to achieve the greatest impact on improving credibility and accelerating discovery.
    Letter from the Executive Director: "Show me" not "trust me": Science doesn't ask for trust, it earns trust with transparency.
    Science is trustworthy because it does not trust itself. Transparency is a replacement for trust. Transparency fosters self-correction when there are errors and increases confidence when there are not.
    The credibility of science has center stage in 2020. A raging pandemic. Partisan interests. Economic and health consequences. Misinformation everywhere. An amplified desire for certainty on what will happen and how to address it. In this climate, all public health and economic research will be politicized. All findings are understood through a political lens. When the findings are against partisan interests, the scientists are accused of reporting the outcomes they want and avoiding the ones they don't. When the findings are aligned with partisan ...
    Transparency is critical for maintaining science's credibility and earning public trust. The events of 2020 make clear the urgency and potential consequences of losing that credibility and trust.
    The Center for Open Science is profoundly grateful for all of the collaborators, partners, and supporters who have helped advance its mission to increase openness, integrity, and reproducibility of research. Despite the practical, economic, and health challenges, 2020 was a remarkable year for open science.
  • The Trouble with Scientists How One Psychologist Is Tackling Human Biases in Science
    How One Psychologist Is Tackling Human Biases in Science http://nautil.us/issue/24/error/the-trouble-with-scientists
    BIOLOGY | SCIENTIFIC METHOD
    The Trouble With Scientists: How one psychologist is tackling human biases in science. By Philip Ball; illustration by Carmen Segovia. May 14, 2015
    Sometimes it seems surprising that science functions at all. In 2005, medical science was shaken by a paper with the provocative title "Why most published research findings are false."1 Written by John Ioannidis, a professor of medicine at Stanford University, it didn't actually show that any particular result was wrong. Instead, it showed that the statistics of reported positive findings was not consistent with how often one should expect to find them. As Ioannidis concluded more recently, "many published research findings are false or exaggerated, and an estimated 85 percent of research resources are wasted."2
    It's likely that some researchers are consciously cherry-picking data to get their work published. And some of the problems surely lie with journal publication policies. But the problems of false findings often begin with researchers unwittingly fooling themselves: they fall prey to cognitive biases, common modes of thinking that lure us toward wrong but convenient or attractive conclusions. "Seeing the reproducibility rates in psychology and other empirical science, we can safely say that something is not working out the way it should," says Susann Fiedler, a behavioral economist at the Max Planck Institute for Research on Collective Goods in Bonn, Germany.
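Ioannidis's consistency check can be paraphrased with a toy calculation. A rough sketch under assumed numbers (ours, not Ball's or Ioannidis's): given a field's power, significance threshold, and share of true hypotheses, only a modest fraction of all tested hypotheses should come out positive.

```python
# Expected share of "significant" outcomes among all tested hypotheses:
# true effects detected at the field's power, plus false positives at alpha.
power, alpha, true_share = 0.80, 0.05, 0.10   # assumed illustrative values
expected_positive = power * true_share + alpha * (1 - true_share)
print(f"expected share of positive findings: {expected_positive:.3f}")  # 0.125
```

If the published record in such a field were overwhelmingly positive, the reported statistics could not be an unfiltered record of the tests actually run; that mismatch is the kind of inconsistency Ioannidis's paper formalized.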
  • Researchers Overturn Landmark Study on the Replicability of Psychological Science
    Researchers overturn landmark study on the replicability of psychological science
    By Peter Reuell, Harvard Staff Writer. Category: HarvardScience. Subcategory: Culture & Society. Keywords: psychology, psychological science, replication, replicate, reproduce, reproducibility, Center for Open Science, Gilbert, Daniel Gilbert, King, Gary King, Science, Harvard, FAS, Faculty of Arts and Sciences, Reuell, Peter Reuell
    Summary: A 2015 study claiming that more than half of all psychology studies cannot be replicated turns out to be wrong. Harvard researchers have discovered that the study contains several statistical and methodological mistakes, and that when these are corrected, the study actually shows that the replication rate in psychology is quite high – indeed, it is statistically indistinguishable from 100%.
    According to two Harvard professors and their collaborators, a 2015 landmark study showing that more than half of all psychology studies cannot be replicated is actually wrong. In an attempt to determine the replicability of psychological science, a consortium of 270 scientists known as The Open Science Collaboration (OSC) tried to replicate the results of 100 published studies. More than half of them failed, creating sensational headlines worldwide about the "replication crisis" in psychology. But an in-depth examination of the data by Daniel Gilbert (Edgar Pierce Professor of Psychology at Harvard University), Gary King (Albert J. Weatherhead III University Professor at Harvard University), Stephen Pettigrew (doctoral student in the Department of Government at Harvard University), and Timothy Wilson (Sherrell J. Aston Professor of Psychology at the University of Virginia) has revealed that the OSC made some serious mistakes that make this pessimistic conclusion completely unwarranted: The methods of many of the replication studies turn out to be remarkably different from the originals and, according to Gilbert, King, Pettigrew, and Wilson, these "infidelities" had two important consequences.
  • Open Science
    Open Science
    Barbara A. Spellman, University of Virginia; Elizabeth A. Gilbert; Katherine S. Corker, Grand Valley State University
    Draft of: 20 September 2017
    Abstract
    Open science is a collection of actions designed to make scientific processes more transparent and results more accessible. Its goal is to build a more replicable and robust science; it does so using new technologies, altering incentives, and changing attitudes. The current movement toward open science was spurred, in part, by a recent series of unfortunate events within psychology and other sciences. These events include the large number of studies that have failed to replicate and the prevalence of common research and publication procedures that could explain why. Many journals and funding agencies now encourage, require, or reward some open science practices, including pre-registration, providing full materials, posting data, distinguishing between exploratory and confirmatory analyses, and running replication studies. Individuals can practice and promote open science in their many roles as researchers, authors, reviewers, editors, teachers, and members of hiring, tenure, promotion, and awards committees. A plethora of resources are available to help scientists, and science, achieve these goals.
    Keywords: data sharing, file drawer problem, open access, open science, preregistration, questionable research practices, replication crisis, reproducibility, scientific integrity
    Thanks to Brent Donnellan (big thanks!), Daniël Lakens, Calvin Lai, Courtney Soderberg, and Simine Vazire
    When we (the authors) look back a couple of years, to the earliest outline of this chapter, the open science movement within psychology seemed to be in its infancy.
  • Improving Reproducibility in Research: the Role of Measurement Science
    Volume 124, Article No. 124024 (2019) https://doi.org/10.6028/jres.124.024 Journal of Research of the National Institute of Standards and Technology Improving Reproducibility in Research: The Role of Measurement Science Robert J. Hanisch1, Ian S. Gilmore2, and Anne L. Plant1 1National Institute of Standards and Technology, Gaithersburg, MD 20899, USA 2National Physical Laboratory, Teddington, TW11 0LW, United Kingdom [email protected] [email protected] [email protected] Summary: • We report on a workshop held 1–3 May 2018 at the National Physical Laboratory, Teddington, U.K., in which the focus was how the world’s national metrology institutes might help to address the challenges of reproducibility of research. • The workshop brought together experts from the measurement and wider research communities in physical sciences, data analytics, life sciences, engineering, and geological science. The workshop involved 63 participants from metrology laboratories (38), academia (16), industry (5), funding agencies (2), and publishers (2). The participants came from the U.K., the United States, Korea, France, Germany, Australia, Bosnia and Herzegovina, Canada, Turkey, and Singapore. • Topics explored how good measurement practice and principles could foster confidence in research findings and how to manage the challenges of increasing volume of data in both industry and research. Key words: confidence; replicability; uncertainty. Accepted: September 5, 2019 Published: September 18, 2019 https://doi.org/10.6028/jres.124.024 1. Motivation and Scope Much has been written in the press recently suggesting that there is a “reproducibility crisis” in scientific research. This stems from well-publicized papers such as those by Brian Nosek et al.
  • Metascience 2019 Symposium Program
    DAY 4 | SUNDAY, SEPTEMBER 8th, 2019
    PROGRAM
    6:30 – 7:30 AM Breakfast (Reception Room, Sheraton Palo Alto) **For hosted attendees only, registration required**
    8:00 – 9:15 AM FUNDER PANEL: Review and Future Directions. Moderator: Brian Nosek, Center for Open Science, USA. Panelists: Chonnettia Jones, Wellcome Trust, UK; Dawid Potgieter, Templeton World Charity Foundation, BHS; Arthur "Skip" Lupia, National Science Foundation, USA
    COFFEE BREAK
    9:45 – 11:00 AM PANEL DISCUSSION: Reflections on metascience topics and findings. Moderator: Jon Krosnick, Stanford University, USA. Panelists: Jon Yewdell, NIAID/DIR, USA; Lisa Feldman Barrett, Northeastern University, USA; Kathleen Vohs, University of Minnesota, USA; Norbert Schwarz, University of Southern California, USA
    BREAK
    11:15 – 12:30 PM PANEL DISCUSSION: Journalists' perspective on metascience and engagement with the broader public. Moderator: Leif Nelson, University of California, USA. Panelists: Ivan Oransky, Retraction Watch, USA; Christie Aschwanden, Emerging Form, USA; Richard Harris, National Public Radio, USA; Stephanie M. Lee, BuzzFeed News, USA
    LUNCH BREAK (CENTENNIAL LAWN)
    1:30 PM UNCONFERENCE (Centennial Lawn): Breakouts, Open Space for Collaboration **Beverages & Snacks provided**
    GENERAL INFORMATION
    ADDRESS: Cubberley Auditorium, School of Education, Stanford University, 485 Lasuen Mall, Stanford, CA 94305
    TAXI/UBER/LYFT: Use the following address: 615 Escondido Road, Stanford, CA 94305. Walking to Cubberley Auditorium from the drop-off: it is a 10-15 minute walk; please allow yourself plenty of time for a stroll through the beautiful Stanford campus!
    PARKING: Complimentary conference parking is available in the Galvez Lot. Follow the Metascience signs to Cubberley Auditorium. Walking from the Galvez Lot: this lot is a 15-20 minute walk from the Cubberley Auditorium.
    CONTACT
  • The Open Science Framework: Improving Science by Making It Open and Accessible
    The Open Science Framework: Improving Science by Making It Open and Accessible
    Jeffrey Robert Spies, Bryan, Ohio
    M.A. Quantitative Psychology, University of Notre Dame, 2007; B.S. Computer Science, University of Notre Dame, 2004
    A Dissertation presented to the Graduate Faculty of the University of Virginia in Candidacy for the Degree of Doctor of Philosophy, Department of Psychology, University of Virginia, May, 2013
    Abstract
    There currently exists a gap between scientific values and scientific practices. This gap is strongly tied to the current incentive structure that rewards publication over accurate science. Other problems associated with this gap include reconstructing exploratory narratives as confirmatory, the file drawer effect, an overall lack of archiving and sharing, and a singular contribution model - publication - through which credit is obtained. A solution to these problems is increased disclosure, transparency, and openness. The Open Science Framework (http://openscienceframework.org) is an infrastructure for managing the scientific workflow across the entirety of the scientific process, thus allowing the facilitation and incentivization of openness in a comprehensive manner. The current version of the OSF includes tools for documentation, collaboration, sharing, archiving, registration, and exploration.
  • Brian A. Nosek
    Last Updated: July 2, 2019
    BRIAN A. NOSEK
    University of Virginia, Department of Psychology, Box 400400, Charlottesville, VA 22904-4400
    Center for Open Science, 210 Ridge McIntire Rd, Suite 500, Charlottesville, VA 22903-5083
    http://briannosek.com/ | http://cos.io/ | [email protected]
    Positions
    2014- Professor, University of Virginia
    2013- Executive Director, Center for Open Science
    2008-2014 Associate Professor, University of Virginia
    2003-2013 Executive Director, Project Implicit
    2011-2012 Visiting Scholar, CASBS, Stanford University
    2008-2011 Director of Graduate Studies, University of Virginia
    2002-2008 Assistant Professor, University of Virginia
    2005 Visiting Scholar, Stanford University
    2001-2002 Exchange Scholar, Harvard University
    Education
    Ph.D. 2002, Yale University, Psychology. Thesis: Moderators of the relationship between implicit and explicit attitudes. Advisor: Mahzarin R. Banaji
    M.Phil. 1999, Yale University, Psychology. Thesis: Uses of response latency in social psychology
    M.S. 1998, Yale University, Psychology. Thesis: Gender differences in implicit attitudes toward mathematics
    B.S. 1995, California Polytechnic State University, San Luis Obispo, Psychology. Minors: Computer Science and Women's Studies
    Center for Open Science: co-Founder, Executive Director. Web site: http://cos.io/ Primary infrastructure: http://osf.io/ A non-profit organization that aims to increase openness, integrity, and reproducibility of research. Building tools to facilitate scientists' workflow, project management, transparency, and sharing. Community-building for open science practices. Supporting metascience research.
    Project Implicit: co-Founder. Information Site: http://projectimplicit.net/ Research and Education Portal: https://implicit.harvard.edu/ Project Implicit is a multidisciplinary collaboration and non-profit for research and education in the social and behavioral sciences, especially for research in implicit social cognition.
  • Dr. Brian Nosek, Co-Founder and Executive Director, Center for Open Science
    Testimony of Brian A. Nosek, Ph.D., Executive Director, Center for Open Science; Professor, Department of Psychology, University of Virginia. Before the Committee on Science, Space, and Technology, U.S. House of Representatives, November 13, 2019. "Strengthening Transparency or Silencing Science? The Future of Science in EPA Rulemaking"
    Chairwoman Johnson, Ranking Member Lucas, and Members of the Committee, on behalf of myself and the Center for Open Science, thank you for the opportunity to discuss the role of promoting transparency and reproducibility for maximizing the return on research investments, and responsible management of research transparency with competing interests of privacy protections for sensitive data. The bottom line summary of my remarks is:
    1. Making open the default for research plans, data, materials, code, and outcomes will reduce friction in discovery and maximize return on research investments.
    2. Extending existing policy frameworks about transparency and openness across federal agencies will help improve research efficiency. These frameworks can help decision-makers navigate situations in which principles of security and privacy are in conflict with principles of transparency and openness.
    3. Rulemaking should be informed by the best available evidence. Sometimes the best available evidence is based on data that cannot be transparent, has high uncertainty, or has unknown reproducibility. Developing tools that clarify uncertainty will improve policymaking and shape research priorities.
    I joined the faculty at the University of Virginia in the Department of Psychology in 2002. My substantive areas of expertise are research methodology, implicit bias, and the gap between values and practices. In 2013, Jeff Spies and I launched the Center for Open Science (COS) out of my lab as a non-profit technology and culture change organization.