

An Open Science Workflow for More Credible, Rigorous Research

For The Portable Mentor (ed. Mitchell J. Prinstein)

Katherine S. Corker

Author Note

Katherine S. Corker, Grand Valley State University, Department of Psychology, [email protected] https://orcid.org/0000-0002-7971-1678

Thank you to Julia Bottesini and Sarah Schiavone for their thoughtful feedback. All errors and omissions are my own.

Preprint version as of March 19, 2021

Abstract

Part of what distinguishes science from other ways of knowing is that scientists show their work. Yet when probed, it turns out that much of the process of research is hidden away: in personal files, in undocumented conversations, in point-and-click menus, and so on. In recent years, a movement towards more open science has arisen in psychology. Open science practices capture a broad swath of activities designed to take parts of the research process that were previously known only to a research team and make them more broadly accessible (e.g., open data, open analysis code, pre-registration, open materials). Such practices increase the value of research by increasing transparency, which may in turn facilitate higher research quality. Plus, open science practices are now required at many journals. This chapter will introduce open science practices and provide plentiful resources for researchers seeking to integrate these practices into their workflow.

Keywords: Open science, open data, pre-registration, reproducibility, research rigor, transparency

Author Biography

Katie Corker is an associate professor of psychology at Grand Valley State University. She earned her PhD in personality and social psychology at Michigan State University in 2012, just as the events described in this chapter were unfolding. She is a past president and current executive officer of the Society for the Improvement of Psychological Science (SIPS), which she helped to found in order to transform idle discussions about whether psychology needed improving into tangible actions to improve the field. She came to psychology as an undergraduate student who was delighted to learn that you could use science to study humans. It is therefore fitting that as a professor Katie’s work revolves around the study of scientists and their practices. Outside of working hours, Katie loves to travel, and she is happiest on the trail, at the beach, or with her friends and family at the local pub quiz.

An Open Science Workflow for More Credible, Rigorous Research

Recent years have marked a relatively tumultuous time in the history of psychological science. The past decade saw the publication of a landmark paper that attempted to replicate 100 studies and estimated that just 39% of studies published in top psychology journals were replicable (Open Science Collaboration, 2015). There was also a surplus of studies failing to replicate high profile effects that had long been taken as fact (e.g., Hagger et al., 2016; Harris et al., 2013; Wagenmakers et al., 2016). Suddenly, the foundations of much psychological research seemed very shaky. As with similar evidence in other scientific fields (e.g., biomedicine, criminology), these findings have led to a collective soul searching dubbed the “replication crisis” or the “credibility revolution” (Nelson et al., 2018; Vazire, 2018). Clearly, something about the way scientists had gone about their work in the past wasn’t effective at uncovering replicable findings, and changes were badly needed.

An impressive collection of meta-scientific studies (i.e., studies about scientists and scientific practices) has revealed major shortcomings in standard research and statistical methods (e.g., Button et al., 2013; John et al., 2012; Nuijten et al., 2016; Simmons et al., 2011). These studies point to a clear way to improve not only replicability but also the accuracy of scientific conclusions: open science.

Open science refers to a radically transparent approach to the research process. “Open” refers to sharing – making accessible – parts of the research process that have traditionally been known only to an individual researcher or research team. In a standard research article, authors summarize their research methods and their findings, leaving out many details along the way. Among other things, open science includes sharing research materials (protocols) in full, making data and analysis code publicly available, and pre-registering (i.e., making plans public) study designs, hypotheses, and analysis plans.

Psychology has previously gone through periods of unrest similar to the 2010s, with methodologists and statisticians making persuasive pleas for more transparency and rigor in research (e.g., Bakan, 1966; Meehl, 1978; Cohen, 1994; Kerr, 1998). Yet, it is only now, with improvements in technology and research infrastructure, together with concerted efforts in journals and scientific societies by reformers, that changes have begun to stick (Spellman, 2015). Training in open science practices is now a required part of becoming a research psychologist.

The goal of this chapter is to briefly review the shortcomings in scientific practice that open science practices address and then to give a more detailed account of open science itself. We’ll consider what it means to work openly and offer pragmatic advice for getting started.

Why Open Science?

When new researchers are introduced to the idea of open science, the need for such practices can seem obvious and self-evident. Doesn’t being a scientist logically imply an obligation to transparently show one’s work and subject it to rigorous scrutiny? Yet, abundant evidence reveals that researchers have not historically lived up to this ideal and that the failure to do transparent, rigorous work has hindered scientific progress.

Old Habits Die Hard

Several factors in the past combined to create conditions that encouraged researchers to avoid open science practices. First, incentives in academic contexts have not historically rewarded such behaviors and, in some cases, may have actually punished them (Smaldino & McElreath, 2016). To get ahead in an academic career, publications are the coin of the realm, and jobs, promotions, and accolades can sometimes be awarded based on the number of publications rather than publication quality. Second, human cognitive biases conspire to fool us into thinking we have discovered something when we actually have not (Bishop, 2020). For instance, confirmation bias allows us to selectively interpret results in ways that support our pre-existing beliefs or theories, which may be flawed. Self-serving biases might cause defensive reactions when critics point out errors in our methods or conclusions. Adopting open science practices can expose researchers to cognitive discomfort (e.g., pre-existing beliefs are challenged; higher levels of transparency mean that critics are given ammunition), which we might naturally seek to avoid. Finally, psychology uses an apprenticeship model of researcher training, which means that the practices of new researchers might only be as good as the practices of the more senior academics training them. When questionable research practices are taught as normative by research mentors, higher quality open science practices might be dismissed as methodological pedantry.

Given the abundant evidence of flaws in psychology’s collective body of knowledge, we now know how important it is to overcome the hurdles described here and transition to a higher standard of practice. Incentives are changing, and open science practices are becoming the norm at many journals (Nosek et al., 2015). A new generation of researchers is being trained to employ more rigorous practices. And although the cognitive biases just discussed might be some of the toughest problems to overcome, greater levels of transparency help fortify the ability of the scientific process to serve as a check on researcher biases.

Benefits of Open Science Practices

A number of benefits of open science practices are worth emphasizing. First, increases in transparency make it possible for errors to be detected and for science to self-correct. The self-correcting nature of science is often heralded as a key feature that distinguishes scientific approaches from other ways of knowing. Yet, self-correction is difficult, if not impossible, when details of research are routinely withheld (Vazire & Holcombe, 2020). Second, openly sharing research materials (protocols), analysis code, and data provides new opportunities to extend upon research and adds value above and beyond what a single study would add. For example, future researchers can more easily replicate a study’s methods if they have access to a full protocol and materials; secondary data analysts and meta-analysts can perform novel analyses on raw data if they are shared. Third, collaborative work becomes easier when teams employ the careful documentation that is well honed among followers of open science practices. Even massive collaborations across time and location become possible when research materials and data are shared following similar standards (Moshontz et al., 2018).

Finally, the benefits of open science practices accrue not only to the field at large, but also to individual researchers. Working openly provides a tangible record of your contributions as a researcher, which may be useful when it comes to applying for funding, awards, or jobs. Markowetz (2015) describes five “selfish” reasons to work reproducibly, namely: (a) to avoid “disaster” (i.e., major errors), (b) because it’s easier, (c) to smooth the peer review process, (d) to allow others to build on your work, and (e) to build your reputation. Likewise, McKiernan et al. (2016) review the ample evidence that articles featuring open science practices tend to be cited more often, are discussed more in the media, attract more funding and job offers, and are associated with larger networks of collaborators. Allen and Mehler (2019) review benefits (along with challenges) specifically for early career researchers.

All of this is not to say that there are no costs or downsides to some of the practices discussed here. For one thing, learning and implementing new techniques takes time, though experience shows that you’ll become faster and more efficient with practice. Additionally, unsupportive research mentors or other senior collaborators can make it challenging to embrace open science practices. The power dynamics in such relationships may mean that there is little flexibility in the practices that early career researchers can employ. Trying to propose new techniques can be stressful and might strain advisor-advisee relationships, but see Kathawalla et al. (2021) for rebuttals to these and other common worries.

In spite of these persistent challenges and the old pressures working against adoption of open science practices, I hope to convince you that the benefits of working openly are numerous – both to the field and to individual researchers. As a testament to changing norms and incentives, open science practices are spreading and taking hold in psychology (Christensen et al., 2019; Tenney et al., 2021). Let us consider in more detail what we actually mean by open science practices.

Planning Your Research

Many open science practices boil down to forming or changing your work habits so that more parts of your work are available to be observed by others. But like other healthy habits (eating healthy food, exercising), open science practices may take some initial effort to put into place. You may also find that what works well for others doesn’t work well for you, and it may take some trial and error to arrive at a workflow that is both effective and sustainable. However, the benefits that you’ll reap from establishing these habits – both immediate and delayed – are well worth the effort. It may not seem like it, but there is no better time in your career to begin than now.

Likewise, you may find that many open science practices are most easily implemented early in the research process, during the planning stages. But fear not: if a project is already underway, we’ll consider ways to add transparency to the research process at later stages as well. Here, we’ll discuss using the Open Science Framework (osf.io), along with pre-registration and registered reports, as you plan your research.

Managing the Open Science Workflow: The Open Science Framework

The Open Science Framework (OSF; https://osf.io) is a powerful research management tool. Using a tool like OSF allows you to organize all stages of the research process in one location, which can help you stay organized. OSF is also not tied to any specific academic institution, so you won’t have to worry about transferring your work when you inevitably change jobs (perhaps several times). Other tools exist that can do many of the things OSF can (some researchers like to use GitHub or figshare, for instance), but OSF was specifically created for managing scientific research and has a number of features that make it uniquely suited for the task. OSF’s core functions include (but are not limited to) long-term archival of research materials, analysis code, and data; a flexible but robust pre-registration tool; and support for collaborative workflow management. Later in the chapter, we’ll discuss the ins and outs of each of these practices, but here I want to review a few of the ways that OSF is specialized for these functions.

The main unit of work on OSF is the “project.” Each project has a stable URL and the potential to create an associated digital object identifier (DOI). This means that researchers can make reference to OSF project pages in their research articles without worry that links will cease to function or shared content will become unavailable. A sizable preservation fund promises that content shared on OSF will remain available for at least 50 years, even if the service should cease to operate. This stability makes OSF well suited to host part of the scientific record.

A key feature of projects is that they can be made public (accessible to all) or private (accessible only to contributors). This feature allows you to share your work publicly when you are ready, whether that is immediately or only after a project is complete. Another feature is that projects can be shared using “view-only” links. These links have the option to remove contributor names so that the materials shared in a project are accessible to peer reviewers at journals that use masked review. Projects can have any number of contributors, making it possible to easily work collaboratively even with a large team. An activity tracker gives a detailed and complete account of changes to the project (e.g., adding or removing a file, editing the project wiki page), so you always know who did what, and when, within a project. Another benefit is the ability to easily connect OSF to other tools (e.g., Google Drive, GitHub) to further enhance OSF’s capabilities.

Within projects, it is possible to create nested “components.” Components have their own URLs, DOIs, privacy settings, and contributor lists. It is possible, for instance, to create a component within a project and to restrict access to that component alone while making the rest of the project publicly accessible. If particular parts of a project are sensitive or confidential, components can be a useful way to maintain the privacy of that information. Similarly, perhaps it is necessary for part of a research group to have access to parts of a research project and for others not to have that access. Components allow researchers this fine-grained level of control. Finally, OSF’s pre-registration function allows projects and components to be “frozen” (i.e., saved as time-stamped copies that cannot be edited).
Researchers can opt to pre-register their projects using one of many templates, or they can simply upload the narrative text of their research plans. In this way, researchers and editors can be confident about which elements of a study were pre-specified and which were informed by the research process or outcomes. The review of OSF’s features here is necessarily brief. Soderberg (2018) provides a step-by-step guide for getting started with OSF. Tutorials are also available on the Center for Open Science’s YouTube channel. I recommend selecting a project – perhaps one for which you are the lead contributor – to try out OSF and get familiar with its features in greater detail. Later, you may want to consider using a project template, like the one that I use in my lab (Corker, 2016), to standardize the appearance and organization of your OSF projects.
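If you prefer to script your workflow, OSF can also be driven programmatically. The sketch below uses the rOpenSci osfr package to create a project with nested components and upload files to it. It is an illustration rather than part of any recommended template: the token variable, project title, and file paths are hypothetical placeholders, and you should consult the current osfr documentation before relying on particular functions.

```r
# Minimal sketch using the osfr package (an R client for OSF).
# Assumptions: osfr is installed, and a personal access token generated in your
# OSF account settings is stored in the OSF_PAT environment variable.
library(osfr)

osf_auth(token = Sys.getenv("OSF_PAT"))

# Create a new project (private by default) and two nested components
project   <- osf_create_project(title = "Example Study: Pilot Data Collection")
materials <- osf_create_component(project, title = "Materials")
data_comp <- osf_create_component(project, title = "Data")

# Upload local files (hypothetical paths) to the appropriate components
osf_upload(materials, path = "materials/questionnaire.pdf")
osf_upload(data_comp, path = c("data/raw_data.csv", "data/codebook.csv"))

# Confirm what now lives in the Data component
osf_ls_files(data_comp)
```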

Pre-Registration and Registered Reports

Learning how to pre-register research involves much more than just learning how to use a particular tool (like OSF) to complete the registration process. Like other research methods, training and practice are needed to become skilled at this key open science technique (Tackett et al., 2020). Pre-registration refers to publicly reporting study designs, hypotheses, and/or analysis plans prior to the onset of a research project. Additionally, the pre-registered plan should be shared in an accessible repository, and it should be “read-only” (i.e., not editable after posting). As we’ll see, there are several reasons a researcher might choose to pre-register, along with a variety of benefits of doing so. But the most basic function of the practice is that pre-registration clearly delineates the parts of a research project that were specified before the onset of a project from those parts that were decided on along the way or based on observed data.

Depending on their goals, researchers might pre-register for different reasons (Ledgerwood, 2018; Navarro, 2019; da Silva Frost & Ledgerwood, 2020). First, researchers may want to constrain particular data analytic choices prior to encountering the data. Doing so makes it clear to the researchers, and to readers, that the presented analysis is not merely the one most favorable to the authors’ predictions, nor the one with the lowest p-value. Second, researchers might desire to specify theoretical predictions prior to encountering a result. In so doing, they set up conditions that enable a strong test of the theory, including the possibility of falsifying alternative hypotheses (Platt, 1964). Third, researchers may seek to increase the transparency of their research process, documenting particular plans and, crucially, when those plans were made. In addition to the scientific benefits of transparency, pre-registration can also facilitate more detailed planning than usual, potentially increasing research quality as potential pitfalls are caught early enough to be remedied.

Some of these reasons are more applicable to certain types of research than others, but nearly all research can benefit from some form of pre-registration. For instance, some research is descriptive and does not test hypotheses stemming from a theory. Other research might feature few or no statistical analyses. The theory testing or analytic constraint functions of pre-registration might not be applicable in these instances. However, the benefits of increased transparency and enhanced planning stand to benefit many kinds of research (but see Devezer et al., in press, for a critical take on the value of pre-registration).

A related but distinct practice is Registered Reports (Chambers, 2013). In a registered report, authors submit a study proposal – usually a manuscript consisting of a complete introduction, proposed method, and proposed analysis section – to a journal that offers the format. The manuscript (known at that point as a “stage 1” manuscript) is then peer-reviewed, after which it can be rejected, accepted, or receive a revise and resubmit.
Crucially, once the stage 1 manuscript is accepted (most likely after revision following peer review), the journal agrees to publish the final paper regardless of the statistical significance of the results, provided the agreed-upon plan has been followed – a phase of publication known as “in-principle acceptance.” Once results are in, the paper (at this point known as a “stage 2” manuscript) goes out again for peer review to verify that the study was executed as agreed.

When stage 1 proposals are published (either as stand-alone manuscripts or as supplements to the final stage 2 manuscripts), registered reports allow readers to confirm which parts of a study were planned ahead of time, just like ordinary pre-registrations. Likewise, registered reports limit strategic analytic flexibility, allow strong tests of hypotheses, and increase the transparency of research. Crucially, however, registered reports also address publication bias, because papers are not accepted or rejected on the basis of the outcome of the research. Furthermore, the two-stage peer review process has an even greater potential to improve study quality, because researchers receive the benefit of peer critique during the design phase of a study, when there is still time to correct flaws. Finally, because the publication process is overseen by an editor, undisclosed deviations from the pre-registered plan may be less likely to occur than they are with unreviewed pre-registration. Pragmatically, registered reports might be especially worthwhile in contentious areas of study where it is useful to jointly agree on a critical test ahead of time with peer critics. Authors can also enjoy the promise of acceptance of the final product prior to investing resources in data collection. Table 1 lists guidance and templates that have been developed across different subfields and research methods to enable nearly any study to be pre-registered.

A final conceptual distinction is worth brief mention. Pre-registrations are documentation of researchers’ plans for their studies (in systematic reviews of health research, these documents are known as protocols). When catalogued and searchable, pre-registrations form a registry. In the United States, the most common study registry is clinicaltrials.gov, because the National Institutes of Health requires studies that it funds to be registered there. PROSPERO (Page et al., 2018) is the main registry for health-related systematic reviews. Entries in clinicaltrials.gov and PROSPERO must follow a particular format, and adhering to that format may or may not fulfill researchers’ pre-registration goals (for analytic constraint, for hypothesis testing, or for increasing transparency). For instance, when registering a study in clinicaltrials.gov, researchers must declare their primary outcomes (i.e., dependent variables) and distinguish them from secondary outcomes, but they are not required to submit a detailed analysis plan. A major benefit of study registries is that they track the existence of studies independent of final publications. Registries also allow the detection of questionable research practices like outcome switching (e.g., Goldacre et al., 2019). But entries in clinicaltrials.gov and PROSPERO fall short in many ways when it comes to achieving the various goals of pre-registration discussed above. It is important to distinguish brief registry entries from more detailed pre-registrations and protocols.

Doing the Research

Open science considerations are as relevant when you are actually conducting your research as they are when you are planning it. One of the things you have surely already learned in your graduate training is that research projects often take a long time to complete. It may be several months, or perhaps even longer, after you have planned a study and collected the data before you are actually finalizing a manuscript to submit for publication. And even once an initial draft is completed, you will again have a lengthy wait while the paper is reviewed, after which time you will invariably have to return to the project for revisions. To make matters worse, as your career unfolds, you will begin to juggle multiple such projects simultaneously. Put briefly: you need a robust system of documentation to keep track of these many projects.

In spite of the importance of this topic, most psychology graduate programs offer little in the way of formal training in these practices. Here, I will provide an overview of a few key topics in this area, but you would be well served to dig more deeply into this area on your own. In particular, Briney (2015) provides a lengthy treatment of data management practices. (Here “data” is used in the broad sense to mean information, which includes but extends beyond participant responses.) Henry (2021a, 2021b) provides an overview of many relevant issues as well. Another excellent place to look for help in this area is your university library. Librarians are experts in data management, and libraries often host workshops and give consultations to help researchers improve their practices.

Several practices are part of the array of options available to openly document your research process. Here, I’ll introduce open lab notebooks, open protocols/materials, and open data/analysis code. Klein et al. (2018) provide a detailed, pragmatic look at these topics, highlighting considerations around what to share, how to share, and when to share.

Open Lab Notebooks

One way to track your research as it unfolds is to keep a detailed lab notebook. Recently, some researchers have begun to keep open, digital lab notebooks (Campbell, 2018). Put briefly, open lab notebooks allow outsiders to access the research process in its entirety in real time (Bradley et al., 2011). Open lab notebooks might include entries for data collected, experiments run, analyses performed, and so on. They can also include accounts of decisions made along the way – for instance, to change an analysis strategy or to modify the participant recruitment protocol. Open lab notebooks are a natural complement to pre-registration insofar as a pre-registration spells out a plan for a project, and the lab notebook documents the execution (or alteration) of that plan. In fact, for some types of research, where the a priori plan is relatively sparse, an open lab notebook can be an especially effective way to transparently document exploration as it unfolds.

On a spectrum from completely open research to completely opaque research, the practice of keeping an open lab notebook marks the far (open) end of the scale. For some projects (or some researchers) the costs of keeping a detailed open lab notebook in terms of time and effort might greatly exceed the scientific benefits for transparency and record keeping. Other practices may achieve similar goals more efficiently. But for some projects, the practice could prove invaluable. To decide whether or not an open lab notebook is right for you, consider the examples given in Campbell (2018). You can also see an open notebook in action here: https://osf.io/3n964/ (Koessler et al., 2019).

Open Protocols and Open Materials

A paper’s Method section is designed to describe a study protocol – that is, its design, participants, procedure, and materials – in enough detail that an independent researcher could replicate the study. In actuality, many key details of study protocols are omitted from Method sections (Errington, 2019). To remedy this information gap, researchers should share full study protocols, along with the research materials themselves, as supplemental files. Protocols can include things like complete scripts for experimental research assistants, video demonstrations of techniques (e.g., a participant interaction or a neurochemical assay), and full copies of study questionnaires. The goal is for another person to be able to execute a study fully without any assistance from the original author. Research materials that have been created specifically for a particular study – for instance the actual questions asked of participants or program files for an experimental task – are especially important to share. If existing materials are used, the source where those materials can be accessed should be cited in full. If there are limitations on the availability of materials, which might be the case if materials are proprietary or have restricted access for ethical reasons, those limitations should be disclosed in the manuscript.

Reproducible Analyses, Open Code, and Open Data

One of the basic features of scientific research products is that they should be independently reproducible. A finding that can only be recreated by one person is a magic trick, not a scientific truism. Here, reproducible means that results can be recreated using the same data originally used to make a claim. By contrast, replicability implies the repetition of a study’s results using different data (e.g., a new sample). Note also that a finding can be reproducible, or even replicable, but not be a valid or accurate representation of reality (Vazire et al., 2020). Reproducibility can be thought of as a minimally necessary precursor to later validity claims.

In psychology, analyses of quantitative data very often form the backbone of our scientific claims. Yet, the reproducibility of data analytic procedures may never be checked, or if they are checked, findings may not be reproducible (Stodden et al., 2018; Obels et al., 2020). Even relatively simple errors in reporting threaten the accuracy of the research literature (Nuijten et al., 2016). Luckily, these problems are fixable, if we are willing to put in the effort. Specifically, researchers should share the code underlying their analyses and, when legally and ethically permissible, they should share their data. But beyond just sharing the “finished product,” it may be helpful to think about preparing your data and code to share while the project is actually under way (Klein et al., 2018).

Whenever possible, analyses should be conducted using analysis code – also known as scripting or syntax – rather than by using point-and-click menus in statistical software or doing hand calculations in spreadsheet programs. To further enhance the reproducibility of reported results, you can write your results section using a language called R Markdown. Succinctly, R Markdown combines descriptive text with results (e.g., statistics, counts) drawn directly from analyses. When results are prepared in this way, there is no need to worry about typos or other transcription errors making their way into your paper, because numbers are pulled directly from statistical output. Additionally, if there is a change to the data – say, if analyses need to be re-run on a subset of cases – the results text will automatically update with little effort. Peikert and Brandmaier (2019) describe a possible workflow to achieve reproducible results using R Markdown along with a handful of other tools. Rouder (2016) details a process for sharing data as they are generated – so-called “born open” data. This method also preserves the integrity of original data. When combined with Peikert and Brandmaier’s technique, the potential for errors to affect results or reporting is greatly diminished.

Regardless of the particular scripting language that you use to analyze your data, the code, along with the data itself, should be well documented to enable use by others, including reviewers and other researchers. You will want to produce a codebook, also known as a data dictionary, to accompany your data and code. Buchanan et al. (2021) describe the ins and outs of data dictionaries. Arslan (2019) writes about an automated process for codebook generation using R statistical software.
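To make the R Markdown idea concrete, here is a minimal sketch of a dynamically generated results sentence. The data file, variable names, and the simple t.test() call are hypothetical placeholders, and packages such as papaja (discussed later in the chapter) can format this kind of output in APA style.

````
---
title: "Results (minimal R Markdown sketch)"
output: html_document
---

```{r analysis, include = FALSE}
# Hypothetical example: read the cleaned study data and run a simple comparison.
# The file name and variable names are placeholders, not real study data.
dat <- read.csv("data/study1_clean.csv")
fit <- t.test(wellbeing ~ condition, data = dat)
```

The two conditions were compared on well-being,
*t*(`r round(fit$parameter, 1)`) = `r round(fit$statistic, 2)`,
*p* = `r round(fit$p.value, 3)`.
Because these numbers are pulled from the analysis itself, re-running the
analysis on updated data automatically updates the reported values.
````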

Version Control

When it comes to tracking research products in progress, a crucial concept is known as version control. A version control system permits contributors to a paper or other product (such as analysis code) to automatically track who made changes to the text and when they made them. Rather than saving many copies of a file in different locations and under different names, there is only one copy of a version-controlled file. But because changes are tracked, it is possible to roll back a file to an earlier version (for instance, if an error is detected). On large collaborative projects, it is vital to be able to work together simultaneously and to be able to return to an earlier version of the work if needed.

Working with version-controlled files decreases the potential for mistakes in research to go undetected. Rouder et al. (2019) describe practices, including the use of version control, that help to minimize mistakes and improve research quality. Vuorre and Curley (2018) provide specific guidance for using Git, one of the most popular version control systems. An additional benefit of learning to use these systems is their broad applicability in non-academic research settings (e.g., at technology and health companies). Indeed, developing skills in domain-general areas like statistics, research design, and programming will broaden the array of opportunities available to you when your training is complete.
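As a small illustration of what version control looks like in practice, the sketch below initializes a Git repository and commits an analysis script from within R using the gert package (one of several R interfaces to Git). The folder path, file name, and commit message are hypothetical placeholders.

```r
# Minimal sketch using the gert package (an R interface to Git).
# The project path, file name, and commit message are hypothetical placeholders.
library(gert)

# Turn the project folder into a Git repository (done once per project)
git_init(path = "~/projects/example-study")
setwd("~/projects/example-study")

# Stage and commit the analysis script; this exact version is now recorded
git_add("analysis/01_clean_data.R")
git_commit("Add data cleaning script for Study 1")

# Review the history: who changed what, and when
git_log()
```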

Working Openly Facilitates Teamwork and Collaboration

Keeping an open lab notebook, sharing a complete research protocol, or producing a reproducible analysis script that runs on open data might seem laborious compared to closed research practices, but there are advantages to these practices beyond the scientific benefits of working transparently. Detailed, clear documentation is needed for any collaborative research, and the need might be especially great in large teams. Open science practices can even facilitate massive collaborations, like those managed by the Psychological Science Accelerator (PSA; Moshontz et al., 2018). The PSA is a global network of over 500 laboratories that coordinates large investigations of democratically selected study proposals. It enables even teams with limited resources to study important questions at a large enough scale to yield rich data and precise answers. Open science practices are baked into all parts of the research process, and indeed, such efforts would not be feasible or sustainable without these standard operating procedures. Participating in a large collaborative project, such as one run by the PSA, is an excellent way to develop your open science skillset. It can be exciting and quite rewarding to work in such a large team, and in so doing, you will also have the opportunity to learn from the many other collaborators on the project.

Writing It Up: Open Science and Your Manuscript

The most elegant study with the most interesting findings is scientifically useless until the findings are communicated to the broader research community. Indeed, scientific communication may be the most important part of the research process. Yet skillfully communicating results isn’t about mechanically relaying the outcomes of hypothesis tests. Rather, it’s about writing that leaves the reader with a clear conclusion about the contribution of a project. In addition to being narratively compelling, researchers employing open science practices will also want to transparently and honestly describe the research process. Adept readers may sense a conflict between these two goals – crafting a compelling narrative vs. being transparent and honest – but in reality, both can be achieved.

Writing Well and Transparently

Gernsbacher (2018) provides detailed guidance on preparing a high-quality manuscript (with a clear narrative) while adhering to open science practices. She writes that the best articles are transparent, reproducible, clear, and memorable. To achieve clarity and memorability, authors must attend to good writing practices like writing short sentences and paragraphs and seeking feedback. These techniques are not at odds with transparency and reproducibility, which can be achieved through honest, detailed, and clear documentation of the research process. Even higher levels of detail can be achieved by including supplemental files along with the main manuscript.

One issue, of course, is how to decide which information belongs in the main paper vs. the supplemental materials. A guiding principle is to organize your paper to help the reader understand the paper’s contribution while transparently describing what you’ve done and learned. Gernsbacher (2018) advises having an organized single file as a supplement to ease the burden on reviewers and readers. A set of well labeled and organized folders in your OSF project (e.g., Materials, Data, Analysis Code, Manuscript Files) can also work well. Consider including a “readme” file or other descriptive text to help readers understand your file structure.

If a project is pre-registered, it is important that all of the plans (and hypotheses, if applicable) in the study are addressed in the main manuscript. Even results that are not statistically significant deserve discussion in the paper. If planned methods have changed, this is normal and absolutely fine. Simply disclose the change (along with the accompanying rationale) in the paper, or better yet, file an addendum to your pre-registration when the change is made, before proceeding. Likewise, when analysis plans change, disclose the change in the final paper. If the originally planned analysis strategy and the preferred strategy are both valid techniques, and others might disagree about which strategy is best, present results using both strategies. The details of the comparative analyses can be placed in a supplement, but discuss the analyses in the main text of the paper.

A couple of additional tools to assist with writing your open science manuscript are worth mention. First, Aczel et al. (2020) provide a consensus-based transparency checklist that authors can complete to confirm that they have made all relevant transparency-based disclosures in their papers. The checklist can also be shared (e.g., on OSF) alongside a final manuscript to help guide readers through the disclosures. Second, R Markdown can be used to draft the entire text of your paper, not just the results section. Doing so allows you to render the final paper using a particular typesetting style more easily. More importantly, the full paper will then be reproducible. Rather than work from scratch, you may want to use the papaja package (Aust & Barth, 2020), which provides an R Markdown template. Many researchers also like to use papaja in concert with Zotero (https://www.zotero.org/), an open-source reference manager.
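To give a flavor of what a papaja-based manuscript looks like, here is a stripped-down sketch of the YAML header plus one APA-formatted inline result. The title, author and affiliation fields, bibliography file, and the example t test are hypothetical placeholders; consult the papaja documentation for the full set of options and output formats.

````
---
title         : "Example Manuscript Title"
author        :
  - name      : "Hypothetical Author"
    affiliation: "1"
affiliation   :
  - id        : "1"
    institution: "Example University"
bibliography  : "references.bib"    # e.g., a library exported from Zotero
output        : papaja::apa6_pdf    # renders an APA-style PDF
---

```{r analysis, include = FALSE}
library(papaja)
# Hypothetical comparison on R's built-in sleep data, for illustration only
test_apa <- apa_print(t.test(extra ~ group, data = sleep))
```

A comparison of the two drug conditions yielded `r test_apa$full_result`.
````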

Selecting a Journal for Your Research

Beyond questions of a journal’s topical reach and its reputation in the field, different journals have different policies when it comes to open science practices. When selecting a journal, you will want to review that journal’s submission guidelines to ensure that you understand and comply with its requirements. Another place to look for guidance on a journal’s stance on open science practices is editorial statements. These statements usually appear within the journal itself, but if the journal is owned by a society, they may also appear in society publications (e.g., American Psychological Association Monitor, Association for Psychological Science Observer).

Many journals are signatories of the Transparency and Openness Promotion (TOP) Guidelines, which specify three different levels of adoption for eight different transparency standards (Nosek et al., 2015; see also https://topfactor.org). Journals with policies at level 1 require authors to disclose details about their studies in their manuscripts – for instance, whether or not the data associated with studies are available. At level 2, sharing of study components (e.g., materials, data, or analysis code) is required for publication, with exceptions granted for valid legal and ethical restrictions on sharing. At level 3, the journal or its designee verifies the shared components – for instance, a journal might check whether a study’s results can be reproduced from shared analysis code. Importantly, journals can adopt different levels of transparency for the different standards. For instance, a journal might adopt level 1 (disclose) for pre-registration of analysis plans, but level 3 (verify) for study materials. Again, journal submission guidelines, along with editorial statements, provide guidance as to the levels adopted for each standard. Some journals also offer badges for adopting transparent practices. At participating journals, authors declare whether they have pre-registered a study or shared materials and/or data, and the journal then marks the resulting paper with up to three badges (pre-registration, open data, open materials) indicating the availability of the shared content.

A final consideration is a journal’s policy on preprints. Almost certainly, you will want the freedom to share your work on a repository like PsyArXiv (https://psyarxiv.com). Preprint repositories allow authors to share their research ahead of publication, either before submitting the work for peer review at a journal or after the peer review process is complete. Some repositories deem the latter class of manuscripts “post-prints” to distinguish them from papers that have not yet been published in a journal. Sharing early copies of your work will enable you to get valuable feedback prior to journal submission. Even if you are not ready to share a pre-publication copy of your work, sharing the final post-print increases access to the work – especially for those without access through a library, including researchers in many countries, scholars without university affiliations, and the general public. Manuscripts shared on PsyArXiv are indexed by Google Scholar, increasing their discoverability. You can check the policies of your target journal at the Sherpa Romeo database (https://v2.sherpa.ac.uk/romeo/). The journals with the most permissive policies allow sharing of the author copy of a paper (i.e., what you send to the journal, not the typeset version) immediately on disciplinary repositories like PsyArXiv.
Other journals impose an embargo on sharing of perhaps one or two years. A very small number of journals will not consider manuscripts that have been shared as preprints. It’s best to understand a journal’s policy before choosing to submit there. Importantly, sharing pre- or post-print copies of your work is free to do, and it greatly increases the reach of your work.

Another option (which may even be a requirement depending on your research funder) is to publish your work in a fully open access journal (called “gold” open access) or in a traditional journal with the option to pay for your article to be made open access (called “hybrid” open access). Gold open access journals use article fees to cover the costs of publishing, and articles are free for everyone to read without a subscription. Hybrid journals, on the other hand, charge libraries large subscription fees (as they do with traditional journals), and they charge authors who opt to have their articles made open access, effectively doubling the journal’s revenue without incurring additional costs. The fees for hybrid open access are almost never worth it, given that authors can usually make their work accessible for free using preprint repositories.

Fees to publish your work in a gold open access journal currently vary from around $1,000 USD on the low end to $3,000 USD or more on the high end. Typically, a research funder pays these fees, but if not, there may be funds available from your university library or research support office. Some journals offer fee waivers for authors who lack access to grant or university funding for these costs. Part of open science means making the results of research as accessible as possible. Gold open access journals are one means of achieving this goal, but preprint repositories play a critical role as well.

Coda: The Importance of Community

Certainly, there are many tools and techniques to learn when it comes to open science practices. When you are just beginning, you will likely want to take it slow to avoid becoming overwhelmed. Additionally, not every practice described here will be relevant for every project. With time, you will learn to deploy the tools you need to serve a particular project’s goals. Yet, it is also important not to delay beginning to use these practices. Now is the time in your career when you are forming habits that you will carry with you for many years. You want to lay a solid foundation for yourself, and a little effort to learn a new skill or technology now will pay off down the road.

One of the best ways to get started with open science practices is to join a supportive community of other researchers who are working towards the same goal. Your region or university might have a branch of ReproducibiliTea (https://reproducibilitea.org/), a journal club devoted to discussing and learning about open science practices. If it doesn’t, you could gather a few friends and start one, or you could join one of the region-free online clubs. Twitter is another excellent place to keep up to date on new practices, and it’s also great for developing a sense of community. Another option is to attend the annual meeting of the Society for the Improvement of Psychological Science (SIPS; http://improvingpsych.org). The SIPS meeting features workshops to learn new techniques, alongside active sessions (hackathons and unconferences) where researchers work together to develop new tools designed to improve psychological methods and practices. Interacting with other scholars provides an opportunity to learn from one another, but also provides important social support. Improving your research practices is a career-long endeavor; it is surely more fun not to work alone.

Recommended Reading

Briney, K. (2015). Data management for researchers: Organize, maintain and share your data for research success. Pelagic Publishing Ltd.

Christensen, G., Freese, J., & Miguel, E. (2019). Transparent and reproducible social science research: How to do open science. University of California Press.

Gernsbacher, M. A. (2018). Writing empirical articles: Transparency, reproducibility, clarity, and memorability. Advances in Methods and Practices in Psychological Science, 1(3), 403–414.

Kathawalla, U. K., Silverstein, P., & Syed, M. (2021). Easing into open science: A guide for graduate students and their advisors. Collabra: Psychology, 7(1), 18684.

References

Aczel, B., Szaszi, B., Sarafoglou, A., Kekecs, Z., Kucharský, Š., Benjamin, D., Chambers, C. D., Fisher, A., Gelman, A., Gernsbacher, M. A., Ioannidis, J., Johnson, E., Jonas, K., Kousta, S., Lilienfeld, S. O., Lindsay, S., Morey, C. C., Munafò, M., Newell, B. R., Pashler, H. ... & Wagenmakers, E. J. (2020). A consensus-based transparency checklist. Nature Human Behaviour, 4(1), 4-6. https://doi.org/10.1038/s41562-019-0772-6

Allen, C., & Mehler, D. M. (2019). Open science challenges, benefits and tips in early career and beyond. PLoS Biology, 17(5), e3000246. https://doi.org/10.1371/journal.pbio.3000246

Arslan, R. C. (2019). How to automatically document data with the codebook package to facilitate data reuse. Advances in Methods and Practices in Psychological Science, 2, 169–187. https://doi.org/10.1177/2515245919838783

Aust, F., & Barth, M. (2020, July 7). papaja: Create APA manuscripts with R Markdown. https://github.com/crsh/papaja

Bakan, D. (1966). The test of significance in psychological research. Psychological Bulletin, 66(6), 423-437. https://doi.org/10.1037/h0020412

Benning, S. D., Bachrach, R. L., Smith, E. A., Freeman, A. J., & Wright, A. G. C. (2019). The registration continuum in clinical science: A guide toward transparent practices. Journal of Abnormal Psychology, 128(6), 528–540. https://doi.org/10.1037/abn0000451

Bishop, D. V. (2020). The psychology of experimental psychologists: Overcoming cognitive constraints to improve research: The 47th Sir Frederic Bartlett Lecture. Quarterly Journal of Experimental Psychology, 73(1), 1–19. https://doi.org/10.1177/1747021819886519

Bosnjak, M., Fiebach, C., Mellor, D. T., Mueller, S., O'Connor, D. B., Oswald, F. L., & Sokol-Chang, R. (2021, February 22). A template for preregistration of quantitative research in psychology: Report of the Joint Psychological Societies Preregistration Task Force. https://doi.org/10.31234/osf.io/d7m5r

Bradley, J. C., Lang, A. S., Koch, S., & Neylon, C. (2011). Collaboration using open notebook science in academia. In S. Elkins, A. S. Lang, S. Koch, and C. Neylon (Eds.) Collaborative Computational Technologies for Biomedical Research (pp. 423-452). John Wiley & Sons, Inc. https://doi.org/10.1002/9781118026038.ch25

Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., Grange, J. A., Perugini, M., Spies, J. R., & van 't Veer, A. (2014). The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217-224. https://doi.org/10.1016/j.jesp.2013.10.005

Briney, K. (2015). Data management for researchers: Organize, maintain and share your data for research success. Pelagic Publishing Ltd.

Buchanan, E. M., Crain, S. E., Cunningham, A. L., Johnson, H. R., Stash, H., Papadatou-Pastou, M., Isager, P. M., Carlsson, R., & Aczel, B. (2021). Getting started creating data dictionaries: How to create a shareable data set. Advances in Methods and Practices in Psychological Science, 4(1), 1-10. https://doi.org/10.1177/2515245920928007

Button, K., Ioannidis, J., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376. https://doi.org/10.1038/nrn3475

Campbell, L. (2018, January 29). Week 3: Open notebook. https://web.archive.org/web/20210309195308/https://www.lornecampbell.org/?p=179

Chambers, C. D. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49(3), 609-610. https://doi.org/10.1016/j.cortex.2012.12.016

Christensen, G., Freese, J., & Miguel, E. (2019). Transparent and reproducible social science research: How to do open science. University of California Press.

Christensen, G., Wang, Z., Paluck, E. L., Swanson, N., Birke, D. J., Miguel, E., & Littman, R. (2019, October 18). Open science practices are on the rise: The State of Social Science (3S) survey. https://doi.org/10.31222/osf.io/5rksu

Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49(12), 997-1003. https://doi.org/10.1037/0003-066X.49.12.997

Corker, K. S. (2016, January 15). PMG Lab - Project template. https://doi.org/10.17605/OSF.IO/SJTYR

Crüwell, S., & Evans, N. J. (2020, September 19). Preregistration in complex contexts: A preregistration template for the application of cognitive models. https://doi.org/10.31234/osf.io/2hykx

Da Silva Frost, A., & Ledgerwood, A. (2020). Calibrate your confidence in research findings: A tutorial on improving research methods and practices. Journal of Pacific Rim Psychology, 14, E14. https://doi.org/10.1017/prp.2020.7

Devezer, B., Navarro, D. J., Vandekerckhove, J., & Buzbas, E. O. (In press). The case for formal methodology in scientific reform. Royal Society Open Science. https://doi.org/10.1101/2020.04.26.048306

Dirnagl, U. (2020). Preregistration of exploratory research: Learning from the golden age of discovery. PLoS Biology, 18(3), e3000690. https://doi.org/10.1371/journal.pbio.3000690

Errington, T. M. (2019, September 5). Reproducibility Project: Cancer Biology - Barriers to replicability in the process of research. https://doi.org/10.17605/OSF.IO/KPR7U

Gernsbacher, M. A. (2018). Writing empirical articles: Transparency, reproducibility, clarity, and memorability. Advances in Methods and Practices in Psychological Science, 1(3), 403–414. https://doi.org/10.1177/2515245918754485

Goldacre, B., Drysdale, H., Dale, A., Milosevic, I., Slade, E., Hartley, P., Marston, C., Powell-Smith, A., Heneghan, C., & Mahtani, K. R. (2019). COMPare: a prospective cohort study correcting and monitoring 58 misreported trials in real time. Trials, 20(1), 1-16. https://doi.org/10.1186/s13063-019-3173-2

Flannery, J. E. (2020, October 22). fMRI Preregistration Template. https://osf.io/6juft

Flournoy, J. C., Vijayakumar, N., Cheng, T. W., Cosme, D., Flannery, J. E., & Pfeifer, J. H. (2020). Improving practices and inferences in developmental cognitive neuroscience. Developmental Cognitive Neuroscience, 45, 100807. https://doi.org/10.1016/j.dcn.2020.100807

Hagger, M. S., Chatzisarantis, N. L. D., Alberts, H., Anggono, C. O., Batailler, C., Birt, A. R., Brand, R., Brandt, M. J., Brewer, G., Bruyneel, S., Calvillo, D. P., Campbell, W. K., Cannon, P. R., Carlucci, M., Carruth, N. P., Cheung, T., Crowell, A., De Ridder, D. T. D., Dewitte, S., … Zwienenberg, M. (2016). A multilab preregistered replication of the ego-depletion effect. Perspectives on Psychological Science, 11(4), 546–573. https://doi.org/10.1177/1745691616652873

Harris, C. R., Coburn, N., Rohrer, D., & Pashler, H. (2013). Two failures to replicate high-performance-goal priming effects. PloS One, 8(8), e72467. https://doi.org/10.1371/journal.pone.0072467

Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., Piñeiro, R., Rosenblatt, F., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19, 1-13. https://doi.org/10.1177/1609406920976417

Haven, T. L., & Van Grootel, L. (2019). Preregistering qualitative research. Accountability in Research, 26(3), 229-244. https://doi.org/10.1080/08989621.2019.1580147

Havron, N., Bergmann, C., & Tsuji, S. (2020). Preregistration in infant research—A primer. Infancy, 25(5), 734-754. https://doi.org/10.1111/infa.12353

Henry, T. R. (2021a, February 26). Data Management for Researchers: Three Tales. https://doi.org/10.31234/osf.io/ga9yf

Henry, T. R. (2021b, February 26). Data Management for Researchers: 8 Principles of Good Data Management. https://doi.org/10.31234/osf.io/5tmfe

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953

Johnson, A. H., & Cook, B. G. (2019). Preregistration in single-case design research. Exceptional Children, 86(1), 95–112. https://doi.org/10.1177/0014402919868529

Kathawalla, U. K., Silverstein, P., & Syed, M. (2021). Easing into open science: A guide for graduate students and their advisors. Collabra: Psychology, 7(1), 18684. https://doi.org/10.1525/collabra.18684

Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196-217. https://doi.org/10.1207/s15327957pspr0203_4

Kirtley, O. J., Lafit, G., Achterhof, R., Hiekkaranta, A. P., & Myin-Germeys, I. (2021). Making the black box transparent: A template and tutorial for registration of studies using experience-sampling methods. Advances in Methods and Practices in Psychological Science, 4(1), 1-16. https://doi.org/10.1177/2515245920924686

Klein, O., Hardwicke, T. E., Aust, F., Breuer, J., Danielsson, H., Mohr, A. H., IJzerman, H., Nilsonne, G., Vanpaemel, W., & Frank, M. C. (2018). A practical guide for transparency in psychological science. Collabra: Psychology, 4(1), 20. https://doi.org/10.1525/collabra.158

Koessler, R. B., Campbell, L., & Kohut, T. (2019, February 27). Open notebook. https://osf.io/3n964/

Krypotos, A.-M., Klugkist, I., Mertens, G., & Engelhard, I. M. (2019). A step-by-step guide on preregistration and effective data sharing for psychopathology research. Journal of Abnormal Psychology, 128(6), 517–527. https://doi.org/10.1037/abn0000424

Ledgerwood, A. (2018). The preregistration revolution needs to distinguish between predictions and analyses. Proceedings of the National Academy of Sciences, 115(45), E10516-E10517. https://doi.org/10.1073/pnas.1812592115

Markowetz, F. (2015). Five selfish reasons to work reproducibly. Genome Biology, 16, 274. https://doi.org/10.1186/s13059-015-0850-7

McKiernan, E. C., Bourne, P. E., Brown, C. T., Buck, S., Kenall, A., Lin, J., McDougall, D., Nosek, B. A., Ram, K., Soderberg, C. K., Spies, J. R., Thaney, K., Updegrove, A., Woo, K. H., & Yarkoni, T. (2016). Point of view: How open science helps researchers succeed. eLife, 5, e16800. https://doi.org/10.7554/eLife.16800

Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46(4), 806-834. https://doi.org/10.1037/0022-006X.46.4.806

Mertens, G., & Krypotos, A.-M. (2019). Preregistration of analyses of preexisting data. Psychologica Belgica, 59(1), 338–352. https://doi.org/10.5334/pb.493

Mertzen, D., Lago, S., & Vasishth, S. (2021, March 4). The benefits of preregistration for hypothesis-driven bilingualism research. https://doi.org/10.31234/osf.io/nm3eg

Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., Shekelle, P., Stewart, L. A., & PRISMA-P Group. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews, 4(1), 1-9. https://doi.org/10.1186/2046-4053-4-1

Moreau, D., & Wiebels, K. (2021). Assessing change in intervention research: The benefits of composite outcomes. Advances in Methods and Practices in Psychological Science, 4(1), 1-14. https://doi.org/10.1177/2515245920931930

Moshontz, H., Campbell, L., Ebersole, C. R., IJzerman, H., Urry, H. L., Forscher, P. S., ... & Chartier, C. R. (2018). The Psychological Science Accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science, 1(4), 501-515. https://doi.org/10.1177/2515245918797607

Navarro, D. (2019, January 17). Prediction, pre-specification and transparency [blog post]. https://featuredcontent.psychonomic.org/prediction-pre-specification-and-transparency/

Nelson, L. D., Simmons, J., & Simonsohn, U. (2018). Psychology's renaissance. Annual Review of Psychology, 69, 511-534. https://doi.org/10.1146/annurev-psych-122216-011836

Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S., Chambers, C. D., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E., Freese, J., Glennerster, R., Goroff, D., Green, D. P., Hesse, B., Humphreys, M., Ishiyama, J., … Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422-1425. https://doi.org/10.1126/science.aab2374

Nuijten, M. B., Hartgerink, C. H., Van Assen, M. A., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 48(4), 1205-1226. https://doi.org/10.3758/s13428-015-0664-2

Obels, P., Lakens, D., Coles, N. A., Gottfried, J., & Green, S. A. (2020). Analysis of open data and computational reproducibility in Registered Reports in psychology. Advances in Methods and Practices in Psychological Science, 229–237. https://doi.org/10.1177/2515245920918872

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716

Page, M. J., Shamseer, L., & Tricco, A. C. (2018). Registration of systematic reviews in PROSPERO: 30,000 records and counting. Systematic Reviews, 7(1), 32. https://doi.org/10.1186/s13643-018-0699-4

Paul, M., Govaart, G., & Schettino, A. (2021, March 1). Making ERP research more transparent: Guidelines for preregistration. https://doi.org/10.31234/osf.io/4tgve

Peikert, A., & Brandmaier, A. M. (2019, November 11). A reproducible data analysis workflow with R Markdown, Git, Make, and Docker. https://doi.org/10.31234/osf.io/8xzqy

Platt, J. R. (1964). Strong inference. Science, 146(3642), 347-353. https://www.jstor.org/stable/1714268

Roettger, T. B. (in press). Preregistration in experimental linguistics: Applications, challenges, and limitations. Linguistics. https://doi.org/10.31234/osf.io/vc9hu

Rouder, J. N. (2016). The what, why, and how of born-open data. Behavior Research Methods, 48(3), 1062–1069. https://doi.org/10.3758/s13428-015-0630-z

Rouder, J. N., Haaf, J. M., & Snyder, H. K. (2019). Minimizing mistakes in psychological science. Advances in Methods and Practices in Psychological Science, 2(1), 3–11. https://doi.org/10.1177/2515245918801915

Shamseer, L., Moher, D., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., Shekelle, P., Stewart, L. A., & the PRISMA-P Group. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: Elaboration and explanation. BMJ, 350, g7647. https://doi.org/10.1136/bmj.g7647

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632

Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384. https://doi.org/10.1098/rsos.160384

Soderberg, C. K. (2018). Using OSF to share data: A step-by-step guide. Advances in Methods and Practices in Psychological Science, 1(1), 115–120. https://doi.org/10.1177/2515245918757689

Spellman, B. A. (2015). A short (personal) future history of Revolution 2.0. Perspectives on Psychological Science, 10(6), 886–899. https://doi.org/10.1177/1745691615609918

Stodden, V., Seiler, J., & Ma, Z. (2018). An empirical analysis of journal policy effectiveness for computational reproducibility. Proceedings of the National Academy of Sciences, 115, 2584–2589. https://doi.org/10.1073/pnas.1708290115

Tackett, J. L., Brandes, C. M., Dworak, E. M., & Shields, A. N. (2020). Bringing the (pre)registration revolution to graduate training. Canadian Psychology/Psychologie canadienne, 61(4), 299–309. https://doi.org/10.1037/cap0000221

Tenney, E., Costa, E., Allard, A., & Vazire, S. (2020). Open science and reform practices in organizational behavior research over time (2011 to 2019). Organizational Behavior and Human Decision Processes, 162, 218-223. https://doi.org/10.1016/j.obhdp.2020.10.015

Topor, M., Pickering, J. S., Barbosa Mendes, A., Bishop, D. V. M., Büttner, F. C., Elsherif, M. M., Evans, T. R., Henderson, E. L., Kalandadze, T., Nitschke, F. T., Staaks, J. P. C., van den Akker, O., Yeung, S. K., Zaneva, M., Lam, A., Madan, C. R., Moreau, D., O’Mahony, A., Parker, A., Riegelman, A., Testerman, M., & Westwood, S. J. (2021, March 5). An integrative framework for planning and conducting Non-Interventional, Reproducible, and Open Systematic Reviews (NIRO-SR). https://doi.org/10.31222/osf.io/8gu5z

van 't Veer, A. E., & Giner-Sorolla, R. (2016). Pre-registration in social psychology—A discussion and suggested template. Journal of Experimental Social Psychology, 67, 2-12. https://doi.org/10.1016/j.jesp.2016.03.004

Van den Akker, O., Peters, G.-J. Y., Bakker, C., Carlsson, R., Coles, N. A., Corker, K. S., Feldman, G., Mellor, D., Moreau, D., Nordström, T., Pfeiffer, N., Pickering, J., Riegelman, A., Topor, M., van Veggel, N., & Yeung, S. K. (2020, September 15). Inclusive systematic review registration form. https://doi.org/10.31222/osf.io/3nbea

Van den Akker, O., Weston, S. J., Campbell, L., Chopik, W. J., Damian, R. I., Davis-Kean, P., Hall, A. N., Kosie, J. E., Kruse, E., Olsen, J., Ritchie, S. J., Valentine, K. D., van 't Veer, A., & Bakker, M. (2021, February 21). Preregistration of secondary data analysis: A template and tutorial. https://doi.org/10.31234/osf.io/hvfmr

Vazire, S. (2018). Implications of the credibility revolution for productivity, creativity, and progress. Perspectives on Psychological Science, 13(4), 411–417. https://doi.org/10.1177/1745691617751884

Vazire, S., & Holcombe, A. O. (2020, August 13). Where are the self-correcting mechanisms in science? https://doi.org/10.31234/osf.io/kgqzt

Vazire, S., Schiavone, S. R., & Bottesini, J. G. (2020, October 7). Credibility beyond replicability: Improving the four validities in psychological science. https://doi.org/10.31234/osf.io/bu4d3

Vuorre, M., & Curley, J. P. (2018). Curating research assets: A tutorial on the Git version control system. Advances in Methods and Practices in Psychological Science, 1(2), 219–236. https://doi.org/10.1177/2515245918754826

Wagenmakers, E.-J., Beek, T., Dijkhoff, L., Gronau, Q. F., Acosta, A., Adams, R. B., Albohn, D. N., Allard, E. S., Benning, S. D., Blouin-Hudon, E.-M., Bulnes, L. C., Caldwell, T. L., Calin-Jageman, R. J., Capaldi, C. A., Carfagno, N. S., Chasten, K. T., Cleeremans, A., Connell, L., DeCicco, J. M., … Zwaan, R. A. (2016). Registered Replication Report: Strack, Martin, & Stepper (1988). Perspectives on Psychological Science, 11(6), 917–928. https://doi.org/10.1177/1745691616674458

Weston, S. J., Ritchie, S. J., Rohrer, J. M., & Przybylski, A. K. (2019). Recommendations for increasing the transparency of analysis of preexisting data sets. Advances in Methods and Practices in Psychological Science, 2(3), 214–227. https://doi.org/10.1177/2515245919848684

Wilkinson, M. D., Dumontier, M., Aalbersberg, Ij. J., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J.-W., da Silva Santos, L. B., Bourne, P. E., Bouwman, J., Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T., Finkers, R., ...Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3, 160018. https://doi.org/10.1038/sdata.2016.18

Table 1

Guides and Templates for Pre-Registration

Method/Subfield: Source
Clinical science: Benning et al. (2019)
Cognitive modeling application: Crüwell & Evans (2020)
Developmental cognitive neuroscience: Flournoy et al. (2020)
EEG/ERP: Paul et al. (2021)
Experience sampling: Kirtley et al. (2021)
Experimental social psychology: van 't Veer & Giner-Sorolla (2016)
Exploratory research: Dirnagl (2020)
fMRI: Flannery (2020)
Infant research: Havron et al. (2020)
Intervention research: Moreau & Wiebels (2021)
Linguistics: Roettger (in press); Mertzen et al. (2021)
Psychopathology: Krypotos et al. (2019)
Qualitative research: Haven & Van Grootel (2019); Haven et al. (2020)
Quantitative research: Bosnjak et al. (2021)
Replication research: Brandt et al. (2014)
Secondary data analysis: Weston et al. (2019); Mertens & Krypotos (2019); Van den Akker et al. (2021)
Single-case design: Johnson & Cook (2019)
Systematic review (general): Van den Akker et al. (2020)
Systematic review and meta-analysis protocols (PRISMA-P): Moher et al. (2015); Shamseer et al. (2015)
Systematic review (non-interventional): Topor, Pickering et al. (2021)