The Role of Metadata in Reproducible Computational Research

Jeremy Leipzig 1; Daniel Nüst 2; Charles Tapley Hoyt 3; Stian Soiland-Reyes 4,5; Karthik Ram 6; Jane Greenberg 1

1 Metadata Research Center, Drexel University, College of Computing and Informatics, Philadelphia, PA, USA
2 Institute for Geoinformatics, University of Münster, Münster, Germany
3 Laboratory of Systems Pharmacology, Harvard Medical School, Boston, USA
4 eScience Lab, Department of Computer Science, The University of Manchester, Manchester, UK
5 INDE lab, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
6 Berkeley Institute for Data Science, University of California, Berkeley, USA

Abstract

Reproducible computational research (RCR) is the keystone of the scientific method for in silico analyses, packaging the transformation of raw data into published results. In addition to its role in research integrity, RCR can significantly accelerate evaluation and reuse. This potential, together with wide support for the FAIR principles, has motivated interest in metadata standards supporting RCR. Metadata provides context and provenance to raw data and methods and is essential to both discovery and validation. Despite this shared connection with scientific data, few studies have explicitly described the relationship between metadata and RCR. This article employs a functional content analysis to identify metadata standards that support RCR functions across an analytic stack consisting of input data, tools, notebooks, pipelines, and publications. We provide background context, explore gaps, and identify component trends of embeddedness and methodology weight, from which we derive recommendations for future work.

Keywords: reproducible research, reproducible computational research, RCR, reproducibility, replicability, metadata, provenance, workflows, pipelines, ontologies, notebooks, containers, software dependencies, semantic, FAIR

Contents

Introduction
    Reproducible Computational Research
    Reproducibility Crisis
    Big Data, Big Science, and Open Data
    Metadata
    Goals and Methods
The RCR metadata stack
    Synthesis Review
    1. Input
        Examples
            DICOM - An embedded file header
            EML - Flexible user-centric data documentation
            MIAME - A submission-centric minimal standard
        Future directions - encoding, findability, granularity
    2. Tools
        Examples
            CRAN, EDAM, & CodeMeta - Tool description and citation
            Dependency and package management metadata
            Fledgling standards for containers
        Future directions
            Automated repository metadata
            Data as a dependency
    3. Statistical reports & Notebooks
        Examples
            RMarkdown headers
            Statistical and Machine Learning Metadata Standards
        Future directions - parameter tracking
    4. Pipelines
        Examples
            CWL - A configuration-based framework for interoperability
        Future directions
            Interoperable script and workflow provenance
            Packaging and binding building blocks
    5. Publication
        Examples
            Formalization of the Results of Biological Discovery
        Future directions - reproducible articles
Discussion
    Embeddedness vs connectedness
    Methodology weight and standardization
    Sphere of Influence
    Metadata capital and reuse
    Incentives/Evangelism/Education/Cultural gaps etc
Recommendations & Future Work
Acknowledgements
References

Introduction
Digital technology and computing have transformed the scientific enterprise. As evidence, many scientific workflows have become fully digital, from problem scoping and data collection to analysis, reporting, storage, and preservation. Other key factors include federal 1 and institutional 2,3 recommendations and mandates to build a sustainable research infrastructure, to support the FAIR principles 4, and to enable reproducible computational research (RCR). Metadata has emerged as a crucial component supporting these advances, with standards covering the research life-cycle. Reflective of this change, there have been many case studies on reproducibility 5, although few studies have systematically examined the role of metadata in supporting RCR. Our aim in this work is to review metadata developments that are directly applicable to RCR, identify gaps, and recommend further steps involving metadata toward building a more robust RCR environment. To lay the groundwork for these recommendations, we first review RCR and metadata, examine how they relate across the stages of an analysis, and discuss the common trends that emerge from this approach.

Reproducible Computational Research

Reproducible Research is an umbrella term that encompasses many forms of scientific quality: from the generalizability of underlying scientific truth, to the exact replication of an experiment (with or without communication of intent), to the open sharing of analyses for reuse. Specific to the computational facets of scientific research, Reproducible Computational Research (RCR) 6 encompasses all aspects of in silico analyses, from the propagation of raw data collected in the wet lab, field, or instrumentation, through intermediate data structures, to open code and statistical analysis, and finally publication.

Reproducible research points to several underlying concepts of scientific validity, terms that must be unpacked to be understood. Stodden et al. 7 devised a five-level hierarchy of research, classifying it as reviewable, replicable, confirmable, auditable, and open or reproducible. Whitaker 8 describes an analysis as "reproducible" in the narrow sense that a user can produce identical results given the data and code of the original, and "generalisable" if it produces similar results both when the data is swapped out for similar data ("replicability") and when the underlying code is swapped out for comparable replacements ("robustness") (Figure 1; the sketch at the end of this subsection restates the matrix in code).

Figure 1: Whitaker's matrix of reproducibility 9

While these terms may confuse those new to reproducibility, a review by Barba disentangles the terminology while providing a historical context for the field 10. A wider perspective places reproducibility as a first-order benefit of applying the FAIR principles: Findability, Accessibility, Interoperability, and Reusability. In the next sections, we engage reproducibility in the general sense and use "narrow-sense" to refer to the same-data, same-code condition.
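Whitaker's matrix can be restated in a few lines of code. The sketch below is our own minimal illustration (the function name whitaker_category is ours, not from Whitaker's work or this article's sources); it maps the two axes of the matrix, whether the data and the code match the original study, to the four terms defined above.

```python
# Minimal illustrative sketch (ours): Whitaker's matrix of reproducibility.
# Axes: is the data the same as the original? Is the code the same?

def whitaker_category(same_data: bool, same_code: bool) -> str:
    """Classify an analysis attempt along Whitaker's two axes."""
    if same_data and same_code:
        return "reproducible"    # narrow sense: identical results expected
    if not same_data and same_code:
        return "replicable"      # new data, original code
    if same_data and not same_code:
        return "robust"          # original data, comparable replacement code
    return "generalisable"       # similar results with new data and new code

assert whitaker_category(True, True) == "reproducible"
assert whitaker_category(False, False) == "generalisable"
```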
Reproducibility Crisis

The scientific community's challenge with irreproducibility in research has been extensively documented 11. Two events in the life sciences stand as watershed moments in this crisis: the publication of manipulated and falsified predictive cancer therapeutic signatures by a biomedical researcher at Duke, along with the subsequent forensic investigation by Keith Baggerly and Kevin Coombes 12, and a review by scientists at Amgen who could replicate the results of only 6 out of 53 cancer studies 13. These events involved different failures: poor data structures and missing protocols, respectively. Together with related studies 14, they underscore recurring reproducibility problems caused by a lack of detailed methods, missing controls, and other protocol failures. Inadequate understanding or misuse of statistics, including inappropriate statistical tests or misinterpretation of results, also plays a recurring role in irreproducibility 15. Regardless of intent, these activities fall under the umbrella term "questionable research practices". It bears speculation whether such incidents are more likely to occur with novel statistical approaches than with conventional ones. Subsequent surveys of researchers 11 have identified selective reporting, while theory papers 16 have emphasized the insidious combination of underpowered designs and publication bias, essentially a multiple testing problem on a global scale (see the sketch at the end of this section). We contend that RCR metadata has a role to play in addressing all of these issues and in shifting the narrative from a crisis to opportunities 17.

In the wake of this newfound interest in reproducibility, both the variety and volume of related case studies increased after 2015 (Figure 2). Likert-style surveys and high-level publication-based censuses (see Figure 3), in which authors tabulate data or code availability, are most prevalent. Additionally, low-level reproductions, in which code is executed; replications, in which new data is collected and used; tests of robustness, in which new tools or methods are used; and refactors to best practices are also becoming more popular. While the life sciences have generated more than half of these case studies, areas of the social and physical sciences are increasingly the subjects of important reproduction and replication efforts. These case studies have provided the best source of empirical data for understanding reproducibility and will likely continue to be valuable for evaluating the solutions we review in the next sections.

Figure 2: Case studies in reproducible research 5. The term "case studies" is used in a general sense to describe any study of reproducibility. A reproduction is an attempt to arrive at comparable results with identical data using the computational methods described in a paper. A refactor involves rewriting existing code to use frameworks and reproducible best practices while preserving the original data. A replication involves generating new data and applying existing methods to achieve comparable results. A test of robustness applies various protocols, workflows, statistical models, or parameters to a given data set.
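To see why underpowered designs plus publication bias amount to a multiple testing problem on a global scale, consider a short simulation. This is our own hedged illustration; the parameter values (ALPHA, POWER, P_TRUE) are arbitrary assumptions for exposition, not figures drawn from the cited theory papers.

```python
# Hedged illustration (ours): underpowered studies plus publication bias.
# If only "significant" results are published, low power means false
# positives can dominate the published record.

import random

random.seed(0)

ALPHA = 0.05      # significance threshold per study
POWER = 0.20      # chance a true effect reaches significance (underpowered)
P_TRUE = 0.10     # assumed prior probability that a tested hypothesis is true

published_true = published_false = 0
for _ in range(100_000):
    effect_is_real = random.random() < P_TRUE
    significant = random.random() < (POWER if effect_is_real else ALPHA)
    if significant:  # publication bias: only significant results "publish"
        if effect_is_real:
            published_true += 1
        else:
            published_false += 1

fdr = published_false / (published_true + published_false)
print(f"Share of published findings that are false: {fdr:.0%}")  # roughly 69%
```

With these assumed values, roughly two thirds of the simulated published record is false, even though every individual study applied a conventional 5% threshold.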
