Kenall et al. GigaScience (2015) 4:32
DOI 10.1186/s13742-015-0071-8

EDITORIAL — Open Access

Better reporting for better research: a checklist for reproducibility

Amye Kenall1*, Scott Edmunds2, Laurie Goodman2, Liz Bal1, Louisa Flintoft3, Daniel R Shanahan1 and Tim Shipley4

* Correspondence: [email protected]
1BioMed Central, London, United Kingdom. Full list of author information is available at the end of the article.

How easy is it to reproduce or replicate the findings of a published paper? In 2013 one researcher, Phil Bourne, asked just this: how easy would it be to reproduce the results of a computational biology paper? [1]. The answer: 280 hours. Such a number is surprising, given the theoretical reproducibility of computational research and given Bourne was attempting to reproduce work done in his own lab. Now at the National Institutes of Health (NIH) as Associate Director of Data Sciences, Bourne is concerned with the reproducibility of all NIH-funded work, not just his own—and the problem is large. In addition to work in computational biology (which theoretically should be more easily reproducible than "wet lab" work), hallmark papers in cancer through to psychology have been flagged as largely unreproducible [2, 3].

Closer to home, GigaScience has carried out similar work to quantify reproducibility in their content. Despite one paper being scrutinized and tested by seven referees, it still took about half a man-month's worth of resources to reproduce the results reported in just one of its tables [4]. "Reproducibility" is now increasingly on the radar of funders and is making its rounds in the wider media as well, with concerns of reproducibility making headlines at The Economist [5] and New York Times [6], amongst other outlets.

Why is this important?
It is critical to note that irreproducible work doesn't necessarily mean fraud occurred, nor even that the findings are incorrect; likewise, reproducible research can still be incorrect. While this key point is well understood by most scientists, it is not always easy to explain to the general public. However, as most research is paid for by taxpayers, public trust in research is essential. We—researchers, funders, and publishers—must do a better job of communicating this message to the public. We must better explain that science is an activity that continually builds on and verifies itself. But we also must develop policies that better support this process—policies, for example, that promote transparency and allow for improved verification of research.

Clearly important for clinical research, verification is equally important for preclinical research, something we all have an equal stake in. No one can innovate new drugs overnight, no matter how rich they are, no matter which doctor they see. Better, more robust preclinical research benefits us all.1 Our ability to rely on published data for potential therapeutics is critical, and recently its reliability has been called into question [7]. One well-publicised example of this was brought to light in an oncology study of preclinical research findings, in which researchers were able to confirm only 11 % of the findings [8, 9]. Although the relevance of more robust research is clear in the area of oncology, it is also important for more exploratory research that might never make it to the preclinical setting. Funding and time are both increasingly limited, and the waste generated from follow-up work based on irreproducible research is high. A recent study by Freedman et al. estimated this at approximately $28 billion a year for preclinical research in the United States alone [10].

Funder update
The NIH have recently taken bold steps to begin to tackle the need for better design, more appropriate analysis, and greater transparency in the conduct and reporting of research. In January 2014 the NIH announced they would fund more training for scientists in data management and restructure their grant review process to better value other research objects, such as data [11]. But it is peer review and the editorial policies and practices of journals that have come under the greatest scrutiny, and in June 2014 a set of guidelines for reporting preclinical research was proposed by the NIH to meet the perceived need for more stringent standards [12]. These guidelines ask journals to ensure, for example, that authors have included a minimum set of information on study design, that statistical checks have been carried out by reviewers, and that authors have given enough information to enable animal strains, cell lines, reagents, and so on, to be uniquely identified. (For a full list of requirements, see the NIH Principles and Guidelines for Reporting Preclinical Research.)

BioMed Central author and reviewer checklist
Journals clearly have an important part to play in helping to ensure, as far as possible, that experimental design and analysis are appropriate and that reporting standards are met. This month BioMed Central will launch a trial checklist for authors and referees with these explicit aims.

BioMed Central has long supported transparency in reporting for both biology and medicine, even working with Editorial Board Members to develop and endorse standards such as MIQE-precis [13] and the EQUATOR Network guidelines, such as PRISMA [14]. The trial checklist builds on these accepted standards and the principles behind them, formalising, tailoring, and standardising these efforts across journals.

The checklist addresses three areas of reporting: experimental design and statistics, resources, and availability of data and materials [15]. Some of the NIH Guidelines were straightforward to implement, given they were policies long in place at BioMed Central. However, we used these new guidelines as an opportunity to ensure that they, as well as our long-standing policies already in place, had the best chance of being adhered to by authors and by reviewers, by integrating them into our internal systems and workflows. Authors will be asked on submission to confirm that they have included the information asked for in the checklist or to give reasons for any instances where it is not made available or not applicable.2 Likewise, reviewers will be asked to confirm the information has been satisfactorily reported and reviewed.

This also has the aim of making editors' jobs more straightforward. With a clear and simple checklist on what information to include in the manuscript, less time should be spent liaising with authors. Plans are also in place to integrate our new checklist into BioMed Central Roadshows and Author Workshops (http://roadshow.biomedcentral.com/), helping to ensure researchers are made aware of the reporting standards before publication.

BioMed Central is not the first to implement reporting guidelines, with the Center for Open Science [16]3 and our colleagues at Nature [17] also recently announcing similar initiatives. Implementing reporting guidelines, rather than simply endorsing them, is new territory, which is one reason we are implementing this first as a trial, for which we will collect feedback and monitor its success.

In the first instance, the checklist will be rolled out on a small group of select journals: BMC Biology, BMC Neuroscience, Genome Biology, and GigaScience. In 6 months' time, we plan to review the data we have collected around this trial, checking whether reporting has increased and collating author, editor, and reviewer feedback on the trial, with the aim to roll out the checklist (with any revisions) across all BioMed Central journals. We have designed the checklist to act as an aid to authors, editors, and reviewers rather than a burden to submission, and we look forward to hearing your thoughts as the trial progresses.

Endnotes
1 For further discussion of this around clinical trial transparency and reliability, see Ben Goldacre's Bad Pharma.
2 To better support our authors in adhering to this checklist, we have also recently revised our section on data availability, detailing where authors can deposit their data and how to cite their data in their manuscript. We also have in-house staff available to work with authors to find a home for their data. http://www.biomedcentral.com/about/editorialpolicies#DataandMaterialRelease
3 The Center for Open Science, together with stakeholders from research, has recently devised an easy-to-use set of guidelines based on eight standards and three levels of adherence. With this checklist, all journals will adhere to level 2 requirements. At present, all BioMed Central journals adhere to level 1 requirements. http://www.sciencemag.org/content/348/6242/1422.figures-only

Competing interests
LB, LF, AK, TS, and DS are employees at BioMed Central. SE and LG are employees at the BGI.

Authors' contributions
AK wrote the first draft, and SE, LG, LB, TS, and DS contributed additional edits to the text and comments. All authors read and approved the final manuscript. Core members of the Reproducibility Working Group responsible for the checklist and its implementation were LB, AK, TS, and DS. All group members contributed equally to the project.

Acknowledgments
We thankfully acknowledge the useful feedback on the checklist from Susanna Sansone at BioSharing (https://www.biosharing.org/), the entire BMC Biology and Genome Biology editorial teams, including Penny Austin and Rafal Marszalek, and the Research Integrity team, especially Maria Kowalczuk and Elizabeth Moylan (http://www.biomedcentral.com/authors/biomededitors), at BioMed Central. This editorial was published jointly in BMC Neuroscience, Genome Biology, and GigaScience.

Author details
1BioMed Central, London, United Kingdom.

© 2015 Kenall et al. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
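The author-facing side of the checklist amounts to a simple validation step at submission: for each item, the author either confirms it or records a reason it is not available or not applicable, and anything left unaccounted for is flagged back. As a rough illustration only (the three area names follow the editorial, but the individual item wordings and function names below are invented for this sketch, not BioMed Central's actual checklist), a submission system might encode that step like this:

```python
# Hypothetical sketch of the trial checklist's three reporting areas.
# Item wordings are illustrative, not the actual checklist text.
CHECKLIST = {
    "experimental_design_and_statistics": [
        "sample size justification",
        "statistical tests reported",
    ],
    "resources": [
        "animal strains uniquely identified",
        "cell lines uniquely identified",
        "reagents uniquely identified",
    ],
    "availability_of_data_and_materials": [
        "data deposited or availability statement given",
        "materials availability stated",
    ],
}

def outstanding_items(confirmed, reasons):
    """Return (area, item) pairs the author has neither confirmed
    nor explained as not available / not applicable."""
    missing = []
    for area, items in CHECKLIST.items():
        for item in items:
            if item not in confirmed and item not in reasons:
                missing.append((area, item))
    return missing
```

At submission, the system would call `outstanding_items` with the author's confirmations and stated exceptions, and only pass the manuscript to reviewers once the returned list is empty; reviewers would then re-confirm each item rather than re-collect it.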