A Protocol of a Cross-Sectional Study Evaluating an Online Tool for Early Career Peer Reviewers Assessing Reports of Randomised Controlled Trials


A protocol of a cross-sectional study evaluating an online tool for early career peer reviewers assessing reports of randomised controlled trials

Anthony Chauvin,1,2,3 David Moher,4,5 Doug Altman,6 David L Schriger,7 Sabina Alam,8 Sally Hopewell,9 Daniel R Shanahan,10 Alessandro Recchioni,10 Philippe Ravaud,1,2 Isabelle Boutron1,2,11

For numbered affiliations see end of article.

To cite: Chauvin A, Moher D, Altman D, et al. A protocol of a cross-sectional study evaluating an online tool for early career peer reviewers assessing reports of randomised controlled trials. BMJ Open 2017;7:e017462. doi:10.1136/bmjopen-2017-017462

Open Access Protocol. BMJ Open: first published as 10.1136/bmjopen-2017-017462 on 15 September 2017.

Prepublication history and additional material for this paper are available online. To view these files, please visit the journal online (http://dx.doi.org/10.1136/bmjopen-2017-017462).

Received 2 May 2017; Revised 20 July 2017; Accepted 28 July 2017

Correspondence to Dr Anthony Chauvin; anthony.chauvin@aphp.fr

ABSTRACT

Introduction: Systematic reviews evaluating the impact of interventions to improve the quality of peer review for biomedical publications have highlighted that interventions were limited and had little impact. This study aims to compare the accuracy of early career peer reviewers who use an innovative online tool with that of the usual peer review process in evaluating the completeness of reporting and switched primary outcomes in completed reports.

Methods and analysis: This is a cross-sectional study of individual two-arm parallel-group randomised controlled trials (RCTs) published in the BioMed Central series medical journals, BMJ, BMJ Open and Annals of Emergency Medicine and indexed with the publication type 'Randomised Controlled Trial'. First, we will develop an online tool and training module based on (a) the Consolidated Standards of Reporting Trials (CONSORT) 2010 checklist and the Explanation and Elaboration document, dedicated to junior peer reviewers assessing the completeness of reporting of key items, and (b) the Centre for Evidence-Based Medicine Outcome Monitoring Project process used to identify switched outcomes in completed reports of the primary results of RCTs when initially submitted. Then, we will compare the performance of early career peer reviewers who use the online tool with the usual peer review process in identifying inadequate reporting and switched outcomes in completed reports of RCTs at initial journal submission. The primary outcome will be the mean number of items accurately classified per manuscript. The secondary outcomes will be the mean number of items accurately classified per manuscript for the CONSORT items, and the sensitivity, specificity and likelihood ratio for detecting an item as adequately reported and for identifying a switch in outcomes. We aim to include 120 RCTs and 120 early career peer reviewers.

Ethics and dissemination: The research protocol was approved by the ethics committee of the INSERM Institutional Review Board (21 January 2016). The study is based on voluntary participation and informed written consent.

Trial registration number: NCT03119376.

Strengths and limitations of this study

► To the best of our knowledge, this will be the first study to assess the efficacy of an innovative online tool to help early career peer reviewers assess the quality of reporting of randomised controlled trials.
► A randomised selection of trials from a large panel of biomedical journals.
► The absence of background for choosing the a priori effect size.

INTRODUCTION

The peer review process is a cornerstone of biomedical research publication.1 The editorial peer review process is described as journal editors relying on the views of independent experts in making decisions on, for example, the publication of submitted manuscripts or presentation of reports at meetings.2 The peer review system is considered the best method for helping scientific editors decide on the acceptability of a manuscript for publication.3 Nevertheless, the effectiveness of the system has been questioned. In 2010, a report commissioned by the UK House of Commons showed that the cost to UK Higher Education Institutions in terms of staff time was £110 to £165 million per year for peer review and up to £30 million per year for the work done by editors and editorial boards.4 Worldwide, peer review is estimated to cost £1.9 billion annually and accounts for about one-quarter of the overall costs of scholarly publishing and distribution.5 The human resources involved were estimated at about 15 million hours by 2010.6

Furthermore, the peer review system may fail to identify important flaws in the quality of submitted manuscripts and published articles.7 8 A recent study by Hopewell et al9 showed that peer reviewers failed to detect important deficiencies in the reporting of methods and results of randomised trials. There is also considerable indirect evidence of the failure of peer review. Systematic reviews of clinical questions highlight a high prevalence of key information not being provided in peer-reviewed articles. Moreover, there is evidence to suggest that peer reviewers are not able to detect fraud,10 mistakes11 12 or spin.13 14 Systematic reviews evaluating the impact of interventions to improve the quality of peer review for biomedical publications highlighted few randomised controlled trials (RCTs) and found that interventions such as training have little or no impact.15 16

How the peer review process is currently organised can explain the difficulties encountered. A number of tasks are expected from peer reviewers when assessing reports of an RCT.17 They have to assess the completeness of reporting, whether outcomes were switched from those reported in the trial registry, the choice of the comparator and intervention, the relevance of outcomes, the risk of bias, the external validity, the presence of spin, and the originality, importance and relevance of the RCT to the journal's readers, etc. It may be unrealistic to expect a peer reviewer to complete all of these tasks.

Furthermore, the assessment of the completeness of reporting should be a prerequisite in the peer review process, because lack of transparency is a barrier to the assessment of important aspects of a trial such as the risk of bias18 and reproducibility.19 Another essential task of peer review is the identification of switched outcomes (ie, a discrepancy between the prespecified outcomes reported in the trial registry or protocol and the outcomes reported in the manuscript). To avoid reporting biases,20 all prespecified outcomes should be reported in the manuscript, and authors should report any changes to trial outcomes after the trial started, with reasons.21 The Centre for Evidence-Based Medicine Outcome Monitoring Project (COMPare) assessed RCTs published in the five general medical journals with the highest impact factor, comparing the outcomes published with the outcomes recorded in trial registries or protocols. Among the 67 RCTs examined in July 2016 (www.COMPare-Trials.org), only nine had no discrepancies; 354 outcomes were not reported and 357 new outcomes were 'silently' added.18 Despite being essential, the assessment of the completeness …

For each item, the tool will provide the key elements that need to be reported, extracted from the Explanation and Elaboration document, as well as example(s) of good reporting. The peer reviewer tool will use the existing CobWeb elements, reworded to be appropriate for peer reviewers; for the items not considered in CobWeb, we will extract the bullet points from the Explanation and Elaboration document of CONSORT 2010 following the same process.21 The tool will feature bullet points eliciting the meaning of each checklist item. This tool will allow for standardising the completion of this task, be an opportunity to train early career peer reviewers to become more expert peer reviewers, and be used to provide standardised and personalised feedback to authors based on the CONSORT items and bullet points; the authors would receive a request to report each item (with associated bullet points) that the early career peer reviewer identified as being inadequately reported.

The objectives of this project are as follows:
1. Develop an online tool and training module based on (a) the CONSORT 2010 checklist and the Explanation and Elaboration document, modified for early career peer reviewers (ie, early stage researchers: masters students, PhD students, residents involved in clinical research during their studies, and clinicians involved in clinical research who have never peer reviewed a manuscript), for assessing the completeness of reporting of 10 key items, and (b) the COMPare process used to identify switches in primary outcomes in completed reports of RCTs.
2. Compare the performance of early career peer reviewers who use the peer reviewer tool with that of usual peer reviewers in identifying inadequate reporting and switched outcomes in completed reports of RCTs.

The planning, conduct, analysis and reporting of the study were discussed by the study's scientific committee.

METHODS AND ANALYSIS

Study design

We approached the issue of peer reviewing as if it
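The COMPare-style comparison of registered versus published outcomes described above amounts to a set difference between two outcome lists. A minimal sketch in Python (the outcome names are hypothetical, not drawn from the study):

```python
def switched_outcomes(registered, published):
    """Compare outcomes prespecified in the trial registry with those in
    the published report: prespecified outcomes that were not reported,
    and new outcomes 'silently' added after the trial started."""
    registered, published = set(registered), set(published)
    return {
        "not_reported": registered - published,
        "silently_added": published - registered,
    }

# Hypothetical example: one registered outcome dropped, one added.
check = switched_outcomes(
    registered=["all-cause mortality", "myocardial infarction"],
    published=["all-cause mortality", "quality of life"],
)
```

In practice, matching registry and manuscript outcome names takes judgment (wording often differs between the two sources), which is why the protocol relies on trained reviewers following the COMPare process rather than exact string matching.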
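The outcome measures described above (mean number of items accurately classified per manuscript, plus sensitivity, specificity and positive likelihood ratio against a gold-standard classification) can be sketched as follows. This is illustrative only: the item names and the classifications are invented stand-ins, not the 10 key CONSORT items or any study data.

```python
ITEMS = [  # hypothetical stand-ins for the checklist items assessed
    "sequence_generation", "allocation_concealment", "blinding",
    "primary_outcome_definition", "sample_size", "participant_flow",
    "baseline_data", "numbers_analysed", "effect_estimate", "harms",
]

def items_accurately_classified(reviewer, gold):
    """Primary outcome for one manuscript: number of items the reviewer
    classified (adequately/inadequately reported) the same way as the
    gold standard."""
    return sum(reviewer[item] == gold[item] for item in ITEMS)

def diagnostic_accuracy(reviewers, golds):
    """Pool item-level classifications across manuscripts; 'positive'
    means the item is adequately reported per the gold standard."""
    tp = fp = tn = fn = 0
    for reviewer, gold in zip(reviewers, golds):
        for item in ITEMS:
            if gold[item]:
                tp += reviewer[item]        # adequate, reviewer agrees
                fn += not reviewer[item]    # adequate, reviewer misses
            else:
                fp += reviewer[item]        # inadequate, reviewer passes
                tn += not reviewer[item]    # inadequate, reviewer agrees
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        # Positive likelihood ratio: sensitivity / (1 - specificity).
        "lr_positive": sens / (1 - spec) if spec < 1 else float("inf"),
    }

# Fabricated classifications for two manuscripts (True = adequately reported).
gold1 = {item: True for item in ITEMS}
gold1["blinding"] = False
rev1 = dict(gold1)
rev1["harms"] = False                 # misses one adequately reported item
gold2 = {item: False for item in ITEMS}
rev2 = dict(gold2)
rev2["sample_size"] = True            # passes one inadequately reported item

accuracy = items_accurately_classified(rev1, gold1)   # 9 of 10 items
metrics = diagnostic_accuracy([rev1, rev2], [gold1, gold2])
```

In the study itself, these quantities would be averaged over the 120 manuscripts and compared between the tool-assisted early career reviewers and the usual peer review process.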
Recommended publications
  • Stephen P. Lock on "Journalology"
    Institute for Scientific Information, 3501 Market St, Philadelphia PA 19104. Stephen P. Lock on "Journalology", Number 3, January 15, 1990. I don't remember exactly how long I have known Stephen P. Lock. Our first encounter was probably at a meeting of the Council of Biology Editors (CBE) some 20 years ago. It is somewhat unsettling to realize that so much time has gone by. Equally disquieting is the notion that Steve will soon retire as editor of the venerable British Medical Journal (BMJ). But in the near future Steve will indeed step down, having helped train his successor, due to be appointed in the fall of 1990. It seems odd to refer to Lock as an elder statesman of biomedical editing, however accurate that description might be. The word that comes to mind when I think of Steve is youth. This youthfulness is reflected in his writing and in his approach to problems. I am amused that Steve opened his talk with the traditional, mythical reminder about the estimates of the number of extant journals. Lock: Editor and Scholar. Stephen P. Lock was born in 1929 and received his education at Queen's College, Cambridge University, UK, and the Medical College of St. Bartholomew's Hospital. Trained as a hematologist, he served on the staffs of various London teaching hospitals, including St. Bartholomew's and the Hospital for Sick Children, before being appointed assistant editor of BMJ in 1964. After serving as senior assistant editor and deputy editor, Lock became editor in 1975.
  • Assessing the Utility of an Institutional Publications Officer: a Pilot Assessment
    Assessing the utility of an institutional publications officer: a pilot assessment Kelly D. Cobey1,2,3, James Galipeau1, Larissa Shamseer1,2 and David Moher1,2 1 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada 2 School of Epidemiology, Public Health, and Preventative Medicine, University of Ottawa, Ottawa, Canada 3 School of Natural Sciences, Department of Psychology, University of Stirling, Stirling, United Kingdom ABSTRACT Background. The scholarly publication landscape is changing rapidly. We investigated whether the introduction of an institutional publications officer might help facilitate better knowledge of publication topics and related resources, and effectively support researchers to publish. Methods. In September 2015, a purpose-built survey about researchers' knowledge and perceptions of publication practices was administered at five Ottawa area research institutions. Subsequently, we publicly announced a newly hired publications officer (KDC) who then began conducting outreach at two of the institutions. Specifically, the publications officer gave presentations, held one-to-one consultations, developed electronic newsletter content, and generated and maintained a webpage of resources. In March 2016, we re-surveyed our participants regarding their knowledge and perceptions of publishing. Mean scores to the perception questions, and the percent of correct responses to the knowledge questions, pre and post survey, were computed for each item. The difference between these means or calculated percentages was then examined across the survey measures. Results. 82 participants completed both surveys. Of this group, 29 indicated that they had exposure to the publications officer, while the remaining 53 indicated they did not. Interaction with the publications officer led to improvements in half of the knowledge items (7/14 variables).
  • Core Competencies for Scientific Editors of Biomedical Journals
    Moher et al. BMC Medicine (2017) 15:167 DOI 10.1186/s12916-017-0927-0 CORRESPONDENCE Open Access Core competencies for scientific editors of biomedical journals: consensus statement. David Moher1,2*, James Galipeau3, Sabina Alam4, Virginia Barbour5, Kidist Bartolomeos6, Patricia Baskin7,8, Sally Bell-Syer9,10, Kelly D. Cobey1,2,11, Leighton Chan12, Jocalyn Clark13, Jonathan Deeks14, Annette Flanagin15, Paul Garner16, Anne-Marie Glenny17, Trish Groves18, Kurinchi Gurusamy19, Farrokh Habibzadeh20,21,22, Stefanie Jewell-Thomas23, Diane Kelsall24, José Florencio Lapeña Jr22,25,26,27, Harriet MacLehose28, Ana Marusic29,30, Joanne E. McKenzie31, Jay Shah32,33,34, Larissa Shamseer1,2, Sharon Straus35, Peter Tugwell2,36,37, Elizabeth Wager38,39, Margaret Winker22 and Getu Zhaori40. Abstract Background: Scientific editors are responsible for deciding which articles to publish in their journals. However, we have not found documentation of their required knowledge, skills, and characteristics, or the existence of any formal core competencies for this role. Methods: We describe the development of a minimum set of core competencies for scientific editors of biomedical journals. Results: The 14 key core competencies are divided into three major areas, and each competency has a list of associated elements or descriptions of more specific knowledge, skills, and characteristics that contribute to its fulfillment. Conclusions: We believe that these core competencies are a baseline of the knowledge, skills, and characteristics needed to perform competently the duties of a scientific editor at a biomedical journal. Keywords: Core competencies, Scientific editor, Biomedical journal, Delphi, Expert consensus, Editor role. Introduction: Scientific editors (editors are responsible for the content … and in guidance for members of editor organizations [3–8].
  • A Systematic Examination of Preprint Platforms for Use in the Medical and Biomedical Sciences Setting
    bioRxiv preprint doi: https://doi.org/10.1101/2020.04.27.063578; this version posted April 28, 2020. The copyright holder for this preprint (which was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY 4.0 International license. A systematic examination of preprint platforms for use in the medical and biomedical sciences setting. Jamie J Kirkham1*, Naomi Penfold2, Fiona Murphy3, Isabelle Boutron4, John PA Ioannidis5, Jessica K Polka2, David Moher6,7. 1Centre for Biostatistics, Manchester Academic Health Science Centre, University of Manchester, Manchester, United Kingdom. 2ASAPbio, San Francisco, CA, USA. 3Murphy Mitchell Consulting Ltd. 4Université de Paris, Centre of Research in Epidemiology and Statistics (CRESS), Inserm, Paris, F-75004 France. 5Meta-Research Innovation Center at Stanford (METRICS) and Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, Stanford University, Stanford, CA, USA. 6Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada. 7School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada. *Corresponding Author: Professor Jamie Kirkham, Centre for Biostatistics, Faculty of Biology, Medicine and Health, The University of Manchester, Jean McFarlane Building, Oxford Road, Manchester, M13 9PL, UK. Email: [email protected] Tel: +44 (0)161 275 1135
  • Developing PRISMA-RR, a Reporting Guideline for Rapid Reviews of Primary Studies (Protocol)
    Developing PRISMA-RR, a reporting guideline for rapid reviews of primary studies (Protocol) Adrienne Stevens1,2, Chantelle Garritty1,2, Mona Hersi1, David Moher1,3,4 1Ottawa Methods Centre, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Canada; 2TRIBE Graduate Program, University of Split School of Medicine, Croatia; 3Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Canada; 4School of Epidemiology and Public Health, University of Ottawa, Canada; Funding: The development of PRISMA-RR is supported by a grant from the Canadian Institutes of Health Research (CIHR FRN-142310). The funder had no role in the development of this protocol and will not have a role in the data collection, analyses, interpretation of the data, and publication of the findings. Funding will be sought to develop the Explanation and Elaboration manuscript. Developing PRISMA-RR, a reporting guideline for rapid reviews of primary studies February 2018 INTRODUCTION Systematic reviews are known to be the best evidence upon which to make healthcare decisions, but can take years to complete (1). Rapid reviews have emerged as accelerated and/or abbreviated versions of systematic reviews with certain concessions made in the systematic review process to accommodate decision-making situations that require an expedited compilation of the evidence (2). Rapid reviews are usually conducted for a specific requestor and are typically understood to take 6 months or less to complete. Rapid reviews may differ from systematic reviews in a number of ways, such as the number of abbreviated methods or shortcuts employed and the breadth or scope of the question(s) posed (2–4). Some rapid reviews involve accelerated data mining processes and targeted screening of studies.
  • The Evolution of the Science Citation Index Search Engine to the Web of Science, Scientometric Evaluation and Historiography
    The Evolution of the Science Citation Index Search Engine to the Web of Science, Scientometric Evaluation and Historiography. Eugene Garfield, Chairman Emeritus, Thomson ISI, 3501 Market Street, Philadelphia PA 19104. Fax: 215-387-1266 Tel: 215-243-2205 [email protected] www.eugenegarfield.org. Presented at the University of Barcelona, January 24, 2007. The Science Citation Index was proposed over fifty years ago to facilitate the dissemination and retrieval of scientific literature. Its unique search engine based on citation searching was not widely adopted until it was made available online in 1972. Its by-product, Journal Citation Reports, became available in 1975, including its rankings by impact factor. Impact factors were not widely adopted until about a decade ago, when they began to be used as surrogates for expected citation frequencies for recently published papers: a highly controversial application of scientometrics in evaluating scientists and institutions. The inventor of the SCI and its companion Social Sciences Citation Index will review its history and discuss their more recent use in graphically visualizing micro-histories of scholarly topics. Using the patented HistCite software for algorithmic historiographic analysis, the genealogy of the Watson-Crick discovery of the double helix structure of DNA and its relationship to the work of Heidelberger, Avery, and others will be discussed. It is now over fifty years since the idea of the Science Citation Index (SCI) was first promulgated in Science magazine in 1955. However, as the older generation of scientists will remember, Current Contents is the information service that proved to be the primordial revolutionary "idea" which made the practical realization of the SCI possible.
  • Where to Publish?
    Where (Not) to Publish? B. Mohan Kumar. Outline: • what factors decide the choice of journal • journal impact factors • predatory journals. Where to report: choosing a journal, international or local. International journals: more prestigious (than local ones); wider readerships; higher standards and rejection rates. Publishing in international journals requires more effort, but is more rewarding in the long run. How do you choose a journal for publishing the paper? • Whoever will accept my article • Has friends on the editorial board • Has good artwork • Has (does not have) ads • Has fast turnaround time • Highest Journal Impact Factor (impact factors are used to rank and, therefore, identify the so-called best journal in which to publish one's work) • Journal most relevant to my field • Journal likely to be read by colleagues • Journal with a reputation for timely, helpful review • Journal affiliated with a professional society • Other factors: prestige, past experience • Ask a librarian or respected colleague. Journal Impact Factor: the central dogma of scholarly publishing. Historical aspects: Eugene Garfield, the founder of the Institute for Scientific Information (ISI, now Thomson Scientific) in 1955, envisioned creating a metric to evaluate journal performance. Over the next 20 years, Garfield and the late Irving H. Sher worked to refine the Journal Impact Factor concept: the Science Citation Index in the 1960s and, subsequently, the Journal Citation Reports (JCR), published in 1975. It is calculated each year for those journals which Thomson Reuters indexes and is published in the JCR. A highly influential ranking tool used by participants in all phases of the research publishing cycle: librarians, publishers, editors, authors and information analysts.
  • Potential Predatory and Legitimate Biomedical Journals
    Shamseer et al. BMC Medicine (2017) 15:28 DOI 10.1186/s12916-017-0785-9 RESEARCH ARTICLE Open Access Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison Larissa Shamseer1,2*, David Moher1,2, Onyi Maduekwe3, Lucy Turner4, Virginia Barbour5, Rebecca Burch6, Jocalyn Clark7, James Galipeau1, Jason Roberts8 and Beverley J. Shea9 Abstract Background: The Internet has transformed scholarly publishing, most notably, by the introduction of open access publishing. Recently, there has been a rise of online journals characterized as 'predatory', which actively solicit manuscripts and charge publication fees without providing robust peer review and editorial services. We carried out a cross-sectional comparison of characteristics of potential predatory, legitimate open access, and legitimate subscription-based biomedical journals. Methods: On July 10, 2014, scholarly journals from each of the following groups were identified – potential predatory journals (source: Beall's List), presumed legitimate, fully open access journals (source: PubMed Central), and presumed legitimate subscription-based (including hybrid) journals (source: Abridged Index Medicus). MEDLINE journal inclusion criteria were used to screen and identify biomedical journals from within the potential predatory journals group. One hundred journals from each group were randomly selected. Journal characteristics (e.g., website integrity, look and feel, editors and staff, editorial/peer review process, instructions to authors,
  • Hong Kong Principles
    The Hong Kong Principles for Assessing Researchers: Fostering Research Integrity. David Moher1,2, Lex Bouter3,4, Sabine Kleinert5, Paul Glasziou6, Mai Har Sham7, Virginia Barbour8, Anne-Marie Coriat9, Nicole Foeger10, Ulrich Dirnagl11. 1Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute; 2School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada; 3Department of Epidemiology and Biostatistics, Amsterdam University Medical Centers, location VUmc; 4Department of Philosophy, Faculty of Humanities, Vrije Universiteit, Amsterdam, The Netherlands; 5The Lancet, London Wall Office, London, UK; 6Institute for Evidence-Based Healthcare, Bond University, Gold Coast, Qld, Australia; 7School of Biomedical Sciences, LKS Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; 8Queensland University of Technology (QUT), Brisbane, Australia; 9Wellcome Trust, London; 10Austrian Agency for Research Integrity, Vienna, Austria; 11Berlin Institute of Health, QUEST Center for Transforming Biomedical Research, Berlin, Germany. ORCID iDs: David Moher 0000-0003-2434-4206; Lex Bouter 0000-0002-2659-5482; Sabine Kleinert 0000-0001-7826-1188; Paul Glasziou 0000-0001-7564-073X; Mai Har Sham 0000-0003-1179-7839; Virginia Barbour 0000-0002-2358-2440; Anne-Marie Coriat 0000-0003-2632-1745; Nicole Foeger 0000-0001-7775-7325; Ulrich Dirnagl 0000-0003-0755-6119. 23rd November 2019. Abstract: The primary goal of research is to advance knowledge. For that knowledge to benefit research and society, it must be trustworthy.
  • Systematic Review of the Effectiveness of Training Programs in Writing for Scholarly Publication, Journal Editing, and Manuscript Peer Review (Protocol)
    Galipeau et al. Systematic Reviews 2013, 2:41 http://www.systematicreviewsjournal.com/content/2/1/41 PROTOCOL Open Access Systematic review of the effectiveness of training programs in writing for scholarly publication, journal editing, and manuscript peer review (protocol) James Galipeau1*, David Moher1,2, Becky Skidmore3, Craig Campbell4, Paul Hendry2, D William Cameron1,2, Paul C Hébert1,2 and Anita Palepu5 Abstract Background: An estimated $100 billion is lost to ‘waste’ in biomedical research globally, annually, much of which comes from the poor quality of published research. One area of waste involves bias in reporting research, which compromises the usability of published reports. In response, there has been an upsurge in interest and research in the scientific process of writing, editing, peer reviewing, and publishing (that is, journalology) of biomedical research. One reason for bias in reporting and the problem of unusable reports could be due to authors lacking knowledge or engaging in questionable practices while designing, conducting, or reporting their research. Another might be that the peer review process for journal publication has serious flaws, including possibly being ineffective, and having poorly trained and poorly motivated reviewers. Similarly, many journal editors have limited knowledge related to publication ethics. This can ultimately have a negative impact on the healthcare system. There have been repeated calls for better, more numerous training opportunities in writing for publication, peer review, and publishing. However, little research has taken stock of journalology training opportunities or evaluations of their effectiveness. Methods: We will conduct a systematic review to synthesize studies that evaluate the effectiveness of training programs in journalology.
  • Influential Journals in Health Research: a Bibliometric Study
    Merigó and Núñez Globalization and Health (2016) 12:46 DOI 10.1186/s12992-016-0186-4 RESEARCH Open Access Influential journals in health research: a bibliometric study José M. Merigó and Alicia Núñez* Abstract Background: There is a wide range of intellectual work written about health research, which has been shaped by the evolution of diseases. This study aims to identify the leading journals over the last 25 years (1990–2014) according to a wide range of bibliometric indicators. Methods: The study develops a bibliometric overview of all the journals that are currently indexed in Web of Science (WoS) database in any of the four categories connected to health research. The work classifies health research in nine subfields: Public Health, Environmental and Occupational Health, Health Management and Economics, Health Promotion and Health Behavior, Epidemiology, Health Policy and Services, Medicine, Health Informatics, Engineering and Technology, and Primary Care. Results: The results indicate a wide dispersion between categories being the American Journal of Epidemiology, Environmental Health Perspectives, American Journal of Public Health, and Social Science & Medicine, the journals that have received the highest number of citations over the last 25 years. According to other indicators such as the h-index and the citations per paper, some other journals such as the Annual Review of Public Health and Medical Care, obtain better results which show the wide diversity and profiles of outlets available in the scientific community. The results are grouped and studied according to the nine subfields in order to identify the leading journals in each specific sub discipline of health. Conclusions: The work identifies the leading journals in health research through a bibliometric approach.
  • Predatory Journals: Do Not Enter
    Commentary. Predatory Journals: Do Not Enter. Faizan Khan, BSc1,2, David Moher PhD1,2. 1School of Epidemiology, Public Health and Preventive Medicine, University of Ottawa; 2Clinical Epidemiology Program, Ottawa Hospital Research Institute. ABSTRACT: The escalation in open-access publishing has fueled the rise of questionable businesses, namely, 'predatory' journals. Predatory journals and their publishers seek manuscripts through aggressive electronic solicitation and execute flawed peer-review practices, consequently undermining the scholarly record and current research cultures, globally. As these journals are not indexed in any legitimate databases, the research they publish is often undiscoverable and fails to be disseminated to a worldwide readership. Acknowledging this threat to the credibility of science, this article aims to alert researchers at various levels of their career to the increasing global issue of predatory journals, and to offer helpful advice and resources for identifying and avoiding them.