
GLOBAL STATE OF PEER REVIEW — 2018

FOREWORD 2

INTRODUCTION 4
What is peer review? 5
The role of the editor 7
Why is peer review so important? 8
Why do researchers choose to peer review? 12
Previous research on peer review 13

WHO IS DOING THE REVIEW? 14
Supply and demand of peer review, by region 15
Supply and demand of editors 18
Growth in review output over time 20
Supply and demand of peer review, by research area 21
Review distribution within regions and research areas 22
Reviewer distribution, by gender 24

HOW EFFICIENT IS THE PEER REVIEW PROCESS? 27
The peer review funnel 28
Why do reviewers decline review invitations? 30
How long does it take to write a review? 32
The tortoise and the hare: Review time by research area 34

WHAT DO WE KNOW ABOUT PEER REVIEW QUALITY? 36
Does size matter? 37
The power of the Journal Impact Factor? 38
Researcher perceptions of quality 41
Direct measures of review quality 43

WHAT DOES THE FUTURE HOLD? 44
Reviewer fatigue on the rise 45
Reviewers are getting faster 45
The rise of China 46
Supplying the reviewer pool 47
A shift to greater transparency? 48
Consensus: Incentives matter 49

CLOSING REMARKS 50

APPENDICES 52

FOREWORD

As Publons has grown over the last six years, so too has awareness of the important role that peer review — and peer reviewers — play in our ecosystem. However, there is still a substantial gap in our basic understanding of who the reviewers are, how much review work they undertake, the quality of those reviews, and what researchers think of it all.

Working with our colleagues at ScholarOne and Web of Science, we developed this report to shine some light on these questions. We hope it will prove to be a useful resource for the community and inform the evolution of the scholarly ecosystem in the coming years as we address the challenges of:

• ensuring the quality and efficiency of the peer review process in the face of rapidly increasing publication rates;
• the emergence of new, innovative models of peer review;
• the shift to open access and the emergence of predatory publishers; and
• increasing diversity in gender and geographic distribution.

One of the key findings in this report is that researchers from emerging regions are under-represented in the peer review process. While there are indications this trend may self-correct in the long run, we believe it is an urgent issue to address. We will struggle to meet the growing demands on the system without the contributions of the full global research community. The reality is that this peer review disparity is harming the development of non-Western researchers — fewer review invitations mean fewer opportunities to see the latest research trends, learn what journals are looking for in a great paper, make professional connections with journal editors, and develop critical skills.

With that said, I am proud to present the first edition of the Global State of Peer Review report. I hope you find it as informative to read as we found it to create, and I look forward to your feedback, ideas, and suggestions.

Regards,
DR ANDREW PRESTON, PUBLONS COFOUNDER
SEPTEMBER 2018

ABOUT THIS REPORT

This inaugural Global State of Peer Review report has been developed to investigate the state of scholarly journal peer review. To do that, we have focused on four big questions, each of which forms a chapter in this report.

1. Who is doing the review?
2. How efficient is the peer review process?
3. What do we know about peer review quality?
4. What does the future hold?

We have combined one of the largest peer review surveys of the global research community with data derived from Clarivate Analytics' powerful cross-publisher sources.

This is one of the largest sets of data about peer review ever collated. We hope it will bring a new level of transparency to the world of peer review, as well as lay the groundwork for future iterations of this report that delve into more areas of peer review.

The datasets do include limitations, such as greater coverage of the hard sciences, i.e., the Science, Technology, and Medical (STEM) research areas. While data covering peer review of the Humanities and Social Sciences (HSS) are included, they are not as well classified as the data for traditional STEM research areas. We acknowledge this and hope to explore the HSS disciplines in greater detail as the data become available.

See pages 54-57 in the appendix for a full description of the data, methodology, limitations, and assumptions used throughout this report.

Our primary data sources:

• Web of Science — aggregated publication and research field data.
• ScholarOne Manuscripts — aggregated submission and peer review data.
• Publons — aggregated peer review data.
• Publons' 2018 Global Reviewer Survey*: over 11,800 researchers globally, largely sourced from the Publons community and authors with articles indexed in Web of Science.
• Interviews with key leaders at five publishers.

To learn more about the Global State of Peer Review report, or about Publons, visit: publons.com/community/gspr

Or get in touch via: [email protected]

* Visit publons.com/community/gspr for the full Publons' 2018 Global Reviewer Survey report.
DOI: https://doi.org/10.14322/publons.GSPR2018

INTRODUCTION

WHAT IS PEER REVIEW?

Scholarly peer review (also known as refereeing) is the process of subjecting an author's scholarly work, research, or ideas to the scrutiny of others who are experts in the same field.

HOW IT WORKS

HISTORY

The peer review process is largely recognized to have begun in 1665 with the publication of Philosophical Transactions by the Royal Society, and has evolved over hundreds of years. During the 20th century, public funding for scientific research grew dramatically, and so did the volume of published literature and the corresponding peer review process.

FIG.1 — THE BASIC WORKFLOW
SUBMIT → EDITORIAL CHECK → INVITE REVIEWER(S) → REVIEWER(S) ACCEPT → REVIEW(S) COMPLETED → EDITOR DECISION (REJECT / RESUBMIT / ACCEPT) → PUBLISH

"IF REVIEWERS ARE EDUCATED ON THE RESPONSIBILITIES AND EXPECTATIONS OF BEING A REVIEWER... THEY SPEND THE RIGHT AMOUNT OF TIME ON A REVIEW, THE EDITORS THEN RECEIVE COMMENTS IN A TIMELY FASHION, AND AUTHORS RECEIVE RICH AND VALUABLE FEEDBACK."
KRISTEN MARCHETTI, ASSOCIATE DIRECTOR OF PEER REVIEW, SAGE

TYPES OF PEER REVIEW

During the 20th century, most fields of research and journals settled into a "single-blind" peer review process, where two or three reviewers provide the author and editor with feedback through one or more rounds of review.

While there are hints that more transparent forms of peer review are gaining mindshare (see chapter 4), single-blind and double-blind peer review still dominate today. For another view on the prevalence of different peer review models, we can look at how much transparency publishers and journals allow their reviews to have on Publons:

FIG.2 — COMMON AND EMERGING PEER REVIEW MODELS

• SINGLE-BLIND – reviewers unknown to author
• DOUBLE-BLIND – author and reviewers unknown to each other
• TRIPLE-BLIND – authors, reviewers, and editors all unknown to each other
• OPEN-IDENTITIES – author and reviewers are aware of each other's identity
• OPEN REPORTS – review reports are published alongside the relevant article
• OPEN IDENTITIES & OPEN REPORTS – all parties known to each other and review reports are public
• OPEN FINAL-VERSION COMMENTING – review or commenting on final "version of record" publications

FIG. 3 — PEER REVIEW POLICIES ON PUBLONS, BY JOURNAL
Data source: Publons
A breakdown of publisher-elected policies from a sample of 15,000 journals on Publons, where 63% (9,530) have a publisher-elected review display policy, and 37% (5,520) are unspecified.

                                   SIGNED                      BLIND
                                   (reviewer name published)   (name not published)
OPEN (review content published)    1.62%                       0.15%
CLOSED (content not published)     0.11%                       61.83%

THE ROLE OF THE EDITOR

Regardless of the type of peer review, it's important to acknowledge the role of journal editors in the process.

The editor-in-chief sets the vision and direction of a journal, and editors make critical decisions about whether to publish a paper or not. They are the final arbiters in the question of whether a paper contains sound research and/or meets the criteria of the journal in question. This can be a challenging job when presented with conflicting feedback from multiple reviewers and authors.

Editors also do the hard work of selecting reviewers and sourcing reviews. Without them, it is unlikely that all manuscripts — particularly the tricky ones — would be reviewed. The hard work that goes on behind the scenes to make this happen is often forgotten in discussions about new peer review or publishing models.

The role of an editor is often a thankless one. But it is a crucial part of the peer review process.

WHY IS PEER REVIEW SO IMPORTANT?

Peer reviewers and editors are gatekeepers of the research literature used to document and communicate human discovery.

Reviewers and editors work hard to ensure that only sound research is published. Where there are problems in research (slowing publication times, rising article retractions, etc.), peer review is often at their root.

FIG.4 — GROWTH IN PUBLISHED RESEARCH INDEXED IN WEB OF SCIENCE, 1980–2017
Data source: Web of Science.
Number of articles indexed in Web of Science from 1980 to 2017. The dip in 2017 is because, at the time these data were extracted, not all content for 2017 had been received and indexed. Note that part of the rapid increase in the early-to-mid 2000s can be attributed to an expansion of the Web of Science index.

Notwithstanding these problems, peer review has done a remarkable job of scaling and allowing us to develop a trusted corpus of research literature as the academic research process industrialized, and then digitized, in the 20th century. This is evidenced by the dramatic increase in peer reviewed publication output since 1980 (see Figure 4).

2.6% — annual growth in published article volume since 2013 (excl. 2017)

[Chart: documents indexed in the Web of Science Core Collection (millions), 1980–2017]

Since 2013, article publication volumes have grown by 2.6% per year, while submissions have grown by 6.1% per year.

The world of research is now so large, varied, and geographically diverse that few editors have the breadth and depth of expertise or the networks necessary to evaluate all submissions themselves. Outsourcing to expert peer reviewers is essential if an editor is to make an informed publication decision.

FIG.5 — TOP FIVE REGIONS BY PUBLICATION OUTPUT IN 2017
Data source: Web of Science
Articles indexed in Web of Science in 2017. Percentages represent articles with at least one affiliation in the region.
US 25.1% • CHINA 15.5% • UK 7.5% • GERMANY 6.0% • INDIA 4.3%

HOW IMPORTANT IS PEER REVIEW TO RESEARCHERS?

Importantly, our survey shows that researchers continue to regard peer review as a critical tool for ensuring the quality and integrity of the literature.

FIG.6 — HOW RESEARCHERS REGARD PEER REVIEW
Data source: Publons' 2018 Global Reviewer Survey
98% of respondents consider peer review either important (31.2%) or extremely important (66.8%) for ensuring the general quality and integrity of the literature.
EXTREMELY IMPORTANT 66.8% • IMPORTANT 31.2% • UNIMPORTANT 0.8% • EXTREMELY UNIMPORTANT 0.4% • NOT SURE/DON'T KNOW 0.8%
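The submission and publication growth rates reported earlier in this chapter (6.1% vs. 2.6% per year since 2013) compound into a steadily widening review burden. The sketch below is a toy projection, not the report's data: both series are normalized to 1.0 in 2013 purely for illustration.

```python
# Toy projection of the growth rates quoted in this chapter: submissions
# growing 6.1% per year vs. published articles growing 2.6% per year.
# Both series are normalized to 1.0 in 2013; purely illustrative.

def compound(rate, years):
    """Growth factor after `years` of constant annual growth at `rate`."""
    return (1 + rate) ** years

for year in (2013, 2018, 2023):
    t = year - 2013
    ratio = compound(0.061, t) / compound(0.026, t)
    print(f"{year}: submissions per published article (vs. 2013) = {ratio:.2f}")
```

After five years the ratio is roughly 1.18, i.e. around 18% more submissions per published article than in 2013, which is one way to see why the reviewing workload grows faster than the published literature itself.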

INCENTIVES MATTER

Publishing peer reviewed research has become the key indicator of research output and impact.

This is largely because the standards upheld by peer review mean that only research scrutinized by peers, and therefore meeting some quality threshold, is published. Consequently, funding, hiring, and promotion decisions are often tied to published research records.

When combined with shrinking research budgets, it is no surprise that researchers are increasingly desperate to publish, and publish fast. The entire community recognizes this.

FIG.7 — TOP FOUR ACHIEVEMENTS MOST COMMONLY CITED AS BEING IMPORTANT TO OVERALL CAREER SUCCESS
Data source: Publons' 2018 Global Reviewer Survey
Respondents identified the following factors as contributing most to overall career success from a list of 12 options:
• Being published in respected journals 53.4%
• Securing grant funding 35.9%
• Being highly cited in respected journals 27%
• General research, teaching or administrative work 18.8%

ASIDE
• As incentives drive researchers to publish as much as possible, some researchers have resorted to chopping results into smaller and smaller papers. This process is called salami slicing.
• In physics, where electrons, photons, and gluons are discussed daily, the minimum unit of publishable material — the thinnest possible salami slice — is referred to as the "publon". This is where the name "Publons" comes from.

"ENSURING THE INTEGRITY OF THE PEER REVIEW PROCESS IS VITAL AND AN INCREASING CHALLENGE THAT WE CANNOT FAIL TO MEET."
TAMI POTTEN, EDITORIAL DEVELOPMENT MANAGER, BRITISH INSTITUTE OF RADIOLOGY

It is clearly important (and lucrative) for researchers to publish peer reviewed articles and to act as editors for journals. Historically, however, there has been no equivalent reward or incentive for peer reviewing. Researchers overwhelmingly believe there should be. 84.8% of survey respondents think institutions should more explicitly require and recognize peer review contributions

FIG.8 — PERCEPTIONS ABOUT THE ROLE OF INSTITUTIONS IN RECOGNIZING PEER REVIEW
Data source: Publons' 2018 Global Reviewer Survey

WHY DO RESEARCHERS CHOOSE TO PEER REVIEW?

Despite comparatively weak incentives and recognition, peer review does happen. So why do researchers give up valuable research and teaching time to contribute? What motivates them?

Researchers see peer reviewing as part of their job, something they should reciprocate, and a necessary part of contributing to the integrity of the published literature. They are also quite aware that it is a valuable way to stay up-to-date with research trends in their field.

Differences are observed across research fields within the dataset:
• Roughly 50% of researchers in Physics, Mathematics, …, and Space Science cited "reviewing is part of my job" as the reason they review, while only 30% of respondents in Clinical Medicine gave priority to this response.
• Instead, most reviewers in Clinical Medicine cited "to improve my own writing skills" as the most common reason they chose to peer review.

FIG.9 — WHY RESEARCHERS PEER REVIEW
Data source: Publons' 2018 Global Reviewer Survey
Respondents were asked to select up to two reasons from a list of nine options.
• It's part of my job as a researcher 40.8%
• Do my fair share / reciprocate for reviews of my work 35.1%
• Keep up-to-date with the latest research trends in my field 32.9%
• Ensure the quality & integrity of research published in my field 32.9%
• Voluntary service to my field / research community 17.5%
• Develop personal reputation & career progression 13.9%
• Improve my own writing skills 13.5%
• Build relationships with journals / editors 11.9%

PREVIOUS RESEARCH ON PEER REVIEW

Historically, studies of peer review have been hindered by the very nature of the process. Single-blind peer reviews are not published and the name of the reviewer is not revealed, making it very difficult to do large-scale, cross-publisher studies of the peer review process. This has led to a rather complex and incomplete picture of peer review, and produced little overall consensus from the research community on key concerns, including transparency, accountability, and editorial practices.

Thankfully, this is slowly changing. Researchers and publishers alike have been able to draw back the curtain on peer review in recent years as more and more scholarly communications systems move online. Transparency, which is increasingly viewed across the industry as an important aspect of scholarly communication, has enabled a wealth of diverse research, helping the scholarly community collectively weigh up and scrutinize the various stages of the peer review process. This shift has led to a number of fundamental studies and initiatives over the past decade.

While we cannot acknowledge all of these here, we would like to recognize some of the large-scale surveys that have engaged with peer reviewers to inform this work. Like ours, most of these surveys were created in consultation with editors and publishers, and used to garner an understanding of reviewers' motivations and needs, with the ultimate goal of opening up and improving the future of the system:

• Sense About Science Peer Review Survey, 2009
• Peer Review Study, 2016
• Publishing Research Consortium Peer Review Survey, 2015
• PRE (AAAS) Survey, 2016
• Peer Review: A Global View, a Taylor & Francis survey, 2015

WHO IS DOING THE REVIEW?

In order to understand peer review, it is necessary to understand who is contributing to the peer review process.

This chapter explores who is doing the review by region, research field, and gender. The Review Distribution Index is presented, giving a measure of how the reviewing effort is distributed within research areas and regions. These data provide a basis for investigation into issues such as potential bias and distribution of the peer review workload.

KEY FINDINGS
• USA dominates absolute contributions to peer review, contributing 32.9% of all reviews compared to 25.4% of published article output.
• China reviews substantially less (8.8%) than its article output (13.8%) would predict.
• Established regions review more than emerging regions relative to their respective article outputs.
• Editors are disproportionately selected from established regions.
• Editors disproportionately select reviewers from their own region.
• Absolute review contributions are growing in all regions, but more rapidly in emerging regions. China, in particular, is rapidly increasing review output.
• There are few studies of gender in peer review, but early indications are that male participation is higher than female.

SUPPLY AND DEMAND OF PEER REVIEW, BY REGION

When a researcher submits a manuscript to a journal, they create demand for peer review. When a researcher completes a review, they are supplying peer review.

Figure 10 presents the supply and demand of reviews by region. USA supplies by far the most reviews. China ranks second, producing almost exactly as many reviews as the UK, despite creating more than twice the demand through manuscript submissions. Emerging regions such as India, Turkey, and Iran all review substantially less than their manuscript submissions would suggest.

FIG.10 — PEER REVIEW SUPPLY AND DEMAND, BY REGION
Data source: ScholarOne
The total number of completed reviews (left axis) and submitted manuscripts (right axis) across 20 selected regions, 2013–2017. Supply: reviews completed. Demand: manuscripts submitted.
[Chart: supply of reviews (millions) and demand for reviews (millions) for the 20 selected regions]

To further investigate relative review rates, the number of reviews a region has contributed per submitted manuscript has been charted against the number of publications that region contributed between 2013–17.

On the right are China and USA – the global research powerhouses. In the top two thirds of the chart are the 11 established regions that dominate the peer review process. Together they accounted for 66.1% of articles in Web of Science and 68.1% of ScholarOne reviews between 2013–2017.

In the bottom third are the nine emerging regions. These regions peer review less than their article output suggests. Together, the emerging regions accounted for 29.3% of Web of Science articles but performed just 18.9% of ScholarOne reviews between 2013–2017.

FIG.11 — RELATIVE PEER REVIEW CONTRIBUTIONS, BY REGION
Data source: ScholarOne
Regions charted by the number of ScholarOne reviews they performed per submitted manuscript (vertical axis) against total WoS publication output (horizontal axis, logarithmic scale), between 2013–17. The size of each circle represents the total number of reviews completed. Regions are marked as established or emerging.
[Chart: the 20 selected regions, from Japan and USA at the top (highest reviews per submitted manuscript) down to India and Iran at the bottom]

ESTABLISHED AND EMERGING REGIONS

In developing this report we selected the top 20 regions when ordered by combined research output (publications and reviews). These regions were categorized into groups of established and emerging regions based on the typical classification of emerging economies. Data were then aggregated by these groups. Some key results are tabulated below.

It is clear that emerging regions peer review less than their article output would predict. Why? To answer that question, we turn first to the editor.

FIG.11B — KEY PEER REVIEW STATISTICS OF ESTABLISHED AND EMERGING REGIONS
Data source: Web of Science, ScholarOne & Publons

                                  Established   Emerging
WoS publications                  66.1%         29.3%
Reviews                           68.1%         19.1%
Reviews per submission            1.95          0.66
Median review time (days)         16.4          15.1
Review invitation agreement rate  49.5%         56.6%
Review length (words)             527.5         249.9
Editors                           96.1%         3.9%

FIG.12 — TOP 20 REGIONS, BY RESEARCH OUTPUT
Data source: Web of Science, ScholarOne & Publons
Australia, Brazil, Canada, China, France, Germany, India, Iran, Italy, Japan, Malaysia, Netherlands, Poland, South Korea, Spain, Sweden, Taiwan, Turkey, UK, USA

SUPPLY AND DEMAND OF EDITORS

Editors play a critical role in the peer review process, particularly when it comes to selecting reviewers and sourcing reviews. So where are all the editors, and who are they inviting to peer review?

Figure 13 compares the distribution of journal editors across selected regions with the distribution of publication output across those regions. The group of established regions accounts for 96.1% of editors, while emerging regions account for just 3.9%. This distribution of editors is a substantial divergence from what article or peer review output would predict.

96.1% of Publons editors are from established regions

FIG.13 — DISTRIBUTION OF JOURNAL EDITORS AND PEER REVIEW ACROSS REGIONS, 2013–17
Data source: Publons & Web of Science
Total number of reviews completed (ScholarOne) and total number of editors (Publons), by region.
[Chart: number of editors in Publons and number of reviews in ScholarOne for the 20 selected regions]

FIG.14 — EDITOR AND REVIEWER CONNECTIONS ACROSS REGIONS
Data source: Publons
Proportion of reviews that editors in different regions (rows) commissioned from reviewers in different regions (columns). Each editor region (row) is normalized.
[Heatmap: editor region vs. reviewer region for the 20 selected regions]

The regions an editor and a reviewer are affiliated with for a given review are compared in Figure 14 in the form of a heatmap. The dark diagonal line down the chart shows that 38% of all reviews involve an editor and reviewer from the same region, substantially higher than the 14% rate expected if editors selected reviewers at a rate proportional to the amount of review each region completes.

Figure 14 suggests editors are more likely to be from established regions and are more likely to invite reviewers from their own region. This may partly explain why there are fewer reviewers from emerging regions than publication output would predict.

EDITORIAL COMMENT: There are multiple reasons editors and reviewers might come from the same region:
• Editors may have preferences about who they invite.
• Reviewers could have preferences about who they review for.
• There may be regional differences in reviewer acceptance rates.
• An editor's network may be strongest in these places.
• Reviewing expertise may happen to sit in these regions.
• Review quality could vary by region.
• Editors may have preferences toward regions where English is spoken.
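The observed-versus-expected comparison behind those two numbers (38% vs. 14%) can be sketched in a few lines. This is an illustrative reconstruction, not the report's actual analysis, and the 3×3 count matrix below is invented:

```python
# Sketch: observed vs. expected share of reviews where editor and
# reviewer come from the same region. Rows are editor regions, columns
# are reviewer regions, cells are review counts (invented toy data).

def same_region_shares(counts):
    """Return (observed, expected) same-region review shares.

    observed: fraction of reviews on the matrix diagonal.
    expected: fraction predicted if editors picked reviewers in
    proportion to each region's overall review output (independence).
    """
    n = len(counts)
    total = sum(sum(row) for row in counts)
    observed = sum(counts[i][i] for i in range(n)) / total

    editor_share = [sum(row) / total for row in counts]          # row marginals
    reviewer_share = [sum(col) / total for col in zip(*counts)]  # column marginals
    expected = sum(e * r for e, r in zip(editor_share, reviewer_share))
    return observed, expected

counts = [
    [40,  5,  5],   # editors in region A
    [ 5, 20,  5],   # editors in region B
    [ 5,  5, 10],   # editors in region C
]
obs, exp = same_region_shares(counts)
print(f"observed {obs:.2f}, expected {exp:.2f}")
```

With this toy matrix the observed same-region share (0.70) sits far above the independence baseline (0.38), the same qualitative pattern the report finds at 38% vs. 14%.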

GROWTH IN REVIEW OUTPUT OVER TIME

While review output is growing in all regions, the output of emerging regions (led by China) is growing more rapidly, albeit from a smaller base. In fact, since 2015, China has been reviewing more than the United Kingdom. This rate of growth is encouraging: since 2015 we have been witnessing the emergence of these regions.

EDITORIAL COMMENT: Could review contributions lag behind article output? Presumably review contributions from emerging regions will continue to rapidly increase, so what will the world of peer review look like in 2025? We discuss this in chapter 4.

FIG.15 — GROWTH OF REVIEW CONTRIBUTIONS OVER TIME, USING 2013 AS A BASELINE
Data source: ScholarOne
Growth of emerging regions' review volume 2013–17: 193%. Growth of China's review volume 2013–17: 224%.
[Chart: % growth from 2013 to 2017 for USA, established regions, China, and emerging regions]

SUPPLY AND DEMAND OF PEER REVIEW, BY RESEARCH AREA

Reviewing practices across different research areas have been investigated by categorizing article submission and peer review completion data into 22 Essential Science Indicator (ESI) Research Areas.

On average, 2.7 reviews are completed for each submission across all research areas, although there is a wide range of review intensity, relative to submission rates, between research areas. While Clinical Medicine is clearly the largest research area by publication output, on a reviews per submission basis this field produces the fourth lowest number of reviews per submission.

EDITORIAL COMMENT: The wide variety in review per submission rates is indicative of different cultures across research areas. This can be partly attributed to different approaches to peer review, but will certainly be impacted by desk-reject rates: across our dataset, there were an average of 2.7 reviews per submission after desk rejects were removed from the sample. There may be other effects too — more data are required to explore further.

FIG.16 — RATIO OF REVIEWS COMPLETED, BY FIELD, 2013–17
Data source: ScholarOne
The ratio of peer reviews completed for each manuscript submission, across research areas. Green line shows the mean (1.6 reviews / submission). Note: these figures may be distorted by the relative size of the sample.
[Chart: completed reviews per original submission for each ESI research area, from Geosciences, Computer Science, and Neuroscience & Behavior at the high end down to Clinical Medicine, Molecular Biology & Genetics, and Agricultural Sciences at the low end]

The growing representation of China in the hard sciences can be observed by comparing region and review volume across research areas (Figure 17). In general, reviewers from emerging regions play a relatively strong role in the hard sciences. Figures 17 and 18 confirm that if there are researchers in a region with subject-matter expertise, they will, eventually, be invited to review.

FIG.17 — REVIEW VOLUME BY ESI RESEARCH AREA AND REGION, NORMALIZED BY RESEARCH AREA
Data source: ScholarOne
[Heatmap: the 22 ESI research areas against the 20 selected regions]

FIG.18 — REVIEW VOLUME BY ESI RESEARCH AREA AND REGION, NORMALIZED BY REGION
Data source: ScholarOne
[Heatmap: the 22 ESI research areas against the 20 selected regions]

"THE PEER REVIEW COMMUNITY IS NOT DIVERSE ENOUGH AND WOULD CERTAINLY BENEFIT FROM TAPPING INTO A WIDER SPECTRUM OF EXPERTISE"
TAMI POTTEN, EDITORIAL DEVELOPMENT MANAGER, BRITISH INSTITUTE OF RADIOLOGY

GENDER DISTRIBUTION

Using an algorithmic approach, which predicts gender based on name and region, the following gender distribution has been estimated for researchers on Publons.

While much more data are needed to draw firm conclusions, it seems women are underrepresented in peer review, or possibly in research as a whole.

FIG.19 — GENDER DISTRIBUTION FOR RESEARCHERS ON PUBLONS
Data source: Publons does not collect gender data. A tool that predicts gender based on names has been used to analyze the first names of researchers on Publons by comparing them against the tool's dataset of popular names for men and women worldwide.
FEMALE 22% • MALE 45% • UNKNOWN 33%

ASIDE
A study by the American Geophysical Union (AGU) found that, while the proportion of female reviewers increased from 2012 to 2016, women of all ages had fewer opportunities to take part in peer review. The AGU's analysis indicated that this bias is a result of (mostly male) authors and editors suggesting women as reviewers less often than they do men.

They found that of the reviewers suggested by male authors, 16% were female. Meanwhile, reviewers suggested by female authors were 22% female. Male editors subsequently invited only 18% female reviewers, while female editors went on to invite 22%.*

This is analogous to the finding in this report that editors are more likely to invite reviewers from their own region.

"SCHOLARLY COMMUNICATIONS HAS AN URGENT NEED TO INCREASE DIVERSITY AND INCLUSION IN EVERY ASPECT OF ITS ENTERPRISE, AND PEER REVIEW IS NO EXCEPTION."
PETER BERKERY, EXECUTIVE DIRECTOR, ASSOCIATION OF UNIVERSITY PRESSES

* nature.com/news/journals-invite-too-few-women-to-referee-1.21337
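The name-matching approach described in FIG.19 can be sketched as follows. The tiny name-to-gender lookup here is invented for illustration; the actual tool draws on a large worldwide dataset of popular names and also considers region.

```python
# Hypothetical sketch of name-based gender estimation. The lookup table
# is invented; a real analysis would use a large, region-aware dataset
# of popular names.

NAME_GENDER = {
    "maria": "female",
    "fatima": "female",
    "john": "male",
    "ivan": "male",
}

def estimate_distribution(first_names):
    """Tally predicted gender for a list of first names; names absent
    from the lookup are counted as 'unknown'."""
    counts = {"female": 0, "male": 0, "unknown": 0}
    for name in first_names:
        counts[NAME_GENDER.get(name.lower(), "unknown")] += 1
    total = len(first_names)
    return {label: n / total for label, n in counts.items()}

print(estimate_distribution(["Maria", "John", "Wei", "Fatima", "Alex"]))
```

Note the sizeable "unknown" bucket such a method inevitably produces, mirroring the 33% unknown share reported in FIG.19.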

BREAKOUT: THE PEER REVIEW NETWORK

To convey the dimensions and interrelationships in the peer review landscape, Figure 20 visualizes a network in which research areas (dark blue and yellow nodes) are connected via common reviewers (dark blue clusters). The research areas are connected to the journals for which reviewers do their reviews. The dark blue clusters between research areas represent reviewers with common expertise.

For example, reviewers who only review Physics will be in one cluster, while those who review both Physics and Materials Science are in another cluster connecting those two nodes.

FIG.20 — THE PEER REVIEW NETWORK: SHARE OF COMMON REVIEWERS BETWEEN RESEARCH AREAS
Data source: Publons
N.B. The research fields for this figure are based on Web of Science categories rather than ESI Research Areas.
[Network diagram with nodes for Materials Science, Physics, Arts & Humanities, Mathematics, Applied Physical Sciences & Engineering, Economics, Business & Finance, Earth & Space Sciences, Computer Science & Engineering, Social Sciences, Psychology, Chemistry, Medical Specialties & Medical Science, Biology & Biomedical Sciences, and Applied Sciences & Technologies.]

BREAKOUT: THE REVIEW DISTRIBUTION INDEX (RDI)

The RDI is based on the Lorenz curve:
• Most famously known for measuring income distribution within a population (Gini curve or Gini index).
• In a population where every researcher did the same number of reviews, the curve would be a straight, diagonal line connecting the bottom left corner to the top right corner.
• The more convex the curve, the more uneven the distribution of reviewers.
• The area between the curve and the straight line gives us a measure of that inequality.
• The greater the area, the more uneven the distribution.

The distribution of peer review work can be represented similarly. Imagine everyone in a population writes an equal share of reviews. Such a scenario (for example, 100 people, each completing one review) would result in an RDI coefficient of 0. If the line is curved, the distribution of reviews performed is, to some degree, skewed across the population. A perfectly unequal distribution (one person in the population completing all reviews) is represented by an RDI coefficient of 1. The lower the coefficient, the more evenly distributed the reviewing workload is; the higher the coefficient, the more skewed the workload.
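As a concrete sketch, an index of this kind can be computed from per-reviewer review counts by measuring the area under the Lorenz curve. The function name and the trapezoid approximation below are our illustration; the report does not publish its exact formula.

```python
def rdi(review_counts):
    """Gini-style Review Distribution Index: 0 = perfectly even,
    approaching 1 = one reviewer performs every review."""
    counts = sorted(review_counts)   # Lorenz curve needs ascending order
    n, total = len(counts), sum(counts)
    if total == 0:
        return 0.0
    cumulative, lorenz_area = 0, 0.0
    for c in counts:
        previous = cumulative
        cumulative += c
        # trapezoid under the Lorenz curve for this reviewer's slice
        lorenz_area += (previous + cumulative) / (2 * total * n)
    # Gini coefficient: twice the gap between the diagonal and the curve
    return 1 - 2 * lorenz_area

# 100 reviewers, one review each -> perfectly even, RDI = 0
print(rdi([1] * 100))            # 0.0 (up to float rounding)
# one reviewer does all 100 reviews -> RDI = 0.99 (tends to 1 as n grows)
print(rdi([0] * 99 + [100]))
```

Feeding a distribution where 10% of reviewers hold 50% of the review records, as in the Publons data, yields a coefficient in the region of the 0.6 values shown in Fig.21.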

FIG.21 — THE REVIEW DISTRIBUTION INDEX
Data source: Publons
RDI: Established 0.599; Emerging 0.642.
Within Publons, ~10% of reviewers are responsible for ~50% of the peer review records.

Publons data shows that 10% of reviewers perform 50% of all reviews.

[Chart: % of reviewers (x-axis, 0–100) against % of reviews (y-axis, 0–100), showing the even-distribution diagonal and the actual reviewer distribution curve.]

HOW EFFICIENT IS THE PEER REVIEW PROCESS?

It is critical for the quality of the literature that the peer review process is able to scale as article submissions grow.

KEY FINDINGS
• On average, an editor in 2017 needed to send out 2.4 peer review invitations to get one peer review report done. This has increased from an average of 1.9 invitations in 2013.
• Reviewers from established regions are less likely to accept and complete review assignments than reviewers in other regions.
• Reviewers most commonly decline review invitations because the article is outside their area of expertise and/or because they are too busy with their own research.
• The median time to complete a review is 16.4 days after accepting the review invitation.
• Reviewers from emerging regions tend to complete reviews faster than reviewers from other regions.
• Review turnaround time has been decreasing in some, but not all, research areas.

75% of journal editors say that "finding reviewers and getting them to accept review invitations" is the hardest part of their job. (2016 Publons' Editor Survey)

This chapter investigates the nuts and bolts of the peer review workflow.

THE PEER REVIEW FUNNEL

The peer review process for a manuscript typically starts with the editor sending review invitations to one or more potential reviewers. Ideally, all reviewers that receive an invitation would accept it and then go on to complete a review. This is not always the case. Potential reviewers drop off at each stage, forming a funnel (see Figure 23).

Reviewer completion rates are decreasing each year, while the total number of review invitations sent is increasing at 9.8% year-on-year (faster than the rate at which articles are accepted for publication, which is growing at 4.9% annually). Is this discrepancy a sign that reviewer fatigue is setting in?

9.8% increase year-on-year in reviewer invitations; 4.9% annual growth in article acceptances.

[Funnel diagram: INVITED → ACCEPTED → COMPLETED, with a median of 16.4 days from acceptance to completion.]

FIG.23 — THE PEER REVIEW FUNNEL
Data source: ScholarOne
Shows median time from agreeing to review through to completion of the review report.

FIG.24 — TOTAL REVIEWERS INVITED, AND PROPORTION OF INVITATIONS AGREED TO AND REVIEWS COMPLETED, FROM 2013–17
Data source: ScholarOne
On the left y-axis are absolute numbers of invitations to review over the last five years (0–50 million). On the right y-axis are reviewer invitation agreement and completion rates (0.40–0.55), with separate lines for reviews completed and the review invitation agreement rate.

FIG.25 — REVIEWER COMPLETION RATES, BY REGION
Data source: ScholarOne
Rate at which reviewers complete review assignments after an invitation has been sent (not agreed to), across 20 regions between 2013–17. The yellow line shows the mean (51.3%).

FIG.26 — REVIEW INVITATION AGREEMENT RATES AND RELATIVE REVIEW WORKLOAD, BY REGION
Data source: ScholarOne and Web of Science

Looking at review completion rates by region, there is a tendency for emerging regions (China, Poland, Brazil, and Taiwan) to complete reviews at higher rates: 56.6% for emerging regions compared with 49.5% for established regions. Comparing a region's relative review workload (the number of reviews a reviewer performs per accepted manuscript) with reviewer invitation agreement rates reveals a negative relationship.

EDITORIAL COMMENT
This implies that low review rates in emerging regions are not because reviewers do not want to review, but because they are not being asked to review.

[Fig.26 scatter: review invitation agreement rate (0–0.8) against relative review workload (2.5–5.5 reviews/submission) for 20 regions: USA, China, Japan, Poland, Italy, Brazil, Taiwan, Spain, Australia, Turkey, Germany, India, South Korea, Malaysia, Sweden, Canada, the Netherlands, France, Iran, and the UK. Established regions' rate: 49.5%; emerging regions' rate: 56.6%.]

WHY DO REVIEWERS DECLINE REVIEW INVITATIONS?

70.6% — "Article was outside my area of expertise"

EDITORIAL COMMENT
Many researchers report "receiving requests to review articles outside their area of expertise". This could be due to increasing specialization in many research areas and the corresponding concentration of relative expertise into small groups of individuals, making it difficult for editors to understand the current subject-matter expertise of prospective reviewers.

42% — "Too busy with my own research"

Researchers appear to be facing a decision between working on their own research and helping, as a reviewer, to develop others' research. Should universities more formally require and recognize peer review contributions to better balance the incentives for peer review with the traditional focus on metrics? Researchers appear to favor the idea (see Figure 8).

FIG.27 — THE MOST COMMON REASONS REVIEWERS DECLINE INVITATIONS
Data source: Publons' 2018 Global Reviewer Survey
Respondents were invited to select up to two options from a list of 12 common reasons for declining a review request.

"WE RECOGNIZE THAT THE INDIVIDUALS WE ASK TO REVIEW MANUSCRIPTS ARE FITTING THIS INTO THEIR BUSY SCHEDULES. WITH THIS IN MIND, WE WANT THE REVIEW TO BE POSITIVE, EFFICIENT AND PROVIDE A GREAT LEARNING EXPERIENCE FOR ALL INVOLVED."
KRISTEN MARCHETTI, ASSOCIATE DIRECTOR OF PEER REVIEW, SAGE

BREAKOUT: HOW MANY REVIEWS ARE PERFORMED EACH YEAR?

Roughly 2.9 million peer reviewed articles were indexed in Web of Science in 2016. How many reviews are going on behind the scenes to support this? No-one has these data, so an estimate for 2016 has been made based on the following figures, which draw on our general knowledge of the industry but no specific data:
• Total publications in 2016: 2.9 million
• Global manuscript acceptance rate: 55%
• Total rejections: 2.5 million
• Reviews per publication: 3 (two reviews in first round and one in second round)
• Reviews per rejection: 2 (two reviews in first round)
• Estimated number of reviews each year: 13.7 million

Median 5 hours spent writing each review · Estimated 2.9 million days spent reviewing in 2016 · Mean 19.2 days to complete each review · 260 million researcher-days spent waiting for reviews in 2016 · Estimated 13.7 million reviews in 2016

N.B. this estimate for total number of reviews per year is used to scale the review counts presented throughout this report, allowing us to provide an estimate for the global market.

However it is estimated, a lot of peer review is happening, placing a huge workload on researchers and creating delays that are potentially slowing down research by hundreds of thousands of years. A lack of transparency in the peer review process makes it difficult to know exactly how much review is done or the time it takes to perform.
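The back-of-envelope arithmetic above can be replayed directly. These are the report's stated inputs, not independent data; the variable names are ours.

```python
publications = 2.9e6   # articles indexed in Web of Science, 2016
rejections = 2.5e6     # report's rounded figure at a 55% acceptance rate
reviews_per_pub = 3    # two first-round reviews plus one second-round
reviews_per_rej = 2    # two first-round reviews

reviews = publications * reviews_per_pub + rejections * reviews_per_rej
hours_reviewing = reviews * 5    # median 5 hours to write each review
days_waiting = reviews * 19.2    # mean 19.2 days per review

print(f"{reviews / 1e6:.1f}m reviews")                      # 13.7m reviews
print(f"{hours_reviewing / 24 / 1e6:.1f}m days reviewing")  # 2.9m days
print(f"{days_waiting / 1e6:.0f}m researcher-days waiting") # 263m; the report rounds to 260m
```

Running the numbers also shows how sensitive the total is to the reviews-per-rejection assumption: each extra review per rejected manuscript adds 2.5 million reviews to the estimate.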

HOW LONG DOES IT TAKE TO WRITE A REVIEW?

It takes reviewers a median of 16.4 days to complete a review after agreeing to the assignment (the mean is 19.1 days).

Looking at how long it typically takes to complete a review, the distribution has a sharp peak at 12 days followed by a long tail (beyond 120 days). This right skew is evident in a histogram of all reviews and drags median and mean review times to the right of the peak.

The review times shown cover the period between agreement to review and delivery of the completed review. This excludes the time it takes an editor to find reviewers, for the reviewer to accept a review invitation, or for an editor to find a new reviewer if the original reviewer rejects the invitation. These delays can quickly add weeks or months to the process. Even in the best case scenario, it is highly likely to be weeks between submission and acceptance for publication.

Due to that skewed distribution, median review times have been used for all further analyses in this report.
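The mean-above-median pattern is easy to reproduce with any right-skewed sample. Here is a sketch with synthetic lognormal "review times"; the parameters are arbitrary and this is not the ScholarOne data.

```python
import random
import statistics

random.seed(0)
# lognormal: sharp peak plus a long right tail, loosely mimicking review times
times = [random.lognormvariate(2.7, 0.5) for _ in range(10_000)]

median_t = statistics.median(times)
mean_t = statistics.mean(times)
assert median_t < mean_t  # the long tail drags the mean to the right
```

For heavily skewed data like this, the median is the more robust summary, which is why it is used for the analyses that follow.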

FIG.28 — TIME TO COMPLETE A REVIEW IN DAYS Data source: ScholarOne Time in days to return a completed review (after acceptance of the invitation) from 2013–17.

Median review time: 16.4 days · Mean review time: 19.1 days


China and Japan return reviews the fastest, while Malaysia, Iran, and Canada are the slowest. The established regions are slower to review (median of 16.4 days) than the emerging regions (median of 15.1 days).

EDITORIAL COMMENT
As publishers push to improve publication times, the next frontier may well be more responsive reviewers sourced from emerging regions.

FIG.29 — MEDIAN TIME TO COMPLETE A REVIEW, BY SELECT REGIONS
Data source: ScholarOne
Median time in days to return a completed review (after agreement to the invitation) from 2013–17, across 20 regions: Malaysia, Iran, Canada, Taiwan, Brazil, India, USA, Australia, UK, Sweden, Netherlands, France, Spain, Italy, Poland, Turkey, Germany, South Korea, China, Japan (established and emerging regions marked). The median review time across all regions (16.2 days) is marked.

FIG.30 — MEDIAN TIME TO COMPLETE A REVIEW, BY ESI RESEARCH AREAS
Data source: ScholarOne
Median time in days to return a completed review (after agreement to the invitation), for all reviews from 2013–17, by Research Area: Economics & Business, Mathematics, Social Sciences (General), Psychiatry/Psychology, Engineering, Geosciences, Computer Science, Environment/Ecology, Space Science, Not Categorized, Plant & Animal Science, Agricultural Sciences, Multidisciplinary, Neuroscience & Behavior, Microbiology, Physics, Molecular Biology & Genetics, Materials Sciences, Biology & Biochemistry, Clinical Medicine, Pharmacology & Toxicology, Chemistry. The median review time across all fields (15.5 days) is marked. (N.B. this dataset excludes reviews from journals that do not have an associated ESI field.)

THE TORTOISE AND THE HARE: REVIEW TIME BY RESEARCH AREA

There are two ways to shorten the review cycle. The first is to make the process of finding reviewers and confirming they are available to review as fast as possible. The second is to get reviewers to review faster.

The downward trend over the five years (2013–17) holds when considering the aggregate dataset. As Figure 32 shows, review turnaround times have been decreasing concurrent with reviewers becoming less likely to agree to a review invitation.

For some research areas (Computer Science, Multidisciplinary, Geosciences, Engineering, and Agricultural Sciences in particular) it appears publishers have found a way to speed up median review times by more than two days over the last five years.

FIG.31A (BELOW) & 31B (OVER PAGE) — MEDIAN TIME IN DAYS TO COMPLETE A REVIEW, BY SELECT RESEARCH AREAS FROM 2013–17
Data source: ScholarOne
Fig.31A shows the five largest decreases in review time: Engineering, Geosciences, Computer Science, Agricultural Sciences, and Multidisciplinary. Fig.31B shows the five smallest decreases: Mathematics (where review time has increased), Social Sciences (General), Not Categorized, Neuroscience & Behavior, and Molecular Biology & Genetics. The straight lines show the linear fit (y-axis: days, 10–35; x-axis: 2013–2017).

EDITORIAL COMMENT
Anecdotally, faster review times may be because journals are reducing the deadlines they set reviewers. This has been shown to reduce the time it takes to complete reviews without having a substantial impact on reviewer completion rates. Research seems to back this up.*

* aeaweb.org/articles?id=10.1257/jep.28.3.169

YEAR   WEB OF SCIENCE   MEDIAN REVIEW TURNAROUND   AVERAGE NO. OF REVIEW
       PUBLICATIONS     TIME (DAYS)                INVITATIONS REQUIRED
2013   2,658,406        16.8                       1.94
2014   2,762,875        16.6                       2.01
2015   2,836,048        16.3                       2.10
2016   2,949,895        16.3                       2.24
2017   2,860,334        16.0                       2.38

FIG.32 — COMPARISON OF WEB OF SCIENCE PUBLICATIONS, MEDIAN TIME IN DAYS TO COMPLETE A REVIEW, AND AVERAGE NUMBER OF REVIEW INVITATIONS SENT IN ORDER TO GET ONE REVIEW DONE
Data source: Web of Science and ScholarOne
N.B. Not all publications for 2017 were indexed at the time the data were pulled.

WHAT DO WE KNOW ABOUT PEER REVIEW QUALITY?

Not all peer reviews are created equal.

Some review reports are excellent, providing objective and constructive feedback to authors, and clear recommendations to editors about whether or not a paper is fit for publication. Other reviews offer little useful information, leaving authors confused, editors unsure about the soundness of a paper, and ultimately delaying the sharing of knowledge.

How do we tell them apart? Measuring the quality of peer review is difficult, and arguably subjective. For example, does the "quality" of a review mean its length, clarity, helpfulness, thoroughness, or perhaps even how long it took to complete?

KEY FINDINGS
• Review length varies drastically by region; established regions write significantly longer reviews than emerging regions.
• Journals with higher Journal Impact Factors tend to receive longer reviews.
• Emerging regions, on average, review for journals with lower Journal Impact Factors and write shorter reviews.
• Editors of journals with higher Journal Impact Factors may solicit more reviews from established regions (who tend to write longer reviews).
• Journals with higher Journal Impact Factors tend to have faster review turnaround times.

The current state of the art is akin to the understanding of the literature before Eugene Garfield pioneered the field of bibliometrics. This chapter investigates review length as an (admittedly weak) indicator of quality, and how the journal that commissions a review might predict the length of that review.

DOES SIZE MATTER?

According to Publons data, the average length of a review report is 477 words. There is a clear difference in average review length between established (528 words) and emerging (250 words) regions.

EDITORIAL COMMENT
If editors value long reviews, or if non-native English speakers' reviews are harder to read, this may explain some of the bias toward reviewer selection observed from editors in Chapter 1.

Why are there large differences between some regions? One hypothesis is that reviewers in emerging regions are less comfortable writing in English.

528: average review wordcount from established regions · 250: average review wordcount from emerging regions

[Bar chart: Publons review length (words, 0–800) for the UK, USA, Iran, Italy, India, Spain, China, Japan, Brazil, France, Sweden, Turkey, Canada, Germany, Malaysia, Australia, and the Netherlands.]

FIG.33 — MEAN REVIEW LENGTH, BY REGION
Data source: Publons
Fig.33A (above) shows the median review length (in words) for different regions, for all reviews performed by reviewers affiliated with that region in Publons. Fig.33B (right) shows the same data for select regions: UK 584, Australia 548, USA 522, Italy 424, China 255.

JOURNAL IMPACT FACTOR

Interestingly, we observe a shift to longer reviews as the Journal Impact Factor (JIF) increases. This could mean that reviewers put more effort in when reviewing for journals with higher JIFs, or that editors at these journals focus on recruiting top reviewers who write longer reviews. If either of these were the case, JIF would be a predictor of review quality. There may, however, be confounding factors. It is possible that editors of journals with higher JIFs solicit more reviews from regions that tend to write longer reviews, regardless of quality, or that higher JIFs are associated with subjects that tend to have longer reviews.

FIG.34 — REVIEW LENGTH BY JIF PERCENTILE
Data source: Publons
Review length (in words) is represented by box and whisker graphs. The box represents the upper and lower quartiles with the median shown. The whisker portions encompass 99% of the data. Each ventile contains approximately 4,500 reviews. The dashed line shows median review length (477 words).
[Chart: review length (words, 0–2000) by Journal Impact Factor ventile and range, from the 1st (0–0.6) through the 20th (6.4–8.5).]
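The ventiles in Fig.34 are simply 20 equal-count buckets of journals ordered by JIF. A minimal sketch of that bucketing (our own helper on synthetic JIF values, not the report's code):

```python
import bisect

def jif_ventile(sorted_jifs, jif):
    """1-based ventile (1..20) of `jif` within an ascending list of JIFs."""
    rank = bisect.bisect_right(sorted_jifs, jif)   # journals with JIF <= jif
    v = 1 + (rank - 1) * 20 // len(sorted_jifs)
    return min(max(v, 1), 20)

# synthetic population of 100 journals with JIFs 0.1 .. 10.0
population = sorted(i / 10 for i in range(1, 101))
print(jif_ventile(population, 0.1))   # 1  (lowest ventile)
print(jif_ventile(population, 10.0))  # 20 (highest ventile)
```

Equal-count bins like these keep roughly the same number of reviews per box in the figure, so the whisker widths are comparable across ventiles.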

Figure 36 shows that India, Iran, Turkey, and Egypt have both the shortest review length and lowest median JIF. Japan, China, and Brazil have median JIFs in line with the lower end of the established regions. Established regions tend to have longer median review lengths and higher median JIFs. This suggests that reviews for journals with higher JIFs are predominantly done by reviewers from established regions.

[Scatter plot: median JIF (1.5–3.0) against median review length (200–550 words) for Iran, Egypt, China, India, Turkey, Japan, Greece, Brazil, Italy, Portugal, Norway, Australia, Spain, Finland, Canada, the UK, USA, Switzerland, Germany, France, Denmark, Belgium, Sweden, New Zealand, and the Netherlands.]

EDITORIAL COMMENT
Is this a sticking point for editors? Could the average length of a review from regions such as China and India reduce the likelihood they are asked to review, even though they are, on average, more likely to accept a review invitation than other regions?

FIG.36 — MEDIAN REVIEW WORDCOUNT, BY JIF, ACROSS REGIONS
Data source: Publons
The median JIF and review length (in words) for different regions, for all reviews performed by reviewers affiliated with that region in Publons. Established and emerging regions are marked.

Turning to the question of motivation, we see that as a journal's JIF increases, the time it takes for a review to be completed decreases. Does the pull of reviewing for a prestigious journal boost reviewer motivation, and potentially also review length?

It seems clear that the JIF is correlated with review length and the time it takes to review, although the causation is not understood. There are also hints these variables could be related to review quality, although it is impossible to assert this with confidence given the data available at the time of this report.

FIG.37 — MEDIAN DAYS TO COMPLETE A REVIEW, BY JIF PERCENTILE
Data source: Publons
Median number of days for a review assignment to be completed (after invitation agreed to), by JIF, from 2013–17. The box represents the upper and lower quartiles with the median shown. The whisker portions encompass 99% of the data. Data include many outliers and wide distributions, but also an observable trend: the higher a journal's JIF, the faster the review time.
[Chart: review turnaround time (days, log scale) by JIF percentile and range, from the 0th (0–0.8) through the 90th–100th (4.5–72.406).]

"ENSURING THE INTEGRITY OF THE PEER REVIEW PROCESS IS VITAL AND AN INCREASING CHALLENGE THAT WE CANNOT FAIL TO MEET."
TAMI POTTEN, EDITORIAL DEVELOPMENT MANAGER, BRITISH INSTITUTE OF RADIOLOGY

RESEARCHER PERCEPTIONS OF QUALITY

Researchers were asked how satisfied they are with the review process, providing some insight into researcher perceptions of review quality. Researchers seem relatively happy with the quality and objectivity (if not speed) of the review reports they receive.

"THE EMPHASIS BY OPEN ACCESS JOURNALS ON QUICK TURN-AROUND TIME PUTS PRESSURE ON OTHER JOURNALS TO KEEP UP, WHILE STILL UPHOLDING THE QUALITY OF THEIR PEER REVIEW PROCESS."
KRISTEN MARCHETTI, ASSOCIATE DIRECTOR OF PEER REVIEW, SAGE

FIG.38 — RESEARCHER SATISFACTION WITH PARTS OF THE PEER REVIEW PROCESS
Data source: Publons' 2018 Global Reviewer Survey
We asked "Considering your typical peer review experiences, how satisfied are you with each of the following?"

                                             EXTREMELY      DISSATISFIED   NEITHER SATISFIED   SATISFIED   EXTREMELY
                                             DISSATISFIED                  OR DISSATISFIED                 SATISFIED
TIME TAKEN BY PEER REVIEWERS                 4.7%           24.4%          28.1%               39.8%       3.1%
OVERALL OBJECTIVITY OF PEER REVIEW REPORTS   2.0%           15.9%          25.6%               52.3%       4.2%
QUALITY OF REVIEW REPORTS                    1.8%           15.2%          27.5%               50.8%       4.6%

DIRECT MEASURES OF REVIEW QUALITY

The scholarly community lacks a robust measure of review quality. Current measures are indirect proxies like review length or the prestige of the journal that commissioned the review. In that sense, the current understanding of peer review quality resembles the understanding of research articles before the development of bibliometrics.

The ability to measure the quality of review will become increasingly important as offerings like the ability for reviewers to get recognition for their contributions reach mainstream adoption. It is important that any measure is built on community consensus and is used across the industry for it to be useful in the way that citation metrics have proven to be for bibliometric analysis.

Are there other ways to evaluate review quality? And are there ways to work around the flaws of the systems outlined here? Further research in this area is encouraged. Is it possible to develop a technique for measuring the quality of peer review at scale? Three possible approaches include:

• Sentiment analysis: Use natural language processing to extract positive, neutral, or negative comments from reviews. Sentiment might be very interesting, but is ultimately unlikely to become a direct measure of review quality because good and bad reviews can each be critical and/or approving of manuscripts.
• Editor evaluations: Editors at many journals already rate reviewers on a simple Likert-style scale. These ratings are typically used internally, if at all. A move to increased transparency of review could involve more wide-scale usage of these evaluations. This has the potential to be extremely valuable to reviewers, but will require extra work for the editor.
• Review quality evaluations: The standard instrument for academic evaluation of review quality was developed decades ago. While it requires significant effort to evaluate a review using this protocol, it has been shown to be relatively consistent and reliable. Is there some way to automate this process?

"THE PEER REVIEW SYSTEM IS FOUNDED ON TRUST — TRUST THAT EVERYONE INVOLVED IN THE PROCESS ACTS WITH INTEGRITY."
ELIZABETH MOYLAN, PUBLISHER, RESEARCH INTEGRITY AND PUBLISHING ETHICS, WILEY
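The sentiment-analysis idea in the first bullet can be illustrated with a toy lexicon tagger. Everything here (the word lists, the sample sentences, and the `tag_sentence` helper) is hypothetical; a real system would use a trained NLP model rather than a hand-built lexicon.

```python
# Hypothetical mini-lexicons; a production system would learn these.
POSITIVE = {"clear", "rigorous", "novel", "convincing", "thorough"}
NEGATIVE = {"unclear", "flawed", "unconvincing", "incomplete", "weak"}

def tag_sentence(sentence: str) -> str:
    """Label a review sentence positive/negative/neutral by lexicon counts."""
    words = {w.strip(".,;:!?").lower() for w in sentence.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

review = [
    "The methodology is rigorous and the results are convincing.",
    "However, the discussion section is unclear and incomplete.",
]
labels = [tag_sentence(s) for s in review]
# labels == ["positive", "negative"]
```

Note the limitation the report flags: a thorough, high-quality review of a weak paper is full of negative sentiment, so sentiment alone cannot stand in for review quality.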

WHAT DOES THE FUTURE HOLD?

This report shows emergent trends in scholarly peer review: Article submissions and publication output continue to rise, putting greater strain on the peer review system. Meanwhile, reviewer invitation agreement rates are in decline. What does this mean for the future of peer review? This chapter explores what the peer review landscape might look like in years to come, and reviewer perceptions about what could make a difference.

KEY FINDINGS
• The task of finding reviewers who will accept a review invitation is projected to get harder.
• The time a reviewer takes to complete a review assignment is decreasing, but can this continue?
• Chinese review rates are increasing rapidly but it will be many years before they reach parity with established regions.
• 40% of survey respondents entered the reviewing pool only after publishing their first paper.
• Reviewers believe more training, greater recognition, and better incentives will have a positive impact on the efficacy of the peer review process.
• Younger reviewers are more likely to review for journals with open peer review models than older reviewers.

REVIEWER FATIGUE ON THE RISE In 2013, an editor had to invite an average of 1.9 reviewers to get one review done. By the end of 2017, this had increased to an average of 2.4 invitations for every completed review. If this trend continues, by 2025, an average of 3.6 invitations will be required to complete a single review.
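One way to reproduce a 2025 figure like this is compound-growth extrapolation from the ScholarOne table values (1.94 invitations per completed review in 2013, 2.38 in 2017; see Fig.32). The helper below is our sketch of that calculation, not the report's actual model.

```python
def project(v_start, v_end, y_start, y_end, y_target):
    """Extrapolate by the compound annual growth rate between two observations."""
    cagr = (v_end / v_start) ** (1 / (y_end - y_start))
    return v_end * cagr ** (y_target - y_end)

invitations_2025 = project(1.94, 2.38, 2013, 2017, 2025)
print(round(invitations_2025, 1))  # 3.6 invitations per completed review
```

The same two-point extrapolation is, of course, only as good as the assumption that the 2013–17 trend continues unchanged.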

Anecdotally, it's becoming increasingly difficult to find researchers willing to review.* Is this a sign that reviewer fatigue is setting in?

* twitter.com/MicrobiomDigest/status/958325868115451905

FIG.39 — ACTUAL AND PROJECTED AVERAGE NUMBER OF REVIEW INVITATIONS REQUIRED TO GENERATE ONE COMPLETED REVIEW ASSIGNMENT
Data source: ScholarOne
2013 — 1.9 invitations

2017 — 2.4 invitations

2025 — 3.6 invitations

REVIEWERS ARE GETTING FASTER

In 2013, a reviewer took 16.8 days to complete a review. By the end of 2017, this had decreased to 16.0 days. This change is more pronounced in specific research areas (see Figure 30), but if the trend continues, review turnaround times are projected to average 14.7 days by 2025.

16.8: median time (in days) to complete a review in 2013
14.7: projected median time (in days) to complete a review in 2025

THE RISE OF CHINA

China surpassed the UK in review output in 2015, and continues to grow rapidly, but still lags behind most established regions in terms of reviews per submission. In 2017, China produced 0.8 reviews per submission compared with an average of 2.3 reviews per submission for all established regions.

It is questionable whether current growth rates for Chinese publication and review output will be sustained over the long term, but assuming that existing trends continue, it will take until 2031 until China is reviewing, on average, as much per submission as the group of established regions. China is, however, projected to reach reviewing parity with the United States, in absolute terms, in 2024.

0.8: China's reviews per submission, compared to 2.3: average reviews per submission across established regions.


SUPPLYING THE REVIEWER POOL

Demand for peer review is increasing concurrent with reviewers becoming less responsive to review invitations. It is also clear that regions are reviewing at different rates relative to their research output (see Figure 11). This may be due to incentive structures that vary by region, or bias on behalf of those managing the system.

Could formal peer review training help introduce more researchers to the reviewer pool, sooner?
• 88% of survey respondents believe that peer review training is either important (46.3%) or extremely important (41.6%) for ensuring high quality peer review.
• 80% of respondents believe that more peer review training will have either a positive (58.7%) or extremely positive (20.9%) impact on the overall efficacy of the peer review process.

Whatever the reason, it pays to investigate the current mechanisms for supplying the reviewer pool and ask: are they sufficient to keep pace with the growing demand for peer review?

FIG.40 — HOW RESEARCHERS RECEIVED THEIR FIRST REVIEW REQUEST
Data source: Publons' 2018 Global Reviewer Survey

Figure 40 shows that 41% of survey respondents received their first review assignment only after an editor discovered them as a corresponding author. Meanwhile, 39.4% never received any peer review training.

40.6%: after an editor discovers them as a corresponding author
16.5%: approached by an unknown editor
16.1%: asked to undertake a review with or on behalf of a supervisor/PI

"MANY PUBLISHERS PROVIDE TRAINING AND SUPPORT FOR PEER REVIEWERS, BUT INSTITUTIONS NEED TO PLAY AN ACTIVE ROLE TOO. HAND-IN-HAND WITH THAT TRAINING COULD BE CERTIFICATION AND SHOULD BE RECOGNITION FOR PEER REVIEWERS TOO."
ELIZABETH MOYLAN, PUBLISHER, RESEARCH INTEGRITY AND PUBLISHING ETHICS, WILEY

A SHIFT TO GREATER TRANSPARENCY?

Do different journal review policies affect reviewers' willingness to accept peer review invitations? Does this differ by age group?

Overall, reviewers seem more likely to accept review invitations from journals using single-, double-, and triple-blind peer review models than any of the open peer review variants listed in Publons' 2018 Global Reviewer Survey.

Probing further, there are differences in attitudes toward open peer review models across age groups: 40% of respondents under 26 are likely or highly likely to review for journals that make author and reviewer identities, and review reports, public, compared with only 22.3% of respondents aged between 56–65.

Could attitudes toward open peer review be shifting as a new crop of researchers enters the fray? Or do researchers become more wary of open peer review models as they grow older? And how will this change as open access continues to make gains?

FIG.41 — THE EFFECT OF JOURNAL REVIEW POLICIES ON PEER REVIEW INVITATION ACCEPTANCE RATES
Data source: Publons' 2018 Global Reviewer Survey

                               VERY UNLIKELY   UNLIKELY   MAKES NO DIFFERENCE   LIKELY   VERY LIKELY
Single-blind                   2.9%            4.7%       19.3%                 31.5%    41.7%
Double-blind                   1.7%            2.9%       17.9%                 29.6%    47.9%
Triple-blind                   4.6%            9.2%       24.8%                 26.1%    35.4%
Open identities                15.1%           27.3%      26.9%                 20.6%    10.1%
Open reports                   13.4%           24.5%      29.5%                 22.4%    10.2%
Open identities and reports    18.6%           30.0%      26.3%                 16.6%    8.6%
Open final-version commenting  13.8%           23.0%      32.2%                 21.3%    9.7%

40% of respondents under 26 are likely or highly likely to review for journals that make author and reviewer identities, and review reports, public. 22.3% of respondents aged between 56–65 answered in the same way.

CONSENSUS: INCENTIVES MATTER

Projecting the future state of peer review is fraught with uncertainty. Researchers, however, have a clear idea of what will make a difference: greater recognition and more formalized incentives for peer review. 83% of survey respondents stated that greater recognition and career incentives for peer review would have a positive (54.7%) or extremely positive (28.6%) impact on the overall efficacy of the peer review process.

FIG. 35 — RESEARCHER ATTITUDES TO MORE INCENTIVES FOR PEER REVIEW. Data source: Publons’ 2018 Global Reviewer Survey

CLOSING REMARKS

The 2018 Global State of Peer Review report marks the first time that we, as an industry, have had the means to draw together cross-publisher insights on who is doing peer review, how efficient the process is, what we know of the quality of peer review, and the big peer review trends to be worried or excited about.

Never before have we been able to link such broad data on reviewer opinions, reviewer behavior, manuscript submissions, research fields, researcher demographics, and publication outputs to pry open the black box of peer review.

This combination of data has shown:

• Established regions disproportionately dominate the peer review process, in large part due to compounding geographical biases in the appointment of editors and their reliance on local reviewers.
• Notwithstanding these things, emerging regions in general, and China in particular, are rapidly increasing their peer review outputs — but are years and years away from bridging the gap.
• What little information there is on gender in peer review shows gender inequality is no less of a problem than geographical inequality.
• The number of manuscript submissions and the number of review requests required to complete a review are increasing rapidly, meaning the task of finding reviewers who will accept a review invitation is likely to get even harder.
• Emerging regions are more likely to accept review invitations and complete reviews faster, but their reviews are much shorter in length.
• The vast majority of researchers think more training and recognition will improve the efficacy of peer review, and that universities need to be doing more to acknowledge peer review contributions.

An overarching finding of this report is that now that peer review is easily measurable and verifiable, there is no longer any excuse to exclude it from analyses of research and researchers. Rather than being hidden away in journal silos and relegated to a small, unverifiable mention at the end of a long curriculum vitae, peer review can — and should — be used to gain a more complete picture of research and the standing of researchers. Understanding and addressing these issues in peer review may well be the key to maintaining public faith in — and justifying funding of — academic research.

This report only scratches the surface. We can learn so much about research from peer review. As review transparency increases, so too will the opportunities to analyze the quality of peer reviews, their prescient evaluations of yet-to-be-published research trends, and any interplay between a researcher’s impact on their field as a peer reviewer, an editor, and an author.

We at Publons look forward to the day that writing a great review to help a peer improve their work is recognized as a greater contribution to the field than collecting the scraps from the lab floor to submit to any journal that promises publication.

DANIEL JOHNSTON, PUBLONS COFOUNDER

In the near term, some clear, immediate action points arise from the findings in this report. As an industry, we need to get better at training researchers to be great peer reviewers, both to ensure that reviewers are up to their important task, and to expand the reviewer pool by giving journal editors the confidence to approach qualified reviewers outside their personal networks.

We also need to expand our concept of “reviewer recognition” beyond simply listing the number of completed reviews on a profile. The high proportion of researchers that reject review invitations they are perfectly qualified to complete, along with the near consensus that research institutions should more explicitly recognize peer review contributions, shows there is still a long way to go to remedy the incentives imbalance. Funders, institutions, publishers, researchers, and the organizations that connect and serve them all have a part to play in supporting researchers to do what is best for research.

APPENDICES

PARTIES INVOLVED IN PREPARING THE REPORT

This report has been prepared by Publons in collaboration with ScholarOne and Web of Science. Publons, ScholarOne, and Web of Science are part of Clarivate Analytics, the global leader in providing trusted insights and analytics that accelerate the pace of innovation.

GLOSSARY Terminology, abbreviations, and definitions used.

COMPLETED REVIEWS: Reviews that have been completed and returned to the journal.

OPEN IDENTITIES: Reviewers and authors are aware of each other’s identities.

DESK REJECT: Where a journal editor rejects a submitted manuscript before commissioning any peer reviews, usually because they deem the manuscript outside the journal’s scope or obviously not meeting the journal’s quality standards.

DOUBLE-BLIND PEER REVIEW: Authors and reviewers are unknown to each other.

EMERGING REGIONS: China, Brazil, Turkey, India, Iran, South Korea, Taiwan, Malaysia, and Poland. These regions are typically referred to as emerging economies.

ESTABLISHED REGIONS: USA, Germany, Italy, Spain, France, Netherlands, Sweden, Canada, Australia, UK, and Japan. These regions are not typically referred to as emerging economies.

ESSENTIAL SCIENCE INDICATORS RESEARCH AREAS: 22 broad specialty fields (research areas), each associated with a discrete set of journals indexed by Clarivate Analytics. These are tracked in Essential Science Indicators (ESI), a component of Web of Science.

INCITES: Bibliometric analytics package based on Web of Science Core Collection data.

OPEN FINAL VERSION COMMENTING: Review of, or commenting on, the final “version of record” publication.

OPEN IDENTITIES AND OPEN REPORTS: Reviewers and authors are aware of each other’s identities and review reports are public.

OPEN REPORTS: Review reports are published alongside the relevant article.

PEER REVIEW: Also known as refereeing, this is the process of subjecting an author’s scholarly work, research, or ideas to the scrutiny of others who are experts in the same field.

PREDATORY PUBLISHING: Predatory open-access publishing is an exploitative business model that involves charging publication fees to authors without providing the editorial and publishing services associated with legitimate journals (open access or not).

PUBLISHED ARTICLE OUTPUT / ARTICLE OUTPUT: The total number of scholarly articles accepted for publication, based on articles indexed in Web of Science.

JOURNAL IMPACT FACTOR: A measure of the frequency with which the “average article” in a journal has been cited in a particular year or period. The annual JCR Impact Factor is a ratio between citations and recent citable items published. Thus, the Impact Factor of a journal is calculated by dividing the number of current-year citations by the source items published in that journal during the previous two years.

MANUSCRIPT / SUBMISSION: A manuscript that has been submitted to a journal to consider for publication, but may not yet be reviewed and has not yet been accepted or rejected for publication.

PUBLONS’ 2018 GLOBAL REVIEWER SURVEY: A global online survey of 11,838 researchers conducted by Publons between May–July 2018. The survey aims to gauge attitudes and perceptions toward peer review of scholarly journals. Full survey report available at: publons.com/community/gspr

REVIEWER INVITATION AGREEMENT RATE: The rate at which review invitations are accepted, but not necessarily completed, by peer reviewers.

RELATIVE REVIEW RATES: The number of reviews performed per manuscript submission.

REVIEW DISTRIBUTION INDEX: A measure of the distribution of review workload within a population.

REVIEWER COMPLETION RATE: The rate at which review invitations sent out by editors result in a completed peer review report being submitted.

WEB OF SCIENCE: An online, subscription-based indexing service with a comprehensive citation search. Originally produced by the Institute for Scientific Information (ISI), it is now maintained by Clarivate Analytics. It gives access to multiple databases that reference cross-disciplinary research, which allows for in-depth exploration of specialized sub-fields within an academic or scientific discipline.

SCHOLARONE MANUSCRIPTS: A cross-publisher peer review workflow management tool owned by Clarivate Analytics.

SINGLE-BLIND PEER REVIEW: Reviewers are unknown to the authors.

TRIPLE-BLIND PEER REVIEW: Authors, reviewers, and editors are all unknown to each other.

WEB OF SCIENCE CORE COLLECTION: A curated collection of over 20,000 peer reviewed, high-quality scholarly journals published worldwide (including open access journals) in over 250 Science, Social Sciences, and Humanities disciplines. Conference proceedings and book data are also available. A full list of the indices contained in the Web of Science Core Collection is available at: images.webofknowledge.com/images/help/WOK/hp_database.html
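As a worked illustration of the Journal Impact Factor definition in the glossary above (made-up numbers for a hypothetical journal, not real data):

```python
# Hypothetical journal's 2018 Impact Factor: citations received in 2018
# to items it published in 2016 and 2017, divided by the citable items
# it published in those two years. All figures are invented.
citations_2018_to_2016_and_2017 = 250
citable_items_2016_and_2017 = 100

impact_factor_2018 = citations_2018_to_2016_and_2017 / citable_items_2016_and_2017
print(impact_factor_2018)  # 2.5
```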

METHODOLOGY

Data, statistics, research areas, and assumptions used in this report.

The data that provides the basis for this report was extracted between April and August 2018, from a range of Data sources.

These Data sources, and their limitations, are:

• ScholarOne Manuscripts
A cross-publisher peer review workflow management tool used by thousands of journals globally.
Anonymised and aggregated data were extracted for the reporting period 2013–17. This resulted in a dataset of over 10 million original submissions and 15 million peer reviews.
Reviews that took more than six standard deviations longer than the mean to complete were filtered out (less than 0.03% of reviews). Total review and submission counts were scaled so that the absolute numbers used throughout the report approximately match our estimates for global peer review outputs (page 31).
The dataset included reviews from all rounds of review, regardless of whether the manuscript was eventually accepted or rejected.
Approximately 55% of journals using ScholarOne are indexed in Science Citation Index Extended and have a Journal Impact Factor, and publication types are biased towards Science, Technology, Engineering, and Mathematics (STEM) areas and larger volume publications. The median publication size is 144 submissions/year.
Data in ScholarOne does not cover every major publisher, but comprises a representational grouping of larger commercial publishers, associations, and societies. The size of the ScholarOne dataset means it is reasonable to assume it approximates the wider industry, although biases may appear if the distribution of, e.g., Journal Impact Factors, were different in ScholarOne relative to the rest of the market, or if use of the tool resulted in efficiencies not experienced by journals that don’t use a peer review management tool. We have no reason to expect this, although note that ScholarOne is a commercial product and could select against relatively lower budget journals.
Note that manuscript submission counts include manuscripts that are desk-rejected, i.e., never sent for peer review.

• Publons
A cross-publisher peer review platform containing over 2.2 million reviews and 400k+ peer reviewers globally at the time data were pulled. Publons data used in this report come from an aggregated and anonymised sample containing over 15,000 scholarly journals.
Publons data are limited to reviews, reviewers, and editors that self-elect to use the platform to track their reviewing and editorial work. Regional information is extracted from a user-input country field.

• Web of Science Core Collection
A curated collection of over 20,000 peer reviewed, high-quality scholarly journals published worldwide in over 250 Science, Social Sciences, and Humanities disciplines. Conference proceedings and book data are also available. A full list of the indices contained in the Web of Science Core Collection is available at images.webofknowledge.com/images/help/WOK/hp_database.html
The dataset used for this report includes the Emerging Sources Citation Index (ESCI).
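The six-standard-deviation outlier filter described above can be sketched as follows (a minimal illustration with hypothetical turnaround times, not the report's actual pipeline):

```python
# Minimal sketch (hypothetical data) of the outlier filter described
# above: drop reviews that took more than six standard deviations
# longer than the mean to complete.
import statistics

# 1,000 ordinary turnaround times (in days) plus one extreme outlier.
turnaround_days = [10] * 500 + [20] * 500 + [2000]

mean = statistics.mean(turnaround_days)
sd = statistics.pstdev(turnaround_days)
cutoff = mean + 6 * sd

kept = [t for t in turnaround_days if t <= cutoff]
print(len(turnaround_days) - len(kept))  # 1 review filtered out
```

Note that with a single extreme value in a large sample the cutoff stays low enough to catch it; in tiny samples one outlier can inflate the standard deviation enough to survive a six-sigma rule.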

• Publons’ 2018 Global Reviewer Survey
A global online survey of over 11,000 researchers conducted by Publons between May–July 2018. The survey aims to gauge attitudes and perceptions toward peer review of scholarly journals. Full survey report available at: publons.com/community/gspr

The tables and figures in this report are based on one or more of these underlying Data sources. The Data source used is referenced next to each table, figure, and statistic.

Research Area Classification Schemas
22 broad specialty fields (research areas) are used generally throughout the report, each associated with a discrete set of journals indexed by Clarivate Analytics. These are tracked in Essential Science Indicators (ESI), a component of Web of Science.
ESI Research Areas were assigned to reviews, reviewers, editors, and submissions based on the journal that commissioned the review. Primary research areas were extracted by associating Web of Science categories with ScholarOne titles and then mapping to an ESI Research Area. For journals associated with multiple categories, the mapping was to the category in which the title had the highest JIF percentile.
Web of Science categories are assigned by editors during indexing into the Science Citation Index and Social Science Citation Index, which means that this filter primarily selects for journals with a Journal Impact Factor. For this reason, only about 60% of ScholarOne reviews have an ESI research area. This has the potential to add bias to studies based on research area since it presumably selects for relatively strong journals.
We also note that research activity is not evenly distributed across the ESI Research Areas. The classification schema is skewed to STEM fields — all of the Humanities and Social Sciences are grouped into three research fields. We hope to commission further detailed studies into the Humanities in future years.
Note that ESI re-categorizes papers in “multidisciplinary” journals such as Science, Nature, PNAS, etc., by scrutinizing each paper’s cited and citing references, and assigning one of the 21 other categories based on algorithmic analysis of the citations. The actual ESI designation “Multidisciplinary” is used for papers that, for lack of citation density or other reason, cannot be assigned a specific field. This makes it hard to draw any conclusions with respect to Multidisciplinary research in this report.
In some figures (see details in next section), alternative research area classification schemas are used due to limitations in mapping the underlying data to the ESI Research Area schema.

Regions
The top 20 regions, when ordered by combined submissions and reviews in ScholarOne, were selected. These 20 regions were then categorized into groups of established and emerging regions based on the typical classification of emerging regions. We note that the established and emerging groups risk being dominated by the United States and China, respectively, due to the relative scale of their research output.
For Publons’ data, a reviewer’s or editor’s affiliated region is determined by where the user has reported their current affiliated place of work to be located.
For ScholarOne data, a reviewer’s region is derived using the self-reported location of the reviewer. About 87% of researchers who completed a review have an assigned region.
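The multi-category mapping rule described above (for a journal in several Web of Science categories, pick the category in which it has its highest JIF percentile) can be sketched as follows, with hypothetical journal data:

```python
# Hedged sketch of the mapping rule described above. The categories and
# percentile figures here are invented for illustration only.
journal_categories = {
    # Web of Science category: this journal's JIF percentile in that category
    "Oncology": 62.5,
    "Cell Biology": 81.0,
    "Biochemistry & Molecular Biology": 74.3,
}

# The journal's primary category is the one with the highest JIF percentile.
primary_category = max(journal_categories, key=journal_categories.get)
print(primary_category)  # Cell Biology
```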

DETAILS, ASSUMPTIONS AND LIMITATIONS FOR FIGURES IN THIS REPORT A summary of key considerations, limitations and caveats for tables and figures used in this report.

GENERAL:
All data were aggregated and anonymised by contributing organisations prior to analysis.
All ScholarOne data have been scaled to match our estimates for the total number of reviews occurring globally (see Chapter 2).
Three significant figures are used in general. Any more detail than this is unlikely to be statistically significant.

FIG. 1: A simplified approximation of the publishing workflow.

FIG. 11B: Because articles can have multiple authors, and authors can come from multiple regions, Web of Science articles can be double counted, which means that percentages per region are over-inflated (by roughly 30%) relative to reviews, as reviews have only one author.

FIGS. 13 & 14: Figs 13 and 14 use Publons data. The sample is relatively small (21,963 reviews in this case) and may suffer from bias as it depends on researchers who have self-selected to use Publons.

FIG. 2: A simplified illustration of common peer review models used in scholarly publishing. We acknowledge this is not exhaustive and other nomenclature can be applied to the models referenced.

FIG. 3: A sample of 15,000 journals with more than 10 unique review records on Publons is used. 63% (9,530) have a publisher-elected review display policy, and 37% (5,520) are unspecified.

FIG. 4: Publications indexed in the Web of Science Core Collection (including the Emerging Sources Citation Index). At the time the data were extracted, not all content for 2017 had been received and indexed. Part of the rapid increase in the early-to-mid 2000s can be attributed to an expansion of the Web of Science indices.

FIG. 10: Manuscript submissions include manuscripts that are desk-rejected, i.e., never sent for peer review.

FIG. 19: None of the Data sources used in this report collect gender data. Fig. 19 relies on a tool that compares the first names of researchers on Publons against the tool’s dataset of popular names for men and women worldwide. This provides about 40% coverage, which is enough information for relative predictions, but its limitations should be taken into consideration when reading the results. This is especially true given that names from certain regions may be difficult to parse. Genders were inferred from researcher names and regions using the python module “gender_guesser 0.4.0”. The library accepts all unicode characters but the database primarily contains Latin names.

FIG. 20: Web of Science research field classifications are used as opposed to the ESI Research Areas used elsewhere in the report. This is due to limitations mapping the underlying Data source to the ESI Research Area classification schema.

FIGS. 29, 30, 31A & 31B: Reviews for journals that do not have an associated ESI Research Area are excluded from this dataset. This biases the data to reviews for journals with higher JIFs, as these journals are more likely to have an associated ESI Research Area.

FIG. 33: Reviewers are affiliated with a region in Publons based on the location of the institution that they self-report to be working at or affiliated with.

FIG. 36: The regions shown in Fig. 36 differ from those generally referred to throughout this report. The regions in Fig. 36 are the 25 regions for which Publons has the most data to explore median review length.
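As a rough illustration of the name-based inference described above — using a tiny made-up name table as a stand-in for the actual gender_guesser 0.4.0 database — partial coverage arises because many names cannot be mapped to a gender:

```python
# Toy stand-in for name-based gender inference. The report used the
# third-party module gender_guesser 0.4.0; this invented lookup table
# only illustrates the idea. Names absent from the table stay
# "unknown", which is why coverage is partial (~40% in the report).
NAME_TABLE = {
    "maria": "female",
    "james": "male",
    "wei": "unknown",  # many names do not map cleanly to one gender
}

def infer_gender(first_name: str) -> str:
    return NAME_TABLE.get(first_name.lower(), "unknown")

reviewers = ["Maria", "James", "Wei", "Xiulan"]
inferred = {name: infer_gender(name) for name in reviewers}
coverage = sum(g in ("male", "female") for g in inferred.values()) / len(reviewers)
print(inferred)
print(f"coverage: {coverage:.0%}")  # coverage: 50%
```

The real library returns finer-grained labels (e.g. "mostly_male", "andy" for ambiguous names), but the coverage limitation works the same way.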

FIGS. 21 & 22: Data used to generate the Review Distribution Indices and coefficients are limited to reviewers who have self-elected to sign up to the Publons platform. In regions with a smaller representation on Publons, it is more likely the distribution of review work will be skewed.

FIGS. 25 & 29: Japan is an outlier both in review invitation agreement rates and review turnaround times. This could be partly due to the fact that a higher proportion of Japanese journals are configured to simply assign review tasks rather than sending out an invitation first. We will investigate this further.
Reviewer responsiveness: effectively there are two schemas for managing reviewers in ScholarOne Manuscripts:
1. Select, invite, assign, and score.
2. Select, assign, and score.
Roughly 5% of sites use the second, invite-free scheme, so aside from Fig. 13, where we chose to show all stages of reviewer responsiveness, we have opted to use all data representing the “select” and “assign” phases.

“NOW THAT PEER REVIEW IS EASILY MEASURABLE AND VERIFIABLE, THERE IS NO LONGER ANY EXCUSE TO EXCLUDE IT FROM ANALYSES OF RESEARCH AND RESEARCHERS.”
DANIEL JOHNSTON, PUBLONS COFOUNDER

ABOUT PUBLONS
Publons works with researchers, publishers and research institutions to speed up science and research by harnessing the power of peer review. Publons peer review solutions are designed to help bring transparency, efficiency, quality, and recognition to peer review. Publons, founded in 2013 and now part of Clarivate Analytics, has offices in Wellington, New Zealand and London, UK.
Follow Publons on:
facebook.com/publons
@publons

ABOUT CLARIVATE ANALYTICS
Clarivate Analytics is the global leader in providing trusted insights and analytics to accelerate the pace of innovation. Building on a heritage going back more than a century and a half, we have built some of the most trusted brands across the innovation lifecycle, including Web of Science, Cortellis, Derwent, CompuMark, MarkMonitor and Techstreet. Today, Clarivate Analytics is a new and independent company on a bold entrepreneurial mission to help our clients radically reduce the time from new ideas to life-changing innovations. For more information, please visit: clarivate.com

To learn more about the Global State of Peer Review report, or about Publons, visit: publons.com/community/gspr

Or get in touch via: [email protected]

DOI: https://doi.org/10.14322/publons.GSPR2018 ©2018 Publons

publons.com facebook.com/publons @publons #PublonsGSPR