Does the use of open, non-anonymous peer review in scholarly publishing introduce bias? Evidence from the F1000 post-publication open peer review publishing model

Mike Thelwall, Statistical Cybermetrics Research Group, University of Wolverhampton, Wulfruna Street, Wolverhampton WV1 1LY, UK. ORCID: 0000-0001-6065-205X
Verena Weigert, Jisc, 15 Fetter Lane, London EC4A 1BW. ORCID: 0000-0003-4887-7867
Liz Allen, F1000, 34-42 Cleveland Street, London W1T 4LB. ORCID: 0000-0002-9298-3168
Zena Nyakoojo, F1000Research, London W1T 4LB, UK. ORCID: 0000-0001-6812-4120
Eleanor-Rose Papas, Editorial Department, F1000Research, London W1T 4LB, UK. ORCID: 0000-0003-4293-4488

Competing interests: LA, ZN, and E-RP are currently employed by F1000Research Ltd. The data analysis was conducted independently by MT, who was not involved in the pre-publication editorial checks or the peer review process.

Background: Peer review remains an important feature of scholarly publishing, assuring scrutiny of and trust in research findings. As part of the move towards open knowledge practices, making peer review more open is increasingly cited as a way to deliver fuller scrutiny and transparency of research assessments, and many flavours of open peer review are now in use across scholarly publishing. Open peer review, particularly where reviews are fully attributable and non-anonymised, has nevertheless been criticised for the biases that may accompany its use. This study examines whether there is evidence of bias in two commonly criticised aspects of open, non-anonymous peer review, as used in the post-publication peer review system operated by the open-access scholarly publishing platform F1000Research. First, is there evidence of bias when a reviewer based in a specific country assesses the work of an author based in the same country? Second, are reviewers influenced by being able to read the comments, and see the origins, of previous reviewers?
Methods: Scrutinising the open peer review comments published on F1000Research, we assess the extent of two frequently cited potential influences on reviewers that may result from the transparency of a fully attributable, open peer review publishing model: the national affiliations of authors and reviewers, and the ability of reviewers to view previously published reviewer reports before submitting their own. The effects of these potential influences were investigated for all first versions of articles published on F1000Research between July 13, 2012 and July 8, 2019.

Results: First, in 16 of the 20 countries with the most articles, reviewers based in the same country as the author tended to give a more positive review; in one country the difference was statistically significant. Only three countries showed the reverse tendency. Second, there is no evidence of a conformity bias. When reviewers mentioned a previous review in their peer review report, they were not more likely to give the same overall judgement. Although reviewers who had longer to potentially read previously published reviewer reports were slightly less likely to agree with previous reviewer judgements, this could be due to these articles being difficult to judge rather than to deliberate non-conformity.

Conclusions: There is some evidence that being based in the same country as an author may influence a reviewer's decision when the reviewer's identity is public. There is, however, no evidence that being able to read an existing published review prior to submitting one's own unduly influences a reviewer's decision or encourages conformity. These findings are based entirely upon peer review data derived from the F1000Research platform, which offers open, invited peer review within a distinctive post-publication publishing system.
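The country comparison described above can be illustrated with a simple sketch. The counts, function name, and choice of a two-proportion z-test below are purely hypothetical assumptions for illustration; they are not the paper's actual data or its published method.

```python
# Hypothetical sketch: comparing how often same-country reviewers give an
# "Approved" judgement versus other reviewers, via a two-proportion z-test.
# All counts below are invented for illustration only.
from math import sqrt, erf

def two_proportion_z(approved_a, total_a, approved_b, total_b):
    """Return (z, two-sided p) for H0: the two approval rates are equal."""
    p1 = approved_a / total_a
    p2 = approved_b / total_b
    pooled = (approved_a + approved_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal approximation.
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Invented example: 80/100 same-country reviews approved vs 140/200 otherwise.
z, p = two_proportion_z(80, 100, 140, 200)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A per-country analysis like the one reported would repeat such a comparison for each of the 20 countries, which in practice also calls for care with small samples and multiple testing.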
The findings help to dispel some of the critique levelled at aspects of open peer review, although, were the data available, we would ideally compare our results with data derived from alternative and established peer review practices (both open and closed). We would like to see a more robust evidence base to support decision making around the scholarly publishing system, and peer review in particular.

Keywords: Open science; peer review; peer review biases; open peer review; F1000Research; post-publication peer review

Introduction

The trend towards more open and collaborative knowledge practices ('open science') has led to an increasing number of journals and publishing outlets offering 'open peer review' to various degrees (Lee & Moher, 2017; Morey, Chambers, Etchells, et al., 2016; Ross-Hellauer, 2017). F1000 today offers perhaps one of the most comprehensive versions of open peer review: across all its publishing platforms, a post-publication peer review model is provided alongside invited, open peer review. Each article, after initial editorial and objective checks (e.g. for ethical approval, plagiarism, and inclusion of underlying data), is published before peer review. The subsequent peer review process, notable in the debate about the value of open peer review, publishes the identities and affiliations of all peer reviewers alongside their narrative reports. The publication process is iterative ('continuous publishing'): authors can respond directly to their reviewers' comments by providing a revised article version, linked to the previous version to preserve the revision history. To keep the process rapid and iterative, peer reviewer reports are published (following an initial check) as soon as they are submitted, allowing subsequent reviewers to read already-published reports before submitting their own review.
Additionally, the post-publication peer review model is intended to shift the role of peer review in scholarly publishing from its traditional focus on selecting content for publication to one that helps to shape the research to be the best it can be; providing this openly allows readers to reflect on the development of a piece and consider any reviewer critique. Nevertheless, open peer review introduces the potential for specific types of reviewer bias, and since peer review of scholarly output continues to provide an important quality control function (Siler, Lee, & Bero, 2015), it is important to examine the benefits and potential biases that newer open systems of peer review may introduce. Many types of bias have been described within the peer review processes and practices operated by grant funding agencies and in scholarly publishing, including bias according to: author/grant applicant demographic characteristics (Ceci & Williams, 2011; Fox, Burns, & Meyer, 2016; Tamblyn, Girard, Qian & Hanley, 2018; Witteman, Hendricks, Straus & Tannenbaum, 2019); author-reviewer speciality differences (cognitive distance) (Wang & Sandström, 2015); country of affiliation/article country of origin (Harris, Macinko, Jimenez, Mahfoud, & Anderson, 2015); and perceptions of authors' institutional prestige (Peters & Ceci, 1982). Perhaps because the studies revealing such biases have typically examined the single-blind model (with author/grant applicant identity known to reviewers), there remains a widespread belief that double-blind peer review introduces the least bias and is superior to open peer review approaches (Rodríguez-Bravo, Nicholas, Herman, et al., 2017; Mulligan, Hall, & Raphael, 2013; Moylan, Harold, O'Neill, & Kowalczuk, 2014). However, beliefs about the merits of open peer review are supported by limited empirical evidence, based largely upon survey and attitudinal research (Ross-Hellauer, Deppe, & Schmidt, 2017).
From the studies providing empirical insights into open peer review, there is evidence that peer review transparency has a minimal effect on review quality. Two randomised controlled trials at The BMJ found no quality differences between reviews provided when reviewers knew their identities would be revealed and blinded reviews (Van Rooyen, Godlee, Evans, Black & Smith, 1999), or when reviewers were told their reviews might be published online (Van Rooyen, Delamothe, & Evans, 2010). A study of a psychology journal found that reviews tended to be more carefully written when they were intended to be openly available (Walsh, Rooney, Appleby, & Wilkinson, 2000). A systematic review of research into peer review for biomedical journals found nine studies of concealing author or reviewer identity, but no overall conclusion could be reached about how this influenced review quality (Jefferson, Alderson, Wager, & Davidoff, 2002). A recent study of a large volume of articles submitted to Elsevier journals concluded that an open peer review process does not compromise quality or participation, at least when reviewers are able to protect their anonymity (Bravo, Grimaldo, López-Iñesta, Mehmani, & Squazzoni, 2019), although The BMJ reviewers were less willing to review when their identities would be revealed (Van Rooyen, Godlee, Evans, Black & Smith, 1999). While there is some evidence that open peer review might deter reviewers from accepting invitations (Ross-Hellauer, Deppe, & Schmidt, 2017), there is limited empirical evidence about the effect that revealing reviewers' identities and affiliations ('signed peer review' policies) has on peer review in practice. As part of the efforts to provide a firmer evidence base to support decision-making around the use of open peer review, this article tests the extent to which a post-publication peer