The Impact Factor Fallacy
bioRxiv preprint doi: https://doi.org/10.1101/108027; this version posted February 20, 2017. The copyright holder for this preprint (which was not certified by peer review) is the author/funder. All rights reserved. No reuse allowed without permission.

Title:
The impact factor fallacy

Authors:
Frieder Michel Paulus 1; Nicole Cruz 2,3; Sören Krach 1

Affiliations:
1 Department of Psychiatry and Psychotherapy, Social Neuroscience Lab, University of Lübeck, Ratzeburger Allee 160, D-23538 Lübeck, Germany
2 Department of Psychological Sciences, Birkbeck, University of London, London, UK
3 Laboratoire CHArt, École Pratique des Hautes Études (EPHE), Paris, France

Corresponding Authors:
Frieder Paulus, phone: +49-(0)451-31017527, email: [email protected]
Sören Krach, phone: +49-(0)451-5001717, email: [email protected]
Department of Psychiatry and Psychotherapy, Social Neuroscience Lab, University of Lübeck, Ratzeburger Allee 160, D-23538 Lübeck, Germany

Author contributions:
FMP, NC, and SK wrote the manuscript.

Running title: The impact factor fallacy
Word count abstract: 135
Word count manuscript (including footnotes): 4013
Number of tables: 1
Number of figures: 0

Abstract

The use of the journal impact factor (JIF) as a measure of the quality of individual manuscripts and the merits of scientists has faced significant criticism in recent years. We add to this criticism by arguing that such an application of the JIF in policy and decision making in academia rests on false beliefs and unwarranted inferences. To approach the problem, we use principles of deductive and inductive reasoning to illustrate the fallacies inherent in using journal-based metrics to evaluate the work of scientists. In doing so, we elaborate that if we judge scientific quality based on the JIF or other journal-based metrics, we are either guided by invalid or weak arguments, or we in fact consider our uncertainty about the quality of the work rather than the quality itself.

Introduction

The journal impact factor (JIF) was initially used to help librarians make decisions about journals (Garfield, 2006). Over the last decades, however, the usage of the JIF has changed substantially. Deviating from its original purpose, it is now widely used to evaluate the quality of individual publications and the work of scientists (Amin & Mabe, 2003; Arnold & Fowler, 2010). The measure itself has been criticized for various reasons. For example, it is well known that the JIF is an inaccurate estimate of the expected number of citations of an article within a specific journal (Callaway, 2016; Larivière et al., 2016) and that it is relatively easy to manipulate (McVeigh & Mann, 2009; Tort, Targino, & Amaral, 2012).
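For orientation, the following is the standard two-year definition of the JIF; this formula follows the conventional Journal Citation Reports definition and is not spelled out in the manuscript itself. For a journal in year $Y$,

\[
\mathrm{JIF}_{Y} \;=\; \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}},
\]

where $C_{Y}(t)$ is the number of citations received in year $Y$ by items the journal published in year $t$, and $N_{t}$ is the number of citable items the journal published in year $t$. Both points of criticism are visible in this definition: a mean over a heavily skewed citation distribution says little about any single article, and the count of "citable items" in the denominator is open to editorial discretion.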
Nonetheless, the JIF has deeply affected the work of scientists and decision making in academia. Scientists get jobs, tenure, grants, and bonuses based on the impact factors of the journals in which they publish their manuscripts, outgrowths that have been critically discussed in many previous reviews, comments, and editorials (Brembs, Button, & Munafò, 2013; Casadevall & Fang, 2014; Della Sala & Crawford, 2007; DePellegrin & Johnston, 2015; Lehmann, Jackson, & Lautrup, 2006; Reich, 2013; Seglen, 1997; Simons, 2008; Werner, 2015). Notably, the JIF has also been explicitly referred to as a tool for deciding how to distribute funds across institutions, for example in Germany (DFG, 2004), and thereby affects policy making on a much larger scale.

"For the calculation of the performance-based bonus of the unit providing the service (department or clinic), the original publications may be used with the unweighted impact factor of the publication organ, in the sense of a step-wise introduction of quality criteria. Thereby, a first and last authorship may be considered with one third each, and the remaining third can be distributed across all remaining authors […]."[1]

[1] Original German wording: "Für die Berechnung der LOM [leistungsorientierte Mittel; remark of the authors] der jeweiligen leistungserbringenden Einheit (Abteilung bzw. Klinik) kann im Sinne einer stufenweisen Einführung von Qualitätskriterien die Bewertung erfolgter Original-Publikationen unter Verwendung des ungewichteten Impact Faktor der jeweiligen Publikationsorgane (JIF) erfolgen. Dabei können Erst- und Letztautorschaft mit je einem Drittel berücksichtigt werden; das verbleibende Drittel kann auf alle übrigen Autoren verteilt werden […]." (German Research Foundation, 2004, p. 15)
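To make the quoted allocation rule concrete, consider a hypothetical worked example (the numbers are invented for illustration): a paper with five authors appears in a journal with an unweighted JIF of 9. The scheme then credits

\[
\underbrace{\tfrac{1}{3}\cdot 9}_{\text{first author}} = 3, \qquad \underbrace{\tfrac{1}{3}\cdot 9}_{\text{last author}} = 3, \qquad \underbrace{\tfrac{1}{3}\cdot 9}_{\text{three middle authors}} = 3,
\]

so each middle author is credited with one impact point. The individual contribution enters the calculation only through authorship position; the content of the paper plays no role.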
Besides such explicit usage of the JIF for evaluating scientific excellence, the JIF also implicitly affects other measures that have been suggested to better approximate the quality of a scientist's work or of a specific study (e.g., the h-index, Hirsch, 2005, and the Relative Citation Ratio (RCR), Hutchins, Yuan, Anderson, & Santangelo, 2015). For example, there is some evidence that the number of citations of an article is influenced by the JIF of the journal in which the article was published, regardless of the quality of the article itself (Callaham, Wears, & Weber, 2002; Cantrill, 2016). This implies that measures based on the citations of individual articles are still influenced by the JIF of the publication organ. Given the many ways in which the JIF can influence decision making in academia, it is not surprising that empirical data now demonstrate the JIF to be one of the most powerful predictors of academic success (Van Dijk, Manor, & Carey, 2014). We could recently show that some scientists may have adapted to these reward principles, in that the brain's reward structures respond more strongly at the prospect of a high impact publication of one's own (Paulus, Rademacher, Schäfer, Müller-Pinzler, & Krach, 2015).

In line with the rising initiatives to prevent the use of the JIF for evaluating the quality of science (see, e.g., the DORA initiative, Alberts, 2013, Cagan, 2013, or the report of the German Science Council, 2015), we have considerable doubts that the arguments in support of using the JIF for measuring scientific excellence are justified. In this comment we want to look at the problem of using the JIF from a different perspective and carefully (re)evaluate the arguments for its use as an estimate of scientific quality. Thereby, we hope to better understand the beliefs about the JIF that influence decisions in academia, and the implications of policies that use the JIF to assess and remunerate scientific quality. Beyond the specific case of the JIF, this exercise might also help to specify more general misconceptions that arise when journal-based properties are used to evaluate science, in order to overcome incentive structures based on journal-based metrics altogether.

Deductive fallacy when using the JIF

A basic belief when using the JIF for evaluating the quality of a specific manuscript seems to be that (1) if a paper is published in a high impact factor journal (p), then the paper is of high quality (q).[2] Why would scientists believe this? A straightforward reason is the idea that it is more difficult to publish in a high impact factor journal, because higher standards of research quality and novelty have to be passed in order to be accepted. The average number of citations of a journal's articles within a specific time period signals the average breadth of interest in these articles during that time period, which can of course be affected by many factors other than research quality. But as a first approximation, let us suppose that belief (1) is the case. What can we conclude from it?

[2] When we speak of "high" and "low" impact in this paper, the arguments we make are independent of whether "high" and "low" refer to the absolute JIF of a journal, or to the JIF relative to a specific research domain.

If we see a paper published in a high impact factor journal, we could then draw the deductively valid inference of modus ponens (MP: if p then q, p, therefore q)[3] and conclude that the paper is of high quality.

[3] Here p and q stand for arbitrary propositions. For example, p might stand for "This paper is published in a high impact factor journal" and q for "This paper is of high quality".
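In schematic form (premises above the line, conclusion below the line), with p and q instantiated as in footnote 3, the inference reads:

\[
\frac{p \rightarrow q \qquad p}{q} \;(\text{MP})
\]

Given belief (1) as the conditional premise, and the observation that the paper appeared in a high impact factor journal as the minor premise, the conclusion that the paper is of high quality follows validly, provided both premises are true.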