Discussion About the New Nature Index
Robin Haunschild* & Lutz Bornmann**

* Corresponding author: Max Planck Institute for Solid State Research, Heisenbergstr. 1, 70569 Stuttgart, Germany. E-mail: [email protected]

** Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Hofgartenstr. 8, 80539 Munich, Germany. E-mail: [email protected]

Abstract

Very recently, the Nature Index has been proposed and published. It makes it possible to rank institutions and countries according to the absolute number of papers published in a selection of 68 journals. We discuss the main elements of the Nature Index, and we wish to start a discussion about it by stating four main points of criticism. The main problem of this Index seems to be its use of absolute numbers without normalization. Furthermore, quality is measured on the basis of the reputation of journals rather than the quality (impact) of single publications.

Letter to the Editor

Dear Sir,

Very recently, the Nature Index (NI) has been proposed and published (Campbell and Grayson 2014a, 2014b). The NI compares the number of publications published by institutions and countries in selected journals. The authors of the NI hope that it, "rather than providing some authoritative analysis, will act as a conversation starter and a nucleation point for ideas for further analysis" (Campbell and Grayson 2014b). We wish to fulfill this hope and start a conversation about the NI here.

Currently, 68 journals are part of the NI. These 68 journals were selected by polling 68 active scientists, proposed by the editorial staff of Nature journals, for the 10 journals in which they would most like to publish their best scientific work. An attempt was made to confirm this small poll with a larger e-mail survey targeting 100,000 scientists in the life, physical, and medical sciences; this second set of scientists is supposed to constitute a broad geographical mix. Of these 100,000 scientists, only 2,800 (2.8%) replied to the survey. The 68 journals in the NI constitute less than 1% of the journals in the Web of Science (WoS, Thomson Reuters) but are supposed to account for about 30% of the citations in the natural sciences (see http://www.natureindex.com/faq). The Nature Publishing Group plans to review the size and content of the journal set periodically.

The NI provides three indicators: the raw article count (AC), the fractional count (FC), and the weighted fractional count (WFC). While the AC is a simple article count for an institution or country, the FC accounts for coauthors from other countries (or institutions) by assigning each paper fractionally. The WFC introduces a further weighting: because journals in the category Astronomy & Astrophysics publish approximately five times more papers than journals in other categories, the FC of papers in these journals is multiplied by a global factor of 0.2 to obtain the WFC. The current NI (for the time period 1 September 2013 to 31 August 2014) is available as a beta release at http://www.natureindex.com.

We have several points of criticism regarding the NI.

First, the NI indicators depend more on the publication output of a country or institution in absolute numbers than on the quality or impact of these publications. Using the absolute numbers of the NI, Harvard University ranks lower than the Chinese Academy of Sciences according to AC, FC, and WFC. If one instead constructs a normalized index (e.g., relative to the full publication output in the same time frame), the picture is reversed: according to such a relative AC, Harvard University ranks higher than the Chinese Academy of Sciences. The sketch below makes these indicators and the proposed normalization concrete.
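To illustrate the definitions, the following minimal Python sketch computes AC, FC, WFC, and a relative AC for a fictitious institution. The paper data, the category weight table, and the total output figure are our own illustrative assumptions; they are not taken from the NI itself.

# Hypothetical sketch of the three NI indicators and the normalization
# proposed above. All input data are invented for illustration.

# Each tuple: (journal category, authors from the institution, total authors)
papers = [
    ("Chemistry", 2, 5),
    ("Astronomy & Astrophysics", 1, 4),
    ("Life Sciences", 3, 3),
]

# AC: raw article count -- one count per paper with at least one author
# from the institution.
ac = len(papers)

# FC: fractional count -- each paper contributes the institution's share
# of its authors.
fc = sum(inst / total for _, inst, total in papers)

# WFC: weighted fractional count -- Astronomy & Astrophysics papers are
# down-weighted by the global factor of 0.2.
WEIGHT = {"Astronomy & Astrophysics": 0.2}
wfc = sum(WEIGHT.get(cat, 1.0) * inst / total for cat, inst, total in papers)

# Relative AC as proposed above: normalize by the institution's full
# publication output in the same period (an assumed figure here).
total_output = 50
relative_ac = ac / total_output

print(f"AC = {ac}, FC = {fc:.2f}, WFC = {wfc:.2f}, relative AC = {relative_ac:.2f}")

For these invented inputs, the sketch yields AC = 3, FC = 1.65, WFC = 1.45, and a relative AC of 0.06, showing how the astronomy down-weighting and the normalization each change the raw count.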
Second, there is no apparent reason why the NI should comprise 68 journals proposed by 68 scientists. It could just as well be composed of 100 journals proposed by 150 scientists. Furthermore, it is not completely clear how the different disciplines were taken into account in the selection of the journals.

Third, the attempt to validate the set of 68 journals via the e-mail survey of 100,000 scientists produced a response rate of only 2.8%. This low response rate may indicate that the NI did not seem important to most of the polled scientists (we assume that the scientists knew the purpose of the survey). Furthermore, it is not clear what kind of sample the 100,000 scientists constitute. Is it a random sample?

Fourth, the NI measures the performance of institutions and countries on the basis of the reputation of journals. This kind of measurement follows the tradition of the journal impact factor (JIF). The many critiques of using the JIF to evaluate research (see the San Francisco Declaration on Research Assessment at http://am.ascb.org/dora/) should encourage us to seek better indicators of research performance. Performance measurement should be conducted on the basis of individual papers, not journals (Bornmann, Marx et al. 2012).

References

Bornmann, L., et al. (2012). "Diversity, value and limitations of the Journal Impact Factor and alternative metrics." Rheumatology International (Clinical and Experimental Investigations) 32(7): 1861-1867.

Campbell, N. and M. Grayson (2014a). "Index 2014 Global." Nature 515(7526): S49.

Campbell, N. and M. Grayson (2014b). "Introducing the index." Nature 515(7526): S52-S53.