
Scientometric indicators and their exploitation by journal publishers

Name: Pranay Parsuram

Student number: 2240564

Course: Master’s (Book and Digital Media Studies)

Supervisor: Prof. Fleur Praal

Second reader: Dr. Adriaan van der Weel

Date of completion: 12 July 2019

Word count: 18177 words


Contents

1. Introduction ...... 3
2. Scientometric Indicators ...... 8
2.1. Journal Impact Factor ...... 8
2.2. h-Index ...... 10
2.3. Eigenfactor™ ...... 11
2.4. SCImago Journal Rank ...... 13
2.5. Source Normalized Impact Per Paper ...... 14
2.6. CiteScore ...... 15
2.7. General Limitations of Citation Count ...... 16
3. Conceptual Framework ...... 18
3.1. Academic Publishing ...... 18
3.2. Academic Journal Publishing ...... 21
3.2.1. Researcher Considerations ...... 23
4. Scientometric Indicators and Journal Publishers ...... 27
4.1. Use of Scientometric Indicators ...... 27
4.1.1. Journal Pricing ...... 28
4.2. Manipulation of Scientometric Indicators ...... 32
4.2.1. Article Type ...... 32
4.2.2. Co-authorship and Subject Area ...... 33
4.2.3. Manipulation of Citable Items ...... 35
4.2.4. Self-citation ...... 36
4.2.5. Accessibility ...... 37
4.3. Alternatives to Scientometric Indicators ...... 43
4.4. Open Access ...... 45
5. Conclusion ...... 47
Bibliography ...... 50


1. Introduction

The emergence of the Web has shown considerable promise as a means to transform academic communication.1 This is because the Web offers academics and various institutions the means to ‘publish, annotate, review, discover and make links between research outputs’.2 Given these affordances, governments and funding agencies in many countries are now able to get directly involved in systematically evaluating the scientific outputs of universities and research institutions and their research productivity and quality. This systematic evaluation aids with providing measures to improve the performance of a given institution and provides a basis for decision-making about the allocation of research funding. The major means for determining the overall scientific output of an institution is scientometrics.3 In this thesis, I will shed light on how journal publishers exploit and manipulate this dependence of universities and research institutions on scientometrics for their own commercial interests. This chapter of the thesis provides a brief introduction to scientometrics and its use in the context of my research.

Until around 1970, research regarding the growth and development of academic and, in particular, scientific knowledge was considered to be a philosophical field. More emphasis was laid on the validity of knowledge, whereas the question of how this knowledge was being produced received little attention; that question was considered to belong to the realm of the social sciences.4 Then, in 1969, the term scientometrics was coined in Russia to denote the study of all aspects of the literature of science and technology. By 1978, the term had attained wide recognition because of the establishment of a journal called Scientometrics in Hungary.5 This journal is still in publication today, and it deals with ‘the quantitative features and characteristics of science and scientific research’.6 Furthermore, Tague-Sutcliffe provides a simple definition of scientometrics as follows: ‘Scientometrics is the study of the quantitative aspects of science as a discipline or economic activity.’7

1 J. Stewart, R. Procter, R. Williams & M. Poschen, ‘The role of academic publishers in shaping the development of Web 2.0 services for scholarly communication’, New Media & Society, 15:3 (2013), p. 414. 2 Ibid., p. 414. 3 D. Pontille & D. Torny, ‘The controversial policies of journal ratings: Evaluating social sciences and humanities’, Research Evaluation, 19:5 (2010), pp. 347–348. 4 L. Leydesdorff, The challenge of scientometrics: The development, measurement, and self-organization of scientific communications (Universal Publishers, 2011), p. 15. 5 W. Hood & C. Wilson, ‘The literature of bibliometrics, scientometrics, and informetrics’, Scientometrics, 52:2 (2001), p. 293. 6 Anon., ‘Description’, Scientometrics (10 June 2019). 7 J. Tague-Sutcliffe, ‘An introduction to informetrics’, Information Processing & Management, 28:1 (1992), p. 1.

Scientometrics is related to and overlaps with bibliometrics.8 Bibliometrics has been in use in different forms for over a century. However, the term was officially coined only in 1969,9 around the same time as scientometrics. The major difference between bibliometrics and scientometrics is that the former mainly focuses on the quantitative aspects of the literature of science and scholarship, whereas the latter considers the literature as well as other aspects of science and technology, such as researcher practices, socio-organizational structures, research and development management and governmental regulations.10 Inherently, both concepts are used as a tool to measure research output in the form of publications, and as such, the terms have been used inconsistently and interchangeably in the existing literature. However, as scientometrics is the broader concept, it fits better in the context of this thesis; therefore, I will use this term throughout. The two basic metrics involved in scientometrics are the number of publications and the number of citations that these publications receive. These metrics can be evaluated at different levels and for different objects, such as a single publication, a researcher, a research unit, an institution or a country.11 Conventional scientometrics only provides raw data about the publication and citation count, which is generally derived from databases such as Web of Science, Scopus, PubMed and others. To interpret this data, more sophisticated indicators have been implemented, for example, journal impact factor (IF), h-index, field-normalized citation indicators, Eigenfactor™ (EF), SCImago journal rank (SJR), source normalised impact per paper (SNIP) and CiteScore. The reason for the introduction of these indicators is that publication and citation practices differ across disciplines and even sub-disciplines. Therefore, more objective and normalized indicators are required for comparisons between disciplines.12 One of the first scientometric indicators was the IF,13 which was proposed in the 1950s.14 Since then, scientometric indicators have gradually become one of the main means for the characterisation of research performance.15 Scientometric indicators are an effective means of determining the evolution of research in a given field and can hence help

8 Hood & Wilson, ‘The literature of bibliometrics’, p. 291. 9 Ibid., p. 292. 10 Ibid., pp. 293–294. 11 J. Wilsdon, J. Bar-Ilan, R. Frodeman, E. Lex, I. Peters & P.F. Wouters, ‘Next-generation metrics: Responsible metrics and evaluation for open science,’ Report of the European Commission Expert Group on Altmetrics, (2017),

Nevertheless, researchers have realised that journal publications serve not only as a means for communication but also as an indicator of quality and impact20 and hence career development.21 Their selection of when, where and how to publish their work aims at maximising dissemination to the intended audience, registering their claim on the work done and gaining prestige among their peers and superiors. As journal articles have become the dominant form of publication, even in disciplines where they were not dominant in the past, researchers themselves have increasingly come to rely on journals, in particular high-status journals, and have come to perceive other channels of communication, including those that are better suited to application- or practice-based research, as having low status and prestige in the academic world.22 This has directly affected the number of journals and the amount of research published in them. Overall, the number of academic journals increased from 39,565 in 2003 to 61,620 in 2008, and among them, the number of peer-reviewed journals increased from 17,649 in 2002 to 23,973 in 2008. Moreover, the annual average number of articles per journal increased from 72 in 1972 to 123 in 1995.23 More broadly, the number of articles published each year and the number of journals have both grown steadily, by about 3% and 3.5% per year, respectively, for over two centuries, though there are some indications that growth has accelerated in recent years.24 As a result, in 2002, Morgan Stanley reported that academic journals had been the fastest-growing media sub-sector of the previous 15 years.25

16 Ò. Miró, P. Burbano, C.A. Graham, D.C. Cone, J. Ducharme, A.F. Brown & F.J. Martín-Sánchez, ‘Analysis of h-index and other bibliometric markers of productivity and repercussion of a selected sample of worldwide emergency researchers’, Emergency Medicine Journal, 34:3 (2017), p. 175. 17 Wilsdon et al., ‘Next-generation metrics’, p. 9. 18 V.D. Kosteas, ‘Journal impact factors and month of publication’, Economics Letters, 135 (2015), p. 77. 19 J. Fry, C. Oppenheim, C. Creaser, W. Johnson, M. Summers, S. White, G. Butters, J. Craven, J. Griffiths & D. Hartley, ‘Communicating knowledge: how and why UK researchers publish and disseminate their findings’, Research Information Network and JISC, (2009), (10 June 2019), pp. 17–18. 20 Ibid., pp. 17–18. 21 Kosteas, ‘Journal impact factors and month of publication’, p. 77. 22 Fry et al., ‘Communicating knowledge’, pp. 17–20.

Moreover, some scientometric indicators, in particular the IF, have rapidly evolved into being more than just a measure of a journal’s relevance and have come to be viewed as an indicator of journal quality and prestige, thus having far-reaching implications.26 This prestige predicts the overall flow of information in a given field, thus increasing the chances of both the journal and the author getting noticed and achieving recognition within the academic community.27 Considering that journal publishers have to take into account the economic considerations of publishing,28 this prestige becomes especially important for financial viability. This is because publications are viewed as commodities sold by publishers to libraries. Since libraries play a central role in financing the publication infrastructure, it becomes important for journals that their publications, i.e. the commodities, are of high quality and unique, so as to maximise interest from libraries.29 In such a scenario, as the IF of a journal is assumed to be a marker of the quality of the research it publishes,30 it acts as a marketing tool for the given journal. Furthermore, because of its importance as a marketing tool, journal publishers are inclined to manipulate it and even exploit it for their own interests. In this thesis, I will examine the use of scientometric indicators by journals for marketing and economisation purposes. To do so, I will first describe the various indicators in use today and how they are or can be used by journal publishers. I will also consider the effects of these indicators on author motivations to submit their work to specific journals.

23 C. Tenopir & D.W. King, ‘The growth of journals publishing’, in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), p. 110. 24 M. Ware & M. Mabe, The STM report: An overview of scientific and scholarly journal publishing, 4th edition (The Hague: International Association of Scientific, Technical and Medical Publishers, 2015), p. 6. 25 Morgan Stanley, ‘Scientific publishing: Knowledge is power’, Morgan Stanley Equity Research Europe (London), 30 September 2002 (16 May 2019). 26 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 240. 27 N.C. Taubert & P. Weingart, ‘Changes in scientific publishing: A heuristic for analysis’, in The future of scholarly publishing: Open access and the economics of digitization (Cape Town, South Africa: African Minds, 2017), p. 3. 28 Ibid., p. 6. 29 Ibid., pp. 6–9. 30 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 240.

Lastly, I will also examine some alternatives to existing metrics that may play an important role in future academic communication and research.


2. Scientometric Indicators

Scientometric indicators are used to assess research performance quantitatively. These assessments may be done in conjunction with other metrics or using the scientometric indicators as stand-alone tools. There are a number of scientometric indicators with varying levels of sophistication.31 This chapter describes some of the most widely used indicators at present.

2.1. Journal Impact Factor

The journal IF is the most commonly employed scientometric indicator today.32 It was first proposed by Eugene Garfield in 195533 and has become the leading scientometric ranking in use.34 The main aim of introducing it was to create ‘a bibliographic system for science literature that can eliminate the uncritical citation of fraudulent, incomplete, or obsolete data by making it possible for the conscientious scholar to be aware of criticisms of earlier papers’.35 Garfield introduced this concept to overcome the issue of researchers being influenced by unfounded assertions and unsubstantiated claims while writing; in other words, researchers were not always aware of criticisms regarding a certain finding. In the 1950s, in the absence of the Internet, a researcher would have to spend a considerable amount of time investigating the bibliographic predecessors of a given article. A citation index would make this check easier and more efficient. Moreover, this index was mainly aimed at minimising the citation of poorly conceived studies. The citation index was developed using a simple numerical-code system to identify individual scientific articles. According to this system, first, an alphabetical list of all periodicals was provided along with the numerical codes for each one of them.36 Thus, the first part of the code gave the periodical in which the article was published, and the second part of the code corresponded to articles in that periodical. Under each numerical code, the code numbers of other articles that referred to the given article were to be provided; in addition, for each citing source, the type of article (original article, review, etc.) was to be mentioned. The availability of the code could be particularly useful for determining the overall historical impact of a given article.37 This system was based on Shepard’s Citations, a legal research system that has been used by lawyers and jurists with considerable success since 1873. Both systems are a means of tracing citations rather than of conducting primary research. The main function of Shepard’s Citations was to test the validity of a case based on previous rulings. However, it had another important function: it helped index all cases that were automatically derived out of the first one.38 This latter function was considered to be useful for scientific communication.39 The major difference for the IF was that scientific disciplines were divided into broad categories (journals becoming categories here) and the number of years covered was restricted.40

31 T. van Leeuwen, ‘Bibliometric research evaluations, Web of Science and the Social Sciences and Humanities: a problematic relationship?’, Bibliometrie-Praxis und Forschung, 2 (2013), p. 8-1. 32 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 240. 33 Garfield, ‘Citation indexes to science’, p. 108. 34 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 240. 35 Garfield, ‘Citation indexes to science’, p. 108. 36 When this system was proposed, a number of periodicals already had numerical codes. The alphabetical list was supposed to act as a guide in a catalogue.

Although the initial aim of the citation index was to provide a list of articles citing any given article, with the expectation that researchers would go through these citing articles to decide the impact and quality of the cited article, over the years the IF seems to have become more of a quantitative measure for a journal. In the latter sense, the IF resembles the system proposed by Gross & Gross.41 The concept proposed by Gross & Gross was for chemistry journals and was meant to act as a guide for libraries to decide which journals to purchase for their students without any bias or subjectivity. Here, the number of citations to a given journal in a five-year period was considered with respect to the total number of articles the journal published.42 The IF in its current form was defined by Garfield only in the 1970s.43 At present, the IFs of journals are published every September by the Thomson Reuters Institute for Scientific Information (ISI).44 The ISI publishes both two-year and five-year IFs.45 The formula for the two-year IF of a journal for a given year, e.g. 2018, is as follows:46

\[
\text{Journal Impact Factor}_{2018} = \frac{\text{citations in 2018 to articles published in 2016–17}}{\text{number of articles published in 2016–17}}
\]
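To illustrate the arithmetic, the following minimal sketch computes a two-year IF from hypothetical counts; the function name and the figures are invented for illustration and do not refer to any real journal.

```python
def two_year_impact_factor(citations_in_year, items_prev_two_years):
    """Two-year journal impact factor for a target year.

    citations_in_year: citations received in the target year (e.g. 2018) by
        items the journal published in the two preceding years (2016-17).
    items_prev_two_years: number of items the journal published in those
        two preceding years.
    """
    return citations_in_year / items_prev_two_years

# Hypothetical journal: 1,250 citations in 2018 to 400 articles from 2016-17.
print(two_year_impact_factor(1250, 400))  # 3.125
```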

37 Ibid., pp. 108–109. 38 W.C. Adair, ‘Citation indexes for scientific literature?’, American Documentation (Pre-1986), 6:1 (1955), pp. 31–32. 39 Garfield, ‘Citation indexes to science’, p. 108. 40 Adair, ‘Citation indexes for scientific literature?’, p. 32; IF ranges of journals greatly differ by discipline. 41 Garfield, ‘Citation indexes to science’, p. 109. 42 P.L. Gross & E.M. Gross, ‘College libraries and chemical education’, Science, 66:1713 (1927), pp. 385–386. 43 E. Garfield, ‘Citations-to divided by items-published gives journal impact factor; ISI lists the top fifty high-impact journals of science’, Current Contents, 7 (1972), pp. 5–8. 44 C. Scully & H. Lodge, ‘Impact factors and their significance; overrated or misused?’, British Dental Journal, 198:7 (2005), p. 391. 45 Kosteas, ‘Journal impact factors and month of publication’, p. 77. 46 Formula derived from Scully & Lodge, ‘Impact factors and their significance’, pp. 391–392. For the five-year IF, the year range for the numerator and denominator would be 2013–17.

However, the IF has a number of shortcomings. First, it only provides an assessment of the journal’s quality and not of the individual articles in the journal.47 Even as a marker for journal quality, it is not entirely accurate, as a single path-breaking article in a journal may be cited numerous times and greatly increase the numerator of the formula, whereas many other articles may not be cited at all; in such a case, the journal’s IF is entirely dependent on the citations of one article alone. Another issue is that the IF of a journal can be easily manipulated depending on the type of article. For example, a journal publishing more review articles usually has a higher IF than one publishing more original articles, as the former are more frequently cited. In addition, there is disparity among journal IFs depending on their fields and type; for instance, scientific journals generally have higher IFs than clinical journals. Lastly, the IF is only valid for journals and does not consider books and book chapters.48

2.2. h-Index

The h-index was proposed by Hirsch in 2005. This indicator aims to provide a broad assessment of an individual researcher’s work and publication record, based on the number of papers published over a given period of time and the number of times each paper is cited. The index was initially proposed for physicists49 but has since been employed in other scientific disciplines as well.50 According to Hirsch’s definition, ‘a scientist has index h if h of his or her Np papers have at least h citations each and the other (Np − h) papers have ≤h citations each’.51 In other words, if a scientist has an h-index of 20, it means that he or she has published at least 20 papers that have each been cited at least 20 times.52 A simple method for calculating the index is to first look up the number of a researcher’s published works. Then, the list should be arranged in descending order of the number of times the publications have been cited. Continue through the list until the number of citations for a publication becomes smaller than its position in the list; the number of papers counted up to that point gives the h-index value. Thus, an h-index of 0 means that the scientist has either not published any papers or that their papers have had no or negligible visible impact. This index thus ensures that only the impactful papers authored by a scientist are considered for assessment and the others are neglected. Therefore, it supports enduring performers with high publishing productivity coupled with high, or at least above-average, impact.53

47 Wilsdon et al., ‘Next-generation metrics’, p. 9. 48 Scully & Lodge, ‘Impact factors and their significance’, pp. 392–393. 49 J.E. Hirsch, ‘An index to quantify an individual’s scientific research output’, Proceedings of the National Academy of Sciences, 102:46 (2005), p. 16569. 50 L. Bornmann & H.D. Daniel, ‘What do we know about the h index?’, Journal of the American Society for Information Science and Technology, 58:9 (2007), p. 1381. 51 Hirsch, ‘An index to quantify an individual’s scientific research output’, p. 16569. 52 Derived from Bornmann & Daniel, ‘What do we know about the h index?’, p. 1381.
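The counting rule described above can be expressed in a few lines of code. This is only a sketch; the function name and the citation counts in the example are invented for illustration.

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    # Sort citation counts in descending order and walk down the list until
    # a paper's citation count falls below its (1-based) rank.
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(ranked, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher with eight papers:
print(h_index([25, 18, 12, 9, 6, 3, 1, 0]))  # 5
```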

However, the h-index too has some shortcomings. Primary among them is that the h-index tends to favour senior researchers over junior and untenured ones.54 This is because the h-index does not decrease over time, and tenured researchers generally have higher h-index values, as their overall productivity is higher;55 as a result, it does not accurately reflect recent scientific achievement.56 Another common complaint is that the h-index does not consider extreme values, even though, in science, a single valuable paper can lead to a major breakthrough.57 Highly cited and significant papers therefore carry the same weight as any other paper in the calculation of the index.58 Further, since the h-index is derived from the same databases as those used by the ISI to collate journal IFs, it does not consider books and book chapters.59 In addition, unclear and incorrect citations within the databases, as well as some journals or publications not being indexed in the databases, may lead to incorrect h-index values.60 Moreover, like the IF, the h-index can be manipulated through self-citation. Also, single- and multi-authored papers are treated identically in this system. Lastly, like the IF, it is greatly dependent on the discipline and on the overall number of scientists working in, and the output of, a discipline.61

2.3. Eigenfactor™

EF was developed by Carl Bergstrom and Jevin West in 2007 to address two major issues related to both the IF and the h-index: that citations had the same value irrespective of the prestige of the journal in which they were published, and that these indices did not take into account differences among disciplines and their journals.62 Thus, the main aim of the EF was to provide a more sophisticated metric to measure citation data by using network analysis.63 To achieve this, Bergstrom and West used a computational algorithm, known as the EF algorithm, to extract information inherent to citation networks. The algorithm is related to a class of network statistics known as eigenvector centrality measures.64 It computes the visitation frequency of a given journal directly from a matrix that records how often other journals cite the given journal.65 Importantly, the EF does not consider journal self-citations.66 The proposed approach is similar to that used by Google to rank web pages while returning search results. Google’s algorithm considers not only the number of hyperlinks a given page receives but also where those hyperlinks come from. In a similar vein, the EF algorithm ranks journals as web pages based on the citation data obtained from the ISI,67 with citations playing the role of hyperlinks.68 Thus, a journal’s EF score is considered to indicate its overall importance in the scientific community69 over a 5-year period.70
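The idea can be made concrete with a toy computation. The sketch below is not the actual EF algorithm, which also weights journals by article output, applies a damping step and uses a five-year citation window, but it illustrates the underlying notion of eigenvector centrality: a journal’s score depends on how highly ranked the journals citing it are. The matrix values and journal names are invented.

```python
import numpy as np

def citation_centrality(C, iterations=100):
    """Toy eigenvector-centrality scores for journals.

    C[i, j] = citations from journal j to journal i. Self-citations are
    zeroed out (mirroring the EF convention) and each column is normalised,
    so repeated multiplication (power iteration) converges towards the
    leading eigenvector of the citation matrix.
    """
    C = C.astype(float).copy()
    np.fill_diagonal(C, 0.0)            # ignore journal self-citations
    col_sums = C.sum(axis=0)
    col_sums[col_sums == 0] = 1.0       # avoid division by zero
    P = C / col_sums                    # column-stochastic citation matrix
    v = np.full(C.shape[0], 1.0 / C.shape[0])
    for _ in range(iterations):
        v = P @ v
        v /= v.sum()
    return v

# Hypothetical three-journal network (rows/columns: A, B, C).
C = np.array([[0, 30, 10],
              [20, 0, 40],
              [5, 10, 0]])
print(citation_centrality(C).round(3))
```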

53 Bornmann & Daniel, ‘What do we know about the h index?’, p. 1381. 54 Wilsdon et al., ‘Next-generation metrics’, p. 9. 55 H.L. Roediger III, ‘The h-index in science: A new measure of scholarly contribution’, Observer: The Academic Observer, 19:4 (2006), (20 May 2019). 56 W.E. Schreiber & D.M. Giustini, ‘Measuring Scientific Impact With the h-Index: A Primer for Pathologists’, American Journal of Clinical Pathology, 151:3 (2018), p. 288. 57 Roediger III, ‘The h-index in science’. 58 Schreiber & Giustini, ‘Measuring Scientific Impact With the h-Index’, p. 288. 59 Roediger III, ‘The h-index in science’. 60 Bornmann & Daniel, ‘What do we know about the h index?’, p. 1383. 61 Schreiber & Giustini, ‘Measuring Scientific Impact With the h-Index’, p. 288.

The Article Influence™ score (AIS) is closely related to the EF.71 This score is calculated by dividing the EF score of a journal by the total number of articles published by the journal in the given period, normalised as a fraction of all articles in all journals. In general, the AIS provides a per-article comparison of journals and determines the average influence of a given journal on the scientific community over a 5-year period.72 The normalisation of the number of articles allows a comparison of the AIS between journals.73 For example, if journal A has an AIS of 1.00 and journal B has an AIS of 5.00, the articles in journal B are, on average, considered to be five times more influential than those in journal A in terms of its AIS.
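Read literally, that description amounts to the sketch below; the function name and numbers are hypothetical, and the published definition also includes a scaling constant so that the average journal scores around 1.00.

```python
def article_influence(ef_score, journal_articles, total_articles):
    """AIS as described above: EF score divided by the journal's share of
    all articles in the database (no scaling constant applied here)."""
    article_share = journal_articles / total_articles
    return ef_score / article_share

# Hypothetical journal: EF of 0.02, 500 articles out of 2,000,000 indexed.
print(article_influence(0.02, 500, 2_000_000))  # 80.0
```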

62 C.T. Bergstrom, J.D. West & M.A. Wiseman, ‘The eigenfactor™ metrics’, Journal of Neuroscience, 28:45 (2008), p. 11433. 63 Anon., ‘About’, EIGENFACTOR.org (5 June 2019). 64 The eigenvector is widely used in matrices and hence is ideal when considering networks. 65 Bergstrom et al., ‘The eigenfactor™ metrics’, p. 11433. 66 F. Franchignoni & S.M. Lasa, ‘Bibliometric indicators and core journals in physical and rehabilitation medicine’, Journal of Rehabilitation Medicine, 43:6 (2011), p. 472. 67 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 241. 68 Bergstrom et al., ‘The eigenfactor™ metrics’, p. 11433. 69 Anon., ‘About’, EIGENFACTOR.org. 70 Franchignoni & Lasa, ‘Bibliometric indicators and core journals’, p. 472. 71 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, pp. 241–242. 72 Franchignoni & Lasa, ‘Bibliometric indicators and core journals’, p. 472. 73 Anon., ‘About’, EIGENFACTOR.org.

Although the EF coupled with the AIS has been considered a viable alternative to the IF,74 especially considering its use of normalisation, it does have some limitations. First, like the IF, it considers the influence of a journal and not of individual articles or researchers. Moreover, the categorisation of disciplines, and hence the matrix used for the calculation of the EF, has been found to be problematic: the categorisation is considered to be too broad, inconsistent, inaccurate and incomplete, because it is done using software and not manually. As a result, some journals have been assigned to incorrect categories, or some categories are too broad.75 This can lead to a distorted picture of the standing of some journals in their given field or sub-field and makes it rather difficult to reproduce refined league lists of journals.76 Lastly, the EF and AIS are currently only available for journals listed in the Journal Citation Reports (JCR) database, which has been found to be less comprehensive than other citation databases.77

2.4. SCImago Journal Rank

The SJR, first proposed in 2010, is also an indicator of a journal’s prestige, but it is independent of a journal’s size. Like the EF, SJR is based on citation weighting schemes and eigenvector centrality, and it aims to measure the average prestige per paper in a journal.78 In fact, its calculation and the algorithm and mechanism used are very similar to those used for the EF; what distinguishes it from the EF is that it draws on the Scopus database, which is considered to be more comprehensive than the JCR database used for the EF.79 Moreover, SJR considers a 3-year citation window for each journal.80 The developers of SJR, however, found some issues and hence proposed an improved version, known as SJR2, in 2012. This indicator considers the prestige of the citing journal as well as its closeness to the cited journal through vector calculations.81 By doing so, it aims to also consider the amount of ‘prestige’ each journal transfers to another by considering the percentage of citations of the former made in articles of the latter. Overall, the SJR and SJR2 provide the average prestige per paper of a journal.82

74 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 242. 75 P. Jacsó, ‘The problems with the subject categories schema in the EigenFactor database from the perspective of ranking journals by their prestige and impact’, Online Information Review, 36:5 (2012), pp. 764–765. 76 Ibid., p. 758. 77 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 242. 78 B. González-Pereira, V.P. Guerrero-Bote & F. Moya-Anegón, ‘A new approach to the metric of journals’ scientific prestige: The SJR indicator’, Journal of Informetrics, 4:3 (2010), pp. 379–380. 79 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 242. 80 González-Pereira et al., ‘A new approach to the metric of journals’ scientific prestige’, p. 390. 81 V.P. Guerrero-Bote & F. Moya-Anegón, ‘A further step forward in measuring journals’ scientific prestige: The SJR2 indicator’, Journal of Informetrics, 6:4 (2012), p. 674.

Like other indicators, SJR too has some limitations. First, as with EF and IF, it does not provide information about individual articles or researchers but journals instead. Moreover, unlike EF, SJR does consider self-citations up to a limit of 33%,83 which can still be problematic. Lastly, Scopus only contains citation data after the year 1996;84 considering that journal articles have been widely read, circulated and cited since the early 1940s,85 SJR does not seem to provide a complete picture of a journal’s overall historical impact.

2.5. Source Normalized Impact Per Paper

SNIP was proposed by H.F. Moed in 2010,86 and it measures a given journal’s contextual citation impact by considering the characteristics of the journal’s subject field, the temporal maturation of citation impact87 and the coverage of the subject field’s literature within a database.88 These considerations help avoid the distortion in impact caused by differences between disciplines and sub-disciplines.89 Mathematically, SNIP is the ratio of a journal’s citation count per paper to the citation potential within the journal’s field.90 The inclusion of the citation potential ensures that niche specialities or sub-specialities that tend to be cited less frequently are given a higher weighting, creating a more balanced rating system.91 Moreover, the disciplines are determined on an article-to-article basis and not a journal-to-journal basis,92 making SNIP a useful indicator, especially for multi-disciplinary journals.
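In the spirit of that ratio, a simplified sketch is given below. The ‘citation potential’ values are assumed inputs invented for illustration; the real SNIP computation derives them from the referencing behaviour of citing papers and the database coverage of the field.

```python
def snip(citations_per_paper, field_citation_potential):
    """Simplified SNIP: raw citations per paper normalised by how heavily
    the journal's field tends to cite (its citation potential)."""
    return citations_per_paper / field_citation_potential

# Two hypothetical journals with identical raw impact per paper:
print(snip(4.0, 8.0))  # densely citing field        -> 0.5
print(snip(4.0, 2.0))  # sparsely citing niche field -> 2.0
```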

However, SNIP has some shortcomings too. First, SNIP does not recognise the difference between original and review articles; hence, it is not safe from distortion caused by article type. Second, like the other journal-level indicators, it is a reflection of journal quality even if comparisons are made on an article-to-article basis; thus, it does not indicate the impact of individual articles. Lastly, although all scores are normalised according to the citation potential of a field, it takes into account neither the development and growth of the literature within a given field or sub-field nor the frequency with which papers in a given field are cited from other fields.93

82 Franchignoni & Lasa, ‘Bibliometric indicators and core journals’, pp. 472–473. 83 González-Pereira et al., ‘A new approach to the metric of journals’ scientific prestige’, p. 381. 84 L. Wildgaard, J.W. Schneider & B. Larsen, ‘A review of the characteristics of 108 author-level bibliometric indicators’, (2014), (1 July 2019), p. 39. 85 Tenopir & King, ‘The growth of journals publishing’, pp. 106–107. 86 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 243. 87 This indicates how rapidly an article is likely to have an impact within a particular field. 88 H.F. Moed, ‘Measuring contextual citation impact of scientific journals’, Center for Science and Technology Studies, 13 November 2009, p. 1. 89 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 243. 90 Moed, ‘Measuring contextual citation impact’, p. 9. 91 Oosthuizen & Fenton, ‘Alternatives to the impact factor’, p. 243. 92 Moed, ‘Measuring contextual citation impact’, p. 16.

2.6. CiteScore

CiteScore was launched on 8 December 2016 by the publishing giant Elsevier to compete directly with the IF.94 The overall mechanism and formula used for their calculation are identical.95 However, CiteScore differs from the IF in that it considers almost twice as many titles as are considered for the IF; it takes into account editorials, letters and news items while calculating the score; and it is calculated over a 3-year window. Also, CiteScore values are calculated for journals as well as conference proceedings, and the scores are openly available.96 In fact, all citation data related to CiteScore can be easily accessed, making its calculation seem more transparent than that of the IF.97
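Both indicators can be viewed as the same basic ratio computed with different parameters, which the sketch below makes explicit; the counts are invented, and details such as exactly which document types enter the numerator and denominator are glossed over here.

```python
def citation_ratio(citations_to_window, items_published_in_window):
    """Citations received in the target year to items from the preceding
    window, divided by the number of items published in that window."""
    return citations_to_window / items_published_in_window

# Hypothetical journal, invented counts:
print(citation_ratio(900, 300))   # IF-style: 2-year window, 'citable items' only -> 3.0
print(citation_ratio(1400, 520))  # CiteScore-style: 3-year window, all item types -> ~2.7
```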

Despite this transparency and the relatively short lifetime of CiteScore, it has already been subject to some criticism. First, because of its similarities to the IF, most issues related to the IF persist.98 In fact, the expansion of the article types considered is likely to have a negative effect, as it may make CiteScore more vulnerable to manipulation. Further, like SJR, it only considers the entries included in the Scopus database,99 and thus only citation data from after 1996. Moreover, the subject-area categorisation of journals has been found to be inconsistent, especially in fields such as pharmacy.100 Lastly, CiteScore is funded and developed by Elsevier, one of the largest journal publishers in the world, as a direct competitor to the IF,101 which raises strong suspicions about its legitimacy given the obvious conflict of interest.

93 Ibid., pp. 16–17. 94 J.A.T. Da Silva & A.R. Memon, ‘CiteScore: A cite for sore eyes, or a valuable, transparent metric?’, Scientometrics, 111:1 (2017), p. 553. 95 F. Fernandez-Llimos, ‘Differences and similarities between Journal Impact Factor and CiteScore’, Pharmacy Practice (Granada), 16:2 (2018), p. 1. 96 Da Silva & Memon, ‘CiteScore: A cite for sore eyes’, pp. 554–555. 97 Fernandez-Llimos, ‘Differences and similarities between Journal Impact Factor and CiteScore’, p. 2. 98 Check Section 2.1 for details. 99 Da Silva & Memon, ‘CiteScore: A cite for sore eyes’, pp. 554–555. 100 Fernandez-Llimos, ‘Differences and similarities between Journal Impact Factor and CiteScore’, p. 2. 101 Da Silva & Memon, ‘CiteScore: A cite for sore eyes’, p. 554.

2.7. General Limitations of Citation Count

Other than the limitations mentioned above, all scientometric indicators share one major inherent limitation: their dependence on citation counts. Citation counts are a purely quantitative measure, yet they are used as a means for determining the quality of journals, and they continue to remain the basis for scientometric indicators. In particular, the IF, which is the most widely used scientometric indicator, has become deeply entrenched in academia for measuring the impact of research despite being severely criticised by journal publishers and researchers alike.102 However, the IF is determined purely by the number of citations that articles in a given journal receive, with no consideration of which researchers cite those articles and in which journals the citing articles are published. In doing so, the IF arguably only measures the popularity of a given article or journal and not its prestige.103 The calculation is akin to calling the highest-grossing movie or best-selling book the most prestigious movie or book: in reality, they may be the most popular but not necessarily the most prestigious, because the total number gives no indication of who liked those works. Although modern scientometric indicators like the EF and SJR do consider the status of the citing journal, they still derive considerable amounts of data from the ISI database, which is used to determine the IF; this means that the prestige of the citing journal is still partly determined by the IF, making them problematic as well. In addition, the h-index, which is meant to suggest the quality of an individual researcher, depends on citation counts alone, thus still providing a largely quantitative viewpoint.

Moreover, citing a work in order to criticise or oppose it is also counted as a citation.104 Applying the same movie or book analogy, this implies that every review of a given movie or book, good or bad, is considered to add to its impact on cinema or literature, respectively. This issue is observed in all scientometric indicators, as none of them critically analyse the citations. Thus, even the popularity suggested by pure citation counts is misleading, in that it may represent notoriety or, in some cases, vehement disagreement.105 Therefore, using citation counts, and hence scientometric indicators, as a qualitative measure of the impact of research is a flawed approach in itself. However, as long as scientometric indicators continue to influence hiring, tenure and promotion decisions for researchers, they will remain an inescapable part of academic communication and the overall research process. Consequently, their constant presence and significance make them vulnerable to misuse, manipulation and exploitation.

102 B. Cope & M. Kalantzis, ‘Evaluating Webs of Knowledge: A Critical Examination of the “Impact Factor”’, Logos, 21:3-4 (2010), pp. 61–65. 103 J. Bollen, M.A. Rodriquez & H. Van de Sompel, ‘Journal status’, Scientometrics, 69:3 (2006), p. 669. 104 Cope & Kalantzis, ‘Evaluating Webs of Knowledge’, p. 62. 105 Ibid., p. 62.


3. Conceptual Framework

3.1. Academic Publishing

In this chapter, I will focus on the overall communication flow of academic publishing, as this will make clear the roles and responsibilities of the different stakeholders in this type of publishing, thus providing context for the subsequent chapter of my thesis. Before proceeding to describe academic publishing, let us look at publishing in general. The verb ‘to publish’ is said to have been derived from the Anglo-Norman word puplier and the Middle French word publier, which loosely translated mean to make something public or known.106 The root words originally implied reading a work in public107 or making public a will or edict.108 Undoubtedly, one could argue that this is what publishing entails today, albeit on a much broader scope. However, defining publishing as a mere activity or step is too simplistic, because publishing entails a lot more than simply making something public: it is a process that also has social as well as economic contexts.109 Moreover, the word publishing today has an alternative meaning: the business of publishing.110 This is further evidence of it being a process involving multiple players and steps and the inter-relationships between them. Furthermore, the business of publishing clearly has an economic dimension for each stakeholder.

One of the landmark representations of publishing as a process, which takes into account the various considerations mentioned above, is Robert Darnton’s communication circuit. This circuit was first proposed in the 1980s to describe book publishing in the eighteenth century.111 However, many aspects of the model remain relevant today, although it does require some changes, mainly due to the recent increase in the use of digital formats for publishing and other technological advancements. Figure 1 shows an updated version of Darnton’s communication circuit, which includes the aspects involved in digital publishing.

106 M. Bhaskar, The Content Machine: Towards a Theory of Publishing from the Printing Press to the Digital Network (Anthem Press, 2013), p. 16. 107 R. Chartier, Forms and meanings: Texts, performances, and audiences from codex to computer (Philadelphia: University of Pennsylvania Press, 1995), p. 33. 108 Bhaskar, The Content Machine, p. 17. 109 Ibid., p. 17. 110 Ibid., p. 19. 111 R. Darnton, ‘What is the history of books?’, Daedalus, 111:3 (1982), pp. 69–71.

Figure 1: Digital publishing communication circuit (©Intellect Books, 2013).112

This circuit provides an excellent overview of the processes involved in digital book publishing. In simple terms, it depicts the steps associated with digital book publishing, including the various stakeholders involved as well as the factors influencing them. Although the model is not applicable to academic publishing per se and requires some changes due to the complex dynamics involved in academic publishing, it is still relevant insofar as it captures the crux of publishing for the purpose of this thesis. In particular, it is interesting to note the various socio-economic factors affecting publishing and communication as a whole, which form an important part of publishing. These factors become especially important when considering publishing as a business; in some ways, the external factors can be seen as driving forces behind the publishing business. Moreover, the circuit suggests a two-way relationship between the author and the reader: even though these stakeholders may not always be in direct contact, reader tastes and opinions do influence the author, both before and after the creation of content. Another major feature of publishing is the distribution or sale of published material to the readers. This in itself makes publishing more than simply making something public. As shown in Figure 1, the publisher receives the content from the author and provides the finished material to the readers. Thus, publishing can act as both a product, with the author providing a raw product that goes through the process of publishing to become a finished product, and a service, wherein the publishing process serves the author by helping him / her refine the content and the readers by providing them with new content. In that sense, the publisher acts as a middleman between the author and the reader, which reflects Oscar Wilde’s definition: ‘a publisher is simply a useful middleman’.113

112 P.R. Murray & C. Squires, ‘The digital publishing communications circuit’, Book 2.0, 3:1 (2013), p. 6.

Here, the middleman performs two main functions: filtering and amplification. Filtering refers to the filtering of content: a publishing house filters content on the basis of a number of social, economic, political and other considerations to reach a decision on whether the given content should be published. This filtering makes the publisher act as a gatekeeper of information and content. Filtering is diverse in that it can be driven by idealistic or economic reasons, and it can be massively inclusive or extremely exclusive in scope.114 Thus, filtering is expected to determine the overall value of publishing a given work. The function of amplification is to ensure that a given work is distributed and consumed as widely as possible, reaching as many different readers as possible.115 Amplification thus involves a series of actions whose end purpose is to increase the consumption as well as the awareness of a work.116

However, in academic publishing, the role of the publisher differs from that described above. To delve deeper into this role, it is important to understand academic communication, which is broadly classified into two categories: formal and informal.117 Examples of informal communication include personal correspondence, lectures, seminars, blogs, etc. Formal communication usually refers to published research.118 This could be either in the form of a scholarly monograph or a scientific article published in a journal.119 In academic publishing, the publisher gives a stamp of quality to help formalise a researcher’s work.120 For example, a piece of research posted online by a researcher as a blog is not considered to be published; for it to count as a publication, the author would have to publish it in an academic journal or as a scholarly monograph.121 Thus, in a sense, an academic publisher formalises or legitimises a researcher’s work by publishing it. Although this holds true for academic publishing in general, in this thesis I will concentrate only on publishers of academic journal articles, since scientometric indicators are most relevant for them.

113 W.R. Cole, ‘No Author Is a Man of Genius to His Publisher’, The New York Times, 3 September 1989 (11 June 2019). 114 Bhaskar, The Content Machine, pp. 106–109. 115 This would work differently in academic publishing, wherein the major focus for distribution would be the target audience first and then others. 116 Ibid., pp. 114–115. 117 H.E. Roosendaal & P.A.T.M. Geurts, ‘Forces and functions in scientific communication: an analysis of their interplay’, Cooperative Research Information Systems in Physics, 31 (1997), (10 June 2019), p. 11.

3.2. Academic Journal Publishing

Similar to general publishing, in academic publishing the publisher acts as a middleman and gatekeeper of information. As mentioned in the previous section, in this case the publisher plays the major role of legitimising the research and its results, which in turn helps future research. Figure 2 shows the role of academic journals in the overall research process.

118 Ibid., p. 8. 119 J.B. Thompson, Books in the digital age: The transformation of academic and higher education publishing in Britain and the United States (Polity, 2005), pp. 81–84. 120 F. Praal & A. van der Weel, ‘Taming the digital wilds: How to find authority in an alternative publication paradigm’, TXT, 2016 (2016), p. 98. 121 This could be done with or without modification depending on the target audience and the style required by the journal to which the blog is submitted.

Figure 2: Journal articles in the research process.122

As shown in the figure, researchers conduct their work in a given institute or research space to obtain results. To ensure that the results are legitimised, they submit their work to journal publishers. The journal publisher then publishes the content after filtering and delivers it to the readers, who are mostly other researchers, either directly or indirectly through libraries. Some of these readers then use the articles to conduct future research. Thus, journal articles not only help with the dissemination of research but also promote further research. Therefore, they play a critical role in the overall research process. Moreover, legitimisation of a researcher’s work is achieved by the publisher by performing four major functions: registration, certification, dissemination and archiving.123

122 Adapted from Thompson, Books in the digital age, p. 82. 123 Roosendaal & Geurts, ‘Forces and functions in scientific communication’, p. 2.

3.2.1. Researcher Considerations

Given that journal articles themselves play a role in the research process, researchers and journal publishers have an inter-dependent relationship: the journal publisher requires reliable content to be produced by the researcher, and the researcher needs a journal to help legitimise his / her findings and to provide access to the latest research. As a result of this dependency, the overall research process has affected academic publishing and vice versa.124 Given the growth in the number of journals, and hence of journal articles being published, since the mid-twentieth century, journal publishers have a larger amount of content to choose from and researchers have more journals to consider.125

Researchers have certain considerations while determining the journal they wish to submit their content to. Moreover, like Darnton’s communication circuit, my proposed model also has some socio-economic factors in play. The most important among them is research funding. Apart from the funding aspect, political and legal guidelines, in conjunction with the guidelines of research institutions / universities, and intellectual and social influences also play a role. Because these factors affect the overall research process, they automatically affect all stakeholders involved in the process and become major considerations for researchers while selecting journals.

The other considerations for researchers relate to the four major functions of the academic publisher mentioned in the previous section. Registration refers to publicly acknowledging the fact that the given academic(s) have researched a specific topic or have made a certain discovery. In other words, publishing helps the researcher(s) stake a claim to a given result or discovery, whether path-breaking or not,126 thus not only distributing the content but also placing a time-stamp on the research. When the first scientific journals were launched, the Journal des Sçavans in 1665 in Paris and the Philosophical Transactions of the Royal Society in 1665 in London, most researchers, in particular scientists, had reservations about making their findings public. Although they were concerned about staking their claim to a discovery, they believed that sharing their findings would give a competitive edge to their rivals. Consequently, many scientists made their findings public through the use of ingenious cryptic messages, codes and anagrams.127 As a result, relevant readers were not aware of the latest findings, making it difficult for more research to be conducted or for researchers to collaborate.128 The level of secrecy was so high that Henry Oldenburg, the editor of the Philosophical Transactions of the Royal Society, wrote to the leading scientists of the day, describing to them the merits of making their results public in an explicit and clear manner for the purpose of registration and staking claim.129 Thus, right from the start, the main incentive for publication in a journal was not communication or making something public to aid further research, but registering one’s claim and thus proving a scientist’s intellectual worth.130 Over the years, and by the mid-nineteenth century, making one’s findings public became not only the norm but also a requirement to justify a researcher’s intellectual worth, eventually becoming a major input for the reward structure for researchers.131 Thus, registration is the most basic expectation a researcher has of a journal, and therefore, the journal being considered a legitimate one becomes a pre-requisite for any researcher wishing to get published.

124 D.C. Prosser, ‘Researchers and scholarly communications: an evolving interdependency’, in D. Shorley & M. Jubb (eds.), The Future of Scholarly Communication (Facet Publishing, 2013), p. 39. 125 This is an ideal scenario. As we will see in the subsequent chapter, the researchers are at a disadvantage here. 126 Prosser, ‘Researchers and scholarly communications’, p. 39. 127 Ibid., p. 40.

Apart from assuring scientists of registration, Oldenburg also mentioned that publishing would help with the certification of results.132 Certification here refers to qualitatively validating a claim or discovery made by the researcher(s).133 This he hoped to achieve by ensuring that all submitted articles were reviewed by members of the Council of the Society.134 This was perhaps the first instance of peer review for journal articles, which has become one of the cornerstones of academic publishing over the years.135 Peer review is ‘the process by which research output is subjected to scrutiny and critical assessment by individuals who are experts in those areas’.136 It is widely believed by the academic community, including researchers and journal editors, that an academic publication must be peer-reviewed ‘to establish its value to the field, its originality, and its argumentative rigor’.137 Moreover, the accuracy and quality of work that has not been peer-reviewed cannot be trusted.138 Consequently, although there are some academic journals that do not conduct peer review, these journals are usually considered to be less prestigious than peer-reviewed ones. Therefore, the absence or presence of peer review becomes an important consideration for researchers while selecting journals. This benefits not only the authors, who can improve the quality of their work based on the critical assessment received, but also the readers, who are assured of getting access to high-quality, robust and relevant research.

128 Roosendaal & Geurts, ‘Forces and functions in scientific communication’, p. 16. 129 Prosser, ‘Researchers and scholarly communications’, p. 40. 130 Prosser, ‘Researchers and scholarly communications’, p. 40. 131 Ibid., pp. 40–42. 132 Roosendaal & Geurts, ‘Forces and functions in scientific communication’, p. 16. 133 Prosser, ‘Researchers and scholarly communications’, p. 41. 134 Roosendaal & Geurts, ‘Forces and functions in scientific communication’, p. 16. 135 I. Hames, ‘Peer review in a rapidly evolving publishing landscape’, in Academic and Professional Publishing (Chandos Publishing, 2012), p. 15. 136 Ibid., pp. 16–17. 137 D.A. Stirling, ‘Editorial peer review: Its strengths and weaknesses’, Journal of the Association for Information Science and Technology, 52 (2001), p. 984.

While registration and certification have been considerations for researchers when choosing journals from the outset, the emphasis on measuring the impact of their work on current and future research and development has been in practice only for the last two to three decades.139 As a result, it is important for researchers that their published material is made discoverable to others; enhanced discoverability increases the probability of their work being noticed and cited by other researchers. As is evident from Chapter 2, since most scientometric indicators are derived from citation data, dissemination is a very important consideration for researchers while selecting journals. Dissemination refers to creating awareness of a given researcher’s work among the target audience, in particular his / her peers.140 This awareness can be created by improving the discoverability of articles and their accessibility to users. The increased discoverability and wider access would increase the probability of a given article being read and cited. This may benefit the author, as it can broaden the scope and reach of his / her results, and it may also help readers gain easy access to the research they wish to refer to. Consequently, researchers prefer submitting their work to journals that are able to maximise the discoverability of their work. In the past, this would have been determined mainly by the popularity of the journal. Today, however, most journals have gone online; in fact, around 90 percent of journals in English are now available online, and most literature searches are carried out online.141 Moreover, there has been a tremendous increase in the number of articles being written,142 although the overall output of a single researcher is still similar.143 This means that there has been an increase in the number of researchers, thus increasing competition for funding. As a result, ensuring that one’s research is discoverable has become all the more difficult. Yet, getting one’s work noticed and cited is important for researchers’ career prospects and for future funding. Therefore, researchers usually choose to submit to journals that are prestigious, have a wide readership and are easily accessible, thus increasing the probability of their work being cited.144

138 M. Ware, Peer review: benefits, perceptions and alternatives (London: Publishing Research Consortium, 2008), p. 6. 139 M. De Rond & A.N. Miller, ‘Publish or perish: bane or boon of academic life?’, Journal of Management Inquiry, 14:4 (2005), p. 322. 140 Prosser, ‘Researchers and scholarly communications’, p. 41. 141 B. Cope & A. Phillips, ‘Introduction’, in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), pp. 1–2. 142 Tenopir & King, ‘The growth of journals publishing’, p. 110. 143 B. Cope & M. Kalantzis, ‘Signs of epistemic disruption: Transformations in the knowledge system of the academic journal’, in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), p. 14.

The last function is archiving, which refers to the preservation of the published material over time.145 This function has become considerably more convenient as journals have gone electronic. For example, the very first issue of the Philosophical Transactions of the Royal Society, along with all subsequent issues, is available in digital format.146 Again, this function seems to benefit both the author and the reader: the author is assured of the long-term preservation of his / her research, and the reader gains the opportunity to refer to historical research on a given topic and thus analyse the evolution of a certain field. Moreover, in terms of archiving, not only publishers but also libraries have started creating electronic archives and warehouses of information material. The only roadblock here is the lack of standardisation in how the information is stored, as there are variations from one library or publisher to another.147 Despite this, archiving, like registration, is a fundamental expectation of a researcher from a journal.

144 More details on accessibility will be provided in Chapter 4. 145 Prosser, ‘Researchers and scholarly communications’, p. 41. 146 Tenopir & King, ‘The growth of journals publishing’, p. 106. 147 Roosendaal & Geurts, ‘Forces and functions in scientific communication’, p. 17. 26

4. Scientometric Indicators and Journal Publishers

The previous chapter suggests that in the research cycle, the researcher and journal publisher have a symbiotic relationship, with the latter depending on the former for content and the former needing the latter to register, certify, disseminate and archive that content. However, this is far from the truth. In reality, the relationship is skewed in favour of the journal publishers, in particular large publishers with multiple titles in their portfolios and publishers owning the most-cited journals. Like most sectors in the communication and media industries, academic and journal publishing has undergone considerable consolidation in the last three decades or so. This has led to an oligopolistic journal market that is controlled by a few dominant players, where these players wield an inordinate amount of power.148 As a result, publishers, who only own the means of dissemination, tend to dictate the production of content itself.149 This chapter will explore the reasons for this and will also analyse how journal publishers can exploit and manipulate scientometric indicators for their own gain.

4.1. Use of Scientometric Indicators

First and foremost, scientometric indicators are used to evaluate researcher performance. Beyond evaluation, the scientometric indicators, in particular the IF, also serve as a tool for journals to prove their prestige, or more accurately their popularity, in the academic world. The point is most visible when visiting the official website of any journal listed in the ISI database. For example, Figure 3 shows a screen-shot of the journal metrics for Materials Science & Engineering: A. The journal metrics are advertised on the home-page, just below the tabs that allow the visitor to navigate to other pages. As shown in the figure, the CiteScore, IF,150 five-year IF, SNIP and SJR are provided. Values for one or more of the scientometric indicators are usually provided for all journals. The main reason for this is that these indicators strongly influence how the journal is perceived in terms of quality and hence how authors select the journals they wish to submit their content to.

148 W. Peekhaus, ‘The enclosure and alienation of academic publishing: Lessons for the professoriate’, tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society, 10:2 (2012), pp. 578–580. 149 Ibid., p. 584. 150 Two-year IF values are the standard released by Thomson Reuters. More recently, they have also started posting 5-year IFs. However, not all journals advertise this. 27

Figure 3: Screen-shot of home-page of journal Materials Science & Engineering: A.

4.1.1. Journal Pricing

Subscription-based journals have two broad categories of customers: individuals and libraries. However, the number of individual subscribers has been in constant decline. In fact, many publishers have stopped trying to attract individual subscribers; even those that do are usually forced to provide heavy discounts to attract individuals in the first place. As a result, libraries form the main market for journal subscriptions.151 The power shift in academic communication in favour of the journals occurred in the 1980s and 1990s. This is because government strategies with respect to research spending underwent major changes during these decades due to the rise of neo-liberalism. Neo-liberalism is linked to globalisation and ‘is a particular element of globalization in that it constitutes the form through which domestic and global economic relations are structured.’152 While neo-liberalism had a number of effects on government spending, especially with respect to higher education, it also led to a re-calibration in how research was conducted and evaluated. In general, governments emphasised greater funding to universities for research conducted for the purpose of commercialisation. Consequently, more funding was allocated to applied research at the expense of other departments, including library budgets.153 Overall,

151 Ware & Mabe, The STM report, p. 19. 152 M. Olssen & M.A. Peters, ‘Neoliberalism, higher education and the knowledge economy: From the free market to knowledge capitalism’, Journal of Education Policy, 20:3 (2005), p. 313. 153 Peekhaus, ‘The enclosure and alienation of academic publishing’, pp. 578–580. 28 the growth in research budgets has consistently outpaced the growth in library budgets.154 As a result, the spending power of libraries diminished. This decline in spending power was further exacerbated by the increasing prices of journals. In fact, during the 1980s and 1990s, journal prices increased at rates well above the rate of inflation. This increase was especially dramatic in the science, technology, engineering and medicine (STEM) fields, though increases were noted in all fields.155 This phenomenon of skyrocketing journal prices coupled with static or declining library budgets is known as the ‘serials crisis’.156 The increase in prices was coupled with an increase in the number of journals themselves.157

As a result of the larger catalogues to choose from and their decreased spending power, libraries had to be more selective about the journals they purchased. Library decisions on which journals to subscribe to are based on a number of considerations. Nevertheless, the most important consideration is the metrics of the journals,158 most often the IF.159 In general, the collection policies of academic libraries are governed by the objective of maintaining existing research holdings while attempting to expand those holdings through the acquisition of new titles. Consequently, they are expected to subscribe to as many core journals as possible. The faculty and users of these libraries also expect the library to provide them access to key disciplinary journals. An important point to remember here is that, unlike other goods and commodities, competing journals and journal articles are not substitutes for each other despite their similar or overlapping subject areas. As a result, libraries generally cannot replace existing journals even if newer journals in the same discipline are available at a lower price. This is because newer journals would initially not be covered by scientometric indicators. Once they do attain a metric, it can be a long and arduous process for them to gain a reputation compared to a journal that has already been in circulation for decades. Also, journal prices are not necessarily governed by their IF or quality.160 Therefore, libraries have been caught between the needs of faculty members to have access to certain journals and the increasing prices of those journals.

154 Ware & Mabe, The STM report, p. 69. 155 Thompson, Books in the digital age, p. 99. 156 Peekhaus, ‘The enclosure and alienation of academic publishing’, p. 582. 157 Thompson, Books in the digital age, p. 99. 158 E. Roldan-Valadez, S.Y. Salazar-Ruiz, R. Ibarra-Contreras & C. Rios, ‘Current concepts on bibliometrics: a brief review about impact factor, Eigenfactor score, CiteScore, SCImago Journal Rank, Source-Normalised Impact per Paper, H-index, and alternative metrics’, Irish Journal of Medical Science, 1971- (2018), p. 3 159 Other scientometric indicators are a fairly recent phenomenon. During the time of the serials crisis, IF was the most dominant scientometric indicator in use. 160 Peekhaus, ‘The enclosure and alienation of academic publishing’, p. 582. 29

During the serials crisis, libraries had no option but to acquire subscriptions to the most popular journals161 and some core journals in certain niche areas even if the journals in question were expensive.162 Moreover, because of the perception among the innovation community that STEM research is more ‘useful’ than social sciences and humanities research, the acquisition strategy mainly concentrated on the former.163 Journals with high IFs and other core disciplinary STEM journals were aware of this inelastic demand and hence were in a position to dictate prices to libraries. In doing so, publishers that owned journals with higher IFs tightened their stranglehold over the market, thereby leading to the closure or acquisition of a number of newer journals.164 This led to a vicious cycle, wherein already declining library budgets were stretched thinner due to increasing subscription prices of journals. As the universities that owned these libraries came under increasing financial pressure, in some countries like the USA, tenure-track jobs became scarcer and the universities increasingly started using adjunct faculty.165 As a result, the competition for positions in the universities increased, leading researchers to aim to publish in journals with high IFs. Because of this, the other journals had to contend with research of lower quality, leading to many libraries cancelling their subscriptions. This point is further confirmed by the fact that journal distribution by publisher is highly skewed, especially in the STEM fields. According to a report about STEM publishers from 2015, 95 percent of STEM publishers only publish one or two journals. In contrast, the top 100 publish 67 percent of all journals. In fact, the top 5 publish over 35 percent of all journals and the top 4, Elsevier, Springer Nature, Wiley-Blackwell and Taylor & Francis, publish over 2000 journals each.166 Because of the diverse portfolio of journals these publishers have to offer, they have gained the upper hand when it comes to determining and negotiating prices with libraries.167 Moreover, the majority of the journals included in the ISI database are owned by commercial publishers,168 thus confirming how lucrative journal publishing actually is. In addition, this lucrative nature of journal

161 Thompson, Books in the digital age, p. 99. 162 As discussed above, IFs do change according to the subject areas. Hence, although a niche journal may have a low general IF, it may be regarded as a leading journal in its discipline, necessitating libraries to purchase it. 163 J. Olmos-Peñuela, P. Benneworth & E. Castro-Martinez, ‘Are “STEM from Mars and SSH from Venus”?: Challenging disciplinary stereotypes of research’s social value’, Science and Public Policy, 41:3 (2013), p. 384. 164 Peekhaus, ‘The enclosure and alienation of academic publishing’, p. 582. 165 Thompson, Books in the digital age, p. 176. 166 Ware & Mabe, The STM report, p. 45. 167 Thompson, Books in the digital age, p. 99. 168 Ware & Mabe, The STM report, p. 45. 30 publishing directly impacts the monograph business, with fewer publishers undertaking monograph publishing and libraries having smaller budgets for monographs.169

Another phenomenon closely related to journal pricing and exploited by journal publishers for commercial profit is bundling. Bundling is the process by which publishers sell access to a diverse collection of journals, ranging from dozens to hundreds of titles, instead of selling subscriptions to individual journals.170 Due to the emergence of electronic journals, the sales of individual journal subscriptions have declined. As a result, most subscriptions are sold to libraries in the form of bundles. Overall, 95 percent of the large and 75 percent of the medium journal publishers offer content in the form of bundles. The practice of bundling is less prevalent among small journal publishers, with only approximately 40 percent offering content in the form of bundles. However, small publishers do tend to co-operate with other small publishers or larger publishers to offer content in multi-publisher bundles. Multi-publisher bundles are also offered by agents or aggregators who act as middlemen between the libraries and the journals. Overall, around 90 percent of libraries purchase journal subscriptions in the form of bundles.171

The main issue with bundling is that libraries are given little choice in selecting the journals included in a given bundle.172 As a result, they are forced by publishers to subscribe to journals they would otherwise not have opted for just so that they can access the journals they really wish to add to their collection. In doing so, publishers can offset the losses of their less-popular journals, including ones with low IFs, by bundling them with the more profitable and popular journals with high IFs or core disciplinary journals. Another strategy they use is offering both print and electronic versions of the same journal to libraries. In such a scenario, electronic access to all titles in the bundle is sold at a price reflecting the library’s existing print subscriptions along with a top-up fee for electronic access to titles not included in those print subscriptions. This model is usually referred to as the ‘Big Deal’, although it is less prevalent at present. In the case of both bundling and the Big Deal, contracts are usually in the form of multi-year deals and libraries are generally not allowed to cancel a single title in the bundle during the contract period.173 This ensures a guaranteed income on all journals in the bundle over a fixed period of time, irrespective of their IF. In other words, they use the high IFs of some

169 Thompson, Books in the digital age, pp. 98–100. 170 Peekhaus, ‘The enclosure and alienation of academic publishing’, p. 583. 171 Ware & Mabe, The STM report, pp. 20–21. 172 Peekhaus, ‘The enclosure and alienation of academic publishing’, p. 583. 173 Ibid., p. 583. 31 titles to guarantee profit for all titles. Moreover, in the case of the Big Deal, libraries are mainly renting access to back issues of some journal titles. If the contract is terminated or re-negotiated, they would lose electronic access to those titles, which would be detrimental to their archival function. This gives journal publishers greater bargaining power during contract negotiations. In addition, it encourages large publishers with numerous titles under them to further assert their monopoly over the journal publishing industry.174
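To make the pricing logic described above concrete, a deliberately simplified, hypothetical illustration may help; the figures below are invented and are not drawn from any actual contract. Under a Big Deal, the price a library pays can be thought of as its existing print spend plus an electronic-access top-up:

$$P_{\text{Big Deal}} = P_{\text{print}} + f_{\text{e-access}}, \qquad \text{e.g.} \quad 100{,}000 + (0.15 \times 100{,}000) = 115{,}000$$

A library already spending 100,000 (in whatever currency) on print subscriptions would thus pay roughly 115,000 for electronic access to the publisher’s full list for the duration of a multi-year contract. Whether the additional titles in the bundle have high or low IFs makes no difference to the publisher’s income, which is precisely why the arrangement guarantees revenue for every journal in the bundle.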

4.2. Manipulation of Scientometric Indicators

Because of the emphasis placed by research institutions on their academics getting work published in journals with high IFs, journal publishers have realised the importance of scientometric indicators for their survival. This has inevitably led to the manipulation and eventual exploitation of the IF and, more recently, other scientometric indicators by these publishers. In this section, I will analyse the various strategies used by publishers to do so.

4.2.1. Article Type

The type of an article can greatly impact its citation pattern. For example, review articles tend to be cited more often than original articles.175 This is because review articles lend themselves to citation: researchers tend to cite secondary data from literature reviews rather than referring to the original articles.176 As a result, there has been a six-fold increase in the number of review articles published between 1991 and 2005, compared to a two-fold increase in the number of original articles published during the same period.177 The increase in the number of review articles has fostered a lazy trend among researchers to cite secondary summaries instead of original sources.178 Journals have inevitably exploited this trend.179 They have started accepting and publishing more review articles to improve their IF. The fact that 60 percent of the supposed top journals publish only review articles (and no original articles)

174 Ibid., p. 583. 175 Scully & Lodge, ‘Impact factors and their significance’, pp. 392–393. 176 Cope & Kalantzis, ‘Evaluating Webs of Knowledge’, p. 62. 177 I.D. Craig & L. Ferguson, ‘Journals ranking and impact factors: How the performance of journals is measured’, in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), p. 172. 178 Cope & Kalantzis, ‘Evaluating Webs of Knowledge’, p. 62. 179 This trend has also been exploited by researchers; the increase in the number of review articles is partly because more researchers are writing them. 32 confirms the greater number of citations received by the former.180 Moreover, since IF is one of the most important considerations for a researcher while selecting a journal,181 the high-IF journals would be the preferred options for authors in a particular field. Another strategic use of review articles by journal publishers to boost their IFs is to invite certain researchers and experts in the field to write review articles for them. For example, one journal that has consistently maintained a high IF commissions all its review articles. The same holds true for other high-IF journals that publish both review and original articles, such as the New England Journal of Medicine. Similarly, Nature and Science, both considered to be highly reputable journals, accept review articles as direct submissions or commission review articles.182 In such a scenario, by only inviting highly respected individuals in a given field to submit review articles, the journals can maximise the number of citations those articles receive based on the reputation of the author. This ensures that the article is widely read in the field, thus increasing its probability of being cited. These authors may gladly accept such invitations from the journal editors, as publishing in a high-IF journal would improve their own professional credibility. Thus, journal publishers obtain citable content from these authors to boost the IF. Moreover, review articles are typically downloaded more often than original articles, which indicates a wider readership pool.183 This provides further impetus for journals to publish them.
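The incentive to skew a journal’s article mix towards reviews can be illustrated with a simple, hypothetical calculation; the citation rates used here are invented and will vary considerably by field. If a fraction $p$ of a journal’s citable items are reviews, its IF is effectively a weighted average of the mean citation rates of the two article types:

$$\mathrm{IF} \approx p\,\bar{c}_{\text{review}} + (1-p)\,\bar{c}_{\text{original}}$$

Supposing reviews attract on average 10 citations within the IF window and original articles 4, a journal publishing the two in equal numbers would score $0.5 \times 10 + 0.5 \times 4 = 7$, whereas shifting the mix to 80 percent reviews raises the figure to $0.8 \times 10 + 0.2 \times 4 = 8.8$, without any change in the quality of the underlying research.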

4.2.2. Co-authorship and Subject Area

Another factor with the potential to influence the number of citations an article receives is the number of authors. In general, co-authorship varies according to discipline: biomedical articles are the most likely to be co-authored, followed by articles in the physical sciences and engineering and in the social sciences, with the arts and humanities having the lowest proportion of co-authored articles. In fact, the average level of co-authorship is considered to be a primary reason for variations in average IFs among fields. Overall, the variation may be so significant that a leading journal in terms of IF in a particular field may have an IF lower than that of one of the less-popular journals in another discipline. This is because researchers tend to cite their own work or that of their colleagues.184 Hence, a greater number of authors

180 Ibid., p. 62. 181 Craig & Ferguson, ‘Journals ranking and impact factors’, p. 169. 182 This was verified by checking the websites of each journal individually. 183 Craig & Ferguson, ‘Journals ranking and impact factors’, pp. 171–172. 184 Ware & Mabe, The STM report, pp. 54–61. 33 usually implies a higher probability of the article being cited. Moreover, the growing globalisation of research thanks to advances in technology has led to international co-authorship, with researchers from different parts of the world collaborating on different projects.185 Such articles typically have a wider audience, as research communities from different countries would read those articles because of their familiarity with the author and his / her work. For example, in 2012, almost half the citations received by Chinese articles were from China itself.186 Thus, because co-authorship impacts citation counts and readership, publishing decisions based on the number and nationalities of authors also allow publishers to manipulate the IF of journals, especially in the STEM disciplines.

Apart from co-authorship, the subject matter of articles can also play a role in their citation patterns. For example, an article written on a ‘hot’ or ‘trendy’ topic may generate more research interest in the immediate future. This would directly impact its audience and hence the probability of it being cited. As a result, journal editors may also manipulate the IF by preferentially selecting articles on trendy or emerging topics that will help boost the IF within a given time window.187 This is especially true for journals that deal with a broader range of disciplines. This is because, in many cases, authors would prefer to submit their articles to journals with a high IF covering a broad range of sub-disciplines rather than to a highly specialised journal in their field with a comparatively lower IF.188 Therefore, the high-IF journals would likely get first pick of articles pertaining to such topics, even if they do not actively seek out such articles. Lastly, not only the discipline but also the style of writing can influence the probability of citation. As a result, journal editors may select articles by researchers who have framed their work in a populist way to ensure wider readership and hence a higher probability of being cited. Therefore, articles that have the potential to be important in niche or specialised sub-fields may be cut to make way for articles that are more readily citable. In other words, the journal is catering more to prospective authors, who can cite its material in the future, than to readers, who cannot. This point is further reflected by the fact that, in some cases, even if the IF of a journal increases, its overall readership declines.189

185 Ibid., p. 59. 186 Ibid., p. 59. 187 Craig & Ferguson, ‘Journals ranking and impact factors’, p. 173. 188 Ibid., p. 173. 189 Cope & Kalantzis, ‘Evaluating Webs of Knowledge’, p. 66. 34

4.2.3. Manipulation of Citable Items

The manipulation of citable items is perhaps one of the most frequently cited criticisms of scientometric indicators, in particular the IF. This is because most scientometric indicators are calculated using the number of citations a journal receives as the numerator and the number of articles it publishes as the denominator. However, the definition of published content differs, because most formulae only consider the number of ‘citable items’, a definition that is rather subjective.190 In general, most indicators only consider articles and reviews as citable items; however, when determining the number of citations received, articles, reviews, letters and editorials are all counted.191 For example, according to JCR data released by ISI for 2016, The Lancet published 309 citable items in 2015 and 271 in 2014 and was cited 13,983 and 13,759 times, respectively, by articles published in 2016. This would give an IF of 47.831. However, when the denominator was instead based on a search of PubMed for articles published, The Lancet published 1,992 articles in 2015 and 1,770 articles in 2014. Considering this as the denominator, the IF is computed to have a much lower value of 7.374.192
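Writing the two calculations out makes the effect of the denominator explicit (the figures are those quoted above; the formula is the standard two-year IF, i.e. citations received in the census year to items published in the two preceding years, divided by the number of citable items from those years):

$$\mathrm{IF}_{2016} = \frac{13{,}983 + 13{,}759}{309 + 271} = \frac{27{,}742}{580} \approx 47.83$$

$$\mathrm{IF}_{2016}^{\text{PubMed denominator}} = \frac{27{,}742}{1{,}992 + 1{,}770} = \frac{27{,}742}{3{,}762} \approx 7.37$$

The numerator is identical in both cases; merely redefining which items count as ‘citable’ in the denominator shrinks the indicator by a factor of more than six.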

The issue of defining citable items is further compounded by the fact that ISI is essentially a commercial organisation. In fact, on 11 July 2016, it was taken over by Onex Corporation and Baring Private Equity Asia and re-christened as Clarivate Analytics.193 As a result of this commercial interest and the fact that most of their calculations are based on proprietary data,194 it is possible for journal publishers to collude with them or persuade them to exclude as many articles as possible from the denominator.195 Moreover, some editors have gone as far as to change the designation of some papers to ensure that they are not counted in the denominator of citable items but are included in the numerator of citations.196 By doing so, editors have the potential to double the IFs of their journals.197 In the case of the IF, ISI’s refusal to divulge details of their calculations has only led to more suspicions being raised about the integrity of both the journals and ISI.198

190 Fernandez-Llimos, ‘Differences and similarities between Journal Impact Factor and CiteScore’, p. 2. 191 Da Silva & Memon, ‘CiteScore: A cite for sore eyes’, p. 555. 192 Fernandez-Llimos, ‘Differences and similarities between Journal Impact Factor and CiteScore’, p. 2. 193 Da Silva & Memon, ‘CiteScore: A cite for sore eyes’, pp. 553–554. 194 M. Rossner, H. Van Epps & E. Hill, ‘Show me the data’, The Journal of Cell Biology, 179:6 (2007), p. 1092. 195 Cope & Kalantzis, ‘Evaluating Webs of Knowledge’, p. 63. 196 R.A. Brumback, ‘Worshiping False Idols: The Impact Factor Dilemma’, Journal of Child Neurology, 23 (2008), p. 367. 197 R. Smith, ‘Commentary: The Power of the Unrelenting Impact factor: Is it a Force for Good or Harm?’, International Journal of Epidemiology, 35 (2006), p. 1130. 198 Brumback, ‘Worshiping False Idols’, p. 367. 35

4.2.4. Self-citation

Self-citation is generally considered to be one of the least-respected ways to manipulate citation counts. Self-citation can be engineered either by journal editors recommending that authors cite more work from their journal or through editorials written by the journal editors themselves.199 Even some authors are guilty of trying to manipulate the impact of their own research through self-citation. Nevertheless, since many recent scientometric indicators do not consider self-citations, or consider them only up to a certain limit, the effect in the case of authors is not major. In contrast, even if self-citations are neglected, large publishers with numerous titles can still find ways to manipulate citation counts. This can be done through cross-citation. In such a scenario, publisher X’s journal A includes citations to journal B and vice versa. Thus, titles that have high IFs can be used to boost the IFs of other titles. A case of this is shown in Figure 4.

Figure 4: Screen-shot of the end-list references of an editorial.200

The figure shows the end-list references of an editorial for the journal Organic and Biomolecular Chemistry. This journal is published by the Royal Society of Chemistry, who at present publish around 44 peer-reviewed journals. The editorial itself is a rather short one of

199 Craig & Ferguson, ‘Journals ranking and impact factors’, p. 174. 200 C.V. Potter, S. Thomas, J.L. Dean, A.P. Kybett, R. Kidd, M. James & H. Saxton, ‘Comment: 2004’s fastest organic and biomolecular chemistry!’, Journal of Materials Chemistry, 14 (2004), p. E21. 36 roughly 3 journal pages of main text. However, despite its brevity, the end-list references section contains a total of 127 entries. Out of these, more than 100 entries cite works published in Organic and Biomolecular Chemistry (highlighted in yellow in the screen-shot). The other entries in the end-list references are from sister journals, most from Chemical Communications (highlighted in green) and a couple from (not in screen-shot), which are also published by the Royal Society of Chemistry. Interestingly, the editorial was published in 2004, just a year after Organic and Biomolecular Chemistry was launched, which means all self-citations were within the time-window for IF calculation. In fact, every entry in the end-list references was either from the year 2003 or 2004. Apart from appearing in Organic and Biomolecular Chemistry, the editorial was also included in four other journals, all published by the Royal Society of Chemistry. In other words, even if one does not consider self-citations, the journal received more than 400 citations from its sister journals alone. Fortunately, only one of these editorials was indexed by ISI, so the citation count was not quadrupled and did not drastically affect the IF.201 Nevertheless, this shows that even if self-citations are not considered, scientometric indicators can be manipulated through cross-citations. In fact, some journal editors have sent copies of articles previously published in their journals along with the review copy of another article to peer-reviewers, asking them about the feasibility of including citations to the published articles in the reference list.202 This practice is also known as a ‘citation cartel’.203 Such cartels can also be facilitated by collusion between editors of different journals, thus attempting to manipulate the citation counts of both journals.
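A deliberately simplified, hypothetical calculation shows how effective such cross-citation can be for a young journal; the figures below are invented for illustration and are not taken from the case discussed above. Since the IF is simply citations divided by citable items,

$$\mathrm{IF} = \frac{C_{\text{external}} + C_{\text{cross}}}{N_{\text{citable}}}, \qquad \text{e.g.} \quad \frac{150 + 100}{200} = 1.25 \ \text{as against} \ \frac{150}{200} = 0.75,$$

a newly launched journal with 200 citable items in the two-year window and 150 genuine external citations would see its IF rise from 0.75 to 1.25 on the strength of a single sister-journal editorial citing 100 of its recent articles, an increase of two-thirds at virtually no cost to the publisher.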

4.2.5. Accessibility

As mentioned above, dissemination is a major function performed by the publisher, and hence accessibility becomes a major factor for researchers while selecting the journals they wish to submit their work to. Journals are still traditionally disseminated through subscription models, in particular to libraries. For individual researchers, some journals have also introduced a pay-per-view (PPV) model, wherein users can access individual articles instead

201 Craig & Ferguson, ‘Journals ranking and impact factors’, pp. 174–175. 202 A. Hemmingsson, T. Mygind, A. Skjennald & J. Edgren, ‘Manipulation of impact factors by editors of scientific journals’, American Journal of Roentgenology, 178:3 (2002), p. 767. 203 Cope & Kalantzis, ‘Evaluating Webs of Knowledge’, p. 63. 37 of entire journal issues for a fixed cost.204 Both of these models can be grouped as toll access models. Although toll access models enable the discoverability of articles, they can only be accessed either by researchers affiliated to institutes who subscribe to journals adopting this model or individuals with the financial means to purchase PPV access to articles. However, with the availability of new digital technologies in the Internet age of the early 2000s, a new model, known as open access (OA), has been thoroughly experimented with and implemented. Its main aim is to circulate and disseminate research as widely as possible and to any reader who may either be interested in it or profit from it.205 To ensure that this is achieved, OA aims for published articles to be available online for free for all Internet users.206 The term OA was coined at the Open Access Initiative (OAI) held in Budapest, Hungary, in 2002; the initiative was a declaration drafted by a committee involving academics, publishers and corporates, and it called upon researchers to deposit their published works in open electronic archives developed according to OAI standards and on journals to launch a new generation of titles that support OA.207 Since 2002, the OAI declaration has undergone changes in light of latest developments in technology.208 Today, a number of journals have adopted OA models or different versions of it, especially because many researchers seem to believe that OA enables wider circulation and higher visibility.209

In its current form, OA makes online digital copies of published material available free of charge, free of most copyright and licensing restrictions, and free of barriers to access, such as user registration or digital rights management.210 OA is available in a variety of modes. The modes are determined by what is made open, when it is made open and how it is made open. Here the what refers to the version of the published article that is made available. The different versions are as follows: (1) the initial draft of the article, which has not been peer reviewed, also known as a pre-print; (2) the authors’ revised version, in which they have resolved the reviewer comments to the satisfaction of the journal editor, also known as the accepted manuscript; and (3) the final version published in the journal, also known as the version of record. In

204 A. Phillips, ‘Business models in journals publishing’, in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), pp. 94–95. 205 Cope & Kalantzis, ‘Signs of epistemic disruption’, pp. 24–25. 206 S. Harnad, ‘The post-Gutenberg open access journal’, in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), p. 125. 207 L. Chan, D. Cuplinskas, M. Eisen, F. Friend, Y. Genova, J.C. Guédon, M. Hagemann, S. Harnad, R. Johnson, R. Kupryte & M. La Manna, ‘Read the Budapest Open Access Initiative’, Budapest Open Access Initiative (24 June 2019). 208 Anon., ‘Background’, Budapest Open Access Initiative (24 June 2019). 209 Ware & Mabe, The STM report, p. 72. 210 Ibid., p. 88. 38 terms of the when, the following options are available: (1) prior to publication, (2) immediately on publication and (3) a pre-determined period after publication. How the article is made available depends on the business model adopted by the publisher.211 The mode of OA is usually determined by a combination of one or more of these factors. The most common modes in use are listed below:

1. Delayed OA: The version of record is made available for free after an embargo period. However, the journal still implements a subscription-based model.212
2. Hybrid OA: Journals offer authors the option to make their version of record available immediately after publication for a certain fee.213
3. Green OA: The accepted manuscript is made available on an online repository either immediately or after an embargo period, depending on the publisher.214
4. Gold OA: The version of record is available immediately on publication. In this case, many journals request authors to pay a fee at the time of submission, known as the article processing charge (APC), to cover the costs of publication and other services offered to the author.215

Although OA as a concept is meant to be free, in most of the above cases it is free only for readers to access, irrespective of whether that access is immediate or delayed. Researchers wishing to get published in a Gold-OA journal still have to pay APCs.216 In fact, some OA journals may also ask authors to pay a non-refundable submission fee regardless of the outcome of the peer-review process, although this practice is more common for subscription journals.217 This practice of charging a submission fee is all the more blatant considering that, unlike in general publishing, in academic publishing all the content that is published in journals and that helps build a journal’s reputation is acquired for free.218 Moreover, for journals with high rejection rates and high IFs, which would most likely receive a higher number of submissions, this can constitute a significant source of revenue. In addition, some journals levy additional charges on top of the APC for colour or other services. The APC is usually paid by the researcher’s funder or institution, especially in the STEM

211 Ibid., p. 88. 212 Ibid., p. 88. 213 Cope & Kalantzis, ‘Signs of epistemic disruption’, pp. 25–26. 214 Harnad, ‘The post-Gutenberg open access journal’, pp. 130–131. 215 Ware & Mabe, The STM report, pp. 88–90. 216 This is the practice followed by most journals, except for titles owned by not-for-profit publishers. 217 Ware & Mabe, The STM report, p. 90. 218 Peekhaus, ‘The enclosure and alienation of academic publishing’, p. 581. 39 fields.219 In some cases, if a particular journal already has an agreement with the researcher’s institution, the APC may be reduced or waived completely, depending on the terms of the agreement.220 As these agreements usually involve subscriptions or other co-operations in the economic interests of both parties, the journal in question is not necessarily losing money. Moreover, a publisher offering a discount on the APC for one of its high-IF journals through such a co-operation may encourage a library or research institution to subscribe to its other titles, irrespective of their IF, thus leading to bundling of a different type. APCs for Gold-OA journals usually range from $1000 to $5000, though some publishers do have more subsidised APCs. In addition, journals levy extra charges for lengthier articles or expedited submissions. Furthermore, APCs vary depending on the copyright restrictions of the articles, with some journals imposing additional charges if researchers or their institutions opt for the most open format, CC-BY, of the license.221 Thus, high-ranking journals offer a variety of licenses to guarantee profits, with the more open the license, the higher the fee. Since all of these costs are determined by individual journals and publishers, there is no cap on the costs a researcher or his / her institution may have to pay, especially if the journal they wish to submit to has a high IF or is considered to be prestigious in a given field.222 The hybrid-OA model provides a rather safe and low-risk path for subscription journals to experiment with OA. Asking researchers or their institutions to pay an additional fee for OA enables them to conduct market research regarding the value of OA for researchers and their institutions. At present, almost all major journals offer hybrid OA, although the uptake by the researchers and their institutions has been rather low.223 However, this model is extremely concerning given that it leaves considerable scope for exploitation by journals, in particular by those with high IFs. The main mechanism here is that the funder or institution is paying the journal to freely disseminate the content created by the researchers, and at the same time, in the case of high-IF journals, the institution in question would have most likely already purchased a rather costly subscription for the other toll access articles in the journal. This implies that some journals can generate income from the same institution twice: by making them pay for a subscription and making them pay for OA. This issue has been termed as ‘double-dipping’.224 Moreover, APCs for journals following the hybrid-OA

219 In other fields, where funding is limited, researchers may have to themselves bear the costs. 220 Ware & Mabe, The STM report, pp. 90–92. 221 Ibid., pp. 93–94. 222 That being said, no clear correlation has been found between APCs and IF despite numerous studies (for details, see Ware & Mabe, The STM report). Nevertheless, high-IF journals do have the potential to exploit this. 223 Ibid., p. 92. 224 Ibid., p. 92. 40 model is costlier than that for those following the Gold-OA model, usually around $3000.225 Although there are some journals who offer Gold OA without levying APCs, these journals do use various funding models, such as grants, membership subscriptions, advertising, subscription to print editions only and subsidies. In some cases, the journals initially operate without APCs; however, once they have established a reputation (i.e. high IF) and attracted sufficient authors, they tend to move to an APC model.226 Apart from benefitting large publishers and popular journals, the OA movement has also helped medium or small publishers, especially those who have high rejection rates and a modest subscription revenue. In some disciplines, funding is available to researchers provided they make their results available via OA227 or researchers are mandated to make their published work available either directly through the journal or via open repositories.228 Journals in these fields have also started increasingly shifting to a form of the OA model.229 Another model used is the cascade or second-tier journal model. Here a paper rejected by journal A is transferred to journal B, usually on recommendation by the editor. Journal B here would be a cascade journal and journal A is called the supporter journal. Here both journals are usually sister journals and the cascade journal assures the author of a quick peer-review, since the article has already been reviewed earlier.230 In this model, the APCs are shared and the supporter journal here acts as a feeder to the cascade journal. The model is most effective when the supporter journal is a popular one and the cascade journal, also known as the second-tier journal, is newer.231 Due to the journals being owned by the same publisher or already having entered an agreement, the APC is shared between the two.232 In some ways, this is similar to bundling, wherein a popular journal (the one with the higher IF) helps support a smaller journal (a newer journal or one with a lower IF), with the only difference being that researchers can decide whether they wish to submit to the recommended journal or not. However, given that this model enables authors to ‘get published as quickly as possible’,233 many may opt to submit to the cascade journal.
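Before turning to delayed OA, a deliberately simplified, hypothetical example may help to make the ‘double-dipping’ mechanism described above concrete; the subscription price and the number of articles are invented, while the APC figure follows the roughly $3,000 quoted for hybrid journals. The revenue a publisher can extract from a single institution for a single hybrid journal is

$$R = S + k \times \mathrm{APC}, \qquad \text{e.g.} \quad \$20{,}000 + 5 \times \$3{,}000 = \$35{,}000,$$

where $S$ is the annual subscription the library continues to pay for the journal’s toll-access content and $k$ is the number of hybrid-OA articles that the institution’s researchers, or their funders, additionally pay to make open in that same journal during the year. The same content stream is thus paid for twice by the same institution.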

225 Ibid., p. 93. 226 Ibid., pp. 92–95. 227 Ibid., p. 95. 228 Harnad, ‘The post-Gutenberg open access journal’, p. 131. 229 Ware & Mabe, The STM report, p. 95. 230 A. Wood, ‘Cascade Journals: What and why?’, The Wiley Network (25 June 2019). 231 In this strategy, both journals usually operate using an OA model. For toll access, such a phenomenon already exists in the form of bundling. 232 Ware & Mabe, The STM report, p. 99. 233 Wood, ‘Cascade Journals’. 41

However, the most successful OA model for journal publishers is delayed OA. This is because the journals strategically decide the embargo period for articles (usually between 12 and 24 months), depending on the discipline of the journal. For example, this model is most common among journals in rapidly developing or competitive fields, wherein immediate access is required; therefore, delayed access would not drastically affect their subscription sales.234 This ensures that articles are available in OA without affecting subscription numbers in the long run. Moreover, the increased visibility and accessibility of the version of record would most likely increase its probability of citation, thus improving the journal’s IF. This is further confirmed by the fact that, on average, delayed-OA journals have twice the citation rates of closed subscription journals.235 Thus, the journals get the best of both worlds: high subscription sales plus improved visibility and discoverability for more citations. A similar strategy is also used by Green-OA journals implementing an embargo period, wherein the author can upload the accepted version of the article to an open online repository. Even in the case of Green-OA journals with no embargo period, uploading accepted versions of articles to repositories actually helps journals, as the presence of articles in repositories has been found to increase the number of downloads of those articles from publisher sites.236 As a result, most journal publishers have fairly liberal policies when it comes to self-archiving,237 albeit for their own vested interests. While the repositories do enable content to be provided for free, the responsibility for the development and upkeep of these repositories mostly lies with the libraries. Accordingly, the funding for maintaining these repositories in almost all cases comes from libraries, with no additional budget provided, further weakening their already declining budgets.238 Thus, although many journals actually benefit from these repositories in terms of the discoverability of their articles, they do not contribute to their maintenance. In fact, increasing subscription costs actually lead to libraries having smaller budgets for these repositories. Moreover, the embargo periods on the articles mandated by the journals are too long compared to those mandated by research institutions. In fact, Plan S, an initiative undertaken by a coalition of various national research-funding institutions of Europe, with the support of the European Research Council, clearly states that by 2021, all articles based on research ‘funded by public or private grants provided by national, regional and international research councils and funding bodies, must be published in Open Access

234 Ware & Mabe, The STM report, p. 101. 235 Ibid., p. 102. 236 Ibid., p. 127. 237 Ibid., p. 114. 238 Ibid., p. 115. 42

Journals, on Open Access Platforms, or made immediately available through Open Access Repositories without embargo.’239 This means that researchers are caught between the two mandates, having to choose between publishing in a journal of their choice or jeopardising their existing and future funding. Overall, only self-archiving in combination with OA is expected to save costs for both libraries and publishers; however, these estimates have been hotly contested by publishers, who correctly argue that most economic analyses under-estimate the efficiency of the subscription model. Nevertheless, OA has relatively low barriers for entry into the publishing business, given that APCs and not institutional subscriptions form the major source of revenue. Thus, this gives newer publishers the opportunity to try and break the oligopoly of the top publishers if the latter refuse to offer OA; however, this is easier said than done, as most top publishers offer a variety of options and agreements to maintain their dominant positions in the market. Moreover, these low barriers also have some shortcomings. Recent years have seen the emergence of predatory publishers. These publishers usually market themselves as OA journals and then take advantage of researchers by charging them additional fees on acceptance. Another issue is the emergence of hijacked journals. These journals usually create a dummy website mimicking that of actual journals to attract submissions and APCs.240 As a result, the status and reputation of OA is mixed and its full implementation will only be realised after careful consideration of the economics for each stakeholder. In particular for journals, the expected lowered revenues may lead to smaller journals attracting fewer authors, further strengthening the status of the top journal publishers.241

4.3. Alternative to Scientometric Indicators

Because of the criticism directed towards traditional scientometric indicators and their dependence on citation counts, alternatives to these indicators have been proposed. The most significant among them is alternative metrics. The concept was first introduced in 2009 as article-level metrics; the main aims were to overcome the limitations of citation counts, in particular their failure to distinguish between supporting and critical comments, and to find a more rapid

239 Anon., ‘About’, Plan S, (9 July 2019). 240 Ware & Mabe, The STM report, pp. 114–123. 241 Ibid., pp. 124–125. 43 way for evaluating recently published papers.242 In its present form, alternative metrics allows the impact of all types of scientific publications, such as books, reports, data and other non-traditional publications, to be measured at an article / post level.243 Alternative metrics is mainly based on applications such as blogs, Twitter, ResearchGate and Mendeley244 and considers not only various academic publications but also evaluates how research is tweeted, blogged about, shared via social media and bookmarked.245 Hence, it has also become a means to measure the broader societal impacts of academic research. Obviously, because of the various measurable signals on social media, such as likes / dislikes, shares, followers, subscribers, hashtags, and tags, there are several categorisations of these metrics.246 However, in this sub-section, I will only be providing a brief overview of their characteristics. Overall, alternative metrics has three levels of categories depending on the level of engagement:
• Access: access or storage of a given article or its related
• Appraise: mention of an article either in academic publications or on social media or other platforms
• Apply: use of an article in theories, or datasets

The major advantage of alternative metrics is that it can measure the impact of academic communication on society by considering the opinions of both the academic and non-academic communities. This also makes it a better fit to evaluate inter-disciplinary research, as it provides a more holistic view based on information obtained from experts as well as non-experts. Moreover, alternative metrics helps with rapid evaluation of research and is capable of measuring multiple signals, e.g. comments, shares, likes etc. Therefore, it has the ability to assess different types of research objects like data, software tools and applications. Lastly, although alternative metrics is an inherently quantitative indicator, it also offers the option of evaluating qualitative information through the analysis of comments or user profiles.247

242 C. Neylon & S. Wu, ‘Article-level metrics and the evolution of scientific impact’, PLoS Biology, 7:11 (2009), p. e1000242. 243 Wilsdon et al., ‘Next-generation metrics’, pp. 9–10. 244 Ibid., pp. 9–10. 245 F. Galligan & S. Dyas-Correia, ‘Altmetrics: rethinking the way we measure’, Serials Review, 39:1 (2013), p. 56. 246 Wilsdon et al., ‘Next-generation metrics’, pp. 9–10. 247 Ibid., p. 11. 44

However, alternative metrics does have some limitations. The major limitation is the lack of free access to underlying data. Moreover, since a considerable amount of data is received through social media platforms, the terms and conditions of these platforms need to be considered. In particular, user data cannot be re-distributed legally.248 Also, because user data is protected, there is no stopping journals or researchers from uploading fake reviews or blogs regarding articles for their promotion. Journals and researchers could also ask their employees and colleagues, respectively, to tweet or like certain posts to try and exaggerate their impact and support. Also, researchers would be burdened with more responsibilities such as networking and keeping pace with latest developments in web-based platforms. Lastly, alternative metrics is yet to gain acceptance from the scientific community, thus preventing its widespread implementation.249

4.4. Open Science

Recently, research institutions and policy-makers seem to be gaining an understanding of the inherent limitations of citation counts and the detrimental effect they have on their own interests. Therefore, many circles around the world have become more inclined towards the concept of Open Science (OS). In the most basic sense, OS is ‘transparent and accessible knowledge that is shared and developed through collaborative networks’.250 The initial idea was proposed in the late 2000s. Since then, numerous think tanks, institutes and committees have been established to provide recommendations for the implementation of OS.251 One of the most important among them is the European Commission’s Open Science Policy Platform (OSPP). Its main aims are to provide recommendations regarding how to develop and implement the OS initiative in Europe, to point out and address issues related to this initiative, to support policy formulation and implementation, and to troubleshoot any other cross-cutting issues affecting OS implementation.252 Since being first established in May 2016, the OSPP has held regular meetings to formulate a number of recommendations for OS pertaining to each stakeholder. Its recommendations to journal publishers in particular are to use standard identifiers for researchers, outputs and contributions; to adopt an OA model;

248 Ibid., p. 12. 249 Ibid., p. 12. 250 R. Vicente-Sáez & C. Martínez-Fuentes, ‘Open Science now: A systematic for an integrated definition’, Journal of Business Research, 88 (2018), p. 434. 251 P. Mirowski, ‘The future (s) of open science’, Social Studies of Science, 48:2 (2018), p. 171. 252 Anon., ‘Policies, information and services’, Open Science Policy Platform, (27 June 2019). 45 and to ensure transparency during data collation and metrics for both researchers and journals.253 By doing so, they hope to move away from traditional scientometric indicators and move to a less quantitative means of researcher evaluation. To this end, they have already formulated a career evaluation matrix to completely revamp the way in which research and academics are evaluated, with more emphasis on evaluation by peers and more responsibilities for senior researchers to help their junior colleagues. The matrix also mandates publishing research in OA journals or self-archiving in OA repositories to ensure the openness of science.254 Although the overall concept of OS and its emphasis on openness is a good initiative, especially in terms of ensuring that the non-academic community can be informed on the work done by researchers, it does have some major obstacles before it can be implemented. First, it does not address the issue of the ‘publish or perish’ culture prevalent in researcher evaluation. Researchers are still expected to publish their research, but on OA platforms. In addition, a closer look at the criteria mentioned in the evaluation matrix shows more issues. By moving away from scientometric indicators to peer-review to determine research quality, the possibility of bias increases. Also, the entire matrix is rather subjective in nature, making the evaluation even more problematic. Moreover, the matrix expects researchers to share their preliminary results and raw data through archives as well. This data could easily be re-used by commercial organisations for their own financial gain. Also, even journals may provide incentives to researchers whose preliminary results look promising to publish with them under OA. Given that large publishers will be in a better position to provide these incentives, it would not help eliminate the existing oligopoly in the journal publishing industry. Furthermore, self-archiving is known to be unsystematic and unstructured;255 thus, the OS initiative’s emphasis on self-archiving can only be implemented once the process is more streamlined. Lastly, the main objective of this initiative seems to be to wrest power from the journal publishers and provide a more inclusive environment for the non-academic community, without stressing on making the lives of researchers easier.

253 J. Edmond, R. Lawrence, S. Leonelli, N. Lossau, C. MacCallam, et al., ‘OSPP Combined Recommendations for the Embedding of Open Science’, Open Science Policy Platform, (27 June 2019). 254 C. O’Carroll, B. Rentier, C. Cabello Valdès, F. Esposito, E. Kaunismaa, et al., Evaluation of research careers fully acknowledging Open Science practices-rewards, incentives and/or recognition for researchers practicing Open Science (Publication Office of the , 2017), pp. 5–6. 255 Wilsdon et al., ‘Next-generation metrics’, p. 156. 46

5. Conclusion

This thesis describes the various scientometric indicators in use today and their exploitation and manipulation by journals. To do so, I first described why scientometric indicators have gained importance, in particular in the context of measuring the impact of individual researchers’ work. The emphasis of funding organisations and research institutions on academics getting their work published and, more importantly, cited has created an imbalance among the various stakeholders involved in academic publishing. This imbalance has led to publishers being in a dominant position with respect to the researchers as well as libraries. To further contextualise this position of the publishers, I then described the features of the most common scientometric indicators in use today. Although the indicators have different algorithms and calculations in play, they all have two common features: (1) they are all either directly or indirectly derived from citation counts, i.e. the number of times an article is cited, and (2) they only provide a predominantly quantitative measure of a publication’s or title’s impact because of the inherent limitations of citation counts. Thus, despite being almost universally used in some form or the other as a proxy for research impact, scientometric indicators are unreliable from a qualitative perspective. This unreliability makes them vulnerable to manipulation and exploitation, in particular by journal publishers. To lay the foundation for the ways in which the indicators can be manipulated or exploited, I have proposed a conceptual framework describing the flow of academic publishing and the overall research process. The framework attempts to describe the various stakeholders involved in academic publishing and their roles and inter-relationships to ensure that research output is appropriately registered, certified, disseminated and archived. These four steps should ideally help legitimise a researcher’s work and boost a journal’s overall reputation, thus creating a symbiotic relationship between the two stakeholders. This relationship would in turn ensure smooth publication, where the interests of all stakeholders are satisfactorily catered to. However, as I have shown in my thesis, the inter-relationships between the stakeholders are not ideal, with journals being in a position to dictate terms to the other stakeholders, in particular the researchers. Scientometric indicators play a major role in this dominant position assumed by the journals. By strategically manipulating and exploiting the indicators for their own vested interests, the major journal publishers have managed to create an oligopoly, in particular in the STEM fields. This ensures that journals with high ratings

47 thrive, and their publishers take advantage of these high ratings to ensure that even the other journals managed by them not only survive but also make a profit. This is achieved by strategically marketing and selling their highly rated journals to libraries in bundles with their other journals. Apart from selling, journals also use various other strategies to manipulate their scientometric indicator ratings. This is achieved by being selective about the type and subject area of the articles they accept, considering the number of authors involved in writing the articles before acceptance, manipulating the number of citable items, increasing the number of self- and cross-citations and exploiting author preferences for OA. A major factor leading to the journal publishers’ dominant positions is the ‘publish or perish’ culture prevalent in research institutions today. As a result of this culture, researchers are constantly pressured by their institutions to publish their research in highly rated journals, and journals exploit this need of researchers for their own profit. Therefore, creating a more balanced relationship between all stakeholders in academic publishing is the need of the hour, and this can only be achieved if all parties are committed to working for the welfare of society as a whole. Although the OS initiative may manage to create more balance between some stakeholders, it does not resolve the situation for researchers in terms of the need to publish. Researchers still face the same pressure, only this time with the added responsibility of following institutional mandates and managing online repositories. In such a scenario, the OS initiative is only a first step towards achieving parity among stakeholders, not a complete solution. This initiative needs to be modified to also consider researchers’ interests. While encouraging researchers to publish on OA platforms is in the best interests of all,256 expecting them to publish even their preliminary results only burdens them with increasing the number of their publications. Hence, the OS initiative cannot completely satisfy the needs and interests of all stakeholders involved. A more appropriate solution would be to form a committee with representatives of each stakeholder as well as neutral parties from social and political circles to come up with a robust solution that considers the well-being and interests of all involved. This committee should focus on alleviating the constant pressure on researchers to publish their work and on formulating a type of profit-sharing model between libraries and journal publishers to facilitate future research as well as journal development. Ultimately, this thesis has mainly focused on highlighting the inherent problems of scientometric indicators and their exploitation by journal publishers. In doing so, it provides the basis for future research on the consequences of this exploitation on

256 Including publishers, since as we noted above, OA can be a lucrative model too. 48 researchers and research institutions / libraries and on finding a solution to maintain a balance between all stakeholders involved in academic publishing.


Bibliography

Primary sources

Bhaskar, M., The Content Machine: Towards a Theory of Publishing from the Printing Press to the Digital Network, (Anthem Press, 2013).
Cope, B. & A. Phillips, ‘Introduction’, in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), pp. 1–9.
Cope, B. & M. Kalantzis, ‘Evaluating Webs of Knowledge: A Critical Examination of the “Impact Factor”’, Logos, 21:3-4 (2010), pp. 58–73.
Cope, B. & M. Kalantzis, ‘Signs of epistemic disruption: Transformations in the knowledge system of the academic journal’, in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), pp. 13–61.
Craig, I.D. & L. Ferguson, ‘Journals ranking and impact factors: How the performance of journals is measured’, in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), pp. 159–193.
Garfield, E., ‘Citation indexes to science. A new dimension in documentation through association of ideas’, Science, 122 (1955), pp. 108–112.
Gross, P.L. & E.M. Gross, ‘College libraries and chemical education’, Science, 66:1713 (1927), pp. 385–389.
Harnad, S., ‘The post-Gutenberg open access journal’, in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), pp. 125–137.
Hood, W. & C. Wilson, ‘The literature of bibliometrics, scientometrics, and informetrics’, Scientometrics, 52:2 (2001), pp. 291–314.
Murray, P.R. & C. Squires, ‘The digital publishing communications circuit’, Book 2.0, 3:1 (2013), pp. 3–23.
Oosthuizen, J.C. & J.E. Fenton, ‘Alternatives to the impact factor’, The Surgeon, 12:5 (2014), pp. 239–243.
Peekhaus, W., ‘The enclosure and alienation of academic publishing: Lessons for the professoriate’, tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society, 10:2 (2012), pp. 577–599.
Phillips, A., ‘Business models in journals publishing’, in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), pp. 87–103.


Prosser, D.C., ‘Researchers and scholarly communications: an evolving interdependency’, in D. Shorley & M. Jubb (eds.), The Future of Scholarly Communication (Facet Publishing, 2013), pp. 39–49.
Roldan-Valadez, E., S.Y. Salazar-Ruiz, R. Ibarra-Contreras & C. Rios, ‘Current concepts on bibliometrics: a brief review about impact factor, Eigenfactor score, CiteScore, SCImago Journal Rank, Source-Normalised Impact per Paper, H-index, and alternative metrics’, Irish Journal of Medical Science (1971-), (2018), pp. 1–13.
Roosendaal, H.E. & P.A.T.M. Geurts, ‘Forces and functions in scientific communication: an analysis of their interplay’, Cooperative Research Information Systems in Physics, 31 (1997), (10 June 2019).
Tenopir, C. & D.W. King, ‘The growth of journals publishing’, in B. Cope and A. Phillips (eds.), The future of the academic journal (Oxford: Chandos, 2009), pp. 105–123.
Thompson, J.B., Books in the digital age: The transformation of academic and higher education publishing in Britain and the United States, (Polity, 2005).
Ware, M. & M. Mabe, The STM report: An overview of scientific and scholarly journal publishing, 4th edition (The Hague: International Association of Scientific, Technical and Medical Publishers, 2015).
Wilsdon, J., J. Bar-Ilan, R. Frodeman, E. Lex, I. Peters & P.F. Wouters, ‘Next-generation metrics: Responsible metrics and evaluation for open science’, Report of the European Commission Expert Group on Altmetrics (2017), (10 June 2019).

Secondary sources

Adair, W.C., ‘Citation indexes for scientific literature?’, American Documentation (Pre-1986), 6:1 (1955), pp. 31–32.
Bergstrom, C.T., J.D. West & M.A. Wiseman, ‘The Eigenfactor™ metrics’, Journal of Neuroscience, 28:45 (2008), pp. 11433–11434.
Bollen, J., M.A. Rodriquez & H. Van de Sompel, ‘Journal status’, Scientometrics, 69:3 (2006), pp. 669–687.
Bornmann, L. & H.D. Daniel, ‘What do we know about the h index?’, Journal of the American Society for Information Science and Technology, 58:9 (2007), pp. 1381–1385.


Brumback, R.A., ‘Worshiping False Idols: The Impact Factor Dilemma’, Journal of Child Neurology, 23 (2008), pp. 365–367.
Chan, L., D. Cuplinskas, M. Eisen, F. Friend, Y. Genova, J.C. Guédon, M. Hagemann, S. Harnad, R. Johnson, R. Kupryte & M. La Manna, ‘Read the Budapest Open Access Initiative’, Budapest Open Access Initiative (24 June 2019).
Chartier, R., Forms and meanings: Texts, performances, and audiences from codex to computer, (Philadelphia: University of Pennsylvania Press, 1995).
Cole, W.R., ‘No Author Is a Man of Genius to His Publisher’, The New York Times, 3 September 1989 (11 June 2019).
Da Silva, J.A.T. & A.R. Memon, ‘CiteScore: A cite for sore eyes, or a valuable, transparent metric?’, Scientometrics, 111:1 (2017), pp. 553–556.
Darnton, R., ‘What is the history of books?’, Daedalus, 111:3 (1982), pp. 65–83.
De Rond, M. & A.N. Miller, ‘Publish or perish: bane or boon of academic life?’, Journal of Management Inquiry, 14:4 (2005), pp. 321–329.
Edmond, J., R. Lawrence, S. Leonelli, N. Lossau, C. MacCallam, et al., ‘OSPP Combined Recommendations for the Embedding of Open Science’, Open Science Policy Platform, (27 June 2019).
Fernandez-Llimos, F., ‘Differences and similarities between Journal Impact Factor and CiteScore’, Pharmacy Practice (Granada), 16:2 (2018), pp. 1–3.
Franchignoni, F. & S.M. Lasa, ‘Bibliometric indicators and core journals in physical and rehabilitation medicine’, Journal of Rehabilitation Medicine, 43:6 (2011), pp. 471–476.
Fry, J., C. Oppenheim, C. Creaser, W. Johnson, M. Summers, S. White, G. Butters, J. Craven, J. Griffiths & D. Hartley, ‘Communicating knowledge: how and why UK researchers publish and disseminate their findings’, Research Information Network and JISC (2009), (10 June 2019).
Galligan, F. & S. Dyas-Correia, ‘Altmetrics: rethinking the way we measure’, Serials Review, 39:1 (2013), pp. 56–61.


Garfield, E., ‘Citations-to divided by items-published gives journal impact factor; ISI lists the top fifty high-impact journals of science’, Current Contents, 7 (1972), pp. 5–8.
González-Pereira, B., V.P. Guerrero-Bote & F. Moya-Anegón, ‘A new approach to the metric of journals’ scientific prestige: The SJR indicator’, Journal of Informetrics, 4:3 (2010), pp. 379–391.
Guerrero-Bote, V.P. & F. Moya-Anegón, ‘A further step forward in measuring journals’ scientific prestige: The SJR2 indicator’, Journal of Informetrics, 6:4 (2012), pp. 674–688.
Hames, I., ‘Peer review in a rapidly evolving publishing landscape’, in Academic and Professional Publishing (Chandos Publishing, 2012), pp. 15–52.
Hemmingsson, A., T. Mygind, A. Skjennald & J. Edgren, ‘Manipulation of impact factors by editors of scientific journals’, American Journal of Roentgenology, 178:3 (2002), p. 767.
Hirsch, J.E., ‘An index to quantify an individual’s scientific research output’, Proceedings of the National Academy of Sciences, 102:46 (2005), pp. 16569–16572.
Jacsó, P., ‘The problems with the subject categories schema in the EigenFactor database from the perspective of ranking journals by their prestige and impact’, Online Information Review, 36:5 (2012), pp. 758–766.
Kosteas, V.D., ‘Journal impact factors and month of publication’, Economics Letters, 135 (2015), pp. 77–79.
Leydesdorff, L., The challenge of scientometrics: The development, measurement, and self-organization of scientific communications, (Universal Publishers, 2011).
Miró, Ò., P. Burbano, C.A. Graham, D.C. Cone, J. Ducharme, A.F. Brown & F.J. Martín-Sánchez, ‘Analysis of h-index and other bibliometric markers of productivity and repercussion of a selected sample of worldwide emergency medicine researchers’, Emergency Medicine Journal, 34:3 (2017), pp. 175–181.
Mirowski, P., ‘The future(s) of open science’, Social Studies of Science, 48:2 (2018), pp. 171–203.
Moed, H.F., ‘Measuring contextual citation impact of scientific journals’, Center for Science and Technology Studies, 13 November 2009.
Morgan Stanley, ‘Scientific publishing: Knowledge is power’, Morgan Stanley Equity Research Europe (London), 30 September 2002 (16 May 2019).


Neylon, C. & S. Wu, ‘Article-level metrics and the evolution of scientific impact’, PLoS Biology, 7:11 (2009), p. e1000242.
O’Carroll, C., B. Rentier, C. Cabello Valdès, F. Esposito, E. Kaunismaa, et al., Evaluation of research careers fully acknowledging Open Science practices: rewards, incentives and/or recognition for researchers practicing Open Science (Publication Office of the European Union, 2017).
Olmos-Peñuela, J., P. Benneworth & E. Castro-Martinez, ‘Are “STEM from Mars and SSH from Venus”?: Challenging disciplinary stereotypes of research’s social value’, Science and Public Policy, 41:3 (2013), pp. 384–400.
Olssen, M. & M.A. Peters, ‘Neoliberalism, higher education and the knowledge economy: From the free market to knowledge capitalism’, Journal of Education Policy, 20:3 (2005), pp. 313–345.
Pontille, D. & D. Torny, ‘The controversial policies of journal ratings: Evaluating social sciences and humanities’, Research Evaluation, 19:5 (2010), pp. 347–360.
Potter, C.V., S. Thomas, J.L. Dean, A.P. Kybett, R. Kidd, M. James & H. Saxton, ‘Comment: 2004’s fastest organic and biomolecular chemistry!’, Journal of Materials Chemistry, 14 (2004), pp. E17–E22.
Praal, F. & A. van der Weel, ‘Taming the digital wilds: How to find authority in an alternative publication paradigm’, TXT, 2016 (2016), pp. 97–102.
Roediger III, H.L., ‘The h-index in science: A new measure of scholarly contribution’, Observer: The Academic Observer, 19:4 (2006), (20 May 2019).
Rossner, M., H. Van Epps & E. Hill, ‘Show me the data’, The Journal of Cell Biology, 179:6 (2007), pp. 1091–1092.
Schreiber, W.E. & D.M. Giustini, ‘Measuring Scientific Impact With the h-Index: A Primer for Pathologists’, American Journal of Clinical Pathology, 151:3 (2018), pp. 286–291.
Scully, C. & H. Lodge, ‘Impact factors and their significance; overrated or misused?’, British Dental Journal, 198:7 (2005), p. 391.
Smith, R., ‘Commentary: The Power of the Unrelenting Impact Factor: Is it a Force for Good or Harm?’, International Journal of Epidemiology, 35 (2006), pp. 1129–1130.


Stewart, J., R. Procter, R. Williams & M. Poschen, ‘The role of academic publishers in shaping the development of Web 2.0 services for scholarly communication’, New Media & Society, 15:3 (2013), pp. 413–432.
Stirling, D.A., ‘Editorial peer review: Its strengths and weaknesses’, Journal of the Association for Information Science and Technology, 52 (2001), p. 984.
Tague-Sutcliffe, J., ‘An introduction to informetrics’, Information Processing & Management, 28:1 (1992), pp. 1–3.
Taubert, N.C. & P. Weingart, ‘Changes in scientific publishing: A heuristic for analysis’, in The future of scholarly publishing: Open access and the economics of digitization (Cape Town: African Minds, 2017), pp. 1–33.
van Leeuwen, T., ‘Bibliometric research evaluations, Web of Science and the Social Sciences and Humanities: a problematic relationship?’, Bibliometrie-Praxis und Forschung, 2 (2013), pp. 8-1–8-18.
Vicente-Sáez, R. & C. Martínez-Fuentes, ‘Open Science now: A systematic literature review for an integrated definition’, Journal of Business Research, 88 (2018), pp. 428–436.
Ware, M., Peer review: benefits, perceptions and alternatives (London: Publishing Research Consortium, 2008).
Wildgaard, L., J.W. Schneider & B. Larsen, ‘A review of the characteristics of 108 author-level bibliometric indicators’, (2014), (1 July 2019).
Wood, A., ‘Cascade Journals: What and why?’, The Wiley Network (25 June 2019).

Websites

Anon., ‘About’, EIGENFACTOR.org (5 June 2019).
Anon., ‘About’, Plan S (9 July 2019).
Anon., ‘Background’, Budapest Open Access Initiative (24 June 2019).
Anon., ‘Description’, Scientometrics (10 June 2019).


Anon., ‘Policies, information and services’, Open Science Policy Platform, (27 June 2019).
