Mechanisms of Change and the Role of Journal Self-Citations

Journals that Rise from the Fourth Quartile to the First Quartile in Six Years or Less: Mechanisms of Change and the Role of Journal Self-Citations

Juan Miguel Campanario, Departamento de Física y Matemáticas, Universidad de Alcalá, Alcalá de Henares, 28871 Madrid, Spain; [email protected]

Received: 6 September 2018; Accepted: 20 November 2018; Published: 26 November 2018

Abstract: Journal self-citations may be increased artificially to inflate a journal's scientometric indicators. The aim of this study was to identify possible mechanisms of change in a cohort of journals that rose from the fourth quartile (Q4) to the first quartile (Q1) over six years or less in Journal Citation Reports (JCR), and the role of journal self-citations in these changes. A total of 51 different journals sampled from all JCR Science Citation Index (SCI) subject categories improved their rank position from Q4 in 2009 to Q1 in any year from 2010 to 2015. I identified the changes in the numerator or denominator of the Journal Impact Factor (JIF) that were involved in each year-to-year transition. The main mechanism of change was an increase in the number of citations used to compute the JIF. The effect of journal self-citations on the increase of the JIF was also studied. The main conclusion is that there was no evidence of widespread JIF manipulation through the overuse of journal self-citations.

Keywords: journal rankings; Web of Science categories; journal impact factor; journal self-citations

1. Introduction and Objectives

Different criteria are used to evaluate and rank academic journals. Perhaps the most frequently used citation-based ranking is the Journal Impact Factor (JIF), introduced by Eugene Garfield and first published in 1975 by the Institute for Scientific Information (ISI), then by Thomson Reuters, and currently by Clarivate Analytics as part of Journal Citation Reports (JCR) [1,2]. The JIF of a given journal for year Y is calculated according to the following equation [3]:

JIF(Y) = Citations_in_Y_to_documents_published_in_Y1_and_Y2 / Citable_items_published_in_Y1_and_Y2. (1)

In Equation (1), Y1 and Y2 are the two years before Y. The "citable items" include only articles and reviews. However, other document types can also be cited, and often are [4].

Journal rankings for specific research fields are often used for evaluation purposes, both of authors and of institutions [5]. The use of reference sets based on Web of Science subject categories has become an established practice in evaluative bibliometrics [6]. Many years ago, ISI created subject categories for JCR; these categories were assigned by ISI staff on the basis of a number of criteria, including the journal's title and its citation patterns [7,8]. Many journals appear in more than one category [9].

For a journal editor, increasing a journal's impact factor may be an important objective [10]. Journal editors often publish editorials and letters in which they explain that one of their main goals is to maintain or improve their JIF or their journal's rank position in JCR [11,12]. Indeed, they may even publish editorials and commissioned opinion articles that produce such increases [13]. In some countries (for example, Spain), academics are awarded economic bonuses for publishing in prestigious journals, especially those ranked highly in their JCR categories [14,15].
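As a purely illustrative aside, the arithmetic in Equation (1) can be sketched in a few lines of Python. The function name and the citation and item counts below are hypothetical and are not taken from the article's data.

    # Minimal sketch of the JIF calculation in Equation (1).
    # All counts are hypothetical, for illustration only.
    def journal_impact_factor(citations_to_prior_two_years: int,
                              citable_items_prior_two_years: int) -> float:
        """JIF(Y): citations received in year Y to documents published in
        Y1 and Y2, divided by the citable items (articles and reviews)
        published in Y1 and Y2."""
        return citations_to_prior_two_years / citable_items_prior_two_years

    # Example: 412 citations in year Y to items from the two prior years,
    # and 180 citable items published in those years.
    print(round(journal_impact_factor(412, 180), 3))  # 2.289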
Similarly, many Chinese universities pay monetary rewards to staff members who publish Science Citation Index (SCI) articles in journals with a high JIF [16]. Competition to publish in highly ranked journals can be fierce and, as a consequence, journals actively try to increase their rank position. Pajić studied the stability of seven citation-based journal rankings [17] and discovered that many journals moved from one quartile to another. The plots he published illustrated instances of changes from Q4 to Q1 from one year to the next.

Percentiles are frequently used in journal ranking systems. Currently, many journals remain in the same JCR quartile for years, or move up only in small, single-quartile steps. This makes journals that experience relatively fast transitions from low to high rank positions a strategic research objective.

1.1. Journal Self-Citations and the JIF

According to Bornmann and Haunschild, "citations are a target-oriented metric which measures impact on science" ([18], p. 230). Many comments and letters have been published on the topic of author and journal self-citation. For example, authors have complained that self-citations may be increased artificially to inflate a journal's scientometric indicators [19–26]. However, as noted by Frandsen, little evidence exists that relates self-citations to the JIF [27]. Rousseau suggested that a high self-citation rate may be an expression of low journal visibility [28]. Peritz and Bar-Ilan studied the fields of bibliometrics and scientometrics, and discovered an increase in journal self-citation [29].

Mirsaeid, Motamedi, and Ghorbani selected 12 Iranian medical journals included in Web of Science and 26 Iranian medical journals included in the ISC (Islamic World Science Citation Center), and studied the correlation between self-citation and JIF. These authors found no significant difference between the self-citation rates in the two databases, and no significant difference between the two databases in the correlation of journal self-citation with impact factor [30]. Yang, Gao, and Zhang compared the journal self-citation rates of 99 Chinese scientific journals with those of a similar set of 99 non-Chinese journals. They found that, in general, self-citation rates were higher in the Chinese journals [31]. Humphrey, Kiseleva, and Schleicher studied self-citation trends over the period 2000–2012 for all business and management journals indexed in JCR. In some instances, they found strong increases in self-citation relative to external citations [32]. Lin studied the performance of Asian science and technology (S&T) journals on international citation indicators. She found that journal self-citations among the studied journals had no significant effect on the journals' JIF values [33]. Hongling investigated the self-citation rates from 2007 to 2009 in journals from China, Japan, India, and Korea included in the SCI. She found that Korea had the highest self-citation rate, and Japan the lowest. She also discovered that the academic influence of journals with very low or very high self-citation rates was small [34]. Huang and Lin studied the influence of journal self-citations on the JIF and the journal immediacy index (JII) [35].
They used data for 10 years of publications from 20 journals in environmental engineering, and found that the inclusion of journal self-citations only slightly modified the JIF and JII values. Mimouni et al. investigated the self-citation rate of 117 pediatrics journals listed in the JCR. They discovered a significant difference between the JIF and the corrected JIF (computed without self-citations) across all journals. They also found that self-citation was more prevalent in journals with a lower JIF and with a lower corrected JIF [36]. Torabian et al. studied the relationship between self-citation and JIF in open-access medical science journals indexed by ISI and the Directory of Open-Access Journals (DOAJ) in 2007–2008. They found that, after deleting self-citations, 60% of the journals increased in rank, 27% decreased in rank, and 13% remained unchanged. They also found a significant relationship between self-citation and JIF [37]. Opthof studied the top 50 journals listed in the Web of Science category of cardiac and cardiovascular systems [38]. He studied inflation of the impact factor due to journal self-citation, and concluded that journal self-citation among cardiovascular journals was substantial.

As noted above, a simple technique to increase the JIF consists of publishing editorial material with many journal self-citations [39]. However, in previous studies, we found that manipulation of the JIF by publishing large amounts of editorial material with many citations to the journal itself was not a widely used strategy [40]. In another study, we tried to ascertain the possible effect of journal self-citations on the increase in the JIF of journals for which this scientometric indicator rose at least fourfold in only a few years. Again, we found no evidence of widespread manipulation of the JIF through the massive use of journal self-citations [41]. In another article, I studied the factors (citations, self-citations, and number of articles) that led to large changes in the JIF in only one year in a sample of 360 journals. I discovered that about 54% of the increases in the JIF were associated with changes in journal self-citation patterns [42]. Yang et al. used data mining techniques to detect JIF manipulation [43]. These authors used eight algorithms to find suitable methods for detecting impact factor manipulation. Note that, in general, the effect of added citations on the JIF will be more noticeable for journals ranked in lower positions. These journals have low JIFs and, thus, benefit more obviously from any additional citations they receive.

1.2. Coercive Citations

The concept of mandatory or coercive self-citation is not new. It refers to "requests that (i) give no indication that the manuscript was lacking in attribution; (ii) make no suggestion as to specific articles, authors, or a body of work requiring review; and (iii) only guide authors to add citations from the editor's journal" ([44], p. 542). Resnik, Gutierrez-Ford, and Peddala provide another definition of coercive citations: "unnecessary references to his/her publication(s)" ([45], p. 307). Coercive citations could also be used for the editors' personal benefit [46]. Esfe et al. discussed the types, reasons, benefits, and drawbacks of this type of self-citation [47].
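To make the self-citation-corrected JIF discussed in Section 1.1 concrete (for example, the corrected JIF used by Mimouni et al. [36]), the following Python sketch computes a journal self-citation rate and a JIF with self-citations removed from the numerator. The counts are hypothetical, and the sketch illustrates only the arithmetic, not the procedure of any particular study.

    # Hypothetical counts; not data from any of the cited studies.
    total_citations = 412   # citations in year Y to items published in Y1 and Y2
    self_citations = 60     # of these, citations coming from the journal itself
    citable_items = 180     # articles and reviews published in Y1 and Y2

    self_citation_rate = self_citations / total_citations
    jif = total_citations / citable_items
    corrected_jif = (total_citations - self_citations) / citable_items

    print(round(self_citation_rate, 3))            # 0.146
    print(round(jif, 3), round(corrected_jif, 3))  # 2.289 1.956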