Critique of Hirsch's Citation Index: A Combinatorial Fermi Problem

Total Pages: 16

File Type: PDF, Size: 1020 KB

Critique of Hirsch's Citation Index: A Combinatorial Fermi Problem

Alexander Yong

Alexander Yong is associate professor of mathematics at the University of Illinois at Urbana-Champaign. His email address is [email protected]. DOI: http://dx.doi.org/10.1090/noti1164

In 2005, physicist J. E. Hirsch [Hi05] proposed the h-index to measure the quality of a researcher's output. This metric is the largest integer n such that the person has n papers with at least n citations each, and all other papers have weakly less than n citations. Although the original focus of Hirsch's index was on physicists, the h-index is now widely popular. For example, Google Scholar and the Web of Science highlight the h-index, among other metrics such as total citation count, in their profile summaries.

An enticing point made by Hirsch is that the h-index is an easy and useful supplement to a citation count (Ncitations), since the latter metric may be skewed by a small number of highly cited papers or textbooks. In his words:

"I argue that two individuals with similar h's are comparable in terms of their overall scientific impact, even if their total number of papers or their total number of citations is very different. Conversely, comparing two individuals (of the same scientific age) with a similar number of total papers or of total citation count and very different h values, the one with the higher h is likely to be the more accomplished scientist."

It seems to us that users might tend to eyeball differences of h's and citation counts among individuals during their assessments. Instead, one desires a quantitative baseline for what "comparable," "very different," and "similar" actually mean. Now, while this would appear to be a matter for statisticians, we show how textbook combinatorics sheds some light on the relationship between the h-index and Ncitations. We present a simple model that raises specific concerns about potential misuses of the h-index.

To begin, think of the list of a researcher's citations per paper in decreasing order λ = (λ1 ≥ λ2 ≥ · · · ≥ λNpapers) as a partition of size Ncitations. Graphically, λ is identified with its Young diagram; for example, λ = (5, 3, 1, 0) corresponds to the diagram with rows of 5, 3, and 1 boxes. A combinatorialist will recognize that the h-index of λ is the side-length of the Durfee square (marked with •'s in the article's figure; for this λ it is the 2 × 2 square in the upper-left corner): this is the largest h × h square that fits in λ. This simple observation is nothing new, and appears in both the bibliometric and combinatorial literature; see, e.g., [AnHaKi09, FlSe09]. In particular, since the Young diagram of size Ncitations with maximum h-index is roughly a square, we see graphically that 0 ≤ h ≤ ⌊√Ncitations⌋.

Next, consider the following question:

Given Ncitations, what is the estimated range of h?

Taking only Ncitations as input hardly seems like sufficient information to obtain a meaningful answer. It is exactly for this reason that we call the question a combinatorial Fermi problem, by analogy with usual Fermi problems; see the section "Combinatorial Fermi Problems."
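The identification of the h-index with the Durfee square is easy to check by machine. The short Python sketch below is ours, not part of the paper: it computes h from a list of citation counts exactly as defined above and verifies the bound 0 ≤ h ≤ ⌊√Ncitations⌋ on the example λ = (5, 3, 1, 0).

```python
import math

def h_index(citations):
    """h-index of a citation list: the largest h such that at least h
    papers have at least h citations each (the Durfee square side of λ)."""
    lam = sorted(citations, reverse=True)      # λ1 ≥ λ2 ≥ ... ≥ λNpapers
    h = 0
    while h < len(lam) and lam[h] >= h + 1:    # row h+1 of the Young diagram reaches column h+1
        h += 1
    return h

# Example from the text: λ = (5, 3, 1, 0) has a 2 × 2 Durfee square, so h = 2.
lam = [5, 3, 1, 0]
n_citations = sum(lam)
h = h_index(lam)
print(h)                                       # 2
assert 0 <= h <= math.isqrt(n_citations)       # 0 ≤ h ≤ ⌊√Ncitations⌋
```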
Table 1. Confidence intervals for h-index.

Ncitations      50       100      200      300      400      500      750      1000     1250
Interval for h  [2, 5]   [3, 7]   [5, 9]   [7, 11]  [8, 13]  [9, 14]  [11, 17] [13, 20] [15, 22]

Ncitations      1500     1750     2000     2500     3000     3500     4000     4500     5000     5500
Interval for h  [17, 24] [18, 26] [20, 28] [22, 31] [25, 34] [27, 36] [29, 39] [31, 41] [34, 43] [35, 45]

Ncitations      6000     6500     7000     7500     8000     9000     10000
Interval for h  [36, 47] [37, 49] [39, 51] [40, 52] [42, 54] [44, 57] [47, 60]

Table 2. Fields medalists 1998–2010.

Medalist          Award year  Ncitations  h   Rule of thumb est.  Confidence interval
T. Gowers         1998        1012        15  17.2                [13, 20]
R. Borcherds      1998        1062        14  17.6                [14, 21]
C. McMullen       1998        1738        25  22.5                [18, 26]
M. Kontsevich     1998        2609        23  27.6                [22, 32]
L. Lafforgue      2002        133         5   6.2                 [4, 8]
V. Voevodsky      2002        1382        20  20.0                [16, 23]
G. Perelman       2006        362         8   10.0                [7, 12]
W. Werner         2006        1130        19  18.2                [14, 21]
A. Okounkov       2006        1677        24  22.1                [18, 25]
T. Tao            2006        6730        40  44.3                [38, 51]
C. Ngô            2010        228         9   8.2                 [5, 10]
E. Lindenstrauss  2010        490         12  12.0                [9, 14]
S. Smirnov        2010        521         12  12.3                [9, 15]
C. Villani        2010        2931        25  29.2                [24, 33]

Since we assume no prior knowledge, we consider each citation profile in an unbiased manner. That is, each partition of Ncitations is chosen with equal probability. In fact, there is a beautiful theory concerning the asymptotics of these uniform random partitions. This was largely developed by A. Vershik and his collaborators; see, e.g., the survey [Su10].

Actually, we are interested in "low" (practical) values of Ncitations where not all asymptotic results are exactly relevant. Instead, we use generating series and modern desktop computation to calculate the probability that a random λ has Durfee square size h. More specifically, we obtain Table 1 using the Euler–Gauss identity for partitions:

$$\prod_{i=1}^{\infty}\frac{1}{1-x^{i}} \;=\; 1+\sum_{k\geq 1}\frac{x^{k^{2}}}{\prod_{j=1}^{k}\left(1-x^{j}\right)^{2}}. \tag{1}$$

The proof of (1) via Durfee squares is regularly taught to undergraduate combinatorics students; it is recapitulated in the section "The Euler–Gauss Identity and its Application to the h-index." The pedagogical aims of this note are elaborated upon in sections "Combinatorial Fermi Problems" and "The Euler–Gauss Identity and its Application to the h-index."

The asymptotic result we use, due to E. R. Canfield–S. Corteel–C. D. Savage [CaCoSa98], gives the mode size of the Durfee square when Ncitations → ∞. Since their formula is in line with our computations, even for low Ncitations, we reinterpret their work as the rule of thumb for h-index:

$$h \;=\; \frac{\sqrt{6}\,\log 2}{\pi}\,\sqrt{N_{\mathrm{citations}}} \;\approx\; 0.54\,\sqrt{N_{\mathrm{citations}}}.$$

The focus of this paper is on mathematicians. For the vast majority of those tested, the actual h-index computed using MathSciNet or Google Scholar falls into the confidence intervals. Moreover, we found that the rule of thumb is fairly accurate for pure mathematicians. For example, Table 2 shows this for post-1998 Fields medalists.¹

¹Citations pre-2000 in MathSciNet are not complete. Google Scholar and Thomson Reuters' Web of Science also have sources of error. We decided that MathSciNet was our most complete option for analyzing mathematicians. For relatively recent Fields medalists, the effect of lost citations is reduced.
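To illustrate how a table of this kind can be computed, here is a short Python sketch of ours (it is not the author's code). It relies on the Durfee-square decomposition behind (1): a partition of N with Durfee square of side k consists of the k × k square, a partition with at most k parts to its right, and a partition with parts of size at most k below it, so the number of such partitions is the coefficient of x^(N − k²) in 1/∏_{j=1}^{k} (1 − x^j)². Dividing by p(N) gives the exact distribution of the h-index of a uniform random partition of N. The 95% central interval below is our assumption; the convention actually used for Table 1 is not stated in the portion of the paper reproduced here.

```python
import math
from fractions import Fraction

def durfee_distribution(n):
    """Exact distribution of the Durfee square side (= h-index) of a
    uniform random partition of n, via the decomposition behind (1)."""
    counts = {}
    for k in range(0, math.isqrt(n) + 1):
        m = n - k * k                          # cells outside the k × k square
        c = [1] + [0] * m                      # coefficients of 1 / prod_{j<=k} (1 - x^j)^2
        for j in range(1, k + 1):
            for _ in range(2):                 # the factor (1 - x^j) appears squared
                for i in range(j, m + 1):
                    c[i] += c[i - j]
        counts[k] = c[m]                       # partitions of n with Durfee side k
    total = sum(counts.values())               # = p(n), the number of partitions of n
    return {k: Fraction(v, total) for k, v in counts.items()}, total

def central_interval(dist, level=Fraction(95, 100)):
    """Interval grown outward from the mode until it holds at least `level`
    of the probability mass (our convention, not necessarily the paper's)."""
    lo = hi = max(dist, key=dist.get)
    mass = dist[lo]
    while mass < level:
        left, right = dist.get(lo - 1, Fraction(0)), dist.get(hi + 1, Fraction(0))
        if right >= left:
            hi, mass = hi + 1, mass + right
        else:
            lo, mass = lo - 1, mass + left
    return lo, hi

dist, p_n = durfee_distribution(100)
print(p_n)                                     # 190569292, the classical value of p(100)
print(central_interval(dist))                  # compare with Table 1's entry for Ncitations = 100
print(round(0.54 * math.sqrt(100), 1))         # rule of thumb: about 5.4
```

The mode of this distribution is what the Canfield–Corteel–Savage formula approximates, which is how the rule of thumb above arises.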
In [Hi05] it was indicated that the h-index has predictive value for winning the Nobel Prize. However, the relation of the h-index to the Fields Medal is, in our opinion, unclear. A number of the medalists' h values above are shared (or exceeded) by noncontenders of similar academic age, or by those who have similar citation counts. Perhaps this reflects a cultural difference between the mathematics and the scientific communities.

In the section "Further Comparisons with Empirical Data," we analyze mathematicians in the National Academy of Sciences, where we show that the correlation between the rule of thumb and the actual h-index is R = 0.94. After removing book citations, R = 0.95. We also discuss Abel Prize winners and associate professors at three research universities. Ultimately, readers are encouraged to do checks of the estimates themselves.

We discuss three implications/possible applications of our analysis.

Comparing h's when Ncitations's Are Very Different

It is understood that the h-index usually grows with Ncitations. However, when are citation counts so different that comparing h's is uninformative? For example, hTao = 40 (6,730 citations) while hOkounkov = 24 (1,677 citations). The model asserts that the probability of hOkounkov ≥ 32 is less than 1 in 10 million. Note that the Math Genealogy Project has fewer than 200,000 mathematicians. These orders of magnitude predict that no mathematician with 1,677 citations has an h-index of 32, even though technically it can be as high as 40.

The asymptotic result (treated precisely in the section "The Euler–Gauss Identity …") asserts concentration around the mode Durfee square. Thus, theoretically, one expects the rule of thumb to be nearly correct for large Ncitations. Alas, this is empirically not true, even for pure mathematicians. However, we observe something related: 0.54√Ncitations is higher than the actual h for almost every very highly cited (Ncitations > 10,000) scholar in mathematics, physics, computer science and statistics (among others) we considered. On the rare occasion this fails, the estimate is only beaten by a small percentage (< 5 percent). The drift in the other direction is often quite large (50 percent or more is not unusual in certain areas of engineering or biology).

Near equality occurs among Abel Prize winners. We also considered all prominent physicists highlighted in [Hi05] (except Cohen and Anderson, due to name conflation in Web of Science). The guess is always an upper bound (on average 14–20 percent too high). Near equality is met by D. J. Scalapino (25,881 citations; 1.00), C. Vafa (22,902 citations; 0.99), and J. N. Bahcall (27,635 citations; 0.98); we have given the ratio (true h)/(estimated h).

One reason for highly cited people to have a lower than expected h-index is that they tend to have highly cited textbooks. Also, famous academics often run into the "Matthew effect" (e.g., gratuitous citations of their most well-known articles or books).

Anomalous h-Indices

More generally, our estimates give a way to flag an anomalous h-index for active researchers; i.e., those that are far outside the confidence interval or those for which the rule of thumb is especially …
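Checks of this kind are easy to script. The self-contained Python sketch below is again ours rather than code from the paper: it computes, under the uniform-partition model, the exact probability that a researcher with a given Ncitations has h-index at least h0. It can be used to test the claim above about a researcher with 1,677 citations and h ≥ 32, or to flag an h-index lying far outside the intervals of Table 1.

```python
import math
from fractions import Fraction

def h_tail_probability(n_citations, h0):
    """P(h >= h0) for a uniform random partition of n_citations, computed
    exactly from the Durfee-square decomposition behind (1)."""
    counts = {}
    for k in range(0, math.isqrt(n_citations) + 1):
        m = n_citations - k * k            # cells outside the k × k Durfee square
        c = [1] + [0] * m                  # coefficients of 1 / prod_{j<=k} (1 - x^j)^2
        for j in range(1, k + 1):
            for _ in range(2):             # the factor (1 - x^j) appears squared
                for i in range(j, m + 1):
                    c[i] += c[i - j]
        counts[k] = c[m]                   # partitions with Durfee square side k
    total = sum(counts.values())           # = p(n_citations)
    return Fraction(sum(v for k, v in counts.items() if k >= h0), total)

# The claim quoted above: with 1,677 citations, the model puts the chance of
# h >= 32 below 1 in 10 million.
p = h_tail_probability(1677, 32)
print(float(p))
print(p < Fraction(1, 10**7))
```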
Recommended publications
  • Citation Analysis for the Modern Instructor: an Integrated Review of Emerging Research
  • How Can Citation Impact in Bibliometrics Be Normalized?
  • Research Performance of Top Universities in Karnataka: Based on Scopus Citation Index
  • God and Design: the Teleological Argument and Modern Science
  • Emerging Sources Citation Index
  • Introduction to Bibliometrics
  • Optimize Your Web of Science Journey for Quicker Discovery, Easier Research Analysis and Better Collection Management
  • Fermi Estimates: from Harry Potter to ET
  • Engineering Notation Versus Scientific Notation
  • Understanding Research Metrics: Discover How to Monitor Your Journal's Performance Through a Range of Research Metrics
  • Publications from Serbia in the Science Citation Index Expanded: a Bibliometric Analysis
  • Citations Versus Altmetrics