SeerSuite: Developing a Scalable and Reliable Application Framework for Building Digital Libraries by Crawling the Web



Pradeep B. Teregowda (Pennsylvania State University), Isaac G. Councill (Google), Juan Pablo Fernández R. (Pennsylvania State University), Madian Khabsa (Pennsylvania State University), Shuyi Zheng (Pennsylvania State University), C. Lee Giles (Pennsylvania State University)

Abstract

SeerSuite is a framework for scientific and academic digital libraries and search engines built by crawling scientific and academic documents from the web, with a focus on providing reliable, robust services. In addition to full text indexing, SeerSuite supports autonomous citation indexing and automatically links references in research articles to facilitate navigation, analysis and evaluation. SeerSuite enables access to extensive document, citation, and author metadata by automatically extracting, storing and indexing metadata. SeerSuite also supports MyCiteSeer, a personal portal that allows users to monitor documents, store user queries, build document portfolios, and interact with the document metadata. We describe the design of SeerSuite and the deployment and usage of CiteSeerx as an instance of SeerSuite.

1 Introduction

Efficient and reliable access to the vast scientific and scholarly publications on the web requires advanced citation index retrieval systems [21] such as CiteSeer [22, 4], CiteSeerx [5], Google Scholar [9], ACM Portal [1], etc. The SeerSuite application framework provides a unique, advanced and automatic citation index system that is usable and comprehensive, and provides efficient access to scientific publications. To realize these goals, our design focuses on reliable, scalable and robust services.

A previous implementation, CiteSeer (maintained as of this date), was designed to support such services. However, CiteSeer was a research prototype and, as such, suffered serious limitations. SeerSuite was designed to provide a framework that would replace CiteSeer and provide most of its functionality, while being extensible. SeerSuite improves on several aspects of the original CiteSeer with features such as reliability, robustness and scalability. It does this by adopting a multi-tier architecture with a service orientation and a loose coupling of modules.

CiteSeerx, an instance of SeerSuite, is one of the top ranked resources on the web and indexes nearly one and a half million documents. The collection spans computer and information science (CIS) and related areas such as mathematics, physics and statistics. CiteSeerx acquires its documents primarily by automatically crawling authors' web sites for academic and research documents. CiteSeerx receives approximately two million hits daily and has more than two hundred thousand documents downloaded from its cache. The MyCiteSeer personal portal has over ten thousand registered users.

While the SeerSuite application framework has most of the functionality of CiteSeer, SeerSuite represents a complete redesign of CiteSeer. SeerSuite takes advantage of and includes state of the art machine learning methods and uses open source applications and modules. The structure and architecture of SeerSuite are designed to enhance the ease of maintenance and to reduce the cost of operations. SeerSuite is designed to run on off-the-shelf hardware infrastructure.

With CiteSeerx, SeerSuite focuses primarily on CIS. However, there are requests for similar focused services in other fields such as chemistry [33] and archaeology [37]. SeerSuite can be adapted to provide services similar to those provided by CiteSeerx in these areas. SeerSuite is a part of the ChemXSeer project [3], a digital library search engine and collaboration service for chemistry. Though designed as a framework for CiteSeerx-like digital libraries and search engines, the modular and extensible framework allows for applications that use SeerSuite components as stand-alone services or as a part of other systems.

2 Background and Related Work

Domain specific repositories and digital library systems have been very popular over the last decades, with examples such as arXiv [23] for physics and RePEc [15] for economics. The Greenstone digital library [10] was developed with a similar goal. These repository systems, unlike CiteSeerx, depend on manual submission of document metadata, a tedious and resource expensive process. As such, CiteSeerx, which crawls authors' web pages for documents, is in many ways a unique system, closest in design to Google Scholar.

The popularity of services provided by the original CiteSeer and limitations in its design and architecture were the motivation behind the design of SeerSuite [18, 30]. The design of SeerSuite was an incremental process involving users and researchers. User input was received in the form of feedback and feature requests. Exchange of ideas between researchers and users across the world through collaborations, reviews and discussions has made a significant contribution to the design. In addition to ensuring reliable, scalable services, portability of the overall system and components was identified as an essential feature that would encourage adoption of SeerSuite elsewhere.

During the process of designing the architecture of SeerSuite, other academic repository and content management system (CMS) architectures such as Fedora [28] and DSpace [8] were studied. The Blackboard architecture pattern [34] had a strong influence on the design of the metadata extraction system. The main obstacles in adopting existing repository and CMS systems were the levels of customization and the effort required to meet SeerSuite design requirements. More specifically, implementation of the citation graph structure with a focus on automatic metadata extraction, and the workflow requirements for maintaining and updating citation graph structures, made these approaches cumbersome to use.

In addition to repository, search engine and digital library architectures, advances in metadata extraction methods [24, 25, 19, 27, 32] and the availability of open source systems have influenced SeerSuite design. We begin our discussion of SeerSuite by describing the architecture.

3 Architecture

In the context of SeerSuite, reliability refers to the ability of framework instances to provide around-the-clock service with minimal downtime, scalability to the ability to support an increasing number of user requests and documents in the collection, and robustness to the ability of an instance to continue providing services while some of the underlying services are unavailable or resource constrained.

An outline of the SeerSuite architecture is shown in Figure 1. By adopting a loosely coupled approach for modules and subsystems, we ensure that instances can be scaled and can provide robust service. We describe the overall architecture in the web application, data storage, metadata extraction and ingestion, crawling and maintenance sections.

Figure 1: SeerSuite Architecture

3.1 Web application

The web application makes use of the model view controller architecture implemented with the Spring framework. The application presentation uses a mix of Java Server Pages and JavaScript to generate the user interface. The design of the user interface allows the look and feel to be modified by switching Cascading Style Sheets (CSS). The use of the JavaServer Pages Standard Tag Library (JSTL) supports template based construction. The web pages allow further interaction through the use of JavaScript frameworks. While the development environment is mostly Linux, components are developed with portability as a focus. Adoption of the Spring framework allows development to concentrate on application design. The framework supports the application by handling interactions between the user and database in a transparent manner.

Servlets use Data Access Objects (DAO) with the support of the framework to interact with databases and the repository. The index, database and repository enable the data objects crawled from the web to be stored and accessed through the web, using efficient programmatic interfaces. The web application is deployed through a web application archive file.
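As a rough illustration of the controller/DAO arrangement just described, the following minimal Spring MVC sketch shows a controller delegating to a Data Access Object and returning a JSP view name. The class, interface, field, URL and view names are hypothetical, invented for illustration; only standard Spring MVC annotations are assumed, and this is not code from SeerSuite.

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;

// Hypothetical metadata holder; the field names are illustrative only.
class DocumentSummary {
    String doi;
    String title;
    String authors;
}

// Hypothetical DAO interface; Spring wires in an implementation that talks to
// the metadata database, keeping persistence details out of the controller.
interface DocumentDao {
    DocumentSummary findByDoi(String doi);
}

@Controller
public class DocumentController {

    private final DocumentDao documentDao;

    @Autowired
    public DocumentController(DocumentDao documentDao) {
        this.documentDao = documentDao;
    }

    // Handles a document summary request and hands the metadata to a JSP view.
    @RequestMapping("/viewdoc/summary")
    public String summary(@RequestParam("doi") String doi, Model model) {
        model.addAttribute("document", documentDao.findByDoi(doi));
        return "viewDocumentSummary"; // resolved to a JSP template styled via CSS
    }
}
```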
3.2 Data storage

The databases store metadata and relationships in the form of tables providing transaction support, where transactions include adding, updating or deleting objects. The database design partitions the data storage across three main databases. This allows growth in any one component to be handled by further partitioning the database horizontally.

The main database contains objects and is focused on transactions and version tracking of objects central to SeerSuite and digital library implementations. Objects stored in the main database include document metadata, author, citation, keywords, tags, and hub (URL) objects.

The repository stores documents in a directory tree derived from the document identifier; the final level contains metadata, text and cached files. The document identifier structure ensures that there are a nominal number of sub-folders under any folder in the tree structure.
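The identifier-derived directory layout described above can be sketched as follows. The dotted identifier format, the repository root and the file names in the comment are assumptions made for illustration, not the actual SeerSuite conventions.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class RepositoryPath {

    /**
     * Maps a dotted document identifier (e.g. "10.1.1.42.1234", a format assumed
     * here for illustration) onto a nested directory path, so that each identifier
     * component becomes one directory level and no single directory accumulates
     * too many children.
     */
    static Path pathFor(String repositoryRoot, String docId) {
        Path p = Paths.get(repositoryRoot);
        for (String component : docId.split("\\.")) {
            p = p.resolve(component);
        }
        return p;
    }

    public static void main(String[] args) {
        Path dir = pathFor("/data/repository", "10.1.1.42.1234");
        // The final level would hold the cached file, extracted text and metadata,
        // e.g. 10.1.1.42.1234.pdf, .txt and .xml (names assumed for illustration).
        System.out.println(dir);   // /data/repository/10/1/1/42/1234
    }
}
```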
An index provides a fast, efficient method of storing and accessing text and metadata through the use of an inverted file. SeerSuite uses the Apache Solr index application [16], supported by Apache Tomcat, to provide full text and metadata indexing operations. The metadata items are obtained from the database and the full text from the repository. The application's controllers interact with the index through a REST (Representational State Transfer) [20] interface. This allows ...
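Because the Solr index sits behind an HTTP interface, a component can issue full-text queries with a simple GET request. The sketch below uses only the JDK HTTP client against Solr's standard /select handler; the host, port, core name and query string are illustrative assumptions, not SeerSuite's actual configuration.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class SolrSearch {
    public static void main(String[] args) throws Exception {
        // Core name ("citeseerx") and host are assumptions; only Solr's standard
        // /select handler with the q, rows and wt parameters is relied upon.
        String q = URLEncoder.encode("information retrieval", StandardCharsets.UTF_8);
        URI uri = URI.create(
            "http://localhost:8983/solr/citeseerx/select?q=" + q + "&rows=10&wt=json");

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON response lists matching documents; metadata fields would be
        // fetched from the database and full text from the repository as needed.
        System.out.println(response.body());
    }
}
```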
Recommended publications
  • Transitive Reduction of Citation Networks (arXiv:1310.8224v2)
    Transitive reduction of citation networks. James R. Clough, Jamie Gollings, Tamar V. Loach, Tim S. Evans. Complexity and Networks group, Imperial College London, South Kensington campus, London, SW7 2AZ, United Kingdom. March 28, 2014.
    Abstract: In many complex networks the vertices are ordered in time, and edges represent causal connections. We propose methods of analysing such directed acyclic graphs taking into account the constraints of causality and highlighting the causal structure. We illustrate our approach using citation networks formed from academic papers, patents, and US Supreme Court verdicts. We show how transitive reduction reveals fundamental differences in the citation practices of different areas, how it highlights particularly interesting work, and how it can correct for the effect that the age of a document has on its citation count. Finally, we transitively reduce null models of citation networks with similar degree distributions and show the difference in degree distributions after transitive reduction to illustrate the lack of causal structure in such models.
    Keywords: directed acyclic graph, academic paper citations, patent citations, US Supreme Court citations
    1 Introduction: Citation networks are complex networks that possess a causal structure. The vertices are documents ordered in time by their date of publication, and the citations from one document to another are represented by directed edges. However, unlike other directed networks, the edges of a citation network are also constrained by causality: the edges must always point backwards in time. Academic papers, patent documents, and court judgements all form natural citation networks. Citation networks are examples of directed acyclic graphs, which appear in many other contexts: from scheduling problems [1] to theories of the structure of space-time [2].
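    The transitive reduction this abstract refers to can be sketched in a few lines: an edge is removed whenever the same pair of documents is already connected by a longer directed path. The graph below is a made-up three-paper example, not data from the paper; this is a minimal sketch, not the authors' implementation.

```java
import java.util.*;

public class TransitiveReduction {

    // Is 'target' reachable from 'start', ignoring the direct edge start -> target?
    static boolean reachableIndirectly(Map<Integer, Set<Integer>> adj, int start, int target) {
        Deque<Integer> stack = new ArrayDeque<>();
        Set<Integer> seen = new HashSet<>();
        for (int next : adj.getOrDefault(start, Set.of())) {
            if (next != target) stack.push(next);     // skip the direct edge
        }
        while (!stack.isEmpty()) {
            int node = stack.pop();
            if (node == target) return true;
            if (seen.add(node)) {
                stack.addAll(adj.getOrDefault(node, Set.of()));
            }
        }
        return false;
    }

    // Remove every edge u -> v that is implied by a longer path from u to v.
    static void reduce(Map<Integer, Set<Integer>> adj) {
        for (int u : adj.keySet()) {
            adj.get(u).removeIf(v -> reachableIndirectly(adj, u, v));
        }
    }

    public static void main(String[] args) {
        // Hypothetical citation DAG: paper 3 cites papers 2 and 1, paper 2 cites paper 1.
        Map<Integer, Set<Integer>> citations = new HashMap<>();
        citations.put(3, new HashSet<>(List.of(2, 1)));
        citations.put(2, new HashSet<>(List.of(1)));
        reduce(citations);
        System.out.println(citations);   // edge 3 -> 1 is removed: {2=[1], 3=[2]}
    }
}
```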
  • Citation Analysis for the Modern Instructor: an Integrated Review of Emerging Research
    Citation Analysis for the Modern Instructor: An Integrated Review of Emerging Research. Chris Piotrowski, University of West Florida, USA.
    Abstract: While online instructors may be versed in conducting e-Research (Hung, 2012; Thelwall, 2009), today's faculty are probably less familiar with the rapidly advancing fields of bibliometrics and informetrics. One key feature of research in these areas is citation analysis, a rather intricate operational feature available in modern indexes such as Web of Science, Scopus, Google Scholar, and PsycINFO. This paper reviews the recent extant research on bibliometrics within the context of citation analysis. Particular focus is on empirical studies, review essays, and critical commentaries on citation-based metrics across interdisciplinary academic areas. Research that relates to the interface between citation analysis and applications in higher education is discussed. Some of the attributes and limitations of citation operations of contemporary databases that offer citation searching or cited reference data are presented. This review concludes that: a) citation-based results can vary largely and are contingent on academic discipline or specialty area, b) databases that offer citation options rely on idiosyncratic methods, coverage, and transparency of functions, c) despite initial concerns, research from open access journals is being cited in traditional periodicals, and d) the field of bibliometrics is rather perplexing with regard to functionality, and research is advancing at an exponential pace. Based on these findings, online instructors would be well served to stay abreast of developments in the field.
    Keywords: bibliometrics, informetrics, citation analysis, information technology, open resource and electronic journals
    Introduction: In an ever increasing manner, the educational field is irreparably linked to advances in information technology (Plomp, 2013).
  • Citation Analysis as a Tool in Journal Evaluation: Journals Can Be Ranked by Frequency and Impact of Citations for Science Policy Studies
    Citation Analysis as a Tool in Journal Evaluation: Journals can be ranked by frequency and impact of citations for science policy studies. Eugene Garfield.
    [NOTE: The article reprinted here was referenced in the essay which begins on pg. 409 in Volume 1. Its inadvertent omission was discovered too late to include it at its proper location, immediately following the essay.]
    As a communications system, the network of journals that play a paramount role in the exchange of scientific and technical information is little understood. Periodically since 1927, when Gross and Gross published their study (1) of references in 1 year's issues of the Journal of the American Chemical Society, pieces of the network have been illuminated by the work of Bradford (2), Allen (3), Gross and Woodford (4), Hooker (5), Henkle (6), Fussler (7), Brown (8), and others (9). Nevertheless, there is still no map of the journal network as a whole. To date, studies of the network and of ...
    ... quinquennially, but the data base from which the volumes are compiled is maintained on magnetic tape and is updated weekly. At the end of 1971, this data base contained more than 27 million references to about 10 million different published items. These references appeared over the past decade in the footnotes and bibliographies of more than 2 million journal articles, communications, letters, and so on. The data base is, thus, not only multidisciplinary, it covers a substantial period of time and, being in machine-readable form, is amenable to extensive manipulation by computer.
  • How Can Citation Impact in Bibliometrics Be Normalized?
    How can citation impact in bibliometrics be normalized? A new approach combining citing-side normalization and citation percentiles. Lutz Bornmann, Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Hofgartenstr. 8, 80539 Munich, Germany. [email protected]
    Citation: Bornmann, L. (2020). How can citation impact in bibliometrics be normalized? A new approach combining citing-side normalization and citation percentiles. Quantitative Science Studies, 1(4), 1553-1569. https://doi.org/10.1162/qss_a_00089. Received: 8 May 2020; Accepted: 30 July 2020. Handling Editor: Ludo Waltman.
    Keywords: bibliometrics, citation analysis, citation percentiles, citing-side normalization
    Abstract: Since the 1980s, many different methods have been proposed to field-normalize citations. In this study, an approach is introduced that combines two previously introduced methods: citing-side normalization and citation percentiles. The advantage of combining two methods is that their advantages can be integrated in one solution. Based on citing-side normalization, each citation is field weighted and, therefore, contextualized in its field. The most important advantage of citing-side normalization is that it is not necessary to work with a specific field categorization scheme for the normalization procedure. The disadvantages of citing-side normalization (the calculation is complex and the numbers are elusive) can be compensated for by calculating percentiles based on weighted citations that result from citing-side normalization. On the one hand, percentiles are easy to understand: They are the percentage of papers published in the same year with a lower citation impact.
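    A compact way to state the combined approach, under the assumption that the citing-side weight is fractional counting by the reference count of the citing paper (the paper itself may use a different weighting scheme), is the following sketch:

```latex
% Citing-side weight of one citation from citing paper j, assuming fractional
% counting by the reference count r_j of the citing paper:
w_j = \frac{1}{r_j}

% Field-weighted citation impact of paper i, where C(i) is the set of papers citing i:
c_i^{*} = \sum_{j \in C(i)} w_j

% Citation percentile of paper i among the n papers published in the same year:
P_i = 100 \cdot \frac{\left|\{\, k : c_k^{*} < c_i^{*} \,\}\right|}{n}
```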
  • Don's Conference Notes
    Don's Conference Notes, by Donald T. Hawkins (Freelance Conference Blogger and Editor) <[email protected]>
    Information Transformation: Open. Global. Collaborative: NFAIS's 60th Anniversary Meeting
    Column Editor's Note: Because of space limitations, this is an abridged version of my report on this conference. You can read the full article, which includes descriptions of additional sessions, at https://against-the-grain.com/2018/04/nfaiss-60th-anniversary-meeting/. — DTH
    In 1958, G. Miles Conrad, director of Biological Abstracts, convened a meeting of representatives from 14 information services to collaborate and cooperate in sharing technology and discussing issues of mutual interest. The National Federation of Abstracting and Indexing (now Advanced Information) Services (NFAIS) was formed as a result.
    Plenary Sessions: Regina Joseph, founder of Sibylink (http://www.sibylink.com/) and co-founder of pytho (http://www.pytho.io/), consultancies that specialize in decision science and information design, said that we are the gatekeepers of knowledge and information. Information has never been more accessible, in demand, but simultaneously under attack. There is both a challenge and an opportunity in information system availability and diversity. News outlets have become organs of influence, and social networks are changing our consumption of information (for example, 26% of news retrieval is through social media). We are willingly allowing ourselves to be controlled. How will we be able to harness the advantages of open access to information when the ability to access it might be compromised? We need people with multiple areas of specialist knowledge but who are also connected with broad and general knowledge.
  • Google Scholar, Web of Science, and Scopus
    Google Scholar, Web of Science, and Scopus: a systematic comparison of citations in 252 subject categories. Alberto Martín-Martín (1), Enrique Orduna-Malea (2), Mike Thelwall (3), Emilio Delgado López-Cózar (1). Version 1.6, March 12, 2019.
    Journal of Informetrics, vol. 12, no. 4, pp. 1160-1177, 2018. https://doi.org/10.1016/J.JOI.2018.09.002
    Abstract: Despite citation counts from Google Scholar (GS), Web of Science (WoS), and Scopus being widely consulted by researchers and sometimes used in research evaluations, there is no recent or systematic evidence about the differences between them. In response, this paper investigates 2,448,055 citations to 2,299 English-language highly-cited documents from 252 GS subject categories published in 2006, comparing GS, the WoS Core Collection, and Scopus. GS consistently found the largest percentage of citations across all areas (93%-96%), far ahead of Scopus (35%-77%) and WoS (27%-73%). GS found nearly all the WoS (95%) and Scopus (92%) citations. Most citations found only by GS were from non-journal sources (48%-65%), including theses, books, conference papers, and unpublished materials. Many were non-English (19%-38%), and they tended to be much less cited than citing sources that were also in Scopus or WoS. Despite the many unique GS citing sources, Spearman correlations between citation counts in GS and WoS or Scopus are high (0.78-0.99). They are lower in the Humanities, and lower between GS and WoS than between GS and Scopus. The results suggest that in all areas GS citation data is essentially a superset of WoS and Scopus, with substantial extra coverage.
  • A New Index for the Citation Curve of Researchers
    A New Index for the Citation Curve of Researchers. Claes Wohlin, School of Engineering, Blekinge Institute of Technology, Box 520, SE-37225 Ronneby, Sweden. E-mail: [email protected]
    Abstract: The Internet has made it possible to move towards researcher and article impact instead of solely focusing on journal impact. To support citation measurement, several indexes have been proposed, including the h-index. The h-index provides a point estimate. To address this, a new index is proposed that takes the citation curve of a researcher into account. This article introduces the index, illustrates its use and compares it to rankings based on the h-index as well as rankings based on publications. It is concluded that the new index provides an added value, since it balances citations and publications through the citation curve.
    Keywords: impact factor, citation analysis, h-index, g-index, Hirsch, w-index
    1. Introduction: The impact of research is essential. In attempts to quantify impact, different measures have been introduced. This includes the impact factor for journals introduced 45 years ago by GARFIELD and SHER [1963] and GARFIELD [2006]. However, it may be argued that the impact of a journal becomes less important as research articles become available on-line and can be easily located even if they are not published in a high impact journal. In general, high impact articles are more important than the journal itself. Historically, highly cited articles have been published in high impact journals, but this pattern may change as, for example, conference articles are as easily accessible as journal articles.
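    For reference, the h-index that this abstract treats as a point estimate of the citation curve can be computed in a few lines. The sketch below uses the standard definition (the largest h such that the researcher has h papers with at least h citations each) on a made-up citation list.

```java
import java.util.Arrays;

public class HIndex {

    // Largest h such that the author has h papers with at least h citations each.
    static int hIndex(int[] citations) {
        int[] sorted = citations.clone();
        Arrays.sort(sorted);                        // ascending order
        int n = sorted.length;
        for (int h = n; h >= 1; h--) {
            if (sorted[n - h] >= h) {               // h-th most cited paper has >= h citations
                return h;
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        int[] citations = {25, 8, 5, 4, 3, 1, 0};   // hypothetical citation counts
        System.out.println(hIndex(citations));      // prints 4
    }
}
```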
  • Google Scholar: the Democratization of Citation Analysis?
    Google Scholar: the democratization of citation analysis? Anne-Wil Harzing (1)*, Ron van der Wal (2). Version November 2007. Accepted for Ethics in Science and Environmental Politics. Copyright © 2007 Anne-Wil Harzing and Ron van der Wal. All rights reserved.
    (1) Department of Management, Faculty of Economics & Commerce, University of Melbourne, Parkville Campus, Parkville, Victoria 3010, Australia. *Email: [email protected], web: www.harzing.com
    (2) Tarma Software Research, GPO Box 4063, Melbourne, Victoria 3001, Australia
    Running head: citation analysis with Google Scholar
    Key words: Google Scholar, citation analysis, publish or perish, h-index, g-index, journal impact factor
    Abstract: Traditionally, the most commonly used source of bibliometric data is Thomson ISI Web of Knowledge, in particular the (Social) Science Citation Index and the Journal Citation Reports (JCR), which provide the yearly Journal Impact Factors (JIF). This paper presents an alternative source of data (Google Scholar, GS) as well as three alternatives to the JIF to assess journal impact (the h-index, g-index and the number of citations per paper). Because of its broader range of data sources, the use of GS generally results in more comprehensive citation coverage in the area of Management and International Business. The use of GS particularly benefits academics publishing in sources that are not (well) covered in ISI. Among these are: books, conference papers, non-US journals, and in general journals in the field of Strategy and International Business.
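    As an illustration of one of the alternatives mentioned above, the g-index, here is a minimal sketch using the usual definition (the largest g such that the g most cited papers together have at least g squared citations), again on made-up counts rather than data from the paper.

```java
import java.util.Arrays;
import java.util.Comparator;

public class GIndex {

    // Largest g such that the g most cited papers together have at least g*g citations.
    static int gIndex(int[] citations) {
        Integer[] sorted = Arrays.stream(citations).boxed().toArray(Integer[]::new);
        Arrays.sort(sorted, Comparator.reverseOrder());   // most cited first
        int g = 0;
        int cumulative = 0;
        for (int i = 0; i < sorted.length; i++) {
            cumulative += sorted[i];
            if (cumulative >= (i + 1) * (i + 1)) {
                g = i + 1;              // top (i+1) papers reach (i+1)^2 citations
            }
        }
        return g;
    }

    public static void main(String[] args) {
        int[] citations = {25, 8, 5, 4, 3, 1, 0};   // hypothetical citation counts
        System.out.println(gIndex(citations));      // prints 6
    }
}
```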
  • Finding Seminal Scientific Publications with Graph Mining
    Degree project in Computer Science, second level. Stockholm, Sweden 2015. KTH Royal Institute of Technology, School of Computer Science and Communication.
    Finding seminal scientific publications with graph mining. Martin Runelöv. Master's Thesis at CSC. Supervisor at CSC: Olov Engwall. Supervisor at SICS: John Ardelius. August 16, 2015.
    Abstract: We investigate the applicability of network analysis to the problem of finding seminal publications in scientific publishing. In particular, we focus on the network measures betweenness centrality, the so-called backbone graph, and the burstiness of citations. The metrics are evaluated using precision-related scores with respect to gold standards based on fellow programmes and manual annotation. Citation counts, PageRank, and random selection are used as baselines. We find that the backbone graph provides us with a way to possibly discover seminal publications with low citation count, and combining betweenness and burstiness gives results on par with citation count.
    Referat (Swedish abstract, translated into English): Using graph analysis to find seminal scientific articles. This thesis examines whether analysis of citation graphs can be used to find seminal scientific publications. In particular, betweenness centrality, the so-called backbone graph, and the burstiness of citations are studied. These measures are evaluated using precision metrics with respect to gold standards based on fellow programmes and manual annotation. Citation counts, PageRank, and random selection are used for comparison. The results show that the backbone graph can help to potentially discover seminal publications with a low number of citations, and that a combination of betweenness and burstiness gives results on par with those obtained by counting citations.
  • Bibliometric Study of Indian Open Access Social Science Literature
    University of Nebraska - Lincoln, DigitalCommons@University of Nebraska - Lincoln, Library Philosophy and Practice (e-journal), 2018.
    Kirtania, Deep Kumar, "Bibliometric Study of Indian Open Access Social Science Literature" (2018). Library Philosophy and Practice (e-journal). 1867. http://digitalcommons.unl.edu/libphilprac/1867
    Bibliometric Study of Indian Open Access Social Science Literature. Deep Kumar Kirtania, Library Trainee, West Bengal Secretariat Library, and M.Phil Scholar, Department of Library & Information Science, University of Calcutta. [email protected]
    Abstract: The purpose of this study is to trace the growth and development of social science literature in the open access environment published from India. A total of 1195 open access papers published and indexed in the Scopus database over ten years have been considered for the present study. Research publications from 2008 to 2017 have been analyzed in terms of literature growth, authorship pattern, activity index, prolific authors and institutions, publication type, channel and citation count, to provide a clear picture of Indian social science research. The study shows the dominance of shared authorship, and sixty percent of the total articles have been cited. This original research paper describes the research productivity of social science in the open access context and will be helpful to social scientists and library professionals as a whole.
    Key Words: Bibliometric study, Research Growth, Social Sciences, Open Access, Scopus, India
    Introduction: Scholarly communications have been the primary source of creating and sharing knowledge by academics and researchers from the mid 1600s (Chan, Gray & Kahn, 2012).
  • Mining Citation Information from CiteSeer Data
    Mining citation information from CiteSeer data. Dalibor Fiala, University of West Bohemia, Univerzitní 8, 30614 Plzeň, Czech Republic. Phone: 00420 377 63 24 29, fax: 00420 377 63 24 01, email: [email protected]
    Abstract: The CiteSeer digital library is a useful source of bibliographic information. It allows for retrieving citations, co-authorships, addresses, and affiliations of authors and publications. In spite of this, it has been relatively rarely used for automated citation analyses. This article describes our findings after extensively mining from the CiteSeer data. We explored citations between authors and determined rankings of influential scientists using various evaluation methods including citation and in-degree counts, HITS, PageRank, and its variations based on both the citation and collaboration graphs. We compare the resulting rankings with lists of computer science award winners and find out that award recipients are almost always ranked high. We conclude that CiteSeer is a valuable, yet not fully appreciated, repository of citation data and is appropriate for testing novel bibliometric methods.
    Keywords: CiteSeer, citation analysis, rankings, evaluation.
    Introduction: Data from CiteSeer have been surprisingly little explored in the scientometric literature. One of the reasons for this may have been fears that the data gathered in an automated way from the Web are inaccurate – incomplete, erroneous, ambiguous, redundant, or simply wrong. Also, the uncontrolled and decentralized nature of the Web is said to simplify manipulating and biasing Web-based publication and citation metrics. However, there have been a few attempts at processing the CiteSeer data which we will briefly mention. Zhou et al.
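    As a rough sketch of the kind of PageRank-style ranking described above, the following runs a plain power iteration with damping on a tiny hypothetical citation graph; it is not the weighted variants evaluated in the paper, and the graph is invented for illustration.

```java
import java.util.*;

public class CitationPageRank {

    // Basic PageRank by power iteration; the map sends each node to the nodes it cites.
    static Map<String, Double> pageRank(Map<String, List<String>> cites,
                                        double damping, int iterations) {
        Set<String> nodes = new HashSet<>(cites.keySet());
        cites.values().forEach(nodes::addAll);
        int n = nodes.size();

        Map<String, Double> rank = new HashMap<>();
        for (String node : nodes) rank.put(node, 1.0 / n);

        for (int it = 0; it < iterations; it++) {
            Map<String, Double> next = new HashMap<>();
            for (String node : nodes) next.put(node, (1.0 - damping) / n);
            for (String src : nodes) {
                List<String> out = cites.getOrDefault(src, List.of());
                if (out.isEmpty()) {
                    // Dangling node: spread its rank evenly over all nodes.
                    for (String node : nodes)
                        next.merge(node, damping * rank.get(src) / n, Double::sum);
                } else {
                    for (String dst : out)
                        next.merge(dst, damping * rank.get(src) / out.size(), Double::sum);
                }
            }
            rank = next;
        }
        return rank;
    }

    public static void main(String[] args) {
        // Hypothetical citation graph: papers A and B cite C; C cites A.
        Map<String, List<String>> cites = new HashMap<>();
        cites.put("A", List.of("C"));
        cites.put("B", List.of("C"));
        cites.put("C", List.of("A"));
        System.out.println(pageRank(cites, 0.85, 50));  // C ranks highest
    }
}
```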
  • Citation Analysis: Web of Science, Scopus
    Citation analysis: Web of Science, Scopus. Golestan University of Medical Sciences, Information Management and Research Network.
    Citation Analysis
    • Citation analysis is the study of the impact and assumed quality of an article, an author, or an institution based on the number of times works and/or authors have been cited by others.
    • Citation analysis is the examination of the frequency, patterns, and graphs of citations in documents. It uses the pattern of citations, links from one document to another document, to reveal properties of the documents. A typical aim would be to identify the most important documents in a collection. A classic example is that of the citations between academic articles and books. [1][2] The judgements produced by judges of law to support their decisions refer back to judgements made in earlier cases, so citation analysis in a legal context is important. Another example is provided by patents, which contain prior art, citing earlier patents relevant to the current claim.
    Citation Databases
    • Citation databases are databases that have been developed for evaluating publications. Citation databases enable you to count citations and check, for example, which articles or journals are the most cited ones.
    • In a citation database you get information about who has cited an article and how many times an author has been cited. You can also list all articles citing the same source.
    • The most important citation databases are "Web of Science", "Scopus" and "Google Scholar".
    Web of Science
    • Web of Science is owned and produced by Thomson Reuters. WoS is composed of three databases containing citations from international scientific journals: Arts & Humanities Citation Index (AHCI), Social Sciences Citation Index (SSCI), and Science Citation Index (SCI).
    • Journal coverage: aims to include the best journals of all fields.