New Methods for Metadata Extraction from Scientific Literature

Total pages: 16
File type: PDF, size: 1020 KB

Ph.D. Thesis
Dominika Tkaczyk
Systems Research Institute, Polish Academy of Sciences; ICM, University of Warsaw
Supervisor: Professor Marek Niezgódka, PhD, DSc
Auxiliary supervisor: Łukasz Bolikowski, PhD, ICM, University of Warsaw
arXiv:1710.10201v1 [cs.DL] 27 Oct 2017
November 2015

Abstract

Spreading ideas and announcing new discoveries and findings in the scientific world is typically realized by publishing and reading scientific literature. Within the past few decades we have witnessed a digital revolution, which moved scholarly communication to electronic media and also resulted in a substantial increase in its volume. Nowadays, keeping track of the latest scientific achievements poses a major challenge for researchers. Scientific information overload is a severe problem that slows down scholarly communication and knowledge propagation across academia.

Modern research infrastructures facilitate studying scientific literature by providing intelligent search tools, proposing similar and related documents, building and visualizing interactive citation and author networks, assessing the quality and impact of articles using citation-based statistics, and so on. To provide such high-quality services, a system requires access not only to the text content of stored documents, but also to their machine-readable metadata. Since in practice good-quality metadata is not always available, there is a strong demand for a reliable automatic method of extracting machine-readable metadata directly from source documents.

Our research addresses these problems by proposing an automatic, accurate and flexible algorithm for extracting a wide range of metadata directly from scientific articles in born-digital form. The extracted information includes basic document metadata, structured full text and the bibliography section.
Designed as a universal solution, the proposed algorithm is able to handle a vast variety of publication layouts with high precision, and is thus well suited for analyzing heterogeneous document collections. This was achieved by employing supervised and unsupervised machine-learning algorithms trained on large, diverse datasets. Our evaluation showed good performance of the proposed metadata extraction algorithm, and a comparison with similar solutions showed that it outperforms the competition for most metadata types.

The proposed method is a reliable and accurate solution to the problem of extracting metadata from documents. It allows modern research infrastructures to provide intelligent tools and services that support readers in consuming the growing volume of scientific literature, facilitating communication among scientists and improving knowledge propagation and the quality of research in the scientific world.

Contents

1 Overview
  1.1 Background and Motivation
  1.2 Problem Statement
  1.3 Key Contributions
  1.4 Thesis Structure
2 State of the Art
  2.1 Metadata and Content Formats
  2.2 Relevant Machine Learning Techniques
  2.3 Document Analysis
3 Document Content Extraction
  3.1 Algorithm Overview
  3.2 Document Layout Analysis
  3.3 Document Region Classification
  3.4 Metadata Extraction
  3.5 Bibliography Extraction
  3.6 Structured Body Extraction
  3.7 Our Contributions
  3.8 Limitations
4 Experiments and Evaluation
  4.1 Datasets Overview
  4.2 Page Segmentation
  4.3 Content Classification
  4.4 Affiliation Parsing
  4.5 Reference Parsing
  4.6 Extraction Evaluation
  4.7 Time Performance
5 Summary and Future Work
  5.1 Summary
  5.2 Conclusions
  5.3 Outlook
A Detailed Evaluation Results
  A.1 Content Classification Parameters
  A.2 Content Classification Evaluation
  A.3 Performance Scores
  A.4 Statistical Analysis
B System Architecture
  B.1 System Usage
  B.2 System Components
Acknowledgements

List of Figures

1.1 The number of records by year of publication in PubMed
1.2 The number of records by year of publication and type in DBLP
1.3 Citation and related documents networks in Scopus
1.4 Documents and authors network in COMAC
1.5 Intelligent metadata input interface from Mendeley
3.1 The architecture of the extraction algorithm
3.2 The bounding boxes of characters
3.3 The bounding boxes of words
3.4 The bounding boxes of lines
3.5 The bounding boxes of zones
3.6 Nearest neighbours in Docstrum algorithm
3.7 Nearest-neighbour distance histogram in Docstrum algorithm
3.8 The reading order of zones
3.9 The mutual position of two zone groups
3.10 Affiliations associated with authors using indexes
3.11 Affiliations associated with authors using distance
3.12 An example of a parsed affiliation string
3.13 The references section with marked first lines of the references
3.14 An example of a parsed bibliographic reference
3.15 Examples of headers in different layouts
3.16 An example of a multi-line header
4.1 Semi-automatic method of creating GROTOAP2 dataset
4.2 The histogram of the zone label coverage of the documents
4.3 The results of page segmentation evaluation
4.4 The results of page segmentation evaluation with filtered zones
4.5 Average F-score for category classifier for various numbers of features
4.6 Average F-score for metadata classifier for various numbers of features
4.7 Average F-score for body classifier for various numbers of features
4.8 The visualization of the confusion matrix of category classification evaluation
4.9 The visualization of the confusion matrix of metadata classification evaluation
4.10 The visualization of the confusion matrix of body classification evaluation
4.11 The visualization of the confusion matrix of affiliation token classification
4.12 The visualization of the confusion matrix of citation token classification evaluation
4.13 The results of bibliographic reference parser evaluation
4.14 The results of the evaluation of extracting basic metadata on PMC
4.15 The results of the evaluation of extracting basic metadata on Elsevier
4.16 The results of the evaluation of extracting authorship metadata on PMC
4.17 The results of the evaluation of extracting authorship metadata on Elsevier
4.18 The results of the evaluation of extracting bibliographic metadata on PMC
4.19 The results of the evaluation of extracting bibliographic metadata on Elsevier
4.20 The results of the evaluation of extracting bibliographic references
4.21 The results of the evaluation of extracting document headers on PMC
4.22 The histogram of the extraction algorithm's processing time
4.23 Processing time as a function of the number of pages of a document
4.24 The boxplots of the percentage of time spent on each phase of the extraction
4.25 The percentage of time spent on each algorithm step
4.26 The comparison of the average processing time of different extraction algorithms
B.1 The extraction algorithm demonstrator service

List of Tables

3.1 The decomposition of layout analysis
3.2 The decomposition of metadata extraction
3.3 The decomposition of bibliography extraction
3.4 The decomposition of body extraction
4.1 The dataset types used for the experiments
4.2 The comparison of the parameters of GROTOAP and GROTOAP2 datasets
4.3 The results of manual evaluation of GROTOAP2
4.4 The results of the indirect evaluation of GROTOAP2
4.5 Final SVM kernels and parameter values for SVM classifiers
4.6 The results of category classification evaluation
4.7 The results of metadata classification evaluation
4.8 The results of body classification evaluation
4.9 The results of affiliation token classification evaluation
4.10 The results of citation token classification evaluation
4.11 The scope of the information extracted by various metadata extraction systems
4.12 The number and percentage of successfully processed files for each system
4.13 The winners of extracting each metadata category in each test set
4.14 The summary of the metadata extraction systems comparison
A.1 The results of SVM parameter searching for category classification
A.2 The results of SVM parameter searching for metadata classification
A.3 The results of SVM parameter searching for body classification
A.4 The confusion matrix of category classification evaluation
A.5 The confusion matrix of metadata classification evaluation
A.6 The confusion matrix of body content classification evaluation
A.7 The confusion matrix of affiliation token classification evaluation
A.8 The confusion matrix of citation token classification evaluation
A.9 The results of the comparison evaluation of basic metadata extraction
A.10 The results of the comparison evaluation of authorship metadata extraction
A.11 The results of the comparison evaluation of bibliographic metadata extraction
A.12 The results of the comparison evaluation of bibliographic references extraction
A.13 The results of the comparison evaluation of section headers extraction
A.14 The results of statistical tests of the performance of title extraction
A.15 The results of statistical tests of the performance of abstract extraction
A.16 The results of statistical tests of the performance of keywords extraction
A.17 The results of statistical tests of the performance of authors extraction
A.18 The results of statistical tests of the performance of affiliations extraction
A.19 The results of statistical tests of the performance of author-affiliation relation extraction
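The thesis classifies document zones into categories such as title, author or body using supervised machine learning over geometric and textual features (SVM classifiers, per Tables 4.5 and A.1-A.3). As a minimal illustrative sketch only, assuming made-up feature values, labels and a 1-nearest-neighbour rule in place of the thesis's actual SVM feature set, the core idea of classifying a zone by its position and font size might look like this:

```python
# Illustrative sketch: supervised zone classification on geometric features.
# The feature tuples, labels and the 1-NN rule below are hypothetical stand-ins
# for the SVM classifiers and richer feature set described in the thesis.
import math

# Each zone: (x_center, y_center, mean_font_size), y measured from the page top.
TRAINING_ZONES = [
    ((306, 60, 18.0), "title"),
    ((306, 120, 10.0), "author"),
    ((306, 400, 9.0), "body"),
    ((306, 700, 8.0), "references"),
]

def classify_zone(features, training=TRAINING_ZONES):
    """Assign the label of the nearest training zone in feature space."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda item: dist(item[0], features))[1]

# A large-font zone near the top of the page is classified as the title.
print(classify_zone((300, 65, 17.5)))
```

In the real system, many such features (distances to page edges, character statistics, dictionary hits) feed trained SVM models rather than a toy nearest-neighbour rule.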