Detecting Anatomical and Functional Connectivity Relations in Biomedical Literature via Language Representation Models


Ibrahim Burak Ozyurt (FDI Lab, Dept. of Neuroscience, UCSD, La Jolla, USA; [email protected]), Joseph Menke (SciCrunch.com, San Diego, USA; [email protected]), Anita Bandrowski (FDI Lab, Dept. of Neuroscience, UCSD, La Jolla, USA; [email protected]), Maryann E. Martone (FDI Lab, Dept. of Neuroscience, UCSD, La Jolla, USA; [email protected])

Proceedings of the Second Workshop on Scholarly Document Processing, pages 27–35, June 10, 2021. ©2021 Association for Computational Linguistics

Abstract

Understanding of nerve-organ interactions is crucial to facilitate the development of effective bioelectronic treatments. Towards the end of developing a systematized and computable wiring diagram of the autonomic nervous system (ANS), we introduce a curated ANS connectivity corpus together with several neural language representation model based connectivity relation extraction systems. We also show that active learning guided curation for labeled corpus expansion significantly outperforms randomly selecting connectivity relation candidates, minimizing curation effort. Our final relation extraction system achieves F1 = 72.8% on anatomical connectivity and F1 = 74.6% on functional connectivity relation extraction.

1 Introduction

The NIH Common Fund's Stimulating Peripheral Activity to Relieve Conditions (SPARC) program aims to transform our understanding of nerve-organ interactions to help spur the development of effective bioelectronic treatments. Bioelectronic medicine represents the convergence of molecular medicine, neuroscience, engineering and computing to develop devices to diagnose and treat diseases (Olofsson and Tracey, 2017). One of the projects within this large consortium is to create a systematized and computable wiring diagram of the autonomic nervous system, a part of the "wiring system" that travels throughout the body transmitting messages between the peripheral organs and the brain or spinal cord. While diagrams of nerves are currently available in medical texts (Standring and Gray, 2008), the SPARC program seeks to map these connections at higher levels of detail and with greater accuracy. Additionally, the diagrams in these medical texts are not generally queryable, nor are they sufficiently detailed to include the granular paths that these nerves travel. Such information would be needed, for example, to understand where reliable access points to a particular nerve might be so that stimulation only affects the most relevant nerve, or to understand the mechanisms behind stimulation applied at particular locations. Many scientific studies contain information about individual nerves and at times the paths they traverse, but to our knowledge, no systematic approach has been attempted to bring these large quantities of information together into a computationally accessible format.

The SPARC project is building a cross-species connectivity knowledge base that contains detailed information about individual nerves, their pathways, cells of origin and synaptic targets. To date, this knowledge base has been populated through the development of detailed models of circuitry by experts funded through the SPARC project using the ApiNATOMY platform (Kokash and de Bono, 2021). ApiNATOMY provides a modeling language for representing the complexity of functional and anatomical circuitry in a standardized form. The circuitry contained in these models represents expert knowledge derived from the synthesis of the expert's own work and the synthesis of, in some cases, hundreds of scientific publications. However, to ensure that information in the SPARC knowledge base is comprehensive and up to date, i.e., that it represents the current state of knowledge about autonomic nervous system (ANS) connectivity, we sought to augment the expert-based model approach with experimental information derived from the primary scientific literature. As there are thousands of papers and additional sources like textbooks, we utilized natural language processing to identify sentences that contained information on neuronal connectivity in the ANS.

The task was approached by first gathering the relevant scientific literature by matching bodily structures at a variety of anatomical levels (e.g., gasserian ganglion, vagus nerve, brainstem) from a constructed vocabulary at the sentence level. Then, annotators classified each structure-to-structure relationship using only the information provided within the sentence, based on the connectivity types defined in our annotation guideline. This structured data was then used to train our connectivity relation models. Data from two curators was used to assess inter-curator agreement, to determine whether the annotation guidelines are sufficient to "teach" the task to humans.

We sorted connectivity statements into several types: anatomical connectivity, functional connectivity, structural connectivity, topological connectivity and general connectivity, as well as no connectivity. The general connectivity and no connectivity categories can be thought of as statements that are too vague to be of much direct use for our use case. The most important statements are anatomical connectivity, elucidating which parts are connected physically, and functional connectivity, elucidating which parts are connected functionally. A definition and an example for each connectivity type used for annotation is shown in Table 1. Of course, with single sentences it is difficult to define a direct functional relationship, which typically rests on the latency with which a signal is detected between two elements (Bennett, 2001). However, statements about latency are very rare in the subset of the peripheral nervous system literature, whereas somewhat more general statements about functional relationships that, for example, describe damage to one area and altered functioning in another are more abundant. We hypothesize that when such statements are reasonably abundant, a detection classifier will be easier to train.

In relation extraction, long-range relations are usually handled using dependency parse tree information. In traditional feature-based models, paths in the dependency parse tree between entities are used as features (Kambhatla, 2004), an approach which suffered from the sparsity of the feature patterns. More recently, neural models are increasingly employed for relation extraction instead of feature engineering, using vectorized word embeddings. The dependency information is represented as computation graphs along the parse tree (Zhang et al., 2018). Sequence models, on the other hand, work at the surface level and represent long-distance relationships via either convolutional or recurrent neural networks and an attention mechanism (Zhang et al., 2017).

In the biomedical domain, relation extraction work has traditionally focused on protein-protein, gene-disease or protein-chemical interactions. Several labeled datasets, such as GAD (Bravo et al., 2015), a gene-disease relation dataset, and CHEMPROT (Krallinger et al., 2017), a protein-chemical multi-relation dataset, are publicly available. Neural sequence models have also been applied to the protein-chemical relation extraction task (Lim and Kang, 2018).

Recently, sentence-level transformer-based language representation models such as BERT (Devlin et al., 2019) have shown superior downstream performance on many NLP tasks. A biomedical domain adapted version of BERT called BioBERT (Lee et al., 2019) has shown state-of-the-art performance on several biomedical relation extraction tasks.

While most transformer-based language representation models are pretrained on sentences where a predefined percentage of the tokens are masked and the model learns to predict the masked tokens, a recently introduced language representation model, ELECTRA (Clark et al., 2020), instead learns to discriminate whether a token in the original input has been replaced by a language generator model. The generator is a BERT-like generative model that is co-trained with the discriminative model.

While there are efforts to extract brain connectivity information from the neuroscience literature (Richardet et al., 2015), their focus is on the cognitive parts of the brain rather than the ANS. In this paper, we introduce a labeled ANS connectivity corpus, together with four biomedical domain adapted ELECTRA models, that we have used to develop an anatomical and functional connectivity relation extraction system that outperforms BioBERT.

2 Methods

2.1 Vocabulary

In order to better structure information from papers, anatomical structure labels were drawn from a set of relevant ontologies, also approved for use by the SPARC project. These ontology terms include primarily FMA (RRID:SCR_003379), UBERON (RRID:SCR_010668), and NIFSTD (RRID:SCR_005414) terms, and they are listed

Table 1: Relation | Definition | Example

functional | a relationship was determined to exist between two structures using physiological techniques | "The HB reflex is a reflex initiated by lung inflation, which excited the myelinated fibers of vagus nerve, pulmonary stretch receptors [11,19]."

anatomical | a physical synaptic relationship was observed between two structures using anatomical techniques such as tract tracing | "Only the most prominent nervous connections, such as the penis nerve cord (pnc, Fig. 8a), connecting the ventral ganglion to the penis ganglion can be detected."

structural | a relationship
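The abstract reports that active learning guided curation of connectivity relation candidates outperforms random selection. A minimal uncertainty-sampling sketch of that selection step is shown below; the entropy-based scoring, the toy sentences, and the three-way class distributions (e.g., anatomical / functional / no connectivity) are illustrative assumptions, not the authors' implementation.

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted label distribution (higher = less certain)."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def select_for_curation(candidates, k):
    """Rank unlabeled connectivity-relation candidates by model uncertainty
    and return the k most uncertain sentences for manual annotation.

    `candidates` maps a candidate sentence to its predicted class
    distribution over relation types (hypothetical classifier output).
    """
    ranked = sorted(candidates.items(), key=lambda kv: entropy(kv[1]), reverse=True)
    return [sentence for sentence, _ in ranked[:k]]

# Toy predictions from a hypothetical sentence-level classifier:
preds = {
    "The vagus nerve projects to the nucleus tractus solitarius.": [0.90, 0.05, 0.05],
    "Lung inflation excited myelinated fibers of the vagus nerve.": [0.40, 0.35, 0.25],
    "The brainstem was dissected and weighed.": [0.02, 0.03, 0.95],
}
print(select_for_curation(preds, 1))
# → ['Lung inflation excited myelinated fibers of the vagus nerve.']
```

Sentences the model is least sure about are routed to curators first, so each annotation carries more information than a randomly drawn candidate, which is the intuition behind the reported curation-effort savings.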