SBP Review: Neuromorphological File Format v 4.0


Reviewers: Samir Das and Trygve Leergaard

Authors: Susan Tappan, Maci Heal, Aidan Sullivan, INCF Standards and Best Practices Committee, and Jyl Boline

Basic metadata

Title: Neuromorphological File Format v 4.0

Brief description: MBF Bioscience's Neuromorphological file format provides an openly documented and broadly used digital reconstruction and modeling structure for microscopic anatomies. The Extensible Markup Language (XML) based file structure has been shaped for over 30 years by ever-advancing science and technology and by the input of neuroscientists throughout the world. It balances structure with flexibility by storing each geometrically modeled object as a unique data element and by providing mechanisms for grouping any number and type of data elements. This accommodates a range of analytical possibilities, such as detailed morphometric analyses, simulations, and electrotonic modeling of neurons. File-level metadata is retained to document the origin of the sample, ensuring that the provenance of derivative data is tracked and that important source information is not separated from the corresponding data.

URL: http://www.mbfbioscience.com/filespecification

Steward: Aidan Sullivan, Susan Tappan

Relevant publication: Sullivan, A. E., Tappan, S. J., Angstman, P. J., Rodriguez, A., Thomas, G. C., Hoppes, D. M., Abdul-Karim, M. A., Heal, M. L., & Glaser, J. R. (2020). A comprehensive, FAIR file format for neuroanatomical structure modeling. bioRxiv, 306670. https://doi.org/10.1101/2020.09.22.306670 (submitted to Neuroinformatics and currently under review).

Nomination information: Neuromorphological File Format v 4.0 was nominated for consideration by Susan Tappan on November 25, 2020.
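As a purely illustrative sketch of the XML-based structure described above, a tiny reconstruction file might pair file-level subject metadata with geometrically modeled objects. Every element and attribute name below is a hypothetical assumption for illustration, not taken from the published specification:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical file in the spirit of the format: all element
# and attribute names here are illustrative assumptions, not the schema.
SAMPLE = """<?xml version="1.0"?>
<mbf version="4.0">
  <subject species="Mus musculus" subjectid="M-042" sex="F" age="P60"/>
  <tree type="Dendrite" color="#00FF00">
    <point x="1.00" y="2.00" z="0.50" d="1.20"/>
    <point x="1.50" y="2.40" z="0.60" d="1.10"/>
  </tree>
</mbf>
"""

def subject_metadata(xml_text):
    """Return the subject-level metadata attributes as a plain dict."""
    root = ET.fromstring(xml_text)
    subject = root.find("subject")
    return dict(subject.attrib) if subject is not None else {}

def modeled_objects(xml_text):
    """List each geometrically modeled object: (tag, type, point count)."""
    root = ET.fromstring(xml_text)
    return [(el.tag, el.get("type"), len(el.findall("point")))
            for el in root.findall("tree")]

print(subject_metadata(SAMPLE)["species"])   # Mus musculus
print(modeled_objects(SAMPLE))               # [('tree', 'Dendrite', 2)]
```

The point of the sketch is the design the description emphasizes: each modeled object is its own element, and file-level metadata travels inside the same file as the geometry it describes.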
Summary of discussion:

Neuromorphological File Format v 4.0 is the first standard created and maintained by a for-profit corporation to be submitted to INCF for endorsement. The SBP Committee feels it is important to encourage companies to keep SBPs as open as possible, but understands that this needs to be balanced with the interests of the business. Such companies can play an important role in promoting the use of standards in the neuroscience community: a standard is often easier to employ, and thus more likely to be adopted, when a researcher can generate data directly in the standard format with the aid of the tools they commonly use to generate or handle the data. This is such an instance, as the Neuromorphological File Format is the underlying standard of the tools offered by MBF.

The reviewers feel that the Neuromorphological File Format v 4.0 meets the criteria for INCF endorsement of a standard and should be put forward for community review. It is actively maintained and widely disseminated through the MBF tools. It is freely available for use, well documented, and currently being used in multiple data-sharing projects. Moreover, in response to recommendations during the review process, MBF has created a transparent and balanced mechanism for community feedback.

Open criteria:

1. Is the SBP covered under an open license so that it is free to implement and reuse by all interested parties (including commercial)? (List of open source licenses)

Under the originally proposed license, CC BY-NC-ND, it would not be open for commercial use. After discussions with members of the SBP Committee and further consideration, MBF Bioscience proposes moving to the CC BY-ND license, which would make it open to implement and reuse by all interested parties. The license, in conjunction with their governance policy, allows anyone to submit modification requests, but these must be approved by the Neuromorphological File Format's Editorial Board.
The CC BY-ND license would allow other commercial entities to export digital models of microscopic anatomies in the neuromorphological file format.

2. What license is used?

The Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND) was originally proposed, but following discussions with members of the SBP Committee and further consideration, MBF Bioscience proposes moving to the CC BY-ND license.

3. Does the SBP follow open development practices?

The governance policy allows anyone to request changes to the file format; however, each request must be approved by the Neuromorphological File Format's Editorial Board.

4. Where and how are the code/documents managed?

Neuromorphological file format documentation and examples are available at http://www.mbfbioscience.com/filespecification and https://github.com/MBFBioscience/nmf-schema. A web forum specific to the file specification facilitates community contributions and support for tool developers implementing the file format: https://forums.mbfbioscience.com/c/neuromorphological-file-specification.

5. Any additional comments on the openness of the SBP?

MBF Bioscience is a commercial company, and as this standard is an output of their proprietary software, they have put a great deal of effort into balancing their needs as a corporation with offering a mechanism for the community (including commercial users) to use the format and suggest modifications. Currently, reconstructions in the neuromorphological file format are generated only within MBF Bioscience's proprietary software. In an effort to broaden the use of the neuromorphological file format and integrate other reconstruction formats into the open ecosystem, MBF Bioscience plans to develop a tool for reading and writing the neuromorphological format. This tool will be developed in Python and made available to the community via GitHub so that users can adapt it to their specific needs.
We predict that this tool will make the neuromorphological file format more accessible to the neuroscience community, easing integration with software tools. Tool builders can use the neuromorphological reader/writer to develop tools that: convert alternative digital neuroanatomical reconstruction file formats to the open, FAIR neuromorphological file format; convert neuromorphological data files to a format that can be read by their own software; and/or extract valuable metadata stored within neuromorphological data files. Simple examples of neuromorphological data files will be hosted on the GitHub repository.

FAIR criteria

This section considers the SBP from the point of view of some (not all) of the FAIR criteria (Wilkinson et al. 2016). Is the SBP itself FAIR? Does it result in the production of FAIR research objects? Note that many of these may not apply; if so, they are left blank or marked N/A.

1. SBP uses/permits persistent identifiers where appropriate (F1)

Yes.
● Research Resource Identifiers (RRIDs) for the digital reconstruction software application and for the institution that produces that software are reported. These RRIDs are generated when the application is registered in the SciCrunch knowledge base.
● Unique identifiers are reported, via an API connection with the SciCrunch InterLex Terminology Portal, for the subject and for annotations of anatomical regions.

2. SBP allows addition of rich metadata to research objects (F2)

Yes. The Neuromorphological file format stores:
● Metadata about the software application used to generate the digital reconstruction file, ensuring the data generated is reproducible, reusable, and citable. This includes the expected file structure, the software application's unique identifier, the institution's unique identifier, and the version number of the neuromorphological file structure.
● Subject and annotation metadata for each data file, including fields for the subject's species, identifier, sex, and age.
● The file path, name, X, Y, and Z scaling, location, size, order, and color-channel information from the source microscopy image, conveying the provenance of the data as it relates to the images from which it was derived. The lineage of the derivative data is recorded in the file, regardless of complexity.
● Essential information about the software application, including the application name, application version, application Research Resource Identifier (RRID), and institution RRID.

3. SBP uses/permits addition of appropriate PIDs to metadata (F3)

N/A. Metadata is embedded within the Neuromorphological file format.

4. The protocol allows for authentication and authorization when required (A1.2)

Not directly applicable to the file format.

5. SBP uses or allows the use of vocabularies that follow the FAIR principles (I2)

Yes. RRIDs for software applications and institutions registered with the Resource Identification Portal are reported. This portal collates many community repositories to provide unique resource identifiers that promote identification, discovery, and reuse of tools and data. Unique identifiers are reported, via an API connection with the SciCrunch InterLex Terminology Portal, for the subject and for annotations of anatomical regions. The terminology portal aims to collate existing terminologies, lexicons, and ontologies to provide unique identifiers that help scientists communicate about their data. The term relationships included are described by the SciCrunch Portal.

6. SBP includes/allows qualified links to other identifiers (I3)

N/A

7. Does the standard interoperate with other relevant standards in the same domain? (I1)

This data standard employs mechanisms that facilitate interoperability, such as the use of RRIDs and ontologies.

8. Does the SBP provide citation metadata so its use can be documented and tracked? (R1.2)

N/A

9. Any additional comments on aspects of FAIR?
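To make concrete how a tool builder might use a Python reader of this kind to pull the provenance metadata discussed above, or to compute a simple morphometric, here is a minimal sketch. The attribute names and RRID values are hypothetical placeholders, not the real schema or real identifiers:

```python
import math
import xml.etree.ElementTree as ET

# Hypothetical fragment: attribute names and RRID values are placeholders.
SAMPLE = """<?xml version="1.0"?>
<mbf appname="ExampleTracer" appversion="2020.1"
     apprrid="RRID:SCR_123456" insrrid="RRID:SCR_654321">
  <tree type="Dendrite">
    <point x="0.0" y="0.0" z="0.0" d="1.0"/>
    <point x="3.0" y="4.0" z="0.0" d="1.0"/>
    <point x="3.0" y="4.0" z="12.0" d="0.8"/>
  </tree>
</mbf>
"""

def provenance(xml_text):
    """Extract the software-provenance attributes (name, version, RRIDs)."""
    root = ET.fromstring(xml_text)
    keys = ("appname", "appversion", "apprrid", "insrrid")
    return {k: root.attrib.get(k) for k in keys}

def total_tree_length(xml_text):
    """Sum Euclidean distances between consecutive points of each tree."""
    root = ET.fromstring(xml_text)
    total = 0.0
    for tree in root.findall("tree"):
        pts = [tuple(float(p.get(a)) for a in ("x", "y", "z"))
               for p in tree.findall("point")]
        total += sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    return total

print(provenance(SAMPLE)["apprrid"])   # RRID:SCR_123456
print(total_tree_length(SAMPLE))       # 17.0  (5.0 + 12.0)
```

Because the RRIDs travel inside the data file itself, a downstream tool can report which application produced a reconstruction without any external lookup, which is the interoperability benefit the criteria above describe.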