Semantic Document Clustering Using Information from WordNet and DBPedia

Total Pages: 16

File Type: PDF, Size: 1020 KB

Semantic Document Clustering Using Information from WordNet and DBPedia
Lubomir Stanchev, Computer Science Department, California Polytechnic State University, San Luis Obispo, CA, USA, [email protected]

Abstract—Semantic document clustering is a type of unsupervised learning in which documents are grouped together based on their meaning. Unlike traditional approaches that cluster documents based on common keywords, this technique can group documents that share no words in common as long as they are on the same subject. We compute the similarity between two documents as a function of the semantic similarity between the words and phrases in the documents. We model information from WordNet and DBPedia as a probabilistic graph that can be used to compute the similarity between two terms. We experimentally validate our algorithm on the Reuters-21578 benchmark, which contains 11,362 newswire stories that are grouped in 82 categories using human judgment. We apply the k-means clustering algorithm to group the documents using a similarity metric that is based on keyword matching and one that uses the probabilistic graph. We show that the second approach produces higher precision and recall, which corresponds to better alignment with the classification that was done by human experts.

I. INTRODUCTION

Consider an RSS feed of news stories. Organizing them in categories will make search easier. For example, a smart classifier will put a story about "Tardar Sauce" (better known as Grumpy Cat) and a story about "Henri, le Chat Noir" (Henry, the black cat) in the same category because both stories are about famous cats from the Internet. Our approach uses information from WordNet [25] and DBPedia [20] to construct a probabilistic graph that can be used to compute the semantic similarity between the two documents.

The problem of semantic document clustering is interesting because it can improve the quality of the clustering results as compared to keyword matching algorithms. For example, the latter algorithms will likely put documents that use different terminology to describe the same concept in separate categories. Consider a document that contains the term "ascorbic acid" multiple times and a document that contains the term "vitamin C" multiple times. The documents are semantically similar because "ascorbic acid" and "vitamin C" refer to the same organic compound, and therefore a clustering algorithm should take this fact into account. However, this will only happen when the close relationship between the two terms is stored in the system and applied during document clustering. The need for a semantic document clustering system becomes even more apparent when the number of documents is small or when they are very short. In this case, it is likely that the documents will not share many words and a keyword matching strategy will struggle to find evidence for grouping any two documents together.

The problem of semantic document clustering is difficult because it involves some understanding of the English language and our world. For example, our system can use information from DBPedia to determine that "Henri, le Chat Noir" and "Tardar Sauce" are both famous Internet cats. Although significant effort has been put forward in automated natural language processing [9], [10], [23], current approaches fall short of understanding the precise meaning of human text. In our approach, we make limited use of natural language processing techniques (for example, we use the Stanford CoreNLP tool [22]) and we rely on high-quality information about the words in the English language (WordNet) and our world (DBPedia) to process the input documents.

A traditional approach uses k-means clustering [21] to cluster documents. The algorithm is based on a vector representation of the documents (based on term frequencies) and a distance metric (e.g., the cosine similarity between two document vectors). Unfortunately, this approach will incorrectly compute the similarity distance between two documents that describe the same concept using different words. It will only consider the common words and their frequencies, and it will ignore the meaning of the words. In [41], we explore how information from WordNet can be used to create a probabilistic graph that is used to cluster the documents. However, this approach does not take into account information from DBPedia and will not be able to determine that "Tardar Sauce" and "Henri, le Chat Noir" are both famous Internet cats.

In this paper, we extend the approach from [41] in two ways. First, we apply the Stanford CoreNLP tool to lemmatize the words in the documents and assign them to the correct part of speech (i.e., noun, verb, adjective, or adverb). Second, we add information from DBPedia to the probabilistic graph. DBPedia contains knowledge from Wikipedia. This includes the title of each Wikipedia page, the short abstract for the page, the length of the Wikipedia page, the category of each Wikipedia page (e.g., "Anarchism" belongs to the category "Political Cultures"), information that an object belongs to a class (e.g., "Azerbaijan" is a type of country), RDF triples between objects (e.g., "Algeria" has official language "Arabic"), and information about disambiguation (e.g., "Alien" can refer to "Alien (law)", that is, the legal meaning of the word). All this information allows us to extend the probabilistic graph and find new evidence about the semantic similarities between the phrases in the documents that are to be clustered.

In what follows, in Section II we present an overview of related research. Our main contribution is in Section III, where we present a modified algorithm for creating the probabilistic graph that stores the part of speech for each word. The algorithm is then extended with information from DBPedia. Section IV describes our algorithms for measuring the semantic similarity between documents and clustering the documents. Our other contribution is in Section V, where we describe our implementation of the algorithm using a distributed Hadoop environment and validate our approach by showing how it can produce data of better quality than the algorithm that is based on simple keyword matching and our previous algorithm that relies exclusively on data from WordNet. Lastly, Section VI summarizes the paper and outlines areas for future research.

II. RELATED RESEARCH

The probabilistic graph that is presented in this paper is based on the research from [37], which shows how to measure the semantic similarity between words based on information from WordNet. Later on, in [38] we explain how the graph can be extended with information from Wikipedia. After that, in [40] we show how the Markov Logic Network model [29] can use the probabilistic graph to compute the probability that a word is relevant to a user given that a different word from the user's input query is relevant. Lastly, in [36] we show how a random walk in a bounded box in the graph can be used to make the computation of the semantic similarity between two words more precise and more efficient. In this paper, we extend this existing research in two ways: (1) we store the part of speech together with each word in the graph and (2) we incorporate knowledge from DBPedia in the probabilistic graph.

Note that a plethora of research papers have been published on the subject of using supervised learning models with training sets for document classification [5], [42]. Our approach differs because it is unsupervised, it does not use a training set, and it can cluster documents in any number of classes. [...] the probabilistic graph to compute the distance between two documents. Some limited user interaction is possible when classifying documents – see for example the research on folksonomies [11]. Our system currently does not allow for user interaction when creating the document clusters.

In later years, the research of Croft was extended by creating a graph in the form of a semantic network [4], [28], [31] and graphs that contain the semantic relationships between words [2], [1], [6]. Later on, Simone Ponzetto and Michael Strube showed how to create a graph that only represents inheritance of words in WordNet [18], [32], while Glen Jeh and Jennifer Widom showed how to approximate the similarity between phrases based on information about the structure of the graph in which they appear [15]. All these approaches differ from our approach because they do not consider the strength of the relationship between the nodes in the graph. In other words, weights are not assigned to the edges of the graph.

Natural language techniques can be used to analyze the text in a document [13], [26], [35]. For example, a natural language analyzer may determine that a document talks about animals, and words or concepts that can represent an animal can be identified in other documents. As a result, documents that are identified to refer to the same or similar concepts can be classified together. One problem with this approach is that it is computationally expensive. A second problem is that it is not a probabilistic model and therefore it is difficult to apply towards generating a document similarity metric.

Note our limited use of ontologies to cluster the documents. Unlike existing approaches that annotate each document with a description in a formal language [17], [27], [12], we use ontological information from DBPedia to calculate the weights of the edges in the probabilistic graph. The problem with the traditional approach is that: (1) manual annotation is time consuming and automatic annotation is not very reliable, and (2) a query language, such as SPARQL [33], can tell us which documents are similar, but it will not give us a similarity metric.

Since the early 1990s, research on LSA (latent semantic analysis) [8] has been carried out.
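To make the contrast above concrete, here is a minimal sketch of the keyword-matching baseline the introduction describes: term-frequency vectors, cosine similarity, and k-means. The toy documents and the semantic_similarity stub are illustrative assumptions and not part of the paper's implementation, which builds its similarity metric from the WordNet/DBPedia probabilistic graph.

```python
# Minimal sketch of the keyword-matching baseline described above:
# term-frequency vectors + cosine similarity + k-means.
# The documents and the semantic_similarity stub are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "ascorbic acid occurs naturally in citrus fruit",
    "vitamin C is added to many breakfast cereals",
    "the central bank raised interest rates again",
]

# Term-frequency vectors; L2-normalizing them makes Euclidean k-means
# behave like clustering under cosine similarity.
tf = CountVectorizer().fit_transform(docs)
tf_norm = normalize(tf)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tf_norm)
print("keyword-based clusters:", labels)

# The first two documents share no terms, so their cosine similarity is 0
# and the baseline has no evidence for grouping them together.
print("cosine(doc0, doc1):", cosine_similarity(tf[0], tf[1])[0, 0])

# A semantic approach would swap in a similarity derived from the
# WordNet/DBPedia probabilistic graph (hypothetical placeholder).
def semantic_similarity(doc_a: str, doc_b: str) -> float:
    """Placeholder for a graph-based document similarity score."""
    raise NotImplementedError("would query the probabilistic graph")
```

Under this baseline, the "ascorbic acid" and "vitamin C" documents receive a similarity of zero, which is exactly the failure mode the probabilistic graph is intended to address.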
Recommended publications
  • Metadata for Semantic and Social Applications
    Metadata is a key aspect of our evolving infrastructure for information management, social computing, and scientific collaboration. DC-2008 will focus on metadata challenges, solutions, and innovation in initiatives and activities underlying semantic and social applications. Metadata is part of the fabric of social computing, which includes the use of wikis, blogs, and tagging for collaboration and participation. Metadata also underlies the development of semantic applications, and the Semantic Web — the representation and integration of multimedia knowledge structures on the basis of semantic models. These two trends flow together in applications such as Wikipedia, where authors collectively create structured information that can be extracted and used to enhance access to and use of information sources. Recent discussion has focused on how existing bibliographic standards can be expressed as Semantic Web vocabularies to facilitate the integration of library and cultural heritage data with other types of data. Harnessing the efforts of content providers and end-users to link, tag, edit, and describe their information in interoperable ways ("participatory metadata") is a key step towards providing knowledge environments that are scalable, self-correcting, and evolvable. DC-2008 will explore conceptual and practical issues in the development and deployment of semantic and social applications to meet the needs of specific communities of practice. Edited by Jane Greenberg and Wolfgang Klas.
  • Automatic WordNet Mapping Using Word Sense Disambiguation*
    Automatic WordNet mapping using word sense disambiguation*. Changki Lee and Geunbae Lee (Natural Language Processing Lab, Dept. of Computer Science and Engineering, Pohang University of Science & Technology, San 31, Hyoja-Dong, Pohang, 790-784, Korea; {leeck,gblee}@postech.ac.kr) and Seo JungYun (Natural Language Processing Lab, Dept. of Computer Science, Sogang University, Sinsu-dong 1, Mapo-gu, Seoul, Korea; [email protected]).
    Abstract: This paper presents the automatic construction of a Korean WordNet from pre-existing lexical resources. A set of automatic WSD techniques is described for linking Korean words collected from a bilingual MRD to English WordNet synsets. We will show how individual linking provided by each WSD method is then combined to produce a Korean WordNet for nouns.
    1 Introduction: There is no doubt on the increasing importance of using wide coverage ontologies for NLP tasks, especially for information [...] bilingual Korean-English dictionary. The first sense of 'gwan-mog' has 'bush' as a translation in English, and 'bush' has five synsets in WordNet. Therefore the first sense of 'gwan-mog' has five candidate synsets. Somehow we decide on a synset {shrub, bush} among the five candidate synsets and link the sense of 'gwan-mog' to this synset. As seen from this example, when we link the senses of Korean words to WordNet synsets, there are semantic ambiguities. To remove the ambiguities we develop new word sense disambiguation heuristics and an automatic mapping method to construct a Korean WordNet based on the existing English WordNet. This paper is organized as follows. In section 2, we describe multiple heuristics for word sense disambiguation for sense linking.
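The sense-linking step this abstract walks through (Korean word, then its bilingual-dictionary translation, then the candidate WordNet synsets, then one chosen synset) can be sketched with NLTK's WordNet interface. The tiny bilingual dictionary and the pick-the-first-candidate rule below are placeholder assumptions; the paper's contribution is the set of WSD heuristics that replaces that rule.

```python
# Sketch of the sense-linking step described above: a Korean word's English
# translation is looked up in WordNet, producing candidate synsets from which
# one must be chosen. The tiny bilingual dictionary and the naive "pick the
# first candidate" rule are placeholders for the paper's WSD heuristics.
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

bilingual_dict = {"gwan-mog": ["bush"]}   # illustrative Korean -> English entry

def candidate_synsets(korean_word: str):
    """Collect WordNet noun synsets for every translation of the word."""
    candidates = []
    for translation in bilingual_dict.get(korean_word, []):
        candidates.extend(wn.synsets(translation, pos=wn.NOUN))
    return candidates

def link_sense(korean_word: str):
    """Choose one synset among the candidates (real systems use WSD heuristics)."""
    candidates = candidate_synsets(korean_word)
    return candidates[0] if candidates else None

print(candidate_synsets("gwan-mog"))   # e.g. [Synset('shrub.n.01'), ...]
print(link_sense("gwan-mog"))
```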
  • From the Semantic Web to Social Machines
    From the Semantic Web to social machines: A research challenge for AI on the World Wide Web. Jim Hendler (Tetherless World Constellation, RPI, United States) and Tim Berners-Lee (Computer Science and AI Laboratory, MIT, United States). Artificial Intelligence, article in press, www.elsevier.com/locate/artint. Article history: received 24 September 2009; received in revised form 1 October 2009; accepted 1 October 2009.
    Abstract: The advent of social computing on the Web has led to a new generation of Web applications that are powerful and world-changing. However, we argue that we are just at the beginning of this age of "social machines" and that their continued evolution and growth requires the cooperation of Web and AI researchers. In this paper, we show how the growing Semantic Web provides necessary support for these technologies, outline the challenges we see in bringing the technology to the next level, and propose some starting places for the research.
    Much has been written about the profound impact that the World Wide Web has had on society. Yet it is primarily in the past few years, as more interactive "read/write" technologies (e.g. Wikis, blogs and photo/video sharing) and social networking sites have proliferated, that the truly profound nature of this change is being felt. From the very beginning, however, the Web was designed to create a network of humans changing society empowered using this shared infrastructure.
  • Combining Semantic and Lexical Measures to Evaluate Medical Terms Similarity
    Combining semantic and lexical measures to evaluate medical terms similarity. Silvio Domingos Cardoso (1,2), Marcos Da Silveira (1), Ying-Chi Lin (3), Victor Christen (3), Erhard Rahm (3), Chantal Reynaud-Delaître (2), and Cédric Pruski (1). 1 LIST, Luxembourg Institute of Science and Technology, Luxembourg; 2 LRI, University of Paris-Sud XI, France; 3 Department of Computer Science, Universität Leipzig, Germany.
    Abstract. The use of similarity measures in various domains is a cornerstone for different tasks ranging from ontology alignment to information retrieval. To this end, existing metrics can be classified into several categories, among which lexical and semantic families of similarity measures predominate but have rarely been combined to complete the aforementioned tasks. In this paper, we propose an original approach combining lexical and ontology-based semantic similarity measures to improve the evaluation of terms relatedness. We validate our approach through a set of experiments based on a corpus of reference constructed by domain experts of the medical field and further evaluate the impact of ontology evolution on the used semantic similarity measures. Keywords: Similarity measures · Ontology evolution · Semantic Web · Medical terminologies
    1 Introduction. Measuring the similarity between terms is at the heart of many research investigations. In ontology matching, similarity measures are used to evaluate the relatedness between concepts from different ontologies [9]. The outcomes are the mappings between the ontologies, increasing the coverage of domain knowledge and optimizing semantic interoperability between information systems. In information retrieval, similarity measures are used to evaluate the relatedness between units of language (e.g., words, sentences, documents) to optimize search [39].
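The combination this abstract argues for, a lexical string measure paired with an ontology-based semantic measure, can be illustrated with a simple weighted average. The particular measures (edit-distance ratio and WordNet path similarity) and the equal weighting are assumptions made for the sketch, not the configurations evaluated in the paper.

```python
# Illustrative combination of a lexical measure (string-matching ratio) and a
# semantic measure (WordNet path similarity) into one relatedness score.
# The chosen measures and the equal weighting are assumptions for this sketch.
from difflib import SequenceMatcher
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def lexical_similarity(a: str, b: str) -> float:
    """String-level similarity in [0, 1] based on matching subsequences."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_similarity(a: str, b: str) -> float:
    """Best WordNet path similarity between any noun senses of the two terms."""
    best = 0.0
    for sa in wn.synsets(a, pos=wn.NOUN):
        for sb in wn.synsets(b, pos=wn.NOUN):
            score = sa.path_similarity(sb) or 0.0
            best = max(best, score)
    return best

def combined_similarity(a: str, b: str, weight: float = 0.5) -> float:
    """Weighted combination of the lexical and semantic scores."""
    return weight * lexical_similarity(a, b) + (1 - weight) * semantic_similarity(a, b)

# "tumor" and "neoplasm" are lexically distant but semantically very close.
print(combined_similarity("tumor", "neoplasm"))
```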
  • Probabilistic Topic Modelling with Semantic Graph
    Probabilistic Topic Modelling with Semantic Graph. Long Chen, Joemon M. Jose, Haitao Yu, Fajie Yuan, and Huaizhi Zhang (School of Computing Science, University of Glasgow, Sir Alwyns Building, Glasgow, UK; [email protected]).
    Abstract. In this paper we propose a novel framework, topic model with semantic graph (TMSG), which couples topic model with the rich knowledge from DBpedia. To begin with, we extract the disambiguated entities from the document collection using a document entity linking system, i.e., DBpedia Spotlight, from which two types of entity graphs are created from DBpedia to capture local and global contextual knowledge, respectively. Given the semantic graph representation of the documents, we propagate the inherent topic-document distribution with the disambiguated entities of the semantic graphs. Experiments conducted on two real-world datasets show that TMSG can significantly outperform the state-of-the-art techniques, namely, author-topic model (ATM) and topic model with biased propagation (TMBP). Keywords: Topic model · Semantic graph · DBpedia
    1 Introduction. Topic models, such as Probabilistic Latent Semantic Analysis (PLSA) [7] and Latent Dirichlet Allocation (LDA) [2], have been remarkably successful in analyzing textual content. Specifically, each document in a document collection is represented as random mixtures over latent topics, where each topic is characterized by a distribution over words. Such a paradigm is widely applied in various areas of text mining. In view of the fact that the information used by these models is limited to the document collection itself, some recent progress has been made on incorporating external resources, such as time [8], geographic location [12], and authorship [15], into topic models.
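TMSG builds on standard topic models, in which each document is a mixture over latent topics and each topic is a distribution over words, and then couples them with DBpedia entity graphs. A plain-LDA baseline without the entity-graph propagation might look like the following sketch; the toy corpus and the number of topics are assumptions.

```python
# Baseline topic model (plain LDA) of the kind TMSG extends with DBpedia
# entity graphs. The corpus and number of topics are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "the striker scored twice in the football match",
    "the goalkeeper saved a late penalty",
    "the central bank adjusted its interest rate policy",
    "inflation figures surprised the bond market",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(corpus)

# Each document becomes a mixture over latent topics; each topic is a
# distribution over words.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(counts)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top_terms))
print("document-topic mixtures:\n", doc_topic.round(2))
```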
  • A Way with Words: Recent Advances in Lexical Theory and Analysis: a Festschrift for Patrick Hanks
    A Way with Words: Recent Advances in Lexical Theory and Analysis: A Festschrift for Patrick Hanks. Gilles-Maurice de Schryver (editor) (Ghent University and University of the Western Cape). Kampala: Menha Publishers, 2010, vii+375 pp; ISBN 978-9970-10-101-6, €59.95. Reviewed by Paul Cook, University of Toronto.
    In his introduction to this collection of articles dedicated to Patrick Hanks, de Schryver presents a quote from Atkins referring to Hanks as "the ideal lexicographer's lexicographer." Indeed, Hanks has had a formidable career in lexicography, including playing an editorial role in the production of four major English dictionaries. But Hanks's achievements reach far beyond lexicography; in particular, Hanks has made numerous contributions to computational linguistics. Hanks is co-author of the tremendously influential paper "Word association norms, mutual information, and lexicography" (Church and Hanks 1989) and maintains close ties to our field. Furthermore, Hanks has advanced the understanding of the relationship between words and their meanings in text through his theory of norms and exploitations (Hanks forthcoming). The range of Hanks's interests is reflected in the authors and topics of the articles in this Festschrift; this review concentrates more on those articles that are likely to be of most interest to computational linguists, and does not discuss some articles that appeal primarily to a lexicographical audience. Following the introduction to Hanks and his career by de Schryver, the collection is split into three parts: Theoretical Aspects and Background, Computing Lexical Relations, and Lexical Analysis and Dictionary Writing.
    1. Part I: Theoretical Aspects and Background. Part I begins with an unfinished article by the late John Sinclair, in which he begins to put forward an argument that multi-word units of meaning should be given the same status in dictionaries as ordinary headwords.
  • Semantic Memory: a Review of Methods, Models, and Current Challenges
    Psychonomic Bulletin & Review, https://doi.org/10.3758/s13423-020-01792-x. Semantic memory: A review of methods, models, and current challenges. Abhilasha A. Kumar. © The Psychonomic Society, Inc. 2020.
    Abstract. Adult semantic memory has been traditionally conceptualized as a relatively static memory system that consists of knowledge about the world, concepts, and symbols. Considerable work in the past few decades has challenged this static view of semantic memory, and instead proposed a more fluid and flexible system that is sensitive to context, task demands, and perceptual and sensorimotor information from the environment. This paper (1) reviews traditional and modern computational models of semantic memory, within the umbrella of network (free association-based), feature (property generation norms-based), and distributional semantic (natural language corpora-based) models, (2) discusses the contribution of these models to important debates in the literature regarding knowledge representation (localist vs. distributed representations) and learning (error-free/Hebbian learning vs. error-driven/predictive learning), and (3) evaluates how modern computational models (neural network, retrieval-based, and topic models) are revisiting the traditional "static" conceptualization of semantic memory and tackling important challenges in semantic modeling such as addressing temporal, contextual, and attentional influences, as well as incorporating grounding and compositionality into semantic representations. The review also identifies new challenges
  • Employee Matching Using Machine Learning Methods
    Master of Science in Computer Science, May 2019. Employee Matching Using Machine Learning Methods. Sumeesha Marakani. Faculty of Computing, Blekinge Institute of Technology, 371 79 Karlskrona, Sweden.
    This thesis is submitted to the Faculty of Computing at Blekinge Institute of Technology in partial fulfillment of the requirements for the degree of Master of Science in Computer Science. The thesis is equivalent to 20 weeks of full time studies. The authors declare that they are the sole authors of this thesis and that they have not used any sources other than those listed in the bibliography and identified as references. They further declare that they have not submitted this thesis at any other institution to obtain a degree.
    Contact information: Author: Sumeesha Marakani, e-mail: [email protected]. University advisor: Prof. Veselka Boeva, Department of Computer Science. External advisors: Lars Tornberg ([email protected]) and Daniel Lundgren ([email protected]). Faculty of Computing, Blekinge Institute of Technology, SE–371 79 Karlskrona, Sweden. Internet: www.bth.se, Phone: +46 455 38 50 00, Fax: +46 455 38 50 57.
    Abstract. Background. Expertise retrieval is an information retrieval technique that focuses on techniques to identify the most suitable 'expert' for a task from a list of individuals. Objectives. This master thesis is a collaboration with Volvo Cars to attempt applying this concept and match employees based on information that was extracted from an internal tool of the company. In this tool, the employees describe themselves in free flowing text. This text is extracted from the tool and analyzed using Natural Language Processing (NLP) techniques.
  • Matrix Decompositions and Latent Semantic Indexing
    18 Matrix decompositions and latent semantic indexing
    On page 123 we introduced the notion of a term-document matrix: an M × N matrix C, each of whose rows represents a term and each of whose columns represents a document in the collection. Even for a collection of modest size, the term-document matrix C is likely to have several tens of thousands of rows and columns. In Section 18.1.1 we first develop a class of operations from linear algebra, known as matrix decomposition. In Section 18.2 we use a special form of matrix decomposition to construct a low-rank approximation to the term-document matrix. In Section 18.3 we examine the application of such low-rank approximations to indexing and retrieving documents, a technique referred to as latent semantic indexing. While latent semantic indexing has not been established as a significant force in scoring and ranking for information retrieval, it remains an intriguing approach to clustering in a number of domains including for collections of text documents (Section 16.6, page 372). Understanding its full potential remains an area of active research. Readers who do not require a refresher on linear algebra may skip Section 18.1, although Example 18.1 is especially recommended as it highlights a property of eigenvalues that we exploit later in the chapter.
    18.1 Linear algebra review
    We briefly review some necessary background in linear algebra. Let C be an M × N matrix with real-valued entries; for a term-document matrix, all entries are in fact non-negative.
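The construction described in this excerpt, a term-document matrix C and a rank-k approximation obtained from its SVD, can be sketched directly; the toy corpus and the choice k = 2 are illustrative.

```python
# Sketch of latent semantic indexing as described above: build a term-document
# matrix, take a rank-k truncated SVD, and compare documents in the reduced
# space. The toy corpus and k = 2 are illustrative choices.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "ship ocean voyage",
    "boat sailing sea",
    "ship boat ocean sea sailing voyage",
    "computer information retrieval system",
    "digital computer information",
]

# sklearn builds a document-term matrix (documents as rows); the chapter's
# matrix C is its transpose, with terms as rows and documents as columns.
X = CountVectorizer().fit_transform(docs)

# Rank-2 approximation: documents with little or no term overlap can still
# end up close in the latent space when their terms co-occur elsewhere in
# the collection (here, documents 0 and 1 are connected through document 2).
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

print("term-space similarity of docs 0 and 1:  ", cosine_similarity(X[0], X[1])[0, 0])
print("latent-space similarity of docs 0 and 1:", cosine_similarity(lsi[0:1], lsi[1:2])[0, 0])
```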
  • Everything You Need to Know About Semantic SEO (by Gennaro Cuofano, Andrea Volpini, and Maria Silvia Sanna)
    WHITE PAPER: Everything You Need to Know About Semantic SEO, by Gennaro Cuofano, Andrea Volpini, and Maria Silvia Sanna.
    The Semantic Web is here. Those that are taking advantage of Semantic Technologies to build a Semantic SEO strategy are benefiting from staggering results. From a research paper put together with the team at WordLift, presented at SEMANTiCS 2017, we documented that structured data is compelling from the digital marketing standpoint. For instance, on the analysis of the design-focused website freeyork.org, after three months of using structured data in their WordPress website we saw the following improvements:
    • +12.13% new users
    • +18.47% increase in organic traffic
    • +2.4 times increase in page views
    • +13.75% in session duration
    In other words, many still think of Semantic Technologies as belonging to the future, when in reality quite a few players in the digital marketing space are taking advantage of them already. Semantic SEO is a new and powerful way to make your content strategy more effective. In this article/guide I will explain from scratch what Semantic SEO is and why it's important. Why Semantic SEO? In a nutshell, search engines need context to understand a query properly and to fetch relevant results for it. Contexts are built using words, expressions, and other combinations of words and links as they appear in bodies of knowledge such as encyclopedias and large corpora of text. Semantic SEO is a marketing technique that improves the traffic of a website by providing meaningful data that can unambiguously answer a specific search intent. It is also a way to create clusters of content that are semantically grouped into topics rather than keywords.
  • Finetuning Pre-Trained Language Models for Sentiment Classification of COVID19 Tweets
    Technological University Dublin, ARROW@TU Dublin, Dissertations, School of Computer Sciences, 2020. Finetuning Pre-Trained Language Models for Sentiment Classification of COVID19 Tweets. Arjun Dussa, Technological University Dublin.
    Follow this and additional works at: https://arrow.tudublin.ie/scschcomdis. Part of the Computer Engineering Commons.
    Recommended Citation: Dussa, A. (2020) Finetuning Pre-trained language models for sentiment classification of COVID19 tweets, Dissertation, Technological University Dublin. doi:10.21427/fhx8-vk25
    This Dissertation is brought to you for free and open access by the School of Computer Sciences at ARROW@TU Dublin. It has been accepted for inclusion in Dissertations by an authorized administrator of ARROW@TU Dublin. For more information, please contact [email protected], [email protected]. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 4.0 License.
    Finetuning Pre-trained language models for sentiment classification of COVID19 tweets. Arjun Dussa. A dissertation submitted in partial fulfilment of the requirements of Technological University Dublin for the degree of M.Sc. in Computer Science (Data Analytics), September 2020.
    Declaration: I certify that this dissertation which I now submit for examination for the award of MSc in Computing (Data Analytics), is entirely my own work and has not been taken from the work of others save and to the extent that such work has been cited and acknowledged within the text of my work. This dissertation was prepared according to the regulations for postgraduate study of the Technological University Dublin and has not been submitted in whole or part for an award in any other Institute or University.
  • Knowledge Graphs on the Web – an Overview (arXiv:2003.00719v3 [cs.AI])
    January 2020. Knowledge Graphs on the Web – an Overview. Nicolas Heist, Sven Hertling, Daniel Ringler, and Heiko Paulheim. Data and Web Science Group, University of Mannheim, Germany. arXiv:2003.00719v3 [cs.AI], 12 Mar 2020.
    Abstract. Knowledge Graphs are an emerging form of knowledge representation. While Google coined the term Knowledge Graph first and promoted it as a means to improve their search results, they are used in many applications today. In a knowledge graph, entities in the real world and/or a business domain (e.g., people, places, or events) are represented as nodes, which are connected by edges representing the relations between those entities. While companies such as Google, Microsoft, and Facebook have their own, non-public knowledge graphs, there is also a larger body of publicly available knowledge graphs, such as DBpedia or Wikidata. In this chapter, we provide an overview and comparison of those publicly available knowledge graphs, and give insights into their contents, size, coverage, and overlap. Keywords: Knowledge Graph, Linked Data, Semantic Web, Profiling
    1. Introduction. Knowledge Graphs are increasingly used as means to represent knowledge. Due to their versatile means of representation, they can be used to integrate different heterogeneous data sources, both within as well as across organizations. [8,9] Besides such domain-specific knowledge graphs which are typically developed for specific domains and/or use cases, there are also public, cross-domain knowledge graphs encoding common knowledge, such as DBpedia, Wikidata, or YAGO. [33] Such knowledge graphs may be used, e.g., for automatically enriching data with background knowledge to be used in knowledge-intensive downstream applications.
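The representation described here, entities as nodes connected by edges that stand for relations, amounts to a set of subject-predicate-object triples. A minimal sketch using rdflib follows; the DBpedia-style URIs are used only as illustrative identifiers.

```python
# Tiny knowledge-graph sketch: entities are nodes, relations are labelled
# edges, stored as subject-predicate-object triples. The URIs below imitate
# DBpedia-style identifiers and are illustrative only.
from rdflib import Graph, Namespace

DBR = Namespace("http://dbpedia.org/resource/")
DBO = Namespace("http://dbpedia.org/ontology/")

g = Graph()
g.add((DBR.Algeria, DBO.officialLanguage, DBR.Arabic_language))
g.add((DBR.Algeria, DBO.capital, DBR.Algiers))

# Edges incident to one node describe that entity.
for predicate, obj in g.predicate_objects(subject=DBR.Algeria):
    print(predicate, "->", obj)

# SPARQL queries can be run over the same graph.
results = g.query(
    "SELECT ?o WHERE { <http://dbpedia.org/resource/Algeria> "
    "<http://dbpedia.org/ontology/capital> ?o }"
)
for row in results:
    print("capital:", row.o)
```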