Services for Connecting and Integrating Big Number of Linked Datasets


UNIVERSITY OF CRETE
DEPARTMENT OF COMPUTER SCIENCE
FACULTY OF SCIENCES AND ENGINEERING

Services for Connecting and Integrating Big Number of Linked Datasets

by Michalis Mountantonakis

PhD Dissertation Presented in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

Heraklion, March 2020

APPROVED BY:

Author: Michalis Mountantonakis
Supervisor: Yannis Tzitzikas, Associate Professor, University of Crete, Greece
Committee Member: Dimitrios Plexousakis, Professor, University of Crete, Greece
Committee Member: Kostas Magoutis, Associate Professor, University of Crete, Greece
Committee Member: Grigoris Antoniou, Professor, University of Huddersfield, United Kingdom
Committee Member: Manolis Koubarakis, Professor, National and Kapodistrian University of Athens, Greece
Committee Member: Sören Auer, Professor, Leibniz University of Hannover, Germany
Committee Member: Giorgos Flouris, Principal Researcher, FORTH-ICS, Greece
Department Chairman: Angelos Bilas, Professor, University of Crete, Greece

Acknowledgments

First, I would like to express my deepest gratitude to my supervisor, Associate Professor Yannis Tzitzikas, for everything that I have learned from him and for all the support and encouragement he has given me. During the past four years, he has devoted a great deal of time and effort to helping me with my PhD dissertation and to writing together several research papers that were published in prestigious journals and conferences. His immense knowledge, motivation, passion and patience have given me the strength and spirit to improve my research and presentation skills and to finish my dissertation.

Apart from my supervisor, I would like to deeply thank the other two members of my Advisory Committee, Professor Dimitris Plexousakis and Associate Professor Kostas Magoutis, for their valuable comments and suggestions throughout these years, which helped me to improve my PhD thesis and my research skills. I am also grateful to Professor Grigoris Antoniou, Professor Manolis Koubarakis, Professor Sören Auer and Principal Researcher Giorgos Flouris for accepting the invitation to become members of my examination committee, and for their valuable comments and suggestions, which helped me to improve my dissertation. Finally, I would also like to thank the anonymous reviewers of our papers for providing valuable feedback, which was crucial for improving our work.

I gratefully acknowledge the funding received towards my PhD thesis from the Hellenic Foundation for Research and Innovation and the General Secretariat for Research and Technology for the period August 2017 - March 2020. In particular, this dissertation was supported by the Hellenic Foundation for Research and Innovation (HFRI) and the General Secretariat for Research and Technology (GSRT), under the HFRI PhD Fellowship grant (GA. No. 166).
Moreover, I would like to acknowledge the support of the Institute of Computer Science of the Foundation for Research and Technology - Hellas (FORTH-ICS), and especially the Information Systems Laboratory (ISL), for the financial support (from the last year of my undergraduate studies until July 2017), for the facilities, and for giving me the opportunity to travel to several places to present our research work. In particular, I would like to thank the Head of ISL, Professor Dimitris Plexousakis, as well as all the faculty and staff members of ISL. Furthermore, I would like to acknowledge the support of BlueBridge (H2020, 2015-2018), since part of my dissertation was supported by this project.

Additionally, I want to express my deepest thanks to the undergraduate and postgraduate students of the Computer Science Department and to the staff of the University of Crete. In particular, I would like to thank the secretaries of the Computer Science Department and ELKE for helping me during my undergraduate and postgraduate studies. Moreover, I would like to thank the okeanos GRNET cloud service (https://okeanos.grnet.gr/) for giving me access to 14 machines during the past four years; using these machines, I managed to run most of the experiments included in this dissertation and to host several online services.

Furthermore, I want to thank the people who supported me all these years; especially in some difficult situations during my postgraduate studies, they encouraged me to continue and to complete my dissertation. First, I want to deeply thank my beloved Efstathia for her love, support and patience, and for the many wonderful moments we have spent together. Moreover, I want to thank my family, and especially the following persons: my grandfather Antonis, who is my biggest supporter; my parents Rena and Lefteris; and my sister Maria, for believing in me and encouraging me all these years. Finally, I want to thank all my friends for all the good times we have shared over the years.

To my beloved Efstathia, to my grandfather Antonis, to my parents Rena and Lefteris, and to my sister Maria, for their wholehearted love, encouragement and support.

Abstract

Linked Data is a method for publishing structured data that facilitates their sharing, linking, searching and re-use. A large number of such datasets (or sources) has already been published, and their number and size keep increasing. Although the main objective of Linked Data is linking and integration, this target has not yet been satisfactorily achieved. Even seemingly simple tasks, such as finding all the available information for an entity, are challenging, since this presupposes knowing the contents of all datasets and performing cross-dataset identity reasoning, i.e., computing the symmetric and transitive closure of the equivalence relationships that exist among entities and schemas. Another big challenge is Dataset Discovery, since current approaches exploit only the metadata of datasets, without taking their contents into consideration.

In this dissertation, we analyze the research work done in the area of Linked Data Integration, giving emphasis to methods that can be used at large scale. Specifically, we factorize the integration process according to various dimensions, for better understanding the overall problem and for identifying the open challenges.
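To make cross-dataset identity reasoning concrete, the short sketch below computes the symmetric and transitive closure of a few owl:sameAs pairs with a union-find (disjoint-set) structure. This is a minimal in-memory illustration with hypothetical URIs and pairs, not the dissertation's implementation; the algorithms proposed here are incremental and parallelized rather than this simple single-machine version.

    # Minimal sketch: symmetric and transitive closure of owl:sameAs pairs
    # via union-find. Hypothetical URIs; not the dissertation's implementation.
    from collections import defaultdict

    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving speeds up later finds
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    # owl:sameAs pairs collected from several (hypothetical) datasets.
    same_as = [("dbpedia:Aristotle", "wikidata:Q868"),
               ("wikidata:Q868", "yago:Aristotle")]
    for a, b in same_as:
        union(a, b)

    # Group URIs into equivalence classes, i.e., the computed closure:
    # all three URIs end up in a single class denoting the same entity.
    classes = defaultdict(set)
    for uri in list(parent):
        classes[find(uri)].add(uri)
    print(list(classes.values()))

With such a structure the closure costs nearly linear time in the number of equivalence pairs, which is consistent with the measurements reported below, where computing the closure is much cheaper than indexing the triples themselves.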
Then, we propose indexes and algorithms for tackling the above challenges, i.e., methods for performing cross-dataset identity reasoning, for finding all the available information for an entity, for offering content-based Dataset Discovery, and others. Due to the large number and volume of datasets, we propose techniques that include incremental and parallelized algorithms. We show that content-based Dataset Discovery reduces to solving optimization problems, and we propose techniques for solving them efficiently.

The aforementioned indexes and algorithms have been implemented in a suite of services that we have developed, called LODsyndesis, which offers all these services in real time. Furthermore, we present an extensive connectivity analysis for a big subset of the LOD cloud datasets. In particular, we introduce measurements (concerning connectivity and efficiency) for 2 billion triples, 412 million URIs and 44 million equivalence relationships derived from 400 datasets, using from 1 to 96 machines for indexing the datasets. Indicatively, with the proposed indexes and algorithms and 96 machines, it takes less than 10 minutes to compute the closure of 44 million equivalence relationships, and 81 minutes to index 2 billion triples. Furthermore, the dedicated indexes, along with the proposed incremental algorithms, enable the computation of connectivity metrics for 1 million subsets of datasets in 1 second (three orders of magnitude faster than conventional methods), while the provided services offer responses in a few seconds. These services enable the implementation of other high-level services, such as services for Data Enrichment, which can be exploited for Machine Learning tasks and for Knowledge Graph Embedding techniques; we show that this enrichment improves prediction in machine-learning problems.

Keywords: Linked Data, RDF, Data Integration, Dataset Discovery and Selection, Connectivity, Lattice of Measurements, Big Data, Data Quality

Supervisor: Yannis Tzitzikas, Associate Professor, Computer Science Department, University of Crete

Περίληψη (Abstract in Greek)

Linked Data is a way of publishing data that facilitates their sharing, linking, searching and re-use. Thousands of such datasets, hereafter sources, already exist, and their number and size keep increasing. Although the main objective of Linked Data is their linking and integration, this goal has not yet been achieved to a satisfactory degree. Even seemingly simple tasks, such as finding all the information about a particular entity, are a challenge, because this presupposes knowledge of the contents of all sources, as well as the ability to reason over their aggregated contents; specifically, it requires the computation of the symmetric and transitive closure of the equivalence relationships among the identities of the entities and among the ontologies.
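As an illustration of the content-based connectivity measurements summarized in the abstract, the following sketch counts how many URIs are common to each subset of a handful of hypothetical datasets (D1-D3), starting from an entity-to-datasets inverted index. A URI occurring in a set of datasets contributes only to the subsets of that set, so it suffices to enumerate those subsets per URI instead of scanning all 2^n dataset combinations; observations of this kind are what make lattice-based measurements over hundreds of datasets feasible. This is a toy version under stated assumptions, far simpler than the LODsyndesis indexes and the incremental algorithms of the dissertation.

    # Minimal sketch: connectivity (number of common URIs) per subset of
    # datasets, from an entity-to-datasets inverted index. Hypothetical data.
    from collections import defaultdict
    from itertools import combinations

    # URI -> datasets containing it (after identity reasoning, equivalent
    # URIs would share a single entry).
    uri_index = {
        "ex:Aristotle": {"D1", "D2", "D3"},
        "ex:Plato":     {"D1", "D2"},
        "ex:Socrates":  {"D2", "D3"},
    }

    commonalities = defaultdict(int)
    for uri, datasets in uri_index.items():
        # A URI in {D1,D2,D3} contributes to (D1,D2), (D1,D3), (D2,D3)
        # and (D1,D2,D3) only -- not to every possible dataset subset.
        for r in range(2, len(datasets) + 1):
            for subset in combinations(sorted(datasets), r):
                commonalities[subset] += 1

    for subset, n in sorted(commonalities.items()):
        print(subset, n)
    # ('D1', 'D2') 2   ('D1', 'D2', 'D3') 1   ('D1', 'D3') 1   ('D2', 'D3') 2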