SQL, XQuery, and SPARQL: Making the Picture Prettier

Jim Melton, Oracle Corporation, Copyright © 2007 Oracle, [email protected]

Introduction

Last year, we asked “what’s wrong with this picture?” regarding the existence of three apparently overlapping query languages: SQL, XQuery, and SPARQL. Our somewhat reluctant answer to the question was that there was essentially nothing wrong, because each of the three languages (and their corresponding data models) served specific purposes better than the two alternatives. This year, our research has been aimed at “making the picture prettier” – that is, accepting our earlier conclusions and finding practical ways to make the situation work well at minimal development costs.

In early 2006, the World Wide Web Consortium (W3C) published three Candidate Recommendation documents [SPARQL-L], [SPARQL-P], and [SPARQL-R] defining a new query language called SPARQL. That new language was described as “a query language for getting information from…RDF graphs” (that is, SPARQL is an RDF query language), which seemed on the surface to be a new technology requirement. Comments raised during the Candidate Recommendation review period resulted in the W3C’s Data Access Working Group (DAWG) reverting [SPARQL-L] to Working Draft status for additional work. Recently, the revised specification [SPARQL-L2] was advanced to Last Call Working Draft status, while the other two specifications have been held in the Candidate Recommendation stage awaiting progression of [SPARQL-L2] to Candidate Recommendation.

Last year, we acknowledged that SPARQL’s existence is justified, but we also identified some areas in which additional research was required before it could be said whether or not practical integration with SQL and/or XQuery was likely. The present paper addresses that subject further. In particular, we indicate how existing investment in persistence technology can be applied to the RDF data model and to implementing the SPARQL language.

Data Model Integration

Query languages are designed to be applied to data represented in a particular data model. SQL is used to retrieve, create, modify, and delete data represented in (a variation of) the relational model of data. XQuery is used to locate and retrieve data that is represented in the XPath data model, XDM [XDM]. (The ability to update such data is expected to be provided in early 2008.)

Our vision is of a world in which applications can query data that is provided in the SQL/relational model, in the XPath Data Model, and in RDF, preferably in a single query expression. This implies that SQL statements must be able to access XML data and RDF data, that XQuery expressions must be able to access SQL data and RDF data, and that SPARQL queries must be able to access SQL data and XML data. Achieving that vision requires a significant amount of infrastructure.

We’ve long known that one language can be used to query data represented in a data model other than that for which the language was designed by mapping the data from its native data model into the query language’s data model. An important example is SQL/XML [SQL/XML], which allows relational data to be published in an XML form (that is, as an XPath data model instance) that can then be queried using XQuery. SQL/XML also provides a facility (XMLTABLE) that allows XML to be treated as though it were SQL data.
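To make the two directions of that mapping concrete, the following is a minimal sketch (not taken from the paper) using standard SQL/XML publishing functions and XMLTABLE. The EMPLOYEES and DOCS tables, their columns, and the element names are invented for illustration only.

    -- Publishing relational data as XML (relational model -> XDM),
    -- assuming a hypothetical EMPLOYEES table:
    SELECT XMLELEMENT(
             NAME "emp",
             XMLFOREST(e.emp_name AS "name", e.salary AS "salary")
           ) AS emp_xml
    FROM   employees e;

    -- Treating XML as though it were SQL data (XDM -> relational) via XMLTABLE,
    -- assuming a hypothetical DOCS table with an XML column named DOC:
    SELECT x.emp_name, x.salary
    FROM   docs d,
           XMLTABLE('/emps/emp' PASSING d.doc
                    COLUMNS emp_name VARCHAR(100) PATH 'name',
                            salary   NUMERIC(9,2) PATH 'salary') AS x;

The first query yields XML values that an XQuery expression can consume; the second allows ordinary SQL joins and predicates to be applied to data that originated as XML, which is exactly the setting in which the impedance-mismatch issues discussed next arise.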
Such mappings naturally run into the famous “impedance mismatch” caused by factors such as the collections of data types differing amongst query languages and their corresponding data models.

RDF is presented in [RDF-C] as yet another data model – a graph data model – distinct from the XPath tree-structured data model and from the SQL “flat table” data model. It is tempting to reject that assertion because of the tuple nature of RDF entities. However, a close examination of [RDF-C] shows subtle – but important – differences between collections of RDF triples and multisets of rows in SQL tables of three columns. For example, SQL tables are defined to comprise one or more columns, each having a particular declared data type (such as INTEGER, TIMESTAMP, or some user-defined type). Every row in that table has exactly that number of columns, and the value of each column in each such row must be of the column’s declared type. (Values of user-defined type columns may have a most-specific type that is a subtype of that user-defined type, which is a concept that doesn’t apply to columns of SQL’s built-in types.)

In addition, all of SQL’s metadata is essentially structural metadata – that is, metadata about the structure of the various tables, about the data types of columns, and so forth – and not semantic metadata, information that actually describes the meaning of the SQL data. In the SQL model, the data types of columns are captured in various system tables, but very little information about the relationships of those data types is derivable from the system tables. Of course, information from those system tables can be combined with the data in the tables themselves, although the criteria through which such combinations would be meaningful are far from clear. By contrast, a given RDF collection can be augmented by RDF triples expressed using RDF Schema [RDF-S] and OWL [OWL-L] constructs that specify the class to which a given RDF entity belongs. Last year, we investigated whether the use of SQL’s user-defined types might offer some way to map such class information from RDF into the SQL model, but the results were discouraging and we have abandoned that line of research.

Another important difference arises from the relationships between the metadata associated with each model and the data available under that model. In the SQL environment, data literally cannot exist without metadata – the schema. The two are inseparable in theory and in practice. However, in both the XPath data model and in the RDF data model, data may exist independent of any schema describing that data. While the absence of a schema may limit the ways in which the data can be interpreted, it is possible to build XML documents and RDF collections without any schema that describes them. On the World Wide Web, this distinction is especially important because, unlike in the closed world of a database system, it is impossible for there to be a central point of control at which such metadata can be created…and enforced.

Persistence Models

The first commercial “relational” database management systems began to appear about 25 years ago. At the time, data management was dominated by CODASYL and other “network” DBMSs. The conventional wisdom at the time said that the new low-performance, small-volume systems didn’t have a chance against the established base.
But the separation of data model from persistence model proved to offer incredible versatility and opportunity for tremendous gains in performance, manageability, scalability, and data volume. Since then, there have been a number of database system innovations that many hoped would overtake relational systems in the marketplace, such as object-oriented database systems (OODBMS) and the so-called “native XML” database systems. To date, none have succeeded in doing so (although many of them have found secure niche markets with unique requirements). Instead, the implementers of relational systems have co-opted the new forms of data. The advent of object-relational systems (ORDBMS) responded to the majority of the requirements that led to the development of OODBMSs, and it appears at present that those systems have been successfully extended to handle XML data (XORDBMS) for the large majority of application environments.

What, then, should be done about RDF data? RDF, as stated above, defines a graph data model. An important question to consider is this: Are the persistence requirements for graph data models so unique that the persistence engines that have served so well for relational data, object-oriented data, and XML data must be avoided? Or do those engines have sufficient flexibility that they can be used successfully for persisting and managing RDF data, too?

We considered the possibility of creating a native RDF storage engine to deal with the graph nature of RDF data. Such an engine would, no doubt, have some similarities to the CODASYL and other network DBMSs of the ’70s. While we realized that there might be some advantages to this approach, we are also highly aware that relational storage engines easily overcame any perceived advantages of such “pointer-based” systems. Furthermore, development of new database storage engines is burdened by the immense amount of infrastructure required to provide truly “industrial-strength” capabilities that existing relational engines already provide. There would have to be truly overwhelming advantages to a new storage model to justify expenditures comparable to those (literally billions of dollars) that led to the dominance of relational engines today.

We firmly believe that the storage technologies that underlie the successful relational systems of which we are aware – including commercial implementations and open source implementations – are completely adequate for RDF data management. In fact, because the nature of RDF is collections of 3-tuples, we are convinced that there is no need at all for a native RDF storage environment and that relational systems are ideally suited for the job. Having reached that conclusion, we were next faced with a somewhat higher-level decision: Should there be a new “native” RDF data type defined for relational systems, as was done for XML data? The choice here was not so obvious, as there are advantages to defining a new data type and advantages to eschewing a new type in favor of ordinary table/column/row representations.
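As a purely illustrative sketch (not part of the original paper), the following shows the “ordinary table/column/row” option in its simplest form: a hypothetical three-column triples table, and a self-join that answers a small SPARQL basic graph pattern. All table, column, and IRI names here are invented for the example.

    -- Illustrative only: one naive way to hold RDF triples in an ordinary SQL table.
    CREATE TABLE rdf_triples (
      subject    VARCHAR(1000) NOT NULL,   -- IRI or blank-node label
      predicate  VARCHAR(1000) NOT NULL,   -- IRI
      object     VARCHAR(4000) NOT NULL    -- IRI, blank-node label, or literal
    );

    -- A SPARQL basic graph pattern such as
    --   SELECT ?name
    --   WHERE { ?p <http://example.org/worksFor> <http://example.org/Oracle> .
    --           ?p <http://example.org/name> ?name . }
    -- becomes a self-join over the triples table:
    SELECT t2.object AS name
    FROM   rdf_triples t1
           JOIN rdf_triples t2 ON t2.subject = t1.subject
    WHERE  t1.predicate = 'http://example.org/worksFor'
    AND    t1.object    = 'http://example.org/Oracle'
    AND    t2.predicate = 'http://example.org/name';

Real systems typically normalize IRIs and literals into separate value tables keyed by integer identifiers and track literal datatypes explicitly, but even this naive layout illustrates why relational engines are a plausible home for collections of 3-tuples.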