Bulk Loading Techniques for Object Databases and an Application to Relational Data

Sihem Amer-Yahia (INRIA, BP 105, 78153 Le Chesnay, France), Sophie Cluet (INRIA, BP 105, 78153 Le Chesnay, France), Claude Delobel (University of Paris XI, LRI, 91405 Orsay, France)

Proceedings of the 24th VLDB Conference, New York, USA, 1998

Abstract

We present a framework for designing, in a declarative and flexible way, efficient migration programs, together with an ongoing implementation of a migration tool called RelOO whose targets are any ODBC-compliant system on the relational side and the O2 system on the object side. The framework consists of (i) a declarative language to specify database transformations from relations to objects, as well as physical properties of the object database (clustering and sorting), and (ii) an algebra-based program rewriting technique which optimizes the migration processing time while taking physical properties and transaction decomposition into account. To achieve this, we introduce equivalences of a new kind that consider side-effects on the object database.

1 Introduction

The efficient migration of bulk data into object databases is a problem frequently encountered in practice. It occurs when applications are moved from relational to object systems but, more often, in applications relying on external data that has to be refreshed regularly. Indeed, one finds more and more object replications of data that are used by satellite Java or C++ persistent applications. For example, we know that the O2 database system from O2 Technology [BDK92] is used in that manner on industrial databases holding several gigabytes of data.

The migration problem turns out to be extremely complicated. It is not rare to find migration programs requiring hours and even days to be processed. Furthermore, efficiency is not the only aspect of the problem. As we will see, flexibility in terms of database physical organization and decomposition of the migration process is at least as important. In this paper, we propose a solution to the data migration problem providing both flexibility and efficiency. The main originality of this work is an optimization technique that covers the relational-object case but is general enough to support other migration problems such as object-object, ASCII-object or ASCII-relational. In order to validate our approach, we focus on the relational-to-object case.

Relational and object database vendors currently provide a large set of connectivity tools, and some of them have load facilities (e.g., ObjectStore DB Connect, Oracle, GemConnect, ROBIN [Exe97], Persistence [INC93], OpenODB [Pac91]). However, these tools are either not flexible, inefficient, or too low-level. So far, the scientific community has mainly focused on modeling issues and has provided sound solutions to the problem of automatically generating object schemas from relational ones (e.g., [Leb93, FV95, Hai95, MGG95, JSZ96, DA97, RH97, Fon97, BGD97, IUT98]). Our approach is complementary and can be used as an efficient support for these previous proposals. In a manner similar to [WN95], we focus on the problem of incremental loading of large amounts of data into an object database, taking transaction decomposition into account. Additionally, we propose optimization techniques and the means to support an appropriate physical organization of the object database.

One major difficulty when designing a tool is to come up with the appropriate practical requirements. Obviously, every tool should provide efficiency and user-friendliness. In order to understand what more was required from a relational-object migration system, we interviewed people from O2 Technology. This led us to the conclusion that flexibility was the most important aspect of the problem. By flexibility we mean that users should have the possibility to organize the output object database according to the needs of the applications that will be supported on it. For instance, they should be able to specify some particular clustering of objects on disk. Also, users should be able to slice the migration process into pieces that are run whenever and for as long as appropriate, so that (i) the relational and object systems are not blocked for long periods of time or at an inappropriate time (i.e., when there are other important jobs to perform), and (ii) a crash during the migration process entails minimal recovery.

In this paper, we present (i) a framework for designing, in a declarative and flexible way, efficient migration programs and (ii) an ongoing implementation of a migration tool called RelOO. The proposed language allows capturing most automatically generated mappings as proposed by, e.g., [Leb93, FV95, Hai95, MGG95, JSZ96, DA97, RH97, Fon97, BGD97, IUT98]. An important characteristic of the optimization technique is that it is not restricted to the "relations towards objects" case. It can be used to support dumping and/or reloading of any database, or other kinds of migration processes (objects to objects, ASCII to objects, etc.). The RelOO system we present here is an implementation of this general framework with a given search strategy, targeting any ODBC-compliant system on the relational side and the O2 system on the object side.

In this paper, we assume that there are no updates to the relational database during the migration process. This is a strong limitation, but we believe it is an orthogonal issue that does not invalidate our approach. The declarativity of the migration specification language we propose should allow the use of the systems' log mechanisms to take updates into account.

The paper is organized as follows. Section 2 explains the migration process. Section 3 introduces our language. In Section 4, we present the algebraic framework, investigate various migration parameters and give some examples of appropriate program rewriting. In Section 5, we give a short overview of the RelOO system optimizer and code generator. Section 6 concludes the paper.
2 The Migration Process

Figure 1 illustrates our view of the migration process. The user specifies the process by means of a program containing two parts that concern (1) the logical transformations from relational to object schemas and bases, and (2) the physical constraints that should be achieved. The translation specification is given to a first module that generates (3) the object schema and (4) a default algebraic representation of the migration process. (5) Optimization based on rewriting follows, taking the user's physical constraints into account. (6) Then, migration programs are generated that can be parameterized by load size or processing time. (7) To achieve the migration, the user (or a batch program) launches these programs with the appropriate parameters whenever the system is available (i.e., not busy with other important processing).

[Figure 1: The Migration Process]

In this paper, we describe the RelOO language to specify logical translations and physical constraints, the rewriting technique, and the optimizer and code generator we implemented. We do not describe the object schema construction, which is rather obvious.
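To make steps (6) and (7) concrete, the C++ sketch below shows one way a batch program might drive generated migration programs in bounded slices. It is an illustration only: the program name migrate_employees, its --offset/--batch flags and the checkpoint file are hypothetical and not part of RelOO; the sketch merely assumes that each slice loads a fixed number of tuples inside its own object-database transaction, so that a crash loses at most the current slice.

    // Hypothetical driver for a generated migration program.
    // Assumes "./migrate_employees" loads `batch` tuples starting at `offset`
    // inside one object-database transaction and returns 0 on success.
    #include <cstdlib>
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>

    static long read_checkpoint(const std::string& path) {
        std::ifstream in(path);
        long offset = 0;
        if (in >> offset) return offset;   // resume after a crash or a previous run
        return 0;                          // first run
    }

    static void write_checkpoint(const std::string& path, long offset) {
        std::ofstream out(path, std::ios::trunc);
        out << offset << '\n';             // written only after the slice committed
    }

    int main() {
        const std::string checkpoint = "employees.ckpt";
        const long batch = 10000;          // load size chosen by the administrator
        const int slices_per_run = 5;      // bounds how long this run occupies the systems

        long offset = read_checkpoint(checkpoint);
        for (int i = 0; i < slices_per_run; ++i) {
            std::ostringstream cmd;
            cmd << "./migrate_employees --offset=" << offset << " --batch=" << batch;
            if (std::system(cmd.str().c_str()) != 0) {
                std::cerr << "slice failed; will retry from offset " << offset << '\n';
                return 1;                  // the failed slice was not checkpointed
            }
            offset += batch;
            write_checkpoint(checkpoint, offset);
        }
        return 0;
    }

Slicing the load this way addresses the flexibility requirement stated in the introduction: the relational and object systems are occupied only for the duration of a run, and recovery after a crash restarts from the last committed slice.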
3 The Specification Language

The RelOO language is simple and declarative. It is used to express logical database transformations as well as physical database organization. The complete syntax of the language is given in [AY98].

3.1 Logical Database Transformation

Database schema transformations from relational to object systems have been largely addressed. Notably, semantic relational properties such as normal forms, keys, foreign keys, existing relationships and inclusion dependencies have been used to automatically generate a faithful object image of the relational schema. We do not address this issue here but provide a simple language that can be used to implement these approaches and that supports efficient optimization strategies. The language is defined in the same spirit as existing object view languages (e.g., [AB91, SLT91, Ber91, Run92, DdST95]). Classes (generated from one or several relations), simple and complex attributes, class hierarchies, aggregation links and ODMG-like relationships [ABD+94] can be created from any relational database. Because we focus on data migration, we illustrate only some of the language features using a simple example. Part of this example will be used in the sequel to discuss the efficiency and flexibility issues that constitute our core contribution:

r-employee[emp:integer, lname:string, fname:string, town:string, function:string, salary:real]
r-project[name:string, topic:string, address:string, boss:integer]
r-emp-prj[project:string, employee:integer]

Relations r-employee and r-project provide information on, respectively, employees and projects in a research institute. In our example, they are used to construct three object classes (Employee, Researcher and Project). The underlined attributes are the keys of each relation; these are required to maintain existing links between tuples in the relational database and the objects created from them. The example specification also introduces a persistent name and two links over these classes. For instance, the name Employees is related to all employees who do not live in "Rocquencourt" and are not researchers (function ≠ 'Researcher'). The two create link statements define, respectively, a reference link manages and an ODMG relationship (an attribute and its inverse) staff/projects. The first links each employee in class Employee to the projects he/she manages (if he/she is a manager); since researchers are employees, they inherit this link.
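The excerpt stops before the RelOO specification itself, so the C++ sketch below only illustrates the shape of the target object schema that the prose describes: three classes, a manages reference link, and a staff/projects relationship. The member names, the use of plain pointers for object references, and the reading of Researcher as a subclass of Employee are inferred from the text; in an actual O2 or ODMG binding these would be persistent classes and the inverse relationship would be maintained by the system.

    #include <string>
    #include <vector>

    struct Project;                       // forward declaration

    // Built from r-employee; emp is the relational key, kept so tuples and
    // objects can be matched up across incremental loads.
    struct Employee {
        int emp;
        std::string lname, fname, town, function;
        double salary;
        std::vector<Project*> manages;    // reference link: projects this employee manages
        std::vector<Project*> projects;   // relationship, inverse of Project::staff (from r-emp-prj)
        virtual ~Employee() = default;
    };

    // Researchers are employees and therefore inherit the links above.
    struct Researcher : Employee {};

    // Built from r-project; its boss attribute presumably underlies the manages link.
    struct Project {
        std::string name;                 // relational key
        std::string topic, address;
        std::vector<Employee*> staff;     // inverse of Employee::projects
    };

A persistent name such as Employees would then designate the subset of Employee objects satisfying the stated condition on town and function.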