On the Integration of Array and Relational Models in Databases
Pages: 16
File type: PDF, size: 1020 KB
Recommended publications
- PowerDesigner 16.6 Data Modeling
SAP® PowerDesigner® Document Version: 16.6 – 2016-02-22. Data Modeling. Contents (page numbers in parentheses): 1 Building Data Models (8); 1.1 Getting Started with Data Modeling (8): Conceptual Data Models (8), Logical Data Models (9), Physical Data Models (9), Creating a Data Model (10), Customizing your Modeling Environment (15); 1.2 Conceptual and Logical Diagrams (26): Supported CDM/LDM Notations (27), Conceptual Diagrams (31), Logical Diagrams (43), Data Items (CDM) (47), Entities (CDM/LDM) (49), Attributes (CDM/LDM) (55), Identifiers (CDM/LDM) (58), Relationships (CDM/LDM) (59), Associations and Association Links (CDM) (70), Inheritances (CDM/LDM) (77); 1.3 Physical Diagrams (82).
- Array DBMS in Environmental Science
The 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, 21-23 September 2017, Bucharest, Romania. Array DBMS in Environmental Science: Satellite Sea Surface Height Data in the Cloud. Ramon Antonio Rodriges Zalipynis, National Research University Higher School of Economics, Moscow, Russia, [email protected].

Abstract – Nowadays environmental science experiences tremendous growth of raster data: N-dimensional (N-d) arrays coming mainly from numeric simulation and Earth remote sensing. An array DBMS is a tool to streamline raster data processing. However, raster data are usually stored in files, not in databases. Moreover, numerous command line tools exist for processing raster files. This paper describes a distributed array DBMS under development that partially delegates raster data processing to such tools. Our DBMS offers a new N-d array data model to abstract from the files and the tools and processes data in a distributed fashion directly in their native file formats. As a case study, popular satellite altimetry data were used for the experiments carried out.

The data model is designed to uniformly represent diverse raster data types and formats, take into account a distributed cluster environment, and be independent of the underlying raster file formats at the same time. Also, new distributed algorithms are proposed based on the model. Four modern raster data management trends are relevant to this paper: industrial raster data models, formal array models and algebras, in situ data processing algorithms, and raster (array) DBMS. A good survey of the algorithms is contained in [7]. A recent survey of existing array DBMS and similar systems is in [5].
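The delegation idea described in this abstract is easy to picture outside the paper's system: rather than importing rasters into a database, an engine can shell out to existing raster utilities and leave the data in its native format. The sketch below is a minimal illustration of that pattern, assuming GDAL's gdal_translate command-line tool is installed; the file names and bounding box are hypothetical and the code is not part of the paper.

```python
import subprocess

def subset_geotiff(src, dst, ulx, uly, lrx, lry):
    """Delegate a spatial subset to gdal_translate instead of re-implementing it.

    The raster stays in its native file format; this script merely
    orchestrates the external command-line tool.
    """
    cmd = [
        "gdal_translate",
        "-projwin", str(ulx), str(uly), str(lrx), str(lry),  # upper-left / lower-right corners
        src, dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Hypothetical file names; any georeferenced GeoTIFF would do.
    subset_geotiff("ssh_global.tif", "ssh_black_sea.tif", 27.0, 47.0, 42.0, 40.0)
```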
- OLAP, Relational, and Multidimensional Database Systems
George Colliat, Arbor Software Corporation, 1325 Chesapeake Terrace, Sunnyvale, CA 94089.

Introduction. Many people ask about the difference between implementing On-Line Analytical Processing (OLAP) with a Relational Database Management System (ROLAP) versus a Multidimensional Database (MDD). In this paper, we will show that an MDD provides significant advantages over a ROLAP, such as several orders of magnitude faster data retrieval, several orders of magnitude faster calculation, much less disk space, and less programming effort.

Characteristics of On-Line Analytical Processing. OLAP software enables analysts, managers, and executives to gain insight into an enterprise's performance through fast interactive access to a wide variety of views of data organized to reflect the multidimensional aspect of the enterprise data. An OLAP service must meet the following fundamental requirements:
• The base level of data is summary data (e.g., total sales of a product in a region in a given period)
• Historical, current, and projected data
• Aggregation of data and the ability to navigate interactively to various levels of aggregation (drill down)
• Derived data which is computed from input data (performance ratios, variance Actual/Budget, ...)
• Multidimensional views of the data (sales per product, per region, per channel, per period, ...)
• Ad hoc fast interactive analysis (response in seconds)
• Medium to large data sets (1 to 500 Gigabytes)
• Frequently changing business model (weekly)
As an illustration we will use the six-dimension business model of a hypothetical beverage company. Each dimension is composed of a hierarchy of members: for instance, the time dimension has months (January, February, ...) as the leaf members of the hierarchy, quarters (Quarter 1, Quarter 2, ...) as the next-level members, and Year as the top member of the hierarchy.
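To make the dimension-hierarchy idea concrete, here is a small sketch (not taken from the paper) of base-level cells plus a time hierarchy in which months roll up into quarters and quarters into the year; the product names and sales figures are invented.

```python
# Time hierarchy: leaf months roll up to quarters, quarters to the year.
TIME_HIERARCHY = {
    "Q1": ["Jan", "Feb", "Mar"],
    "Q2": ["Apr", "May", "Jun"],
    "Year": ["Q1", "Q2"],
}

# Base-level cells keyed by (product, month): invented sales figures.
sales = {
    ("Cola", "Jan"): 120, ("Cola", "Feb"): 95, ("Cola", "Mar"): 130,
    ("Cola", "Apr"): 110, ("RootBeer", "Jan"): 80, ("RootBeer", "Feb"): 70,
}

def roll_up(member):
    """Aggregate a time member by summing its children; leaf months sum over all products."""
    children = TIME_HIERARCHY.get(member)
    if children is None:  # leaf month
        return sum(v for (product, month), v in sales.items() if month == member)
    return sum(roll_up(child) for child in children)

print(roll_up("Q1"))    # one quarter (a drill-down target)
print(roll_up("Year"))  # top of the hierarchy
```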
- On Active Deductive Databases
On Active Deductive Databases: The Statelog Approach. Georg Lausen, Bertram Ludäscher, Wolfgang May, Institut für Informatik, Universität Freiburg, Germany, {lausen,ludaesch,may}@informatik.uni-freiburg.de.

Abstract. After briefly reviewing the basic notions and terminology of active rules and relating them to production rules and deductive rules respectively, we survey a number of formal approaches to active rules. Subsequently, we present our own state-oriented logical approach to active rules, which combines the declarative semantics of deductive rules with the possibility to define updates in the style of production rules and active rules. The resulting language Statelog is surprisingly simple, yet captures many features of active rules including composite event detection and different coupling modes. Thus it can be used for the formal analysis of rule properties like termination and expressive power. Finally, we show how nested transactions can be modeled in Statelog, both from the operational and the model-theoretic perspective.

Introduction. Motivated by the need for increased expressiveness and the advent of new applications, rules have become very popular as a paradigm in database programming since the late eighties [Min]. Today there is a plethora of quite different application areas and semantics for rules. From a bird's-eye view, deductive and active rules may be regarded as two ends of a spectrum of database rule languages (a figure in the paper places higher-level deductive rules with stratified and well-founded semantics at one end and lower-level active rules at the other, with production rules in between and systems such as RDL, A-RDL, Ariel, Starburst, and Postgres along the spectrum).
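For readers new to the vocabulary, an active rule couples an event, a condition, and an action (ECA). The toy loop below shows only this operational reading in Python; it is a generic illustration with invented rules, not Statelog itself, which gives such rules a declarative, state-oriented logical semantics.

```python
# Event-Condition-Action rules as plain tuples: (event, condition, action).
rules = [
    # Rule 1: ordering more than is in stock creates a backorder.
    ("insert_order",
     lambda db, row: row["qty"] > db["stock"].get(row["item"], 0),
     lambda db, row: db["backorders"].append(row)),
    # Rule 2: every order is recorded.
    ("insert_order",
     lambda db, row: True,
     lambda db, row: db["orders"].append(row)),
]

db = {"stock": {"widget": 5}, "orders": [], "backorders": []}

def signal(event, row):
    """Fire every rule whose event matches and whose condition holds on the current state."""
    for ev, cond, action in rules:
        if ev == event and cond(db, row):
            action(db, row)

signal("insert_order", {"item": "widget", "qty": 10})
print(db["orders"], db["backorders"])  # the order is both recorded and backordered
```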
- Multidimensional Network Analysis
Università degli Studi di Pisa, Dipartimento di Informatica, Dottorato di Ricerca in Informatica. Ph.D. Thesis: Multidimensional Network Analysis. Michele Coscia. Supervisors: Fosca Giannotti, Dino Pedreschi. May 9, 2012.

Abstract. This thesis is focused on the study of multidimensional networks. A multidimensional network is a network in which among the nodes there may be multiple different qualitative and quantitative relations. Traditionally, complex network analysis has focused on networks with only one kind of relation. Even with this constraint, monodimensional networks posed many analytic challenges, being representations of ubiquitous complex systems in nature. However, it is a matter of common experience that the constraint of considering only one single relation at a time limits the set of real world phenomena that can be represented with complex networks. When multiple different relations act at the same time, traditional complex network analysis cannot provide suitable analytic tools. To provide the suitable tools for this scenario is exactly the aim of this thesis: the creation and study of a Multidimensional Network Analysis, to extend the toolbox of complex network analysis and grasp the complexity of real world phenomena. The urgency and need for a multidimensional network analysis is here presented, along with an empirical proof of the ubiquity of this multifaceted reality in different complex networks, and some related works that in the last two years were proposed in this novel setting, yet to be systematically defined. Then, we tackle the foundations of the multidimensional setting at different levels, both by looking at the basic extensions of the known model and by developing novel algorithms and frameworks for well-understood and useful problems, such as community discovery (our main case study), temporal analysis, link prediction and more.
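As a toy illustration of the model the thesis studies (not code from the thesis), a multidimensional network can be stored as dimension-labelled edges, and even a basic measure such as degree becomes dimension-aware; the node and dimension names below are invented.

```python
# Each edge carries a dimension label: (node, node, dimension).
edges = [
    ("alice", "bob", "email"),
    ("alice", "bob", "phone"),
    ("alice", "carol", "email"),
    ("bob", "carol", "coauthor"),
]

def degree(node, dimension=None):
    """Count edges touching `node`, optionally restricted to a single dimension."""
    return sum(
        1 for u, v, d in edges
        if node in (u, v) and (dimension is None or d == dimension)
    )

print(degree("alice"))           # degree across all dimensions -> 3
print(degree("alice", "email"))  # dimension-specific degree    -> 2
```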
- An Array DBMS for Simulation Analysis and ML Models Predictions
SAVIME: An Array DBMS for Simulation Analysis and ML Models Predictions. Hermano L. S. Lustosa (1), Anderson C. Silva (1), Daniel N. R. da Silva (1), Patrick Valduriez (2), Fabio Porto (1). (1) National Laboratory for Scientific Computing, Rio de Janeiro, Brazil, {hermano, achaves, dramos, fporto}@lncc.br. (2) Inria, University of Montpellier, CNRS, LIRMM, France, [email protected].

Abstract. Limitations in current DBMSs prevent their wide adoption in scientific applications. In order to make them benefit from DBMS support, enabling declarative data analysis and visualization over scientific data, we present an in-memory array DBMS called SAVIME. In this work we describe the system SAVIME, along with its data model. Our preliminary evaluation shows how SAVIME, by using a simple storage definition language (SDL), can outperform the state-of-the-art array database system, SciDB, during the process of data ingestion. We also show that it is possible to use SAVIME as a storage alternative for a numerical solver without affecting its scalability, making it useful for modern ML based applications.

Categories and Subject Descriptors: H.2.4 [Database Management]: Systems. Keywords: Scientific Data Management, Multidimensional Array, Machine Learning.

1. INTRODUCTION. Due to the increasing computational power of HPC environments, vast amounts of data are now generated in different research fields, and a relevant part of this data is best represented as array data. Geospatial and temporal data in climate modeling, astronomical images, medical imagery, multimedia and simulation data are naturally represented as multidimensional arrays. To analyze such data, DBMSs offer many advantages, like query languages, which ease data analysis and avoid the need for extensive coding/scripting, and a logical data view that isolates data from the applications that consume it.
- Array Databases: Concepts, Standards, Implementations
Baumann et al. J Big Data (2021) 8:28, https://doi.org/10.1186/s40537-020-00399-2. Survey paper, Open Access. Array databases: concepts, standards, implementations. Peter Baumann, Dimitar Misev, Vlad Merticariu and Bang Pham Huu*. *Correspondence: [email protected]. Large-Scale Scientific Information Systems Research Group, Jacobs University, Bremen, Germany.

Abstract. Multi-dimensional arrays (also known as raster data or gridded data) play a key role in many, if not all science and engineering domains where they typically represent spatio-temporal sensor, image, simulation output, or statistics "datacubes". As classic database technology does not support arrays adequately, such data today are maintained mostly in silo solutions, with architectures that tend to erode and not keep up with the increasing requirements on performance and service quality. Array Database systems attempt to close this gap by providing declarative query support for flexible ad-hoc analytics on large n-D arrays, similar to what SQL offers on set-oriented data, XQuery on hierarchical data, and SPARQL and CIPHER on graph data. Today, Petascale Array Database installations exist, employing massive parallelism and distributed processing. Hence, questions arise about technology and standards available, usability, and overall maturity. Several papers have compared models and formalisms, and benchmarks have been undertaken as well, typically comparing two systems against each other. While each of these represents valuable research, to the best of our knowledge there is no comprehensive survey combining model, query language, architecture, and practical usability, and performance aspects. The size of this comparison differentiates our study as well, with 19 systems compared, four benchmarked to an extent and depth clearly exceeding previous papers in the field; for example, subsetting tests were designed in a way that systems cannot be tuned to specifically these queries.
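Subsetting, the operation class the survey's benchmark stresses, amounts to trimming and slicing an n-D array. The NumPy sketch below is a neutral illustration of those two primitives on a synthetic datacube; it is not tied to any of the surveyed systems, which expose the same operations through declarative query languages.

```python
import numpy as np

# Synthetic 3-D datacube: time x lat x lon (values are just a running index).
cube = np.arange(4 * 180 * 360, dtype=np.float32).reshape(4, 180, 360)

# Trim: keep every dimension but restrict the extent (result is still 3-D).
subcube = cube[0:2, 10:50, 100:200]   # shape (2, 40, 100)

# Slice: fix one dimension at a point, reducing dimensionality (result is 2-D).
timestep = cube[1, :, :]              # shape (180, 360)

print(subcube.shape, timestep.shape)
```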
- Oracle Rdb™ SQL Reference Manual Volume 4
Oracle Rdb™ SQL Reference Manual, Volume 4. Release 7.3.2.0 for HP OpenVMS Industry Standard 64 for Integrity Servers and OpenVMS Alpha operating systems. August 2016.

SQL Reference Manual, Volume 4, Release 7.3.2.0 for HP OpenVMS Industry Standard 64 for Integrity Servers and OpenVMS Alpha operating systems. Copyright © 1987, 2016 Oracle Corporation. All rights reserved. Primary Author: Rdb Engineering and Documentation group.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing. If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable: U.S. GOVERNMENT RIGHTS. Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007).
- The Gamma Operator for Big Data Summarization on an Array DBMS
JMLR: Workshop and Conference Proceedings 36:88-103, 2014. BIGMINE 2014. The Gamma Operator for Big Data Summarization on an Array DBMS. Carlos Ordonez [email protected], Yiqun Zhang [email protected], Wellington Cabrera [email protected], University of Houston, USA. Editors: Wei Fan, Albert Bifet, Qiang Yang and Philip Yu.

Abstract. SciDB is a parallel array DBMS that provides multidimensional arrays, a query language and basic ACID properties. In this paper, we introduce a summarization matrix operator that computes sufficient statistics in one pass and in parallel on an array DBMS. Such sufficient statistics benefit a big family of statistical and machine learning models, including PCA, linear regression and variable selection. Experimental evaluation on a parallel cluster shows our matrix operator exhibits linear time complexity and linear speedup. Moreover, our operator is shown to be an order of magnitude faster than SciDB built-in operators, two orders of magnitude faster than SQL queries on a fast column DBMS, and even faster than the R package when the data set fits in RAM. We show SciDB operators and the R package fail due to RAM limitations, whereas our operator does not. We also show PCA and linear regression computation is reduced to a few minutes for large data sets. On the other hand, a Gibbs sampler for variable selection can iterate much faster in the array DBMS than in R, exploiting the summarization matrix.

Keywords: Array, Matrix, Linear Models, Summarization, Parallel.

1. Introduction. Row DBMSs remain the best technology for OLTP [Stonebraker et al. (2007)] and column DBMSs are becoming a competitor in OLAP (Business Intelligence) query processing [Stonebraker et al. (2005)] in large data warehouses [Stonebraker et al. (2010)].
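The summarization idea can be reproduced outside an array DBMS: with Z = [1 | X | y], the single matrix Gamma = Z^T Z carries n, the column sums, X^T X and X^T y, which is enough to solve linear regression (and to feed PCA). The NumPy sketch below shows only this math on random data; the paper's contribution is computing Gamma in one parallel pass inside SciDB.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 3
X = rng.normal(size=(n, d))
y = X @ np.array([2.0, -1.0, 0.5]) + 1.0 + rng.normal(scale=0.1, size=n)

# One-pass summarization: Gamma = Z^T Z with Z = [1 | X | y].
Z = np.hstack([np.ones((n, 1)), X, y[:, None]])
Gamma = Z.T @ Z                      # (d+2) x (d+2), size independent of n

# Linear regression (intercept + coefficients) from Gamma alone,
# i.e. the normal equations for the design matrix [1 | X].
A = Gamma[: d + 1, : d + 1]          # [[n, 1^T X], [X^T 1, X^T X]]
b = Gamma[: d + 1, d + 1]            # [1^T y, X^T y]
beta = np.linalg.solve(A, b)
print(beta)                          # approximately [1.0, 2.0, -1.0, 0.5]
```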
- Curriculum Vitae Michael Kay PhD FBCS
Michael Kay is widely known in the XML world as an expert on the XML processing languages XSLT and XQuery. This reputation derives from his book XSLT Programmer's Reference, from the open-source Saxon XSLT and XQuery processor which he developed, and from his work within the W3C consortium as a member of the XSLT and XQuery working groups, and as editor of several of the specifications. His expertise in the field was recognized in 2005 when he was awarded the XML Cup, an annual international award made to individuals who have made outstanding contributions to the XML community. Before moving into the XML field in 1998, Michael Kay specialized in database and information management technology. He designed a series of successful software products for the computer manufacturer ICL, and was one of the company's most senior engineers, advising the company and its customers on technology strategy. Michael Kay is a member of the XML Guild, a loose federation of leading independent XML consultants who pool knowledge and experience to tackle the most challenging XML-related problems.

Personal Details: Name: Michael Howard Kay. Address: 52 Matlock Road, Caversham Heights, Reading, Berks, UK, RG4 7BS. Phone: +44 118 948 3589. Email: [email protected]. Date of Birth: 1951-10-11. Nationality: British (born in Germany). Languages: English, German, some French.

Career Summary: Feb 2004 - present: Director, Saxonica Limited. Development and marketing of the commercial Saxon-SA product; ongoing development of the open-source Saxon-B product. Independent
- Multidimensional Arrays for Analysing Geoscientific Data
International Journal of Geo-Information. Review: Multidimensional Arrays for Analysing Geoscientific Data. Meng Lu (1,2,*,†), Marius Appel (1) and Edzer Pebesma (1). (1) Institute for Geoinformatics, University of Muenster, 48149 Muenster, Germany; [email protected] (M.A.); [email protected] (E.P.). (2) Department of Physical Geography, Faculty of Geoscience, Utrecht University, 3584 CB Utrecht, The Netherlands. * Correspondence: [email protected]; Tel.: +31 648901629. † Current address: Vening Meinesz Building A, Princetonlaan 8A, 3584 CB Utrecht, The Netherlands. Received: 27 June 2018; Accepted: 30 July 2018; Published: 3 August 2018.

Abstract: Geographic data is growing in size and variety, which calls for big data management tools and analysis methods. To efficiently integrate information from high dimensional data, this paper explicitly proposes array-based modeling. A large portion of Earth observations and model simulations are naturally arrays once digitalized. This paper discusses the challenges in using arrays such as the discretization of continuous spatiotemporal phenomena, irregular dimensions, regridding, high-dimensional data analysis, and large-scale data management. We define categories and applications of typical array operations, compare their implementation in open-source software, and demonstrate dimension reduction and array regridding in study cases using Landsat and MODIS imagery. It turns out that arrays are a convenient data structure for representing and analysing many spatiotemporal phenomena. Although the array model simplifies data organization, array properties like the meaning of grid cell values are rarely being made explicit in practice.

Keywords: multidimensional arrays; geoscientific data; data analysis; spatiotemporal modeling.

1. Introduction. An array is a storage form for a sequence of objects of similar type.
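As a small, self-contained illustration of the "dimension reduction" operation class the paper categorizes (using random values rather than the Landsat and MODIS imagery of the study cases), reducing a (time, lat, lon) cube along time yields a 2-D summary field; the block-averaging at the end is only a crude stand-in for real regridding, which would interpolate and track coordinates.

```python
import numpy as np

# (time, lat, lon) cube of invented monthly values.
cube = np.random.default_rng(1).random((12, 180, 360))

# Dimension reduction: aggregate over the time axis.
annual_mean = cube.mean(axis=0)      # shape (180, 360)
annual_max = cube.max(axis=0)

# Crude "regridding" by block-averaging 2x2 cells onto a coarser grid.
coarse = annual_mean.reshape(90, 2, 180, 2).mean(axis=(1, 3))
print(annual_mean.shape, annual_max.shape, coarse.shape)  # (180, 360) (180, 360) (90, 180)
```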
- Web Coverage Service (WCS) V1.0.0
Open Geospatial Consortium Inc. Date: 2005-10-14. Reference number of this OGC® Project Document: OGC 05-076. Version: 1.0.0 (Corrigendum). Category: OGC® Implementation Specification. Editor: John D. Evans. Web Coverage Service (WCS), Version 1.0.0 (Corrigendum). Copyright © 2005 Open Geospatial Consortium. All Rights Reserved. To obtain additional rights of use, visit http://www.opengeospatial.org/legal/. Document type: OpenGIS® Implementation Specification. Document subtype: Corrigendum. Document stage: Approved. Document language: English.

OGC 05-076, Contents (page numbers in parentheses): 1 Scope (1); 2 Conformance (1); 3 Normative references (1); 4 Terms and definitions (2); 5 Conventions (4): 5.1 Symbols (and abbreviated terms) (4), 5.2 UML notation (4), 5.3 XML schema notation (6); 6 Basic service elements (6).