Business Intelligence on Non-Conventional Data

Total Pages: 16

File Type: PDF, Size: 1020 KB

Business Intelligence on Non-Conventional Data

Alma Mater Studiorum - Università di Bologna
Dottorato di Ricerca (PhD) in Computer Science and Engineering, Cycle XXIX
Settore concorsuale di afferenza: 09/H1 - Sistemi di Elaborazione delle Informazioni
Settore scientifico disciplinare: ING-INF/05 - Sistemi di Elaborazione delle Informazioni

Enrico Gallinucci
PhD Coordinator (Coordinatore Dottorato): Prof. Paolo Ciaccia
Supervisor (Relatore): Prof. Stefano Rizzi
Final examination: 2017

Acknowledgements

I would like to express my deepest gratitude to my supervisor Prof. Stefano Rizzi and to my advisor Prof. Matteo Golfarelli for their continuous support, patience and motivation during these three research years. Their mentoring has provided an invaluable contribution to both my professional and personal growth. My sincere thanks also go to Prof. Alberto Abelló and Prof. Oscar Romero for their collaboration and support during my research period in Spain at Universitat Politècnica de Catalunya, Barcelona. I would like to thank Prof. Enrico Denti, Prof. Claudio Sartori, Prof. Robert Wrembel and Prof. Esteban Zimanyi for reviewing this thesis. I wish to thank my dear colleagues in Cesena and Barcelona for the effective teamwork, the stimulating discussions and the strong camaraderie. Finally, I would like to thank my family and my closest friends for making this journey easier with their unquestioned love and support.

Abstract

The revolution in digital communications witnessed over the last decade has had a significant impact on the world of Business Intelligence (BI). In the big data era, the amount and diversity of data that can be collected and analyzed for the decision-making process transcends the restricted and structured set of internal data that BI systems are conventionally limited to. This thesis investigates the unique challenges imposed by three specific categories of non-conventional data: social data, linked data and schemaless data.

Social data comprises the user-generated contents published through websites and social media, which can provide a fresh and timely perception of people's tastes and opinions. In Social BI (SBI), the analysis focuses on topics, meant as specific concepts of interest within the subject area. In this context, this thesis proposes the meta-star, an alternative strategy to the traditional star schema for modeling hierarchies of topics to enable OLAP analyses. The thesis also presents an architectural framework of a real SBI project and a cross-disciplinary benchmark for SBI.

Linked data employ the Resource Description Framework (RDF) to provide a public network of interlinked, structured, cross-domain knowledge. In this context, this thesis proposes an interactive and collaborative approach to build aggregation hierarchies from linked data.

Schemaless data refers to the storage of data in NoSQL databases that do not force a predefined schema, but let database instances embed their own local schemata. In this context, this thesis proposes an approach to determine the schema profile of a document-based database; the goal is to facilitate users in a schema-on-read analysis process by understanding the rules that drove the usage of the different schemata.

A final and complementary contribution of this thesis is an innovative technique in the field of recommendation systems to overcome user disorientation in the analysis of a large and heterogeneous wealth of data.

Table of Contents

List of figures
List of tables
1 Introduction
  1.1 Business Intelligence
  1.2 Motivations and Contributions
    1.2.1 Social BI
    1.2.2 Exploratory BI
    1.2.3 Pervasive BI
2 Background
  2.1 Data Warehouse
  2.2 DW Architecture
  2.3 OLAP Analysis
  2.4 Multidimensional Model
  2.5 Logical modeling: the star schema
3 Meta-star: a modeling approach for social data aggregation
  3.1 Introduction
    3.1.1 Motivation
    3.1.2 Goal
  3.2 Related Literature
  3.3 Meta-Stars
    3.3.1 Slowly-Changing Topics and Levels
  3.4 Querying Meta-Stars
    3.4.1 Translating Group-by Components into SQL
    3.4.2 Translating Semantic Filters into SQL
    3.4.3 Translating Selections into SQL
    3.4.4 Impact of Static Levels
  3.5 Query Execution Plans and Cost Model for Meta-Stars
  3.6 Evaluation
    3.6.1 Time (for Accessing Topics)
    3.6.2 Time (for Accessing Facts)
    3.6.3 Queries with Semantic Topic Aggregation
    3.6.4 Space
  3.7 Conclusions
4 An architectural and methodological framework for Social BI
  4.1 Introduction
  4.2 Related Literature
  4.3 Architectural and Methodological Framework
  4.4 A Case Study on EU Politics
  4.5 Architectural Options
    4.5.1 Analysis
    4.5.2 ODS
    4.5.3 Crawling
    4.5.4 Semantic Enrichment
  4.6 Case Study Analysis
    4.6.1 Effectiveness
    4.6.2 Efficiency
    4.6.3 Sustainability
  4.7 Conclusions
5 SABINE: a modular benchmark for Social BI
  5.1 Introduction
  5.2 The Content of SABINE
    5.2.1 Topics and Mappings
    5.2.2 Clips and Annotations
    5.2.3 Multidimensional Cubes
  5.3 SABINE Construction Techniques
    5.3.1 Ontology Design
    5.3.2 Crawling
    5.3.3 Text Analysis
    5.3.4 Topic Search
    5.3.5 Sentiment Analysis
    5.3.6 Data Linking
  5.4 Research Tasks
    5.4.1 Content Analysis Tasks
    5.4.2 Semantic Analysis Tasks
    5.4.3 SBI Analytics Tasks
  5.5 Conclusions
6 iMOLD: a collaborative approach for Exploratory BI on linked data
  6.1 Introduction
  6.2 Related Literature
  6.3 Background
    6.3.1 Multidimensional modeling
    6.3.2 Linked data modeling
  6.4 Approach Overview
  6.5 Aggregation Patterns in Ontologies
    6.5.1 Association-Based Patterns
    6.5.2 Generalization-Based Patterns
  6.6 Acquisition
    6.6.1 Acquisition of Association-Based Patterns
    6.6.2 Acquisition of Generalization-Based Patterns
    6.6.3 Mixed patterns
    6.6.4 User Experience
    6.6.5 Collaboration and Reuse
  6.7 Case Study and User Evaluation
    6.7.1 Case Study
    6.7.2 User Evaluation
  6.8 Conclusions
7 Profiling hidden schemata on schemaless data
  7.1 Introduction
  7.2 Related Literature
  7.3 Requirements for Schema Profiling
  7.4 Formal Background
  7.5 Evaluating Schema Profiles
    7.5.1 Explicativeness
    7.5.2 Precision
    7.5.3 Conciseness
  7.6 Building Schema Profiles
  7.7 Experimental Results
    7.7.1 Effectiveness
    7.7.2 Efficiency
  7.8 Conclusions
8 Cubeload: a benchmark generator of OLAP sessions
  8.1 Introduction
  8.2 Related Literature
  8.3 Overview
  8.4 The Workload Model
  8.5 Session Templates
  8.6 Experiments
  8.7 Conclusions
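To make the schema-profiling idea from the abstract concrete, the sketch below enumerates the distinct schemata of a document collection by grouping documents on their field-path signatures. It is a minimal, invented illustration (all names are hypothetical), not the profiling technique of Chapter 7, which additionally explains the rules that drive the usage of each schema:

```python
from collections import Counter

def schema_signature(doc, prefix=""):
    """Return the set of field paths used by one document, e.g. {'user', 'geo.city'}."""
    paths = set()
    for key, value in doc.items():
        path = prefix + key
        if isinstance(value, dict):  # recurse into nested sub-documents
            paths |= schema_signature(value, prefix=path + ".")
        else:
            paths.add(path)
    return frozenset(paths)

def profile_collection(docs):
    """Count how many documents share each distinct local schema."""
    return Counter(schema_signature(d) for d in docs)

# Toy collection in which two schemata coexist.
docs = [
    {"user": "ann", "rating": 5},
    {"user": "bob", "rating": 3, "comment": "ok"},
    {"user": "eve", "rating": 4},
]
for schema, count in profile_collection(docs).items():
    print(sorted(schema), "->", count, "document(s)")
```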
Recommended publications
  • Normalized Form Snowflake Schema
    A snippet comparing the star and snowflake schemas with respect to normalization. The major difference between the snowflake and star schema models is that the dimension tables of the snowflake model may be kept in normalized form, whereas in a star schema the dimension tables are de-normalized; typically, the fact tables of a star schema are in third normal form while its dimension tables are in second normal form. A relation is said to be in First Normal Form if each attribute contains only atomic values. In short, the star schema trades redundancy in its dimension tables for fewer joins, while the snowflake schema keeps its dimension tables normalized.
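The contrast described in this snippet can be illustrated with a minimal sketch using Python's built-in sqlite3; the table and column names are invented for illustration, not taken from the publication:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Star schema: one de-normalized dimension table.
CREATE TABLE dim_product_star (
    product_id      INTEGER PRIMARY KEY,
    product_name    TEXT,
    category_name   TEXT,   -- repeated for every product in the category
    department_name TEXT    -- repeated as well (redundancy)
);

-- Snowflake schema: the same dimension normalized into a hierarchy.
CREATE TABLE dim_department (
    department_id   INTEGER PRIMARY KEY,
    department_name TEXT
);
CREATE TABLE dim_category (
    category_id   INTEGER PRIMARY KEY,
    category_name TEXT,
    department_id INTEGER REFERENCES dim_department
);
CREATE TABLE dim_product_snowflake (
    product_id   INTEGER PRIMARY KEY,
    product_name TEXT,
    category_id  INTEGER REFERENCES dim_category
);
""")
# Queries against the snowflake variant need extra joins, which is the
# usual price paid for removing the redundancy of the star dimension.
```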
  • The Design of Multidimensional Data Model Using Principles of the Anchor Data Modeling: an Assessment of Experimental Approach Based on Query Execution Performance
    WSEAS TRANSACTIONS on COMPUTERS. Radek Němec, František Zapletal. The Design of Multidimensional Data Model Using Principles of the Anchor Data Modeling: An Assessment of Experimental Approach Based on Query Execution Performance. Department of Systems Engineering, Faculty of Economics, VŠB - Technical University of Ostrava, Sokolská třída 33, 701 21 Ostrava, CZECH REPUBLIC. [email protected], [email protected]

    Abstract: Decision-making processes need to reflect changes in the business world in a multidimensional way. This also includes a similar way of viewing the data used for carrying out the key decisions that ensure the competitiveness of the business. In this paper we focus on the Business Intelligence system as a main toolset that helps in carrying out complex decisions and which requires a multidimensional view of data for this purpose. We propose a novel experimental approach to the design of a multidimensional data model that uses principles of the anchor modeling technique. The proposed approach is expected to bring several benefits, such as better query execution performance and better support for temporal querying. We provide an assessment of this approach mainly from the query execution performance perspective. The emphasis is placed on the assessment of this technique as a potential innovative approach for the field of data warehousing, with some implicit principles that could make the process of designing, implementing and maintaining the data warehouse more effective. The query performance testing was performed in a row-oriented database environment using a sample of 10 star queries executed over 10 sample multidimensional data models.
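For readers unfamiliar with the term, a "star query" of the kind used in the performance testing above joins a fact table to several of its dimension tables, filters, and aggregates. The generic shape below is an invented example, not one of the paper's 10 queries:

```python
# Generic shape of a star query; table and column names are illustrative.
STAR_QUERY = """
SELECT d.year, p.category_name, SUM(f.amount) AS revenue
FROM   fact_sales f
JOIN   dim_date    d ON f.date_id    = d.date_id
JOIN   dim_product p ON f.product_id = p.product_id
WHERE  d.year BETWEEN 2010 AND 2012
GROUP  BY d.year, p.category_name;
"""
```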
  • Chapter 7 Multi Dimensional Data Modeling
    Chapter 7: Multi Dimensional Data Modeling. The content of this presentation has been taken from the book "Fundamentals of Business Analytics" by RN Prasad and Seema Acharya, published by Wiley India Pvt. Ltd., and remains the copyright of the book's authors and the publisher.

    Basis: You are already familiar with the concepts relating to the basics of RDBMS, OLTP, and OLAP, the role of ERP in the enterprise, as well as the "enterprise production environment" for IT deployment. In the previous lectures, you were introduced to types of digital data, OLTP and OLAP, Business Intelligence basics, and data integration. With this background, it is now time to think about how data is modelled.

    Just as a circuit diagram is to an electrical engineer, an assembly diagram is to a mechanical engineer, and a blueprint of a building is to a civil engineer, so are data models/data diagrams to a data architect. But is data modelling only the responsibility of a data architect? No: the Business Intelligence (BI) application developer today is involved in designing, developing, deploying, supporting, and optimizing storage in the form of data warehouses/data marts. To play this role efficiently, the BI application developer relies heavily on data models/data diagrams to understand the schema structure, the data, and the relationships between data.

    In this lecture, we will learn:
    • the basics of data modelling;
    • how to go about designing a data model at the conceptual and logical levels;
    • the pros and cons of popular modelling techniques such as ER modelling and dimensional modelling.

    Case Study - "TenToTen Retail Stores": A new range of cosmetic products has been introduced by a leading brand, which TenToTen wants to sell through its various outlets.
  • ER/Studio Enterprise Data Modeling
    ER/Studio®, a model-driven data architecture and database design solution, helps companies discover, document, and reuse data assets. With round-trip database support, data architects have the power to thoroughly analyze existing data sources as well as design and implement high-quality databases that reflect business needs. The highly readable visual format enhances communication across job functions, from business analysts to application developers. ER/Studio Enterprise also enables team and enterprise collaboration with its repository.
    • Enhance visibility into your existing data assets
    • Effectively communicate models across the enterprise
    • Improve data consistency
    • Trace data origins and whereabouts to enhance data integration and accuracy

    ENHANCE VISIBILITY INTO YOUR EXISTING DATA ASSETS: As data volumes grow and environments become more complex, corporations find it increasingly difficult to leverage their information. ER/Studio provides an easy-to-use visual medium to document, understand, and publish information about data assets so that they can be harnessed to support business objectives. Powerful reverse engineering of industry-leading database systems allows a data modeler to compare and consolidate common data structures without creating unnecessary duplication. Using industry-standard notations, data modelers can create an information hub by importing, analyzing, and repurposing metadata from data sources such as business intelligence applications, ETL environments, XML documents, and other modeling solutions.

    Related Products:
    • ER/Studio Viewer: view, navigate and print ER/Studio models in a view-only environment.
    • Describe™: design, document, and maintain enterprise applications written in Java, C++, and IDL for better code quality and shorter time to market.
    • DT/Studio®: an easy-to-use visual medium to …
  • Advantages of Dimensional Data Modeling
    Advantages of Dimensional Data Modeling. 2997 Yarmouth Greenway Drive, Madison, WI 53711, (608) 278-9964, www.sys-seminar.com

    Top Ten Reasons Why Your Data Model Needs a Makeover
    1. Ad hoc queries are difficult to construct for end-users or must go through database "gurus."
    2. Even standard reports require considerable effort and detailed knowledge of the database.
    3. Data is not integrated or is inconsistent across sources.
    4. Changes in data values or in data sources cannot be handled gracefully.
    5. The structure of the data does not mirror business processes or business rules.
    6. The data model limits which BI tools can be used.
    7. There is no system for maintaining change history or collecting metadata.
    8. Disk space is wasted on redundant values.
    9. Users who might benefit from the data don't use it.
    10. Maintenance is tedious and ad hoc.

    Part 1 - Data Model Overview
    • What is data modeling and why is it important?
    • Three common data models: de-normalized (SAS data sets), normalized, and dimensional
    • Benefits of the dimensional model

    What is data modeling?
    • The generalized logical relationship among tables
    • Usually reflected in the physical structure of the tables
    • Not tied to any particular product or DBMS
    • A critical design consideration

    Why is data modeling important?
    • It allows you to optimize performance
    • It allows you to minimize costs
    • It facilitates system documentation and maintenance
    • The dimensional data model is the foundation of a well-designed data mart or data warehouse

    Common data models - three general data models we will review:
    • De-normalized: expected by many SAS procedures
    • Normalized: often used in transaction-based systems such as order entry
    • Dimensional: often used in data warehouse systems and systems subject to ad hoc queries
  • Bio-Ontologies Submission Template
    Relational to RDF mapping using D2R for translational research in neuroscience. Rudi Verbeeck*1, Tim Schultz2, Laurent Alquier3 and Susie Stephens4. Johnson & Johnson Pharmaceutical Research and Development. 1 Turnhoutseweg 30, Beerse, Belgium; 2 Welch & McKean Roads, Spring House, PA, United States; 3 1000 Route 202, Raritan, NJ, United States; 4 145 King of Prussia Road, Radnor, PA, United States.

    ABSTRACT
    Motivation: To support translational research and external innovation, we are evaluating the potential of the semantic web to integrate data from discovery research through to the clinical environment. This paper describes our experiences in mapping relational databases to RDF for data sets relating to neuroscience.
    Implementation: We describe how classes were identified in the original data sets and mapped to RDF, and how connections were made to public ontologies. Special attention was paid to the mapping of experimental measures to RDF and how it was impacted by the relational schemata.
    Results: Mapping from relational databases to RDF can benefit from techniques borrowed from dimensional modeling …

    Relational database technology has been developed as an approach for managing and integrating data in a highly available, secure and scalable architecture. With this approach, all metadata is embedded or implicit in the application or metadata schema itself, which results in performant queries. However, this architecture makes it difficult to share data across a large organization where different database schemata and applications are being used. Semantic web offers a promising approach to interconnect databases across an organization, since the technology was designed to function within the distributed environment of the web. Resource Description Framework (RDF) and Web Ontology Language (OWL) are the two main semantic web standard recommendations.
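As a rough illustration of the relational-to-RDF mapping discussed above, the sketch below maps one relational row to RDF triples with Python's rdflib. The paper itself uses D2R; the namespace, class and property names here are invented, not taken from the paper:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace standing in for the public ontologies the paper
# maps to; every class and property name below is illustrative.
EX = Namespace("http://example.org/neuro/")

def row_to_rdf(graph, row):
    """Map one relational row (a dict) to RDF triples."""
    subject = EX[f"measurement/{row['id']}"]
    graph.add((subject, RDF.type, EX.ExperimentalMeasure))
    graph.add((subject, EX.subjectId, Literal(row["subject_id"])))
    graph.add((subject, EX.value, Literal(row["value"])))

g = Graph()
g.bind("ex", EX)
row_to_rdf(g, {"id": 1, "subject_id": "S-042", "value": 3.14})
print(g.serialize(format="turtle"))
```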
  • A Framework for Ontology-Based Library Data Generation, Access and Exploitation
    Universidad Politécnica de Madrid, Departamento de Inteligencia Artificial. DOCTORADO EN INTELIGENCIA ARTIFICIAL. A framework for ontology-based library data generation, access and exploitation. Doctoral Dissertation of: Daniel Vila-Suero. Advisors: Prof. Asunción Gómez-Pérez, Dr. Jorge Gracia. To Adelina, Gustavo, Pablo and Amélie. Madrid, July 2016.

    Abstract: Historically, libraries have been responsible for storing, preserving, cataloguing and making available to the public large collections of information resources. In order to classify and organize these collections, the library community has developed several standards for the production, storage and communication of data describing different aspects of library knowledge assets. However, as we will argue in this thesis, most of the current practices and standards available are limited in their ability to integrate library data within the largest information network ever created: the World Wide Web (WWW). This thesis aims at providing theoretical foundations and technical solutions to tackle some of the challenges in bridging the gap between these two areas: library science and technologies, and the Web of Data. The investigation of these aspects has been tackled with a combination of theoretical, technological and empirical approaches. Moreover, the research presented in this thesis has been largely applied and deployed to sustain a large online data service of the National Library of Spain: datos.bne.es. Specifically, this thesis proposes and evaluates several constructs, languages, models and methods with the objective of transforming and publishing library catalogue data using semantic technologies and ontologies. In this thesis, we introduce marimba-framework, an ontology-based library data framework that encompasses these constructs, languages, models and methods.
  • Round-Trip Database Support Gives ER/Studio Data Architect Users The
    Round-trip database support gives ER/Studio Data Architect users the power to easily reverse-engineer, compare and merge, and visually document data assets residing in diverse locations, from data centers to mobile platforms. You'll leverage enterprise data as a corporate asset more effectively and can rest assured that compliance is supported for business standards and mandatory regulations, essential factors in an organizational data governance program.

    Automate and scale your data modeling: "We had to use a tool like Visio and do everything by hand before. We're talking about thousands of tables with subsequent relationships. Now ER/Studio automates the most time consuming tasks." - Jason Soroko, Business Architect at Entrust

    Uncover your database inconsistencies: "Embarcadero tools detected inconsistencies we didn't even know existed in our systems, saving us from problems down the road." - U.S. Bancorp Piper Jaffray

    The Challenge of Fully Leveraging Enterprise Data: As organizations grow and data proliferates, ad hoc systems for storing, analyzing, and utilizing that data start to appear, generally located near the business unit that needs them. This practice results in disparately located databases storing different versions and formats of the same data, and an enterprise that suffers from multiple views and instances of a single data point. This all-too-common scenario forces non-compliance with business standards and mandatory regulations, while preventing business executives from the benefit of incorporating all essential data into their decision-making process. Data management professionals face three distinct challenges:
    • Reduce duplication and risk associated with multiple data capture and storage environments
  • Collecting Activity Data Using the Open Mhealth Platform an Exploratory Study on Integrating Objective Data with Sport Monitoring Systems
    Collecting activity data using the Open mHealth platform: An exploratory study on integrating objective data with sport monitoring systems. Daniel Gynnild-Johnsen, Lars-Erik Holte. Master's Thesis, Spring 2017. May 2, 2017.

    Abstract: Football players work together as a unit to perform on an elite competitive level, and the most minor abnormalities can determine the outcome of a match. Success can often be the result of healthy, uninjured and rejuvenated players working together as a collective. Even though it is impossible to control all outcomes and scenarios, the risk of failure might be minimized by monitoring players closely on an individual level. If we monitor players over a longer period of time, we might discover patterns or abnormalities in their training. This information can be used to avoid multiple scenarios related to fatigue, injuries and overtraining. In this thesis we present a proof of concept for expanding an existing self-reporting monitoring system called pmSys, and look at how football teams and players can utilize modern technology like phones and wearable devices to capture objective data. This system will collect and store the data, which can be processed into useful visualised feedback and help a team evaluate its players. This way the coaches can take mitigating measures to improve certain aspects that might be lacking on a player or team level. By eliminating the use of pen and paper, pmSys introduces a simpler way of reporting the players' health status.
  • SAS Decision Services 6.4
    SAS® Decision Services 6.4: Administrator's Guide. SAS® Documentation.

    The correct bibliographic citation for this manual is as follows: SAS Institute Inc. 2015. SAS® Decision Services 6.4: Administrator's Guide. Cary, NC: SAS Institute Inc.

    Copyright © 2015, SAS Institute Inc., Cary, NC, USA. All rights reserved. Produced in the United States of America. For a hard-copy book: No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, or otherwise, without the prior written permission of the publisher, SAS Institute Inc. For a web download or e-book: Your use of this publication shall be governed by the terms established by the vendor at the time you acquire this publication. The scanning, uploading, and distribution of this book via the Internet or any other means without the permission of the publisher is illegal and punishable by law. Please purchase only authorized electronic editions and do not participate in or encourage electronic piracy of copyrighted materials. Your support of others' rights is appreciated. U.S. Government License Rights; Restricted Rights: The Software and its documentation is commercial computer software developed at private expense and is provided with RESTRICTED RIGHTS to the United States Government. Use, duplication or disclosure of the Software by the United States Government is subject to the license terms of this Agreement pursuant to, as applicable, FAR 12.212, DFAR 227.7202-1(a), DFAR 227.7202-3(a) and DFAR 227.7202-4 and, to the extent required under U.S.
  • Query Optimizing for On-Line Analytical Processing Adventures in the Land of Heuristics
    Aalto University, School of Science, Master's Programme in Computer, Communication and Information Sciences. Jonas Berg. Query Optimizing for On-line Analytical Processing - Adventures in the land of heuristics. Master's Thesis, Espoo, May 22, 2017. Supervisor: Professor Eljas Soisalon-Soininen. Advisors: Jarkko Miettinen M.Sc. (Tech.), Marko Nikula M.Sc. (Tech.)

    ABSTRACT OF MASTER'S THESIS
    Author: Jonas Berg
    Title: Query Optimizing for On-line Analytical Processing - Adventures in the land of heuristics
    Date: May 22, 2017
    Pages: vii + 71
    Major: Computer Science
    Code: SCI3042
    Supervisor: Professor Eljas Soisalon-Soininen
    Advisors: Jarkko Miettinen M.Sc. (Tech.), Marko Nikula M.Sc. (Tech.)

    Newer database technologies, such as in-memory databases, have largely forgone query optimization. In this thesis, we presented a use case for query optimization for an in-memory column-store database management system used for both on-line analytical processing and on-line transaction processing. To date, the system in question has used a naïve query optimizer for deciding on join order. We went through related literature on the history and evolution of database technology, focusing on query optimization. Based on this, we analyzed the current system and presented improvements for its query processing abilities. We implemented a new query optimizer and experimented with it, seeing how it performed on several queries, concluding that it is a successful improvement …
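The join-order problem this thesis tackles can be sketched with a toy greedy heuristic: estimate the cost of a left-deep plan as the sum of its intermediate-result sizes, and at each step append the table that keeps that estimate lowest. Everything below (cardinalities, selectivities, the cost model) is invented for illustration and is not the optimizer implemented in the thesis:

```python
from itertools import permutations

# Hypothetical table cardinalities and per-join-pair selectivities.
CARD = {"sales": 1_000_000, "product": 5_000, "store": 200}
SEL = {
    frozenset({"sales", "product"}): 1e-4,
    frozenset({"sales", "store"}): 5e-3,
    frozenset({"product", "store"}): 1.0,  # no join predicate: cross product
}

def plan_cost(order):
    """Estimated cost of a left-deep plan: sum of intermediate result sizes."""
    size = CARD[order[0]]
    cost = 0.0
    joined = {order[0]}
    for table in order[1:]:
        # Use the most selective predicate connecting the new table.
        sel = min(SEL[frozenset({table, j})] for j in joined)
        size = size * CARD[table] * sel  # estimated size of the join output
        cost += size
        joined.add(table)
    return cost

def greedy_order(tables):
    """Greedy heuristic: start from the smallest table, add the cheapest next."""
    remaining = set(tables)
    order = [min(remaining, key=lambda t: CARD[t])]
    remaining.remove(order[0])
    while remaining:
        nxt = min(remaining, key=lambda t: plan_cost(order + [t]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

best = min(permutations(CARD), key=plan_cost)  # exhaustive search, for comparison
print("greedy:", greedy_order(CARD))
print("exact :", list(best))
```

Running it shows why such heuristics are only approximations: the greedy plan is not always the exhaustive optimum, but it avoids the factorial search space.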
  • Data Modeling Immersion 3NF DV & Dimensional
    Course Description. DATA MODELING IMMERSION. Modeling: Operational, Data Warehousing & Data Marts. GENESEE ACADEMY, LLC, 2013. Course developed by: Hans Hultgren.

    Overview: Data modeling is the process of database design. As with most design processes, data modeling is both an art and a science. Art, in that it is a creative process whereby we analyze, design and model specific solutions based on unique requirements. Science, in that we apply modeling approaches, methodologies and specific modeling patterns based on the constraints, variables and performance characteristics of the database we are creating. Our operational systems (core business applications), our data warehouse, and our data marts each have very different constraints, variables and performance characteristics. This course teaches data modeling and covers the main data modeling techniques related to each of these three major architectural layers. In this class you will learn data modeling using the normalized (3NF) modeling approach for operational systems, the ensemble Data Vault modeling approach for the data warehouse, and the dimensional (Star Schema) modeling approach for data marts. This is a two (2) day course delivered in the classroom.

    The course covers the core principles of data modeling through lectures and hands-on labs and exercises, providing a solid overview of current techniques for modeling operational systems, data warehouses, and data marts: 3rd Normal Form for operational systems, Ensemble Data Vault modeling for the data warehouse, and dimensional modeling (Star Schema) for data marts.
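As a quick illustration of the Data Vault style mentioned in this course description, here is a minimal sketch assuming the standard hub/link/satellite pattern: hubs hold business keys, links hold relationships, and satellites hold descriptive, time-stamped attributes. All table and column names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hub_customer (
    customer_hk   TEXT PRIMARY KEY,  -- hash key of the business key
    customer_bk   TEXT NOT NULL,     -- business key
    load_date     TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE hub_order (
    order_hk      TEXT PRIMARY KEY,
    order_bk      TEXT NOT NULL,
    load_date     TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE link_customer_order (
    link_hk       TEXT PRIMARY KEY,
    customer_hk   TEXT REFERENCES hub_customer,
    order_hk      TEXT REFERENCES hub_order,
    load_date     TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE sat_customer (
    customer_hk   TEXT REFERENCES hub_customer,
    load_date     TEXT NOT NULL,     -- history kept by time-stamping rows
    name          TEXT,
    city          TEXT,
    PRIMARY KEY (customer_hk, load_date)
);
""")
# New attributes or sources extend the model with new satellites rather
# than altering existing tables, which is the pattern's main selling point.
```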