SQL Query Optimization for Highly Normalized Big Data


DECISION MAKING AND BUSINESS INTELLIGENCE

Nikolay I. GOLOV
Lecturer, Department of Business Analytics, School of Business Informatics, Faculty of Business and Management, National Research University Higher School of Economics
Address: 20, Myasnitskaya Street, Moscow, 101000, Russian Federation
E-mail: [email protected]

Lars RONNBACK
Lecturer, Department of Computer Science, Stockholm University
Address: SE-106 91 Stockholm, Sweden
E-mail: [email protected]

This paper describes an approach for fast ad-hoc analysis of Big Data inside a relational data model. The approach strives to achieve maximal utilization of highly normalized temporary tables through the merge join algorithm. It is designed for the Anchor modeling technique, which requires a very high level of table normalization. Anchor modeling is a novel data warehouse modeling technique, designed for classical databases and adapted by the authors of the article for a Big Data environment and a massively parallel processing (MPP) database. Anchor modeling provides flexibility and high speed of data loading, where the presented approach adds support for fast ad-hoc analysis of Big Data sets (tens of terabytes). Different approaches to query plan optimization are described and estimated, for row-based and column-based databases. Theoretical estimations and results of real data experiments carried out in a column-based MPP environment (HP Vertica) are presented and compared. The results show that the approach is particularly favorable when the available RAM resources are scarce, so that a switch is made from pure in-memory processing to spilling over to hard disk while executing ad-hoc queries. Scaling is also investigated by running the same analysis on different numbers of nodes in the MPP cluster. Configurations of five, ten and twelve nodes were tested, using click stream data of Avito, the biggest classified site in Russia.

Key words: Big Data, massively parallel processing (MPP), database, normalization, analytics, ad-hoc, querying, modeling, performance.

Citation: Golov N.I., Ronnback L. (2015) SQL query optimization for highly normalized Big Data. Business Informatics, no. 3 (33), pp. 7–14.

Introduction

Big Data analysis is one of the most popular IT tasks today. Banks, telecommunication companies, and big web companies, such as Google, Facebook, and Twitter, produce tremendous amounts of data. Moreover, nowadays business users know how to monetize such data [2]. Various artificial intelligence marketing techniques can transform big customer behavior data into millions and billions of dollars. However, implementations and platforms fast enough to execute various analytical queries over all available data remain the main issue. Until now, Hadoop has been considered the universal solution, but Hadoop has its drawbacks, especially in speed and in its ability to process difficult queries, such as analyzing and combining heterogeneous data [6].

This paper introduces a new data processing approach, which can be implemented inside a relational DBMS. The approach significantly increases the volume of data that can be analyzed within a given time frame. It has been implemented for fast ad-hoc query processing inside the column-oriented DBMS Vertica [7]. With Vertica, this approach allows data scientists to perform fast ad-hoc queries, processing terabytes of raw data in minutes, dozens of times faster than this database can normally operate. Theoretically, it can increase the performance of ad-hoc queries inside other types of DBMS, too (experiments are planned within the framework of further research).

The approach is based on the database modeling technique called Anchor modeling. Anchor modeling was first implemented to support a very high speed of loading new data into a data warehouse and to support fast changes in the logical model of the domain area, such as the addition of new entities, new relationships between them, and new attributes of the entities. Later, it turned out to be extremely convenient for fast ad-hoc queries processing high volumes of data, from hundreds of gigabytes up to tens of terabytes.

The paper is structured as follows. Since not everyone may be familiar with Anchor modeling, Section 1 explains its main principles, Section 2 discusses the aspects of data distribution in a massively parallel processing (MPP) environment, Section 3 defines the scope of analytical queries, Section 4 introduces the main principles of the query optimization approach for analytical queries, Section 5 discusses the business impact this approach has on Avito, and the paper is concluded in the final section.

1. Anchor Modeling

Anchor modeling is a database modeling technique based on the usage of the sixth normal form (6NF) [9]. Modeling in 6NF yields the maximal level of table decomposition, so that a table in 6NF has no non-trivial join dependencies. That is why tables in 6NF usually contain as few columns as possible. The following constructs are used in Anchor modeling (a minimal SQL sketch of the example model follows Fig. 1):

1. Anchor, a table of keys. Each logical entity of the domain area must have a corresponding anchor table. This table contains unique and immutable identifiers of objects of a given entity (surrogate keys). Customer and Order are two example anchors. An anchor table may also contain technical columns, such as metadata.

2. Attribute, a table of attribute values. This table stores the values of a logical entity's attributes that cannot be described as entities of their own. The Name of a Customer and the Sum of an Order are two example attributes. An attribute table must contain at least two columns, one for the entity key and one for the attribute value. If an entity has three attributes, three separate attribute tables have to be created.

3. Tie, a table of entity relationships. For example, which Customer placed a certain Order is typical for a tie. Tie tables must contain at least two columns, one for the first entity key (that of the Customer) and one for the second (that of the related Order).

Fig. 1. Anchor modeling sample data model: the Order and Customer anchors, their Sum and Name attributes, and the tie recording which Customer placed which Order, shown with sample rows
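To make the constructs concrete, here is a minimal DDL sketch of the Fig. 1 model. The paper itself gives no DDL, so all table and column names (customer_anchor, order_sum, and so on) are our own illustrations; the syntax is generic SQL that is also accepted by Vertica.

    -- Anchors: one table of immutable surrogate keys per entity.
    CREATE TABLE customer_anchor (
        cust_id INTEGER NOT NULL PRIMARY KEY  -- surrogate key; technical/metadata columns may be added
    );
    CREATE TABLE order_anchor (
        ord_id INTEGER NOT NULL PRIMARY KEY
    );

    -- Attributes: one 6NF table per attribute (entity key + value).
    -- A historized attribute also carries the single FROM DATE field
    -- used by the SCD2 scheme discussed below.
    CREATE TABLE customer_name (
        cust_id   INTEGER NOT NULL REFERENCES customer_anchor (cust_id),
        name      VARCHAR(100) NOT NULL,
        from_date TIMESTAMP NOT NULL,
        PRIMARY KEY (cust_id, from_date)
    );
    CREATE TABLE order_sum (
        ord_id    INTEGER NOT NULL REFERENCES order_anchor (ord_id),
        sum_value NUMERIC(18,2) NOT NULL,
        PRIMARY KEY (ord_id)
    );

    -- Tie: relates the two anchors ("Order placed by Customer").
    CREATE TABLE order_placed_by_customer (
        ord_id  INTEGER NOT NULL REFERENCES order_anchor (ord_id),
        cust_id INTEGER NOT NULL REFERENCES customer_anchor (cust_id),
        PRIMARY KEY (ord_id, cust_id)
    );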
Ties and attributes can store historical values using the principles of slowly changing dimensions type 2 (SCD 2) [4]. A distinctive characteristic of Anchor modeling is that historicity is provided by a single date-time field (the so-called FROM DATE), not by two fields (FROM DATE – TO DATE) as in the Data Vault modeling methodology. If historicity is provided by a single field (FROM DATE), then the second border of the interval (TO DATE) can be estimated as the FROM DATE of the next sequential value, or NULL if the next value is absent.

2. Data distribution

Massively parallel processing databases generally have shared-nothing scale-out architectures, so that each node holds some subset of the database and enjoys a high degree of autonomy with respect to executing parallelized parts of queries. For each workload posed to a cluster, the most efficient utilization occurs when the workload can be split equally among the nodes and each node can perform as much of its assigned work as possible without the involvement of other nodes. In short, data transfer between nodes should be kept to a minimum. Given arbitrarily modeled data, achieving maximum efficiency for every query is unfeasible due to the amount of duplication that would be needed on the nodes, while also defeating the purpose of a cluster. However, with Anchor modeling, a balance is found where most data is distributed and the right amount of data is duplicated in order to achieve high performance for most queries. The following three distribution schemes are commonly present in MPP databases, and they suitably coincide with Anchor modeling constructs; a projection sketch of the broadcast and segment schemes is given at the end of this section.

Broadcast the data to all nodes. Broadcasted data is available locally, duplicated, on every node in the cluster. Knots are broadcasted, since they normally take up little space and may be joined from both attributes and ties. In order to assemble an instance with knotted attributes or relationships represented through knotted ties without involving other nodes, the knot values must be available locally. Thanks to the uniqueness constraint on knot values, query conditions involving them can be resolved very quickly and locally, to reduce the row count of intermediate result sets in the execution of the query.

Segment the data according to some operation that splits it across the nodes. […] an instance takes part in can be resolved without the involvement of other nodes, and that joining a tie will not require any sort operations. Therefore, a tie will require no more than one additional projection.

Usage of a single date-time field for historicity (SCD2) gives a significant benefit: it does not require updates. New values can be added exclusively by inserts. This feature is very important for column-based databases, which are designed for fast select and insert operations but are extremely slow for delete and update operations. Single date-time field historicity also requires efficient analytical queries to estimate the closing date for each value. The distribution strategy described above guarantees that all values for a single anchor are stored on a single node and properly sorted, so closing date estimation can be extremely efficient.
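As a sketch of how this closing-date estimation can be written, assuming the illustrative customer_name table from the DDL sketch above, the standard LEAD window function derives TO DATE on read:

    -- Derive the closing date (TO DATE) of each attribute version on the fly:
    -- it is the FROM DATE of the next version, or NULL for the current version.
    SELECT cust_id,
           name,
           from_date,
           LEAD(from_date) OVER (PARTITION BY cust_id
                                 ORDER BY from_date) AS to_date
    FROM   customer_name;

Because all versions of one anchor instance reside, sorted, on a single node, a window function like this needs neither data transfer between nodes nor an extra sort.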
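The paper names the distribution schemes but shows no DDL for them. The following sketch maps the broadcast and segment schemes onto Vertica projections, using UNSEGMENTED ALL NODES for broadcasting and SEGMENTED BY HASH for segmentation; the knot table status_knot and its column are hypothetical, and customer_name is the illustrative attribute table from above.

    -- Broadcast: replicate a small knot table on every node in the cluster.
    CREATE PROJECTION status_knot_p AS
    SELECT * FROM status_knot
    ORDER BY status_id
    UNSEGMENTED ALL NODES;

    -- Segment: spread an attribute table by the anchor's surrogate key, so
    -- all versions of one instance land, sorted, on the same node.
    CREATE PROJECTION customer_name_p AS
    SELECT cust_id, name, from_date FROM customer_name
    ORDER BY cust_id, from_date
    SEGMENTED BY HASH(cust_id) ALL NODES;

Hash-segmenting every table of one entity by the same surrogate key is what lets anchor-attribute joins, and the closing-date query above, run as node-local merge joins.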
3. Analytical queries

Databases can be utilized to perform two types of tasks: OLTP tasks or OLAP tasks. OLTP means online transaction processing: a huge number of small insert, update, delete and select operations, where each operation processes a small chunk of data. OLTP tasks are typical for operational databases. OLAP means online analytical processing: a relatively small number of complex analytical […]