Extract Transform Load (ETL) Process in Distributed Database Academic Data Warehouse


APTIKOM Journal on Computer Science and Information Technologies, Vol. 4, No. 2, 2019, pp. 61-68
ISSN: 2528-2417, DOI: 10.11591/APTIKOM.J.CSIT.36

Extract transform load (ETL) process in distributed database academic data warehouse

Ardhian Agung Yulianto
Industrial Engineering Department, Engineering Faculty, Andalas University
Kampus Limau Manis, Kota Padang, West Sumatera, Indonesia, 25163
Corresponding author, e-mail: ardhian.ay@eng.unand.ac.id
Received May 15, 2019; Revised June 10, 2019; Accepted June 30, 2019

Abstract
While a data warehouse is designed to support the decision-making function, its most time-consuming part is the Extract Transform Load (ETL) process. In the case of an academic data warehouse whose data sources come from the faculties' distributed databases, integration is not straightforward even though every faculty uses a typical, identical database. This paper presents the ETL process for a distributed database academic data warehouse. Following the data flow thread in the data staging area, a deep analysis is performed to identify all tables in each data source, including content profiling. The cleaning, conforming, and data delivery steps then pour the different data sources into the data warehouse (DW). Since the DW is developed using Kimball's bottom-up multidimensional approach, we found three types of extraction activities over the data source tables: merge, merge-union, and union. The cleaning and conforming steps are carried out by creating conformed dimensions based on data source analysis, refinement, and hierarchy structure. The final ETL step loads the data into integrated dimension and fact tables through the generation of surrogate keys. These processes run gradually over each distributed database source until everything is incorporated. The technical activities in this distributed database ETL process can generally be adopted in other industries, provided the designer has advance knowledge of the structure and content of the data sources.

Keywords: data warehouse; distributed database; ETL; multidimensional

Copyright © 2019 APTIKOM - All rights reserved.

1. Introduction
A simple definition of Extract, Transform, and Load (ETL) could be "the set of processes for getting data from OLTP (On-Line Transaction Processing) systems into a data warehouse." The ETL process, as part of the data staging area, is the first stage of receiving data derived from heterogeneous data sources while assuring data quality and consistency [1]. In a data warehouse (DW), the ETL process is the most time-consuming part; it has even been reported that it can take up to 80% of the total project development time [2]. As an important component of a database-oriented decision support system, the DW stores records about activities and events related to a distinct business process. The DW itself is defined as "a system that extracts, cleans, conforms, and delivers source data into a dimensional data store and then supports and implements querying and analysis for decision making" [3]. The main components of a DW are operational data sources, the data staging area, the data presentation area, and data access tools [4]. Operational data sources support different business areas, technologies, and formats for staging data.
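To make the quoted ETL definition concrete before turning to the case study, the following minimal sketch (an illustration added here, not code from the paper) walks one table through the extract, transform, and load steps. The table and column names such as src_registration and dim_student, the cleaning rules, and the use of sqlite3 as a stand-in for the actual OLTP and DW engines are all assumptions.

```python
import sqlite3

# Minimal ETL sketch: move rows from an OLTP-style source table into a
# DW dimension table. All table and column names are hypothetical.

def extract(src_conn: sqlite3.Connection):
    # Extract: read raw rows from the operational (OLTP) system.
    cur = src_conn.execute(
        "SELECT student_id, student_name, faculty FROM src_registration"
    )
    return cur.fetchall()

def transform(rows):
    # Transform: basic cleaning and conforming - drop rows without an
    # identifier, trim whitespace, normalize the faculty code.
    cleaned = []
    for student_id, name, faculty in rows:
        if student_id is None:
            continue
        cleaned.append((student_id, name.strip().title(), faculty.strip().upper()))
    return cleaned

def load(dw_conn: sqlite3.Connection, rows):
    # Load: deliver the conformed rows into the data warehouse dimension.
    dw_conn.executemany(
        "INSERT INTO dim_student (student_id, student_name, faculty) VALUES (?, ?, ?)",
        rows,
    )
    dw_conn.commit()
```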
In this research, the case study is the Academic Information System at Andalas University, Padang, Indonesia. Andalas University consists of 15 faculties, and each faculty has several departments, with a total of 120 departments. This information system provides all services in the academic process, from course registration and lecturer assignment for every course to the grading system and graduation. When initiating this information system, the database administrator designed it as a distributed database system. Based on the definition of a Distributed Database (DDB), when a DDB is deployed on only one computer it remains a DDB, because it is still possible to deploy it across multiple computers [5]. The general terminology of a DDB is a collection of multiple, separate DBMSs, each running on a separate computer and utilizing some communication facility to coordinate their activities in providing shared access to the enterprise data [6]. The advantages of this configuration are that it reflects the organizational structure and improves local autonomy, availability, and performance, although on the other side there are also disadvantages such as lack of standards, complexity, and weaker integrity control [7].

By this design, the database server is installed on a single machine and consists of several separate academic databases, one per faculty. This means that every faculty has its own typical database with an identical structure (tables and attributes), but there is no guarantee that the data in the reference tables are the same. The information system runs well as a transactional system, and to prepare the enhanced academic business intelligence we propose the construction of an academic DW, including the data integration process.

Figure 1. The main components of the proposed academic data warehouse; the ETL activities are represented in the data staging area

The architecture in Figure 1 is essential for an academic data warehouse to hold truthful and comprehensive integrated data. The first objective of this study is the implementation of a bottom-up dimensional data warehouse design approach. The system is initiated from the complete business requirements provided by the university executives, then realized through dimension and fact tables with measurement data as the representation of the multidimensional technique. The other objective of this research is to identify the related attributes in each data source and classify them as master tables or operational tables. In addition, several activities are performed in the ETL process, such as extracting, cleaning, and conforming tables from the data sources and loading them into the DW. This process is carried out on the data quality and the data analysis of the different faculties' academic database data sources.

2. Research Method
With distributed database data sources, the ETL process becomes more complicated because there is no single control mechanism over the contents of each database. A gradual and careful process to identify which tables should be merged or not, to profile the tables' attributes, and to analyze the data in the tables is implemented in detail in this work. Knowing the data structure helps in cleaning and conforming dimensions and in building fact tables. In the case of an academic data warehouse, all steps are arranged based on the business process; the grain is declared, and the dimensions and data measurements in the fact tables are defined as the steps for building the data warehouse. In building the data warehouse, we adapted Kimball's bottom-up approach to developing a multidimensional model [8].
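Before turning to the four design steps, the sketch below gives a rough picture of the union-type extraction over structurally identical per-faculty databases mentioned in the abstract. It is an illustration added here, not the paper's code; the database names (akademik_teknik, ...), the table name mahasiswa (students), and its columns are all hypothetical.

```python
# Sketch of a union-type extraction across structurally identical faculty
# databases. Database, table, and column names are hypothetical examples.

FACULTY_DBS = ["akademik_teknik", "akademik_ekonomi", "akademik_kedokteran"]

def build_union_query(table, columns):
    # Every faculty database holds an identically structured copy of `table`;
    # a UNION ALL pulls all copies into one staging result, tagging each row
    # with its source faculty so it stays traceable while cleaning/conforming.
    selects = [
        "SELECT '{db}' AS source_faculty, {cols} FROM {db}.{table}".format(
            db=db, cols=", ".join(columns), table=table
        )
        for db in FACULTY_DBS
    ]
    return "\nUNION ALL\n".join(selects)

# Example: one staging query over the student table of every faculty database.
print(build_union_query("mahasiswa", ["nim", "nama", "kode_prodi"]))
```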
Based on Kimball's four-step dimensional design process, the model design activities are built by the following steps:
a. Select the business processes to be modeled, taking into account the business requirements and the available data sources.
b. Declare the grain: define the individual fact table representation related to the business process measurements.
c. Choose the dimensions: determine the sets of attributes describing the fact table measurements. Typical dimensions are time and date.
d. Identify the numeric facts: define which measures to include in the fact tables. Facts must be consistent with the declared grain.

The important advantage of this approach is that it consists of data marts as representations of the business units, which allow a very quick initial set-up and are effortless to integrate with other data marts. For an enterprise-scale DW, this method is easy to extend from the existing DW and also provides reporting capability. In the ETL process itself, there are four prominent staging steps in which data is staged (written to disk) in parallel with the data being transferred to the next stage, as described in Figure 2.

Figure 2. Four-step data flow thread in the data staging area as ETL activities [3]

Research on ETL performance has already been carried out by several researchers. Igor Mekterović et al. [9] proposed a robust ETL procedure with three agents: changer, cleaner, and loader; their research focuses on handling horizontal data segmentation and versioning when the same fact tables are used by various users and must not influence one another. Vishal Gour et al. [10] proposed another ETL performance improvement using a query cache technique to reduce the ETL response time. E. Tute et al. [11] describe an approach for modeling ETL processes to serve as a kind of interface between regular IT management, the DW, and its users in the case of a clinical data warehouse. Abid Ahmad et al. [12] report the use of a distributed database in the development of a DW; their study explains more about the architecture and the change detection component. Sonali Vyas et al. [13] describe various ETL process and testing techniques to facilitate the selection of the best ETL technique and test. Examining the studies mentioned above, the research gap addressed by this work is an advanced analysis technique for distributed database data sources. In addition, the specific case of the academic area is applied, with a strategy for each ETL step defined after analyzing the academic data.

3. Results and Discussion
3.1. Designing Multidimensional Model
Following Kimball's four-step dimensional design, the building of the academic data warehouse is shown in Table 1.

Table 1. Four-step data modeling process for the academic data warehouse
Step | Output
1. Identify the business process | Business process: analysis of new student
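As background for the loading step summarized in the abstract, the sketch below shows one common way surrogate keys can be generated while a conformed dimension is loaded. It is an illustration under assumed names and values (faculty identifiers and student numbers), not the paper's implementation.

```python
# Sketch of surrogate-key generation while loading a conformed dimension.
# Mapping each natural key (source faculty plus student number) to a
# DW-generated surrogate key lets fact rows coming from different faculty
# databases join to one integrated dimension. All values are hypothetical.

surrogate_map = {}
next_key = 1

def get_surrogate_key(source_faculty, natural_key):
    # Reuse the surrogate key if the natural key was already loaded,
    # otherwise assign the next available key.
    global next_key
    k = (source_faculty, natural_key)
    if k not in surrogate_map:
        surrogate_map[k] = next_key
        next_key += 1
    return surrogate_map[k]

# The same student number from two faculty databases gets distinct surrogate
# keys, while a repeated load of the same row reuses the existing key.
print(get_surrogate_key("akademik_teknik", "1810951001"))   # 1
print(get_surrogate_key("akademik_teknik", "1810951001"))   # 1 (reused)
print(get_surrogate_key("akademik_ekonomi", "1810951001"))  # 2
```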