Technical Report: An Overview of Data Integration and Preparation

Total Pages: 16

File Type: PDF, Size: 1020 KB

Technical Report: An Overview of Data Integration and Preparation

ElKindi Rezig, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Mike Cafarella*, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Vijay Gadepally, Lincoln Laboratory, Massachusetts Institute of Technology

May 2020

*Mike Cafarella is also with the Department of Computer Science and Engineering at the University of Michigan.

Abstract

AI application developers typically begin with a dataset of interest and a vision of the end analytic or insight they wish to gain from the data at hand. Although these are two very important components of an AI workflow, one often spends the first few weeks (sometimes months) in the phase we refer to as data conditioning. This step typically includes tasks such as figuring out how to prepare data for analytics, dealing with inconsistencies in the dataset, and determining which algorithm (or set of algorithms) will be best suited for the application. Larger, faster, and messier datasets such as those from Internet of Things sensors, medical devices or autonomous vehicles only amplify these issues. These challenges, often referred to as the three Vs (volume, velocity, variety) of Big Data, require low-level tools for data management, preparation and integration. In most applications, data can come from structured and/or unstructured sources and often includes inconsistencies, formatting differences, and a lack of ground-truth labels. In this report, we highlight a number of tools that can be used to simplify data integration and preparation steps. Specifically, we focus on data integration tools and techniques, a deep dive into an exemplar data integration tool, and a deep dive into the evolving field of knowledge graphs. Finally, we provide readers with a list of practical steps and considerations that they can use to simplify the data integration challenge. The goal of this report is to provide readers with a view of the state of the art, as well as practical tips that data creators can use to make data integration more seamless.

Contents

1 Executive Summary
2 Data Integration Introduction and Challenges
3 Tools and Techniques
  3.1 Data Integration Challenges
  3.2 Data Integration Tasks
4 Bridging the Gap between Data Preparation and Data Analytics
  4.1 System Architecture
    4.1.1 User-defined Modules
    4.1.2 Managing Machine Learning Models
    4.1.3 Visualization
    4.1.4 Debugging Suite
    4.1.5 Manual Breakpoints
    4.1.6 Automatic Breakpoints
5 Data Civilizer Use Case
6 Knowledge Graphs
  6.1 Background and Terminology
  6.2 Graph Databases
  6.3 Application Examples
  6.4 Wikidata
  6.5 Future Research
  6.6 The Knowledge Graph Programming System
  6.7 Example Walkthrough
  6.8 Language Features
    6.8.1 Types and Values
    6.8.2 Automatic Provenance
    6.8.3 Sharing Values and Variables
  6.9 Alternate Interaction Modes
7 Practical Steps
  7.1 Data Organization
  7.2 Data Quality and Discovery
  7.3 Data Privacy
  7.4 Infrastructure, Technology and Policy
8 Acknowledgements
1 Executive Summary

Many AI application developers typically begin with a dataset of interest and a vision of the end analytic or insight they wish to gain from the data at hand. Although these are two very important components of the AI pipeline, one often spends the first few weeks (sometimes months) in the phase we refer to as data conditioning. This step typically includes tasks such as figuring out how to store data, dealing with inconsistencies in the dataset, and determining which algorithm (or set of algorithms) will be best suited for the application. Larger, faster, and messier datasets such as those from Internet of Things sensors [1], medical devices [2,3] or autonomous vehicles [4] only amplify these issues. These challenges, often referred to as the three Vs (volume, velocity, variety) of Big Data, require low-level tools for data management and data cleaning/pre-processing. In most applications, data can come from structured and/or unstructured sources and often includes inconsistencies, formatting differences, and a lack of ground-truth labels. In practice, there is a wide diversity in the quality, readiness and usability of data [5].

It is widely reported that data scientists spend at least 80% of their time doing data integration and preparation [6]. This process involves finding relevant datasets for a particular data science task (i.e., data discovery [7]). After the data has been collected, it needs to be cleaned so that it can be used by subsequent ML models [8]. Data cleaning, or data preparation, refers to several data transformations that make the data reliable for future analysis. Example transformations include: (1) finding and fixing errors in the data using rules (e.g., an employee salary should not exceed $1M); (2) normalizing value representations [9]; (3) imputing missing values [10, 11]; and (4) detecting and reconciling duplicates. (A minimal code sketch of these example transformations appears after the summary list below.)

This document provides a survey of the state of the art in data preparation and integration. To help make concepts clearer, we also discuss a tool developed by our team called Data Civilizer [8]. Additionally, we detail the concept of Knowledge Graphs: a technique for describing structured data about a topic and a tool for organizing data, with applications in data preparation, integration and beyond. Finally, this document highlights a number of practical steps, based largely on experience, that can be used to simplify data preparation and integration. A summary of these steps is given below:

1. Data Organization
   • Leverage tools for schema integration
   • Use standardized file formats and naming conventions
2. Data Quality and Discovery
   • Maintain data integrity through enforceable constraints
   • Leverage workflow management systems for testing various hypotheses on the data
   • Include data version control
   • Leverage data discovery tools
3. Data Privacy
   • Leverage encrypted or secure data management systems as appropriate
   • Limit data ownership
   • Avoid inadvertent data releases
   • Be aware that aggregated data may be sensitive
4. Infrastructure, Technology and Policy
   • Use data lakes as appropriate
   • Databases and files can be used together
   • Leverage federated data access
   • Access to the right talent
   • Pay attention to technology selection, software licensing and software/hardware platforms
   • Maintain an acquisition and development environment conducive to incorporating new technology advances
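To make the example transformations above concrete, the following is a minimal sketch using pandas. The employee table and its column names are hypothetical (only the $1M salary rule comes from the report's example), duplicate handling is exact-match only, and the code is an illustration rather than a complete cleaning procedure.

```python
# Minimal sketch of the four example transformations, using pandas.
# The table and column names are hypothetical illustrations.
import pandas as pd

employees = pd.DataFrame({
    "name":   ["Ann Lee", "Bob Roy", "Bob Roy", "Cara Diaz"],
    "state":  ["NY", "N.Y.", "New York", "MA"],
    "salary": [95_000, 120_000, 120_000, 2_500_000],
})

# (1) Find errors with a rule: an employee salary should not exceed $1M.
violations = employees[employees["salary"] > 1_000_000]
print("Rule violations:\n", violations)

# (2) Normalize value representations (here: different spellings of a state).
state_map = {"N.Y.": "NY", "New York": "NY"}
employees["state"] = employees["state"].replace(state_map)

# (3) Impute missing values (here: treat rule violations as missing,
#     then fill missing salaries with the column median).
employees["salary"] = employees["salary"].where(employees["salary"] <= 1_000_000)
employees["salary"] = employees["salary"].fillna(employees["salary"].median())

# (4) Detect and reconcile duplicates (exact duplicates only in this sketch).
employees = employees.drop_duplicates(subset=["name", "state"])

print(employees)
```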
2 Data Integration Introduction and Challenges

Many AI application developers typically begin with a dataset of interest and a vision of the end analytic or insight they wish to gain from the data at hand. Although these are two very important components of the AI pipeline, one often spends the first few weeks (sometimes months) in the phase we refer to as data conditioning. This step typically includes tasks such as figuring out how to store data, dealing with inconsistencies in the dataset, and determining which algorithm (or set of algorithms) will be best suited for the application. Larger, faster, and messier datasets such as those from Internet of Things sensors [1], medical devices [2,3] or autonomous vehicles [12] only amplify these issues. These challenges, often referred to as the three Vs (volume, velocity, variety) of Big Data, require low-level tools for data management, data integration [13] and data preparation (this includes data cleaning/pre-processing). In most applications, data can come from structured and/or unstructured sources and often includes inconsistencies, formatting differences, and a lack of ground-truth labels. One interesting way to think about data preparedness is through the concept of data readiness levels as outlined in [5], which is similar to the idea of technology readiness levels (TRL) [14].

It is widely reported that data scientists spend at least 80% of their time doing data integration and preparation [6, 15]. This process involves finding relevant datasets for a particular data science task (i.e., data discovery [7]). After the data has been collected, it needs to be cleaned so that it can be used by subsequent ML models [8]. Data cleaning, or data preparation, refers to several data transformations that make the data reliable for future analysis. Example transformations include: (1) finding and fixing errors in the data using rules (e.g., an employee salary should not exceed $1M); (2) normalizing value representations [9]; (3) imputing missing values [10, 11]; and (4) detecting and reconciling duplicates.

At a high level, the concept of data integration and preparation is the effort required to go from raw sensor data to information that can be used in further processing steps (see the Artificial Intelligence architecture provided in [16]). Sometimes this phase is also referred to as data wrangling. Typically, each of these data integration and preparation tasks can be cumbersome.
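Data discovery, mentioned above, is the step of finding datasets relevant to a task. As a toy illustration only (not the approach of any particular tool surveyed in this report), the sketch below ranks candidate tables by column-name overlap with a query table; the table and column names are hypothetical, and real discovery systems combine much richer signals such as value overlap, embeddings, and lineage.

```python
# Toy illustration of one simple data discovery signal:
# ranking candidate tables by column-name overlap with a query table.
# All table and column names below are hypothetical.
import pandas as pd

query = pd.DataFrame(columns=["patient_id", "visit_date", "diagnosis"])

catalog = {
    "lab_results": pd.DataFrame(columns=["patient_id", "test", "value"]),
    "billing":     pd.DataFrame(columns=["invoice_id", "amount"]),
    "visits":      pd.DataFrame(columns=["patient_id", "visit_date", "clinic"]),
}

def overlap_score(a: pd.DataFrame, b: pd.DataFrame) -> float:
    """Jaccard similarity of the two tables' column-name sets."""
    cols_a, cols_b = set(a.columns), set(b.columns)
    return len(cols_a & cols_b) / len(cols_a | cols_b)

# Rank catalog tables by how well their schemas overlap with the query table.
ranked = sorted(catalog.items(),
                key=lambda kv: overlap_score(query, kv[1]),
                reverse=True)
for name, table in ranked:
    print(f"{name}: {overlap_score(query, table):.2f}")
```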
Recommended publications
  • MapReduce-Based D_ELT Framework to Address the Challenges of Geospatial Big Data
    International Journal of Geo-Information Article
    MapReduce-Based D_ELT Framework to Address the Challenges of Geospatial Big Data
    Junghee Jo 1,* and Kang-Woo Lee 2
    1 Busan National University of Education, Busan 46241, Korea
    2 Electronics and Telecommunications Research Institute (ETRI), Daejeon 34129, Korea; [email protected]
    * Correspondence: [email protected]; Tel.: +82-51-500-7327
    Received: 15 August 2019; Accepted: 21 October 2019; Published: 24 October 2019
    Abstract: The conventional extracting–transforming–loading (ETL) system is typically operated on a single machine not capable of handling huge volumes of geospatial big data. To deal with the considerable amount of big data in the ETL process, we propose D_ELT (delayed extracting–loading–transforming) by utilizing MapReduce-based parallelization. Among various kinds of big data, we concentrate on geospatial big data generated via sensors using Internet of Things (IoT) technology. In the IoT environment, update latency for sensor big data is typically short and old data are not worth further analysis, so the speed of data preparation is even more significant. We conducted several experiments measuring the overall performance of D_ELT and compared it with both traditional ETL and extracting–loading–transforming (ELT) systems, using different sizes of data and complexity levels for analysis. The experimental results show that D_ELT outperforms the other two approaches, ETL and ELT. In addition, the larger the amount of data or the higher the complexity of the analysis, the greater the parallelization effect of transform in D_ELT, leading to better performance over the traditional ETL and ELT approaches.
    Keywords: ETL; ELT; big data; sensor data; IoT; geospatial big data; MapReduce
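The sketch below is not the paper's implementation (which uses MapReduce-based parallelization over geospatial data); it is only a toy illustration of the ordering difference described in the abstract above: ETL transforms records before loading, while a delayed (D_ELT-style) pipeline loads raw records first and transforms them in parallel only when an analysis needs them. All function and data names are hypothetical.

```python
# Toy sketch of ETL vs. delayed-transformation ordering (not the paper's code).
from concurrent.futures import ThreadPoolExecutor

def extract():             # read raw sensor records (hypothetical data)
    return [{"sensor": i, "reading": i * 1.5} for i in range(8)]

def transform(record):     # e.g., convert units, attach derived fields
    return {**record, "reading_m": record["reading"] / 100.0}

def load(records, store):  # persist records into a (hypothetical) store
    store.extend(records)
    return store

# Conventional ETL: transform every record before loading it.
etl_store = load([transform(r) for r in extract()], [])

# Delayed transformation: load raw records immediately, then transform
# lazily and in parallel only when an analysis actually needs them
# (MapReduce in the paper; a simple thread pool in this sketch).
delt_store = load(extract(), [])
with ThreadPoolExecutor() as pool:
    analysed = list(pool.map(transform, delt_store))
```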
  • Scientific Data Mining in Astronomy
    SCIENTIFIC DATA MINING IN ASTRONOMY
    Kirk D. Borne
    Department of Computational and Data Sciences, George Mason University, Fairfax, VA 22030, USA
    [email protected]
    arXiv:0911.0505v1 [astro-ph.IM] 3 Nov 2009
    Abstract: We describe the application of data mining algorithms to research problems in astronomy. We posit that data mining has always been fundamental to astronomical research, since data mining is the basis of evidence-based discovery, including classification, clustering, and novelty discovery. These algorithms represent a major set of computational tools for discovery in large databases, which will be increasingly essential in the era of data-intensive astronomy. Historical examples of data mining in astronomy are reviewed, followed by a discussion of one of the largest data-producing projects anticipated for the coming decade: the Large Synoptic Survey Telescope (LSST). To facilitate data-driven discoveries in astronomy, we envision a new data-oriented research paradigm for astronomy and astrophysics – astroinformatics. Astroinformatics is described as both a research approach and an educational imperative for modern data-intensive astronomy. An important application area for large time-domain sky surveys (such as LSST) is the rapid identification, characterization, and classification of real-time sky events (including moving objects, photometrically variable objects, and the appearance of transients). We describe one possible implementation of a classification broker for such events, which incorporates several astroinformatics techniques: user annotation, semantic tagging, metadata markup, heterogeneous data integration, and distributed data mining. Examples of these types of collaborative classification and discovery approaches within other science disciplines are presented.
    1 Introduction
    It has been said that astronomers have been doing data mining for centuries: “the data are mine, and you cannot have them!”.
  • A Survey on Data Collection for Machine Learning: A Big Data - AI Integration Perspective
    A Survey on Data Collection for Machine Learning: A Big Data - AI Integration Perspective
    Yuji Roh, Geon Heo, Steven Euijong Whang, Senior Member, IEEE
    Abstract—Data collection is a major bottleneck in machine learning and an active research topic in multiple communities. There are largely two reasons data collection has recently become a critical issue. First, as machine learning is becoming more widely-used, we are seeing new applications that do not necessarily have enough labeled data. Second, unlike traditional machine learning, deep learning techniques automatically generate features, which saves feature engineering costs, but in return may require larger amounts of labeled data. Interestingly, recent research in data collection comes not only from the machine learning, natural language, and computer vision communities, but also from the data management community due to the importance of handling large amounts of data. In this survey, we perform a comprehensive study of data collection from a data management point of view. Data collection largely consists of data acquisition, data labeling, and improvement of existing data or models. We provide a research landscape of these operations, provide guidelines on which technique to use when, and identify interesting research challenges. The integration of machine learning and data management for data collection is part of a larger trend of Big data and Artificial Intelligence (AI) integration and opens many opportunities for new research.
    Index Terms—data collection, data acquisition, data labeling, machine learning
    1 Introduction
    We are living in exciting times where machine learning is having a profound influence on a wide range of applications. This problem applies to any novel application that benefits from machine learning.
  • Data Quality Through Data Integration: How Integrating Your IDEA Data Will Help Improve Data Quality
    Center for the Integration of IDEA Data
    Data Quality Through Data Integration: How Integrating Your IDEA Data Will Help Improve Data Quality
    July 2018
    Authors: Sara Sinani & Fred Edora
    High-quality data is essential when looking at student-level data, including data specifically focused on students with disabilities. For state education agencies (SEAs), it is critical to have a solid foundation for how data is collected and stored to achieve high-quality data. The process of integrating the Individuals with Disabilities Education Act (IDEA) data into a statewide longitudinal data system (SLDS) or other general education data system not only provides SEAs with more complete data, but also helps SEAs improve accuracy of federal reporting, increase the quality of and access to data within and across data systems, and make better informed policy decisions related to students with disabilities. Through the data integration process, including mapping data elements, reviewing data governance processes, and documenting business rules, SEAs will have developed documented processes and policies that result in more integral data that can be used with more confidence. In this brief, the Center for the Integration of IDEA Data (CIID) provides scenarios based on the continuum of data integration by focusing on three specific scenarios along the integration continuum to illustrate how a robust integrated data system improves the quality of data.
    Scenario A: Siloed Data Systems
    Data systems where the data is housed in separate systems and State Education Agency databases are often referred to as “data silos.”1 These data silos are isolated from other data systems and databases.
  • Best Practices in Data Collection and Preparation: Recommendations for Reviewers, Editors, and Authors
    Feature Topic on Reviewer Resources
    Organizational Research Methods 2021, Vol. 24(4) 678–693
    © The Author(s) 2019
    Article reuse guidelines: sagepub.com/journals-permissions
    DOI: 10.1177/1094428119836485
    journals.sagepub.com/home/orm
    Best Practices in Data Collection and Preparation: Recommendations for Reviewers, Editors, and Authors
    Herman Aguinis, N. Sharon Hill, and James R. Bailey
    Abstract
    We offer best-practice recommendations for journal reviewers, editors, and authors regarding data collection and preparation. Our recommendations are applicable to research adopting different epistemological and ontological perspectives—including both quantitative and qualitative approaches—as well as research addressing micro (i.e., individuals, teams) and macro (i.e., organizations, industries) levels of analysis. Our recommendations regarding data collection address (a) type of research design, (b) control variables, (c) sampling procedures, and (d) missing data management. Our recommendations regarding data preparation address (e) outlier management, (f) use of corrections for statistical and methodological artifacts, and (g) data transformations. Our recommendations address best practices as well as transparency issues. The formal implementation of our recommendations in the manuscript review process will likely motivate authors to increase transparency because failure to disclose necessary information may lead to a manuscript rejection decision. Also, reviewers can use our recommendations for developmental purposes to highlight which particular issues should be improved in a revised version of a manuscript and in future research. Taken together, the implementation of our recommendations in the form of checklists can help address current challenges regarding results and inferential reproducibility as well as enhance the credibility, trustworthiness, and usefulness of the scholarly knowledge that is produced.
  • The Risks of Using Spreadsheets for Statistical Analysis: Why Spreadsheets Have Their Limits, and What You Can Do to Avoid Them
    The risks of using spreadsheets for statistical analysis: why spreadsheets have their limits, and what you can do to avoid them
    Introduction
    Spreadsheets are widely used for statistical analysis; and while they are incredibly useful tools, they are useful only to a certain point. When used for a task they’re not designed to perform, or for a task at or beyond the limit of their capabilities, spreadsheets can be somewhat risky. This paper presents some points you should consider if you use, or plan to use, a spreadsheet to perform statistical analysis. It also describes an alternative that in many cases will be more suitable.
    Why spreadsheets are popular
    (88% of all spreadsheets contain at least one error.)
    A spreadsheet is an attractive choice for performing calculations because it’s easy to use. Most of us know (or think we know) how to use one. Plus, spreadsheet programs come as a standard desktop computer resource, so they’re already available. The answer to the first question depends on the scale and the complexity of your data analysis. A typical spreadsheet will have a restriction on the number of records it can handle, so if the scale of the job is large, a tool other than a spreadsheet may be very useful. A spreadsheet is a wonderful invention and an excellent tool—for certain jobs.
  • Managing Data in Motion: Data Integration Best Practice Techniques and Technologies
    Managing Data in Motion: Data Integration Best Practice Techniques and Technologies
    April Reeve
    AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
    Morgan Kaufmann is an imprint of Elsevier
    Acquiring Editor: Andrea Dierna; Development Editor: Heather Scherer; Project Manager: Mohanambal Natarajan; Designer: Russell Purdy
    Morgan Kaufmann is an imprint of Elsevier, 225 Wyman Street, Waltham, MA 02451, USA
    Copyright © 2013 Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).
    Notices
    Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods or professional practices may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information or methods described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.
  • Informatica Cloud Data Integration
    Informatica® Cloud Data Integration: Microsoft SQL Server Connector Guide
    March 2019
    © Copyright Informatica LLC 2017, 2019
    This software and documentation are provided only under a separate license agreement containing restrictions on use and disclosure. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC.
    U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation is subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License.
    Informatica, the Informatica logo, Informatica Cloud, and PowerCenter are trademarks or registered trademarks of Informatica LLC in the United States and many jurisdictions throughout the world. A current list of Informatica trademarks is available on the web at https://www.informatica.com/trademarks.html. Other company and product names may be trade names or trademarks of their respective owners.
    Portions of this software and/or documentation are subject to copyright held by third parties. Required third party notices are included with the product. See patents at https://www.informatica.com/legal/patents.html.
    DISCLAIMER: Informatica LLC provides this documentation "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied warranties of noninfringement, merchantability, or use for a particular purpose.
  • Data Migration (PDF)
    PTS Data Migration
    1 Contents
    2 Background
    3 Challenge
      3.1 Legacy Data Extraction
      3.2 Data Cleansing
      3.3 Data Linking
      3.4 Data Mapping
      3.5 New Data Preparation & Load
      3.6 Legacy Data Retention
    4 Solution
      4.1 Legacy Data Extraction
      4.2 Data Cleansing
  • Advanced Data Preparation for Individuals & Teams
    WRANGLER PRO: Advanced Data Preparation for Individuals & Teams
    Assess & Refine Data Faster
    [Diagram: data sources (files, databases, cloud, API inputs) feeding into data visualization, reporting, machine learning, and data science.]
    Data preparation presents the biggest opportunity for organizations to uncover not only new levels of efficiency but also new sources of value. Wrangler Pro accelerates data preparation by making the process more intuitive and efficient. The Wrangler Pro edition is specifically tailored to the needs of analyst teams and departmental use cases. It provides analysts the ability to connect to a broad range of data sources, schedule workflows to run on a regular basis and freely share their work with colleagues.
    Key Benefits
    Work with Any Data: Users can prepare any type of data, regardless of shape or size, so that it can be incorporated into their organization’s analytics efforts.
    Improve Efficiency: Make the end-to-end process of data preparation up to 10X more efficient compared to traditional methods using Excel or hand-code. By utilizing the latest techniques in data visualization and machine learning, Wrangler Pro visibly surfaces data quality issues and guides users through the process of preparing their data.
    Accelerate Data Onboarding: Whether developing analysis for internal consumption or analysis to be consumed by an end customer, Wrangler Pro enables analysts to more efficiently incorporate unfamiliar external data into their project. Users are able to visually explore, clean and join together new data sources in a repeatable workflow to improve the efficiency and quality of their work.
    (Callouts: Reduce data preparation time by up to 90%; Improve data quality and trust in analysis.)
    Why Wrangler Pro?
    Ease of Deployment: Wrangler Pro utilizes a flexible cloud-based deployment that integrates with leading cloud providers such as Amazon Web Services. “We invested in Trifacta to more efficiently
  • Preparation for Data Migration to S4/HANA
    PREPARATION FOR DATA MIGRATION TO S4/HANA
    Jono Leenstra, 3rd April 2019
    Agenda
    • Why prepare data - why now?
    • Six steps to S4/HANA
    • S4/HANA Readiness Assessment
    • Prepare for your S4/HANA Migration
    • Tips and Tricks from the trade
    • Summary
    “Data migration is often not highlighted as a key workstream for SAP S/4HANA projects”
    Why prepare data - why now?
    • The business case for data quality is always divided into either:
      – Statutory requirements – e.g., Data Protection Laws, Finance Laws, POPI
      – Financial gain – e.g.:
        • Fixing broken/sub-optimal business processes that cost money
        • Reducing data fixes that cost time and money
        • Sub-optimal data lifecycle management causing larger than required data sets, i.e., hardware and storage costs
    • Data Governance is key for Digital Transformation: IoT, Big Data, Machine Learning, Mobility, Supplier Networks, Employee Engagement
    • In summary, for the kind of investment that you are making in SAP, get your housekeeping in order. Would you move your dirt with you?
    Get Clean – Data Migration; Keep Clean – SAP Tools
    Data Quality takes time and effort
    • Data migration with data clean-up and data governance post go-live is key by focussing on the following:
      – Obsolete Data
      – Data quality
      – Master data management (for post go-live)
    • Start Early
      – SAP S/4HANA requires all customers and vendors to be maintained as a business partner
      – Consolidation and de-duplication of master data
      – Data cleansing effort (source vs target)
      – Data volumes: you can only do so much
  • Preparing a Data Migration Plan: A Practical Introduction to Data Migration Strategy and Planning
    Preparing a Data Migration Plan: A practical introduction to data migration strategy and planning
    April 2017
    Introduction
    Thank you for downloading this guide, which aims to help with the development of a plan for a data migration. The guide is based on our years of work in the data movement industry, where we provide off-the-shelf software and consultancy for organisations across the world. Data migration is a complex undertaking, and the processes and software used are continually evolving. The approach in this guide incorporates data migration best practice, with the aim of making the data migration process a little more straightforward. We should start with a quick definition of what we mean by data migration. The term usually refers to the movement of data from an old or legacy system to a new system. Data migration is typically part of a larger programme and is often triggered by a merger or acquisition, a business decision to standardise systems, or modernisation of an organisation’s systems. The data migration planning outlined in this guide dovetails neatly into the overall requirements of an organisation. Don’t hesitate to get in touch with us at [email protected] if you have any questions. Did you like this guide? Click here to subscribe to our email newsletter list and be the first to receive our future publications.
    Contents
    Project scoping
    Methodology
    Data preparation
    Data security
    Business engagement
    About ETL Solutions
    Chapter 1: Project Scoping
    While staff and systems play an important role in reducing the risks involved with data migration, early stage planning can also help.