The Openclean Open-Source Data Cleaning Library

From Papers to Practice: The openclean Open-Source Data Cleaning Library

Heiko Müller, Sonia Castelo, Munaf Qazi, and Juliana Freire
New York University
{heiko.mueller,s.castelo,munaf,juliana.freire}@nyu.edu

ABSTRACT
Data preparation is still a major bottleneck for many data science projects. Even though many sophisticated algorithms and tools have been proposed in the research literature, it is difficult for practitioners to integrate them into their data wrangling efforts. We present openclean, an open-source Python library for data cleaning and profiling. openclean integrates data profiling and cleaning tools in a single environment that is easy and intuitive to use. We designed openclean to be extensible and make it easy to add new functionality. By doing so, it will not only become easier for users to access state-of-the-art algorithms for their data wrangling efforts, but also allow researchers to integrate their work and evaluate its effectiveness in practice. We envision openclean as a first step to build a community of practitioners and researchers in the field. In our demo, we outline the main components and design decisions in the development of openclean and demonstrate the current functionality of the library on real-world use cases.

PVLDB Reference Format:
Heiko Müller, Sonia Castelo, Munaf Qazi, and Juliana Freire. From Papers to Practice: The openclean Open-Source Data Cleaning Library. PVLDB, 14(12): 2763 - 2766, 2021. doi:10.14778/3476311.3476339

This work is licensed under the Creative Commons BY-NC-ND 4.0 International License. Visit https://creativecommons.org/licenses/by-nc-nd/4.0/ to view a copy of this license. For any use beyond those covered by this license, obtain permission by emailing [email protected]. Copyright is held by the owner/author(s). Publication rights licensed to the VLDB Endowment. Proceedings of the VLDB Endowment, Vol. 14, No. 12 ISSN 2150-8097. doi:10.14778/3476311.3476339

1 MOTIVATION
The negative impact of poor data quality on the widespread and profitable use of machine learning [17] makes data cleaning essential in many data science projects. Improving data quality requires data profiling and exploration to gain an understanding of quality issues, and data cleaning to transform the data into a state that is fit for purpose. This process is tedious and costly. A frequently cited survey in 2016 found that data scientists spend 60% of their time on data cleaning and organizing data [16]. In the same survey, 57% of the data scientists also stated that they consider data cleaning and organizing data the least enjoyable task of their job.

Over the years, many tools for profiling, preparing, and cleaning data have been developed, both in academia and industry (see [2, 7, 8] for overviews). These approaches were developed in isolation and in different programming languages with no standardized interfaces. Thus, it is difficult for data scientists to combine existing tools in their data processing pipelines.

Inspired by the wide adoption of generic machine learning frameworks such as scikit-learn [15], TensorFlow [1], and PyTorch [14], we are currently developing openclean, an open-source Python library for data profiling and data cleaning [11]. Our goals are twofold. First, openclean aims to provide a unified framework for practitioners that brings together open source data profiling and data cleaning tools into an easy-to-use environment. By making existing tools available to a large user community, and through the integration with the rich Python ecosystem, openclean has the potential to simplify data cleaning tasks. Second, by providing a structured, extensible framework, openclean can serve as a platform with which researchers and developers can integrate their techniques. We hope that by bringing together a community of users, developers, and researchers, we will be in a better position to attack the many challenges in dealing with data quality.

Overview of openclean. The source code for openclean is available on GitHub [11]. We chose Python to implement openclean because of its growing popularity as well as the large number of existing open-source libraries. Figure 1 shows the ecosystem of libraries that openclean currently leverages. openclean is organized in a modular way to account for the large number of approaches and techniques that fall into the areas of data profiling and data cleaning and that we aim to integrate into openclean in the future.

At the center of openclean is the package openclean-core. Within this package we define the relevant APIs for accessing and processing data as well as some basic functionality. The core package depends on well-known Python libraries including pandas [23] and scikit-learn [15], as well as on a set of libraries that were developed in parallel to openclean to address specific needs regarding access to reference data [12], provenance and version management [10], and integration of tools with existing binaries that are not implemented in Python or that require installation of additional software systems [19].

On top of the core library are several extensions that make use of the APIs and underlying base packages to provide additional functionality. Current extensions include libraries to discover regular expressions, a Graphical User Interface that is integrated with Jupyter Notebooks, as well as profiling functionality from the Metanome project as one example of how existing tools that are not implemented in Python can easily be integrated into openclean. We are also in the process of integrating other tools like HoloClean [18] and Ditto [9].

In this demo paper, we first introduce the openclean library by giving an overview of the main concepts and the architecture (Section 2). Then, we discuss our demonstration that will show how to use the library based on a few real-world examples and how to add new functionality to the library (Section 3).
Figure 1: openclean architecture. The core library and extensions depend on standard Python libraries and additional packages that we developed for specific tasks.

2 MAIN FEATURES OF OPENCLEAN

2.1 Data Model and Operators
Datasets and Streams. openclean is primarily intended for tabular datasets, which are represented as data frames: a set of columns with (not necessarily unique) names and a set of rows that contain values for each of the columns. openclean supports two modes for processing data frames: (i) the data frame can either be loaded completely into main memory (e.g., as a pandas.DataFrame [23]), or (ii) it can be streamed from an external storage source, e.g., a comma-separated file. The latter is particularly important for large datasets that cannot fit entirely into main memory. Data streams in openclean are streams of dataset rows. The streaming data processor allows the application of cleaning and profiling operations directly to the rows in the stream, e.g., to filter data of interest that can then be loaded into main memory.

Figure 2(b) shows an example in cell #7 using the NYC Parking Violations dataset. The dataset contains over 9 million rows and the data file is approximately 380 MB in size. In the example we first select three columns from the dataset and rename column Registration State to State. We then filter rows that contain a vehicle color and convert the color values to all upper case. Finally, we take a random sample of 1000 rows and return them as a data frame.
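The notebook cell itself is not reproduced in this excerpt. As an illustration only, and not openclean's own API, the following sketch performs the same steps with plain pandas, which openclean data frames build on; the file name, the chunked reading, and every column name other than Registration State are assumptions.

```python
import pandas as pd

# Read the large CSV in chunks rather than loading all ~9 million rows at once
# (file name and the "Plate ID"/"Vehicle Color" column names are assumptions).
chunks = pd.read_csv(
    "nyc-parking-violations.csv",
    usecols=["Plate ID", "Registration State", "Vehicle Color"],
    chunksize=100_000,
)

frames = []
for chunk in chunks:
    # Rename column Registration State to State.
    chunk = chunk.rename(columns={"Registration State": "State"})
    # Keep only rows that contain a vehicle color and normalize it to upper case.
    has_color = chunk["Vehicle Color"].notna() & (chunk["Vehicle Color"] != "")
    chunk = chunk[has_color]
    chunk["Vehicle Color"] = chunk["Vehicle Color"].str.upper()
    frames.append(chunk)

df = pd.concat(frames, ignore_index=True)
# Random sample of 1000 rows, returned as a regular data frame.
sample = df.sample(n=1000, random_state=42)
print(sample.head())
```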
Operators. openclean implements two different types of operators: data profiling operators and data cleaning (transformation) operators. Profiling operators return results of different types, e.g., a list of values (from a given input list) that were identified as outliers. A profiling operator that computes statistics over the data will return a dictionary with the different computed statistics.

Data Cleaning and Transformation. Data cleaning operators are intended to transform data and they almost exclusively operate on data frames. The following abstract operator types are defined in openclean: transformer, mapper, reducer, and group filter. A transformer maps a given data frame to a new data frame. openclean comes with a set of built-in transformation operations for filtering columns and rows, insertion of new columns and rows, moving and renaming columns, updating of cell values, and for sorting. A mapper returns a group of data frames for a given input data frame. Each resulting group is identified by a unique key. The default mapper in openclean is the groupby operator that groups rows based on key values that are computed for each row using a given evaluation function (see below). A reducer converts a group of data frames into a single data frame. Common examples are operators that resolve conflicts when repairing violations of functional dependencies (FDs). A group filter operates on groups of data frames and transforms one set of groups into a new set of groups.

Functions. Well-defined APIs are essential for a framework that aims to be flexible and extensible. In openclean we define APIs for common data wrangling tasks in a modular way that makes it easy for a user to combine them as well as to include new functionality at different levels of granularity. One means to ensure such flexibility is the use of functions as arguments. In openclean, many cleaning and transformation operators take functions as their arguments. For example, the filter and update operators shown in Figure 2(b) [#7] take the functions IsNotEmpty and str.upper as arguments to select and manipulate rows. By having functions as operator arguments it becomes easy to change and customize behavior. In the example, we can easily use any (user-defined) function with a single argument instead of the standard str.upper function to transform the data using the update operator. Figure 2(b) [#8] shows another example for the power of composability in openclean.
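To make the operator taxonomy above concrete, the following is a minimal schematic sketch of the four operator kinds expressed as Python type signatures. It is not openclean's actual class hierarchy; the class and method names are illustrative, and the groupby helper at the end only mimics the behavior described for the default mapper (an evaluation function computes a key per row).

```python
from typing import Callable, Dict, Hashable, Protocol

import pandas as pd

# Groups of data frames, each identified by a unique key (as produced by a mapper).
DataFrameGroups = Dict[Hashable, pd.DataFrame]

class Transformer(Protocol):
    """Maps a given data frame to a new data frame (e.g., filter, update, sort)."""
    def transform(self, df: pd.DataFrame) -> pd.DataFrame: ...

class Mapper(Protocol):
    """Splits a data frame into a group of data frames keyed by unique values."""
    def map(self, df: pd.DataFrame) -> DataFrameGroups: ...

class Reducer(Protocol):
    """Converts a group of data frames into a single data frame, e.g., by
    resolving conflicts when repairing violations of functional dependencies."""
    def reduce(self, groups: DataFrameGroups) -> pd.DataFrame: ...

class GroupFilter(Protocol):
    """Transforms one set of data frame groups into a new set of groups."""
    def filter(self, groups: DataFrameGroups) -> DataFrameGroups: ...

def groupby(df: pd.DataFrame, key: Callable[[pd.Series], Hashable]) -> DataFrameGroups:
    """Illustrative groupby-style mapper: compute a key for every row using the
    given evaluation function and collect the rows that share each key."""
    keys = df.apply(key, axis=1)
    return {k: group for k, group in df.groupby(keys)}

# Example: group a parking-violations sample by registration state, e.g.
#   groups = groupby(sample, key=lambda row: row["State"])
```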