
DATA MIGRATION MANAGEMENT a Methodology: Sustaining Data Integrity After the Go Live and Beyond


WHITE PAPER

ENTERPRISE DATA LIFECYCLE MANAGEMENT

Data migration is a core practice within Utopia’s Enterprise Data Lifecycle Management (EDLM) framework. EDLM is a comprehensive strategy that combines business processes, data processes, governance practices, and applied technologies. The result is a framework of principles and practices for managing and extending the lifecycle of data across an enterprise, from creation through archiving. By improving the quality, accessibility, and usability of data, the breadth of its lifecycle is widened, increasing the financial returns of the data. The Data Migration Management methodology is an EDLM best practice that extends the lifecycle by moving and transforming data from aging legacy repositories to new leading-edge systems. The same migration methodology is used to archive the data when it truly has reached its end of life, completing the cycle. The stages in the EDLM lifecycle are reflected in Figure 1.

FIGURE 1: EDLM Stages

Data migration projects can occur anywhere in the lifecycle, but the Creation and Integration stages are where the data is first captured and stored, and then integrated into the organization’s environment. Integrating the data entails moving it from temporary capture storage to a system repository, or migrating it from a source system to a new target.

There are four dimensions to EDLM and they form the framework upon which organizational demands are placed and solution practices are devised and employed. The four dimensions are:

LIFECYCLE: time and management of data as it ages

PEOPLE: data stewards, business metrics and value, and usage

PROCESSES: data flow processes, business workflow processes, standard operating procedures

TECHNOLOGY: systems architecture, applications, repositories, integration layers, IT governance

It is from those four dimensions that the Data Migration Management methodology was first conceived, then tested, and then tempered through one customer migration after another. When an EDLM framework is established within an organization, the objective of the program is to transcend geographies, businesses, and functions to support enterprise-common data processes (i.e., migration) wherever they are needed.

THE CHALLENGES OF DATA MIGRATION PROJECTS

Historically, data migration projects at many companies have been plagued with risk and delayed “go live” events, resulting in costly budget overruns and acrimonious recriminations. A frequently overlooked aspect of any new ERP system deployment, for example, is the integrity of the data against which the new business application executes. Too often system owners forget the root reason for deploying the system: to house and manage the data. The existence of the system is, after all, justified by the data.

Data Migration Should Therefore Be a High Priority in New System Deployments

Traditionally, many systems integrators implementing a new system will prioritize their focus in this order of importance…

• Application design
• Implementation planning
• Integration
• Change management
• Data migration

(Sidebar: How would you prioritize your company’s last project and its associated success?)

For many projects, data migration is one of the last priorities, and systems integrators will often defer to the client on the completeness and integrity of the data file for loading into the system. Indeed, data migration is often viewed as simply transferring data between systems. Yet the business impact can be significant and detrimental to business continuity when proper rigor is not applied during the migration. Without a thorough review and understanding of the business processes, and consideration of how the data will be consumed, the suboptimal performance of the old legacy system will be repeated in the new.

Data Migration Scope Should Be Inclusive

The data migration process should encompass all business functions that consume data to ensure it is properly populated in the new system. In essence, data must be orchestrated with business processes; table and field mappings must be validated and checked against business rules. The combination of risk ignorance and a lack of skilled and experienced IT resources is why data migration defaults to a file loading exercise for many teams. Suffice it to say, the effort associated with data migration is often underestimated.

Data Quality Should Be an Inherent Part of Data Migration

When implementing a new ERP system, legacy data must be migrated to ensure continuity of ongoing business operations. During a data migration, effort should be devoted to a data quality assessment and subsequent remediation, or else the quality problems in the legacy systems will be propagated to the new system. These processes include profiling, data standardization, de-duplication, and enrichment of the data to provide the most accurate content for continued business operations.

Planning for Long-Term Data Quality is Critical to Realizing Your ROI

The long-term view looks beyond the data migration event and seeks to govern and manage data long past the go live date. Sustainability of data quality is critical for ongoing process optimization as well as reporting integrity. It is important to not only “get the data clean”, but to “keep it clean.” Therefore the data migration event can be viewed as a convenient genesis for an EDLM strategy. Maintaining the quality of data input into the system after your go live is even more critical than at the point of initial data loading.

TOP ISSUES IDENTIFIED FOR DATA MIGRATION PROJECTS

From our experience the top issues faced during a data migration project typically include:

• Poor quality data: sometimes the defects are known in the source systems, but new and unknown deficiencies are always uncovered after extraction.
• Missing data: it can be surprising how many “mandatory” fields in the source systems actually have high percentages of blanks, NULLs, or defaults.
• Mismatched data: field overuse is a classic problem. Sometimes two, three, or four different domains of data can be found in one field that was “repurposed” after its original use became obsolete.
• Data not available in time for go live: operational commitments and cycle times are misaligned with the system implementation project, and delay the entire deployment.
• Data requirements are not considered or communicated: business rules, data quality rules, and transformation rules are not researched or documented to the breadth or depth necessary for moving and consolidating multiple systems into one target.
• Lack of legacy system knowledge and understanding: this can also apply to the new target system. The client has insufficient staff to provide system guidance.

These issues and others form the drivers for the use of the structured approach outlined in this methodology. By following the five phases of the Data Migration Management methodology the systems implementer will gain these benefits:

Reduced migration cycle time. Following a proven, repeatable process avoids dead ends, project rework, and inefficient resource utilization.

Reduced risk of system deployment. By knowing the steps to follow and adhering to sound data quality procedures, trial data conversions, and iterative validations, the system implementer can accurately predict development periods and build a project plan with a high degree of certainty. The unknowns are drawn into the light and worked out of the system.

Improved data quality. Not only is the data migrated in a predictable manner, but it is loaded into the target system with a substantially higher degree of quality than when it was extracted from the sources. The methodology aims for zero percent defects by the completion of the Final Preparation phase.

Enhanced replication of source data in the target system. Through multiple stages of interrogative workshops with system stakeholders, hidden details about additional data, obsolete data, and new reference data are mined from the organization and built into the migration plan. This ensures the original goal of the migration, loading all applicable source data, is achieved. The data is found, extracted, and transformed.

METHODOLOGY OVERVIEW

Utopia’s Data Migration Management methodology incorporates elements of the four dimensions of EDLM: lifecycle, people, processes, and technology, to ensure the data is provisioned as required by the business. The methodology is aligned, in part, with SAP’s ASAP implementation methodology. We do this for a number of reasons, one of which is to adopt standard nomenclature and phrases that are familiar to SAP® ecosystem partners and clients. This methodology works equally well for non-SAP® environments, as the fundamentals and processes for moving and managing data are universal. Data is data, whether you are moving it into an ERP system or a data warehouse.

The Data Migration Management methodology is based upon proven best practices and extensive experience in data cleansing, harmonization, and migration. The methodology utilizes the SAP® BusinessObjects™ Data Services platform and our own techniques to assess and improve the quality of source data before it is loaded into the target repository. Detecting and cleansing data upfront reduces the total cycle time of the migration, thereby reducing the total cost while enhancing the quality of the data being migrated into the target system.

FIGURE 2: Five Phases of the Data Migration Management Methodology

There are five phases in the methodology, and they span an entire data migration project from the initial project scoping task, through trial data loads, to the final support period. Those phases are depicted in Figure 2.

Figure 3 below depicts the major stages in each of the five methodology phases. Those phases and stages will be discussed in detail in the next sections in this white paper.

FIGURE 3: The Major Stages within Each of the Five Methodology Phases

[Figure 3 lists the five phases (Project Preparation, Business Blueprint, Realization, Final Preparation, and Go Live and Support) and their major stages: Scoping, Identify & Analyze Data, Assess Source Data, Data Quality Remediation, Prepare Data Load, Trial Data Loads, Validate Results, Load Target System, and Go-Live.]

GOVERNANCE AND ISSUES COMMUNICATION

As the project moves from the Business Blueprint phase through the Go Live phase, significant reporting and documentation is generated by the SAP® BusinessObjects™ Data Services (DS) platform. These reports provide a visualization capacity for the migration process. They aid in identifying potential data errors and in developing actionable outputs to improve data quality, monitor transformation trends, and enable data governance.

As legacy source records are processed through the validation transforms to prepare the output files, data quality errors can be captured and reported. These field-level errors can be rolled up along a metrics hierarchy, which provides for analysis along quality vectors such as completeness, accuracy, formatting, etc. The extract and transformation jobs can also be analyzed by data objects (business information domains), such as customer, vendor, materials, etc. These metrics can be aggregated upwards to a top-level score, which provides a multi-level understanding of what data is ready to be loaded into the system. This beginning-to-end insight into the performance of the extract, transform, and data quality routines helps focus the migration process on problematic source feeds. By focusing process governance and data governance effort on critical areas, the duration and cost of the migration effort are reduced. Fewer governance and project resources will be inspecting high quality data feeds; their energy and time will be spent on those that need attention instead.
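To make the roll-up concrete, the sketch below (plain Python, not SAP® BusinessObjects™ Data Services code) shows one way field-level error counts could be aggregated along a metrics hierarchy into per-dimension, per-object, and top-level readiness scores. The objects, fields, and counts are illustrative assumptions.

```python
# Illustrative sketch: rolling field-level error counts up a metrics hierarchy,
# field -> quality dimension -> data object -> top-level readiness score.
# The structures and sample numbers below are hypothetical.
from collections import defaultdict

# Each tuple: (data_object, field, quality_dimension, records_checked, records_failed)
field_metrics = [
    ("customer", "name",        "completeness", 10_000, 120),
    ("customer", "postal_code", "formatting",   10_000, 430),
    ("vendor",   "tax_id",      "accuracy",      4_000,  15),
    ("material", "base_uom",    "completeness",  7_500, 900),
]

def roll_up(metrics):
    """Aggregate pass rates per (object, dimension), per object, and overall."""
    by_obj_dim = defaultdict(lambda: [0, 0])   # (object, dimension) -> [checked, failed]
    by_obj     = defaultdict(lambda: [0, 0])   # object -> [checked, failed]
    total      = [0, 0]
    for obj, _field, dim, checked, failed in metrics:
        for bucket in (by_obj_dim[(obj, dim)], by_obj[obj], total):
            bucket[0] += checked
            bucket[1] += failed
    pct = lambda checked, failed: 100.0 * (checked - failed) / checked if checked else 0.0
    return (
        {key: pct(*counts) for key, counts in by_obj_dim.items()},
        {key: pct(*counts) for key, counts in by_obj.items()},
        pct(*total),
    )

dimension_scores, object_scores, top_level = roll_up(field_metrics)
print(f"Top-level readiness: {top_level:.1f}%  per-object: {object_scores}")
```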

The Data Migration Management methodology, inherent process monitoring, and defect measuring operations lay the data governance foundation for when the system passes Go Live. Data governance, like data migration, is a critical EDLM practice. The EDLM framework that supports those and other practices identifies and builds capabilities that carry the entire data environment long after the initial migration project is complete.

PROJECT PREPARATION PHASE

The Project Preparation phase involves researching, scoping, and planning the early tasks that need to be completed in order for the design and implementation activities to commence. A number of documents are generated during this phase, notably the project management plan, quality management plan, and resource management plan.

Each phase in the Data Migration Management methodology is started and ended with a stage-gate. In order for the project to move into or out of a phase, specific criteria must be met. For the Project Preparation phase, the starting stage-gate occurs when the organization managing the source systems determines a replacement is needed. The exiting stage-gate occurs when all of the applicable documentation and planning has been completed to the degree that allows the Business Blueprint phase to begin. The Project Preparation phase is unique in that a pseudo stage-gate exists within its task series. Figure 4 below depicts the major activities in this phase.

FIGURE 4: Project Preparation Phase

[Figure 4 depicts the Project Preparation phase flow from project conception to the stage-gate into the Business Blueprint phase: Scoping Study, Data Health Assessment, Formulate Data Conversion Approach, Infrastructure & Target System Setup, Migration Framework Setup, High-Level Source to Target Mapping, Formulate ETL Strategy, and Formulate Cut-Over Strategy, ending when the project is ready for data extraction and ETL design.]

The Infrastructure & Target System Setup task acts as a stage-gate. While usually considered its own major project, building the new target system, such as procuring the licenses, establishing the repository, developing the data structure and model, etc., can run in parallel to the migration project. Building the target system places dependencies on the migration effort. One dependency is that the target system must be at least partially built before data can be loaded into it. The Infrastructure & Target System Setup task effectively represents a link between the parallel projects and methodologies. One project will create and install the new system infrastructure and the other project will gather and load the data. In a best practices approach the two are inextricably connected, but each requires its own specialization.

SCOPING STUDY

Utopia’s approach to data migration begins with performing a scoping study to assess the existing system environment and the level of effort to complete a successful project. The results of the study will provide a detailed data migration strategy to include:

The data objects in scope:
• For the data warehouse, ERP, or MDM system
• The requirements for how the data will be used in the target system

The criteria needed to evaluate whether the quality of the data is “fit for its intended purpose”:
• How the data will be migrated
• The timelines for the migration
• An estimate of the effort required
• Initiation of a Data Health Assessment™ to capture data quality statistics
• Determination of what activities are necessary to prepare the data

Data architecture and structures information:
• Where the data is currently used, how it is used, by whom, and for what purpose across the data lifecycle
• Collection of customer artifacts on source data structures
• Collection of target system artifacts, such as ERP or MDM system data structures
• Identification of target system downstream interfaces and who uses them

DATA PROFILING – THE DATA HEALTH ASSESSMENT™

Data quality has a significant impact on the performance and capabilities of the new system. Utopia assesses the data from the source systems as well as the target application. Assessments are performed using Utopia’s Data Health Assessment™ (DHA) process. The DHA measures data quality and identifies critical parameters, including those shown in Figure 5.

Figure 5: The Critical Parameters of a DHA

Using the results from the data discovery workshops and the DHA, migration developers will prepare a data migration strategy. A migration strategy takes into account not only the existing legacy data structures and the volumes of data, but also the data requirements of the system being implemented. The discovery workshops and the DHA examine the structure and content of the legacy data systems to provide input to the Business Blueprint phase, where high-level data requirements are compared to the discovery findings (metadata).

BUSINESS BLUEPRINT PHASE

The Business Blueprint phase focuses on charting the business processes surrounding the target system, interviewing the system and data stakeholders, and evaluating both the legacy system and target system for the items reflected in Figure 6.

FIGURE 6: Business Blueprint Items of Focus

The data migration consultants interact with business process owners, system integrators, data owners, and the legacy technical team to achieve a detailed understanding of the data subject areas (objects) addressed by the migration project. Two types of interrogative discovery workshops are held with data stakeholders: legacy data investigation, and target system transformations and upload logic. The high-level outputs of the Business Blueprint phase are:
• Data quality rules for legacy & target data
• Detailed data mappings
• Transformation rules
• Traceability requirements
• System cut-over plan
• Staged legacy data

These deliverables form the backbone of the data migration documentation. The Business Blueprint phase essentially collects and documents the metadata for the project and data objects. From these artifacts and discovery workshops a set of strategy and approach documents are created to guide the actual migration implementation performed in the Realization, Final Preparation, and Go Live phases. A second major output resulting from this phase is the staging of extracted data from the legacy source systems. Figure 7 depicts the key activities conducted during the Business Blueprint phase.

FIGURE 7: Business Blueprint Phase

[Figure 7 depicts the Business Blueprint phase, distinguishing key planning and definition tasks from tasks that physically interact with the data. For each object, a strategy and requirements workshop produces the data conversion list and data definitions document; an extraction track then creates the source to target mappings, holds the transformation rules and upload logic workshop, extracts the legacy data to the landing area, filters obsolete data, and profiles the legacy data. The phase concludes by defining the reconciliation guide and the output file or ETL load specs and transferring data to the staging area. When all objects are mapped, extracted, and profiled, the project passes the stage-gate to the Realization phase.]

The Business Blueprint phase loops through each data object or domain destined to be migrated to the target system. The first iterative process focuses on strategy and collecting legacy system information and metadata through workshops with client stakeholders. A list of attributes to be converted to the new system, along with a corresponding data definitions document, is created for each object. Once a strategy/investigation workshop for an object is completed the object can move to the extraction track, where the source to target mappings are created and the data is extracted from the legacy system. Once extracted to a landing area, typically in a flat file, the critical task of data profiling is performed to verify the data is as expected. Data defects are cataloged, patterns identified, and the true field formats discovered. The data profile is a follow-on to the original DHA conducted during the early Project Preparation phase. The results of the data profile are key to constructing accurate transformation rules and filter logic to eliminate obsolete data. As a foundation for those transforms two separate data quality reports are generated:

PROFILING REPORT – A report for each object is created using the results of analysis and output from SAP® BusinessObjects™ Data Services. A sample of the report metrics includes the percentage and number of records regarding uniqueness, completeness, cardinality, minimum and maximum lengths, domain values, etc., on the extracted legacy data. This report provides an initial assessment of the legacy data against the identified target structure before starting the transformation work.
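As a rough illustration of the kind of metrics such a profiling report contains, the following Python sketch computes completeness, uniqueness, cardinality, and minimum/maximum lengths over a delimited extract in the landing area. The file name, column handling, and metric formulas are assumptions, not the Data Services implementation.

```python
# Hypothetical profiling sketch over one extracted legacy object (a CSV file
# in the landing area). Computes a few of the metrics described above.
import csv
from collections import Counter

def profile_column(values):
    """Profile one column: completeness, uniqueness, cardinality, lengths."""
    non_blank = [v for v in values if v.strip()]
    lengths = [len(v) for v in non_blank] or [0]
    counts = Counter(non_blank)
    return {
        "records": len(values),
        "completeness_pct": round(100.0 * len(non_blank) / len(values), 2) if values else 0.0,
        "uniqueness_pct": (round(100.0 * sum(1 for c in counts.values() if c == 1)
                                 / len(non_blank), 2) if non_blank else 0.0),
        "cardinality": len(counts),
        "min_length": min(lengths),
        "max_length": max(lengths),
        "top_values": counts.most_common(3),   # rough domain-value sample
    }

def profile_extract(path):
    """Profile every column of a delimited extract file."""
    with open(path, newline="", encoding="utf-8") as fh:
        rows = list(csv.DictReader(fh))
    columns = rows[0].keys() if rows else []
    return {col: profile_column([row[col] or "" for row in rows]) for col in columns}

# Example usage (file name is illustrative):
# report = profile_extract("landing_area/customer_extract.csv")
# for column, stats in report.items():
#     print(column, stats)
```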

CLEANSING CERTIFICATE – Reflects all the records reported as failing to meet the quality criteria established in the profiling report. It is considered a detailed data analysis for each of the target objects. Cleansing Certificates are generated in multiple iterations and submitted to the project stakeholders for review. After reviewing the reports, it is the responsibility of the legacy data owners to remediate the defective records in the legacy systems or provide business rules to rectify the problems so they meet the target system requirements. Once the defective records are fixed, the data is extracted again and profiled for a second iteration of the Cleansing Certificate. This exercise continues until zero defective records remain for loading into the target system.

An area of potential deviation in a migration project is in the “Defining Output File or ETL Load Specs” task. Some target systems such as SAP® ECC have their own specialized load programs while others like an Oracle data warehouse are loaded directly by a commercial extract, transform, and load (ETL) solution. For targets such as SAP® ECC the “defined output file” means the creation of legacy system migration workbenches (LSMW) or intermediate document (IDoc) files that are recognized and consumed by the ECC loader. For a generic data warehouse, this same task will specify the ETL load parameters for directly loading the data via a commercial ETL solution.

The final task in the data extract track is the definition of the reconciliation guide. The data reconciliation plan/ process is an important check and balance step when migrating data. It verifies that the data is moved correctly. This is where quantities, totals, and other numeric sums are confirmed to match from the source system to the target. A sophisticated process is planned where a suite of test records is built to test every business rule and confirm a check exists to cause failure. If defect records can be created to cause the failure of all business rules then the test suite is certified to be 100% complete. This level of testing is particularly important for highly regulated industries such as banking and finance that need stringent documentation on their data test sets.
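The certification idea described above can be illustrated with a small sketch: for each business rule, a deliberately defective record is constructed, and the rule must flag it or the test suite is not yet complete. The rule names, predicates, and defect builders below are hypothetical.

```python
# Hypothetical sketch of certifying a test suite: every business rule must be
# tripped by a known-bad record, otherwise the suite is not 100% complete.
BUSINESS_RULES = {
    # Each rule returns True when a record passes the rule.
    "mandatory_material_id": lambda rec: bool(str(rec.get("material_id", "")).strip()),
    "valid_base_uom":        lambda rec: rec.get("base_uom") in {"EA", "KG", "L"},
}

DEFECT_BUILDERS = {
    # Each builder constructs a record intended to violate exactly that rule.
    "mandatory_material_id": lambda: {"material_id": "", "base_uom": "EA"},
    "valid_base_uom":        lambda: {"material_id": "M-1", "base_uom": "XX"},
}

def certify_test_suite(rules, defect_builders):
    """Return the rules that the defect records failed to trip (should be empty)."""
    not_tripped = []
    for name, rule in rules.items():
        defect_record = defect_builders[name]()
        if rule(defect_record):          # the rule passed a known-bad record
            not_tripped.append(name)
    return not_tripped

# An empty result means every business rule has a defect record that trips it.
assert certify_test_suite(BUSINESS_RULES, DEFECT_BUILDERS) == []
```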

REALIZATION PHASE

The Realization phase is where data migration developers build ETL jobs per the requirements and specifications created in the Business Blueprint phase. Business rules, data quality rules, and field validations are built into the ETL jobs. Configuration data, reference tables, and system data are pulled from the target or other master data systems for incorporation into the transformations and validations. ETL unit and integration testing is carried out by the migration team to ensure that the specified integration routines perform and interact as designed. The entire Realization phase is depicted in Figure 8 and shows an ETL job, potentially multiple jobs, created for each of the target system objects. Within each job data quality and transformation rules are collected and coded. Unit tests are defined and the entire job or suite of jobs is run. The sole purpose of some of these jobs will be to cleanse and transform the extracted data, while other jobs will run as load routines in the trial conversion stages of the Final Preparation phase. The purpose of running the data quality and transformation jobs is to achieve a zero defect rate on the data by the time it leaves the Realization phase. This means that the ETL job loop may be executed six, seven, or eight times in order to drive the error rate down to zero. The class of defects corrected in this loop is at the field level. Additional defects relating to cross-object testing will be uncovered during the trial data conversion stages.

FIGURE 8: Realization Phase

[Figure 8 depicts the Realization phase. For each object an ETL job is built: data quality criteria and rules are collected, DQ and transformation routines are designed and coded, unit tests are defined, and the routines are run and adjusted based on testing. Test results are analyzed to decide whether to run cleansing and transformation again. Records requiring manual intervention are delivered to SMEs, cleansed data is loaded into the staging area, and the ETL routines are declared ready for the trial conversions at the stage-gate to the Final Preparation phase.]

The stage-gate for leaving the Realization phase is having implemented programmatic fixes for all transformation and data quality defects that can be corrected that way. Records and fields that require manual intervention will be delivered to system subject matter experts (SMEs) for remediation. A goal is to deliver the records requiring manual fixes to the SMEs as early in the process as possible.

Once the ETL jobs for all objects have been created the jobs that are destined to build the output file or load set are tested on the development version of the target system. The testing is conducted through a series of pre-planned trial data conversions (TDCs) in the Final Preparation phase.

A typical ETL job developed in the Realization phase will have some if not all of the following features:

1. A configuration file, used to declare global variables and set rules applied throughout the job. It is used to control job flow and improve developer productivity by reducing duplication of code in a job.
2. Movement of extract data to a staging server.
3. Mappings of the source file to the target structure.
4. Validation transforms to convert legacy data to the target data format. These validation routines will include:
   i) Mandatory Checks – confirm critical data is not blank, and that mandatory fields are populated per the specific business rules governing each field.
   ii) Data Length Check – checks the source data length against the target structure.
   iii) Data Format Check – checks the format of data, including numeric checks, date format checks, pattern checks, etc.
   iv) Reference Table Check – input record fields are compared to specified reference tables, and defective records are logged as deficient.
   v) Duplicate Check – identifies duplicates at the record level as well as the column level. There are two levels of duplicates: complete duplicates and potential duplicates. Complete duplicates are records which are exactly identical. Potential duplicates are records which are similar, based on crossing a matching threshold specified per the business requirements. These are implemented using the Match Transform in SAP® BusinessObjects™.
5. Removal of records from the source that fail the validation rules. These failed records are provided to the data and system stakeholders for review and remediation before the next load iteration.
6. Application of all business and enrichment rules to the source data.
7. Generation of the output files per the source to target mapping specification.
8. Exportation of the output files from the staging server to the target system through the use of DS or a specialized application load facility (LSMW or IDocs in the case of SAP® ECC).
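The following Python sketch illustrates, in simplified form, the field-level checks listed in item 4 and the complete-duplicate portion of item 5. It is not Data Services code; the field names, reference table, and formats are assumptions, and fuzzy potential-duplicate matching (the Match Transform) is not reproduced here.

```python
# Minimal, hypothetical sketch of the validation checks listed above.
# Failed records would be routed to stakeholders for remediation, not loaded.
import re

REFERENCE_UOM = {"EA", "KG", "L", "ML"}             # assumed reference table
DATE_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}$")   # assumed target date format

def validate_record(rec):
    """Return a list of (field, rule, message) defects for one source record."""
    defects = []
    # 1) Mandatory check
    for field in ("material_id", "description", "base_uom"):
        if not str(rec.get(field, "")).strip():
            defects.append((field, "mandatory", "blank or missing"))
    # 2) Data length check against the target structure
    if len(str(rec.get("description", ""))) > 40:
        defects.append(("description", "length", "exceeds target length 40"))
    # 3) Data format check
    if rec.get("created_on") and not DATE_PATTERN.match(rec["created_on"]):
        defects.append(("created_on", "format", "not YYYY-MM-DD"))
    # 4) Reference table check
    if rec.get("base_uom") and rec["base_uom"] not in REFERENCE_UOM:
        defects.append(("base_uom", "reference", "value not in allowed UoM set"))
    return defects

def find_complete_duplicates(records, key_fields):
    """5) Duplicate check: flag records whose key fields are exactly identical.
    (Fuzzy 'potential duplicate' matching is left to a match transform.)"""
    seen, duplicates = set(), []
    for rec in records:
        key = tuple(str(rec.get(f, "")).strip().upper() for f in key_fields)
        if key in seen:
            duplicates.append(rec)
        else:
            seen.add(key)
    return duplicates

# Example usage with an illustrative record:
sample = {"material_id": "M-100", "description": "Gate valve",
          "base_uom": "EAX", "created_on": "2010-07-01"}
print(validate_record(sample))   # -> [('base_uom', 'reference', ...)]
```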

DATA CLEANSING, ENSURING BUSINESS READY DATA

Achieving a sufficient level of quality in your business ready data is the most crucial part of a successful migration project and is a key aspect of the Realization phase. Bypassing data cleansing and directly loading the master data objects, no matter the quality, perpetuates legacy system issues and shifts the quality problems from pre-production to post-production implementation. This raises the cost to correct the data and introduces new data issues into downstream applications. In order to pass from the Realization phase to the Final Preparation phase the master data must adhere to the following:

• Each master data record is itself error free • Master data values are in the expected range of allowed entries (e.g. units of measure can be “liter” and “ml”, but not “cubic meter”) • Inconsistencies in the data are cleansed (e.g. data duplicates are removed) • Master data attributes to execute a business process are accurate within the context of the process

Efficient and effective data cleansing can only be implemented when close cooperation exists between IT and the business units responsible for the maintenance of the master data. As part of the EDLM framework, data standards are applied so that consistent transformation can occur. This iterative process is where IT and business become partners in solving systemic legacy data quality, normalization, and duplication issues. In the Business Blueprint phase, workshops are conducted to collect and validate existing business rules and to establish additional business rules to ensure long-term data quality in the target system.

FINAL PREPARATION PHASE

The goal of the Final Preparation phase is to achieve business ready data for production loading into the target system. The data will be in the defined output file format that is ready for direct loading via DS, or as LSMW or IDoc files. The methodology accomplishes data readiness by executing the ETL jobs (created in the Realization phase) in a series of three TDCs. They are performed on all three types of data: master, transactional, and historical, and are loaded into dev/test or pre-production versions of the target system. Upon successful testing – completion of the TDCs – the data migration jobs are approved and the project enters the Go Live phase.

FIGURE 9: Final Preparation Phase

[Figure 9 depicts the Final Preparation phase as three trial data conversions: TDC1 (a practice load with master data), TDC2 (tested master data plus transactional data), and TDC3, the Go-Live Simulation (tested master and transactional data plus historical data). Each run includes preparation, execution, load validation, and adjustment of the ETL routines, leading to the stage-gate to the Go Live phase.]

As Figure 9 above portrays, the focus of the first TDC (TDC1) is to test-execute the ETL jobs (Master Data Uploads) used for loading just master data. The second TDC (TDC2) builds upon the results of the first TDC by executing the tested Master Data Upload routines, and then running the Transactional Data Upload routines. Testing and load validation is performed between the Master and Transactional Uploads to verify the transactional data did not otherwise disrupt or corrupt the master data. When the two upload routines are successfully load tested together, the Final Preparation phase moves into the Go Live Simulation where tested Master and Transactional Uploads are run in addition to Historical Data Uploads. In addition to running and testing all three uploads together against a pre-production copy of the target system, the Go Live (cut-over) strategy is tested. The goal of this testing is to prove the cut-over can be performed in the time frame allotted and that all steps are documented and agreed upon.

As part of each Load Validation, ETL jobs developed for upload reconciliation are executed. These jobs consist of reconciliation rules for a particular target data object and its logical relationships to other objects. Apart from the intra-object reconciliation, basic reconciliation tests are performed, such as:
• Verifying the number of actual records in the output is equal to the expected number of records
• Verifying that calculations generated from the legacy system match those generated from the target system; for example, if the total of three financial accounts is $52,000 in the source files, the same value should be derived from the output files staged and ready for loading
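A minimal sketch of these basic reconciliation checks, assuming the source extract and the prepared output are staged as delimited files with a numeric amount field, might look like the following. File and field names are illustrative.

```python
# Hedged sketch of basic reconciliation: compare record counts and numeric
# totals between the staged source extract and the output prepared for loading.
import csv

def load_rows(path):
    with open(path, newline="", encoding="utf-8") as fh:
        return list(csv.DictReader(fh))

def reconcile(source_path, output_path, amount_field="amount", tolerance=0.01):
    """Return simple count and total checks between source and output files."""
    source, output = load_rows(source_path), load_rows(output_path)
    source_total = sum(float(row.get(amount_field, 0) or 0) for row in source)
    output_total = sum(float(row.get(amount_field, 0) or 0) for row in output)
    return {
        "record_count_match": len(source) == len(output),
        "total_match": abs(source_total - output_total) <= tolerance,
        "source_total": source_total,
        "output_total": output_total,
    }

# Example: the $52,000 account-total check mentioned above might be run as
# reconcile("staging/fi_accounts_source.csv", "staging/fi_accounts_output.csv")
```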

A successful Final Preparation phase reduces the risk of a delayed target system startup and seeks zero interruptions in business operations.

GO LIVE & SUPPORT

During the Go Live phase the TDC-tested ETL jobs are executed on the production target and the cut-over process is commenced. First the master data is loaded and validated, followed by the transaction data. As can be seen in Figure 10, historical data is then loaded. These three loads were tested and effectively rehearsed in the Final Preparation phase through TDC1, TDC2, and TDC3, with the goal of making the Go Live phase a non-event with no surprises. Following the production loads, the reconciliation tests first developed in the Business Blueprint phase are run and their results analyzed. Reconciliation tests check, amongst other things, that totals, calculations, and other business operations conducted in the legacy systems are duplicated with the same results when run in the new production system. The target system stakeholders then sign off on the results once any post-test adjustments are made.

Depending on the transactional volumes in the legacy systems and any latency in the cut-over period there may be a requirement for delta uploads. These uploads will be accomplished as a second Transactional Data Upload.

FIGURE 10: Go Live (Cut-Over) Phase

[Figure 10 depicts the Go-Live (Cut-Over) phase: Cut-Over Preparation, Master Data Upload, Transactional Data Upload, Historical Data Upload, Run Reconciliation Tests, Adjustment and Closure, and Post Go-Live Support, ending with the migration complete.]

POST GO LIVE SUPPORT

In the Post Go Live support period, training, editing of operational guides, and ETL-code consulting are provided. Planning toward the next migration project is also discussed. This is important because, as a risk mitigation tactic for large system deployments, only a subset of the total number of data objects may have been migrated. Indeed, the future of further data domain roll-outs may depend on the success of the first migration/roll-out. In these subsequent migration projects, substantial benefit can be gained by leveraging the processes established by the Data Migration Management methodology during the first Blueprinting, Realization, and other phases. While much of the source extract code and source mappings may not be reusable, knowledge of the target system, business rules, data validation rules, environment metadata, etc., will be invaluable. The existence of unit test procedures, data profiling processes, SAP® BusinessObjects™ Data Services infrastructure, and knowledge of how to conduct Final Preparation TDCs will substantially accelerate migration of subsequent objects and reduce implementation risk.

What is said for migrating subsequent objects holds true for future mergers and acquisitions (M&A) projects, but on a larger scale. M&A is a leading driver for many large system migrations. In one M&A project this author worked on, the acquired firm had 115 different systems and applications to be replaced, hosted, or migrated to the buyer’s systems. Moreover, an enterprise that has chosen to grow via acquisitions is likely to conduct multiple M&A operations in a year. This level of activity places extreme stress on the host organization’s data management capabilities. That stress often drives the creation of a data integration/migration competency center. Those centers use the Data Migration Management methodology as the foundation for their operations.

DATA HEALTH ASSESSMENT™ POST GO LIVE

Sustaining data quality throughout the life cycle of both the data and the enterprise system is a key factor in maintaining and improving a firm’s cost structure. If the data degrades, suboptimal decisions are made, such as erroneous inventory restocking causing undue expenses. The data profiling tests and infrastructure established in the Business Blueprint phase can be repurposed to monitor the health of the target system. This is of particular importance as data inflows to the ERP, CRM, or other targets are constant and pose a continual challenge of defect leakage. Even a relatively low percentage (five percent or less) of defective records in a non-monitored data stream can cause significant operational headaches for the managers of the target system. Identifying, isolating, and remediating those defects once they are in the system is two to three times more expensive than intercepting them as part of the input. The processes employed by the DHA can be implemented on the input data streams, leveraging the business rules, validation checks, and metadata captured during the Blueprint phase. In this way the investment in the data migration framework can pay dividends well beyond the completion of the original project.
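As a sketch of how the migration-era checks could be reused on an inbound feed, the function below applies a record-level validation routine (such as the hypothetical validate_record sketched earlier) to a batch of records and flags the feed when the defect rate exceeds a threshold; the five percent figure mirrors the one cited above.

```python
# Illustrative sketch: reusing migration-era validation rules to monitor an
# inbound data feed after go-live. The validation routine is passed in, so any
# of the checks built during the migration could be plugged into it.
def monitor_feed(records, validate_record, defect_threshold=0.05):
    """Flag a feed for quarantine when its defect rate exceeds the threshold."""
    defective = [rec for rec in records if validate_record(rec)]
    rate = len(defective) / len(records) if records else 0.0
    status = "QUARANTINE_FEED" if rate > defect_threshold else "PASS"
    return {"records": len(records), "defective": len(defective),
            "defect_rate": round(rate, 4), "status": status}
```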

SUMMARY

Data migration is a key event for your enterprise. By recognizing and preparing for the critical impact that data plays during a large system roll out, significant risk and costly overruns can be mitigated and eliminated. Additionally, the sustainable management of data via an EDLM strategy will not only support your data migration event, but also yield significant process improvement and financial benefits over the long term. Embracing the five phases of the Data Migration Management methodology is simple. Deploying the methodology with rigor demands an organizational commitment to operational excellence, and with that commitment the methodology delivers a new system with quality data.

UTOPIA CORPORATE PROFILE

Utopia, Inc. is a global services and strategy consulting firm focused exclusively on Enterprise Data Lifecycle Management (EDLM). EDLM is an umbrella strategy that combines business processes, people, and applied technologies to manage and improve the lifecycle of data across an enterprise (from creation through archiving). Net outcomes of EDLM are business process optimization, hard dollar savings, and reporting integrity. Utopia’s offerings range from enterprise data strategy and systems integration to data migration, data quality, and data governance services. We help our clients reduce overall costs and improve efficiencies by enabling them to manage and sustain (structured and unstructured) data as a key asset.

We serve customers in a variety of industries including: oil & gas, utilities, process and discrete manufacturing, consumer packaged goods, transportation, engineering and construction, distribution, telecom, healthcare, and financial services where large volumes of data often exist in disparate systems, multiple locations and inconsistent formats.

We are an approved SAP, SAP® Business Objects™, SAP Consulting, and Open Text partner with satisfied customers worldwide. The company is headquartered in the Chicagoland area with offices in Dubai, Singapore, and Bangalore.

White paper authored by:

Frank Dravis, Solutions Consultant

Utopia, Inc. | Headquarters: 405 Washington Boulevard | Suite 203 Mundelein, Illinois 60060 USA | Phone 1 847 388 3600 Web www.utopiainc.com | E-mail [email protected] © 2009 Utopia, Inc. All Rights Reserved. 07/2010