LEVERAGING DATABASE VIRTUALIZATION FOR TEST DATA MANAGEMENT
Vikas Dewangan, Senior Technology Architect, Infosys

Abstract

Database virtualization is an emerging trend in test data management (TDM) and is set to transform the provisioning of non-production data. These tools offer efficient virtual cloning of production databases for software application development and testing, without having to physically replicate the entire database. Coupled with powerful features such as point-in-time snapshots, write-backs to the source database, and rollback of changes, these tools have the potential to bring significant cost savings and efficiency improvements. In this paper, we explore the concept of database virtualization, industry tool features, and how these tools can be leveraged to improve TDM.

Introduction

Virtualization technologies have been around for a while and were first introduced in the era of mainframe systems. However, it is only in the recent past, due to advances in technology, that virtualization has been gaining popularity. The term 'virtualization' refers to encapsulating a particular hardware or software resource so that multiple systems can make use of it. By means of virtualization, hardware resources can be utilized more effectively through consolidation, resulting in significant cost savings. Virtualization also reduces hardware requirements, which in turn lowers energy consumption and makes for greener technology. Given these advantages, virtualization will continue to see increasing adoption in the years to come.

There are a number of popular virtualization technologies today, such as server, storage, database, and desktop virtualization. Of these, database virtualization is the focus of this paper.

The field of test data management (TDM) concerns itself with provisioning production-like data in non-production environments, such as those used for application development and testing, and is thus a related area. Database virtualization is an emerging trend in TDM, with the potential of offering significant cost advantages. In this paper we will explore how database virtualization can be optimally leveraged to increase TDM effectiveness.

Traditional approach

The common approach taken by most enterprises today is to have physical copies of the production database in each non-production environment. Non-production environments include environments for software development, testing, and training. It is not uncommon to find even 10 – 12 such production copies in most enterprises. These physical copies may contain either an entire copy or a subset of the production database. Each additional environment increases physical storage requirements, database administration needs, and the time required to set up these databases. Refreshing these non-production databases is a time-consuming and costly exercise. Often, due to database contentions, the availability of development and test environments becomes a bottleneck in the software development life cycle and can lead to increased overall turnaround times. In summary, the traditional approach is complex and costly.

Presented below is a simplified view of the traditional data refresh process for non-production environments compared with a database virtualization-based approach. It should be noted that in some cases there may be an intermediate database between the production and non-production environments, which may be used for masking or sub-setting purposes.

[Figure: Traditional ETL approach vs. database virtualization approach for non-production environments. In the traditional approach, an ETL / sub-setting utility creates physical copies of the production database for each environment (Dev Environment 1, Test Environment 2, Test Environment 3). In the virtualization approach, a database virtualization engine provisions virtual clones (Virtual Clone 1, 2, 3) of the production database.]

Database virtualization – a better approach

With the coming of age of database virtualization, a gamut of opportunities for optimization of non-production environments has opened up. Several enterprises have started taking cognizance of this trend and are actively exploring how best they can leverage this technology to derive maximum business value.

So what exactly is database virtualization and how does it work? In a database virtualization-based approach, the virtualization engine creates multiple virtual clones of the production database. These virtual copies can be utilized in the same way as the actual physical copies. They are transparent to the applications in these environments, which continue to function with the virtual database instances in exactly the same manner as they did with the physical non-production databases. There is no need to create subsets of production data, and entire datasets can be made available to the relevant applications.

Key features of database virtualization tools

Today, there are a number of vendors offering products in the database virtualization space. Let us take a look at some of the important features and benefits that these products (though not all) provide:

1. Provisioning of database clones: The ability to rapidly provision virtual database clones is at the heart of all database virtualization tools. Virtual clones can typically be created in a matter of hours, as opposed to physical databases, which may take up to several days to set up and load with data.

2. Resource segregation: Each virtual database instance is a private copy and is unaffected by changes made to other virtual clones ('ring fencing').

3. Lower resource utilization: Database virtualization tools implement highly optimized compression algorithms to minimize the in-memory footprint of each cloned database instance. Usually, data that is duplicated across clones is maintained as a common block across instances, while modified data is stored privately for each clone, optimizing the memory footprint.

4. Ease of refresh: Unlike refreshing physical databases, refreshing an existing virtual database clone with the latest production data is relatively simple and straightforward. The clones being in-memory instances, the refresh process is much faster and more efficient, as there are no physical writes. Most database virtualization engines refresh the virtual instance with changes only, rather than purging and reloading the entire database.

5. Point-in-time snapshots: Often, in test environments, there is a need to take snapshots of multiple databases at the same point in time to ensure consistency of the test data. In the traditional approach, data extraction from multiple databases seldom happens simultaneously, and the time difference between extraction processes leads to referential integrity issues across databases (as several transactions may have taken place during this interval), which then have to be fixed in non-production environments. With database virtualization engines, it is possible to obtain a virtual clone of each source database against a specific point in time, maintaining the referential integrity and consistency of data across the provisioned virtual database instances.

6. Synchronization with source database: Updates made to the virtual database clones can be written back to the intermediate database if required. For example, this might be required if the data in a virtual clone has been augmented for testing purposes and needs to be replicated to other test environments. This feature can also be used to help maintain a 'gold copy' of the database.

7. Self-service: Virtualization tools, once set up, are easy to use and can reduce dependency on database administrators (DBAs) compared to the traditional approach, where the physical copy of the database needs to be refreshed. Further, the TDM team, as well as developers and testers, can be given the facility to obtain, via self-service, virtual copies of the given source databases.

8. Rollback to a previous state: Most database virtualization tools provide a feature for rolling the database state back to a previous date. This is especially useful for testing teams. Often, test data gets corrupted or changed after a test cycle. Using the rollback feature, the test data can be restored to a previous consistent state to enable another round of testing.

9. Roll forward: After rolling back, some database virtualization tools provide the facility to roll forward, if required, to a specified date in the database timeline.

10. Ease of administration: Database virtualization products offer centralization of all administrative tasks, leading to ease of administration. There is no need to account for differences in environment configurations in each non-production environment where the target database is to be deployed.

These features have the potential to bring significant cost savings and efficiency improvements from a TDM perspective.

TDM use cases

TDM processes can be broadly divided into data refresh and data provisioning processes. The data refresh process involves refreshing a non-production database with required datasets. The data provisioning process is concerned with mining this data and allocation of identified records to end

External Document © 2018 Infosys Limited
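The 'ring fencing' and lower-resource-utilization behavior described above can be pictured as copy-on-write block sharing. The following toy sketch (all class and method names are hypothetical, not any real product's API) shows how clones can serve unmodified blocks from the shared source while keeping each clone's writes private:

```python
# Illustrative sketch only: a toy copy-on-write clone, modeling how database
# virtualization engines share unmodified blocks across clones while keeping
# each clone's writes private ("ring fencing"). Not a real product API.

class SourceDatabase:
    """Stands in for the production database: an array of data blocks."""
    def __init__(self, blocks):
        self.blocks = list(blocks)

class VirtualClone:
    """A clone that stores only the blocks it has modified."""
    def __init__(self, source):
        self.source = source
        self.private_blocks = {}  # block index -> modified value

    def read(self, index):
        # Unmodified blocks are served from the shared source copy.
        return self.private_blocks.get(index, self.source.blocks[index])

    def write(self, index, value):
        # Writes never touch the source; they land in the clone's private store.
        self.private_blocks[index] = value

    def footprint(self):
        # Extra storage consumed by this clone: only its modified blocks.
        return len(self.private_blocks)

prod = SourceDatabase(["customers", "orders", "items", "audit"])
clone_a = VirtualClone(prod)
clone_b = VirtualClone(prod)

clone_a.write(1, "orders-masked")  # one test team masks data in its clone
print(clone_a.read(1))             # -> orders-masked
print(clone_b.read(1))             # -> orders (clone B is unaffected)
print(clone_a.footprint(), clone_b.footprint())  # -> 1 0
```

The point of the sketch is the footprint: with ten clones of an unmodified database, the shared blocks are stored once, and each clone pays only for the data it changes.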
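The point-in-time snapshot, rollback, and roll-forward features described above can likewise be sketched as a timeline of captured states. This is an assumption-laden illustration (the `SnapshotTimeline` class and its methods are invented for this sketch, not taken from any virtualization product):

```python
# Hypothetical sketch of point-in-time snapshots with rollback and roll
# forward: snapshots are kept on a sorted timeline, and any timestamp can be
# resolved to the latest consistent state at or before it.

import bisect

class SnapshotTimeline:
    def __init__(self):
        self._times = []   # sorted snapshot timestamps
        self._states = {}  # timestamp -> captured database state

    def snapshot(self, ts, state):
        # Capture the database state at time ts.
        bisect.insort(self._times, ts)
        self._states[ts] = dict(state)

    def state_at(self, ts):
        """Return the latest snapshot taken at or before ts."""
        i = bisect.bisect_right(self._times, ts) - 1
        if i < 0:
            raise ValueError("no snapshot at or before %r" % ts)
        return self._states[self._times[i]]

timeline = SnapshotTimeline()
timeline.snapshot(100, {"orders": 10})
timeline.snapshot(200, {"orders": 25})  # a test cycle corrupts data after this

# Rollback: restore the clone to the consistent state captured at t=100.
print(timeline.state_at(100))  # -> {'orders': 10}

# Roll forward: move back up the timeline to t=200 for the next test round.
print(timeline.state_at(200))  # -> {'orders': 25}
```

Taking snapshots of several source databases against the same timestamp is what preserves referential integrity across them: each clone is resolved against one common point on the timeline rather than against extracts taken minutes or hours apart.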