
White Paper: The Good, The Bad, and the Ugly

The field of software testing is undergoing a major transformation. What used to be an onerous manual process took a big step forward with the advent of Selenium and other testing tools. But those tools remain heavily developer-centric and primarily tethered to rigid script-based approaches that do not scale. And now, with the rise of cloud, big data, and machine learning, companies need to rethink their testing strategies and technologies yet again.

The core concept behind this new approach is that, in this era of agile development and rapid customer response, testing can no longer be an adjunct to the development process: it needs to be an integral and strategic component of the workflow. Functionize is doing just that with the introduction of our ground-breaking scalable autonomous test platform. The driving force behind this platform is Adaptive Event Analysis™, a patented and proprietary hybrid of algorithms spanning supervised, unsupervised, and reinforcement learning as well as computer vision, enabling lightning-fast test creation and execution, self-healing maintenance, and actionable analytics.

Continuous testing is a framework for running automated tests—as early as practicable and across the product delivery pipeline—in which the results of these tests quickly provide risk exposure feedback on a specific software release candidate. The promise of continuous testing is faster delivery of higher quality software. Functionize can help you move faster toward a continuous testing framework that will progressively deliver better products to your customers.

The Promise of Continuous Testing

Test automation produces a set of failure/acceptance data points that correspond to product requirements. Continuous testing has a broader scope across more of the development cycle, focuses more on business risk, and provides more insight into the probability that a product is going to be shippable.

Generally, the goal is to achieve higher speed with higher quality by moving testing upstream and testing with a higher degree of frequency. It's easy to ship a software product if testing is minimal, and it's easy to get out good software if you have a whole year to ship a feature. Test early, test often, test exhaustively, and get the payoff in higher quality products that potentially release sooner.

The price for all of this? You'll need to reconfigure your delivery pipeline. Full-bore continuous testing includes not only code coverage, functional quality, and compliance, but also impact analysis and post-release testing.

The Need for Continuous Testing

Changes in software development continue to increase stress on testing teams—like never before. Also, the complexity of newer technologies and components presents more challenges in achieving adequate test coverage with conventional methods and tools.

Extensive, complex application architectures: Software tools and technologies continue to become more complex, cloud-connected, distributed, and expansive, with APIs and microservices. An ever-increasing number of combinations of innovations, application components, and protocols interact within a single event or transaction.

Frequent releases/continuous builds: DevOps and Agile continue a big push toward continuous delivery, which has brought the industry to the point at which no small number of applications release builds many times per day. This is only possible when significant effort has been put into the product lifecycle to automate testing and assess the risk of failure. It also means that end-of-cycle testing must have a much shorter duration.

Managing risk: Software is a primary business interface, so any application failure translates directly to a failure for the business. A “minor” glitch will have a serious negative impact if it significantly affects user experience. For many software vendors and service providers, application integrity risks are now a critical concern for all business leaders.

How Does Continuous Testing Differ from Test Automation?

We can categorize the main differences between test automation and continuous testing into three areas: business risk, breadth of coverage, and time.

Minimizing Business Risk

Today, most businesses not only expose many elements of their internal applications to external end users, they have also built many types of additional software that extend and complement those internal applications. Major software application failures have brought serious repercussions, to the extent that software-related risks are now high-profile aspects of many business financial filings. Recent statistics suggest that notable software failures result in an average 4% stock price decline—about a $2.5 billion reduction in total market capitalization. This is a direct hit to the bottom line, so business leaders are putting more pressure on their IT leaders to find a remedy.

With continuous testing, if your test cases haven't been built to readily assess business risk, then the results won't provide the feedback necessary to continually assess that risk. Most tests are designed to provide low-level detail on whether requirements and specifications have been met. Such tests give no indication of how much risk the business would take on if the software were released today. Think about this: could your senior management intelligently make a decision to cancel a release according to test results? If the answer is no, then your tests are out of alignment with your business risk assessment criteria.
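To make business-risk feedback concrete, one lightweight approach is to tag each automated test with the business capability it protects and the severity of a failure. The sketch below is illustrative only: it assumes pytest as the test runner, and the marker name, capabilities, and stub functions are all hypothetical.

```python
# Illustrative sketch: each test carries a marker naming the business
# capability it protects and how severe a failure would be for the
# business. Assumes pytest; the custom marker would be registered in
# pytest.ini to suppress unknown-marker warnings.
import pytest

# Stand-ins for real application calls, so the sketch is self-contained.
def place_order(sku):
    return "confirmed"

def render_avatar(user_id):
    return 64

@pytest.mark.business_risk(capability="checkout", severity="revenue_critical")
def test_customer_can_complete_purchase():
    assert place_order("sku-123") == "confirmed"

@pytest.mark.business_risk(capability="profile", severity="cosmetic")
def test_avatar_thumbnail_renders():
    assert render_avatar("user-42") == 64
```

With tags like these, results can be rolled up by severity, so a failing revenue-critical test blocks the release while a cosmetic failure merely opens a ticket, which is exactly the kind of summary senior management can act on.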

Broader Coverage

Even when a company manages to avoid the detriments of large-scale software failures, it remains true that a supposedly minor defect can cause major problems. If a user evaluation results in an unsatisfactory experience or fails to meet expectations, there is a real risk that the customer will consider your competitors. There is also the risk of damage to the brand if any user takes their complaints to the news media.

Merely knowing that a unit test fails or an interface test passes doesn't tell you the extent to which recent app changes will affect user experience. To maintain continuity and satisfaction for the user community, your tests must be sufficiently broad to detect application changes that will adversely impact functionality on which users rely.

Time

The speed at which organizations ship software has become a competitive differentiator, so a majority of companies are looking to DevOps, Agile, and other methodologies to optimize and accelerate delivery pipelines.

In its infancy, automated testing brought testing innovations to internal applications and systems that were built with conventional, waterfall development procedures and processes. Since these systems were fully under the control of the organization, everything was dev-complete and test-ready at the designated start of the testing phase. With the rise of Agile and DevOps, the expectation is forming in many companies that testing must start very soon after development begins.

Some highly-optimized DevOps teams are actually realizing continuous delivery with consistent success. These teams can often deliver releases every hour of the day—or more frequently. Feedback at each step in the process must be virtually instantaneous.

If quality isn't a critical concern at your company—with minimal disincentive for rolling back when defects are found in production—then it might be sufficient to quickly run some unit and smoke tests on the release. If, on the other hand, your management and your team have reached the level of frustration that drives you to minimize the risk of releasing defective software to customers, then you might be searching for a way to achieve solid risk mitigation.

For testing, there are a number of significant impacts:

• To be effective in continuous delivery pipelines, testing has to become an integral activity for the entire development cycle—instead of continuing to be seen as a hygiene activity that occurs post-development.

• As much as possible, tests should be built concurrently and be ready to execute very soon after the new functions or features are built.

• The entire team should work together to analyze and determine which tests should be run at specific points in the delivery pipeline (see the sketch after this list for one way to encode such a mapping).

• Each test suite should be configured to run fast enough to avoid becoming a bottleneck at any stage of the software delivery pipeline.

• Environment stabilization is important to prevent constant changes from raising false positives.
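The sketch below illustrates one way to encode a stage-to-suite mapping with explicit time budgets. It is a minimal Python sketch, not a prescribed implementation; the stage names, suite paths, and budgets are hypothetical, and pytest is assumed as the runner.

```python
# Illustrative sketch: map each pipeline stage to the test suites it runs,
# with a per-suite timeout so no suite can stall its stage. All stage
# names, paths, and budgets here are hypothetical.
import subprocess

PIPELINE = {
    # stage:        (suites to run,                           seconds per suite)
    "commit":       (["tests/unit"],                          120),
    "merge":        (["tests/unit", "tests/api"],             600),
    "pre-release":  (["tests/unit", "tests/api",
                      "tests/e2e", "tests/perf"],             3600),
}

def run_stage(stage: str) -> bool:
    suites, budget = PIPELINE[stage]
    for suite in suites:
        try:
            result = subprocess.run(["pytest", suite], timeout=budget)
        except subprocess.TimeoutExpired:
            # A suite too slow for its stage counts as a failure:
            # a slow gate is itself a pipeline defect.
            return False
        if result.returncode != 0:
            return False
    return True

if __name__ == "__main__":
    print("merge gate passed:", run_stage("merge"))
```

Treating a timeout as a failure reflects the bullet above: a suite that cannot finish within its stage's budget is a bottleneck to fix, not to tolerate.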

Minimizing Risk by Moving Testing Further Upstream

Development teams need to improve their testing abilities and add test automation wherever it's necessary in the delivery pipeline. One approach is known as service virtualization, which simulates systems or components that are unavailable. This enables a team to test further upstream in the pipeline and to test more frequently.

When combining service virtualization with test automation, a development team can achieve immediate feedback on quality, which translates into faster and less costly issue resolution. By pursuing and cultivating this approach in their development teams, businesses can bring higher quality, more innovative solutions to market faster than before.
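Here is a minimal illustration of the idea, assuming Python's standard unittest.mock as the stand-in mechanism (dedicated service-virtualization products are far more capable); the checkout and gateway names are hypothetical.

```python
# Minimal illustration of service virtualization: replace an unavailable
# downstream service with a scripted stand-in so testing can move
# upstream. All names here are hypothetical.
from unittest import mock

class CheckoutService:
    """Code under test; depends on a payment gateway that may not exist yet."""
    def __init__(self, gateway):
        self.gateway = gateway

    def purchase(self, amount):
        response = self.gateway.charge(amount)
        return "confirmed" if response["status"] == "ok" else "declined"

def test_purchase_against_virtualized_gateway():
    # The real gateway is unavailable, so a Mock plays its role, scripted
    # with the responses the team expects from the eventual service.
    gateway = mock.Mock()
    gateway.charge.return_value = {"status": "ok"}
    assert CheckoutService(gateway).purchase(19.99) == "confirmed"

    # Error paths can be scripted and tested long before production.
    gateway.charge.return_value = {"status": "insufficient_funds"}
    assert CheckoutService(gateway).purchase(19.99) == "declined"

if __name__ == "__main__":
    test_purchase_against_virtualized_gateway()
    print("virtualized gateway tests passed")
```

Because the stand-in is fully scripted, error and exception scenarios can be exercised well before the real gateway exists, which is precisely the upstream testing this approach enables.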

Achieve Better Balance

Continuous testing not only involves moving quality assurance activities and tools as far upstream as possible. It is also important to find a good balance across all testing practices. The figure below depicts the critical test practices that are necessary to enable and support continuous testing. Below, we list some questions to pose to your team as you work together to understand how time is being spent and how you should rebalance to achieve more effective results.

Test Management: Does the team manually roll up test results and compile status reports? How does staff know whether testing is on schedule, behind schedule, or even ahead of schedule? Consider acquiring a tool that provides real-time information, with both summary and detail views available whenever necessary.
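As a small illustration of what automating the roll-up can look like, the sketch below aggregates JUnit-style XML reports, a format most test runners can emit, into a one-line status. The report location is hypothetical.

```python
# Illustrative sketch: roll up JUnit-style XML test reports into a single
# status line instead of compiling the summary by hand. The report path
# is hypothetical; most runners can emit this XML format.
import glob
import xml.etree.ElementTree as ET

def summarize(report_glob="reports/*.xml"):
    total = failures = errors = 0
    for path in glob.glob(report_glob):
        # Each <testsuite> element carries aggregate counts as attributes.
        for suite in ET.parse(path).iter("testsuite"):
            total += int(suite.get("tests", 0))
            failures += int(suite.get("failures", 0))
            errors += int(suite.get("errors", 0))
    passed = total - failures - errors
    return f"{passed}/{total} passed, {failures} failed, {errors} errored"

if __name__ == "__main__":
    print(summarize())
```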

Testing Automation: What is the level of efficiency in rerunning tests? Are these run manually? What proportion of tests have been automated? Is it only the functional tests that are automated? What about performance testing, API tests, and security tests?

Analytics: How do teams assess why specific tests are run, and which tests should be run at what times? How strong is overall test efficacy? Ideally, the team finds the largest number of issues by running the fewest tests. It's critical to perform impact analysis, which feeds into the test selection process that should occur with each new build.
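One deliberately naive illustration of impact analysis is a static map from changed source files to the tests that cover them; production tools typically derive this map from per-test coverage data. All file names below are hypothetical.

```python
# Naive impact-analysis sketch: select only the tests mapped to the files
# changed in this build. The mapping and file names are hypothetical;
# real tools usually build this map from per-test coverage data.
TEST_MAP = {
    "app/checkout.py": {"tests/test_checkout.py", "tests/test_orders.py"},
    "app/profile.py":  {"tests/test_profile.py"},
}

def select_tests(changed_files):
    selected = set()
    for path in changed_files:
        # Unknown files fall back to the full suite: safer to over-test
        # than to skip a regression.
        if path not in TEST_MAP:
            return {"tests/"}
        selected |= TEST_MAP[path]
    return selected

print(select_tests(["app/checkout.py"]))
# -> {'tests/test_checkout.py', 'tests/test_orders.py'}
```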

Testing Environments: Does the team have to wait for test environment re-provisioning and re-configuration? After running tests, do the testers realize too late that the test environment wasn't set up properly, forcing them to change the environment and execute the tests all over again?

Service Virtualization: Does the testing team frequently have to wait for one or more subsystems to become available before proper testing can begin? Are you taking a big-bang approach, in which all the components are cobbled together near the end, crossing your fingers in the hope that everything functions as designed and interacts well with other components? Is it feasible for QA to test error, exception, and anomaly scenarios well before migrating to production? Are high-risk functions and components always delayed until the end of the testing effort?

Test Data: Do the testers have a solid set of production-similar test data that will help them ensure broad coverage of all relevant test scenarios? Are there error and exception cases that are not being executed because the test data is inadequate?
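Production-similar data does not have to mean a copy of production. As an illustrative sketch (the record shape is hypothetical), a small generator can synthesize realistic records while deliberately including the error and exception cases that sampled data tends to miss.

```python
# Illustrative sketch: synthesize production-shaped customer records,
# deliberately mixing in the edge cases that sampled production data
# often lacks. The record shape is hypothetical.
import random
import string

def random_email():
    user = "".join(random.choices(string.ascii_lowercase, k=8))
    return f"{user}@example.com"

def make_customer(edge_case=False):
    if edge_case:
        # Edge cases exercise error and exception paths on purpose.
        return {"name": "", "email": "not-an-email", "age": -1}
    return {
        "name": "".join(random.choices(string.ascii_letters, k=10)),
        "email": random_email(),
        "age": random.randint(18, 95),
    }

# Roughly one record in ten is an intentional edge case.
dataset = [make_customer(edge_case=(i % 10 == 0)) for i in range(100)]
print(dataset[0], dataset[1])
```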

Defects: Is too much effort spent analyzing, triaging, and logging defects? What about wasted effort spent dealing with defects that are not really defects—cases in which there is actually a mismatch between the code and the tests? Is it feasible to eliminate entire categories of such “defects” altogether?

Conclusion

A primary strategic goal for any product company is to reduce business risk in releasing applications, such that—at minimum—new code won't frustrate or alienate customers. Test automation is a tactical activity that contributes to overall continuous testing goals.

For continuous testing, the focus shouldn't be on details, proper code formatting, or how many bugs were found. Though those matter within the pipeline, the most critical concern in CT is the risk to the business; technical risk is a lesser concern. The guiding questions should always be: Is the product release-ready? Will our customers continue to maintain high levels of satisfaction when they use the updated product?

© Copyright 2018 Functionize, Inc. All Rights Reserved 156 2nd Street | San Francisco, CA 94105 +1-800-826-5051 | www.functionize.com