Software Testing Tools: Analyses of Effectiveness on Procedural and Object-Oriented Source Code

Total Pages: 16

File Type: PDF, Size: 1020 KB

Software Testing Tools: Analyses of Effectiveness on Procedural and Object-Oriented Source Code
Calhoun: The NPS Institutional Archive, Theses and Dissertations, Thesis Collection
2001-09. Snyder, Byron B. Monterey, California. Naval Postgraduate School.
http://hdl.handle.net/10945/1938

NAVAL POSTGRADUATE SCHOOL
Monterey, California

THESIS
SOFTWARE TESTING TOOLS: METRICS FOR MEASUREMENT OF EFFECTIVENESS ON PROCEDURAL AND OBJECT-ORIENTED SOURCE CODE
by Bernard J. Bossuyt and Byron B. Snyder, September 2001
Thesis Advisor: J. Bret Michael. Second Reader: Richard H. Riehle.
Approved for public release; distribution is unlimited.

REPORT DOCUMENTATION PAGE (Form Approved, OMB No. 0704-0188)

Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget, Paperwork Reduction Project (0704-0188), Washington, DC 20503.

2. REPORT DATE: September 2001
3. REPORT TYPE AND DATES COVERED: Master's Thesis
4. TITLE AND SUBTITLE: Software Testing Tools: Analyses of Effectiveness on Procedural and Object-Oriented Source Code
6. AUTHOR(S): Bernard J. Bossuyt and Byron B. Snyder
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Naval Postgraduate School, Monterey, CA 93943-5000
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES): N/A
11. SUPPLEMENTARY NOTES: The views expressed in this thesis are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
12a. DISTRIBUTION/AVAILABILITY STATEMENT: Approved for public release; distribution is unlimited.
13. ABSTRACT (maximum 200 words): The levels of quality, maintainability, testability, and stability of software can be improved and measured through the use of automated testing tools throughout the software development process. Automated testing tools assist software engineers in gauging the quality of software by automating the mechanical aspects of the software-testing task. Automated testing tools vary in their underlying approach, quality, and ease of use, among other characteristics. Evaluating available tools and selecting the most appropriate suite of tools can be a difficult and time-consuming process. In this thesis, we propose a suite of objective metrics for measuring tool characteristics, as an aid in systematically evaluating and selecting automated testing tools. Future work includes further research into the validity and utility of this suite of metrics, conducting similar research using a larger software project, and incorporating a larger set of tools into similar research.
14. SUBJECT TERMS: software testing tool metrics, procedural, object-oriented, software testing tools, metrics, testing tool evaluation, testing tool selection
15. NUMBER OF PAGES: 209
17. SECURITY CLASSIFICATION OF REPORT: Unclassified
18. SECURITY CLASSIFICATION OF THIS PAGE: Unclassified
19. SECURITY CLASSIFICATION OF ABSTRACT: Unclassified
20. LIMITATION OF ABSTRACT: UL
NSN 7540-01-280-5500. Standard Form 298 (Rev. 2-89), prescribed by ANSI Std. 239-18.

TABLE OF CONTENTS

I. INTRODUCTION
   A. PROBLEM STATEMENT
   B. RESEARCH ISSUES
      1. Identifying Metrics
      2. Testing of Procedural versus Object-Oriented Source Code
      3. Evaluating Tools
   C. CASE STUDY: CSMA/CD LAN DISCRETE-EVENT SIMULATION PROGRAM

II. RELATED WORK
   A. IEEE STANDARD 1175 WORKING GROUP'S TOOL-EVALUATION SYSTEM
      1. Analyzing User Needs
      2. Establishing Selection Criteria
      3. Tool Search
      4. Tool Selection
      5. Reevaluation
      6. Summary
   B. INSTITUTE FOR DEFENSE ANALYSES REPORTS
   C. SOFTWARE TECHNOLOGY SUPPORT CENTER'S SOFTWARE TEST TECHNOLOGIES REPORT

III. METHODOLOGY
   A. TOOL SEARCH
      1. BoundsChecker
         a. Summary
         b. Features
      2. C-Cover
         a. Summary
         b. Features
      3. CTC++ (Test Coverage Analyzer for C/C++)
         a. Summary
         b. Features
      4. Cantata++
         a. Summary
         b. Features
      5. ObjectChecker/Object Coverage/ObjectDetail
         a. Summary
         b. Features
      6. Panorama C/C++
         a. Summary
         b. Features
      7. TCAT C/C++
         a. Summary
         b. Features
   B. TOOLS SELECTED FOR EVALUATION
      1. LDRA TESTBED
         a. Summary
         b. Static Analysis Features
         c. Dynamic Analysis Features
      2. Parasoft Testing Products
         a. Summary
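The abstract above proposes objective metrics for measuring tool characteristics, and Chapter II covers weighting selection criteria in the IEEE 1175-style process. As a minimal sketch of how such a comparison can be made objective, the following Java example computes a weighted score from per-criterion ratings; the criteria, weights, and ratings are hypothetical illustrations, not the metric suite the thesis defines.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Minimal sketch of a weighted-criteria score for comparing testing tools.
 * Criteria names, weights, and ratings are hypothetical examples, not the
 * thesis's actual metric suite.
 */
public class ToolScore {
    /** Each entry maps a criterion name to {weight, rating}. */
    public static double score(Map<String, double[]> criteria) {
        double total = 0.0, weightSum = 0.0;
        for (double[] wr : criteria.values()) {
            total += wr[0] * wr[1];   // weight * rating
            weightSum += wr[0];
        }
        return total / weightSum;     // normalized to the rating scale
    }

    public static void main(String[] args) {
        Map<String, double[]> toolA = new LinkedHashMap<>();
        toolA.put("ease of use",        new double[] {0.2, 3});  // rating on a 1-5 scale
        toolA.put("coverage reporting", new double[] {0.5, 4});
        toolA.put("cost",               new double[] {0.3, 2});
        System.out.printf("Tool A weighted score: %.2f%n", score(toolA)); // 3.20
    }
}
```

Scoring every candidate tool against the same weighted criteria turns "which tool is best" into an auditable calculation, which is the spirit of the systematic evaluation the thesis argues for.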
Recommended publications
  • A Framework and Tool Supports for Generating Test Inputs of AspectJ Programs
    A Framework and Tool Supports for Generating Test Inputs of AspectJ Programs
    Tao Xie (Department of Computer Science, North Carolina State University, Raleigh, NC 27695, [email protected]) and Jianjun Zhao (Department of Computer Science & Engineering, Shanghai Jiao Tong University, Shanghai 200240, China, [email protected])

    ABSTRACT: Aspect-oriented software development is gaining popularity with the wider adoption of languages such as AspectJ. To reduce the manual effort of testing aspects in AspectJ programs, we have developed a framework, called Aspectra, that automates generation of test inputs for testing aspectual behavior, i.e., the behavior implemented in pieces of advice or intertype methods defined in aspects. To test aspects, developers construct base classes into which the aspects are woven to form woven classes. Our approach leverages existing test-generation tools to generate test inputs for the woven classes; these test inputs indirectly exercise the aspects. To enable aspects to be exercised during test generation, Aspectra automatically synthesizes appropriate wrapper classes for woven classes.

    1. INTRODUCTION: Aspect-oriented software development (AOSD) is a new technique that improves separation of concerns in software development [9, 18, 22, 30]. AOSD makes it possible to modularize crosscutting concerns of a software system, thus making it easier to maintain and evolve. Research in AOSD has focused mostly on the activities of software system design, problem analysis, and language implementation. Although it is well known that testing is a labor-intensive process that can account for half the total cost of software development [8], research on testing of AOSD, especially automated testing, has received little attention.
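A hedged sketch of the wrapper idea the abstract describes, in plain Java: the class names are invented for illustration, and the actual weaving and wrapper synthesis are performed by AspectJ tooling and Aspectra respectively, neither of which is shown here.

```java
/**
 * Sketch of the Aspectra wrapper idea (names hypothetical). In a real AspectJ
 * build, advice -- say, rejecting negative deposits -- would be woven into
 * Account. A test-generation tool pointed at the wrapper generates calls that
 * pass through the woven class, exercising the advice indirectly.
 */
public class AspectraSketch {
    /** Base class; in a real build this becomes the woven class. */
    static class Account {
        private double balance;
        void deposit(double amount) { balance += amount; }
        double getBalance() { return balance; }
    }

    /** Synthesized-style wrapper: exposes only the woven class's interface. */
    public static class AccountWrapper {
        private final Account woven = new Account();
        public void deposit(double amount) { woven.deposit(amount); }
        public double getBalance() { return woven.getBalance(); }
    }

    public static void main(String[] args) {
        // A generated test input: calling through the wrapper reaches any
        // advice woven into Account.deposit without referencing the aspect.
        AccountWrapper w = new AccountWrapper();
        w.deposit(-50.0);
        System.out.println("balance = " + w.getBalance());
    }
}
```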
  • Parasoft dotTEST: Reduce the Risk of .NET Development
    Parasoft dotTEST: Reduce the Risk of .NET Development. Try it: https://software.parasoft.com/dottest

    Complement your existing Visual Studio tools with deep static analysis and advanced coverage.

    INCREASE PROGRAMMING EFFICIENCY:
    • Identify runtime bugs without executing your software
    • Automate unit and component testing for instant verification and regression testing
    • Automate code analysis for compliance

    An automated, non-invasive solution that scans the application codebase to identify issues before they become production problems, Parasoft dotTEST integrates into the Parasoft portfolio, helping you achieve compliance in safety-critical industries. Parasoft dotTEST automates a broad range of software quality practices, including static code analysis, unit testing, code review, and coverage analysis, enabling organizations to reduce risks and boost efficiency.

    Tests can be run directly from Visual Studio or as part of an automated process. To promote rapid remediation, each problem detected is prioritized based on configurable severity assignments, automatically assigned to the developer who wrote the related code, and distributed to his or her IDE with direct links to the problematic code and a description of how to fix it. When you send the results of dotTEST's static analysis, coverage, and test traceability into Parasoft's reporting and analytics platform (DTP), they integrate with results from Parasoft Jtest and Parasoft C/C++test, allowing you to test your entire codebase and mitigate risks.

    "It snaps right into Visual Studio as though it were part of the product and it greatly reduces errors by enforcing all your favorite rules. We have stuck to the MS Guidelines and we had to do almost no work at all to have dotTEST automate our code analysis and generate the grunt work part of the unit tests so that we could focus our attention on real test-driven development."
  • Parasoft Static Application Security Testing (SAST) for .NET - C/C++ - Java Platform
    Parasoft Static Application Security Testing (SAST) for .Net - C/C++ - Java Platform Parasoft® dotTEST™ /Jtest (for Java) / C/C++test is an integrated Development Testing solution for automating a broad range of testing best practices proven to improve development team productivity and software quality. dotTEST / Java Test / C/C++ Test also seamlessly integrates with Parasoft SOAtest as an option, which enables end-to-end functional and load testing for complex distributed applications and transactions. Capabilities Overview STATIC ANALYSIS ● Broad support for languages and standards: Security | C/C++ | Java | .NET | FDA | Safety-critical ● Static analysis tool industry leader since 1994 ● Simple out-of-the-box integration into your SDLC ● Prevent and expose defects via multiple analysis techniques ● Find and fix issues rapidly, with minimal disruption ● Integrated with Parasoft's suite of development testing capabilities, including unit testing, code coverage analysis, and code review CODE COVERAGE ANALYSIS ● Track coverage during unit test execution and the data merge with coverage captured during functional and manual testing in Parasoft Development Testing Platform to measure true test coverage. ● Integrate with coverage data with static analysis violations, unit testing results, and other testing practices in Parasoft Development Testing Platform for a complete view of the risk associated with your application ● Achieve test traceability to understand the impact of change, focus testing activities based on risk, and meet compliance
  • Case Study: Testing the Untestable (Alaska Airlines Solves the Test Environment Dilemma)
    CASE STUDY: Testing the Untestable
    Alaska Airlines Solves the Test Environment Dilemma

    OVERVIEW
    Alaska Airlines is primarily a West Coast carrier that serves the states of Alaska and Hawaii along with mid-continent destinations and destinations in Canada and Mexico. Alaska Airlines received J.D. Power's "Highest in Customer Satisfaction Among Traditional Carriers" recognition for twelve years in a row, recently winning first in all but one of the seven categories. A large part of the credit belongs to their software testing team. Their industry-leading, proactive approach to disrupting the traditional software testing process ensures that testers can test faster, earlier, and more completely. Learn how Ryan Papineau and his team used advanced automation in concert with service virtualization to rigorously test their complex flight operations manager software. The result: operations that run smoothly, even if they encounter a snowstorm in July.

    Highlights: 100% reliable and repeatable tests; 500 on-demand automated test cases; false positives eliminated.

    THE CHALLENGES
    At Alaska Airlines, the flight operations manager software is ultimately responsible for transporting 46 million customers to 115 global destinations via approximately 440,000 flights per year, safely and efficiently. This software coordinates a highly complex set of inputs from systems around the organization to ensure flights are on time while evaluating and managing fuel, cargo, baggage, and passenger requirements. In addition, the system considers many factors including weather, aircraft characteristics, market, and fuel costs.
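Service virtualization, as used in this case study, replaces hard-to-reach dependencies with controllable stand-ins so tests run repeatably on demand. A minimal sketch of the idea using only the JDK's built-in HTTP server; the endpoint, port, and payload are hypothetical, and Parasoft Virtualize provides this capability at far larger scale.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

/**
 * Minimal service-virtualization sketch: a local stub stands in for a live
 * dependency so tests get deterministic data -- here, a snowstorm in July,
 * every single run. Endpoint, port, and payload are assumptions.
 */
public class WeatherStub {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8089), 0);
        server.createContext("/weather", exchange -> {
            // Canned response returned to every test request.
            byte[] body = "{\"station\":\"SEA\",\"condition\":\"snow\",\"tempF\":28}"
                    .getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        System.out.println("Stub weather service on http://localhost:8089/weather");
    }
}
```

Because the stub's behavior is fixed, a failing test implicates the system under test rather than a flaky environment, which is how the team above eliminated false positives.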
  • Parasoft Named an Omnichannel Functional Test Automation Leader
    Parasoft Corp. Headquarters: 101 E. Huntington Drive, Monrovia, CA 91016 USA. www.parasoft.com, [email protected]

    Press Release: Parasoft Named an Omnichannel Functional Test Automation Leader, Recognized by Major Analyst Firm for Impressive Roadmap

    Parasoft shines in evaluation specifically around effective test maintenance and strong CI/CD and application lifecycle management (ALM) platform integration.

    MONROVIA (USA), July 30, 2018: Parasoft, the global leader in automated software testing, today announced its position as a leader in The Forrester Wave™: Omnichannel Functional Test Automation Tools, Q3 2018, where it received the highest scores possible in the API Testing and Automation and Product Road Map criteria. The report notes Parasoft's "impressive and concrete road map to increase test automation from design to execution, pushing autonomous testing." Parasoft will be showcasing its technology and discussing the future of testing in an upcoming webinar, The Future of Test Automation: Next-Generation Technologies to Use Today, on August 23rd.

    According to the report, conducted by Forrester's Diego Lo Giudice, "Parasoft shined in our evaluation specifically around effective test maintenance, strong CI/CD and application lifecycle management (ALM) platform integration, as well as reporting through its analytics system PIE. Clients like the recent changes, and all reference customers reported achieving test automation of more than 50% in the past 12 months."

    After examining past research, user need assessments, and vendor and expert interviews, Forrester evaluated 15 omnichannel functional test automation tool vendors across a comprehensive 26-criterion evaluation to help organizations working on enterprise, mobile, and web applications select the right tool.
  • DevSecOps: Development & DevOps Infrastructure
    DevSecOps: Development & DevOps Infrastructure
    CREATE SECURE APPLICATIONS WITHOUT DISRUPTING THE DEVELOPMENT PROCESS

    Parasoft makes DevSecOps possible with API and functional testing, service virtualization, and the most complete support for important security standards like CWE, OWASP, and CERT in the industry.

    PARASOFT'S APPROACH: BUILD SECURITY IN
    Parasoft provides tools that help teams begin their security efforts as soon as the code is written, starting with static application security testing (SAST) via static code analysis, continuing through testing as part of the CI/CD system via dynamic application security testing (DAST) such as functional testing, penetration testing, API testing, and supporting infrastructure like service virtualization that enables security testing before the complete application is fully available.

    IMPLEMENT A SECURE CODING LIFECYCLE
    Relying on security specialists alone prevents the entire DevSecOps team from securing software and systems. Parasoft tooling equips the team with security knowledge and training to reduce dependence on security specialists alone. With a centralized SAST policy based on industry standards, teams can leverage Parasoft's comprehensive docs, examples, and embedded training while the code is being developed. Then, leverage existing functional/API tests to enhance the creation of security tests – meaning less upfront cost, as well as less maintenance along the way.

    HARDEN THE CODE ("BUILD SECURITY IN")
    Getting ahead of application security means moving beyond just testing into building secure software in the first place.

    BENEFIT FROM THE PARASOFT APPROACH
    ✓ Leverage your existing test efforts for security
    ✓ Combine quality and security to fully understand your software
    ✓ Harden the code – don't just look for …
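One practical reading of "leverage existing functional/API tests to enhance the creation of security tests" is to replay a functional request with a hostile payload and assert that the service fails safely. A minimal, hypothetical sketch in Java; the endpoint, expected status code, and response shape are assumptions, not any particular product's API.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

/**
 * Sketch of deriving a security test from a functional API test: same
 * request shape, hostile input, "fails safely" assertion. The endpoint
 * and expectations below are hypothetical.
 */
public class ApiSecurityCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String payload = "' OR '1'='1";  // classic injection probe
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/users?name="
                        + URLEncoder.encode(payload, StandardCharsets.UTF_8)))
                .GET().build();
        HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
        // The functional test asserted a 200 for valid input; the security
        // variant asserts the hostile input is rejected, not echoed back.
        if (resp.statusCode() == 400 && !resp.body().contains("1'='1")) {
            System.out.println("PASS: hostile input rejected safely");
        } else {
            System.out.println("FAIL: status=" + resp.statusCode());
        }
    }
}
```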
  • Integration Testing of Object-Oriented Software
    POLITECNICO DI MILANO
    Ph.D. Program in Computer Engineering and Automation (Dottorato di Ricerca in Ingegneria Informatica e Automatica)

    Integration Testing of Object-Oriented Software
    Ph.D. Thesis of: Alessandro Orso
    Advisor: Prof. Mauro Pezzè
    Tutor: Prof. Carlo Ghezzi
    Supervisor of the Ph.D. Program: Prof. Carlo Ghezzi
    XI ciclo

    To my family

    Acknowledgments
    Finding the right words and the right way for expressing acknowledgments is a difficult task. I hope the following will not sound like a set of ritual formulas, since I mean every single word.
    First of all, I wish to thank Professor Mauro Pezzè for his guidance, his support, and his patience during my work. I know that "taking care" of me has been hard work, but he only has himself to blame for my starting a Ph.D. program.
    A very special thanks to Professor Carlo Ghezzi for his teachings, for his willingness to help me, and for allowing me to restlessly "steal" books and journals from his office. Now I can bring them back (at least the ones I remember...).
    Then, I wish to thank my family. I owe them a lot (even if I don't show this very often, I know it very well). All my love goes to them.
    Special thanks are due to all my long-time and not-so-long-time friends. They are (strictly in alphabetical order): Alessandro "Pari" Parimbelli, Ambrogio "Bobo" Usuelli, Andrea "Maken" Machini, Antonio "the Awesome" Carzaniga, Dario "Pitone" Galbiati, Federico "Fede" Clonfero, Flavio "Spadone" Spada, Gianpaolo "the Red One" Cugola, Giovanni "Negroni" Denaro, Giovanni "Muscle Man" Vigna, Lorenzo "the Diver" Riva, Matteo "Prada" Pradella, Mattia "il Monga" Monga, Niels "l'è semper chi" Kierkegaard, Pierluigi "San Peter" Sanpietro, Sergio "Que viva Mexico" Silva.
  • A Brief History of Parasoft Jtest
    • The static analysis technology for Jtest is invented
    • The test generation technology for Jtest is invented
    • The patent for Jtest's test generation technology is filed
    • The patent for Jtest's static analysis technology is filed
    • First public release
    • Jtest patents awarded
    • Jtest trademark awarded
    • Jtest introduces security rule set
    • Jtest wins Best in Show at DevCon
    • Jtest wins Software Magazine's Productivity award
    • Jtest nominated for JavaWorld Editors' Choice awards
    • Jtest becomes the first product to use Design by Contract (Jcontract) comments to verify Java classes/components at the system level
    • Automated JUnit test case generation is introduced
    • Jtest wins Jolt Product Excellence Award
    • Jtest wins Writer's Choice Award from Java Report
    • Jtest Tracer becomes the first tool to generate functional unit test cases as the user exercises the working application
    • Jtest wins Software Business Magazine's Best Development Tool Award
    • Jtest wins the Software and Information Industry Association's Codie award for Best Software Testing Product or Service
    • Jtest wins JDJ Editors' Choice Award
    • Jtest wins Software Development Magazine's Productivity Award
    • Jtest receives "Excellent" rating from Information World
    • Jtest security edition released
    • Flow-based static analysis is introduced
    • Automated peer code review is introduced
    • Cactus test generation is introduced
    • Jtest is integrated into Development Testing Platform (DTP)
    • Jtest wins InfoWorld's Technology of the Year award
    • Jtest wins Codie award for Best Software Testing …
    • DTP static analysis components …
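As an illustration of the Design by Contract comments the timeline mentions, here is a small Java class annotated in the Jcontract style: the contract lives in Javadoc-like tags that a tool can instrument and check at runtime. The tag syntax shown is illustrative, not a definitive reproduction of Jcontract's grammar.

```java
/**
 * Design-by-Contract comments in the style Jcontract popularized.
 * A DbC tool reads the @pre/@post tags and injects runtime checks;
 * plain compilation ignores them. Tag syntax here is illustrative.
 */
public class BoundedStack {
    private final int[] items = new int[64];
    private int size = 0;

    /**
     * @pre  size < items.length   // stack is not full
     * @post size == $pre(int, size) + 1
     */
    public void push(int value) {
        items[size++] = value;
    }

    /**
     * @pre  size > 0              // stack is not empty
     * @post $result == items[size]
     */
    public int pop() {
        return items[--size];
    }
}
```

The appeal noted in the timeline is that such contracts let a test generator check components "at the system level": any call sequence that violates a precondition or breaks a postcondition is reported automatically, without hand-written assertions.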
  • Inovytec Achieves FDA Certification with Customized Static Code Analysis Solution
    CASE STUDY: Inovytec Achieves FDA Certification With Customized Static Code Analysis Solution

    OVERVIEW
    Inovytec is an innovative medical device company that develops cutting-edge solutions for respiratory and cardiac failures. During the COVID-19 crisis, Inovytec has been a vital supplier of ventilators around the world, delivering critical care to patients suffering respiratory symptoms from the contagious disease. The embedded development team at Inovytec delivers medical devices with safety-critical software like the Ventway Sparrow, a groundbreaking family of transport and emergency ventilators designed to stand up to the harshest of conditions while providing reliable, high-performance ventilation at all times.

    Highlights: 100% FDA 510(k) certification rules and guidelines.

    CHALLENGE
    On a mission to deliver clean code and comply with the FDA 510(k) regulation inspection, Inovytec started using Parasoft's C/C++ static code analysis solution.

    APPROACH
    To satisfy the FDA 510(k) certification, the embedded software development team customized a set of rules in Parasoft C/C++test to the standard. "Every time we are going to release a new software version of the Ventway Sparrow ventilator, we make sure that the static analysis from Parasoft is configured to run according to the FDA regulation definitions. We not only noticed improvements in code quality, but C/C++test has really helped us in our static analysis verification activities and goal of achieving FDA 510(k) certification," said Roi Birenshtok, solution architect and team leader of embedded software.
  • Empirical Evaluation of the Effectiveness and Reliability of Software Testing Adequacy Criteria and Reference Test Systems
    Empirical Evaluation of the Effectiveness and Reliability of Software Testing Adequacy Criteria and Reference Test Systems
    Mark Jason Hadley
    PhD, University of York, Department of Computer Science, September 2013

    Abstract
    This PhD thesis reports the results of experiments conducted to investigate the effectiveness and reliability of 'adequacy criteria', the criteria used by testers to determine when to stop testing. The research reported here is concerned with the empirical determination of the effectiveness and reliability of both test sets that satisfy major general structural code coverage criteria and test sets crafted by experts for testing specific applications. We use automated test data generation and subset extraction techniques to generate multiple test sets satisfying widely used coverage criteria (statement, branch, and MC/DC coverage). The results show that confidence in the reliability of such criteria is misplaced. We also consider the fault-finding capabilities of three test suites created by the international community to serve to assure implementations of the Data Encryption Standard (a block cipher). We do this by means of mutation analysis. The results show that not all sets are mutation adequate, but the test suites are generally highly effective. The block cipher implementations are also seen to be highly 'testable' (i.e., they do not mask faults).
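A miniature version of this thesis's central point, that coverage-adequate test sets can be unreliable: the Java sketch below shows a test set achieving statement and branch coverage of a function yet failing to kill a simple mutant, while one extra MC/DC-motivated input kills it. The function and mutant are invented for illustration, not drawn from the thesis's experiments.

```java
/**
 * Coverage adequacy vs. fault finding, in miniature.
 */
public class AdequacyDemo {
    static boolean grantAccess(boolean admin, boolean owner) {
        return admin || owner;
    }

    // Mutant: drops the second operand (admin || owner  ->  admin).
    static boolean grantAccessMutant(boolean admin, boolean owner) {
        return admin;
    }

    public static void main(String[] args) {
        // {(true,false), (false,false)} makes the decision evaluate both
        // true and false, so it achieves statement and branch coverage of
        // grantAccess -- yet the mutant returns identical results on both
        // inputs, so the fault survives.
        boolean[][] weakSet = { {true, false}, {false, false} };
        for (boolean[] t : weakSet) {
            System.out.printf("(%b,%b): original=%b mutant=%b%n",
                    t[0], t[1], grantAccess(t[0], t[1]), grantAccessMutant(t[0], t[1]));
        }
        // MC/DC additionally requires showing 'owner' independently affects
        // the outcome: (false,true) is that case, and it kills the mutant.
        System.out.println("killer (false,true): original=" + grantAccess(false, true)
                + " mutant=" + grantAccessMutant(false, true));
    }
}
```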
  • Parasoft Named a Leader in 2020 Continuous Functional Test Automation in Independent Research Report
    Parasoft Corp. Headquarters: 101 E. Huntington Drive, Monrovia, CA 91016 USA. www.parasoft.com, [email protected]

    Press Release: Parasoft Named a Leader in 2020 Continuous Functional Test Automation in Independent Research Report. Parasoft's suite of software testing tools with added smarts recognized.

    Monrovia (USA)/Berlin, 23 June 2020: Parasoft, the global leader in automated software testing for over 30 years, today announced it has been named a Leader in The Forrester Wave™: Continuous Functional Test Automation Suites, Q2 2020, conducted by Forrester Research. Parasoft's functional testing suite, including SOAtest, Virtualize, and Selenic, was included in Forrester's evaluation process.

    According to the report, "Parasoft's continuous testing shines in API testing, service virtualization and integration testing, and the combined automation context. Finally, Parasoft has very strong continuous integration/continuous delivery (CI/CD) and application lifecycle management (ALM) platform integration as well as reporting through its analytics system PIE (Process Intelligence Engine)."

    Forrester evaluated the 15 most significant continuous functional test automation (CFTA) providers. They researched, analyzed, and scored each one using their 26-criterion evaluation. In the report, Parasoft is recognized as the "go-to testing platform for developers and still is one of their preferred choices," while also adding capabilities targeted for less technical teammates.

    "We're honored to be recognized by Forrester as a leader in this evaluation. We believe this acknowledgment demonstrates our continued commitment to bring innovations that drive high levels of test automation and build long-standing partnerships with our clients," said Elizabeth Kolawa, President and CEO of Parasoft.

    Parasoft continues to invest in enhancements for their automated testing tools.
  • Accelerate Software Innovation Through Continuous Quality
    Accelerate Software Innovation Through Continuous Quality

    Software quality is recognized as the #1 issue IT executives are trying to mitigate. Enterprise organizations strive to accelerate the delivery of a compelling user experience to their customers in order to drive revenue. QA teams know they have issues and are actively looking for solutions to save time, increase quality, improve security, and more. The most notable difficulties are in identifying the right areas to test, the availability of flexible and reliable test environments and test data, and the realization of benefits from automation.

    You may be facing many challenges with delivering software that meets the high expectations for quality, cost, and schedule driven by the business. An effective software testing strategy can address these issues. If you're looking to improve your software quality while achieving your business goals, Parasoft can help. With over 30 years of making testing easier for our customers, we have the innovation you need and the experience you trust. Our extensive continuous quality suite spans every testing need and enables you to reach new heights.

    QUALITY-FIRST APPROACH
    You can't test quality into an application at the end of the software development life cycle (SDLC). You need to ensure that your software development process and practices put a priority on quality-driven development and integrate a comprehensive testing strategy to verify that the application's functionality meets the requirements. Shift testing left to the start of your development process to bring quality to the forefront.