Parasoft C/C++ Testing Starter Kit "C/C++ Error-Detection Techniques" White Paper Inomed Case Study NEC Case Study Bovie Case Study Parasoft C/C++Test Data Sheet


Integrated Error-Detection Techniques: Find More Bugs in Embedded C Software

Software verification techniques such as pattern-based static code analysis, runtime memory monitoring, unit testing, and flow analysis are all valuable for finding bugs in embedded C software. On its own, each technique helps you find specific types of errors. However, if you restrict yourself to just one or some of these techniques in isolation, bugs will slip through the cracks. A safer, more effective strategy is to use all of these complementary techniques in concert. This establishes a bulletproof framework that helps you find bugs which are likely to evade any single technique. It also creates an environment that helps you find functional problems, which can be the most critical and the most difficult to detect.

This paper explains how automated techniques such as pattern-based static code analysis, runtime memory monitoring, unit testing, and flow analysis can be used together to find bugs in an embedded C application. These techniques are demonstrated with Parasoft C++test, an integrated solution for automating a broad range of best practices proven to improve software development team productivity and software quality.

As you read this paper, and whenever you think about finding bugs, it is important to keep sight of the big picture. Automatically detecting bugs such as memory corruption and deadlocks is undoubtedly a vital activity for any development team. However, the most deadly bugs are functional errors, which often cannot be found automatically. We briefly discuss techniques for finding these bugs at the conclusion of this paper.

Introducing the Scenario

To provide a concrete example, we will introduce and demonstrate the recommended bug-finding strategies in the context of a scenario we recently encountered: a simple sensor application that runs on an ARM board. Assume that we have created the application and uploaded it to the board, but when we run it, the expected output does not appear on the LCD screen. It is not working, and we are not sure why.

We could debug it, but debugging on the target board is time-consuming and tedious: we would have to manually analyze the debugger results and try to determine the real problems on our own. Alternatively, we can apply tools and techniques proven to pinpoint errors automatically. So we have a choice: cross our fingers and step through the application with the debugger, or apply an automated testing strategy to peel errors out of the code. If the application still fails after the automated techniques, we can fall back on the debugger as a last resort.

Pattern-Based Static Code Analysis

Since we do not want to take the debugging route unless it is absolutely necessary, we start by running pattern-based static code analysis. It finds one problem: a violation of a MISRA rule flagging a suspicious assignment operator inside a condition. Indeed, our intention was not to use an assignment operator, but rather a comparison operator. We fix this problem and rerun the program.
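The defect class this rule catches looks like the following minimal sketch. The names here are illustrative assumptions, since the paper does not show the original condition:

    /* Minimal sketch of the defect class the MISRA rule flags:
       an assignment used where a comparison was intended.
       All names are illustrative, not the application's actual code. */
    #define SENSOR_READY 1

    extern int readStatus(void);
    extern void startSampling(void);

    void pollSensor(void)
    {
        int status = readStatus();

        if (status = SENSOR_READY) {   /* bug: assigns 1, so the branch always runs */
            startSampling();
        }

        if (status == SENSOR_READY) {  /* intended: equality comparison */
            startSampling();
        }
    }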
With the comparison fixed, there is improvement: some output is displayed on the LCD. However, the application now crashes with an access violation. Again we have a choice: we could reach for the debugger, or we can continue applying automated error-detection techniques. Since we know from experience that automated error detection is very effective at finding memory corruption such as the kind we seem to be experiencing, we decide to try runtime memory monitoring.

Runtime Memory Monitoring of the Complete Application

To perform runtime memory monitoring, we have C++test instrument the application. The instrumentation is lightweight enough to run on the target board. After uploading and running the instrumented application, then downloading the results, the following error is reported: an array is read out of range at line 48. Evidently, the msgIndex variable held a value outside the bounds of the array. Walking up the stack trace, we see that printMessage() was reached with an out-of-range value because of an improper condition in the calling function:

    void handleSensorValue(int value)
    {
        initialize();
        int index = -1;
        if (value >= 0 && value <= 10) {
            index = VALUE_LOW;
        } else if ((value > 10) && (value <= 20)) {
            index = VALUE_HIGH;
        }
        printMessage(index, value);
    }

For any sensor value above 20, index remains -1 and printMessage() reads outside the message array. We fix this by removing the unnecessary upper-bound condition (value <= 20), so that every value above 10 maps to VALUE_HIGH.

Now, when we rerun the application, no more memory errors are reported, and once the application is uploaded to the board it seems to work as expected. Still, we are a bit worried. We found one memory overwrite on the code paths we exercised, but how can we be sure there are no more on the code we did not exercise? The coverage analysis shows that one of the functions, reportSensorFailure(), has not been exercised at all. We need to test this function. One way is to create a unit test that calls it.

Unit Testing with Runtime Memory Monitoring

We create a test case skeleton using C++test's test case wizard, then fill in some test code. Then we run this test case, exercising just this one previously-untested function, with runtime memory monitoring enabled. With C++test, this entire operation takes just seconds. The results show that the function is now covered, but new errors are reported: our test case uncovered more memory-related errors. We clearly have a problem with memory initialization (null pointers) when our failure handler is called.

Further analysis leads us to realize that reportSensorFailure() mixes up the order of calls: finalize() is called before printMessage(), but finalize() frees the memory that printMessage() uses:

    void finalize()
    {
        if (messages) {
            free(messages[0]);
            free(messages[1]);
            free(messages[2]);
        }
        free(messages);
    }

We fix the call order, then rerun the test case. That resolves one of the reported errors. The second reported problem, an access violation in printMessage(), occurs because the message table is never initialized on this path. To resolve it, we call the initialize() function before printing the message. The repaired function looks as follows:

    void reportSensorFailure()
    {
        initialize();
        printMessage(ERROR, 0);
        finalize();
    }

When we rerun the test, only one task is reported: an unvalidated unit test case, which is not really an error. All we need to do here is verify the outcome in order to convert this test into a regression test. C++test does this for us automatically by creating an appropriate assertion.
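To make the idea concrete, a validated test might capture the observed post-state of the global message table. This is only a sketch under stated assumptions: C++test generates its own harness and assertion code, and we are assuming messages is the application's global table.

    /* Illustrative sketch only; C++test generates its own harness and
       assertions. Assumes 'messages' is the application's global
       message table, freed by finalize(). */
    #include <assert.h>

    extern char **messages;              /* assumed application global */
    extern void reportSensorFailure(void);

    void test_reportSensorFailure(void)
    {
        reportSensorFailure();
        /* Outcome captured at validation time: at this point in the
           story, finalize() frees the table but does not reset the
           pointer, so it is still non-null after the call. */
        assert(messages != 0);
    }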
Next, we run the entire application again. The coverage analysis shows that almost the entire application was covered, and the results show no memory errors. Are we done now? Not quite. Even though we ran the entire application and created unit tests for an uncovered function, some paths are still not covered. We could continue creating unit tests, but covering all of the remaining paths that way would take some time. Alternatively, we can simulate those paths with flow analysis.

Flow Analysis

We run flow analysis with C++test's BugDetective, which simulates different paths through the system and checks for potential problems along those paths. The following issue is reported: on a potential path, one that our runs never exercised, there can be a double free in the finalize() function. The reportSensorValue() function calls finalize(), which calls free(); finalize() is then called again in mainLoop(), freeing the same memory a second time. We fix this by making finalize() safe to call more than once:

    void finalize()
    {
        if (messages) {
            free(messages[0]);
            free(messages[1]);
            free(messages[2]);
            free(messages);
            messages = 0;
        }
    }

Now we run flow analysis one more time. Only two problems are reported. The first: we might be accessing a table with the index -1. The index is initialized to -1, and there is a possible path through the if statement that does not set it to a valid value before printMessage() is called. Runtime analysis never took such a path, and it may be that the path would never be taken in real life. This is the major weakness of static flow analysis compared with runtime memory monitoring: flow analysis reports potential paths, not necessarily paths that will be taken during actual application execution. Since it is better to be safe than sorry, we fix this potential error easily by removing the remaining unnecessary condition (value >= 0), so that every value maps to a valid index:

    void handleSensorValue(int value)
    {
        initialize();
        int index = -1;
        if (value <= 10) {
            index = VALUE_LOW;
        } else {
            index = VALUE_HIGH;
        }
        printMessage(index, value);
    }

We fix the final reported issue in a similar way. Now, when we rerun the flow analysis, no more issues are reported.

Regression Testing

To ensure that everything is still working, we re-run the entire analysis. First, we run the application with runtime memory monitoring, and everything seems fine. Then we run unit testing with memory monitoring, and one task is reported: our unit test detected a change in the behavior of the reportSensorFailure() function.
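This is most likely the regression test doing its job: the flow-analysis fix changed finalize() so that it resets the message table, which alters the post-state our earlier assertion captured. In terms of the illustrative sketch from the unit-testing section (the same assumptions apply), the recorded expectation would now have to be reviewed and updated:

    /* Continuing the earlier illustrative sketch. After the flow-analysis
       fix, finalize() resets 'messages' to 0, so the expectation recorded
       before the fix no longer matches and must be re-validated. */
    #include <assert.h>

    extern char **messages;              /* assumed application global */
    extern void reportSensorFailure(void);

    void test_reportSensorFailure(void)
    {
        reportSensorFailure();
        assert(messages == 0);           /* updated expectation after the fix */
    }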