
International Journal of Computer Science and Information Security (IJCSIS), Vol. 14, No. 3, March 2016

A Model Driven Regression Testing Pattern for Enhancing Agile Release Management

Maryam Nooraei Abadeh, Department of Computer Science, Science and Research Branch, Islamic Azad University, Tehran, Iran
*Seyed-Hassan Mirian-Hosseinabadi, Department of Computer Science, Sharif University of Technology, Tehran, Iran

Abstract- Evolutionary software development disciplines, such as Agile Development (AD), are test-centered, and their application in model-based frameworks requires model support for test development. These tests must be applied against changes during software evolution. Traditional regression testing faces a scalability problem, not only in terms of the size of test suites, but also in terms of the complexity of formulating modifications and preserving fault-detection ability after system evolution. Model Driven Development (MDD) promises to reduce the complexity of software maintenance activities through traceable change management and automatic change propagation. In this paper, we propose a formal framework in the context of agile/lightweight MDD to define generic test models, which can be automatically transformed into executable tests for particular testing template models using incremental model transformations. It encourages a rapid and flexible response to change as a foundation for agile testing. We also introduce on-the-fly agile testing metrics which examine the adequacy of changed-requirement coverage using a new measurable coverage pattern. The Z notation is used for the formal definition of the framework. Finally, to evaluate different aspects of the proposed framework, an analysis plan is provided using two experimental case studies.

Keywords: Agile Development; Model Driven Testing; On-the-Fly Regression Testing; Model Transformation; Test Case Selection.
I. INTRODUCTION

The Model Driven Architecture (MDA) paradigm enhances the traditional development discipline by defining a platform independent model (PIM), which is manually or automatically transformed into one or more platform specific models (PSMs) and completed with code generation from the PSMs [1]. The benefits of MDA, e.g., abstract modeling, automatic code generation, reusability, effort reduction and efficient complexity management, can extend to all phases of the software lifecycle. To gain the full advantages of MDD, it is essential to use it in an agile way, involving short development iterations with sufficient flexibility and automation. Because it is so easy to add functionality when using MDD, you will not be the first one ending up with a 'concrete model'. On the other hand, model transformation and traceability, as two key concepts of MDA, enable automatic maintenance management in a more agile and rapid release environment, making it possible to see the results of a model change almost directly in the running application. Agile MDA principles, e.g., alliance testing, immediate execution, and racing down the chain from analysis to implementation, should be applied in short incremental, iterative cycles. To support agile changes at different levels of abstraction, e.g., requirement specification, design and implementation, using manual or semi-automated refactoring approaches, efficient change management handles induced changes via update propagation. Update propagation has traditionally provided techniques for efficient, traceable and incremental view maintenance and integrity checking in different phases of software development. Incremental model refactoring improves the development and test structure so that faults introduced through evolution are detected earlier and more precisely.
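The update-propagation idea described above can be sketched in a few lines of code. This is an illustrative toy model, not the paper's framework: the class name, trace structure, and test identifiers are all assumptions made for the example. The point is that traceability links let a change in one model element select only the derived test artifacts that must be regenerated.

```python
# Hypothetical sketch of incremental update propagation: when a model
# element changes, only the tests traced to it are re-derived, rather
# than regenerating the whole test suite. All names are illustrative.

class TraceableModel:
    def __init__(self):
        self.trace = {}      # model element -> set of derived test ids
        self.dirty = set()   # elements changed since last propagation

    def link(self, element, test_id):
        """Record a traceability link from a model element to a test."""
        self.trace.setdefault(element, set()).add(test_id)

    def change(self, element):
        """Mark a model element as changed."""
        self.dirty.add(element)

    def propagate(self):
        """Return only the test ids affected by the recorded changes."""
        affected = set()
        for element in self.dirty:
            affected |= self.trace.get(element, set())
        self.dirty.clear()
        return affected

m = TraceableModel()
m.link("Account.withdraw", "T1")
m.link("Account.withdraw", "T2")
m.link("Account.deposit", "T3")
m.change("Account.withdraw")
print(sorted(m.propagate()))   # → ['T1', 'T2']; T3 is untouched
```

In a real MDD toolchain the trace map would be produced by the transformation engine itself; here it is populated by hand purely to show the incremental selection step.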
*Author for correspondence

When developing safety-critical software systems there is, however, a requirement to show that the set of test cases covers the changes, to enable the verification of design models against their specifications. Automatic update propagation can enhance regression testing in a formal way and reduce the associated effort. The purpose of regression testing is to verify new versions of a system in order to prevent functional inconsistencies among different versions. To avoid rerunning the whole test suite on the updated system, various regression test selection techniques have been proposed that exercise the new functionalities of a modified version [2]. Propagating design changes to the corresponding testing artifacts leads to a consistent regression test suite. Normally this is done by transforming the design model to code, which is compiled and executed to collect the data used for structural coverage analysis. If the structural code coverage criteria are not met at the PSM level, additional test cases should be created at the PIM level. The proposed approach is an MDT version of agile development. The motivation behind it is that, instead of creating extensive models before writing source code, you create agile models that are just barely good enough to drive your overall development efforts. Agile model driven regression testing is a critical strategy for scaling agile software development beyond the small changes of the early stages of agile adoption. It provides continuous integration, maintenance and testing even under platform changes.
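The coverage feedback loop just described (measure structural coverage at the PSM/code level; if the criterion is not met, create new test cases at the PIM level) can be sketched as a simple gate. This is not the paper's algorithm; the branch-set representation, the 100% criterion, and the function names are assumptions for illustration only.

```python
# Illustrative coverage gate: structural coverage is measured on the
# generated (PSM-level) code; falling below the criterion signals that
# additional PIM-level test cases are needed. Names are hypothetical.

def coverage(executed_branches, all_branches):
    """Fraction of structural branches exercised by the test suite."""
    return len(executed_branches & all_branches) / len(all_branches)

def needs_new_pim_tests(executed_branches, all_branches, criterion=1.0):
    """True when measured coverage misses the criterion, i.e. new
    test cases should be derived at the PIM level."""
    return coverage(executed_branches, all_branches) < criterion

all_branches = {"b1", "b2", "b3", "b4"}
executed = {"b1", "b2", "b3"}          # collected from an instrumented run
print(needs_new_pim_tests(executed, all_branches))  # → True (75% < 100%)
```

A real implementation would obtain the executed-branch set from an instrumented execution of the generated code rather than from hand-written sets.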
As a technical contribution of the current paper, we use the Z specification language [3] not only for its capability in software system modeling, development and verification, but also for its suitability for formalizing MDD concepts, e.g., model refactoring, transformation rules, meta-model definition and refinement theory, to produce a concrete specification. Moreover, verification tools such as CZT [4] and Z/EVES [5] are well developed for type-checking, proving and analyzing specifications in the Z notation. Although OCL can address some analysis issues in MDD, it is only a specification language, and the mentioned mechanisms for consistency checking are not supported by OCL. Finally, the main challenges investigated in this paper are: how can abstract models be tested in an agile methodology? How can MDA-based models be used to handle the inherent complexities of legacy system testing? Is developing these complex models really more productive than other options, such as agile development techniques?

The rest of the paper is organized as follows: Section 2 reviews the related concepts. Section 3 presents the formalism for platform independent testing. Section 4 introduces agile regression testing. Section 5 introduces the on-the-fly agile (regression) testing framework. The practical discussion and analysis of the framework are provided in Section 6. Section 7 reviews the related works and compares similar approaches to ours. Finally, Section 8 concludes the paper and gives suggestions for future work.

II. PRELIMINARIES

In this section, we review some preliminary concepts that are prerequisites for our formal framework.

A. Regression test selection, minimization and prioritization

Regression testing, as a testing activity during the system evolution and maintenance phase, can prevent the adverse effects of changes at different levels of abstraction.
Important issues that have been studied in regression testing to preserve and maximize the value of the accrued test suite are test case selection, minimization and prioritization. Regression Test Selection Techniques (RTSTs) select a cost-effective subset of valid test cases from a previously validated version to exercise the modified parts of a model/program. An RTST essentially consists of two major activities: identifying the affected parts of a system after the maintenance phase, and selecting a subset of test cases from the initial test suite to effectively test those affected parts. A suitable coverage by a number of test cases is needed to detect new potential faults. A well-known classification of regression test cases is suggested in [2], which divides test suites into obsolete, reusable and retestable test cases. Obsolete test cases are invalid for the new version and should be removed from the original test pool; the two other kinds remain valid and may be rerun. Test case selection, or the regression test selection problem, is essentially similar to the test suite minimization problem; both problems are about choosing a subset of test cases from the test suite. The key difference between these two approaches in the literature is whether the focus is upon the changes in the system under test. Test suite minimization is often based on metrics such as coverage measured from a single version of the program under test. By contrast, in regression test selection, test cases are selected because their execution is relevant to the changes between the previous and the current version of the system under test. Minimization techniques aim to reduce the size of a test suite by eliminating redundant test cases.
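The obsolete/reusable/retestable classification of [2] can be illustrated with a small sketch. The predicates used here are deliberately simplified assumptions (a test is represented by the set of features it targets), not the selection technique of the paper or of [2].

```python
# Simplified sketch of the regression test classification of [2]:
# obsolete tests target removed behaviour and are discarded; retestable
# tests cover modified parts and must be rerun; reusable tests remain
# valid but exercise unchanged parts. Representation is illustrative.

def classify(test_targets, removed, modified):
    """Classify one test case by the features it exercises."""
    if test_targets & removed:
        return "obsolete"      # invalid for the new version; remove it
    if test_targets & modified:
        return "retestable"    # still valid and should be rerun
    return "reusable"          # valid, but covers unchanged behaviour

removed = {"login_v1"}         # features deleted in the new version
modified = {"checkout"}        # features changed in the new version

print(classify({"login_v1"}, removed, modified))   # → obsolete
print(classify({"checkout"}, removed, modified))   # → retestable
print(classify({"search"}, removed, modified))     # → reusable
```

A regression test selection technique would then rerun only the retestable subset, which is the cost saving the section describes.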
Effective minimization techniques keep the coverage of the reduced subset equivalent to that of the original test suite while reducing maintenance costs and time. Compared to test case selection techniques that also attempt to reduce the