Automatic Test Generation Based on Formal Specifications: Practical Procedures for Efficient State Space Exploration and Improved Representation of Test Cases
Dissertation zur Erlangung des Doktorgrades der Mathematisch-Naturwissenschaftlichen Fakultäten der Georg-August-Universität zu Göttingen
vorgelegt von Michael Schmitt aus Orsoy
Göttingen 2003
D 7
Referent: Prof. Dr. Dieter Hogrefe
Korreferent: Prof. Dr. Jens Grabowski
Tag der mündlichen Prüfung: 3. April 2003

Acknowledgments
I would like to thank my supervisor Prof. Dr. Hogrefe for his assistance and the possibility to do research under perfect conditions. The work at the Institute for Telematics in Lübeck has been a very positive experience. I appreciate that I was allowed to act independently and to take on responsibility in an atmosphere of trust. I would also like to thank my former colleagues in Lübeck who have accompanied me through the years. Special thanks go to Jens Grabowski and Beat Koch, with whom I worked together in the Autolink project and who shared endless discussions and debates on testing and test generation with me. This thesis would not be in its current state without the numerous proof-readers who spent their spare time finding typing errors, missing indexes, etc., and who gave helpful hints on how to improve my dissertation. I thank Zhen Ru Dai, Michael Ebner, Andreas Heuer, Maeve Hölscher, Helmut Neukirchen, Cornelia Rieckhoff, and Uwe Roth for their efforts. Another major contribution to this dissertation was made by my parents, who not only conceived me but also encouraged me to study computer science. This thesis is dedicated to Susanne. Thanks to her support and care, this dissertation has become reality. I hope that I will one day be able to return what she has done for me.
Contents
1 Introduction 1
2 Foundations of Testing 5 2.1 Classification of Tests ...... 5 2.2 Conformance Testing Concepts ...... 9 2.2.1 Conformance Requirements ...... 9 2.2.2 Test Cases ...... 10 2.2.3 Test Verdicts ...... 11 2.2.4 Test Suites ...... 11 2.3 The Conformance Testing Process ...... 12 2.3.1 Test Suite Development ...... 12 2.3.2 Test Preparation ...... 12 2.3.3 Test Operation ...... 13 2.3.4 Test Evaluation ...... 13 2.4 Test Methods and Configurations ...... 14
3 Test Languages 17 3.1 The Tree and Tabular Combined Notation ...... 19 3.1.1 Test Suite Overview and Import Part ...... 20 3.1.2 Declarations Part ...... 21 3.1.3 Constraints Part ...... 24 3.1.4 Dynamic Part ...... 26 3.2 The Testing and Test Control Notation ...... 30 3.2.1 Modules and Groups ...... 30 3.2.2 Data Model ...... 31 3.2.3 Communication ...... 32 3.2.4 Test Configurations ...... 34 3.2.5 Templates ...... 36 3.2.6 Behavior Descriptions ...... 37 3.2.7 Development Tools ...... 40 3.3 Discussion ...... 40
4 Test Generation Based on Formal Specifications 45 4.1 Formal Methods in Conformance Testing ...... 46 4.1.1 Specification and Implementation ...... 46 4.1.2 Static and Dynamic Conformance ...... 46
i 4.1.3 Testing Concepts ...... 47 4.1.4 Test Generation ...... 49 4.2 Test Generation Methods ...... 49 4.2.1 Fault Models ...... 50 4.2.2 Test Coverage Criteria ...... 53 4.2.3 Scenario-Based Requirements ...... 59 4.3 Test Generation, Veri cation, and Validation ...... 60
5 High-Level Specification Languages 63 5.1 Message Sequence Chart ...... 63 5.1.1 Basic Message Sequence Charts ...... 64 5.1.2 Data Model ...... 67 5.1.3 Structural Concepts ...... 67 5.1.4 High-Level Message Sequence Charts ...... 68 5.2 Specification and Description Language ...... 70 5.2.1 Agents and Structuring ...... 70 5.2.2 Communication ...... 72 5.2.3 Behavior ...... 72 5.2.4 Object Orientation ...... 75 5.2.5 SDL Data Model and ASN.1 ...... 75 5.2.6 Further Language Constructs ...... 76 5.3 Tool Support ...... 76
6 The Autolink Tool 79 6.1 The Autolink Test Generation Process ...... 80 6.2 Test Purpose Specification ...... 80 6.2.1 Manual Specification ...... 82 6.2.2 Interactive Simulation ...... 82 6.2.3 Observer Processes ...... 83 6.2.4 Automatic Computation ...... 84 6.3 Test Case Generation ...... 85 6.3.1 State Space Exploration ...... 85 6.3.2 Direct Translation ...... 87 6.4 Test Suite Production ...... 87 6.5 Interpretation of MSC Test Purposes ...... 89 6.5.1 Partial Order Semantics ...... 89 6.5.2 Structuring Concepts ...... 90 6.6 Case Studies ...... 94 6.6.1 Core INAP CS-2 ...... 94 6.6.2 VB5.1 and VB5.2 ...... 97 6.7 Comparison with Other SDL-based Test Tools ...... 98 6.7.1 SaMsTaG ...... 98 6.7.2 TestComposer ...... 99 6.8 Discussion ...... 100
7 Test Generation for Distributed Test Architectures 103 7.1 Concurrency in TTCN-2 ...... 103 7.2 Definition of Test Component Configurations ...... 105 7.3 Synchronization of Test Components ...... 106 7.3.1 Implicit Synchronization ...... 106 7.3.2 Explicit Synchronization ...... 106 7.4 A Test Generation Procedure ...... 110 7.4.1 Simulator Requirements ...... 111 7.4.2 Test Generation for MTC and PTCs ...... 112 7.5 Case Studies ...... 115
8 The Tree Walk Search Strategy 119 8.1 Classical Search Strategies ...... 120 8.2 Labeled Transition Systems ...... 121 8.3 Main Concepts ...... 122 8.3.1 Root States ...... 123 8.3.2 Algorithmic Description ...... 124 8.4 Detection of Identical States ...... 128 8.4.1 Example ...... 131 8.4.2 Algorithmic Description ...... 131 8.5 Case Studies ...... 134 8.6 Discussion ...... 138
9 Test Suite Representation 141 9.1 The Autolink Script Language ...... 142 9.1.1 General Language Concepts ...... 143 9.1.2 Constraint Rules ...... 143 9.1.3 Test Suite Structure Rules ...... 149 9.2 Automatic Structuring of Constraint Descriptions ...... 153 9.2.1 Constraint Factorization ...... 156 9.2.2 Constraint Merging ...... 157 9.2.3 Constraint Parameterization ...... 158 9.2.4 Constraint Derivation ...... 161 9.2.5 Constraint Defactorization ...... 162 9.2.6 Case Studies ...... 163 9.3 A List Pattern Matching and Manipulation Language ...... 164 9.3.1 General Language Concepts ...... 167 9.3.2 Basic Patterns ...... 168 9.3.3 Variables and Variable Operators ...... 170 9.3.4 Manipulation Operators ...... 171 9.3.5 Extension Operator ...... 172 9.3.6 Search Operators ...... 172 9.4 Discussion ...... 173
10 Advanced Test Generation by Symbolic Execution 177 10.1 Motivation ...... 178 10.2 Checking the Feasibility of Path Conditions ...... 180 10.3 The ValiBOSE Tool ...... 184 10.3.1 Navigation ...... 184 10.3.2 Coverage Measurements ...... 185 10.3.3 Assertions ...... 185 10.3.4 Bookmarks ...... 185 10.3.5 Normalization ...... 186 10.3.6 Test Data Selection ...... 186 10.3.7 Example ...... 186 10.4 Discussion ...... 194
11 Conclusions 197
A List of Abbreviations 199
B TTCN-2 Test Suite for the Inres Protocol 203
C TTCN-3 Module for the Inres Protocol 213
Bibliography 219
List of Figures
1.1 Structure of the thesis ...... 3
2.1 The distributed test method in a multi-party context ...... 14
3.1 The Inres service and protocol ...... 18 3.2 The local test method applied to Inres ...... 19 3.3 TTCN-2 – Test suite overview ...... 20 3.4 TTCN-2 – ASN.1 type definitions ...... 22 3.5 TTCN-2 – Test component configuration ...... 23 3.6 TTCN-2 – Constraints ...... 25 3.7 TTCN-2 – Test case SingleDataTransfer ...... 26 3.8 TTCN-2 – Test step MediumAccess ...... 27 3.9 TTCN-2 – Default MTCFailure ...... 28 3.10 TTCN-3 – Module TestsForInres ...... 31 3.11 TTCN-3 – Communication ...... 34 3.12 TTCN-3 – Test configuration ...... 35 3.13 TTCN-3 – Templates ...... 36 3.14 TTCN-3 – Test case SingleDataTransfer ...... 37 3.15 TTCN-3 – Function MediumAccess ...... 38 3.16 TTCN-3 – Altsteps MTCFailure and ReceptionIDISind ...... 40 3.17 Feature proposal – The par statement ...... 43
4.1 Test methods based on control flow ...... 54 4.2 Test methods based on data flow ...... 56 4.3 A process model for specification, validation, verification, and test generation ...... 62
5.1 Basic MSCs for the Inres protocol ...... 65 5.2 A High-level Message Sequence Chart ...... 69 5.3 Inres – Structural description ...... 71 5.4 Excerpt from the description of process Initiator ...... 74
6.1 Test suite generation with Autolink ...... 81 6.2 MSC IN2m A BASIC RR BV 25 ...... 82 6.3 An observer process for test generation ...... 84 6.4 TTCN-2 test case IN A BASIC RR BV 25 ...... 88
6.5 Message Sequence Chart PartialOrder ...... 90 6.6 Representation of partially ordered events in TTCN-2 ...... 91 6.7 Synchronization among inline expressions ...... 93 6.8 Three test cases described by one HMSC diagram ...... 94 6.9 Computation time of MSC verifications and test generations ...... 96 6.10 A VB5.2 constraint ...... 98
7.1 Limitations of synchronization ...... 105 7.2 Inres test purpose with lack of synchronization ...... 107 7.3 Explicit synchronization by means of a coordination message ...... 108 7.4 Coordination messages – Complex example ...... 109 7.5 Explicit synchronization by means of an MSC condition ...... 109 7.6 Automatically generated CM exchange for the condition in figure 7.5 . 110 7.7 MSC DisconnectionSync ...... 115 7.8 TTCN-2 test case DisconnectionSync – MTC behavior description . . . 116 7.9 TTCN-2 test case DisconnectionSync – PTC behavior descriptions . . . 117 7.10 MSC MultiSync ...... 117 7.11 Concurrent TTCN-2 test case MultiSync ...... 118
8.1 Tree Walk – Detection of identical states ...... 129 8.2 Running Tree Walk with detection of identical states ...... 130 8.3 Exploration of the Inres protocol ...... 135 8.4 MSCs generated by Tree Walk ...... 136 8.5 MSCs generated by Tree Walk (continued) ...... 137 8.6 Exploration of the VB5.2 protocol ...... 138
9.1 Atomic expressions of the Autolink script language ...... 144 9.2 Autolink script language – A simple constraint rule ...... 145 9.3 Autolink script language – A constraint rule for multiple signal types . 146 9.4 Autolink script language – Using functions in constraint rules . . . . . 147 9.5 Autolink script language – A conditional constraint rule ...... 147 9.6 Autolink script language – Constraints with wildcards ...... 148 9.7 Autolink script language – Test suite parameters ...... 149 9.8 Test purpose naming scheme for INAP CS-2 ...... 151 9.9 Autolink script language – A simple test suite structure rule . . . . . 151 9.10 Autolink script language - A generalized test suite structure rule . . . 152 9.11 Application of the generalized rule ...... 153 9.12 Syntax of the Autolink script language in EBNF ...... 154 9.13 Quality factors and criteria ...... 155 9.14 The impact of type ordering on parameterization ...... 160 9.15 Number of constraints in the VB5.2 and INAP CS-2 test suites . . . . . 163 9.16 Size of the VB5.2 and INAP CS-2 test suites ...... 163 9.17 Original VB5.2 constraints ...... 165 9.18 Chained and parameterized constraints for VB5.2 ...... 166 9.19 Syntax of the LPML in EBNF ...... 169
9.20 Classification of search operators ...... 173 9.21 Application of the search operators ...... 174
10.1 A simple, harmful MSC test purpose ...... 179 10.2 CSP example ...... 183 10.3 ValiBOSE example – Access control ...... 187 10.4 MSC TestCaseVariable ...... 195 10.5 Code generation for MSC TestCaseVariable ...... 196
List of Algorithms
7.1 Invocation of the test generation for the PTCs and the MTC ...... 112 7.2 Test generation algorithm for the MTC ...... 113 7.3 Test generation algorithm for PTCi ...... 114
8.1 Tree Walk – Main function treeWalk ...... 125 8.2 Tree Walk – Subfunction treeSearch ...... 127 8.3 Tree Walk – Subfunction treeSearch with detection of identical states 132 8.4 Tree Walk – Hash table access with n keys ...... 133
1 Introduction
Modern telecommunication systems involve complex interactions between distributed components. Very often, these systems are heterogeneous and their components come from many different vendors. To ensure that products of different companies can interact with each other, cross-national organizations such as the European Telecommunications Standards Institute (ETSI) or the International Telecommunication Union (ITU) define international standards for protocols. In the past, these standards were specified in natural language. But since informal descriptions can be misinterpreted, formal description techniques are applied in many standards nowadays. Due to their formal semantics, formal description languages allow for precise, unambiguous specifications. Two specification languages that are used in the telecommunication area are the Specification and Description Language (SDL) and Message Sequence Chart (MSC). While SDL is used for the specification of complete systems, MSC describes individual scenarios.

In recent decades, computer science has made great efforts to improve the quality of software. Nevertheless, a software product has to undergo extensive tests before it can be used in practice. The process of determining the extent to which an implementation fulfills the requirements of its specifications is called conformance testing. Thorough testing is expensive and time-consuming. On the other hand, testing is always incomplete; it can detect errors in the implementation but it can never prove their absence. For economic reasons, testing calls for a systematic and efficient approach. In particular, this applies to the initial phase of conformance testing, in which a suitable set of test cases must be defined for a given protocol. Typically, the standardization organizations take over this task and provide the manufacturers with test suites that are specified in a standardized test language such as the Tree and Tabular Combined Notation 2 (TTCN-2).
If a formal specification is available, test cases can be derived from it automatically, e.g., by means of simulation. Automatic test generation based on formal specifications leads to a faster, cheaper, and less error-prone testing process. It ensures that the generated test cases are correct with regard to the specification. Moreover, their effectiveness can be assessed and quantified.

This thesis deals with the automatic generation of test suites for conformance testing. In particular, work has been done in the following problem areas:

1. Test case generation for test architectures where the tester itself is a distributed system.
2. Efficient exploration of the state space of the specification that retrieves test cases with a high coverage in a reasonable time.

3. Improved readability of generated test suites by means of user-defined rules and automatic structuring of data descriptions.

The framework in which all these solutions are embedded is formed by the Autolink project. During this five-year joint project with Telelogic SA, Malmö, the Institute for Telematics (University of Lübeck) developed a commercial tool that allows TTCN-2 test suites to be generated from SDL system specifications and MSC test purposes. Various projects at ETSI have demonstrated the successful application of Autolink. Besides the Autolink project, two other works in the field of testing and test generation are considered in this thesis: In 2001, a new universal testing language, called the Testing and Test Control Notation 3 (TTCN-3), was released. The author was involved in its standardization and developed the first free TTCN-3 syntax checker. Experience with Autolink has revealed that the traditional way of simulating a specification raises some problems that could be solved by symbolic execution. As a proof of concept, a prototype has been developed that demonstrates the benefits of symbolic execution.
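The basic idea of deriving test cases by simulating a formal specification can be illustrated with a small sketch. The following example is hypothetical and not part of the Autolink tool: a toy connection-oriented protocol (loosely inspired by Inres) is modeled as a finite state machine; simulating one path through the model yields a stimulus/response trace that can serve as an abstract test case. All state and signal names are invented for illustration.

```python
# Hypothetical sketch: deriving a test case by simulating a formal model.
# Transition table of the toy protocol: (state, input) -> (output, next state)
SPEC = {
    ("disconnected", "CONreq"): ("CONconf", "connected"),
    ("connected", "DATreq"): ("DATind", "connected"),
    ("connected", "DISreq"): ("DISind", "disconnected"),
}

def simulate(inputs, state="disconnected"):
    """Simulate the model and record the observed stimulus/response trace."""
    trace = []
    for stimulus in inputs:
        response, state = SPEC[(state, stimulus)]
        trace.append((stimulus, response))
    return trace

# A test purpose (a scenario of interest) is turned into a concrete test case:
test_case = simulate(["CONreq", "DATreq", "DISreq"])
for stimulus, expected in test_case:
    print(f"send {stimulus}, expect {expected}")
```

Because every expected response is taken from the specification itself, a test case obtained this way is correct with regard to the specification by construction, which is exactly the benefit claimed above.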
The Structure of the Thesis
This thesis consists of eleven chapters. The dependencies between the individual chapters are shown in figure 1.1. In chapters 2 and 3, the fundamental aspects of testing are outlined. Chapter 2, “Foundations of Testing”, provides an overview of types of testing, test architectures, and test methods. Particular emphasis is placed on the ISO/OSI Conformance Testing Methodology and Framework. In chapter 3, “Test Languages”, the Tree and Tabular Combined Notation 2 and its successor, the Testing and Test Control Notation 3, are described and compared.

The theoretical foundations of automatic test generation are explained in chapter 4, “Test Generation Based on Formal Specifications”. It summarizes the main concepts of the ITU-T Framework on Formal Methods in Conformance Testing and presents different test generation methods. In chapter 5, “High-Level Specification Languages”, the formal description techniques Message Sequence Chart and Specification and Description Language are introduced. They are used for automatic test generation in this thesis.

The Autolink project is described in chapters 6 to 9. Chapter 6, “The Autolink Tool”, provides an overview of the test generation process. In addition, the interpretation of MSCs for test generation purposes is considered and two case studies are introduced. In the following three chapters, solutions for specific test generation problems are presented. Chapter 7, “Test Generation for Distributed Test Architectures”, deals with methods for specifying synchronization among different test components and presents an algorithm for the generation of test cases for concurrent TTCN-2. Chapter 8, “The Tree Walk Search Strategy”, describes a new deterministic search strategy that achieves a higher system coverage than traditional approaches and produces test cases without redundant
[Figure 1.1: Structure of the thesis – a diagram of chapter dependencies: chapter 1 (Introduction) at the base; chapters 2 (Foundations of Testing) and 3 (Test Languages) build on it, followed by chapters 4 (Test Generation Based on Formal Specifications) and 5 (High-Level Specification Languages); chapter 6 (The Autolink Tool) builds on these, with chapters 7–10 on top of it and chapter 11 (Conclusions) at the top]

test events. Chapter 9, “Test Suite Representation”, is concerned with the readability of automatically generated test suites. Two solutions are proposed: customization by user-defined rules and automatic constraint structuring with no or only minor intervention by the test specifier.

One of the greatest challenges in automatic test generation is the modeling of the environment when simulating a specification. In chapter 10, “Advanced Test Generation by Symbolic Execution”, a prototype is presented that demonstrates how to cope with this problem. Each of the previous chapters concludes with one or more case studies that are based on three protocols, namely Inres, Core INAP CS-2, and VB5.2 BBCC. In addition, possible improvements are discussed, where appropriate. In chapter 11, “Conclusions”, some ideas on the application and future perspective of automatic test generation in general are presented.

The document is completed by three appendices: In appendix A, the abbreviations used throughout this thesis are listed. Appendices B and C include complete TTCN-2 and TTCN-3 test suites for the Inres protocol, parts of which are presented in chapter 3.
2 Foundations of Testing
Testing is a method or process that aims at detecting errors in a system or a system component. Alongside verification and analysis, testing is one of the three analytical ways of ensuring the quality of a product in terms of functionality and reliability (Liggesmeyer, 1990, p. 28). But since verification mostly fails due to the complexity of modern systems, and static analyses of their structure and complexity (based on metrics and control/data flow) can only give hints at potential problems, testing is often the only practicable method to create confidence in a system.

Testing represents an extensive and multifaceted process within the software and hardware life cycle. A classification and description of different types of testing is given in section 2.1. This thesis is concerned with conformance testing, whose purpose is to determine the extent to which an implementation complies with its specification(s). The core concepts of conformance testing are presented in section 2.2. A short survey of the testing process and the persons involved is given in section 2.3. Finally, section 2.4 discusses the major aspects of abstract test methods and test configurations.
2.1 Classification of Tests
Testing is a generic term that subsumes many different methods, procedures, and objectives of various application areas (see, e.g., Peng and Wallace (1993); Walter et al. (1998)). Accordingly, there are several possible ways to define and classify types of tests (see, e.g., Balzert (1998, chapter 5) and Coward and Ince (1995, chapter 2)). A first major distinction can be made between static and dynamic test methods: Static methods are based on the analysis of code. Manual test methods that fall into this category are review (software requirements, design, and code are presented to an audience for comments and approval), inspection (the work is examined by a person or group other than the author), and structured walkthrough (the developer guides his colleagues through a segment of design or code). In contrast, dynamic test methods are based on the execution of the system to be tested. Typically, the system is stimulated by some input and the resulting output (response) is observed and appraised. In the following, dynamic testing is characterized by five properties:

The type of implementation

The objective of the test