
New Strategies for Automated Random Testing

Mian Asbat Ahmad

PhD
University of York
Computer Science
September 2014

Abstract

The ever-increasing reliance on software-intensive systems is driving research to discover software faults more effectively and more efficiently. Despite intensive research, very few approaches have studied and used knowledge about fault domains to improve the testing or the feedback given to developers. The present thesis addresses this shortcoming: it leverages fault co-localization in a new random testing strategy called Dirt Spot Sweeping Random (DSSR), and it presents two new strategies: Automated Discovery of Failure Domain (ADFD) and Automated Discovery of Failure Domain+ (ADFD+). These improve the feedback given to developers by deducing more information about the failure domain (i.e. point, block or strip) in an automated way.

The DSSR strategy adds the value causing a failure, together with its neighbouring values, to the list of interesting values, in order to explore the underlying failure domain. The comparative evaluation showed significantly better performance of DSSR over the Random and Random+ strategies. The ADFD strategy finds failures and failure domains and presents the pass and fail domains in graphical form. The results obtained by evaluating error-seeded numerical programs indicated highly effective performance of the ADFD strategy. The ADFD+ strategy extends the ADFD strategy with respect to its algorithm and the graphical presentation of failure domains. In comparison with Randoop, the ADFD+ strategy successfully detected all failures and failure domains, while Randoop identified individual failures but could not detect failure domains.

The ADFD and ADFD+ techniques were enhanced by integration of the automatic invariant detector Daikon, and the precision of identifying failure domains was determined through extensive experimental evaluation of real-world Java projects contained in the Qualitas Corpus database. The analysis of the results, cross-checked by manual testing, indicated that the ADFD and ADFD+ techniques are highly effective in providing assistance, but are not an alternative to manual testing.
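As a rough illustration of the dirt spot sweeping idea summarised above, the following minimal Java sketch shows how a value that causes a failure, together with its immediate neighbours, could be recorded in a pool of interesting values for later test generation. The class and method names (SpotSweeper, recordFailure) are hypothetical and are not code from the thesis or from YETI.

import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch of "dirt spot sweeping": when a test input causes a
 * failure, the failing value and its immediate neighbours are added to a
 * pool of interesting values, on the assumption that failures cluster in
 * contiguous regions (point, block or strip failure domains).
 */
public class SpotSweeper {

    // Pool of values the strategy will prefer when generating future tests.
    private final List<Integer> interestingValues = new ArrayList<>();

    /** Called whenever a generated test with input 'failingValue' fails. */
    public void recordFailure(int failingValue) {
        // Add the failing value itself and its immediate neighbours,
        // sweeping the area around the "dirt spot" for related failures.
        for (int offset = -1; offset <= 1; offset++) {
            int candidate = failingValue + offset;
            if (!interestingValues.contains(candidate)) {
                interestingValues.add(candidate);
            }
        }
    }

    public List<Integer> getInterestingValues() {
        return interestingValues;
    }
}

A full implementation would sweep more than the two immediate neighbours and handle input types other than integers; the sketch captures only the core bookkeeping.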
Contents

Abstract
List of Figures
List of Tables
Dedication
Acknowledgements
Declaration

1 Introduction
  1.0 Preliminaries
  1.1 Software Testing
  1.2 Random Testing
  1.3 The Problems
    1.3.1 Limitation of RT to Discover Contiguous Unique Failures
    1.3.2 Inability of RT to Identify Failure Domains
    1.3.3 Lack of Techniques to Present RT Results in Graphical Form
  1.4 Research Goals
  1.5 Contributions
  1.6 Thesis Structure

2 Literature Review
  2.1 Software Testing
    2.1.1 Input Domain
    2.1.2 Test Case
    2.1.3 Test Oracle
  2.2 Software Testing from Various Viewpoints
  2.3 Software Testing Levels
  2.4 Software Testing Purpose
  2.5 Software Testing Perspective
    2.5.1 White-box Testing
    2.5.2 Black-box Testing
  2.6 Software Testing Types
    2.6.1 Manual Software Testing
    2.6.2 Automated Software Testing
  2.7 Test Data Generation
    2.7.1 Path-wise Test Data Generator
    2.7.2 Goal-oriented Test Data Generator
    2.7.3 Intelligent Test Data Generator
    2.7.4 Search-based Test Data Generator
    2.7.5 Random Test Data Generator
  2.8 Random Testing
  2.9 Pass and Fail Domains
  2.10 Versions of Random Testing
    2.10.1 Random+ Testing
    2.10.2 Adaptive Random Testing
    2.10.3 Mirror Adaptive Random Testing
    2.10.4 Restricted Random Testing
    2.10.5 Directed Automated Random Testing
    2.10.6 Feedback-directed Random Testing
    2.10.7 The ARTOO Testing
  2.11 Automatic Random Testing Tools
    2.11.1 JCrasher
    2.11.2 Jartege
    2.11.3 Eclat
    2.11.4 Randoop
    2.11.5 QuickCheck
    2.11.6 AutoTest
    2.11.7 TestEra
    2.11.8 Korat
    2.11.9 YETI
  2.12 Summary
  2.13 The Proposed Area of Research

3 York Extensible Testing Infrastructure
  3.1 Overview
  3.2 Design
    3.2.1 Core Infrastructure of YETI
    3.2.2 Strategy
    3.2.3 Language-specific Binding
    3.2.4 Construction of Test Cases
    3.2.5 Call Sequence of YETI
    3.2.6 Command-line Options
    3.2.7 Execution
    3.2.8 Test Oracle
    3.2.9 Report
    3.2.10 Graphical User Interface
  3.3 Summary

4 Dirt Spot Sweeping Random Strategy
  4.1 Dirt Spot Sweeping Random Strategy
    4.1.1 Random Strategy
    4.1.2 Random+ Strategy
    4.1.3 Dirt Spot Sweeping
    4.1.4 Working of the DSSR Strategy
    4.1.5 Explanation of the DSSR Strategy by Example
  4.2 Implementation of the DSSR Strategy
  4.3 Evaluation
    4.3.1 Research Questions
    4.3.2 Experiments
    4.3.3 Performance Measurement Criteria
  4.4 Results
    4.4.1 Absolute Best in R, R+ and DSSR Strategies
    4.4.2 Classes for Which Any of the Three Strategies Performs Better
    4.4.3 The Best Default Strategy in R, R+ and DSSR
  4.5 Discussion
  4.6 Related Work
  4.7 Summary

5 Automated Discovery of Failure Domain
  5.1 Introduction
  5.2 Automated Discovery of Failure Domain
    5.2.1 GUI Front-end for Providing Input
    5.2.2 Automated Finding of Failure
    5.2.3 Automated Generation of Modules
    5.2.4 Automated Compilation and Execution of Modules
    5.2.5 Automated Generation of Graph
    5.2.6 Implementation of the ADFD Strategy
    5.2.7 Explanation of the ADFD Strategy by Example
  5.3 Experimental Results
  5.4 Discussion
  5.5 Threats to Validity
  5.6 Related Work
  5.7 Summary

6 Automated Discovery of Failure Domain+
  6.1 Introduction
  6.2 Automated Discovery of Failure Domain+
    6.2.1 Implementation of ADFD+
    6.2.2 Workflow of ADFD+
    6.2.3 Example to Illustrate Working of ADFD+
  6.3 Evaluation
    6.3.1 Research Questions
    6.3.2 Randoop
    6.3.3 Experimental Setup
  6.4 Experimental Results
    6.4.1 Efficiency
    6.4.2 Effectiveness
    6.4.3 Presentation of Failure Domains
  6.5 Discussion
  6.6 Threats to Validity
  6.7 Related Work
  6.8 Summary

7 Evaluation of ADFD and ADFD+ Techniques
  7.1 Enhancement of the Techniques
  7.2 Daikon
    7.2.1 Types of Invariants Detected by Daikon
  7.3 Difference in Working Mechanism of the Two Techniques
  7.4 Research Questions
  7.5 Evaluation
    7.5.1 Experiments
    7.5.2 Results
  7.6 Threats to Validity
  7.7 Related Work