Test Suite Optimisation Based on Response Status Codes and Measured Code Coverage
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Tuomas Parttimaa

TEST SUITE OPTIMISATION BASED ON RESPONSE STATUS CODES AND MEASURED CODE COVERAGE

Master's Thesis
Degree Programme in Computer Science and Engineering
April 2013

Parttimaa T. (2013) Test Suite Optimisation Based on Response Status Codes and Measured Code Coverage. University of Oulu, Department of Computer Science and Engineering, Degree Programme in Computer Science and Engineering, Oulu, Finland, Master's Thesis, 78 pp., 2 appendices.

ABSTRACT

A software test suite often comprises thousands of distinct test cases. Executing an unoptimised test suite can therefore waste valuable time and resources. To avoid the unnecessary execution of redundant test cases, the test suite should be optimised to contain fewer test cases. This thesis focuses on the optimisation of a commercially available Hypertext Transfer Protocol (HTTP) fuzzing test suite.

A test setup was created for the optimisation work. It consisted of the given fuzzing test suite, five different HTTP server implementations used as test subjects, and a code coverage measurement tool. Test runs were executed against the five test subjects with the test suite, while code coverage was measured from the test subjects.

Three different test suite optimisation algorithms were implemented in this thesis. The original test suite was optimised by applying the algorithms to the results of the test runs. Another set of test runs was then performed with the optimised subset suites, again measuring code coverage from the same test subjects. All of the coverage measurement results are presented and analysed. Based on the code coverage analysis, the test suite optimisation algorithms were assessed and the following research results were obtained.
The code coverage analysis demonstrated with a strong degree of certainty that variation in the response messages indicates which test cases actually exercise the test subject. The analysis also showed, with a reasonably strong degree of certainty, that an optimised test suite can achieve the same level of code coverage that was attained with the original test suite.

Keywords: fuzz testing, coverage measurement, coverage testing, software testing, hypertext transfer protocol.

Parttimaa T. (2013) Test Suite Optimisation Based on Response Status Codes and Measured Code Coverage. University of Oulu, Department of Computer Science and Engineering, Degree Programme in Computer Science and Engineering, Master's Thesis, 78 pp., 2 appendices.

TIIVISTELMÄ

A software test suite often consists of thousands of different test cases. Valuable time and resources can be wasted in executing a test suite that has not been optimised. The test suite should therefore be optimised to contain few test cases, so that the unnecessary execution of irrelevant test cases is avoided. This thesis focuses on efforts to optimise a commercially available fuzz test suite for Hypertext Transfer Protocol (HTTP) servers.

For the optimisation, a test environment was created consisting of the fuzz test suite, five HTTP servers used as test subjects, and a code coverage measurement tool. The predefined test suite was used in test runs executed against the test subjects, during which code coverage was measured from the test subjects.

Three different test suite optimisation algorithms were implemented in this thesis. The original test suite was optimised by applying the algorithms to the results of the test runs. New test runs were executed with the optimised subset suites, during which code coverage was again measured from the test subjects. All code coverage measurement results are presented and analysed. Based on the coverage analysis, the performance of the test suite optimisation algorithms was assessed and the following research results were obtained.

The code coverage analysis established that a change in the response messages indicates which test cases actually exercise the test subject. The analysis also gave reasonable certainty that an optimised subset suite can achieve the same code coverage that was achieved with the original test suite.

Keywords: fuzz testing, coverage measurement, coverage testing, software testing, hypertext transfer protocol.

TABLE OF CONTENTS

ABSTRACT
TIIVISTELMÄ
TABLE OF CONTENTS
FOREWORD
LIST OF ABBREVIATIONS AND SYMBOLS
1. INTRODUCTION
2. SOFTWARE TESTING
   2.1. Fuzz Testing
      2.1.1. Fuzzer Categorization
      2.1.2. Mini-Simulation Method
   2.2. Code Coverage
      2.2.1. Line Coverage
      2.2.2. Branch Coverage
      2.2.3. Function Coverage
   2.3. Code Coverage Measurement
      2.3.1. Source Code Level Instrumentation
      2.3.2. Binary-Level Instrumentation
3. HYPERTEXT TRANSFER PROTOCOL (HTTP)
   3.1. Client/Server Message Exchange
      3.1.1. Request Messages
      3.1.2. Response Messages
   3.2. Request Methods
   3.3. Response Status Codes
      3.3.1. Response Status Code Classes
4. TEST ENVIRONMENT
   4.1. Research Questions
   4.2. Terms of the Work
   4.3. Solution Choices of the Test Setup
   4.4. Test Suite Analysis
      4.4.1. Structure of the Test Suite
      4.4.2. Instrumentation Method
      4.4.3. Test Run Result Files
   4.5. Test Subject Selection
      4.5.1. Nginx
      4.5.2. Lighttpd
      4.5.3. Tntnet
      4.5.4. Hiawatha
      4.5.5. Apache httpd
      4.5.6. Other Candidates for Test Subjects
   4.6. Code Coverage Measurement Tool Selection
      4.6.1. GNU Compiler Collection (GCC)
      4.6.2. Gcov
      4.6.3. Lcov
      4.6.4. Other Candidates for the Code Coverage Measurement Tool
5. TEST SUITE OPTIMISATION
   5.1. Description of the Optimisation Methods
      5.1.1. Test Suite Optimisation Algorithm No. 1
      5.1.2. Test Suite Optimisation Algorithm No. 2
      5.1.3. Test Suite Optimisation Algorithm No. 3
   5.2. Test Setup Control Script
   5.3. Practical Trial
      5.3.1. Result Validation
6. TEST RESULTS
   6.1. Code Coverage Measurement Results Using the Original Test Suite
   6.2. Optimisation Results as Subset Test Suites
      6.2.1. Reduction in the Size of the Test Suite
   6.3. Code Coverage Measurement Results Using the Subset Test Suites
      6.3.1. Results Using the Subset Test Suites Optimised with Algorithm No. 1
      6.3.2. Results Using the Subset Test Suites Optimised with Algorithm No. 2
      6.3.3. Results Using the Subset Test Suites Optimised with Algorithm No. 3
7. DISCUSSION
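The abstract's central finding is that variation in response messages signals which test cases actually exercise a test subject. The thesis's three algorithms are detailed in Chapter 5; as a rough illustration of the general idea only, the following is a minimal sketch, assuming each test case is summarised by the HTTP status codes it elicited and that cases with no deviation from the baseline, or with an already-seen response signature, are dropped. All names and the reduction rule itself are hypothetical, not the thesis's actual algorithms.

```python
# Sketch of response-status-based test suite reduction (illustrative only).
# Assumption: a test case whose responses never deviate from the baseline
# status code is treated as redundant, and only the first test case per
# distinct status-code signature is retained.

def reduce_suite(results, baseline_status=200):
    """results: list of (test_case_id, tuple of HTTP status codes).

    Returns the IDs of test cases kept in the optimised subset suite.
    """
    seen_signatures = set()
    subset = []
    for case_id, statuses in results:
        if all(s == baseline_status for s in statuses):
            continue  # no deviation: assumed not to exercise new code paths
        signature = tuple(statuses)
        if signature in seen_signatures:
            continue  # same response behaviour already represented
        seen_signatures.add(signature)
        subset.append(case_id)
    return subset

# Example: four fuzz test cases run against one test subject
results = [
    ("case-001", (200,)),      # normal response only -> dropped
    ("case-002", (400,)),      # deviation -> kept
    ("case-003", (400,)),      # duplicate signature -> dropped
    ("case-004", (200, 500)),  # new deviation -> kept
]
print(reduce_suite(results))  # ['case-002', 'case-004']
```

A reduction of this kind trades precision for cost: it needs no coverage instrumentation at run time, which is why the thesis validates the reduced suites afterwards by re-measuring code coverage on the same test subjects.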