Linköping University | Department of Computer and Information Science Bachelor thesis, 16 ECTS | Computer Science 2018 | LIU-IDA/LITH-EX-G--18/033--SE

Practical Approach to Developing an Automation Testing Tool

Hannes Persson and Povel Ståhlberg

Supervisor : Ivan Ukhov Examiner : Ahmed Rezine


Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.

© 2017 Hannes Persson and Povel Ståhlberg

Abstract

Manually verifying software under development can be time consuming because of its complexity, but also because of frequent updates to different parts of the system. As the software grows larger, a way of verifying it automatically, without user interaction, is a good approach. Verifying software systematically and automatically both saves time for the developers and assures that the updated version functions as before. This report presents a starting point for automatic testing. The work was done in cooperation with XperDi, a company developing a plug-in for CAD software whose functionality is currently verified manually. A testing tool was developed that supports communication between the Windows applications used by the plug-in; this was needed to automate the testing process. The conclusions reached during this thesis are promising as a starting point for XperDi to move from manual to automatic verification. There are, however, several improvements that this report presents for further development of the testing tool.

Keywords: Software Testing, Box Approach Testing, Testing Tool, VB.NET

Acknowledgments

We would like to thank Albin Mannerfelt and Manokar Munisamy at XperDi in Linköping. We also want to thank Ahmed Rezine at IDA, the Department of Computer and Information Science at Linköping University, for being our examiner.

A special thanks to Creative Science Park for providing a working environment.

Contents

1 Introduction
  1.1 Background
  1.2 Motivation
  1.3 Purpose
  1.4 Research questions
  1.5 Thesis outline

2 Theory
  2.1 Development Environment
    2.1.1 VB.NET
    2.1.2 Windows API
    2.1.3 Component Object Model
  2.2 CAD configurator
    2.2.1 SQLite
  2.3 Continuous Integration
  2.4 Documentation and Testing
    2.4.1 Log4Net
    2.4.2 Software Testing
    2.4.3 Pairwise Testing

3 Method
  3.1 Pre-studies of the CAD Configurator
    3.1.1 Parts
    3.1.2 Features
  3.2 Testing strategy
    3.2.1 Execution time
  3.3 Test Structure
    3.3.1 Permutations
    3.3.2 Pairwise Test Structure
  3.4 Box Testing Approach
  3.5 Implementation
    3.5.1 Automated Testing Tool
    3.5.2 Testing Features
    3.5.3 Logging with Log4Net

4 Results
  4.1 Testing Tool Implementation and Tests
    4.1.1 Tests
    4.1.2 Testing Tool

5 Discussion
  5.1 Results
  5.2 Method
  5.3 Maintenance and future improvements

6 Conclusion
  6.1 Future Work

Bibliography

1 Introduction

Software development is a challenging process. It is hard to predict how the user will operate applications. Therefore, the developer needs a systematic way to gain information about potential issues before the application is deployed. By using testing methods, the developer can minimise potential errors and therefore improve the software quality. This makes testing a fundamental part of the application development cycle. Having a test tool allows one to capture the desired behaviour and improve with confidence later on. This thesis will investigate and evaluate how an automatic test system can be designed and implemented for a system that assists with 3D-modeling.

1.1 Background

XperDi is a company located in Linköping, Sweden, that provides software which, when used, enables a more effective 3D-modeling prototyping process. This is provided by XperDi's own add-on that functions together with a number of 3D-modeling software packages. The application is used by drafters in order to design complex 3D models in a straightforward manner. Usually this process is a tedious and time-consuming practice, especially at later stages of a design where small changes can have a large impact on related parts. The application developed by XperDi assists in the assembly and alteration process at the design stage of a 3D model in order to ensure that time consumption is kept to a minimum.

1.2 Motivation

Today, when the developers at XperDi provide new functionality, the system is evaluated and tested manually to verify that the application is working properly. This is not a testing process that scales well with an increasing set of functionality, and it necessitates redundant work. Because of this, there is no systematic way to gain confidence that the updated functionality and other related parts of the system work as intended. In order to avoid manual testing and redundant work, an automatic testing system could provide a systematic approach to preventing bugs in deployed code.


1.3 Purpose

The goal of this thesis is to develop an automated testing prototype that exercises a defined set of the application functionality. The prototype to be developed has the following requirements:

• The test tool shall conduct fully automatic testing without supervision and interaction.
• The test tool shall handle all types of errors the tests might throw without breaking the queued test execution.
• The test tool shall be able to detect communication errors and faulty waiting states.
• The test tool shall have the possibility to be integrated in a continuous integration environment.
• The test tool shall output information regarding the test execution to a log.
• The test tool shall function on a Windows 10 system.

By analysing a range of testing and automation types, a suitable testing prototype is to be developed in order to test the functions and provide an entry point for an automatic integrated testing system.

1.4 Research questions

In this thesis, we set out to investigate the following questions:

• Is it possible to apply system automation testing on the application?
• Can automated testing be conducted with an existing testing framework?

The first research question aims to investigate whether it is possible to implement a set of tests to validate each of the features users can access from the application.

The second research question aims to answer whether there is an existing testing framework that can be used for the purpose of testing the application's features correctly. Even though many testing frameworks exist, it is possible that none satisfies our needs; one reason could be the user interaction between Windows applications that needs to be automated. This particular communication is between the application and the 3D-modeling software CATIA.

1.5 Thesis outline

In chapter 2 we explain the relevant concepts of software testing and tools of importance when designing a testing tool.

In chapter 3 we describe how the application was first analysed in order to get a better understanding of how each part in the system correlates with the other parts. We also present the development of the automated testing system.

In chapter 4 we present the testing prototype and the results from the test suite.

In chapter 5 a discussion is conducted where the system is evaluated in a larger context, and design decisions are analysed and re-evaluated.

Chapter 6 will present the conclusions of this thesis together with suggestions for improvements in future work.

2 Theory

This chapter will go through the tools and techniques that will be of importance for this thesis.

2.1 Development Environment

The following parts will briefly present theory that is important regarding the development environment. CATIA is the 3D-modeling software that the testing system needs to function with. VB.NET will be used as the primary development language according to requirements from XperDi. SQLite is used to store all the information regarding the components used in this thesis. It is necessary to know about concepts such as multi-threading and deadlock due to issues in automating a user-interaction application. In order to develop the testing tool, a Windows system will be used to build and run the software. A Windows system is used because of design decisions in the application and because of accessibility. By working with Windows, the different software components have the possibility to access explicit information about certain parts of the system. This can be done through the Windows API and the Component Object Model standard.

2.1.1 VB.NET

XperDi's application is developed in Visual Basic .NET. Visual Basic .NET is an easy-to-learn, type-safe, object-oriented and event-driven language based on the high-level programming language family BASIC. To ensure that the testing prototype can access functionality in the application and the provided libraries, VB.NET is a suitable choice of language. Visual Studio 2015 has integrated support for Visual Basic .NET together with the NuGet package manager, which is used to manage packages such as SQLite and testing frameworks.

2.1.2 Windows API

The Windows API is a programming interface for accessing public functions. These libraries are accessed via Windows Dynamic-Link Libraries (DLL) [1]. USER32.DLL is a library that contains modularised functionality related to the Windows graphical interface. Among the functions the library provides are the window management functions FindWindow() and


PostMessage(), which can be used to manipulate the current state of open window instances. FindWindow() takes the name or class of a window and returns a unique handle that can be used to interact with the window. PostMessage() sends a message to a window handle.
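As an illustration, a minimal VB.NET sketch of declaring and calling these two functions is shown below. The window caption and the use of WM_CLOSE are illustrative assumptions, not part of the thesis implementation.

```vbnet
' Minimal P/Invoke sketch for FindWindow() and PostMessage() from USER32.DLL.
Imports System.Runtime.InteropServices

Module Win32Interop
    ' Returns a handle to the top-level window matching the given class and/or caption.
    <DllImport("user32.dll", SetLastError:=True, CharSet:=CharSet.Auto)>
    Public Function FindWindow(lpClassName As String, lpWindowName As String) As IntPtr
    End Function

    ' Posts a message to the message queue of the window identified by hWnd.
    <DllImport("user32.dll", SetLastError:=True)>
    Public Function PostMessage(hWnd As IntPtr, Msg As UInteger,
                                wParam As IntPtr, lParam As IntPtr) As Boolean
    End Function

    ' &H10 is WM_CLOSE, the same code used in section 3.5.1 to close a dialogue.
    Public Const WM_CLOSE As UInteger = &H10

    Sub Main()
        ' Look up a window by its caption ("Untitled - Notepad" is only an example).
        Dim handle As IntPtr = FindWindow(Nothing, "Untitled - Notepad")
        If handle <> IntPtr.Zero Then
            ' Ask the window to close itself.
            PostMessage(handle, WM_CLOSE, IntPtr.Zero, IntPtr.Zero)
        End If
    End Sub
End Module
```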

2.1.3 Component Object Model

The Component Object Model (COM) is an interface standard that can be used to communicate between systems and programming languages. COM objects can be used to manipulate the state of applications and retrieve data that is supported by the receiver [2]. COM is a standard for how applications are supposed to interact with each other. CATIA is a supported software that exposes a COM interface.
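As a hedged illustration of how such a COM interface can be reached from VB.NET, the sketch below attaches to a running CATIA instance through late binding. The ProgID "CATIA.Application" and the Documents property are the commonly documented CATIA V5 automation names; the snippet is not taken from the thesis code.

```vbnet
' Late-bound COM access; relies on Option Strict Off (the VB.NET default).
Module CatiaComExample
    Sub Main()
        Try
            ' Attach to an already running CATIA instance via its COM ProgID.
            Dim catia As Object = GetObject(, "CATIA.Application")
            Console.WriteLine("Open documents: " & CStr(catia.Documents.Count))
        Catch ex As Exception
            Console.WriteLine("No running CATIA instance could be reached: " & ex.Message)
        End Try
    End Sub
End Module
```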

2.2 CAD configurator

The XperDi CAD Configurator is an application that makes a set of functions available to the user via a graphical interface. This GUI provides an intuitive way to alter the state of the active CATIA instance. The small set of software functions the GUI provides is partly implemented in a compiled library, and these functions all relate to the state of the CATIA instance in different ways. The provided functionality can be divided into the following groups:

• Add, delete, and alter objects in CATIA

• Object grouping functionality

• Project/Design functionality

By constructing a model via the application, the user does not need any particular modeling expertise and can focus on the design aspect, later rearranging sections of the design without the need to rework the whole model from the start. Without the application, such a change would take more time even for minor fixes because of the number of static dependencies in 3D models; the application thereby encourages experimentation.

2.2.1 SQLite

SQLite is a self-contained database engine that integrates well with VB.NET. SQLite is a reliable alternative for creating structures of information in order to save data and states. The application uses SQLite for data tracking among the parts in a 3D model. The SQLite system is also used as a shared access point for all available parts.
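A minimal sketch of querying such a database from VB.NET with the System.Data.SQLite package is shown below; the file name parts.db and the Parts table are hypothetical, not the CAD Configurator's actual schema.

```vbnet
Imports System.Data.SQLite

Module SQLiteExample
    Sub Main()
        ' Open the (hypothetical) parts database and list its contents.
        Using connection As New SQLiteConnection("Data Source=parts.db;Version=3;")
            connection.Open()
            Using command As New SQLiteCommand("SELECT Name, Category FROM Parts", connection)
                Using reader As SQLiteDataReader = command.ExecuteReader()
                    While reader.Read()
                        Console.WriteLine(reader.GetString(0) & " (" & reader.GetString(1) & ")")
                    End While
                End Using
            End Using
        End Using
    End Sub
End Module
```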

2.3 Continuous Integration

There are several development methods. One of the most commonly used among the agile methods is extreme programming. Extreme programming is appreciated because it is focused on communication between customers and developers. This makes it easy to integrate features on demand due to the support for easy deployment. A method for agile development is to use continuous integration: a system that integrates multiple parts of the development process and conducts builds, executes code coverage analysis and tests the source code automatically. If a continuous integration process is not used, these steps can be tedious and cause errors due to overlooked steps in the process [3].

2.4 Documentation and Testing

When developing applications, providing a structured way of presenting information can make it easier both for future development and for verifying whether the application executed with the proper behaviour.

Debug logging, verbose error handling and logging of execution results can be used to decrease the time it takes to trace an unexpected error. This can be achieved with different libraries or frameworks, one of the most commonly known being Log4Net.

2.4.1 Log4Net

Log4Net is a logging framework that integrates with Microsoft .NET platforms and can be used for creating and maintaining log output messages [4]. Log4Net needs to be configured in a global App.Config file for the project. A log instance is, thereafter, instantiated for each of the classes that needs writing privileges to the log. By implementing Log4Net, the developer can direct messages from different classes to a standard formatted output in different types of logs while maintaining consistency among the messages. The tool supports filtering depending on the log message category. The message categories of interest are DEBUG, INFO, and ERROR.
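A minimal sketch of how a class obtains its own logger and writes the three categories is shown below; the class name and messages are illustrative, and the appender details are assumed to live in App.Config (see section 3.5.3).

```vbnet
Imports log4net

' Load the appender configuration from App.Config at start-up.
<Assembly: log4net.Config.XmlConfigurator(Watch:=True)>

Public Class InstantiationTest
    ' One logger instance per class that needs writing privileges to the log.
    Private Shared ReadOnly Log As ILog = LogManager.GetLogger(GetType(InstantiationTest))

    Public Sub Run()
        Log.Info("Instantiation test started")                              ' INFO: general progress
        Log.Debug("Mounting CAR1 onto the Start Node")                      ' DEBUG: detailed trace
        Log.Error("CATIA did not acknowledge the instantiation request")    ' ERROR: failures
    End Sub
End Class
```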

2.4.2 Software Testing

Software testing is a process used for evaluating a system or its components. Testing is needed in order to be able to answer the question: “Does the software satisfy the requirements or not?” [5]. There are a number of different kinds of testing methods, each of which focuses on catching certain types of errors. In this thesis black-, white- and grey-box testing will be studied further and will therefore be explained in more detail.

2.4.2.1 Black-Box Testing

Black-box testing is the method of testing a closed system guided by the corresponding specification. The developer ignores the code and focuses on testing whether the software is able to meet the requirements (data-driven, functional, or input/output-driven) [6, 7]. Black-box testing should be conducted by a tester with no internal knowledge of the source code. The application is accompanied by a set of implemented test cases that will be executed with a preselected set of inputs. The tester evaluates the output for correctness. Black-box testing is adequate for specifying software components and pre- and post-conditions. Unfortunately, it is not enough for the more interactive components. This is because the result of a component's operation may be dependent on external operations that are called during execution [8]. Therefore, the external operations activated by the black-box component also need to be specified. It is enough to specify how the external operations should behave; there is no need to implement them [8]. The advantages of black-box testing are as follows [6, 9]:

• Efficient when used in large code segments.

• No need for source code access.

• Quicker test case development.

• User’s perspective is clearly separated from the developer’s one.

However, the disadvantages of black-box testing are as follows:

• Only a subset of the functionality is triggered. This results in limited code coverage.

• Limited testing due to low knowledge of the code base of the application.

• Difficulty in designing test cases.


Figure 2.1: Black-box testing

2.4.2.2 White-Box Testing

White-box testing is the method of testing code by running test functions that typically exercise classes or functions. Using white-box testing helps to optimise code, eliminate redundant functionality, and catch errors that a compiler may not be able to detect [9]. To use white-box testing, the developer needs full knowledge of the application's codebase. White-box testing can, in combination with black-box testing (see section 2.4.2.1), form another box-testing method.

2.4.2.3 Grey-Box Testing

Grey-box testing is defined as a meeting point of black- and white-box testing. It is a method to use when having limited knowledge of the codebase, together with knowledge of the fundamental aspects of the system [9]. In grey-box testing, the system under test (SUT) can reveal parts of its internal workings, not just inputs and outputs as in black-box testing. By combining black- and white-box methodology, a grey-box methodology can be adapted.

Figure 2.2: Grey-box testing

The techniques found inside grey-box testing are as follows:

• Orthogonal array testing - Used for testing a subset of all possible combinations.
• Matrix testing - Used for stating a status report of a project.
• Regression testing - Every time an application is modified, this type of testing can be used in order to re-establish the confidence that the application still functions according to the specification [10].
• Pattern testing - Verifies how good the application's architecture and design are.

Advantages of grey-box testing are [6, 9]:

• Combined benefits of white- and black-box testing.
• The tester relies on interface definitions and functional specifications, not source code.
• Tests are done from the user's point of view, not the designer's.
• A tester can design excellent tests based on limited information.

Disadvantages of grey-box testing are:

• Still no full access to the codebase, and limited test coverage.
• Many application paths remain untested.
• There can be redundant test cases inside grey-box testing.


2.4.2.4 Regression Testing

Regression testing is a grey-box testing technique applied to software whose tests should be run whenever someone modifies the codebase, to re-establish confidence that the application functions according to a set of specifications. Modifications of code often involve new logic that fixes bugs or increases the application's efficiency [11]. The tests that a regression testing system runs are usually function-oriented and check whether the updated code still returns the correct result. This also involves functions that may not be directly connected to the altered code but may use shared data or modules. Regression testing can be grouped into progressive regression testing or corrective regression testing, depending on whether the specification has been altered [12]. A detailed description of the differences can be seen in figure 2.3.

Figure 2.3: Differences in types of regression testing

There are two types of regression testing that are adapted from the black-box approach. Corrective regression testing is used for changes to the code that have not caused any alterations in the specification. Usually the majority of the old tests can be reused because the inputs and outputs of the program stay the same and will function according to the old specification. However, changes that alter data structures and data flow might not be tested completely, and new tests might be needed to re-establish the confidence [12]. The two methods can be used together to complement regression testing in a beneficial way. By combining them, the developer can use progressive regression testing at a milestone when the specification has been changed, and corrective regression testing during continual production to maintain the testing code during small changes. With every new version, this requires new or extended test cases that verify the new features. This becomes a problem when a project starts to grow; it depends both on the complexity of features and on how the data is structured. Elbaum et al. write about a group of collaborators who test 20,000 lines of code, which takes about seven weeks to run. In such a case, it is important to prioritise tests [13]. One can both decrease the time the suite needs to execute and decrease the average time it takes to find an error [13, 14]. For example, if an existing test case has been altered and the developer wants to check that it still functions correctly, it could be a good idea to prioritise it before running the rest of the test suite (which could otherwise take up to 6-7 weeks before detecting the change).


2.4.3 Pairwise Testing

Pairwise test generation (a special case of t-wise testing) is a method used for testing combinations of objects. The plug-in (presented in section 2.2) could be described as a Software Product Line (SPL), a family of products being built within one piece of software. An SPL is difficult to validate due to the large number of components usually involved in building a product [15]. Perrouin et al. describe this in the context of an automotive domain as follows: "generally the number of possible configurations grows exponentially leading to millions of combinations to test" [7, 15]. Because of this, we need to decrease the number of possible configurations that need to be tested. Perrouin et al. raise two major issues in testing an SPL: the number of possible products and the generation of the test suites for the products. Pairwise testing takes knowledge of component constraints into account, giving developers the opportunity to select pairs of components that are crucial for the product to be created [16]. Williams presents a possible trade-off between test coverage and the number of tests: determining a set of configurations that tests all dependencies without testing all available permutations [17]. Pairwise testing requires that every pair of components is covered by at least one test case [18, 17]. Following this goal will provide a well-defined test coverage. Redundancy can still exist, but at smaller scales.

3 Method

In this chapter, the method is described in a way that explains in detail how the implementation was carried out. A pre-study of the CAD Configurator features needed to be conducted to locate the different areas of the codebase that need to be tested. A study of both black-box and grey-box testing follows, in order to decide which technique is suitable for testing the CAD Configurator. The method for how test cases were developed will also be presented, using permutations combined with pairwise testing, a method within software testing. The implementation phase of the testing tool will also be presented.

3.1 Pre-studies of the CAD Configurator

In this section the pre-studies of the CAD Configurator will be presented: how we came in contact with the given codebase, and the functionality and features existing in the CAD Configurator. XperDi has provided a set of features that need to be tested. The features all alter the existing instance of CATIA or database information that is used by the CAD Configurator. The CAD Configurator communicates with CATIA and requires user interaction throughout the whole process. In the beginning of this project, a large amount of time was spent examining the given high-level code structure and how the CAD Configurator communicated with CATIA. This provided an understanding of how features accessed and used the CATIA interface and the application GUI. Each feature of the CAD Configurator needs some kind of interaction with the user. The interactions may occur in one of three different layers. One type of interaction is CAD Configurator requirements; this may be dependent data in text fields, object dependencies or prompt-boxes that may require confirmation. Another interaction is the selections required by CATIA to function correctly. The last type of external interaction is the dependency on CATIA running on the system; in our case, the CATIA software must be running in the background. The majority of these actions need to be automated to work with the test cases. Investigating each of the functions and how they operate in the CAD Configurator resulted in a limited scope where unknown dependencies could be found without knowledge of the code.


Figure 3.1: Model type constraints

3.1.1 Parts

The CAD Configurator uses a set of parts that can be instantiated. A part is a collection of information sent by the CAD Configurator that instructs CATIA to build a certain object according to the specification. This item needs to be assembled to a parent (the part from which the current item is derived, see figure 3.1), and sometimes a part will also require a reference point if the item needs to be placed in a section. The items directly reflect the type of products the user is designing. In our case we are using the collection of models used to design a railway wagon when verifying the underlying functionality. All of the parts have a set of constraints that need to be followed to be able to make changes. These constraints are dependent on the model type; a chart of them can be seen in figure 3.1. For example, a Car can be derived from the Start Node while a Floor cannot.

3.1.2 Features

Each of the features that will be tested is accessed through a GUI, and the majority of features have internal data requirements, such as a project being needed before some of the functions are available. The functions to be verified can be divided into the following five categories:

• Projects - All functions related to managing the project. This includes adding a new project, opening the last used project and navigating to the project file of choice.

• Instantiation - Functionality related to instantiation of parts.

• Groups - Functionality related to handling groups of parts.

• Parameters - Functionality related to alterations to specific parameters of parts.

• Change parts - Functionality related to swapping a part for another of the same type.

All functions these groups interact with are located in a compiled library file. Because this is the functionality the user has access to via the GUI, these are the groups of functionalities that need to be verified. This system is a suitable use case for grey-box testing because we have internal knowledge regarding how each of the features is called and what dependencies each function will need, but have no access to the library code.


3.2 Testing strategy

In this thesis we are testing a large set of inputs against a series of functions. All functions will be monitored by both the duration of a test execution and the number of tested subsets. By doing this, we believe a solid set of tests that achieves good test coverage of the library functions can be provided. All the features that need testing consist of a series of functions that need to be provided with configured data. These sets of data are selections normally made through user interaction. This might result in test cases growing to a large extent.

3.2.1 Execution time

Some functions may only require testing within the CAD Configurator, with little to no external communication with CATIA, whilst others have small sets of test cases where every case requires expensive actions to communicate with CATIA. There is therefore no correlation between the number of test cases and the execution time required by the tests. Because CATIA is very dependent on the hardware when instantiating objects, the system executing the tests is a major factor in how fast a test is processed. Therefore, we decided to use one system to log the test results.

3.3 Test Structure

To test the functionality, a method for deciding which test cases will be used is needed. We first investigated the alternative of using plain permutations; later on, pairwise testing was also investigated.

3.3.1 Permutations

Only using permutations was not possible due to the large number of inputs, which would be unmanageable. A worst-case scenario when using permutations at this scale would be if no constraint restrictions existed and every part were able to mount to every possible part; the number of input permutations would then grow in an unmanageable way. In this thesis there are at the moment 24 different components, which would give 24! sets of input permutations (equal to approximately 6.204 × 10^23 permutations). The reason that permutations can be used in this thesis is the constraint restrictions. The constraint restriction for this thesis is captured by the formula found in section 3.5.2.2. The sample size is in this thesis determined by the constraint restrictions, giving us a value of two. This means that the number of permutations still grows for every component added, but the growth is quite manageable and slow because of the sample size of two. Adding a few more constraint restrictions (increasing the sample size) could, however, quickly produce an unmanageable number of input permutations.

3.3.2 Pairwise Test Structure

If one were to only use permutations, the result would in our case be an enormous amount of unnecessary redundancy. If one were to combine permutations with pairwise testing, one could test only the constraint dependencies, and this would result in far fewer test cases than with permutations alone. Perrouin et al. presented the impact of using pairwise testing and how it can decrease the number of test cases, even with a large number of parts. The formulas below show the difference between the two approaches [15]. Two ways to generate test cases have thus been presented. Plain permutations test all possible combinations; this would be very time-consuming but would ensure full test coverage. Pairwise testing requires a small number of tests and still provides full test coverage, but only for the constraint dependencies. Using permutations combined with pairwise testing is the better way of testing constraint dependencies, while plain permutations would test everything, as if all parts should be able to mount to one another.


\[ P(n, r) = \frac{24!}{(24 - 23)!} = 6.204484017 \times 10^{23} \]

Figure 3.2: Amount of test cases using only permutations

\[ P(n, r) = \frac{24!}{(24 - 2)!} = 552 \]

Figure 3.3: Amount of test cases with a combination of permutations and pairwise testing

3.4 Box Testing Approach

In chapter 2, three box-testing approaches were presented: white-, black- and grey-box testing. Due to the fact that large sections of the codebase are hidden and only accessible as binaries, we have limited knowledge about the code and how it works; because of this, the white-box testing method is not applicable. Black-box testing does not require any knowledge about the codebase at all, only knowledge about how a function should behave: when a function is used, the input should produce a resulting output with a predefined behaviour. Because the codebase is accessible to us at feature level, we still have knowledge about some of the internal workings as well. Because of this, grey-box testing is a more fitting alternative than black-box, as grey-box testing is defined as a middle ground between black-box and white-box testing. There are of course several grey-box testing techniques and methods, all with different specialities meant for different scenarios. Regression testing is the technique whose speciality is comparing an old version of an application with the new, modified application. Regression testing is used for finding unintentional changes in the core code: changes in modules other than the one intentionally changed are located and brought to light.

3.5 Implementation

In this section the design and implementation of the automated testing tool will be presented together with the decisions that had to be made in order to fulfil the requirements from XperDi. In figure 3.5, we see a high-abstraction model of how the testing system is supposed to communicate with the different parts of the CAD Configurator. The interface part of the CAD Configurator is the window users interact with, shown in figure 3.4. Most of the CAD Configurator interacts with the local database to keep track of the structured data. The XCC library is a class library that contains functions that interact with the CATIA instance; this is where the CAD Configurator's constraints and calculations are located. The external CATIA instance is the required CATIA software.

3.5.1 Automated Testing Tool

The short code examples provided by XperDi, mentioned earlier in section 3.1, explained to us which functions were in need of testing. The functions used in the code examples could only be accessed through a framework. The provided examples were used to determine how we could work around each feature without knowing in detail how it operated within the framework.


Figure 3.4: Start Menu of the application

Figure 3.5: High level structure


Figure 3.6: Console Application Prototype Workflow

In order to verify that the library functions properly, it was necessary to develop and design a tool that integrates with each of the systems affected by the given compiled library. The testing tool verifies one feature at a time and logs errors and test results. How the testing process is logged will be explained in section 3.5.3. The testing tool is designed to run all the feature tests independently. This is done by providing an argument when starting the tool to select the tests that should execute. The -h argument displays the different start options that are available. Figure 3.7 shows the available options and some important information regarding the functions of the tool. Testing one feature at a time, independently of the others, slows the testing process down when executing the complete set of tests; however, if one needs to test only a subset of features, the developer can select a specific feature. By splitting the tests and removing any dependencies between them, one could distribute the tests among different computers and only test a part of the features to speed up the process. Because the testing tool should always be able to start executing all of the tests regardless of which test was run before, the tests need to be fully modular and must not require any predefined data that an earlier test might have provided, nor be contaminated by leftover data from a previous test. By designing the structure like this, if the system grows larger and the execution time of the tests becomes long, one could distribute the feature tests among different systems and decrease the time needed to get full test coverage. The testing tool also needs to withstand any type of error that a test may throw. There is a series of different types of errors that can arise during execution. When designing the testing tool it was important to withstand the following errors:

• Fatal errors the compiled library might throw

• Communication problems


Figure 3.7: Console with -h as input argument

A fatal error occurring during execution is a problem. A fatal error is a type of exception that indicates a failure so severe that the CAD Configurator is unable to continue the execution. Because execution is unable to continue, exception handling will not be able to prevent the testing tool from crashing. This means that structuring the tests as functions within one application is not an acceptable solution, since an imported library shares the same address space as the caller of the function. Because of this, if a fatal error occurs that contaminates the process address space, the tool will not be able to recover and continue with the next test. To avoid this problem we have made each test a standalone process that allocates its own address space, which can be corrupted without interrupting other upcoming tests. By using this design we ensure that each test can be automated and will continue regardless of what happened before. Because of this structure it is a rather tedious task to update tests, since the developer needs to build all of the testing processes that have been altered instead of just one. However, a modular system like this makes the testing tool more manageable when expanding the functionality. The only requirement for each test process is that the New Design function is called in the beginning of the test. The occurrence of fatal errors was handled using the Windows Error Reporting service: when a process experiences a fatal error or anything else that results in the process crashing, the Windows Error Reporting service is triggered and displays a debug dialogue window with information regarding the crash. This window is then handled by the supervising tool that instantiated the test. The supervising tool continuously polls the system to see whether a crash report has occurred, and if so exits the testing process with an interpretable error code. This avoids the problem of the supervising tool waiting for a return value after a fatal error crash report has occurred in the system. To achieve this, Spy++ was used to determine the window class of a Windows Error Reporting dialogue. By using the WinAPI user32.DLL we can communicate with this window and close it. This was done by triggering an action on the unique window handle that owns the error message: by sending the code "&H10" the receiving window is closed. When this dialogue is closed, the process returns four bytes of the corrupt stack as a return value. This return value has a total range of 2^32 = 4294967296 values. There is a small possibility that this value will be zero, which is otherwise used to determine that the test has been successful, and this would in that case produce a faulty result. After a discussion with XperDi it was decided that, because the probability of this occurring is so small, it is accepted as a solution for the testing tool and can be improved at a later stage. This system is displayed in figure 3.8 and has only been tested on Windows 10 systems. Whenever a new update is available, the full test suite is recommended to be executed to check that the new update did not cause an unintentional error within another method or function. The modified module (function/method) should be the only one that has been altered; if other modules are affected, this should be caught by the implemented regression testing.
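A minimal sketch of such a supervising console application is shown below: each feature test runs as a standalone process selected by a start argument, and the exit code is interpreted by the supervisor. The executable names, the usage text and the 30-minute limit are illustrative assumptions, not XperDi's actual tool.

```vbnet
Imports System.Collections.Generic
Imports System.Diagnostics
Imports System.Linq

Module TestSupervisor
    ' Hypothetical mapping from start argument to standalone test process.
    Private ReadOnly Tests As New Dictionary(Of String, String) From {
        {"newdesign", "NewDesignTest.exe"},
        {"instantiation", "InstantiationTest.exe"},
        {"deletepart", "DeletePartTest.exe"}
    }

    Sub Main(args As String())
        If args.Length = 0 OrElse args(0) = "-h" Then
            Console.WriteLine("Usage: TestingTool.exe [-h | all | " & String.Join(" | ", Tests.Keys) & "]")
            Return
        End If

        Dim selected = If(args(0) = "all", Tests.Keys.ToList(), New List(Of String) From {args(0)})
        For Each name In selected
            If Not Tests.ContainsKey(name) Then
                Console.WriteLine("Unknown test: " & name)
                Continue For
            End If
            Console.WriteLine(name & ": " & If(RunTest(Tests(name)), "PASSED", "FAILED"))
        Next
    End Sub

    ' Starts one test as its own process so a fatal error cannot take down the supervisor.
    Private Function RunTest(executable As String) As Boolean
        Using proc As Process = Process.Start(New ProcessStartInfo(executable) With {.UseShellExecute = False})
            ' A test whose state is unaltered for too long (e.g. waiting on CATIA) is
            ' terminated and marked as a failure; 30 minutes mirrors the report's choice.
            If Not proc.WaitForExit(CInt(TimeSpan.FromMinutes(30).TotalMilliseconds)) Then
                proc.Kill()
                Return False
            End If
            ' Exit code zero marks a passed test; a crashed process returns four bytes of
            ' the corrupted stack, which is almost never zero.
            Return proc.ExitCode = 0
        End Using
    End Function
End Module
```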


Figure 3.8: Console structure

The other error that might occur is if communication between the library and the CATIA instance has been lost or the two are out of sync with each other. An error of this type might arise when the library sends a query that does not get registered, or if the callback in the library does not register that CATIA is finished with the task and is kept in a waiting state. This is a problem because both of the applications are waiting for each other, similar to a deadlock. In order to solve this, the test tool needs to keep track of the execution state of each process it has started. By evaluating the state of each process, the tool can determine how long the execution state may remain unaltered without interpreting it as a failure. Because some of the tests communicate with CATIA and can take a while to execute, this timer needs to be set so that it does not trigger, and flag a faulty state, while the testing process is legitimately waiting for CATIA to finish. After evaluating the different tests, the timer has been set to 30 minutes, which can easily be changed within the system if needed. If the state of the testing process is unaltered during this time, the test will be terminated and marked as a failure.

3.5.2 Testing Features

This section will explain in detail how the tests were designed and implemented in the testing tool. By using a modular approach, a structure of tests that work independently of each other can be achieved. This is a necessity because of the importance of being able to run only specific tests.

3.5.2.1 Test New Design

To test New Design, a function has been developed to test the library feature that creates a new design. The test first checks that an empty CATIA instance is available and creates a new design. A function that communicates with CATIA checks all the items in CATIA and can, by doing this, see whether a design has been created with the correct structure.

3.5.2.2 Instantiation Test

The instantiation of a part usually consists of several stages. The user needs to select a part from an existing list. This part needs a parent that will be used as a mounting point. The library will then construct an object in CATIA with the appropriate parameters together with the required information about the part. By automating this process without a GUI we ensure that the functionality of the library operates correctly, but not the code that links the GUI

to the library. This will test both whether valid inputs have a chance of crashing the system and whether there are any logical errors that will cause unexpected behaviour, such as an item not being added. The items can be divided among 12 categories and consist of parts given by XperDi that need to be tested, each one having its own restrictions and dependencies. The detailed flowchart of the instantiation test uses three data structures; this is needed to cover all of the possible iterations that may occur. The test iterates over all 24 possible parents, and each parent is assembled with all other items. This implies that testing a category will need 24 × (number of items in the category) tests. This is necessary to execute for all categories because A mounted on B does not constitute the same action as B mounted on A. Each object will need a parent to mount to, and the dependencies are shown in figure 3.1. This test traverses each parent object. Because all tests are executed with a child and a parent, the size of each set is two. The number of parts that can be assembled is 24, which is therefore the number of sample points. The total set of permutations was earlier calculated with the formula presented in 3.2. This does not however provide the whole truth; in this test it is also necessary to test duplicates. A sample of [a, b, c] and a sample size of two gives the permutations [ab, ac, bc, ba, ca, cb]. It is notable that [aa, bb, cc] are not included. These are important samples that need to be checked because of how the objects relate to each other. To cover this functionality gap, 24 extra tests (for duplicates) are added. The starting mounting point is another exception. This is an item that is not a design object, which means that one cannot instantiate it as a child, but only use it as a parent. Because this is a special case and does not function like an ordinary object, it is not covered by the permutations. Therefore another 24 tests are needed to check the parent functionality. This results in a total of 600 tests.

552 + (24 × 2) = 600
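A minimal sketch of how these 600 test cases could be enumerated is shown below; the part names are placeholders, not the real item base.

```vbnet
Imports System.Collections.Generic
Imports System.Linq

Module InstantiationCases
    Sub Main()
        Dim parts = Enumerable.Range(1, 24).Select(Function(i) "PART" & i).ToList()
        Dim cases As New List(Of Tuple(Of String, String))

        ' Ordered pairs of distinct parts: P(24, 2) = 24 * 23 = 552 cases.
        For Each parent In parts
            For Each child In parts
                If parent <> child Then cases.Add(Tuple.Create(parent, child))
            Next
        Next

        ' Duplicates (an item mounted on an item of the same type): 24 extra cases.
        For Each part In parts
            cases.Add(Tuple.Create(part, part))
        Next

        ' The starting mounting point can only act as parent: 24 extra cases.
        For Each child In parts
            cases.Add(Tuple.Create("START_NODE", child))
        Next

        Console.WriteLine(cases.Count) ' 552 + 24 + 24 = 600
    End Sub
End Module
```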

The flowchart in figure 3.9 displays the testing approach we have used; it explains the test at a high level of abstraction. ParentType, ChildType and ChildItems are all containers that include information about the accessible groups and objects. The CAD Configurator function that adds an object to the solution has its own response message that reports the result of its operation. A test iteration tests all items in the item base and tries to mount them as a child to the current parent. Validation is later done to check that every valid item is added and every invalid item is rejected. If any item has problems at this stage, the test will exit and print the failed action to the log.

3.5.2.3 Delete Part Test

A commonly used feature is the function to remove items; this feature is accessed via the XCC library. The function removes an item in the current CATIA solution and updates all other relevant data in the data structures. Given the unique item name from CATIA, the library removes the selected item and all related items that might be connected as children. When removing an item, a prompt-box is displayed that asks the user to confirm the action. Because of the sequential nature of code execution, an interruption of any kind becomes a problem for maintaining complete automation without user interaction. Because the library prompts the user to acknowledge the deletion, full automation is no longer possible without handling the prompt. To solve this problem we need to access the controls located on the prompt-box. There are various ways this could be automated. It could be solved by running a background process that functions as an assistant, looking for prompt-boxes that might appear as children of this application and interacting with them. However, this would introduce another external dependency to wait for, and we decided to try to solve this within a single system; if possible we want to avoid having to create another process. Here, multiple threads can provide a simpler solution.


Figure 3.9: Flowchart of Instantiation Test

A thread running in the background executes the library function and is free to wait until the prompt-box is closed. The other thread will, when the prompt-box is displayed, access its controls and trigger a button press. All of this is done using thread-safe calls, making sure that no unnecessary threads remain active. To exercise this delete behaviour, the test covers two possible scenarios. The library function first instantiates all items and, when done, removes all items by deleting the item below the base parent of the structure. An example of this can be seen in figure 3.10. When this is executed, all items except for the base parent will be deleted. After this, a check is done where all objects in CATIA are counted and the database is checked to contain the same number of items. By doing this test we gain confidence that the deletion of an item with children will remove all the related children. The other scenario tests the deletion of a particular item. This is done by first instantiating another complete item population in CATIA. The test then iterates through all the items in order from the last instantiated to the first, performing a deletion that targets each item without any children associated with it.
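A minimal sketch of this threading pattern is shown below; the blocking DeletePart call, the prompt caption "Confirm" and the OK button are hypothetical placeholders for the CAD Configurator's actual dialog.

```vbnet
Imports System.Runtime.InteropServices
Imports System.Threading

Module PromptDismisser
    <DllImport("user32.dll", CharSet:=CharSet.Auto)>
    Private Function FindWindow(lpClassName As String, lpWindowName As String) As IntPtr
    End Function

    <DllImport("user32.dll", CharSet:=CharSet.Auto)>
    Private Function FindWindowEx(parent As IntPtr, childAfter As IntPtr,
                                  lpszClass As String, lpszWindow As String) As IntPtr
    End Function

    <DllImport("user32.dll")>
    Private Function SendMessage(hWnd As IntPtr, Msg As UInteger,
                                 wParam As IntPtr, lParam As IntPtr) As IntPtr
    End Function

    Private Const BM_CLICK As UInteger = &HF5

    ' Runs a blocking library call on a worker thread while the main thread
    ' confirms the prompt-box, so no user interaction is required.
    Sub DeleteWithConfirmation(deletePart As Action)
        Dim worker As New Thread(Sub() deletePart())
        worker.Start()

        Dim prompt As IntPtr = IntPtr.Zero
        While worker.IsAlive AndAlso prompt = IntPtr.Zero
            prompt = FindWindow(Nothing, "Confirm")   ' illustrative caption
            Thread.Sleep(200)
        End While

        If prompt <> IntPtr.Zero Then
            ' Locate the confirmation button and simulate a click on it.
            Dim okButton As IntPtr = FindWindowEx(prompt, IntPtr.Zero, "Button", "OK")
            If okButton <> IntPtr.Zero Then SendMessage(okButton, BM_CLICK, IntPtr.Zero, IntPtr.Zero)
        End If

        worker.Join()
    End Sub
End Module
```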

3.5.2.4 Change Part Test

The change part feature allows the user to alter an already instantiated part to another of the same type. When this feature is used, the existing part that the user wants to change is removed together with its associated child parts. When the user has selected another part of the same type, it is instantiated and the system rebuilds the previously removed structure of child parts with the newly selected part as parent instead. To test this feature, a function was written to exercise this behaviour by changing all of the items in a group (e.g. Cars, beginning with CAR1) to another item of the same type (e.g. CAR1 to CAR2 and CAR3).


Figure 3.10: Remove Item Example

By populating the instance with one item from each category we can fully test the functionality by changing all of the items in each group to another of the same type. When this is done, the function has tested that all items can be changed from and to all available components. This will therefore not test all permutations: for a, b, c the tested cases would be ab, bc, ca. This is because when an item is changed, the removed part's child structure is removed and rebuilt as children of the new item. Because of this, the time it takes to change a part will depend on the depth of the child structure relative to the parent.

3.5.2.5 Test Open and Save Design

This test exercises the functionality to save a project as well as open a previously saved project. The test first saves an active project from the external CATIA instance. The saved project is then opened from the location where it was supposed to be saved. This covers two of the provided functions: first the save design functionality, and shortly after the open design functionality. The saved design is compared with the opened one to validate that all components were included and that the saved design was not corrupted when saved. Misbehaviour is dealt with by documenting both progression and errors during execution.

3.5.2.6 Update Parameter Test

When updating a parameter value of a specific part, several steps need to be done in a specific order. First, the part that the user wants to alter needs to be selected in the CATIA instance. One part from each category in the item base will then load its parameters from the SQL knowledgebase into the GUI. All parameters can then be altered and updated from that view. Therefore, by comparing the value of a parameter in the CATIA instance with the respective one in the SQL knowledgebase, a way of testing the functionality of this feature could be devised. As every category consists of a set of parameters, we first wanted to know how many parameters would exist in a project solution when having only one part of each category in the item base. By inserting the 11 available categories, 62 parameters were encountered. Although the parameters had no predefined range of valid values, they still had underlying constraints. To evaluate whether values in range were set correctly, predefined parameters were used, as the only way to evaluate whether a parameter was in range was to check whether the item was still present in the CATIA instance. A flowchart for the updating of parameters in a project can be seen in figure 3.11.

3.5.2.7 Group Functionality Test

A rather large set of features is continuously used when working with groups in the CAD Configurator. A brief flowchart can be seen in figure 3.12. The 11 different categories are instantiated in a single group that will be referred to as the starting group. The addPart feature is used and tested during this first step. The


Figure 3.11: Flowchart Update Parameter Test

SQL knowledgebase will be accessed to ensure that the group was created, and the group will also be instantiated to check that the content of the added group is correct and not corrupt. The starting group will be copied with the function copyGroup and, to also test the importGroup function, the copied group will be imported by that library function. There should now be three existing groups in the SQL knowledgebase, and they will be checked accordingly. The group features changePart, instantiateGroup and removePart will be tested by manipulating the created starting group. We figured that changing parts from the lowest level to the highest would be the least time-consuming way to test these features. The lowest level in the starting group will be selected and changed if there are any parts of the same category that have not yet been tested. If changed, the group will be instantiated in the CATIA instance. When the last part of a category is instantiated in the CATIA instance, that category will be removed and the next category will be selected, continuing with the same procedure as the previous one. This procedure and test will end when the last category (in our case the TC-CAR) is removed. The copied and imported group will be deleted with the deleteGroup function, and one last check will be made in the SQL knowledgebase to verify that all groups were removed correctly. The starting group does not need to be deleted, as this is done automatically when the last category is removed from the group, since it is then both the highest and lowest level. All features will produce corresponding errors if something were to misbehave during execution.


Figure 3.12: Flowchart Group Functionality Test

3.5.2.8 Change Parent Test

The CAD Configurator has a feature to alter the parent associated with an object. When an object changes parent, the item and all of its children are removed and instantiated as children of the new parent. However, there is a large number of constraints that can be tested. The constraints include not only the parent we change to but also the child relations this parent already has associated with it. To test this in a good way, we need to limit the test space considerably without risking missing errors that might occur. Without testing each of the scenarios we cannot guarantee that this function is without errors; however, a test that exercises all of the constraints will give us an indication of whether the logic operates correctly. The test will first change all items that have the possibility to own a parent to each of the other available parents. This tests all of the detach and attach constraints that can occur when a parent is changed. However, to test whether a parent of another item can change parents without errors, a test that simulates this is also needed; the other part of this test simulates three items that are independent of each other. The number of tests needed to validate the first part grows linearly, because it is limited to the sum of children of the same type multiplied by

each possible parent that can be assigned. This test could be further improved by testing all possible items as parents, even the ones that are not directly associated.

3.5.3 Logging with Log4Net

We have decided to use Log4Net to log the actions of the testing tool; this helps in debugging and in finding the state of the testing tool when an error occurs. The Log4Net framework is good both for logging the errors that occur and for logging general information regarding the execution, which helps in understanding what state the testing tool was in when an error occurred. When an error is encountered, a detailed timestamp, the calling function and a custom message are printed to a local log file. After installing the Log4Net package in the solution, a small set of configurations is needed in the config file to get Log4Net up and running. The config file consists of three appenders: two FileAppenders and one BufferingForwardingAppender. One FileAppender logs INFO entries when a test has passed. The error handling is done by the other FileAppender and the BufferingForwardingAppender. The second FileAppender only accepts entries of type DEBUG or ERROR, and the BufferingForwardingAppender holds a number of previous DEBUG and ERROR entries. The reason for this is to not log unimportant DEBUG messages, but instead be able to locate the reason that caused an unexpected ERROR to occur. The BufferingForwardingAppender will, when an ERROR occurs, print the saved buffer to the local log file. The buffer size is set to 10 entries, but this can easily be changed in the App.Config file of the testing tool. A possible configuration is sketched below.
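The following App.Config fragment is a hedged illustration of the appender arrangement described above; the file names, filter ranges and exact wiring are assumptions, not the tool's actual configuration.

```xml
<log4net>
  <!-- Logs INFO entries (passed tests) to the main log file. -->
  <appender name="PassedTests" type="log4net.Appender.FileAppender">
    <file value="logfile.txt" />
    <appendToFile value="true" />
    <filter type="log4net.Filter.LevelRangeFilter">
      <levelMin value="INFO" />
      <levelMax value="INFO" />
    </filter>
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%logger] %level - %message%newline" />
    </layout>
  </appender>

  <!-- Receives DEBUG/ERROR entries flushed by the buffering appender. -->
  <appender name="ErrorFile" type="log4net.Appender.FileAppender">
    <file value="errorlog.txt" />
    <appendToFile value="true" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%logger] %level - %message%newline" />
    </layout>
  </appender>

  <!-- Keeps the last 10 entries and writes them out only when an ERROR occurs. -->
  <appender name="ErrorBuffer" type="log4net.Appender.BufferingForwardingAppender">
    <bufferSize value="10" />
    <lossy value="true" />
    <evaluator type="log4net.Core.LevelEvaluator">
      <threshold value="ERROR" />
    </evaluator>
    <appender-ref ref="ErrorFile" />
  </appender>

  <root>
    <level value="DEBUG" />
    <appender-ref ref="PassedTests" />
    <appender-ref ref="ErrorBuffer" />
  </root>
</log4net>
```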

Figure 3.13: Example output captured from logfile.txt
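To illustrate how the testing tool can interact with this configuration, the sketch below shows typical Log4Net calls from C# code. The logger class, the method names on our side and the message texts are illustrative; the appender and filter setup itself is assumed to live in App.Config as described above.

```csharp
using System;
using log4net;
using log4net.Config;

// Sketch of Log4Net usage in the testing tool; the wrapper class is illustrative.
public static class TestLogger
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(TestLogger));

    public static void Initialize()
    {
        // Reads the appender and filter configuration from App.config.
        XmlConfigurator.Configure();
    }

    public static void TestPassed(string testName)
    {
        Log.Info("Test passed: " + testName);        // routed to the INFO FileAppender
    }

    public static void Step(string message)
    {
        Log.Debug(message);                          // kept in the forwarding buffer
    }

    public static void TestFailed(string testName, Exception ex)
    {
        // An ERROR flushes the buffered DEBUG entries, so the log shows what led up to it.
        Log.Error("Test failed: " + testName, ex);
    }
}
```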

4 Results

This chapter presents the results obtained during execution of the testing tool developed for testing the functionality of the CAD Configurator. The pre-studies and the implementation were carried out in parallel, and the information from the pre-studies was evaluated together with the CAD Configurator so that the testing tool could be implemented based on it. The chapter presents the resulting software that was developed in order to validate each of the functions that the CAD Configurator accesses.

4.1 Testing Tool Implementation and Tests

A testing tool was developed, and our ambition to test certain functionality was in the end successful. During execution, the system keeps track of the execution and progress of the tests. The tool has a simple interface that reads arguments provided at start-up to decide which test(s) to execute. When the testing tool executes the test suite, no user interaction is needed. Each of the tests executed by the tool is independent and can therefore be executed in any order.
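As an illustration of how such start-up arguments could select tests, the sketch below maps argument strings to test delegates. The test names and placeholder methods are hypothetical; only the overall dispatch idea reflects the tool.

```csharp
using System;
using System.Collections.Generic;

// Sketch of start-up argument handling: each known test name maps to a delegate.
public static class Program
{
    private static readonly Dictionary<string, Action> Tests =
        new Dictionary<string, Action>(StringComparer.OrdinalIgnoreCase)
        {
            ["group"]        = RunGroupFunctionalityTest,
            ["changeparent"] = RunChangeParentTest,
            ["parameter"]    = RunParameterTest,
        };

    public static int Main(string[] args)
    {
        // No arguments: run the whole suite; otherwise run only the named tests.
        IEnumerable<string> selected = args.Length == 0
            ? (IEnumerable<string>)Tests.Keys
            : args;

        foreach (string name in selected)
        {
            if (!Tests.TryGetValue(name, out Action test))
            {
                Console.Error.WriteLine("Unknown test: " + name);
                return 1;
            }
            test();   // tests are independent, so the order does not matter
        }
        return 0;
    }

    // Placeholders for the actual tests implemented in the tool.
    private static void RunGroupFunctionalityTest() { }
    private static void RunChangeParentTest() { }
    private static void RunParameterTest() { }
}
```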

4.1.1 Tests

Each of the requested tests was implemented in the testing tool. The tests do not cover every possible outcome of every feature, but they cover most of the use cases. This is because some of the knowledgebases are far too large to make testing all combinations feasible. This was solved by categorising items into groups and issuing tests that only validate each of the dependencies that items can contain.

4.1.2 Testing Tool

As mentioned earlier in this report, XperDi provided a set of test cases that were used to exercise functions within the CAD Configurator. The idea was to continue the work XperDi had started in a unit testing framework. Later in the process, some test cases produced fatal errors that made it impossible to continue the work in that unit testing framework. Because of this we had to implement a system that supported these interruptions. By combining an existing framework with our own software design (a testing tool with modular tests) this problem could be avoided. A way to log behaviour was implemented using Log4Net in order to provide developers with a consistent way to retrieve execution errors, test results and general information about the execution of the system. By using Log4Net we could provide an easy-to-use and easy-to-extend system that supports both writing to text files and forwarding the data to databases for larger quantities of data, which can be hard to navigate in a simple text file. Large parts of the system that the testing tool verifies contain nested dialogs, which cause the CAD Configurator to halt and wait for input before certain actions are performed; these may be actions that have a large impact on the design layout. Because the CAD Configurator had to execute without user interaction, a multi-threading solution was applied in which a dedicated thread waits for these specific halts and "acts" as the user interaction.
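A minimal sketch of this dialog-watcher idea is shown below. It assumes the prompts can be located by window title via the Win32 API; the title used by the caller is hypothetical, and posting WM_CLOSE is just one simple way to dismiss a prompt, whereas the actual tool may interact with the dialog differently.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Threading;

// Sketch of a background thread that dismisses blocking prompt windows.
public sealed class DialogWatcher
{
    [DllImport("user32.dll", SetLastError = true)]
    private static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    [DllImport("user32.dll")]
    private static extern bool PostMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

    private const uint WM_CLOSE = 0x0010;
    private volatile bool _running = true;

    public void Start(string dialogTitle)
    {
        var thread = new Thread(() =>
        {
            while (_running)
            {
                IntPtr handle = FindWindow(null, dialogTitle);
                if (handle != IntPtr.Zero)
                {
                    PostMessage(handle, WM_CLOSE, IntPtr.Zero, IntPtr.Zero);  // "act" as the user
                }
                Thread.Sleep(500);   // poll twice per second
            }
        })
        { IsBackground = true };
        thread.Start();
    }

    public void Stop() => _running = false;
}
```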

5 Discussion

The results and method of this thesis are discussed and evaluated in this chapter, together with other potential solutions to some of the issues we encountered. Further improvements that could be of interest are also presented.

5.1 Results

This project has focused on test automation, development of modular tests and how a test system can be designed to fulfil a set of requirements. The main task was to provide a systematic way of testing one feature at a time automatically. As XperDi was testing their CAD Configurator manually after each alteration of their code, no investigation of automated testing had been made. Early in the investigation of whether the testing process could be automated, obstacles were found in the compiled library that prevented full automation. The majority of these obstacles could, however, be solved by an update to the library. For example, many features required user interaction to continue their execution and would otherwise freeze the testing tool. These functions had to be altered so that they took a reference point from the CATIA-instance as a parameter instead of relying on user interaction. As expected, all features that XperDi were interested in could be tested with the developed testing tool; however, the testing process took longer than we expected. We observed that the testing process was very dependent on the computer (hardware) setup used. We also believed that we would be able to use an already existing open source framework for the testing process. However, because the testing tool needed to withstand errors (fatal errors and waiting conditions) that the evaluated framework did not handle on its own, we had to design the testing tool to provide handling for such errors.

5.2 Method

In the beginning of the project we invested time in literature studies on software testing. The field of testing and software verification is large, and it was therefore important to focus on the theories applicable to our case. This became the first problem we faced: because of miscommunication we chose a certain approach and invested time in understanding it, which later turned out not to be applicable to the given task. Some project time was therefore spent on areas that were not applicable to the testing tool. However, once the task and the scope of the testing tool had been defined, this was no longer a problem, and a grey-box testing approach could be chosen after evaluating other testing methods. It is worth mentioning that even with a clear scope from the beginning of the project, the result would have been the same. To solve the logging issue, we investigated whether an existing open source solution was available or whether we had to create our own way of writing logs. The choice between our own log system and the freely available Log4Net was not hard. Log4Net provided more functionality and was easy to use, especially the filter functionality, which could document the flow of the testing tool. With the help of filters, lower layers of documentation can be viewed when needed; see Section 3.5.3 for more information. Log4Net also proved to be usable in database environments and with continuous integration, which could be of interest to XperDi in the future. By evaluating the different systems the CAD Configurator depends on, we could limit where potential errors could arise. These observations revealed a series of potential structural issues with automation that we were not able to solve with the framework of choice. The same result might have been possible to achieve if the shared virtual memory in Visual Studio could be split; however, we decided to construct our own modular test system to avoid this.

5.3 Maintenance and future improvements

Because the testing tool verifies the functionality of the CAD Configurator, each new function or updated dependency that is added needs to be covered by the testing tool. The tool can easily be extended to support new items or types; these are currently stored in databases without dependencies and complemented with information in the tests. An improvement would be to design a database structure that contains the dependencies between the items. This would make the tool more modular and easier to expand in the future; if we were to do this thesis a second time, we would construct such a system from the beginning. The parameter testing currently only tests in-bound values. This is a problem, as no upper bound exists for parameter values. If boundaries for each item could be given in a data structure, out-of-bounds testing would also be possible. A problem that may occur in the future is related to how the 3D-modeling software handles parts. To fully test some features, a lot of memory is needed by CATIA. This is not a problem at the moment, but for larger sets of 3D-models or an increasing number of constraints it will eventually cause issues with memory allocation. As this thesis was meant to produce a prototype tool for XperDi, there are improvements that could further extend the tool, such as making it more modular and production viable. One major improvement that we believe would be a good start is integration into a Continuous Integration environment. The majority of the work for this improvement would be to integrate and configure the environment so that it correctly sets up the 3D-modeling software together with the other system dependencies. As the tool is modular, a testing framework could handle the execution of the tests, which is an approach most CI systems support. Another improvement, instead of using command line arguments, would be to specify which test should be executed together with the configuration the test should have. This could be provided with the help of an XML file that specifies these combinations, as sketched at the end of this section, and would further increase the usefulness of the system during development. Another minor improvement would be to alter the code in the compiled library so that it does not produce prompt boxes during testing. This would remove the need for multi-threading in this part of the tool and would also speed up testing. It is a minor improvement, as it does not alter the functionality of the result. As errors and faults are detected by the end-user, it would also be an improvement to create a manual that end-users can read, so that they know what to expect from the testing tool. This is not needed if the errors detected by the test suite are dealt with by XperDi before the end-user can access the updated version.
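As an illustration of the proposed XML-driven test selection, the sketch below parses a small, purely hypothetical test plan; no such file format exists in the tool today, and the element and attribute names are only examples.

```csharp
using System;
using System.Xml.Linq;

// Sketch of reading a hypothetical XML test plan that selects tests and configurations.
public static class TestPlanReader
{
    public static void Run()
    {
        // In practice this would be read from a file the developer edits, e.g. a test plan XML file.
        var plan = XDocument.Parse(@"
            <testplan>
              <test name='GroupFunctionality' knowledgebase='DemoCar' />
              <test name='ChangeParent'       knowledgebase='DemoCar' />
            </testplan>");

        foreach (var test in plan.Root.Elements("test"))
        {
            string name = (string)test.Attribute("name");
            string kb   = (string)test.Attribute("knowledgebase");
            Console.WriteLine("Would run " + name + " against knowledgebase " + kb);
        }
    }
}
```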

6 Conclusion

The purpose of this thesis was to develop a system that automatically tests a set of compiled library functions of the CAD Configurator. The system handles all kinds of errors that the library might throw. By conducting a study on software testing, a suitable grey-box testing method could be found and adapted to the developed testing tool. Because we were not able to find a suitable testing framework that could catch fatal exceptions, we had to design the testing tool to support this ourselves. Regression testing was the method used for testing features continuously, but it did not provide full test coverage of all the features in the CAD Configurator. If full coverage is a goal in the future, complete testing of all the functions the library provides is needed. To accomplish this, a system that verifies the GUI interaction with the library will be needed, as we observed during the development of the testing tool.

6.1 Future Work

The testing tool developed in this thesis is a starting point towards a fully automated testing process, but work remains. The testing tool should be integrated into a continuous integration environment. This would help XperDi towards a faster development process before updated code is deployed. This might not be an important aspect for a small-scale company such as XperDi is today, but it becomes interesting when both the codebase and the number of employees grow. If more time had been given to the thesis, a more modular approach would have been beneficial in the long run. By altering the structure of the database, many of the dependencies could be modularised and would therefore be easier both to maintain and to expand with new parts and functionality. This could be done by moving the constraints of the tests from the testing tool to the database level, which would make it possible to access them from the test environment. Another approach we have discussed is the possibility to run the system with a data sheet that describes which parts need to be tested, thereby constraining the item base. This could be a good solution to the increasing complexity of tests such as instantiation and parameter testing, which are costly tests that might otherwise exercise items that have already been verified to work with a certain library version.


Abbreviations

CATIA Computer Aided Three-dimensional Interactive Application

COM Component Object Model

DLL Dynamic-Link Library

GUI Graphical User Interface

SPL Software Product Line

SUT System Under Test

XCC XperDi CAD Configurator
