MASTER’S THESIS | LUND UNIVERSITY 2013

Evaluating Open-Source and Free Software Tools for Test Management

Albin Rosberg

Department of Computer Science Faculty of Engineering LTH

ISSN 1650-2884 LU-CS-EX 2013-37

Evaluating Open-Source and Free Software Tools for Test Management

Albin Rosberg [email protected]

Sunday 29th September, 2013

Master's thesis work carried out at System Verification Sweden AB for Lund University, Computer Science.

Supervisor: Magnus C. Ohlsson, [email protected] Examiner: Per Runeson, [email protected]

Abstract

There is currently a very large number of Test Management tools available, many of which are free to use. The aim of these tools is to support decision making and management for test management. To find which tool is the best, they need to be evaluated and compared to each other. There is also the question of whether free software or open source tools are as good as, or better than, proprietary alternatives. The aim of the master's thesis project is to select tools and compare free tools to a baseline tool using evaluation criteria and documentation of scenarios defined by the author. The combination of these was the basis for a case study conducted on the matter, followed up by hosting a demonstration event for Test Managers, showcasing a selection of Test Management tools. The participants in the demo event clearly rated Microsoft Test Manager (MTM) as the best option for a Test Management tool - an assessment which was supported by the case study. The tool does not integrate well with non-Microsoft software, which can be a problem for organisations which do not revolve around Microsoft products. This means such organisations might have a harder time integrating MTM and should therefore look for an open source tool such as TestLink, which offers far more options for the additional software required.

Keywords: Test Management, Tool Evaluation

Acknowledgements

I would like to thank the consultants at System Verification for lending me their time and knowledge and giving me a taste of how testing is run in a modern organisation; special thanks to Magnus for creating this master's thesis proposal and making it happen. I would also like to thank Per and Magnus for the feedback on my work - and my family for all the support.

Contents

1 Introduction
  1.1 Purpose
  1.2 Outline

2 Background
  2.1 Test Management
  2.2 Software Licence Models
  2.3 Related work

3 Method
  3.1 Questionnaire
  3.2 Case Study Protocol
    3.2.1 Context
    3.2.2 Hypothesis
    3.2.3 Scenarios
  3.3 Demo event

4 Tool Selection
  4.1 Candidate Tool Selection
  4.2 Microsoft Test Manager
  4.3 XStudio
  4.4 TestLink
  4.5 Testopia

5 Results
  5.1 Scenario Results
  5.2 Case Study
  5.3 Demo event results
  5.4 Validity

6 Discussion
  6.1 Participation
  6.2 Maintainability
  6.3 Installability
  6.4 Functionality versus interoperability
  6.5 Usability

7 Conclusions

Appendix A Questionnaire

Appendix B Test management tool setup

Appendix C Demo scenarios

Chapter 1 Introduction

Test management is the process of managing and organising tests within an organisation, used by members of the test organisation and supervised by one or more test managers. The purpose of a test management organisation is, among other things, to plan testing efforts, provide product and process quality information and maintain a repository of test-related information [5]. The larger an organisation, the more coordination and management is required, which means that at some point the workload becomes too great to handle manually. This is where test management tools come in, to help organise testing and the issues related to it.

1.1 Purpose

The purpose of the thesis project is to compare different Test Management tools in order to get an overview and hopefully give an answer as to which is superior. Test Management tools are multi-function tools whose purpose is to give an overview of the test situation of a project. The functionality of a test management tool can vary from simple bug handling to include configuration management, scheduling and requirements tracking. In order to determine which tool is superior, a set of evaluation criteria is needed against which the different tools are tested. There are a lot of different tools for Test Management and even though a majority of them are commercial tools, plenty are open source and free software alternatives. This report will focus on the free-to-use tools and compare these to a baseline commercial tool. In order to get an idea of which tools are used and what functionality is sought-after in a Test Management tool, a case study is performed. The participants in the case study use tools in their day-to-day work and have experience in one or more Test Management tools. The objective is to study the Test Management tools, using test managers to collect the data needed. Furthermore, the participants were invited to a demonstration of the chosen tools to give feedback on the functionalities and usability of the tools. Not only will functional aspects of a tool be regarded, but also quality aspects like usability, maintainability and installability. Examples of usability differences in tools are how the data extracted from a tool is presented to a user and which information is highlighted in different ways. Being able to display the proper information is, for a user, a valuable tool in itself [9] and can aid in decision making, e.g. for the release of software in a project. The non-functional aspects of Test Management tools are regarded to give a broader view and not only focus on functionality, but also on other aspects which may be as important when determining which tool to use, e.g. how easy it is to install and maintain the tool.

1.2 Outline

Chapter 2 gives a general background of Test Management as well as the different types of Software Licence Models and their significance. Chapter 3 describes the research methods used for the thesis project, including the outline of the interviews, case study, scenarios and demonstration event. Chapter 4 covers how the tools were selected as well as further descriptions of the selected tools. In chapter 5 the results from the different elements of the thesis project are presented alongside threats and limitations of the case study. Chapter 6 draws conclusions from observations as well as results from the case study. Finally, in chapter 7 a short summary of the results is presented.

Chapter 2 Background

The master's thesis project is divided into two parts: 1) establishing and executing an evaluation framework, and 2) presenting the results, analysing pros and cons of the evaluated tools. The main focus of the evaluation framework is the functional aspects, though some aspects of ISO 9126 [28] are also considered. The aspects usability, maintainability and portability (mainly installability within portability) will be considered in order to make sure a tool does not require an excessive amount of work for e.g. installation and maintenance.

2.1 Test Management

The term Test Management is vague and has no clear definition. The International Software Testing Qualifications Board (ISTQB) defines the test manager's role as performing risk management . . . reviews, assessments, quality gate evaluation, quality assurance and audits [16]. The work description of a test manager may vary depending on which test methodology is followed; however, the hierarchical status of a test manager in an organisation is very much the same. In order to structure the work of test managers within an organisation, different methodologies and models may be applied to structure the management process [27]. The model chosen will have an impact on the testing resources, since the test manager manages the testing resources. Several tools are available to assist the test manager in the tasks linked to Test Management, such as planning and analysing [26]. Different tool developers have different ideas of what is needed for Test Management and thus the functionality available in different tools varies greatly. The tools compared in the study are the tools most mentioned by the interviewees, or tools with similar functionality to the ones that emerged from the interviews. A test manager is generally the person within a test organisation responsible

for staffing, connections with the management of other departments and being the central person around all testing and quality issues [5]. Typical work tasks as stated by Burnstein may include test policy making, . . . test planning, test documentation, controlling and monitoring of tests, . . . reviewing test work.

2.2 Software Licence Models

There is, at the time of writing, a vast number of tools designed to be used for test management purposes, and they can generally be divided into two categories:

• Tools with licenses which cost money to use.

• Tools with licenses which do not.

The categorisation is very general, with the first category including licenses for which any user must pay a fee to use the software, either periodically or as a one-time fee. The proprietary license has no fixed set of rules and is rather treated as a single group due to the fact that it is not free to use. This Master's Thesis focuses primarily on the second category, which can be divided into two sub-categories: open source and freeware. Open source is a software intellectual property rights model whose rule set was originally defined by the Open Source Initiative (OSI). In Open Source software the license is inherited, meaning any further software developed from an Open Source base will also be Open Source [24]. Among other rules, the source code of open source software is required not to restrict any other software or technology. The rule set also dictates that all source code must be available and free to use by anyone, whether for non-profit or commercial interests. These rules are defined by the Open Source Definition (OSD) [14]; however, there are many different open source licences with their own adaptations of the rule set, where some are more rigorous than others and are intended for different organisations and purposes [15]. Freeware is a definition widely used for software which does not cost anything to obtain, although its developer still owns the copyright and may control the distribution in the future. Unlike Open Source, there is no set of rules which developers need to follow in order to call their software Freeware, and there is also no need for the developer to attach any source code [21]. The lack of source code also means that modifications of the product are restricted. There are some differences between Free Software and Freeware which make it important to distinguish these terms. The main difference is that Free Software means available source code [13], although it still differs from Open Source in, among other things, not inheriting the licence.

2.3 Related work

Related work is focused primarily on evaluations and comparisons of Test Management Tools. Most comparisons use a certain set of prerequisites and can only be applied to specific scenarios [17]. Moreover, a lot of the related work consists of white papers from companies, done in order to highlight the benefits and functionality

of a company's own products rather than giving an objective view and comparison. Several papers have also noted the cons of a few tools and, rather than investigating further, developed their own tool. XQual [8] has its own Test Management Tool and has put together a table of different tools in an attempt to show which functionalities are found in which tools. There are a lot of commercial and non-commercial tools in the table; however, the company itself is a commercial company and thus one is obliged to consider that the table is a means of promoting their own tool. Safana and Ibrahim [26] offer a short description of strengths and weaknesses of several commercial tools and finally focus on the use of the SpiraTeam test management tool in their paper. Their work is relevant not only for learning the SpiraTeam tool, but also for showing the importance of a good test management tool and what to consider when choosing one. One feature included in several tools is bug reporting, being able to submit reports when a feature is not working as intended. Which features are needed in a bug reporting tool, and thus ought to be included in a test management tool, is investigated in a paper by Zimmermann et al. [29], interviewing developers and reporters. Kitchenham et al. [17] provide useful information on what needs to be considered when conducting a case study for the purpose of evaluating tools and techniques, following up with examples of performed case studies including improvement suggestions.


Chapter 3 Method

The overall goal of the master's thesis project is to determine which test management tool among a set is preferred. A case study, using the different test management tools as cases, is the preferred research method for achieving this goal [17, 25]. In order to collect the data needed for the case study, interviews with test managers are the preferred path. To test each tool, data extracted from the course Software Development for Large Systems (ETSN05) by Rosberg et al. [23, 22] and Andersson et al. [1] is used. Each tool will be tested by the author using this data. The four interviewees are consultants working at System Verification Sweden AB with between three and four years of experience in test management and between six and eight years of experience working with testing. The interviews are conducted using a questionnaire-type form as a basis and are asked and answered through e-mail correspondence. This chapter describes how the interviews were conducted, with an outline of the questionnaire sent to the interviewees in section 3.1. In section 3.2 the case study protocol is presented and finally in section 3.3 the demonstration event is outlined.

3.1 Questionnaire

The questionnaire is the basis for a semi-structured interview, combining ratings of tool qualities by the interviewees with the gathering of information about experience and attributes not thought of beforehand [12]. This means the majority of the answers are in the form of comments rather than binary values, in order to get the right information. The interviews were conducted in Swedish, but translated to English and added to Appendix A. Any quotes from the interviews are translated by the author. When

the answers were not clear, the interviewees were asked to clarify what they meant. The interviews were conducted in concert with System Verification Sweden AB and all participation was voluntary. The questionnaire outline is as follows:

Test experience How long have you been working with testing?

Test Management experience How long have you been working with Test Management? These questions are meant to give a short background of the interviewee in order to observe any connection between experience and how the questions are answered.

Functionalities Which functionalities do you deem important in a Test Management Tool? A set of eleven attributes, based on common attributes found in test management tools, was listed and the interviewees were asked to prioritise among them. A twelfth slot was left open in case any attribute had been overlooked by the interviewer.

Extra functionalities Are there any functionalities which are less important, but nice to have? A free-text field intended to catch any attributes which may improve the perceived experience of working with the tool, though not imperative for how the tool functions.

Functionality vs. usability How important is it to have a tool with a lot of functionality, compared to one which is easy to use? The interviewees were asked to rate the statement from Very important to Unimportant on a five-level Likert item [12], in order to see how non-functional attributes compare to functional ones.

Functionality vs. integration How important is it to have a tool with a lot of functionality, compared to one which easily integrates with other tools? The interviewees were asked to rate the statement in order to see what was deemed more important: an all-in-one solution or the idea of adding functionality through other tools.

Tools Suggestions of tools which you have used? The final question was aimed at giving an idea of which tools, among the plethora available, should be looked at for comparison. The second objective of this question was to see what the tools were used for; by knowing which tools are used, their functionalities may be mapped to see how well they compare to the answers to the earlier questions.

3.2 Case Study Protocol

The content of the case study protocol is based on the guidelines of Kitchenham et al. [17].


3.2.1 Context

The overall goal of the Case Study is to determine which tool, if any, is superior for Test Management purposes. A selection of non-commercial tools will be evaluated and compared to a baseline tool which is already in use at System Verification. The tools will be installed and used on the same data set for comparison.

3.2.2 Hypothesis

The case study will try to determine which tool in a set is superior, meaning the hypothesis to be disproved is that all tools are identical.

Evaluation criteria

The criteria have been extracted from the interviews and are shown in tables 3.1 and 3.2. Several attributes were deemed equally important when the answers were summarised and are therefore given the same priority. Table 3.2 shows that having a lot of features is preferred over a tool which is easy to use. However, in table 3.1 only a few features are rated more important than usability, indicating that it is an important quality in a tool. Several of the features in table 3.1 are functions in a tool and either exist or not, which is easy to verify and compare. Other non-functional requirements, including those of maintainability and usability, are much harder to measure accurately [20].

Table 3.1: Criteria ordered descending by priority.

Attribute                       Priority
Test case management            1
Test reporting                  1
Issue handling                  2
Mapping requirements to tests   2
Usability                       2
Creating/handling test plans    2
Version control                 3
Scalability                     3
Backlogging/follow-through      3
Scheduling                      3
Requirements management         4

Comparing the two tables of priorities provides a measuring stick for tool selection, though with some conflicting responses regarding the non-functional requirements.
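The priority tiers in table 3.1 come from summarising the interviewees' answers: attributes whose summed ranks coincide end up in the same tier. A minimal sketch of that aggregation, using hypothetical per-interviewee rankings (the real answers are in Appendix A and are not reproduced here), could look like this:

```python
from collections import defaultdict

# Hypothetical rankings (1 = most important), one list entry per interviewee.
rankings = {
    "Test case management": [1, 1, 1, 1],
    "Test reporting":       [1, 2, 1, 1],
    "Issue handling":       [2, 3, 2, 2],
    "Scheduling":           [4, 3, 4, 4],
}

# Sum the ranks per attribute; attributes with equal sums share a priority.
totals = {attr: sum(r) for attr, r in rankings.items()}
tiers = defaultdict(list)
for attr, total in totals.items():
    tiers[total].append(attr)

# Dense ranking: priorities 1, 2, 3, ... with ties collapsed into one tier.
priority = {}
for rank, total in enumerate(sorted(tiers), start=1):
    for attr in tiers[total]:
        priority[attr] = rank

print(priority)
```

With tied sums, two attributes would receive the same priority number, which is how several rows in table 3.1 share a priority.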

3.2.3 Scenarios

In order to make sure all Test Management Tools are tested on the same basis, and to do as extensive testing as possible, several scenarios are run by the author


Table 3.2: Priority between features, usability and interoperability.

Attribute          Priority
Functionality      1
Interoperability   1
Usability          2

using each tool sequentially. These scenarios attempt to show how basic usage of the test management tool is conducted and to check where the differences between the tools lie. Functional as well as non-functional attributes are checked and the results are noted with the aid of these scenarios. To further verify the findings after analysing the results, a demonstration event is held with the interviewees participating, running a second set of scenarios with all tools in parallel.

(a) Installing the tool.

i Installing required software.
ii Setting up and configuring required software.

(b) Importing a set of data.

(c) Creating a new set of data.

(d) Setting up a project.

(e) Setting up a second project.

(f) Adding multiple users.

(g) Creating test cases.

(h) Creating an error report.

(i) Managing error report.

(j) Export project.

(k) Multiple users accessing the same project simultaneously.

(l) Multiple users accessing the same test simultaneously.

The scenarios are chosen to show how the tools actually work in a user environment and to find out which problems a user might encounter during the initial phases of tool usage. For installation, all available documentation is used and administration rights are assumed for at least one user.


3.3 Demo event

In order to receive feedback, the interviewees in the case study were invited to participate in a workshop-like demonstration where the four chosen tools were used in predefined scenarios. Each participant received one copy of the scenario paper, see Appendix C, to fill out during the event. The participants were then urged to rank (1-4, 1 being the best) the tools' performance in each scenario. In case of a tie between several tools, or if the participants wanted to further comment on any tool in the scenario, the comment line was used. After the predefined scenarios were run, the participants were allowed to create their own scenarios in order to show how the functions they use work in the different tools.


Chapter 4 Tool selection

In order to proceed with the case study, tools had to be selected. Since there is such a vast number of tools available, ranging from simple bug-trackers to full-scale solutions, screening had to be done to make sure the workload would not be insurmountable.

4.1 Candidate Tool Selection

The tool selection process uses the priorities gathered from the questionnaire and the table on the XQual web page [8] to find viable candidates for comparison. Since the web page is largely based on which tools contain certain functionalities, the tools' websites have been used to check whether the functionalities are in fact supported. The vast number of available tools makes it impossible to check them all, so restrictions were made and the selected tools are presented in this chapter. The candidates for the case study, based on the suggestions as well as the tools' provided features, were:

1. Microsoft Test Manager (MTM) [18]
2. Testopia [11]
3. TestLink [10]
4. Salomé TMF [6]
5. XStudio [7]

The main reason these tools were selected was the amount of functionality in each combined with being mentioned as known tools by the interviewees. Microsoft Test Manager is the tool currently used throughout System Verification Sweden AB and it was specifically requested that it be included in the comparison. MTM also had most of the attributes from the list.


Testopia is an extension of Bugzilla and is a web based tool developed by Mozilla. Although the interviewees had no experience working with Testopia, they were experienced working with Bugzilla. Bugzilla is a well-known bug handling system and has been one of the prime examples of open source alternatives in this field. TestLink is an open source tool mentioned by one interviewee, also with a large number of functionalities. The tool is web based and is not supported by any company, but rather worked on by contributing individuals [10]. At first Salomé TMF was chosen for the evaluation due to having most attributes from the list as well as being unknown to the interviewees. However, the tool failed the first scenario (installation) and has not been worked on since 2011 [6], and was thus disregarded. XStudio is developed by Gavaldo Consulting and includes all available functionalities according to their own list. Unlike Testopia and TestLink it is not web based, and it has both a free version, not including any support, and one which does. The free version was the primary contender for this tool, since support, which should not affect the evaluation, is the only difference between the two versions. The setup of the featured tools is shown in table B.1.

4.2 Microsoft Test Manager

Test Manager is a commercial tool by Microsoft created to plan, manage, and execute both manual and exploratory tests [18]. The installation package includes the SQL server, web server and other software required to run the tool, and does not support any non-Microsoft products within these domains. The 2010 Microsoft Test Manager (MTM) tool is widely used in projects within System Verification and was thus chosen as a baseline for the comparison. The tool is largely focused on software testing, using the tool side-by-side with the system under test. The tool can also be used with external testing, though a lot of functionalities such as recording will have no effect. MTM is linked to other software required for it to work, one of which is Team Foundation Server (TFS). TFS provides version control, work item tracking, project management functions, team build, data collection and reporting, a team project portal and team foundation shared services [19]. MTM is a tool the interviewees are familiar with and use on a daily basis. This presents a risk of bias, since a user will have a preconceived opinion of the tool and thus might be too positive or too negative regarding it. Test Manager and Team Foundation Server require licenses to run; however, a trial version was used which includes everything, though for a limited amount of time.

4.3 XStudio

The XStudio tool is a commercial tool with both a free version and a paid version. The monthly fee includes updates and support, though the functionality is the same across both versions. The XStudio tool includes its own issue handler but can also be integrated with other common tools such as Bugzilla and others [4, 2]. The tool is

client based and requires installation for each user. The clients use the database server to update the information status of each XStudio client. The XStudio tool includes scheduling assistance and focuses largely on requirements engineering and connecting requirements to test cases, displaying the overall test coverage of the requirements.

4.4 TestLink

TestLink is an open source tool under the GNU General Public License and thus includes all source code with the download package. The tool is completely web based and does not require separate clients per user to run. The tool does not include an issue handler but integrates with a number of common tools such as Bugzilla, linking bugs and showing their status in the TestLink tool.

4.5 Testopia

Testopia is completely web based and as such does not require any separate clients to be installed per user. The server containing Bugzilla and Testopia is installed with the help of multiple Perl scripts, where the installer may choose which features to include in the installation and which are deemed superfluous. Being an extension of Bugzilla, it requires Bugzilla for issue handling, and when both are installed they share space on the web page; among other things, there are multiple administrator links on the same page (one belonging to Bugzilla and one belonging to Testopia), named Administration and Admin, as seen in figure 4.1. The list of additional software required to install Testopia is longer than for any of the other chosen tools and it is not installed through a single package or solution. However, the included checksetup.pl assists with most of the installation, and it is only needed for the server, not the clients.

Figure 4.1: Testopia example


Chapter 5 Results

The results presented in this chapter are divided into the three different parts of the study: scenario, case study and demonstration event. The scenario results originate from the author testing each tool, while the results from the case study and the demonstration event include employees from System Verification Sweden AB. The validity of the results is discussed at the end of the chapter.

5.1 Scenario Results

The four test management tools which completed the first scenario, installation, were all tested sequentially through the scenarios given in section 3.2.3. The tools completed most of the tasks without fault; however, due to how Microsoft has solved the tools' user accounts there was no way of adding more user accounts to Microsoft Test Manager (MTM). MTM gathers the user information from elsewhere, and finding out where this information is obtained proved impossible. Only being able to use a single client and not being able to add multiple users also automatically failed the last two scenarios. Both XStudio and Testopia claim to have support for importing any type of data, though when attempting to do so, having installed any additional software required, nothing happened. Not only was there no visible change to the projects, no error messages were shown in either tool. Table 5.1 shows whether or not each tool completed each scenario (marked with √). When a tool fails a scenario due to some integrated feature, other failed scenarios might depend on the same thing, and so several failed scenarios might be linked to a single source. Since it could not be installed, Salomé fails every single scenario, as all of them require a running version of the tool.


Table 5.1: Results from the scenarios.

Scenario                        MTM   XStudio   Testopia   TestLink   Salomé
Installation                     √       √         √          √
Importing data                   √                            √
Creating new set                 √       √         √          √
Setting up project               √       √         √          √
Setting up second project        √       √         √          √
Adding multiple users                    √         √          √
Creating test case               √       √         √          √
Creating an error report         √       √         √          √
Manage an error report           √       √         √          √
Export project                   √       √         √          √
Simultaneous use of project              √         √          √
Simultaneous use of test case            √         √          √
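The cascading effect described in the text, where a tool that cannot be installed automatically fails every other scenario, can be sketched as a simple masking rule. The scenario subset and results below are hypothetical placeholders, not the full table:

```python
# Hypothetical subset of the scenario list; the full list is in section 3.2.3.
SCENARIOS = ["Installation", "Importing data", "Creating test case"]

def effective_results(raw):
    """raw maps scenario name -> pass (True) / fail (False).
    A failed installation masks every scenario, since each one
    requires a running instance of the tool."""
    if not raw.get("Installation", False):
        return {s: False for s in SCENARIOS}
    return {s: raw.get(s, False) for s in SCENARIOS}

# Salomé could not be installed, so every scenario counts as failed.
salome = effective_results({"Installation": False, "Importing data": True})
print(salome)
```

This also illustrates the caveat in the text: several failed rows in the table may trace back to a single underlying cause rather than independent defects.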

5.2 Case Study

The questionnaire for the case study data collection focused on how the test managers used a tool and was thus sent to people working within that area. Four test managers working at System Verification responded. The test managers' experience in testing ranged from six to eight years and their experience in test management from three to four years. Between them they had used different test management tools, though the only tool they all had experience with was MTM. The test managers also deemed things like performance, being able to record test runs, aesthetics, requirements coverage, export functions to Excel and user rights settings as important. These attributes were not covered in the questionnaire and were mentioned as nice to have or missing from the functionality list. Except for Testopia, the tools actively support requirements engineering and mapping requirements to test cases. All tested tools save execution history and record changes done to projects, test cases etc., though MTM has a recording option when running test cases and XStudio automatically records different statistics such as execution time. Even though all tools were running on a localhost configuration, there were considerable differences in response time, with Testopia taking longer to perform actions than TestLink using the same web browser, and the XStudio client responding faster than the MTM client. With the exception of MTM, all test management tools had working export functions at various levels of data, ranging from test case results to entire projects, whereas MTM does not support exporting individual test cases. The tools exported to either .xml or .csv depending on whether a project level or a test case level was exported; however, no two tools use the same formatting and they are thus incompatible with each other. In the first part of the questionnaire, outlined in Appendix A, the test managers were asked to prioritise the different functionalities to check which are deemed most important. Even though there were differences in opinion, some functionalities were

prioritised highly by all and some were prioritised low by all. The results, with the percentage of interviewees ranking each functionality high or low, are shown in table 5.2. At the high end of the scale, ranked very important by all, were test case management and test reporting; at the other end of the scale was requirements management - only deemed very important by a single test manager. Access to requirements is still highly prioritised for a tool, considering that mapping requirements to tests was deemed very important by most, as were the tool being easy to use and issue handling.

Table 5.2: Case study questionnaire priority table.

Functionality                   High Priority   Low Priority
Test case management            100%            0%
Test reporting                  100%            0%
Issue handling                  75%             25%
Mapping requirements to tests   75%             25%
Usability                       75%             25%
Creating/handling test plans    75%             25%
Version control                 50%             50%
Scalability                     50%             50%
Backlogging/follow-through      50%             50%
Scheduling                      50%             50%
Requirements management         25%             75%
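With four respondents, the percentages in table 5.2 necessarily come in steps of 25%. A minimal sketch of how such a high/low split is computed - the votes below are illustrative, not the actual questionnaire data:

```python
# Hypothetical reconstruction of the Table 5.2 computation: each of the
# four respondents rates a functionality either "high" or "low" priority.
responses = {
    "Test case management": ["high", "high", "high", "high"],
    "Requirements management": ["low", "low", "low", "high"],
}

def priority_split(votes):
    """Return (high%, low%) for one functionality's votes."""
    high = votes.count("high") / len(votes)
    return round(high * 100), round((1 - high) * 100)

print(priority_split(responses["Test case management"]))      # (100, 0)
print(priority_split(responses["Requirements management"]))   # (25, 75)
```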

Even though it was deemed very important to have a tool which is easy to use, all agreed that functionality is more important than usability, with the argument that everybody will learn to use a tool eventually. However, there was a split on the question of whether functionality was more important than easy integration with other tools, with half answering Important and half Not very important.

5.3 Demo event results

The interviewed test managers, as well as two additional employees with experience in testing and test management, were invited to the demo event. Of those invited, three were able to join the event for its whole duration and participated in and helped run through the scenarios. The participants agreed that two additional scenarios should be tested:

• Run Test.

• Create Data-driven / parameter-controlled test case.

Two of the participants admitted to probably being biased, due to working extensively with MTM and therefore preferring the well-known over the unknown tools; the third participant had no prior experience with any of the four tools. The participants were asked to rank the tools for each scenario (1 being highest, 4 being lowest) and the scores were then added over the participants to decide which

tool came out on top in each scenario. Tallying the scores, shown in figure 5.1, shows MTM as a clear winner, followed by TestLink, with Testopia and XStudio on roughly the same number of points. The final scenario had one participant fewer and therefore a lower maximum value.

Figure 5.1: Demo event tally.
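The tallying procedure can be sketched as follows. The thesis does not state the exact rank-to-point conversion, so the sketch assumes rank 1 is worth 4 points down to rank 4 being worth 1 point, and the rankings themselves are illustrative, not the actual demo-event data.

```python
# Sketch of the demo-event tally under an assumed scoring scheme:
# rank 1 (best of 4 tools) earns 4 points, rank 4 earns 1 point.
rankings = [
    # one dict per participant: tool -> rank (1 = best, 4 = worst)
    {"MTM": 1, "TestLink": 2, "Testopia": 3, "XStudio": 4},
    {"MTM": 1, "TestLink": 3, "Testopia": 4, "XStudio": 2},
    {"MTM": 2, "TestLink": 1, "Testopia": 3, "XStudio": 4},
]

def tally(rankings, n_tools=4):
    totals = {}
    for participant in rankings:
        for tool, rank in participant.items():
            # invert the rank so a better rank yields more points
            totals[tool] = totals.get(tool, 0) + (n_tools + 1 - rank)
    return totals

print(tally(rankings))  # {'MTM': 11, 'TestLink': 9, 'Testopia': 5, 'XStudio': 5}
```

With fewer participants in a scenario, the maximum attainable total drops, which matches the note above about the final scenario's lower maximum value.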

The first of the two additional scenarios followed the trend of the previous scenarios, while the second additional scenario was only supported by MTM. Throughout the demonstration the participants reacted to certain performance issues with Testopia, and to the non-intuitive navigation of XStudio and, to a lesser extent, Testopia. When checking the pass/fail/not-run ratio within a test suite, MTM and TestLink were very similar, displaying the results as pie diagrams as in figures 5.2 and 5.3 - with MTM being able to show more information about failed tests.

5.4 Validity

The limitations of the case study are primarily the limited number of participants in the demonstration event, as well as the lack of responses during the data collection. When gathering information from consultants one must also consider the amount of time which may be spent on it; working on projects and performing the day-to-day labour is prioritised. One clear limitation of the TestLink tool is that it uses Bugzilla, on which Testopia is based, for bug reports, which means that it is most unlikely to receive a higher score during that scenario in the demo event.

One major limitation lies in how the tools are structured and the amount of configuration and modification each tool requires to function; the open source alternatives require a lot of configuration, while certain things in MTM cannot be configured at all. Due to the configuration limitations, the connection between TestLink and Bugzilla did not work during the demo event, affecting the scenarios concerning bug reports.

Figure 5.2: TestLink pie diagram.

Figure 5.3: MTM pie diagram.

The amount of available documentation and assistance for each tool may also threaten validity; having issues with no idea how to solve them would affect the user. Both TestLink and Testopia have extensive manuals and documentation about installing and configuring their products, the installation (and other) documentation of XStudio exists as HTML pages on their website, and the MTM documentation is virtually non-existent apart from the help documentation within the software. The most well-known and used software, Testopia and MTM, had a lot of issues and problems addressed on forums (mostly their own), as had TestLink to a somewhat lesser extent, but since support is an upgrade from the free version, XStudio did not have as much online help as the other three tools.

A further threat to validity is that, during the demonstration event, the participants were asked to run the scenarios alongside the author. Whether or not a scenario has been tried and tested beforehand, as well as who runs the scenario, might influence the experience and threaten the validity. Another threat is the fact that the open source tools may be modified beforehand to be tailored to a demonstration or a specific scenario. A major threat of the demo event was the bias of the participants; past experience with certain tools and a larger understanding of those tools would mean that their minds were already made up about the tools. There might also be preconceptions for or against open source solutions in general.


The documentation threat can be avoided or diminished by choosing tools which are under active development, increasing the chances of others having encountered the problem already and offering solutions for it. Choosing well-documented tools and tools with an active community also contributes to diminishing this threat. Running the scenarios with several groups, or increasing the number of participants for the event, would increase the chances of finding people with comparable experience of the tools, making the scenario runs more equal. Having multiple events would possibly increase the number of participants, improving the chances of consultants being free for a session. To address any threat of tailoring the open source tools to the scenarios, a second party may be used to control the software - although the impact of modifying the software would be very minor. The threat of bias is probably the most important issue and the toughest to combat. If the participants have used a tool beforehand they are bound to have opinions about it. Masking the name of each tool during a demonstration would diminish the threat, although it would most likely not be possible. However, not declaring which tool is open source and which is not would be a way of trying to eliminate any preconceptions about open source and free tools.

Chapter 6 Discussion

The work has been centred on a group of test managers at System Verification Sweden AB, using their experience and ideas to choose a set of tools for comparison. The results gathered, as well as observations made during the thesis project, form the basis of the discussion. The discussion will serve as the main basis for any conclusions drawn from the results.

6.1 Participation

The participation during the case study data collection phase was not large; had the selection been larger, it would probably have increased the number of test management tools covered by prior experience, almost guaranteed to include several mentions of HP Quality Center and Rational TestManager.

6.2 Maintainability

During the selection process, when Salomé TMF was first chosen, it became apparent that when choosing an open source solution it is important to make sure a living community is handling the tool. Unless a company is willing to invest in developing and maintaining the code themselves, it is important that the tool is currently in use by users willing to contribute to it. The main reason Salomé TMF was disregarded was not having been updated for years; the second, and as important, reason was the lack of information and support on forums and other websites. Good maintainability requires active support, and this is achieved by Testopia, TestLink and MTM through an active community answering support questions more or less wherever they are asked - which is not the case with XStudio, since support is a major selling point for upgrading from the free version of the software.


An observation made during the case study and demonstration event was the lack of a built-in export function in MTM. The maintainability of the test management tools is an important aspect; the history of runs, bugs, test cases etc. is a major factor in not repeating the same mistakes in the future. Upon further research the software TestScribe was found, which is needed in order to export test cases with MTM. Considering MTM is a software solution including server, client etc., it is surprising that exporting test cases requires additional software.

An observation made by happenstance prior to the demonstration event was that MTM lost its connection with TFS, which in turn lost its connection to the SQL server. This is because the TFS installation uses the computer (and network) profiles instead of profiles created for users, which resulted in problems with the installation when the server profile had to change password. The all-in-one installation package of TFS & MTM sorts out most configuration of the SQL connection and the like automatically, which is handy but also obstructs any attempt at troubleshooting and solving problems by oneself. This meant that all software had to be reinstalled, since the lost connection between the different server software could not be re-established.

6.3 Installability

During the installation scenario, the difference in easily found documentation for each tool was apparent, and the observation was that the open source tools' printed documentation was far more extensive. The lack of easily found documentation for MTM and the other software required to use it is a minor concern, since there are a lot of useful topics concerning different issues on forums. The lack of XStudio documentation was most likely deliberate, considering support is the main difference between the free version and the licensed version. The Testopia installation solution was the perl script checksetup.pl, which gave a quick overview of which features are installed, what needs installing for it to run, and what is available to install. checksetup.pl functions both as installation software and as a repair tool and is very straightforward (see figure 6.1).

Figure 6.1: Testopia checksetup.pl.

6.4 Functionality versus interoperability

In the case study data collection, the test managers were divided when deciding whether a lot of functionality or interoperability was more important. The interoperability aspect includes the integration between different server-side solutions of the tools: mainly the choice of SQL server. There are multiple options for choosing an SQL server and it is common to already use one within a company. It is therefore important that no vital information is lost in the case of a legacy system, but it is also important not to spend extra time maintaining a second SQL server used only for a test management tool. Testopia, TestLink and XStudio all claim to support a variety of SQL servers, while MTM requires the server to run Microsoft SQL server. Being forced to use exclusively Microsoft software, including HTTP and SQL server, with the tool also means that one may not change to a superior alternative if there is one. If the company already uses other alternatives, these need to be either migrated to Microsoft editions or run in parallel, leaving all previously acquired data separated - neither of which is a good solution. Maintaining two separate SQL servers may also cause the double maintenance problem [3] if one wants to include the information from the previous SQL database in the new installation.

6.5 Usability

The demo event gave a good idea of how the tools felt, mainly in terms of usability and performance. Usability issues with different test management tools boil down to two major factors: recognition and intuitiveness. If a tool is based on the same line of thought and interface as other software the user is used to, it is easier to use without training. If the functions of a software are intuitive, and a user can more or less guess the next move in order to complete a task, not much training and experience is needed. These aspects are very hard to measure, although the demo event scores may give a hint of which tools are intuitive and/or recognisable. If the users are mainly Windows users, the recognition of MTM may be great, and if the users have experience with Bugzilla, the recognition of Testopia may be great.

The scores from the demo event show MTM as a clear winner. This is not very surprising, since most of the participants were accustomed to the tool and thus recognised what to do. However, not everything was intuitive: MTM is stricter than some other tools, requiring the use of multiple pieces of software to create a project, and adding test cases to suites relies on a rather unintuitive search function which the user must learn to use, see figure 6.2. The normal operating system of the company is Windows, which could also affect the attitude towards MTM. To get a full picture of why the participants in a demo event ranked the tools the way they did, more extensive background checks could be performed to find out which tools in general a participant is accustomed to, which operating system the participant uses, etc. This would only give a clearer picture of the results and would most likely not affect the actual results.

Figure 6.2: MTM Example search.


Testopia was the tool where the participants of the demo event made mistakes most often. The navigation of Testopia is quite intuitive, with most of the navigation done from the Product Dashboard, though all fields are editable and clicking on any field except the ID will edit that field. Clicking on the ID will, however, bring the user to that item (see figure 6.3), which in most cases was the intention of the action. This setup of editing and navigating is not quite as intuitive, and the user will most likely end up editing several times when the intention is to move to an item. One major issue with the tool, frequently noted at the demo event, was that it is too cluttered, showing insignificant information and filling up the web browser, leaving little room for running the tool in parallel with the system under test and requiring the user to swap between the web browser and the system.

Figure 6.3: Testopia navigation example.


Chapter 7 Conclusions

The master's thesis project has aimed to find and evaluate Test Management tools to ease the workload of test managers in their daily work. After selecting four tools from a plethora of Test Management tools, they were evaluated by running basic scenarios as the basis for a case study. The tools were also presented in a demonstration where test managers could see the tools working in parallel.

The results of the demo event showed a preference for the proprietary tool Microsoft Test Manager. The demo event showed a slight favour for the open source tool TestLink over Testopia and XStudio, with the three tools being nearly tied in the tally. TestLink was the only tool which supported all of the basic scenarios, and it also has extensive documentation.

The main issue with MTM is being locked into using Microsoft products, whereas the open source tools can be used with most widely used HTTP and SQL servers. On the other hand, MTM is more of a package deal and is installed as a single entity, instead of each component having to be separately installed and configured. Which solution is better depends highly on how much can be spent on a Test Management tool, but also on which software is already available and running in an organisation. For example, if an organisation is running a lot of Microsoft products, MTM is most likely the best tool available.


Bibliography

[1] Fredrik Andersson, Hannes Nevalainen, and Sandra Pettersson. SRS. System Requirements Specification created in the course ETSN05, Software Development for Large Systems. Oct. 2011 (cit. on p. 13).

[2] Atlassian. https://www.atlassian.com/software/jira. Extracted 2013-08-04. 2013 (cit. on p. 20).

[3] Wayne A. Babich. Software Configuration Management: Coordination for Team Productivity. Addison Wesley, 1986. Chap. 1. isbn: 0201101610 (cit. on p. 31).

[4] bugzilla.org. http://www.bugzilla.org/status/roadmap.html. Extracted 2013-06-25. Mar. 2009 (cit. on p. 20).

[5] Ilene Burnstein. Practical Software Testing: A Process-Oriented Approach. 1st. Springer Publishing Company, Incorporated, 2010. Chap. 8. isbn: 1441928855, 9781441928856 (cit. on pp. 7, 10).

[6] OW2 Consortium. http://wiki.ow2.org/salome-tmf. Extracted 2013-06-21. June 2011 (cit. on pp. 19-20).

[7] Gavaldo Consulting. http://www.xqual.com. Extracted 2013-08-29. July 2013 (cit. on p. 19).

[8] Gavaldo Consulting. http://www.xqual.com/qa/tools.html. Extracted 2013-03-25. Mar. 2012 (cit. on pp. 11, 19).

[9] Emelie Engström. Supporting Decisions on Regression Test Scoping in a Software Product Line Context - from Evidence to Practice. Paper VI: Supporting Test Scoping with Visual Analytics. PhD dissertation. Department of Computer Science, Lund University, 2013. isbn: 978-91-980754-1-0 (cit. on p. 8).

[10] Martin Havlát. http://teamst.org/. Extracted 2013-08-19. 2012 (cit. on pp. 19-20).

[11] Greg Hendricks. http://testopia.blogspot.se/. Extracted 2013-06-25. 2012 (cit. on p. 19).


[12] M. Höst, B. Regnell, and P. Runeson. Att genomföra examensarbete. Studentlitteratur, 2006. isbn: 9789144005218. url: http://books.google.se/books?id=gUaHMwAACAAJ (cit. on pp. 13-14).

[13] Free Software Foundation Inc. http://www.gnu.org/philosophy/categories.html. Extracted 2013-03-26. Mar. 2013 (cit. on p. 10).

[14] The Open Source Initiative. http://opensource.org/docs/osd. Extracted 2013-03-25 (cit. on p. 10).

[15] The Open Source Initiative. http://opensource.org/proliferation-report. Extracted 2013-04-22 (cit. on p. 10).

[16] ISTQB. http://www.istqb.org/certification-path-root/expert-level/test-management-2.html. Extracted 2013-04-09. 2013 (cit. on p. 9).

[17] Barbara Kitchenham, Shari Lawrence Pfleeger, and Lesley Pickard. Case Studies for Method and Tool Evaluation. In: IEEE Software 12.4 (1995), p. 52. issn: 07407459 (cit. on pp. 10-11, 13-14).

[18] Microsoft. http://msdn.microsoft.com/en-us/library/jj635157.aspx. Extracted 2013-06-18. 2013 (cit. on pp. 19-20).

[19] Microsoft. http://msdn.microsoft.com/en-us/library/ms364062.aspx. Extracted 2013-08-09. 2013 (cit. on p. 20).

[20] Ms L Shanmuga Priya, Ms A Askarunisa, Dr N Ramaraj, Reema Sharma, Rajesh Rohilla, Mohit Sharma, TC Manjunath, Srinivasa Kumar Devireddy, G Ramaswamy, D Ravikiran, et al. Measuring the Effectiveness of Open Coverage based Testing Tools. In: Journal of Theoretical and Applied Information Technology (2005) (cit. on p. 15).

[21] The Linux Information Project. http://www.linfo.org/freeware.html. Extracted 2013-03-26. Oct. 2006 (cit. on p. 10).

[22] Albin Rosberg, Fredrik Karlsson, and Daniel Perván. SVVI. System Verification and Validation Instructions created in the course ETSN05, Software Development for Large Systems. Oct. 2011 (cit. on p. 13).

[23] Albin Rosberg, Fredrik Karlsson, and Daniel Perván. SVVS. System Verification and Validation Specification created in the course ETSN05, Software Development for Large Systems. Oct. 2011 (cit. on p. 13).

[24] Michel Ruffin and Christof Ebert. Using Open Source Software in Product Development: A Primer. In: IEEE Software (Jan. 2004), pp. 82-86 (cit. on p. 10).

[25] Per Runeson and Martin Höst. Guidelines for conducting and reporting case study research in software engineering. In: Empirical Software Engineering 14.2 (2009), pp. 131-164 (cit. on p. 13).


[26] Ahmed Ibrahim Safana and Suhaimi Ibrahim. Implementing software test management using SpiraTeam tool. In: Proceedings - 5th International Conference on Software Engineering Advances, ICSEA 2010. Centre for Advanced Software Engineering, Universiti Teknologi Malaysia, 2010, pp. 447-452 (cit. on pp. 9, 11).

[27] Jun-feng Yao, Shi Ying, Ju-bo Luo, Dan Xie, and Xiang-yang Jia. Reflective Architecture Based Software Testing Management Model. In: 2006 IEEE International Conference on Management of Innovation & Technology (2006), p. 821. issn: 9781424401482 (cit. on p. 9).

[28] Benjamin Zeiß, Diana Vega, Ina Schieferdecker, Helmut Neukirchen, and Jens Grabowski. Applying the ISO 9126 Quality Model to Test Specifications - Exemplified for TTCN-3 Test Specifications. In: Software Engineering 2007 (SE 2007). Lecture Notes in Informatics (LNI) 105. Copyright Gesellschaft für Informatik. Köllen Verlag, Bonn, Mar. 2007, pp. 231-242 (cit. on p. 9).

[29] T. Zimmermann, R. Premraj, N. Bettenburg, S. Just, A. Schröter, and C. Weiss. What Makes a Good Bug Report? In: IEEE Transactions on Software Engineering 36.5 (n.d.), pp. 618-643. issn: 00985589 (cit. on p. 11).


Appendices


Appendix A Questionnaire

How long have you been working with testing? How long have you been working with Test Management?

Which functionalities do you deem important in a Test Management Tool?

• Requirements management.

• Mapping requirements to tests.

• Creating/handling test plans.

• Scheduling.

• Test reporting.

• Version control.

• Test case management.

• Issue handling.

• Backlogging/follow-through.

• Scalability.

• Usability.

• Anything missing from this list? Are there any functionalities which are less important, but nice to have?


How important is it to have a tool with a lot of functionality, compared to one which is easy to use?

(a) Very important

(b) Important

(c) Either way

(d) Not very important

(e) Unimportant

How important is it to have a tool with a lot of functionality, compared to one which easily integrates with other tools?

(a) Very important

(b) Important

(c) Either way

(d) Not very important

(e) Unimportant

Suggestions of tools which you have used?

Appendix B Test management tool setup

Table B.1: Test management tool setup.

Scenario: MTM
• Microsoft Test Manager 10.0.40219.445
• Microsoft SQL server 2008 SqlExpress 10.0.30319.317
• Team Foundation Server 2010 10.0.30319.317
• Microsoft Visual Studio 2010 SP1 Rel 10.0.40219.1

Scenario: XStudio
• XStudio 1.8
• MySQL server 5.1.50

Scenario: TestLink
• TestLink 1.9.6
• MySQL servers 5.1.50 & 5.6.11
• Bugzilla 4.2.5
• Apache server 2.2.21
• Mozilla Firefox 20.0
• PHP 5.4.14

Scenario: Testopia
• Testopia 2.4
• MySQL servers 5.1.50 & 5.6.11
• Bugzilla 4.2.5
• Apache server 2.2.21
• Mozilla Firefox 20.0
• PHP 5.4.14
• GD-Graph3d 0.63
• Perl 5.16.3
• Perl JSON 2.53

Appendix C Demo scenarios

Scenario 1: Create test case.

Create a test case with the following steps and expected results.

Precondition: The user Alice, with the password testare1, exists in the system.

1. Enter Alice as the username 1. Alice is shown as the username

2. Enter testare1 as the password 2. The text is not shown in plain text

3. Click Log In 3. Alice is logged in to the system.

Which tool handled the scenario best?

TFS XStudio Testopia TestLink

Comment: .

Scenario 2: Add a new project.

Create a new, empty project named NewProject.

Which tool handled the scenario best?

TFS XStudio Testopia TestLink

Comment: .

Scenario 3: Set a responsible person for a test case.

Select the test case F4. Set a lock date and set Albin Rosberg ([email protected]) as the responsible person.

Which tool handled the scenario best?

TFS XStudio Testopia TestLink

Comment: .

Scenario 4: Create a report.

Create a report of the success rate for the project.

Which tool handled the scenario best?

TFS XStudio Testopia TestLink

Comment: .

Scenario 5: Check a bug report.

Check the bug belonging to test case S2: The password field shall be masked.

Which tool handled the scenario best?

TFS XStudio Testopia TestLink

Comment: .

© COPYRIGHT SYSTEM VERIFICATION, 2010

Scenario 6: Check the history of a test case.

Check the execution history for test case F2. Log out.

Which tool handled the scenario best?

TFS XStudio Testopia TestLink

Comment: .

Scenario 7: Move existing tests to a new test suite.

Create a new test suite called Regressionstest containing all already existing test cases.

Which tool handled the scenario best?

TFS XStudio Testopia TestLink

Comment: .

Scenario 8: Check whether the tester Albin Rosberg has run his assigned test cases.

Which tool handled the scenario best?

TFS XStudio Testopia TestLink

Comment: .
