FACULDADE DE ENGENHARIA DA UNIVERSIDADE DO PORTO

Test Automation in Web Environment

Jorge Miguel Guerra Santos

Mestrado Integrado em Engenharia Informática e Computação

For Jury Evaluation

Supervisor: Prof.ª Ana Paiva
Proponent: Eng.º Joel Campos

June 27, 2016

Test Automation in Web Environment

Jorge Miguel Guerra Santos

Mestrado Integrado em Engenharia Informática e Computação

Approved in oral examination by the committee:

Chair:
External Examiner:
Supervisor:

June 27, 2016

Abstract

In today’s fast-moving world, it is a challenge for any company to continuously maintain and improve the quality and efficiency of software systems development. In many software projects, testing is neglected because of time or cost constraints. This leads to a lack of product quality, followed by customer dissatisfaction and ultimately increased overall quality costs. Additionally, as software projects grow more complex, the number of hours spent on testing increases as well, but without the support of suitable tools, test efficiency and validity tend to decline. Some software testing tasks, such as extensive low-level interface regression testing, can be laborious and time consuming to do manually. In addition, a manual approach might not always be effective in finding certain classes of defects. Test automation offers a possibility to perform these types of testing effectively. Once automated tests have been developed, they can be run quickly and repeatedly. However, test automation systems usually lack reporting, analysis and meaningful information about project status. The end goal of this research work is to create a prototype that can create and organize test batteries by recording user interaction, reproduce the recorded actions automatically, detect failures during test execution and generate reports, while also setting up the test environment, all in an automatic fashion, and to develop techniques to create more maintainable test cases. This tool can bring a technical advantage in automated web testing by creating new and more maintainable recorded test cases with minimal user action and by allowing testers to better evaluate the software project status.

Acknowledgements

First of all, I would like to thank FEUP for serving as a place for personal and interpersonal growth, as well as for providing all the conditions for me to focus on my personal goals and to meet incredible people.

I thank Prof.ª Ana Cristina Ramada Paiva for guiding me throughout this whole research work, making sure that I stayed focused on the main goal, and for always managing her time in order to provide me feedback about the work being developed and clarify any doubts that I had.

Additionally, I would like to thank Glintt HS and its crew for allowing me to fit right in and for providing all the conditions I needed during this research work. I hope the work done throughout my stay will be of value to the company for a long time. More specifically, I thank Joel for always being available for any questions that I had, even in the busiest days, and for letting me have the freedom to experiment with new ideas for the research work, while giving me his feedback.

A special shout-out to the Andrés group for being such a tight-knit group of friends. You keep my sanity in check, most of the time, and I thank you for all the moments during our stay at FEUP; I hope it is just the beginning.

A special thanks to Simão for helping me throughout these academic years. I am eternally grateful for the fact that you had the initiative to work with me and for influencing me in a way that allowed me to adapt to the new academic life and form a new work ethic that has without a doubt led me this far.

Another individual thanks goes to JP. Thanks for being such an upbeat guy and for, on a random day, asking me to study together at FEUP at night. This turned into a tradition that helped me manage my time and do a little bit of everything.

I thank my parents for always supporting me through thick and thin, while also allowing me the freedom to explore and focus on my interests and always giving me advice. I am thankful for having had the chance to be born to parents as incredible as you, and sorry you had to deal with me and my brother. Speaking of which, thank you, my brother. You were always at my side throughout my life and I wish it to stay that way no matter where our paths go.

Thank you,

Jorge Miguel Guerra Santos

“Only those who have patience to do simple things perfectly ever acquire the skill to do difficult things easily.”

James J. Corbett

Contents

1 Introduction
  1.1 Context and Background
  1.2 Motivation
  1.3 Goals
  1.4 Structure

2 Automated Web Application Testing
  2.1 Software Testing
  2.2 Web Application Testing
  2.3 Test Automation
    2.3.1 Automated Web Application Testing Techniques
  2.4 Software Testing Metrics
    2.4.1 Automated Software Testing Metrics
  2.5 Tools Survey
    2.5.1 Automated Software Testing Tools
    2.5.2 Test Logging and Reporting Tools
  2.6 Conclusions

3 Methodology
  3.1 Requirements
  3.2 Technologies Comparison
    3.2.1 Automated Web Application Testing Tools
    3.2.2 Continuous Integration Tools
    3.2.3 Additional Technologies
  3.3 Test Cases
    3.3.1 Locators
    3.3.2 Recurrent Steps
    3.3.3 Implicit waits
  3.4 Test Environment
    3.4.1 Selection of Test Cases
    3.4.2 Automation
    3.4.3 Test Reports
  3.5 Conclusions

4 Implementation
  4.1 Method
    4.1.1 Test Case Structure
    4.1.2 Test Case Life Cycle
    4.1.3 Test Results file structure
    4.1.4 Test Selector file structure
    4.1.5 Test Environment Setup
  4.2 Architecture
    4.2.1 Selenium IDE
    4.2.2 Jenkins
  4.3 Prototype
    4.3.1 Selenium IDE
    4.3.2 Jenkins
  4.4 Discussion
    4.4.1 Method
    4.4.2 Results

5 Conclusions and Future Work
  5.1 Final Remarks
  5.2 Future Work

References

A Configuration
  A.1 Selenium IDE Configuration
  A.2 Jenkins Configuration

B Procedure
  B.1 Translated Selenese commands in NUnit C#
  B.2 Gsearcher's Locator Strategy

List of Figures

2.1 Mike Cohn’s test automation pyramid

3.1 Integration between technologies
3.2 RecurrentSteps case used by multiple test cases
3.3 Jenkins Build Process

4.1 Jenkins Workspace
4.2 Overview of the developed system
4.3 Implementation model of the Test suite chooser plugin
4.4 Implementation model of the Command builders
4.5 Implementation model of the Selenium IDE’s user extension
4.6 Implementation model of the saveTestCaseToProperties executable
4.7 Implementation model of the C# NUnit Formatters
4.8 Use Case diagram of Selenium IDE
4.9 Implementation model of the tests selector Jenkins plugin
4.10 Implementation model of the transferTestsToBuild executable
4.11 Implementation model of the parseTestsProperties executable
4.12 Implementation model of the extractTestResults executable
4.13 Use Case diagram of Jenkins
4.14 Context menu with Selenium IDE commands in Selenese
4.15 Selenium IDE interface with a select command
4.16 Test suite tree view
4.17 Locator’s format in Selenium IDE
4.18 Jenkins main interface
4.19 Jenkins test selector interface with a screenshot
4.20 Jenkins test selector interface with a failed screenshot
4.21 Jenkins test result interface
4.22 Cause of the failed test presented in the test result interface

A.1 Selenium IDE’s general settings
A.2 Selenium IDE’s format settings
A.3 Jenkins’ Tests folder
A.4 Jenkins’ Project Pre-Build Configuration
A.5 Jenkins’ Project Build Configuration
A.6 Jenkins’ Project Post-Build Configuration
A.7 Jenkins’ Project Node Configuration


List of Tables

3.1 Web Automated Testing Tools Comparison
3.2 Continuous Integration Tools Comparison

4.1 Rules to handle user actions in Selenese commands and in NUnit C#


Abbreviations

UI    User Interface
GUI   Graphical User Interface
IDE   Integrated Development Environment
IT    Information Technology
CI    Continuous Integration
C&R   Capture & Replay
API   Application Programming Interface
URL   Uniform Resource Locator
CSS   Cascading Style Sheets
HTML  HyperText Markup Language
DOM   Document Object Model
XML   Extensible Markup Language
JSON  JavaScript Object Notation
AJAX  Asynchronous JavaScript and XML


Chapter 1

Introduction

This introductory chapter provides a brief overview of the problem being addressed throughout this research work by first giving the background at a technical level. It then goes on to present the motivation behind the elaboration of this dissertation and the main goals to achieve with this research work.

The last section describes the structure of this dissertation by introducing each of its chapters.

1.1 Context and Background

Testing is a vital part of the software development process and, as web applications become increasingly important in our world, it is crucial that they are tested properly and quickly. At first, web testing was focused on finding bugs and security issues by going through the source code at a low level, testing server and database communication. But as web applications become more and more advanced and dynamic, testing the functionality of the web application UI has become more important [LDD14].

The study and software developed throughout this research work will be done in collaboration with Glintt HS. Glintt HS is the Glintt Group’s company focused on healthcare. Its core business is the development of software solutions for this market segment, where it is at the forefront at a national level. It is present in Brazil, Poland and Angola as well. It has around 300 employees and has its headquarters in Porto.

1.2 Motivation

Glintt offers multiple web applications and the tendency is for the number of applications, as well as their respective features, to increase. This naturally implies that the number of hours spent on tests will increase, but without the support of adequate tools, the efficiency and validity of the tests will tend to decline. The relevance of automated tests in this scenario is even greater, and with the creation/use of the correct tools, the investment made in creating them will easily pay off. The use of automated testing of web applications is becoming more common. However, it is still challenging to test UI functionality automatically. Most web applications are dynamic rather than static, which makes them complex to test automatically since their elements can change, and they are often comprised of different components built using different languages and techniques, which can also make automated testing difficult. In addition, test automation systems often lack reporting, analysis and meaningful information about project status.

1.3 Goals

The purpose of this research work is to study frameworks for automated web testing in order to create a prototype that takes into account software testers without experience in web testing. The prototype’s goal is to use technologies adapted for this purpose, such as automated testing techniques, test reporting and continuous integration, and to implement new methods and techniques that allow the following features:

1. Create and organize test batteries through the recording of user actions on the web application.

2. Increase the robustness of the generated test cases.

3. Automatically reproduce the recorded actions.

4. Automatically manage the test environment, detect failures during testing, and produce screenshots and graphical reports of the errors.

1.4 Structure

Besides the introduction, this dissertation is structured in four more chapters. Chapter 2 provides an overview of the state of the art of automated Web application testing, as well as a survey of automated testing, logging and reporting tools.

Chapter 3 presents a framework comparison for the main components of the prototype and specifies the technologies required for their integration. Additionally, it comprises a theoretical analysis of a set of methods relevant to the developed prototype.

Chapter 4 describes the prototype’s architecture and implementation by analyzing each component that structures the developed prototype and the methods and techniques implemented. It finishes with a discussion of the results obtained.

Finally, Chapter 5 concludes the dissertation.

Chapter 2

Automated Web Application Testing

The purpose of this chapter is to review the literature on automated web application testing. It begins by introducing Software Testing as a core activity in the software development process, followed by an assessment of the techniques related to this research work, such as Web Application Testing and Test Automation. Their characteristics and associated testing methods are reviewed before proceeding to the analysis of automated software testing metrics. In addition, a tools survey is conducted, gathering automated software testing, logging and reporting tools, followed by the conclusions of the literature review.

2.1 Software Testing

Software testing has been widely used in the industry as a quality assurance technique for the components of a software project, including the specification, the design, and the source code. As software becomes more important and complex, defects in software can have a significant impact on users and vendors. Therefore, the importance of planning, especially planning through testing, is paramount. A company may devote as much as 40% of its time to testing to assure the quality of the software produced, due to the fact that software testing is such a critical part of the process of developing high-quality software [PRPE13][LDD14].

In software testing, a suite of test cases is designed to test the overall functionality of the software, whether it conforms to the specification document or exposes failures in the software (e.g., functionality or security failures). However, testing is usually the process of finding as many errors as possible and thus improving assurance of the reliability and the quality of the software. This is because, in order to demonstrate the nonexistence of errors in software, one would need to test all possible permutations for a given set of inputs. Realistically, it is not possible to test all the permutations of a given set of inputs for a given program, even for a trivial one. For any non-trivial software system, such an exhaustive testing approach is essentially technologically and economically unfeasible.

The main goals of any testing technique (or test suite) are the demonstration of the presence of errors during a program execution and the discovery of a new fault or regression fault in a previously successful test case [LDD14].

2.2 Web Application Testing

Ever since the creation of the World Wide Web, there has been an increasing usage of Web applications. A Web application is a system typically composed of a database and Web pages, also described as back-end and front-end respectively, with which users interact over a network using a browser. A Web application can be of two types: static, in which the contents of the Web page do not change regardless of user input; and dynamic, in which the contents of the Web page may change depending on user actions [DLP05][DLF06].

Compared to traditional desktop applications, Web applications are unique, which presents new challenges for their quality assurance and testing [LDD14]:

1. Web applications are multilingual. They usually consist of a server-side backend and a client-facing frontend, and these two components are usually implemented in different programming languages [DLF06]. Moreover, the frontend is also typically implemented with a myriad of markup, presentation and programming languages such as HTML, CSS and JavaScript, which pose additional challenges for fully automated CI practices, as test drivers for different languages need to be integrated into the CI process and managed coherently.

2. The operating environment of typical Web applications is much more open than that of a desktop application. Such wide visibility makes these applications susceptible to various attacks, such as distributed denial-of-service (DDoS) attacks. Moreover, the open environment makes it more difficult to predict and simulate realistic workload. Levels of standards compliance and differences in implementation also add to the complexity of delivering coherent user experiences across browsers [DLF06].

3. A desktop application is usually used by a single user at a time, whereas a Web application typically supports multiple users [DLF06]. The effective management of resources (HTTP connections, database connections, files, threads, etc.) is crucial to the security, scalability, usability and functionality of a Web application. The multi-threaded nature of Web applications also makes it more difficult to detect and reproduce resource contention issues.

4. A multitude of Web application development technologies and frameworks are being proposed, actively maintained and evolving fast. Such constant evolution requires testing techniques to stay current.

The aim of Web application testing consists of executing the application using combinations of input and state to reveal failures. A failure is the manifested inability of a system to perform a required function within the specified requirements, and it can be attributed to any fault in the application implementation [DLF06]. In a web application, it is not possible to test faults separately and establish exactly which of them is responsible for each exhibited failure, because the application is strictly interwoven with the whole infrastructure (composed of hardware, software and middleware components).

Since the infrastructure mainly affects the non-functional requirements of a Web application (such as performance, stability, or compatibility), while the application is responsible for the functional requirements, Web application testing has to be considered from two different perspectives:

• Non-functional testing: Comprehends the different types of testing that need to be executed to verify the conformance of the Web application with the specified non-functional requirements. The most common testing activities are performance, load, stress, compatibility, usability and accessibility testing.

• Functional testing: Has the responsibility of uncovering failures of the application that are due to faults in the implementation of the specified functional requirements. Most of the methods and approaches used to test the functional requirements of traditional software can be used for Web applications too. Testing the functionality relies on test models, testing levels, test strategies and testing processes.

Both are complementary and not mutually exclusive; therefore, a Web application must be tested from these two perspectives [DLF06].

2.3 Test Automation

Test automation consists of using a computer program to execute system or user transactions against an IT system, which is typically achieved by using an automated testing tool. Automated testing is typically used in functional regression testing, performance testing, load testing, network testing and security testing. The tools are very useful to speed up the test cycle, as they can replicate processes at a much faster rate [TSSC12]. An effective test automation strategy calls for automating tests at three different levels, which are, as shown in Figure 2.1, Unit/Component, Acceptance and GUI Tests.

Advantages

Test automation has its benefits, which include the development of tests that run faster, that are consistent, and that can be run over and over again with less overhead. As more automated tests are added to the test suite, more tests can be run each time thereafter. Manual testing never goes away, but these efforts can now be focused on more rigorous tests [PRPE13].

Figure 2.1: Mike Cohn’s test automation pyramid

• It can Save Time and Money: After each iteration of the software product, tests have to be repeated to ensure the quality of the software. With automated testing, only the initial cost is incurred; after that, the tests run over and over again at no additional cost, can be executed as many times as needed, and are much faster than manual tests. However, since test automation is an investment, the testing effort may take more time or resources in the current release.

• Testing increases confidence in the correctness of the software: The test steps are repeated each and every time the source code changes, which maintains the accuracy of the software system throughout the several iterations of its development.

• Increase Testing Coverage: An automated software testing process can work through thousands of different complex test cases, which is not possible with manual testing. This allows more focus on the depth and scope of tests, which increases the quality of the software. Automated testing also allows testers to test the software on multiple computers with different configurations.

• Helpful in Testing Complex Web Applications: Automated testing is helpful for web applications where millions of users interact together, by creating virtual users to check the load capacity of the web application. It can also be used where the application GUI remains the same while features change frequently due to source code changes.

Challenges

It is important to point out that test automation actually makes the effort more complex, since there is now another software development effort added. Automated testing does not replace good test planning, the writing of test cases or much of the manual testing effort, and it has its own challenges.


• Regression Test Cases Coverage: As software expands after every release, it becomes so wide that it is a challenge to cope with regression testing: verifying the new changes, testing the old functionality, tracking existing defects and logging new ones.

• 100% Automation: It is challenging to automate the maximum number of scenarios possible, since it is practically impossible to automate each and every test case.

• Required Skill Set: Testers must have some programming knowledge to write the scripts and must also know how to use the automation tools well.

• Time to write automated tests: When a project has tight deadlines, it becomes difficult to write automated tests, review them and then execute them. The tester has to be very skilled to perform all this within the given time.

• Environment set up: To carry out testing of some applications, it is required to set up an environment. There may be tools that are required or preconditions to fulfill, all of which need to be set up properly to get accurate results.

Limitations

As with most forms of automated testing, setting a regression-testing program on autopilot is not a surefire solution, and some conscious oversight and input is generally still needed to ensure that tests catch all the bugs they should. When testers have the exact same suite of tests running repeatedly, night after night, the testing process itself can become static. Over time, developers may learn how to pass a fixed library of tests, and then their standard array of regression tests can inadvertently end up not testing much of anything at all. If regression testing becomes too automated, the whole point of doing it can backfire. It can end up guaranteeing a clear software-development trajectory for a development team while unwittingly ignoring components of the application, letting end users stumble upon undetected glitches at their own peril. Walking along a single path of least resistance is easier than stopping to sweep the entire application after each new step, but it is worth the effort to take regression testing all the way by frequently scanning a little further afield, complementing automation with some manual tests.

2.3.1 Automated Web Application Testing Techniques

Model-based testing

Model-based testing is a software testing technique in which the test cases are derived from a model that describes the functional aspects of the system under test. Its main purpose is to create a model of the application. The test cases are derived on the basis of the constructed model and are generated according to either the all-statement or all-path coverage criterion [LDD14]. The generated test case suite includes inputs, expected outputs and the necessary infrastructure to execute the tests automatically. This technique depends on three key factors: the notation used for the data model, the test-generation algorithm, and the tools that generate the supporting infrastructure for the tests.


Mutation Testing

Mutation Testing is a fault-based testing technique based on the assumption that a program is well tested if all simple faults are predicted and removed; complex faults are coupled with simple faults and are thus detected by tests that detect simple faults. In this form of testing, some lines of code are randomly changed in a program to check whether the test cases can detect the change. It is aimed at detecting the most common errors that typically exist in a Web application and is mainly intended to ensure that testing has been done properly and also to cover additional faults [LDD14].

Scanning and Crawling

In scanning and crawling techniques, a Web application is injected with input data that may result in malicious modifications of the database if not detected. These techniques are mainly intended to check the security of Web applications, while aiming to improve the overall security of a Web site [LDD14]. In order to achieve page coverage, testing tools are typically based on Web crawling. They can automatically navigate links starting from a given URL and use automated input generation techniques to process forms [MTT+11].

Random Testing

In random testing, random inputs are passed to a Web application, mainly to check whether the Web application functions as expected and can handle invalid inputs [LDD14]. Actions are performed randomly, without knowledge of how humans use the application. This form of testing is good for finding system crashes, is independent of GUI updates and requires no effort in generating test cases. However, it is difficult to reproduce the errors found because of its randomness, which makes it unpredictable.

Fuzz Testing

Fuzz testing is a special type of random testing, where boundary values are chosen as input to test that the Web site performs appropriately when rare input combinations are passed as input [LDD14].

Fuzzing is generally an automatic or semi-automatic process which involves repeatedly manipulating the target software and providing it with data to process. This process can be divided into identifying the target, recognizing the inputs, generating fuzzing data, executing the fuzzing data, monitoring abnormalities and determining availability [LDLZ13].

Concolic Testing

Concolic Testing is a hybrid software verification technique that, similarly to random and fuzz testing, aims to cover as many branches as possible in a program. In this form of testing, random inputs are passed to a Web application to discover different branches through the combination of concrete and symbolic execution [LDD14]. This approach addresses the problem of redundant executions and increases test coverage [SMA05].

Capture & Replay

C&R tools have been developed as a mechanism for testing the correctness of interactive applications with graphical user interfaces. Using a capture and replay tool, a quality-assurance person can run an application and record the entire interactive session [PRPE13]. The tool records all the user’s events, such as the keys pressed or the mouse movements, in a log file. Given that file, the tool can then automatically replay the exact same interactive session any number of times without requiring a human user. By replaying a given log file on a changed version of the application, capture & replay tools support fully-automatic regression testing of graphical user interfaces [LCRT13]. However, the generated tests are often brittle and carry high maintenance costs.

2.4 Software Testing Metrics

As time proceeds, software projects become more complex because of increased lines of code resulting from added features, bug fixes, etc. Also, tasks must be done in less time and with fewer people. Complexity over time has a tendency to decrease the test coverage and ultimately affect the quality of the product. Other factors involved over time are the overall cost of the product and the time in which to deliver the software. Carefully defined metrics can provide insight into the status of automated testing efforts [DGG09].

In software testing, a metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute. Most software testing metrics fall into one of three categories [DGG09]:

• Coverage: meaningful parameters for measuring test scope and success.

• Progress: parameters that help identify test progress, to be matched against success criteria. Progress metrics are collected iteratively over time. They can be used to graph the process itself (e.g., time to fix defects, time to test, etc.).

• Quality: meaningful measures of testing product quality, such as usability, performance, scalability, overall customer satisfaction and defects reported.

2.4.1 Automated Software Testing Metrics

Automated testing metrics are used to measure the past, present and future performance of the implemented automated testing process and related efforts and artifacts. They serve to enhance and complement general testing metrics, providing a measure of the automated software testing coverage, progress and quality, instead of replacing them. These metrics should have clearly defined goals for the automation effort and relate to its performance in order to be meaningful [DGG09].


Some metrics specific to automated testing are as follows:

• Percent automatable: It can be defined as the percentage of a given set of test cases that is automatable. This can be represented by the following equation:

PA(%) = ATC / TC = (No. of test cases automatable) / (Total no. of test cases)

• Automation progress: It refers to the number of tests that have been automated as a percentage of all automatable test cases. This metric is useful to track during the various stages of automated testing development.

AP(%) = AA / ATC = (No. of test cases automated) / (No. of test cases automatable)

• Percent of automated testing coverage: It determines what percentage of test coverage the automated testing is actually achieving. Various degrees of test coverage can be achieved, depending on the project and defined goals. Together with manual test coverage, this metric measures the total completeness of the test coverage and can measure how much automation is being executed relative to the total number of tests. However, it does not say anything about the quality of the automation. For example, 1,000 test cases executing the same or similar data paths may take a lot of time and effort to execute, but they do not equate to a larger percentage of test coverage. The goal of this metric is to measure its dimension, instead of the effectiveness of the testing taking place [DGG09].

PATC(%) = AC / C = (Automation coverage) / (Total coverage)
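As a worked illustration with hypothetical figures: if 80 out of 100 designed test cases are deemed automatable, then PA = 80/100 = 80%; if 60 of those 80 have already been automated, then AP = 60/80 = 75%.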

2.5 Tools Survey

2.5.1 Automated Software Testing Tools

In an era of highly interactive and responsive software processes, test automation is frequently becoming a requirement for software projects. Test automation means using a software tool to run repeatable tests against the application to be tested, and there are a number of commercial and open source tools available for assisting with the development of test automation.

2.5.1.1 Selenium

Selenium is an open source set of different software tools, each with a different approach to supporting test automation across different browsers and platforms [HK06]. The entire suite of tools results in a rich set of testing functions specifically geared to the needs of testing web applications of all types [BKW09]. These operations are highly flexible, allowing many options for locating UI elements and comparing expected test results against actual application behavior [Sel].

The tools and APIs that Selenium includes are Selenium IDE, Selenium Remote-Control, Selenium WebDriver, and Selenium Grid.


• Selenium IDE: Firefox plugin which allows users to record and play back actions in the browser. Scripts are recorded in Selenese, a special test scripting language for Selenium that provides commands for performing actions in a browser (e.g., click a link, select an option) and for retrieving data from the resulting pages.

• Selenium Remote-Control: The first tool in the Selenium project that allowed automation of web applications in browsers. It has been deprecated, although it is still functional in the project, and WebDriver is now the recommended tool for browser automation.

• Selenium WebDriver: The Selenium project’s compact object-oriented API, which refers to both the language bindings and the implementations of the individual browser-controlling code. It consists of a set of libraries for different programming languages and drivers which can automate actions in browsers.

• Selenium Grid: It allows automated tests to be run remotely on multiple browsers, and on other machines in parallel.
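To make the surveyed API concrete, the following is a minimal sketch of driving a browser through the Selenium WebDriver C# bindings, the bindings later adopted in this work; the URL and element locators are hypothetical placeholders, not part of any surveyed tool's documentation.

// Minimal sketch of the Selenium WebDriver C# bindings.
// The URL and locators below are hypothetical placeholders.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

class WebDriverSketch
{
    static void Main()
    {
        // Launch a Firefox instance controlled by WebDriver.
        IWebDriver driver = new FirefoxDriver();
        try
        {
            driver.Navigate().GoToUrl("http://example.com/login");

            // Locate UI elements through different locator options.
            driver.FindElement(By.Id("username")).SendKeys("demo");
            driver.FindElement(By.Name("password")).SendKeys("secret");
            driver.FindElement(By.CssSelector("button[type='submit']")).Click();

            // Compare an expected result against actual application behavior.
            Console.WriteLine("Page title after login: " + driver.Title);
        }
        finally
        {
            driver.Quit(); // always release the browser
        }
    }
}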

2.5.1.2 Watir

Watir (Web Application Testing in Ruby) is an open-source family of Ruby libraries for automating web browsers that consists of three projects, also called gems, which are Watir, Watir-Classic and Watir-Webdriver [Wat].

• Watir: This gem will load either Watir-Classic or Watir-Webdriver, based on the browser and operating system being used.

• Watir-Classic: This is the original Watir gem, which drives Internet Explorer.

• Watir-Webdriver: This gem allows the driving of additional browsers, like Chrome and Firefox. It is an API wrapper around the Selenium-Webdriver gem.

Watir is designed to mimic a user’s actions, which means that these same directions can be used when creating an automated script.

Understanding the gem relationship is crucial because, when adding functionality to Watir, the code may need to consider which browser/gem is being used, since there are some API differences between the Watir-Classic and Watir-Webdriver projects, despite both striving to be API compatible through a common specification called Watirspec.

2.5.1.3 Sahi

Sahi is an open-source testing tool that uses JavaScript to execute events on the browser, with the ability to record and play back scripts for any web application on any browser and any operating system. Some of the features of Sahi are in-browser controls, an intelligent recorder, text-based scripts, AJAX support, and inbuilt reports and logs [NBN14]. Sahi runs as a proxy server which intercepts traffic from the web browser and records the web browsing actions. It injects JavaScript to access elements in the web page, which makes the tool independent of the website or web application [Sah].

Sahi also has a proprietary version called Sahi Pro, which has all the features of Sahi OS, plus it stores reports in a database, takes snapshots, and has custom report generation and a script editor to create functions.

2.5.1.4 DalekJS

DalekJS is a UI testing tool that uses a browser automation technique in which the WebDriver JSON-Wire protocol is used to communicate with the browsers. Tests are JavaScript-based and can check page properties such as title and dimensions, as well as perform actions such as clicking links and buttons and filling forms. DalekJS is still under development and is not recommended for production use by its creators [Gol].

2.5.2 Test Logging and Reporting Tools

After the testing cycle, it is important to provide information by communicating the test results and findings to the team so that risks can be assessed. There are different types of tools to handle this issue, such as test report tools, test frameworks and test management tools, which can be integrated with automated testing tools. What follows is a description of specific tools for each one of those types.

2.5.2.1 Allure

Allure Framework is a flexible, lightweight, multi-language test report tool with the possibility to add screenshots and logs. It provides a modular architecture and web reports with the ability to store attachments, steps, parameters, among others.

2.5.2.2 Mocha

Mocha is an open source JavaScript test framework running on Node.js, featuring browser support and asynchronous testing; it runs tests serially, allowing for flexible and accurate reporting, while mapping uncaught exceptions to the correct test cases [Moc]. Mocha has multiple add-ons that allow the framework to be used with most JavaScript assertion libraries and to perform headless testing, test coverage, additional interfaces and reporters.

2.5.2.3 PractiTest

PractiTest is a proprietary test management tool that is able to manage requirements, tests and issues, as well as to show the traceability between them throughout the project’s lifecycle. It integrates with bug tracking, automation and continuous integration tools. Additionally, it is customizable enough to fit specific processes and can automatically generate reports through the use of dashboards.

2.5.2.4 Serenity BDD

Serenity BDD is an open source reporting library that enhances the writing of automated acceptance criteria in order to make them more maintainable and better structured. It also produces rich, meaningful test reports that describe the test results and what features have been tested, documenting what was executed in a step-by-step narrative format that includes test data and screenshots for web tests.

2.6 Conclusions

In this chapter, an analysis of the state of the art of automated Web application testing was made. It is clear that software testing is a vital part of the software development process and that more time is needed to guarantee the quality of the software. Consequently, test automation is becoming increasingly important. With the increased usage of Web applications, and the fact that these applications are unique compared to traditional desktop applications, a number of testing techniques were designed for automated Web application testing, along with specific testing metrics to better evaluate and measure the automated testing process.

A survey of software tools for test automation was performed in order to study frameworks that could be used in, or influence, the design and implementation of the prototype. The main conclusions of this survey are discussed in Section 3.2.

Chapter 3

Methodology

As explained in the introduction, it is clear that the testing phase in software systems development is increasingly time consuming, but also increasingly important to guarantee product quality.

Regarding the creation of tests, as seen in Section 2.5, there are tools that can capture and play back user actions when interacting with a GUI. These tools allow the fast creation of test cases, but the tests generated with this technique are often brittle, failing at times due to timing issues. The tests often need to capture the same actions to reach different parts of the application, which increases maintenance costs. For example, most web applications require a user to authenticate in order to access their services. With capture & replay tools, each test case would need to repeat the login steps in order to reach the component of the application whose GUI is under test. If the application changed in a way that made one of those test steps fail, each test case that uses that test step would have to be fixed separately.

In addition, managing the test environment can become a time consuming task, especially when it involves recorded test cases, which, as discussed above, can be easily created but often carry high maintenance costs.

To be able to create a prototype that can ease the automated testing process, a study of the technologies to aggregate is necessary. A number of requirements were decided upon for the prototype, which filtered the technologies that could be used.

3.1 Requirements

1. Open-source Technologies: The technologies gathered need to be open source, because that allows one to freely experiment with the source code and tinker with it in order to change or create new features. Furthermore, the cost of proprietary software could be an issue.

2. Capture & Replay feature: The C&R feature is essential to create a prototype that can record and play back user actions.


3. Continuous Integration: CI is required because it allows the prototype to run full end-to-end automated testing in a single automated build process that can be run on demand or periodically.

4. Test Logging and Reporting: The prototype needs to provide a good representation of the tests’ execution output by identifying the step of the test where the failure occurred, through the use of a screenshot, and its cause.

3.2 Technologies Comparison

3.2.1 Automated Web Application Testing Tools

There are many web application testing tools available. These tools differ in functionality, features and usability, although the core functions are similar, as seen in Table 3.1.

Feature            Selenium                  SAHI OS                   Watir    DalekJS    Test Studio
Open Source        Yes                       Yes                       Yes      Yes        No
Capture & Replay   Yes                       Yes                       Yes      No         Yes
Inbuilt Logs       Yes                       Only in the proprietary   No       No         Yes
                                             version (SAHI Pro)
Inbuilt Reports    Only with support from    Yes                       No       No         Yes
                   other open source
                   software

Table 3.1: Web Automated Testing Tools Comparison

• Telerik Test Studio has a myriad of features, such as a comprehensive yet cost-effective automated testing suite and mobile application support. However, the fact that it is not open source makes it hard to build upon in order to implement new, innovative features.

• DalekJS does not implement the Capture & Replay feature, which is crucial to this work. Additionally, it is still in development and not recommended for production use by its creators.

• Watir does not have inbuilt logs, but a log file can be created. The main issue is that there are not a lot of frameworks and tools that compensate for its lack of features.

• SAHI OS does fulfill most of the requirements, as a log file can be created. Despite this, it does not allow users to view the recorded steps in its controller, lacks documentation and does not have an active community.


• Selenium’s user action recorder (Selenium IDE) only works in Firefox, although the action playback feature works on the most popular web browsers, and it does not have inbuilt reports. However, it is extensively documented, and the Selenium WebDriver framework forms the basis for many other testing frameworks and tools.

Although Selenium has its issues, Selenium WebDriver offers a lot of flexibility. There are a great number of different frameworks and tools that compensate for its lack of features. It is well established when it comes to automated testing of web applications, as shown by the extensive available documentation and active community [CSRM14], which made it stand out as the clear choice.

3.2.2 Continuous Integration Tools

Regarding Continuous Integration, there are a few powerful open source tools available, and they vary in their focus despite having similar features, as seen in Table 3.2.

Feature        Jenkins                       Travis CI              Buildbot    Strider
Open Source    Yes                           Yes                    Yes         Yes
Integration    Integrates with every major   Git                    -           Git
               tool thanks to plugins
Platform       Cross-platform                Hosted, accessed       Python      Node.js
                                             on GitHub

Table 3.2: Continuous Integration Tools Comparison

• Jenkins (formerly known as Hudson) is a continuous integration and continuous delivery application. It is relatively easy to install and configure, has a rich plugin ecosystem and is extensible enough to allow parts of the tool to be extended or modified to suit every project.

• Travis CI is a hosted service used to build and test projects continuously. It is easy to install and configure. However, it is not extensible and requires the project to be a Git project hosted on GitHub.

• Buildbot is a framework for automating software build, test, and release processes. It is written in Python and based on the Twisted framework. It does not have a GUI; instead, it works through commands in a terminal or Python scripts.

• Strider is an open source Continuous Deployment/Continuous Integration platform that is written in Node.js/JavaScript and uses MongoDB as a backing store. It requires programming effort to set up and is customizable through plugins, which increases that effort.

The two main open source CI tools are Travis CI and Jenkins. On the one hand, both Buildbot and Strider involve programming effort in their setup and have little documentation available. Strider in particular can be customized, but its plugins extend its UI and backing store rather than integrating it with other tools.

On the other hand, Jenkins and Travis CI have extensive documentation available and an active community that helps newcomers and reports issues to the developers. However, Jenkins ended up as the tool chosen to implement the CI features required for this prototype. Even though it is comparatively harder to install and configure than Travis CI, its extensibility is a great boon. It allows for the creation or modification of plugins that grant better user interaction with the test environment’s resources and better readability of the test results and the project status.

3.2.3 Additional Technologies

Having decided on the frameworks for the two main components of the prototype, C&R and CI, the following section addresses the technologies chosen to integrate both of the main components. That integration is visually represented in Figure 3.1.

Figure 3.1: Integration between technologies

Selenium’s C&R component, Selenium IDE, uses its own language-independent convention called Selenese. It is used to specify the commands and any other parameters generated from the user’s actions. However, in order for the test case to use the Selenium WebDriver API, it is necessary to export these commands into an object-oriented programming language. The language chosen was C#, along with the unit testing framework NUnit to run the tests. Note that these generated UI tests are integration tests and not unit tests: the tests attempt to verify that elements on the interface behave as expected, instead of following a series of objective logic tests to confirm the business logic.
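As an illustration of this export, the following is a hedged sketch of how a few recorded Selenese commands might look once translated into a C# NUnit test. The page, locators and assertion are hypothetical; the prototype's actual translation rules are the ones formalized in Chapter 4.

// Hypothetical sketch: recorded Selenese commands (shown as trailing
// comments) translated into a C# NUnit test driving Selenium WebDriver.
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class ExportedSearchTest
{
    private IWebDriver driver;

    [SetUp]
    public void SetUp()
    {
        driver = new FirefoxDriver();
    }

    [Test]
    public void SearchShowsResults()
    {
        driver.Navigate().GoToUrl("http://example.com/");      // open          /
        driver.FindElement(By.Name("q")).SendKeys("selenium"); // type          name=q  selenium
        driver.FindElement(By.Id("btnSearch")).Click();        // clickAndWait  id=btnSearch
        StringAssert.Contains("selenium", driver.Title);       // assertTitle   *selenium*
    }

    [TearDown]
    public void TearDown()
    {
        driver.Quit();
    }
}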

Selenium IDE is a plugin for browsers based on the Mozilla platform. As a result, the development of new functionality for the IDE has to be done through Mozilla plugins. It should be noted that publishing developed plugins for the most popular Mozilla browser, Mozilla Firefox, requires the user to use the Firefox Developer Edition or Firefox Nightly instead of the standard version [Fir].

In the case of the CI component, the use of a Visual Studio C# NUnit project allows the automation of the test cases’ build process. It searches for errors that occurred during the test case’s creation and its exportation.

2 version [Fir]. In the case of the CI component, the use of a Visual Studio C# NUnit project allows the

4 automation of the test cases’ build process. It searches for errors that occurred during the test case’s creation and in its exportation.

6 3.3 Test Cases

In GUI Testing, the generation of test cases has to deal with the domain size and with sequences

8 because GUI’s have many operations to test in comparison to Command Line Interface systems. A small program can have hundreds of GUI operations and in a large program, the number of

10 operations can easily be an order of magnitude larger. Additionally, some functionality of the GUI system may only be fulfilled with a specific sequence of GUI events. As an example, to open a file

12 a user may have to click a "File" button in a toolbar, then select the "Open file..." operation and use a dialog box to specify the file.

14 The GUI test cases involve testing UI components and attributes such as:

• Size, position, width and height of elements.

• Error messages that are displayed after an action.

• Screen in different resolutions.

• The presence and availability of fields.

• Text found on the web page’s title and body.

The following sections describe the methods applied in the prototype that address the generation of test cases for GUI testing.

3.3.1 Locators

Due to the fact that web applications are dynamic and are the result of different technologies, it is a challenge to guarantee the robustness and durability of the recorded test cases. A small change to the GUI can render a test case invalid.

In the standalone Selenium IDE, it is possible to locate elements through their ID or NAME, or by CSS, XPath or DOM.

The first two locators allow Selenium to test a UI element independently of its location on the page, so if the page’s structure and organization change, the test will still pass. However, the use of dynamic IDs, where the ID of an element changes whenever the page generates the element, is an issue, because the test will not be able to identify the element on different playbacks. The NAME locator does not have this issue, but it is a weaker locator, since it is possible to have multiple NAME attributes with the same value and it is less used than the ID attribute.


One of the main reasons for using the other alternatives is when there is no suitable ID or NAME attribute for the element to locate. The CSS locator uses the CSS selectors that bind style properties to elements in the document, while DOM’s location strategy takes JavaScript that evaluates to an element on the page, which can simply be the element’s location. The XPath locator can be absolute or relative. Both support and extend beyond the methods of locating by ID or NAME attributes. However, as absolute XPaths contain the location of all elements from the HTML page’s root, they are likely to fail with only the slightest adjustment to the web application. On the other hand, relative XPaths locate the element based on the relationship between the target element and a parent element with an id or name attribute, which is much less likely to change. This opens up new possibilities, such as locating the third check-box on the page. Specific locators are required to handle these issues. They must allow the recorder to specify elements in a way that they are correctly identified during all test runs.
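The sketch below places these locator strategies side by side in the Selenium WebDriver C# API; the page and element names are hypothetical and serve only to contrast the robustness of each strategy.

// Hypothetical, side-by-side examples of the locator strategies above.
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

class LocatorSketch
{
    static void Main()
    {
        IWebDriver driver = new FirefoxDriver();
        driver.Navigate().GoToUrl("http://example.com/settings");

        // ID: position-independent and robust, unless IDs are generated dynamically.
        driver.FindElement(By.Id("saveButton"));

        // NAME: also position-independent, but weaker, as values need not be unique.
        driver.FindElement(By.Name("quantity"));

        // CSS selector: useful when no suitable id or name attribute exists.
        driver.FindElement(By.CssSelector("form .error-message"));

        // Absolute XPath: anchored at the page root, so the slightest
        // restructuring of the page is likely to break it.
        driver.FindElement(By.XPath("/html/body/div[2]/form/input[3]"));

        // Relative XPath: anchored on a parent with a stable id attribute,
        // e.g. the third check-box inside a known form.
        driver.FindElement(By.XPath("//form[@id='settings']//input[@type='checkbox'][3]"));

        driver.Quit();
    }
}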

3.3.2 Recurrent Steps

In web applications, it is common to repeat a set of actions to reach a certain point of the application, such as logging in, viewing the user profile or going to settings.

Recording user actions would result in several steps being repeated across different test cases, and, in case of a failure in one of those steps, every one of those test cases would have to be fixed separately. To avoid this problem, the development of a specific type of file will allow the tester to record and store sequences of actions that are commonly used to navigate through the GUI. When recording a test case that uses those steps, the user can then reference them as a single command of the test case, as exemplified in the following figure.

Figure 3.2: RecurrentSteps case used by multiple test cases
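In the exported C# code, the effect of this mechanism can be pictured as a shared sequence of steps referenced by several test cases, as in the hypothetical sketch below; the prototype records and stores these steps in its own file format rather than as a hand-written helper method.

// Hypothetical sketch of the RecurrentSteps idea in exported C# NUnit code:
// the login sequence exists once and is referenced by several test cases,
// so a change to the login flow is fixed in a single place.
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class ProfileAndSettingsTests
{
    private IWebDriver driver;

    [SetUp]
    public void SetUp() { driver = new FirefoxDriver(); }

    // Recurrent steps shared by the test cases below.
    private void Login(string user, string password)
    {
        driver.Navigate().GoToUrl("http://example.com/login");
        driver.FindElement(By.Id("user")).SendKeys(user);
        driver.FindElement(By.Id("pass")).SendKeys(password);
        driver.FindElement(By.Id("loginButton")).Click();
    }

    [Test]
    public void SeeUserProfile()
    {
        Login("demo", "secret"); // reference to the recurrent steps
        driver.FindElement(By.LinkText("Profile")).Click();
        StringAssert.Contains("Profile", driver.Title);
    }

    [Test]
    public void GoToSettings()
    {
        Login("demo", "secret"); // same steps reused, not re-recorded
        driver.FindElement(By.LinkText("Settings")).Click();
        StringAssert.Contains("Settings", driver.Title);
    }

    [TearDown]
    public void TearDown() { driver.Quit(); }
}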


3.3.3 Implicit waits

Waiting is having the automated task execution elapse a certain amount of time before continuing with the next step. There are two types of waits in Selenium: explicit and implicit waits.

An explicit wait is code defined by the user to wait for a certain condition to occur before proceeding further in the code. The worst case of this is the use of Thread.Sleep(), which sets the condition to an exact period of time to wait. The Selenium WebDriver API, on the other hand, provides methods that wait up to a maximum amount of time.

An implicit wait is used to tell WebDriver to poll the DOM of the GUI for a certain amount of time when trying to find an element or elements that are not immediately available.

In order to avoid long waiting times or unnatural user actions during test recording, such as explicitly specifying that a test must hold until an element is present, the generated test case implicitly waits for page loads and for the presence of elements. For example, while recording a test case, if a user clicks on an element that prompts another page, a single click command is recorded. Yet the exported C# test case code must first check if the element is present in the GUI. After confirming that the element is present, the click is simulated, and the code then checks whether the page is loaded before continuing the test.

This approach is designed to:

• Avoid burdening the user with having to remember to explicitly add commands that wait for an element.

• Use the Selenium WebDriver API to implement different implicit timers for different occasions, such as waiting for input elements or error messages to be visible.
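As a concrete illustration, the sketch below contrasts the two kinds of waits in the Selenium WebDriver C# bindings; the page and element names are hypothetical, and the method-style timeout call reflects the Selenium versions contemporary with this work.

// Hypothetical sketch contrasting implicit and explicit waits in C#.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.Support.UI;

class WaitSketch
{
    static void Main()
    {
        IWebDriver driver = new FirefoxDriver();

        // Implicit wait: WebDriver polls the DOM for up to 10 seconds
        // whenever an element is not immediately available.
        driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(10));

        driver.Navigate().GoToUrl("http://example.com/");
        driver.FindElement(By.Id("next")).Click(); // waits implicitly if needed

        // Explicit wait: poll for up to 5 seconds until a condition holds,
        // instead of sleeping for a fixed period with Thread.Sleep().
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(5));
        wait.Until(d => d.FindElement(By.Id("result")).Displayed);

        driver.Quit();
    }
}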

3.4 Test Environment

The test environment used by the prototype automates its own management and supports test execution. It aids the software testing process by providing a stable and usable environment to run the test cases. This includes building the test cases, selecting which test cases to run and managing both the test results and the screenshots. The decision of which CI tool to integrate into the prototype took into account the interaction with the test environment that it provided for the user. As mentioned in Section 3.2, the tool chosen was Jenkins, which allows the development of custom features for its web UI thanks to its extensibility through the use of plugins.

3.4.1 Selection of Test Cases

For the purpose of running tests, it was required to allow the user to select the test cases to run instead of always having to run all the generated tests. Doing so requires the development of a Jenkins plugin that creates a test selector page integrated into its web UI.


This page will consist of a list backed by a file that stores information regarding the test cases in the test environment, including those that were generated but not executed, and groups them by their test suite. The user will be able to select specific test cases or whole test suites to execute.

The test selector page is also used to visualize which tests failed and to see their associated screenshot, which is taken moments before the unsuccessful test exits.

3.4.2 Automation

Regarding the automation of the test environment’s management, the process will be conducted during the Jenkins build operation. This process is separated into two phases, Build and Post-Build, and is configured in a way that executes a set of developed programs in a series of steps.

Figure 3.3: Jenkins Build Process

In the Build phase, the first step is the automatic compilation of the test cases selected by the user in the Visual Studio C# NUnit project. Following their execution, a file with all the tests’ results will be generated.

During the second phase, the generated file with the test results is used to update the file that feeds the developed test selector page in the Jenkins interface, updating its list, as mentioned in the previous section. In addition, an e-mail notification is sent to the configured addresses in a list of recipients. This notification is sent only on certain occasions, such as when a build fails, becomes unstable or returns to stable.

3.4.3 Test Reports

Reporting is an important part of any test execution, as it helps the user understand the result of the test execution, the point of failure and the reasons for the failure. Logging, on the other hand, is important for analyzing the execution flow or for debugging in case of failures.

The test logging and reporting capabilities of the prototype consist of Jenkins plugins. These plugins use the test results file generated during the Jenkins builds to provide a graphical visualization of the historical test results. Additionally, the Jenkins web UI is used for viewing test reports, tracking failures, visualizing the test results trend and accessing the test logs.

3.5 Conclusions

The issue of minimizing the maintenance costs of automated C&R tests was separated into two main approaches. The first focuses on the creation of test cases, maintaining the advantages of C&R-generated tests while increasing the tests' robustness. The prototype does so by implementing specific locator strategies, avoiding timing issues through the use of implicit waits and resorting to a new method called RecurrentSteps.

The other approach deals with the test environment's setup, interaction and automated management by resorting to continuous integration features. In addition, it provides a web UI that allows the user to select, through the CI tool, the test cases to run, and offers graphical visualization of the tests' results.

Tools to develop both approaches were chosen, as well as complementary technologies to integrate the two main components that implement the methods and techniques analyzed in this chapter.


Chapter 4

Implementation

In this chapter, a description of the procedures and methods used in this research work and of the implementation of the developed prototype is conducted, following the more theoretical analysis in chapter 3. It begins by presenting a low-level technical approach to the prototype's incorporated methods, describing the structure of the integrated technologies and how they interact with one another.

For the purpose of specifying the pieces of the puzzle and how they fit together, the next section analyzes the prototype's architecture. It details every component, including the changes made to existing technologies and the design of the new programs, through the use of implementation models.

After presenting the prototype's design, section 4.3 focuses on describing the main contributions and changes implemented during this research work for each main component.

The chapter concludes with a discussion of the methods implemented and the results obtained.

4.1 Method

The development of a specialized automated web testing tool entailed a set of methods to structure the test cases and organize the test environment.

4.1.1 Test Case Structure

Although the test cases were written in C# and run with NUnit, the prototype generates test cases from the recorded user actions through its C&R component, which uses Selenium's own language-independent convention (Selenese) to represent the commands received.

To structure the test cases, rules to translate from Selenese to C# were formalized, as seen in table 4.1. These rules are used by the developed C# formatters (see section 4.3.1, under "C# formatters").


Formalized general rules

• Setup test: The generated C# NUnit test case specifies the browser driver in which the tests will be executed, its configuration and two timers used in different circumstances. One timer is used on page loads and is defined to wait for a maximum of 60 seconds. The other timer is used whenever the test checks the presence of elements; it waits for a maximum of 10 seconds. A detailed implementation of this process is discussed in section 4.3.1. A skeleton illustrating these rules is sketched after table 4.1.

• Tear down test: Moments before the end of the test, the browser window is safely closed, followed by asserting whether any error occurred during test execution and publishing the results in a .xml file.

• Implicit waits: Exported commands, such as click and type, use the previously mentioned timers. They first check the presence of the command's target element and then simulate its associated action. Finally, they check whether the page has loaded, and the test proceeds.

User action                                   Selenese commands        Parameters       NUnit C#
Click                                         click, clickAndWait      Target           Listing B.1
Type                                          type, sendKeys           Target, Value    Listing B.2
Open web page                                 open                     Target           Listing B.3
Check if element is present                   isElementPresent         Target           Listing B.4
Select element from list independently
from its position                             addSelection             Value            Listing B.5

Table 4.1: Rules to handle user actions in Selenese commands and in NUnit C#
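Below is a minimal sketch of the skeleton a generated test case might follow under these rules. The class name, browser driver and recorded commands are hypothetical; the actual code is produced by the formatters described in section 4.3.1.

    using System;
    using System.Text;
    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;
    using OpenQA.Selenium.Support.UI;

    [TestFixture]
    public class ExampleGeneratedTest
    {
        private IWebDriver driver;
        private WebDriverWait pageLoadWait;   // waits up to 60 s on page loads
        private WebDriverWait elementWait;    // waits up to 10 s for element presence
        private StringBuilder verificationErrors;

        [SetUp]
        public void SetupTest()
        {
            // Browser driver and configuration in which the test will run.
            driver = new FirefoxDriver();
            pageLoadWait = new WebDriverWait(driver, TimeSpan.FromSeconds(60));
            elementWait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
            verificationErrors = new StringBuilder();
        }

        [TearDown]
        public void TeardownTest()
        {
            // Safely close the browser window, then assert that no error
            // occurred during the test's execution.
            driver.Quit();
            Assert.AreEqual("", verificationErrors.ToString());
        }

        [Test]
        public void RecordedActionsTest()
        {
            // Translated Selenese commands are inserted here, each one
            // guarded by the implicit waits described above.
            driver.Navigate().GoToUrl("http://example.com/app");  // hypothetical URL
            elementWait.Until(d => d.FindElements(By.Id("login")).Count > 0);
            driver.FindElement(By.Id("login")).Click();
        }
    }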

4.1.2 Test Case Life Cycle

The prototype works with multiple technologies, which require the test case to undergo changes from the moment it is generated until its execution. The main steps are as follows:

1. Record user actions: The first step to create a test case is to use the Selenium IDE to record user actions as the user navigates through the web application under test. These actions are represented as Selenese commands.

2. Extract the test case commands to a C# file: Using a developed C# NUnit formatter, the Selenese test case is exported into a C# file. Once complete, the exported test case is transferred to a Jenkins workspace, ready to be run or edited using Selenium WebDriver. With regard to the test cases that use recurrent steps, a detailed description of the process is presented in section 4.3.1, under "C# formatters".

3. Run the test case through a CI tool: After setting up the test environment with Continuous Integration using Jenkins, the test case is executed using the NUnit console and the results are saved in an XML file, which can be analyzed through the use of different test report tools.

4.1.3 Test Results file structure

Following the tests' execution, the results are stored in a generated XML file. The file structures the test cases in different types of XML components that contain the test results information. These types of components are ordered hierarchically and adopt the following structure.

1. Assembly: At the top is the Assembly component, which includes information regarding the NUnit project where the tests are located and contains all Namespace components.

2. Namespace: These components are derived from the user-specified test suite, which groups the test cases in other nested namespaces or in a TestFixture component.

3. TestFixture: It corresponds to the NUnit test case class and stores the respective TestCase component.

4. TestCase: It is the basic component of the test. It contains an attribute that specifies a unique identifier for the respective test case. In case of a failed test, this component also nests a failure element with the message and stack trace of the failure that caused the test's unsuccessful execution.

As an example of this hierarchical behavior, suppose the user creates two test cases, one with the test suite "appTests.navigationTests" and the other with "appTests.authenticationTests". The test results XML file generated from executing these two tests will consist of an assembly component with one namespace component called "appTests". This namespace will then have two children namespace components called "navigationTests" and "authenticationTests", each with its associated TestFixture and TestCase components.

Regarding the components' attributes, all components except TestCase specify a type attribute, which indicates the type of suite represented by the element (Assembly, Namespace, TestFixture). What follows is a description of the attributes that every type of component contains.

• name: The display name of the test as generated by NUnit.

• executed: Boolean variable that indicates whether the test was executed, independently of the test's outcome.

• result: The basic result of the test. May be Success, Failed, Inconclusive or Skipped.


• success: Boolean variable that specifies whether the test executed successfully or failed.

• time: The duration of the test in seconds, expressed as a real number.

• asserts: The number of asserts executed by the test.

Note that components with nested components sum the results of their children. For example, a component sums the time attributes of its nested components to update its own time. Additionally, if at least one of the nested components corresponds to a failed test, the component in question updates its attributes to indicate that it failed. A hypothetical fragment of such a file is sketched below.
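The following fragment illustrates the hierarchy and attributes just described. It is a hand-written approximation based on this description, not output captured from the prototype; names and values are illustrative.

    <test-suite type="Assembly" name="BuildSeleniumTests.dll" executed="True"
                result="Failed" success="False" time="8.413" asserts="3">
      <test-suite type="Namespace" name="appTests" executed="True"
                  result="Failed" success="False" time="8.102" asserts="3">
        <test-suite type="Namespace" name="navigationTests" executed="True"
                    result="Failed" success="False" time="4.870" asserts="2">
          <test-suite type="TestFixture" name="SearchPatients" executed="True"
                      result="Failed" success="False" time="4.870" asserts="2">
            <test-case name="appTests.navigationTests.SearchPatients.Run"
                       executed="True" result="Failed" success="False"
                       time="4.870" asserts="2">
              <failure>
                <message>Element not found: By.Id: login</message>
                <stack-trace>at ExampleGeneratedTest.Run() ...</stack-trace>
              </failure>
            </test-case>
          </test-suite>
        </test-suite>
      </test-suite>
    </test-suite>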

4.1.4 Test Selector file structure

As mentioned in section 3.4.1, a file is required to store the test cases' information and group them by test suite in a list, allowing the user to select them through an interface. For this purpose, a file with a .properties extension was created, which contains the following five properties:

• tests: The property's value is a JSON array that contains a JSON object for each test.

• enableField: The name of the field that indicates whether the test is enabled or not. If the value of the specified field is false for some test, then that test will not be shown at all.

• groupBy: The field by which the list groups the tests. It is related to the Namespace component in the test results file.

• showFields: The fields that will be shown in the tests' tree list.

• fieldSeparator: The character that separates the fields in the tests' tree list.

In addition to holding the JSON array, this file serves as a configuration file for the list in the Test Selector interface. The implemented interface can be seen in figure 4.20. A hypothetical example of the file follows.
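The sketch below shows what such a file might contain. The property names come from the description above; the values, including the JSON fields, are illustrative assumptions rather than the prototype's actual configuration.

    # Hypothetical tests.properties content
    tests=[{"name": "SearchPatients", "suite": "appTests.navigationTests", "enabled": true, "result": "Success"}]
    enableField=enabled
    groupBy=suite
    showFields=name,result
    fieldSeparator=|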

4.1.5 Test Environment Setup

For this research work, a test environment with continuous integration features was required in order to run full end-to-end automated testing in a single build process that could be started on demand or periodically, with minimal user action. To set up the test environment, Jenkins is used to create a workspace that structures the test environment and to automate the build process. Figure 4.1 illustrates the breakdown of the Jenkins workspace contents:

• Folder BuildSeleniumTests: The pre-built Visual Studio NUnit project that is used to build the exported tests recorded with Selenium IDE using the developed C# NUnit formatter.


Figure 4.1: Jenkins Workspace

• TestResult.xml: This XML file is created after the tests are run. It represents the tests' results and has information regarding their success, the reason for any failure and their corresponding test suites, used to organize the results.

• tests.properties: The purpose of the properties file is to allow the user to select the tests to run. It organizes the test results in a JSON array that is then integrated with the Jenkins web UI, as described in section 4.1.4.

• Folder lastTest: This folder contains an XML file similar to the TestResult file. It is used whenever the user selects only some tests to execute instead of all of them. The file stores the results of the latest test run, which are used to update the main TestResult.xml.

• Developed executables: These programs work with the aforementioned files to update the project status on each build and are executed in specific phases of the Jenkins project build operation.

These contents are used during the automated Jenkins Build operation, which is divided into two phases, Build and Post-Build.

During the first phase, the test cases exported from the Selenium IDE are moved to the pre-built NUnit project, then built and executed. During the tests' execution, only one test is run at a time. Each test prompts a browser to start interacting with the interface through the recorded user commands, closing it after the test's conclusion. After all selected tests are executed, an XML test results file is created or the existing one is updated.

In the Post-Build phase, the generated test results file is used to update the .properties file. Once updated, the test results are published in the Jenkins interface by means of a simple test reports tool. Additionally, if a build fails, becomes unstable or returns to stable, an e-mail is sent to a designated address with information regarding the project status.


4.2 Architecture

Before proceeding to a thorough analysis of the prototype's implementation, it is necessary to examine its design.

The prototype is separated into two main components, Selenium IDE and Jenkins, and elements were developed for each of them during this research work. Regarding the Selenium IDE, a Mozilla plugin, a file containing the designed commands, a file with extensions to the IDE and the C# NUnit formatters were created. Additionally, whenever a recorded test is exported, the IDE uses the saveTestCaseToProperties program, which is located in the Jenkins workspace. In the case of the Jenkins component, as explained in subsection 4.1.5, it uses developed programs to manage the test environment. In addition, a modified Tests Selector Jenkins plugin is used to facilitate the selection of the tests to be run and its integration with the properties file.

Figure 4.2: Overview of the developed system

What follows is an extensive analysis and description of the components developed for both Selenium IDE and Jenkins.

4.2.1 Selenium IDE

Selenium IDE Plugin - Test suite chooser

The test suite chooser is a Mozilla plugin that works with the Selenium IDE's toolbar. It uses an overlay to add an input element to the toolbar. This element stores an XML document with a serialized DOM of a tree view of test suites and RecurrentSteps, as observed in figure 4.16. This is possible thanks to a persist attribute, which maintains the DOM data when the window is closed and restores it when the window is re-opened.

As seen in the following model, the createBaseTreeDOM function is used to create a default tree view. It is used by the chooseTestSuite function on the first interaction with the input element to set up the tree view. The latter function is called when the user clicks on the input element in the toolbar. It summons a Dialog window containing the test suites' tree view, which can be interacted with through the functions in the chooseTestSuite class. These functions include the creation, selection and removal of test suites, the saving of a selected test suite so that it appears in the Selenium IDE's UI, and supporting functions to handle and validate the data.

Figure 4.3: Implementation model of the Test suite chooser plugin

Selenium extensions

Selenium IDE uses two types of extensions, namely Selenium Core extensions and Selenium IDE extensions. The latter are used to implement or modify builders for locators and commands.

Figure 4.4: Implementation model of the Command builders


As seen in figure 4.4, locators can be added through the add function, which takes as parameters the name of the locator builder and the function that is called to build the element locator. This function takes the HTML element as its argument and returns the element locator. Additionally, it uses the order variable to prioritize locators. Selenium IDE tries each locator's function in order until it finds one that identifies the correct element or until it reaches the end of the order array. Upon finding one, the locator is built and used.

The Selenium Core extensions, also called User extensions, are used to access Selenium's functions and modify them according to the prototype's requirements. Two main Selenium objects were worked with, the Recorder and Format objects, as observed in fig. 4.5. The former specifies the events handled by the Selenium IDE. The addEventHandler and removeEventHandler functions allow default handlers to be disabled and replaced with newly developed handlers. More information about the changed events can be found in section 4.3.1. The other function, findClickableElement, is used by the IDE each time the user left-clicks on the web application UI. It checks for clickable elements such as input elements, text links and elements with an onclick attribute. The inputTypes array variable contains strings from the HTML type attribute normally found in input elements. For example, it is used by the type event handler, which detects keys pressed by the user; the handler only checks input elements that require the user to type information and ignores buttons and similar elements like radio buttons and checkboxes.

The Format object is used by the Selenium IDE to save the recorded commands or export them through the use of a formatter. Both functions seen in the Format element in figure 4.5 were modified to save their generated test case files in a pre-determined folder located in the Jenkins contents. In addition, the exportContentAs function adds the specified test suite to the exported test case through the use of the updateNamespace function. After completing the test case's exportation, the export function also calls addToTestProperties, which executes the saveTestCaseToProperties executable, described in the topic that follows.

Figure 4.5: Implementation model of the Selenium IDE’s user extension


saveTestCaseToProperties executable

This program is located in the Jenkins workspace (see figure 4.1) and is used whenever a test case is exported through the Selenium IDE. It takes two parameters, the test case's name and its method name. It uses the test case's name to find the location of the exported file in the Jenkins workspace. It then calls the checkCommands function, which collects the referenced RecurrentSteps cases' commands. This function uses the addRecurrentSteps function to replace the references with the respective commands. A more detailed analysis of this process is described in section 4.3.1, under "C# formatters". As seen in fig. 4.6, the program uses a TestProperties class to structure the test case information required to identify it. This information is then transformed into a JSON object and added to the JSON array containing all exported tests. This array is located in the tests.properties file in the Jenkins workspace. A sketch of this last step follows figure 4.6.

Figure 4.6: Implementation model of the saveTestCaseToProperties executable
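The following is a minimal sketch of how the exported test's information might be appended to the JSON array in tests.properties. The class and property names loosely follow the implementation model; the use of Json.NET and the exact file handling are assumptions, not necessarily what the executable does.

    using System.Collections.Generic;
    using System.IO;
    using Newtonsoft.Json;

    public class TestProperties
    {
        public string Name { get; set; }
        public string MethodName { get; set; }
        public string Suite { get; set; }
        public bool Enabled { get; set; }
    }

    public static class TestPropertiesWriter
    {
        public static void Append(string propertiesPath, TestProperties test)
        {
            // The "tests" property holds a JSON array with one object per test.
            var lines = new List<string>(File.ReadAllLines(propertiesPath));
            int index = lines.FindIndex(l => l.StartsWith("tests="));

            var tests = JsonConvert.DeserializeObject<List<TestProperties>>(
                lines[index].Substring("tests=".Length));
            tests.Add(test);

            lines[index] = "tests=" + JsonConvert.SerializeObject(tests);
            File.WriteAllLines(propertiesPath, lines);
        }
    }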

Formatters

The developed formatters, the C# NUnit and RecurrentSteps formatters, are JavaScript scripts used by the Selenium IDE to translate the recorded Selenese commands into C# NUnit code. To do so, both take advantage of the Selenium WebDriver API (WDAPI class). The Driver and Element classes provide default functions to handle the commands. The former translates commands related to the browser, for example, navigating to a web page, finding UI elements and closing the browser window. The latter handles commands that work with elements found in the web application's GUI. New event handlers, and existing ones whose handling changed, are described in section 4.3.1 under "User action events". These functions return a string that contains the translated commands in C# code.


Figure 4.7: Implementation model of the C# NUnit Formatters

The C# test case files are created following the specification found in the options JSON variable, and this is where the developed formatters differ. In the C# NUnit formatter, the variable separates the test case file into three parts: the header, the test and the footer.

• Header: The first part of the file contains all the libraries necessary to run the test and the information required to identify the test case, specifically its test suite and method name. Additionally, it sets up the test by declaring its variables and implementing the setup and teardown functions, whose functionality is described in section 4.1.1.

• Test: This part of the file takes the translated commands in C# code and appends them to the test case's method function, which is run during the test's execution.

• Footer: The last part contains the implementations of the functions that handle the translated C# test commands. These functions use the Selenium WebDriver API to strengthen the C# commands with additional steps, including the handling of exceptions that occur during the test's execution and the taking of a screenshot of the GUI. In specific cases, such as the click command, the associated function first tries to perform its task through Selenium WebDriver functions; in case of failure, it then tries to achieve the same task through the execution of a JavaScript script, as seen in listing B.1 in appendix B.

The RecurrentSteps formatter, on the other hand, produces a file with only the translated commands in C# code.

The last component that supports the formatters is the SeleniumWebDriverAdaptor class. It is used to handle the Selenese command's type and redirect it to the corresponding function in the Driver variable. The three operations seen in fig. 4.7, addSelection, mouseDown and select, are associated with the new commands that were created. A description of these commands is given in section 4.3.1, under "User action events".

Use Cases

As seen in figure 4.8, the modified Selenium IDE component of the prototype allows testers to create test cases by recording their actions on the interface, to specify a corresponding test suite and to handle the generated test case by saving it in Selenese commands or exporting it to C# NUnit.

Figure 4.8: Use Case diagram of Selenium IDE

4.2.2 Jenkins

Tests Selector Plugin

The developed Jenkins plugin is the result of modifying the Tests Selector plugin. This modified version changes the createTest and selectTest functions, seen in figure 4.9, in order to alter the Jenkins Tests Selector web UI. The purpose of these changes is to allow the user to access the tests' screenshots from the same interface. Additionally, it alters the appearance of the tests' information to indicate whether each test was executed successfully or failed in its last execution. The implemented interface can be observed in figure 4.20, in section 4.3.2.

Figure 4.9: Implementation model of the tests selector Jenkins plugin

transferTestsToBuild executable

This program is executed during the first phase of the Jenkins build operation, the Build phase. Its purpose is to move the files from a pre-determined folder, where the tests exported from the Selenium IDE are located, to the pre-built Visual Studio NUnit project folder BuildSeleniumTests. It uses the GetAllFiles class, seen in figure 4.10, to obtain the paths of all test case files with a valid extension, together with their test suites, from which a unique name is created for each test case file to avoid overwrites. After moving the files, the NUnit project is built and the tests are compiled. A sketch of this step follows figure 4.10.

Figure 4.10: Implementation model of the transferTestsToBuild executable
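The following sketch illustrates one way this transfer could work, assuming the suite-based folder layout described in appendix A.2 and deriving a unique file name from the suite path. Names and paths are hypothetical.

    using System.IO;

    public static class TestTransfer
    {
        public static void Transfer(string exportFolder, string projectFolder)
        {
            foreach (string file in Directory.GetFiles(
                         exportFolder, "*.cs", SearchOption.AllDirectories))
            {
                // e.g. Tests\appTests\navigationTests\SearchPatients.cs
                //   -> appTests.navigationTests.SearchPatients.cs
                string relative = file.Substring(exportFolder.Length + 1);
                string unique = relative.Replace(Path.DirectorySeparatorChar, '.');
                File.Copy(file, Path.Combine(projectFolder, unique), true);
            }
        }
    }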

parseTestsProperties executable

This is the main program of the Jenkins build operation. It runs during the Build phase and uses the Visual Studio NUnit project and the tests.properties file from the Jenkins workspace. Its purpose is to run the test cases selected by the user in the Jenkins web UI and to generate the test results in an XML file.

There are two courses of action: run all test cases or only some of them. In the former case, the program executes the NUnit console (function runNUnitTests), which runs the tests from the Visual Studio assembly. This generates a new test results XML file, overwriting any existing one.

If the user chooses to run only the selected test cases, the program executes the NUnit console on each test case file; a sketch of such an invocation follows figure 4.11. Each test's execution generates an XML file with its results. The purpose of updateXML, the main function of the UpdateTestResultXML class, is to update the main test results XML file using the newly generated one. This process resorts to the TestXMLComponents class to structure the test's components, which uses the TestXMLComponent class to represent each component's attributes, as seen in figure 4.11. The structure of the XML file is described in section 4.1.3.

Figure 4.11: Implementation model of the parseTestsProperties executable
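As an illustration, running a single selected test through the NUnit console might look like the sketch below. The console path and switches follow NUnit 2.x conventions; the exact location and arguments used by the executable are assumptions.

    using System.Diagnostics;

    public static class NUnitRunner
    {
        public static void RunSingleTest(string assembly, string fullTestName,
                                         string resultFile)
        {
            var info = new ProcessStartInfo
            {
                // Hypothetical installation path of the NUnit 2.x console.
                FileName = @"C:\NUnit\bin\nunit-console.exe",
                Arguments = "\"" + assembly + "\" /run:" + fullTestName +
                            " /xml:\"" + resultFile + "\"",
                UseShellExecute = false
            };
            using (Process process = Process.Start(info))
            {
                process.WaitForExit();  // only one test runs at a time
            }
        }
    }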

extractTestResults executable

This program is executed in the Post-Build phase of the Jenkins build operation. It takes the generated test results XML file and creates JSON objects associated with those results. These JSON objects are structured through the TestProperties configuration, which can be observed in figure 4.12.

The program updates the tests.properties JSON array with the created JSON objects containing the updated test results. Additionally, it uses the Tests Selector plugin contents to store any screenshots taken during the tests' execution. The plugin's associated web UI can then access these screenshots and present them to the user, associating them with the respective tests.


Figure 4.12: Implementation model of the extractTestResults executable

Use Cases

The main functionality of the Jenkins component of the prototype is to automatically build the generated test cases and present their results to the user, in order to grant a better overview of the project status. Additionally, as seen in fig. 4.13, it allows the user to select which tests to run and to see screenshots of the interface captured when tests failed.

Figure 4.13: Use Case diagram of Jenkins


4.3 Prototype

For a fully automated software test process, two main features were required: Capture & Replay and Continuous Integration. The tools chosen to implement these functionalities were Selenium IDE and Jenkins, respectively.

4.3.1 Selenium IDE

Regarding the Selenium IDE, its standalone version was not enough to implement the components required for the prototype. The added functionality is based on user extensions and plugins created specifically for this research work.

What follows is a detailed description of the implementation of each Selenium IDE feature developed for the prototype.

User action events

Due to special types of inputs found in Glintt solutions, some IDE functions and events were modified and new functionality was added. This was done to better simulate user interaction with those input elements. Additionally, the number of handled events that are turned into commands was increased in order to better simulate user actions and increase the tests' robustness. None of the modified commands influence the user's interaction with the GUI, but some of the new commands do. One of them is the addSelection command, whose actions were described in table 4.1 and which requires additional actions from the user. To ease its usage, this command has been made accessible from the context menu, which is available whenever the user right-clicks while recording the test, as observed in figure 4.14.

The other two new commands are select and mouseDown. The first is used for selecting commands found in a RecurrentSteps case to be run. This command requires the user to add the specified case's name as its parameter in the Selenium IDE interface, which can be done at any point. An example of its usage can be seen in figure 4.15, in which, prior to executing the recorded actions, the test case runs the commands located in a RecurrentSteps case called login.

Regarding the mouseDown command, the moment the user presses the left mouse button, without needing to release it, an event is launched and recorded by the Selenium IDE. However, the event only occurs when pressing on selected input elements, because this command is used to bring focus to input elements that summon additional elements, for example, drop-down boxes and application inputs with similar behavior, which, when clicked, prompt a list of options.

Returning to the modified IDE functions, the main changes were related to the handling of events generated from user actions that prompt loads, which was modified to take timing issues into account. The solution to these issues is analyzed in the next section.

In addition to these changes, the click command had another modification. It first tries to simulate its recorded action in the interface through the use of Selenium WebDriver commands.


Figure 4.14: Context menu with Selenium IDE commands in Selenese

In case of failure, it tries to click the element through the use of JavaScript. This was done specifically for the click command, since most of the recorded user actions are clicks.

The last modified event handling was the one associated with the sendKeys command, which, in addition to what is described in table 4.1, is used on input elements that show suggestions to auto-complete words or phrases. On those elements, the Selenium IDE captures each key pressed by the user instead of the final complete input. An example is the Google autocomplete search input, which updates the search predictions on each character typed.

For the purpose of asserting or verifying UI elements and their attributes, the user can use the context menu to access the corresponding commands. As observed in figure 4.14, the assert and verify commands are similar. The suffix of each command indicates the attribute of the UI element that is going to be checked. These types of commands can use the locator of the element that the mouse was pointing at when summoning the context menu, a string, or both. For example, assertText checks for the presence of the string in the element, using both the locator and a string as parameters, while assertElementPresent only uses the locator to check the presence of the element in the GUI. The difference between these two types of commands is that the assert commands end the test if the check fails, whereas the verify commands let the test proceed in case of failure, as sketched below.
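A minimal sketch of this distinction in exported C# code, assuming a driver field and the verificationErrors buffer declared in the test's header:

    using System.Text;
    using NUnit.Framework;
    using OpenQA.Selenium;

    public class AssertVerifyExample
    {
        private readonly IWebDriver driver;
        private readonly StringBuilder verificationErrors = new StringBuilder();

        public AssertVerifyExample(IWebDriver driver)
        {
            this.driver = driver;
        }

        private void AssertText(By locator, string expected)
        {
            // assert commands end the test immediately on failure.
            Assert.AreEqual(expected, driver.FindElement(locator).Text);
        }

        private void VerifyText(By locator, string expected)
        {
            // verify commands record the failure and let the test proceed;
            // the collected errors fail the test during tear-down.
            try
            {
                Assert.AreEqual(expected, driver.FindElement(locator).Text);
            }
            catch (AssertionException e)
            {
                verificationErrors.Append(e.Message);
            }
        }
    }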


Figure 4.15: Selenium IDE interface with a select command

Handling of timing issues

A recurrent error during test execution occurred due to a timing issue that resulted in elements not being found. The problem arose when the IDE did not wait for the page to load a specific element before checking for its presence. To solve this issue without requiring unnatural user actions on the interface, extensions were developed that execute those waits implicitly.

For general purposes, whenever a user interacts with an element in the GUI, whether clicking it or typing into an input, the test repeatedly checks whether the element is present in the GUI for a maximum of 10 seconds. If the element is present at any point during that time interval, the test continues and executes the recorded user action. If not, the test ends and an exception is thrown.

In the case of the Glintt solutions, the application shows a loading icon whenever the page is not completely loaded, whether by a page load or an AJAX load, but it previously required the user to explicitly add a command to wait for the load to complete. To fix this, a Selenium IDE extension was created that waits implicitly whenever a user action triggers the loading icon. It repeatedly checks whether the loading icon is visible in the GUI for a maximum of 60 seconds. In contrast to the previously mentioned extension, if the icon is no longer visible during that time interval, the test proceeds; if it is still present, the test ends and an exception is thrown. A sketch of this wait follows.
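The waitForGlinttLoadNotVisible function seen in listing B.1 is assumed to behave along the lines of the following sketch; the icon's locator is hypothetical.

    using System;
    using System.Linq;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Support.UI;

    public static class LoadingWaits
    {
        public static void WaitForGlinttLoadNotVisible(IWebDriver driver)
        {
            var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(60));
            // Poll until the loading icon is absent or hidden; if it is still
            // visible after 60 seconds, a WebDriverTimeoutException ends the test.
            wait.Until(d => !d.FindElements(By.ClassName("loading-icon"))  // hypothetical locator
                              .Any(e => e.Displayed));
        }
    }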


C# formatters

Since Selenium IDE uses Selenese, a high-level, language-independent convention for representing the recorded Selenium commands, C# formatters were integrated with the IDE in order to allow the user to create Selenium WebDriver C# test cases that also implement the previously mentioned functions with which the IDE was extended.

Two formatters were developed, the C# NUnit Formatter and the C# RecurrentSteps Formatter. The former is used to generate the functional NUnit test cases by translating the Selenese commands into Selenium WebDriver C# commands. This is done by inserting the translated commands into the test case class and by defining the test's template. The template is used to deal with NUnit and Selenium dependencies as well as to declare the functions that handle the extended user actions. The C# RecurrentSteps Formatter, on the other hand, is used to store translated commands that are frequently used. This allows the user to record a test case and reference these user actions through the select command; they are then inserted into the exported test case.

This technique lets the user test the GUI without needing to interact with code. The NUnit test cases are nevertheless still editable, although this requires programming skills and knowledge of Selenium WebDriver.

Interface changes

Regarding the Selenium IDE interface, which can be observed in figure 4.15, a Mozilla plugin was developed that integrates a new box into the IDE's toolbar overlay. Its function is to allow users to choose the test suite of the recorded tests. Upon clicking the box, a new window opens, showing all created test suites in a tree view, as seen in figure 4.16. The user has the option of adding and removing test suites.

Figure 4.16: Test suite tree view

It is through this window that the user specifies whether the recorded commands are to be saved as a test case or as a RecurrentSteps case. As observed in the test suite tree view figure, under "Test Suites" there is a fixed element called "RecurrentSteps", whose test cases are not executed during the Jenkins build and test process. These test cases are exported through the use of the C# RecurrentSteps Formatter.


New locators

Selenium uses locators to find and match the UI elements that it needs to interact with. There are various locator types; however, none of them were adequate to identify some of the elements in the Glintt solutions' UI, specifically elements that use dynamic IDs.

Using dynamic IDs often leads to problems in test automation because they are newly generated every time an element is displayed, which does not guarantee that, when replaying the recorded actions, the elements will be correctly identified.

This led to the creation of new precise locators, which locate elements through the use of XPath. They are used to match elements that have dynamic IDs but a similar configuration across different playbacks of the same page. For example, as seen in figure 4.17, the XPath locator in the Target input examines the web page's DOM in search of the third element whose parent element has the tag button and an attribute type with the value "button".

Figure 4.17: Locator’s format in Selenium IDE

Although these locators are able to handle elements with dynamic IDs and access almost any element, even those without class, name or ID attributes, note that they are precise to their respective purposes. This in turn makes them static and only useful for specific occasions. An illustration of the kind of XPath expression involved is given below.

For the Glintt web application under development and under test, the implemented locators were used to complement the locator strategies implemented in the standalone Selenium IDE. As an example, the developed gsearcher locator is used by the Selenium IDE's recorder to identify elements of dynamic lists through the text of the clicked list element. The code necessary to implement this locator can be found in appendix B.2.

It is possible to add new locator strategies through the commandBuilders file, as explained in section 4.2.1 under "Selenium extensions".
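For instance, the locator described for figure 4.17 might correspond to an expression like the one below. The exact XPath is a reconstruction from the description above, not the one recorded by the prototype.

    // Assuming an initialized IWebDriver instance named driver:
    IWebElement element = driver.FindElement(
        By.XPath("//button[@type='button']/*[3]"));  // hypothetical expression
    element.Click();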

4.3.2 Jenkins

Continuous integration was used to fully automate the build and test execution processes, through the use of developed programs and Jenkins plugins that set up the test environment with minimal user action.

Jenkins Interface

The Jenkins build project interface contains the build history, as observed in the bottom left panel of figure 4.18, which shows the status of each build run. There are four different build statuses, each symbolized by a specific color.


Figure 4.18: Jenkins main interface

The Success (blue) status indicates that the build ran all tests successfully, while the Unstable (yellow) status specifies that some tests failed. A test that previously ran without issues and failed in the latest run is described as a regression instead of a failure.

These two statuses are related to the test execution phase while, on the other hand, the Failure (red) and Aborted (gray) statuses are related to the build phase. The red status indicates that an error occurred in the processes that manage the test environment, which stops the build immediately and returns the logs that specify the issue.

The gray status occurs whenever the user aborts the build. This can be done at any point during the full Jenkins process; however, the results of tests that ran before the abort are still updated.

Still in the Jenkins main interface, the user is able to access and view the workspace contents, observe the test results trend throughout the builds and see the test results of each build.

Figure 4.19: Jenkins test selector interface with a screenshot


In addition, there is the test selector interface, which can be seen in figure 4.19. It allows the user to select the tests to run and to see the screenshots of failed tests. The screenshots are taken after the detection of a failure and before ending the test. However, in case a failure occurs while generating the screenshot, a red line replaces the screenshot under the test in the interface, as observed in fig. 4.20. It serves as a way to show that an error occurred and that it was not possible to take a screenshot of the GUI.
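The screenshot step itself might look like the following sketch, which uses Selenium's ITakesScreenshot interface; the target path and the exact Selenium .NET API version are assumptions.

    using OpenQA.Selenium;

    public static class FailureScreenshots
    {
        public static void Save(IWebDriver driver, string testName)
        {
            // Capture the GUI right after a failure is detected and before
            // the browser window is closed.
            Screenshot shot = ((ITakesScreenshot)driver).GetScreenshot();
            shot.SaveAsFile(@"screenshots\" + testName + ".png",
                            ScreenshotImageFormat.Png);  // hypothetical folder
        }
    }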

Figure 4.20: Jenkins test selector interface with a failed screenshot

Regarding the visualization of the test results, the Jenkins web UI presents information for each test case, such as the execution duration and the difference in results between the current and the last build, as seen in figure 4.21. The build's test results page also emphasizes the failed tests by showing the stack trace of each one and its age, measured in consecutive builds in which the test has failed, as observed in fig. 4.22.

Figure 4.21: Jenkins test result interface


Figure 4.22: Cause of the failed test presented in the test result interface

Test Environment Contents and Management

Having examined how the Jenkins interface behaves, the following section addresses how the automated management of the test environment's resources is accomplished. It follows on from section 4.1.5, which outlined the test environment contents located in the Jenkins workspace.

The first step in this process occurs whenever a test case is exported through the Selenium IDE. The export calls the saveTestCaseToProperties executable, which gathers information regarding the test case's name and suite and adds the newly generated test case information to the JSON array in the tests.properties file. This file is closely integrated with the Jenkins test selector plugin and updates the list of runnable tests available for the user to select.

After the user specifies the tests to run and orders the build process to start, the process enters the Build phase, in which the generated test cases are transferred to the pre-built Visual Studio NUnit project BuildSeleniumTests. Once transferred, the tests are checked for syntax and dependency errors.

Once the tests are ready, they are run through the NUnit console, which executes in the background. Each test summons a browser window and starts replaying the recorded user actions on the window's interface. When the test ends, before moving to the next test, the TestResult.xml file is generated. Additionally, if the test fails, a screenshot is taken and the window is safely closed. This screenshot becomes available for examination in the Jenkins test selector interface.

After all tests are executed, the process generates the TestResult.xml file and enters the Post-Build phase. In this phase, the test results are extracted from the XML file and the properties file is updated. The results are then published through the use of a simple NUnit test result report Jenkins plugin. The process ends by sending an e-mail to the designated address whenever the build fails, becomes unstable or returns to a success status.

4.4 Discussion

Since the focus of this research work was to generate reliable test cases through the capture of user actions and to relieve users of the burden of maintaining the generated tests, all in an automated fashion, both a theoretical and a technical approach to this issue were conducted.

4.4.1 Method

From a theoretical standpoint, the two main contributions developed are the RecurrentSteps case and the automation of the test environment's management.

The RecurrentSteps cases allow the user to select a set of frequently used actions that reach a part of the web application. If a failure occurs in those actions, the user only needs to fix the issue in the RecurrentSteps case file, instead of fixing each test case that replays those actions. This reduces the maintenance cost of the generated test cases, as less time is needed to fix similar failures across different tests.

Regarding the automation of the test environment's management, the process abstracts the user from all Jenkins build operations. It only requires the selection of which tests to execute, and the updated test results are presented in the Jenkins web UI.

Turning now to the technical approach and the implementation of the developed methods to structure the test cases, the purpose was to reduce the maintenance cost of the test cases generated through the use of C&R tools, which are known to be brittle, as mentioned in the literature review. To do so, several issues were identified and dealt with separately, as analyzed in the previous chapter.

• User action events: The new and modified event-handling functionality allows a better simulation of the recorded user actions and increases the test cases' robustness. However, some of the new commands require the user to perform additional actions; for example, the addSelection command is used through the context menu and the select command through the Selenium IDE interface. Additionally, the incorporation of these new commands and the modification of the IDE's event handling have, consequently, increased the test cases' size. This may reduce their readability from a developer's standpoint, in exchange for more easily maintainable test cases.

• Recurrent Steps: The main problem with the implementation of the RecurrentSteps cases is their usage through the Selenium IDE. Currently, the user must recall the contents of a case from its name alone, which can be seen in the test suite chooser box integrated in the IDE's interface. In addition, its command (select) cannot be added to the test case through recorded user actions in the GUI. However, the benefits discussed earlier in this section are implemented, and the management of the RecurrentSteps is an automatic process integrated with the test environment.


• Detection of list elements: The standalone Selenium IDE detects an element of a list through a locator that depends on the position of the element in the list, unless the ID or Name locators can be used. However, the use of an ID or name attribute on list elements is rare. The developed addSelection command bypasses this issue by identifying the element in the list through the text it contains, independently of its position. Further work is required to dynamically add the types of lists that the command detects. Currently it serves as a proof of concept, as it detects one type of list found in Glintt solutions. It is possible to identify elements in other types of lists, but this requires the user to identify the locator of the list and manually add it to the C# NUnit Formatter's code.

Regarding the implementation of the developed methods related to the Continuous Integration component, the extensibility that Jenkins provides was a great boon, used to tailor its web UI. It allowed the modification of a test selector page and the use of test report plugins to better represent the test results. Additionally, Jenkins enables the use of its workspace to store the developed programs and the test environment's contents, allowing dynamic access to all of them from the Jenkins project and the Selenium IDE. However, as a result, the prototype can only work in one specific Jenkins project (whose configuration can be analyzed in appendix A.2). This is because the Selenium IDE component does not have access to the names of the Jenkins projects, which would be required to know the project's workspace path for saveTestCaseToProperties. Note that all other developed programs only work if executed during the Jenkins build operation, as they call Jenkins environment variables that can only be accessed through the Jenkins project.

4.4.2 Results

The development of the prototype throughout this research work was conducted on a real application under development at Glintt HS. This helped to validate the prototype, since it has shown value to the company, even though, at the time of writing, it has not yet been used by the quality assurance team.

The testing process in the company consists mainly of functional testing on each development patch. With increasingly complex software projects, the time spent on testing has grown, and if the process remains the same, the test efficiency and validity will tend to decline. The introduction of the developed prototype has been shown to ease the creation and execution of test cases with minimal user action. It automates the generation of test cases and their management. Tests can be developed faster and run over and over again with less overhead. Note that manual testing will not go away, but that effort can now be focused on more rigorous tests.

In addition to the new process of creating test cases, the prototype also deals with the management of the generated contents in an automatic fashion. Through the CI tool Jenkins, the test cases' code is abstracted from the users, although it can still be accessed, which requires programming skills and knowledge of Selenium WebDriver.


The automated Jenkins build operation manages the test cases and their results without user intervention. The user only has to select which tests to execute, and the process runs without requiring further input. This allows the user to let the selected tests run automatically over a period of time, for example, letting the tests run after leaving work and checking the test results the next day. Thanks to the Jenkins web UI, at the end of the operation the test results and console output can be visualized and analyzed.

Having discussed the advantages in test automation and continuous integration that the prototype grants to the company's testing process, what follows addresses the results of the more technical aspects of the developed prototype.

Tests created through C&R tools are known to be brittle and, as a result, have high maintenance costs. The main focus of this research work was to add support for test maintenance to the tests generated through these tools, to the benefit of test engineers and quality assurance personnel.

As discussed in chapter 4, the recorded test cases can be run through the Selenium IDE's interface. They are cheap to develop in terms of time required but very expensive to maintain. To address this issue, C# NUnit formatters were developed to translate the unreliable recorded commands into C# NUnit test cases that allow the use of the Selenium WebDriver API. The formatters' generated code follows the prototype's design in NUnit, which makes the tests more robust and facilitates the integration with the CI tool. In addition, the Selenium WebDriver API allows for better support of dynamic web pages, where elements of a page may change without the page itself being reloaded.

Still in the Selenium IDE's interface, the user is now able to choose the test suite of the recorded test case, thanks to a developed Mozilla plugin that integrates a new view. This helps the user categorize the test cases and eases the reading of the test results in the Jenkins web UI. It is also through this view that the separation between a test case and a RecurrentSteps case is made. The latter type of case was developed to reduce the tests' maintenance costs, since these steps can be used by multiple test cases; in case of a failure, it is only necessary to fix the issue in one file.


Chapter 5

Conclusions and Future Work

The final chapter of this dissertation is divided into two parts. The first presents the final remarks of this research work, summarizing the whole process conducted, and draws the main conclusions. The last section makes suggestions for improvement and speculates on future directions.

5.1 Final Remarks

A study of the state of the art in automated web testing was conducted to investigate how automated testing can be applied to a web application and which available frameworks and tools could be used to create a prototype aimed at software testers without experience in web testing. This work's contribution is the developed prototype, which implements methods to increase the tests' robustness, such as implicit waits and new locator strategies, as well as a new technique called RecurrentSteps that allows the user to store frequently used commands. The test cases that use these commands can reference them through the use of a select command during the test's recording. Additionally, the prototype includes a Continuous Integration component, Jenkins, that automates the test environment, abstracting the user from its management. The Jenkins web UI was altered in order to allow the user to select which tests to run, visualize screenshots of the GUI for each failed test and analyze graphical test reports. Note that the prototype was developed and tested on a real application under development, although, at the time of writing, it has not been used by a quality assurance team. However, the results indicate that the generated tests are more maintainable than those produced by standalone C&R tools, as even during a month of changes to the web application, almost all tests continued to behave correctly. Those tests that regressed had their causes of failure identified through the Jenkins web UI's visual reports and screenshots, and they were fixed within the Selenium IDE's interface by editing the recorded commands. This whole process did not require any programming or intermediate steps to configure the test environment, demonstrating how integrated the C&R and CI components of the prototype are.


5.2 Future Work

In the immediate future, the plan is for the prototype to be integrated into a quality assurance team and for its performance to be analyzed in order to confirm or refute the results. On the one hand, the Jenkins build operation will be updated and optimized to allow the user to create different Jenkins projects, each with its own workspace, and to speed up the tests' execution process. On the other hand, the issue of interacting with the web application GUI to dynamically find UI elements independently of their position is an intriguing one that could usefully be explored in further research, in particular by studying visual web testing tools based on image recognition (for example, Sikuli Script [Sik]). Finally, it is planned for the prototype to integrate manual tests into its test environment, granting the user access to all tests through the Jenkins web UI and allowing them to be managed in the same tool.

References

[BKW09] Andreas Bruns, Andreas Kornstadt, and Dennis Wichmann. Web application tests with Selenium. IEEE Software, 26(5):88–91, 2009.

[CSRM14] Laurent Christophe, Reinout Stevens, Coen De Roover, and Wolfgang De Meuter. Prevalence and maintenance of automated functional tests for web applications. 2014 IEEE International Conference on Software Maintenance and Evolution, pages 141–150, 2014.

[DGG09] Elfriede Dustin, Thom Garrett, and Bernie Gauf. Implementing Automated Software Testing: How to Save Time and Lower Costs While Raising Quality. Addison-Wesley Professional, 1st edition, 2009.

[DLF06] Giuseppe A. Di Lucca and Anna Rita Fasolino. Testing web-based applications: The state of the art and future trends. Information and Software Technology, 48(12):1172–1186, 12 2006.

[DLP05] Giuseppe A. Di Lucca and Massimiliano Di Penta. Integrating static and dynamic analysis to improve the comprehension of existing web applications. Proceedings - Seventh IEEE International Symposium on Web Site Evolution, WSE 2005, pages 87–94, 2005.

[Fir] Packaging and installation - Mozilla | MDN. https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Packaging_and_Installation#Installation. Accessed: 07-06-2016.

[Gol] Sebastian Golasch. DalekJS - automated cross browser testing with JavaScript. http://dalekjs.com/. Accessed: 28-01-2016.

[HK06] Antawan Holmes and Marc Kellogg. Automating functional tests using Selenium. In Proceedings - AGILE Conference, 2006, volume 2006, pages 270–275, 2006.

[LCRT13] Maurizio Leotta, Diego Clerissi, Filippo Ricca, and Paolo Tonella. Capture-replay vs. programmable web testing: An empirical assessment during test case evolution. Proceedings - Working Conference on Reverse Engineering, WCRE, pages 272–281, 2013.

[LDD14] Yuan-Fang Li, Paramjit K. Das, and David L. Dowe. Two decades of web application testing—a survey of recent advances. Information Systems, 43:20–54, 7 2014.

[LDLZ13] Li Li, Qiu Dong, Dan Liu, and Leilei Zhu. The application of fuzzing in web software security vulnerabilities test. Proceedings - 2013 International Conference on Information Technology and Applications, ITA 2013, pages 130–133, 2013.

[Moc] Mocha - the fun, simple, flexible JavaScript test framework. http://mochajs.org/. Accessed: 10-02-2016.

[MTT+11] Alessandro Marchetto, Roberto Tiella, Paolo Tonella, Nadia Alshahwan, and Mark Harman. Crawlability metrics for automated web testing. International Journal on Software Tools for Technology Transfer, 13(2):131–149, 2011.

[NBN14] Thanuja Janarthana Naidu, Nor Asyikin Basri, and Saravanan Nagenthram. Sahi vs. Selenium: A comparative analysis. In Proceedings of 2014 International Conference on Contemporary Computing and Informatics, IC3I 2014, pages 967–970, 2014.

[PRPE13] M. Polo, P. Reales, M. Piattini, and C. Ebert. Test automation. IEEE Software, 30(1):84–89, 2013.

[Sah] Sahi | open source automation testing tool. http://sahipro.com/sahi-open-source/. Accessed: 28-01-2016.

[Sel] Selenium documentation. http://www.seleniumhq.org/docs/. Accessed: 27-01-2016.

[Sik] Sikuli Script. https://lucene.apache.org/core/2_9_4/scoring.html. Accessed: 16-06-2016.

[SMA05] Koushik Sen, Darko Marinov, and Gul Agha. CUTE: A concolic unit testing engine for C. Program, 30:263–272, 2005.

[TSSC12] Suresh Thummalapenta, Saurabh Sinha, Nimit Singhania, and Satish Chandra. Automating test automation. Proceedings - International Conference on Software Engineering, pages 881–891, 2012.

[Wat] Watir - Web application testing in Ruby. http://watir.com/. Accessed: 27-01-2016.

54 Appendix A

Configuration

A.1 Selenium IDE Configuration

1. Load the commandBuilders and userExtensions JavaScript scripts in the Selenium IDE's options menu (Figure A.1).

2. Add the formatters in the Selenium IDE's options menu (Figure A.2).

3. Open the Jenkins Test Chooser plugin with the chosen Mozilla browser to install it.

Figure A.1: Selenium IDE’s general settings


Figure A.2: Selenium IDE’s format settings

A.2 Jenkins Configuration

Jenkins system configuration

The Jenkins system configuration requires the following technologies:

1. JDK 1.8 for running the Jenkins server.

2. MSBuild v4.0 for compiling the pre-built Visual Studio C# NUnit project during the Jenkins Build operation.

3. Maven v3.3.9 for managing the Jenkins plugins.

4. An SMTP server for configuring e-mail notifications.

5. Plugins, namely, the NUnit, MSBuild and Post-Build Script plugins, in addition to the developed Tests Selector Plugin.

Jenkins Structure

The Jenkins component contains two main folders: the project's workspace and a folder to which all the tests from the Selenium IDE are exported. An example of the way the tests are stored can be seen in Figure A.3.


Figure A.3: Jenkins’ Tests folder

The tests are grouped in folders associated with their test suite. For example, the test case with test suite "appTests.navigationTests.SearchPATIENTS" will be stored in the Tests folder under the path /appTests/navigationTests/searchPATIENTS, as seen in Figure A.3.

The test cases generated from the Selenium IDE interface can be saved in Selenese or in C# NUnit. Both types are stored in the same location in the Tests folder, but the former has an .html extension, while the latter has a .cs extension. It is possible to open the HTML files through the Selenium IDE for editing purposes.
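
For illustration, what follows is a minimal sketch of the overall shape of an exported C# NUnit test case. The namespace, class name, URL and element ID are hypothetical, and helper methods such as those presented in Appendix B would be declared in the same class by the formatter.

using System;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

namespace appTests.navigationTests
{
    [TestFixture]
    public class SearchPATIENTS
    {
        private IWebDriver driver;

        [SetUp]
        public void SetUp()
        {
            // Prepare the browser session before each test
            driver = new FirefoxDriver();
        }

        [Test]
        public void TheSearchPATIENTSTest()
        {
            // Hypothetical recorded steps
            driver.Navigate().GoToUrl("http://localhost/application");
            driver.FindElement(By.Id("inputSearch")).SendKeys("Teste");
        }

        [TearDown]
        public void TearDown()
        {
            // Close the browser session after each test
            driver.Quit();
        }
    }
}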

Jenkins project configuration

After installing Jenkins and configuring its system, the following steps are required to create the project used by the prototype:


1. Create a "New Item" of type Freestyle Project with name "NUnitSeleniumTests".

2. Configure the project, which is separated into three phases, as presented in Figures A.4, A.5 and A.6.

3. Create a "New Node" of type "Dumb Slave".

4. Configure the node as presented in Figure A.7.


Figure A.4: Jenkins’ Project Pre-Build Configuration


Figure A.5: Jenkins’ Project Build Configuration


Figure A.6: Jenkins’ Project Post-Build Configuration


Figure A.7: Jenkins’ Project Node Configuration

Appendix B

Procedure

B.1 Translated Selenese commands in NUnit C#

What follows are segments of NUnit C# code that are generated from the Selenese commands and inserted into the exported test case. Some segments call developed functions that are declared in the test case through the C# NUnit formatter, while others consist only of existing Selenium WebDriver C# commands.

private void Click(By locator){
    try {
        IsElementPresent(locator);
        driver.FindElement(locator).Click();
    }
    catch (WebDriverException){
        // Fall back to a JavaScript click when the native click fails
        ((IJavaScriptExecutor)driver).ExecuteScript("arguments[0].click();", driver.FindElement(locator));
    }
    waitForGlinttLoadNotVisible();
    return;
}

Listing B.1: Click command in C#
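
In the exported test case, each recorded click then becomes a single call to this helper. For example, a click recorded on a button with the hypothetical ID "btnSave" would be emitted as:

Click(By.Id("btnSave")); // hypothetical element ID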

IsElementPresent(By.Id("inputTest"));
driver.FindElement(By.Id("inputTest")).Clear();
checkUncaughtErrors();
IsElementPresent(By.Id("inputTest"));
driver.FindElement(By.Id("inputTest")).SendKeys("Teste");
waitForGlinttLoadNotVisible();

Listing B.2: Type command in C# that writes "Teste" in an input with ID "inputTest"


driver.Navigate().GoToUrl(URL);

Listing B.3: Open command in C#

private void IsElementPresent(By by)
{
    this.wait.Until(ExpectedConditions.PresenceOfAllElementsLocatedBy(by));
}

Listing B.4: Check if element is present command in C#
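
Listing B.4 relies on a wait field that the formatter declares alongside the WebDriver instance. A minimal sketch of how such a field could be initialized, assuming the standard WebDriverWait from OpenQA.Selenium.Support.UI and an illustrative 30-second timeout:

private WebDriverWait wait;

[SetUp]
public void SetUp()
{
    driver = new FirefoxDriver();
    // Explicit wait reused by IsElementPresent and similar helpers
    wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));
}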

private void SelectFromLists(string text)
{
    IReadOnlyCollection<IWebElement> listElements = driver.FindElements(By.CssSelector("div.some_container>.logic_template_wrapper"));
    foreach (IWebElement listElement in listElements)
    {
        IWebElement patient = listElement.FindElement(By.XPath(".//div/div[3]/div/div"));
        if (patient.Text.Contains(text))
        {
            patient.Click();
            waitForGlinttLoadNotVisible();
            checkUncaughtErrors();
            return;
        }
    }
}

Listing B.5: Select element from list given its text in C#
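
A recorded selection on one of these list widgets is then exported as a single call to this helper, for example (the search text is illustrative):

SelectFromLists("Teste"); // clicks the first list entry whose text contains "Teste"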

B.2 Gsearcher’s Locator Strategy

To exemplify the addition of a new locator strategy in the commandBuilders file, what follows is the implementation of the developed Gsearcher locator:

LocatorBuilders.add('gsearcher', function(e) {
    var path = '';
    var reachedGsearcherContainer = false; // Turns true after reaching a node with an ID, because its parent node will be a div of class "ui-autocomplete-box"
    var text = e.innerText || e.textContent || '';
    var current = e;
    while (current != null) {
        if (current.parentNode != null) {
            path = this.relativeXPathFromParent(current) + path;
            if (1 == current.parentNode.nodeType && // ELEMENT_NODE
                current.parentNode.getAttribute("id")) {
                reachedGsearcherContainer = true;
            }
            if (1 == current.parentNode.nodeType && // ELEMENT_NODE
                reachedGsearcherContainer &&
                (current.parentNode.getAttribute("class").indexOf("ui-autocomplete-box") > -1)) {
                return this.preciseXPath("//" + this.xpathHtmlElement(current.parentNode.nodeName.toLowerCase()) +
                    '[contains(@class,"ui-autocomplete-box")]' +
                    path + "[contains(text(),'" + text + "')]", e);
            }
        } else {
            return null;
        }
        current = current.parentNode;
    }
    return null;
});

Listing B.6: Gsearcher’s locator strategy
