<<

Aalto University
School of Science
Master’s Programme in Computer, Communication and Information Sciences

Ilkka Malassu

DevOps and other development practices in a web application implementation

Master’s Thesis
Espoo, December 29, 2020

Supervisor: Professor Petri Vuorimaa, Aalto University
Advisor: Jani Koski, M.Sc. (Tech.)

Aalto University
School of Science
Master’s Programme in Computer, Communication and Information Sciences

ABSTRACT OF MASTER’S THESIS

Author: Ilkka Malassu
Title: DevOps and other practices in a web application implementation
Date: December 29, 2020
Pages: viii + 57
Major: Computer Science
Code: SCI3042
Supervisor: Professor Petri Vuorimaa
Advisor: Jani Koski, M.Sc. (Tech.)

DevOps aims to streamline the process of implementing and delivering new software by merging the traditionally separate functions of software development and operations into a unified model. The continuous practices of DevOps implement pipelines as processes that take software from development to production with as much automation as possible, while maintaining high quality. Choosing the right software development approaches and practices is important in ensuring end user satisfaction and profitability in a business context.

This thesis conducts a case study on a DevOps environment established in the corporate context of the telecommunications company Nokia. This environment is evaluated by observing its ability to support the continuous development of a web-based application. The goal of this research is to describe how the DevOps environment established within Nokia implements the main functionalities of DevOps, compare it with current literature and studies, and provide ideas for future discussion and improvements.

As a result, the studied DevOps platform successfully supported the continuous development of the web service implemented for the purposes of this thesis. In the future, standardization of pipeline implementations and other configurations could be discussed. A possible migration from using two CI/CD pipeline tools to using only one could also be evaluated.

Keywords: DevOps, web, software development
Language: English

Aalto University
School of Science
Master’s Programme in Computer, Communication and Information Sciences

ABSTRACT OF MASTER’S THESIS

Author: Ilkka Malassu
Title: DevOps and other software development practices in a web service implementation
Date: December 29, 2020
Pages: viii + 57
Major: Computer Science
Code: SCI3042
Supervisor: Professor Petri Vuorimaa
Advisor: Jani Koski, M.Sc. (Tech.)

The DevOps model aims to streamline the software development and release process. This is done primarily by merging the traditionally separate software development and maintenance responsibilities into a unified function. The principles of continuous integration and deployment aim to take software from development to production as automatically as possible, while guaranteeing high quality.

Choosing the right approaches and practices in software development is important for ensuring end-user satisfaction as well as software quality and productivity. This thesis reports a case study whose subject was an internal DevOps environment of the telecommunications company Nokia. The thesis examines its ability to support the continuous development and integration of a web service project. The goal of the research is to describe how the main principles of DevOps are realized in the environment, to compare it with current literature and studies, and to offer ideas for future development targets.

The evaluated DevOps model successfully supported the web service development project. Standardization of continuous integration, deployment and configurations, as well as a migration from two continuous integration tools to one, can be evaluated as future development targets.

Keywords: DevOps, web, software development
Language: English

Acknowledgements

I would like to thank my supervisor Petri Vuorimaa and Nokia for making this thesis possible.

Espoo, December 29, 2020

Ilkka Malassu

Abbreviations and Acronyms

CD      Continuous Delivery
CDP     Continuous Delivery Pipeline
CI      Continuous Integration
CM      Continuous Monitoring
DevOps  Development and operations
OS      Operating System
QA      Quality assurance
REST    Representational State Transfer
SCM     Source Code Management
SDLC    Software Development Life Cycle
SOA     Service-Oriented Architecture
TDD     Test-Driven Development
UAT     User Acceptance Test
VM      Virtual machine
WSDL    Web Service Description Language

Contents

Abbreviations and Acronyms

1 Introduction
  1.1 Background and Motivation
  1.2 Research questions
  1.3 Structure of the Thesis

2 Current literature
  2.1 Defining DevOps
  2.2 An overview of DevOps
    2.2.1 Values and goals
    2.2.2 Processes
  2.3 DevOps software management
    2.3.1 Code reviews
    2.3.2 Source code management
    2.3.3 Build management
    2.3.4 Release management
    2.3.5 Automation
  2.4 Continuous Integration
    2.4.1 Unit tests
    2.4.2 Pre-checks
  2.5 Continuous Delivery
    2.5.1 Testing
    2.5.2 Coding metrics
    2.5.3 Infrastructure-as-Code
  2.6 Continuous Deployment
    2.6.1 Deployment types
  2.7 Studies on continuous practices
    2.7.1 Existing literature reviews
    2.7.2 Continuous practices: available tools and approaches
    2.7.3 Tools for CDP design and implementation
    2.7.4 Challenges in migrating to continuous practices
  2.8 Adopting DevOps: a case study
    2.8.1 Background
    2.8.2 Starting the transformation
    2.8.3 DevOps adoption methods
    2.8.4 Conclusion
  2.9 Test-Driven Development
    2.9.1 Defining TDD
    2.9.2 Studies on TDD
  2.10 Web service choreographies with TDD
    2.10.1 Choreography development
    2.10.2 RESTful Web services and SOAP
    2.10.3 Integrating choreographies and TDD
  2.11 Microservices and DevOps
    2.11.1 Defining microservices
    2.11.2 Microservices and DevOps
    2.11.3 Migrating to microservices
    2.11.4 Challenges with microservices
    2.11.5 Related tools
  2.12 Chaos engineering
    2.12.1 Origins of chaos engineering
    2.12.2 Chaos engineering principles

3 Environment
  3.1 Software management
    3.1.1 Source code management
    3.1.2 Build and release management
  3.2 Software development processes
  3.3 The implementation environment
  3.4 The web service

4 Methods
  4.1 Case studies in …
  4.2 Case study research methodology
    4.2.1 Characteristics
    4.2.2 Research process
  4.3 The case study of this thesis
    4.3.1 Research objectives
    4.3.2 Collecting data

5 Implementation
  5.1 Use cases
  5.2 Initial steps
  5.3 Build and release management
  5.4 Building a CI/CD pipeline
    5.4.1 Merge request deployments
  5.5 Implementing features
    5.5.1 Initial application
    5.5.2 First deployment
    5.5.3 Incremental improvements
    5.5.4 Result pages

6 Evaluation
  6.1 Build and release configurations
  6.2 Writing software code
  6.3 CI/CD tools
  6.4 Web service implementation
  6.5 Summary

7 Conclusion

Chapter 1

Introduction

In a world that is increasingly driven by computer code, the processes and practices of developing software are growing in importance. Decreasing the time it takes to deploy software into production and streamlining the delivery process is essential in an environment where end users are demanding new features and improvements to their services at an increasing rate.

DevOps is a set of practices aimed at merging the responsibilities of software development and operations teams to decrease the lead times of software delivery. In addition to organizational structuring, DevOps includes all the technological choices that aid in achieving this goal. DevOps and other contemporary software development practices can provide help for problems faced with traditional software delivery processes, which include: inconsistency across development and operations environments, risk of failures caused by manual interventions, difficult software configuration and version management, and high costs [1].

This thesis conducts a case study on a DevOps environment implemented in the corporate context of Nokia. The research is done by following the implementation of a web-based service and observing the DevOps platform’s ability to support this continuous development process. The observations made during this process are compared with the theoretical framework provided by current studies and literature on DevOps and other software development practices. The goal of the research is to describe how the DevOps environment established within Nokia implements the main functionalities of DevOps and to provide ideas for future discussion and improvements.


1.1 Background and Motivation

DevOps and related software development practices can aid in achieving business goals in the highly competitive domain of IT. This is pursued by breaking down organizational silos and facilitating continuous engagement between the teams of development and operations. Different industries are willing to adopt DevOps due to the advantages it can provide, which leads to a globally increased demand for DevOps professionals. Organisations that have employed these professionals are experiencing better returns compared to businesses that are still utilizing more traditional software development approaches. Continuous Integration and other properties of DevOps have created success stories across different software-related industries, which contributes to the increased trendiness of these subjects in the domain of IT. [2]

Continuous practices have the goal of accelerating the rate of software deliveries while maintaining a high standard of quality. Evaluating these practices and the related tools is important in identifying challenges and classifying different approaches. [3]

1.2 Research questions

This thesis conducts a case study that is centred around the topics of DevOps and CI/CD. The research is guided by two research questions, which are presented below.

RQ T.1: How are the main functionalities and practices of DevOps and CI/CD implemented in the environment of Nokia?

RQ T.2: How can the DevOps and CI/CD environment of Nokia be improved in the future?

1.3 Structure of the Thesis

After this introduction, the next chapter of this thesis covers the current literature related to DevOps and other software development practices. First, the concepts of DevOps and continuous practices are defined, after which the state of current studies on these concepts is reviewed. Microservices, Test-Driven Development, web service choreographies and the concept of chaos engineering are also reviewed in this chapter.

After the literature review, the third chapter of this thesis describes the DevOps environment in which the case study research is conducted. This description includes the technological possibilities as well as the software development practices established in the environment. The case study research methodology of this thesis is described in Chapter 4. Chapter 5 follows the development and release process of the web-based application implemented for the purposes of this thesis. Chapter 6 evaluates the DevOps platform utilized in this implementation, while Chapter 7 concludes the thesis.

Chapter 2

Current literature

This chapter reviews the currently available literature on DevOps and other modern software development practices. It defines the terms most relevant to the topic and to the web service implementation of this thesis, and it also gives a more detailed view of the different practices related to these concepts. The scope of this chapter is mainly articles and books published in recent years.

2.1 Defining DevOps

Only quite recently has the term DevOps been adopted into common use. This might be one of the reasons it is hard to find a single strict definition for it. Different articles and books give differently worded descriptions and explanations for the concept. There is also variance in the number of practices, methodologies and other concepts attributed to DevOps. This section of the thesis reviews the definitions found in literature and attempts to establish a common ground between them.

From an organisational point of view, DevOps is about fast and flexible provisioning of business processes. It aims to integrate traditionally separate organizational silos into teams that work cross-functionally. These teams focus on delivering new operational features continuously. In general, DevOps is a culture shift towards increasing collaboration between different units of business: development, quality assurance and operations. Figure 2.1 depicts a general view of the DevOps process for an enterprise working in the field of software. [4]

Figure 2.1: The generic DevOps process according to Ebert et al. [4]

From the perspective of a software architect, Bass et al. [5] define DevOps as a set of practices that intend to reduce the time between implementing a change to a software system and the change being integrated into production, while maintaining high software quality. Quality consists of the usability of the system as perceived by users, developers and administrators. High quality also means that the software is secure, reliable and available. The software architect’s definition of DevOps also states that the delivery mechanism should be made as reliable and repeatable as possible. The scope of DevOps practices is not constrained only to the development, testing and deployment of software. It also includes ensuring and monitoring that the system meets its requirements throughout its life cycle. Figure 2.2 shows an overview of the DevOps process from the point of view of the software architect.

Figure 2.2: The DevOps process according to Bass et al. [5]

In contrast to the definition given by Ebert et al. [4], the definition of DevOps given by Bass et al. [5] is goal oriented. It does not specify what needs to be done structurally or methodologically in the organisation to achieve DevOps. The practice is only defined as a goal to reduce the time between changes made to the code and deploying the code into production. This definition does not specify which methods should be used in order to work towards this goal, which is in contrast to many other common definitions that tend to emphasize the connection between DevOps and agile methods [5].

At the most concrete level, from the point of view of a software developer, according to Hüttermann [6] the term DevOps is a combination of development (Dev) and IT operations (Ops). It describes practices that make delivering new software more efficient. Traditionally, in organisations that produce software, employees are divided into teams with respect to the type of work they are doing. Deploying software into production and maintaining it requires a different type of skillset from that of a developer, which can create the silo of IT operations. Hüttermann [6] states that there is a fundamental conflict baked into this type of separation. The success of a development department is measured in the amount of features and bug fixes it can produce, while the success of an operations department is measured in its ability to maintain the stability of the software system. New features and fixes always threaten this stability, so it is in the best interest of an operations department to create mechanisms that resist these changes. In contrast to the definition given by Bass et al. [5], Hüttermann [6] states that the set of practices named Agile methods must be applied in order to bridge the gaps between organisational silos and achieve DevOps.

Figure 2.3 depicts an example migration from horizontal teams to DevOps-compliant cross-functional teams that are each responsible for different projects. Members of the teams are equipped with different types of skill sets. A core team consisting of representatives can also be established. This team has an overview of the whole system and can issue architectural refactorings. [7]

As can be interpreted from the literature of this section, the term DevOps can be defined in different ways. This chapter attempted to cover the perspectives of an organisation, a software architect and a developer. The overlapping principle found in all of these points of view seems to be the goal to make the production and maintenance of software systems faster and more efficient.

Figure 2.3: Migrating from a traditional horizontal organisation structure (a) to vertically arranged teams in DevOps (b) [7]

2.2 An overview of DevOps

DevOps is built on the ideal that both old and new methods can be summarized, which can lead to differing descriptions of what is part of it [6]. This section aims to define the different practices and fundamentals that are most commonly linked to DevOps and to find a common ground on which to communicate about the topic.

2.2.1 Values and goals

DevOps has a strong emphasis on communication and collaboration between experts of different fields. Individuals and groups should feel committed to shared goals and values. In addition to technical aspects like processes and tools, incentives should also be aligned cross-functionally. This can be achieved by establishing a common and transparent definition of high quality, which rewards developers for frequent changes and also rewards operations for making deployments and keeping them stable. Achieving DevOps requires that employees are open-minded and willing to solve a problem together. [6]

2.2.2 Processes

The processes of developing software are more important than the tools used in development and delivery. Tools can be chosen afterwards to fit the needs of a software development process. In DevOps, processes are crucial in addressing the collaboration between development and operations departments. The functions of these departments should be accessed and communicated to via interdisciplinary experts. Development and operations are part of the same process, which should be holistic, streamlined, comprehensive and end-to-end. Speed should not be sacrificed while development delivers software to operations. Responsibilities during this process should not be tied to individual roles but to the deliverable itself. [6]

In DevOps, both development and operations are focused on the common goal of delivering high-quality software as frequently as possible. This favours the vertical optimization approach shown in Figure 2.4, in which the components of an infrastructure are laid out and utilized to achieve the ideal architecture of each application. Optimization is only done vertically within the boundaries of an application. [6]

Figure 2.4: Vertical optimization of applications [6]

2.3 DevOps software management

This section covers some of the DevOps-related practices through which to manage software development and deployment.

2.3.1 Code reviews

Facilitating code reviews is important in ensuring that the software deliverables maintain high quality. These reviews can be formal meetings or interactions where the code is being inspected thoroughly. The reviews can also be organised informally via emailing or programming in pairs. Different tools for code reviews can also be utilized. [8]

2.3.2 Source code management

Source Code Management (SCM) systems have been an essential part of software development for decades. However, these systems offer various benefits with respect to robust integration and automation when used in conjunction with DevOps processes. Branching features should be utilized in order to easily track features and bug fixes related to different releases. Coordination between different contributing members of a development team can also be facilitated using SCM. Other benefits of these systems with respect to DevOps processes are the ability to review any changes before implementing them and to roll back a commit if it turns out to be undesired. SCM systems facilitate a framework in which to make, back up and recover changes incrementally, which is important when following a DevOps process. [8]

2.3.3 Build management

Build management refers to the ability to control the assembly of the different components of software or code into a single deliverable. Build management means preparing a build environment in which the source code and all its dependencies are compiled into a single functional unit. In DevOps, builds can be manual, on-demand or automatic. Triggered automated builds are initiated when a change is committed in the SCM system. Scheduled automated builds are scheduled to run periodically. These types of builds are run on a continuous integration server. Builds can also be initiated on demand by running a script. Implementing build management is crucial in ensuring that the software is usable and reliable across different environments. [8]

Using an artifact repository manager allows the management of build executables throughout their life cycles, and it also makes it convenient to share builds across different domains [8]. According to Laster [9], an artifact is a deliverable or something that is used by a deliverable. An artifact repository manager can be used to separate the releases of artifacts with respect to, for example, development quality, testing quality and production quality [9].

2.3.4 Release management

Release management facilitates the transition of a software release from the development and deployment phase to the support and maintenance phase. It is linked to many other DevOps processes in the Software Development Life Cycle (SDLC). Using a release management system allows, for example, tracking and integrating every phase of the SDLC and monitoring the status of recent deployments. [8]

2.3.5 Automation

Automating the stages of the DevOps process is essential when aiming for frequent software deliveries. It also enables feedback to be received rapidly. A deployment pipeline should be implemented in order to achieve the goals of automation. This pipeline includes all changes from every component of a software system and forms a single path that takes these changes to production. The pipeline should also be able to set up all the required environments automatically. This can be simplified with practices and tools like infrastructure as a service, platform as a service, virtualization and data centre automation tools. Figure 2.5 presents an outline of a deployment pipeline. [10]

Figure 2.5: Overview of the deployment pipeline [10]
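To make the pipeline concept concrete, the following sketch expresses consecutive automated stages as a simple script that stops at the first failure, so that broken changes cannot propagate towards production. The stage names and the make targets they invoke are illustrative assumptions, not the pipeline of this thesis.

```python
# A minimal sketch of a deployment pipeline as consecutive automated
# stages, in the spirit of Figure 2.5. Stage names and commands are
# illustrative assumptions.
import subprocess
import sys

STAGES = [
    ("build", ["make", "build"]),
    ("unit-test", ["make", "test"]),
    ("package", ["make", "package"]),
    ("deploy-staging", ["make", "deploy-staging"]),
]

def run_pipeline():
    for name, command in STAGES:
        print(f"--- stage: {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # A failing stage stops the pipeline so that broken
            # changes never propagate towards production.
            print(f"stage '{name}' failed, aborting pipeline")
            sys.exit(result.returncode)
    print("all stages passed, change is releasable")

if __name__ == "__main__":
    run_pipeline()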

2.4 Continuous Integration

In the context of Continuous Integration (CI), the term continuous suggests that there is automation involved in the practice. In addition to automating build management, this automation applies to, for example, testing, validating, installing and configuring software. It does not mean that processes should be executing continuously; rather, that in the event of a new change being implemented into the system, this change can be handled and deployed quickly and automatically. This allows for initiating builds more frequently. It also enables faster detection of failures so that a correction or a rollback can be made as soon as possible. Continuous also means that a change can propagate, without a human needing to intervene, through the stages of building, testing, releasing and so on. When executed consecutively, these stages form a so-called ”pipeline”, which aims to automatically turn components of software into a releasable unit. This process can be made to require human intervention in the case of failure or validation. [9]

The meta-analysis by Shahin et al. [3] stated that practising CI improves code quality and increases the productivity of development teams.

2.4.1 Unit tests

The CI phase in software development includes merging and testing the changes made by a developer. The developer should be notified immediately if any problems arise during the merge so that the code base does not stay broken for any longer than necessary. To identify these problems, the CI platform runs targeted tests called unit tests to confirm that, after the changes, the software still produces the desired outputs, inputs are handled correctly and so on. These tests should be able to run independently of any external databases or other resources. If, for example, data from a database is a dependency of the software system, test data with the same structure could be created as input for the unit tests. [9]
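The following minimal sketch, written with Python’s standard unittest framework, illustrates the point about external resources: the database dependency is replaced with in-memory test data that mirrors the structure of the real rows. The function under test is a hypothetical example.

```python
# A unit test that runs independently of an external database: the
# real data source is replaced by in-memory test data with the same
# structure. average_order_value is a hypothetical example function.
import unittest

def average_order_value(orders):
    """Compute the mean of the 'total' field over a list of orders."""
    if not orders:
        return 0.0
    return sum(order["total"] for order in orders) / len(orders)

class AverageOrderValueTest(unittest.TestCase):
    def test_average_of_known_orders(self):
        # Test data mirrors the structure of rows in the real database.
        orders = [{"id": 1, "total": 10.0}, {"id": 2, "total": 30.0}]
        self.assertAlmostEqual(average_order_value(orders), 20.0)

    def test_empty_input_is_handled(self):
        self.assertEqual(average_order_value([]), 0.0)

if __name__ == "__main__":
    unittest.main()
```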

2.4.2 Pre-checks

Most SCM systems include a functionality for executing the unit tests automatically, or a possibility to send a notification to a dedicated CI tool that does the testing. A CI platform could also implement pre-checks that can, for example, interrupt a push to a code repository. Additional measures such as a code review would then be required before publishing the changes in the code base. Pull requests of GitHub repositories, for example, are a way of pre-checking code. Via these requests developers can propose a change to a software system. This change is then inspected by the owner of the repository, who then decides whether to apply it or not. This way of including more parties in the validation process of course reduces the frequency of integration. [9]
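As a rough illustration of such a pre-check, the sketch below could be installed as an executable .git/hooks/pre-push script: Git aborts the push whenever the hook exits with a non-zero code, so changes must first pass the unit test suite before being published. The test command used here is an assumption.

```python
# A minimal sketch of a pre-check, assuming it is installed as an
# executable .git/hooks/pre-push hook: the push is interrupted unless
# the unit test suite passes.
import subprocess
import sys

def main():
    result = subprocess.run([sys.executable, "-m", "unittest", "discover"])
    if result.returncode != 0:
        print("pre-push check failed: unit tests must pass before publishing")
        sys.exit(1)  # a non-zero exit code makes git abort the push

if __name__ == "__main__":
    main()
```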

2.5 Continuous Delivery

While the CI phase of the DevOps process merged and validated the isolated changes made to the software, the Continuous Delivery (CD) cycle combines the changes with the rest of the production code. The resulting code is then tested through sequential stages of increasing comprehensiveness. This process is called the Continuous Delivery Pipeline (CDP). The stages of the CDP can be divided into smaller, more specialized ”jobs”. For example, the stage usually referred to as the commit stage typically consists of tasks that compile the code, do unit testing and integration testing, evaluate the code and release artifacts. The commit stage could then hand off the artifacts to an acceptance stage, which would verify these artifacts and perform functional testing on them. [9]

If at any stage of the CDP a problem arises, the execution of the pipeline should be stopped. The aim of the continuous delivery mechanism is to ensure that non-deployable software does not reach the end of the pipeline. [9]

In their meta-analysis, Shahin et al. [3] state that CD has several benefits such as lower costs, faster feedback delivery and decreased risk of deployment failure. They also establish CI as a prerequisite for CD.

2.5.1 Testing

The testing done in the stages of the CDP aims to progressively gain more confidence that the software under delivery is functional. This testing can take the form of integration testing, functional testing or acceptance testing, for example. Integration testing ensures that the different modules or components of the software function together. Functional testing ensures that running the software produces desired outputs and results. Acceptance testing is a way of comparing some metrics of the software against predetermined criteria. These criteria can include, for example, scalability and performance. [9]

2.5.2 Coding metrics

There are several tools available that can help with testing the quality of the source code. These tools usually calculate a metric called code coverage, which is the proportion of the code that is covered by the tests. Counting the lines of source code and measuring the complexity are other quality indicators. Structures found in the code can also be compared with conventional programming patterns. [9]
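As an example of collecting the code-coverage metric, the following sketch uses the coverage.py library (assuming it is installed, e.g. with pip install coverage) to run a unit test suite under measurement and print the proportion of statements exercised. The test directory name is a placeholder.

```python
# A minimal sketch of measuring code coverage with the coverage.py
# library; the "tests" directory name is a placeholder assumption.
import coverage
import unittest

cov = coverage.Coverage()
cov.start()

# Run the unit test suite while coverage measurement is active.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
# Prints per-file statement counts and the proportion of the code
# exercised by the tests.
cov.report()
```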

2.5.3 Infrastructure-as-Code

One of the recommended DevOps practices is the infrastructure-as-code ideal. The goal of this ideal is to include all the functionality of the continuous delivery pipelines in the SCM. This functionality could then be changed on demand by the software developers, which would trigger the CDP to fetch the changes from source control. [9]
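A minimal, tool-agnostic sketch of this ideal is given below: the desired environment is described as data that lives in the SCM next to the application code, and an apply step (stubbed here) converges the real environment towards it. All names and values are hypothetical illustrations rather than the configuration used in this thesis.

```python
# A tool-agnostic sketch of the infrastructure-as-code ideal: desired
# state is declared as versioned data, and a pipeline step applies it.
# All names, values and the apply step are hypothetical illustrations.
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvironmentSpec:
    name: str
    replicas: int
    image: str

# This declaration would live in the SCM next to the application code,
# so changing it triggers the delivery pipeline like any code change.
STAGING = EnvironmentSpec(name="staging", replicas=2, image="webapp:1.4.2")

def apply(spec: EnvironmentSpec) -> None:
    # A real provisioning tool would create or update the environment
    # here; this stub only reports the desired state.
    print(f"ensuring {spec.replicas} replicas of {spec.image} in {spec.name}")

if __name__ == "__main__":
    apply(STAGING)
```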

2.6 Continuous Deployment

The Continuous Deployment process can take place after the CDP has verified the validity and deployability of the changes made to the software. The software can be deployed automatically due to the tests made in the CI and CD phases. Here, deployment can refer to all the multiple ways that make the software available to the end user. The software could be made downloadable or be running in a web location. [9]

According to Zhu et al. [11], the lack of coordination between the developers affects the architecture and design of the software system under continuous deployment.

Deployment of every new deliverable might not always be desired. In this case, so-called manual checks could be added. These checks would require human intervention and acceptance prior to deployment. Manual checks usually take place earlier in the DevOps pipelines and are implemented as User Acceptance Tests (UAT). [9]

Deployed systems are often monitored when DevOps is practiced, which might result in rolling back changes made to the software. This affects the architecture of the system and how the system exposes information [11].

According to Shahin et al. [3], the difference between the definitions of CD and Continuous Deployment is debated in industrial and academic contexts. Continuous Deployment is seen to differ from Continuous Delivery in its utilization of a production environment to which the software changes are deployed. This should all be done automatically in the case of a new commit being made to the system, while in CD, deciding to deliver software should be the final manual step of the process. This makes Continuous Delivery a suitable practice for all kinds of organizations, while Continuous Deployment might be applicable only in certain contexts. [3]

2.6.1 Deployment types

The DevOps framework can facilitate different types of deployments. A staged deployment makes the release available to a subset of all the end users. If problems with the new version emerge, these users can always be redirected to the previous version without all users being affected. This is referred to as a canary deployment. [9]

Another alternative form of deployment is the blue/green deployment, which utilizes two different environments during the DevOps process. One of these is intended for testing and development while the other is the production environment. Whenever the non-production environment is deemed ready to be released to production, it is labelled and used as the new production environment. The previous production environment is switched to be used for testing and development of the next release. [9]
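The routing decision behind a canary deployment can be sketched as follows; the deterministic hashing rule and the five-percent split are illustrative assumptions. Setting the fraction back to zero redirects every user to the previous version.

```python
# A minimal sketch of canary routing: only a subset of end users is
# sent to the new version. The hashing-based rule is an assumption.
import hashlib

CANARY_FRACTION = 0.05  # share of users sent to the new release

def route_version(user_id: str) -> str:
    """Deterministically assign a user to 'new' or 'previous'."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] / 255.0
    return "new" if bucket < CANARY_FRACTION else "previous"

if __name__ == "__main__":
    for user in ["alice", "bob", "carol"]:
        print(user, "->", route_version(user))
```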

2.7 Studies on continuous practices

Figure 2.6 describes how the different continuous practices covered in this thesis are linked together. This section of the literature review covers studies on the different practices, approaches, challenges and tools related to CI, Continuous Delivery and Continuous Deployment.

Figure 2.6: The connection between continuous practices [3]

2.7.1 Existing literature reviews

The existing literature reviews on continuous practices covered by Shahin et al. [3] reveal a conclusion that the practice of frequent releasing is widespread in the software development industry. However, evidence of the benefits and drawbacks of rapid releases is concluded to be sparse. A mapping study by Rodríguez et al. [12] also reveals that in some contexts customers do not like receiving software on a continuous basis. Many organizational changes in, for example, ways of working, mindsets and quality assurance practices are required to undergo a CD transformation. This transformation has proven to be a challenge when applied to embedded systems. The mapping study extracts ten attributes that define the practice of CD. These include, for example, (a) frequent releasing; (b) testing continuously and applying QA; (c) CI; and (d) having deployment, releasing and delivery processes and configuring the deployment environments. [3]

2.7.2 Continuous practices: available tools and approaches

In their meta-analysis, Shahin et al. [3] examined studies on how building and testing times could be reduced. Approaches using two or three nested Virtual Machines (VMVM or VMVMVM) facilitated the isolation of testing dependencies, which decreased the dependency-related overhead when executing short-term test cases. The method of using three nested Virtual Machines allowed the parallel execution of prolonged test cases.

The visibility of a CI process should be facilitated so that the parties involved can easily understand and study the results of builds and tests. Some common CI tools, such as Jenkins, generate large amounts of data that might not be useful to testers, developers and other stakeholders. Studies examine the utilization of approaches, such as SQA-Profile, to create separate views for different users based on their roles in the CI environment. Organizations should describe and structure their testing processes and goals prior to adopting CI. CIViT (Continuous Integration Visualization Technique) is a tool that attempts to create an end-to-end visualization of the testing efforts. This tool helps in outlining the status of quality and testing features and makes it easier to avoid duplicate tests. [3]

Several studies included in the meta-analysis by Shahin et al. [3] focus on the failures and errors related to CI systems. A tool named WECODE is reported to automatically identify merging issues with uncommitted code at an earlier stage than an SCM system would. Another method called incremental integration aims to increase the resilience of a CI system by utilizing earlier build results of the components that fail to build. In a normal integration procedure, the build would be stopped altogether.

The literature review by Shahin et al. [3] found only a few papers that address the security and scalability issues of CDPs. One proposition is an approach of integrating connect, disconnect, create and delete security tactics in order to make CDPs more secure. Scalability of a CI process is addressed with a proposition called Enterprise Continuous Integration (ECI), which converts a large software project into a modular system using binary dependencies.

2.7.3 Tools for CDP design and implementation

Choosing suitable tools and platforms for continuous practices helps in minimizing the related adoption difficulties. The implementation of a continuous deployment pipeline could be partitioned into seven phases: (i) an SCM system, (ii) tools for managing and analysing code, (iii) tools for building, (iv) a CI server, (v) tools for testing, (vi) configuration and provisioning management and (vii) a CD server. Studies reviewed by Shahin et al. [3] report Git/GitHub and Subversion to be the most commonly used SCM systems. Seven papers also report the use of code analysis tools in conjunction with the CDP. A tool called SonarQube can be used in conjunction with a Jenkins CI server to analyse, for example, how big a portion of the code is covered by the tests or whether there are any deviations from the development standards. [3]

A CI server can be utilized for automatic building and testing of software systems. Jenkins is the most commonly referenced CI server implementation in the literature reviewed by Shahin et al. [3]. Other common tools are Bamboo and CruiseControl. After the stage of CI server execution, testing should be performed in different environments. Merely four papers [13–16] in the literature review combined testing tools with a CD pipeline. The frameworks JUnit and NUnit were used in two scenarios [13, 14]. One paper [14] also employed Athena, which is a tool that can run tests and deliver the results in a format that is compatible with Jenkins. [3]

2.7.4 Challenges in migrating to continuous practices

The challenges in adopting continuous practices identified by Shahin et al. [3] were mostly related to organizational and human factors. Technological challenges include, for example, the deficiencies that available tools have in code revision and in the test feedback provision of CI. The reliability and security of CD tools might also not be guaranteed. Frequent changes made to the database schemas of a system, for example, should also be robustly handled by a CDP. [3]

Shahin et al. [3] also state in their meta-analysis that a proper testing strategy should be implemented in order to mitigate the challenges in adopting continuous practices. The tests should also be of high quality, which means that there are not too many of them, they should cover a big portion of the software code and their run time should be minimized.

Conflicts with code merges also complicate the practising of CI. These can be caused by having too many dependencies, incompatible dependencies or third-party dependencies in the software system. Having systems depend on hardware or legacy code can also cause problems with migrating to CD. [3]

2.8 Adopting DevOps: a case study

Adopting DevOps can be both culturally and technically challenging. One issue, for example, is to find the most suitable kinds of practices for different systems in different organizations. Organizations that function in the domain of the Internet were the ones to introduce and grow the practices of DevOps. Google and Netflix, for example, make changes to their systems by introducing system components that are in principle extensions of the same component family. A more traditional institution such as a bank evolves its systems with a different mind-set. This might lead to the question: which domains are best suited for adopting the practices of DevOps? The domain of big data systems is one example. Organizations of this domain require frequent deployments to keep their data pipelines up to date. [11]

This section of the thesis goes through the different aspects related to adopting DevOps by following an example case study originally conducted by Callanan and Spillane [17].

2.8.1 Background

Wotif group is an Australian organization working in the domain of travel e-commerce. During the years 2013 and 2014, the company underwent a reform of its software delivery process. Utilizing the practices of DevOps and CD enabled the organization to reduce the release time of new software from weeks to hours. [17]

Prior to their DevOps transformation, Wotif group had a software delivery process with too many stages of validation and bureaucracy. The two-day-long deployment process of the company often left developers frustrated with their releases. The difficulty of the process led to more features being included in the releases, which further increased the workload of the operations department. Initially, Wotif saw that utilizing microservices as the system architecture would be the first solution to these problems. In spite of the advantages provided by the architecture, the legacy code bases of the company resulted in having to manually test and deploy every microservice, which increased the software release problems even more. [17]

2.8.2 Starting the transformation

The transformation of Wotif group started from the goal of reducing the average lead time of releases. A CD team was gathered to standardize the process of delivering software. Cross-functional meetings voiced the need to standardize, for example, the locations of log files, initialization scripts and configuration files. Different conventions related to the packaging and formatting of files, applications and certificates were also included in the list. [17]

Implementing the standards that facilitate a DevOps-compliant process can make the development and operations teams slower. Wotif discovered that in the case of changing the standards, the teams would have to re-educate themselves on the process. This meant that automatic verification of changes should be implemented. [17]

2.8.3 DevOps adoption methods

Wotif started their implementation of DevOps by automating the verification of standards compliance. This was achieved by executing a test suite of basic Linux commands remotely with SSH to verify that the behaviour and appearance of the system were as expected. The compliance test suite was executed on every commit in both acceptance and test environments to block non-compliant releases and identify the updates that were needed to achieve compliance. The test cases of the test suite were also annotated so that applications conforming to older standardization could still be supported. [17]

Wotif also created an elementary reference implementation called ”helloworld-service”. This service was deployed first to show that a functional and automated deployment pipeline existed. In case of a platform update, this reference build pipeline could be used to accept the updated compliance tests. The code base of the reference implementation was also used as a development template for creating new microservices that comply with the latest standards. [17]

Deployment automation was identified to be an important step in implementing the CD pipeline for Wotif. Employees of the company generally thought that automation was inconceivable because the established deployment process required so much manual checking and human intervention. Wotif used Git, Yum, TeamCity, Puppet and Hiera with a Fabric code base to create a DevOps toolchain that replicated the manual stages of the release process. It was also required that all instances of the service were deployed with a smoke test shell script that assumed the service under testing was running on localhost. This script would then execute various tests depending on the application. The smoke tests could include, for example, sending requests to the HTTP endpoints of the service or running some of the tests from the main test suite. The smoke test script was given write privileges to test data only, so that its usage could be risk-free. Using all these tools allowed Wotif to cut their release cycle time by 85 percent, which meant a shift from hours to minutes. [17]

Despite utilizing all the methods mentioned in this section, Wotif still faced significant delays in their CD pipelines. This was caused by multiple microservices being tested in the same run, which meant that measures would have to be taken to ensure they were released together. This was solved by mandating that every microservice be made backward compatible so that it could be released independently. [17]
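A smoke test of the kind Wotif describes can be sketched as below, rendered here in Python rather than shell; it assumes the service under test is running on localhost, and the port and endpoint paths are illustrative.

```python
# A minimal sketch of a smoke test script of the kind described above,
# assuming the service under test runs on localhost; the port and
# endpoint paths are illustrative assumptions.
import sys
import urllib.request

BASE_URL = "http://localhost:8080"
ENDPOINTS = ["/health", "/api/status"]

def smoke_test() -> int:
    for path in ENDPOINTS:
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=5) as resp:
                if resp.status != 200:
                    print(f"FAIL {path}: HTTP {resp.status}")
                    return 1
                print(f"OK   {path}")
        except OSError as exc:
            print(f"FAIL {path}: {exc}")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(smoke_test())
```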

2.8.4 Conclusion

After Wotif group had reviewed their new software release process, they decided to name it ”SLIPway” (Simple Lightweight Independent Pathway). This process was refined to reward developers who followed the practices of DevOps and CD. Rules were defined to keep system changes independent, prevent manual testing during release time, ensure that the application is standard-compliant and allow the operations team to roll back releases. On average, implementing these rules reduced the release cycle time from approximately two weeks to one day. Figure 2.7 depicts the impact of the DevOps-based release process SLIPway compared to the legacy methods of Wotif group. [17]

While adopting CD and DevOps, Wotif group additionally learned that changes to standards should be delivered with migration notes, helping the developers identify the necessary steps that are needed for implementing updates. The microservices delivered by the development teams had shared dependencies on internal Java libraries. Wotif switched from date-based version numbering to semantic versioning in order to separate non-standardized library updates from standardized ones. This spared the developers from having to integrate unnecessary updates. [17]

Figure 2.7: Effect of SLIPway compared to legacy methods [17]

2.9 Test-Driven Development

Test-Driven Development (TDD) is a process considered to be one of the agile software development practices. It has created controversy regarding its effect on software quality and developer productivity. [18] This section of the thesis gives an overview of TDD and explores recent studies to find different viewpoints on its effectiveness and validity.

2.9.1 Defining TDD

The practice of TDD is based on incrementally constructing the design of a system. This is achieved by first writing the unit tests for a software component. The implementation of this component aims to make the unit tests pass. Other components are implemented if the unit tests and the implementation of the first component require the developer to do so. The process of TDD does not explicitly include separate design and testing phases, but it can be expressed as a design and design improvement cycle. Design improvement is referred to as ”refactoring” the code. Tests for a feature need to be thought through prior to development. [19]

1. Write a unit test.

2. Compile the test code. (Compilation should not succeed because no production code has been implemented at this stage.)

3. Implement just enough production code for the compilation to succeed.

4. Run the tests again.

5. If the test fails, again implement only the amount of production code that is required for the test to pass.

6. Run the tests.

7. Refactor the code for better quality.

8. Start over from 1.

A new feature of the software system is not considered done unless the newly written unit tests and all the previously written test cases in the test suite pass [19].

According to Besson et al. [21], having a comprehensive test suite associated with the software gives the developer confidence and courage to make changes to the code. The tests allow potential problems with the implementation to be detected immediately. Without any tests, the correctness of an integrated or even deployed software change cannot be confirmed.
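A minimal illustration of one pass through the red-green-refactor cycle listed above is sketched below; the tested function is a hypothetical example rather than code from the thesis project.

```python
# One red-green-refactor cycle under the assumptions of the list
# above; slugify is a hypothetical example function.
import unittest

# Step 1 (red): the test is written first and initially fails, because
# no production code exists yet.
class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens_and_case_is_lowered(self):
        self.assertEqual(slugify("DevOps in Practice"), "devops-in-practice")

# Steps 3-6 (green): implement just enough production code to make the
# test pass.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# Step 7 (refactor): the implementation can now be improved, e.g. to
# strip surrounding whitespace, while the test suite keeps it honest.

if __name__ == "__main__":
    unittest.main()
```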

2.9.2 Studies on TDD

The paper by Rafique and Mišić [22] conducts a meta-analysis of 27 studies that inspect how following TDD affects the external quality of code and the productivity of developers. The analysis found that generally TDD increases code quality by a small factor but has very little or no noticeable impact on productivity. Subgroup analyses found that industrial studies showed a larger improvement in quality and a bigger drop in productivity in comparison with academic studies. Differences in the effort put into implementing the tests also affect these results. The study concluded that future research needs to include some additional conditions. For example, a traditional development process should be established in the test setting before running the experiment. This eliminates the effect of extra effort put into tests. The scale of the implemented task should also be large enough for the benefits in quality and productivity to become visible. A common definition for internal quality is also needed. Finally, there should be an emphasis on the training given to test subjects prior to the experiment and a focus on researching TDD in a more long-term manner.

In their article, Karac and Turhan [18] examine the evidence on whether TDD has lived up to its promises. They present a table shown in Figure 2.8, which reveals that the studies fail to demonstrate a general benefit from TDD with respect to productivity and quality.

Figure 2.8: Results from studies on TDD by Bissi et al. [23], Munir et al. [24], Rafique and Mišić [22], Turhan et al. [25], Shull et al. [26], Kollanus [27] and Siniaalto [28]. [18]

There are several likely reasons that lead to the inconsistencies presented in the different TDD studies of Figure 2.8. In addition to the different contexts the experiments were conducted in, according to Karac and Turhan [18], the general structure of TDD, and more specifically the large amount of ”cogs” in TDD, might lead to these inconsistencies. These cogs also interact with each other. In addition, the impact of TDD is affected by, for example, the implementation at hand, legacy code and the differing levels of expertise of individuals. Most of the studies also have a strong emphasis on only the test-first aspect of TDD, while it has been discovered that the length of the development cycle has a bigger influence on software quality and developer productivity. A study conducted with 39 professionals by Fucci et al. [29] showed that shorter cycles in development lead to an improvement in these metrics.

More in line with the principles of DevOps, Alistair Cockburn [30] states in Elephant Carpaccio that implementing end-user-visible small tasks is the main reason agile developers can deliver frequently. Based on this aspect of TDD, Karac and Turhan [18] based the implementations of their experiments on detailed descriptions of user stories. This proved to have a noticeable positive effect on code quality and developer productivity in conjunction with TDD.

2.10 Web service choreographies with TDD

Choreographies are a way of constructing web services in a distributed way, while orchestrations function in a centralized fashion. In a choreography, software components interact with each other in a collaborative manner with no centralized control. Implementing large-scale distributed applications using choreography development methods still results in problems with robustness. According to Besson et al. [21], the implementation of choreographies is often done in a non-disciplined way with little or no testing conventions. They propose the inclusion of TDD in the process as a solution. [21]

How exactly is a web service defined? Sakthivel et al. [31] define services as the most optimized and organized components used for building software applications. They also link services with features such as platform independence, language independence, loose coupling, precise definition, re-usability, self-containment and ease of integration.

This section examines the development of web service choreographies and whether TDD can help make them more reliable and robust.

2.10.1 Choreography development

Service-oriented architecture (SOA) offers a framework on which to develop distributed software using services as building blocks. Web applications can be constructed using individual web services as a foundation. These applications can then be combined into more complex business processes. Each component of a service choreography is only focused on the functions it needs to fulfil its role, which makes choreographies a plausible way to develop software with a SOA. [21]

A web service orchestration on its own is an executable process. A web service choreography can be developed using multiple orchestrations. Each of these orchestrations has a different role in controlling the information flow of the entire system. In a choreography, the different roles are described with a Web Service Description Language (WSDL) document. No central entity controls or tracks the process as a whole. Orchestrations send each other messages that are usually described using a modelling language. On a lower level, these orchestrations must have a WSDL interface that is in line with the role given by the choreography. [21]

Figure 2.9 shows an example of how choreographies can be implemented through orchestrations. The leftmost orchestration implements the role of a store. This store sends messages, or in this case payments, to an orchestration that executes the role of a bank. The bank sends a confirmation message to a shipper orchestration, which delivers the information back to the store. Information flows through the system in a decentralized manner. [21]

Figure 2.9: An example of how service orchestrations can form a choreography [21]

2.10.2 RESTful Web services and SOAP

RESTful Web services are a way of composing web services that communicate using HTTP. These services should be lightweight to implement and deploy and also easily accessible. The REST in RESTful stands for Representational State Transfer, which is an architectural style that represents the resources of everyday environments as services. In addition to software systems and libraries, all types of mundane resources such as mobile devices or other electronic machines can be depicted as RESTful services. [31]

The RESTful architecture enables developing orchestrated or choreographed web applications. The constraints of this architecture are Client-server, Stateless, Cacheable, Layered system, Uniform interface and optionally Code on demand. The POST, GET, PUT and DELETE methods of HTTP facilitate the inter-service collaboration of RESTful services by respectively creating, reading, updating or removing a resource. [31]

RESTful Web services can be composed statically or dynamically. In the static method, the services to be implemented are designed upfront in a design phase. With dynamic service creation, the services are designed and implemented during run time. [32]

SOAP is a protocol that uses the interfaces of services to reveal their business logic. In comparison to REST, SOAP is more heavyweight in bandwidth utilization, message size, latency and other computational requirements. RESTful services provide numerous resources but only a few static methods, which makes REST a more flexible and better-suited architecture for most applications, mobile applications in particular. However, based on studies presented by Malik and Kim [33], SOAP can offer more communication security. Figure 2.10 shows the comparison between REST and SOAP in conceptual terms. [33]

Figure 2.10: Comparing the concepts of REST and SOAP [33]

In their conference paper, Malik and Kim [33] find that using a RESTful architecture in their actuator control network implementation gave beneficial results compared to SOAP. With REST, query access and response times of services were fast, while the CPU utilization and load of accessed services were diminished. With smaller message sizes, REST resulted in quicker message parsing and decreased latency.
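The method-to-operation mapping described above can be sketched with nothing but the Python standard library; the resource paths, payloads and the in-memory store are illustrative assumptions rather than a recommended design (a production service would, for instance, keep its state outside the process to respect the Stateless constraint).

```python
# A minimal sketch of the RESTful method-to-operation mapping: POST,
# GET, PUT and DELETE respectively create, read, update and remove a
# resource. Paths, payloads and the in-memory store are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

RESOURCES = {}  # id -> resource body
NEXT_ID = 1

class ResourceHandler(BaseHTTPRequestHandler):
    def _send(self, code, body=None):
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        if body is not None:
            self.wfile.write(json.dumps(body).encode())

    def _read_body(self):
        length = int(self.headers.get("Content-Length", 0))
        return json.loads(self.rfile.read(length) or b"{}")

    def do_POST(self):    # create
        global NEXT_ID
        RESOURCES[NEXT_ID] = self._read_body()
        self._send(201, {"id": NEXT_ID})
        NEXT_ID += 1

    def do_GET(self):     # read the whole collection
        self._send(200, RESOURCES)

    def do_PUT(self):     # update, e.g. PUT /<id>
        rid = int(self.path.strip("/"))
        RESOURCES[rid] = self._read_body()
        self._send(200, RESOURCES[rid])

    def do_DELETE(self):  # delete, e.g. DELETE /<id>
        RESOURCES.pop(int(self.path.strip("/")), None)
        self._send(204)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ResourceHandler).serve_forever()
```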

2.10.3 Integrating choreographies and TDD

Testing choreographies automatically is challenging. In their paper, Besson et al. [21] examine the use of Rehearsal, which is an open source framework for web service choreography testing. For example, with this framework web service clients can be created dynamically, messages can be intercepted, services can be emulated or ”mocked”, the choreography can be described in abstract terms and the scalability of the system can be explored. Based on these features of the framework, the paper proposes a TDD-based methodology to aid the development of choreography roles. Figure 2.11 presents the phases of this methodology. According to Besson et al. [21], following these phases helps in maintaining an extensive suite of tests, which in turn makes it easier to implement changes to the code because problems can be more easily detected.

The study by Besson et al. [21] yielded overall positive results for the use of the Rehearsal framework. The above-mentioned TDD-based development methodology also performed well in terms of adequacy and efficiency. The study concludes that these tools have good potential in enhancing the quality of web service choreographies and giving a more disciplined approach to the development process.

Figure 2.11: Phases of the TDD methodology [21]

2.11 Microservices and DevOps

From a service-oriented point of view, there is an increasing tendency to build continuously deployed systems using microservices as an architectural style. The developers should restrict the size of each service while having a uniform understanding of the overall system. Furthermore, they should have a predisposition to ensure they are delivering reliable services that integrate well with the main system. Making the changes incremental is encouraged when developing systems that are already used in production. [11]

Most common Google searches on the topic of microservices are currently driven by technology. This indicates that a general awareness of the definition of the term itself has been well established. Companies delivering software as well as content providing organizations have adopted the microservice architecture through the practices of DevOps. In 2016, Balalaie et al. [7] stated in their article that after 2014 the usage of both keywords ”DevOps” and ”Microservices” had grown at an equal rate on Google Trends (https://trends.google.com/). Figure 2.12 shows the current status of these search terms. While DevOps can be practiced using a monolithic code base, microservices are effective in that they stress the importance of using small teams. [7]

This section of the thesis examines how microservices are utilized as a part of a DevOps process.

1 https://trends.google.com/

Figure 2.12: Google Trends report on keywords "DevOps" (red) and "Microservices" (blue)

2.11.1 Defining microservices

Microservices is a cloud-native architectural style based on building software systems as a set of small services, where each microservice can be deployed independently. These services can execute as their own processes and communicate with each other using different platforms or technological stacks. This communication is implemented using RESTful APIs, APIs based on remote procedure calls or other lightweight methodologies. In a microservices architecture, each service is a representation of a business process. [7]

2.11.2 Microservices and DevOps

Transforming monolithic code bases to a microservices architecture has many advantages. These include, for example, an increased capability to implement changes and new features in existing systems. Using microservices also reduces the time it takes to deliver new software and better organizes development teams around individual tasks. [7]
The mechanism of Continuous Delivery is a crucial enabler of a microservices architecture, given that the amount of deliverables increases. In addition, implementing the DevOps practice of Continuous Monitoring (CM) can help in measuring the performance of the system and in detecting any unexpected behaviour. [7]

2.11.3 Migrating to microservices

In their paper, Balalaie et al. [7] migrated Backtory, a system of backend services for mobile developers, to microservices in order to enable DevOps. The migration was done incrementally as architectural changes, minimizing the impact on the end users. The main requirements of a CI pipeline were fulfilled by choosing Jenkins as the CI server, an internally-hosted GitLab as the SCM system and Artifactory as the artifact repository manager. Virtualizing microservices is not efficient because there may be many instances of the same service executing in parallel, which results in excessive use of computational resources. In this case, utilizing containers is a better option because it allows application deployment to any environment that supports containerization. This also makes separate environment specification unnecessary, since each container carries its own dependencies. Services can be containerized, for example, with Docker, which can be used in conjunction with Docker Registry to enable the creation of a pipeline. Figure 2.13 depicts an example migration of a monolithic system to a microservices-based CI pipeline where independent deployment of individual services is possible. Using a pipeline like this also allows every service to be tested against its own suite of tests, which further enables the tests to be implemented according to the use cases of each service. This type of testing facilitates the formation of more compact teams and, furthermore, allows for practicing DevOps. Instead of deploying the independent services of Figure 2.13 to a single server, clusterization can be implemented by using, for example, Kubernetes. [7]

Figure 2.13: An example of a migration from a monolithic pipeline to delivering microservices independently [7]

Monitoring performance and behaviour in the context of microservices can be done by dedicating an independent monitoring system to each service. The performance of the whole system can be calculated from the per-service data with a statistical model that uses data gathered, for example, from everyday situations. Identifying which services influence end-to-end performance the most helps in making changes to the system architecture. [7]
Organizing teams in accordance with the depiction in Figure 2.3 is beneficial with respect to microservices because each of the cross-functional teams of DevOps can focus on a single service, which results in more frequent feature deliveries. [7]

2.11.4 Challenges with microservices

In their paper, Balalaie et al. [7] anecdotally present challenges that might arise when migrating to microservices. One challenge is run-time testing in a development environment: in addition to the service under test, all the dependencies of that service must also be deployed. Using a tool like docker-compose to describe the deployment and automatically fetch all dependencies is one way to solve this, as sketched after this subsection. [7]
In microservices, all services expose their behaviour and communicate using contracts. These contracts are critical, and making changes to them might cause the whole system to fail. [7]
Another challenge with microservices is that the architecture requires knowledgeable developers. The concepts of all the supporting services of this distributed architecture need to be familiar to the developers implementing features for the system. [7]
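As an illustration of the docker-compose approach mentioned above for the first challenge, a hypothetical compose file could bring up the service under test together with its assumed dependencies in one command; all service and image names below are placeholders.

```yaml
# Hypothetical docker-compose.yml: one command deploys the service under
# test together with the dependencies it needs at run time.
version: "3"
services:
  service-under-test:
    build: .                # the microservice being developed and tested
    ports:
      - "8080:8080"
    depends_on:
      - database
      - partner-service
  database:
    image: mongo:4          # assumed backing store
  partner-service:
    image: registry.example.com/partner:latest   # assumed collaborator
```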

2.11.5 Related tools

One of the main enablers of a microservices architecture is virtualization. Both containers and virtual machines can be utilized in implementing it. [34]
Virtual machines allow the deployment of multiple independent guest operating systems on top of a host operating system running on a physical server. The installation of multiple OSes is enabled by a hypervisor. Each microservice of a microservices-based system can be deployed on top of a virtual machine guest OS instance. [34]
Containers, on the other hand, create a runtime environment that includes all the necessary libraries, binaries and other dependencies required to run a specific application. Containers implement packaging that decouples the application from the infrastructure so that a single physical machine can host multiple containers. Docker is one of the main providers of containerization software. In their case, applications are packaged, built and deployed in an isolated Linux environment. [34]
In comparison with virtual machines, containers use less space, start up faster, distribute resources better and involve less redundancy. [34]

2.12 Chaos engineering

Many of the software development practices covered in this section aim to ensure and prove the validity of a system before it is deployed to production. However, the validity of, for example, a complex distributed web-based service can be difficult to confirm prior to deployment. Many businesses in the tech domain are verifying the reliability of their software by experimenting on it instead of trying to test it comprehensively [35]. This practice was originally named chaos engineering by the engineers of Netflix [35].

2.12.1 Origins of chaos engineering

In line with the premises of DevOps, the developers of a service like Netflix need to continuously improve or make changes to their system. This might mean making hundreds of new commits a day while at the same time having to keep their service available. Netflix has utilized an internally-available tool called Chaos Monkey to test the robustness of their service. Chaos Monkey targets the VMs that run production software and terminates some of them at random. This creates an incentive for the developers to make their software more robust against failing VM instances. Chaos Monkey is intended to be used only during the daytime so that the engineers can respond to failures caused by it as soon as possible. [35]
Using Chaos Monkey turned out to be beneficial to Netflix: all of their services are now implemented to be resilient against failing VM instances. However, the concept of chaos engineering also consists of subtler activities than just breaking some components of a system in production. The main goal is to increase confidence in a distributed system's ability to handle volatile situations. These situations could range, for example, from a hardware failure, to a sudden upswing in service requests, to an incorrectly formatted configuration parameter. [35]
From a system-oriented point of view, developers who practice chaos engineering should first see their distributed set of services as a single system. This system is then experimented on with inputs that mimic real-world events, which gives the developers insight into the behaviour of their software. Traditionally, the functional specification of a software-based system describes which inputs should result in which outputs. Complex distributed systems can be hard and impractical to model upfront following this methodology, which leads to the approach of following the steady-state behaviour of a system. A goal of chaos engineering is to monitor whether a system seems to be functioning correctly and whether there are any reports of unusual behaviour from the users of the service. This is in contrast with the traditional approach of only comparing the implementation of a system to its specification. [35]
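Chaos Monkey is Netflix-internal tooling, so the sketch below is only a toy illustration of the idea; the instance listing and termination calls are hypothetical stand-ins for real infrastructure APIs.

```python
# A toy sketch of the Chaos Monkey idea: during working hours, pick one
# production instance at random and terminate it. list_instances() and
# terminate() are hypothetical stand-ins for real infrastructure calls.
import random
from datetime import datetime

def list_instances():
    return ["vm-001", "vm-002", "vm-003"]   # placeholder inventory

def terminate(instance_id):
    print(f"terminating {instance_id}")     # placeholder for a real API call

def unleash_monkey():
    now = datetime.now()
    # Only run on weekday daytime so engineers can respond to the failure.
    if 9 <= now.hour < 17 and now.weekday() < 5:
        terminate(random.choice(list_instances()))

if __name__ == "__main__":
    unleash_monkey()
```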

2.12.2 Chaos engineering principles

Practicing chaos engineering starts from defining the metrics that characterize the steady-state behaviour of a system. In the case of Netflix, the software engineers monitor a metric called SPS that indicates the number of streams started at any given time. Fluctuations of this metric can indicate whether the service is available, given that the engineers have an intuition of what the SPS metric looks like in a normal situation. [35]
The next step is using inputs to mimic special situations that could happen in the real world. These include all the undesirable cases where, for example, a hardware failure occurs or a part of the network suddenly gets congested. These experiments should then be executed automatically and continuously in production to ensure the long-term resilience of the system. The use of a production environment for experimentation stems from the premise of working with complex distributed systems: it has become infeasible to replicate a modern system of this type in a separate testing environment. [35]
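As an illustration of steady-state monitoring, the following sketch flags measurements that deviate strongly from a rolling baseline of an SPS-like metric; the window size and threshold are arbitrary example values, not values reported in [35].

```python
# Toy steady-state check: flag a metric value that deviates too far from
# the recent average. Window size and threshold are example values only.
from collections import deque

def steady_state_monitor(samples, window=10, threshold=0.3):
    history = deque(maxlen=window)
    for value in samples:
        if len(history) == window:
            baseline = sum(history) / window
            if abs(value - baseline) > threshold * baseline:
                yield value, baseline   # deviation from the steady state
        history.append(value)

# Example: a sudden drop in "streams started per second" is flagged.
sps = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101, 40]
for value, baseline in steady_state_monitor(sps):
    print(f"anomaly: {value} vs baseline {baseline:.1f}")
```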

Chapter 3

Environment

This chapter describes the environment in which the web service implementation of this thesis was conducted. It takes into account the current technical possibilities and the established software development practices within the domain of Nokia. The scope of this chapter is the environment used to support the practices of CI/CD and DevOps.

3.1 Software management

This section gives an overview of the software management tools, applications and processes that are used in CI/CD and are relevant to the implementation made in this thesis.

3.1.1 Source code management

For source code management, the development environment used in this thesis includes an internal GitLab service that is based on the git version control system. This service includes all the software systems and libraries that are developed with and used in CI/CD. All commits made to GitLab are automatically tested with repository-specific tests.

3.1.2 Build and release management

Releasing software is initiated mainly by utilizing a Makefile script. This script file includes commands for creating Python virtual environments in which to build and test the application locally. It also implements commands for releasing, testing and deploying the software in question. When delivering or deploying software, the newest code commit is tagged with a tagging command.

This tag is then pushed automatically to GitLab, which handles the next steps of the release or deployment process using a repository-specific gitlab-ci.yml scripting file. This CI/CD script triggers the unit tests one more time before executing the steps of software delivery.
In the case of building web software, docker images are used for containerizing the application. An artifactory repository manager is then utilized for delivering the application and running it on a cloud server instance. There is also a possibility to utilize a Jenkins CI server instance to pull images from the artifactory and execute pipelines that are defined in the SCM system.

3.2 Software development processes

Figure 3.1 shows the intended way of writing new code in the CI environment.

Figure 3.1: The approach to writing new code in the CI environment

The process of using GitLab consists of creating branches for new features and merging them when the implementation is complete. A WIP (Work In Progress) status is used for merge requests that are not yet ready to be merged. The code changes of a merge request are also reviewed by another engineer in the team, who then either accepts the request or makes comments on the code.

3.3 The implementation environment

The software management and development approaches described in this chapter do not put constraints on the environment that can be used for implementing the application itself. The application implemented for the research of this thesis is intended to be a web-based service for displaying and visualizing data. React has been chosen as the framework of the application, and the code is written in a Linux virtual machine environment using Windows as the host operating system. The application uses a populated MongoDB database instance.

3.4 The web service

The goal of the web service development case of this thesis was to implement a DevOps dashboard application. This dashboard aims to provide a visually appealing overview of the software testing activities carried out in the CI/CD environment inside Nokia. The front page of the application displays information about the testing environments and their latest test results. Following a microservices-like architecture, APIs to other web services in the internal network are used to fetch this information. A Jenkins CI server instance and a Python database interface were accessed in this case. The Python Flask1 framework provided the back-end API functionalities that were used in extending an existing web service for database data retrieval. During the writing of this thesis, the existing Flask web application also displayed UI views that had some overlap with the views implemented for the DevOps dashboard service. For the front page table, the API serves the data related to all the testing environments that have been used within the last thirty days. Older environments are filtered out in the back-end. The API also processes the data so that the only responsibility of the front-end application is to display it without modifying the JSON response object.
Open source modules were used in front-end development, and React2 provided the main framework for the web-based service. Tables and other data containers were imported from the Material-UI3 framework. Some navigation properties were also needed in the web service to display software-specific or environment-specific test data pages. React router4 was imported to fulfil this requirement, and a navigation bar component from the React Bootstrap5 front-end framework was used to contain the links. Some React Bootstrap icons were also added to the project to better visualize the failed and passed statuses of individual test cases. The service also uses status icons unified with the Jenkins CI server to display the latest state of the testing environments. Figure 3.2 shows all the main components used in the implementation.

1 https://flask.palletsprojects.com
2 https://reactjs.org/
3 https://material-ui.com/
4 https://reactrouter.com/

Figure 3.2: Application components and interfaces

The end users of the web application are the people responsible for the testing activities in the DevOps environment.

5 https://react-bootstrap.github.io/

Chapter 4

Methods

This chapter states the research methodologies used in this thesis and evaluates their validity in the context of software development. The main guidelines for the research done in this thesis are extracted from case study methodology.

4.1 Case studies in software engineering

According to Runeson and Höst [36], the case study is an appropriate methodology for conducting research in the context of software engineering. This is because case studies examine modern phenomena in their natural domain, and these phenomena would often be difficult to study in isolation. The case study can be categorized as an observational method closely related to methodologies such as field study and project monitoring. These types of studies do not provide the same output as controlled experiments do, but they can result in a better understanding of the research object. The research objects specific to software engineering differ from those of other fields of study in that they are parties developing software systems instead of using them. Their work is also more often organized in projects instead of functions. [36]

4.2 Case study research methodology

4.2.1 Characteristics

Originally, case studies were conducted for exploratory purposes, which means forming an understanding of the subject and also giving new ideas as a basis for further research. If achieving a general depiction is not that important, case study methodology can also be utilized in describing a situation or a phenomenon.

In software engineering, case studies commonly aim to find improvements to a certain facet of the studied subject. Software engineering can also be considered a multidisciplinary field that includes the development, operations and maintenance of systems, as well as the political and social environments formed by individuals and organisations. This makes software engineering an appropriate domain for conducting case studies. [36]

4.2.2 Research process

Case study methodology includes five main steps to follow [36]:

1. Design the objectives of the study.

2. Define and describe how data is collected.

3. Study the case and extract the data.

4. Analyse the data.

5. Compile a report.

Designing a case study should include stating the goal of the study, defining the studied case, establishing a theoretical background as a reference frame, defining the research questions and describing how and where to collect the data needed in the study. [36]

4.3 The case study of this thesis

4.3.1 Research objectives

Following the guidelines of the previous sections in this chapter, this thesis conducts a case study on DevOps and other software development processes in the business environment of Nokia. The main research objective is to describe how the DevOps and CI environment implements the main functionalities of DevOps and to provide ideas for future improvements and discussion. This is done by observing the environment's ability to support the continuous development and deployment of the web-based application described in Chapter 5. The observations made during this implementation are compared with the current state of DevOps as presented in the literature and studies of Chapter 2. The time it takes to deliver the application from development to production via the CI/CD pipeline is also observed.

4.3.2 Collecting data

Runeson and Höst [36] describe observations as a way of collecting data in a case study. This thesis collects its research data by following a software implementation project and in this way interacting with the DevOps platform and the people involved. The results of these interactions are collected, and the data is used in describing and investigating the tools and approaches of the DevOps environment.
A survey amongst the end users is also conducted to evaluate how successful the web service implementation was. This is carried out by using an online form.

Chapter 5

Implementation

This chapter covers the software implementation that was done in order to aid the research and evaluation parts of this thesis. Chapter 3 describes the implementation environment and the possibilities it offers.

5.1 Use cases

The implementation process started with defining the most important use cases for the web application in question. These use cases were established by having an online discussion with the end-users of the software. The software system implemented in this thesis is a web-based service that aims to visualize and display data from a database in a dashboard-like view.
The first use case was defined as a front page that shows a visualisation of the main environments that are already stored and described in the database. The design of this page should be visually appealing, and the page should also use an API of the utilized Jenkins CI server to fetch the status information of each environment. The second use case described an "analytics page" that intends to offer visual comparisons between different data sets. The final use case that emerged from the discussion described an environment-specific page for detailed information and visualizations.

5.2 Initial steps

The process of writing new software usually starts with creating a new repository in the SCM system, in this case GitLab. The contents of this repository can be initialized with respect to the type of implementation project in question. The development environment provided no standardized templates or configuration files for a web service, so the project was started by initializing a React application in an empty repository.

Graphical UI elements used by the service were also installed and tested by creating simple table and icon elements.

5.3 Build and release management

After creating the initial application frame, it became necessary to facilitate the building and releasing of the service. Again, the environment included no standardized way of implementing these components, which led to the practice of referring to other similar repositories in the environment for information. Other web services were built and released utilizing Dockerfiles and makefile scripts in conjunction with an artifactory repository manager, so these were added to the new GitLab repository. The makefile code of other implementations was written in a general manner with repository-specific variables, which made it reusable in the case of this new application. The containerization of the application had to be implemented without referring to the DevOps environment for assistance, because it included no previous implementations of JavaScript-based applications. At this stage, the makefile was the main entry point for building and releasing the application. It could be used for turning the source code into a docker image and pushing it to an artifactory repository. Version tagging was also implemented in this script.
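As an illustrative sketch of such a Makefile entry point, the hypothetical targets below turn the source code into a docker image, push it to a repository and tag a release; the image name, registry URL and version variable are placeholders rather than the actual values used in the environment.

```makefile
# Hypothetical Makefile sketch; values are placeholders. Recipe lines
# must be indented with tabs, as usual for make.
IMAGE   := registry.example.com/devops-dashboard
VERSION := 0.1.0

build:            ## turn the source code into a docker image
	docker build -t $(IMAGE):$(VERSION) .

release: build    ## push the image to the artifactory repository
	docker push $(IMAGE):$(VERSION)

tag:              ## tag the newest commit to trigger the CI/CD pipeline
	git tag -a v$(VERSION) -m "release v$(VERSION)"
	git push origin v$(VERSION)
```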

5.4 Building a CI/CD pipeline

Most implementations in the DevOps environment inspected by this thesis use GitLab CI/CD as the main approach to building a CI/CD pipeline. A gitlab-ci.yml file was added to the web application repository to define the CI and CD process. This process was not standardized, so other implementations in the environment were used as a reference. The CI/CD pipeline starts by building the application with yarn and linting the code with eslint. An initial unit test was also implemented to be executed after these stages. This unit test checked whether a main element of the application was rendered correctly. The CI stages of building, linting and testing are triggered every time a new commit is made to any branch in the SCM system.
Which stages of CI/CD are executed can also be decided with respect to different events in the SCM. In addition to building, linting and testing, the release and deploy stages are fired every time a commit is tagged on the master branch. The release stage, in this case, builds a docker container image of the application by utilizing a tool called kaniko. This tool is needed because the GitLab runners that execute the stages of CI/CD, and thus use docker commands, do not run with elevated privileges. The container image is then released using an internal artifactory repository manager. This release is tagged automatically according to the commit tag that triggered the pipeline. For the deployment of the web service, a dedicated cloud server instance was initialized in the internal network. The deploy script in GitLab then connects to this instance using ssh. On every deployment event, this script proceeds to remove previous image versions from the server, pull the newest version from the artifactory repository, stop and remove the currently running container and start a container with the newest version of the service.
With the CI pipeline implementation of this section, implementing a new feature ends, from the developer's point of view, with tagging a commit in GitLab. All the steps after that are handled automatically by the pipeline. Figure 5.1 illustrates the overall CI/CD process of the GitLab platform.

Figure 5.1: CI/CD process in GitLab
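A gitlab-ci.yml along these lines could, as a rough sketch, be structured as follows; the stage contents, image names, variables and the redeploy.sh helper are illustrative assumptions, not the actual configuration of the environment.

```yaml
# A hypothetical gitlab-ci.yml sketch; image names, variables and the
# redeploy.sh helper are placeholders, not the environment's actual setup.
stages: [build, lint, test, release, deploy]

build:
  stage: build
  image: node:14
  script:
    - yarn install
    - yarn build

lint:
  stage: lint
  image: node:14
  script:
    - yarn lint            # runs eslint via a package.json script

test:
  stage: test
  image: node:14
  script:
    - yarn test            # the initial unit test

release:
  stage: release
  only: [tags]             # fired when a commit is tagged
  image:
    name: gcr.io/kaniko-project/executor:debug   # builds without docker
    entrypoint: [""]
  script:
    - /kaniko/executor --context "$CI_PROJECT_DIR"
      --destination "$ARTIFACTORY_URL/dashboard:$CI_COMMIT_TAG"

deploy:
  stage: deploy
  only: [tags]
  script:
    # Replace the running container on the deployment server over ssh;
    # redeploy.sh is a hypothetical helper script on that server.
    - ssh deploy@"$DEPLOY_SERVER" "./redeploy.sh $CI_COMMIT_TAG"
```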

5.4.1 Merge request deployments

During the implementation of new features to the web service, a need emerged to review these features not only by reading code but also in practice. This led to the deployment of a new virtual machine server instance in the internal network. This server could be used for deploying the code of branches other than the master branch in GitLab. The end-users could then evaluate planned feature implementations in action without needing to make a production-level deployment. The CI/CD pipeline functionalities of GitLab were used to automate the deployment of the API web service whenever planned changes were committed to its merge request.
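As a sketch, such a review deployment could be expressed as an additional GitLab CI/CD job that runs only for merge request pipelines; the job name, server variable and helper script are assumptions for illustration.

```yaml
# Hypothetical review job: deploys only for merge request pipelines so
# end-users can evaluate planned changes before a production release.
deploy-review:
  stage: deploy
  only: [merge_requests]
  script:
    - ssh deploy@"$REVIEW_SERVER" "./redeploy.sh $CI_COMMIT_SHORT_SHA"
```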

5.5 Implementing features

The use cases of Section 5.1 guided the development of the web service implemented in this thesis. At the start of every new feature implementation, a new branch is created. This way, when the feature is considered implemented, a code review can be issued before merging the code to the master branch and deploying the service to production.

5.5.1 Initial application

The development process started with implementing an element that displays the title of the front page and the latest status entry from the database. For the initial application frame, a simple table that could be populated with data from the database was also implemented. These served as starting points for the front page described in the first use case. For fetching the data, an API was implemented in another web service that acted as a database interface. The development of the API was done incrementally by first creating a simple endpoint that handled a minimal amount of data for the front page title element. This API decoupled the data processing from the UI of the application, and it was implemented by extending the features of an already existing Flask Python application that was deployed in the internal network. This API re-implemented features already present in the service, which opened up a possibility to remove the old implementations in case they are deemed redundant. Before deploying the updated Flask application, its compatibility with the main service was tested on localhost by running both services. After successfully transferring data with the first API endpoint, a second, more complex endpoint for populating the front page table of the React application was implemented and tested.
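As an illustrative sketch of such a database-interface endpoint, the hypothetical Flask handler below serves front page table data while filtering out environments not used within the last thirty days, as described in Chapter 3; the collection and field names are assumptions made for the example.

```python
# Hypothetical Flask endpoint sketch: serves front page table data and
# filters out environments unused for thirty days. The database name and
# the collection and field names are assumed for illustration.
from datetime import datetime, timedelta
from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient("mongodb://localhost:27017")["dashboard"]

@app.route("/api/environments")
def environments():
    # Older environments are filtered out here in the back-end, so the
    # front-end only has to display the JSON response unmodified.
    cutoff = datetime.utcnow() - timedelta(days=30)
    rows = db.environments.find({"last_used": {"$gte": cutoff}})
    return jsonify([{"name": r["name"],
                     "last_used": r["last_used"].isoformat()} for r in rows])
```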

5.5.2 First deployment

The first deployment was made as soon as a functional view displaying initial front page information with the title element and the main data table was implemented. The internal end users could this way interact with the service and voice requests for revisions or improvements. After the first deployment, users gave feedback on the design of the UI. The application had some decorative background images and logos, which, according to the users, took too much space in relation to the informative content of the front page. Additional information columns in the initial UI table were also requested, which led to the decision of making font sizes substantially smaller.
Figure 5.2 shows the entry page user interface of the deployed web service. The table element contained over thirty rows when the whole page was displayed.

Figure 5.2: UI of the web application. Actual data is concealed for confidentiality.

5.5.3 Incremental improvements

The subsequent deployments of the web service focused on making small incremental improvements and feature implementations by taking the feedback of the end users into account. First, font sizes were decreased and some of the decorative images removed to accommodate the request of fitting more data onto the front page. Adding more information to the UI table was done by modifying the existing database interface API and adding more table columns to the UI.
After the first deployment, the second major feature implementation included fetching status data from a Jenkins CI server and combining it with the environment information shown on the front page of the application. This was implemented by extending the API of the Flask application. The API endpoint responsible for the front page table data of the main application was extended to query the API of the Jenkins CI server for additional environment status information. The table of the main web service was updated to show this information in the form of clickable status icons that include links to the respective Jenkins result pages. These icons are shown in Figure 5.2 as the columns "latest baseline" and "latest testline". The implementation of new features proceeds continuously and incrementally after the writing of this thesis.
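As a sketch of this extension, the hypothetical helper below queries the JSON API of a Jenkins job's latest build; the server URL is a placeholder, and authentication handling is omitted.

```python
# Hypothetical sketch: fetch the latest build status of a Jenkins job and
# attach it to the environment data served to the front page table.
import requests

JENKINS_URL = "https://jenkins.example.com"   # placeholder server address

def latest_status(job_name):
    """Fetch the result of a Jenkins job's latest build via its JSON API."""
    url = f"{JENKINS_URL}/job/{job_name}/lastBuild/api/json"
    build = requests.get(url, timeout=10).json()
    # 'result' is e.g. SUCCESS or FAILURE; 'url' links to the result page
    # that the clickable front page status icons can point to.
    return {"status": build.get("result"), "link": build.get("url")}
```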

5.5.4 Result pages

After the initial deployments of the application, a need to display the results of test runs and individual test cases became apparent. React router was used to allow navigation to the pages containing this data. Again, a simple extension to the API acting as a database interface was sufficient in providing the data to the front-end application.
First, indexing pages were created to summarize the test runs in a table with respect to two different parameter choices: the software and the environment used in testing. Figure 5.3 shows a screen capture of the user interface. The table component was chosen to be similar to the table on the front page. Clicking a row opens up a list of test runs summarized by the parameter in question.

Figure 5.3: Test result summary pages of the web service

Secondly, test run detail pages were also needed. These pages open up a test run and display the results of individual test cases using icons from the React Bootstrap library. Log files and links to the Jenkins CI server are also shown on the page, which is illustrated in the screen capture of Figure 5.4.

Figure 5.4: Test result detail pages of the web service

Chapter 6

Evaluation

This chapter evaluates the capabilities of the DevOps and CI/CD environment described in Chapter 3 and used for the implementation of Chapter 5.

6.1 Build and release configurations

As discussed in Chapter 3, the CI/CD environment mainly uses Makefile scripts for building and releasing software. In the implementation of Chapter 5, other repositories were used as references when constructing this Makefile, which often implements the same commands across different applications. In the case of similar software applications, the gitlab-ci.yml configuration file is also structured in a similar way. Standardizing the formatting and location of these types of configuration files was part of the DevOps transformation covered by Callanan and Spillane [17]. The scripts and configurations of the DevOps platform within Nokia were not standardized in a similar manner.
Most of the applications and services implemented in the DevOps environment covered by this thesis are also intended to be containerized, which leads to implementing the containers using Dockerfiles. Currently, these files are also not standardized, so starting a new project requires the developer to refer to other projects for artifactory repository URLs and other parameters.

6.2 Writing software code

Figure 3.1 presented the intended way of writing software code in the DevOps environment within Nokia. In this practice, all the unit tests that are required to test a new feature must be implemented before writing any production code.

This differs from the TDD approach described by Beck [20], in which the developer alternates between writing a unit test and writing production code to make the test pass. The implementation of Chapter 5 used the intended use cases of the application as a starting point for the development process. These use cases only provided guidelines for the high-level features of the application, which made it more challenging to write unit tests with them as the only basis. More specific and detailed use cases would be more easily converted to unit tests, allowing for a more TDD-based development approach.
After successfully implementing a new feature to an application, the developer issues a merge request in GitLab and requests a code review from a peer. These code reviews helped in identifying errors, for example, in the makefile and Dockerfile implementations during the development process of Chapter 5. Feedback received from the end-users of the application was also helpful in continuously delivering new features and improvements. After the merge request is accepted, the new feature implementation is merged to the master branch of the project in the SCM. In order to release and deploy the new version of the application, the latest commit still needs to be tagged utilizing the Makefile script, which triggers the CI/CD system of GitLab to execute the steps of release and deploy. Versioning the application by running a script after making the merge adds an additional manual step to the release process.

6.3 CI/CD tools

On average, the time it took to deploy the web application from development to production using the CI/CD pipeline of GitLab was 10 minutes 38 seconds. During the deployment, the pipeline implementation described in Chapter 5 forces the removal of previous docker images and containers on the deployment server. This is done prior to verifying the successful deployment of the new application version. This type of practice is risky because it leaves no backup versions to deploy in case the latest deployment fails.
In addition to the CI/CD system of GitLab, the DevOps environment of this thesis also includes a Jenkins CI server running in production. Hence, the environment utilizes two systems for creating and executing CI/CD pipelines. Jenkins pipelines could also be used in conjunction with the GitLab SCM by specifying a webhook URL that initiates the trigger point, which would remove the need for implementing pipelines using GitLab CI/CD. Figure 6.1 shows a comparison between these two tools. [34]

Figure 6.1: Comparing GitLab CI/CD with Jenkins. [34]

Figure 6.2 shows a performance comparison between GitLab CI runners and Jenkins in terms of the deployment time for 10 sequential commits [34]. Currently, only a shared GitLab runner is used in the DevOps environment within Nokia. According to the study by Singh et al. [34], Jenkins and project-specific GitLab runners outperform shared GitLab runners by a factor of 20 in the time it takes to deploy a build.
In their research paper, Singh et al. [34] state that the plugin support of Jenkins makes it easy to configure an automated CI/CD pipeline for a software project. Thus, migrating projects from GitLab CI/CD to Jenkins could provide one possible answer to the standardization issue raised in the first section of this chapter. However, as software applications grow larger in size, more Jenkins plugins are needed, which makes it harder to manage Jenkins in general [34]. On the other hand, GitLab CI/CD uses only a single YML file for configuring the pipeline, which makes the pipeline easy to outline and modify [34]. Both GitLab and Jenkins provide good CI capabilities, and choosing between them depends on the project in question. Smaller and simpler microservice-based applications might be better served by a GitLab CI/CD pipeline, while the projects of a large software company should probably use Jenkins. [34]

Figure 6.2: Comparing the performance of GitLab CI/CD with Jenkins. [34]

6.4 Web service implementation

In order to evaluate the web service that was implemented during this thesis, an anonymous form-based survey was conducted amongst the end-users. It had three questions, to which the respondent could give a rating on a scale from one to ten. In addition, a free-form feedback field was provided in the survey. The survey had a participation rate of 29.2%. The average scores with respect to the survey questions are presented in Table 6.1 below:

Table 6.1: Survey scores

Question                                                   Score
How would you assess the usability of the service?         8.14
How would you assess the visual appeal of the service?     8.14
How would you assess the usefulness of the service?        8.86

The results show that the survey respondents considered the web service useful, usable and visually appealing. However, at the time of writing this thesis, the web service is still in its development phase, which was also noted in the open feedback section. Further continuous development and adjustments are needed in the future. The feedback section also stated that the current implementation of the application provides a very good and promising frame for future development.

6.5 Summary

In summary, the DevOps environment of Nokia described in Chapter 3 successfully supported the continuous development of the web service described in Chapter 5. Feedback loops with the end users were also successful in delivering feature and improvement requests after all the deployments. The main principles of DevOps, including build and release configurations, practices for writing code, CI and CD, were implemented in the environment.
In the future, the responsibility of the developer in extracting the use cases and the upfront design of an application could be discussed. Currently, it is also mainly the developer's responsibility to design and configure the CI/CD pipeline of an application; a more standardized approach for this could be discussed. The DevOps environment also utilizes two different tools, Jenkins and GitLab CI/CD, for pipeline implementations. Converging to a single tool can be a subject for future evaluation.

Chapter 7

Conclusion

As the world becomes increasingly software-driven, the importance of good software development practices increases. DevOps describes a software development and delivery process where the boundary between development and operations experts and teams has been faded out. This premise aims to facilitate an environment of frequent software deliveries that increases the frequency of end-user feedback, which in turn enables continuously delivering new software functionality. The practices of Continuous Integration, Continuous Delivery and Continuous Deployment aim to create an automated pipeline that delivers new features and fixes to production as quickly and robustly as possible. These practices are often seen as part of implementing a culture of DevOps.
Concepts such as code reviews, automated testing and the microservices architecture described in Chapter 2 of this thesis can also be seen as closely related to the processes of DevOps. Other practices such as chaos engineering, web service choreographies and Test-Driven Development were also covered in the literature review. Based on this literature review, this thesis conducted a case study on the DevOps environment implemented in the business context of Nokia. The quality of this environment was assessed by implementing a web-based application and observing the capabilities of the development and delivery platform.
The DevOps platform used in the implementation covered by this thesis was successful in delivering software on a continuous basis. On average, executing the CI/CD pipeline taking the application from development to production took 10 minutes 38 seconds. The main features of DevOps, including CI/CD functionalities, practices for writing code and build and release configurations, were included in the environment. Feedback loops with the end users were also successfully facilitated, and they helped with new feature deliveries. For the future, the responsibilities of the developer in configuring and implementing CI/CD pipelines and establishing the upfront design of an application can be discussed.

The DevOps environment also uses two different tools, Jenkins and GitLab CI/CD, for pipeline implementations. Migrating to a single tool could be evaluated in the future.

Bibliography

[1] P. Agrawal and N. Rawat. Devops, a new approach to cloud development testing. In 2019 International Conference on Issues and Challenges in Intelligent Computing Techniques (ICICT), volume 1, pages 1–4, 2019.

[2] N. Govil, M. Saurakhia, P. Agnihotri, S. Shukla, and S. Agarwal. Analyzing the behaviour of applying agile methodologies culture in e-commerce web application. In 2020 4th International Conference on Trends in Electronics and Informatics (ICOEI)(48184), pages 899–902, 2020.

[3] M. Shahin, M. Ali Babar, and L. Zhu. Continuous integration, delivery and deployment: A systematic review on approaches, tools, challenges and practices. IEEE Access, 5:3909–3943, 2017.

[4] C. Ebert, G. Gallardo, J. Hernantes, and N. Serrano. DevOps. IEEE Software, 33(3):94–100, 2016.

[5] L. Bass, I. Weber, and L. Zhu. DevOps: A Software Architect's Perspective. Addison-Wesley Professional, 1st edition, 2015.

[6] M. Hüttermann. DevOps for Developers. Apress, 1st edition, 2012.

[7] A. Balalaie, A. Heydarnoori, and P. Jamshidi. Microservices architecture enables devops: Migration to a cloud-native architecture. IEEE Software, 33(3):42–52, 2016.

[8] S. Vadapalli. DevOps: Continuous Delivery, Integration, and Deployment with DevOps. Packt Publishing, 1st edition, 2018.

[9] B. Laster. Continuous Integration vs. Continuous Delivery vs. Continuous Deployment. O'Reilly Media, Inc., 1st edition, 2017.

[10] Jez Humble and Joanne Molesky. Why enterprises must adopt devops to enable continuous delivery. Cutter IT Journal, 24(8):6, 2011.


[11] L. Zhu, L. Bass, and G. Champlin-Scharff. Devops and its practices. IEEE Software, 33(3):32–34, 2016.

[12] Pilar Rodríguez, Alireza Haghighatkhah, Lucy Ellen Lwakatare, Susanna Teppola, Tanja Suomalainen, Juho Eskeli, Teemu Karvonen, Pasi Kuvaja, June Verner, and Markku Oivo. Continuous deployment of software intensive products and services: A systematic mapping study. Journal of Systems and Software, 123, 2016.

[13] G. d. Pereira Moreira, R. P. Mellado, D. A. Montini, L. A. V. Dias, and A. Marques da Cunha. Software product measurement and analysis in a continuous integration environment. In 2010 Seventh International Conference on Information Technology: New Generations, pages 1177–1182, 2010.

[14] J. Pesola, H. Tanner, J. Eskeli, P. Parviainen, and D. Bendas. Integrating early V&V support to a GSE tool integration platform. In 2011 IEEE Sixth International Conference on Global Software Engineering Workshop, pages 95–101, 2011.

[15] J. H. Hill, D. C. Schmidt, A. A. Porter, and J. M. Slaby. CiCUTS: Combining system execution modeling tools with continuous integration environments. In 15th Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems (ECBS 2008), pages 66–75, 2008.

[16] J. Gmeiner, R. Ramler, and J. Haslinger. Automated testing in the continuous delivery pipeline: A case study of an online company. In 2015 IEEE Eighth International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pages 1–6, 2015.

[17] M. Callanan and A. Spillane. Devops: Making it easy to do the right thing. IEEE Software, 33(3):53–59, 2016.

[18] I. Karac and B. Turhan. What do we (really) know about test-driven development? IEEE Software, 35(4):81–85, 2018.

[19] T. Dogsa and D. Batic. The effectiveness of test-driven development: an industrial case study. Software Quality Journal, 19(4):643–661, 2011.

[20] K. Beck. Test Driven Development: By Example. Addison-Wesley Professional, 1st edition, 2002.

[21] F. Besson, P. Moura, F. Kon, and D. Milojicic. Bringing test-driven development to web service choreographies. Journal of Systems and Software, 99(6):135–154, 2015.

[22] Y. Rafique and V. B. Misic. The effects of test-driven development on external quality and productivity: A meta-analysis. IEEE Transactions on Software Engineering, 39(6):835–856, 2013.

[23] Wilson Bissi, Adolfo Gustavo Serra Seca Neto, and Maria Claudia Figueiredo Pereira Emer. The effects of test driven development on internal quality, external quality and productivity: A systematic review. Information and Software Technology, 74:45–54, 2016.

[24] Hussan Munir, Misagh Moayyed, and Kai Petersen. Considering rigor and relevance when evaluating test driven development: A systematic review. Information and Software Technology, 56(4):375–394, 2014.

[25] Burak Turhan, Lucas Layman, Madeline Diep, Forrest Shull, and Hakan Erdogmus. How Effective is Test Driven Development. 2010.

[26] F. Shull, G. Melnik, B. Turhan, L. Layman, M. Diep, and H. Erdogmus. What do we know about test-driven development? IEEE Software, 27(6):16–19, 2010.

[27] S. Kollanus. Test-driven development - still a promising approach? In 2010 Seventh International Conference on the Quality of Information and Communications Technology, pages 403–408, 2010.

[28] M. Siniaalto. Test driven development: empirical body of evidence. 2007.

[29] D. Fucci, H. Erdogmus, B. Turhan, M. Oivo, and N. Juristo. A dissection of the test-driven development process: Does it really matter to test-first or to test-last? IEEE Transactions on Software Engineering, 43(7):597–614, 2017.

[30] A. Cockburn. Elephant carpaccio. Blog post, 2008.

[31] U. Sakthivel, N. Singhal, and P. Raj. Restful web services composition performance evaluation with different databases. In 2017 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), pages 1–4, 2017.

[32] D. M. Rathod, M. S. Dahiya, and S. M. Parikh. Towards composition of restful web services. In 2015 6th International Conference on Computing, Communication and Networking Technologies (ICCCNT), pages 1–6, 2015.

[33] S. Malik and D. Kim. A comparison of restful vs. soap web services in actuator networks. In 2017 Ninth International Conference on Ubiquitous and Future Networks (ICUFN), pages 753–755, 2017.

[34] C. Singh, N. S. Gaba, M. Kaur, and B. Kaur. Comparison of different ci/cd tools integrated with cloud platform. In 2019 9th International Conference on Cloud Computing, Data Science Engineering (Confluence), pages 7–12, 2019.

[35] A. Basiri, N. Behnam, R. de Rooij, L. Hochstein, L. Kosewski, J. Reynolds, and C. Rosenthal. Chaos engineering. IEEE Software, 33(3):35–41, 2016.

[36] P. Runeson and M. Höst. Guidelines for conducting and reporting case study research in software engineering. Empirical Software Engineering, 14(2), 2009.