Application Security in Continuous Delivery

Fábio Freitas
Master’s Degree in Information Security
Department of Computer Science
2020

Supervisor: Prof. Dr. Eduardo R. B. Marques, Faculty of Sciences of the University of Porto

Co-supervisor: Eng. Pedro Borges, LOQR S.A.

All corrections determined by the jury, and only those, were made.

The President of the Jury,

Porto, / /

UNIVERSIDADE DO PORTO

MASTER’S THESIS

Application Security in Continuous Delivery

Author: Fábio FREITAS

Supervisor: Eduardo R. B. MARQUES

Co-supervisor: Pedro BORGES

A thesis submitted in fulfilment of the requirements for the degree of MSc in Information Security

at the

Faculdade de Ciências da Universidade do Porto

November 25, 2020

Acknowledgements

Firstly, I would like to thank my thesis supervisors, Prof. Dr. Eduardo R. B. Marques and Eng. Pedro Borges, whose expertise and guidance throughout the entire project proved invaluable.

Secondly, I would like to thank my co-workers at Euronext, especially my two mentors and good friends Duarte Monteiro and Ricardo Gonçalves, both alumni of this department and experts in this subject. I’m lucky to have worked alongside you two and to have learned so much from both of you.

Then, to all my friends, especially my colleagues in the Information Security Master’s Degree, André Cirne and Nuno Lopes, who taught me a lot over the past two years and helped me grow both as a student and, more recently, as a professional in the Information Security field.

And lastly, and most importantly, to my family - most of all my parents and siblings - who have been there for me throughout my academic journey and gave me the opportunity to pursue, and now work in, a field that I’m passionate about.


Abstract

In the last few years, software development has seen a shift in the gap between development and operations activities, with an increasing focus on automating the building, testing and deployment of applications in what is usually called the Continuous Integration/Continuous Delivery (CI/CD) process. In the real world, however, this process still embeds few security concerns.

This thesis studies and implements security checks on top of a standard software delivery pipeline, using a modular approach and considering a wide range of security checks of both a static and a dynamic nature. This framework is then instantiated for two applications written in two different programming languages, and the results are analyzed.

Keywords: Application Security, Software Delivery Automation, Security Automation, DevOps, DevSecOps


Resumo

In recent years, software development has undergone changes regarding the distance between development and operations activities, with an ever-increasing focus on automating the building, testing and deployment of applications, in the process called Continuous Integration/Continuous Delivery (CI/CD). However, in the real world, this process still embeds very few concerns with application security.

This thesis presents the study and implementation of security validations built on top of a standard software delivery pipeline, using a modular approach with a wide range of distinct security validations, of both a static and a dynamic nature. This framework is then instantiated on two applications written in two different languages, and the respective results are analyzed.

Keywords: Application Security, Security Automation, Continuous Integration, DevOps, DevSecOps


Contents

Acknowledgements iii

Abstract v

Resumo vii

Contents ix

List of Figures xi

List of Tables xiii

1 Introduction 1
  1.1 Problem statement ...... 1
  1.2 Contributions ...... 2
  1.3 Thesis structure ...... 2

2 State of the Art 3
  2.1 DevOps ...... 4
    2.1.1 Software Version Control ...... 5
    2.1.2 GitLab ...... 6
    2.1.3 Continuous Integration ...... 6
    2.1.4 Continuous Delivery ...... 6
  2.2 Containers ...... 7
    2.2.1 Docker ...... 8
    2.2.2 Container Security ...... 9
  2.3 Application Security ...... 10
    2.3.1 OWASP ...... 10
    2.3.2 Static Application Security Testing (SAST) ...... 11
    2.3.3 Source-Code Analysis ...... 11
    2.3.4 Secrets Scanning ...... 12
    2.3.5 Dependency Scanning ...... 13
    2.3.6 Dynamic Application Security Testing (DAST) ...... 13
  2.4 Vulnerability Management ...... 14
    2.4.1 DefectDojo ...... 15
  2.5 Integrating Security Checks in a CI/CD Pipeline ...... 15

3 Implementation 17


  3.1 Architecture ...... 17
  3.2 Setting up the environment ...... 20
    3.2.1 Software Versioning Control System - GitLab ...... 21
    3.2.2 Automation Server - Gitlab CI/CD + Runner ...... 21
  3.3 Implementing the Secure Pipeline ...... 24
    3.3.1 Baseline ...... 24
    3.3.2 Integrating Source-Code Analysis - Sonarqube ...... 28
    3.3.3 Integrating DAST - Zed Attack Proxy ...... 32
    3.3.4 Integrating Container Scanning - Clair ...... 36
    3.3.5 Integrating Secrets Scanning - Gitleaks ...... 38
    3.3.6 Integrating Dependency Checks - OWASP Dependency Checker ...... 41
    3.3.7 Integrating a results aggregator (custom script) - build_risk_calc.py ...... 44
    3.3.8 Integrating a Vulnerability Tracker - DefectDojo ...... 47

4 Results 55
  4.1 Instantiation 1 - Java Vulnerable Lab - Java ...... 55
    4.1.1 Vulnerabilities ...... 56
    4.1.2 Performance ...... 56
  4.2 Instantiation 2 - OWASP Juice Shop - JavaScript/NodeJS ...... 57
    4.2.1 Vulnerabilities ...... 58
    4.2.2 Performance ...... 58

5 Conclusion 61
  5.1 Concluding Remarks ...... 61
  5.2 Future Work ...... 61

Bibliography 63

List of Figures

2.1 DevOps Process Overview - as described by AWS ...... 4
2.2 Google Trends Query - "DevOps" - January 2010 to January 2020 ...... 5
2.3 Continuous Delivery Pipeline ...... 7
2.4 Architecture - Containers vs Virtual Machines ...... 8
2.5 Docker Architecture ...... 9

3.1 Complete Pipeline ...... 19
3.2 Prototype - System Architecture ...... 20
3.3 GitLab Runner Token ...... 23
3.4 GitLab Runner Test 1 ...... 24
3.5 Baseline for Software Delivery Pipeline ...... 25
3.6 SAST Check in the Pipeline - Flow ...... 28
3.7 Secure Pipeline with SAST ...... 31
3.8 DAST Check in the Pipeline - Flow ...... 33
3.9 Secure Pipeline with DAST ...... 35
3.10 Container Scanning in the Pipeline - Flow ...... 37
3.11 Secrets Scanning in the Pipeline - Flow ...... 40
3.12 Dependency Check in the Pipeline - Flow ...... 42
3.13 Results aggregator in the Pipeline - Flow ...... 45
3.14 build_risk_calc.py - HTML Dashboard ...... 46
3.15 Pipeline results submitted to DefectDojo Vulnerability Tracker - Flow ...... 48
3.16 Deduplication of issues at the Product Level ...... 49
3.17 DefectDojo - Main Product Dashboard ...... 52
3.18 DefectDojo - Engagement View ...... 52
3.19 DefectDojo - Issues View ...... 53


List of Tables

4.1 Issues Table - By Severity and Security Check ...... 56
4.2 Pipeline Performance - Times over 5 Executions ...... 57
4.3 Issues Table - By Severity and Security Check ...... 58
4.4 Pipeline Performance - Times over 5 Executions ...... 59


Listings

2.1 Gitleaks Rules TOML file example ...... 12
3.1 Gitlab CI/CD Runner Installation Commands ...... 22
3.2 Gitlab CI/CD Runner Installation Check ...... 22
3.3 Gitlab Runner Registration ...... 23
3.4 Test .gitlab-ci.yml file ...... 23
3.5 .gitlab-ci.yml - Baseline definition for Java Project ...... 25
3.6 .gitlab-ci.yml - Baseline definition for NodeJS Project ...... 26
3.7 sonar.service file in /etc/systemd/system/ ...... 29
3.8 sonar-project.properties ...... 30
3.9 code-analysis.yml ...... 30
3.10 SonarQube output result - report_sast.json ...... 32
3.11 connection_check.sh ...... 33
3.12 dynamic-analysis.yml ...... 34
3.13 ZAP output result - report_dast.json ...... 35
3.14 container-scan.yml ...... 37
3.15 Clair output result - report_container-scan.json ...... 38
3.16 Hardwired E-mails Regex Rule - .gitleaks.toml ...... 39
3.17 secrets-scan.yml ...... 40
3.18 Gitleaks Output result - report-secrets.json ...... 41
3.19 dependency-check.yml ...... 42
3.20 Owasp Dependency Check Output result - report_dependency-check.json ...... 43
3.21 vulnerability_tracker.yml ...... 50
3.22 dependency-check.yml tweaked to submit results to DefectDojo ...... 51


Chapter 1

Introduction

1.1 Problem statement

Nowadays, software development has seen a transformation when it comes to automating the delivery of applications - from packaging, compiling or building the application’s source-code into executable binaries, to deploying these binaries into live environments to be accessed by users.

These practices are usually referred to in software engineering as Continuous Integration/Continuous Delivery (CI/CD). They aim to tighten the gap between software development and operations by ensuring that the entire process of building, testing and deploying applications is done in an automated manner.

Usually this is achieved by implementing an automation pipeline. Leveraging CI/CD engines, the pipeline is triggered when a new commit is made to an application’s source-code repository, and a series of stages is then executed which, in the most standard implementations, goes through building the application, testing it, and deploying it. However, application security is often addressed only afterwards, with companies discovering security vulnerabilities through pentesting of the live applications or, worse, after they have been exploited by malicious actors.

To solve this, several tools on the market try to offer solutions to detect these problems early in the software development life-cycle, but these tools are often specific to a certain type of security check. In a situation where different types of automated security checks are desired, the results of the different tools will often be spread out, non-uniform and thus less actionable by security teams.

1.2 Contributions

In order to solve the problem described above, this thesis presents the implementation of a wide range of different security checks in a standard CI/CD pipeline. The main contributions are:

• The design and implementation of a CI/CD pipeline containing checks of both Static and Dynamic Analysis Security Testing types (SAST and DAST):

– SAST checks include Source-Code Analysis, Secrets Scanning, Dependency Analysis and Container Analysis;

– DAST checks include Automated Pentesting.

– The implementation uses a modular approach that ensures that each job is optional, reusable and independent of the others.

• The instantiation of the pipeline for two web applications in two different programming languages: Java and JavaScript.

• The analysis of the results of the solution’s instantiations.

1.3 Thesis structure

In Chapter 2, the state-of-the-art is presented. A brief description of all the technologies leveraged in the project is given, as well as an overview of previous academic work done in the context of securing software delivery pipelines.

In Chapter 3, a description of the implementation is given, focusing on the architecture of our designed solution, the environment setup and the implementation of the described architecture.

Chapter 4 contains the results of the two instantiations of the prototype, in terms of both vulnerabilities found and the pipeline’s performance.

Finally, Chapter 5 presents concluding remarks and items for future work.

Chapter 2

State of the Art

In this chapter of the dissertation, each of the technologies that plays a role in implementing the secure software delivery pipeline is exposed in detail: first with an explanation of what each technology is, the problems it proposes to solve, and the role it plays in the overall objective we aim to achieve. After that, an analysis of the tools available for each purpose is conducted, touching on their pros and cons, and ultimately a choice of tools is made to integrate into our final prototype for this dissertation.

Firstly, we analyse the state of Continuous Integration and Continuous Delivery technologies and the tools currently available - this is where all the Application Security tests are integrated, meaning that it constitutes the backbone of the entire project.

Also, a brief introduction to container technology - namely Docker - is presented, as this technology also plays an important role in most modern development automation environments.

After that, we detail the security checks that can be integrated in the pipeline, namely the Static Analysis Security Testing (SAST) checks, and the Dynamic Analysis Security Testing (DAST) checks.


2.1 DevOps

Nowadays, development teams are increasingly expected to deliver their products quickly. Teams working under Scrum and other Agile methodologies that used to deliver on a monthly sprint basis are now commonly pushing functional, ready-to-go code every week. In cutting-edge tech giants such as Amazon or Google, this is pushed to an extreme, with thousands of changes to production systems being pushed on a daily basis [31].

This change of pace is welcomed by the fundamentals and building blocks of all modern software delivery methodologies like Agile, but in order to keep up with it, the common ways of deploying fixes or changes to an application’s code had to be rebuilt with a different approach [33] - one that takes into consideration and leverages this new high-speed, continuous way of working.

This progressive shift has sparked the interest of software development companies in a practice commonly referred to as DevOps - but what is DevOps?

Amazon Web Services, arguably one of the pioneers of the industry to adapt and leverage these practices, describes DevOps [27] as "the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market".

In Figure 2.1 [27] we can see an overview of the overall goal of the DevOps process.

FIGURE 2.1: DevOps Process Overview - as described by AWS

Running a Google Trends query for the term since 2010, we can clearly identify a considerable and steady rise in interest, with a peak reached during the last year (2019), as demonstrated in Figure 2.2.

FIGURE 2.2: Google Trends Query - "DevOps" - January 2010 to January 2020

Here, our focus will be on just one part of the full DevOps process: Continuous Integration and Continuous Delivery.

2.1.1 Software Version Control

Version Control System (VCS) - also often called Source Code Manager (SCM) or Revision Control System (RCS) - is the name given to a tool used to manage changes to, in this case, repositories of source code for computer programs. Each change has an identifier, normally a code or a number, called a revision, and along with it comes an associated timestamp and the user who committed the change. Modern tools for this purpose also allow comparing and restoring each of these revisions, as well as, in some cases, merging them.

The most popular software version control technology by far in modern software development is Git [10]. Git is a version control system created by Linus Torvalds in 2005 to support the development of the Linux kernel. It is completely distributed, in the sense that every Git repository cloned to a host is complete with the entire history and version tracking, without needing network access or access to a central server.

Another of Git’s strengths is its compatibility with existing protocols, since Git repositories can be published and synchronized over HTTP, FTP and SSH [10].

2.1.2 GitLab

Gitlab [1] is a tool that started as a hosting service for Git repositories and now offers other functionalities as well, such as wikis, issue tracking, and continuous integration/continuous delivery (presented in more detail in the next sections). Gitlab offers a free cloud-based service, with some limitations that can be lifted by upgrading from the free tier to a premium service.

As for the Git-repository hosting functionality, it allows a free user to host their code repositories in the public version of Gitlab, which runs in the cloud as a service - as opposed to the product business model that Gitlab also offers, which allows companies to set up their own private, on-premises version of it.

2.1.3 Continuous Integration

As described by Martin Fowler [33], Continuous Integration is a software development practice where the multiple developers working on a project integrate their work as frequently as possible. In practical terms, this means short and frequent commits to the project’s repository. Each integration is verified by an automated build - leveraging automation technologies such as the one presented in the next section - that integrates the code-base, including the test stages. Recently, more and more teams have agreed that this practice leads to a significant reduction in integration issues and to more agile software development.
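As a minimal illustration of the practice, consider the following hypothetical .gitlab-ci.yml - a sketch only, in which the job names and the maven:3-jdk-11 build image are placeholders, not the configuration used later in this thesis:

stages:
  - build
  - test

# every push triggers an automated build of the code-base...
build-app:
  stage: build
  image: maven:3-jdk-11   # assumed build-environment image
  script:
    - mvn clean package
  artifacts:
    paths:
      - target/           # hand the build artifacts to later stages

# ...followed by an automated test stage that verifies the integration
test-app:
  stage: test
  image: maven:3-jdk-11
  script:
    - mvn test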

2.1.4 Continuous Delivery

Continuous Delivery is commonly referred to as a natural extension of Continuous Integration, and it refers to extending the practice explained above to the delivery of applications. This means automating the entire process of software delivery, from the commit, through the building and testing of the application, until the deployment of the application itself - in what is usually called a delivery pipeline. An example of a delivery pipeline can be seen in Figure 2.3.

FIGURE 2.3: Continuous Delivery Pipeline

Currently, the mainstream platforms for hosting Git repositories like GitHub [12], Gitlab and Bitbucket also offer native integration with multiple CI/CD engines like Gitlab CI/CD, TravisCI, CircleCI, Jenkins and AppVeyor.

2.1.4.1 Gitlab CI/CD

Besides the Git-repository hosting functionality, the Gitlab product also comes with its own CI/CD engine that allows developers to define their own automation pipelines associated with a code repository.

The overall architecture of the Gitlab CI/CD solution is, as with most CI/CD engines, a master-slave architecture. In practical terms, this means that the Gitlab Server acts as the master: it reads and interprets the .gitlab-ci.yml file, in which the pipeline is defined, and then assigns each of the jobs to a slave, called a Gitlab Runner server.

The Gitlab CI/CD edition has several runner servers available in the cloud that can be used - or the user can install the Gitlab-Runner agent on their own server and enforce that a job only runs on that slave using the tags directive.

For reusability and modularity, the Gitlab CI/CD engine also supports the include directive, which allows other pipeline definitions to be imported and reused.
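As a hedged sketch of how these two directives combine - the template names mirror the security modules implemented in Chapter 3, and the job body is illustrative only:

# modular pipeline: security-check templates imported with "include"
include:
  - local: 'code-analysis.yml'      # SAST module
  - local: 'dynamic-analysis.yml'   # DAST module

build-app:
  stage: build
  script:
    - mvn clean package
  tags:
    - runner-shell   # only runners registered with this tag pick up the job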

2.2 Containers

Containers are a virtualization technology that is, by design, different from traditional virtualization based on hypervisors. A container runs as a completely isolated process, unable to access external resources unless explicitly granted. The main advantage of this approach is the ability to share the host kernel, instead of virtualizing another kernel. This makes a container a lightweight alternative to a traditional VM - which means that project deployments will be stable regardless of changes in the host environment.

A comparison of the architecture of a container-based system versus a traditional virtual machine architecture can be seen in Figure 2.4.

FIGURE 2.4: Architecture - Containers vs Virtual Machines

2.2.1 Docker

Nowadays, the most popular instantiation of container technology by far is Docker. A growing number of big companies use Docker to support their business, such as Spotify, Yelp, eBay, PayPal, Shopify and Uber [6].

Docker describes itself as "a platform for developers and sysadmins to build, run, and share applications with containers".

A Docker-based container system is composed of three main components with specific functions, as seen in Figure 2.5.

• Docker Daemon - a service that runs on top of Linux and depends on several Linux kernel features. It’s described as the "persistent process that manages containers" [9]. By default, it locally exposes a REST API to handle client interactions.

• Docker Client - also commonly referred to as the Docker CLI, it’s the command-line interface that interacts with the daemon’s REST API.

• Docker Registry - a server application to store and distribute Docker images.

FIGURE 2.5: Docker Architecture

2.2.2 Container Security

When it comes to container security - more specifically, the security of Docker container technology - there are several techniques in use, each exploring its own niche in terms of the security layer it analyses. In this dissertation, we will only focus on the static-analysis component of Docker security, specifically the image-scanning approach.

Image-scanning is a static-analysis technique applied in container security that consists of analysing a container image from a registry and checking each of its dependencies and modules for known vulnerabilities - comparing the packages’ versions against a database of publicly disclosed vulnerabilities [4].

2.2.2.1 Clair

Clair [24] is an open-source tool whose main purpose is to perform static analysis of application containers - currently supporting appc and Docker. It works by downloading and updating a database of public vulnerability information from well-known sources (e.g., MITRE’s CVEs). The Clair API can then be used (directly, or leveraging a CLI wrapper like Clair-Scanner [2]) to submit a container image and index all the features present in it. Finally, these two lists are compared, and Clair returns a result with all the public vulnerabilities detected for the software and packages used in that container image.

2.3 Application Security

Application Security [34] is the name given to the processes, practices and tools applied throughout the entire application life-cycle with the goal of protecting a given piece of software from threats.

In this dissertation, several Application Security techniques presented in the next sections, such as SAST and DAST, will be implemented, in order to achieve a wide range of different security tests in the same automated delivery pipeline.

2.3.1 OWASP

In order to raise developers’ awareness of the security concerns to be had when creating software, the OWASP (Open Web Application Security Project) foundation [28] was created - a nonprofit foundation globally regarded as the leading organization working towards secure coding. Over the last few years, OWASP has been responsible for delivering open-source security software, as well as learning material and standards documents that help developers and security professionals achieve this goal.

2.3.2 Static Application Security Testing (SAST)

Static Application Security Testing [35] is the type of software analysis that can be performed without executing the application. In most cases, this type of analysis is performed against the application’s source code - although there are exceptions, as seen above with Clair, which is a type of SAST performed on a project’s container image rather than its source code. SAST is the term usually used when this type of analysis is done by an automated tool; when it’s performed by a human analyst, it’s usually called Code Review.

2.3.3 Source-Code Analysis

Source-code analysis [32] is a type of automated SAST that scans an application’s source code against a pre-defined rule-set of coding rules; if at any point the application’s code breaks any of the enforced rules, a finding is flagged.

This kind of tool offers many advantages: it is significantly faster than manually reviewing the code; since it does not require a live deployment of the application, it can be applied earlier in the software’s life-cycle - even without a deployment-ready version of the application; and it can find flaws in hard-to-reach states or unusual application circumstances, which would often be missed by human auditors [32].

2.3.3.1 SonarQube

SonarQube is an open-source source-code analysis tool [25] developed by SonarSource which performs static analysis of source code "to detect bugs, code smells, and security vulnerabilities". Even though the tool is open-source and made available under the GNU Lesser General Public License, a paid enterprise version is available.

Natively, SonarQube supports over 20 programming languages [25] (although some of them are only available in the Enterprise version) - including all the mainstream programming languages like Java, Python, JavaScript, C/C++, PHP and Go.

2.3.4 Secrets Scanning

Secrets Scanning is the name given to a scan run over the application’s source code against a list of regex patterns often associated with content that should be secret and/or not hard-wired directly in the application’s source code, such as passwords, API keys, personal information, etc.

2.3.4.1 Gitleaks

Gitleaks [15] is an open-source SAST tool for scanning for secrets in Git source-code repositories. It works, as described in the previous section, by performing regex analysis based on a set of rules that can be customized in a TOML file, as shown in Listing 2.1. The tool also supports JSON and CSV reporting.

LISTING 2.1: Gitleaks Rules TOML file example

[[rules]]
  description = "a string describing one of many rule in this config"
  regex = '''one-go-style-regex-for-this-rule'''
  file = '''a-file-name-regex'''
  path = '''a-file-path-regex'''
  tags = ["tag","another tag"]

  [[rules.entropies]]
    # note these are strings, not floats
    Min = "3.5"
    Max = "4.5"
    Group = "1"

  [rules.allowlist]
    description = "a string"
    files = ['''one-file-name-regex''']
    paths = ['''one-file-path-regex''']
    regexes = ['''one-regex-within-the-already-matched-regex''']

[allowlist]
  description = "a description string for a global allowlist config"
  commits = ["commit-A", "commit-B"]
  files = ['''file-regex-a''', '''file-regex-b''']
  paths = ['''path-regex-a''', '''path-regex-b''']
  repos = ['''repo-regex-a''', '''repo-regex-b''']
  regexes = ['''one-regex-within-the-already-matched-regex''']
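In a pipeline, Gitleaks can run as a job of its own. The following is a hedged sketch only - the prototype’s actual job appears later as Listing 3.17, and the CLI flags shown here follow the v4-era Gitleaks release, so they may differ in other versions:

secrets-scan:
  stage: static-analysis
  script:
    # flags (--repo-path, --config, --report) follow the v4-era Gitleaks CLI
    - docker run --rm -v $(pwd):/repo zricethezav/gitleaks --repo-path=/repo --config=/repo/.gitleaks.toml --report=/repo/reports/report_secrets.json || true
  artifacts:
    paths:
      - reports/report_secrets.json
  tags:
    - runner-shell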

2.3.5 Dependency Scanning

Dependency Scanning [37] is the name given to a type of SAST that consists of analysing a project’s library dependencies and verifying whether any of them has known vulnerabilities associated, by checking each library and its respective version against a database of publicly known vulnerabilities (CVEs).

2.3.5.1 OWASP Dependency Check

Since 2013, the OWASP foundation has considered "using components with known vulnerabilities" one of the top issues affecting real-world web applications [21], as these components may render the applications that use them vulnerable as well.

As such, developed as part of the OWASP project, OWASP Dependency-Check is an open-source tool that scans an application’s dependencies, identifies them, and finally checks each package for vulnerabilities in MITRE’s CVE list [20].

It works by gathering information about the dependencies using Analyzers. For example, the JarAnalyzer (part of the Dependency-Check engine) collects information from the Manifest, pom.xml and the package names in the JAR files, and then checks them for known vulnerabilities.
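As an illustration, a minimal pipeline job invoking the Dependency-Check CLI could look as follows - the install path and project name are placeholders, and the prototype’s actual job is shown later in Listing 3.19:

dependency-check:
  stage: static-analysis
  script:
    # --format JSON writes dependency-check-report.json to the --out directory
    - /opt/dependency-check/bin/dependency-check.sh --project "JVL_Project" --scan . --format JSON --out reports/
  artifacts:
    paths:
      - reports/dependency-check-report.json
  tags:
    - runner-shell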

2.3.6 Dynamic Application Security Testing (DAST)

As opposed to SAST, which does not require a deployment of the application, Dynamic Application Security Testing [26], or DAST, is a type of security testing that tries to replicate an attacker’s behavior against a live version of the application.

DAST tools test the endpoints as they communicate through the frontend of the application, emulating malicious user behaviour by analyzing the live application’s behavior, injecting malicious payloads into input fields and observing the outputs.

They are especially useful for detecting the following issues:

• input validation issues (cross-site scripting, injection vulnerabilities, etc.);

• server configuration issues (lack of security headers, unprotected cookies, unencrypted communications, etc.).

2.3.6.1 Zed Attack Proxy

The OWASP Zed Attack Proxy, commonly referred to as ZAP [23], is a web-application security tool also developed under the OWASP Foundation, which aims to help security professionals automatically find security vulnerabilities in web applications.

It’s also used by pentesters for manual testing, as it allows the interception and manipulation of HTTP requests between the client and the application.

It also offers the possibility of running as a daemon [29], which can then be controlled through a REST API. Several command-line wrappers for this API exist, including official ones maintained by the developers of the tool, which allow for seamless integration with CI/CD tools, as we illustrate in the implementation chapter for this check.
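As a hedged sketch of the daemon mode (not the configuration used in our prototype - the port, API key, wait times and target URL are all placeholders), a pipeline job could start the daemon and drive it through the REST API directly:

zap-daemon-scan:
  stage: dynamic-analysis
  script:
    # start ZAP in daemon mode and allow API calls from outside the container
    - docker run -d --name zap -u zap -p 8090:8090 owasp/zap2docker-stable zap.sh -daemon -host 0.0.0.0 -port 8090 -config api.key=changeme -config api.addrs.addr.name=.* -config api.addrs.addr.regex=true
    - sleep 30
    # spider the target, then pull the alerts raised so far
    - curl "http://localhost:8090/JSON/spider/action/scan/?apikey=changeme&url=http://tomcat:8080/webapp/"
    - sleep 60
    - curl "http://localhost:8090/JSON/core/view/alerts/?apikey=changeme&baseurl=http://tomcat:8080/webapp/" -o reports/report_zap_api.json
  tags:
    - runner-shell

In practice, the official wrapper scripts encapsulate exactly this start-scan-report cycle, which is why we use one of them in Chapter 3.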

2.4 Vulnerability Management

Vulnerability Management is the name given to the entire process of identifying, evaluating, treating and reporting security vulnerabilities in a given application or information system.

• Identifying vulnerabilities - using security tools or manual penetration tests to find security issues in an application or system

• Evaluating vulnerabilities - categorizing an identified issue according to a set of criteria in order to define a priority for addressing it.

– The criteria defined should be in line with the organization’s context - but some common factors to consider are:

∗ Is it hard to exploit?

∗ Is the vulnerable asset exposed to the internet?

∗ What would be the impact of successful exploitation?

• Treating Vulnerabilities - deciding how to address each issue, using as a baseline the evaluation performed in the previous step:

– Remediation - applying a patch or action to fully ensure the issue can not be exploited.

– Mitigation - applying a security control or work-around to lessen the likelihood of exploitation of the vulnerability.

– Acceptance - deciding to take no action to address the vulnerability (usually if the risk is low, or if the mitigation cost is higher than the cost of successful exploitation).

• Reporting Vulnerabilities - exporting statistics and visualizations about these actions, in order to support regulatory or compliance requirements, as well as to understand the organization’s evolution over time.

2.4.1 DefectDojo

DefectDojo [11] is an open-source vulnerability management tool, also developed as part of the OWASP project. The goal of the tool is to help organizations manage their Application Security programs by providing a platform to aggregate all the information about vulnerabilities.

The application natively supports the reporting formats of over 20 other security tools, and it also provides an API that can be used to automate several tasks, including importing scan results.
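For instance, a report produced by one of the pipeline’s tools could be pushed to DefectDojo through this API with a job along the following lines - a sketch only, with a placeholder host, engagement id and token variable; the prototype’s actual integration is described in Section 3.3.8:

upload-to-defectdojo:
  stage: vulnerability-tracking
  script:
    # import-scan expects the report file, its scan type and the target engagement
    - curl -f -X POST "http://defectdojo:8080/api/v2/import-scan/" -H "Authorization: Token $DOJO_API_KEY" -F "scan_type=ZAP Scan" -F "engagement=1" -F "file=@reports/report_dast.xml"
  tags:
    - runner-shell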

2.5 Integrating Security Checks in a CI/CD Pipeline

Regarding previous work in this field of study, there have been several attempts at including security checks in CI/CD pipelines. In 2018, Abdollah Shajadi of the Oulu University of Applied Sciences [38] successfully implemented a prototype of a pipeline that integrated a DAST check. Similarly to what we propose for our prototype, he also leveraged Gitlab CI/CD as the automation engine. However, this work was limited in the scope of its security checks, as it only considered DAST. Besides that, the research also leveraged the Burp Suite Pro tool, whose automated DAST scans are only available in the paid edition, while our prototype intends to implement the framework leveraging only open-source tools.

Similarly, the research done by Mariana Paulo of Instituto Superior Técnico [36] goes deeper into the DAST type of security checks by integrating three different tools for the purpose.

In that prototype, three different DAST scanners with different scopes were implemented: Arachni, for specific web-application scanning; Nessus, for system vulnerability scanning (more intended to detect vulnerabilities in services running on the host - for example, the web application server itself); and Nmap, to detect open ports on the host.

In 2020, in "Adding security testing in DevOps software development with continuous integration and continuous delivery practices" [39], Ella Viitasuo implemented a delivery pipeline that includes some of the tools used in our prototype, including Sonarqube and Zed Attack Proxy. However, while also considering a wide range of security checks, this research does not include the implementation of dedicated Secrets Scanning, and does not offer a way to aggregate the results from the different tools in a uniform manner.

In both works, the future work proposed for this field involves extending the automation to other types of security checks, in order to ensure a wide scope of different validations that may be able to find different issues, as well as proposing a way to gather all the different tools’ reports into a single actionable report, with a single severity scale.

In our project, we present a framework that, similarly to work that has already been done, considers automated DAST in the delivery pipeline, but expands the scope to multiple SAST and DAST checks within the same delivery process.

Chapter 3

Implementation

This chapter of the dissertation presents the technical implementation of the secure development pipeline. We walk the reader through each of the prototype’s implementation stages in detail, providing code snippets, screenshots and other relevant attachments useful for demonstration, as well as all the technical decisions made during this phase.

First, we define the architecture of the proposed solution. Then, we detail the process of setting up an automation environment, including all the (virtualized) infrastructure, from both a systems and a networks point of view, as well as the installation and configuration of all necessary software and pipeline infrastructure. Finally, we describe the implementation itself, with regard to how the flow of the pipeline is implemented, and why and how the security checks are implemented.

3.1 Architecture

The overall goal of our work is to integrate a wide range of different security checks into a CI/CD pipeline, as illustrated in Figure 3.1. For this, we make use of the techniques and tools presented in the state-of-the-art chapter of this dissertation (Chapter 2).

In total, we integrate five security checks on top of a simple baseline pipeline. The implementation follows a modular approach, with none of the security checks being reliant on the others, by dividing the pipeline definition into several separate job definitions. These modules are then imported at will into the baseline definition. Additionally, the results from the different security checks are all aggregated in a uniform manner.

17 18 APPLICATION SECURITYIN CONTINUOUS DELIVERY

The security checks comprise four SAST checks (which do not require a live deployment of the application) and one DAST check (which requires a live deployment).

The SAST checks are as follows:

• Code Analysis - to examine the code against a set of rules that check the data and control flow of the source-code for security vulnerabilities.

• Secrets Scan - to perform regex scanning of the repository for hard-wired secrets such as passwords, API keys, etc.

• Dependency Scan - to compare the libraries imported in the project against a database of known vulnerabilities.

• Container Scan - to analyse the produced container image against a database of known vulnerable packages.

The DAST check is as follows:

• Dynamic Analysis - to perform a set of live tests against a live deployment of the application using pentesting tools.

FIGURE 3.1: Complete Pipeline

For the instantiation of the enhanced CI/CD pipeline, we make use of the computational environment illustrated in Figure 3.2, which includes: a Software Versioning Control System, in which the repository is hosted; an automation server, which executes the jobs defined in the pipeline - including the security checks; and finally a web server, on which the application is deployed, in order for the DAST checks to be performed.

FIGURE 3.2: Prototype - System Architecture

It’s worth adding that in a production environment, it would be advisable to deploy these different functionalities (especially the different types of security testing) on different hosts in a cluster, to ensure performance, stability and availability. Additionally, the web-application server used by DAST should be a test-only server, not the production server used by real clients, for two reasons: first, a build that fails the DAST checks should not be considered secure, and therefore should not reach production; and second, the dynamic testing may hinder the performance and stability of the application for other users - and the DAST tool, if configured to perform intrusive tests, could compromise production data.

3.2 Setting up the environment

Taking into account the architectural requirements discussed in the previous section, we made the following choices for the CI/CD environment setup:

• Software Versioning Control System - we make use of Gitlab, hooked to our automation system such that a build is triggered each time a commit is pushed to a Gitlab repository.

• Automation Server - in this server, we host Gitlab CI/CD Runner for the automated build process, along with all the complementary tools for SAST and DAST checks.

• Web Application Server - on this server, we host a web-application server that depends on the application at stake: two instances were configured, one running Apache Tomcat for the Java web application, and another running NodeJS for the JavaScript web application.

3.2.1 Software Versioning Control System - GitLab

The Software Version Control System we selected for this role in the project was Gitlab, due to its already-provided integration with Gitlab CI/CD, our automation system. For the two instantiations, we have one Gitlab code repository each, containing all the application’s source code as well as the YAML file in which the automation pipeline is defined, .gitlab-ci.yml.

3.2.2 Automation Server - Gitlab CI/CD + Runner

To set up our automation server, we begin by creating an EC2 instance in AWS - eu-west-1 (Ireland) - running Ubuntu Server 18 LTS with the following specifications:

Type       vCPUs   Architecture   RAM (MiB)
t2.large   2       x86_64         8192

In association with the instance, an Elastic IP (the AWS designation for a static public IP address) is configured. Next, we configure the Security Groups (AWS’s feature for port management) for this IP to allow SSH from any IP (a default practice, as access still requires the SSH private key). As illustrated in the next section, we can configure the GitLab CI/CD Runner (which will communicate with the versioning control system) to report back through SSH as well, so no additional security groups need to be configured.

The security group was defined as follows:

Type   Protocol   Flow      Port Range   Source      Description
SSH    TCP        Inbound   22           0.0.0.0/0   ssh-inbound-all

With this configuration in place, we proceed with the installation of the Gitlab CI/CD Runner on the EC2 instance. We first connect via SSH to the EC2 instance and run the commands in Listing 3.1 [18]. We can then confirm the successful installation by running the command in Listing 3.2.

LISTING 3.1: Gitlab CI/CD Runner Installation Commands

# Linux x86-64
$ sudo curl -L --output /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64

$ sudo chmod +x /usr/local/bin/gitlab-runner

$ sudo useradd --comment ’GitLab Runner’ --create-home gitlab-runner --shell /bin/bash

$ sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner

$ sudo gitlab-runner start

LISTING 3.2: Gitlab CI/CD Runner Installation Check

$ service gitlab-runner status
  gitlab-runner.service - GitLab Runner
    Loaded: loaded (/etc/systemd/system/gitlab-runner.service; enabled; vendor pr
    Active: active (running) since Sat 2020-01-25 03:23:18 UTC; 1min 5s ago
  Main PID: 8185 (gitlab-runner)
     Tasks: 6 (limit: 1152)
    CGroup: /system.slice/gitlab-runner.service
            8185 /usr/local/bin/gitlab-runner run --working-directory /home/git

After confirming the successful installation of the GitLab Runner service, the next step is to register it with our GitLab (SCM) instance. We can achieve this by first accessing our GitLab interface and getting the GitLab CI token (Settings - CI/CD), as seen in Figure 3.3, and then running the commands in Listing 3.3.

FIGURE 3.3: GitLab Runner Token

LISTING 3.3: Gitlab Runner Registration

$ sudo gitlab-runner register
Runtime platform    arch=amd64 os=linux pid=8216 revision=003fe500 version=12.7.1
Running in system-mode.

Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://gitlab.com/
Please enter the gitlab-ci token for this runner:
PYxz-Z9Qx6LtHJqEg9oN
Please enter the gitlab-ci description for this runner:
[ip-172-31-23-71]: my-runner
Please enter the gitlab-ci tags for this runner (comma separated):
secure-pipeline
Registering runner... succeeded    runner=PYxz-Z9Q
Please enter the executor: docker, docker-ssh, parallels, shell, ssh, virtualbox, docker+machine, kubernetes, custom, docker-ssh+machine:
shell
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

We can test both the Runner and the communication back to the SCM by creating a simple .gitlab-ci.yml file that just runs an ls command on the instance, as can be seen in Listing 3.4 and in Figure 3.4.

LISTING 3.4: Test .gitlab-ci.yml file

job:
  script: "ls"

FIGURE 3.4: GitLab Runner Test 1

3.3 Implementing the Secure Pipeline

3.3.1 Baseline

The baseline for a software delivery pipeline consists of the common steps of any software delivery pipeline, without any security concerns embedded in it. As a first example, we have the CI/CD flow for a Java application in 4 jobs, divided into 3 stages, as illustrated in Figure 3.5.

• Initialize - in the initialization stage, the OS is initialized and the file system is prepared to handle the artifacts.

• Package - in the build stage, the package manager builds the artifacts of the application - in the Java case, this is where "mvn clean package" is executed. This is done using Maven for the Java application and NPM for the JavaScript application.

• Build Container - also in the build stage, this job takes the generated artifacts and builds a container - assuming that the application is deployed using container technology.

• Deploy - in the deployment stage, the automation server takes the built artifacts and deploys them into a server, in our case a QA server.

FIGURE 3.5: Baseline for Software Delivery Pipeline

The Gitlab CI/CD configurations for the Java and JavaScript applications are given in Listings 3.5 and 3.6, respectively. The only differences are related to the use of Maven for Java and NPM for JavaScript. For Java, mvn clean package is used to build the artifacts, which are then copied to the appropriate Tomcat directory using scp. As for Node, npm install is used to build the application, and forever start to deploy it - equivalent to npm start, but starting the process in the background [17][14].

LISTING 3.5: .gitlab-ci.yml - Baseline definition for Java Project

variables:
  TARGET_CONTAINER: '0xfabiof/secure-pipeline-java'

stages:
  - initialize
  - build:mvn
  - build:docker
  - deploy-to-qa

initialize_os:
  stage: initialize
  script:
    - pwd
    - mkdir reports
    - chmod u+rwx reports/
    - chmod o+w reports/
    - df
    - ls -la
  artifacts:
    paths:
      - reports/
  tags:
    - runner-shell

build-mvn:
  stage: build:mvn
  script:
    - mvn clean package
  artifacts:
    paths:
      - target/
  tags:
    - runner-shell

build-docker:
  stage: build:docker
  script:
    - docker build -t $TARGET_CONTAINER .
    - docker push $TARGET_CONTAINER
  tags:
    - runner-shell

deploy-to-qa:
  stage: deploy-to-qa
  script:
    - scp -i ~/deployment-keys/tomcat_priv.pem -o StrictHostKeyChecking=no target/*.war ubuntu@tomcat:/home/ubuntu/apache-tomcat-9.0.30/webapps/webapp.war
  tags:
    - runner-shell

LISTING 3.6: .gitlab-ci.yml - Baseline definition for NodeJS Project

variables:
  TARGET_CONTAINER: '0xfabiof/secure-pipeline-node'

stages:
  - initialize
  - build:npm
  - build:docker
  - deploy-to-qa

initialize_os:
  stage: initialize
  script:
    - pwd
    - mkdir reports
    - chmod u+rwx reports/
    - chmod o+w reports/
    - df
    - ls -la
  artifacts:
    paths:
      - reports/
  tags:
    - runner-shell

build-npm:
  stage: build:npm
  script:
    - ssh -i ~/deployment-keys/node-js-qa-server.pem -o StrictHostKeyChecking=no -t ubuntu@node-js 'rm -rf node-app/*'
    - scp -i ~/deployment-keys/node-js-qa-server.pem -o StrictHostKeyChecking=no -r * ubuntu@node-js:/home/ubuntu/node-app/
    - ssh -i ~/deployment-keys/node-js-qa-server.pem -o StrictHostKeyChecking=no -t ubuntu@node-js 'cd node-app; npm install --prefer-offline --no-audit'
  tags:
    - runner-shell

build-docker:
  stage: build:docker
  script:
    - docker build -t $TARGET_CONTAINER .
    - docker push $TARGET_CONTAINER
  tags:
    - runner-shell

deploy-to-qa:
  stage: deploy-to-qa
  script:
    - ssh -i ~/deployment-keys/node-js-qa-server.pem -o StrictHostKeyChecking=no -t ubuntu@node-js 'cd node-app; forever start app.js'
  tags:
    - runner-shell

3.3.2 Integrating Source-Code Analysis - Sonarqube

After defining our baseline software delivery pipeline, we can now introduce security checks into the process. We start with Source-Code Analysis, which is a type of Static Analysis Security Testing. The plan is that, after initializing and building our project, we submit the code - and, for Java, also the compiled bytecode resulting from the build - to SonarQube, the source-code analysis tool we employ. The flow for this is shown in Figure 3.6.

FIGURE 3.6: SAST Check in the Pipeline - Flow

The first step for integrating the Source-Code Analysis in the pipeline is selecting, installing and configuring Sonarqube. For this, we have two requirements that need to be fulfilled:

• A host with the Sonarqube Server installed and the SonarScanner binary provisioned;

• And a Sonarqube configuration file, sonar-project.properties, in the target code repository.

To fulfil the first requirement, we need to download the official SonarQube binaries from the website, unpack them, and then start the SonarQube server by running the "./sonar.sh start" boot-up script. Optionally, it’s also possible to deploy Sonarqube as a service - in order to have it always running and ready to receive analysis submissions from our automation server - by creating the sonar.service file in /etc/systemd/system, as can be seen in Listing 3.7. We can test whether the installation and service configuration have been successful by accessing the Sonarqube server on port 9000 - i.e., https://sonarqubeip:9000. In our development environment, for usability purposes, we deployed the Sonarqube server on the same host as the automation runner, meaning that a source-code analysis can be submitted directly to localhost. In a production environment, the Sonarqube server would most likely be deployed on a different host, meaning that, additionally, network flows would need to be opened between the Gitlab-runner machine and the SAST (Sonarqube) host on the respective ports.

LISTING 3.7: sonar.service file in /etc/systemd/system/

[Unit]
Description=SonarQube service
After=syslog.target network.target

[Service]
Type=forking

ExecStart=/home/gitlab-runner/sonarqube-8.1.0.31237/bin/linux-x86-64/sonar.sh start
ExecStop=/home/gitlab-runner/sonarqube-8.1.0.31237/bin/linux-x86-64/sonar.sh stop

User=gitlab-runner
Group=gitlab-runner
Restart=always

[Install]
WantedBy=multi-user.target

To submit a code repository for source-code analysis, Sonarqube provides a series of options that can be integrated directly with certain project packaging frameworks, such as a Maven goal; but it also provides a standalone tool called SonarScanner that is agnostic to the packaging framework and the project’s code language, and allows us to submit the project for analysis directly.

In our case, we decided to use the latter, mostly because of the flexibility it offers; thus, the next step is provisioning the automation server with SonarScanner.

With a SonarQube server ready to receive projects for analysis, and an automation server provisioned with the tool to submit them, all that is required now is to create a sonar-project.properties file (Listing 3.8) in the root of our code repository, allowing us to specify certain parameters for the analysis of this project, such as the project key, the project’s languages, directories to consider or ignore, etc.

LISTING 3.8: sonar-project.properties

# must be unique in a given SonarQube instance
sonar.projectKey=JVL_Project
sonar.java.binaries=target/*

With the requirements for the source-code analysis check fulfilled, we can configure the pipeline stage that will submit our project for source-code analysis and retrieve the results, by defining the code-analysis.yml file in Listing 3.9 [13].

LISTING 3.9: code-analysis.yml

[...]

variables:
  SONAR_SCANNER_PATH: '/home/gitlab-runner/sonar-scanner-4.2.0.1873-linux/bin/sonar-scanner'
  SONAR_REPORT_PATH: '/usr/bin/sonar-report'
  SONARQUBE_HOST: 'localhost:9000'
  SONARQUBE_PROJECT: 'JVL_Project'

code-analysis:
  stage: static-analysis
  script:
    - $SONAR_SCANNER_PATH
    - sleep 5
    - curl -f "http://$SONARQUBE_HOST/api/issues/search?projects=$SONARQUBE_PROJECT&types=VULNERABILITY" >> reports/report_sast.json
    - $SONAR_REPORT_PATH --sonarurl="http://$SONARQUBE_HOST" --sonarcomponent="$SONARQUBE_PROJECT" --project="$SONARQUBE_PROJECT" > reports/report_sast.html
  artifacts:
    paths:
      - reports/report_sast.json
      - reports/report_sast.html
  tags:
    - runner-shell

The first line of this CI stage’s script is where we call the SonarScanner binary, which runs inside the code repository, fetching the sonar-project.properties file and acting accordingly. The sonar-scanner utility has a config file of its own, where the IP and port of the Sonarqube server are specified - by default it points to localhost:9000, but once again this would need changing in a production environment. The sonar-scanner binary runs until the static analysis is complete, which means that, when it finishes, we can make an API call to the Sonarqube server and fetch the updated results. We opted to fetch only the security vulnerabilities identified, instead of all the code-quality and other issues that the SonarQube scanner manages to identify. In the end, we simply write the output of this API call to a JSON file and export it from the stage as an artifact, to be accessible to future stages of the pipeline. Figure 3.7 illustrates the pipeline with the source-code analysis job integrated, as part of the SAST stage.

FIGURE 3.7: Secure Pipeline with SAST

The JSON output file is in the format of the SonarQube API’s reporting, and each of the flagged issues has associated details such as the rule that flagged it, the file in which it was found, the line, a small description of the vulnerability, and a severity rating. Listing 3.10 gives an example of the output for an issue flagged by SonarQube.

LISTING 3.10: SonarQube output result - report_sast.json

{
  "key": "AXEDXSWrgC8xWKOdJMjk",
  "rule": "findsecbugs-jsp:XSS_REQUEST_PARAMETER_TO_JSP_WRITER",
  "severity": "MAJOR",
  "component": "JVL_Project:src/main/webapp/vulnerability/xss/search.jsp",
  "project": "JVL_Project",
  "line": 21,
  "hash": "410213d2fc439cec53de8ce795713d7e",
  "textRange": {
    "startLine": 21,
    "endLine": 21,
    "startOffset": 0,
    "endOffset": 41
  },
  "flows": [],
  "status": "OPEN",
  "message": "HTTP parameter directly written to JSP output, giving reflected XSS vulnerability in org.apache.jsp.vulnerability.xss.search_jsp",
  "tags": [
    "jsp",
    "owasp-a3"
  ],
  "creationDate": "2020-03-22T17:48:57+0000",
  "updateDate": "2020-03-22T17:48:57+0000",
  "type": "VULNERABILITY",
  "organization": "default-organization",
  "fromHotspot": false
}
]

3.3.3 Integrating DAST - Zed Attack Proxy

Next, we opted to integrate Dynamic Analysis Security Testing (DAST) into our software pipeline. For this, as we saw earlier, we opted for the open-source Zed Attack Proxy tool, also known as ZAP.

In terms of work-flow, our starting point is the pipeline already implemented in the previous section, with the addition of a fifth step in which, after the deployment of the application on the QA server, we run an active scan against it - as can be seen in Figure 3.8.

FIGURE 3.8: DAST Check in the Pipeline - Flow

For the DAST, we opted to use the ZAP Baseline scan, a pre-configured scan shipped with ZAP that, by default, spiders the target for 1 minute and then runs passive scanning against the identified end-points. This pre-configured scan does not actually perform any intrusive "active" exploitation against the target, which means that it runs for a short time - ideal for implementation in a DevOps pipeline.

In cases where the execution time of the pipeline is not a constraint, this security scan can be tweaked accordingly in order to execute a more complete set of tests.
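For example, a hedged variant of the DAST job could swap the baseline wrapper for ZAP’s full-scan wrapper, zap-full-scan.py, which adds active attacks on top of spidering at the cost of a much longer execution time - the job and report names here are illustrative:

dynamic-analysis-full:
  stage: dynamic-analysis
  script:
    # same docker wrapper as the baseline job, but running the full scan
    - docker run --network=host -v $(pwd)/reports:/zap/wrk/:rw -t owasp/zap2docker-stable zap-full-scan.py -t $ZAP_TARGET -J report_dast_full.json || true
  artifacts:
    paths:
      - reports/report_dast_full.json
  tags:
    - runner-shell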

As can be seen in the job configured in our YAML file (Listing 3.12), this step of the pipeline starts by running a bash script called connection-check.sh (Listing 3.11): an optional, simple bash script we created that curls the target we want to scan, to confirm that the connection between the agent that will run the scan and the target is up and running.

LISTING 3.11: connection_check.sh

#!/bin/bash

sleep 10
curl tomcat:8080/webapp/ -m 5
if [ $? -eq 0 ]
then
    echo "Connection successful"
    exit 0
else
    echo "Couldnt reach endpoint - is runners IP allowed in servers security group?"
    exit 1
fi

If this connection check is not successful, the pipeline is halted; otherwise, our ZAP Baseline scan is started. As can be seen in the YAML code snippet, we used a Docker wrapper for this script made available by OWASP, and we simply need to provide it with the -t argument, which stands for target - in our case, our Tomcat QA server (http://tomcat:8080/webapp/) [30].

LISTING 3.12: dynamic-analysis.yml

[...]

variables:
  ZAP_TARGET: 'http://tomcat:8080/webapp/'

dynamic-analysis:
  stage: dynamic-analysis
  script:
    - sleep 10; curl $ZAP_TARGET -m 5
    - docker run --network=host -v $(pwd)/reports:/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t $ZAP_TARGET -x report_dast.xml -J report_dast.json -z '-config zap.spider.set_option_max_depth=2' || true
  artifacts:
    paths:
      - reports/report_dast.json
      - reports/report_dast.xml
  tags:
    - runner-shell

[...]

With the integration of dynamic analysis testing, there was a significant increase in the execution time of the pipeline, as this type of check is, by nature, the one that takes the longest - as can be seen in Figure 3.9.

FIGURE 3.9: Secure Pipeline with DAST

Similarly to what we have seen in the SAST integration, the result of the ZAP scan is a .json file (or, optionally, .xml) that lists each of the issues found, as well as additional useful information such as the endpoint on which the vulnerability was found, the HTTP method, the confidence, and even the attack payload used - in the case of active scans being enabled.

In Listing 3.13 we can see an example of an issue raised by the ZAP Baseline scan and its respective .json output.

LISTING 3.13: ZAP output result - report_dast.json

[...] { "pluginid":"10038", "alert":"Content Security Policy (CSP) Header Not Set", "name":"Content Security Policy (CSP) Header Not Set", "riskcode":"1", 36 APPLICATION SECURITYIN CONTINUOUS DELIVERY

"confidence":"2", "riskdesc":"Low (Medium)", "desc":"

Content Security Policy (CSP) is an added layer of security that helps to detect and mitigate certain types of attacks, including Cross Site Scripting (XSS ) and data injection attacks. These attacks are used for everything from data theft to site defacement or distribution of malware. CSP provides a set of standard HTTP headers that allow website owners to declare approved sources of content that browsers should be allowed to load on that page covered types are JavaScript, CSS, HTML frames, fonts, images and embeddable objects such as Java applets, ActiveX, audio and video files.<\/p>", "instances":[ { "uri":"http://tomcat:8080/webapp/vulnerability/securitymisconfig/forum. jsp", "method":"GET"

}, { "uri":"http://tomcat:8080/webapp/vulnerability/xss/search.jsp?action= Search&keyword=ZAP", "method":"GET"

} [...]

Similarly to the SAST job seen before, this job too will require further tweaking to integrate with DefectDojo, which we will describe later in this report.

3.3.4 Integrating Container Scanning - Clair

The next integration added to the pipeline was our Container Scanning job, which leverages the open-source container scanning tool Clair. As we saw in the State-of-the-art chapter, Clair works by having a database server containing a record of the different vulnerabilities (CVEs) associated with various "baseline" container images. In our check, we simply submit our newly created image from the docker registry to the Clair server, which performs the analysis and returns the results.

In a production environment, the Clair server would be deployed on a dedicated, permanently running server. In our prototype, however, we deployed it directly on the automation server - which was not permanently up - so a work-around was implemented in the pipeline that checks, at the beginning of execution, whether the docker containers are up, and boots them in case they aren't. This mechanism was defined in a specific template called "services-boot.yml" that, as the name implies, boots any services that should be up for the pipeline to work.
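As an illustrative sketch of the kind of check such a template performs - the container names are assumptions, not taken from the actual services-boot.yml - the job could run something like:

#!/bin/bash
# Hypothetical services-boot check: start the Clair containers if they
# are not already running ("clair-db" and "clair" are assumed names)
if [ -z "$(docker ps -q -f name=clair)" ]; then
    echo "Clair services not running - booting them"
    docker start clair-db clair
fi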

We can see how this works in terms of workflow in Figure 3.10.

FIGURE 3.10: Container Scanning in the Pipeline - Flow

The configuration of the pipeline job that performs this can be seen in Listing 3.14. [3]

LISTING 3.14: container-scan.yml

variables:
  CLAIR_SCANNER_PATH: '/home/gitlab-runner/clair/clair-scanner_linux_amd64'
  CLAIR_DOCKER_IP: 172.17.0.1
  TARGET_CONTAINER: '0xfabiof/secure-pipeline-java'

container-scan:
  stage: static-analysis
  before_script:
    - source vars
  script:
    - $CLAIR_SCANNER_PATH --ip $CLAIR_DOCKER_IP -r reports/report_container-scan.json --all=false $TARGET_CONTAINER:latest || true
  artifacts:
    paths:
      - reports/report_container-scan.json
  tags:
    - runner-shell

We use the Clair Scanner binary to submit the target container from the docker registry, and we set the --ip flag to point to the server where our Clair server is listening.

The result will be a report file in .json format that details each of the vulnerabilities found in the scanned container, including information regarding severity, container namespace, description, etc. An example of a produced report can be seen in Listing 3.15.

LISTING 3.15: Clair output result - report_container-scan.json

[...]

"vulnerabilities": [ { "featurename": "openssh", "featureversion": "1:7.9p1-10+deb10u2", "vulnerability": "CVE-2019-16905", "namespace": "debian:10", "description": "OpenSSH 7.7 through 7.9 and 8.x before 8.1, when compiled with an experimental key type, has a pre-authentication integer overflow if a client or server is configured to use a crafted XMSS key. This leads to memory corruption and local code execution because of an error in the XMSS key parsing algorithm. NOTE: the XMSS implementation is considered experimental in all released OpenSSH versions, and there is no supported way to enable it when building portable OpenSSH.", "link": "https://security-tracker.debian.org/tracker/CVE-2019-16905", "severity": "Negligible", "fixedby": "" },

[...]

As with the previous jobs, when we consider the addition of DefectDojo as an aggregator and vulnerability tracker for the results of the various tools, this job will need some tweaks that will be seen in detail later in the report.

3.3.5 Integrating Secrets Scanning - Gitleaks

Another tool integrated into our secure pipeline had the goal of finding clear-text secrets hardwired in the repository. This includes hardwired passwords, passwords commented out in the code, API keys, and even SSH key files. This was achieved by integrating a tool called Gitleaks, which performs regex scanning of the repository based on a set of pre-defined rules passed from a configuration file.

In Listing 3.16 we can see an example of one of the rules in the rule-set used, meant to detect e-mails. [16]

LISTING 3.16: Hardwired E-mails Regex Rule - .gitleaks.toml

[...]

[[rules]]
description = "Email"
regex = '''[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}'''
tags = ["email"]

[[rules.whitelist]]
file = '''(?i)bashrc'''
description = "ignore bashrc emails"

[...]

In terms of workflow, as this is part of the static analysis of the application, this job is included in the respective stage, as can be seen in the diagram shown in Figure 3.11.

FIGURE 3.11: Secrets Scanning in the Pipeline - Flow

The implementation of this job in the pipeline is also quite simple. We only need to execute the Gitleaks binary, which should be deployed on the runner that executes the job, passing it the rule-set file (.gitleaks.toml) as an argument along with the repository path - the repository is cloned into the current work directory during execution of the pipeline. Listing 3.17 shows the definition of the job.

LISTING 3.17: secrets-scan.yml

variables:
  GITLEAKS_PATH: '/home/gitlab-runner/secrets-scan/gitleaks-linux-amd64'
  GITLEAKS_CONFIG: '.gitleaks.toml'

secrets-scan:
  stage: static-analysis
  before_script:
    - source vars
  script:
    - $GITLEAKS_PATH -v --config=$GITLEAKS_CONFIG --report=reports/report_secrets.json --repo-path=. || true
  artifacts:
    paths:
      - reports/report_secrets.json
  tags:
    - runner-shell

The tool also produces a .json report as output, detailing, among other things, the file in which the secret was found, the line, and the rule from the rule-set that triggered it, as can be seen in Listing 3.18.

LISTING 3.18: Gitleaks Output result - report-secrets.json

{ "line": " secret = \"nosecret\";", "offender": "secret = \"nosecret\"", "commit": "560888b7bc5c3fa034e2469cff3dedad80956bc8", "repo": ".", "rule": "Generic Credential", "commitMessage": "Initial commit\n", "author": " F b i o Daniel Santos Freitas", "email": "[email protected]", "file": "src/main/java/org/cysecurity/cspf/jvl/controller/Register.java", "date": "2020-01-25T03:41:32Z", "tags": "key, API, generic" },

3.3.6 Integrating Dependency Checks - OWASP Dependency Checker

The last security check integrated into our CD pipeline was the analysis of the dependencies leveraged by the software project. As seen in Chapter 2, this check analyses the libraries declared in the project - using, for example, in the case of a Maven Java project, the "pom.xml" file.

As for the workflow, this type of software-composition check falls in the static-analysis stage of the pipeline, and thus the flow-chart is as can be seen in Figure 3.12.

FIGURE 3.12: Dependency Check in the Pipeline - Flow

Regarding the implementation in the .gitlab-ci.yml file, it's quite similar to the one seen in the previous sub-section - OWASP provides us with the Dependency Check executable, in this case a bash script called "dependency-check.sh", and we simply need to run it against our target repository, as we can see in Listing 3.19. [22]

LISTING 3.19: dependency-check.yml

variables:
  DEPENDENCY_CHECK_PATH: '/home/gitlab-runner/dependency-check/bin/dependency-check.sh'

dependency-check:
  stage: static-analysis
  script:
    - $DEPENDENCY_CHECK_PATH --scan ./ -f XML -f JSON -o reports/
    - mv reports/dependency-check-report.json reports/report_dependency.json; mv reports/dependency-check-report.xml reports/report_dependency.xml
  artifacts:
    paths:
      - reports/report_dependency.json
      - reports/report_dependency.xml
  tags:
    - runner-shell

As for the results, the tool provides outputs in both .json and .xml formats - an example of an issue found, in .json format, can be seen in Listing 3.20.

LISTING 3.20: Owasp Dependency Check Output result - report_dependency-check.json

{ "source":"NVD", "name":"CVE-2018-12538", "severity":"HIGH", "cvssv2":{ "score":6.5, "accessVector":"NETWORK", "accessComplexity":"LOW", "authenticationr":"SINGLE", "confidentialImpact":"PARTIAL", "integrityImpact":"PARTIAL", "availabilityImpact":"PARTIAL", "severity":"MEDIUM" }, "cvssv3":{ "baseScore":8.8, "attackVector":"NETWORK", "attackComplexity":"LOW", "privilegesRequired":"LOW", "userInteraction":"NONE", "scope":"UNCHANGED", "confidentialityImpact":"HIGH", "integrityImpact":"HIGH", "availabilityImpact":"HIGH", "baseSeverity":"HIGH" }, "cwes":[ "CWE-384" ], 44 APPLICATION SECURITYIN CONTINUOUS DELIVERY

"description":"In Eclipse Jetty versions 9.4.0 through 9.4.8, when using the optional Jetty provided FileSessionDataStore for persistent storage of HttpSession details , it is possible for a malicious user to access\/hijack other HttpSessions and even delete unmatched HttpSessions present in the FileSystem’s storage for the FileSessionDataStore." , "notes":"", "references":[ { "source":"CONFIRM", "url":"https:\/\/bugs.eclipse.org\/bugs\/show_bug.cgi?id=536018", "name":"https:\/\/bugs.eclipse.org\/bugs\/show_bug.cgi?id=536018" }

As with the previous checks, the dependency check too will require further tweaking to integrate its results with the vulnerability-tracker tool.

3.3.7 Integrating a results aggregator (custom script) - build_risk_calc.py

The first approach to implementing an aggregator of the results from all the tools in our security pipeline consisted of a custom Python script that reads the .json reports produced by all the presented tools, with a simple parser for each of the different report formats. The flow for this can be seen in Figure 3.13.

FIGURE 3.13: Results aggregator in the Pipeline - Flow

After parsing the results, the script produces a dynamically generated HTML report which, in a first phase, displays the number of issues raised by each security check, as well as the total number of issues. The script also takes the commit hash of the build and prints it as an identifier on the dashboard - as can be seen in Figure 3.14. Additionally, a conditional that renders different colors depending on whether the number of issues exceeds a certain threshold was also configured, as a proof-of-concept of the "quality gate" of a potential full solution; the counting logic is sketched below.
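Although the actual aggregator was a Python script, its counting logic can be sketched in shell with jq. This is an illustration only: the report names match the listings in this chapter, but the JSON layouts are assumptions based on each tool's typical output, and the threshold value is arbitrary.

#!/bin/bash
# Illustrative sketch of build_risk_calc.py's counting step, not the actual script
secrets=$(jq 'length' reports/report_secrets.json)                # Gitleaks: top-level array
dast=$(jq '[.site[].alerts[]] | length' reports/report_dast.json) # ZAP: alerts nested per site
total=$((secrets + dast))

# Proof-of-concept "quality gate": pick the dashboard colour by threshold
if [ "$total" -gt 50 ]; then colour="red"; else colour="green"; fi
echo "Build $CI_COMMIT_SHA: $total issues ($colour)"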

FIGURE 3.14: build_risk_calc.py - HTML Dashboard

Although this worked as a pure proof-of-concept for aggregating the results in a single place, we later decided to dismiss this approach and integrate an already established vulnerability tracking tool instead, as we explain in the next section.

3.3.8 Integrating a Vulnerability Tracker - DefectDojo

Later in the execution of the project, we realized that while the results aggregator script seen in the previous point worked for proof-of-concept purposes as a central dashboard for the results of the security checks, there were already open-source tools available that could serve the same purpose and take it a step further - namely, the DefectDojo vulnerability tracking tool, which we decided to implement as the final step of the pipeline.

As for the workflow and architecture of this integration, the DefectDojo tool runs as a deployed service, and each of the security checks executed along the pipeline will, at the end of its job, submit its results to that service. The diagram in Figure 3.15 illustrates this.

FIGURE 3.15: Pipeline results submitted to DefectDojo Vulnerability Tracker - Flow

DefectDojo is a Django application that also comes packaged as a docker-compose file. In a production environment, it would be deployed on a dedicated server. Similarly to what was already explained in the Container-Scan implementation, in our prototype the DefectDojo server was also deployed directly on the automation server, so the "services-boot.yml" template mentioned earlier also contained a job to verify whether the DefectDojo service was running, and boot it in case it wasn't. Again, in a production environment, this template would not be required, as the DefectDojo application would be expected to be up and running on a dedicated server.

In terms of objects, DefectDojo abstracts its functionality in three different layers: a product, in which a software project should be defined; for each product, there can be one or more engagements, which themselves hold one or more issues. [7]

In our prototype, we instantiated this by having a Product for each of the samples used in the instantiation - which, as we will see later, are the Java project (Java Vulnerable Lab) and the NodeJS project (OWASP Juice Shop). Then, for each Product, we decided that each pipeline build would be an engagement, containing the issues from all the security checks.

After setting up the application, the first configuration tweak required in DefectDojo was enabling the deduplication of issues. The application's deduplication engine works by matching two findings if they share a URL and have the same CWE or title, flagging the most recent one as a duplicate. We configured the deduplication to work at the Product level (as can be seen in Figure 3.16). This means that issues that haven't been fixed between builds remain a single issue instead of accumulating duplicates. [8]

FIGURE 3.16: Deduplication of issues at the Product Level

As for the integration of this in the security pipeline, it was divided into two workstreams: automating the creation of a new engagement for each new build; and ensuring each security check submitted its results to that newly created engagement.

To accomplish the goal of the first workstream, we created a new template, "vulnerability_tracker.yml" (Listing 3.21), with a job defined as part of the initialize stage. The job itself takes a parameter, the Product ID, whose Product should already have been manually created in the DefectDojo application - this only needs to happen once for each of the software projects in the environment. Then, a POST request is performed to the DefectDojo API endpoint that handles engagements, creating a new one named after the build commit. The server replies with the ID of the created engagement, which our pipeline needs to save, as it's the only identifier we can use later in the pipeline to submit results to the same engagement. To preserve this engagement number across stages, we simply parse the response from the server and save the ID to a file, which is then passed as an artifact for the next stages to read.

LISTING 3.21: vulnerability_tracker.yml

variables:
  DEFECT_DOJO_HOST: 'localhost:8080'
  PRODUCT_ID: '1'
  DEFECT_DOJO_ENABLED: "true"

Defectdojo_engagement:
  stage: initialize
  script:
    # Initializing a new Defect Dojo Engagement for each Pipeline
    # Saving its ID to use it for submitting results in next stages
    - echo $PRODUCT_ID
    - sleep 10
    - DOJO_ENG_OUTPUT=`curl -X POST "http://$DEFECT_DOJO_HOST/api/v2/engagements/" -H "accept: application/json" -H "Content-Type:application/json" -H "Authorization:Token $defectdojo_api_token" -d '{"name":"'"${CI_COMMIT_SHA}"'","target_start":"2020-10-10","target_end":"2030-10-10","product":"'"${PRODUCT_ID}"'"}'`
    - cd $CI_PROJECT_DIR
    - eng_number=`cut -d',' -f1 <<< $DOJO_ENG_OUTPUT | cut -d':' -f2`
    - echo "export eng_number=$eng_number" >> vars
    - cat vars
  artifacts:
    paths:
      - vars
  tags:
    - runner-shell

Then, in each of the security checks' stages, two additional lines were added: one to read the file containing the engagement ID and save it as an environment variable, and another to post the scan's results to the respective engagement - leveraging the import-scan module of the DefectDojo API.

As the goal of the project was to build a pipeline that was as modular as possible, and we wanted even the addition of the vulnerability tracker to be optional, we configured these two additional lines to only execute if a boolean variable "DEFECT_DOJO_ENABLED" is set to true, which can be overridden in the main template file. With this, we ensure that pipelines will still work even if we choose not to use the vulnerability tracker as an aggregator.

We can see an example of the tweaks made to "dependency-check.yml" that now includes the integration with DefectDojo in Listing 3.22.

LISTING 3.22: dependency-check.yml tweaked to submit results to DefectDojo

variables:
  DEPENDENCY_CHECK_PATH: '/home/gitlab-runner/dependency-check/bin/dependency-check.sh'
  DEFECT_DOJO_HOST: 'localhost:8080'

dependency-check:
  stage: static-analysis
  before_script:
    - if [ "$DEFECT_DOJO_ENABLED" == "true" ]; then source vars; fi
  script:
    - $DEPENDENCY_CHECK_PATH --scan ./ -f XML -f JSON -o reports/
    - mv reports/dependency-check-report.json reports/report_dependency.json; mv reports/dependency-check-report.xml reports/report_dependency.xml
    - if [ "$DEFECT_DOJO_ENABLED" == "true" ]; then curl -f -F file=@reports/report_dependency.xml -F engagement=$eng_number -F 'scan_type=Dependency Check Scan' -H "Authorization:Token $defectdojo_api_token" http://$DEFECT_DOJO_HOST/api/v2/import-scan/ ; fi
  artifacts:
    paths:
      - reports/report_dependency.json
      - reports/report_dependency.xml
  tags:
    - runner-shell

In Figures 3.17, 3.18, and 3.19, we can see the results of a new build, which triggered the creation of the respective engagement, containing all the issues from all the checks in the pipeline:

FIGURE 3.17: DefectDojo - Main Product Dashboard

FIGURE 3.18: DefectDojo - Engagement View

FIGURE 3.19: DefectDojo - Issues View

Chapter 4

Results

This final chapter of the dissertation focuses on the results produced by the implementation of the prototype. The plan is to choose some software projects to serve as samples. To better demonstrate the potential, we opted for projects with purposely injected application security vulnerabilities; to also demonstrate the versatility of the implementation, we opted for code-bases in different programming languages.

We then submit these samples to our pipeline with security checks and retrieve the results regarding the vulnerabilities found, their severity, and the type of check that found each issue. In terms of performance, we also take the average execution time of each pipeline over several runs and present the results as totals, as well as divided by stages and jobs.

4.1 Instantiation 1 - Java Vulnerable Lab - Java

For the first instantiation of this prototype we chose an application written in Java, Java Vulnerable Lab, which is a web-application purposely injected with several common web-application vulnerabilities, meant to make developers aware of frequent security issues present in Java projects. It was developed by the Cyber Security and Privacy Foundation [5].

To host this web-application on a QA server for the dynamic analysis, we created and configured an EC2 instance hosting an Apache Tomcat server. This instance was configured to expose the web-application only to the automation server, as it would be careless to host a purposely vulnerable web-application on a publicly exposed server.
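For illustration, such a restriction can be expressed at the security-group level; the following is a hedged AWS CLI sketch in which both group IDs are placeholders, not the real ones used in the prototype:

# Allow inbound traffic on the Tomcat port only from the automation
# server's security group (both sg- values are placeholders)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaaaaaaaaaaaaaaa \
    --protocol tcp --port 8080 \
    --source-group sg-0bbbbbbbbbbbbbbbb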


After configuring, in its main .gitlab-ci.yml file, the baseline steps of the pipeline that are specific to the code-base of the respective project - for example, this project uses the Maven package manager to build the application - we included the templates explained above; the results follow below.

4.1.1 Vulnerabilities

In terms of vulnerabilities, our prototype detected a total of 455 vulnerabilities across all the security checks performed. Table 4.1 displays the vulnerabilities found, pivoted by both their severity and the tool that found them.

TABLE 4.1: Issues Table - By Severity and Security Check

In a corporate environment, it would be up to the Application Security team to review each of these issues, flag them as true or false positives accordingly, and follow up on the mitigation actions with the development teams. As the DefectDojo tool leverages the deduplication engine mentioned earlier, any issues flagged as false positives would not be raised again in subsequent automatic builds.

4.1.2 Performance

As for the performance, we measured the execution time over a total of 5 executions of the pipeline and calculated the averages. For this, we excluded the execution times of stages/jobs that would not exist in a production environment - as is the case of the already mentioned services-boot.yml, which serves the purpose of booting the Clair and DefectDojo services in case they're not up.

Table 4.2 shows the total average execution time, pivoted by job.

TABLE 4.2: Pipeline Performance - Times over 5 Executions

A useful application of these measurements would be to analyse and optimize the jobs that take the longest, or even to consider separating some of the jobs from the main pipeline and moving them downstream - so as to allow faster delivery for the development teams.

4.2 Instantiation 2 - OWASP Juice Shop - JavaScript/NodeJS

For the second instantiation of this prototype we chose an application written in JavaScript on top of NodeJS, called OWASP Juice Shop. This application was developed as part of the OWASP project with the goal of demonstrating security issues in modern web-applications built on the NodeJS framework. [19]

To host this web-application on a QA server for the dynamic analysis, we created and configured, similarly to the above, an EC2 instance hosting an NPM server. This instance was also configured to expose the web-application only to the automation server, for the same reason as before.

As in the previous instantiation, the baseline steps for this project were different - a NodeJS project requires different build and deployment steps - but once these were configured, we just had to import the desired templates and override their variables with the correct values for this project.

4.2.1 Vulnerabilities

In terms of vulnerabilities, our prototype detected a total of 163 vulnerabilities across all the security checks performed. Table 4.3 displays the vulnerabilities found, pivoted by both their severity and the tool that found them.

TABLE 4.3: Issues Table - By Severity and Security Check

4.2.2 Performance

As above, 5 executions were performed and the average time was calculated - again excluding the jobs/stages that would not be necessary in a proper production environment.

Table 4.4 contains the total average execution time, pivoted by job.

TABLE 4.4: Pipeline Performance - Times over 5 Executions

As above, analysing these results to propose job optimizations would be a valuable use of this data.

Chapter 5

Conclusion

5.1 Concluding Remarks

Several conclusions can be drawn from the accomplished work. First, it is possible to integrate a wide range of security checks of different types and with different goals in a single CI/CD pipeline.

It is also possible to aggregate all the results from the different implemented checks in a uniform way, using the same severity scale.

The same pipeline framework is modular and reusable in different projects, as well as compatible with different programming languages - as the leveraged tools support at least the most mainstream ones.

Finally, another conclusion is that DAST security checks are the ones that add the most overhead to the performance of the pipeline. If, in a given context, performance is a deciding factor, this stage should be run in parallel so as not to delay the conclusion of the delivery.

5.2 Future Work

In terms of future work, several additions to our framework could be implemented to further solidify the accomplished work. An obvious additional step would be to configure automatic halting of the delivery pipeline - especially when it comes to live deployment - if a certain threshold of issues is exceeded in any of the security checks: for example, preventing deployment if a new critical issue has been raised. A minimal sketch of such a gate is shown below.
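A minimal sketch of such a gate, here keyed on the OWASP Dependency Check report only (the jq path is an assumption based on the report excerpt in Listing 3.20, not a tested implementation), could be:

#!/bin/bash
# Hypothetical delivery gate: halt the pipeline if the dependency report
# contains any HIGH or CRITICAL finding
criticals=$(jq '[.. | .severity? | select(. == "HIGH" or . == "CRITICAL")] | length' reports/report_dependency.json)
if [ "$criticals" -gt 0 ]; then
    echo "Blocking deployment: $criticals high/critical issues found"
    exit 1
fi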

As for additional checks to be added to the pipeline, the most obvious improvement would be to support more types of DAST - at the moment it is only adapted for live testing of web-applications, and could be extended to support testing of desktop software, or even mobile applications.

In terms of instantiation, it would be valuable to instantiate this pipeline for more projects written in other programming languages and frameworks, which should not be a big workload given the modularity and reusability of the current solution, as long as the leveraged security tools support them.

Bibliography

[1] About Gitlab. https://about.gitlab.com/company/. Accessed on 14-01-2020.

[2] arminc/clair-scanner - Docker containers vulnerability scan. https://github.com/arminc/clair-scanner. Accessed on 14-01-2020.

[3] Clair scanner. https://github.com/arminc/clair-scanner. Accessed on 13-09-2020.

[4] Container Image Security: Beyond Vulnerability Scanning. https://www.stackrox.com/post/2020/04/container-image-security-beyond-vulnerability-scanning/. Accessed on 14-01-2020.

[5] CSPF-Founder / JavaVulnerableLab. https://github.com/CSPF-Founder/JavaVulnerableLab. Accessed on 20-08-2020.

[6] Customers | Docker. https://www.docker.com/customers. Accessed on 13-09-2020.

[7] DefectDojo Documentation - Models. https://defectdojo.readthedocs.io/en/latest/models.html. Accessed on 13-09-2020.

[8] DefectDojo Features - Deduplication. https://defectdojo.readthedocs.io/en/latest/features.html#deduplication. Accessed on 20-08-2020.

[9] dockerd | Docker Documentation. https://docs.docker.com/engine/reference/commandline/dockerd/. Accessed on 14-01-2020.

[10] Git - fast, scalable, distributed revision control system. https://github.com/git/git. Accessed on 14-01-2020.

[11] Github - DefectDojo. https://github.com/DefectDojo/django-DefectDojo. Accessed on 20-08-2020.


[12] GitHub Marketplace - Continuous integration Tools. https://github.com/marketplace/category/continuous-integration. Accessed on 13-09-2020.

[13] GitLab CI/CD | SonarQube Docs. https://docs.sonarqube.org/latest/analysis/gitlab-cicd/. Accessed on 13-09-2020.

[14] Gitlab Examples - NodeJS. https://gitlab.com/gitlab-examples/nodejs/. Accessed on 13-09-2020.

[15] Gitleaks. https://github.com/zricethezav/gitleaks. Accessed on 14-01-2020.

[16] Gitleaks - Configuration. https://github.com/zricethezav/gitleaks/wiki/Configuration. Accessed on 13-09-2020.

[17] How to deploy Maven projects to Artifactory with GitLab CI/CD. https://docs.gitlab.com/ee/ci/examples/artifactory_and_gitlab/. Accessed on 13-09-2020.

[18] Install GitLab Runner manually on GNU/Linux. https://docs.gitlab.com/runner/install/linux-manually.html. Accessed on 14-01-2020.

[19] Juice Shop - Insecure Web Application for Training | OWASP. https://owasp.org/www-project-juice-shop/. Accessed on 20-08-2020.

[20] MITRE | CVE - Common Vulnerabilities and Exposures.

[21] OWASP Dependency-Check.

[22] OWASP Dependency Check - Usage. https://jeremylong.github.io/DependencyCheck/dependency-check-cli/. Accessed on 13-09-2020.

[23] OWASP ZAP. https://owasp.org/www-project-zap/. Accessed on 14-01-2020.

[24] quay/clair - Vulnerability Static Analysis for Containers. https://github.com/quay/clair. Accessed on 14-01-2020.

[25] SonarQube Documentation. https://docs.sonarqube.org/latest/. Accessed on 13-09-2020.

[26] What is DAST? https://www.sqreen.com/web-application-security/what-is-dast. Accessed on 14-01-2020.

[27] What is DevOps? https://aws.amazon.com/devops/what-is-devops. Accessed on 14-01-2020.

[28] Who is the OWASP Foundation. https://owasp.org/. Accessed on 14-01-2020.

[29] Zed Attack Proxy in a CI Pipeline? https://www.nearform.com/blog/zed-attack-proxy-in-a-ci-pipeline/. Accessed on 14-01-2020.

[30] Zed Attack Proxy in a CI Pipeline? https://www.nearform.com/blog/zed-attack-proxy-in-a-ci-pipeline/. Accessed on 13-09-2020.

[31] L. Bell, M. Brunton-Spall, R. Smith, and J. Bird. Agile Application Security: Enabling Security in a Continuous Delivery Pipeline. O’Reilly Media, 2017.

[32] I. Gomes, P. Morgado, T. Gomes, and R. Moreira. An overview on the Static Code Analysis approach in Software Development. Technical report, FEUP, 2009.

[33] J. Humble and D. Farley. Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. Addison-Wesley Signature Series (Fowler). Pearson Education, 2010.

[34] S. Kumar, R. Mahajan, N. Kumar, and S. K. Khatri. A study on web application security and detecting security vulnerabilities. In 2017 6th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), pages 451–455, 2017.

[35] B. Livshits. Improving Software Security with Precise Static and Runtime Analysis. PhD thesis, Stanford, CA, USA, 2006. AAI3242585.

[36] M. Paulo. Security Testing in Continuous Integration Systems. 2016.

[37] H. Plate, S. E. Ponta, and A. Sabetta. Impact assessment for vulnerabilities in open-source software libraries. 2015 IEEE International Conference on Software Maintenance and Evolution (ICSME), pages 411–420, 2015.

[38] A. Shajadi. Automating Security Tests for Web Applications in Continuous Integration and Deployment Environment. 2018.

[39] E. Viitasuo. Adding security testing in DevOps software development with continuous integration and continuous delivery practices. 2020.