
Creation of a Kubernetes Infrastructure

Degree Thesis submitted to the Faculty of the Escola Tècnica d’Enginyeria de Telecomunicació de Barcelona Universitat Politècnica de Catalunya by Lluís Baró Cayetano

In partial fulfillment of the requirements for the degree in TELEMATICS ENGINEERING

Advisor: Jose Luis Muñoz Tapia

Barcelona, January 2021

Abstract

Nowadays the way of deploying applications is evolving, so the goal of this project is the creation of a Kubernetes cluster, that is, a set of nodes that run containerized applications. We are going to develop three different cluster scenarios. First, we are going to use Docker Compose, a tool for defining and running multi-container Docker applications. Secondly, we are going to use Microk8s, a lightweight, production-grade upstream Kubernetes distribution. Finally, the Kubernetes cluster is going to be nested in LXC containers; LXC is an operating-system-level virtualization method for running multiple isolated Linux systems on a control host using a single Linux kernel. So what is this project offering? We offer a simple, powerful and quick way to deploy applications on a server, in an environment that is secure, easy to manage, easy to set up, scalable and cheap, and that follows an open-source methodology.

Resum

Avui en dia la forma de desplegar aplicacions està evolucionant, de manera que l'objectiu d'aquest projecte és la creació d'un clúster Kubernetes, que és un conjunt de nodes que executen aplicacions en contenidors. Per tant, les aplicacions es desenvoluparan en tres escenaris: En primer lloc, utilitzarem Docker Compose, que és una eina per definir i executar aplicacions Docker multi-contenidor. En segon lloc, utilitzarem Microk8s, una eina per desenvolupar a nivell de producció per al desplegament de Kubernetes. Finalment, el clúster de Kubernetes s'integrarà en contenidors LXC, que és una virtualització a nivell de sistema operatiu per executar diversos sistemes Linux aïllats en un amfitrió de control mitjançant un sol nucli Linux. Per tant, què ofereix aquest projecte? Oferim una manera senzilla, potent i ràpida de desplegar aplicacions a un servidor, les quals es troben en un entorn segur, fàcil de gestionar, fàcil de configurar, escalable, barat i mitjançant una metodologia de codi obert.

Resumen

Hoy en día la forma de implementar aplicaciones está evolucionando, por lo que el objetivo de este proyecto es la creación de un clúster de Kubernetes, que es un conjunto de nodos que ejecutan aplicaciones en contenedores. Las aplicaciones se van a desarrollar en tres escenarios: En primer lugar, vamos a utilizar Docker Compose, que es una herramienta para definir y ejecutar aplicaciones Docker en varios contenedores. En segundo lugar, vamos a utilizar Microk8s, una herramienta para desarrollar a nivel de producción la implementación de Kubernetes. Finalmente, el clúster de Kubernetes se anidará en contenedores LXC, que es un método de virtualización a nivel de sistema operativo para ejecutar múltiples sistemas Linux aislados en un host de control utilizando un solo kernel de Linux. Entonces, ¿qué ofrece este proyecto? Ofrecemos una forma simple, potente y rápida de implementar aplicaciones en un servidor, que se encuentran en un entorno seguro, fácil de administrar, fácil de configurar, escalable, económico y con una metodología de código abierto.

Acknowledgements

This TFG project contains deep and detailed information about Kubernetes, which has been possible because different people have worked on it. Jose Luis Muñoz, a teacher from the ETSETB doctorate programme in Networks, has, as the supervisor, given advice throughout the development. At the same time, he was one of the beta testers and did a functional review of all the documentation. Jesús López, a student at ETSETB UPC, contributed to the work as a co-worker. Finally, Rafa Genés, a PhD student at the ISG, was another beta tester of my code. In addition, he did a functional review of the thesis together with my supervisor.

Revision history and approval record

Revision  Date        Purpose
0         07/12/2020  Document creation
1         03/01/2021  Document language revision
2         04/01/2021  Document content revision
3         20/01/2021  Delivery document

DOCUMENT DISTRIBUTION LIST

Name                   e-mail
Lluís Baró Cayetano    [email protected]
Jose Luis Muñoz Tapia  [email protected]
Rafa Genés Durán       [email protected]

Written by:                    Reviewed and approved by:
Date: 07/12/2020               Date: 20/01/2021
Name: Lluís Baró Cayetano      Name: Jose Luis Muñoz Tapia
Position: Project Author       Position: Project Supervisor

Contents

List of Figures 9

List of Tables 10

1 Introduction 12
   1.1 Statement of purpose ...... 12
   1.2 Requirements and specifications ...... 12
   1.3 Methods and procedures ...... 13
      1.3.1 Software ...... 13
      1.3.2 Documentation ...... 14
      1.3.3 Communication ...... 14
   1.4 Work plan with tasks, milestones and a Gantt diagram ...... 14
      1.4.1 Work Packages ...... 15
      1.4.2 Milestones ...... 16
      1.4.3 Gantt Diagram ...... 17
   1.5 Deviations from the initial plan and incidences ...... 17
      1.5.1 Plan changes ...... 17
      1.5.2 Incidences ...... 17

2 State of the art of the technology used or applied in this thesis 19
   2.1 Container Deployment ...... 19
   2.2 Docker ...... 20
      2.2.1 Docker Compose ...... 20
   2.3 Kubernetes ...... 21
      2.3.1 Kubectl ...... 21
      2.3.2 Helm ...... 21
      2.3.3 Microk8s ...... 22
   2.4 LXD Containers ...... 23

3 Methodology 24
   3.1 Communication ...... 24
   3.2 Software ...... 24
      3.2.1 Git ...... 24
      3.2.2 Docker Compose ...... 25
      3.2.3 Microk8s ...... 25
      3.2.4 LXD ...... 25
   3.3 Documentation ...... 25

4 Project Development 26
   4.1 Selected Applications ...... 26
   4.2 Desired Cluster ...... 27
   4.3 Docker Compose Cluster ...... 29
      4.3.1 Creating an Application using Docker Compose: XWiki ...... 29
      4.3.2 Integration of the Services ...... 31

   4.4 Kubernetes Cluster ...... 32
      4.4.1 Creating an Application using Microk8s ...... 32
      4.4.2 Integration of the Services ...... 34
   4.5 Nested LXC Container Kubernetes Cluster ...... 34
      4.5.1 Creating LXC Containers ...... 36

5 Results 37
   5.1 Docker Compose Cluster ...... 37
   5.2 Kubernetes Cluster ...... 38
   5.3 Nested LXC Container Kubernetes Cluster ...... 40

6 Budget 43

7 Costs 43

8 Environmental Impact 46

9 Conclusions 47

10 Future Work 47

References 48

Listings

List of Figures

1 Project's Gantt diagram ...... 17
2 Container Evolution ...... 19
3 Useful Docker Compose Commands ...... 20
4 Useful Kubectl Commands ...... 21
5 Useful Helm Commands ...... 22
6 Useful Microk8s Commands ...... 22
7 LXC Commands: Useful Commands ...... 23
8 Desired Cluster ...... 28
9 Config of the Desired Cluster ...... 29
10 Docker Compose Example: XWiki YAML file ...... 30
11 Docker Compose Example: XWiki Environmental Variables ...... 30
12 Docker Compose Command: Start Application ...... 31
13 Docker Compose Command: Status of XWiki Application ...... 31
14 Docker Compose Example: XWiki Home Page ...... 31
15 Docker Compose Command: Create a Network ...... 32
16 Docker Compose Networks Example: Updated YAML file ...... 32
17 Microk8s Command: Enable Addons ...... 32
18 Helm Command: Add Repo ...... 33
19 Helm Command: Install XWiki ...... 33
20 Kubectl Command: Cluster Status XWiki ...... 33
21 Kubernetes Cluster: XWiki Home Page ...... 34
22 Nested LXC Cluster: Structure ...... 35
23 Nested LXC Cluster: Structure Example ...... 35
24 LXC Command: Creating a Container ...... 36
25 LXC Command: Creating Microk8s profile ...... 36
26 Microk8s Command: Adding a node to a Cluster ...... 36
27 Docker Compose: Cluster Status ...... 37
28 Docker Compose Cluster: Proxy Hosts ...... 38
29 Kubernetes Cluster: Deployments ...... 39
30 Kubernetes Cluster: Pods ...... 39
31 Kubernetes Cluster: Services ...... 40
32 Kubernetes Cluster: Ingresses ...... 40
33 Creation of a Zpool ...... 41
34 Nested LXC Kubernetes Cluster: Nodes ...... 41
35 Nested LXC Kubernetes Cluster: Ingresses ...... 41
36 Nested LXC Kubernetes Cluster: Hosts of Devops ...... 42
37 Nested LXC Kubernetes Cluster: Mattermost Application ...... 42
38 State of devops Zpool ...... 42

List of Tables

1 Work Package 1 ...... 15
2 Work Package 2 ...... 15
3 Work Package 3 ...... 15
4 Work Package 4 ...... 15
5 Work Package 5 ...... 15
6 Work Package 6 ...... 16
7 Milestones ...... 16
8 Budget for the Project ...... 43
9 Cost of the Project: Members ...... 44
10 Cost of the Project: Material ...... 44
11 Cost of the Project: Amortization ...... 44
12 Cost of the Project: Utilities ...... 45
13 Cost of the Project: Total Cost ...... 45

Abbreviations

AppArmor  Application Armor
APT       Advanced Package Tool
CRI       Container Runtime Interface
DevOps    Set of practices that combines software development (Dev) and IT operations (Ops)
DNS       Domain Name System
ETSETB    Escola Tècnica Superior d'Enginyeria de Telecomunicació de Barcelona
IDE       Integrated Development Environment
IP        Internet Protocol
ISG       Information Security Group
IT        Information Technology
K8s       Kubernetes
LDAP      Lightweight Directory Access Protocol
LTS       Long Term Support
LXC       LinuX Containers
LXD       LinuX Container Daemon
NPM       Node Package Manager
PoC       Proof of Concept
RAM       Random Access Memory
RBAC      Role-Based Access Control
SSH       Secure Shell
SSL       Secure Sockets Layer
TFG       Treball Final de Grau
UPC       Universitat Politècnica de Catalunya
VNC       Virtual Network Computing
VPN       Virtual Private Network
WP        Work Package
YAML      YAML Ain't Markup Language, a human-friendly data serialization language
ZFS       Z File System
Zpool     Virtual storage pool on top of which ZFS filesystems are built

1 Introduction

Nowadays companies are becoming more digital and have to adapt to employees working from home. They need to cover different needs, like having a chat in order to communicate, holding conferences, or having a tool to host repositories of data, knowledge or information and to access them in a simple way. And, of course, all these services must be secure. This thesis is therefore based on the creation of a Kubernetes cluster. Kubernetes, also known as K8s, is an open-source orchestration system for containers that allows us to manage containers on a cluster of machines for high availability. In other words, Kubernetes is a system for running many different containers over multiple different machines, and containers are the current way to deploy applications. First, we are going to deploy different applications using Docker Compose, a tool for defining and running multi-container Docker applications. Secondly, we are going to use Microk8s, a lightweight, production-grade upstream Kubernetes distribution, to deploy the same applications. And finally, the Kubernetes cluster is going to be nested in LXC containers; LXC is an operating-system-level virtualization method for running multiple isolated Linux systems on a control host using a single Linux kernel. All the software used in the project is open-source based, and we are going to deploy different applications to achieve this.

1.1 Statement of purpose
The purpose of this project, personally, is to learn about DevOps technologies and to understand their importance, and also to develop the project using an open-source ideology, which means that all the optimized solutions are based on community knowledge sharing. The main goal of this project is the creation of a Kubernetes cluster that covers the basic needs of a company. We also found the need to research Docker in order to understand the philosophy of containerized applications. It is also necessary to implement these applications in a dockerized environment in order to be able to finally create a Kubernetes cluster. Finally, another purpose of the project is to create the cluster in nested LXC containers.

1.2 Requirements and specifications. This project has been designed to be compatible with the majority of equipment so the system requirements are minimal.

The first requirement is that the user needs a Debian-type distribution in order to install the facilitated package, for example Ubuntu 20.04 LTS. The requirements for each piece of software used are very similar to the general requirements:

Docker Compose prerequisites [1]:
• An Ubuntu 20.04 LTS, 18.04 LTS or 16.04 LTS environment to run the commands.
• Docker Engine is supported on x86_64 (or amd64), armhf, and arm64 architectures.

Microk8s requirements [2]:
• An Ubuntu 20.04 LTS, 18.04 LTS or 16.04 LTS environment to run the commands (or another operating system which supports snapd).
• At least 20G of disk space and 4G of RAM are recommended.
• An internet connection.

LXC requirements:
• A server with a Debian-type distribution.
• At least 250G of disk space and 20G of RAM are recommended.

1.3 Methods and procedures.
This project is based on an Open Source Ideology, with optimized solutions based on community knowledge sharing. It also uses a DevOps methodology for creating software, so it is based on the integration between software developers and system administrators. With this methodology we are able to develop software faster, with higher quality, lower cost and a very high frequency of releases. This project uses different applications and software developed by other authors, like Docker and Microk8s, as well as the different applications that take part in the cluster.

1.3.1 Software
Microk8s is the main software used to develop this project. Microk8s is a simple, lightweight, production-grade upstream Kubernetes distribution that runs entirely on your workstation or edge device. It can be installed with a single command on Linux, Windows or macOS, and it is especially made for DevOps. On the other hand, Docker Compose was the other software used. Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.

Another piece of software used is Git, a distributed version-control system for tracking changes in any set of files, used to share documentation between co-workers and the project supervisor.

1.3.2 Documentation
In terms of documentation, a clear procedure has been followed: the Moodle of the university has been used to download the templates of the documents. All the documentation has been edited with LaTeX, and once delivered and reviewed by the tutor, a PDF version has been created to be uploaded. In order to share all the documentation with the supervisor, a Git repository was created, where all the versions were uploaded for revision, as well as the ".pdf" files, which were the ones officially submitted for the evaluation of the work in the Moodle. Finally, some small presentations or ".txt" documents have been created to document pending tasks or doubts addressed to the supervisor or to some collaborator.

1.3.3 Communication
In matters of communication, remote communication has been the main way of communicating during the development of the project. These communications have focused on emails for solving small questions and arranging meetings. For the meetings we used Jitsi Meet, a fully encrypted, open-source video conferencing solution, because it includes the possibility of talking, chatting and sharing desktops in a very efficient way. Later on we used the adapted Jitsi Meet version deployed, with Docker Compose, for the project. Finally, we used a remote desktop, VNC, so that the co-workers and the supervisor could view and edit in the same terminal.

1.4 Work plan with tasks, milestones and a Gantt diagram.
The project has been divided into six work packages. The first two are about acquiring knowledge about Docker and Kubernetes; the following ones are about software development: Dockerizing applications, creating the Kubernetes cluster and creating a nested LXC container. Finally, the last work package includes all the documentation made during the project.

1.4.1 Work Packages

Project: Learning Docker
WP ref: 1
Major Constituent: Learning
Short description: Be able to acquire knowledge about Docker through Udemy courses and YouTube channels like TechWorld with Nana.
Start date: 17/07/2020
End date: 25/08/2020

Table 1: WP 1

Project: Learning Kubernetes
WP ref: 2
Major Constituent: Learning
Short description: Be able to acquire knowledge about Kubernetes through Udemy courses and YouTube channels like Just Me and Opensource.
Start date: 02/08/2020
End date: 30/09/2020

Table 2: WP 2

Project: Dockerize Applications
WP ref: 3
Major Constituent: Software Development
Short description: Dockerize different applications such as Adminer, GitLab, Jitsi, Mattermost, Nginx Proxy Manager, Nextcloud, OpenLDAP, OpenVPN, and XWiki.
Start date: 05/08/2020
End date: 25/10/2020

Table 3: WP 3

Project: Creation of a Kubernetes Cluster
WP ref: 4
Major Constituent: Software Development
Short description: Create a Kubernetes cluster with Microk8s with these applications: Adminer, GitLab, Jitsi, Mattermost, Nextcloud, OpenLDAP, Redmine and XWiki.
Start date: 01/10/2020
End date: 12/12/2020

Table 4: WP 4

Project: Creation of a nested LXC container
WP ref: 5
Major Constituent: Software Development
Short description: Create an LXC container inside another LXC container in order to be able to create the Kubernetes cluster.
Start date: 01/12/2020
End date: 25/12/2020

Table 5: WP 5

Project: Documentation
WP ref: 6
Major Constituent: Documentation
Short description: Write documentation files such as the Project Proposal, the Critical Review, the Thesis, the Presentation and slides about Kubernetes for an academic purpose.
Start date: 20/07/2020
End date: 15/01/2021

Table 6: WP 6

1.4.2 Milestones

WP  Short title          Milestone / deliverable                 Date (week)
1   Docker slides        Initial Docker slides                   2
1   Docker slides        Update Docker slides                    4
1   Docker slides        Final Docker slides                     5
2   Kubernetes slides    Initial Kubernetes slides               3
2   Kubernetes slides    Update Kubernetes slides                6
2   Kubernetes slides    Final Kubernetes slides                 10
3   Docker Applications  Adminer                                 2
3   Docker Applications  Nginx Proxy Manager                     3
3   Docker Applications  Redmine                                 4
3   Docker Applications  OpenLDAP                                4
3   Docker Applications  XWiki                                   5
3   Docker Applications  Jitsi                                   6
3   Docker Applications  Nextcloud                               7
3   Docker Applications  Mattermost                              8
3   Docker Applications  GitLab                                  9
3   Docker Applications  Readme with the app config              10
4   Kubernetes Cluster   Adminer                                 9
4   Kubernetes Cluster   Redmine                                 10
4   Kubernetes Cluster   OpenLDAP                                10
4   Kubernetes Cluster   XWiki                                   11
4   Kubernetes Cluster   Jitsi                                   12
4   Kubernetes Cluster   Nextcloud                               12
4   Kubernetes Cluster   Mattermost                              13
4   Kubernetes Cluster   GitLab                                  14
4   Kubernetes Cluster   Readme with the app config              16
5   LXC Container        K8s cluster in a nested LXC container   18
6   Documentation        Project Proposal                        3
6   Documentation        Critical Review                         6
6   Documentation        Thesis                                  24
6   Documentation        Presentation                            26

Table 7: Milestones

1.4.3 Gantt Diagram

[Gantt chart: the project runs from July 2020 to January 2021, divided into four phases: Planning (defining objectives, setting up the workspace), Research (Docker, Kubernetes), Development (Docker apps, Kubernetes, LXC containers) and Documentation (project proposal, critical review, degree thesis, annexes, presentation).]

Figure 1: Gantt diagram of the project

1.5 Deviations from the initial plan and incidences.
1.5.1 Plan changes
At first, the plan included deploying the OpenVPN application in order to implement an extra layer of security, so that applications like Adminer, Nginx Proxy Manager and OpenLDAP could only be configured from the inside. But, in the middle of the project, we realised that this feature was very ambitious and not really necessary for the development of the other applications. So in the course of this project the applications have been deployed, and the extra security layer will be incorporated in the short term.

1.5.2 Incidences
The main incidence of the project was having to absorb a vast amount of information that is evolving all the time. Kubernetes was first released six years ago, in June 2014, and it has been in constant development since then, the latest release being in December 2020. It is evolving every day with new features and improvements, and this forces us to stay alert and adapt to them.

Another problem that occurred during the project was the failure of the nested LXC containers: we had to resolve several issues in order to create the cluster. First of all, we could not install anything inside the containers because we got an error of insufficient permissions to use AppArmor, a Linux kernel security module that allows the system administrator to restrict programs' capabilities with per-program profiles. The second problem was the creation of zpools using ZFS, which combines the features of a file system and a volume manager: we could not create the pools inside the containers due to space problems, as the default zpool takes 20% of the size of the partition behind /var/lib/lxd, with a minimum size of 15GB and a maximum size of 100GB, so the space left was filling up and there was not enough remaining. And the third and final problem was, once again, the AppArmor profiles for Microk8s: without them we could not install Microk8s on the desired container.

2 State of the art of the technology used or applied in this thesis:

The traditional way of deploying an application on a server is becoming outdated. To deploy an application, first of all you need the server to be properly configured, which means that the application is probably hard-wired to the environment. Although completely validating the configuration is nearly impossible, because developers do not even have control over what the live environment looks like, we can still access it via SSH and configure some parts of it. This means that the majority of deployments need to be performed in exactly the right way to succeed. That is why we need a different way of deploying. Nowadays the way of deploying applications is evolving, and it is no longer required to spend a lot of time and effort doing it. So here containerized deployment comes in, with which deploying an application becomes simple and brief. In the next sections the technology used in the project is briefly explained; an annex with the complete information is attached.

2.1 Container Deployment
Containers are a form of operating-system-level virtualization: an abstraction at the application layer which packages code and dependencies together; however, they do not contain operating system images. A single container might be used to run anything from a small service or software process to a larger application. The use of containers also allows applications to be deployed regardless of where they are going to run: in a personal environment, a public cloud or a private data center. The next figure represents the evolution of the different ways of deploying an application, Container Deployment being the one on the rise.

Figure 2: Container Evolution

The deployment of containers uses management software that simplifies the launch and update of applications. Container deployment provides fast access to environments and speeds up development, because secure containers can be quickly downloaded and put to use. Container deployment also minimizes errors because it reduces the number of moving parts in development. In larger application deployments, multiple containers may be deployed as one or more container clusters. Such clusters might be managed by a container orchestrator such as Kubernetes or Docker Swarm.

2.2 Docker
Docker is defined by its creators as "an open source project to pack, ship and run any application as a lightweight container". The Docker goal is to provide a comprehensive abstraction layer that allows developers to containerize or package any application and have it run on any infrastructure. To do so, Docker uses container technology, which provides a "light" form of virtualization. So with Docker you can deploy multiple containers that run on the same operating system. This provides a benefit not offered by virtual machines, considering that using a virtual machine requires running an entire guest operating system to deploy a single application.

2.2.1 Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services; Compose also creates a default network through which the containers communicate. In the YAML file we can define multiple objects, like services, networks and volumes, so a multi-container application can be defined in a single file. Using Compose is basically a three-step process:
1. Define your app's environment with a Dockerfile so it can be reproduced anywhere.
2. Define the services (containers) that make up your app in docker-compose.yaml so they can be run together in an isolated environment.
3. Run docker-compose up, and Compose starts and runs your entire app.
The most useful commands are shown below.

$ docker-compose up -d                  # Start the containers in the background
$ docker ps -a                          # Show all containers
$ docker-compose down                   # Stop and remove the containers of the project
$ docker exec -it <container_id> bash   # Open a terminal inside a container
$ docker-compose logs                   # Show the logs of the containers of the project

Figure 3: Useful Commands of Docker Compose
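As an illustration of step 2 above, a minimal docker-compose.yaml might look as follows. The service, volume and network names are hypothetical examples, not taken from the project's configuration:

```yaml
version: "3"

services:
  web:                        # hypothetical web front end
    image: nginx:latest
    ports:
      - "8080:80"             # host port 8080 -> container port 80
    volumes:
      - web-data:/usr/share/nginx/html
    networks:
      - app-net

  db:                         # hypothetical database behind the web service
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
    networks:
      - app-net

volumes:
  web-data:                   # named volume, persists between restarts

networks:
  app-net:                    # both services share this network
```

Running docker-compose up -d in the directory containing this file starts both services on the shared network, where they can reach each other by service name (web, db).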

2.3 Kubernetes
Kubernetes is an open source orchestration system for containers that allows us to manage containers on a cluster of machines for high availability. A Kubernetes cluster is therefore a set of nodes that run containerized applications; in this way, Kubernetes clusters allow applications to be more easily developed, moved and managed. The main features of Kubernetes are:
1. It provides high availability for containers: it checks their health and restarts failed containers.
2. K8s manages how to distribute the load among nodes.
3. It is autoscalable: a cluster can start with one node and expand to thousands.
4. You can run Kubernetes anywhere: on-premise (own datacenters), public cloud (Google Cloud, AWS, ...) or hybrid public-private.

2.3.1 Kubectl
Kubectl is the Kubernetes command-line tool used to talk to clusters. The configuration of kubectl lives in ~/.kube/config, and we can point to other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag. These are the most used Kubectl commands:

$ kubectl config view                       # View information about the cluster
$ kubectl create -f file.yaml               # Create a k8s object from a file
$ kubectl get {k8s object}                  # Display one or many k8s objects
$ kubectl expose {k8s object}               # Expose a k8s object as a new service
$ kubectl delete {k8s object}               # Delete a k8s object
$ kubectl apply -f file                     # Apply a configuration to a k8s object by file
$ kubectl scale --replicas=# {k8s object}   # Set a new size for a k8s object
$ kubectl describe {k8s object} name        # Get information about a k8s object
$ kubectl exec -it myPod -- /bin/bash       # Open a terminal inside the Pod

Figure 4: Useful Commands of Kubectl
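As an example of the kind of file that kubectl create -f consumes, here is a minimal Deployment manifest. The name, labels and image are hypothetical, not taken from the project:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment        # hypothetical deployment name
  labels:
    app: web
spec:
  replicas: 2                 # two Pods, for availability
  selector:
    matchLabels:
      app: web                # manage the Pods carrying this label
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:latest
          ports:
            - containerPort: 80
```

After kubectl create -f web-deployment.yaml, the commands above apply directly: kubectl get deployments shows it, kubectl scale --replicas=5 deployment/web-deployment resizes it, and kubectl expose deployment/web-deployment publishes it as a service.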

2.3.2 Helm
Helm is a package manager for Kubernetes, like apt or npm. In K8s you have to configure a lot of YAML files to create an application, so Helm provides a convenient way of packaging collections of manifests (YAML files) and distributing them in public and private registries. Helm uses Charts: a Chart is a set of files that allows easily creating a bundle of Kubernetes deployment manifests. So Helm Charts help to define, install and upgrade complex Kubernetes applications, and they can be versioned. The most used commands of Helm are:

$ helm search hub/repo            # Search a release in the hub/repo
$ helm repo add myName repoLink   # Add a chart repository
$ helm repo update                # Get the latest list of charts
$ helm repo list                  # List the repositories
$ helm install myName myChart     # Install a release from a chart
$ helm uninstall myName           # Uninstall a release
$ helm list                       # List the installed releases
$ helm upgrade myName myChart     # Upgrade a release

Figure 5: Useful Commands of Helm
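A Chart is mainly a directory with a fixed layout: a Chart.yaml with the metadata, a values.yaml with default configuration, and a templates/ directory holding the manifests. A minimal Chart.yaml, with all values hypothetical, could look like:

```yaml
# Chart.yaml: metadata of a hypothetical Helm 3 chart
apiVersion: v2            # chart API version used by Helm 3
name: my-app
description: A minimal example chart
version: 0.1.0            # version of the chart itself
appVersion: "1.0"         # version of the application being packaged
```

The version field is what allows Charts to be versioned independently of the application they package, which is what makes helm upgrade and rollbacks possible.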

2.3.3 Microk8s
Microk8s is a simple, lightweight, production-grade upstream Kubernetes distribution that runs entirely on your workstation or edge device. It is supported by Canonical, can be installed with a single command and is easy to maintain. It is especially made for DevOps and perfect for IoT, developer machines or a Raspberry Pi. There are also the Microk8s addons, which provide all the tools needed to develop the Kubernetes cluster. For example, we used: rbac, which enables Role-Based Access Control for authorization; storage, which creates a default storage class that allocates storage from a host directory; dashboard, the standard Kubernetes Dashboard; dns, which deploys CoreDNS (a DNS server); ingress, a simple ingress controller for external access; helm3, which installs the Helm 3 package manager; and metallb, which deploys the MetalLB load balancer. Another useful thing about Microk8s is that we can use our host's commands like Kubectl and Helm, because we can export the configuration of Microk8s. We are also able to use Lens, an IDE for Kubernetes, which helps us deal with Kubernetes clusters. These are the most useful Microk8s commands:

$ microk8s start                        # Start the cluster
$ microk8s stop                         # Stop the cluster
$ microk8s status                       # Show the status of the cluster
$ microk8s enable <addon>               # Enable an addon
$ microk8s add-node                     # Add a node to the cluster
$ microk8s join <host>:<port>/<token>   # Join a remote cluster
$ microk8s leave                        # Remove the node from the cluster
$ microk8s kubectl ...                  # Use Kubectl commands
$ microk8s helm3 ...                    # Use Helm commands

Figure 6: Useful Commands of Microk8s
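As a sketch, the addons listed above could be enabled in two commands; the MetalLB address range shown is a hypothetical example, since metallb asks for a range of IPs to hand out to LoadBalancer services:

```shell
# Enable the addons used in the project (addon names as in the text above).
microk8s enable rbac storage dashboard dns ingress helm3

# MetalLB needs an IP range; the range below is only an illustration.
microk8s enable metallb:10.64.140.43-10.64.140.49
```

After enabling them, microk8s status lists each addon as enabled or disabled.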

Finally, we noted that running a VNC is very important because it offers the possibility of creating clusters inside a machine without affecting the machine directly; for this purpose we are going to use LXD containers.

2.4 LXD Containers
LXD is a next generation system container manager. It offers a user experience similar to virtual machines, but using Linux containers instead. With this tool we are able to create LXC containers in order to build a Kubernetes cluster in a nested container. As LXD is built on top of LXC rather than being a rewrite of it, we use the lxc commands to create the instances (containers or virtual machines); the most useful commands are shown in the next figure.

$ lxd init                                            # Initial configuration
$ lxc launch imageserver:imagename myInstance         # Launch an instance
$ lxc list                                            # List all instances
$ lxc start/stop myInstance                           # Start/stop a container
$ lxc exec myInstance bash                            # Open a shell inside the container
$ lxc file pull myInstance/container-path host-path   # Copy from an instance to the host
$ lxc file push host-path myInstance/container-path   # Copy from the host to an instance
$ lxc delete myInstance                               # Remove an instance
$ lxc profile create myProfile                        # Create a profile
$ lxc profile assign myInstance myProfile             # Assign a profile
$ lxc config show myInstance                          # Show the configuration of an instance

Figure 7: Useful Commands of LXC
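The profile mechanism shown above is what makes it possible to run Kubernetes inside an LXC container, and relates to the AppArmor problems described in section 1.5.2. The following is only a sketch of what such a profile might contain, along the lines of the profile suggested in the MicroK8s documentation for running under LXD; the exact keys may vary between LXD versions:

```yaml
# Sketch of an LXD profile for running MicroK8s inside a container.
# All values here are illustrative assumptions, not the project's profile.
config:
  security.nesting: "true"       # allow containers inside the container
  security.privileged: "true"    # relax confinement for the k8s runtime
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
description: "MicroK8s-ready profile (sketch)"
devices: {}
```

Such a profile would be loaded with lxc profile create and lxc profile edit, then applied to an instance with lxc profile assign, using the commands from the figure above.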

So what is this project offering? We are offering a simple, powerful and quick way to deploy applications on a single server, in a secure environment that is easy to manage, easy to set up, scalable and cheap, using an open-source methodology.

3 Methodology:

In this section there is a detailed explanation of the methodology followed when communicating with the tutor and team mates. It also includes a description of the software used to develop the project and how it was shared with the supervisor. At the end, there is the procedure for creating the work documentation. This project is based on an Open Source Ideology, with optimized solutions based on community knowledge sharing, and on a DevOps methodology for creating software, so it relies on the integration between software developers and system administrators.

3.1 Communication
As stated in the introduction, remote communication has been the main way of communicating during the development of the project. These meetings were crucial to discuss the progress and the improvements. At first, it was very comfortable, especially when the supervisor was able to explain all we needed to know about Docker and Kubernetes and provided us with some Udemy courses to learn about the matter. In addition, we had meetings during the development process through remote conversations. On the other hand, thanks to a VNC machine enabled in the server, which will be detailed in the software part, the group could hold audio meetings through Jitsi and share the desktop of the machine, so apart from speaking, this tool let the team work on the same screen.

3.2 Software

This project uses different applications and software developed by other authors, such as Docker, Microk8s and Git, alongside the development of the different applications that take part in the cluster. We have also been using Git in order to share the software developed and the documentation created.

3.2.1 Git

The supervisor decided to use Git as the method for sharing the software and the documentation, so we were provided with a VNC machine on the server of the networking department. On that machine a certificate was generated in order to get access to it and configure a Git repository with the code. This way, I could work locally on the project from my personal computer and upload updates as ”commits”.

Also, the tutor had access to the repository, so he could download the code and check the updates and improvements daily. This development environment let the supervisor add new co-workers easily so they could contribute to the development of the project.

3.2.2 Docker Compose

Docker Compose has been used to create the Docker cluster; we have worked with it by creating different YAML files to store all the configurations of the different applications used. The YAML files were edited directly from the terminal with Nano, a text editor without graphical interface which is included in the official Ubuntu repository. This tool was useful because we only needed to create a single YAML file with concrete specifications for each application.

3.2.3 Microk8s

We used Microk8s to create the Kubernetes cluster, both because it provides a lot of addons and because, for our purposes, it proved the most convenient tool to orchestrate a Kubernetes cluster. The most useful thing about Microk8s is that it integrates Kubectl and Helm, so we only need one tool to manage the cluster.

3.2.4 LXD

LXD is a next generation system container manager. With this tool we are able to create LXC containers in order to build a Kubernetes cluster inside a nested container.

3.3 Documentation

For the documentation a simple approach has been followed, while maintaining the philosophy of free software, so all the documentation has been written with LATEX and LibreOffice. Once all the documentation was done, it was sent to the supervisor for validation. When the documents were approved by the tutor, a PDF version was produced and uploaded to the UPC Moodle service. In order to share all the documentation with the supervisor, he created a Git repository on a server provided by the networking department, so all the files were saved there both in their editable version and in PDF. The next documentation task to be done is the writing of a paper, which will be produced in LATEX and shared in the Git repository. Finally, some small presentations or ”.txt” documents have been created to document pending tasks, installation configurations or questions for the supervisor or for some collaborator.

4 Project Development:

This section describes in depth how the project has been developed and which tools and methodologies have been used for each of the parts that make it up. Our goal is to obtain different applications running on a cluster that is lightweight and easy to manage. These applications are very useful for any company because they cover its basic needs. We want a communication service, because team communication is very important for companies: it allows the staff to chat and to establish meetings, internal or with clients. We also need a tool that gives us file storage on the cloud in order to host file sharing. Considering a more IT-oriented company, we also need a service that hosts all the software developed and allows file sharing, and a service to document all the relevant information a company would like to save, for example the configurations needed to install an application. Another interesting service we are going to deploy is an application that allows us to manage databases. Regarding security, we need to establish a layer of protection for these applications, so we want to deploy a VPN to be able to access the crucial services that need to be configured, and we also need an LDAP server to grant access to the applications. Another service we want to implement is a proxy server that forwards requests to the websites where the services will be hosted, including free SSL, like Let's Encrypt. Last but not least, we have to remember that we are following an Open Source philosophy and all the applications deployed must comply with that premise.

4.1 Selected Applications

The chosen applications to be deployed are Adminer, GitLab, Jitsi, Mattermost, Nginx Proxy Manager, Nextcloud, OpenLDAP, OpenVPN, Redmine and XWiki, which are detailed in the following sections.

• Adminer: formerly phpMinAdmin, is a full-featured database management tool written in PHP.

• GitLab: is a web-based DevOps lifecycle tool that provides a Git repository manager with wiki, issue-tracking and continuous integration and deployment pipeline features.

• Jitsi: is a set of open-source projects that allows you to easily build and deploy secure videoconferencing solutions. At the heart of Jitsi are Jitsi Videobridge and Jitsi Meet, which let you hold conferences on the internet, while other projects in the community enable features such as audio, dial-in, recording, and simulcasting.

• Mattermost: is an open-source, self-hostable online chat service with file sharing, search, and integrations. It is designed as an internal chat for organisations and companies, and mostly markets itself as an open-source alternative to Slack and Microsoft Teams.

• Nginx Proxy Manager: enables you to easily forward to your websites running at home or elsewhere, including free SSL, without having to know too much about Nginx or Let's Encrypt.

• Nextcloud: is a suite of client-server software for creating and using file hosting services. With the integrated OnlyOffice, Nextcloud is functionally similar to Dropbox or Office 365, but can be used on home-local computers or for off-premises file storage hosting.

• OpenLDAP: is a free, open-source implementation of LDAP developed by the OpenLDAP Project. The suite includes: slapd, a stand-alone LDAP daemon (server); libraries implementing the LDAP protocol; and utilities, tools, and sample clients.

• OpenVPN: is a VPN system that implements techniques to create secure point-to-point or site-to-site connections in routed or bridged configurations and remote access facilities. It implements both client and server applications. It also allows peers to authenticate each other using pre-shared secret keys, certificates or username/password. When used in a multiclient-server configuration, it allows the server to release an authentication certificate for every client, using signatures and a certificate authority.

• Redmine: is a flexible project management web application and issue tracking tool. Written using the Ruby on Rails framework, it is cross-platform and cross-database. It allows users to manage multiple projects and associated subprojects. It features per-project wikis and forums, time tracking, and flexible, role-based access control. It includes a calendar and Gantt charts to aid visual representation of projects and their deadlines. Redmine integrates with various version control systems and includes a repository browser and diff viewer.

• XWiki: is a light and powerful development platform that allows you to customize a wiki to your specific needs. It is a wiki featuring collaborative editing, document versioning and user access rights management. However, XWiki is more than the usual wiki: it is an open-source second generation wiki. What does this mean? In short, XWiki provides you with the ability to create applications directly in your wiki.

4.2 Desired Cluster

As we want to create a cluster, all the applications need to be deployed on the same server. Taking this into account, we can use the same IP to host the websites of the applications. The benefit of using a cluster is that it contains an internal network, and this way the applications are able to see each other. This feature is very important because it allows us not to expose unnecessary host ports.

So the desired scenario is a cluster with the desired apps, which are going to authenticate via LDAP; the crucial ones in terms of security, like OpenLDAP and Nginx Proxy Manager, are going to be accessible through a VPN server. To illustrate this, the figure below shows how we want the cluster to be and which applications should interact.



Figure 8: Desired Cluster

We also want to keep the configurations and instructions to start each application in its own directory, together with a general directory where all the configuration files are saved. In terms of deployment, the scenario should look like this:

Figure 9: Configuration files of the Desired Cluster
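As an illustration, a directory layout like the following would satisfy this scheme (the tree is a sketch; only `docker-volumes` and the per-application directories are taken from the configuration files shown in this document, the rest of the names are hypothetical):

```
cluster/
├── docker-volumes/          # general directory with the persistent data and
│   └── xwiki/               # configuration of every application
├── xwiki/
│   ├── docker-compose.yml   # deployment instructions for the application
│   └── .env                 # its configuration parameters
└── nextcloud/
    ├── docker-compose.yml
    └── .env
```

This matches the `XWIKI_CONFIG=../docker-volumes/xwiki` variable used in the XWiki environment file: each application directory points to its own subdirectory of the shared volume directory.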

4.3 Docker Compose Cluster

To fulfil the purpose of this project, first of all we need to deploy the desired apps in a Docker environment. To do so we are going to use Docker Compose. We follow this intermediate step because, to be able to deploy these apps in K8s, we first need to know how to work with them using Docker Compose and deploying them on a Docker host.

4.3.1 Creating an Application using Docker Compose: XWiki

The first application used to start deploying different services was XWiki. This way we had our first contact with Docker Compose, which enabled us to deploy more applications afterwards. In this example we are going to deploy the XWiki application running alongside a PostgreSQL database. To do so we wrote a YAML file, shown below, that contains the specifications for the deployment.

version: '3'
services:
  web:
    image: "xwiki:lts-postgres-tomcat"
    restart: ${RESTART_POLICY}
    depends_on:
      - db
    ports:
      - "${XWIKI_PORT}:8080"
    environment:
      - DB_USER=${POSTGRES_USER}
      - DB_PASSWORD=${POSTGRES_PASS}
      - DB_DATABASE=${POSTGRES_DB}
      - DB_HOST=${XWIKI_POSTGRES_CONTAINER_NAME} # Name of the host (or docker container) containing the database.
    volumes:
      - ${XWIKI_CONFIG}/data:/usr/local/xwiki
    networks:
      - default

  # The container that runs PostgreSQL
  db:
    image: "postgres:12-alpine"
    container_name: ${XWIKI_POSTGRES_CONTAINER_NAME}
    restart: ${RESTART_POLICY}
    volumes:
      - ${XWIKI_CONFIG}/postgres:/var/lib/postgresql/data
    environment:
      - POSTGRES_ROOT_PASSWORD=${POSTGRES_ROOT_PASS}
      - POSTGRES_PASSWORD=${POSTGRES_PASS}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_INITDB_ARGS="--encoding=UTF8"
    networks:
      - default

Figure 10: YAML of Xwiki

Another important aspect is the environment variables, which can be defined in a .env file; there we can specify the configuration parameters so that the YAML file does not need to be modified.

# Env file of Xwiki

# Config directory
XWIKI_CONFIG=../docker-volumes/xwiki

# local variables
POSTGRES_DB=xwiki
POSTGRES_USER=xwiki
POSTGRES_PASS=test1234
XWIKI_POSTGRES_CONTAINER_NAME=xwiki_db # Name of the host (or docker container) containing the database.
POSTGRES_ROOT_PASS=test1234

# Global Parameters:
XWIKI_PORT=8000

Figure 11: XWiki Environmental Variables

Now that we have defined the application we need to start it, so we are going to use the following Docker Compose command, which pulls the needed Docker images and starts the XWiki and database containers.

$ docker-compose up -d

Figure 12: Docker Compose Up command

Once the container is up and running, we can check its status with the next command and see that the application is running on port 8080 of the container, exposed on port 8000 of the host. We can also observe that there is a PostgreSQL container running on the same Docker host, which is going to be the database for XWiki.

Figure 13: Checking the state of the Application

At this point XWiki should be running on our Docker host, and we can complete the installation as an XWiki administrator. We are going to open http://localhost:8000 in a web browser and, after some configuration, we expect to obtain the following result.

Figure 14: Home Page of XWiki

4.3.2 Integration of the Services

To integrate all the services in the Docker cluster and enable communication between them, so that the functionality of the LDAP server makes sense, we need to create a bridge network between the applications.

$ docker network create --driver=bridge --subnet=172.31.0.0/16 --gateway=172.31.0.1 devops

Figure 15: Docker Compose: Creation of a Network

After creating a network called devops, we need to add this network to the YAML files of all the different applications so they can resolve each other by name or alias. Containers connected to the same user-defined bridge network automatically expose all ports to each other (but no ports to the outside world).

networks:
  devops:
    external: true

Figure 16: Updated YAML file to provide Communication Between Applications

So now the containerized applications are able to talk to each other through the container name.
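For instance, each application's YAML can attach its services to devops and, optionally, give them an extra alias; a minimal sketch (the image and the `ldap` alias are illustrative, not the exact configuration used in the cluster):

```yaml
services:
  openldap:
    image: "osixia/openldap"
    networks:
      devops:
        aliases:
          - ldap        # other containers on devops can now resolve this service as "ldap"

networks:
  devops:
    external: true      # reuse the already-created devops network instead of creating one
```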

4.4 Kubernetes Cluster

To create the Kubernetes cluster we are going to use Microk8s which, as explained before, is a simple, lightweight, production-grade upstream Kubernetes distribution. Microk8s provides all the tools necessary to create the cluster, the most important one being Helm, a package manager for K8s (like apt or npm) that provides a convenient way to package collections of manifests (YAML files) and distribute them in public and private registries. In order to use Microk8s we have to enable our desired capabilities, such as RBAC, Dashboard, Storage, Ingress, DNS, Helm3 and MetalLB. Once all of them are enabled we can start the deployment of our first application.

4.4.1 Creating an Application using Microk8s

We are going to deploy the same application as before, XWiki, and we are going to use Helm, which makes it very easy. As mentioned earlier, we first need to enable the Microk8s addons with this command:

$ microk8s enable rbac dashboard storage ingress dns helm3 metallb:(IP Range)

Figure 17: Enabling addons in microk8s

The advantage of using Helm is that we are going to deploy XWiki with a single command, only modifying its configuration parameters. To deploy it we are going to follow the steps described in the XWiki Helm chart[3]. To install the chart, we first need to add the keyporttech Helm repo and update our repos.

$ microk8s helm3 repo add keyporttech https://keyporttech.github.io/helm-charts/
$ microk8s helm3 repo update

Figure 18: Adding XWiki Helm Chart Repo

Then we can easily install it with the next command:

$ microk8s helm3 install xwiki keyporttech/xwiki

Figure 19: Installing XWiki
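Helm also lets us override the chart's default configuration at install time. As a sketch, assuming the chart exposes keys along these lines (the key names below are hypothetical; the real ones are listed in the chart's values.yaml):

```yaml
# my-values.yaml -- hypothetical overrides for the keyporttech/xwiki chart
service:
  type: ClusterIP
postgresql:
  postgresqlPassword: test1234
```

which would then be applied with `microk8s helm3 install xwiki keyporttech/xwiki -f my-values.yaml`.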

To check if the application is running, we inspect the K8s cluster with kubectl and wait until the deployment is ready, as in the figure.

$ microk8s kubectl get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/xwiki-postgresql-0            1/1     Running   0          2d
pod/xwiki-xwiki-7f868db4b-5h79f   1/1     Running   0          2d

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kubernetes         ClusterIP   10.152.183.1     <none>        443/TCP    2d4h
service/xwiki-xwiki        ClusterIP   10.152.183.200   <none>        80/TCP     2d
service/xwiki-postgresql   ClusterIP   10.152.183.39    <none>        5432/TCP   2d

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/xwiki-xwiki   1/1     1            1           2d

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/xwiki-xwiki-7f868db4b   1         1         1       2d

NAME                                READY   AGE
statefulset.apps/xwiki-postgresql   1/1     2d

Figure 20: Checking the status of the cluster

Now we can access the cluster IP and port, http://10.152.183.200:80, in a web browser and, after some configuration, we expect to obtain the following result.

Figure 21: Home Page of XWiki

4.4.2 Integration of the Services

In the Kubernetes cluster the integration of the services requires more configuration, as the services can be deployed on different nodes that form the cluster. To integrate all the services in the cluster we needed to configure a K8s object called Service. Recall that in K8s all the Pods have an IP that can be used for direct communication without NAT. So in theory you could talk to these pods directly, but what happens when a node dies? The pods die with it and the Deployment will create new ones, with different IPs. This is the problem a Service solves. A Service is an abstract way to expose an application running on a set of Pods as a network service; when created, each Service is assigned a unique IP address. This address is tied to the lifespan of the Service and will not change while the Service is alive. This way, Pods can be configured to talk to the Service, knowing that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service. To make all of this accessible from the outside we need another K8s object to expose the Services, and here is where Ingress comes in. An Ingress sits in front of multiple services and acts as a ”smart router” or entrypoint into your cluster. It is very useful if you want to expose multiple services under the same IP address. While load balancers are L4 devices, Ingress is an L7 proxy (typically HTTP/HTTPS), so it provides external access with features like load balancing, SSL termination, name-based virtual hosting, URL re-writing and more.
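As a sketch, an Ingress for the XWiki deployment above could look like the following manifest (the host name is illustrative; the Service name xwiki-xwiki and port 80 are the ones shown in Figure 20):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xwiki-ingress
spec:
  rules:
    - host: xwiki.example.com        # illustrative host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: xwiki-xwiki    # Service created by the Helm chart
                port:
                  number: 80
```

With this object in place, the ingress controller enabled earlier routes HTTP requests for xwiki.example.com to the xwiki-xwiki Service, which load-balances them across its pods.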

4.5 Nested LXC Container Kubernetes Cluster

We want to accomplish this cluster for educational purposes, because this way it will be easy for different developers to work on different machines and have their own cluster, and it will reduce the hardware needed.

A nested LXC Kubernetes cluster is simple: we want to create a K8s cluster inside an LXC container that is hosted by another LXC container, as the figure below shows.

[Figure: a server hosting several LXC containers, each of which contains a Kubernetes cluster whose nodes are themselves LXC containers]

Figure 22: Structure of the Nested LXC Cluster

We are going to deploy the same K8s cluster as developed previously for the Kubernetes cluster. This cluster is going to be located in a nested LXC container, as the next figure represents: devops is an LXC container that hosts the nodes, also LXC containers, that form the Kubernetes cluster.

Figure 23: Structure of our Example

4.5.1 Creating LXC Containers

In order to deploy the cluster we need to create the scenario, and to do so we need some configuration. First of all we need to install LXD on our host and create an LXC container, in our case named devops. We create it with the next commands:

$ snap install lxd $ lxd init $ lxc launch ubuntu:20.04 devops

Figure 24: Creating a LXC Container

We also need to create the profile for Microk8s that the node containers are going to use, which is going to be inherited from the host.

$ lxc profile create microk8s
$ wget https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s-zfs.profile \
    -O microk8s.profile
$ cat microk8s.profile | lxc profile edit microk8s
$ lxc profile assign devops default,microk8s

Figure 25: Creating the microk8s profile
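For reference, the downloaded profile contains the settings that make nesting possible; a simplified sketch of its typical contents (the exact keys and module list may differ between Microk8s versions, so the downloaded file is authoritative):

```yaml
config:
  security.nesting: "true"      # allow running containers inside this container
  security.privileged: "true"   # microk8s needs a privileged container
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter
description: "Profile supporting microk8s in containers"
```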

Then, inside the LXC container devops, we are going to create the nodes of the cluster, doing the same as before: create the profile and then create the containers. To build the multi-node cluster we are going to add node-02 to node-01, using the next Microk8s commands:

Node-01# microk8s add-node microk8s join 10.234.156.79:25000/72b949125b30d2801955db837ef26020 ...

Node-02# microk8s join 10.234.156.79:25000/72b949125b30d2801955db837ef26020

Figure 26: Adding a node to the cluster

Once the first container, node-01, is initialized, we follow the same procedure as in the Kubernetes Cluster section to develop the cluster using Microk8s, and so on for the rest of the nodes. Note that on the rest of the nodes we only need to enable the addons; we can then deploy everything from a single node, and the cluster will automatically choose on which node the pods will be deployed. Now we are able to deploy the same cluster as in the Kubernetes Cluster section.

5 Results

The final result of this TFG project is the creation of three different clusters using different technologies: Docker Compose, Microk8s and Microk8s nested in LXC containers. At the end of the document the Docker and Kubernetes slides are attached for academic purposes, together with a guide to reproduce the clusters. This guide consists of configuration files and instructions, to make it easy for anyone who wants to deploy the same applications or for the collaboration of new students. In this section there is a brief summary of the three clusters and the basic configuration of each one. All the detailed information about the procedure and the specific settings can be found in the documentation attached to this project. In addition, there is also a technical explanation of how each functionality has been achieved.

5.1 Docker Compose Cluster

The Docker cluster has been deployed with almost all the applications that we desired. The only application we could not deploy was OpenVPN. That does not mean that the application is not working; the issue was that we were not capable of configuring the VPN server to establish a TAP interface connection in order to access all the services of the cluster. As we can see, with a simple command we can know the status of the services running inside our Docker cluster, which ports of the host are exposed to the exterior and which are the internal ports of each Docker application.

Figure 27: Docker Compose Cluster Status

As we can observe, the hosted applications are not exposing any ports of the host, except the Nginx and Jitsi applications, because they need them to work. We can access the applications via the network created for the cluster, gaining access through the container names shown above, like redmine_redmine_1, and the port of the container.

To show this in a more visual way, the next figure shows all the proxy hosts configured with Nginx Proxy Manager, which host the web applications of the other services deployed in the cluster.

Figure 28: Proxy Hosts of the Docker Cluster

5.2 Kubernetes Cluster

Deploying the Kubernetes cluster was more challenging than deploying in Docker. The main reason is that K8s is evolving every day, and you must adapt to the changes. We have to keep in mind that not all the desired applications are available for K8s, so we could not deploy all of them. GitLab nowadays has a PoC on K8s, and Jitsi does not offer any support for it; we also have to remark that a GitHub repository[4] with Helm charts is deprecated and no longer under development. This is due to Helm 3's public release; its support ended on Nov 13, 2020. However, we were able to deploy enough applications to cover a company's needs: we could cover communication (except videoconferencing), the repositories of data, knowledge and information, and easy accessibility.

As K8s has an integrated ingress, an API object that manages external access to the services in a cluster and may provide load balancing, SSL termination and name-based virtual hosting, we do not need to deploy Nginx Proxy Manager. With the help of Lens, a K8s IDE, it is easy to show the cluster management. The next figure represents all the deployments of the cluster.

Figure 29: Deployments of the Kubernetes Cluster

We know that the deployment of an application in K8s generates the pods, which are the containers running the applications, and the services, which expose the application on an externally accessible port. The next figures represent our pods and services.

Figure 30: Pods of the Kubernetes Cluster

39 Figure 31: Services of the Kubernetes Cluster

The final figure representing the accessibility of the applications is the one with the Ingresses of the applications, which provide external access to the cluster. So, editing our /etc/hosts or with a DNS provider, we can access the cluster.

Figure 32: Ingresses of the Kubernetes Cluster

5.3 Nested LXC Container Kubernetes Cluster

This cluster needed a lot of configuration; as mentioned in the Incidences section, we had a lot of issues to make the creation of the cluster possible, the main problem being the creation of a zpool on the LXC host. Finally we solved it by creating the pool manually with the next command.

$ zfs create zpool/lxd-devops

Figure 33: Creating a Zpool

The results are very similar to the previous section, but this time we have deployed three nodes in the cluster, which host the deployments of each application. Again, with the help of Lens, we can show the cluster specifications. The first figure represents the cluster nodes.

Figure 34: Nodes of the Nested LXC Kubernetes Cluster

To be able to access the services from outside the cluster, from the host devops, we needed to edit /etc/hosts to point to the services inside the cluster that are exposed, i.e. those that have an external IP. Inside our master node, in this case node-01, we also need to configure /etc/hosts to point to the desired service.

Figure 35: Ingresses of the Nested LXC Kubernetes Cluster

root@devops:~# cat /etc/hosts
127.0.0.1 localhost

10.234.156.21 lxc.adminer.com 10.234.156.22 lxc.mattermost.com 10.234.156.23 lxc.nextcloud.com 10.234.156.24 lxc.ldap.com 10.234.156.25 lxc.redmine.com 10.234.156.26 lxc.xwiki.com

Figure 36: Hosts of Devops

Now we are able to access an application by the URL defined in the hosts file. For example, we are going to access lxc.mattermost.com:8065.

Figure 37: Mattermost application of the Nested LXC Kubernetes Cluster

After creating the cluster and deploying the same applications as before, we can observe that the size of the container's zpool has increased.

$ zfs list zpool/lxd-devops
NAME               USED    AVAIL   REFER   MOUNTPOINT
zpool/lxd-devops   73.4G   463G    24K     none

Figure 38: Zpool status

6 Budget

In terms of the budget involved in this project, we are talking about a total amount of almost 50.000 €. This budget is determined taking different items into account. First of all we have the salary of the team, distributed into two different wages: 20 €/h for the project leader and 15 €/h for the two junior engineers. In order to finish the project on time, the team works for 6 months; the project leader and one of the junior engineers spend 20 h per week and the other junior engineer spends 40 h per week, with a total amount of 520 hours and 960 hours respectively. The second important item is the equipment needed, which consists of one laptop for each team member and one special laptop to save all the data. As we are maintaining an Open Source methodology, we do not have any cost for the software. The last thing we have to take into account is that the team needs an office, which should include some facilities like an internet connection, water and energy. The breakdown of the budget is shown in the next table.

Table 8: Total Budget for the Project

7 Costs

Once we have done a budget for the project, we can determine its real costs. The costs of the project are divided into three different concepts: the team, the material and the utilities. First of all we have the team, composed of three members. The project leader, as he has more responsibilities, has a wage of 15 €/h, and the rest of the team, the junior engineers, have a wage of 12 €/h. In order to complete the project, the team is going to work for six months; the project leader and one of the engineers, working 20 h per week, total 320 hours each, and the other engineer, working 40 h per week, totals 960 hours. So the salary of the team is 24.480 €, plus 8.568 € for the Social Security.

Table 9: Cost of the Project Members

To be able to do the project the team needs some equipment, such as tables, chairs and technical laptops. These laptops should run Linux and have at least 8 GB of RAM, an SSD of 128 GB and an Intel Core i7 processor. The next table shows the material needed to complete the project, which amounts to a total investment of 11.200 €.

Table 10: Cost of the Project Material

One important aspect that we need to take into account is the amortization, which determines the cost of our investment. In order to work, we bought the equipment mentioned earlier, and it has an amortization cost. As the table below shows, we can see the devaluation of the material bought. The team only needs this material for 6 months, but the amortization is calculated annually, so the total depreciation of the material used for the project is 1.896 €.

Table 11: Cost of the Project Amortization

The last item that represents a cost for the team is the utilities needed: water, energy, telephony and internet, and the rent of the office. As shown below, this is not a very expensive item, as it costs 3.320 € for the whole duration of the project.

Table 12: Cost of the Project Utilities

With all the tables represented above, we can determine that this is not an expensive project, which was one of our goals. We are talking about 45.000 € for a project that is going to improve the deployment of a company's services and can achieve large savings thanks to its simplicity.

Table 13: Total Cost of the Project

8 Environmental Impact

We know that almost everything we use consumes energy, even more so in the IT sector. In 2019 the global data center energy use was 250 TWh, about 1% of global electricity consumption, and based on current efficiency improvement trends, electricity consumption is projected to rise to around 270 TWh in 2022. [5] We also need to figure out whether we are using the machines well. There are a lot of data centers and machines that we are not using efficiently, and we could get more out of them. This is where containers come in, in terms of energy reduction. Containers, which package software for faster build, ship and deploy, affect the environment in a very positive way: if they increase the efficiency of the data centers, they reduce the energy these consume. So if containers can reduce the equipment needed to deploy applications, they will also reduce the energy we spend on maintaining this equipment. We should also note that the objective of containerized applications was not to reduce energy consumption, but it is a factor that benefits from them. On the other hand, there is an article[6] that states that Docker containers use more energy than a traditional deployment. Nonetheless, we can see this as the beginning of an important change for society, and if many developers come together to find solutions to the environmental impact, it can be very beneficial.

9 Conclusions

First of all, on a personal level, I am very happy with the work done during this project: it has taught me to work in a methodical way and I have learned a lot. I have faced new challenges and unknown issues, which led me to acquire new knowledge in order to carry out a highly complex project.

This project allowed us to see the large number of options there are for hosting services on our systems. We have gone from methods that improve the performance of a company's services, like Docker with its YAML files, to systems that let us integrate a large number of separate services and databases, such as the use of Helm on Kubernetes. Each has its own benefits in its specific scenarios, and it is worth checking which one to choose before starting to develop a project or trying to integrate two separate ones.

On the one hand, we have Docker, which nowadays is much more mature and offers many different applications to work with, though it only works on a single node. On the other hand, Kubernetes is simple, powerful and scalable, though there is still a lot to learn and it does not yet have as many ready-made applications.

As a final statement, both scenarios are much better than the traditional way of deploying applications. They provide a faster and more secure way of deploying applications, and they require less storage. They have also proven to be fairly easy to manage and deploy, and they can be monitored and managed through Lens. Finally, it could be good to use only Kubernetes, though its learning curve makes it not as attractive yet compared to Docker.
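To make the comparison above concrete, the following is a minimal sketch of the Compose side: a single-node WordPress deployment defined in a docker-compose.yml. All names, passwords and image tags here are illustrative, not taken from the project's actual files.

```yaml
# Hypothetical docker-compose.yml: a WordPress service plus its database,
# defined declaratively and started with a single `docker-compose up -d`.
version: "3.7"
services:
  db:
    image: mariadb:10.5
    environment:
      MYSQL_ROOT_PASSWORD: example   # illustrative credential only
      MYSQL_DATABASE: wordpress
    volumes:
      - db_data:/var/lib/mysql       # persist the database across restarts
  wordpress:
    image: wordpress:latest
    depends_on:
      - db                           # start the database first
    ports:
      - "8080:80"                    # expose WordPress on the host
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: example
volumes:
  db_data:
```

On the Kubernetes side, an equivalent multi-component deployment can be condensed into a single Helm command along the lines of `helm install my-wordpress <repo>/wordpress` (chart and release names here are placeholders), which is exactly the integration advantage discussed above.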

10 Future Work

This project opens a wide range of windows for future development. First of all, there is a lot of research still to do, for both Kubernetes and Docker. There is also a lot of work to do on the development side, because the services we know today will increasingly be deployed using this technology. For example, GitLab has a PoC for Kubernetes that is not deployed in this project. As for nested LXC containers, they were developed in an educational setting and their reliability is still being verified. During the development of this project, Kubernetes announced that it is going to drop support for Docker. Note that all the applications use Docker images, but this is not a big deal: in reality, Kubernetes only drops the Container Runtime Interface (CRI) support for Docker. The debate now is which runtime is going to be used, containerd or CRI-O. Finally, we have to note that this project is very upgradable, and since the sector keeps evolving every year, it will become easier to develop newer and safer applications.
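To illustrate what the containerd versus CRI-O choice amounts to in practice, the sketch below shows the kubelet flag through which a node selects its runtime: the kubelet only needs a CRI-compatible socket to talk to, so moving off Docker is a matter of pointing this endpoint at another runtime. The exact invocation is simplified and hypothetical; real clusters usually set this through their distribution's configuration rather than by hand.

```shell
# Sketch: the kubelet talks to whatever runtime listens on the CRI socket.
# containerd (the runtime MicroK8s ships with):
kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock
# CRI-O:
kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```

Because both runtimes consume standard OCI images, the Docker images built for this project remain usable under either choice.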

References

[1] Docker Inc. Docker Compose Requirements. [Online] Available: https://docs.docker.com/compose/install/, 2020. [Accessed: 15 Nov 2020].
[2] Canonical Ltd. Microk8s Requirements. [Online] Available: https://microk8s.io/docs, 2020. [Accessed: 15 Nov 2020].
[3] The Artifact Hub Authors. XWiki Helm Chart. [Online] Available: https://artifacthub.io/packages/helm/keyporttech/xwiki, 2020. [Accessed: Dec 2020].
[4] Helm Authors. Helm Charts. [Online] Available: https://github.com/helm/charts, 2020. [Accessed: Dec 2020].
[5] IEA 2020. Data Centres and Data Transmission Networks. [Online] Available: https://www.iea.org/reports/data-centres-and-data-transmission-networks, 2020. [Accessed: 27 July 2020].
[6] Eddie Antonio Santos, Carson McLean, Christopher Solinas and Abram Hindle. How does Docker affect energy consumption? Evaluating workloads in and out of Docker containers. May 2017. [Online] Available: https://arxiv.org/pdf/1705.01176.pdf [Accessed: Dec 2020].
[7] The Kubernetes Authors. Kubernetes Documentation. [Online] Available: https://kubernetes.io/docs/home/, 2020. [Accessed: 27 July 2020].
[8] Kubernetes By Example. Kubernetes Examples. [Online] Available: https://docs.docker.com/get-started/overview/, 2020. [Accessed: 27 July 2020].
[9] Docker Inc. Quickstart: Compose and WordPress. [Online] Available: https://docs.docker.com/compose/wordpress/, 2020. [Accessed: Nov 2020].
[10] Jakub Vrána. Adminer. [Online] Available: https://www.adminer.org, 2020. [Accessed: Nov 2020].
[11] GitLab. GitLab. [Online] Available: https://about.gitlab.com, 2020. [Accessed: Nov 2020].
[12] Jitsi. Jitsi. [Online] Available: https://jitsi.org, 2020. [Accessed: Nov 2020].
[13] Mattermost Inc. Mattermost. [Online] Available: https://mattermost.com, 2020. [Accessed: Nov 2020].
[14] Jamie Curnow. Nginx Proxy Manager. [Online] Available: https://nginxproxymanager.com, 2020. [Accessed: Nov 2020].
[15] OpenLDAP Foundation. OpenLDAP. [Online] Available: https://www.openldap.org, 2020. [Accessed: Nov 2020].
[16] Nextcloud GmbH. Nextcloud. [Online] Available: https://nextcloud.com, 2020. [Accessed: Nov 2020].
[17] OpenVPN Inc. OpenVPN. [Online] Available: https://openvpn.net, 2020. [Accessed: Nov 2020].

[18] Redmine. Redmine. [Online] Available: https://www.redmine.org, 2020. [Accessed: Nov 2020].
[19] XWiki SAS. XWiki. [Online] Available: https://www.xwiki.org, 2020. [Accessed: Nov 2020].
[20] Canonical Ltd. Microk8s Addons. [Online] Available: https://microk8s.io/docs/addons, 2020. [Accessed: Dec 2020].
[21] Canonical Ltd. Linux Containers. [Online] Available: https://linuxcontainers.org/, 2020. [Accessed: Dec 2020].
[22] Docker Inc. Why Docker? [Online] Available: https://www.docker.com/what-docker, 2020. [Accessed: Dec 2020].
[23] Docker Inc. Docker Compose Documentation. [Online] Available: https://docs.docker.com/compose/, 2020. [Accessed: Dec 2020].
[24] The CRI-O Authors. CRI-O. [Online] Available: https://cri-o.io/, 2020. [Accessed: Dec 2020].
[25] Containerd Authors. Containerd. [Online] Available: https://containerd.io/, 2020. [Accessed: Dec 2020].
[26] VMware Inc. Kubernetes Cluster. [Online] Available: https://www.vmware.com/topics/glossary/content/kubernetes-cluster, 2020. [Accessed: Dec 2020].
[27] The Kubernetes Authors. Kubernetes Commands. [Online] Available: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands, 2020. [Accessed: Dec 2020].
[28] Helm Authors. Using Helm. [Online] Available: https://helm.sh/docs/intro/using_helm/, 2020. [Accessed: Dec 2020].
