D6.1 Infrastructures, Continuous integration approach

Project Acronym: 5GTANGO
Project Title: 5G Development and Validation Platform for Global Industry-Specific Network Services and Apps
Project Number: 761493 (co-funded by the European Commission through Horizon 2020)
Instrument: Collaborative Innovation Action
Start Date: 01/06/2017
Duration: 30 months
Thematic Priority: H2020-ICT-2016-2017 – ICT-08-2017 – 5G PPP Convergent Technologies

Deliverable: D6.1 Infrastructures, Continuous integration approach
Workpackage: WP 6
Due Date: M8
Submission Date: 2018/3/9
Version: 0.1
Status: Draft
Editor: Georgios Xylouris (NCSRD)
Contributors: F. Vicens (ATOS), S. Kolometsos (NCSRD), D. Kyriazis (UPRC), M. Peuster (UPB), S. Schneider (UPB), Peter Twamley (HWIRL), P. Trakadas (SYN), P. Karkazis (SYN), R. Muñoz (CTTC), R. Vilalta (CTTC), A. Rocha (ALT)
Reviewer(s): Thomas Soenen (IMEC), Peter Twamley (HWIRL)

Keywords: Infrastructure, CI/CD, DevOps

Deliverable Type: R – Document (X); DEM – Demonstrator, pilot, prototype; DEC – Websites, patent filings, videos, etc.; OTHER
Dissemination Level: PU – Public (X); CO – Confidential, only for members of the consortium (including the Commission Services)

Disclaimer: This document has been produced in the context of the 5GTANGO Project. The research leading to these results has received funding from the European Community’s 5G-PPP under grant agreement no. 761493. All information in this document is provided “as is” and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability. For the avoidance of all doubts, the European Commission has no liability in respect of this document, which is merely representing the authors’ view.


Executive Summary:

The 5GTANGO project defines a versatile DevOps workflow for 5G Network Service development, validation and deployment. The workflow supports the development of critical components of the 5GTANGO platform and, at the same time, implements a number of environments, built around those components, for developers and testers. The environments are deployed over infrastructure contributed by a number of available testbeds and pilot sites. Their deployment extends gradually as the required maturity of the software components is reached. To this end, this document presents the overall deployment of the 5GTANGO infrastructure and provides a brief summary of the infrastructure as provided by each testbed. The testbeds that contribute to the 5GTANGO infrastructure are: Athens, Aveiro, Barcelona, Paderborn and Dublin. These locations are interconnected over the Internet by VPN links attached to the Athens testbed. This approach allows for the distribution of components and for large scale testing, considering the extent of the infrastructure in each testbed. Furthermore, the document presents the environments that will support the activities of the 5GTANGO project. These environments are: the Integration Environment (for SP, V&V and SDK), the Qualification Environment (for the SP and V&V), the Staging Environment (for the developers of NSs and VNFs) and the Demonstration Environment (for the vertical use case deployment and demonstration). The way of working and the guidelines for developers are provided in the section that specifies the CI/CD workflow. Finally, some indicative preliminary component deployment cases are discussed, covering the main platforms (SDK, V&V, SP) and the monitoring support for 5GTANGO.


Contents

List of Figures vii

List of Tables ix

1 Introduction 1
1.1 Document dependencies ...... 1

2 State of the Art 3
2.1 Virtualisation and isolation technologies ...... 3
2.1.1 Network Function Virtualisation Infrastructure ...... 3
2.1.2 Network Virtualisation and traffic isolation technologies ...... 6
2.2 Automation ...... 7
2.2.1 Ansible ...... 8
2.2.2 APEX ...... 11
2.3 CI/CD Practices in NFV development ...... 12
2.3.1 OSM ...... 13
2.3.2 ONAP ...... 14
2.3.3 OPNFV ...... 14

3 Infrastructure and Testbeds 17
3.1 Athens testbed ...... 17
3.1.1 Topology ...... 17
3.1.2 Hardware / Software availability ...... 19
3.1.3 Access to NCSRD Infrastructure ...... 20
3.1.4 Scope in the frame of 5GTANGO ...... 20
3.2 Aveiro Testbed ...... 20
3.2.1 Topology ...... 21
3.2.2 Hardware / Software availability ...... 21
3.2.3 Experimental scenarios ...... 21
3.3 Barcelona Testbed ...... 21
3.3.1 Topology ...... 22
3.3.2 Software availability ...... 23
3.3.3 Experimental scenarios ...... 25
3.4 Paderborn University Testbed ...... 25
3.4.1 Topology ...... 25
3.4.2 Hardware / Software availability ...... 25
3.4.3 Experimental scenarios ...... 27
3.5 Ireland Testbed ...... 27
3.5.1 Topology ...... 27
3.5.2 Hardware / Software availability ...... 27
3.5.3 Experimental scenarios ...... 28


4 5GTANGO Environments 29
4.1 5GTANGO Framework Development Environment ...... 29
4.2 5GTANGO Framework Integration Environment ...... 33
4.3 5GTANGO Framework Qualification Environment ...... 34
4.4 5GTANGO Framework Staging Environment ...... 35
4.5 5GTANGO Demonstration Environment ...... 36
4.6 5GTANGO Sandbox Environment ...... 37

5 5GTANGO CI/CD Pipeline 38
5.1 Overview ...... 38
5.2 Container build ...... 39
5.3 Unit tests ...... 39
5.4 Code style check ...... 40
5.5 Container publishing ...... 40
5.6 Smoke testing ...... 40
5.7 Containers Promotion to integration ...... 40
5.8 Reports ...... 40
5.9 POST Actions ...... 41
5.10 Putting all together-Jenkinsfile ...... 41

6 Preliminary deployment of TANGO infrastructure components 43
6.1 Service Platform ...... 43
6.2 V&V Deployment Overview ...... 44
6.3 SDK Platform ...... 45
6.4 Hybrid monitoring ...... 45
6.4.1 Passive monitoring ...... 46
6.4.2 Active monitoring ...... 46
6.4.3 Monitoring system architecture ...... 46

7 Conclusions 48

A Example of Unit Testing 49
A.1 Step 1: Deploying ancillary tools ...... 49
A.2 Step 2: Start the unit tests ...... 50
A.3 Step 3: Clean up the environment ...... 51

B Smoke Testing 52

C Bibliography 53



List of Figures

2.1 Network Virtualisation Layers ...... 7
2.2 Overall CI/CD process for OSM ...... 12
2.3 Stage 2 of OSM CI/CD ...... 13
2.4 Stage 3 of OSM CI/CD ...... 14
2.5 Stage 4 of OSM CI/CD ...... 14
2.6 ONAP overall CI/CD workflow ...... 15
2.7 Overall OPNFV CI/CD workflow ...... 15

3.1 Pilots and testbed overview ...... 18
3.2 Overall Athens testbed topology ...... 18
3.3 VPN access to Aveiro testbed ...... 21
3.4 The CTTC ADRENALINE Testbed for end-to-end 5G and IoT services ...... 22
3.5 Overall topology of Paderborn testbed ...... 25
3.6 Overall Huawei testbed topology ...... 27

4.1 Overview of 5GTANGO Environments ...... 30
4.2 DevOps-Pipeline ...... 30
4.3 Infrastructure-Environments ...... 31
4.4 Development Environment ...... 32
4.5 Demonstration Environment ...... 37

5.1 CI/CD Pipeline ...... 38

6.1 SP deployment ...... 43
6.2 Validation and Verification deployment ...... 44
6.3 SDK packages and tools ...... 45
6.4 Monitoring Framework architecture ...... 47



List of Tables

1.1 Document dependencies ...... 1



1 Introduction

The application of DevOps in 5GTANGO depends heavily on the definition and deployment of specific environments and toolkits that are used throughout the DevOps workflow. These environments offer particular functionalities and features that allow developers and system testers of a particular component to deploy, test, validate and debug that component. Concurrently, the designed DevOps processes, and therefore the supporting environments, provide the necessary automation features to allow the seamless and hassle-free deployment of all the ancillary system blocks necessary for the efficient testing and validation of new features and developments. The 5GTANGO environments are deployed over infrastructure that is provided by geographically distributed testbeds, located in different partners’ premises and interconnected in a unified infrastructure with a single point of access for all developers. In order to leverage different capabilities or resource availability, an environment may be replicated in more than one testbed. For the developers involved in 5GTANGO, a CI/CD process has been specified for all to comply with. This allows an efficient, automated control of code development and integration. 5GTANGO will employ the aforementioned CI/CD methodology not only to fulfil the original promise of the project, to provide a platform that inherently supports a DevOps approach in the development, validation and verification of Network Services and their components, but also for the development and implementation of the 5GTANGO components themselves. This first deliverable of WP6 presents the testbeds that are to be used for the creation of the 5GTANGO infrastructure. Then, the environments that are going to be deployed on top of this infrastructure are discussed. The next section discusses the 5GTANGO methodology and workflow to be followed for the implementation of all 5GTANGO artefacts. Finally, the document elaborates on the preliminary view of the components comprising each 5GTANGO platform, namely the Service Platform, V&V, SDK and monitoring.

1.1 Document dependencies

This document integrates the work carried out so far within the other 5GTANGO technical WPs, and as such, contains either implicit or explicit references to deliverables summarised in Tbl. 1.1.

Table 1.1: Document dependencies

D2.1: Pilot definition and Requirements [17] – The document discusses the requirements as elicited by the pilot definition and specification as well as the main objectives of the project. The most relevant set of requirements are those defined for the Validation and Verification platform and Service Platform.

D2.2: Architecture Design [21] – This document describes the initial overall 5GTANGO architecture, which is based on the pilot requirements in D2.1 [18] and the first V&V description in D3.1 [19]. The document provides the specification of the 5GTANGO components that will be developed, deployed and validated on the provided infrastructure. To this end, the analysis is useful to see the requirements for the testbeds that provide this infrastructure.


D3.1: V&V Strategy and metadata management [19] – This document describes the first V&V concepts. The V&V platform is very much related to the provided infrastructure, as the first environment used for VNF/NS development and validation is the Sandbox Environment, which is described in the following sections.


2 State of the Art

The infrastructure of 5GTANGO has three objectives, namely:

• Support the development phase for the vertical use cases (i.e. VNFs, SSM and FSM plugins, validation).

• Support a V&V platform for instantiating test Network Services (NSs) in order to validate and verify them.

• Support the final deployment of the vertical use cases on a close to reality demonstration infrastructure.

To accomplish those objectives, the infrastructure that will be deployed by the 5GTANGO project has to inherently support an efficient level of virtualisation and isolation, to allow for concurrent support of all phases and roles of NS development. In parallel, the same infrastructure has to be used for validation and evaluation of both functional and non-functional features of 5GTANGO modules, such as the management and orchestration components. In other words, the deployed infrastructure is not only to be used by the verticals (as would be the case if all the 5GTANGO components were available) but also for the development and testing of the different software components of 5GTANGO. It is therefore obvious that, apart from the need for a virtualisation-capable infrastructure for the implementation and usage of 5GTANGO artifacts, network automation is also crucial and required to ensure control of the various environments. In addition, the emergence of concepts such as Infrastructure as Code (IAC) allows for the introduction of Continuous Integration/Continuous Delivery (CI/CD) processes that enable DevOps workflows covering various environments and infrastructure configurations. The subsections that follow analyse the aforementioned topics and present current trends and their exploitation in 5GTANGO.

2.1 Virtualisation and isolation technologies

This section discusses the state of the art and concepts related to the isolation and virtualisation capabilities that are provided by the infrastructure that supports 5GTANGO.

2.1.1 Network Function Virtualisation Infrastructure

ETSI ISG NFV has defined in its specifications the term NFV Infrastructure (NFVI) to denote the infrastructure that provides the enablers to support the instantiation and operation environment for VNFs chained together in a VNF forwarding graph. The NFVI is a key component of the NFV architecture that describes the hardware and software components on which virtual networks are deployed. The NFVI is composed of NFV infrastructure points-of-presence (NFVI-PoPs), which are where the VNFs, requiring resources for computation, storage and memory, are deployed by a network operator. NFVI networks interconnect the computing and storage resources contained in an NFVI-PoP.


This may include specific switching and routing devices to allow external connectivity. The NFVI-PoPs are interconnected via transport network connections in order to form a complete networked infrastructure. The current market for NFVI varies greatly, and there is even debate among vendors as to what constitutes an NFVI component [16]. Vendors have differing interpretations of how to implement the ETSI NFV definitions. Some vendors build complete solutions that include their existing hardware and software solutions, while others provide more focused offerings. SdxCentral’s 2016 “Mega NFV Report Pt. 1: MANO and NFVI” [11] identifies 22 NFVI vendors. These range from established network players like Cisco, which offers Cisco NFV Infrastructure [4], and Ericsson, which offers the Ericsson Cloud Execution Environment [5], to smaller, software-based competitors like 6Wind, which offers the 6WINDGate Packet Processing Software [2]. VMware is well established with its vCloud offering [15], which combines both NFVI and VIM functions.

2.1.1.1 OpenStack

OpenStack, as an open source project, has greatly simplified the path to virtualization for many. Practical evidence of this is that ETSI and OPNFV have defined specifications and released reference platforms for NFV that select OpenStack as the Virtualisation Infrastructure Manager. Additionally, OpenStack is the dominant choice for additional management and orchestration functions. NFV on OpenStack offers an agile, scalable, and rapidly maturing platform with compelling technical and business benefits for telecommunications providers and large enterprises [27]. Examples of such benefits include:

• Standardised interfaces between NFV elements and infrastructures are provided

• Resource pools available cover all network segments

• Network and element deployment automation, providing roll-out efficiency

• Pluggable architecture with documented APIs, UIs, shared services, operations, automation

• All popular open source and commercial network plug-ins and drivers are available

• Outstanding global community contributing to a rapid pace of innovation, working on unique NFV requirements from users, related communities and standards developing organizations, with NFV features in every release since 2013.

• Proven telecom as well as enterprise implementations: AT&T, China Mobile, SK Telecom, Ericsson, Deutsche Telekom, Comcast, Bloomberg, and more.

Isolation and multi-tenancy

OpenStack is inherently a multi-tenant platform, where multiple users on the same cluster share compute, storage and networking resources without awareness of other users. Tenant networks are isolated from each other. This is achieved through the Neutron service, which provides each tenant with their own network namespace by leveraging either VLAN segregation or VXLAN/GRE tunneling based overlay networks. The selection of the proper technology to implement the network isolation within a datacenter greatly affects the network infrastructure design and the VNF service function chaining model. Briefly, the following types of tenant networks are supported (a minimal CLI sketch is given after the list):

• Flat - all instances on the same network (no VLAN, or any other segregation)


• Local - access provided only to the compute host networking (isolation from external net- works)

• VLAN - 802.1Q tagging is used to denote multiple provider or tenant networks

• Tunnelling (VXLAN and GRE) - use of network overlays to support private communication between instances. In this case a virtual router is required to enable traffic to traverse destinations outside of the overlay network.
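As a minimal, hedged illustration of how such networks are typically created, the sketch below uses the standard OpenStack CLI; the network/subnet names, address range, physical network label and VLAN segment are placeholders rather than part of the 5GTANGO configuration.

# Tenant self-service network: the segmentation type (e.g. VXLAN) comes from the Neutron configuration
$ openstack network create tenant-net
$ openstack subnet create --network tenant-net --subnet-range 10.10.0.0/24 tenant-subnet

# Administrator-created network bound to an existing physical VLAN segment
$ openstack network create --provider-network-type vlan \
    --provider-physical-network physnet1 --provider-segment 100 provider-net-100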

In case the deployment demands reuse of available hardware network elements (switches, routers, etc.), another type of tenant network is the provider network, which allows mapping to existing physical networks in a datacenter. Another issue regarding isolation is the support for QoS, either within tenant networks or among tenants. The definition of network QoS can involve many parameters and it is difficult to standardise support for it across installations. Currently, there are a number of Neutron plugins that have their own quality of service API extension, but each has its own parameters and structure [10]. OpenStack offers the following alternatives, with the first one considered the best approach:

• It is possible to define QoS policies with Neutron, which implement the bandwidth limiting API and lay out the QoS models for future API and model extensions introducing more types of QoS rules. The cloud operator can provide the option of choosing a QoS policy from a pre-configured list of policies which that particular operator supports. A common way to express such a categorisation is through the use of the name field in a QoS policy, which can be used to create arbitrary levels of service like “Platinum”, “Gold”, “Silver”, “Bronze” and “Best-effort”. The actual definition of the network QoS to which each of these levels maps might vary between installations. (A CLI sketch of this alternative is given after the list.)

• Doing QoS / traffic classification inside the instances. This is limited to the most basic rules, since instances would not be able to mark external segmentation packets to prioritise traffic at the L2/L3 level. Moreover, the tenants cannot necessarily be trusted to do the right thing.

• Nova flavours support for QoS [13] allows bandwidth limiting settings via the libvirt interface on the VM tap. This is enough for basic BW limiting on the VMs, but other QoS rules are not supported, and it also lacks support for service port QoS.
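For the first (Neutron QoS policy) alternative, a hedged CLI sketch is shown below; the policy name and the rate limits are illustrative only, and the policy is attached to the example tenant network created earlier.

$ openstack network qos policy create gold
$ openstack network qos rule create --type bandwidth-limit \
    --max-kbps 10000 --max-burst-kbits 1000 gold
$ openstack network set --qos-policy gold tenant-net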

Although isolation between tenants is sufficiently supported for networking resources, this is not the case for computing and memory resources. It is common for tenants to exhibit noisy-neighbour symptoms in case multiple tenant VMs are deployed over the same compute node and/or, in some cases, due to their own co-located VMs. One solution is the use of host aggregates (i.e. a way to group compute nodes with similar capabilities/purpose) for scheduling the deployment of a single tenant's VMs on specific groups of compute nodes. Moreover, OpenStack supports Enhanced Platform Awareness (EPA) and is able to provide a VM with isolated and explicit access to CPU core(s) and memory space. In 5GTANGO, OpenStack deployments (either using OPNFV or from scratch) will be the de facto used and supported VIM.
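As a hedged sketch of the host aggregate and EPA mechanisms mentioned above, the commands below group compute nodes into an aggregate and define a flavour with dedicated (pinned) CPUs and huge pages; all names, sizes and properties are illustrative, not the actual 5GTANGO configuration.

# Group compute nodes with a similar purpose into a host aggregate (admin operation)
$ openstack aggregate create --zone nfv-zone nfv-aggregate
$ openstack aggregate add host nfv-aggregate compute-01
$ openstack aggregate set --property pinned=true nfv-aggregate

# EPA-oriented flavour requesting pinned CPUs and huge pages, scheduled onto that aggregate
$ openstack flavor create --vcpus 4 --ram 8192 --disk 20 vnf.medium
$ openstack flavor set vnf.medium --property hw:cpu_policy=dedicated \
    --property hw:mem_page_size=large \
    --property aggregate_instance_extra_specs:pinned=true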

2.1.1.2 Kubernetes

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized workloads and services that facilitates both declarative configuration and automation


[9]. It is possible to launch several containers grouped together in an entity called a pod. A pod generally represents one or more containers that should be controlled as a single application. A replication controller ensures that a specified number of pod replicas are running at any one time and are always available, providing stability. Another feature provided is a Kubernetes Service. This service is an abstraction which defines a logical set of pods and a policy by which to access them, sometimes called a micro-service. (A minimal manifest sketch is given after the feature list below.) A few of Kubernetes' most prominent features include:

• Automatic bin-packing, automatically places containers based on their resource requirements and other constraints, while not sacrificing availability.

• Self-healing, restarts containers that fail, replaces and reschedules containers when nodes die.

• Horizontal Scaling, easy to scale applications up and down.

• Extensibility, Kubernetes has a large ecosystem with many tools available providing further capabilities.
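The manifest below is a minimal, hedged sketch of the concepts above: a replication controller keeping three pod replicas alive and a Service exposing them. Names, labels and the container image are illustrative only.

apiVersion: v1
kind: ReplicationController
metadata:
  name: demo-vnf
spec:
  replicas: 3                 # keep three pod replicas running at all times
  selector:
    app: demo-vnf
  template:
    metadata:
      labels:
        app: demo-vnf
    spec:
      containers:
      - name: demo-vnf
        image: nginx:1.13     # placeholder container image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-vnf-svc
spec:
  selector:
    app: demo-vnf             # the Service load-balances across the matching pods
  ports:
  - port: 80
    targetPort: 80

Applied with 'kubectl apply -f demo-vnf.yaml', Kubernetes continuously reconciles the declared state, restarting or rescheduling pods as needed.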

Isolation and Multi-tenancy

Since the introduction of container technologies (Docker, LXC, etc.), users that required a stronger degree of isolation, particularly those running in multi-tenant environments, were forced to run containers inside virtual machines, sometimes even mapping one VM per container. Recently, a solution has been released that is based on the concept of HyperContainer [7]. HyperContainer is a hypervisor-based container, which allows you to launch Docker images with standard hypervisors (KVM, Xen, etc.). As an open-source project, HyperContainer consists of an OCI compatible runtime implementation, named runV, and a management daemon named hyperd. The idea behind HyperContainer is quite straightforward: to combine the best of both virtualisation and containers. In HyperContainer, virtualisation technology makes it possible to build a fully isolated sandbox with an independent guest kernel (so things like top and /proc all work), but from the developer's view, it is portable and behaves like a standard container. In this context, a promising approach for leveraging Kubernetes in the NFVI is Hypernetes, which attempts to integrate HyperContainer into Kubernetes. In order to run HyperContainers in a multi-tenant environment, a new network plugin was created and an existing volume plugin was modified. Since Hypernetes runs each Pod in its own VM, it can make use of existing IaaS layer technologies for multi-tenant networking and persistent volumes. The current Hypernetes implementation uses standard OpenStack components. In 5GTANGO, Kubernetes will be considered as an alternative Virtualisation Infrastructure Manager (VIM), to be interfaced with an Infrastructure Adaptor in order to instantiate and orchestrate Docker-based VNFs. To this extent, it should be noted that although new evolutions have enabled certain VNF types to be operational over Kubernetes as a VIM, a number of networking issues render the solution of Kubernetes as a VIM still experimental.

2.1.2 Network Virtualisation and traffic isolation technologies

Network Virtualisation is a key enabler for network slicing, as it can provide a specified set of network requirements, while ensuring the necessary isolation between network slices. In this section, we review the suggested technologies from both the data and control plane perspectives. Optical network virtualisation (ONV) refers to the partitioning and aggregation of the physical optical infrastructure to create multiple co-existing and independent virtual networks (VN) on top of it. ONV can be introduced at the data plane, with enabling technologies which support virtualisation (packet or circuit based), or with resource virtualisation at the control plane level [29]. The usage of such virtualisation technologies in network slicing may bring benefits in terms of security, latency, elasticity, resiliency and bandwidth.


Figure 2.1: Network Virtualisation Layers

At the data plane, network virtualisation can be performed differently according to the considered layer (fig. 2.1). At Layer 0, dedicated physical interfaces, wavelengths, cores and modes might be allocated to a VN. At Layer 1, OTN tunnels can be considered. At Layer 2, MPLS and FlexEthernet connections can be adopted; in addition, the use of VLANs allows creating up to 4094 virtual networks over the same physical Ethernet interfaces. At Layer 3, the composition of overlay networks through tunnelling mechanisms (e.g., NVGRE, NSH) provides the necessary VN. From the control plane perspective, several initiatives are currently addressing the ONV framework. In OIF, a Virtual Transport Network Service (VTNS) is the creation and offering of a VN by a provider to a user [26]. VNs may be dynamically created, deleted or modified, and users can perform connection management, monitoring and protection within their allocated VNs. Different types of VTNS could be associated to operators offering, for example, Bandwidth on Demand (BoD) services, Network as a Service (NaaS) or Network Slicing for 5G Networking. In IETF, the Abstraction and Control of Traffic Engineered Networks (ACTN) architecture [20] defines the requirements, use cases, and an SDN-based architecture, relying on the concepts of network and service abstraction. The architecture encompasses Physical Network Controllers (PNCs), which are responsible for specific technology and/or administrative domains. PNCs are then orchestrated by a Multi-Domain Service Coordinator (MDSC). By doing so, the MDSC enables abstraction of the underlying transport resources and deployment of virtual network instances for individual customers / applications, which are controlled by each individual Customer Network Controller. In 5GTANGO, the networking part is implemented by equipment that supports SDN. More specifically, the WAN part that is formed by the interconnection of all testbeds via the VPN concentrator located in Athens is based on SDN capable equipment. The WAN will be controlled by one or multiple SDN controllers (i.e. OpenDayLight) and, on top, a WAN Infrastructure Manager will manage the resource provisioning to support slice instantiation and provisioning.
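As a small, hedged illustration of the Layer 2 mechanisms discussed above, the Open vSwitch commands below create a VLAN-tagged port and a VXLAN overlay tunnel; the bridge and port names, VLAN tag, VNI and remote IP address are placeholders only.

# VLAN segregation: attach a port to a bridge with an 802.1Q tag
$ ovs-vsctl add-br br-tenant
$ ovs-vsctl add-port br-tenant vnf-port1 tag=100

# VXLAN overlay: tunnel tenant traffic towards a remote hypervisor/PoP
$ ovs-vsctl add-port br-tenant vxlan-pop2 -- set interface vxlan-pop2 \
    type=vxlan options:remote_ip=192.0.2.10 options:key=5001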

2.2 Automation

The Open Source community provides a vast and rich set of automation tools for configuration, provisioning, testing and deployment. They can be used for infrastructure management purposes, dramatically accelerating the time-to-market of applications and services. These tools range from commercial tools with graphical interfaces and helper utilities to community contributed and maintained tools. In order to avoid a lengthy enumeration of all the available tools, this section discusses those tools that will mostly be utilised by 5GTANGO.


The tools that have been employed are:

• Mirantis Fuel [25] and RedHat Apex [3] for Openstack deployment over bare metal servers.

• Jenkins (integrated with Github) for automated tests over multiple environments.

• Ansible [14] for configuration management, provision and deployment of the Service Platform and supported environments.

The following subsections present Ansible and Apex in more detail.

2.2.1 Ansible

Ansible is Open Source GPLv3.0 software that automates software provisioning, configuration management, and application deployment. Ansible Inc. (now part of Red Hat) provides commercial support for Ansible. It has over 1,400 ready-to-use modules, which allow carrying out almost any action needed to set up and configure any IT system. All the deployment and configuration instructions are included in YAML files called playbooks. All the servers Ansible needs to work with must be included in an inventory file, which can be easily modified to add new servers to the deployment tasks. Ansible makes it possible to automate tasks without human intervention by configuring a set of standard files. It also makes it possible to use playbooks provided by third parties in order to replicate deployments by just setting system-specific parameters (e.g. IPs, users, passwords). It minimises the time it takes to replicate a system and reduces the risk of making mistakes.

2.2.1.1 DSL

Ansible provides a Domain-Specific Language (DSL) that is appropriate for infrastructure management, like Puppet, Chef or Salt. Ansible's power comes from its simplicity: you describe in a near human-readable YAML file the sequence of tasks to run on the remote machines. These tasks are then converted to shell commands and executed inside those machines.

2.2.1.2 Agentless

The major difference when compared against Puppet, Chef or Salt is that Ansible does not need to install any software agents on the remote managed devices: it establishes SSH sessions to the target machines, transfers the code to be executed (modules), runs it, returns the results and closes the connections. This is called the push approach, and it is a major argument in favour of using it for 5GTANGO deployments.

2.2.1.3 Inventory

The Inventory contains the list of hosts to manage and is stored, by default, in /etc/ansible/hosts. You can reflect the platform's topology by grouping your hosts. For example:

[vtu]
vtu01 ansible_connection=ssh ansible_user=vtu_user \
    ansible_ssh_private_key_file=~/.ssh/vtu-key.pem
vtu02 ansible_connection=ssh ansible_user=vtu_user \
    ansible_ssh_private_key_file=~/.ssh/vtu-key.pem


[vtc]
vtc01 ansible_connection=ssh ansible_user=vtc_user \
    ansible_ssh_private_key_file=~/.ssh/vtc-key.pem
vtc02 ansible_connection=ssh ansible_user=vtc_user \
    ansible_ssh_private_key_file=~/.ssh/vtc-key.pem

[vprx]
vprx01 ansible_connection=ssh ansible_user=prx_user \
    ansible_ssh_private_key_file=~/.ssh/prx-key.pem
vprx02 ansible_connection=ssh ansible_user=prx_user \
    ansible_ssh_private_key_file=~/.ssh/prx-key.pem

[vcdn:children]
vtu
vtc
vprx

When managing resources on an OpenStack VIM, Ansible provides a Python dynamic inventory script that discovers the deployed instances at run time, instead of having to write them to a static inventory file.
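A hedged usage sketch, assuming the contrib 'openstack.py' dynamic inventory script has been copied locally and the usual OpenStack credentials (OS_* environment variables or clouds.yaml) are configured:

$ chmod +x openstack.py
$ ./openstack.py --list                      # dump the discovered instances as a JSON inventory
$ ansible -i openstack.py all -m ping        # run an ad-hoc module against all discovered hosts
$ ansible-playbook -i openstack.py deploy-docker.yml -e target=all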

2.2.1.4 Modules

Ansible operations are executed through the concept of modules. Many Ansible modules natively encapsulate the property of idempotence, i.e. an operation can be executed multiple times without changing its result (an illustration follows the category list below). Examples of module categories are:

• Cloud modules - to manage cloud resources on different providers like Amazon, Azure and Openstack

• Command modules - to execute operating system command over local or remote machines

• Files modules - for file/directory manipulation like copy, move, remove, archive, unarchive, synchronize

• Inventory modules - to add hosts and group hosts to the Inventory

• Network modules - to manipulate devices from different manufacturers

• Packaging modules - for operating system package handling (pip, yum, apt, etc.)

• System modules - for Operation System specific commands like iptables, selinux, firewalld, ufw, make, modprobe, parted, service, sysctl

As of version 2.4, there is a module library of more than 750 modules.
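As a hedged illustration of a module call and of idempotence, the task below (from the files category) can be applied repeatedly: the first run reports 'changed', subsequent runs report 'ok' because the desired state already holds. The path and mode are illustrative.

- name: ensure the 5GTANGO data directory exists
  file:
    path: /opt/5gtango/data
    state: directory
    mode: '0755'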

2.2.1.5 Roles

Roles in Ansible are reusable components adhering to a common directory structure and naming convention. The Ansible Galaxy component is a simple way to generate a role structure:

$ cd roles
$ ansible-galaxy init myNewRole [--offline]


2.2.1.6 Playbooks

A Playbook is a YAML file that contains the desired state for a target machine or group of machines declared in the Inventory file. This YAML file is built of a header and roles. Roles call tasks and tasks call modules.

---
- name: deploy Docker engine to the target machine(s)
  hosts: "{{ target }}"
  become: true
  roles:
    - docker

To execute this playbook just run:

$ ansible-playbook deploy-docker.yml -e target=vcdn -v

Where ‘-e’ indicates that an external variable is passed to the playbook. Before applying the playbook to a critical environment, you can check it in dry-run mode, i.e. test the execution but do not apply the changes:

$ ansible-playbook deploy-docker.yml -e target=vcdn --check

2.2.1.7 Variables

A managed system is intrinsically different from another (e.g. it has a unique IP address or a particular version of a software component), but the desired operations can be similar (e.g. deploy the NGINX web server to two nodes). The best way to write clean roles and tasks without hard-coding these details is by passing variables to the playbook. The code is the same, only the variables' values change. Ansible is able to handle more complex types of variables like arrays and dictionaries. The example below will run different tasks depending on the target operating system distribution:

- include_tasks: "{{ ansible_distribution_release }}.yml"

2.2.1.8 Conditionals and Loops

An interesting (and almost unique) characteristic of the Ansible language is that it does not make use of the traditional “IF ... THEN ... ELSE” of most common programming languages. Instead, you declare when: variable == value for conditional decisions. The next example will install the right Apache package according to the target Operating System (a loop example follows it).

- name: install Apache2
  yum: name=httpd update_cache=yes state=latest
  when: ansible_os_family == "RedHat"
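Loops are expressed in a similarly declarative way. The hedged sketch below uses the Ansible 2.4 'with_items' syntax to install a list of packages on Debian/Ubuntu hosts; the package list is illustrative.

- name: install common utility packages
  apt:
    name: "{{ item }}"
    state: present
    update_cache: yes
  with_items:
    - git
    - curl
    - python-pip
  when: ansible_os_family == "Debian"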

2.2.1.9 Credentials

Ansible assumes SSH key pair authentication, but you can use the username/password method as well. Credentials can be passed inline with ad-hoc commands or inside the Inventory. Example:


[vcdn_hosts]
localhost ansible_connection=local
vtu01.sonata-nfv.eu ansible_connection=ssh ansible_user=sonata \
    ansible_ssh_private_key_file=~/.ssh/son-install.pem
vtc01.sonata-nfv.eu ansible_connection=ssh ansible_user=sonata \
    ansible_ssh_private_key_file=~/.ssh/son-install.pem
vprx01.sonata-nfv.eu ansible_connection=ssh ansible_user=sonata \
    ansible_ssh_private_key_file=~/.ssh/son-install.pem

2.2.1.10 Security

Ansible provides support to hide sensitive variables using Vault (based on the PyCrypto library). Vault is a feature of Ansible that allows keeping sensitive data, such as passwords or keys, in encrypted files rather than as plaintext in your playbooks or roles. These vault files can then be distributed or placed in source control. The vault feature can encrypt any structured data file used by Ansible, such as inventory variables, variable files or even Ansible tasks and handlers. It is even possible to encrypt arbitrary files, including binary files.
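Typical Vault operations are sketched below; the file names are illustrative.

$ ansible-vault create group_vars/all/vault.yml      # create a new encrypted variable file
$ ansible-vault encrypt secrets.yml                  # encrypt an existing plaintext file
$ ansible-vault edit group_vars/all/vault.yml        # edit the encrypted content in place
$ ansible-playbook deploy-docker.yml --ask-vault-pass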

2.2.1.11 Ad-hoc commands

In the next example, we query the Operating System of the local machine:

$ ansible localhost -m setup | grep ansible_distribution

In the next example, we are going to stop and destroy ALL the running Docker containers on ALL the hosts defined in the Inventory:

$ ansible all -i /etc/ansible/hosts -m shell -a 'docker stop $(docker ps -aq) && docker rm $(docker ps -aq)'

Obviously, the human effort to run this command against one machine is the same as running it against one hundred: the Inventory simply determines how many machines it applies to. In summary, Ansible provides an infrastructure as code (IAC) approach that enables consistent and predictable outcomes: you write once and execute many times with the same result.

2.2.1.12 New networking modules

As of version 2.4, there are more than 360 networking modules to address the most relevant network device vendors, like Cisco, Juniper, Arista, Huawei, A10, F5, Fortinet and Dell, but also open source solutions like VyOS (Debian-based routers resulting from a fork of Vyatta Core, created after the Brocade acquisition in 2013) and Open vSwitch.

2.2.2 APEX

Apex is an Ansible-based OpenStack deployment tool that makes use of a light OpenStack standalone machine to deploy a complex multi-node OpenStack infrastructure. Apex is based on the TripleO OpenStack project and relies on the Red Hat community project RDO. OPNFV, now a Linux Foundation project, provides tested Apex packages to automate an OpenStack deployment.


OpenStack on OpenStack (OoO, or "triple-O") means that you create a basic OpenStack environment (with just a reduced set of services like 'keystone', 'nova', 'neutron', 'cinder', 'glance' and 'heat') and use its capabilities to provision large private OpenStack environments. The light OpenStack machine is designated as the 'undercloud' and can be created in one of two ways:

• install the Undercloud bootable ISO image to a bare metal server or virtual machine, or

• install the Apex RPMs ('opnfv-apex-*.rpm') on a CentOS 7 machine.

The Undercloud machine is based on the Linux virtualisation group of tools (the 'libvirt' libraries and tools) and contains the images and configuration files to deploy multiple OpenStack environments, designated as the 'overcloud'. The OpenStack components in the Undercloud run as Docker containers. There are only 3 files to edit in order to deploy the designated topology:

• /etc/opnfv-apex/inventory.yaml - contains the out-of-band IP, MAC addresses, credentials and role of your bare metal servers

• /etc/opnfv-apex/network_settings.yaml - contains the network topology to implement, using two to five networks (control-plane/admin/pxe, private, api, storage, public/external)

• /etc/opnfv-apex/deploy_settings.yaml - sets the scenario to deploy, e.g. 'os-nosdn-nofeature-noha' or 'os-odl-sfc-ha'

Once these files are ready, just run 'opnfv-deploy' from the 'jump host' (undercloud) to deploy OpenStack to your environment in an automated way:

$ sudo opnfv-deploy -n network_settings.yaml -i inventory.yaml \
    -d deploy_settings.yaml

2.3 CI/CD Practices in NFV development

Figure 2.2: Overall CI/CD process for OSM

This section presents the current trends and concepts in CI/CD that are followed by the Open Source communities which are developing NFV related platforms (MANO, network automation (including MANO) and NFVI). Although the workflows and tools used for CI/CD may seem different, the concepts are the same as the ones expressed by 5GTANGO.


2.3.1 OSM

Open Source MANO (OSM) has also adopted CI/CD practices in the development of its platform components. The pipeline that has been adopted by OSM is illustrated in fig. 2.2. As can be observed, the pipeline is divided into four stages:

• Stage 1: Launch

This stage is triggered by Gerrit (a web-based code review tool) [6] upon a new code commit in the code repository. Multiple pipelines (one per module) are initiated and Stage 2 is called.

• Stage 2: Per module pipeline

Figure 2.3: Stage 2 of OSM CI/CD

This stage is operated within a Docker container and provides per-module call-backs:

1. License Scan (fossology - an open source license compliance software system and toolkit [12] )

2. Unittests

3. Package build

4. Artefact creation & storage (Artifactory - Enterprise universal artifact manager [8])

• Stage 3: System Integration

As illustrated in fig. 2.4, in this stage a system installation from binaries (artifacts created during Stage 2) is executed. The installed system interacts with emulated VIMs and smoke tests based on pytest fixtures are executed. This stage does not require an NFV Infrastructure, Virtual Infrastructure Manager (VIM), SDN Controller, etc. The tests leverage the OSM Client library and the VIM Emulator and check API calls, VNFD and NSD upload, etc.

• Stage 4: System Testing

This is the final stage of the development process. In this stage, an actual infrastructure is expected in order to conduct testing of the OSM deployment. A series of VNFs and system test descriptors are used in order to automate the testing process. When the code passes Stage 4 it is considered a stable release.


Figure 2.4: Stage 3 of OSM CI/CD

Figure 2.5: Stage 4 of OSM CI/CD

2.3.2 ONAP

ONAP is developing its CI/CD architecture using a mixture of open source tooling, providing an E2E infrastructure for testing an hourly or triggered master/tagged build for the purpose of declaring it ready in terms of health check and use case functionality. Currently, the CI/CD platform is hosted on a private AWS configuration. We can expect this to evolve to a more formal system as it matures. Fig. 2.6 provides an overview of the current status of the ONAP CI/CD process. Recently, ONAP announced its collaboration with OPNFV for deployment testing and VNF benchmarking over OPNFV based NFVIs. In this context, the communities are aligning their CI/CD tooling and approaches.

2.3.3 OPNFV

DevOps CI/CD methodologies are the backbone of the OPNFV development process. On a nightly basis, scenarios are built and deployed in an automated fashion to Pharos labs across the globe, on multiple hardware platforms. This level of built-in testing and automation enables network provisioning, speed, and technical diversity. The two initial releases of OPNFV focused on establishing the base DevOps infrastructure while assembling an initial set of NFV solutions. The third release, Colorado, focused on providing carrier-grade forwarding performance, scalability and open extensibility, along with functionality for realizing application policies and controlling a


Figure 2.6: ONAP overall CI/CD workflow

complex network topology. The fourth release, Danube, builds and integrates multiple end-to-end networking stacks, including MANO, data plane acceleration, and architecture advancements. The fifth release, Euphrates, delivers Kubernetes integration, XCI progress, new carrier-grade features, and improvements in testing, service assurance, MANO integration, performance, and security. OPNFV CI consists of and utilises several tools hosted and managed by the Linux Foundation (Jenkins, Gerrit) or available publicly (Google Cloud Storage, Docker Hub), the Jenkins jobs, wrapper scripts and common utilities developed, configured and maintained by the OPNFV Releng project, and the scripts and artifacts (ISO/RPM, Docker images, etc.) provided by the individual OPNFV projects. How these things fit together can be seen in the diagram illustrated in fig. 2.7.

Figure 2.7: Overall OPNFV CI/CD workflow

OPNFV CI follows a separation of concerns principle. This means that the main responsibility is pushed to individual OPNFV projects to automate deployment and testing. The CI Framework acts as glue between the hardware/tools and the actual deployment & testing to ensure things are

done in a certain way in order to enable ease of use/maintenance and increase the stability and scale. This also gives individual projects enough freedom to structure their pipelines depending on their needs with only a few rules to follow. Apart from the benefits to OPNFV itself, the structure of the OPNFV CI lets users bring up CI environments locally by replacing the Releng parts with their own such as Jenkins jobs, wrappers and so on.


3 Infrastructure and Testbeds

This section discusses the overall 5GTANGO infrastructure and describes each participating testbed. In general, all testbeds provide the following infrastructure components:

• WAN network

• Access network

• datacenter (computing resources for NFVI realisation)

• end user devices and services

The testbeds that contribute to the first deployment of the 5GTANGO infrastructure are:

• NCSRD’s in Athens, Greece

• Altice Labs’ in Aveiro, Portugal

• CTTC’s in Barcelona, Spain

• University of Paderborn’s, in Paderborn, Germany

• HUAWEI HQ’s, in Dublin, Ireland

Fig. 3.1 presents the testbeds and their interconnections. As can be observed, the Athens testbed is the hub providing the interconnection with all the testbeds. Those links are realised by using well known VPN technologies such as OpenVPN, IPSec, GRE and Cisco AnyConnect. The pilot sites will be interconnected to the main infrastructure in later stages, as soon as the vertical use case components are available for deployment. The pilot sites are hosted by: (i) Nokia; (ii) Weidmuller; (iii) Quobis and (iv) Nurogames. Each of the testbeds is presented in the following subsections.

3.1 Athens testbed

The Athens testbed is the main node of the 5GTANGO infrastructure and provides all the ancillary components for the 5GTANGO CI/CD approach. In this view, all testbed development operations need to go through the Athens testbed. However, as a given component evolves from the Integration and Qualification stages towards Staging, the other testbeds become more autonomous.

3.1.1 Topology

Fig. 3.2 presents the Athens testbed topology. The components comprising the hosting infrastructure are as follows:


Figure 3.1: Pilots and testbed overview

Figure 3.2: Overall Athens testbed topology


• SDN Switch - provides the networking infrastructure to be used for the connection of all the segments, and allows for the management and control of traffic forwarding across the infrastructure. The networking equipment used is, of course, accompanied by a number of non-SDN switches. In this deployment, a Pica8 SDN switch (1 Gbps) and a Dell S-4048T (10 Gbps) are employed.

• 2 NFVI-PoPs - the PoPs where VNFs are instantiated are represented by two PoPs: (i) a single-node all-in-one OpenStack deployment named PoP.1; and (ii) a multi-node OpenStack deployment named PoP.2.

• CI/CD tools - deployed on separate infrastructure based on ESXi (VMware), that hosts components and services used by the integration environment.

• Ancillary services segment - provides a collection of tools and performance and validation software.

• VPN endpoints connecting testbeds in other premises are also routed towards the central SDN switch.

• WAN Infrastructure Manager (WIM) - manages and controls the traffic across the infrastructure, focusing on interconnecting users, servers and PoPs.

• ODL Controller - control of the SDN networking equipment.

3.1.2 Hardware / Software availability

3.1.2.1 Computing resources

The server hardware consists of the following elements:

• One Dell R210 used as the Fuel jump host
  – 1 x Intel(R) Xeon(R) CPU X3430 @ 2.40GHz
  – 4GB RAM
  – 1TB HDD

• One Dell T5500 functioning as the Controller
  – 2 x Intel(R) Xeon(R) CPU [email protected]
  – 16GB RAM
  – 1.8TB HDD

• Three Dell R610 utilised as Compute Nodes
  – 2 x Intel(R) Xeon(R) CPU [email protected]
  – 64GB RAM
  – 1.8TB HDD

• One more Dell R310 used as NFVI-PoP
  – 1 x Intel(R) Xeon(R) CPU [email protected]
  – 16GB RAM
  – 465GB HDD


3.1.2.2 Networking Resources

The networking resources allocated for the NCSRD PoP Infrastructure are:

• A Dell PowerConnect 5524 Gigabit switch used for the storage network and for the cloud management.

• A Dell PowerConnect S-4048 10Gbps SDN switch

• A Pica8 SDN switch used to emulate the SDN WAN controlled by an OpenDayLight instance

The routing, firewall and access server equipment is based on a Cisco ASA 5510 series.

3.1.2.3 Storage Resources

As storage for the NCSRD pilot site, the Ceph plugin of OpenStack is utilised, a scalable storage solution that replicates data across nodes. Ceph is used on all controller nodes, each with a 1.8TB HDD, with a replication factor of 3. This means that all the Ceph agents have the same data stored on their disks, so that an automatic migration is possible.

3.1.3 Access to NCSRD Infrastructure

In order to access the NCSRD hosted infrastructure, the AnyConnect VPN or OpenConnect VPN client needs to be used. To access the infrastructure, partners must point their VPN client towards “https://vpn.medianetlab.gr” and log in with their given credentials.
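For example, with the open source OpenConnect client (which implements the AnyConnect protocol), the connection can be established from the command line; the client then prompts for the credentials provided to each partner.

$ sudo openconnect https://vpn.medianetlab.gr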

3.1.4 Scope in the frame of 5GTANGO

The scope of the Athens testbed in the frame of 5GTANGO is to continue supporting the work of the CI/CD process already started in SONATA, by hosting the following environments:

• Staging Environment for vertical network services development and testing.

• CI/CD tools supporting the DevOps approach followed in 5GTANGO.

• Integration environment for the integration of the 5GTANGO Service Platform.

• NFV Infrastructure for deployments and testing.

• WAN Infrastructure for WAN Infrastructure Management and slicing support.

3.2 Aveiro Testbed

The Aveiro testbed provides an OpenStack platform deployed with the OPNFV 5.0 APEX tool (the Euphrates release delivers the OpenStack Pike release). The testbed is interconnected via a site-to-site VPN connection with the Athens testbed. With the VPN remote access (RA) method, each partner is provided with a unique username/password pair to connect to an isolated 5GTANGO segment configured in the ALT intranet. The latter method is used when the Athens testbed is not available, but also by local ALT end users and developers. For the time being, direct connection from the Internet to 5GTANGO resources at ALT is not allowed, for security reasons as well as the limited pool of official Internet addresses; however, this can be provided after analysis by the ALT IT department.


Figure 3.3: VPN access to Aveiro testbed

3.2.1 Topology

The VPN access to the ALabs testbed for 5GTANGO and the OpenStack platform allocated to the 5GTANGO testbed partners are presented in fig. 3.3. The platform is built on three nodes, where each node assumes all three roles: control, compute and Ceph.

3.2.2 Hardware / Software availability

The ALabs testbed infrastructure consists of three Dell RX730 bare metal servers and one Pica8 P-3297 SDN switch. The Dell RX730 servers have the following characteristics:

• CPU: Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz

• RAM: 4*32 GB DDR4 (total memory: 128 GB)

• DSK: 3,75 TB of Available RAID Disk Space

• NET: Intel(R) 10G 2P X520 Adapter + Intel(R) 2P X540/2P I350 rNDC

Considering the three nodes with a dual socket of 10 cores per node (60 physical cores in total) and a conservative CPU over-provisioning ratio of 10, we would be able to instantiate nearly 600 'm1.small' flavour VMs (1 vCPU each), as long as there is enough memory. Considering also the limit of 384 GB of RAM (3 x 128 GB) and the 2 GB of RAM required per 'm1.small' instance, in fact we can host up to 192 VMs simultaneously.

3.2.3 Experimental scenarios

This infrastructure is appropriate to run NSs and VNFs developed on behalf of 5GTANGO. Integration, performance, security and conformance tests are also envisaged. Acting as a peer in distributed topology scenarios is foreseen as well.

3.3 Barcelona Testbed

The ADRENALINE testbed encompasses multiple interrelated although independent components and prototypes, to offer end-to-end services, interconnecting users and applications across a wide


Figure 3.4: The CTTC ADRENALINE Testbed for end-to-end 5G and IoT services

range of heterogeneous network and cloud technologies for the development and test of 5G and IoT services in conditions close to production systems. From a global perspective, ADRENALINE involves the following technologies/capabilities, as shown in fig. 3.4:

1. A fixed/flexi-grid DWDM core network with white box ROADM/OXC nodes and software-defined optical transmission (SDOT) technologies to deploy sliceable-bandwidth variable transceivers (S-BVTs) and programmable optical systems (EOS platform).

2. A packet transport network for the edge (access) and metro segments for traffic aggregation and switching of Ethernet flows with QoS, and alien wavelength transport to the optical core network.

3. A distributed core and edge cloud platform for the deployment of VNFs and VAFs. The core cloud infrastructure is composed of a core-DC with high-performance computing (HPC) servers and an intra-DC packet network with alien wavelength transport to the optical core network. The edge cloud infrastructure is composed of micro-DCs in the edge nodes and small-DCs in the COs.

4. An SDN/NFV control and orchestration system to provide global orchestration of the multi- layer (packet/optical) network resources and distributed cloud infrastructure resources, as well as end-to-end services (e.g. service function chaining of VNFs and VAFs) for multi-tenancy (i.e., network slicing).

5. Interconnection with other CTTC testbed facilities providing the wireless HetNet and backhaul (EXTREME Testbed and LENA LTE-EPC protocol stack emulator) and wireless sensor networks (IoTWorld Testbed).

3.3.1 Topology

• Optical core network


The optical core network includes a photonic mesh network with 4 nodes (2 ROADMs and 2 OXCs) and 5 bidirectional DWDM amplified optical links of up to 150 km (610 km of G.652 and G.655 optical fiber deployed in total). The optical core network is based on the SDN paradigm. The photonic mesh network nodes (i.e., ROADMs and OXCs) are controlled with an active stateful PCE (AS-PCE) on top of a distributed GMPLS control plane for path computation and provisioning. The AS-PCE acts as a unique interfacing element for the T-SDN orchestrator, ultimately delegating the dynamic lightpath provisioning and establishment of connections (termed Label Switched Paths or LSPs) to the underlying GMPLS control plane.

• Edge and metro packet transport network

The packet transport network leverages the statistical multiplexing nature of cost-effective OpenFlow switches deployed on COTS hardware and using Open vSwitch (OVS) technology. There are a total of ten OpenFlow switches distributed in the edge (access) and metro (aggregation) network segments. The edge packet transport network is composed of four edge nodes, providing connectivity to 5G base stations and IoT access gateways, and two OpenFlow switches located in the COs. The edge nodes are lightweight industrialised servers based on Intel Next Unit of Computing (NUC), since they have to fit in cell-site or street cabinets. The metro packet transport network is composed of 4 OpenFlow switches. The two nodes connected to the optical core network are packet switches based on OVS but with a 10 Gb/s XFP tunable transponder as alien wavelengths. Both the edge and metro segments are controlled with two OpenDayLight (ODL) SDN controllers using OpenFlow.

• Distributed edge and core cloud platform

The distributed core and edge cloud platform is composed of one core-DC, two small-DCs, and four micro-DCs, leveraging virtual machine (VM) and container-based technologies oriented to offer the appropriate compute resources depending on the network locations. Specifically, VM-centric host virtualisation, largely studied in the scope of large data centres, is used for the core-DC and small-DCs, and container-based technology, less secure but lightweight, for micro-DCs. The core-DC is composed of three compute nodes (HPC servers with a hypervisor to deploy and run VMs) and each small-DC of one compute node. The four micro-DCs are integrated in the edge nodes, together with the OpenFlow switch. The intra-DC packet network of the core-DC is composed of four OpenFlow switches deployed on COTS hardware and OVS technology as well. Two out of the four OpenFlow switches are equipped with a 10 Gb/s XFP tunable transponder connecting to the optical core network as alien wavelengths. The four OpenFlow switches are controlled by an SDN controller running ODL, responsible for the intra-DC network connectivity. The distributed cloud computing platform is controlled using three OpenStack controller nodes (Havana release), one for controlling the compute nodes (computing, image and networking services) of the core-DC, another for the two compute nodes of the small-DCs, and the last for the four compute nodes of the micro-DCs.

3.3.2 Software availability

• Cloud Orchestrator

On top of the multiple DC controllers we deploy a cloud orchestrator that enables the deployment of general cloud services (e.g. for VAFs) across the distributed DC infrastructure resources (micro, small, core) for multiple tenants. Specifically, the cloud orchestrator allows the creation, migration and deletion of VMs/containers (computing service), the storage of disk images (image service), and the management of the VM/container's network interfaces (networking service) on the required DCs for each tenant. In a scenario with multiple OpenStack controllers, the OpenStack API can be used as the southbound interface (SBI) of the cloud orchestrator, as well as the northbound interface (NBI) towards the tenants. We refer to this recursive hierarchical architecture as OpenStack cascading; it has been preliminarily deployed in [24].

• Transport SDN orchestrator

The transport SDN orchestrator (T-SDNO) acts as a unified transport network operating system (or controller of controllers) that allows the control (e.g., E2E transport service provisioning), at a higher, abstracted level, of heterogeneous network technologies regardless of the specific control plane technology employed in each domain, through the use of the common Transport API defined in [23]. The Transport API abstracts a set of control plane functions used by an SDN controller, allowing the T-SDNO to uniformly interact with heterogeneous control domains. The Transport API paves the way towards the required integration with wireless networks [28]. This abstraction enables network virtualization, that is, the partitioning of the physical infrastructure to dynamically create, modify or delete multiple co-existing virtual tenant networks (VTN), independent of the underlying transport technology and network protocols. The T-SDNO is also responsible for presenting to the tenants an abstracted topology of each VTN (i.e., network discovery) and for enabling the control of the virtual network resources allocated to each VTN as if they were real resources, through the Transport API. The conceived T-SDNO architecture is based on Application-based Network Operations (ABNO) [22].

• NFV orchestrator & VNF Managers

The network service orchestration is responsible for coordinating groups of VNF instances that jointly realize a more complex function (e.g. service function chaining), including the joint instantiation and configuration of VNFs and the required connections between different VNFs within the NFVI-PoPs. In our implementation, the interconnection of NFVI-PoPs is managed by the T-SDNO. Typical implementations of the NFV MANO NFVO and VNFMs are Open Platform for NFV (OPNFV) and Open Source MANO (OSM).

• Global Service Orchestrator

The Global Service Orchestrator (GSO) is deployed on top of the T-SDN orchestrator, the cloud orchestrator, and the NFV orchestrator. It is responsible for providing global orchestration of end-to-end services by decomposing each global service into cloud services, network services, and NFV services, and by forwarding these service requests to the Cloud Orchestrator, the T-SDN orchestrator and the NFV Orchestrator. The GSO can dynamically provide service function chaining by coordinating the instantiation and configuration of groups of cloud services (i.e., virtual machine/container instances) and NFV services (i.e., VNFs), and the connectivity services between them and the service end-points. For example, the GSO can request from the Cloud Orchestrator the provisioning of a virtual machine in the core-DC for the deployment of a VAF (e.g. IoT analytics), from the NFV orchestrator a VNF (e.g. a NAT) in the edge NFVI-PoP, and from the T-SDN orchestrator the required connections between the service end-point, the VNF and the virtual machine in a certain order (forwarding graph), in order to achieve the desired overall end-to-end functionality or service.


Figure 3.5: Overall topology of Paderborn testbed

3.3.3 Experimental scenarios

The envisioned scenario is the integration of the SONATA Service Platform as a Global Service Orchestrator. To this end, the SONATA Service Platform needs to be integrated with the Transport SDN Orchestrator, in order to provide inter-DC interconnection through the ADRENALINE optical core and metro networks.

3.4 Paderborn University Testbed

3.4.1 Topology

The Paderborn testbed consists of two different testbed installations. The first one is a completely virtualized installation hosted in Paderborn University's vCluster running VMware vSphere 5.1. The second installation is hosted on dedicated hardware that is exclusively used by the 5GTANGO project. Fig. 3.5 shows the overall topology of both testbed installations running at Paderborn University. Both testbeds are accessible by all project partners over a GRE tunnel connected to the Athens testbed installation. The Internet uplink of Paderborn University is realized with a redundant 4 Gbit/s connection to the Deutsches Forschungsnetz (DFN).

3.4.2 Hardware / Software availability

This section describes the setup of both testbed installations in more detail.

3.4.2.1 Virtual testbed

The virtual testbed is deployed on Paderborn University's VMware cluster and consists of three virtual machines.

Virtual service platform machine:

• 8x vCPU


• 16 GB RAM
• 120 GB HDD
• Ubuntu 16.04 LTS
• Host: tango-vsp.cs.upb.de

Virtual V&V machine:

• 8x vCPU
• 16 GB RAM
• 120 GB HDD
• Ubuntu 16.04 LTS
• Host: tango-vvv.cs.upb.de

Virtual OpenStack PoP:

• 8x vCPU
• 32 GB RAM
• 200 GB HDD
• Ubuntu 16.04 LTS
• Host: tango-vpop.cs.upb.de
• Nested virtualization enabled: Yes
• OpenStack Ocata (or later)

3.4.2.2 Physical testbed

The physical testbed provided by Paderborn University is located in one of the University's labs and consists of three physical machines with the following hardware specifications:

• Intel Xeon E5-1660 v3 (8 cores, 3 GHz, 20 MB cache, 140 W)
• 32 GB DDR4 RDIMM memory (4 x 8 GB), 2,400 MHz, ECC
• SATA SSD (Class 20), 256 GB
• SATA hard disk, 500 GB (7,200 rpm)
• 2x Gigabit Ethernet (PCIe, Intel)

The three machines are interconnected with two Cisco SG110D05 Gigabit switches. They are used to run a 5GTANGO Service Platform, a 5GTANGO V&V, and an OpenStack PoP installation (single node). They are reachable via the following host names:

• tango-sp.cs.upb.de
• tango-vv.cs.upb.de
• tango-pop.cs.upb.de


Figure 3.6: Overall Huawei testbed topology

3.4.3 Experimental scenarios

Due to the two separate testbed installations, the Paderborn testbed can serve a variety of experimental, test, and evaluation scenarios. The virtualized testbed can be quickly set up and destroyed, which makes it a good candidate to act as the integration infrastructure used by the project partners to integrate the different components of 5GTANGO. It can also be used to run functional tests against the developed components. However, since it shares its resources with other users of the cluster, it should not be used for performance tests. Another benefit of the virtualised testbed is that it can easily be extended when needed. It is, for example, possible to add additional virtual machines to host other MANO solutions, like OSM, for more advanced integration tests. The physical testbed, in contrast, provides resources that are exclusively used by the 5GTANGO project. Because of this, it is particularly suited to execute performance tests or profiling experiments. In addition, it can be used for pilot evaluations in later stages of the project.

3.5 Ireland Testbed

3.5.1 Topology

The Huawei testbed is a single virtualized installation hosted on Huawei's BlueZone network (an isolated environment intended specifically for use in open source collaborations). The testbed is accessible to all project partners over a Cisco-based IPsec tunnel linked to the Athens testbed. The testbed will provide an alternative Service Platform deployment, i.e. ONAP, which the project will integrate and use as a testing Service Platform in the V&V Environment. It will also deploy an NFVI-PoP to be used by ONAP for NS deployments.

3.5.2 Hardware / Software availability

The hardware supplied by Huawei for this testbed will consist of 4 RH2288 V2 Huawei servers, each with:

• 2 x 8 Core IvyBridge EP Xeon E5-2650

• 256GB Memory

• 4 x 1 Gig Ethernet


• 8 x 900 GB SAS disks

3.5.3 Experimental scenarios

The intention is to host and manage an ONAP deployment on top of a VIM (OpenStack) deployment. This deployment will be used as part of the qualification environment, where ONAP will be one of the potential target platforms.


4 5GTANGO Environments

5GTANGO component development may be categorised as follows:

• 5GTANGO Service Platform: The continuation of the SONATA Service Platform development, bringing forward new features and beyond-MANO approaches in network slicing, policies and service orchestration.

• 5GTANGO Validation and Verification: The associated platform offering automation for the validation and verification of Network Services and VNFs, supporting a DevOps methodology via the 5GTANGO SDK.

• 5GTANGO Verticals: The planned vertical use cases that will be developed, deployed and showcased via the 5GTANGO platform, i.e. the Service Platform, the SDK and the VnV altogether.

In order to support, on the one hand, the needs of the verticals, which expect a reasonably stable platform to work upon, and, on the other hand, the needs of the further development of the 5GTANGO SP, a number of environments are considered, as depicted in fig. 4.1. The Continuous Integration/Continuous Delivery workflow is composed of a set of environments with specific functionalities. For 5GTANGO, a pipeline comprising five environments distributed over multiple locations has been established. The complexity of deployments in several environments and locations demands a centralized management strategy. 5GTANGO adopts one of the most widely used configuration management tools, Ansible, to develop the deployment scripts. As presented in fig. 4.2, Jenkins acts as the engine of the pipeline, with one master and a farm of slaves. The developers push their code to GitHub, together with the Jenkinsfile in which the pipeline is defined as code. In this way, the mechanism established for handling the code comes from the developers themselves and is reviewable and auditable, and the Jenkinsfile represents a single source of truth for the pipeline, which can be viewed and edited by multiple members of the project. At the same time, a set of deployment playbooks is centralized in the tng-devops repository and is used to deploy specific configurations in each environment. Combining the power of Jenkins with the flexibility of Ansible yields the 5GTANGO CI/CD pipeline. Fig. 4.3 shows the deployment of environments in each location. To support the 5GTANGO environments, we have an initial architecture where all locations are connected to the Athens testbed (see Section 3, above).

4.1 5GTANGO Framework Development Environment

In 5GTANGO, the Development Environment is the stage where the developer creates the code, tests it and publishes it. This environment covers the Service Platform, the VnV and the SDK. It is one of the most heavily used environments because it constitutes the first iteration loop of the tests: the developer receives instant feedback from Jenkins if something went wrong with the deployment while preparing for integration. The Development Environment utilises the following components for all developers:


Figure 4.1: Overview of 5GTANGO Environments

Figure 4.2: DevOps-Pipeline


Figure 4.3: Infrastructure-Environments

• Jenkins: Runs the pipeline defined by the developers, with the unit tests, container builds, and style checks.

• GitHub: Controls the versions of the code.

• Ansible: Deploys the environments.

• Docker: Packages the application.

• Code: The code written by the developer in his/her preferred language.

Fig. 4.4 shows the components of the Development Environment and the usual workflow followed by the pipeline. Once the developer creates a Pull Request on GitHub, GitHub automatically fires a webhook that triggers a Jenkins job. This job builds, tests, publishes and deploys the container and, at the end of the job, notifies the developer whether it was successful or not. For smoke tests, 5GTANGO uses individual pre-integration servers for the Service Platform, the VnV and the SDK. Fig. 4.3 shows the location of the infrastructure for the Development Environment. The pre-integration servers for the Service Platform and the VnV are located on the Athens island, inside the Integration Environment. In the case of the SDK, the pre-integration server is located in Barcelona, inside the Integration Environment. Pre-testing is the stage of the Development Environment where the developers instantiate the latest version of their unstable code to run the smoke tests after the promotion of containers to the Integration Environment. Infrastructure Resources:

• Pre-integration SP VM

• Pre-integration VnV VM

• Pre-integration SDK VM


Figure 4.4: Development Environment

This environment requires a set of Ansible playbooks to deploy the platforms in pre-integration. These scripts are common for all environments and are customised with variables passed at deployment time (an invocation sketch is given after the variable list below). Variables for the deployment of this environment:

• Service Platform

– host=pre-int-sp-ath.5gtango.eu
– version=latest
– platform=sp

• VnV

– host=pre-int-vnv-ath.5gtango.eu
– version=latest
– platform=vnv

• SDK

– host=pre-int-sdk-ath.5gtango.eu
– version=latest
– platform=sdk
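As an illustration, the deployment of the pre-integration Service Platform with the variables listed above could be triggered with a command along the following lines. This is a hedged sketch: the playbook and inventory file names are assumptions made for the example, since the actual ones are maintained in the tng-devops repository.

#!/bin/bash
# deploy the pre-integration SP (playbook/inventory names are illustrative)
ansible-playbook -i environments/inventory.ini deploy.yml \
  -e "host=pre-int-sp-ath.5gtango.eu" \
  -e "version=latest" \
  -e "platform=sp"

The same playbook is reused for the other platforms and environments by changing only the host, version and platform variables.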

Deployment cycle:

• After every Pull Request


4.2 5GTANGO Framework Integration Environment

The Integration Environment is the stage where the developers perform the integration tests. One of its principal characteristics is that developers can use it to test the interaction of their code with other components, reducing complexity by providing a single place where all components are deployed and tested for all developers of the platform. In this environment, Jenkins, making use of Ansible, plays the central role, deploying the environment and performing the integration tests. The components of this environment are:

• Jenkins: Performs the deployment and integration tests

• Ansible: Contains the playbooks to deploy the integration environment

• Docker: The containers are available in the local Docker registry

• GitHub: The integration tests are versioned in GitHub

Infrastructure Resources:

• Integration SP VM.

• Integration VnV VM.

This environment uses a set of Ansible playbooks to deploy the platforms in integration. The following variables are passed to Ansible to deploy the integration environment:

• Service Platform

– host=int-sp-ath.5gtango.eu
– version=int
– platform=sp

• VnV

– host=int-vnv-bcn.5gtango.eu
– version=int
– platform=vnv

Deployment cycle:

• Daily

• On-demand


4.3 5GTANGO Framework Qualification Environment

This environment is used by the Service Platform and the V&V to perform qualification tests. In terms of code stability, this is the last environment where tests are performed in a quality loop with the developers. After the Qualification Environment, 5GTANGO has two more environments: Staging, for the VNF developers, and Demonstration, for demonstration purposes. One particularity of this environment is that the Qualification Service Platform is attached to the VnV platform of the Integration Environment. The reason for this link is to provide a stable version of the Service Platform for VnV development purposes. The Qualification Environment uses the following components:

• Jenkins: Performs the deployment in qualification as well as the qualification tests

• GitHub: Versioning the qualification tests

• Ansible: Supports Jenkins to deploy the qualification environment

• Docker: For image distribution to the qualification environment

Infrastructure Resources:

• Qualification SP VM

• Qualification VnV VM

Deploying the qualification environment requires a set of playbooks that are configured with the following variables:

• Service Platform

– host=qual-sp-bcn.5gtango.eu
– version=qual
– platform=sp

• VnV

– host=qual-vnv-bcn.5gtango.eu
– version=qual
– platform=vnv

Deployment cycle:

• Weekly

• On-demand


4.4 5GTANGO Framework Staging Environment

The Staging Environment in 5GTANGO represents the place where VNF developers can test their VNF images against stable SP, VnV and SDK platforms. The VNF developers need to create the Network Service descriptors and VNF descriptors in order to deploy their services before moving to the demonstration environment. The last round of failure reports comes from this environment; these reports are generated by the VNF developers when deploying their services. The platforms present in this environment are the SDK for NS composition, the VnV to test the VNFs and the Service Platform to deploy the NSs. Infrastructure Resources:

• Staging SP VM

• Staging VnV VM

• Staging SDK VM

• Moongen

The SP, VnV and SDK platforms are deployed by Jenkins using Ansible playbooks with the following variables:

Athens island:

• Service Platform

– host=sta-sp-ath.5gtango.eu
– version=sta
– platform=sp

• VnV

– host=sta-vnv-ath.5gtango.eu
– version=sta
– platform=vnv

• SDK

– host=sta-sdk-ath.5gtango.eu
– version=sta
– platform=sdk

Aveiro island:

• Service Platform

– host=sta-sp-ave.5gtango.eu
– version=sta
– platform=sp

• VnV


– host=sta-vnv-ave.5gtango.eu
– version=sta
– platform=vnv

• SDK

– host=sta-sdk-ave.5gtango.eu
– version=sta
– platform=sdk

Paderborn island:

• Service Platform

– host=sta-sp-pad.5gtango.eu
– version=sta
– platform=sp

• VnV

– host=sta-vnv-pad.5gtango.eu
– version=sta
– platform=vnv

• SDK

– host=sta-sdk-pad.5gtango.eu
– version=sta
– platform=sdk

Deployment cycle:

• Weekly
• On-demand

4.5 5GTANGO Demonstration Environment

The 5GTANGO Demonstration Environment is illustrated in fig. 4.5. The components that are deployed are similar to those of the Staging Environment used by the developers for the validation, verification and testing of their NSs. The Demonstration Environment and its components will be accompanied by automated deployment scripts so that they can be deployed and configured in the pilot sites foreseen by the project. It is anticipated that the lifecycle of the 5GTANGO Demonstration Environment will follow the development periods and releases of the main components of the 5GTANGO platform, i.e. V&V, SP and SDK. For the NFVI infrastructures that will be used, the installed software release will always be one release behind, for stability reasons. However, new NFVI implementations or upgrades of those currently used are foreseen throughout the duration of the project. Details related to the Demonstration Environment are released with deliverable D7.1. Moreover, depending on the vertical use case, the Demonstration Environment created for each case may differ in the number of components and their actual installation locations.


Figure 4.5: Demonstration Environment

4.6 5GTANGO Sandbox Environment

The Sandbox Environment covers a gap between the Development Environment, where the developer implements the service components, and the Staging Environment, where the developer tests the NS. The Sandbox Environment is essentially a close-to-production playground that allows early deployment and prototyping without the complexity of the Staging Environment and without dealing with the scripting requirements and methodologies imposed by the VnV. The Sandbox Environment offers two main capabilities to the developers:

1. Instantiation of VNFs in a manual way directly on an NFVI-PoP, providing also direct access to the APIs offered by the VIM (OpenStack/Heat); a minimal sketch is given after this list.

2. Instantiation of an NS through the 5GTANGO SP, but only on the NFVI-PoP provided to the Sandbox Environment.
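For capability 1, a minimal sketch of booting a VNF image manually on the Sandbox NFVI-PoP through the standard OpenStack CLI is given below; the image, flavour and network names are hypothetical placeholders and do not refer to actual 5GTANGO artefacts.

#!/bin/bash
# upload the VNF disk image to the sandbox VIM (names are illustrative)
openstack image create --disk-format qcow2 --container-format bare \
  --file my-vnf.qcow2 my-vnf-image
# boot the VNF directly on the sandbox NFVI-PoP
openstack server create --image my-vnf-image --flavor m1.small \
  --network sandbox-net my-vnf-instance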

The main components of the Sandbox Environment are:

1. An NFVI-PoP (single- or multi-node) realised by deploying the latest OPNFV or OpenStack releases.

2. An SFC-Agent used for applying the required VNF Forwarding Graphs (manually, through its API).

3. Dashboard for configuring tenant access and snapshot capabilities.

4. Usage guidelines and FAQ.


5 5GTANGO CI/CD Pipeline

This section details the 5GTANGO CI/CD pipeline.

5.1 Overview

The CI/CD Pipeline is composed of a set of steps illustrated in the figure below (fig. 5.1):

• Container build

• Unit tests

• Code Style check

• Publication of containers in local registry

• Deployment in integration

• Smoke tests

• Containers promotion to integration

• Reports

• Post Actions

• Jenkinsfile

For 5GTANGO, we use pipeline as code. The entire pipeline is declaratively described in a file, named Jenkinsfile, located in the root of the repository; this file is then used by Jenkins to execute the pipeline. The developer has the freedom to create the steps that he or she needs in the pipeline, and Jenkins will run it each time the developer pushes a commit. Inside the steps, some of the tasks can run in parallel: multiple containers can be built at the same time during the container build stage, and since the unit tests and the smoke tests are independent, they can be parallelised as well. Each step in the pipeline should be scripted. These scripts are collected in the GitHub repository in the pipeline directory, and the developer should also use them to test locally. The changes in the pipeline are versioned and can be audited. The next sections describe the different steps of the CI/CD pipeline in detail. Individual steps can be reordered or skipped depending on the developer's requirements.

Figure 5.1: CI/CD Pipeline


5.2 Container build

5GTANGO uses Docker containers as hosts for its software components. In this first step, Docker containers are built for all the components developed in the repository under test. Building a Docker container can be scripted as follows:

docker build -f Dockerfile -t registry.sonata-nfv.eu:5000/tng-supercontainer .

with

• docker build: The instruction to build the container.
• -f Dockerfile: The location of the Dockerfile.
• -t registry.sonata-nfv.eu:5000/tng-supercontainer: The name of the container. The first part is the internal Docker registry, and the second part is the container name.

• .: The build context, i.e. the folder from which files referenced by the Dockerfile are copied.

If multiple containers are to be built, several scripts can be used to create them in parallel (see the sketch below). After this step, all updated code is packaged in Docker images and can be tested.
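A minimal sketch of such a parallel build is given below; the Dockerfile and image names are placeholders and not actual 5GTANGO components.

#!/bin/bash
# build two containers of the same repository in parallel
docker build -f Dockerfile.api    -t registry.sonata-nfv.eu:5000/tng-example-api . &
PID_API=$!
docker build -f Dockerfile.worker -t registry.sonata-nfv.eu:5000/tng-example-worker . &
PID_WORKER=$!
# wait on each build so that a failure is reflected in the step's exit status
wait $PID_API || exit 1
wait $PID_WORKER || exit 1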

5.3 Unit tests

The unit tests are one of the stages that can be performed outside of Jenkins. An option for developers is to use Travis to execute unit tests; sec. A provides an example of unit testing. However, if the developers choose to use Jenkins, they should bear in mind these three essential rules:

1. Unit tests should be executed inside the container.
2. Reports should be generated and copied to a local volume within the Jenkins workspace.
3. If the unit tests fail, then the pipeline should be aborted and marked as FAILED.

The use of Docker has a significant advantage in designing and executing unit tests: the developer is not required to create mock-ups of each component his or her implementation depends on. For example, in the case of databases, building a mock-up is sometimes time-consuming. With Docker, it is quick and straightforward to simply start a Docker container with the database (MySQL, Postgres, Redis, etc.) and connect your container to it. This technique is known as "sidecar containers" and is supported by the Jenkins pipeline. The following steps have to take place to execute the unit tests correctly:

1. Deploy ancillary tools ("sidecar containers") / dependencies
2. Check if they are up and running
3. Start the container to be tested
4. Execute the unit tests and store the results
5. Clean the environment

sec. A provides an example of unit test execution and the related steps, making use of ancillary databases.


5.4 Code style check

To ensure that 5GTANGO provides readable code, we enforce style checks on every produced line of code during the pipeline. The code style checks are executed inside the container and should generate and store reports. An example of a script that evaluates the quality of Python code could be:

docker run registry.sonata-nfv.eu:5000/son-gtklic pep8 *.py > reports/checkstyle-gtklic.txt
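A variant of the command above mounts a volume so that the generated report lands directly in the Jenkins workspace and can be archived. This is a hedged sketch: the /app working directory inside the container is an assumption made for the example and may differ per component.

mkdir -p reports
docker run --rm \
  -v "$(pwd)/reports:/app/reports" \
  registry.sonata-nfv.eu:5000/son-gtklic \
  sh -c "pep8 *.py > /app/reports/checkstyle-gtklic.txt"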

5.5 Container publishing

Once a container passes the unit tests and the style check, we make it available for further testing in integrated environments. To this end, we first publish the containers to our local registry. Publishing containers can be done in parallel, through scripts such as:

docker push registry.sonata-nfv.eu:5000/son-gtksrv

Note that the container will be pushed with the tag latest.

5.6 Smoke testing

In the smoke testing phase, we deploy the new containers in an environment, called pre-integration, to perform some fast integration testing. These tests should check whether the APIs are correct, whether the containers can set up connections with other components, etc. The first step is to update the pre-integration environment so that it includes the new containers. 5GTANGO will use Ansible to facilitate this: Ansible playbooks will be made available that deploy, update and terminate 5GTANGO test environments. By executing the playbook that updates the pre-integration environment, the newly published containers replace their legacy versions and the environment is ready for the smoke tests. An example of a smoke test can be found in sec. B.

5.7 Containers Promotion to integration

When a PR is merged, Jenkins automatically runs the pipeline again using the master branch. In 5GTANGO, we take advantage of this behaviour by adding a condition: when the branch is master, the containers are promoted to integration using the tag int, and the integration environment is updated by deploying the new containers (a minimal promotion sketch is given below). This means that the containers moved to the integration environment have passed the unit tests and the smoke tests.
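An illustrative sketch of the promotion of a single container is shown below; the image name is a placeholder, not an actual 5GTANGO component.

#!/bin/bash
IMAGE=registry.sonata-nfv.eu:5000/tng-example
# re-tag the image that passed unit and smoke tests and push it, so the
# integration environment picks it up on its next deployment
docker pull $IMAGE:latest
docker tag  $IMAGE:latest $IMAGE:int
docker push $IMAGE:int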

5.8 Reports

In this step, we grab and expose the reports generated by the unit tests, smoke tests and the code style checks.


5.9 POST Actions

Once the job is finished, additional actions can be performed, like sending emails, triggering other jobs or performing additional cleanup. An example of a POST section in a Jenkinsfile can be:

post {
  success {
    emailext ( )
  }
  failure {
    emailext ( )
  }
}

5.10 Putting all together-Jenkinsfile

The Jenkinsfile is written in Groovy and must be located in the root folder of the repository. It is a declarative description of the entire pipeline:

pipeline {
  agent any

  stages {
    stage('Build') {
      steps {
        echo 'Building..'
      }
    }
    stage('unittest') {
      steps {
        echo 'Testing..'
      }
    }
    stage('stylecheck') {
      steps {
        echo 'Checking style..'
      }
    }
  }
  post {
    always {
      sh 'final_cleanup.sh'
    }
  }
}

The simplest way to create the Jenkinsfile is to use the Open Blue Ocean Jenkins plugin, which allows the developers to compose the pipeline graphically and then export it to a Jenkinsfile:


1. Log in to Jenkins

2. Go to Open Blue Ocean

3. Click on edit

4. Edit the pipeline using the graphical tools

5. Copy and Paste this pipeline into the Jenkinsfile


6 Preliminary deployment of 5GTANGO infrastructure components

This section discusses the preliminary deployment of 5GTANGO infrastructure components that are defined within the WP3, WP4 and WP5 workpackages.

6.1 Service Platform

The Service Platform (SP) implements the ETSI NFV Reference Model, namely the Management and Orchestration (MANO) capabilities. The SP can be deployed to a bare-metal server running one of several Linux distributions or to a virtual machine, using the 'son-install' Ansible playbooks or a Linux shell script, as done in the corresponding Jenkins job. A ready-to-use QCOW2 image of the SP is also provided at the project's FTP repository (http://files.sonata-nfv.eu/). The recommended flavour to run the SP is 2 cores, 4 GB memory and 40 GB disk. The SP is currently built of 38 Docker containers; some of them are published on the public Docker Hub (http://hub.docker.com) and the remaining images are kept in a private Docker repository (http://repository.sonata-nfv.eu:5000/, accessible to those having VPN credentials). This micro-service architecture is shown in fig. 6.1. The family of Gatekeeper micro-services deal with:

• User Management
• License Management
• Key Performance Indicators

The family of MANO micro-services deal with:

• the Life Cycle Management of a NS (SLM)

Figure 6.1: SP deployment


Figure 6.2: Validation and Verification deployment

• the Life Cycle Management of a VNF (FLM)

• the location where to deploy the VNFs (Placement)

The family of Infrastructure Abstraction micro-services deal with:

• the control of multiple VIMs (VIM-Adapter)

• the communication with multiple WIMs (WIM-Adapter)

The family of Monitoring micro-services deal with:

• the monitoring server

• the remote probes and the push gateway

The family of Repository micro-services deal with:

• the catalogues of NSs and VNFs

• the repository of packages and images

6.2 V&V Deployment Overview

The proposed physical deployment of the V&V is outlined in fig. 6.2. All management and entry functions, including the external interfaces, are within a single V&V Gatekeeper container. The responsibilities for all V&V VNF/NS test management, including the test lifecycle and the overall test execution status, are contained within the test life cycle manager (LCM).


Figure 6.3: SDK packages and tools

Test execution activities (i.e. test environment preparation, execution and results analysis against a specific test architecture such as TTCN-3) are self-contained in the Test Executor engine, while VNF/NS setup and any activities against the target SUT are maintained in platform management containers. The components for the catalogues & repositories are grouped into a single container. Each physical storage is maintained independently within its own container.

6.3 SDK Platform

5GTANGO provides a full-featured SDK consisting of various lightweight tools. Related tools are bundled in packages that are available in separate GitHub repositories. Fig. 6.3 provides an overview of some SDK packages and tools. 5GTANGO builds upon the results and releases of the SONATA project [1]. Wherever possible, existing SONATA repositories are reused and extended but keep the son- prefix (e.g., son-cli and son-emu). Repositories with completely new functionality (e.g., tng-descriptor-generator) or that have evolved and diverged considerably from SONATA (e.g., tng-schema) are labelled with the tng- prefix. The tools within a package complement each other and can be used together. Tools of different packages are mostly independent of each other and can be installed separately. All tools can be obtained easily from the GitHub repositories via the git clone command and provide installation and usage instructions. Developers can deploy the SDK and the involved components locally on their computers. The lightweight tools of the SDK can easily be installed and run on common laptops, supporting developers in creating, validating, and testing new network services and corresponding descriptors. Even though the SDK is designed for local deployment on developers' laptops, it is also deployed in 5GTANGO's CI/CD framework. This allows integration tests with other components to be performed and enables continuous development while ensuring compatibility with other 5GTANGO components.
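As an illustration, one of the SONATA-inherited tools mentioned above could be obtained and installed locally as sketched below. The repository URL assumes the sonata-nfv GitHub organisation, and the Python-based installation step is an assumption that may differ per tool; each repository ships its own instructions.

git clone https://github.com/sonata-nfv/son-cli.git
cd son-cli
pip3 install --user .   # assumed Python packaging; consult the repository's README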

6.4 Hybrid monitoring

5GTANGO introduces an innovative solution to maximize the benefits of the programmability of the 5G software network infrastructure by implementing a dedicated Validation and Verification (V&V) environment for testing new Network Services (NSs) before they are deployed in the production environment. The V&V testing processes, as well as the management of the deployed


NSs, require a flexible and expandable monitoring tool. 5GTANGO introduces a powerful monitoring framework which is based on state-of-the-art monitoring technologies and tools such as Prometheus.io, the Django REST framework, etc. In particular, the fulfilment of VNF-specific monitoring requirements demands the implementation of an HTTP API as well as a real-time mechanism that allows developers/users to monitor performance data related to their deployed NSs. At the same time, the monitoring system collects data from VNFs deployed on virtual machines and containers in different infrastructures. Additionally, in order to facilitate the resource orchestration process, 5GTANGO's monitoring system collects and offers information related to the available resources of the infrastructure, as mandated by VNF Placement. The proposed monitoring approach provides two different ways to collect monitoring data from an NS/VNF: the passive monitoring process, which gathers monitoring data generated by (default or custom) metrics already installed in each VNF, and the active monitoring process, which uses probes outside of the NS under test to generate and steer network traffic through the NS/VNF, in order to measure network performance (latency, bandwidth, etc.).

6.4.1 Passive monitoring

It is of paramount importance to collect monitoring data from as many sources as possible. In the implemented framework, there are four different types of sources for collecting data: 1) a container probe which runs inside the container-based VNFs to collect data related to their performance, 2) a VM probe that collects data from the virtual machines (VMs) hosting VNFs, 3) an OpenFlow probe, a Python software module that uses the OpenDaylight API to collect data from the OpenFlow controller, and 4) an OpenStack probe, also developed as a Python software module, that uses the OpenStack API to collect data from all OpenStack components. The collection of information from the above-mentioned components also addresses the requirement of the VNF Status Monitor, providing service status information (e.g. error state). Apart from offering an API to developers for pushing, collecting and processing monitoring data related to their deployed NS/VNF, the monitoring framework accommodates VNF-specific alerting rules for real-time notifications. In this respect, the 5GTANGO monitoring framework offers developers the capability to define service-specific metrics and rules, whose violation informs them in real time.
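A minimal sketch of how a passive-monitoring probe could push a custom, service-specific metric towards a Prometheus Pushgateway is given below; the gateway host name and the metric name are illustrative placeholders and not part of the documented 5GTANGO interfaces.

#!/bin/bash
PUSHGW=http://pushgateway.example.5gtango.eu:9091   # hypothetical endpoint
# push a single gauge value for one VNF instance (metric name is illustrative)
echo "vnf_active_sessions 42" | \
  curl --data-binary @- "$PUSHGW/metrics/job/vnf_probe/instance/vnf-1"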

6.4.2 Active monitoring

Network performance is a very critical issue for an NS under development, which must address all the requirements of a 5G environment. The monitoring system provides an on-demand mechanism which uses software (iperf, owamp, etc.) and hardware (MoonGen) components in order to measure the network performance of an NS. The active monitoring capability will be available in both the V&V and the production environments, but the evaluation of the networking performance of an NS during V&V is particularly important and will be one of the most critical evaluation points during the benchmarking process.
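As a simple illustration of such an on-demand measurement, an iperf3 client probe could be pointed at a server probe behind the NS under test; the host name and parameters below are illustrative only.

#!/bin/bash
# on the destination probe: iperf3 -s
# on the source probe: 10-second run, JSON output for the monitoring system
iperf3 -c target-probe.example.5gtango.eu -t 10 -J > iperf_result.json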

6.4.3 Monitoring system architecture

5GTANGO's monitoring solution complies with the scalability requirements dictated by 5G networks. One of the cornerstones of the monitoring framework implementation is therefore to deliver a carrier-grade solution that fulfils the scalability requirements in a multi-PoP environment. As can be seen in fig. 6.4, several components of the Monitoring Framework are distributed across the Points of Presence (PoPs). First, each PoP has its own websocket server to accommodate developers' demands for streaming data, although the management of websockets is handled by the Monitoring Manager instance in a centralized way.


Figure 6.4: Monitoring Framework architecture

Second, the Prometheus monitoring servers follow a distributed (cascaded) architecture. The local Prometheus servers collect and store metric data from the VNFs deployed in the PoP, while only the alerts are sent to the federated Prometheus server for further processing and forwarding to the subscribed users. Moreover, the alerting rules and notifications are based on monitoring data collected in different PoPs, and thus the decisions must be made at federation level. Another scalability requirement concerns the large flow of data from the monitoring probes to the monitoring server and its respective database, which might affect service performance in extreme cases. In this respect, an architectural decision to address this scalability issue was to support a distributed architecture for the monitoring server and its database, working in a cascaded fashion, along with proper modifications at component level. In particular, the monitoring probe does not send data to the monitoring server in cases where the value difference is less than a delta threshold defined by the developer. The same applies to the communication between the monitoring server within an NFVI and the monitoring server in the Service Platform.


7 Conclusions

This document establishes the importance of infrastructure availability and environment diversity in supporting the 5GTANGO activities. The project's exploitation of the proposed DevOps approach is twofold. On one side, the 5GTANGO system, comprised of i) an SDK platform, ii) a V&V platform and iii) a Service Platform, proposes a DevOps approach that is inherently supported by the aforementioned components. In this context, Network Service developers and testers exploit this process in their NS development, validation and deployment. On the other side, as the aforementioned components are still under development, the project itself internally adopts the same DevOps process in order to develop all the artefacts of its system. In this view, a multitude of environments, implemented over physical or virtual infrastructure elements, will be realised. These environments will be realised either across testbeds or replicated where required. Due to the large number and complexity of these environments, the 5GTANGO project will employ automation methods (i.e. Ansible) in order to make the deployments as seamless and error-free as possible. Each environment may be invoked and deployed automatically during development (this is valid for the integration environments) and/or be available to be used for validations, evaluations and demonstrations. As multiple roles and actors are anticipated to be active simultaneously, multi-tenancy is also a required feature in order to successfully allow operations within these environments. In order to make certain that the environments, even when hosted over the same physical infrastructure, do not affect each other, isolation mechanisms will be explored and enforced. The same mechanisms will also be exploited by the 5GTANGO slicing management components in order to deploy the vertical use cases at later stages. Taking into account the aforementioned concepts, this deliverable presents the infrastructure and testbeds that will be used for the realisation of the environments. It continues with the presentation of the various environments, their mapping onto the contributed infrastructure and their scope in the DevOps process. Finally, it provides guidelines and specifies the workflow for the CI/CD approach 5GTANGO will follow.


A Example of Unit Testing

This section describes an example implementation of unit tests.

A.1 Step1: Deploying ancillary tools

To execute the unit tests, the following process needs to be scripted and executed:

1. Pull the latest version of the container images from the docker hub.

2. The next two docker commands deploy the databases in detached mode.

3. Check if Mongo is up and running. This is done by deploying a bash container in the same network and using "nc" with a 10-minute timeout.

4. Lastly, the son-catalogue-repository container is deployed

At this moment the ancillary containers are deployed. The deployment script may look like:

pipeline/unittests/start-dependencies.sh

#!/bin/bash
docker pull sonatanfv/son-catalogue-repos:dev
docker pull mongo

### MONGO CONTAINER
echo mongo
if ! [[ "$(docker inspect -f {{.State.Running}} son-mongo 2> /dev/null)" == "" ]]; then
  docker rm -fv son-mongo
fi
docker run -d \
  --name son-mongo \
  --net=son-sp \
  --network-alias=son-mongo \
  --network-alias=mongo \
  mongo

### Is MONGODB UP?
docker run -i \
  --rm=true \
  --net=son-sp \
  bash -c 'echo "Testing if son-mongo is UP" & \
    timeout -t 600 bash -c "while ! nc -z son-mongo 27017; \
      do sleep 5 && \
      echo -n .; done;"'

### CATALOGUES CONTAINER

echo son-catalogue-repository
if ! [[ "$(docker inspect -f {{.State.Running}} son-catalogue-repos 2> /dev/null)" == "" ]]; then
  docker rm -fv son-catalogue-repos
fi
docker run -d \
  --name son-catalogue-repos \
  --net=son-sp \
  --network-alias=son-catalogue-repository \
  -e MAIN_DB=son-catalogue-repository \
  -e MAIN_DB_HOST=mongo:27017 \
  sonatanfv/son-catalogue-repos:dev

A.2 Step 2: Start the unit tests

pipeline/unittests/start-unittests.sh

#!/bin/bash
docker run -i \
  --rm=true \
  --net=son-sp \
  --network-alias=son-gtksrv \
  -e DATABASE_HOST=son-postgres \
  -e POSTGRES_PASSWORD=sonata \
  -e POSTGRES_USER=sonatatest \
  -e RACK_ENV=integration \
  -e MQSERVER=amqp://guest:guest@son-broker:5672 \
  -e CATALOGUES_URL=http://son-catalogue-repository:4011/catalogues/api/v2 \
  registry.sonata-nfv.eu:5000/son-gtksrv bundle exec rake db:migrate

docker run -i \
  --rm=true \
  --net=son-sp \
  --network-alias=son-gtksrv \
  -e DATABASE_HOST=son-postgres \
  -e POSTGRES_PASSWORD=sonata \
  -e POSTGRES_USER=sonatatest \
  -e RACK_ENV=integration \
  -e MQSERVER=amqp://guest:guest@son-broker:5672 \
  -e CATALOGUES_URL=http://son-catalogue-repository:4011/catalogues/api/v2 \
  -v "$(pwd)/spec/reports/son-gtksrv:/app/spec/reports" \
  registry.sonata-nfv.eu:5000/son-gtksrv bundle exec rake ci:all

Where:

1. The first docker command populates the database.

2. The second command starts the container in interactive mode (-i), so failures are reported, with the option --rm=true to delete the container after execution. It executes the tests and leaves the reports in the volume "$(pwd)/spec/reports/son-gtksrv:/app/spec/reports", i.e. the local folder and the path of our report.

Note that this path is needed to publish the results later.


A.3 Step 3: Clean up the environment

As a best practice, the resources occupied by our tests should be released. In this case, we need to delete the ancillary containers after the unit test execution.

pipeline/unittests/stop-dependencies.sh

#!/bin/bash
docker rm -fv son-postgres
docker rm -fv son-mongo
docker rm -fv son-catalogue-repos

The option -f is used to force the deletion and -v to delete the volumes generated on disk, releasing the occupied disk space.



B Smoke Testing

This is an example of a script that contains a smoke test. The test checks the APIs of the Gatekeeper component.

#!/bin/bash
# Create a user
NONCE=$(date +%s)
USER="sonata-$NONCE"
PASS="1234"
# returns {"username":"sonata","uuid":"9f107932-19b0-4e9e-87e9-3b0b2ec318a7"}

REGISTER=$(curl -qSfsw ' \n%{http_code}' -d \
  '{"username":"'$USER'","password":"'$PASS'"}' \
  $server:32001/api/v2/users)

RESP=$(curl -qSfs -d '{"username":"sonata-'$NONCE'","password":"1234"}' \
  http://$server:32001/api/v2/sessions)
token=$(echo $RESP | jq -r '.token.access_token')
echo "TOKEN="$token

SECONDS_PAUSED=1
curl -f -v http://$server:32001/api
curl -f -v -H "Authorization:Bearer $token" http://$server:32001/api/v2/packages
echo "Sleeping for $SECONDS_PAUSED..."
sleep $SECONDS_PAUSED
curl -f -v -H "Authorization:Bearer $token" http://$server:32001/api/v2/services
echo "Sleeping for $SECONDS_PAUSED..."
sleep $SECONDS_PAUSED
curl -f -v -H "Authorization:Bearer $token" http://$server:32001/api/v2/functions
echo "Sleeping for $SECONDS_PAUSED..."
sleep $SECONDS_PAUSED
curl -f -v -H "Authorization:Bearer $token" http://$server:32001/api/v2/requests
echo "Sleeping for $SECONDS_PAUSED..."
sleep $SECONDS_PAUSED
curl -f -v -H "Authorization:Bearer $token" http://$server:32001/api/v2/records/services
echo "Sleeping for $SECONDS_PAUSED..."


C Bibliography

[1] SONATA project. Website. Online at http://www.sonata-nfv.eu/.

[2] 6wind gate. Website. Online at http://www.6wind.com/products/6windgate/.

[3] Apex opnfv installer. Website. Online at https://wiki.opnfv.org/display/PROJ/Project+Proposals+Apex.

[4] Cisco nfv infrastructure. Website. Online at https://www.cisco.com/c/en/us/solutions/service-provider/network-functions-virtualization-nfv-infrastructure/index.html.

[5] Ericsson cloud execution environment. Website. Online at https://www.ericsson.com/ourportfolio/digital-services-solution-areas/cloud-execution-environment?nav=fgb_101_0363.

[6] Gerrit code review - product overview. Website. Online at https://gerrit-review.googlesource.com/Documentation/intro-quick.html.

[7] Hypervisor-agnostic docker runtime. Website. Online at http://hypercontainer.io/.

[8] Jfrog artifactory - enterprise universal artifact manager. Website. Online at https://jfrog.com/artifactory/.

[9] Kubernetes. Website. Online at https://kubernetes.io/.

[10] Neutron qos api models and extension. Website. Online at http://specs.openstack.org/openstack/neutron-specs/specs/liberty/qos-api-extension.html.

[11] Nfv report on mano and nfvi. Website. Online at https://www.sdxcentral.com/wp-content/uploads/2016/04/SDxCentral-Mega-NFV-Report-Part-1-MANO-and-NFVI-2016-B.pdf.

[12] Open source license compliance by open source software. Website. Online at https://www.fossology.org.

[13] Openstack wiki: Instance resource quota. Website. Online at https://wiki.openstack.org/wiki/InstanceResourceQuota#Bandwidth_limits.

[14] Redhat ansible. Website. Online at https://www.ansible.com.

[15] vcloud suite overview. Website. Online at https://www.vmware.com/products/vcloud-suite.html.

[16] What is nfv infrastructure (nfvi)? definition. Website. Online at https://www.sdxcentral.com/nfv/definitions/nfv-infrastructure-nfvi-definition/.

[17] 5GTANGO. Deliverable 2.1: Pilot definition and Requirements, 2017.


[18] 5GTANGO Project. D2.1 Pilots Definition and Requirements. Website, 2017. Online at http://www.5gtango.eu/project-outcomes/deliverables.html.

[19] 5GTANGO Project. D3.1 verification and validation strategy and automated metadata management. Website, 2017. Online at http://www.5gtango.eu/project-outcomes/deliverables.html.

[20] Ramon Casellas, Ricard Vilalta, Ricardo Martínez, Raül Muñoz, Haomian Zheng, and Young Lee. Experimental validation of the ACTN architecture for flexi-grid optical networks using active stateful hierarchical PCEs. In Transparent Optical Networks (ICTON), 2017 19th International Conference on, pages 1–4. IEEE, 2017.

[21] 5GTANGO Consortium. D2.2: Architecture Design.

[22] Daniel King and Adrian Farrel. A PCE-based architecture for application-based network operations. 2015.

[23] Victor Lopez, Ricard Vilalta, Victor Uceda, Arturo Mayoral, Ramon Casellas, Ricardo Martínez, Raul Muñoz, and Juan Pedro Fernandez Palacios. Transport API: a solution for SDN in carriers networks. In ECOC 2016; 42nd European Conference on Optical Communication; Proceedings of, pages 1–3. VDE, 2016.

[24] A. Mayoral, R. Vilalta, R. Muñoz, R. Casellas, R. Martínez, and V. López. Cascading of tenant SDN and cloud controllers for 5G network slicing using Transport API and OpenStack API. In Optical Fiber Communications Conference and Exhibition (OFC), 2017, pages 1–3. IEEE, 2017.

[25] MIRANTIS. Manage openstack with fuel and stacklight. Website, 2018. Online at https: //www.mirantis.com/software/openstack/fuel/.

[26] OIF. White paper: Virtual transport network services. 2017.

[27] ”The OpenStack Project”. Openstack: The open source cloud operating system. Website, July 2012. Online at http://www.openstack.org/.

[28] Ricard Vilalta, Arturo Mayoral, Jorge Baranda, Jose Nuñez, Ramon Casellas, Ricardo Martínez, Josep Mangues-Bafalluy, and Raul Muñoz. Hierarchical SDN orchestration of wireless and optical networks with E2E provisioning and recovery for future 5G networks. In Optical Fiber Communications Conference and Exhibition (OFC), 2016, pages 1–3. IEEE, 2016.

[29] Ricard Vilalta, Arturo Mayoral, Raul Munoz, Ramon Casellas, and Ricardo Martínez. Multi-tenant transport networks with SDN/NFV. Journal of Lightwave Technology, 34(6):1509–1515, 2016.
