Converged Heterogeneous Advanced 5G Cloud-RAN Architecture for Intelligent and Secure Media Access

Project no. 671704
Research and Innovation Action
Co-funded by the Horizon 2020 Framework Programme of the European Union

Call identifier: H2020-ICT-2014-1
Topic: ICT-14-2014 - Advanced 5G Network Infrastructure for the Future Internet
Start date of project: July 1st, 2015

Deliverable D4.2

Demonstrators infrastructure setup and validation

Due date: 30/06/2017
Submission date: 07/07/2017
Deliverable leader: Enrique García

Dissemination Level: PU - Public
PP: Restricted to other programme participants (including the Commission Services)
RE: Restricted to a group specified by the consortium (including the Commission Services)
CO: Confidential, only for members of the consortium (including the Commission Services)

List of Contributors

Participant | Short Name | Contributor
Fundació i2CAT | i2CAT | Shuaib Siddiqui, Albert Viñés, Adrian Rosello, Eduard Escalona, Javier Fernandez Hidalgo
APFutura | APFUT | Enrique García
InnoRoute | INNO | Andreas Foglar, Marian Ulbricht, Christian Liss, Phillip Dockhorn
National Centre for Scientific Research Demokritos | NCSRD | Eleni Trouva, Yanos Angelopoulos
ERICSSON | ERS | Carolina Canales
JCP-Connect | JCP-C | Yaning Liu
University of Essex | UE | Geza Koczian, Mike Parker
Cosmote | COSMO | Costas Filis, Georgios Lyberopoulos, Eleni Theodoropoulou, Ioanna Mesogiti, Konstantinos Filis
Intracom Telecom | ICOM | Konstantinos Katsaros, Vasilis Glykantzis, Konstantinos Chartsias, Dimitrios Kritharidis
Telekom Slovenije | TS | Pavel Kralj
Ethernity | ETHER | Eugene Zetserov
Altice Labs | AL | Victor Marques

Change history

Version | Date | Partners | Description/Comments
0.1 | 05/2017 | APFUT | ToC definition
0.2 | 9/06/2017 | ICOM, APF, NCSRD | Added Intracom, APFutura, NCSRD contributions
0.3 | 17/06/2017 | JCP-C, ERS | Added JCPC, Ericsson contributions
0.31 | 21/06/2017 | I2CAT, ETH, TS | Added i2CAT, Ethernity, TS contribution
0.32 | 25/06/2017 | ICOM, NCSRD, ETHER, INNO | Added contributions and modifications
0.4 | 26/06/2017 | JCP-C, APFUT, i2CAT | Added modifications
0.5 | 27/06/2017 | AL | Added Altice Labs contribution
0.5 | 28/06/2017 | UE, COSMO | First Review
0.5 | 02/07/2017 | UE | Second Review
0.6 | 06/07/2017 | APFUT | First Version
0.7 | 7/07/2017 | APFUT, INNO, ICOM | Detected and solved several style issues
1.0 | 7/07/2017 | APFUT | Final Version


Table of Contents

List of Contributors ...... 2
Change history ...... 3
1. Introduction ...... 9
2. CHARISMA demonstrators and field trials ...... 11
2.1. NCSRD Demonstrator ...... 11
2.1.1. Physical Level Architecture ...... 11
2.1.2. Logical Level Architecture ...... 15
2.1.3. Planned integration and interfacing ...... 21
2.2. APFutura (Centelles) field-trial ...... 51
2.2.1. Physical Level Architecture (Hardware) ...... 52
2.2.2. Logical Level Architecture (Software) ...... 55
2.2.3. Planned integration and interfacing ...... 57
2.3. TS field-trial ...... 63
2.3.1. Physical Level Architecture (Hardware) ...... 63
2.3.2. Logical Level Architecture (Software) ...... 66
2.3.3. Planned integration and interfacing ...... 67
3. Software deployment and configuration ...... 73
3.1. Control Management and Orchestration deployment ...... 73
3.1.1. Service Orchestration (TeNOR) ...... 73
3.1.2. Service Monitoring & Analytics ...... 73
3.1.3. Open Access Manager ...... 75
3.2. VNF ...... 75
3.2.1. IDS ...... 75
3.2.2. Firewall (FW) ...... 76
3.2.3. Cache Controller ...... 77
3.2.4. Cache ...... 79
4. Testing and Validation ...... 85
4.1. Testing tools selection and rationale ...... 85
4.1.1. Robot Framework ...... 85
4.1.2. OFDM-PON testing tools ...... 86
4.2. Hardware System testing ...... 86
4.2.1. TrustNode Testing ...... 86
4.2.2. MobCache testing ...... 113
4.2.3. Smart NIC testing ...... 116
4.2.4. OFDM Testing ...... 134
4.2.5. Fronthaul testing ...... 139
4.2.6. Optical wireless link testing ...... 141
4.3. Software components testing ...... 143
4.3.1. Control Management and Orchestration (CMO) components testing ...... 143
4.3.2. VNF Testing ...... 174
5. Conclusions ...... 184
References ...... 185
Acronyms ...... 186

List of Figures

Figure 1: Physical-level architecture of the NCSRD demonstrator (data plane) ...... 12
Figure 2: Physical-level architecture of the NCSRD demonstrator (control plane) ...... 13
Figure 3: Logical-level architecture of the NCSRD demonstrator ...... 15
Figure 4: Traffic steering at NCSRD for the security scenario. Client sending traffic to the network core. ...... 17
Figure 5: Traffic steering at NCSRD for the security scenario - Server sending traffic to the client. ...... 18
Figure 6: Traffic steering at NCSRD for the caching scenario - Client sending traffic to the network core. ...... 19
Figure 7: Traffic steering at NCSRD for the caching scenario - Server sending traffic to the client. ...... 20
Figure 8: SDN control flow sequence diagram ...... 32
Figure 9: Sequence diagram for the OAM - SDN controller - Backhaul interaction ...... 33
Figure 10: ODL and SDN-enabled Backhaul integration with NCSRD infrastructure ...... 34
Figure 11: Infrastructure at NCSRD premises ...... 34
Figure 12: Customer with C-VLAN 10 belonging to VNO with S-VLAN 1 ...... 35
Figure 13: Customer with C-VLAN 20 belonging to VNO with S-VLAN 2 ...... 35
Figure 14: Laptop 1 screenshot after deploying a service for VNO 1 (C-VLAN 10) ...... 36
Figure 15: Laptop 2 screenshot after deploying a service for VNO 2 (C-VLAN 20) ...... 36
Figure 16: CMO workflow for the establishment of a vCache peering service ...... 46
Figure 17: Simplified vCache peering setup. The vCC is omitted for simplicity ...... 46
Figure 18: Physical-level architecture of the APFutura demonstrator ...... 53
Figure 19: Physical-level architecture of the APFutura field trial (data plane) ...... 54
Figure 20: Physical-level architecture of the APFutura field trial (control plane) ...... 55
Figure 21: Logical level architecture for low latency ITS demo featuring a robot self-driving car ...... 56
Figure 22: Logical Level Architecture showing software for the APFutura field trial ...... 57
Figure 23: Interfacing between SmartNIC and OpenStack ...... 58
Figure 24: ODL SDN Controlled Caching System Architecture ...... 61
Figure 25: Physical-level architecture of the Telekom Slovenije field trial (inventory) ...... 64
Figure 26: Physical-level architecture of the Telekom Slovenije field trial (data plane) ...... 65
Figure 27: Physical-level architecture of the Telekom Slovenije field trial (control plane) ...... 65
Figure 28: Logical level architecture of the Telekom Slovenije field trial ...... 66
Figure 29: Linksys WRT1200AC OpenWRT enabled CPE device with dual LTE modems ...... 67
Figure 30: UML diagram of OpenWRT management interface – CPE ...... 68
Figure 31: UML diagram of the LTE DNS management interface – vDNS ...... 69
Figure 32: Embedded device providing interface to CMO ...... 72
Figure 33: OFDM-PON – interface to CMO ...... 72
Figure 34: OpenStack deployment and configuration for the virtual IDS VNF ...... 76
Figure 35: OpenStack deployment and configuration for the virtual firewall VNF ...... 76
Figure 36: The vCC structure ...... 77
Figure 37: Test output on powerline L-wire according DIN EN 61000-6-3 [13] ...... 87
Figure 38: Test output on powerline N-wire according DIN EN 61000-6-3 [13] ...... 87
Figure 39: Test output on powerline L-wire according DIN EN 61000-6-3 [13] ...... 88
Figure 40: DIN EN 61000 compliant test setup. The picture shows the TrustNode device on a horizontal turn-table placed in a shielded cabin ...... 89
Figure 41: Test output according DIN EN 61000-6-3 [13] ...... 90
Figure 42: Test output according DIN EN 61000-6-3 [13] ...... 91
Figure 43: Test setup for fast transients injection according to DIN EN 61000-6-3 ...... 92
Figure 44: TrustNode Hardware test setup. The picture shows the TrustNode device, which is connected to a packet generator and analyser. The analyser checks the packets for loss and Cyclic Redundancy Check (CRC) errors ...... 95
Figure 45: TrustNode with open case. See on top of the circuit board: the Ethernet PHY chips, one per Ethernet jack ...... 110
Figure 46: SmartNIC Test Generator ...... 116
Figure 47: Network Scheme ...... 128
Figure 48: OFDM-PON Testing Scheme ...... 134
Figure 49: Bitloading and subcarrier constellations for OFDM testing ...... 135
Figure 50: OLT DSP blocks influencing the SNR ...... 135
Figure 51: EVM over subcarrier at different locations in DSP chain ...... 136
Figure 52: PHY component test ...... 137
Figure 53: ONU DSP chain ...... 138
Figure 54: EVM for decoded carrier after ONU DSP for 11.5 to 12 GHz OLT band ...... 138
Figure 55: Ethernet Fronthaul SyncTest ...... 139
Figure 56: Ethernet Fronthaul Dual RU Test ...... 140
Figure 57: OW Link FieldTest ...... 141
Figure 58: Planned OW link at Altice Labs, Aveiro ...... 142
Figure 59: OW link node to be installed in Aveiro (1G capable) ...... 142
Figure 60: Robot framework HTML result output - MA target resource API test - Type: Server ...... 153
Figure 61: Robot framework HTML result output - MA target resource API test - Type: Network Device ...... 155
Figure 62: Robot framework result HTML output - MA data querying API test ...... 156
Figure 63: Robot framework result output - MA alert rule management API test ...... 157
Figure 64: SDN backhaul 1 ...... 165
Figure 65: Switch detection in ODL ...... 166
Figure 66: Topology detection in ODL ...... 166
Figure 67: Switch and ports in ODL ...... 166
Figure 68: Postman results for test description SDN_backhaul_2 ...... 168
Figure 69: Postman results for malformed request in test SDN_backhaul_3 ...... 169
Figure 70: Postman results for conflicting requests in test SDN_backhaul_3 ...... 169
Figure 71: SDN backhaul 4 ...... 170
Figure 72: SDN Backhaul 5 ...... 172
Figure 73: The wireless backhaul nodes connected with the RF cable link and the tester ...... 173
Figure 74: IDS VNF ...... 174
Figure 75: Robot framework result output – IDS VNF functionality test ...... 175
Figure 76: Firewall VNF ...... 176
Figure 77: Robot framework result HTML output – Firewall VNF functionality test ...... 177


Executive Summary

This deliverable D4.2 reports on the technical work and development carried out by all partners of the CHARISMA work package WP4 during the second year of the project, as they have designed the final 5G field trial demonstrators. Building upon the first year's work reported in deliverable D4.1 (Demonstrators design and prototyping), this document represents the work that has gone into implementing and deploying the CHARISMA technologies into the showcases and use cases that were proposed in the first year of the project.

In this document, we provide a global view and architecture of the deployments of all demonstrators, both physically and logically, to accommodate each use case. We also provide a short explanation of which CHARISMA features are being demonstrated in each 5G field trial and use case, these features being the objectives of the CHARISMA project: security, low latency, and open access. Next, we present one-by-one the CHARISMA components developed during the project that are being deployed in the field trials, providing details of the functionality, interfacing and configuration of each. We also cover all the software deployment for the demos, taking into account that the three field trials all run the same CHARISMA control, management and orchestration (CMO) system, but with different configurations according to the particular hardware setup of each demonstrator. Finally, we present all the testing and validation activities, covering both hardware and software, that each partner has performed in-situ to confirm correct integration into the field trial architecture.

The purpose of this document is to allow the reader to understand and be able to reproduce any aspect of each field trial and demo showcase. The analysis and validation of the results emerging from the 5G field trials and use case scenarios will be reported in deliverable D4.3 at the end of the project.

1. Introduction

This deliverable D4.2 provides a view of the CHARISMA project demonstrators' setup, the infrastructure employed for each demo use case, and how each device is integrated into its respective infrastructure. In addition, it provides a plan for the validation of the technologies in each of the demonstrators and of their performance in the designed 5G use cases. This document complements the information previously provided in deliverable D4.1 (Demonstrators design and prototyping), as well as that described in the accompanying deliverable D3.4 (Intelligence-driven v-security, including content caching and traffic handling), which also provides more detail on each software component appearing in this document.

The three CHARISMA demonstrators and field trials are located: 1) in Ljubljana, Slovenia, at the premises of Telekom Slovenije; 2) in Centelles (near Barcelona, Spain) at the premises of APFutura; and 3) in the laboratories of Demokritos (NCSRD) in Athens, Greece.

This document presents a detailed view of the infrastructure deployed in each field trial and the software used in each of the demonstrations. Each demonstrator works to showcase one or more of the key 5G features of the CHARISMA project, these being: security, low latency, and open access. In addition, the field trial demonstrators have been created so that each showcase demonstrates within a single environment context the multiple hardware and software solutions that have been specially designed and developed within CHARISMA. The objectives of each demo are:

• In NCSRD: Development of an end-to-end secure, multi-tenant, converged 5G network, via slicing of virtualized compute, storage and network resources to different service providers. Network intelligence (such as security and caching functions) is distributed out towards end-users over a hierarchical architecture, featuring optimized and secure cross-slice communications.

• In Telekom Slovenije: Demonstration of all three of the key CHARISMA objectives: multi-tenancy, open access and security. In particular, these three features are showcased by complementing the existing network with additional virtual network slices to provide an overall 5G network functionality. These slices can be used to serve users such as network operators, or to support Virtual Network Operators (VNOs) with their different requirements (energy aggregators, mobile virtual network operators (MVNOs), etc.).

• In APFutura: In this field trial, there are two 5G demonstrators being showcased, based upon low latency and the bus use case.
  o Low latency demonstrator: The goal of this demo is to demonstrate two of the key objectives of CHARISMA, low latency and open access, in a 5G networking context. This scenario simulates a robot that moves packages inside a warehouse and that is managed remotely by a controller at Converged Aggregation Level 3 (CAL3). Latency has an impact on how rapidly the control orders for safe operation of the robot arrive from the controller, so as to provide accurate and precise movement inside the warehouse.
  o Bus use case: The goal of this demo is to demonstrate the service availability and reliability that can be achieved using MoBcache devices in a 5G network. The demo simulates users inside a moving bus who request a video from a content server outside the operator's network. Other users requesting the same video will experience a reduced waiting time, because the request is being serviced from the caching device. The Control, Management and Orchestration (CMO) allows the caching to be isolated for each of the Virtual Network Operators.

This deliverable D4.2 is organized as follows. After the short introductory remarks of Chapter 1, Chapter 2 provides an overview of the CHARISMA field trials, with one subchapter for each of the three demonstrators. In these subchapters, the physical and logical architectures of the test beds for each demonstrator are described. In addition, for each demonstrator there is also a subchapter explaining the planned integration and interfacing required for each of the devices and components used in the particular testbed, each testbed having a different integration and implementation configuration of the CHARISMA software and hardware designs.

Chapter 3 presents all the deployed software and its configuration, starting with the CMO (Control, Management and Orchestration), which is formed by several subcomponents that have been integrated into a single package. The CMO manages the configuration of all the software and of each device within the CHARISMA infrastructure. The main components of the CMO are: the Virtual Network Function (VNF) Orchestrator (TeNOR), which is responsible for the lifecycle of each instantiation; the Open Access Manager (OAM), which manages users, slicing and device configuration; and the Monitoring & Analytics service, which monitors the servers, raises alerts, etc., and analyses all information so as to provide warning of problems before they occur. This chapter also covers the VNFs used in the CHARISMA infrastructure, their deployment and configuration. These VNFs include: the Intrusion Detection System (IDS), which manages the detection rules and reports to the Monitoring and Analytics service; the Firewall, which protects the network slices and the flows on them; the Cache Controller, which provides the management of caching services for a virtual network operator (VNO); and the Cache, which provides the caching and prefetching capabilities.

Chapter 4 is dedicated to the testing and validation of each hardware device used in the field trials, as well as of the software components, including the VNFs. The first part of the chapter is the subsection "Testing tools selection and rationale", where the external tools used for testing are described. Finally, conclusions are drawn in Chapter 5.

The aim of this deliverable D4.2 is to show how each CHARISMA demonstrator has been designed to exhibit a particular 5G use case, and to describe the required hardware infrastructure and the installation and configuration of the software components, so as to achieve a successful showcase. In so doing, we also indicate the evolution and development of the design and prototyping of the technologies described in the earlier deliverable D4.1.

At this stage of the project, we present some of the validation results that have been obtained for some of the technologies as they have been integrated into the field-trial demonstrators. Full validation of all components within their integrated field-trial contexts was not possible up to the point of writing of this deliverable. However, a full methodology has been developed for the validation and evaluation of all the technologies (presented in Chapter 4), and this provides the basis for the final validation of the field-trial and test results that will be undertaken in the final six months of the CHARISMA project. These will be presented in the final deliverable D4.3 of work package WP4 at the end of the project, along with an analysis of the final results in their 5G use-case scenario contexts.

2. CHARISMA demonstrators and field trials

In this chapter, we provide an overview of the CHARISMA demonstrators and field trials, at their locations in Athens, Centelles, and Ljubljana. For each location, we describe the physical and logical architectures of the test beds, and also explain the planned integration and interfacing required for all the devices and components.

2.1. NCSRD Demonstrator

To experimentally assess and evaluate many of the developments within WP3, we have developed a demonstrator at the NCSRD premises in Athens that allows experimentation with the CMO components, so as to showcase the CHARISMA objectives of security and open access. In particular, the NCSRD demonstrator aims at the development of an end-to-end secure, multi-tenant, converged 5G network, via slicing of virtualized compute, storage and network resources to different service providers. The CHARISMA features targeted for demonstration are security and multi-tenancy. Specifically, network intelligence, such as security and caching functions, is distributed out towards end-users over a hierarchical architecture. This is achieved through the deployment and orchestration of virtual security and caching services in the implemented Network Function Virtualisation (NFV) Infrastructure Points of Presence at the aggregation points of the network (CALs). An Intrusion Detection System (IDS), a firewall, a cache controller and a cache were implemented as Virtual Network Functions (VNFs) to be used in the demonstrations. The management of the developed VNFs is accomplished through the NFV Orchestrator. Co-operation and interaction of the developed security VNFs with the Monitoring and Analytics and Service Policy Manager components of the CMO is used for attack identification and mitigation. Finally, slice provisioning and optimized and secure cross-slice communication are features also demonstrated over the NCSRD demonstrator infrastructure. Slice allocation is realized through the communication of the OAM with the devices comprising the infrastructure.

The developed testbed is a 5G end-to-end experimental facility, showcasing a next-generation network that implements software-defined paradigms of future network evolution, such as Software Defined Networking (SDN) and Network Function Virtualisation (NFV), and demonstrates their strong integration with cloud and fog computing technologies.

2.1.1. Physical Level Architecture

Figure 1 illustrates the physical architecture and topology of the NCSRD demonstrator, indicating how it maps onto the converged aggregation level (CAL) architecture of CHARISMA. The experimental setup comprises five main parts: the core network, the backhaul network, the access network, mobile terminals connecting to the access network, and distributed cloud deployments with computing, storage and networking capabilities. We assume that the whole infrastructure is owned by a single infrastructure provider that leases slices of the entire infrastructure to Virtual Network Operators (VNOs), enabling them to offer services to their customers. Each slice comprises physical and virtual compute and network resources that can be isolated and dedicated for use by a VNO. The level of isolation of each resource varies depending on its nature.


Figure 1: Physical-level architecture of the NCSRD demonstrator (data plane)

The access network is composed of two types of wireless access network, based on Wireless Fidelity (WiFi) and 4G Long Term Evolution (LTE) technologies. The eNodeB features an external RF interface, allowing the connection of LTE User Equipment. A WiFi Customer Premises Equipment (CPE) also provides connectivity to the testbed for WiFi-enabled devices and terminals.

An SDN switch interconnects both access networks to the backhaul network. At this CAL1 aggregation point of the two access networks we implement a CHARISMA Intelligent Management Unit (IMU) with the addition of a compute node, in which virtual network functions (VNFs) associated with the existing VNO networks are running. As in the first-year setup for this demonstrator, to allow capturing of the packets that flow between the Evolved Packet Core (EPC) and the eNodeB, we include a server that performs GPRS Tunneling Protocol (GTP) encapsulation/decapsulation functions. This server is connected to an SDN switch that has access to the cloud infrastructure, allowing the forwarding of traffic to the deployed security functions. The backhaul part of the network is implemented using a wireless backhaul link with SDN management functionalities.

The core network of this testbed is composed of the EPC, running a standards-compliant implementation of an LTE stack, including the Mobility Management Entity (MME), Home Subscriber Server (HSS), Serving Gateway (SGW) and Packet Data Network Gateway (PGW) services. The core network is another point of the infrastructure at which we have decided to implement an IMU, with the addition of another compute node. Again, another SDN switch connects this IMU with the core network, allowing traffic re-direction to the virtualised services offered at CAL3. The implementation of the two IMUs at CAL1 and CAL3 is based on the deployment of cloud-infrastructure NFV Infrastructure Points of Presence (NFVI-PoPs) to support the implemented CHARISMA virtualized functions, aimed at providing enhanced security and caching functionalities.

A cloud controller for managing the cloud infrastructure has been deployed, enabling the management of the compute nodes at CAL1 and CAL3. Moreover, a separate cloud infrastructure has been set up to host the CHARISMA CMO services, which are responsible for the control, management and orchestration of all resources comprising the testbed. Finally, servers dedicated to running the services offered by VNOs to their customers, denoted as Application Servers, belong to the core network. These servers are connected through a standard switch, which also provides access to the Internet through interconnection to other devices (routers).

Figure 2: Physical-level architecture of the NCSRD demonstrator (control plane)

Figure 1 provides an overview of the entire setup of the NCSRD demonstrator, showing the data plane of the testbed and illustrating the flow of data between the different devices. Figure 2 provides an overview of the control plane of the testbed. As depicted in the figure, all devices comprising the NCSRD demonstrator are connected to a single switch that allows their management. In a real-world setup this would not be the case, since the devices comprising the testbed would be placed in different sites and distant locations (access, backhaul and core networks); many legacy switches and routers would then interconnect these devices and enable their management. However, the design and implementation of such a setup is out of the scope of this demonstrator, as our purpose is to demonstrate the WP3 developments and selected functionalities, such as security and multi-tenancy, which are not affected by the assumption of a single switch being used for device management.

The specifications of the physical devices used are enumerated in the table below.

Table 1: Specifications of the physical devices used in the NCSRD demonstrator

ID | Role | Vendor | CPU Model | CPU cores | RAM | Storage | Other Features
1 | User Equipment B (laptop) | SONY | Intel Centrino U7600 | 2 x 1.2GHz | 2 GB | 120 GB |
1 | User Equipment A (laptop) | Toshiba | Intel i5-4200M | 4 x 2.5GHz | 4 GB | 360 GB |
1 | User Equipment C (mobile phone) | Samsung Galaxy S4 | Exynos 5 Octa 5410 | 4 x 1.6GHz & 4 x 1.2GHz | 2 GB | 16 GB |
1 | 4G LTE USB adapter | HUAWEI | - | - | - | - | HUAWEI E3372h
2 | WiFi CPE | Linksys WRT120AC | - | 2 x 1.30GHz | 256 MB | 128 MB |
3 | eNodeB | HP | Intel i7-4790 | 8 x 3.6GHz | 8 GB | 500 GB | USRP B210 external RF interface
4 | SDN switch CAL1 | Turbo-X | Intel Core 2 Duo E6750 | 2 x 2.66GHz | 4 GB | 300 GB | 5 network interface cards
5 | SDN switch CAL3 | Turbo-X | Intel Pentium 4 | 2 x 2.60GHz | 2 GB | 300 GB | 5 network interface cards
6 | Backhaul node | | | | | |
7 | Backhaul node | | | | | |
8 | GTP encapsulation/decapsulation server | Turbo-X | Intel i5-4460 | 4 x 3.2GHz | 16 GB | 1 TB | 5 network interface cards
9 | Cloud Compute Node 1 | Turbo-X | Intel Core i7-7700 | 4 x 3.60GHz | 32 GB | 256 GB SSD |
10 | Cloud Compute Node 2 | Turbo-X | Intel Core i7-7700 | 4 x 3.60GHz | 32 GB | 256 GB SSD |
11 | Cloud Controller Node | Turbo-X | Intel Core i5-2500K | 4 x 3.30GHz | 16 GB | 1 TB |
12 | EPC | HP | Intel i5-2400 | 4 x 3.1GHz | 16 GB | 500 GB |
13 | Managed Switch 1 | Netgear GS716T | | | | |
14 | VNO application server | DELL | Intel i7-2600 | 4 x 3.8GHz | 16 GB | 320 GB |
15 | Managed Switch 2 (for management) | HP 2510-48 J9020A | - | - | - | - |
16 | CMO Cloud Infrastructure | Fujitsu | Intel Xeon X5677 | 8 x 3.47GHz | 96 GB | |

2.1.2. Logical Level Architecture

Figure 3: Logical-level architecture of the NCSRD demonstrator

Figure 3 provides a logical view of the NCSRD demonstrator setup with the software components that were installed. The logical level architecture comprises:

• The WiFi CPE running Open Wireless Receiver/Transmitter (OpenWRT) 15.05 (Chaos Calmer) software.
• The LTE testbed with EPC and eNodeB servers running Amarisoft software over Ubuntu 14.04 server.
• The UE (laptop), running software to perform a distributed denial of service (DDoS) attack (attack software) over Ubuntu 16.04 server. The laptop is connected to the LTE network through a Huawei 4G LTE USB adapter and thus the supporting driver from Huawei has been installed to allow LTE connectivity. Two additional UEs, a laptop running Ubuntu 16.04 and a mobile phone running Android, have been set up with the attack software.
• The GTP encapsulation/decapsulation server with Ubuntu 14.04 server running GTP encapsulation and decapsulation software based on the packet handling library PF_RING.
• Two SDN switches providing access to the NFVI-PoPs, at CAL1 and CAL3 respectively, which have been implemented using Open vSwitch software over Ubuntu 16.04.
• The cloud infrastructure, which is comprised of three different servers (Controller Node, Compute Node CAL1 and Compute Node CAL3) running the OpenStack Newton cloud platform in a "Provider networks" deployment scheme.
• The IDS VSF (Virtual Security Function), which is an OpenStack virtual machine (VM) with Ubuntu 16.04 server running the Snort, Barnyard2 and Snorby open source software.
• The firewall VSF, which is an OpenStack VM with Ubuntu 16.04 server running Open vSwitch.
• The cache VNF, which is an OpenStack VM with Ubuntu 16.04 server running Squid v3.5.20 open source software.
• The cache controller VNF, which is an OpenStack VM with Ubuntu 14.05 server running three programs: a cache controller daemon called JCP_CCACHEDAEMON, a web cache manager daemon called JCP_WEBCACHEMNGR and a daemon updating the database called JCP_UPDATABASE.
• The VNO application server with Ubuntu 16.04 server running the Lighttpd web server. A video application was created to be used for both the security and caching demonstrations.
• The backhaul nodes, in which the OpenFlow agent and data path abstraction applications have been deployed.
• The CMO cloud infrastructure server, which is based on VMware cloud and virtualisation software. Within this server the CHARISMA Control, Management and Orchestration (CMO) services are running, whose implementation was targeted in the WP3 tasks. Specifically, the CMO platform is comprised of the following services:
  o The TeNOR NFV Orchestrator, responsible for the lifecycle management of Network Services, and the VNF Manager (VNFM), responsible for the lifecycle management of individual VNFs.
  o The Open Access Manager (OAM), responsible for the creation of virtual slices.
  o The Monitoring and Analytics (M&A) component, which is responsible for performing metrics and notification acquisition from both physical and virtual resources of the infrastructure.
  o The Service Policy Management (SPM), which allows policy-based programming, automation and control of the underlying infrastructure.
  o The network controller (OpenDaylight), which is responsible for the management of network resources both inside and outside the NFVI-PoPs.

A detailed description of the software modules used for each of the CMO services is provided in Section 3 of this deliverable.

2.1.2.1. Slicing and Multi-tenancy over the NCSRD pilot infrastructure

An in-depth explanation of how the SDN capabilities of the CHARISMA architecture are used to efficiently operate virtualized Network Services in a multi-tenant environment is given below. Slicing of the different Virtual Network Operators (VNOs) consists of assigning two VLAN (Virtual Local Area Network) tags to each VNO slice: one is the main "VNO VLAN TAG", travelling through most parts of the network, and the other is the "VNO VLAN TAG pair", which is used only on certain parts of the service function chaining mechanism to diversify same-flow packets that need to be handled in a different manner. For each VLAN used, OpenStack needs to have a corresponding network with the same VLAN. OpenStack internally uses, in the integration bridge (br-int), a VLAN swapping mechanism. As a result, each VLAN that is external to OpenStack ("VNO VLAN TAG" and "VNO VLAN TAG pair") has an internal OpenStack VLAN counterpart; for the rest of the description this will be called "OSCP". Network services used in CHARISMA that require communication with the core network must have two network ports. Each port is attached to one of the aforementioned OpenStack VLAN networks. Custom Open vSwitch (OVS) rules can be applied on the SDN switches and on the compute node OVS bridges. These rules are defined by the Open Access Manager at network slice creation and then applied by the SDN controller managing the OVS bridges.

Traffic steering implementation at the NCSRD demonstrator for the security scenario

The following figure (Figure 4) provides information on the traffic steering applied to the devices comprising the NCSRD demonstrator for supporting the security scenario. Specifically, the traffic steering rules applied are illustrated for traffic directed from the User Equipment towards the core network.


Figure 4: Traffic steering at NCSRD for the security scenario. Client sending traffic to the network core. (Yellow arrows denote SDN rules applied at network slice creation).

1. A user-client is connected to the "OpenWRT wireless access point" using the VNO SSID (Service Set Identifier), which in OpenWRT is tagged with the VNO VLAN TAG. The user sends a request towards the core network (or the Internet).
2. The packet from the access point goes to "SDN switch 1", where it can either continue to the core network or be sent to the OpenStack Compute Node towards a Network Service. To follow the route to the cloud infrastructure a custom OVS rule is needed: "Incoming packets from port 1 with VNO VLAN TAG are sent to port 2 towards the OpenStack Compute Node".
3. The packet is then forwarded to the enp3s0 interface of the physical server hosting "OpenStack Compute Node 1" and gets to the "br-provider" OVS bridge. There, the packet is automatically forwarded to "br-int" ("int-br-eth1") by an OpenStack OVS rule.
4. In "br-int" another custom OVS rule is used, sending packets to the first port of the Network Service (NS): "Incoming packets from port "int-br-provider" with VNO VLAN TAG are sent to port "tapXX1" after their VLAN tag is stripped".
5. Logic can be applied on the packet inside the NS, so that the CHARISMA architecture allows packet forwarding towards the core network through the NS second port.
6. If the packet (or a new packet) needs to continue, it enters the "br-int" bridge from port "tapXX2", and a custom OVS rule sends packets to "br-provider": "Incoming packets from port "tapXX2" with OSCP VNO VLAN TAG pair are sent to port "int-br-provider" after swapping OSCP VNO VLAN TAG pair with VNO VLAN TAG pair".
7. In "br-provider" an OpenStack OVS rule is applied: "Incoming packets from port "phy-br-provider" with VNO VLAN TAG pair are sent to port "enp3s0"".
8. In "SDN switch 1" a custom OVS rule sends packets to the backhaul link: "Incoming packets from port 2 with VNO VLAN TAG pair are sent to port 3, swapping VNO VLAN TAG pair with VNO VLAN TAG".
9. The backhaul forwards the packet to "SDN switch 2". There, either steps 2-7 are repeated or the packet is sent directly to the router through port 3. Following the latter route, the appropriate custom OVS rule is: "Incoming packets from port 1 with VNO VLAN TAG are sent to port 3 removing the VNO VLAN TAG".
10. The router then sends the packet to the Internet and receives a response.

Figure 5: Traffic steering at NCSRD for the security scenario- Server sending traffic to the client. (Yellow arrows denote SDN rules applied at network slice creation).

Figure 5 shows the traffic steering rules applied for traffic directed from the core network towards the User Equipment.

11. The router response packet enters "SDN switch 2" from port 3 and a custom OVS rule forwards it to port 1: "Incoming packets from port 3 with the known client network Classless Inter-Domain Routing (CIDR) prefix are sent to port 1 adding VNO VLAN TAG".
12. The backhaul forwards the packet to "SDN switch 1". There the packet is sent to the OpenStack Compute Node. The custom OVS rule is: "Incoming packets from port 3 with VNO VLAN TAG are sent to port 2 swapping VNO VLAN TAG with VNO VLAN TAG pair".
13. In "br-provider" an OpenStack OVS rule is used to send packets to "br-int": "Incoming packets from port "enp3s0" with VNO VLAN TAG pair are sent to port "phy-br-provider"".
14. In "br-int" the custom OVS rule sends the packet to port "tapXX2": "Incoming packets from port "int-br-provider" with VNO VLAN TAG pair are sent to port "tapXX2" after their VLAN tag is stripped".
15. The packet enters the NS. Packets from the NS heading towards the clients must exit through the first port (eth1).
16. In "br-int" a custom OVS rule sends packets to the "int-br-provider" port: "Incoming packets from port "tapXX1" are sent to port "int-br-provider" adding the OSCP VNO VLAN TAG".
17. In "br-provider" the packet is automatically forwarded to the "enp3s0" port by an OpenStack OVS rule, swapping OSCP VNO VLAN TAG with VNO VLAN TAG.
18. In "SDN switch 1" the packet is forwarded to port 1 by the custom OVS rule: "Incoming packets from port 2 with VNO VLAN TAG are sent to port 1".
19. In "OpenWRT Access Point" packets with VNO VLAN TAG are sent to the appropriate user connected to the VNO SSID, stripping the VNO VLAN TAG.
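As a concrete illustration, the custom rules quoted in the steps above can be expressed as ordinary OVS flow entries. The following is a minimal sketch using the ovs-ofctl command-line tool; the bridge name br0, the port numbers and the example VLAN IDs (100 for the VNO VLAN TAG, 101 for its pair) are illustrative assumptions, and in the demonstrator such rules are pushed by the ODL controller via its REST API rather than entered manually:

# Step 2: packets arriving from the access point (port 1) with the VNO VLAN
# (assumed here to be 100) are steered towards the compute node (port 2).
ovs-ofctl add-flow br0 "in_port=1,dl_vlan=100,actions=output:2"

# Step 8: packets coming back from the compute node (port 2) carrying the
# VNO VLAN TAG pair (assumed 101) are re-tagged with the VNO VLAN and sent
# to the backhaul link (port 3).
ovs-ofctl add-flow br0 "in_port=2,dl_vlan=101,actions=mod_vlan_vid:100,output:3"

# Step 9 (on SDN switch 2): strip the VNO VLAN before handing the packet to
# the router (port 3).
ovs-ofctl add-flow br0 "in_port=1,dl_vlan=100,actions=strip_vlan,output:3"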

Traffic steering implementation at the NCSRD demonstrator for the multi-tenancy and caching scenario

Figure 6 illustrates the rules applied to the devices comprising the NCSRD demonstrator for steering the traffic for the needs of the multi-tenancy and caching scenario. We explain step-by-step the process for traffic directed from the User Equipment towards the core network. Each VNO is assigned one VLAN TAG, called "VNO VLAN TAG". For the VLAN used, OpenStack needs to have a corresponding network with the same VLAN. OpenStack internally uses, in the integration bridge (br-int), a VLAN swapping mechanism; for the rest of the description this will be called "OSCP". The cache network service used in the CHARISMA project requires only one network port, attached to the OpenStack VLAN network. Custom OVS rules are needed only on the SDN switches. These rules are defined by the Open Access Manager at network slice creation and then applied by the SDN controller managing the OVS bridges.

Figure 6: Traffic steering at NCSRD for the caching scenario - Client sending traffic to the network core. (Yellow arrows denote SDN rules applied at network slice creation).

1. A user-client is connected to the "OpenWRT wireless access point" using the VNO SSID, which in OpenWRT is tagged with the "VNO VLAN TAG". The user sends a request towards the core network (or the Internet).
2. The packet from the access point goes to "SDN switch 1", where it can either continue to the core network or be sent to the OpenStack Compute Node towards a network service. To follow the route to the cloud infrastructure a custom OVS rule is needed: "Incoming packets from port 1 with VNO VLAN TAG are sent to port 2 towards the OpenStack Compute Node and their destination MAC address is changed to that of the Cache, so that it can be intercepted by Squid".
3. The packet is then forwarded to the enp3s0 interface of the physical server hosting "OpenStack Compute Node 1" and gets to the "br-provider" OVS bridge. There, the packet is automatically forwarded to "br-int" ("int-br-provider") by an OpenStack OVS rule.
4. In "br-int", the packet is forwarded to all the corresponding access and trunk ports and therefore reaches the cache, due to the OVS configuration which is automatically done by OpenStack.
5. Inside the NS, Squid takes care of the packet, either by fetching the result from the origin server or by serving the client with cached content. If it needs to query the origin server, then the packet has as destination MAC address the MAC address of the default gateway (router). Otherwise, the packet has as destination MAC address the MAC address of the client.
6. In "br-int", the packet is forwarded to all the corresponding access and trunk ports, due to the OVS configuration which is automatically done by OpenStack, and it reaches "br-provider".
7. In "br-provider", the packet is forwarded again to all the corresponding access and trunk ports, due to the OVS configuration which is automatically done by OpenStack, and it reaches "SDN switch 1".
8. In "SDN switch 1", it is forwarded either to the OpenWRT Access Point, if its destination is the client, or to the backhaul, if the destination is the server. No custom OVS rule is needed. If there is no L2 rule to forward the packet based on its destination MAC address, then it is flooded in order to reach its destination (default behaviour).
9. The backhaul forwards the packet to "SDN switch 2". There, either steps 2-7 are repeated or the packet is sent directly to the router through port 3. Following the latter route, the appropriate custom OVS rule is: "Incoming packets from port 1 with VNO VLAN TAG are sent to port 3 removing the VNO VLAN TAG".
10. The router then sends the packet to the Internet and receives a response.

Figure 7: Traffic steering at NCSRD for the caching scenario - Server sending traffic to the client. (Yellow arrows denote SDN rules applied at network slice creation).

Figure 7 shows the traffic steering rules applied for the opposite direction, when traffic is directed from the core network towards the User Equipment.

11. The router response packet enters "SDN switch 2" from port 3 and a custom OVS rule forwards it to port 1: "Incoming packets from port 3 with the known client network CIDR are sent to port 1 adding VNO VLAN TAG".
12. The backhaul forwards the packet to "SDN switch 1". There the packet is sent to the OpenStack Compute Node, due to the OVS configuration, which is automatically done by OpenStack.
13. After reaching "br-provider", the packet is automatically forwarded to "br-int", due to the OpenStack OVS configuration.
14. In "br-int", the packet again follows the default behaviour dictated by the OpenStack OVS configuration.
15. The packet enters the NS. Squid takes care of caching and forwarding the packet, masquerading as the origin server.
16. In "br-int", the packet is forwarded to all the corresponding access and trunk ports, due to the OVS configuration which is automatically done by OpenStack, and it reaches "br-provider".
17. In "br-provider", the packet is forwarded again to all the corresponding access and trunk ports, due to the OVS configuration which is automatically done by OpenStack, and it reaches "SDN switch 1".
18. In "SDN switch 1" the packet will be forwarded to the client due to the default behaviour.
19. In "OpenWRT Access Point" packets with VNO VLAN TAG are sent to the appropriate user connected to the VNO SSID, stripping the VNO VLAN TAG.
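The cache-interception rule of step 2 differs from the security scenario only in the additional destination MAC rewrite. A minimal ovs-ofctl sketch is shown below; the VLAN ID 100 and the cache MAC address are placeholders used for illustration only:

# Steer VNO traffic from the access port towards the compute node (port 2) and
# rewrite the destination MAC to that of the cache VM so Squid can intercept it.
ovs-ofctl add-flow br0 "in_port=1,dl_vlan=100,actions=mod_dl_dst:fa:16:3e:00:00:01,output:2"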

2.1.3. Planned integration and interfacing

This section provides a more detailed description of the software components that have been installed and deployed for the purposes of the NCSRD demonstrator, and of how these are integrated and interact.

2.1.3.1. CHARISMA GUI – M&A integration

The CHARISMA graphical user interface (GUI) interacts with the Monitoring and Analytics (M&A) module to manage system users, monitored resources, alerting and visualization. Both software applications are deployed in separate virtual machines in the CMO cloud infrastructure and communicate through Representational State Transfer (RESTful) interfaces. The interfaces are defined in detail in deliverable D3.4, paragraph 2.3.3.2. For the data visualization part, M&A uses the Grafana software tool, which provides users the ability to create and save dashboards with graphs and charts. Grafana was embedded in the CHARISMA GUI application as an HTML iframe. In order to restrict users from accessing other users' dashboards, the following configuration was needed in the $WORKING_DIR/conf/defaults.ini configuration file, along with the centralized authentication mechanism of the CHARISMA GUI providing the same token for both the user and the Grafana back-end authentication.

[auth]
disable_login_form = true

[auth.generic_oauth]
enabled = true
client_id = APP_CLIENT_ID
client_secret = APP_CLIENT_SECRET
auth_url = https://…/authorize
token_url = https://…/oauth/token
allowed_domains = allowed_charisma_domain.com
allow_sign_up = true


Finally, when a web application requests a resource from a different domain, protocol, or port than its own, browsers by default restrict such communication. To allow the interconnection of the CHARISMA GUI and M&A, the Cross-Origin Resource Sharing (CORS) mechanism was used. It gives web servers cross-domain access controls, which enable secure cross-domain data transfers.
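As a simple illustration of the mechanism, the CORS behaviour of the M&A API can be checked with a pre-flight request from the command line. The host names below are placeholders for the GUI origin and the M&A endpoint, and the header values merely show the kind of response a CORS-enabled server is expected to return:

# Send a browser-style OPTIONS pre-flight request to a hypothetical M&A endpoint.
curl -i -X OPTIONS http://ma.charisma.local:8080/api/v1/targets \
     -H "Origin: http://gui.charisma.local" \
     -H "Access-Control-Request-Method: GET"

# A CORS-enabled M&A instance would answer with headers such as:
#   Access-Control-Allow-Origin: http://gui.charisma.local
#   Access-Control-Allow-Methods: GET, POST, PUT, DELETE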

2.1.3.2. M&A - SPM integration

The Monitoring and Analytics module and the Service Policy Manager (SPM) communicate when the former needs to inform the latter about the beginning or the end of an alert notification. Alert notifications occur when all of the conditions of an alert rule are true for a specified amount of time. This interaction takes place via RESTful interfaces. The interfaces are defined in detail in deliverable D3.4, Section 2.3.3.2.
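The exact message format is specified in D3.4; purely to illustrate the interaction pattern, a notification of this kind could be posted as a simple REST call such as the one below, where the SPM URL and all field names are hypothetical:

# Hypothetical alert notification from M&A to the SPM; "state" would be
# "firing" at the beginning of an alert and "resolved" at its end.
curl -X POST http://spm.charisma.local/api/alerts \
     -H "Content-Type: application/json" \
     -d '{"alert_rule": "cpu_load_high", "state": "firing", "resource": "compute-node-cal1", "timestamp": "2017-06-30T12:00:00Z"}'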

2.1.3.3. SPM - CHARISMA GUI

SPM is based on Ericsson's commercial policy manager implementation, being hosted on Ericsson premises and remotely accessible over the Internet. The interactions between the CHARISMA GUI and the SPM therefore happen remotely via HTTP REST calls over the open web. The main procedures that enable the interactions between the SPM and the CHARISMA GUI are described in detail in D3.4, and can be summarized as follows:
1. Create Security Policy: enables the creation and definition (by the infrastructure operator or by a particular tenant/VNO) of a new policy in the SPM.
2. Read Security Policy: allows the infrastructure operator or a particular tenant/VNO to read already existing policies in the SPM.
3. Update Security Policy: allows the infrastructure operator or a particular tenant/VNO to update an already existing policy in the SPM.
4. Delete Security Policy: allows the infrastructure operator or a particular tenant/VNO to delete an already existing policy in the SPM.
These four methods are currently being tested, in order to ensure proper interoperation between the SPM and the CHARISMA GUI.
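Since the SPM is reached over plain HTTP REST, the four procedures map naturally onto the standard HTTP verbs. The commands below are only a sketch of that mapping; the SPM host name, resource path and policy payload are placeholders and do not reflect the actual Ericsson API:

# 1. Create Security Policy
curl -X POST https://spm.example.com/policies \
     -H "Content-Type: application/json" \
     -d '{"name": "block-ddos", "tenant": "VNO1", "action": "deploy-firewall"}'

# 2. Read Security Policy
curl -X GET https://spm.example.com/policies/block-ddos

# 3. Update Security Policy
curl -X PUT https://spm.example.com/policies/block-ddos \
     -H "Content-Type: application/json" \
     -d '{"name": "block-ddos", "tenant": "VNO1", "action": "deploy-firewall-and-ids"}'

# 4. Delete Security Policy
curl -X DELETE https://spm.example.com/policies/block-ddos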

2.1.3.4. OAM - ODL integration

An initial integration phase between the OAM and OpenDaylight (ODL) was performed, where their interface was successfully tested using a Mininet infrastructure. A simple example of how to deploy Ethernet Virtual Private Line (EVPL) services for two VNOs, with one customer for each VNO, was provided. The VNOs were identified by Service VLANs (S-VLANs) 1 and 2 and the customers by Customer VLANs (C-VLANs) 10 and 20. The following is the list of requests sent to the REST application programming interface (API) of the SDN Controller:
1. Topology discovery
2. Create 2 Ethernet Virtual Connections (EVCs)
   a. 1st JavaScript Object Notation (JSON) – Create EVCs between two User Network Interfaces (UNIs) in locked state for two VNOs
   b. 2nd JSON – Create EVPL service for VNO 1
   c. 3rd JSON – Create EVPL service for VNO 2
3. Delete the Service

The following are the actual requests and responses of the process of provisioning slices on the backhaul through the SDN Controller:

*** Retrieve the network nodes, their interfaces and the links between them ***

1. Topology discovery.

• Headers:
  Content-type: application/xml
  Accept: application/xml
  Authentication: admin:admin (this is the default one and must be changed)
• Method: GET
• URL: https://:8181/restconf/operational/network-topology:network-topology
• Body (Response):

flow:1 openflow:1:2 openflow:1 openflow:1:2 openflow:2 openflow:2:2 openflow:2:2 openflow:2 openflow:2:2 openflow:1 openflow:1:2 openflow:2 openflow:2:1

/a:nodes/a:node[a:id='openflow:2']/a:node-connector[a:id='openflow:2:1'] openflow:2:2 /a:nodes/a:node[a:id='openflow:2']/a:node-connector[a:id='openflow:2:2'] openflow:2:LOCAL /a:nodes/a:node[a:id='openflow:2']/a:node-connector[a:id='openflow:2:LOCAL'] /a:nodes/a:node[a:id='openflow:2'] openflow:1 /a:nodes/a:node[a:id='openflow:1'] openflow:1:LOCAL /a:nodes/a:node[a:id='openflow:1']/a:node-connector[a:id='openflow:1:LOCAL'] openflow:1:1 /a:nodes/a:node[a:id='openflow:1']/a:node-connector[a:id='openflow:1:1'] openflow:1:2 /a:nodes/a:node[a:id='openflow:1']/a:node-connector[a:id='openflow:1:2']
• Status Code: 200 OK (successful response)


*** Deploy an EVPL service with two VNOs ***

2. Create 2 EVCs

1st JSON – Create EVCs between two UNIs in locked state for two VNOs.

• Headers:
  Content-type: application/json
  Accept: application/json
  Authentication: admin:admin (this is the default one and must be changed)
• Method: POST
• URL: https://:8181/restconf/config
• Body (Request):

{ "mef-ce2:attributes" : { "evc": [ { "evc-id": "evc:1", "admin-state": "LOCKED", "admin-root-svlan-id": "10", "cevlan-preserved": "true", "evc-type": "point-to-point", " point-to-point-unis":{ " point-to-point-uni": [ { "uni-id":"uni:1" }, { "uni-id":"uni:2" } ] } }, { "evc-id": "evc:2", "admin-state": "LOCKED", "admin-root-svlan-id": "20", "cevlan-preserved": "true", "evc-type": "point-to-point", "point-to-point-unis":{ "point-to-point-uni": [

CHARISMA – D4.2 – v1.0 Demonstators Infrastructure Setup and Validation Page 25 of 190 { "uni-id":"uni:1" }, { "uni-id":"uni:2" } ] } }

] ,

"uni": [ { "uni-id": "uni:1", "node": "openflow:1", "port": "openflow:1:1", "ce-vlan-untagged-ptagged": "10", "maximum-number-of-evcs": "2", "l2cp-address-set": "CTA", "bundling-multiplexing": "bundling-multiplexing", "bundling-multiplexing-evcs-per-uni":{ "bundling-multiplexing-evc-per-uni": [ { "evc-ref":"evc:1", "cevlan-map":"10" }, { "evc-ref":"evc:2", "cevlan-map":"20" } ] } }, { "uni-id": "uni:2", "node": "openflow:2", "port": "openflow:2:1", "ce-vlan-untagged-ptagged": "10", "maximum-number-of-evcs": "2", "l2cp-address-set": "CTA", "bundling-multiplexing": "bundling-multiplexing", "bundling-multiplexing-evcs-per-uni":{ "bundling-multiplexing-evc-per-uni":

CHARISMA – D4.2 – v1.0 Demonstators Infrastructure Setup and Validation Page 26 of 190 [ { "evc-ref":"evc:1", "cevlan-map":"10" }, { "evc-ref":"evc:2", "cevlan-map":"20" } ] } } ] } }

• Status Code: 204 No Content (successful response)

The values of the node(s) and the port(s) can be retrieved from the topology manager. The Infrastructure Provider (InfP) should also define the names of the UNIs ("uni-id") and of the EVCs ("evc-id"), the S-VLAN id of each VNO ("admin-root-svlan-id"), and the mapping between the EVCs and the C-VLANs ("evc-ref" – "cevlan-map"). Finally, the field "ce-vlan-untagged-ptagged" defines how untagged and priority-tagged frames will be handled. For administrative purposes, the EVCs of the VNOs are created locked; they can be unlocked by sending the appropriate requests below.

2nd JSON – Create EVPL service for VNO1

• Headers:
  Content-type: application/json
  Accept: application/json
  Authentication: admin:admin (this is the default one and must be changed)
• Method: PUT
• URL: https://:8181/restconf/config/mef-ce2:attributes/mef-ce2:evc/evc:1
• Body:

{ "evc":{ "evc-id": "evc:1", "admin-state": "UNLOCKED", "admin-root-svlan-id":"10", "cevlan-preserved": "true", "evc-type": "point-to-point", " point-to-point-unis":{ "point-to-point-uni": [

CHARISMA – D4.2 – v1.0 Demonstators Infrastructure Setup and Validation Page 27 of 190 { "uni-id":"uni:1" }, { "uni-id":"uni:2" } ] } } }  Status Code: 200 OK (successful response)

3rd JSON – Create EVPL service for VNO 2

• Headers:
  Content-type: application/json
  Accept: application/json
  Authentication: admin:admin (this is the default one and must be changed)
• Method: PUT
• URL: https://:8181/restconf/config/mef-ce2:attributes/mef-ce2:evc/evc:2
• Body:

{ "evc":{ "evc-id": "evc:2", "admin-state": "UNLOCKED", "admin-root-svlan-id":"20", "cevlan-preserved": "true", "evc-type": "point-to-point", " point-to-point-unis":{ " point-to-point-uni": [ { "uni-id":"uni:1" }, { "uni-id":"uni:2" } ] } } }  Status Code: 200 OK (successful response)

*** Delete EVPL Service ***

3. Delete the Service
• Headers:
  Content-type: application/json
  Accept: application/json
  Authentication: admin:admin
• Method: DELETE
• URL: https://:8181/restconf/config/mef-ce2:attributes

Using the same principles, the OAM can request the provisioning of any number of slices through the SDN Controller.
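For reference, the request specifications above translate directly into plain HTTP calls. As a sketch, the topology discovery (step 1) and the final service deletion (step 3) could be issued as follows, where <controller-ip> is a placeholder and the default admin:admin credentials should be changed as noted above:

# Step 1: topology discovery
curl -u admin:admin -X GET \
     -H "Accept: application/xml" \
     https://<controller-ip>:8181/restconf/operational/network-topology:network-topology

# Step 3: delete the EVPL service
curl -u admin:admin -X DELETE \
     https://<controller-ip>:8181/restconf/config/mef-ce2:attributes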

2.1.3.5. OAM - Slice devices (e.g. WiFi CPE, EPC, eNB)

The EPC configuration request must be completed successfully before proceeding with the evolved Node B (eNB) configuration request. Sending a new request deletes previous configurations.

EPC configuration API

The following request configures EPC to send packets tagged with the specified VLAN.

Method: POST
URL: epc/configure
Headers: Content-Type: application/json
Request Body:
{
  "VNO_id": 1,
  "VLAN_tag": 10
}
Returns: Success: 201 - Failure: 400, 401, 404, 500

eNB configuration API

The following request configures eNB to send packets tagged with the specified VLAN.

Method: POST
URL: enb/configure
Headers: Content-Type: application/json
Request Body:
{
  "VNO_id": 1,
  "VLAN_tag": 10
}
Returns: Success: 201 - Failure: 400, 401, 404, 500
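As an illustration, the two configuration calls above could be issued by the OAM as follows; the host and port of the EPC and eNB configuration agents are placeholders, while the JSON bodies follow the tables above:

# Configure the EPC slice mapping for VNO 1 with VLAN 10
curl -X POST http://<epc-agent>:<port>/epc/configure \
     -H "Content-Type: application/json" \
     -d '{"VNO_id": 1, "VLAN_tag": 10}'

# Then configure the eNB for the same VNO and VLAN
curl -X POST http://<enb-agent>:<port>/enb/configure \
     -H "Content-Type: application/json" \
     -d '{"VNO_id": 1, "VLAN_tag": 10}'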

WIFI CPE

The WiFi CPE is a router running an OpenWRT image. OpenWRT is an open-source, Linux-based embedded operating system project, mostly installed on devices that route network traffic. It has a web interface, but it can also be configured through the command line. In the context of CHARISMA, OpenWRT is used to provide traffic isolation between the different slices of the system. This traffic isolation is achieved by configuring network interfaces with a specific VLAN for each slice. This can be done by editing the configuration file located at /etc/config/network. For example, for creating a slice with VLAN 1043, the following configuration should be added to the mentioned file:

config interface 'lan1043'
    option ifname 'eth1.1043'
    option type 'bridge'
    option proto 'dhcp'

After that, the WiFi interface needs to be configured. For that, the /etc/config/wireless file needs to be modified to include:

config wifi-iface
    option device 'radio0'
    option mode 'ap'
    option isolate '0'
    option bgscan '0'
    option wds '0'
    option ssid 'CHARISMA_1043'
    option network 'lan1043'
    option encryption 'psk2'
    option key 'char1234'
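The same slice configuration can also be applied from the OpenWRT command line with the UCI tool and then activated by reloading the network and wireless services. The sketch below assumes the VLAN 1043 slice shown above:

# Equivalent UCI commands for the /etc/config/network entry
uci set network.lan1043=interface
uci set network.lan1043.ifname='eth1.1043'
uci set network.lan1043.type='bridge'
uci set network.lan1043.proto='dhcp'
uci commit network

# Apply the changes
/etc/init.d/network restart
wifi    # reloads the wireless configuration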

2.1.3.6. ODL - SDN switches integration

Integrating the OpenDaylight controller with the SDN switches of the NCSRD testbed allows OpenFlow rules to be applied to the SDN switches of the whole architecture from one centralized unit, the ODL controller. The SDN switches that need to be controlled are depicted in Figure 1 with numbers 4 and 5. The OpenFlow rules are applied on the internal Open Virtual Switch (OVS) bridges. In each SDN switch there is only one bridge, named "br0". Additionally, two more OVS bridges per OpenStack compute node are controlled by the ODL controller. The OpenStack compute node bridges are named "br-int", which is responsible for internal traffic integration by VLAN tag manipulation, and "br-provider", which assigns appropriate VLAN tags to egress traffic. In order to manage the OpenFlow switches, the OpenDaylight controller requires the "odl-openflowplugin-all" feature, which is installed with the following command:

karaf#>feature:install odl-openflowplugin-all

Each bridge is identified by a Datapath Identifier (DPID), a 64-bit number defined by the OpenFlow specification. First, we ensure that each switch has a different DPID, e.g. 00:00:00:00:00:00:00:08 and 00:00:00:00:00:00:00:09. Then we connect them to the SDN controller:

e.g. ovs-vsctl set-controller br0 tcp:x.x.x.x:6633

After all the connections have been established, the REST API of the ODL controller can be used to manage flows on the bridges. Available interactions are “Create” and “Delete” flows:

 Create flows

In the request URL the following parameters must be defined: the OpenFlow switch name (there is a 1-to-1 mapping between the DPID and the OpenFlow node name), the OpenFlow table and the number of the OpenFlow entry. The format of the body defines the match and action fields.

Method: PUT
URL: http://:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/0/flow/1
Headers:
    o Content-Type: application/xml
    o Accept: application/xml
    o Authentication

Request Body:
<flow xmlns="urn:opendaylight:flow:inventory">
    <strict>false</strict>
    <instructions>
        <instruction>
            <order>0</order>
            <apply-actions>
                <action>
                    <order>0</order>
                    <dec-nw-ttl/>
                </action>
            </apply-actions>
        </instruction>
    </instructions>
    <table_id>2</table_id>
    <id>124</id>
    <cookie_mask>255</cookie_mask>
    <installHw>false</installHw>
    <match>
        <ethernet-match>
            <ethernet-type>
                <type>2048</type>
            </ethernet-type>
        </ethernet-match>
        <ipv4-destination>10.0.1.1/24</ipv4-destination>
    </match>
    <hard-timeout>12</hard-timeout>
    <cookie>1</cookie>
    <idle-timeout>34</idle-timeout>
    <flow-name>FooXf1</flow-name>
    <priority>2</priority>
    <barrier>false</barrier>
</flow>

Returns Success: 204 - Failure: 400, 401, 404, 500

 Delete flows

The delete request uses the same syntax as above, but with the DELETE method.

Method: DELETE
URL: http://:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/0/flow/1
Headers:
    o Content-Type: application/xml
    o Accept: application/xml
    o Authentication

Returns Success: 204 - Failure: 400, 401, 404, 500

In this way, the ODL controller is able to enforce all the flows described in Section 2.1.2, which are necessary for VNO slice creation and dynamic traffic manipulation.
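For completeness, a minimal Python sketch of driving the create/delete flow calls above through the ODL RESTCONF API is given below; the controller address and the default admin credentials are placeholders, and flow_xml stands for the XML request body shown in the "Create flows" table.

import requests

ODL_URL = "http://<controller-ip>:8181"        # placeholder controller address
AUTH = ("admin", "admin")                      # default credentials, to be changed
FLOW_URL = (f"{ODL_URL}/restconf/config/opendaylight-inventory:nodes/"
            "node/openflow:1/table/0/flow/1")
HEADERS = {"Content-Type": "application/xml", "Accept": "application/xml"}

def create_flow(flow_xml: str) -> None:
    # PUT installs (or replaces) the flow entry on table 0, flow id 1
    resp = requests.put(FLOW_URL, data=flow_xml, headers=HEADERS, auth=AUTH)
    resp.raise_for_status()                    # 204 expected on success

def delete_flow() -> None:
    # DELETE removes the same flow entry
    resp = requests.delete(FLOW_URL, headers=HEADERS, auth=AUTH)
    resp.raise_for_status()                    # 204 expected on success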

Figure 8: SDN control flow sequence diagram.

2.1.3.7. OAM - Backhaul integration

The sequence diagram below describes the integration and interfacing between the OAM, ODL and backhaul:

Figure 9: Sequence diagram for the OAM - SDN controller - Backhaul interaction

An initial integration of the SDN-enabled wireless backhaul and the SDN controller (ODL) with the CE2.0 functionality was performed at the NCSRD premises. The "RESTClient" (Firefox add-on) was used to communicate with the SDN controller; this will be substituted by the OAM in the final demo at NCSRD. A wireless Point-to-Point (PtP) link was deployed between the WiFi CPE and CAL3, using two SDN-enabled V-band "StreetNode™ V60-PTP"1 wireless backhaul systems, engineered to be SDN compatible, as described in D3.4. The aim of the test was to establish communication between the two VNFs running at CAL3, each one belonging to a different VNO, and the respective VNO user connected to the WiFi CPE, by setting up network slices through EVPL services over the backhaul (BH) link. The WiFi CPE introduced two SSIDs, mapping each of them to a specific C-VLAN. Two laptops were connected using these two SSIDs respectively, representing two customers of different VNOs. The WiFi CPE forwarded data to BH1, where S-VLAN tags were pushed, one per C-VLAN. Data was transmitted over an RF cable to BH2, which popped the S-VLAN tags and forwarded the data to CAL3, instantiated on a cloud infrastructure (OpenStack). BH1 and BH2 were configured through ODL and OpenFlow. The infrastructure setup for the test can be seen in Figure 10 and Figure 11.

1 http://www.intracom-telecom.com/en/products/wireless_network_systems/small_cell/streetnodeV60.htm


Figure 10: ODL and SDN-enabled Backhaul integration with NCSRD infrastructure

Figure 11: Infrastructure at NCSRD premises

Using the RESTClient to send the scripts (presented in 2.1.3.4 OAM - ODL integration) to the SDN controller’s REST API, two EVPL services were created, one for each VNO. The range of the customer VLANs of each VNO, as well as the identifier of the VNO (S-VLAN ID), were defined. The ODL then sends appropriate OpenFlow rules to the backhaul switches in order to deploy the service. Two slices were created: one for customer with C-VLAN 10 (VNO1) and the identifier S-VLAN 1 (Figure 12), and one for customer with C-VLAN 20 (VNO 2) and the identifier S-VLAN 2 (Figure 13). As S-VLANs are pushed at the entry point of the backhaul network (S-VLAN Domain) and popped at its exit point, in order to visually demonstrate the insertion of the S-VLAN IDs, instead of using the radio link, the traffic at each backhaul was redirected to one of their available Gigabit Ethernet (GbE) interfaces and these were then connected with a switch which performed port-mirroring. Correct insertion of the S-VLAN was then demonstrated on a laptop running Wireshark.


Figure 12: Customer with C-VLAN 10 belonging to VNO with S-VLAN 1

Figure 13: Customer with C-VLAN 20 belonging to VNO with S-VLAN 2

When this test was completed, the port mirroring switch was removed and the radio interface was used for the rest of the demo. Communication from each customer’s laptop to the respective VNO’s VNF in CAL 3 was tested with ping (Figure 14, Figure 15). Furthermore, slice isolation was successfully demonstrated as customer 1 was able to communicate only with VNO1’s VNF (VNF1), and respectively, customer 2 was able to communicate only with VNO2’s VNF (VNF2).


Figure 14: Laptop 1 screenshot after deploying a service for VNO 1 (C-VLAN 10)

Figure 15: Laptop 2 screenshot after deploying a service for VNO 2 (C-VLAN 20)

2.1.3.8. OAM - OpenStack integration

The Open Access Manager (OAM) makes use of the official OpenStack client for communication with the OpenStack VIM (Virtualized infrastructure Manager). Although the OpenStack client is very extensive, the OAM only requires a subset of the methods to fulfill CHARISMA’s requirements.

The descriptions below cover the most relevant parts of the code, to help the reader understand the actions performed by the OAM in OpenStack. The whole implementation can be found in the CHARISMA Git repository [1].

Login

Before making any request as an InfP (infrastructure provider) or VNO, the OAM needs to log in to OpenStack using the user's credentials. This method initializes an OpenStack session, which is stored in the OAM database. That information is retrieved when further user actions require communication with OpenStack.

The required parameters are:

 auth_url: Keystone authentication URL
 username: Username of the user in OpenStack.
 password: Password of the user in OpenStack.
 project_name: Name of the OpenStack project in which to log in.

from keystoneauth1.identity import v3
from keystoneauth1 import session
...
auth = v3.Password(auth_url=auth_url, username=username, password=password,
                   project_name=project_name, user_domain_id='default',
                   project_domain_id='default')
sess = session.Session(auth=auth)

List projects

This method is only available for Infrastructure Providers (InfPs). It lists all the projects on the OpenStack instance. Each OpenStack project represents a VNO of the CHARISMA platform.

import keystoneclient.v3.client as keystone_client
...
keystone = keystone_client.Client(session=sess)
openstack_projects = keystone.projects.list()

Create project

This method, only available for Infrastructure Providers, allows the creation of projects in OpenStack. It is called by the OAM once a VNO is created. The required parameters are:

 name: name of the project, which matches the VNO name.
 description: additional information about the project.
 domain: name of the OpenStack domain in which the project will be created.

import keystoneclient.v3.client as keystone_client
...
keystone = keystone_client.Client(session=sess)
project = keystone.projects.create(name=name, description=description,
                                   domain=domain, enabled=True)

Delete project

Allows the Infrastructure Providers to delete an OpenStack project. This method is called once a VNO is deleted from the CHARISMA platform. This method only requires the ID of the project as a parameter.

import keystoneclient.v3.client as keystone_client
...
keystone = keystone_client.Client(session=sess)
keystone.projects.delete(project=project_id)

List users

Using the list_users method, the infrastructure provider retrieves all the users registered in the OpenStack instance. In that OpenStack instance, each project represents a VNO, and every user has to be a member of an OpenStack project.

import keystoneclient.v3.client as keystone_client
...
keystone = keystone_client.Client(session=sess)
openstack_users = keystone.users.list()

List project users

This method allows one to retrieve the users of an OpenStack project, i.e. a VNO. This method only requires the OpenStack ID of the project as a parameter.

import keystoneclient.v3.client as keystone_client
...
keystone = keystone_client.Client(session=sess)
openstack_users = keystone.users.list(project=project_id)

Create user

This method is only available for Infrastructure Providers, and its purpose is to create the VNO users in the OpenStack instance. Each user belongs to a single OpenStack project. The parameters required for user creation are:

 name: username for the new user
 password: password for the new user, which will be used for authentication
 email: email of the user
 description: additional information about the user
 domain: OpenStack domain in which the user should be created
 project: name of the project the user will belong to
 role: role that the user will have in the selected project.

import keystoneclient.v3.client as keystone_client
...
keystone = keystone_client.Client(session=sess)
manager = keystone_client.users.UserManager(keystone)
user = manager.create(name=name, domain=domain, password=password,
                      default_project=project, project=project, role=role,
                      email=email, description=description, enabled=True)
rolemanager = keystone_client.roles.RoleManager(keystone)
rolemanager.grant(role=role, user=user, project=project)

As seen in the previous block of code, the user is first created, and then the role is assigned to the user through the OpenStack role manager.

Delete user

This method allows Infrastructure Providers to remove users from the OpenStack instance. It is called once VNOs are removed, but it can also be used to remove users from a VNO without deleting the VNO. The only required parameter is the OpenStack ID of the user to be removed.

import keystoneclient.v3.client as keystone_client
...
keystone = keystone_client.Client(session=sess)
manager = keystone_client.users.UserManager(keystone)
manager.delete(user=user_id)

List roles

In order to map the OAM roles to the ones in OpenStack, it is required to get the list of roles of the OpenStack instance, since they are different in each OpenStack configuration. This method is used by the OAM when creating a VNO.

import keystoneclient.v3.client as keystone_client
...
keystone = keystone_client.Client(session=sess)
rolemanager = keystone_client.roles.RoleManager(keystone)
roles = rolemanager.list()

List availability zones

Availability zones represent CHARISMA CALs. This method allows one to retrieve the CALs of an OpenStack instance. This information is used by the OAM for the instantiation of Network Services, allowing one CAL to be selected per VNF. It is also used by the OAM to define in which CALs a VNO is allowed to deploy.

import novaclient.client as nova_client
...
nova = nova_client.Client(version="2.1", session=sess)
zones = nova.availability_zones.list()


Create network

When creating slices, the OAM creates a network with a specific VLAN in the OpenStack instance, for traffic isolation between different slices. This method requires one to also create a subnetwork in OpenStack. The parameters received by this method are:

 Net: dictionary containing the following keys:
    o name: name of the new network (it matches the name of the slice)
    o tenant_id: id of the project it belongs to (it matches the VNO owning the slice)
    o vlan: VLAN identifier assigned to the slice network
    o cidr: IP addressing scheme for the subnetwork.

import neutronclient.v2_0.client as neutron_client
...
neutron = neutron_client.Client(session=sess)
network_body = {'network': {'name': net['name'],
                            'provider:network_type': 'vlan',
                            'provider:segmentation_id': net['vlan'],
                            'provider:physical_network': 'provider',
                            'tenant_id': net['tenant_id'],
                            'admin_state_up': True}}
network = neutron.create_network(body=network_body)
body_create_subnet = {'subnets': [{'cidr': net['cidr'],
                                   'ip_version': 4,
                                   'network_id': network['network']['id']}]}
subnet = neutron.create_subnet(body=body_create_subnet)

Delete network

When removing a slice, the OAM needs to also delete the associated network in OpenStack. The only parameter required for performing this action is the ID of the network.

import neutronclient.v2_0.client as neutron_client
...
neutron = neutron_client.Client(session=sess)
network = neutron.show_network(network=network_id)
neutron.delete_subnet(subnet=network['network']['subnets'][0])
neutron.delete_network(network=network_id)

Share network

Implemented for the vCaching use case, this method is used for sharing an OpenStack network between two different projects. The OAM makes use of it when the Infrastructure Provider wants to share slices between two different VNOs.

The required parameters are:

 network_id: id of the OpenStack network to be shared
 target_project_id: id of the project with which the network is to be shared

from openstack import connection
...
conn = connection.Connection(**auth_args)
conn.network.create_rbac_policy(action='access_as_shared', object_type='network',
                                target_tenant=target_project_id, object_id=network_id)

Add project to host aggregate filter

The Host Aggregate filter is an OpenStack feature that allows one to filter which availability zones can be used for deploying the VMs of a project. Although the OAM already knows in which CALs a VNO is able to deploy, this filter is used to add a second layer of security.

The parameters required for this method are:

 aggregate_id: id of the host aggregate to update
 project_id: id of the OpenStack project to be added to the filter.

import novaclient.client as nova_client
...
nova = nova_client.Client(version="2.1", session=sess)
aggregate = nova.aggregates.get(aggregate_id)
metadata = aggregate['metadata']
if 'filter_tenant_id' not in metadata.keys():
    metadata['filter_tenant_id'] = project_id
else:
    metadata['filter_tenant_id'] += ',' + project_id
OpenStackClient.set_host_aggregate_metadata(auth=auth, aggregate=aggregate_id,
                                            metadata=metadata)

Remove project from host aggregate filter

When a VNO is removed from the OAM, the host aggregate filter must be updated by removing the corresponding project from the filter. As in the previous method, only the aggregate_id and the project_id are required.

import novaclient.client as nova_client
...
nova = nova_client.Client(version="2.1", session=sess)
aggregate = nova.aggregates.get(aggregate_id)
metadata = aggregate['metadata']
current_filter = metadata['filter_tenant_id']
current_projects = current_filter.split(",")
current_projects.remove(project_id)
if len(current_projects) == 0:
    metadata['filter_tenant_id'] = None
else:
    metadata['filter_tenant_id'] = ",".join(current_projects)
OpenStackClient.set_host_aggregate_metadata(auth=auth, aggregate=aggregate_id,
                                            metadata=metadata)
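To show how the individual calls above fit together, the sketch below (not the actual OAM implementation) chains them to onboard a new VNO: it logs in, creates a project and a user, grants a role, and provisions a slice network. All names, credentials, the VLAN value and the "_member_" role name are example values.

import keystoneclient.v3.client as keystone_client
import neutronclient.v2_0.client as neutron_client
from keystoneauth1 import session
from keystoneauth1.identity import v3

# Authenticate as the Infrastructure Provider (example credentials)
auth = v3.Password(auth_url="http://controller:5000/v3",
                   username="admin", password="secret", project_name="admin",
                   user_domain_id="default", project_domain_id="default")
sess = session.Session(auth=auth)

# Create the VNO as an OpenStack project and add a user to it
keystone = keystone_client.Client(session=sess)
project = keystone.projects.create(name="VNO-A", description="Demo VNO",
                                   domain="default", enabled=True)
user = keystone.users.create(name="vno-a-admin", password="changeme",
                             default_project=project, domain="default",
                             enabled=True)
member = next(r for r in keystone.roles.list() if r.name == "_member_")  # assumed role name
keystone.roles.grant(role=member, user=user, project=project)

# Provision a slice network for the new VNO (example VLAN and CIDR)
neutron = neutron_client.Client(session=sess)
net = neutron.create_network({"network": {
    "name": "VNO-A-slice1", "provider:network_type": "vlan",
    "provider:segmentation_id": 110, "provider:physical_network": "provider",
    "tenant_id": project.id, "admin_state_up": True}})
neutron.create_subnet({"subnets": [{"cidr": "192.168.110.0/24",
                                    "ip_version": 4,
                                    "network_id": net["network"]["id"]}]})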

2.1.3.9. Tenor - OpenStack integration

TeNOR uses the OpenStack APIs and is compatible with OpenStack Liberty and newer versions. The particular APIs can be found in the following references: Nova [3], Keystone [4], Glance [5], Keystone [6], and Orchestration [7].

2.1.3.10. OAM – TeNOR

The OAM consumes the methods exposed from TeNOR in its API [2].

2.1.3.11. vCache service orchestration

2.1.3.11.1. vCC-vCache communication

To be able to communicate with the virtual cache (vCache), the NETCONF client has been installed in the virtual cache controller (vCC). We use netopeer-cli as the NETCONF client to send the management messages to netopeer-server running on vCaches (for example 192.168.100.10).

> netopeer-cli > connect --login squid 192.168.100.10

There are four kinds of communications that have been enabled between the vCC and a vCache: cfg-squid-conf, cfg-squid-list, cfg-prefetch-conf and cfg-prefetch-list.
 cfg-squid-conf provides an interface that allows the cache controller (CC) to configure squid.conf.
 cfg-squid-list allows the CC to collect user request information from Squid.
 cfg-prefetch-conf provides an interface to the prefetcher that allows the CC to configure it.
 cfg-prefetch-list provides an interface to the prefetcher that allows the CC to exchange commands with it.

Communications to configure Squid

> netconf> get-config --filter=getSquidConf.xml running    // get running info
> netconf> edit-config --config editSquidConf.xml running

An example of getSquidConf.xml:

An example of editSquidConf.xml:

<squidconfig xmlns="urn:jcp:com:squidconfig">
    <httpPort>3129</httpPort>
    <cacheMem>256 MB</cacheMem>
    <memoryReplacementPolicy>lru</memoryReplacementPolicy>
    <cacheReplacementPolicy>lru</cacheReplacementPolicy>
    <maxOpenDiskFds>0</maxOpenDiskFds>
    <minimumObjectSize>0 KB</minimumObjectSize>
    <maximumObjectSize>4096 KB</maximumObjectSize>
    <cacheSwapLow>90</cacheSwapLow>
    <cacheSwapHigh>95</cacheSwapHigh>
    <!-- cache peer entry: 192.168.100.2 192.168.100.2 sibling 3128 3130
         proxy-only no-digest no-netdb-exchange -->
</squidconfig>

Communications to configure Prefetch

To get information about the current running configuration of the Prefetcher, a Cache Controller Daemon (CCD) runs the following script to connect to the managed cache node (CN) and get the running configuration. The filter can be used to get the specific information of the Prefetcher configuration in the CN. The following script is called by the CCD to retrieve the current configuration of a particular CN.

Input of the script: ./getPfConf.sh <IP_CN>
Output of the script: the Prefetcher configuration of the CN in XML. It should print the tags of the datastore.xml, including the tags in the pfconfig.output file.

#!/bin/bash
IP_CN=$1
TMP_FILE=result.txt
DEST_FILE=pfconfig.output
netopeer-cli > $TMP_FILE <

Communications to retrieve the user request list from vCaches

The vCC retrieves the user request information from the access.log of each vCache once per second, through NETCONF communications. Below is an example of how the vCC retrieves the information from access.log in a vCache. The input is a filter XML file, getrequestlist.xml, and the script is getRequestList.sh:

#!/bin/bash
IP_CN=$1
TMP_FILE=resultsget.txt
netopeer-cli > $TMP_FILE <<KONEC
connect --login root $IP_CN
get-config --filter=getrequestlist.xml running
disconnect
exit
KONEC

Communications to send a prefetch command to vCaches

This module is in charge of pushing the list of contents to be fetched by a specific CN. The list is stored on the CC and has the same format as the datastore.xml of this module. Thus, each time the CCD modifies this list, the CCD NETCONF client should replace and push the new list to the specific CN. This is performed by the following script (./pushPfList.sh):

Input of the script: $ ./pushPfList.sh <IP_CN> <DATASTORE> <CONFIG>
where <CONFIG> is the list of contents to be fetched in XML format and <DATASTORE> is running, startup or candidate.

#!/bin/bash
IP_CN=$1
DATASTORE=$2
CONFIG=$3
netopeer-cli > $TMP_FILE <

2.1.3.12. Transparent caching

As explained in D3.4, transparent (or interception) caching aims to hide the existence of vCaches from end users, avoiding the need for proxy configurations at the end devices. To this end, the forwarding substrate is configured to direct user HTTP traffic towards an instantiated vCache. This is performed through OpenFlow which, in the context of the NCSRD demonstration environment, is employed to establish a flow rule at the forwarding switch connecting the CAL1 micro-data centre (DC) to the rest of the network infrastructure. The example below illustrates such a rule configuration. The rule matches all transmission control protocol (TCP) traffic towards port 80 (HTTP) and incoming switch port 19. As an action, the packet header is rewritten so that the MAC address of the vCache is employed and the output port is port 21, i.e. the bridge port that leads to the vCache.

> sudo ovs-ofctl add-flow br-int "priority=100 in_port=19 dl_type=0x0800 nw_proto=6 tp_dst=80 actions=mod_dl_dst:fa:16:3e:1f:8a:b2,21"

This configuration allows users' HTTP packets to reach a vCache. In the reverse direction, a vCache masquerades as the content server, i.e. HTTP response packets carry the content server's IP address as the source IP address, instead of the IP address of the vCache that is actually responding. However, the anti-spoofing rules installed by OpenStack's port-security module need to be bypassed, so that the response packets are not dropped. This is accomplished with the following configuration:

> neutron port-update {port_id_of_squid_in_openstack} --allowed-address-pairs type=dict list=true mac_address={MAC_Address_of_squid},ip_address=0.0.0.0/0

# to allow the vCache to masquerade as any IP address

Transparent caching requires further configuration of the vCache VNF, as detailed in Section 3.2.4.

2.1.3.13. vCache Peering Service Orchestration

Also illustrated in D3.4, Figure 16 below provides an overview of the CHARISMA CMO workflow for the establishment of the proposed vCache peering service. The described procedure assumes that the network slices of two tenants (e.g. tenants A and B) have been created, and their vCache instances have been instantiated for the support of local vCache service provisioning, as described in Section 2.1.3.11.1 above. The described procedure targets a simplified, demonstration-level setup including a single vCache instance per VNO (see Figure 17). The setup focuses on the essential operations of establishing optimised and secure communication between network slices, and realizing cache peering at the application level. This setup will enable subsequent performance evaluation efforts on quantifying the impact of cache peering on resource isolation.

In the following we provide implementation details about the orchestration of the overall service by the CHARISMA CMO, highlighting in particular interfacing with the underlying Virtualised Infrastructure Manager (VIM), i.e. OpenStack.


Figure 16: CMO workflow for the establishment of a vCache peering service.

Figure 17: Simplified vCache peering setup. The vCC is omitted for simplicity.

The overall process is divided into four sets of tightly related steps. Namely:

A. Create shared network (Steps 4-10)

This procedure is performed within one of the tenants' (e.g., VNO A) administration environment (dashboard) provided by CHARISMA. The operation is triggered by the VNO GUI and gets translated (via steps 5 and 6) to OpenStack API calls, performed by the orchestrator (TeNOR).

Required Input:
 Name of shared network (sharedNetworkName)
 Name of shared subnet (sharedSubnetName). For convenience we can use again the sharedNetworkName
 Network type of the shared network (networkType), e.g. vlan
 Name of the physical network (physicalNetworkName)
 Subnet CIDR (subnetCIDR), e.g. 192.168.60.0/24

The procedure is performed with the following command in OpenStack’s command line interface (CLI):

> neutron net-create sharedNetworkName --provider:network_type networkType --provider:physical_network physicalNetworkName

> neutron subnet-create sharedSubnetName subnetCIDR --name sharedNetworkName

Achieving this programmatically through OpenStack’s Networking API, translates to the following API calls (i.e. in step 6):

> curl -g -i -X POST http://controller:9696/v2.0/networks.json -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}79eacbb084a3adb5285a5d4da854f3a3cd4c39b1" -d '{"network": {"provider:network_type": "networkType", "name": "sharedNetworkName", "provider:physical_network": "physicalNetworkName", "admin_state_up": true}}'

> curl -g -i -X POST http://controller:9696/v2.0/subnets.json -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}7a6beac606f387d702aa5aec131011518c41373f" -d '{"subnet": {"network_id": "sharedNetworkId", "ip_version": 4, "cidr": "subnetCIDR", "name": "sharedSubnetName"}}'

B. Role-Based Access Control (RBAC) (Steps 11-17)

This procedure is performed within the administration environment (dashboard) of the VNO that creates the shared network. The operation is triggered by the VNO GUI and gets translated (via steps 12 and 13) to OpenStack API calls, performed by the orchestrator (TeNOR).

Required Input:
 OpenStack tenant ID of the peering VNO (tenantID)
 Network ID of the shared network (sharedNetworkID)

The procedure is performed with the following command in OpenStack’s CLI:

> neutron rbac-create --target-tenant tenantID \

--action access_as_shared --type network sharedNetworkID

Achieving this programmatically through the OpenStack API translates to the following API call (i.e. step 13):

> curl -g -i -X POST http://controller:9696/v2.0/rbac-policies.json -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}aee6c5d7f31d73f9987cf9fa63ee7e1d6797afb4" -d '{"rbac_policy": {"action": "access_as_shared", "object_type": "network", "target_tenant": "tenantID", "object_id": "sharedNetworkID"}}'

C. Join shared network (Steps 18-24)

This procedure is also performed within the tenant administration environment (dashboard) provided by CHARISMA. The procedure is performed by both VNOs separately. Assuming that VNO A creates the shared network, VNO B should only be able to perform this step once it has been granted access to the shared network, i.e. the RBAC step has completed. The operation is triggered by the VNO GUI and gets translated (via steps 19 and 20) to OpenStack calls, performed by the orchestrator (TeNOR).

Required Input:
 Name of shared network (sharedNetworkName)
 Nova security group (securityGroupName)
 Port ID of the port (portID); it is created by the first command below
 Name of the VM to which we would like to attach the interface (novaInstanceName)

The procedure is performed with the following command in OpenStack’s CLI:

> neutron port-create --security-group securityGroupName sharedNetworkName

> nova interface-attach --port-id portID novaInstanceName

Achieving this programmatically through OpenStack API translates to the following API call (i.e. step 20):

> curl -g -i -X POST http://controller:9696/v2.0/ports.json -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}de5b73fb7cdfb0166814c4d04d4a3aa1565b1eb1" -d '{"port": {"network_id": "networkId", "security_groups": ["securityGroupName"], "admin_state_up": true}}'

> curl -g -i -X POST http://controller:8774/v2/{tenantId}/servers/{serverId}/os-interface -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}623c81dc4dd48ecf2b9f17ce6dff2194097f63eb" -d '{"interfaceAttachment": {"port_id": "portID"}}'

D. Establish Squid peering link (Steps 25-33)

This step is also performed within the tenant administration environment (dashboard) provided by CHARISMA. The procedure is performed by both VNOs separately, and assumes that all previous steps have been completed so as to be effective. The operation is triggered by the VNO GUI and its purpose is to provide the vCC (via steps 26 to 28) with the configuration parameters needed to establish the peering link at the application level (see required input below). Step 29, i.e. the actual establishment of the sibling/peering link, is realized through a NETCONF interface between the vCC and each vCache (see Section 2.1.3.11.1).

2.1.3.14. Traffic handling

As explained in D3.3, CHARISMA builds on the processing of vCache access log information for the identification of destination IP addresses associated with cache misses. For such destinations, forwarding rules are dynamically established allowing traffic to bypass a vCache, thus saving the latency incurred by the unnecessary traversal of the virtualized infrastructure in the (expected) event of a cache miss. The identification of such non-cacheable flows takes place at the vCC and is based on the access log information retrieved and stored in the Request SQLite table, which has the following structure:

Table 2: Request SQLite table at vCC

Field        Description                                Values
ID           ID of user request (KEY)                   INT, self_increased (NOT NULL)
Url          The URL of the requested chunk or file     VARCHAR (NOT NULL)
Timestamp    The timestamp of the request               INT
FromCN       CN_ID that forwards the request2           INT
FromUserIP   IP of user that initiates the request      VARCHAR
ResultCode   Denotes an HTTP/ICP HIT or a MISS          INT

Along the lines of the design described in D3.4, our implementation first targets the identification of non-cacheable flows linked to one-timers, i.e. URLs requested only once (or rarely). Below, we provide an example script that parses the Request table, identifying one-timers within the entries logged after timeWindowStart = 10 (example value). By tuning the value of the references variable, we can identify URLs with more than one, but still few, requests.

#!/bin/sh

timeWindowStart="10"
references="2"
sqlite3 accessLogvCache1.db "select Url from REQUEST where Timestamp > $timeWindowStart group by Url having count(*)<$references;" | sed 's/https\?:\/\///' > result.dmp

input=result.dmp

while read line

do

nslookup $line | tail -2 | head -1 | awk '{print $2}'

done < "$input"

Subsequent extensions to the access log importing mechanism of the vCC will further enhance the Request entries with information on whether an entry corresponds to a cache hit or a miss (shown as ResultCode in the table above). Based on this additional information, the script will further identify URLs associated with multiple cache misses, sorting them in descending frequency order. This will be accomplished by the following slight modification of the script above. The end result is a list of URLs corresponding to the most frequent cache misses, followed by one-timers.

2 Refers to other table in the database; not used for traffic handling.

#!/bin/sh
…
MISSCODE="5"
…
sqlite3 accessLogvCache1.db "select Url from REQUEST where Timestamp > $timeWindowStart and ResultCode=$MISSCODE group by Url having count(*)<$references order by count(*) desc;" | sed 's/https\?:\/\///' > result.dmp
…

The vCC delivers the identified non-cacheable flow rules to the SDN controller through the latter's northbound REST API. The SDN controller then applies the specified rules to the SDN-enabled switch connecting the CAL1 IMU/μDC with the rest of the network (see Section 2.1.1). The rules correspond to configurations similar to the example below. The rule has a higher priority than the default rule that forwards all the traffic to the vCache (Section 2.1.2.1), and additionally matches the destination IP address of an identified non-cacheable flow (i.e. nw_dst=192.168.2.200):

> sudo ovs-ofctl add-flow br-int "priority=1000 in_port=23 dl_type=0x0800 nw_proto=6 tp_dst=80 nw_dst=192.168.2.200 actions=normal"

2.1.3.15. CMO - vCC communications

The API is based on plain socket communications; the vCC listens on port 8808 to receive information from the CMO. There are two main functionalities that should be provided by the CMO to the vCC:
 IP addresses of newly created vCaches;
 Cache_peer configurations, including two IP addresses: the IP of the vCache to be configured and the IP of the peering cache that needs to be added.
After creating or deleting a vCache for a specific VNO, the CMO is required to inform the vCC about the IP address of this created/deleted vCache.
 Add a new vCache: the CMO informs the vCC about the IP address of the new vCache by sending the message "vCache, add, IP_address".
 Delete a vCache: the CMO informs the vCC about the IP address of the deleted vCache by sending the message "vCache, del, IP_address".
In order to implement the cache peering and multi-tenancy scenarios, the CMO also needs to be able to inform the vCC which vCache needs to add or remove a peering cache rule.
 Add a peering cache: the CMO informs the vCC about the IP address of the configured vCache and the IP address of the peering cache by sending the message "peering, add, IP_vCache, IP_peeringCache".
 Delete a cache peering rule: the CMO informs the vCC about the IP address of the configured vCache and the IP address of the peering cache by sending the message "peering, del, IP_vCache, IP_peeringCache".
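A minimal Python sketch of the CMO side of this socket interface is shown below; the vCC address is an example value, while port 8808 and the comma-separated message formats are those defined above.

import socket

VCC_ADDRESS = ("192.168.100.1", 8808)   # example vCC address; port 8808 as defined above

def notify_vcc(message: str) -> None:
    """Open a plain TCP connection to the vCC and send one management message."""
    with socket.create_connection(VCC_ADDRESS, timeout=5) as sock:
        sock.sendall(message.encode("utf-8"))

notify_vcc("vCache, add, 192.168.100.10")                   # announce a newly created vCache
notify_vcc("peering, add, 192.168.100.10, 192.168.100.2")   # add a peering cache rule
notify_vcc("peering, del, 192.168.100.10, 192.168.100.2")   # remove the peering cache rule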

2.1.3.16. Integration issues and lessons learnt

Amongst the issues we faced during the integration process at the NCSRD demonstrator was the need for a unified authentication mechanism to enable visualisation of the monitoring metrics captured from the

infrastructure devices and services belonging to the infrastructure provider and the different VNOs. This requirement highlighted the need for integrated authentication mechanisms exposed by the CHARISMA GUI, the Monitoring and Analytics (M&A) module and the visualisation tool used, Grafana. To resolve this issue, the CHARISMA GUI authentication was extended by making use of the "User Management Interface" defined in Deliverable 3.4 – Section 2.3.3.2. The user ID, name and password of each VNO are saved in the M&A Management Database, and the ID is then used as a label for the time-series data saved in the M&A Prometheus Database. The M&A back-end, using the same authentication credentials as the CHARISMA GUI, permits data access only to the authenticated user that owns the data. Similarly, the Grafana software tool, used for visualization of the data, is automatically synced by the back-end of the M&A with the same user credentials on every "User Management Interface" interaction. In that way, each user dashboard can contain only the data that correspond to that user, providing a seamless authentication experience.

Another issue we encountered derived from the fact that the NCSRD demonstrator addresses two different use cases, security and multi-tenancy/caching, over the same infrastructure. Traffic re-direction within the implemented NFVI-PoPs is different for each use case. The traffic handling logic for both the security and the multi-tenancy/caching scenario has been extensively described in Section 2.1.2.1. A solution that serves both scenarios is still under investigation, since the different network services currently require slightly different rules on the Open Virtual Switch bridges to provide their advertised functionality. If no such solution integrating both cases is found, we will manually apply the different traffic engineering rules when switching from the security demonstrator to the multi-tenancy and caching demonstrator.

2.2. APFutura (Centelles) field-trial

To demonstrate and evaluate the specific characteristics of the CHARISMA architectural concept, such as low latency, open access and service availability/reliability, APFutura is allowing its optical infrastructure (which is similar to that of other operators in Spain) to act as a single field-trial test bed for two different demos. This environment closely simulates a next-generation 5G network deployment, and indicates how CHARISMA can be used to improve the end-user experience by lowering the latency while keeping service reliability, as well as to improve network management for the operator itself, with an easier and centralized management and orchestration component (CMO) that uses virtual network functions (VNFs) at the different converged aggregation levels (CALs).

From the security point of view, in both demos authentication and data integrity are provided by the network infrastructure. With no restrictions on the address layout, IP routing is vulnerable to attacks focusing on packet routing. That is why in the APFutura demos we use IPv6 and a hierarchical network structure, whereby the hierarchical address layout together with specialised routing hardware is used to speed up traffic forwarding. To avoid the vulnerability to attacks focusing on packet routing, the TrustNode device features MACsec to provide the integrity of the data exchange between devices. TrustNode also provides the 6Tree technology, a fast hierarchical routing concept with the feature that a single configuration parameter tells the device its logical position in the network. Using this concept, the route of a packet through the network is strictly predefined by the value of the IPv6 address bits that are inside the protection area of MACsec. Applying this check means that network devices are not vulnerable to address spoofing attacks, i.e. upon receipt of a 6Tree packet, the source address and the path through the 6Tree network are verified, which authenticates the route from the source through to the destination.

2.2.1. Physical Level Architecture (Hardware)

Figure 18 shows the physical architecture of the APFutura demonstrator. This infrastructure replicates an existing optical network as used by a generic operator. Such an operator could use its own infrastructure to provide wholesale services to other operators; i.e. not all operators need to deploy their own infrastructure. Most of the network functions are virtualized at the different aggregation levels (CALs), so different VNOs (Virtual Network Operators) with their own Virtual Network Functions (VNFs) can be created in just a few minutes over the same physical infrastructure.

The main part of the hardware is deployed in CAL3, where the APFutura server is located. This server contains all the virtual network functions (VNFs) and the CHARISMA control management and orchestration component (CMO). Ethernity's SmartNIC has also been installed in the server, so as to reduce the latency and the server power consumption by off-loading data processing onto the SmartNIC. InnoRoute's TrustNode is also located at CAL3. The TrustNode device is a hardware-accelerated router platform that has been designed for fast IPv6 processing/routing and rapid network prototyping. This router supports high-speed hierarchical routing, so it can be positioned in any CAL of the network; however, for the actual demo at APFutura, there is only one TrustNode router, located inside CAL3. Using these two CHARISMA devices, SmartNIC and TrustNode, we can reduce the latency, and so achieve a low-latency network compliant with the relevant 5G KPIs and future intelligent transport system (ITS) scenarios. The more conventional NICs and routers are replaced with these new devices developed within the CHARISMA project, the more the network latency will be reduced.

In addition to these devices at CAL3, we also have the following CHARISMA equipment located at the other CALs in the architecture. In CAL2 we have the Altice Labs optical line termination (OLT), a new-generation passive optical networking (PON) device. In CAL1 we have the Altice Labs optical network termination units (ONTs), which we will use in some cases to provide WiFi or data connections for different VNOs, because the ONTs are sliceable using VLANs. In CAL1 we also have JCPC's MoBcache device for hierarchical caching, which we will use in the service reliability demo. CAL0 is the end-user CAL, where we have several devices simulating an end-user's computer or mobile phone. For CAL0, we also have another MoBcache device located within a bus, again for the service reliability demo.


Figure 18 Physical-level architecture of the APFutura demonstrator

The next table shows a detailed inventory for all CHARISMA devices used in the APFutura field trial.

Table 3: Specification of physical devices used in the APFutura field trial

ID     Role              Vendor        CPU Model                     CPU cores   RAM         Storage                                          Other Features
1      APFutura Server   DELL          Intel Xeon E5620              4           32GB DDR3   126GB
2      SmartNIC          Ethernity
3      TrustNode         InnoRoute     Intel E3800                   4           4GB         external                                         FPGA acceleration
4      OLT               Altice Labs
5      ONT               Altice Labs
6      P2P Wireless
7      MoBcache          JCP-C         Freescale quad-core 1.2 GHz               2GB DDR3    512MB NAND flash, 8GB eMMC, 1 mSSD up to 256GB
8-10   PCs and Mobile    ---           ---                           ---         ---         ---                                              ---


Figure 19 Physical-level architecture of the APFutura field trial (data plane)


Figure 20 Physical-level architecture of the APFutura field trial (control plane)

2.2.2. Logical Level Architecture (Software)

From the logical-level architecture point of view there are two different scenarios: the first one is the low-latency demo, and the second one is the bus demo that shows service reliability. The low-latency demo is based on the two CHARISMA devices that have been specially designed for low-latency operation: the SmartNIC and the TrustNode. In addition, a mini robot car has been built to simulate an Intelligent Transport System (ITS) scenario, where low-latency operation is particularly critical for on-line intelligence and remote decision making for a self-driving car. Figure 21 shows how the APFutura CHARISMA network can be configured using the CMO in order to configure the ONT, OLT, TrustNode and OpenStack in this ITS scenario, and then subsequently use the OAM to create end-to-end slicing in every CAL. The logical architecture for this low-latency CHARISMA demo is supported by the internal software of the TrustNode, which accelerates the routing of all packets and provides very fast processing, and by the SmartNIC, which bypasses the Linux kernel in the handling of received packets, so as to reduce the latency even more.


Figure 21 Logical level architecture for low latency ITS demo featuring a robot self-driving car.

Figure 22 shows all the software involved in the APFutura field trial:

 The ONTs use Altice Labs firmware. With this firmware, we can manage the slicing on every physical port of the ONT. The ONT also works as a CPE, so we can manage Dynamic Host Configuration Protocol (DHCP), Access Control Lists (ACLs), etc. if necessary.
 The OLT uses Altice Labs firmware. With this firmware, we can manage and configure the ONT from Layer 1 to Layer 3.
 The MoBcache uses JCPC's software, which allows this caching device to provide WiFi connectivity to the end-user, or to use an LTE connection to provide Internet access if there is no (WiFi) connection with other MoBcache devices.
 The TrustNode router runs the 6Tree software, which enables hierarchical routing and provides functions to reduce latency.
 The APFutura Server. Inside this server we can find:
    o the CMO, which provides functions to control, manage, and orchestrate all running services;
    o the Open Access Manager, which provides the possibility of slice creation;
    o the Caching Controller VNF, an OpenStack VM running Ubuntu that controls all the MoBcache devices;
    o the Cache VNF, which is an OpenStack VM running Ubuntu 16.04.

All the software modules are described in detail in Section 3 of this document.


Figure 22 Logical Level Architecture showing software for the APFutura field trial

2.2.3. Planned integration and interfacing

2.2.3.1. SmartNIC CMO interfacing

The interfacing between the CHARISMA CMO and the Ethernity SmartNIC defined in this section is based on the RESTful approach, and is targeted at managing the different resources of the VNFs and supporting the VNO open access approach. An example of such a service is firewall (FW) implementation acceleration. The FW service is a common example in vCPE implementations and other network solutions. Usually a FW is applied after an IDS decision, and in its simplest form it matches 5-7 tuples with a permit or drop decision. Another good example is slicing with the help of an additional VLAN tag or any other tunnelling mechanism (in CHARISMA we chose a second tag for VNO differentiation). Figure 23, below, explains the interfacing between the SmartNIC and the OpenStack elements.


Figure 23: interfacing between SmartNIC and OpenStack

To provide support for resource management, Ethernity provides a high-level API for the CMO and VNF subsystems. The API enables the management of services to support the service level agreement (SLA) and VNO policies. The Ethernity API also enables the acceleration of the FW function.

VNO support

This service is directly applied by the CMO to the SmartNIC, and works after the SmartNIC installation has finished. The Network Service Descriptor (NSD) consists of information as defined below, and is used by the NFV Orchestrator to instantiate a Network Service, which would be formed by one or more VNF Forwarding Graphs, VNFs, physical network functions (PNFs) and Virtual Link information elements (VLs). The NSD also describes the deployment flavours of the Network Service. The following is not a complete list of the information elements constituting the NSD, but a minimum subset needed to onboard the Network Service (NS).

NSD basic element

Vendor: Ethernity
Version: 1.01
Connection point:
    Id: ID of the Connection Point
    Type: virtual port, or virtual NIC address, or physical port, or physical NIC address

VNO service deployment

The ACE-NIC (i.e. SmartNIC) enables one to define a vPort (virtual interface), mapping the vFunctions in the server to the vPort in the NIC. This service simplifies the data processing in user space. For example, running two VMs with the same Dynamic Host Configuration Protocol (DHCP) function on the server for different VNOs (Virtual Network Operators) requires different tags for the two VMs with two vFunctions. In this case, the SmartNIC is responsible for manipulating the packet to provide the mapping of double tag + application to the relevant vFunction in user space. Moreover, the vPort enables the provision of policing and shaping for flows in both directions.

VNO Service definition

The key parameters can be a value or wildcard; but at least some value in addition to the name has to be applied. The action field can also consist of wildcards, but at least one value has to be added.

Key:
    Ace-vF name – Application instance name: DHCP, FW or DNS …, or it can simply be VNF1, VM1 or any other [must]
    Ace-vF pPort – physical port on NIC [must]
    Ace-vF VLAN – vF VLAN tag to map a packet to a specific vF in OpenStack [can be a number or wildcard]
    Ace-vF Application – Src-Dst MAC / EthType / Src-Dst IP / IP proto / Src-Dst ProtoPort [can be numbers or wildcards]
Action:
    Ace-vF VNO-tag – add tag for VNO
    Ace-vF VNO-Tag-action – can be add / swap
    Ace-vF Policy: shaper burst size / policer rate
    Ace-vF Statistics: yes/no

VNO APIs:
    Service Create/Destroy
    Service Get/GetNext
    Service Monitor
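As a purely illustrative sketch (the Ethernity API documentation remains authoritative), a Service Create request assembled from the key and action fields above might look as follows in Python; the endpoint URL and the JSON field names are assumptions.

import requests

SMARTNIC_API = "http://smartnic-agent.example/api"    # assumed management endpoint

service = {
    "key": {
        "vf_name": "DHCP-VNO1",                # application instance name [must]
        "pport": 1,                            # physical port on the NIC [must]
        "vf_vlan": 100,                        # vF VLAN tag mapping to a vF in OpenStack
        "application": {"eth_type": "0x0800",  # wildcarded fields are simply omitted
                        "ip_proto": 17, "dst_port": 67},
    },
    "action": {
        "vno_tag": 10,                         # S-VLAN tag added for the VNO
        "vno_tag_action": "add",               # add / swap
        "policy": {"policer_rate_mbps": 100},
        "statistics": True,
    },
}
requests.post(f"{SMARTNIC_API}/vno/service", json=service).raise_for_status()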

FW acceleration

The FW definition is based on the ACL approach. The solution assumes that the IDS functions have already been applied for Deep Packet Inspection (DPI), and the decision for that specific section has already been extracted. The following key parameters can be a value or a wildcard, but at least some value in addition to the name has to be applied. The action field can also be a value or a wildcard, but at least one has to be added.

Key:
    number – the order number of the ACL entry
    pPort – physical port on NIC
    VLAN – VLAN tag of the packet [can be a number or wildcard]
    Src-Dst MAC / EthType / Src-Dst IP / IP proto / Src-Dst ProtoPort [can be numbers or wildcards]
Action:
    Drop/Permit; in the case of permit, the following fields can be applied
    Policy: policer rate
    Statistics: yes/no (optional for next phase)

APIs:
    ACL Create/Destroy
    ACL Get/GetNext
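Similarly, a hypothetical ACL Create request for the FW acceleration API could be expressed as below; again, the endpoint and field names are illustrative only.

import requests

SMARTNIC_API = "http://smartnic-agent.example/api"    # assumed management endpoint

acl_rule = {
    "key": {
        "number": 1,                   # order number of the ACL entry
        "pport": 1,                    # physical port on the NIC
        "vlan": 10,                    # VLAN tag of the packet
        "dst_ip": "192.168.1.10/32",   # wildcarded fields are omitted
        "ip_proto": 6,
        "dst_port": 80,
    },
    "action": {"verdict": "drop", "statistics": False},
}
requests.post(f"{SMARTNIC_API}/fw/acl", json=acl_rule).raise_for_status()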

2.2.3.2. OAM – TrustNode

The configuration of the TrustNode in 6Tree mode is done by changing the device's IPv6 prefix. The 6Tree network prefix can be configured using the front panel of the router. For SDN applications, an optional REST-JSON interface can be provided, which listens on the management interface of the device.


REST: POST :6363/setprefix

json-input:
{
  "prefix": string
}

json-output:
{
  "status": ["ok", "error"],
  "log": string
}
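A minimal Python example of calling this interface is given below; the management IP is a placeholder and the prefix is an example value, while port 6363, the /setprefix path and the JSON fields follow the listing above.

import requests

TRUSTNODE_MGMT = "http://<trustnode-mgmt-ip>:6363"     # placeholder management address

resp = requests.post(f"{TRUSTNODE_MGMT}/setprefix",
                     json={"prefix": "2001:db8:100::/48"})   # example 6Tree prefix
result = resp.json()
print(result["status"], result.get("log", ""))               # "ok" or "error", plus log text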

2.2.3.3. OAM – OLT, ONT

Currently, the OLT is managed by the OAM through an XML over HTTP/HTTPS interface. The OLT acts as a server accepting XML commands and responds with an XML-encoded response. The API is documented using XSD (XML Schema Definition). This has the advantage of closely mirroring the native functionality of the device and allowing automatic code generation for clients. Simple Network Management Protocol (SNMP) v1/v2 and command line (CLI) interfaces are also available but were not used within the framework of CHARISMA. The Gigabit PON (GPON) part of the ONTs is managed indirectly through the OLT via the OMCI (ONU management and control interface – ITU-T G.988) protocol. The gateway functionality of the ONTs has the same management interfaces as the OLT (XML over HTTP/HTTPS, CLI and SNMP), plus the possibility to be managed over TR-069. In order to facilitate the integration of the OLT in an SDN/NFV scenario, Altice Labs is working on the support of NETCONF/YANG and OpenFlow. The full support of these protocols will not be available during the CHARISMA lifetime.

2.2.3.4. CMO – vCC, vCache

The CMO communicates with and configures the vCC directly, and the CMO configures the vCaches through the vCC. The detailed communications between the CMO and the vCC/vCaches have been described in Section 2.1.3.15.

2.2.3.5. OAM – MoBcache

The management and configuration of MoBcache by the OAM is based on the OpenDaylight (ODL) SDN controller, shown in Figure 24. The ODL SDN controller works as a NETCONF client to manage the configuration of MoBcache remotely. It provides a northbound interface (NBI), namely the Restconf API, towards service providers and network operators to receive their configuration requirements. It also needs the southbound interfaces (SBI) for the MoBcache caching functionality (including caching and prefetching) to pre-verify the commands given via the NBI and communicate with the NETCONF server running on MoBcache.


Figure 24: ODL SDN Controlled Caching System Architecture

In order to allow the ODL SDN controller to communicate with MoBcache, two YANG models (jcp-squid.yang and jcp-prefetch.yang) need to be created for the Cache and Prefetch modules respectively. The following is an example of the jcp-squid.yang model for the configuration of Squid running on MoBcache.

module jcp-squid-config {
    namespace "urn:jcp:com:squidconfig";
    prefix "jcpsqc";
    import ietf-inet-types { prefix "inet"; }
    organization "jcp";
    description "JCP version of the netconf controller for Squid proxy.";
    revision "2015-03-27" {
        description "Datastore model and RPC call";
    }
    typedef percent {
        type uint16 {
            range "0 .. 100";
        }
        description "Percentage";
    }
    container squidconfig {
        description "Configuration and operational parameters for a Squid Http Proxy.";
        leaf httpPort {
            type inet:port-number;
            must 'current() <= 10000' {
                error-message "Squid proxy port number out of bound";
            }
            default 3128;
        }
        leaf cacheMem { type string; default "256 MB"; }
        leaf memoryReplacementPolicy { type string; default "lru"; }
        leaf cacheReplacementPolicy { type string; default "lru"; }
        leaf maxOpenDiskFds { type uint32; default 0; }
        leaf minimumObjectSize { type string; default "0 KB"; }
        leaf maximumObjectSize { type string; default "4096 KB"; }
        leaf cacheSwapLow { type percent; default 90; }
        leaf cacheSwapHigh { type percent; default 95; }
    }
    rpc set-squidconf-http-port {
        description "Configure Squid Http Proxy remotely";
        input {
            leaf http_port {
                type inet:port-number;
                must 'current() <= 10000' {
                    error-message "Squid proxy port number out of bound";
                }
            }
        }
        output {
            leaf set-http-port-result {
                type enumeration {
                    enum "failure" { value 0; description "Setting failed"; }
                    enum "success" { value 1; description "Setting succeeded"; }
                }
                description "Result types";
            }
        }
    }
}

After the installation of the jcp-squid-config YANG model in the SDN controller, the following APIs are available on the ODL REST side. For ease of understanding, libnetconfd is the name given to MoBcache when it is initially registered via the NETCONF protocol. The APIs are listed in the following table.

Table 4: The list of ODL Restful APIs

Method   API
POST     /config/opendaylight-inventory:nodes/node/libnetconfd/yang-ext:mount/
GET      /config/opendaylight-inventory:nodes/node/libnetconfd/yang-ext:mount/jcp-squid-config:squidconfig/
PUT      /config/opendaylight-inventory:nodes/node/libnetconfd/yang-ext:mount/jcp-squid-config:squidconfig/
DELETE   /config/opendaylight-inventory:nodes/node/libnetconfd/yang-ext:mount/jcp-squid-config:squidconfig/
GET      /operational/opendaylight-inventory:nodes/node/libnetconfd/yang-ext:mount/jcp-squid-config:squidconfig/
POST     /operations/opendaylight-inventory:nodes/node/libnetconfd/yang-ext:mount/jcp-squid-config:set-squidconf-http-port
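As an illustration, the RPC in the last row of Table 4 can be invoked from Python as sketched below; the controller address and credentials are placeholders, while the path and the http_port input leaf follow the YANG model above.

import requests

ODL_RESTCONF = "http://<controller-ip>:8181/restconf"      # placeholder controller address
RPC_URL = (f"{ODL_RESTCONF}/operations/opendaylight-inventory:nodes/node/"
           "libnetconfd/yang-ext:mount/jcp-squid-config:set-squidconf-http-port")

body = {"input": {"http_port": 3129}}                      # new Squid HTTP proxy port
resp = requests.post(RPC_URL, json=body, auth=("admin", "admin"),
                     headers={"Accept": "application/json"})
print(resp.status_code, resp.json())                       # expect the set-http-port-result leaf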

2.3. TS field-trial

2.3.1. Physical Level Architecture (Hardware)

Telekom Slovenije has set up a dedicated physical infrastructure at its laboratory in Ljubljana for the purpose of the CHARISMA project field trial; it also serves as a future 5G laboratory environment for Telekom Slovenije that has been enhanced with CHARISMA concepts. The field trial therefore enables the CHARISMA project consortium to test and validate the 5G concepts and products developed in the project. The infrastructure consists of a cloud and virtualisation environment, network connectivity and a 4G-LTE radio access network, where the radio access network (RAN) is set up as a Cloud-RAN (C-RAN) environment. The objectives of the demonstration are CHARISMA's multi-tenancy, open access and security features, achieved by complementing the existing network with additional virtual network slices to serve the users of the Telecom Operator, while also supporting VNOs (energy aggregators, MVNOs, etc.). A brief demo description follows:
 A Virtual Network Infrastructure is created on top of the existing infrastructure serving the residential customers of a Telecom Operator (vSlice1);
 A business customer, e.g. a Grid Operator, requires connectivity of measurement devices with guaranteed Quality of Service (QoS);
 The Telecom Operator creates a new Virtual Network Infrastructure (VNI - vSlice2);
 The Grid Operator logs in as a VNO onto the CHARISMA CMO GUI;
 Connectivity is established and the measurement devices are able to connect to the Central Office of the Grid Operator;
 Traffic in vSlice1 exceeds the capacity limit – reported by the CHARISMA CMO – with no performance impact on vSlice2;
 Traffic increases in vSlice2 – the CHARISMA CMO shows a warning according to the SLA agreement with the VNO – vSlice2 capacity may be increased accordingly.

The main part of the TS field trial environment is the Ericsson HDS8000 cloud/virtualisation infrastructure, which represents the main compute power of the setup. It serves as a container for virtual network functions (VNFs) and for the CHARISMA control management and orchestration component (CMO). The cloud infrastructure may be spread over vast geographical areas and managed centrally. This enables a telecom operator to spread the cloud environment and the CHARISMA intelligent management units (IMUs) over different converged aggregation levels (CALs).

The radio access network (RAN) is connected to the packet core network, residing in the cloud/virtualisation environment, via a high-capacity orthogonal frequency division multiplexing passive optical network (OFDM-PON) and the ultra-low-latency hardware IPv6 router TrustNode. Both products are contributions from CHARISMA consortium partners: the OFDM-PON has been contributed by HHI, while the TrustNode comes from InnoRoute. The field trial access network consists of two cloud radio access network (C-RAN) legs connected via the TrustNode router. A C-RAN comprises a baseband unit (BBU) and multiple remote radio units (RRUs). Additionally, the access network is extended via LTE-enabled customer premises equipment (CPE) providing WiFi access over the LTE network.

Figure 25: Physical-level architecture of the Telekom Slovenije field trial (inventory)

The physical architecture of the Telekom Slovenije field trial is depicted in Figure 25, showing the three main parts of the field trial:
 Ericsson HDS8000 cloud virtualisation platform (1)
 Ericsson C-RAN access network (4, 5) and extension via WiFi CPE (6)
 Network connectivity – OFDM-PON (2) and TrustNode router (3)
The detailed specification of the hardware inventory is given in Table 5 below.
Table 5: Specification of physical devices used in the Telekom Slovenije field trial

ID | Role              | Vendor   | CPU Model                  | CPU cores   | RAM    | Storage  | Other Features
1  | HDS 8000          | Ericsson |                            | 1024        | 4 TB   |          |
2  | OFDM PON          | HHI      |                            |             |        |          |
3  | TrustNode Router  | InnoRoute| Intel E3800                | 4           | 4 GB   | external | FPGA acceleration
4  | Baseband unit     | Ericsson |                            |             |        |          |
5  | Remote radio unit | Ericsson |                            |             |        |          |
6  | Linksys 1200 ACS  | Linksys  | Marvell Armada 385 88F6820 | 2 x 1.3 GHz | 512 MB | 128 MB   | OpenWRT enabled router

The packet core network is deployed on top of the HDS8000 virtualisation platform, with all virtual packet core functions being Ericsson’s commercially available VNFs (vMME, vPGWs and vSGW). The packet core network is integrated with the test and production Home Subscriber Server (HSS).

Figure 26: Physical-level architecture of the Telekom Slovenije field trial (data plane)

Figure 27: Physical-level architecture of the Telekom Slovenije field trial (control plane)

2.3.2. Logical Level Architecture (Software)
From the logical architecture perspective, the Ljubljana field trial comprises the CHARISMA CMO and elements at the CAL0, CAL1, CAL2 and CAL3 levels. Figure 28 depicts the logical level architecture, focusing on end-to-end slicing, and control and management of the converged layers, with the CAL delineations clearly shown.

Figure 28: Logical level architecture of the Telekom Slovenije field trial

The logical level architecture comprises:
 The CHARISMA CMO level containing the following components: TeNOR, OpenAccess and Monitoring & Analytics. The infrastructure supporting the CMO level is based on the Ericsson HDS8000 virtualisation platform. A detailed description of these components follows:
o The TeNOR NFV Orchestrator is responsible for the lifecycle management of Network Services, and the VNFM is responsible for the lifecycle management of individual VNFs.
o The Open Access Manager is responsible for the creation of virtual slices.
o The Monitoring & Analytics component is responsible for performing metrics and notification acquisition from both physical and virtual resources of the infrastructure.
 The CAL3 virtual EPC infrastructure, which is also based on Ericsson's HDS8000 virtualisation platform. The CAL3 level contains the essential virtual components of the EPC as provided by Ericsson.
 The CAL2 level, consisting of CHARISMA partners' hardware equipment: the OFDM-PON and the TrustNode IPv6 router. Both elements have been placed so as to meet the low-latency requirements.
 The CAL1 level, consisting of the Ericsson cloud radio access network (C-RAN). For the scope of the project, two BBUs and three RRUs are deployed.
 The CAL0 level, comprising CPE, LTE modems and smart phones as terminal equipment, and also on-premises virtualisation capabilities:
o The Linksys WRT1200AC routers with open source OpenWRT firmware are used as CPE, with the purpose of extending the access network to the edge via WiFi access technology.
o HUAWEI E3372 LTE USB sticks connect the CPE via LTE towards the packet core network.

o Virtualisation capabilities enabling deployment of the VNFs at CAL0 are based on the Ericsson HDS8000 virtualisation platform, which enables the operator to deploy VNFs at remote hardware locations and manage them all centrally.
o VNFs deployed at CAL0 simulate the central office function being offloaded close to the end-user devices in order to reduce latency. The VNFs would calculate the average consumption from multiple devices connected to the same CPE at CAL0, and forward the aggregate to a central location when required.
 The IXIA traffic simulator is used for client simulation and central office simulation in the case of the Smart Grid scenario. Clients represent the measurement devices that periodically send measurement data to the Central Office for processing and analysis.
The slicing of resources and the separation of traffic itself is based on the Access Point Name (APN) mechanism of mobile networks. Each APN that is the subject of the field trial is terminated at its own dedicated Packet Data Network Gateway (PDN GW) element of the EPC network. The PDN GW serves as a gateway element towards operator-external networks, such as the Internet or, in this case, the simulated Smart Grid network. In addition, slicing is also implemented on the CAL0-layer WiFi access network of the CPE. A dedicated WiFi network is created at the CPE and associated with a specific LTE APN via an internal VLAN in the CPE. In case a VNF is deployed at CAL0, traffic from the OpenWRT-equipped CPE is also configured to be routed towards that VNF.

2.3.3. Planned integration and interfacing

2.3.3.1. OAM - Customer premises equipment – (CPE)

The CPE is part of the end-to-end network slice, providing sliced WiFi access and routing capabilities for traffic towards the VNF deployed at CAL0 when required. In order to support slicing of the CPE, the firmware has been modified, and the interface depicted in Figure 30 has been designed.

Figure 29: Linksys WRT1200AC OpenWRT enabled CPE device with dual LTE modems.


Figure 30: UML diagram of OpenWRT management interface – CPE

The interface enables the creation and configuration of the WiFi network SSID, and binds it with the APN configuration on the side connecting to the mobile network via the LTE radio access. The WiFi SSID is created over the common WiFi hardware radio interface at the CPE. On the mobile side, a separate hardware modem is used for each configured APN.

The interface towards the CPE specified by the above UML diagram is implemented as a REST interface carrying JSON-formatted data. The JSON notation of the interface is as follows:

{ "id": "0", "mobile": { "apn": "apn_value", "username": "username_value", "password": "password_value" }, "wifi": { "ssid": "ssid_value", "psk": "psk_value", "devices": [ { "mac": "mac1_value" }, { "mac": "mac2_value" } ] }}

2.3.3.2. OAM - LTE domain name service – (vDNS)
Since network slices are APN based, the APN is configured each time a network slice is created. The APN points towards the PDN GW element.


Figure 31: UML diagram of the LTE DNS management interface – vDNS

The interface towards the LTE DNS specified by the above UML diagram is implemented as a REST interface carrying JSON-formatted data. The JSON notation of the interface is as follows:

{ "apn_name": "apn_value", "pgw_name": "apn_name_value", "pgw_ip_address": "apn_ip_address_value" }

2.3.3.3. Monitoring and Analytics – PDN GW integration

The following are the common performance counters to be observed by the CHARISMA Monitoring and Analytics (M&A) module. The performance counters are specified to observe the load of the PDN GWs, and are listed in Table 6.

Performance counter | Description
ggsnControlLoad | Weighted Packet Data Protocol (PDP) context load in control. This gauge is obsoleted due to the redesign of Over Load Protection (OLP), going from calculated weighted bytes to actual physical memory. To get the estimated free physical memory, use the node "status" command.
ggsnPayloadLoad | This gauge keeps track of the weighted load on the node for the payload part. The gauge is used for the new load balancing scheme.
ggsnApnDownlinkBytes | The total number of downlink user plane bytes processed on a per-APN basis by the Gateway GPRS Support Node (GGSN) or PGW on the Gn/Gp, S5/S8, GTP S2a/S2b and Iu-U (3GDT) interfaces.
ggsnApnDownlinkDrops | The total number of downlink user plane packets dropped on a per-APN basis by the GGSN or PGW.
ggsnApnDownlinkPackets | The total number of downlink user plane packets processed on a per-APN basis by the GGSN or PGW on the Gn/Gp, S5/S8, GTP S2a/S2b and Iu-U (3GDT) interfaces.
ggsnApnUplinkPackets | The total number of uplink user plane packets processed on a per-APN basis by the GGSN or PGW on the Gn/Gp, S5/S8, GTP S2a/S2b and Iu-U (3GDT) interfaces.
ggsnApnUplinkBytes | The total number of uplink user plane bytes processed on a per-APN basis by the GGSN or PGW on the Gn/Gp, S5/S8, GTP S2a/S2b and Iu-U (3GDT) interfaces.
ggsnApnUplinkDrops | The total number of uplink user plane packets dropped on a per-APN basis by the GGSN or PGW on the Gn/Gp, S5/S8, GTP S2a/S2b and Iu-U (3GDT) interfaces.
pgwApnActiveEpsBearer | The total number of IPv4, IPv6 and IPv4v6 Evolved Packet System (EPS) bearers associated with the APN on the PGW S5/S8 and GTP S2a/S2b interfaces.
Table 6: PDN GW performance counters to be observed

Monitoring interface specification (common for all trials) in JSON format:

{"metric": { "name": "metric_name", "documentation": "metric_documentation", "value": val, "type": "gauge / counter", "label_list":[ { "label": { "key": "label_key_1", "value": "label_value_1" } }, { "label": { "key": "label_key_n", "value": "label_value_n" } } ] } }

The JSON data representation can be transformed into the Prometheus text format. The JSON representation above transforms into the following line:

metric_name_counter{label_key_1="label_value_1", label_key_n="label_value_n"} val
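Although the M&A implementation below relies on the prometheus_client library, a minimal plain-Python helper illustrating the same transformation (the function name and the example payload are illustrative) would be:

def to_prometheus_text(metric_json):
    # Render one metric in the Prometheus text exposition format shown above.
    metric = metric_json["metric"]
    labels = ",".join('%s="%s"' % (l["label"]["key"], l["label"]["value"])
                      for l in metric["label_list"])
    return "%s{%s} %s" % (metric["name"], labels, metric["value"])

example = {"metric": {"name": "metric_name_counter", "documentation": "doc",
                      "value": 42, "type": "counter",
                      "label_list": [{"label": {"key": "label_key_1",
                                                "value": "label_value_1"}}]}}
print(to_prometheus_text(example))
# prints: metric_name_counter{label_key_1="label_value_1"} 42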

Exposing multiple metrics to the Prometheus server parser requires separating the metrics using the newline character ("\n"). The transformation mechanism has been implemented in Python and makes use of the "prometheus_client" Python library. The metric type can be either Counter or Gauge: Counters only go up, and are reset when the process restarts, while Gauges can go up and down. The transformation implementation for both Counter and Gauge is provided below:

import json, requests

# Retrieve the JSON metric representation from the monitored endpoint
# (self._endpoint holds the endpoint URL inside the M&A service).
data = json.loads(requests.get(self._endpoint).content.decode('UTF-8'))

from prometheus_client import Counter

# Label names must be declared when the Counter is created; the corresponding
# label values are bound before incrementing.
label_keys = [label["label"]["key"] for label in data["metric"]["label_list"]]
label_values = [label["label"]["value"] for label in data["metric"]["label_list"]]

c = Counter(data["metric"]["name"], data["metric"]["documentation"], label_keys)
c.labels(*label_values).inc(data["metric"]["value"])

import json, requests

data = json.loads(requests.get(json_data_endpoint).content)

from prometheus_client import Gauge

label_keys = [label["label"]["key"] for label in data["metric"]["label_list"]]
label_values = [label["label"]["value"] for label in data["metric"]["label_list"]]

g = Gauge(data["metric"]["name"], data["metric"]["documentation"], label_keys)
g.labels(*label_values).set(data["metric"]["value"])

The data are then exposed to the Prometheus server:

import time
from prometheus_client import start_http_server

if __name__ == '__main__':
    # Start up the server to expose the metrics. Counter/Gauge instances
    # created above are registered in the default registry automatically,
    # so the loop only needs to keep the process alive.
    start_http_server(8000)
    while True:
        time.sleep(1)

2.3.3.4. OAM – TrustNode integration

The configuration of the TrustNode in 6Tree mode is done by changing the device's IPv6 prefix. The 6Tree network prefix can be configured using the front panel of the router. For SDN applications, an optional REST-JSON interface can be provided, which listens on the management interface of the device.

REST: POST URL: :6363/setprefix

json-input:
{
  "prefix": string
}

json-output:
{
  "status": ["ok", "error"],
  "log": string
}
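A hedged example of invoking this endpoint from an SDN application could look as follows; the management address and the prefix value are placeholders:

import requests

# Hypothetical TrustNode management address; the prefix is an example value.
resp = requests.post("http://10.0.0.1:6363/setprefix",
                     json={"prefix": "2001:db8:100::/48"})
result = resp.json()
print(result.get("status"), result.get("log", ""))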


2.3.3.5. OAM – OFDM-PON integration

The OAM integration for the OFDM-PON has already been described in the earlier deliverables D3.2 and D2.2. The interface to the CMO is provided by means of an embedded device, a Raspberry Pi, which runs a Linux OS (Debian derivative). The REST API is provided by the Python module Flask. It is currently being tested at HHI labs by providing ID information via the REST API.

Figure 32: Embedded device providing interface to CMO

Figure 33: OFDM-PON – interface to CMO
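A minimal sketch of such a Flask-based endpoint, assuming it simply exposes identification information (the route and field names are illustrative and not the actual HHI implementation), is given below:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/id", methods=["GET"])
def device_id():
    # Static identification data returned to the CMO over REST.
    return jsonify({"device": "OFDM-PON", "vendor": "HHI", "interface": "CMO REST"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)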

In the next step, it is planned to integrate with the CHARISMA CMO.

3. Software deployment and configuration

The three field trial demos will each run the same CHARISMA CMO developed during the project, configured as appropriate for the hardware set-up of each demonstrator, as already discussed in chapter 2. Here we now discuss the software deployment and configuration as it will be applied for each of the field trials.

3.1. Control Management and Orchestration deployment
CHARISMA's CMO is a modular component formed by several subcomponents. The CMO can be deployed as a single package, since all the different modules are deployed by using a single script, which takes care of installing all the subcomponents. More detailed documentation on how to deploy it can be found in the developer's guide [8]. Due to the modular nature of the CMO, its components can also be installed individually as standalone units (one by one). The following subchapters discuss the deployment instructions for the main components of the CMO: the VNF Orchestrator (TeNOR), the Service Monitoring & Analytics and the Open Access Manager (OAM).

3.1.1. Service Orchestration (TeNOR)
To install the orchestrator, three options are available: using a Vagrant file, using Docker, or via step-by-step deployment. These are all similar, and more in-depth information on each installation method is provided inside the README file included with the code. The generic steps to perform the deployment and configuration of the Service Orchestration (TeNOR) are the following:
1. The code needs to be downloaded from the Git repository.
2. Once the code is downloaded, the README file in the root directory of the repository contains instructions on how to deploy TeNOR on the host. That file describes the alternative ways to install the orchestrator, mainly: through a Vagrant file, using Docker, or with step-by-step instructions. The scripts take care of all dependencies (mainly RabbitMQ and MongoDB). By default, MongoDB runs without authentication; to enable authentication, MongoDB (and also TeNOR) should be configured to use the generated credentials.
3. When all the scripts have been run, TeNOR is ready.
Note 1: OpenStack needs to be installed and configured separately.
Note 2: In-depth information can be found in the CHARISMA Development Manual [8].

3.1.2. Service Monitoring & Analytics The Monitoring and Analytics Service (M&A) resides in an Ubuntu 16.04 virtual machine (it also been tested successfully in Linux Centos 7) in the CMO Cloud Infrastructure. The deployment and configuration of the service takes place at the initial system setup. As is already mentioned in the parallel deliverable D3.4 - section 2.3 – the CHARISMA M&A is strongly dependent on the Prometheus Monitoring software. As a result, for the deployment of the service, the Prometheus server and alert manager components are

required. This task has been automated using Ansible automation software. To configure the deployment parameters, the following configuration files must be modified:

charisma-ansible-prometheus/inventory:

[vm_static]
# "VM IP" "VM username"
10.10.10.2 ansible_user=username

charisma-ansible-prometheus/ansible-prometheus.yml:

- hosts: vm_static
  roles:
    - ansible-prometheus
  vars:
    prometheus_components: [ "prometheus", "node_exporter", "alertmanager" ]
    prometheus_node_exporter_use_systemd: yes
    prometheus_version: 1.6.1
    prometheus_node_exporter_version: 0.14.0
    prometheus_alertmanager_version: 0.5.1
  become: yes
  become_method: sudo

After that, the Prometheus components are in place and configured to communicate with the CHARISMA M&A service. For the M&A software to run, the Python 3 programming language is required, as well as certain Python libraries. The easiest way to install them is with the following commands:

sudo apt update
sudo apt install -y python3 python3-pip
sudo pip3 install -U pip
sudo pip3 install flask flask_cors sqlalchemy

The software can then be downloaded from the NCSRD Git repository:

git clone https://[email protected]/mnlab-dev/monitoring-api.git

In the monitoring-api/main.conf file, the TCP port and Prometheus directories for target resource and alert rule data can be defined.

[DEFAULT]
port = 8082

[Prometheus]
#target_resource_directory = /etc/prometheus/tgroups
target_resource_directory = ./
#alert_rule_directory = /etc/prometheus/rules
alert_rule_directory = ./

The service can then be started with the following command and becomes available at the configured port:

python3 main.py

3.1.3. Open Access Manager
In order to install the CHARISMA Open Access Manager (OAM), some basic software needs to be installed first: Ruby, Python and MongoDB are prerequisites. These need to be installed separately, with the instructions for installing them included in the CHARISMA Development Manual [8]. Once the dependencies are satisfied, installation can start, and the process is very similar to that previously described for the TeNOR orchestrator:
1. The code needs to be downloaded from the Git repository.
2. Within the code files, a README file is provided with the instructions to complete the deployment.
3. The code is divided into two big blocks: iml and pml (these get installed when following the scripts).
4. Iml requires executing the "rake start" command. The configuration can be tweaked by modifying the file included in the config folder. Should that not be modified, a default configuration is used.
5. Pml is then deployed next, when the script is executed.

Note: Iml is tasked with storing the information about the physical resources. It also implements parts of the infrastructure manager. Pml deploys the slice manager, the Network Service manager, the user manager and parts of the infrastructure manager.

3.2. VNF

3.2.1. IDS
Deployment of the IDS software has been explained in detail in the earlier deliverable D3.2 – section 4.4 and Appendix III. The software required for the IDS to function properly is packaged inside a system image, uploaded to the VIM (OpenStack) image storage service (Glance), and can be instantiated at any time by the CHARISMA NFVO. The image has been designed to require two network interfaces in the following order:
1. one interface for management on the CHARISMA management network;
2. one interface for packet inspection on the VNO slice network.
Considering the management network (in the image called "provider") to be on the subnet 10.100.80.0/24 and one VNO network (in the image called "vlan-10") on subnet 10.77.10.0/24, the OpenStack deployment will be as shown in the following image:


Figure 34: OpenStack deployment and configuration for the virtual IDS VNF

3.2.2. Firewall (FW)
As above, the deployment of the Firewall software has already been explained in detail in the deliverable D3.2 – section 4.4 and Appendix III. The image to be deployed through the NFV Orchestrator has been designed to require three network interfaces in the following order:
1. one interface for management on the CHARISMA management network;
2. one interface for the WAN side of the Firewall on the VNO slice network;
3. one interface for the LAN side of the Firewall on the VNO slice – vlan pair network.
Considering the management network (in the image called "provider") to be on the subnet 10.100.80.0/24 and one VNO network (in the image called "vlan-10") on subnet 10.77.10.0/24, and the VNO slice – vlan pair network (in the image called "vlan-3010") on subnet 11.77.10.0/24, the OpenStack deployment is as shown in the following image.

Figure 35: OpenStack deployment and configuration for the virtual firewall VNF

3.2.3. Cache Controller
In the CHARISMA caching management, the vCC running as a VNF is managed by the CMO orchestrator and provides the management of caching services for a VNO. Each VNO is assigned one vCC, and several vCaches are distributed in different CALs in the CHARISMA network. The vCC allows the VNO to autonomously manage and configure the vCaches allocated to it.

Figure 36: The vCC structure

The vCC has been implemented on Ubuntu 16.04, and includes three main components: CCD, Database (DB) and Web Manager.
 The CCD daemon has the following functionalities:
o CCD runs as a cache controller daemon to communicate with Netconf and external modules; as a central point it runs the Netconf client (netopeer-server);
o CCD sends requests to Squid Proxy and Prefetcher through Netconf commands;
o CCD configures CNs by exchanging messages over the Netconf protocol.
 CC DB stores HTTP Proxy and prefetcher information in the CN:
o Information about the CN, such as storage and caching algorithms, in the Table CacheNode;
o Information about chunks in Table CHUNK;
o Information about user requests in Table REQUEST.
 Web cache manager daemon:
o Detects all the events from the GUI of the web manager;
o Apache configuration;
o Hypertext Preprocessor (PHP) on the backend side and HTML / Cascading Style Sheets (CSS) on the front side.
The database (DB) is designed and installed in the CC. We use SQLite, a relational database management system contained in a C programming library. The DB includes three tables:

> CREATE TABLE CacheNode(ID integer PRIMARY KEY, Type text, IPEth0 varchar, IPWlan0 varchar, IPWlan1 varchar, IPLTE varchar, SquidCap integer, SquidAlgo text, PrefetchCap integer, PrefetchAlgo text, ConnectedNetwork text, Status text);

> CREATE TABLE REQUEST(ID integer PRIMARY KEY, Url varchar, Timestamp integer, FromCN integer, FromUserIP varchar);

> CREATE TABLE CHUNK(ID integer PRIMARY KEY, Type text, UrlCh varchar, UrlFile varchar, Size integer);
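For illustration, the following Python sketch creates the three tables with the statements above and performs a sample insert and query; the database file name and the inserted values are placeholders:

import sqlite3

conn = sqlite3.connect("cc.db")
cur = conn.cursor()

# Create the three vCC tables exactly as defined above (if not present yet).
cur.executescript("""
CREATE TABLE IF NOT EXISTS CacheNode(ID integer PRIMARY KEY, Type text, IPEth0 varchar,
    IPWlan0 varchar, IPWlan1 varchar, IPLTE varchar, SquidCap integer, SquidAlgo text,
    PrefetchCap integer, PrefetchAlgo text, ConnectedNetwork text, Status text);
CREATE TABLE IF NOT EXISTS REQUEST(ID integer PRIMARY KEY, Url varchar, Timestamp integer,
    FromCN integer, FromUserIP varchar);
CREATE TABLE IF NOT EXISTS CHUNK(ID integer PRIMARY KEY, Type text, UrlCh varchar,
    UrlFile varchar, Size integer);
""")

# Register a hypothetical cache node and list all known nodes.
cur.execute("INSERT INTO CacheNode(Type, IPEth0, SquidCap, SquidAlgo, Status) "
            "VALUES (?, ?, ?, ?, ?)", ("CAL2", "10.77.10.5", 2048, "LRU", "active"))
conn.commit()
print(cur.execute("SELECT ID, Type, IPEth0, Status FROM CacheNode").fetchall())
conn.close()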

Installation of netopeer client

> git clone https://github.com/CESNET/libnetconf.git

> cd libnetconf

> ./configure --enable-notifications # Notifications must be enabled for netopeer to compile.

> ./configure --with-nacm-recovery-uid=uid # To avoid the access denied issue when the Netconf client tries to connect to the NCS

> make

> make install

#In some Ubuntu distributions (14.10, for example) you may have to install libxml2-dev, libxslt-dev and libssh-dev.

> git clone https://github.com/CESNET/netopeer.git

> cd netopeer

#Compiling client

> sudo apt-get install libreadline-dev

> cd cli

>./configure

> make

> make install

To start the vCC service, we need to start the WebCacheManager daemon, the CacheController daemon, and the JCP_UPDATABASE daemon (for prefetching).

> cd /home/jcp/Desktop/CacheController
> ./JCP_CCACHEDAEMON
> ./JCP_WEBCACHEMNGR /var/www/MagicBoxwww_modified/Configuration/
> ./JCP_UPDATABASE

The JCP_CCACHEDAEMON is the main cache controller daemon communicating with the CC internal components and external components. The JCP_WEBCACHEMNGR provides a web GUI to allow the cache system manager to get and edit the configuration information of the vCaches. The JCP_UPDATABASE is responsible for retrieving fresh information about user requests in the vCaches, and storing this information in the database.

3.2.4. Cache
The virtualized caching (vCaching) solution of CHARISMA builds on the support of application-level caching functionality in the form of a VNF. We term each caching VNF instance a vCache. vCaches are realized on top of a generic Ubuntu 14.04 LTS virtual machine image. For the caching functionality we have selected Squid3, since it is a mature and widely supported implementation. In the following we provide details on the software deployment and configuration of vCaches. This material includes the baseline setup and configuration, as well as configurations for transparent caching, communication with the vCC, and pre-fetching.

3.2.4.1. Squid installation and configuration
Our setup is based on Squid v3.5.20. Installation of the corresponding software is performed as follows:

> sudo apt-get update
> sudo apt-get upgrade
> sudo apt-get install squid
> sudo initctl show-config squid3 # verify that squid is configured to start at boot

Squid configurations are made by editing the squid.conf file.

Transparent caching support

In order for the vCache operating system not to discard incoming packets destined to content servers, we apply the following configuration using iptables, the Linux user-space tool typically used to configure the operating system's packet forwarding and NAT tables:

> iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port $SQUIDPORT

This command allows traffic destined to port 80, i.e. HTTP traffic, to be delivered to the port on which Squid is listening ($SQUIDPORT), even though the local VM does not hold the destination IP address of the request packet. Squid is required to be built with interception support, i.e. configured and built with Linux netfilter support. Namely:

> ./configure --prefix=/opt/squid/ --with-logdir=/var/log/squid/ --with-pidfile=/var/run/squid.pid --enable-storeio=ufs,aufs --enable-removal-policies=lru,heap --enable-icmp --enable-useragent-log --enable-referer-log --enable-cache-digests --with-large-files --enable-snmp --enable-linux-netfilter

Finally, the following line is added to the squid.conf file to allow Squid to listen on SQUIDPORT=3129.

http_port 3129 intercept
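A quick, hedged way to sanity-check the interception configuration from a client whose HTTP traffic is routed through the vCache is to fetch the same object twice and inspect the X-Cache header that Squid adds by default; the URL below is illustrative:

import requests

url = "http://example.com/"   # any cacheable HTTP object reachable via the vCache
for attempt in range(2):
    resp = requests.get(url)
    # Squid typically reports "MISS from <host>" on the first request and
    # "HIT from <host>" once the object has been cached.
    print(attempt, resp.headers.get("X-Cache", "no X-Cache header"))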

Parent-child and sibling relationship configuration

As also described in D3.4, the overall vCache peering setup requires the establishment of cooperative caching relationships between the constituent vCache VNFs of the overall solution. These relationships

3 http://www.squid-cache.org/

correspond to the following Squid configurations (we only present settings related to vCache peering).

Peering vCache

#Sibling relationship with peering VNO vCache

acl siblingA src peering_VNO_vCache_IP

icp_access allow siblingA

http_access allow siblingA

miss_access deny siblingA

cache_peer peering_VNO_vCache_IP sibling 3128 3130 proxy-only no-digest no-netdb-exchange

#Parent-child relationships with local vCaches. #Not supported in the current integrated demo setup

acl localvCacheX src local_vCache_IPs/Subnet

acl localvCacheX src local_Host_Prefetcher

icp_access allow local_vCache_IPs/Subnet

http_access allow local_vCache_IPs/Subnet

miss_access allow local_vCache_IPs/Subnet

http_access allow local_Host_Prefetcher

cache_peer local_vCache_IPs/Subnet parent 80 0 no-digest no-netdb-exchange

cache_peer_access local_vCache_IPs/Subnet allow local_Host_Prefetcher

cache_peer_access local_vCache_IPs/Subnet deny all

cache_peer_access peering_VNO_vCache_IP allow local_vCache_IPs/Subnet

cache_peer_access peering_VNO_vCache_IP deny all

icp_access deny all

Local vCache

#Sibling relationship with scaled-out local vCaches

acl siblingY src local_vCache_IPs #IP addresses of other local vCaches

icp_access allow siblingY

http_access allow siblingY

miss_access deny siblingY

cache_peer local_vCache_IPs sibling 3128 3130 proxy-only no-digest no-netdb-exchange

#Sibling relationship with peering vCache #Not supported in the current integrated demo setup

acl peeringVCache src peering_vCache_IP

icp_access allow peeringVCache

http_access allow peeringVCache

miss_access allow peeringVCache

cache_peer peering_vCache_IP sibling 8100 8101 proxy-only no-digest no-netdb-exchange

# The configured ports are not the default ones, as they are handled by the reverse proxy on the peering VNO. The latter is configured to forward all received ICP traffic at port 8100 to port 3128 of the peering VNO’s peering vCache, as well as all received HTTP traffic at port 8101 to port 3130 of the peering VNO’s peering vCache.

icp_access deny all

Squid configurations of vCache VNFs are applied by the vCC, through the NETCONF-based vCache-vCC interface (see the next subsection).

3.2.4.2. Communication with the virtual cache controller (vCC)
Each vCache communicates with the vCC so as to be appropriately configured and managed. This includes the ability of the vCC to issue prefetch commands, and to dynamically configure vCache sibling and parent-child relationships. The following setup steps are applied at the creation of the vCache image, i.e. they are not subject to vCaching service orchestration, and their role is to prepare the vCache VNF for communication with the vCC.

Prefetching setup

In the directory containing the Prefetcher and Common source code:

> make

> sudo ./prefetcher -d -p -h 5555 # Run the prefetcher

The option “-p” enables the debug logging, “-d” enables debugging and the “-h” option sets the port of the Prefetcher.

Install libnetconf and netopeer server

The capability of the vCC to manage a vCache is based on the support of the NETCONF protocol. To this end, a server side NETCONF instance is installed and configured on vCaches. We use the Netopeer tool4 for this purpose. In the following we present the setup and configuration procedures in detail.

Step 1: Installation of libxml2 (http://xmlsoft.org/)

> sudo apt-cache search libxml2

> sudo apt-get install libxml2-dev

> sudo apt-get install libxslt-dev

Step 2: Installation of libssh (www.libssh.org/)
The installation requires openssl or libgcrypt-dev. If an old version of libssh-dev has been installed, it must be removed:

> sudo apt-get remove libssh-dev

> sudo apt-get purge libssh-dev

> sudo rm -rf /usr/include/libssh/

Then:

> tar -Jxvf libssh-0.7.3.tar.gz

> cd libssh-0.7.3

> mkdir build && cd build

> cmake -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_BUILD_TYPE=Debug ..

> make

> sudo make install

Step 3: Installation of libdbus-dev package

> sudo apt-cache search libdbus

> sudo apt-get install libdbus-1-dev

Step 4: Installation of libevent, doxygen, libcurl-dev

> sudo apt-get install libevent-dev

> sudo apt-get install doxygen

> sudo apt-get install libcurl4-gnutls-dev

Step 5: Installation of netopeer library

4 The employed netopeer version is only compatible with libnetconf-0.9.0, not libnetconf-0.10.0.

> git clone https://github.com/CESNET/libnetconf.git

> cd libnetconf

> ./configure --enable-notifications # Notifications must be enabled for netopeer to compile.

> ./configure --with-nacm-recovery-uid=uid # To avoid the access denied issue when the Netconf client tries to connect to the NCS

> make

> make install

#In some Ubuntu distributions (14.10, for example) you may have to install libxml2-dev, libxslt-dev and libssh-dev.

> git clone https://github.com/CESNET/netopeer.git

> cd netopeer

#Compiling server

> sudo apt-get install python-libxml2

> cd server

> ./configure --sysconfdir=/etc

> make

> make doc #without this step install script fails

> make install

Step 6: TransAPI debian packages installation
TransAPI is the interface through which the NETCONF server changes the settings of various devices (e.g., network interfaces), the system (e.g., time zone) and the configuration of software programs (e.g., Squid Proxy) on the machine. The Netopeer tool provides two transAPIs for network interfaces and system configuration. For a cache node, there are four new transAPI packages, namely cfg-squid-conf, cfg-squid-list, cfg-prefetch-conf and cfg-prefetch-list, which should be installed and configured.

 cfg-squid-conf: Squid configuration transAPI provides an interface to allow the vCC to configure squid.conf.
 cfg-squid-list: Squid list transAPI allows the vCC to collect user request information from Squid.
 cfg-prefetch-conf: Prefetcher configuration transAPI provides an interface to the prefetcher in order to allow the vCC to configure it.
 cfg-prefetch-list: Prefetcher list transAPI provides an interface to the prefetcher in order to allow the vCC to exchange commands with it.

Step 7: Starting Squid and the Netopeer server

> sudo mkdir /var/run/squid

> sudo chown squid.squid /var/run/squid

> squid -NCd1 -f /usr/local/etc/squid/squid.conf

> sudo netopeer-server -v 5

4. Testing and Validation

Having outlined the architectural set-ups (both physical and logical) and described the hardware components that will be used for the three field trial demonstrators in chapter 2, followed by the software for the CMO and VNFs that lie above the PHY infrastructure in chapter 3, in this chapter we now present the methodology for the testing and validation of these CHARISMA technologies. This chapter is divided into three sections: the first describes the testing tools that have been selected and the rationale for their selection, while the subsequent sections describe the tests for the hardware devices and software components.

4.1. Testing tools selection and rationale

4.1.1. Robot Framework
Robot Framework is a Python-based, extensible, keyword-driven test automation framework for end-to-end acceptance testing and acceptance-test-driven development (ATDD). Test cases are automated by writing steps using Robot Framework keywords. It can be used for testing distributed, heterogeneous applications, where verification requires the interaction of several technologies and interfaces.

The Robot Framework has a set of features that facilitate testing. These include:

 Enables easy-to-use tabular syntax for creating test cases in a uniform way.
 Provides the ability to create reusable higher-level keywords from the existing keywords.
 Provides easy-to-read result reports and logs in HTML format.
 Platform and application independent.
 Provides a simple library API for creating customised test libraries, which can be implemented with either Python or Java.
 Provides a command line interface, and XML and HTML based output files for integration into existing build infrastructure (continuous integration systems).
 Provides support for Selenium for web testing, Java GUI testing, running processes, Telnet, Secure Shell (SSH), and so on.
 Supports creating data-driven test cases.
 Built-in support for variables, particularly practical for testing in different environments.
 Provides tagging to categorise and select test cases to be executed.
 Enables easy integration with source control: test suites are just files and directories whose version can be designated with the production code.
 Provides test-case and test-suite-level setup and teardown.
 The modular architecture supports the creation of tests even for applications with several diverse interfaces.

Robot Framework has been used for testing the Monitoring & Analytics component, the virtual IDS, the virtual firewall and the virtual cache.
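For illustration, the suites can also be launched from Python through Robot Framework's programmatic API; the suite directory, tags and output directory below are assumptions, not the project's actual test layout:

from robot import run

# 'tests/' is assumed to contain the .robot suites; tag names are illustrative.
rc = run(
    "tests/",
    include=["monitoring", "vcache"],  # run only test cases carrying these tags
    outputdir="results",               # HTML log and report are written here
)
print("Robot Framework return code:", rc)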


4.1.2. OFDM-PON testing tools
For OFDM-PON testing, a self-implemented Matlab® script serves as a reference implementation for the physical layer. Depending on the test case, parts of this script are deactivated and the corresponding processing is done in hardware. Interfacing to the script is done over the VXI-11 protocol via an Ethernet connection, either to a Digital Storage Oscilloscope (DSO) for data capturing (e.g. keeping the whole ONU component simulated for stand-alone tests of the OLT), or to an Arbitrary Waveform Generator (AWG) (e.g. keeping the OLT simulated for stand-alone tests of the Optical Network Unit (ONU) component); a sketch of such a VXI-11 connection is given after the hardware list below. The following hardware has been used for the test cases described below:
 LeCroy Wavemaster 830Zi Oscilloscope (30 GHz, 80 GSa/s, 8 bit) for capturing the whole 16 to 17 GHz OLT band,

 Tektronix InfiniVision MSO7104A Oscilloscope (1 GHz, 4 GSa/s, 8 bit) for capturing a 2 x 0.25 GSa/s OLT band,

 Tektronix AWG7122B AWG (multiplexable 2 x 12 GSa/s, 8 to 10 bit) for simulating half of the OLT band to offer partial band access to the ONU.
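A minimal sketch of such a VXI-11 connection, assuming the python-vxi11 package and an illustrative instrument address and SCPI commands (not the actual test-bench settings), is shown below:

import vxi11

# Hypothetical management IP of the capturing oscilloscope.
dso = vxi11.Instrument("192.168.0.20")
print(dso.ask("*IDN?"))              # identify the instrument
dso.write(":WAVEFORM:FORMAT BYTE")   # example configuration command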

4.2. Hardware System testing

4.2.1. TrustNode Testing
The following sections describe the tests for the hardware-accelerated routing platform TrustNode. These tests verify the correct function of the device in the CHARISMA framework, with the tests being divided into the following sub-sections: electromagnetic compatibility (EMC) hardware verification, FlowEngine test, 6Tree speed test, and TrustNode functional test.

4.2.1.1. EMC hardware test
The TrustNode has been designed as an experimental device, with the hardware destined for practical applications in a field trial environment. EMC testing must therefore be done to estimate the influence of the TrustNode on other equipment and vice versa. The following tests check the TrustNode device according to the DIN EN 61000 standard [13]. Due to the dependency between the electromagnetic behaviour and the loaded configuration and functionality, the device tests are executed with the 6Tree configuration loaded on the device and 6Tree traffic flooded into the ports. The tests show whether the operation of the TrustNode device is safe with respect to disturbing other devices, and whether the electromagnetic emissions of other devices can disturb the function of the TrustNode.

CHARISMA – D4.2 – v1.0 Demonstators Infrastructure Setup and Validation Page 86 of 190 Test Description Identifier EMC Test number 1 Test Purpose Electromagnetic emission on powerline 0.15...30 MHz Configuration See DIN EN 61000-6-3 [13]

Test Step Type Description Result Sequence 0 stimuli Configure device according DIN EN 61000-6-3 [13] 1 check

Figure 37: Test output on powerline L-wire according DIN EN 61000-6-3 [13]

Figure 38: Test output on powerline N-wire according DIN EN 61000-6-3 [13]

Check if emission (yellow, gree) is below threshold (red)

CHARISMA – D4.2 – v1.0 Demonstators Infrastructure Setup and Validation Page 87 of 190 Test Description

Figure 39: Test output on powerline L-wire according DIN EN 61000-6-3 [13]

Test Verdict

Test Description
Identifier: EMC Test number 2
Test Purpose: Electromagnetic emission radiation 30...1000 MHz
Configuration: See DIN EN 61000-6-3 [13]
Test Sequence:
  Step 0 (stimuli): Configure device according to DIN EN 61000-6-3 [13]
  Step 1 (check): Check if emission (yellow) is below threshold (red), see Figure 41
  Step 2 (stimuli): Turn device 90° on the horizontal table
  Step 3 (check): Repeat step 1 if angle is below 360°
  Step 4 (stimuli): Change antenna polarisation from horizontal to vertical
  Step 5 (check): Do step 1
  Step 6 (stimuli): Do step 2
  Step 7 (check): Repeat step 5 until angle is 360°
Test Verdict:

Figure 40: DIN EN 61000 compliant test setup. The picture shows the TrustNode device on a horizontal turntable placed in a shielded cabin.

Figure 41: Test output according to DIN EN 61000-6-3 [13]

Test Description
Identifier: EMC Test number 3
Test Purpose: Electromagnetic emission radiation 1000...6000 MHz
Configuration: See DIN EN 61000-6-3 [13]
Test Sequence:
  Step 0 (stimuli): Configure device according to DIN EN 61000-6-3 [13]
  Step 1 (check): Check if emission (yellow) is below threshold (red), see Figure 42
  Step 2 (stimuli): Turn device 90° on the horizontal table
  Step 3 (check): Repeat step 1 if angle is below 360°
  Step 4 (stimuli): Change antenna polarisation from horizontal to vertical
  Step 5 (check): Do step 1
  Step 6 (stimuli): Do step 2
  Step 7 (check): Repeat step 5 until angle is 360°
Test Verdict:

Figure 42: Test output according to DIN EN 61000-6-3 [13]

Test Description
Identifier: EMC Test number 4
Test Purpose: Check behaviour for fast transients according to DIN EN 61000-6-3 [13]
Configuration: See DIN EN 61000-6-3 [13]
Test Sequence:
  Step 0 (stimuli): Configure device according to DIN EN 61000-6-3 [13]
  Step 1 (stimuli): Apply 1 kV transient to power supply
  Step 2 (check): Check if device is still working
  Step 3 (stimuli): Apply 2 kV transient to power supply
  Step 4 (check): Check if device is still working
  Step 5 (stimuli): Apply 0.5 kV transient to LAN connector
  Step 6 (check): Check if device is still working
  Step 7 (stimuli): Apply 1 kV transient to LAN connector
  Step 8 (check): Check if device is still working
Test Verdict:

Figure 43: Test setup for fast transients injection according to DIN EN 61000-6-3

4.2.1.2. FlowEngine Test
As described in the earlier deliverable D1.3, the FlowEngine is a block of the TrustNode's programmable hardware VHDL (VHSIC (Very High Speed Integrated Circuit) Hardware Description Language) design which is responsible for packet classification and processing. The FlowEngine is necessary for the fast IPv6-based hierarchical routing scheme 6Tree, which is one of the key technologies for providing low latency in CHARISMA. In the following tests, all data rates are measured similarly to the definitions used in RFC 2697 [9] and RFC 2698 [10], yet not in bytes of IP packets per second, but as Ethernet frame sizes excluding the Frame Check Sequence (FCS). The test bench consists of a Vivado Simulator and ModelSim. The test bench waits for the FlowEngine to be initialized, then applies a set of stimuli to the management interface via a virtual component to configure the FlowEngine, change its configuration during runtime, and read out counter values and status flags.

The test bench and every unit of the FlowEngine contain signals to check for critical states and errors, and to provide high-level information on frame processing, state changes, throughputs, reception of bad frames, FIFO (First In, First Out) overruns and underruns, etc. In addition, log files for the output traffic are created.

The network traffic stimuli include:

A. Illegal frame formats and frames with RX_ER marking;
B. All frame sizes between 0 B (just preamble or preamble with start-of-frame delimiter) and approx. 2,000 B, plus some discrete oversized frames (to set the RX FIFO overflow flags);
C. Untagged, C-tagged, prio-tagged, and badly tagged frames to check correct service mappings;
D. Frames for all ports;
E. All relevant stimuli previously used to detect bugs as a means to implement regression tests.

Test Description
Identifier: Simulation Test number 1
Test Purpose: The goal is to show that the colour marking is working.
Configuration:
Test Sequence:
  Step 1 (stimuli): The value of Committed Burst Size (CBS) = Excess Burst Size (EBS) = 2,048 B is configured. The stimulus consists of 1 packet only, with a size of 1,024 B, contained in trace.0.dump.
  Step 2 (stimuli): CBS is set to 0 B and the same packet is sent again.
  Step 3 (stimuli): CBS = EBS = 0 B is set and the packet is sent again.
  Step 4 (check): At first, the packet should be marked green, then yellow, and then red. The colour can be checked in the NoC header; the red packet should be discarded.
Test Verdict:

Test Description
Identifier: Simulation Test number 2
Test Purpose: The goal is to check whether the token buckets are incremented according to the specified rates.
Configuration:
Test Sequence:
  Step 1 (stimuli): The value of CBS is 2,048 B. The stimulus consists of 1 packet only; the packet size is 1,024 B.
  Step 2 (stimuli): CBS is set to 0 B and the same packet is sent again.
  Step 3 (check): The two token buckets Tc and Te should be full in the beginning, Tc(t=0) = CBS, Te(t=0) = EBS, which is true three full refresh cycles after configuring the value, i.e., after approx. 190 µs. In the first case, as a result of the received packet, Tc should be decreased by 1,024 B, but two full refresh cycles, i.e., 126 µs, later it should be back at its maximum again (Tc = CBS). During this time, it should be increased by either 781 B or 782 B per full refresh cycle. The packet should be marked green.
  Step 4 (check): The CBS is 0 B, so the packet should be marked yellow. As a result of the received packet, Te should be decreased by 1,024 B, but two full refresh cycles later it should be at its maximum again (Te = EBS). During this time, it should be increased step-by-step, as previously observed for the green bucket.
Test Verdict:
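To illustrate the colour-marking behaviour checked in the two simulation tests above, the following Python sketch implements a simple two-rate colour marker in the spirit of RFC 4115 operating in "strict" mode; all names, the software-based bucket refresh and the example parameters are illustrative and do not represent the FlowEngine implementation:

import time

class ColourMarker:
    def __init__(self, cir_bps, eir_bps, cbs_bytes, ebs_bytes):
        self.cir, self.eir = cir_bps / 8.0, eir_bps / 8.0   # token rates in bytes/s
        self.cbs, self.ebs = cbs_bytes, ebs_bytes
        self.tc, self.te = cbs_bytes, ebs_bytes             # buckets start full
        self.last = time.monotonic()

    def _refill(self):
        # Refill both buckets according to the elapsed time, capped at CBS/EBS.
        now = time.monotonic()
        dt, self.last = now - self.last, now
        self.tc = min(self.cbs, self.tc + self.cir * dt)
        self.te = min(self.ebs, self.te + self.eir * dt)

    def mark(self, frame_size):
        self._refill()
        if self.tc >= frame_size:      # fits into the committed bucket
            self.tc -= frame_size
            return "green"
        if self.te >= frame_size:      # fits into the excess bucket
            self.te -= frame_size
            return "yellow"
        return "red"                   # would be discarded by the FlowEngine

# Example corresponding to Simulation Test 1: CBS = EBS = 2,048 B, one 1,024 B frame.
marker = ColourMarker(cir_bps=100e6, eir_bps=50e6, cbs_bytes=2048, ebs_bytes=2048)
print(marker.mark(1024))   # expected: green

In an RFC 2698-style marker, by contrast, the first check is performed against the yellow bucket, which is exactly the behavioural difference probed in hardware tests 4, 10 and 11 below.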

Test Description
Identifier: Hardware Test number 1
Test Purpose: All frame sizes from 64 B up to the Maximum Transfer Unit (MTU) of the test equipment are sent once per direction (upstream and downstream), with a rate of 100 frames per second. The low rate guarantees that no frame loss occurs. If no frame is filtered out or modified, then this test is passed. Before starting this test, and directly after it, all FPGA (Field Programmable Gate Array) states accessible via the management interface have to be read out. The differences between both readouts, as well as the values of the post-test readout, have to be checked in detail for unusual values. The goal is also to show the effect of yellow marking.
Configuration: See illustration (Figure 44) and Hardware Test 1 setup
Test Sequence:
  Step 0 (stimuli): Set queue length thresholds to a high value, e.g., 255 frames, to prevent any influence. Increase the data rate up to 140 Mb/s. Packets should be sent equally distributed (traffic without bursts).
  Step 1 (check): Although part of the stream should be marked yellow, it should not have any measurable effect, since there is no reason for discarding. Packets should be received without any loss.
Test Verdict:

Figure 44: TrustNode hardware test setup. The picture shows the TrustNode device, which is connected to a packet generator and analyser. The analyser checks the packets for loss and Cyclic Redundancy Check (CRC) errors.


Test Description
Identifier: Hardware Test number 4
Test Purpose: The goal is to show the effect of the burst sizes, if they are near the MTU.
Configuration: See illustration and Hardware Test 1 setup
Test Sequence:
  Step 0 (stimuli): Send a flow with a rate lower than the Committed Information Rate (CIR), e.g., 80 Mb/s. Use 1,518 B frames, excluding the FCS. Packets should be sent equally distributed (traffic without bursts).
  Step 1 (stimuli): Steadily increase the CBS from 1 kB to 2 kB. Then set EBS to 1 kB.
  Step 2 (stimuli): Then set CBS to 1 kB and steadily increase EBS from 1 kB to 2 kB.
  Step 3 (stimuli): Send a flow with a rate lower than CIR + Excess Information Rate (EIR), but higher than CIR, e.g., 130 Mb/s. Use 1,518 B frames, excluding the FCS. Packets should be sent equally distributed (traffic without bursts).
  Step 4 (stimuli): Smoothly increase the CBS from 1 kB to 2 kB. Set EBS to 1 kB. Then set CBS to 1 kB and smoothly increase EBS from 1 kB to 2 kB.
  Step 5 (check): There is a fundamental difference between the two possible implementation modes of colour marking, "strict" and "loose" [12]. With the strict implementation, no packets should pass at all until CBS or EBS is equal to or bigger than the frame size (currently 1,518 B). In this test, for a data rate of 80 Mb/s, it should first be observed that packets are received as soon as CBS is bigger than the frame size. These packets should be marked green, and no frame loss should be detected in this case. With the same data rate and constant CBS = 1 kB, packets should be forwarded as soon as EBS reaches the frame size. Note that these packets should be marked yellow, so only a rate of up to EIR should be received and frame loss is expected.
  Step 6 (check): In the case of the 130 Mb/s stimulus, it should first be observed that packets are received as soon as the CBS is bigger than the frame size. These packets should be marked green, but since the incoming rate is higher than the CIR, and EBS is less than the frame size, the FlowEngine is supposed to forward only at a rate of up to CIR, so frame loss is expected. With the same data rate and constant CBS = 1 kB, packets should be forwarded as soon as EBS reaches the frame size. Note that these packets should be marked yellow, so the maximum forwarding rate is EIR, meaning that frame loss is expected.
  check: In a "loose" implementation the algorithm allows packets to be forwarded if the token bucket is not yet completely empty. This means that the burst sizes should not have any effect, as long as the average data rate is equal to the configured rate (without bursts and with little jitter) and the bucket refresh interval is small enough.
  check: The expectations described above are valid only if the implementation follows RFC 4115 [11]. In case of implementing RFC 2698 [10], the first check is always performed against the yellow token bucket, so EBS "overrides" CBS: if EBS is smaller than the frame size, the packet will be marked red, no matter whether it fits into the green token bucket or not.
Test Verdict:

Test Description
Identifier: Hardware Test number 5
Test Purpose: The goal is to show the effect of EBS on burst tolerance.
Configuration: See illustration and Hardware Test 1 setup
Test Sequence:
  Step 0 (stimuli): Send a flow with 140 Mb/s (packets should be sent equally distributed). Then send a short burst with 800 Mb/s. The burst should consist of 1300 1,518 B packets.
  Step 1 (stimuli): Increase the EBS by any value between 2 and 20 kB. Then send the same burst described above. Repeat these steps until EBS = 256 kB is reached (max value). For easier detection, the continuous flow and the burst should differ in some parameter, e.g., MAC address. Both of them have to be assigned to the same service.
  Step 2 (check): The rate of the continuous flow and the burst is much higher than CIR + EIR, but lower than the link rate, so any frame loss should occur only because of red marking. In the beginning the EBS is low, so almost all packets of the burst should be filtered (some packets may go through because of the timing of the packets or the loose implementation).
  Step 3 (check): As the EBS is increased, more and more packets from the burst should be detected at the test equipment input, since more and more will be marked yellow, not red. Around EBS ≈ 200 kB, all packets of the burst should be marked yellow and forwarded by the FlowEngine.
Test Verdict:


Test Description
Identifier: Hardware Test number 6
Test Purpose: The goal is to show the accuracy of the CIR (also for corner cases) and CBS (in strict mode).
Configuration: See illustration and Hardware Test 1 setup
Test Sequence:
  Step 0 (stimuli): For the loose implementation: send a flow at almost the line rate; repeat the test with different packet sizes. For the strict implementation: send a flow at almost the line rate; first use packet size 1,001 B, then 1,000 B, then other sizes below 1,000 B.
  Step 1 (stimuli): Repeat the test with different CIR values (4 kb/s, 10 kb/s … 100 Mb/s … 1 Gb/s). Packets should be sent equally distributed (traffic without bursts).
  Step 2 (check): In this test the 1 Rate 2 Colour mode is used, so the EIR/EBS values are not considered. In case of the loose implementation, a packet will be coloured green if CBS > 0, so in the long term the received flow data rate at the test equipment should be equal to CIR. This should be independent of the packet sizes.
  Step 3 (check): In case of the strict implementation, a packet will be coloured green if CBS > packet size, so no traffic should be detected when size 1,001 B is used. If the packets are smaller than 1,001 B, the same behaviour is expected as described for the loose implementation.
  Step 4 (check): If the CIR > traffic data rate, no traffic loss is expected.
Note: the smaller the packets that are used, the larger the initial burst will be that is received at the test equipment at the beginning of the tests.
Test Verdict:

Test Description
Identifier: Hardware Test number 9
Test Purpose: The goal is to show the precision of the CBS in corner cases.
Configuration: See illustration and Hardware Test 1 setup
Test Sequence:
  Step 0 (stimuli): Send a flow at almost the line rate. Packets should be sent equally distributed (traffic without bursts).
  Step 1 (stimuli): Change CBS from the minimum towards the maximum value (the test performer is free to choose the granularity).
  Step 2 (check): Since the CIR is extremely low compared to the flow data rate, the expected traffic at the test equipment reception port is a short burst with a data amount equal to the actual CBS. Depending on the implementation (loose/strict) and the packet sizes, a little difference may be observed between the received number of bytes and CBS.
Test Verdict:

CHARISMA – D4.2 – v1.0 Demonstators Infrastructure Setup and Validation Page 101 of 190 Test Description Identifier Hardware Test number 10 Test Purpose The goal is to show the effect of a bad burst configuration.

Configuration See Illustration and Hardware Test 1 setup

Test Step Type Description Result Sequence 0 Stimuli Set CBS = EBS = 0 kB and send a flow with a rate of 80..140 Mb/s.

1 stimuli CBS to 2 kB, leave EBS on 0 kB. Send a flow with a rate of 8..140 Mb/s.

2 stimuli Set CBS back to 0 kB and EBS to 2 kB. Send a flow with a rate of 80..140 Mb/s. Packets should be sent equally distributed all the time (traffic without bursts).

3 check Same behaviour is expected with the different implementation modes (see Test number 4). Both “strict” and “loose” implementation should mark packets red if the token counter is zero, so no packets should be forwarded at all when CBS = EBS = 0 kB. When CBS = 2 kB, EBS = 0 kB, traffic should be received until the rate of CIR (100 Mb/s, these packets are green), but above this data rate all packets should be marked red and discarded. If CBS = 0 kB and EBS = 2 kB, traffic should be received until the rate of EIR (50 Mb/s, these packets are yellow), but above this data rate all packets should be marked red and discarded.

4 check The expectations described above are valid only if the implementation follows RFC 4115 [11]. In case of implementing RFC 2698 [10] the first check is performed always against the yellow token bucket, so EBS “overwrites” CBS: if EBS is smaller than the frame size, the packet will be marked red, no matter if it fits to the green token bucket or not.

Test Verdict
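The check-order difference described in step 4 can be made explicit with a small sketch. This is an illustrative Python model only, following the document's reading of RFC 4115 and RFC 2698 (token refill is omitted and bucket levels are example values):

# Illustrative colour decision for one packet; bucket levels are in bytes, refill omitted.

def colour_rfc4115(c_tokens, e_tokens, size):
    if c_tokens >= size:
        return "green"          # committed (green) bucket checked first
    if e_tokens >= size:
        return "yellow"
    return "red"

def colour_rfc2698(p_tokens, c_tokens, size):
    if p_tokens < size:
        return "red"            # peak/excess bucket checked first: if it fails, the packet is red
    if c_tokens < size:
        return "yellow"
    return "green"

# CBS = 2 kB, EBS = 0 kB, 1,000 B packet (step 1 of Test 10):
print(colour_rfc4115(2000, 0, 1000))   # green  -> traffic up to CIR is forwarded
print(colour_rfc2698(0, 2000, 1000))   # red    -> EBS "overwrites" CBS, nothing forwarded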


Test Description Identifier Hardware Test number 11 Test Purpose The goal is to show the effect of a bad CIR/EIR configuration.

Configuration See Illustration and Hardware Test 1 setup

Test Step Type Description Result Sequence 0 Stimuli Send some traffic with a rate not higher than the line rate, e.g., 80 Mb/s. First set both CIR and EIR to 0 Mb/s.

1 stimuli Then stop the traffic, increase CIR to 100 Mb/s, leave EIR as it was, start the traffic again.

2 stimuli Finally, set CIR back to 0 Mb/s and EIR to 50 Mb/s (stop the traffic before the change and start it again afterwards). Packets should be sent equally distributed in every flow (traffic without bursts).

3 check If CIR = EIR = 0, only a “few” packets should pass the FlowEngine. The value of “few” depends on the implementation (strict/loose), the CBS/EBS and the packet size, but it means at least 2 packets (1 green, 1 yellow) with the given burst configuration, considering that the maximum frame size is 1,518 B. With CIR = 100 Mb/s and EIR = 0 Mb/s, no frame loss should be detected and every packet should be marked green. When CIR = 0 Mb/s and EIR = 50 Mb/s, a rate of 50 Mb/s should be captured with the test equipment (yellow packets), so frame loss should be detected.

4 check The expectations described above are valid only if the implementation follows RFC 4115 [11]. In the case of an RFC 2698 [10] implementation the first check is always performed against the yellow token bucket, so EIR = 0 “overwrites” CIR: if the yellow token bucket is empty, the packet will be marked red regardless of whether it fits into the green token bucket.

Test Verdict


Test Description Identifier Hardware Test number 12 Test Purpose The goal is to show that discarding on one flowtype does not affect the flow on the other type.

Configuration See Illustration and Hardware Test 1 setup

Test Step Type Description Result Sequence 0 stimuli For a flow assigned to flowtype 0, the data rate can be any value up to 100 Mb/s, e.g., 80 Mb/s.

1 stimuli For a flow assigned to flowtype 1 increase the data rate from, e.g., 10 Mb/s to 50 Mb/s.

2 stimuli Increase further to 70 Mb/s and above. Packets should be sent equally distributed all the time (traffic without bursts).

3 check As long as the flow assigned to flowtype 1 stays below 50 Mb/s, no packet loss should be detected on the reception side; all packets of both flows should be marked green and received. If the data rate of this flow is between 50 and 70 Mb/s, some packets of flowtype 1 should be marked yellow, but still no packet loss should occur since the line rate is higher than the sum of the flow rates. Above 70 Mb/s on flowtype 1, packet loss should be detected, but only on flowtype 1; the flow assigned to flowtype 0 should be received without any loss.

Test Verdict


Test Description Identifier Hardware Test number 13 Test Purpose The goal is to show that packets with lower queue number are dropped first.

Configuration See Illustration and Hardware Test 1 setup

Test Step Type Description Result Sequence 0 stimuli Both flows should have a data rate of 400 Mb/s. Packets should be sent equally distributed (traffic without bursts). Use the following line rates: 1000, 750, 600, 350 and 200 Mb/s (the expected outcome for each rate is sketched after this table).

1 check Line rate 1000 Mb/s: no frame loss should be detected at all.

2 check Line rate 750 Mb/s: part of the yellow packets should be discarded from the flow that is assigned to service 1.

3 check Line rate 600 Mb/s: part of the green packets and all of the yellow packets should be discarded from the flow that is assigned to service 1.

4 check Line rate 350 Mb/s: the complete service 1 assigned flow should be discarded and part of the yellow packets from the other flow as well.

5 check Line rate 200 Mb/s: the complete flow assigned to service 1 should be discarded, as well as all yellow packets and part of the green packets from the other flow.

Test Verdict
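As a cross-check of the expectations above, the drop behaviour can be modelled as filling the available line rate in strict precedence order: the flow mapped to the higher-priority queue (service 0 here) is served first, and within each flow green is served before yellow. The 300/100 Mb/s green/yellow split per 400 Mb/s flow in the Python sketch below is an assumed example chosen only to reproduce the qualitative expectations, not a measured or configured value:

# Toy model of strict-precedence dropping (illustration only; the 300/100 Mb/s
# green/yellow split per 400 Mb/s flow is an assumed example).

def transmitted(line_rate, classes):
    """classes: list of (name, offered_mbps) in keep-priority order."""
    out, remaining = {}, line_rate
    for name, offered in classes:
        sent = min(offered, remaining)
        out[name] = sent
        remaining -= sent
    return out

classes = [("service0 green", 300), ("service0 yellow", 100),
           ("service1 green", 300), ("service1 yellow", 100)]

for rate in (1000, 750, 600, 350, 200):
    print(rate, transmitted(rate, classes))
# 750  -> service1 yellow partly dropped; 600 -> all service1 yellow and part of its green dropped;
# 350  -> service1 gone, service0 yellow partly dropped; 200 -> part of service0 green dropped.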


Test Description Identifier Hardware Test number 14 Test Purpose The goal is to show that discarding on one port does not affect the traffic on another port.

Configuration See Illustration and Hardware Test 1 setup

Test Step Type Description Result Sequence 0 stimuli Traffic rate on port 0 should be below CIR + EIR. Packets should be sent equally distributed (traffic without bursts). On port 1 smoothly increase data rate from 140 Mb/s towards 150 Mb/s and above. Packets should be sent equally distributed on both ports (traffic without bursts).

1 check On port 0 all packets should be marked green. Since CBS and EBS are bigger than the MTU, neither of them should have any effect. The traffic data rate is below the line rate, so packets should be received without any loss.

2 check On port 1, packet loss should be detected above 150 Mb/s. The received data rate should increase up to 150 Mb/s and then stay at this rate.

Test Verdict


Test Description Identifier Hardware Test number 15 Test Purpose The goal is to check that only the unicast counters are incremented on unicast reception (neither the multicast nor the broadcast counters).

Configuration See Illustration and Hardware Test 1 setup

Test Step Type Description Result Sequence 0 stimuli Eight flowtypes are configured per port. All traffic is halted.

1 stimuli Then all Management Information Base (MIB) counters are read out.

2 stimuli Then 10 frames per flowtype per port with a unicast destination MAC address are sent downstream into the FlowEngine.

3 stimuli After completing this, all MIB counters are read out again.

4 check The unicast counter of each flowtype should have the value 10, and all other counters except for the octet counters should be 0.

Test Verdict


Test Description Identifier Hardware Test number 16 Test Purpose The goal is to check that only the multicast counters are incremented on multicast reception (neither the unicast nor the broadcast counters).

Configuration See Illustration and Hardware Test 1 setup

Test Step Type Description Result Sequence 0 stimuli Eight flowtypes are configured per port via PCIe (Peripheral Component Interconnect Express). All traffic is halted.

1 stimuli All MIB counters are read out.

2 stimuli 10 frames per flowtype per port with a multicast destination MAC address are sent downstream into the FlowEngine.

3 stimuli All MIB counters are read out again.

4 check The multicast counter of each service should have the value 10, and all other counters except for the octet counters should be 0.

Test Verdict


Test Description Identifier Hardware Test number 17 Test Purpose The goal is to check that only the broadcast counters are incremented on broadcast reception (neither the unicast nor the multicast counters).

Configuration See Illustration and Hardware Test 1 setup

Test Step Type Description Result Sequence 0 stimuli Eight flowtypes are configured per port via PCIe.

1 stimuli All traffic is halted.

2 stimuli All MIB counters are read out.

3 stimuli 10 frames per flowtype per port with a broadcast destination MAC address are sent downstream into the FlowEngine.

4 stimuli All MIB counters are read out again.

5 check The broadcast counter of each flowtype should have the value 10, and all other counters except for the octet counters should be 0.

Test Verdict

4.2.1.3. 6Tree speed test
This test will verify the device's propagation delay at the hardware level.
Test Description Identifier 6Tree speed test Test Purpose The test should show the propagation delay for 6Tree frames through the device. Configuration The test setup is similar to the Illustration, with the case of the device needing to be opened. An oscilloscope with two input channels will be needed.

Figure 45: TrustNode with open case. See on top of the circuit board: the Ethernet PHY chips, one per Ethernet jack

Test Step Type Description Result Sequence 0 stimuli Start 6Tree traffic with a low bitrate of 1 packet/s 1 stimuli Attach probe 1 to pin 15 of the PHY chip which belongs to the input port 2 stimuli Attach probe 2 to pin 48 of the PHY chip which belongs to the output port 3 check Measure the time difference of the rising edges of both signals Test Verdict


4.2.1.4. TrustNode functional test

The following test is to be applied to test the device's functionality in the CHARISMA testbeds.

TN configure Identifier INR_TN_test_1 Test Purpose TrustNode management interface testing. Configuration TrustNode, DHCP-server References Rest interface description:

Applicability HTTP-POST request

Pre-test conditions  Power on  DHCP configured

Test Step Type Description Result Sequence 0 configure Attach the management network interface to the TrustNode and wait until the device requests an IP address from the DHCP server 1 configure Set the 6Tree prefix via the REST interface (a hedged example request is sketched after this table) 2 check Check if the new prefix is displayed on the 7-segment display Test Verdict
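For reference, step 1 could be automated roughly as in the Python sketch below. The URL path, JSON field name and addresses are purely hypothetical placeholders (the actual TrustNode REST interface description is referenced above but not reproduced here); only the general HTTP-POST pattern is implied by the test:

# Hedged sketch of setting the 6Tree prefix over the management interface.
# NOTE: the endpoint path and JSON field name are hypothetical placeholders,
# not the documented TrustNode REST API.
import requests

TRUSTNODE_MGMT_IP = "192.0.2.10"      # address assigned by the DHCP server (example)
PREFIX = "2001:db8::/32"              # example 6Tree prefix

resp = requests.post(
    f"http://{TRUSTNODE_MGMT_IP}/api/6tree/prefix",   # hypothetical endpoint
    json={"prefix": PREFIX},
    timeout=5,
)
resp.raise_for_status()
# Step 2 of the test: afterwards verify the new prefix on the 7-segment display.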

TN connectivity Identifier INR_TN_test_2 Test Purpose TrustNode traffic connectivity Configuration TrustNode, Traffic generator References Applicability Generating IPv6 packets

Pre-test conditions  INR_TN_test_1

Test Step Type Description Result Sequence 1 configure Configure the prefix, for example according to the example configuration:

2 stimulus Send a 6Tree packet into port(n) 3 check Check if the packet arrives on the corresponding output port 4 repeat Repeat #2 and #3 for all port combinations Test Verdict

TN throughput Identifier INR_TN_test_3 Test Purpose TrustNode traffic throughput Configuration TrustNode, Traffic generator References Applicability Generating IPv6 packets

Pre-test conditions  INR_TN_test_2

Test Step Type Description Result Sequence 1 configure Configure according to INR_TN_test_2 2 stimulus Send IPv6 packets at line speed to port(n) (a software-based traffic-generation sketch is given after this table) 3 check Check the throughput rate Test Verdict
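Where no hardware traffic generator is at hand, IPv6 test frames for the connectivity test can also be crafted in software. The sketch below uses Scapy; the interface name, MAC and addresses are example values, and a software sender will not reach line speed, so it complements rather than replaces the generator required for INR_TN_test_3:

# Sketch: generate IPv6 test packets towards a TrustNode port using Scapy.
# Interface name, MAC and addresses are example values.
from scapy.all import Ether, IPv6, UDP, sendp

IFACE = "eth1"                                   # host port connected to TrustNode port(n)
pkt = (Ether(dst="02:00:00:00:00:01") /          # example destination MAC
       IPv6(src="2001:db8::1", dst="2001:db8:0:1::1") /
       UDP(sport=12345, dport=5001) /
       ("x" * 64))                               # small payload

# Send 1000 copies; count received frames on the expected output port separately
# (e.g. with tcpdump) to check step 3 of INR_TN_test_2.
sendp(pkt, iface=IFACE, count=1000, inter=0.001, verbose=False)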

4.2.2. MobCache testing
The following tables describe the tests designed for the MoBcache functionalities in CHARISMA. Their purpose is to verify that the implemented MoBcache functionalities operate correctly. The tests focus on the detection of a handover of a user equipment (UE) and on the success of prefetching user-requested content.

Test Description Identifier MoBcache_network_1 Test Purpose Network connectivity: this test verifies the IP level connectivity between MoBcache and vCC by both WiFi and LTE interfaces Configuration The required configuration includes: 1. The LTE interface of MoBcache connects to LTE network 2. If MoBcache is a rootMoBcache (rootMB), it connects to the CHARISMA network by Ethernet 3. If MoBcache is a childMoBcache (childMB), it connects to rootMB by WiFi 802.11ac 4. vCC is deployed in CHARISMA network 5. An UE connects to childMB by WiFi 802.11n 6. Enable Internet Control Message Protocol (ICMP) test in MB and vCC References Applicability List of features and capabilities which are required to be supported to execute this test

Pre-test conditions  vCC has been setup by CHARISMA CMO in CAL3

Test Step Type Description Result Sequence 1 rootMB sends ICMP packets to vCC over Ethernet 2 rootMB receives ICMP response from vCC over Ethernet 3 childMB sends ICMP packets to vCC over LTE 4 childMB receives ICMP response from vCC over LTE 5 childMB sends ICMP packets to vCC over WiFi 6 childMB receives ICMP response from vCC over WiFi 7 UE sends ICMP packets to vCC over WiFi 8 UE receives ICMP response from vCC over WiFi (an automation sketch for these ICMP checks follows this table) Test Verdict
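The ICMP steps above can be automated with a few lines of Python; the sketch below simply wraps the system ping command. The target addresses are example values and would be replaced by the actual vCC, MB and content-server addresses used in the testbed:

# Sketch: automated ICMP reachability check for the MoBcache connectivity tests.
# Target addresses are example values.
import subprocess

TARGETS = {
    "vCC via Ethernet (from rootMB)": "10.0.0.10",
    "vCC via LTE (from childMB)":     "10.0.1.10",
    "vCC via WiFi (from childMB)":    "10.0.2.10",
}

def reachable(host, count=3, timeout=2):
    """Return True if the host answers ICMP echo requests."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for name, host in TARGETS.items():
    print(f"{name}: {'PASS' if reachable(host) else 'FAIL'}")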

Test Description Identifier MoBcache_network_2 Test Purpose Network connectivity: this test verifies the IP level connectivity between MoBcache and content server by both WiFi and LTE interfaces Configuration The required configuration includes: 1. The LTE interface of MoBcache connects to LTE network 2. If MoBcache is a rootMB, it connects to the CHARISMA network by Ethernet 3. If MoBcache is a childMB, it connects to rootMB by WiFi 802.11ac 4. vCC is deployed in CHARISMA network 5. An UE connects to childMB by WiFi 802.11n References Applicability

Pre-test conditions  Content server has been established 

Test Step Type Description Result Sequence 1 rootMB sends ICMP packets to content server over Ethernet 2 rootMB receives ICMP response from content server over Ethernet 3 childMB sends ICMP packets to content server over LTE 4 childMB receives ICMP response from content server over LTE 5 childMB sends ICMP packets to content server over WiFi 6 childMB receives ICMP response from content server over WiFi 7 UE sends ICMP packets to content server over WiFi 8 UE receives ICMP response from content server over WiFi Test Verdict

Test Description Identifier MoBcache_cache_1 Test Purpose Cache Functionality: this test verifies the cache functionality of MoBcache Configuration The required configuration includes: 1. UE1 and UE2 connect to a childMB by WiFi 802.11n, childMB connects to a rootMB by WiFi 802.11ac, and rootMB connects to CHARISMA network by Ethernet 2. UE1 and UE2 play a video from content server, the video stream passing childMB and rootMB References Applicability  Content server has been configured with HTTP Live Streaming (HLS) video streaming

Pre-test conditions  Content server has been established 

Test Step Type Description Result Sequence 1 UE1 sends a HTTP request for a HLS video to content server 2 UE1 receives the HLS video chunks from content server 3 The received chunks by UE1 have been cached in rootMB 4 The received chunks by UE1 have been cached in childMB 5 UE2 sends HTTP requests for the same HLS video to content server 6 UE2 receives the HLS video chunks from childMB Test Verdict

Test Description Identifier MoBcache_prefetch_1 Test Purpose Monitoring info: this test verifies the monitoring information between MoBcache and vCC Configuration The required configuration includes: 1. rootMB1 and rootMB2 connect to CHARISMA network by Ethernet 2. childMB connects to rootMB1 by WiFi 802.11ac 3. UE connects to childMB by WiFi 802.11n 4. childMB with the associated UE switches from rootMB1 to rootMB2 References Applicability  Content server has been configured with HLS video streaming  childMB sends throughput info to vCC periodically.  childMB is able to switch from rootMB1 to rootMB2

Pre-test conditions  Content server has been established  vCC has been initialized and configured

Test Step Type Description Result Sequence 1 UE sends a HLS video request to content server, and receives video streaming 2 vCC receives the throughput info of the WiFi interface between childMB and rootMB1 3 childMB switches from rootMB1 to rootMB2 4 vCC receives the throughput info of the WiFi interface between childMB and rootMB2 Test Verdict

Test Description Identifier MoBcache_prefetch_2 Test Purpose Prefetching functionality: this test verifies the prefetching functionality detected and operated by vCC Configuration The required configuration includes: 1. rootMB1 and rootMB2 connect to CHARISMA network by Ethernet 2. childMB connects to rootMB1 by WiFi 802.11ac 3. UE connects to childMB by WiFi 802.11n 4. UE plays a HLS video from content server 5. childMB with the associated UE switches from rootMB1 to rootMB2 References Applicability  Content server has been configured with HLS video streaming  childMB sends throughput info to vCC periodically.  childMB is able to switch from rootMB1 to rootMB2  vCC is able to detect a handover according to the throughput changes while childMB switches from rootMB1 and rootMB2

Pre-test conditions  Content server has been established  vCC has been initialized and configured

Test Step Type Description Result Sequence 1 UE sends an HLS video request to the content server, and receives video streaming 2 vCC receives the throughput info of the WiFi interface between childMB and rootMB1 3 childMB switches from rootMB1 to rootMB2 4 vCC detects the switch and sends a prefetch order to rootMB2 5 rootMB2 receives the order from vCC, sends a prefetch command to the content server, and receives and caches the prefetched content 6 childMB is able to retrieve the user-requested content from rootMB2 as soon as childMB connects to rootMB2 7 UE is able to play the HLS video seamlessly during the switch (a sketch of this handover-detection logic follows this table) Test Verdict
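The handover-detection behaviour that this test exercises can be summarised as: vCC tracks which rootMB each childMB reports its WiFi throughput against, and when that association changes it pushes a prefetch order towards the new rootMB. The Python sketch below is purely illustrative; the report format and the prefetch call are assumptions, not the vCC implementation:

# Illustrative handover-detection / prefetch-trigger loop (not the vCC implementation).
# A monitoring report is assumed to look like:
#   {"childMB": "childMB-1", "rootMB": "rootMB2", "throughput_mbps": 180.0}

last_root = {}   # childMB id -> rootMB id seen in the previous report

def send_prefetch_order(root_mb, child_mb):
    # Placeholder: in the real system vCC would instruct root_mb to fetch and
    # cache the next HLS chunks requested by the UE behind child_mb.
    print(f"prefetch order -> {root_mb} (for {child_mb})")

def handle_report(report):
    child, root = report["childMB"], report["rootMB"]
    if last_root.get(child) not in (None, root):
        # Association changed between two reports: treat it as a handover.
        send_prefetch_order(root, child)
    last_root[child] = root

handle_report({"childMB": "childMB-1", "rootMB": "rootMB1", "throughput_mbps": 210.0})
handle_report({"childMB": "childMB-1", "rootMB": "rootMB2", "throughput_mbps": 185.0})
# The second report triggers a prefetch order to rootMB2, mirroring steps 3-4 above.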

4.2.3. Smart NIC testing The following tables describe the tests that have been designed for the Smart NIC functionalities in CHARISMA.

ACE-NIC Tests Description
Server: Supermicro X9SRL-F
CPU: Intel Xeon CPU E5-2620 v2 @ 2.1 GHz x 12
DRAM: 8 GB
Operating System: CentOS 7.3.1611
Kernel Version: 3.18.4-generic
Disk: 500 GB
Open vSwitch: v2.5.1
Server Adapters: Intel XL710 i40 v1.38 for software (Kernel and User) OVS
ENET (trademark for Ethernity Networks technology) driver: v450.08.68A

Test Generator: IXIA/Spirent
The test setups used for Kernel and User software OVS on the XL710 and accelerated OVS on the ENET ACE-NIC are shown in the diagrams below:

Figure 46: SmartNIC Test Generator


Test Description

Identifier ACE-NIC, Test 1.1 Test Purpose Test the Installation process of ACE-NIC in Open Stack environment. Configuration The configuration is listed below in the test sequence, step 5

References Test environment must include some Xeon multicore Server with Open Stack; OVS in kernel mode (the same test could be repeated in DPDK (Data Plane Development Kit, a shared-memory mode for the NIC) mode). The server is clean, i.e. it has no previous ACE-NIC installation. The test will require the Ethernity User Guide and CLI Guide for basic configuration validation. Applicability Any application generating L4 traffic.

Pre-test conditions • See Server description • See installation guide for ACE-NIC • Minimum environment requirements: Server with ACE-NIC, PC with Terminal

Test Step Type Description Result Sequence
1 Read Guide: Read the ENET Installation Guide and validate that the server parameters match the ACE-NIC requirements. Result: next step ready.
2 NIC physical insertion: There are different types of servers, so the PCIe slot has to be inserted properly. Result: check that the server has recognised the NIC.
3 Pre-install: Read and perform the ‘Before installation’ instructions. Result: Ethernity software ready to be installed.
4 Installation: 1. Install Intel driver: see ENET Guide for Intel. 2. Install ENET driver: see ENET Guide for NIC. Result: procedure described in the Guide passed without errors.
5 Configuration check: Configure IP interfaces on the NIC. Linux: “ifconfig <interface> up”. Note: after a reset this step must be done again. Result: verify the configuration by typing “ifconfig”.
6 Functional check: Connect the server with the NIC to the network, get an IP from the DHCP server. Send a ping using the command “ping <address>”. Result: validate no losses on ping.
Test Verdict The installation guide is correct. Tested on CentOS 7.3.1611; no other system tested. A problem was found with PCIe2, but it is not relevant for the project.

Test Description

Identifier ACE-NIC, Test 1.2 Test Purpose Test the removal process of ACE-NIC in Open Stack environment. Configuration The configuration is listed below in the test sequence, step 1 and 2

References Test environment must include some Xeon multicore Server with Open Stack; OVS in kernel mode (the same test could be repeated in DPDK mode). The server is clean, i.e. it has no previous ACE-NIC installation. The test will require the Ethernity User Guide and CLI Guide for basic configuration validation. Applicability Linux

Pre-test conditions • See Server description • See installation guide for ACE-NIC • Minimum environment requirements: Server with ACE-NIC, PC with Terminal

Test Step Type Description Result Sequence
1 Check Server with NIC: Check that the server has the full NIC configuration. Linux: lspci | grep Xilinx. Result: “XX:00.0 Memory controller: Xilinx Corporation Device 7022”. Run “./Appinit”: initialization of all features and driver. Run “ENET.e”: opens the ENET application. Run “mea ver”: shows the ENET version of the SW and HW and indicates that there is communication with the FPGA.
2 Remove driver: “sudo apt-get purge intel-linux-graphics-installer && sudo apt-get autoremove” or OS related. Result: validate that the driver is removed.
3 Uninstall Ethernity software package: “sudo make uninstall” in the right path of the software package. Result: “uninstall finished successfully”.
4 Installation: Repeat test 1.1. Result: installation passed without errors.
Test Verdict Test passed correctly. No problem found.

Test Description

Identifier ACE-NIC, Test 2.1 Test Purpose Test NIC configuration with ENET CLI: ACE-NIC configuration for VLAN, Router, VxLAN. Device under test (DUT) is checked by test equipment for forwarding packets per configuration. Configuration The configuration is listed below in the test sequence step 5

References Test environment must include some Xeon multicore Server with Open Stack; OVS in kernel mode (the same test could be repeated in DPDK mode). A previous installation of the ACE-NIC is required. The test will need to be configured per the definition above by CLI commands, using the Ethernity User Guide. Applicability Any application generating L4/L2 and tunnelled traffic.

Pre-test conditions • See Server description • See installation guide for ACE-NIC • Minimum environment requirements: Server with ACE-NIC, Traffic Generator, PC with Terminal.

Test Step Type Description Result Sequence
1 Connect Xena to NIC’s interfaces: Connect Xena’s port to the NIC’s interface. Result: LEDs in Xena’s application changed to green.
2 Run Appinit: Run “Appinit” by typing the command “./Appinit”. Result: connects the driver to the PCIe and initialises all features; interfaces are up.
3 Activate ENET application: Type “ENET.e”. Result: activates the CLI, “Welcome to FPGA CLI Environment”.
4 Enable ingress and egress ports: “MEA port ingress set all -a 1”, “MEA port egress set all -a 1”. Result: all ingress and egress ports are enabled.
5 Enable VLAN configuration: Command in CLI: “MEA service set create FF001 FF001 D.C 0 1 0 1000000000 0 32000 0 0 1 -ra 0 -l2Type 1”. Result: “done”.
6 Run traffic: Create a packet with VLAN 1 and start traffic. Result: see traffic on the De_port.
7 Stop traffic: Stop traffic.
8 Clear all services: “MEA (Ethernity driver internal name) service set delete all”. Result: no services.
9 Enable routing configuration: “MEA action set create -f -ed 1 0 -pm 1 0 -lmid 1 0 1 0 -r 8100”, “MEA forwarder add 0 50 3 1 0 1 -action 1”, “MEA service set create FFF001 FFF001 D.C 0 1 0 1000000000 0 32000 0 0 0 -ra 0 -v 50 -f 1 0 -l 0 0 D.C -l2Type 3”.
10 Run traffic: Create a packet with matching DA (destination address) and SA (source address) MAC, VLAN 1, and start traffic. Result: see traffic on the De_port; check the TTL decrement and the DA MAC change.
Test Verdict Test performed with the last version of the ENET CLI. VLAN and Router configuration tested according to the test definition. Traffic from the Xena test equipment passed bridging and routing. VxLAN was tested by a different method, but it is not relevant for the project since VxLAN was not selected for use in the project.

NOTE: Appinit – name of the file that enables the configuration between the driver and the PCIe, and runs all FPGA features.
Test Description

Identifier ACE-NIC, Test 2.2 Test Purpose Test the ACE-NIC configuration with OF: ACE-NIC configuration for VLAN, Router and NAT (Network Address Translation) through an OF (OpenFlow) controller, OF v1.4. The DUT is checked by test equipment for forwarding packets accordingly. Configuration The configuration is listed below in the test sequence step 3

References Test environment must include some Xeon multicore Server with Open Stack; OVS in kernel mode (same test could be repeated in DPDK mode) Previous installation of ACE-NIC is required. Test will need to be configured per definition of ENET OF User Guide of Ethernity; Test will require ENET (ENET – trademark for Ethernity Networks technology) OF Spec Guide. Applicability Any application generating traffic L4.

Pre-test conditions • See Server description • See installation guide for ACE-NIC • Minimum environment requirements: Server with ACE-NIC, Traffic Generator, PC with Terminal

Test Step Type Description Result Sequence
1 Run MUL (open source OF controller) in OF manager: Connect the Ethernity OF client to the OF manager. See “Basic configuration of Open Flow” in the ENET install guide. Result: connectivity between the Ethernity OpenFlow agent and the controller enabled.
2 Connect Xena to NIC’s ports: Connect ports from Xena to the NIC’s interface. Result: green light in Xena’s application.
3 Run VLAN commands in OF manager: See ‘note 1’ below. VLAN 300 for example.
4 Generate matching packet in Xena: Create a packet with VLAN 1 and start traffic.
5 Run traffic: Press ‘run traffic’, see traffic received. Result: traffic received on dest_port.
6 Delete all services: See config below. Result: no services available.
7 Run Router commands in OF manager: See config below.
8 Generate matching packet in Xena: Create a packet without VLAN, add the specified IP.
9 Run traffic: Press ‘run traffic’, see traffic received. Result: traffic received on dest_port.
10 Delete all services: See config below. Result: no services available.
11 Run NAT commands in OF manager: See config below.
12 Generate matching packet in Xena: Create a packet with matching src IP, dest IP in TCP, dest port and src port. No VLAN.
13 Run traffic: Press ‘run traffic’, see traffic received. Result: traffic received on dest_port.
Test Verdict Tests for 802.1Q and Router passed correctly through the OF agent. NAT was tested partially (configuration only), since NAT is not going to be used in the project. Bridge and Router traffic shows the same behaviour as in the CLI configuration test.

Command line configuration required to configure the parameters of the test:
#create metering
of-meter add switch 0xbe78a2211a000000 meter-id 1 meter-type kbps burst yes stats yes meter-band drop rate 150000 burst-size 100 commit-meter
#create metering
of-meter add switch 0xbe78a2211a000000 meter-id 2 meter-type kbps burst yes stats yes meter-band drop rate 100000 burst-size 100 commit-meter
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type * vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port 1 table 0 flow-priority 100 instruction-goto 1 instruction-meter 1 commit
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type * vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port 2 table 0 flow-priority 101 instruction-goto 1 commit
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type * vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port 4 table 0 flow-priority 102 instruction-goto 1 commit
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type * vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port 5 table 0 flow-priority 103 instruction-goto 2 commit
# create ingress VLAN
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x8100 vid 300 vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port * table 1 flow-priority 10 instruction-goto 22 instruction-meter 2 commit
# create ingress VLAN
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port * table 2 flow-priority 10 instruction-goto 22 commit
# create VLAN bridging
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x8100 vid 300 vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port * table 22 flow-priority 110 instruction-apply action-add output normal action-list-end commit
#port 3
of-flow add switch 0x000000e0ed30933e smac * dmac * eth-type * vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port 3 table 0 flow-priority 10 instruction-goto 5 commit
#port 4
of-flow add switch 0x000000e0ed30933e smac * dmac * eth-type * vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port 4 table 0 flow-priority 11 instruction-goto 5 commit
# ingress tagged
of-flow add switch 0x000000e0ed30933e smac * dmac * eth-type 0x8100 vid 30 vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port * table 5 flow-priority 110 instruction-goto 14 commit
# IP interface IP packets send packet to controller
of-flow add switch 0x000000e0ed30933e smac * dmac 00:e0:ed:30:93:fa eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip 1.1.1.1/32 sip * proto * tos * dport * sport * in-port * table 14 flow-priority 101 instruction-apply action-add output controller action-list-end commit
# IP interface IP packets send packet to controller
of-flow add switch 0x000000e0ed30933e smac * dmac 00:e0:ed:30:93:fa eth-type 0x806 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip 1.1.1.1/32 sip * proto * tos * dport * sport * in-port * table 14 flow-priority 102 instruction-apply action-add output controller action-list-end commit
#create group - replace the source mac, destination mac, decrement the TTL
of-group add switch 0x000000e0ed30933e group 1 type all action-add set-dmac 04:F4:BC:37:8C:03 action-add set-smac 00:e0:ed:30:93:fa action-add dec-nw-ttl action-add output 4 commit-group
# filter dmac and da IP - forward to group 1
of-flow add switch 0x000000e0ed30933e smac * dmac 00:e0:ed:30:93:fa eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip 30.0.0.1/32 sip * proto * tos * dport * sport * in-port * table 14 flow-priority 103 instruction-apply action-add group-id 1 action-list-end commit
#create group - replace the source mac, destination mac, decrement the TTL
of-group add switch 0x000000e0ed30933e group 2 type all action-add set-dmac 04:F4:BC:37:8C:02 action-add set-smac 00:e0:ed:30:93:fa action-add dec-nw-ttl action-add output 3 commit-group
# filter dmac and da IP - forward to group 2
of-flow add switch 0x000000e0ed30933e smac * dmac 00:e0:ed:30:93:fa eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip 20.0.0.1/32 sip * proto * tos * dport * sport * in-port * table 14 flow-priority 104 instruction-apply action-add group-id 2 action-list-end commit
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type * vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port 1 table 0 flow-priority 100 instruction-goto 1 commit
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type * vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port 2 table 0 flow-priority 101 instruction-goto 2 commit
# untagged traffic
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port * table 1 flow-priority 101 instruction-goto 25 commit
# untagged traffic
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port * table 2 flow-priority 101 instruction-goto 26 commit
#create group from lan to wan - replace the source mac, destination mac, decrement the TTL, replace the source ip, replace the source port
of-group add switch 0xbe78a2211a000000 group 1 type all action-add set-dmac 02:00:00:00:00:01 action-add set-smac 00:e0:ed:30:93:fa action-add dec-nw-ttl action-add nw-saddr 192.168.1.1
# change also the source port to 30000
action-add output 2 commit-group
#create group from wan to lan - replace the source mac, destination mac, decrement the TTL, replace the dest ip, replace the dest port
of-group add switch 0xbe78a2211a000000 group 2 type all action-add set-dmac 01:00:00:00:00:02 action-add set-smac 00:e0:ed:30:93:fa action-add dec-nw-ttl action-add nw-daddr 192.168.1.100
# change also the destination port to 50000
action-add output 1 commit-group
# for table miss - in case received tcp packets and there is no match
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto 6 tos * dport * sport * in-port * table 25 flow-priority 2 instruction-apply action-add output controller action-list-end commit
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip 2.3.4.5/32 sip 192.168.1.100/32 proto 6 tos * dport 80 sport 50000 in-port * table 25 flow-priority 101 instruction-apply action-add group-id 1 action-list-end commit
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip 192.168.1.1/32 sip 5.4.3.2/32 proto 6 tos * dport 30000 sport 80 in-port * table 26 flow-priority 101 instruction-apply action-add group-id 2 action-list-end commit
(adjust parameters)
# delete
of-flow del switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip 192.168.1.1/32 sip 5.4.3.2/32 proto 6 tos * dport 30000 sport 80 in-port * table 26 flow-priority 101 tunnel-id *
# delete
of-flow del switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip 2.3.4.5/32 sip 192.168.1.100/32 proto 6 tos * dport 80 sport 50000 in-port * table 25 flow-priority 101 tunnel-id *

Test Description Identifier ACE-NIC, Test 2.3 Test Purpose Test the VM (VNF) to Network acceleration with VxLAN encapsulation. The test consists of 2 steps: • first, the VM sends traffic which is encapsulated in SW (software) with VxLAN (or another tunnel), and the DUT is measured by test equipment for throughput • second, the VM sends traffic which is encapsulated by the ACE-NIC with VxLAN (or another tunnel), and the DUT is measured by test equipment for throughput.

Configuration The configuration is listed below in the test sequence step 4

References Test environment must include some Xeon multicore Server with Open Stack; OVS in kernel mode (same test could be repeated in DPDK mode) Previous installation of ACE-NIC is required. Test will need to be configure per definition above by CLI commands with User Guide of Ethernity. Applicability Any application generating traffic L4 tunnel.

Pre-test conditions • See Server description • See installation guide for ACE-NIC • Minimum environment requirements: Server with ACE-NIC, Traffic Generator, PC with Terminal.

Test Step Type Description Result Sequence
1 setup: Connect the server equipped with the SmartNIC to the test equipment. Result: setup ready.
2 Run NIC: Run “./Appinit” in the correct folder. Run “ENET.e”. Result: everything initialized and interfaces up.
3 Check communication with NIC: Type “MEA ver”. Result: receive the NIC’s version.
4 Enable ingress and egress ports: “MEA port ingress set all -a 1”, “MEA port egress set all -a 1”. Result: all ingress and egress ports are enabled.
5 Create VLAN configuration: In Linux: “vconfig add <interface> <VLAN ID>”.
6 Create interface configuration: In Linux: “ifconfig <interface> <IP address> netmask <mask>”.
7 Create VxLAN encapsulation: See config below (ENET CLI).
8 Show VxLAN: In ENET CLI write “MEA service show entry all”. Result: see that the subtype is 14, and the ID is as declared.
9 Send ping: Ping. Result: receive reply.
Test Verdict Test completed with VxLAN in the VM/Container environment in kernel mode (no DPDK), resulting in an improvement of 5 times in data rate and 4 times in latency.

In ENET CLI:
UL:
MEA service set create 105 fffff fffff 63 0 1 0 100000000 0 32000 0 0 1 127 -l2Type 1 -subType 14 -priType 3 -inf 1 0x123456 -h 0 0 0 0 -hType 64
DL:
MEA GW global set -Infra_VLAN 0x8100000a
MEA globals set if local_IP 0 10.100.1.0
MEA globals set if local_IP 1 10.101.1.1
MEA globals set if local_IP 2 10.102.1.2
MEA globals set if local_IP 3 10.103.1.3
MEA globals set if local_mac 0 00:01:00:22:00:01
MEA globals set if local_mac 1 00:01:00:22:00:02
MEA globals set if local_mac 2 00:01:00:22:00:03
MEA globals set if local_mac 3 00:01:00:22:00:04
MEA GW global set -vxlan_L4_dst_port 4789
MEA service set create 127 fff00a fff00a D.C 0 1 0 1000000000 0 32000 0 0 1 105 -l2Type 1 -h 0 0 0 0 -hvxlan_DL 1 00:01:23:45:67:89 192.160.11.12 0x123456 4789 2233 50 0 -hType 55

Test Description Identifier ACE-NIC, Test 2.4 Test Purpose Test VM to Network with OVS VxLAN encapsulation offloaded on the ACE-NIC. Test: configure OVS for VxLAN encapsulation; check that the OVS configuration is offloaded on the ACE-NIC; verify that VM traffic is encapsulated by the ACE-NIC and not by the Linux stack. On the DUT: verify the traffic passes at wire speed in the ACE-NIC. Configuration The configuration is listed below in the test sequence step 3

References Test environment must include some Xeon multicore Server with Open Stack; OVS in kernel mode (same test could be repeated in DPDK mode) Previous installation of ACE-NIC is required. Test will need to be configured per definition above by CLI commands with User Guide of Ethernity. Applicability Any application generating traffic L4.

Pre-test conditions • See Server description • See installation guide for ACE-NIC • Minimum environment requirements: Server with ACE-NIC, Traffic Generator, PC with Terminal

Test Step Type Description Result Sequence
1 setup: Connect the server equipped with the SmartNIC to the test equipment. Result: setup ready.
2 install: Check OVS and driver installation: configure a VM on OVS, send a ping from the test equipment and get a response from the VM. Show OVS version: ovs-vswitchd --version. Show ENET driver: meaCli mea version. Show OVS ENET plugin: ovs-parms enet. Result: installation is ready.
3 OVS VxLAN configuration: See the configuration example below. Validate that the configuration is offloaded, by CLI commands on the SmartNIC. Show ENET VxLAN: mea service show entry all (see the “Show service” output below). Result: SmartNIC configured with VxLAN automatically, together with OVS.
4 Traffic: Send a ping to the VM from the test equipment and get a response. Validate that the Linux Rx packets are without VxLAN, and that the Linux Tx packets are without VxLAN (Linux interface command). Result: encapsulation with VxLAN in the SmartNIC.

Test Verdict OVS v5.2.1 test passed with ENET OVS-DB configuration of VxLAN. 8 cores were dedicated to the Docker environment without a docker bridge, Linux in kernel mode. Tested both throughput and latency to the Central Processing Unit (CPU), which were improved 5 times in both directions.

OVS is Open vSwitch, a part of OpenStack. It provides the data path and is part of the interfacing for the SmartNIC: OVS can set up a VXLAN configuration together with OpenFlow rules. The following example shows an OVS VxLAN configuration that includes VM VLAN 10, VxLAN 6000 and VxLAN 7000; the VLAN is only an internal VLAN and is not included in the VXLAN packet. The example configures only one side.
vconfig add vnet0 10 – create VM interface on VLAN 10
ifconfig vnet0.10 up – interface up
ovs-vsctl add-br br1 – create OVS bridge named br1
vconfig add ens1 10 – create interface for ENET on VLAN 10
ifconfig ens1.10 up – interface up
ovs-vsctl add-port br1 vnet0.10 – add the VM interface to the OVS bridge
ovs-vsctl add-port br1 ens1.10 – add the ENET interface to the bridge
ovs-vsctl add-port br1 vxlan -- set Interface vxlan type=vxlan options:key=6000 options:dst_port=4789 options:local_ip=192.168.1.10 options:remote_ip=192.168.1.2 ofport_request=1000 – add VXLAN 6000 to the OVS bridge, OVS port 1000
OVS OpenFlow configuration: configure a simple bridging that includes the VM, 2 VxLAN VNI ports and the ENET interface.
ovs-ofctl add-flow -O Openflow14 br1 priority=100,dl_type=0x8100,dl_vlan=10,in_port=1,table=0,actions=goto_table:1
ovs-ofctl add-flow -O Openflow14 br1 priority=200,dl_type=0x8100,dl_vlan=10,in_port=2,table=0,actions=goto_table:1
ovs-ofctl add-flow -O Openflow14 br1 priority=300,dl_type=0x8100,dl_vlan=10,in_port=1000,table=0,actions=goto_table:1
ovs-ofctl add-flow -O Openflow14 br1 priority=400,dl_type=0x8100,dl_vlan=10,in_port=1001,table=0,actions=goto_table:1
ovs-ofctl add-flow -O Openflow14 br1 priority=200,dl_type=0x8100,dl_vlan=10,table=1,actions=goto_table:13
ovs-ofctl add-flow -O Openflow14 br1 priority=200,table=13,dl_type=0x8100,dl_vlan=10,actions=output:normal


Figure 47: Network Scheme

// service 14 assigned to VxLAN

SID Src Port L2 ty subType NTAG from NTAG To Inner from Inner To Pri Fr Pri To pri type Hash Group
INT 4 105 1 14 0x0fffff NA 0x123456 NA 63 NA DSCP 0
INT 3 127 1 0 0xfff00a NA NA NA DC7 NA noIP 0
INT 2 100 0 0 0x0ff000 NA NA NA DC7 NA noIP 0
INT 1 127 0 0 0x0ff000 NA NA NA DC7 NA noIP 0
number of Items 4

Test Description Identifier ACE-NIC, Test 3.1 Test Purpose Test the Network-to-Network acceleration with a NAT function. The test consists of 2 steps: • first, SW NAT, where the VM forwards the traffic and the DUT is measured by test equipment for throughput (i.e. NAT in the CPU); the test can be repeated in Kernel and DPDK mode • second, only the first packet traverses Linux and a rule is added to the ACE-NIC for every subsequent packet of the NAT flow; the DUT is measured by test equipment for throughput. Configuration The configuration is listed below in the test sequence step 4

References Test environment must include some Xeon multicore Server with Open Stack; OVS in kernel mode (the same test could be repeated in DPDK mode). A previous installation of the ACE-NIC is required. The test will need to be configured per the definition above by CLI commands, using the Ethernity User Guide. Applicability Any application generating NAT or Routing traffic.

Pre-test conditions • See Server description • See installation guide for ACE-NIC • Minimum environment requirements: Server with ACE-NIC, Traffic Generator, PC with Terminal

Test Step Type Description Result Sequence
1 Connect NIC and Xena/Ixia: Connect the NIC to the server, and connect ports from Xena/Ixia to the NIC. Result: green light in the application.
2 Run NIC: Run “./Appinit” in the correct folder. Run “ENET.e”. Result: everything initialized and interfaces up.
3 Check communication with NIC: Type “MEA ver”. Result: receive the NIC’s version.
4 Enable ingress and egress ports: “MEA port ingress set all -a 1”, “MEA port egress set all -a 1”. Result: all ingress and egress ports are enabled.
5 Create service to and from CPU: In ENET CLI type: “MEA service set create FF001 FF001 D.C 0 1 0 1000000000 0 32000 0 0 1 -ra 0 -l2Type 1” and “MEA service set create < CPU port > FF001 FF001 D.C 0 1 0 1000000000 0 32000 0 0 1 -ra 0 -l2Type 1”. Result: see the services created by typing “MEA Service show Out all”.
6 Install NAT third party: Download and install a third-party NAT from the internet.
7 Send traffic: Run traffic from Xena/Ixia, see that traffic is received. Watch the CPU RMONs (Remote Network Monitoring) by typing “MEA Counters RMON show”. Result: see the CPU’s RMONs incrementing; watch for latency and throughput.
8 Run script for NAT: In OF use the script in the configuration below. Result: NAT will be processed in NIC HW.
9 Send traffic: Run traffic from IXIA/XENA, see that traffic is received. Watch the CPU RMONs by typing “MEA Counters RMON show”. Result: see the CPU’s RMONs constant; watch for decreased latency and throughput improvement.

Test Verdict Routing was tested instead of NAT. The routing decision done in the kernel data path was compared with pure Flow Processor L3 forwarding. The resulting throughput was 1.6 MPPS (Mega packets per second) vs 17 MPPS in the SmartNIC; for latency, 1600 µs vs 11 µs in the SmartNIC.

Command line configuration for the OF Controller is presented hereunder:
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type * vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port 1 table 0 flow-priority 100 instruction-goto 1 commit
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type * vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port 2 table 0 flow-priority 101 instruction-goto 2 commit
# untagged traffic
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port * table 1 flow-priority 101 instruction-goto 25 commit
# untagged traffic
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port * table 2 flow-priority 101 instruction-goto 26 commit
#create group from lan to wan - replace the source mac, destination mac, decrement the TTL, replace the source ip, replace the source port
of-group add switch 0xbe78a2211a000000 group 1 type all action-add set-dmac 02:00:00:00:00:01 action-add set-smac 00:e0:ed:30:93:fa action-add dec-nw-ttl action-add nw-saddr 192.168.1.1
# change also the source port to 30000
action-add output 2 commit-group
#create group from wan to lan - replace the source mac, destination mac, decrement the TTL, replace the dest ip, replace the dest port
of-group add switch 0xbe78a2211a000000 group 2 type all action-add set-dmac 01:00:00:00:00:02 action-add set-smac 00:e0:ed:30:93:fa action-add dec-nw-ttl action-add nw-daddr 192.168.1.100
# change also the destination port to 50000
action-add output 1 commit-group
# for table miss - in case received tcp packets and there is no match
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto 6 tos * dport * sport * in-port * table 25 flow-priority 2 instruction-apply action-add output controller action-list-end commit
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip 2.3.4.5/32 sip 192.168.1.100/32 proto 6 tos * dport 80 sport 50000 in-port * table 25 flow-priority 101 instruction-apply action-add group-id 1 action-list-end commit
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip 192.168.1.1/32 sip 5.4.3.2/32 proto 6 tos * dport 30000 sport 80 in-port * table 26 flow-priority 101 instruction-apply action-add group-id 2 action-list-end commit

Test Description Identifier ACE-NIC, Test 3.2 Test Purpose Test Network to Network by OVS offload of the NAT VNF on the ACE-NIC. Test: configure OVS for NAT or Routing; check that the OVS configuration is offloaded on the ACE-NIC. On the DUT: verify that VM traffic is NATed by the ACE-NIC at wire speed and not by the Linux stack. Configuration The configuration is listed below in the test sequence, step 6.

References Test environment must include some Xeon multicore Server with Open Stack; OVS in kernel mode (same test could be repeated in DPDK mode) Previous installation of ACE-NIC is required. Test will need to be configure per definition above by CLI commands with User Guide of Ethernity Applicability Any application generating traffic L4.

Pre-test conditions • See Server description • See installation guide for ACE-NIC • Minimum environment requirements: Server with ACE-NIC, Traffic Generator, PC with Terminal

Test Step Type Description Result Sequence
1 Connect NIC and Xena/Ixia: Connect the NIC to the server, and connect ports from Xena/Ixia to the NIC. Result: green light in the application.
2 install: Check OVS and driver installation: configure a VM on OVS, send a ping from the test equipment and get a response from the VM. Show OVS version: ovs-vswitchd --version. Show ENET driver: meaCli mea version. Show OVS ENET plugin: ovs-parms enet. Result: installation is ready.
3 Create services: Use the script in the configuration below to create services in OVS.
4 Install NAT third party: Download and install a third-party NAT from the internet.
5 Send traffic: Run traffic from Xena/Ixia, see that traffic is received. Watch the CPU RMONs by typing “MEA Counters RMON show”. Result: see the CPU’s RMONs incrementing; watch for latency and throughput.
6 Run script for NAT: In OF use the script in the configuration below. Result: NAT will be processed in NIC HW.
7 Send traffic: Run traffic from IXIA/XENA, see that traffic is received. Watch the CPU RMONs by typing “MEA Counters RMON show”. Result: see the CPU’s RMONs constant; watch for decreased latency and increased throughput.


Test Verdict Routing was tested in the Open Stack environment. The configuration works with OVS-DB. The environment used Dockers (containers) instead of VMs. The decision made in the OVS data path was compared with pure Flow Processor L3 forwarding. The resulting throughput was 1.1 MPPS vs 17 MPPS in the SmartNIC; for latency, 1800 µs vs 11 µs in the SmartNIC.

OVS configuration for new service
modprobe 8021q
killall ovsdb-server
killall ovs-vswitchd
rmmod openvswitch
/sbin/modprobe openvswitch
mkdir -p etc/openvswitch
rm -f /etc/openvswitch/conf.db
rm -f /usr/local/etc/openvswitch/conf.db
ovsdb-tool create /usr/local/etc/openvswitch/conf.db vswitchd/vswitch.ovsschema
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach --log-file
ovs-vsctl --no-wait init
ovs-vswitchd --pidfile --detach
ovs-appctl vlog/set dbg
ovs-appctl vlog/list
ovs-vsctl del-br enet_br
ovs-vsctl del-br br1
ovs-vsctl add-br enet_br
ovs-vsctl add-br br1
ovs-vsctl set bridge enet_br
ovs-vsctl add-port enet_br ifenet104 -- set interface ifenet104 type=internal
ovs-vsctl add-port enet_br ifenet105 -- set interface ifenet105 type=internal
ovs-vsctl add-port enet_br ifenet106 -- set interface ifenet106 type=internal
ovs-vsctl add-port enet_br ifenet107 -- set interface ifenet107 type=internal
ovs-vsctl add-port enet_br ens1
ifconfig ifenet104 10.0.0.20 netmask 255.255.255.0
ifconfig ifenet105 20.0.0.20 netmask 255.255.255.0
ifconfig ifenet106 30.0.0.20 netmask 255.255.255.0
ifconfig ifenet107 40.0.0.20 netmask 255.255.255.0
ifconfig ifenet104 up
ifconfig ifenet105 up
ifconfig ifenet106 up
ifconfig ifenet107 up
ovs-ofctl add-flow -O Openflow14 enet_br priority=1000,dl_type=0x8100,dl_vlan=20,in_port=1,table=0,actions=goto_table:2
ovs-ofctl add-flow -O Openflow14 enet_br priority=2000,dl_type=0x8100,dl_vlan=20,in_port=2,table=0,actions=goto_table:3
ovs-ofctl add-flow -O Openflow14 enet_br priority=200,table=2,dl_type=0x8100,dl_vlan=20,actions=output:2
ovs-ofctl add-flow -O Openflow14 enet_br priority=200,table=3,dl_type=0x8100,dl_vlan=20,actions=output:1
OF configuration
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type * vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port 1 table 0 flow-priority 100 instruction-goto 1 commit
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type * vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port 2 table 0 flow-priority 101 instruction-goto 2 commit
# untagged traffic
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port * table 1 flow-priority 101 instruction-goto 25 commit
# untagged traffic
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto * tos * dport * sport * in-port * table 2 flow-priority 101 instruction-goto 26 commit
#create group from lan to wan - replace the source mac, destination mac, decrement the TTL, replace the source ip, replace the source port
of-group add switch 0xbe78a2211a000000 group 1 type all action-add set-dmac 02:00:00:00:00:01 action-add set-smac 00:e0:ed:30:93:fa action-add dec-nw-ttl action-add nw-saddr 192.168.1.1
# change also the source port to 30000
action-add output 2 commit-group
#create group from wan to lan - replace the source mac, destination mac, decrement the TTL, replace the dest ip, replace the dest port
of-group add switch 0xbe78a2211a000000 group 2 type all action-add set-dmac 01:00:00:00:00:02 action-add set-smac 00:e0:ed:30:93:fa action-add dec-nw-ttl action-add nw-daddr 192.168.1.100
# change also the destination port to 50000
action-add output 1 commit-group
# for table miss - in case received tcp packets and there is no match
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip * sip * proto 6 tos * dport * sport * in-port * table 25 flow-priority 2 instruction-apply action-add output controller action-list-end commit
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip 2.3.4.5/32 sip 192.168.1.100/32 proto 6 tos * dport 80 sport 50000 in-port * table 25 flow-priority 101 instruction-apply action-add group-id 1 action-list-end commit
of-flow add switch 0xbe78a2211a000000 smac * dmac * eth-type 0x800 vid * vlan-pcp * mpls-label * mpls-tc * mpls-bos * dip 192.168.1.1/32 sip 5.4.3.2/32 proto 6 tos * dport 30000 sport 80 in-port * table 26 flow-priority 101 instruction-apply action-add group-id 2 action-list-end commit

4.2.4. OFDM Testing
Test Description Identifier OFDM-PON_TxPHY Test Purpose The purpose of this test is to verify the correct function of the PHY component of the OLT-Tx. Configuration OFDM-PON OLT hardware (FPGA board incl. Digital-to-Analogue Converter (DAC) modules); Synthesiser (16 GHz) to generate the OLT clock; Digital Storage Oscilloscope (DSO) (40 GSa/s or more); Computer holding the OFDM receiver implemented using Matlab

Figure 48: OFDM-PON Testing Scheme (OLT output with OFDM spectrum captured by the scope and processed in Matlab; 16 GHz OLT clock)

References OFDM-PHY component (OLT) is described in deliverable D2.1 Applicability

Pre-test conditions Function of the VHDL components under test must be verified via behavioural simulations using the ModelSim software; this is verified by comparison with the output of a Matlab reference implementation. Functions of the hardware must be verified insofar as simple test patterns (impulse, saw-tooth, etc.) can be generated and are displayed correctly on an oscilloscope.

Test Step Type Description Result Sequence
1 configure: Set up the configuration (see above). Result: done.
2 check: Evaluate the OFDM spectrum, check for correct assignment (0…8/16 GHz). Result: correct bitloading for all subcarriers (see below).
3 check: Evaluate the subcarrier EVM (Error Vector Magnitude) for all 1024 subcarriers (see below). Result: between 17 and 25 dB for the best case.
Test Verdict ok
The setup for the OFDM-PON_TxPHY test has been used to verify the function of the OFDM-PON OLT implementation.

In a first series of tests, the received signal (an off-line Matlab based Rx) has been analysed with respect to the correct bitloading. The bitloading of the OLT can be changed via a virtual IO (In/Out) function of the FPGA. The bitloading vector of length 1024, holding an integer representing the modulation format for each subcarrier has been passed to the FPGA. Subsequently, the OFDM signal changes and can now be detected by the offline receiver. An example bitloading is depicted in Figure 49 (left). The received constellation diagrams are shown for selected subcarriers in Figure 49 (right).


Figure 49: Bitloading and subcarrier constellations for OFDM testing
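The way such a bitloading vector scales to an aggregate rate can be sketched in a few lines of Python; all numbers below (per-subcarrier bit assignments, cyclic-prefix length) are illustrative assumptions and not the measured OLT configuration.

```python
import numpy as np

# Hypothetical bitloading vector: one integer per subcarrier giving the number of
# bits per QAM symbol (0 = subcarrier switched off).
N_SC = 1024
bitload = np.zeros(N_SC, dtype=int)
bitload[10:200] = 6       # e.g. 64-QAM on well-behaved subcarriers
bitload[200:600] = 4      # 16-QAM in the middle of the band
bitload[600:900] = 2      # QPSK towards the band edge

bits_per_symbol = int(bitload.sum())

# With an assumed 16 GSa/s DAC and an assumed cyclic prefix of 80 samples,
# the OFDM symbol rate and the resulting gross bit rate follow directly.
f_dac = 16e9
cp_len = 80
f_sym = f_dac / (N_SC + cp_len)

print(f"bits per OFDM symbol: {bits_per_symbol}")
print(f"gross rate          : {bits_per_symbol * f_sym / 1e9:.2f} Gbit/s")
```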

In a second series of tests, the received signal was analysed with an off-line Matlab-based Rx with respect to the estimated Signal-to-Noise Ratio (SNR) of each subcarrier at different locations in the real-time digital signal processing (DSP) chain (Figure 50).

Figure 50: OLT DSP blocks influencing the SNR (IFFT, CP insertion, dynamic reduction, digital I/Q mixer with DDS LO, DAC output of the OFDM-PHY (OLT-Tx))

At the Inverse Fast Fourier Transform (IFFT) output with a resolution of 20 bits the signal-to-noise ratio (SNR) varies between 37…51 dB (dark blue). Since the DAC modules can only handle 6 bit resolution, the IFFT output signal must be reduced to 6 bit as well, which causes a drop of the SNR to values between 27…31 dB (red curve). After the digital in-phase/quadrature (I/Q) block, but right before the DAC, the SNR is further decreased to values of 24…28 dB (orange). The signal quality at the DAC depends on the usage of a single ended or differential signal. The best performance can be achieved using the differential signal (green). In that case the SNR is between 17…25 dB. If only a single-ended signal can be used the SNR then drops further to values of 14…23 dB. To avoid this signal degradation a balun can be used. In that case the performance is between the single-ended and differential case with values between 15…25 dB. It must be noted that the balun (balanced to unbalanced transformer) was operated outside specification for frequencies above 5 GHz. Therefore, we expect a further improvement for those frequencies if a 20 GHz balun5 can be used.

5 Will be integrated into the setup later


Figure 51: EVM over subcarrier at different locations in DSP chain
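The roughly 6 dB-per-bit penalty seen when the 20-bit IFFT output is reduced to the 6-bit DAC resolution can be reproduced with a small, self-contained sketch; the signal below is a synthetic Gaussian stand-in for the OFDM time-domain waveform, not the measured OLT data, so only the trend is meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits):
    """Uniform quantizer over the full signal range (idealised, no clipping headroom)."""
    levels = 2 ** bits
    x_min, x_max = x.min(), x.max()
    step = (x_max - x_min) / levels
    idx = np.clip(np.round((x - x_min) / step), 0, levels - 1)
    return x_min + (idx + 0.5) * step

# Synthetic OFDM-like time-domain signal (Gaussian amplitude distribution)
x = rng.standard_normal(2 ** 16)
x /= np.max(np.abs(x))

for bits in (20, 6):
    err = quantize(x, bits) - x
    snr_db = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(f"{bits:2d}-bit resolution: SNR ≈ {snr_db:.1f} dB")
```

The idealised 6-bit figure lands close to the 27…31 dB measured after dynamic reduction, while the 20-bit value is far higher than the measured 37…51 dB because the sketch ignores all other DSP impairments.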


Test Description
Identifier: OFDM-PON_RxPHY
Test Purpose: The purpose of this test is to verify the correct function of the PHY component of the ONU-Rx.
Configuration: OFDM-PON OLT hardware (FPGA board incl. DAC modules); synthesiser (16 GHz) to generate the OLT clock; OFDM-PON ONU hardware; I/Q downconverter; local oscillator (LO) for down-conversion; computer receiving information via the debug interface

Figure 52: PHY component test (OLT with 16 GHz clock, I/Q downconverter with LO of e.g. 11 GHz, ONU, PC connected via debug interface)

References: The OFDM-PHY (ONU) component is described in D2.1.
Applicability: -

Pre-test conditions: The function of the VHDL components under test must be verified via behavioural simulations using the ModelSim software; this is verified by comparison with the output of a Matlab reference implementation. The function of the OLT hardware must be verified insofar as the OFDM-PON_TxPHY test was successful.
Annotations:
- The PC performs part of the DSP processing.
- Clocks and LO are provided by market-available voltage-controlled oscillator / phase-locked loop (VCO/PLL) combinations (Macom MAOC-41500 and AD HMC515 VCO respectively, both with AD ADF-41020) and by laboratory synthesisers.

Test Sequence:
Step 1 (configure): Setup configuration (see above). Result: ok. Annotations: LO integrated into the ONU setup, providing a range from 11.5 to 12.5 GHz in a 250 MHz grid (steps of 16 carriers); the ONU samples with 500 MSa/s, providing 32 carriers (max. 31 usable due to the system concept)
Step 2 (check): Evaluate the OFDM spectrum, check for correct assignment (0…8/16 GHz). Result: ok
Step 3 (check): Evaluate the subcarrier EVM for all subcarriers received at the ONU; all following sub-modules enabled for the measurement, processed off-line. Result: -8 to -16.5 dB, carrier-dependent

Step 4 (check): I/Q-imbalance detection and correction. Result: ok; EVM gain: up to -3 dB


Step 5 (check): Carrier Frequency Offset (CFO) detection and correction. Result: ok

Step 6 (check): Sampling Frequency Offset (SFO) detection and correction. Result: detection ok. Annotation: the reference oscillator is already precise enough (-0.8 to -2.5 ppm relative to the OLT) that no noticeable signal quality degradation of the system-intrinsic EVM occurs.
Test Verdict: ok

The ONU is connected electrically back-to-back to the OLT front-end. The received intermediate frequency (IF) signal, which in a complete system setup would be provided by the optical direct-detection front-end, is fed to an I/Q mixer. The mixer converts the signal to a real and an imaginary baseband representation with the help of a local oscillator. By choosing the oscillator frequency, the ONU selects the partial band to decode. One ADC per path digitises these signals. Thanks to the low-pass filtering, the ADCs can sample at a significantly reduced rate compared to the OLT's DAC. The sampling frequency defines the maximum ONU bandwidth, taking both signal paths into account. The system setup enables the ONU's DSP to decode up to 32 of the 1024 OLT-generated subcarriers when sampling at 500 MSa/s. Most of the DSP is realised in Matlab® code and currently works off-line, with the FPGA (ultimately intended for full real-time DSP) transferring its data to the simulation computer. The code performs the following actions:

Figure 53: ONU DSP chain

Synchronisation is based on searching for the cyclic prefix (CP) to find the symbol edges and, after removing the CP and transforming the signal back to the frequency domain by a Fast Fourier Transform (FFT), on finding a training sequence. The training sequence consists of up to 20 known OFDM symbols, representing known Quadrature Phase Shift Keying (QPSK) channel symbol sequences. By evaluating these channel symbol sequences for its sub-band, the ONU can estimate all the parameters needed to equalise the received signal: the channel response, the I/Q impairment, the sampling frequency offset and, if already coarsely estimated, the carrier frequency offset. After equalisation, the following ONU signal quality can be observed:

Figure 54: EVM for decoded carrier after ONU DSP for 11.5 to 12 GHz OLT band

It can be noted that the signal quality is reduced compared to the values measured at the OLT's electrical front-end. Besides statistical effects, such as the phase jitter of the oscillators and the effects of non-exact parameter estimation and equalisation, the main degradations can be attributed to filtering: since no data-rate-reducing guard bands are applied in the dense OFDM spectrum to separate the ONU bands, filtering has to be done sharply. The resulting spectral truncation removes signal power from the edge carriers and lets OFDM symbols interfere through a widened impulse response, both of which raise the EVM. Mitigation at the ONU side by oversampling and adequate filtering is expected to be successful. Pulse shaping at the OLT is avoided so as not to raise the demands on the high-speed OLT DAC component; it would only be considered if a lower system EVM were needed.
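As a quick cross-check of the sub-band parameters quoted in the test description above, the following sketch relates the ONU sampling rate to the number of decodable subcarriers; the subcarrier spacing is derived under the assumption that the 1024 subcarriers span the 16 GHz OLT rate, which is not quoted explicitly in the document.

```python
# Cross-check of the OFDM-PON sub-band figures from the OFDM-PON_RxPHY test.
OLT_RATE_HZ = 16e9        # OLT clock / DAC rate (assumption for the spacing below)
N_SUBCARRIERS = 1024
ONU_RATE_HZ = 500e6       # ONU ADC rate quoted in the test annotations
LO_GRID_HZ = 250e6        # LO tuning grid quoted in the test annotations

spacing = OLT_RATE_HZ / N_SUBCARRIERS           # 15.625 MHz per subcarrier
print(f"subcarrier spacing : {spacing / 1e6:.3f} MHz")
print(f"ONU sub-band       : {ONU_RATE_HZ / spacing:.0f} carriers (31 usable)")
print(f"LO grid step       : {LO_GRID_HZ / spacing:.0f} carriers")
```

Under that assumption the derived values (32 carriers per sub-band, 16 carriers per LO step) match the figures stated in the test annotations.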

4.2.5. Fronthaul testing

Test Description
Identifier: Ethernet_Fronthaul_SyncTest
Test Purpose: The purpose of this test is to verify the correct function of the SyncE and PTP functions of the nodes.
Configuration: external clock; Digital Unit (DU); TrustNode; Radio Unit (RU); spectrum analyser

Figure 55: Ethernet Fronthaul SyncTest ("clock master" providing RefCLKin to the DU; DU, TrustNode and RU connected over 1GbE links with SyncE/PTP; RU DAC output observed on a spectrum analyser)

References Applicability

Pre-test conditions DU and RU will be provided by HHI from the H2020 iCIRRUS Project and their function will be tested there (these nodes are planned to be available in M27), TrustNode supports Ethernet-based synchronisation

Test Sequence:
Step 1 (configure): Setup at HHI Labs. Result: planned for M28
Step 2 (check): Check derivation of the clock at DU, TrustNode and RU
Step 3 (check): Check the output clock if the input clock is changed
Test Verdict: -

Test Description
Identifier: Ethernet_Fronthaul_Dual_RUTest
Test Purpose: The purpose of this test is to verify the synchronisation by joint detection of the signal at the UE.
Configuration: Digital Unit (DU); TrustNode; Radio Unit (RU), 2x; end-user device (UE)

Figure 56: Ethernet Fronthaul Dual RUTest

References See below Applicability

Pre-test conditions Ethernet_Fronthaul_SyncTest successful, 60GHz link working

Test Sequence:
Step 1 (configure): Setup at HHI Labs. Result: planned for M29
Step 2 (check): Check the received signal at the UE when RU#1 is on
Step 3 (check): Check the received signal at the UE when RU#2 is on
Step 4 (check): Check the received signal at the UE when RU#1 and RU#2 are on
Test Verdict: -

4.2.6. Optical wireless link testing

Test Description
Identifier: OW_Link FieldTest
Test Purpose: The purpose of this test is to verify the correct function of the Optical Wireless (OW) backhaul link in Mediterranean weather conditions.
Configuration: OW link (2 nodes); control/logging PC; weather sensor

Figure 57: OW Link FieldTest

References See below Applicability

Pre-test conditions Function of OW link (working at a HHI installation)

Test Sequence:
Step 1 (configure): Setup at Altice Labs (see above). Result: planned for M27-M30
Step 2 (check): Check connectivity
Step 3 (check): Check weather sensor information
Step 4 (check): Check logging function
Test Verdict: -

Based on the experiments conducted in Aveiro during the FP7 SODALES project and reported in [14], the next generation of Optical Wireless (OW) links will be installed in Aveiro. The new OW links will be deployed outdoors at a distance of approximately 70 m, between rooftops of the Altice Labs buildings, with a height difference of approx. 10 m to emulate a macro-cell-to-small-cell backhaul scenario. Next to the front-ends, a Vaisala PWD12 weather station is deployed to study the impact of weather. Visibility up to 2 km and precipitation (rain, snow, fog) are monitored every two minutes. The result will be both direct dependencies and statistical values concerning availability. In comparison to the long-term measurements in Berlin described in [15], the link in Aveiro will be challenged by the different Mediterranean climate.


Figure 58: Planned OW link at Altice Labs, Aveiro

For the outdoor tests, a real-time optical wireless communications (OWC) system, developed at Fraunhofer HHI, with a peak gross data rate of 1 Gb/s was used. A commercially available real-time digital signal processing (DSP) unit with 100 MHz bandwidth using OFDM and a fine-granular rate adaptation with modulation formats up to 4096 QAM (Quadrature Amplitude Modulation) is used. The DSP includes all PHY and MAC functions as well as analogue-to-digital and digital-to-analogue converters. In combination with proprietary LED (Light Emitting Diode) driver and photo-receiver electronics, high data rates are achievable. At the transmitter, the low-cost infrared LED SFH 4451 with an active semiconductor area of 0.3 x 0.3 mm² is used with a centre wavelength of 850 nm. The parabolic reflector mounted directly on the LED chip realizes a divergence of 17° at full width half maximum (FWHM) and enlarges the effective area of the LED to 1.65 mm². A convex lens with 166 mm focal length and 100 mm diameter reduces the divergence of the beam to 0.285° FWHM. The resulting spot at 100 m has a radius of only 0.5 m. The spot is almost homogenously illuminated, in contrast to the Gaussian beam profile of laser-based free-space optical systems, which simplifies the initial alignment of the link and makes the system robust against small misalignments. Transceivers could, for instance, be mounted on a streetlight. The same convex lens as used at the transmitter focuses the received power onto the photodiode (PD) S6968 from Hamamatsu having a relatively large effective area of 1.5 cm². Likewise, the lens reduces the field of view (FOV) of the receiver to 2.4° to decrease background radiation, e.g. due to sunlight scattered at a cloud.

Figure 59: OW link node to be installed in Aveiro (1G capable)

Optics and electronics are set up in a waterproof housing to withstand the outdoor conditions. A sunshade is installed to further reduce the influence of scattered sunlight. For alignment, a telescope can be mounted temporarily on the housing, enabling a setup of the system within minutes.

4.3. Software components testing

Having discussed the testing procedures for the CHARISMA hardware components in the previous section, we now turn to the testing methodologies for the software components of the CHARISMA network: the CMO and the VNFs.

4.3.1. Control Management and Orchestration (CMO) components testing

4.3.1.1. Service Orchestration

The following tables describe the TeNOR tests in CHARISMA, which verify the proper functionality and performance of TeNOR together with the OAM. The first test relates to uploading a set of VNF Descriptors (VNFDs) to the TeNOR catalogue.

Test Description Identifier OAM_Tenor_upload_VNFD

Test Purpose This test verifies that an Infrastructure Provider is able to upload a set of VNFDs to TeNOR’s catalogue through the Open Access Manager.

Test Step Type Description Result Sequence 1 Create OAM REST client for the Infrastructure Provider

2 Infrastructure Provider posts a list of VNFDs to Open Access Manager northbound REST API

3 Open Access Manager responds with HTTP 200 (OK). TeNOR contains the uploaded VNFDs.

4 Infrastructure provider posts the same list of VNFDs to Open Access Manager northbound REST API

5 Open Access Manager responds with HTTP 409 (Conflict), since the descriptors already exist in the Catalogue.

6 Infrastructure provider posts a list of wrongly formatted VNFDs to Open Access Manager northbound REST API.

7 Open Access Manager responds with HTTP 400 (Bad Request) and the error message in the body.

8 Infrastructure provider posts a list of VNFDs to Open Access Manager northbound REST API. There is an error uploading the third VNFD to TeNOR.

9 Open Access Manager responds with the proper HTTP error code (depending on the exception that occurred) and rolls back the action. TeNOR does not contain any VNFD from the list.

Test Verdict
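By way of illustration, the upload sequence exercised above could be driven from a small REST client as sketched below; the base URL, resource path and payload layout are assumptions, since the concrete OAM northbound paths are defined in the CHARISMA interface specifications rather than reproduced here.

```python
import requests

OAM_BASE = "http://oam.example.net:8080/api"   # hypothetical OAM northbound endpoint


def upload_vnfds(vnfds, token):
    """Post a list of VNFDs on behalf of the Infrastructure Provider."""
    return requests.post(
        f"{OAM_BASE}/vnfds",                   # assumed resource path
        json={"vnfds": vnfds},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )

# Expected behaviour according to the test sequence above:
#   first upload   -> 200 (OK), descriptors stored in TeNOR's catalogue
#   repeat upload  -> 409 (Conflict), descriptors already present
#   malformed list -> 400 (Bad Request) with an error message in the body
```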

Test Description Identifier OAM_Tenor_upload_NSD

Test Purpose This test verifies that an Infrastructure Provider is able to upload a NSD to TeNOR’s catalogue through the Open Access Manager.

Test Step Type Description Result Sequence 1 Create OAM REST client for the Infrastructure Provider

2 Load predefined VNFDs in TeNOR’s catalogue

3 Infrastructure Provider posts a NSD to Open Access Manager northbound REST API

4 Open Access Manager responds with HTTP 200 (OK). TeNOR contains the new NSD.

5 Infrastructure provider posts the same NSD to Open Access Manager northbound REST API.

6 Open Access Manager responds with HTTP 409 (Conflict), since the descriptor already exists in the Catalogue.

7 Infrastructure provider posts a wrongly formatted NSD to Open Access Manager northbound REST API.

8 Open Access Manager responds with HTTP 400 (Bad Request) and the error message in the body.

9 Infrastructure provider posts a NSD with reference to non-existing VNFDs.

10 Open Access Manager responds with HTTP 400 (Bad Request) and the error message in the body.

Test Verdict

Test Description Identifier OAM_Tenor_retrieve_VNFDs

Test Purpose This test verifies that the Infrastructure Provider and the VNOs are able to list the available VNFDs in TeNOR’s catalogue using the Open Access Manager.

Test Step Type Description Result Sequence 1 Create OAM REST client for the Infrastructure Provider or the VNO.

2 Load predefined VNFDs in TeNOR’s catalogue

3 Infrastructure Provider or VNO sends a request to Open Access Manager to list all existing VNFDs.

4 Open Access Manager responds with HTTP 200 (OK). The response contains the information of all VNFDs in JSON format.

5 Infrastructure provider or VNO sends a request to Open Access Manager in order to retrieve a single VNFD, given its ID.

6 Open Access Manager responds with HTTP 200 (OK). The response contains the information of the specific VNFD in JSON format.

7 Infrastructure provider or VNO sends a request to Open Access Manager in order to retrieve a single VNFD, but giving a non-existing ID.

8 Open Access Manager responds with HTTP 404 (Not Found).

Test Verdict

Test Description Identifier OAM_Tenor_retrieve_NSDs

Test Purpose This test verifies that the Infrastructure Provider and the VNOs are able to list the available NSDs in TeNOR’s catalogue using the Open Access Manager.

Test Step Type Description Result Sequence 1 Create OAM REST client for the Infrastructure Provider or the VNO.

2 Load predefined NSDs in TeNOR’s catalogue

3 Infrastructure Provider or VNO sends a request to Open Access Manager to list all existing NSDs.

4 Open Access Manager responds with HTTP 200 (OK). The response contains the information of all NSDs in JSON format.

5 Infrastructure provider or VNO sends a request to Open Access Manager in order to retrieve a single NSD, given its ID.

6 Open Access Manager responds with HTTP 200 (OK). The response contains the information of the specific NSD in JSON format.

7 Infrastructure provider or VNO sends a request to Open Access Manager in order to retrieve a single NSD, but giving a non-existing ID.

8 Open Access Manager responds with HTTP 404 (Not Found).

Test Verdict

Test Description Identifier OAM_Tenor_check_availability

Test Purpose This test verifies the reaction of the Open Access Manager when TeNOR is not available.

Test Step Type Description Result Sequence 1 Create OAM REST client for the Infrastructure Provider or the VNO.

2 Infrastructure Provider or VNO sends a request to Open Access Manager to list all existing NSDs.

3 Open Access Manager responds with HTTP 503 (Service Unavailable). The body of the message contains information about the error.

Test Verdict

Test Description Identifier OAM_Tenor_instantiate_network_service

Test Purpose The test checks that a VNO is able to instantiate a Network Service through the Open Access Manager.

Test Step Type Description Result Sequence 1 Create OAM REST client for the VNO.

2 Load predefined VNFDs and NSDs to TeNOR.

3 Create virtual slice in Open Access Manager.

4 VNO sends a request to instantiate a Network Service, specifying in which CAL it should be deployed.

5 Open Access Manager responds with HTTP 202 (Accepted). Open Access Manager gets notified when TeNOR instantiates the service and its state is modified to “RUNNING”.

6 VNO sends a request to instantiate a Network Service, specifying a CAL that does not belong to the current slice.

7 Open Access Manager responds with HTTP 401 (Unauthorized)

8 VNO sends a request to instantiate a Network Service, specifying in which CAL it should be deployed.

9 Open Access Manager responds with HTTP 202 (Accepted). There is an error deploying the Network Service and Open Access Manager changes its state to “ERROR”. Open Access Manager performs a rollback action and no VNF of the Network Service remains deployed.

10 VNO sends a request to instantiate a non-existing Network Service.

11 Open Access Manager responds with HTTP 404 (Not Found)

Test Verdict
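A client-side sketch of the asynchronous instantiation flow checked above (202 Accepted followed by a state change to “RUNNING” or “ERROR”) is given below; the endpoint paths, response fields and polling interval are illustrative assumptions rather than the documented OAM API.

```python
import time
import requests

OAM_BASE = "http://oam.example.net:8080/api"   # hypothetical OAM northbound endpoint


def instantiate_ns(nsd_id, cal_id, token, poll_s=5, timeout_s=300):
    """Request a Network Service instantiation and poll until RUNNING or ERROR."""
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.post(f"{OAM_BASE}/ns-instances",           # assumed path
                         json={"nsd_id": nsd_id, "cal": cal_id},
                         headers=headers, timeout=10)
    if resp.status_code != 202:
        raise RuntimeError(f"expected 202 Accepted, got {resp.status_code}")
    instance_id = resp.json()["id"]                            # assumed response field

    deadline = time.time() + timeout_s
    while time.time() < deadline:
        state = requests.get(f"{OAM_BASE}/ns-instances/{instance_id}",
                             headers=headers, timeout=10).json().get("state")
        if state in ("RUNNING", "ERROR"):
            return state
        time.sleep(poll_s)
    return "TIMEOUT"
```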

Test Description Identifier OAM_Tenor_stop_network_service

Test Purpose The test checks that a VNO is able to stop a running Network Service through the Open Access Manager.

Test Step Type Description Result Sequence 1 Create OAM REST client for the VNO.

2 Load predefined VNFDs and NSDs to TeNOR.

3 Create virtual slice in Open Access Manager.

4 Instantiate Network Service in OpenStack. This Network Service is deployed in a slice of the VNO.

5 VNO sends a request to stop a running Network Service.

6 Open Access Manager responds with HTTP 202 (Accepted). Open Access Manager gets notified when TeNOR stops the service and its state is modified to “STOPPED”.

7 VNO sends a request to stop a Network Service that does not belong to any of his/her slices.

8 Open Access Manager responds with HTTP 404 (Not Found)

10 VNO sends a request to stop a non-existing Network Service.
11 Open Access Manager responds with HTTP 404 (Not Found)

12 VNO sends a request to stop a stopped Network Service of his/her slice.

13 Open Access Manager responds with HTTP 403 (Forbidden).

Test Verdict

Test Description Identifier OAM_Tenor_start_network_service

Test Purpose The test checks that a VNO is able to start a stopped Network Service through the Open Access Manager.

Test Step Type Description Result Sequence 1 Create OAM REST client for the VNO.

2 Load predefined VNFDs and NSDs to TeNOR.

3 Create virtual slice in Open Access Manager.

4 Instantiate and stop Network Service in OpenStack. This Network Service is deployed in a slice of the VNO.


5 VNO sends a request to run a stopped Network Service of his/her slice.

6 Open Access Manager responds with HTTP 202 (Accepted). Open Access Manager gets notified when TeNOR starts the service and its state is modified to “RUNNING”.

7 VNO sends a request to start a Network Service that does not belong to any of his/her slices.

8 Open Access Manager responds with HTTP 404 (Not Found)

10 VNO sends a request to start a non-existing Network Service.

11 Open Access Manager responds with HTTP 404 (Not Found)

12 VNO sends a request to start a stopped Network Service of his/her slice.

13 Open Access Manager responds with HTTP 202 (Accepted). There is an error starting the Network Service and Open Access Manager changes its state to “ERROR”. Open Access Manager performs a rollback action and no VNF of the Network Service is running.

14 VNO sends a request to start a running Network Service of his/her slice.

15 Open Access Manager responds with HTTP 403 (Forbidden).

Test Verdict

Step Types:

- A stimulus corresponds to an event that triggers a specific action on a Function Under Test (FUT), like sending a message, for instance.

- A configure corresponds to an action to modify the FUT or the component configuration.

- A check consists of observing that one FUT behaves as described/expected, e.g. resource creation, update, deletion, etc. For each check in the Test Sequence, a result can be recorded.

4.3.1.2. Service Policy Manager (Caching and Security)

The following tables describe the tests for the Service Policy Manager (SPM) module in CHARISMA. The tests verify proper functionality for each SPM interface.

4.3.1.3. Testing of SPM – M&A interface

Test Description
Identifier Reception of properly built Alert Notification message
Test Purpose This test verifies that the SPM is able to properly understand and process a well-formed Alert Notification message coming from the M&A module, that being either a beginning Alert Notification or an end Alert Notification
Configuration Required entities: M&A server and SPM server
References Deliverable D3.2
Applicability

Pre-test conditions The M&A and the SPM must share a valid network connection over the Internet (e.g. both able to return “ping” requests).

Test Sequence:
Step 1: M&A sends an Alert Notification message to the SPM using the previously established network connection.
Step 2: M&A receives status code 201 and the “alert_rule_id” from the SPM. Verify that the SPM has properly stored the Alert in its internal structure.
Test Verdict

Test Description Identifier Reception of badly built Alert Notification message Test Purpose This test verifies that the SPM is able to return an error message, in the case that it receives a badly-formed Alert Notification message coming from the M&A module, this being either a beginning Alert Notification or an end Alert Notification. Configuration Required entities: M&A server and SPM server References Deliverable D3.2 Applicability

Pre-test conditions The M&A and the SPM must share a valid network connection over the Internet (e.g. both able to return “ping” requests).

Test Step Type Description Result Sequence 1 M&A sends Alert Notification message to the SPM using the previously established network connection. The message is poorly built (not following the syntax described in https://confluence.i2cat.net/display/CHARISMA/4.+Alert+Notification+Interface). 2 M&A receives an error status code (could be 400, 401 or 500, depending on the specific error) from the SPM

Test Verdict
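Both cases (well-formed and malformed Alert Notifications) could be exercised with a short test client along the following lines; the SPM endpoint and the payload fields are placeholders, since the actual schema is the one defined on the interface page referenced above.

```python
import requests

SPM_ALERT_URL = "http://spm.example.net:9000/alerts"   # hypothetical SPM endpoint

# Placeholder fields; the real Alert Notification syntax is defined in the
# CHARISMA interface documentation referenced in the test above.
well_formed = {"type": "begin", "metric": "cpu_load", "target": "10.0.0.5", "value": 0.95}
malformed = {"oops": True}                              # deliberately misses mandatory fields

ok = requests.post(SPM_ALERT_URL, json=well_formed, timeout=5)
assert ok.status_code == 201 and "alert_rule_id" in ok.json()

bad = requests.post(SPM_ALERT_URL, json=malformed, timeout=5)
assert bad.status_code in (400, 401, 500)
```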

4.3.1.4. Testing of SPM – Orchestrator interface

Test Description Identifier Request to create a new instance of a specific VNF Test Purpose This test verifies that the SPM is able to properly request that the Orchestrator instantiates a specific VNF Configuration Required entities: TeNOR Orchestrator and SPM server References Deliverable D3.2 Applicability

Pre-test conditions The Orchestrator and the SPM must share a valid network connection over the Internet (e.g. both able to return “ping” requests).

Test Step Type Description Result Sequence 1 SPM sends a Create New Instance message to the Orchestrator, according to TeNOR’s interface description http://t-nova.github.io/TeNOR/doc/index.html#/Provisioning

2 SPM receives status code 201 from the Orchestrator Test Verdict

Test Description Identifier Request to obtain information about existing VNF Test Purpose This test verifies that the SPM is able to properly obtain information about currently existing VNF instances Configuration Required entities: TeNOR Orchestrator and SPM server References Deliverable D3.2 Applicability

Pre-test conditions The Orchestrator and the SPM must share a valid network connection over the Internet (e.g. both able to return “ping” requests).

Test Step Type Description Result Sequence 1 SPM sends a request to the Orchestrator to obtain information about a particular instance, according to TeNOR’s interface description http://t-nova.github.io/TeNOR/doc/index.html#!/Provisioning/get_ns_instances_id

2 SPM receives status code 201 from the Orchestrator, and the VNF instance description Test Verdict

Test Description Identifier Request to terminate an instance of a specific VNF Test Purpose This test verifies that the SPM is able to properly request that the Orchestrator terminates a specific VNF Configuration Required entities: TeNOR Orchestrator and SPM server References Deliverable D3.2 Applicability

Pre-test conditions The Orchestrator and the SPM must share a valid network connection over the Internet (e.g. both able to return “ping” requests).

Test Step Type Description Result Sequence 1 SPM sends a Delete Instance message to the Orchestrator, according to TeNOR’s interface description http://t-nova.github.io/TeNOR/doc/index.html#!/Provisioning/delete_ns_instances_id

2 SPM receives status code 204 from the Orchestrator, indicating that the instance has been deleted Test Verdict

4.3.1.5. Service Monitoring & Analytics

The following tables describe the tests for the Monitoring and Analytics (M&A) module in CHARISMA. The tests verify proper functionality for each M&A interface.

Test Description Identifier Target Server Management Interface Test – Type Linux Test Purpose This test verifies all Target Server Management interface functionalities: 1) Registration: Save Target Server information to the M&A database and install required agent to the Target Server. 2) Retrieval: - Retrieve Target Server information from M&A database. - Retrieve Target Server monitoring data from timeseries database. 3) Deletion: Delete Target Server information from M&A database. Configuration Required entities: M&A application and a Target Server running Linux on the same network (or otherwise reachable). References Deliverable D3.2, Paragraph 4.2.1, Requirement: SMA.5 Applicability The target server used must have Python >=2.6 installed.

Pre-test conditions There is a physical or virtual server available under a known IP address.

Test Sequence:
Step 1: Send POST request registering a new Target Server using the known IP address.
Step 2: Receive status code 201 and the new Target Server ID. Result: PASS
Step 3: Send GET request using the returned Target Server ID to retrieve the Target Server information.
Step 4: Receive status code 200 and the Target Server information. Result: PASS
Step 5: Send GET request to retrieve timeseries data for a default metric within the last n seconds.
Step 6: Receive status code 200 and the requested data. Result: PASS
Step 7: Send DELETE request using the returned Target Server ID.
Step 8: Receive status code 200. Result: PASS
Step 9: Send GET request using the returned Target Server ID to retrieve the Target Server information.
Step 10: Receive status code 404. Result: PASS
Step 11: Send GET request to retrieve timeseries data for a default metric within the last n seconds.
Step 12: Receive status code 200 and empty data. Result: PASS
Test Verdict PASS

The following picture illustrates the HTML format output produced by the execution of the robot framework test designed to verify the Monitoring and Analytics target resource management API functionality for Linux devices using Prometheus agent to expose their data.


Figure 60: Robot framework HTML result output - MA target resource API test - Type: Linux Server
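The same register/query/delete cycle can also be scripted directly against the M&A REST API, e.g. in Python rather than Robot Framework; the resource paths and field names below are illustrative assumptions.

```python
import requests

MA_BASE = "http://ma.example.net:5000/api"     # hypothetical M&A endpoint

# Steps 1-2: register a Linux target server, expect 201 and its new ID
r = requests.post(f"{MA_BASE}/targets", json={"ip": "10.0.0.42", "type": "linux"}, timeout=10)
assert r.status_code == 201
target_id = r.json()["id"]

# Steps 3-4: retrieve the stored target information
assert requests.get(f"{MA_BASE}/targets/{target_id}", timeout=10).status_code == 200

# Steps 5-6: query timeseries data for a default metric over the last 60 s
q = requests.get(f"{MA_BASE}/targets/{target_id}/metrics",
                 params={"metric": "node_cpu", "last": 60}, timeout=10)
assert q.status_code == 200

# Steps 7-10: delete the target and verify that it is gone
assert requests.delete(f"{MA_BASE}/targets/{target_id}", timeout=10).status_code == 200
assert requests.get(f"{MA_BASE}/targets/{target_id}", timeout=10).status_code == 404
```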

Test Description Identifier Target Server Management Interface Test – Type Network Device Test Purpose This test verifies all Target Server Management interface functionalities: 1) Registration: Save Target Server information to the M&A database and install required agent to the Monitoring Server. 2) Retrieval: - Retrieve Target Server information from M&A database. - Retrieve Target Server monitoring data from timeseries database. 3) Deletion: Delete Target Server information from M&A database and stop M&A from communicating with specified Target Server agent. Configuration Required entities: M&A application and a network device as Target Server on the same network (or otherwise reachable). References Deliverable 3.2, Paragraph 4.2.1, Requirement: SMA.4 Applicability Network device must expose SNMP data.

Pre-test conditions There is a physical or virtual network device available under a known IP address.

Test Sequence:
Step 1: Send POST request registering a new Target Server using the known IP address.
Step 2: Receive status code 201 and the new Target Server ID. Result: PASS
Step 3: Send GET request using the returned Target Server ID to retrieve the Target Server information.
Step 4: Receive status code 200 and the Target Server information. Result: PASS
Step 5: Send GET request to retrieve timeseries data for a default metric within the last n seconds.
Step 6: Receive status code 200 and the requested data. Result: PASS
Step 7: Send DELETE request using the returned Target Server ID.
Step 8: Receive status code 200. Result: PASS
Step 9: Send GET request using the returned Target Server ID to retrieve the Target Server information.
Step 10: Receive status code 404. Result: PASS
Step 11: Send GET request to retrieve timeseries data for a default metric within the last n seconds.
Step 12: Receive status code 200 and empty data. Result: PASS
Test Verdict PASS

The following picture illustrates the HTML format output produced by the execution of the robot framework test designed to verify the Monitoring and Analytics target resource management API functionality for devices using SNMP protocol to expose their data.

Figure 61: Robot framework HTML result output - MA target resource API test - Type: Network Device

Test Description Identifier Data Querying Interface Test Test Purpose This test verifies Data Querying interface functionality of retrieving any requested timeseries data. This test can also be used to verify normal operation of monitoring for all the available resources in the CHARISMA architecture. Configuration Required entities: M&A application deployed to monitor data from at least one Target Server. References Deliverable 3.2, Paragraph 4.2.1, Requirement: SMA.4, SMA.5, SMA.6, SMA.7, SMA.8, SMA.9, SMA.10, SMA.11 Applicability Monitored Target Server must be registered to the M&A.

Pre-test conditions M&A is set to monitor at least one entity. For testing monitoring status of all CHARISMA resources, all must be registered to the M&A beforehand.

Test Sequence:
Step 1: Send POST request requesting timeseries data within the last n seconds, applying additional filters.
Step 2: Receive status code 200 and the requested data. Result: PASS
Test Verdict PASS

The following picture illustrates the HTML format output produced by the execution of the robot framework test designed to verify the Monitoring and Analytics data querying API functionality.


Figure 62: Robot framework result HTML output - MA data querying API test
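A compact example of the kind of filtered query issued in the test above is sketched here; the query body and filter keys are assumptions that depend on the deployed M&A schema.

```python
import requests

MA_BASE = "http://ma.example.net:5000/api"     # hypothetical M&A endpoint

query = {
    "metric": "if_octets_rx",                  # illustrative metric name
    "last_seconds": 300,
    "filters": {"target_id": "olt-1", "interface": "gbe1"},   # assumed filter keys
}
resp = requests.post(f"{MA_BASE}/query", json=query, timeout=10)
assert resp.status_code == 200
for point in resp.json().get("datapoints", []):
    print(point["timestamp"], point["value"])
```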

Test Description Identifier Alert Rule Management Interface Test Test Purpose This test verifies all Alert Rule Management interface functionalities: 1) Registration: Register Alert Rule information to the M&A database and set M&A to evaluate its conditions. 2) Retrieval: Retrieve Alert Rule information from M&A database. 3) Functionality Verification: Trigger Alert Rule conditions and verify that an Alert Notification is created. 4) Deletion: Delete Alert Rule information from M&A database and stop M&A from evaluating its conditions. Configuration Required entities: M&A application and a Target Server on the same network (or otherwise reachable). References Deliverable D3.2, Paragraph 4.2.1, Requirement: SMA.2 Applicability Monitored Target Server must be registered to the M&A.

Pre-test conditions There is a registered Target Server available.

Test Sequence:
Step 1: Send POST request registering a new Alert Rule using the known IP address.
Step 2: Receive status code 201 and the new Alert Rule ID. Result: PASS
Step 3: Send GET request using the returned Alert Rule ID to retrieve the Alert Rule information.
Step 4: Receive status code 200 and the Alert Rule information. Result: PASS
Step 5: Trigger the Alert Rule conditions by running a custom script in the Target Server used.
Step 6: Send GET|POST request to query for the corresponding Alert Notification within the last n seconds in the M&A timeseries database where Alert Notifications are temporarily saved.
Step 7: Receive status code 200 and the requested data. Result: PASS
Step 8: Send DELETE request using the returned Alert Rule ID.
Step 9: Receive status code 200. Result: PASS
Step 10: Send GET request using the returned Alert Rule ID to retrieve the Alert Rule information.
Step 11: Receive status code 404. Result: PASS
Step 12: Send GET request to retrieve timeseries data for the enabled Alert Rules.
Step 13: Receive status code 200; the Alert Rule is not part of the returned data list. Result: PASS
Test Verdict PASS

The following picture illustrates the HTML format output produced by the execution of the robot framework test designed to verify the Monitoring and Analytics alert rule management API functionality.

Figure 63: Robot framework result output - MA alert rule management API test

4.3.1.6. Open Access Manager

The following tables describe the tests for the Open Access Manager module in CHARISMA. The tests verify the proper functionality for each OAM interface.

Test Description Identifier VNO Creation Test

Test Purpose This test verifies that an Infrastructure Provider is able to create a VNO.

Test Step Type Description Result Sequence 1 Infrastructure Provider posts the description of the VNO to be created to the Open Access Manager northbound REST API 2 Open Access Manager responds with HTTP 201 (Created) and updates its database accordingly 3 Infrastructure Provider sends a GET request of the VNOs list to the OAM northbound REST API 4 Open Access Manager responds with HTTP 200 (OK) and a list of the existing VNOs including the new one 5 Infrastructure provider posts the same VNO description to be created to OAM northbound REST API

6 OAM responds with HTTP 409 (Conflict), since the VNO is already created.

Test Verdict

Test Description Identifier VNO User Creation Test

Test Purpose This test verifies that an Infrastructure Provider is able to create a VNO user

Test Step Type Description Result Sequence 1 Infrastructure Provider posts the description of the VNO User to be created to the Open Access Manager northbound REST API 2 Open Access Manager responds with HTTP 201 (Created) and updates its database accordingly. 3 Infrastructure Provider sends a GET request of the users list to the OAM northbound REST API 4 Open Access Manager responds with HTTP 200 (OK) and a list of the existing users including the new one 5 Infrastructure provider posts the same user description to be created to OAM northbound REST API

6 OAM responds with HTTP 409 (Conflict), since the user is already created.

Test Verdict

Test Description Identifier Virtual Slice Creation Test

Test Purpose This test verifies that an Infrastructure Provider is able to create a Virtual Slice

Test Step Type Description Result Sequence 1 Infrastructure Provider posts the description of the Virtual Slice to be created to the Open Access Manager northbound REST API 2 Open Access Manager posts the description of the Virtual Slice to be created to the Infrastructure Provider’s REST API 3 Infrastructure Provider’s REST API responds with HTTP 201 (Created) and updates its database accordingly 4 Open Access Manager responds with HTTP 201 (Created) Test Verdict

Test Description Identifier VNO Removal Test

Test Purpose This test verifies that an Infrastructure Provider is able to remove a VNO

Test Step Type Description Result Sequence 1 Infrastructure Provider sends a DELETE request with the VNO ID to be removed to the Open Access Manager northbound REST API 2 Open Access Manager responds with HTTP 200 (OK) and updates its database accordingly 3 Infrastructure Provider sends a GET request of the VNOs list to the OAM northbound REST API 4 Open Access Manager responds with HTTP 200 (OK) and a list of the existing VNOs without the removed VNO Test Verdict

Test Description Identifier VNO User Removal Test

Test Purpose This test verifies that an Infrastructure Provider is able to remove a VNO user

Test Step Type Description Result Sequence 1 Infrastructure Provider sends a DELETE request with the user ID to be removed to the Open Access Manager northbound REST API 2 Open Access Manager responds with HTTP 200 (OK) and updates its database accordingly 3 Infrastructure Provider sends a GET request of the users list to the OAM northbound REST API 4 Open Access Manager responds with HTTP 200 (OK) and a list of the existing users without the newly removed user Test Verdict

Test Description Identifier Virtual Slice Removal Test

Test Purpose This test verifies that an Infrastructure Provider is able to remove a Virtual Slice

Test Step Type Description Result Sequence 1 Infrastructure Provider sends a DELETE request with the ID of the Virtual Slice to be removed to the Open Access Manager northbound REST API 2 Open Access Manager sends a DELETE request with the ID of the Virtual Slice to be removed to the Infrastructure Provider’s REST API 3 Infrastructure Provider’s REST API responds with HTTP 200 (OK) and updates its database accordingly 4 Open Access Manager responds with HTTP 200 (OK) Test Verdict

Test Description Identifier Resource availability test

Test Purpose This test verifies if a virtual slice can be created using the available compute and network resources of a specified infrastructure.

Test Step Type Description Result Sequence 1 The Slice manager asks the OAM Infrastructure Manager module if the specified infrastructure is registered in the Infrastructure Repository. 2 The OAM Infrastructure Manager module returns true, it exists. 3 The Slice manager asks the OAM Infrastructure Manager module if there are enough compute and network resources to create a new slice. 4 The OAM Infrastructure Manager module checks if the compute nodes are available using the PoP OpenStack API. 5 The OpenStack API answers HTTP 200 OK. 6 The OAM Infrastructure Manager module checks if the network nodes can place another slice. 7 The network nodes, e.g. an OLT, responds HTTP 200 (OK) Test Verdict

Test Description Identifier Slice availability test

Test Purpose This test verifies if a virtual slice is available to host a Network Service.

Test Step Type Description Result Sequence 1 The Network Service Manager asks the Slice Manager if the Virtual Slice is created and available. 4 The Slice Manager checks if the slice is available. 5 The Slice Repository returns true, it is available. The Network Service Manager checks the availability of the slice’s compute resources with the OAM Infrastructure Manager module. 6 The OAM Infrastructure Manager module checks if the compute nodes are available using the PoP OpenStack API. 7 The OpenStack API answers with HTTP 200 (OK). 8 The OAM Infrastructure Manager module returns true, it is available. Test Verdict

Test Description Identifier Physical network management Test Purpose This test verifies that an Infrastructure Provider is able to create, get the information, and remove a physical network.

Test Step Type Description Result Sequence 1 Infrastructure Provider tries to create a physical network using the OAM northbound REST API. 2 Open Access Manager responds with HTTP 201 (Created) and updates its database accordingly 3 Infrastructure Provider tries to get the list of physical networks using the OAM northbound REST API. 4 Open Access Manager responds with HTTP 200 (OK) and the list of the physical networks, including the new one. 5 Infrastructure Provider tries to delete the new physical network using the OAM northbound REST API. 6 OAM responds with HTTP 200 (OK), and the physical network information is removed from the database. 7 Infrastructure Provider tries to delete the same physical network again. 8 OAM responds with HTTP 404 (Not Found), since the physical network does not exist. Test Verdict

Test Description Identifier Physical resources management Test Purpose This test verifies that an Infrastructure Provider is able to manage the resources of a physical network.

Test Step Type Description Result Sequence 1 Infrastructure Provider tries to add a GPON OLT to the physical network. Therefore, its location and its IP are specified 2 Open Access Manager responds with HTTP 201 (Created) and updates its database accordingly. The Infrastructure Provider is able to retrieve information from the OLT. 3 Infrastructure Provider tries to add an OpenStack instance to the physical network. Therefore its credentials, IP and configuration are specified. 4 Open Access Manager responds with HTTP 201 (Created) and the database is accordingly updated. The Infrastructure Provider is able to retrieve information from the OpenStack instance. 5 Infrastructure Provider tries to delete a resource from the physical network. 6 OAM responds with HTTP 200 (OK), and the resource information is removed from the physical network. 7 Infrastructure Provider tries to delete a non-existing resource from the physical network. 8 OAM responds with HTTP 404 (Not Found), since the resource does not exist. Test Verdict

Test Description Identifier VNO Creation Test Purpose This test verifies that an Infrastructure Provider is able to create a VNO

Test Step Type Description Result Sequence 1 Infrastructure Provider posts the description of the VNO to be created to the Open Access Manager northbound REST API. 2 Open Access Manager responds with HTTP 201 (Created) and updates its database accordingly. 3 Infrastructure Provider sends a GET request of the VNOs list to the OAM northbound REST API. 4 Open Access Manager responds with HTTP 200 (OK) and a list of the existing VNOs including the new one. 5 Infrastructure Provider posts the same VNO description to be created to the OAM northbound REST API 6 OAM responds with HTTP 409 (Conflict), since the VNO is already created. Test Verdict

Test Description Identifier VNO User Creation Test Purpose This test verifies that an Infrastructure Provider is able to create a VNO user.

Test Step Type Description Result Sequence 1 Infrastructure Provider posts the description of the VNO User to be created to the Open Access Manager northbound REST API. 2 Open Access Manager responds with HTTP 201 (Created) and updates its database accordingly. 3 Infrastructure Provider sends a GET request of the users list to the OAM northbound REST API. 4 Open Access Manager responds with HTTP 200 (OK) and a list of the existing users including the new one. 5 Infrastructure Provider posts the same user description to be created to the OAM northbound REST API. 6 OAM responds with HTTP 409 (Conflict), since the user is already created. Test Verdict

Test Description Identifier Slice creation Test Purpose Test verifies that the Infrastructure Provider (IP) is able to create slices over his/her physical infrastructure and assign it to a specific VNO.

Test Step Type Description Result Sequence

1 Infrastructure Provider configures a new slice, mapping virtual resources to the physical ones and selecting a VLAN for the isolation. InfP posts the description of the slice to the Open Access Manager through its northbound REST API. 2 Open Access Manager responds with HTTP 201 (Created). 3 The slice information is stored in the OAM database; OpenStack contains a network representing the slice with the specified VLAN; the slice is assigned to the specified VNO.

3 Infrastructure Provider sends a GET request of the Slices list to the OAM northbound REST API. 4 Open Access Manager responds with HTTP 200 (OK) and a list of the existing Slices, including the new one. 5 Infrastructure Provider tries to create another Slice with the same VLAN. 6 OAM responds with HTTP 409 (Conflict), since this VLAN is already assigned to an existing slice. 7 Infrastructure Provider tries to assign a new slice to a non-existing VNO. 8 OAM responds with HTTP 400 (Bad Request), since the VNO does not exist. 9 Infrastructure Provider tries to map a virtual resource to a non-existing physical resource. 10 OAM responds with HTTP 400 (Bad Request), since the physical resource does not exist. Test Verdict
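To make the slice-creation call above concrete, a hypothetical client request could look as follows; the endpoint and the descriptor fields (resource mapping, VLAN) are illustrative only and do not reproduce the real OAM schema.

```python
import requests

OAM_BASE = "http://oam.example.net:8080/api"   # hypothetical OAM northbound endpoint

slice_descriptor = {                            # illustrative structure, not the real schema
    "name": "vno1-slice",
    "vno_id": "vno1",
    "vlan": 101,
    "resources": {
        "compute": ["pop-openstack-1"],
        "network": ["gpon-olt-1", "backhaul-sw-1"],
    },
}

resp = requests.post(f"{OAM_BASE}/slices", json=slice_descriptor, timeout=10)
if resp.status_code == 201:
    print("slice created:", resp.json())
elif resp.status_code == 409:
    print("VLAN already assigned to an existing slice")
elif resp.status_code == 400:
    print("bad request (e.g. non-existing VNO or physical resource):", resp.text)
```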

Test Description Identifier Slice Creation (VNO) Test Purpose This test verifies that a VNO is not allowed to create slices.

Test Step Type Description Result Sequence 1 VNO posts the description of a new slice to be created to the Open Access Manager northbound REST API. 2 Open Access Manager responds with HTTP 403 (Forbidden). Test Verdict

Test Description Identifier Slice Access (InfP and VNO) Test Purpose This test verifies that the InfP is able to retrieve information from all slices, but a VNO has only access to the one assigned to him/her.

Test Step Type Description Result Sequence 1 InfP creates two different slices (slice1 and slice2) and assigns them to different VNOs (vno1 and vno2). 2 InfP tries to get the information from slice1 and slice2 through the OAM northbound REST API. 3 OAM responds with HTTP 200 (OK) and returns the information of each slice. 4 VNO1 tries to get information from slice1. 5 OAM responds with HTTP 200 (OK) and returns the information of slice1. 6 VNO1 tries to get information from slice2. 7 OAM responds with HTTP 403 (Forbidden), since slice2 is not assigned to him/her. 8 VNO2 tries to get information from slice2. 9 OAM responds with HTTP 200 (OK) and returns the information of slice2. 10 VNO2 tries to get information from slice1. 11 OAM responds with HTTP 403 (Forbidden), since slice1 is not assigned to him/her. Test Verdict

Test Description Identifier Slice Removal (InfP) Test Purpose This test verifies that the InfP is able to delete any slice, and that the physical resources assigned to this slice (ports, VLANs, etc.) are released.

Test Step Type Description Result Sequence 1 Infrastructure Provider removes an existing slice through the OAM northbound REST API. 2 OAM responds with HTTP 200 (OK).

3 The slice VLAN is free for future use; the physical resources assigned to the slice are released; the slice information has been removed from the database. 4 Infrastructure Provider tries to remove a non-existing slice. 5 OAM responds with HTTP 404 (Not Found), since the slice does not exist. Test Verdict

Test Description Identifier Slice Removal (VNO) Test Purpose This test verifies that a VNO is not allowed to delete any slice, no matter if it is assigned to him/her or not.

Test Step Type Description Result Sequence 1 VNO tries to remove a slice through the OAM northbound REST API. 2 OAM responds with HTTP 403 (Forbidden), since VNOs are not allowed to remove slices. Test Verdict

4.3.1.7. SDN Wireless Backhaul

The following tables describe the tests designed for the SDN-enabled 60-GHz wireless backhaul in CHARISMA. Their purpose is to verify that the SDN backhaul is operating correctly, focusing on the networking operational domain, with particular regard to connectivity and traffic isolation.

Test Description Identifier SDN_backhaul_1 Test Purpose Backhaul switches connectivity: This test verifies that the switches have successfully connected to the SDN controller and the SDN controller shows essential information about them. After activating the link between them the SDN controller is able to identify the topology. Configuration

Figure 64: SDN backhaul 1

This test requires a running SDN controller, with the TCP connections initiated from the switches to the SDN controller. We have to correctly configure the DPID of the switch as well as the IP address and the port of the SDN controller. References Applicability

Pre-test conditions:
- The SDN controller is running the CE 2.0 application.
- The backhaul OpenFlow-enabled switches are connected through a legacy switch to a laptop where the SDN controller is running.

Test Sequence:
Step 1: Initiate TCP connection from backhaul switch no. 1 to the SDN controller.
Step 2: Successful connectivity; the SDN controller displays the DPID of the switch and port information. Result: PASS
Step 3: Initiate TCP connection from backhaul switch no. 2 to the SDN controller.
Step 4: Successful connectivity; the SDN controller displays the DPID of the switch and port information. Result: PASS
Step 5: Activate the radio link.
Step 6: LLDP (Link Layer Discovery Protocol) packets sent from one switch are received by the other and forwarded to the SDN controller. Result: PASS
Step 7: The SDN controller identifies the topology and displays it in the GUI. Result: PASS
Test Verdict PASS


Figure 65: Switch detection in ODL.

Figure 66: Topology detection in ODL.

Figure 67: Switch and ports in ODL.

Test Description
Identifier SDN_backhaul_2
Test Purpose REST API testing 1 – service deployment: This test verifies that we get the correct status codes for the services we want to deploy.
Configuration This test aims to verify that the SDN controller returns correct status codes for the commands (defined by the CE 2.0 app’s YANG models) that are sent in order to deploy the slicing services. Both switches are connected to the SDN controller and the SDN controller runs the CE 2.0 application. The test is focused only on the northbound REST API of the SDN controller; the correct flow installation on the switch will be part of future use-case testing.
References Applicability

Pre-test conditions:
- Switches are successfully connected to the SDN controller and the radio link is active.
- Set up a REST client to send requests in JSON format.

Test Sequence:
Step 1: Send a get-topology request to the REST API of the SDN controller.
Step 2: Validate the correct status code (200 OK). Result: PASS
Step 3: Send a request to deploy a service (INACTIVE) for two VNOs.
Step 4: Validate the correct corresponding status code (204). Result: PASS
Step 5: Send a request to activate the service for VNO 1.
Step 6: Validate the correct corresponding status code (200 OK). Result: PASS
Step 7: Send a request to activate the service for VNO 2.
Step 8: Validate the correct corresponding status code (200 OK). Result: PASS
Test Verdict PASS


Figure 68: Postman results for test description SDN_backhaul_2
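The status-code checks of SDN_backhaul_2 can equally be scripted instead of using Postman. The sketch below uses OpenDaylight's standard RESTCONF operational topology path; the path and body for the CE 2.0 service resource are placeholders, since the app's YANG paths are not reproduced here.

```python
import requests
from requests.auth import HTTPBasicAuth

ODL = "http://192.0.2.10:8181"                 # hypothetical controller address
AUTH = HTTPBasicAuth("admin", "admin")
HDRS = {"Content-Type": "application/json", "Accept": "application/json"}

# Get-topology request (standard OpenDaylight RESTCONF operational path) -> expect 200
topo = requests.get(f"{ODL}/restconf/operational/network-topology:network-topology",
                    auth=AUTH, headers=HDRS, timeout=10)
assert topo.status_code == 200

# Deploy an (INACTIVE) service for a VNO via the CE 2.0 application.
# The resource path and payload are placeholders for the app's YANG-modelled service.
svc = requests.put(f"{ODL}/restconf/config/ce20:services/service/vno1-evpl",
                   json={"service": [{"name": "vno1-evpl", "state": "INACTIVE"}]},
                   auth=AUTH, headers=HDRS, timeout=10)
assert svc.status_code in (200, 201, 204)
```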

Test Description Identifier SDN_backhaul_3 Test Purpose REST API testing2 – malformed and conflicting requests: This test verifies that we get the correct status codes when we send a malformed message to the SDN controller or a conflicting message with the already deployed services. Configuration This test aims to send commands that are either malformed or create a conflict with the already existing services. Both switches are connected to the SDN controller and the SDN controller runs the CE 2.0 application. It is focused only on the northbound REST API of the SDN controller and the correct flow installation on the switch will be a part of future testing use case. References - Applicability -

Pre-test conditions:
- Switches have successfully connected to the SDN controller and the radio link is active.
- Set up a REST client to send requests in JSON format.

Test Sequence:
Step 1: Send a malformed JSON command to the SDN controller’s REST API.
Step 2: Validate the corresponding status code (400 Bad Request). Result: PASS
Step 3: Send a request to deploy a service (INACTIVE) for two VNOs. Send a request to deploy a service which conflicts with the previous one.
Step 4: Validate the corresponding status code (409 Conflict). Result: PASS
Test Verdict PASS

Figure 69: Postman results for malformed request in test SDN_backhaul_3

Figure 70: Postman results for conflicting requests in test SDN_backhaul_3

Test Description Identifier SDN_backhaul_4 Test Purpose Two EVPL services – end-to-end connectivity: This test aims to create two EVPL services (for two corresponding VNOs) between two backhaul switches, and to show that the packets are successfully delivered with the correct C-VLANs to the corresponding UNIs. C-VLANs that are not assigned to any VNO must be dropped. Configuration

Figure 71: SDN backhaul 4

Two EVPL services associating three UNIs are configured. Firstly, an EVPL service for VNO 1 with S-VLAN 1 is created between port 1 of switch 1 and port 1 of switch 2, for customers with C-VLANs 2 and 3. Secondly, an EVPL service for VNO 2 with S-VLAN 2 is created between port 1 of switch 1 and port 2 of switch 2, for customers with C-VLANs 4 and 5. (We are not able to show the S-VLAN IDs of each VNO, because of the radio link connection.)

References - Applicability -

Pre-test conditions:
- Switches are successfully connected to the SDN controller and the radio link between them is active.
- Services for 2 VNOs with customers (C-VLAN IDs 2 and 3) and (C-VLAN IDs 4 and 5) respectively have been deployed using the SDN controller, and flows have been installed successfully on the switches.
- Testers are attached to the UNIs.

Test Sequence
1. Tester offers tagged unicast Service Frames with C-VLAN IDs 2 and 3 at port gbe1 of backhaul switch 1.
2. Frames with customer C-VLAN IDs 2 and 3 are delivered to port gbe1 of switch 2. Result: PASS
3. Tester offers tagged unicast Service Frames with C-VLAN IDs 4 and 5 at port gbe1 of backhaul switch 1.
4. Frames with customer C-VLAN IDs 4 and 5 are delivered to port gbe2 of switch 2. Result: PASS
5. Tester offers tagged unicast Service Frames with C-VLAN IDs 2 and 3 at port gbe1 of backhaul switch 2.
6. Frames with customer C-VLAN IDs 2 and 3 are delivered to port gbe1 of switch 1. Result: PASS
7. Tester offers tagged unicast Service Frames with C-VLAN IDs 4 and 5 at port gbe2 of backhaul switch 2.
8. Frames with customer C-VLAN IDs 4 and 5 are delivered to port gbe1 of switch 1. Result: PASS
9. Tester offers tagged unicast Service Frames with C-VLAN ID 6 at port gbe1 of backhaul switch 1.
10. Frames are dropped (no frames are delivered to gbe1 or gbe2 of switch 2). Result: PASS
11. Tester offers tagged unicast Service Frames with C-VLAN ID 6 at port gbe1 of backhaul switch 2.
12. Frames are dropped (no frames are delivered to gbe1 of backhaul switch 1 or gbe2 of backhaul switch 2). Result: PASS
13. Tester offers tagged unicast Service Frames with C-VLAN ID 6 at port gbe2 of backhaul switch 2.
14. Frames are dropped (no frames are delivered to gbe1 of backhaul switch 1 or gbe1 of backhaul switch 2). Result: PASS
Test Verdict: PASS
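In the testbed these frames are offered by a hardware traffic tester; for a software-only dry run, a sketch of one delivery check could look as follows, using Scapy to offer C-VLAN-tagged frames at one UNI and capture at the far-end UNI. The interface names are assumptions for a Linux host acting as the tester; this is not the procedure actually used in the trial.

```python
# Sketch of one SDN_backhaul_4 delivery check: offer frames tagged with a
# given C-VLAN at the ingress UNI and verify whether they appear at the
# egress UNI. TX_IFACE/RX_IFACE are illustrative interface names.
from scapy.all import Ether, Dot1Q, Raw, sendp, sniff

TX_IFACE = "eth1"   # tester port attached to gbe1 of backhaul switch 1 (assumption)
RX_IFACE = "eth2"   # tester port attached to gbe1 of backhaul switch 2 (assumption)

def offer_and_check(cvlan, expect_delivery=True, count=10):
    frames = [Ether(dst="ff:ff:ff:ff:ff:ff") / Dot1Q(vlan=cvlan) /
              Raw(b"CHARISMA-test") for _ in range(count)]
    # open the capture first, then transmit from the started_callback
    received = sniff(iface=RX_IFACE, timeout=5,
                     lfilter=lambda p: p.haslayer(Dot1Q) and p[Dot1Q].vlan == cvlan,
                     started_callback=lambda: sendp(frames, iface=TX_IFACE, verbose=False))
    delivered = len(received) > 0
    assert delivered == expect_delivery, f"C-VLAN {cvlan}: unexpected delivery result"

# C-VLANs 2 and 3 belong to VNO 1 and must be delivered; C-VLAN 6 must be dropped
for vid in (2, 3):
    offer_and_check(vid, expect_delivery=True)
offer_and_check(6, expect_delivery=False)
print("delivery checks passed")
```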

Test Description
Identifier: SDN_backhaul_5
Test Purpose: Two EVPL services – correct assignment of S-VLANs per VNO: this test aims to create EVPL services for two VNOs between two backhaul switches, and to show that the correct S-VLAN is pushed for each VNO. C-VLANs that are not assigned to any VNO must be dropped.
Configuration:

Figure 72: SDN Backhaul 5

In this test, in order to show that the correct S-VLANs are pushed, the radio link is not used to connect the backhaul devices. Instead, they are connected using the GbE interfaces with a non-OpenFlow switch in between, which mirrors the packets of the link to a packet sniffer. Two EVPL services associating two UNIs are configured. An EVPL service for VNO 1 with S-VLAN 1 and C-VLAN IDs 2 and 3 is created between port 1 of switch 1 and port 1 of switch 2. An EVPL service for VNO 2 with S-VLAN 2 and C-VLAN IDs 4 and 5 is created between port 1 of switch 1 and port 1 of switch 2.

References: -
Applicability: -

Pre-test conditions:
- Switches are physically connected using the GbE interfaces and a non-OpenFlow switch in between. Switches are successfully connected to the SDN controller.
- Services for 2 VNOs, with customers (C-VLAN IDs 2 and 3) and (C-VLAN IDs 4 and 5) respectively, have been deployed using the SDN controller and the corresponding flows have been installed successfully on the switches.
- Testers are attached to the UNIs and to the mirror port of the non-OpenFlow switch.

Test Sequence
1. Tester offers tagged unicast Service Frames with C-VLAN IDs 2 and 3 at port gbe1 of backhaul switch 1.
2. Frames with customer C-VLAN IDs 2 and 3 and S-VLAN 1 are received at the laptop running Wireshark. Result: PASS
3. Tester offers tagged unicast Service Frames with C-VLAN IDs 4 and 5 at port gbe1 of backhaul switch 1.
4. Frames with customer C-VLAN IDs 4 and 5 and S-VLAN 2 are received at the laptop running Wireshark. Result: PASS
5. Tester offers tagged unicast Service Frames with C-VLAN IDs 2 and 3 at port gbe1 of backhaul switch 2.
6. Frames with customer C-VLAN IDs 2 and 3 and S-VLAN 1 are received at the laptop running Wireshark. Result: PASS
7. Tester offers tagged unicast Service Frames with C-VLAN IDs 4 and 5 at port gbe2 of backhaul switch 2.
8. Frames with customer C-VLAN IDs 4 and 5 and S-VLAN 2 are received at the laptop running Wireshark. Result: PASS
9. Tester offers tagged unicast Service Frames with C-VLAN ID 6 at port gbe1 of backhaul switch 1.
10. Frames are dropped (no frames are received at the laptop running Wireshark). Result: PASS
11. Tester offers tagged unicast Service Frames with C-VLAN ID 6 at port gbe1 of backhaul switch 2.
12. Frames are dropped (no frames are received at the laptop running Wireshark). Result: PASS
Test Verdict: PASS
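The S-VLAN verification itself was done with Wireshark on the mirror port. An equivalent scripted check could sniff the mirrored link and assert that every double-tagged frame carries an S-VLAN/C-VLAN combination belonging to the same VNO. The sketch below assumes both tags use EtherType 0x8100 as pushed by the OpenFlow switches, and an illustrative mirror interface name.

```python
# Sketch of the SDN_backhaul_5 S-VLAN assignment check on the mirrored link.
from scapy.all import Dot1Q, sniff

MIRROR_IFACE = "eth3"  # laptop interface attached to the mirror port (assumption)

# allowed C-VLANs under each S-VLAN (VNO 1 -> S-VLAN 1, VNO 2 -> S-VLAN 2)
EXPECTED = {1: {2, 3}, 2: {4, 5}}

def check_svlan_assignment(timeout=30):
    pkts = sniff(iface=MIRROR_IFACE, timeout=timeout,
                 lfilter=lambda p: p.haslayer(Dot1Q))
    for p in pkts:
        outer = p[Dot1Q]
        inner = outer.payload
        if not isinstance(inner, Dot1Q):
            continue  # single-tagged frame: not part of an EVPL service here
        svlan, cvlan = outer.vlan, inner.vlan
        assert cvlan in EXPECTED.get(svlan, set()), (
            f"C-VLAN {cvlan} observed under unexpected S-VLAN {svlan}")
    print(f"checked {len(pkts)} VLAN-tagged frames; S-VLAN assignment consistent")

if __name__ == "__main__":
    check_svlan_assignment()
```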

Figure 73: The wireless backhaul nodes connected with the RF cable link and the tester

4.3.2. VNF Testing

4.3.2.1. IDS

Test Description
Identifier: IDSVNF
Test Purpose: Testing the basic functionality of the IDS VNF:
1) Add a new IDS rule (Snort rule) to identify a particular attack.
2) Check that the new rule is enforced and that the attack is identified in the IDS logs.

Configuration

Figure 74: IDS VNF

The IDS VNF is deployed within an OpenStack compute node. The IDS VNF will be configured with a set of rules that identify some specified attacks. Host A is connected to the compute node and has the responsibility of testing the functionality of the IDS VNF by performing a list of attacks. Application A running in host A will produce traffic that simulates a particular attack and send this traffic to the IDS VNF. Then, through SSH/FTP (File Transfer Protocol) access, it will check the logs of the IDS VNF, verifying that the attack has been identified by the IDS.

References “Service reliability (Denial of Service (DoS) protection)” requirement defined in Deliverable D3.2 “Initial 5G multi-provider v-security realization: Orchestration and Management” (Section 2.1.1 CHARISMA Use Case Security Analysis and Section 2.1.2 Security Requirements Summary).

Requirement 2 - “The system shall support advanced end‐to‐end security” requirement as defined in Table 4-2 of Deliverable D1.2 “Refined architecture definitions and specifications”.

Requirement 5 - “Each tenant should be able to define its own security policies, deciding the deployment of desired security services (e.g. virtual IDS, firewall) and their configuration without affecting the other tenant’s services” as defined in Table 4- 2 of Deliverable D1.2 “Refined architecture definitions and specifications”.

Applicability: Deployment of an NFVI-PoP environment in which the IDS VNF will be running.

Pre-test conditions:
- The IDS VNF is deployed in an NFVI-PoP.
- The IDS Rule Receiver service, which is responsible for receiving requests for new IDS rules, is instantiated.

Test Sequence
3. Configure: Send an HTTP request to the IDS Rule Receiver service with an IDS rule to identify a specific attack.
4. Check: Check that the rule has been added to the IDS rule set of the IDS VNF. Result: PASS
5. Stimulus: Send traffic from application A to the IDS VNF; the traffic consists of a large number of requests (flooding attack).
6. Check: Connect to the IDS VNF through SSH or FTP and check that the produced logs include alerts for the specific attack. Result: PASS
Test Verdict: PASS
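A minimal sketch of how steps 3 to 6 could be scripted is given below. It assumes the IDS Rule Receiver exposes a simple HTTP endpoint, that the IDS VNF is Snort-based with the default rule and alert file locations, and uses placeholder addresses and credentials; none of these details reflect the exact CHARISMA interfaces.

```python
# Sketch of IDSVNF steps 3-6: push a rule over HTTP, then inspect the IDS
# VNF over SSH. Endpoint, credentials, and file paths are assumptions.
import requests
import paramiko

IDS_VNF_IP = "192.168.1.20"                       # hypothetical management address
RULE_RECEIVER = f"http://{IDS_VNF_IP}:5000/rules" # hypothetical endpoint
SNORT_RULE = ('alert tcp any any -> any 80 '
              '(msg:"CHARISMA flood test"; flags:S; sid:1000001; rev:1;)')

# Step 3: push the new rule to the IDS Rule Receiver
requests.post(RULE_RECEIVER, json={"rule": SNORT_RULE}).raise_for_status()

# Steps 4 and 6: connect over SSH and inspect the rules file / alert log
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(IDS_VNF_IP, username="charisma", password="secret")  # placeholder credentials

_, out, _ = ssh.exec_command("grep -c 'sid:1000001' /etc/snort/rules/local.rules")
assert int(out.read()) >= 1, "rule not installed"

# (step 5, the flooding traffic, is generated from host A in between these checks)
_, out, _ = ssh.exec_command("grep -c 'CHARISMA flood test' /var/log/snort/alert")
assert int(out.read()) >= 1, "no alert logged for the simulated attack"
ssh.close()
print("IDS VNF checks passed")
```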

The following figure illustrates the HTML output produced by the execution of the Robot Framework test designed to verify the IDS VNF functionality.

Figure 75: Robot framework result output – IDS VNF functionality test

4.3.2.2. Firewall

Test Description
Identifier: Firewall VNF
Test Purpose: Testing the basic functionality of the firewall VNF:
1) Add a new firewall rule (OVS-based rule) dropping packets from a specific IP address.
2) Check that the new rule is enforced and that traffic coming from that IP address is no longer allowed to pass.

Configuration

Figure 76: Firewall VNF

The firewall VNF is deployed within an OpenStack compute node. Two hosts are connected to the two interfaces of the compute node, and two applications deployed on these hosts are used to test the firewall functionality. Application A sends traffic to application B. When a firewall rule is applied to drop packets with host A's source IP, host B no longer receives any traffic coming from host A. The test is implemented in Robot Framework.

References “Service reliability (DoS protection)” requirement defined in Deliverable D3.2 “Initial 5G multi-provider v-security realization: Orchestration and Management” (Section 2.1.1 CHARISMA Use Case Security Analysis and Section 2.1.2 Security Requirements Summary)

Requirement 2 - “The system shall support advanced end‐to‐end security” requirement as defined in Table 4-2 of Deliverable D1.2 “Refined architecture definitions and specifications”

Requirement 5 - “Each tenant should be able to define its own security policies, deciding the deployment of desired security services (e.g. virtual IDS, firewall) and their configuration without affecting the other tenant’s services” as defined in Table 4- 2 of Deliverable D1.2 “Refined architecture definitions and specifications”

Applicability: Deployment of an NFVI-PoP environment in which the firewall VNF will be running.

Pre-test conditions:
- The firewall VNF is deployed in an NFVI-PoP.
- The Firewall Rule Receiver service, which is responsible for receiving requests for new firewall rules, is instantiated.
- Two hosts A and B are set up, interconnected through the NFVI-PoP (as shown in the figure).
- Two applications A and B are set up in hosts A and B respectively, for sending and receiving traffic between the two hosts.

Test Sequence
1. Stimulus: Send traffic from application A to application B.
2. Check: Check that application B is receiving traffic with host A's source IP. Result: PASS
3. Configure: Send an HTTP request (in JSON format) to the Firewall Rule Receiver service with a firewall rule to drop packets from host A's source IP.
4. Check: Check that the rule has been added to the OVS rules of the firewall VNF. Result: PASS
5. Stimulus: Send traffic from application A to application B.
6. Check: Check that application B is not receiving traffic from host A's source IP. Result: PASS
Test Verdict: PASS
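A sketch of the corresponding scripted check is shown below. The Firewall Rule Receiver endpoint, the rule format and the host addresses are illustrative assumptions; reachability of host B is probed here with ICMP rather than the application traffic used in the actual test.

```python
# Sketch of the firewall VNF check: confirm traffic passes, install a drop
# rule for host A's source IP, then confirm traffic is blocked.
import subprocess
import requests

FW_RULE_RECEIVER = "http://192.168.1.30:5000/rules"  # hypothetical endpoint
HOST_A_IP = "10.10.1.2"                               # hypothetical host A address
HOST_B_IP = "10.10.2.2"                               # hypothetical host B address

def host_b_reachable():
    """Run from host A: True if host B answers ICMP Echo Requests."""
    return subprocess.run(["ping", "-c", "3", "-W", "2", HOST_B_IP],
                          capture_output=True).returncode == 0

# Steps 1-2: traffic passes before the rule is installed
assert host_b_reachable()

# Step 3: install an OVS-based drop rule for host A's source IP
rule = {"action": "drop", "src_ip": HOST_A_IP}
requests.post(FW_RULE_RECEIVER, json=rule).raise_for_status()

# Steps 5-6: traffic from host A is now dropped
assert not host_b_reachable()
print("firewall VNF checks passed")
```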

The following figure illustrates the HTML output produced by the execution of the Robot Framework test designed to verify the firewall VNF functionality.

Figure 77: Robot framework result HTML output – Firewall VNF functionality test

4.3.2.3. Cache Controller

The following tables describe the tests designed for the vCC functionality in CHARISMA. Their purpose is to verify that the implemented vCC system is operating correctly, with the tests focusing on the functional and operational domain.

Test Description
Identifier: vCC_connection
Test Purpose: Network connectivity: this test verifies IP-level connectivity between the vCC and the vCaches.
Configuration: This test verifies that the connection and communication between the vCC and the CMO have been set up correctly, that the vCaches have been successfully added to the vCC database, and that communication between the vCC and the vCaches is working.
References: -
Applicability: This test does not focus on configuration-level communication with the vCache and CMO, but rather on network connectivity.
Pre-test conditions:
- One network slice has been set up.
- vCC and vCache nodes have been instantiated.

Test Sequence
1. The vCC verifies that it has been successfully informed of the IP addresses of the vCaches.
2. Check the vCC database entries for the vCaches. Result: PASS
3. The vCC sends a series of ICMP Echo Request packets to a vCache.
4. The vCC receives one ICMP Echo Reply packet for each ICMP Echo Request. Result: PASS
Test Verdict: PASS
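The ICMP check of steps 3 and 4 can be automated with a few lines of Python; the sketch below simply pings every vCache address known to the vCC and requires all Echo Requests to be answered. The address list is a placeholder for the entries read from the vCC database.

```python
# Sketch of the vCC_connection reachability check (also reusable for the
# vCache peering connectivity tests). Addresses are hypothetical.
import subprocess

VCACHE_ADDRESSES = ["10.0.0.11", "10.0.0.12"]  # placeholder vCache IPs

def ping(host, count=5):
    """Return True if all ICMP Echo Requests are answered."""
    result = subprocess.run(["ping", "-c", str(count), "-W", "2", host],
                            capture_output=True, text=True)
    return result.returncode == 0

for addr in VCACHE_ADDRESSES:
    assert ping(addr), f"vCache {addr} unreachable from the vCC"
print("vCC_connection ICMP check passed")
```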

Test Description
Identifier: vCC_functional
Test Purpose: This test verifies that the vCC can successfully configure the vCaches for the squid and prefetch functionalities.
Configuration: The vCache information has been added to the vCC database, and the connection between the vCC and the vCaches has been successfully established.
References: -
Applicability: This test focuses on the configuration-level communication between the vCC and a vCache, including the configuration of squid and prefetch, the retrieval of the user request list and the prefetch command.
Pre-test conditions:
- One network slice has been set up.
- vCC and vCache nodes have been instantiated and are properly connected.

Test Sequence
1. The vCC sends a message to get the squid/prefetch configuration of a vCache.
2. Check that the result has been successfully received by the vCC. Result: PASS
3. The vCC sends a message to edit the squid/prefetch configuration of a vCache.
4. Check that the edit operation has been successfully applied on the vCache. Result: PASS
5. The vCC sends a message to retrieve the user request information from a vCache.
6. Check that the result has been successfully received by the vCC. Result: PASS
7. The vCC sends a prefetch command to a vCache for a specific content item.
8. Check that the content has been successfully prefetched onto the vCache. Result: PASS
Test Verdict: PASS
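A sketch of the configuration exchanges in this test is given below, assuming for illustration that each vCache exposes an HTTP management interface for its squid/prefetch settings; the endpoint names and payloads are placeholders rather than the actual Cache Controller Daemon API.

```python
# Sketch of the vCC-to-vCache configuration exchange (vCC_functional).
# VCACHE_API, paths, and JSON fields are illustrative assumptions.
import requests

VCACHE_API = "http://10.0.0.11:8080"  # hypothetical vCache management address

# Steps 1-2: read back the current squid/prefetch configuration
config = requests.get(f"{VCACHE_API}/config").json()
print("current prefetch window:", config.get("prefetch_window"))

# Steps 3-4: edit the configuration and verify that the change took effect
requests.put(f"{VCACHE_API}/config", json={"prefetch_window": 10}).raise_for_status()
assert requests.get(f"{VCACHE_API}/config").json().get("prefetch_window") == 10

# Steps 5-6: retrieve the user request list collected by the vCache
user_requests = requests.get(f"{VCACHE_API}/requests").json()
print("collected user requests:", len(user_requests))

# Steps 7-8: ask the vCache to prefetch a specific content item
requests.post(f"{VCACHE_API}/prefetch",
              json={"url": "http://contentserver.example/item_A"}).raise_for_status()
```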


4.3.2.4. Cache

The following tables describe the tests designed for the vCache peering functionality in CHARISMA. Their purpose is to verify that the implemented vCache peering system is operating correctly, with the tests focusing on the networking operational domain, with particular regard to connectivity and traffic isolation.

Test Description
Identifier: vCache_peering_1
Test Purpose: Network connectivity: this test verifies IP-level connectivity between clients/end-users and the deployed vCaches.
Configuration: This test takes place in a setup of two VNOs, i.e. VNO A and VNO B, operating one vCache each, i.e. vCache A and vCache B, respectively. For each VNO a single client/end-user instance is further instantiated, i.e. client A and client B, respectively. The test is symmetric for the two VNOs and assumes that each VNO VLAN-based network slice has been created. For all vCaches and clients the eth0 network interface is used to communicate over the created network slice.
References: -
Applicability: This test does not focus on the application-level configuration of caches, but rather on host-level network connectivity.
Pre-test conditions:
- One network slice has been set up per examined VNO.
- Clients and vCache nodes have been instantiated.

Test Sequence
1. Client A sends a series of ICMP Echo Request packets to vCache A over eth0.
2. Client A receives one ICMP Echo Reply packet for each ICMP Echo Request sent to vCache A over eth0. Result: PASS
3. Client B sends a series of ICMP Echo Request packets to vCache B over eth0.
4. Client B receives one ICMP Echo Reply packet for each ICMP Echo Request sent to vCache B over eth0. Result: PASS
Test Verdict: PASS

Test Description
Identifier: vCache_peering_2
Test Purpose: Network connectivity: this test verifies IP-level connectivity between the deployed vCaches and the Content Server.
Configuration: This test takes place in a setup of two VNOs, i.e. VNO A and VNO B, operating one vCache each, i.e. vCache A and vCache B, respectively. A Content Server (web server) is further deployed for the provisioning of Web content to end users/clients. The Content Server does not belong to any of the two VNO network slices. The test is symmetric for the two VNOs and assumes that each VNO VLAN-based network slice has been created. For all vCaches and clients the eth0 network interface is used to communicate with the Content Server.
References: -
Applicability: This test does not focus on the application-level configuration of caches, but rather on host-level network connectivity.
Pre-test conditions:
- One network slice has been set up per examined VNO.
- vCache nodes have been instantiated.
- A Content Server has been instantiated, outside the VNO network slices.

Test Sequence
1. vCache A sends a series of ICMP Echo Request packets to the Content Server (eth0).
2. vCache A receives one ICMP Echo Reply packet for each ICMP Echo Request sent to the Content Server. Result: PASS
3. vCache B sends a series of ICMP Echo Request packets to the Content Server (eth0).
4. vCache B receives one ICMP Echo Reply packet for each ICMP Echo Request sent to the Content Server (eth0). Result: PASS
Test Verdict: PASS

Test Description
Identifier: vCache_peering_3
Test Purpose: Network connectivity: this test verifies IP-level connectivity between the deployed vCaches over the shared (peering) network.
Configuration: This test takes place in a setup of two VNOs, i.e. VNO A and VNO B, operating one vCache each, i.e. vCache A and vCache B, respectively. A Content Server (web server) is further deployed for the provisioning of Web content to end users/clients. The Content Server does not belong to any of the two VNO network slices. The test is symmetric for the two VNOs and assumes that each VNO VLAN-based network slice has been created. For all vCaches and clients the eth0 network interface is used to communicate with the Content Server; the eth1 network interface is used by each vCache to communicate over the shared network.
References: -
Applicability: This test does not focus on the application-level configuration of caches, but rather on host-level network connectivity.
Pre-test conditions:
- One network slice has been set up per examined VNO.
- vCache nodes have been instantiated.
- A Content Server has been instantiated, outside the VNO network slices.

Test Sequence
1. vCache A sends a series of ICMP Echo Request packets to vCache B over eth1.
2. vCache A receives one ICMP Echo Reply packet for each ICMP Echo Request sent to vCache B over eth1. Result: PASS
3. vCache B sends a series of ICMP Echo Request packets to vCache A over eth1.
4. vCache B receives one ICMP Echo Reply packet for each ICMP Echo Request sent to vCache A over eth1. Result: PASS
Test Verdict: PASS

Test Description
Identifier: vCache_peering_4
Test Purpose: Traffic isolation: this test verifies traffic isolation between slices. In particular, it verifies that a client in one VNO (network slice) cannot communicate with the vCache of the other VNO (network slice) over any network interface.
Configuration: This test takes place in a setup of two VNOs, i.e. VNO A and VNO B, operating one vCache each, i.e. vCache A and vCache B, respectively. For each VNO a single client/end-user instance is further instantiated, i.e. client A and client B, respectively. The test is symmetric for the two VNOs and assumes that each VNO VLAN-based network slice has been created. For all vCaches and clients the eth0 network interface is used to communicate over the created network slice. Network interface eth1 is used by each vCache to communicate over the shared network.
References: -
Applicability: This test does not focus on the application-level configuration of caches, but rather on host-level network connectivity.
Pre-test conditions:
- One network slice has been set up per examined VNO.
- vCache nodes have been instantiated.
- A Content Server has been instantiated, outside the VNO network slices.

Test Sequence
1. Client A sends a series of ICMP Echo Request packets to vCache B eth0.
2. Client A receives no ICMP Echo Reply packet for any ICMP Echo Request sent to vCache B eth0. Result: PASS
3. Client B sends a series of ICMP Echo Request packets to vCache A eth0.
4. Client B receives no ICMP Echo Reply packet for any ICMP Echo Request sent to vCache A eth0. Result: PASS
Test Verdict: PASS

Test Description
Identifier: vCache_peering_5
Test Purpose: Traffic isolation: this test verifies traffic isolation between slices. In particular, it verifies that the vCaches cannot communicate with each other over any network interface other than eth1.
Configuration: This test takes place in a setup of two VNOs, i.e. VNO A and VNO B, operating one vCache each, i.e. vCache A and vCache B, respectively. A shared network has been created for the two VNOs. Network interface eth1 is used by each vCache to communicate over this network. For all vCaches and clients the eth0 network interface is used to communicate over the created network slice. The test is symmetric for the two VNOs and assumes that each VNO VLAN-based network slice has been created.
References: -
Applicability: This test does not focus on the application-level configuration of caches, but rather on host-level network connectivity.
Pre-test conditions:
- One network slice has been set up per examined VNO.
- vCache nodes have been instantiated.
- A shared network has been created.

Test Sequence
1. vCache A sends over eth0 a series of ICMP Echo Request packets to vCache B eth1.
2. vCache A receives no ICMP Echo Reply packet for any ICMP Echo Request sent to vCache B. Result: PASS
3. vCache A sends over eth1 a series of ICMP Echo Request packets to vCache B eth0.
4. vCache A receives no ICMP Echo Reply packet for any ICMP Echo Request sent to vCache B. Result: PASS
Test Verdict: PASS

Test Description
Identifier: vCache_peering_6
Test Purpose: Caching service validation: this test verifies the correct operation of the caching service. In particular, it verifies that clients can retrieve non-cached content through their VNO's vCache, i.e. non-cached content can be fetched by the vCaches from the Content Server.
Configuration: This test takes place in a setup of two VNOs, i.e. VNO A and VNO B, operating one vCache each, i.e. vCache A and vCache B, respectively. A shared network has been created for the two VNOs. Network interface eth1 is used by each vCache to communicate over this network. A Content Server (web server) is further deployed for the provisioning of Web content to end users/clients. The Content Server does not belong to any of the two VNO network slices. For all vCaches and clients the eth0 network interface is used to communicate over the created network slice. The test is symmetric for the two VNOs and assumes that each VNO VLAN-based network slice has been created.
References: -
Applicability: This test focuses on the application-level configuration of caches.
Pre-test conditions:
- One network slice has been set up per examined VNO.
- vCache nodes have been instantiated.
- Caches are empty when the test commences (all previously cached items purged).
- The Content Server has been instantiated.

Test Sequence
1. Client in VNO A sends an HTTP query for content item A.
2. Client in VNO A receives the requested content item. Result: PASS
3. vCache A has cached content item A. Result: PASS
4. Client in VNO B sends an HTTP query for content item B.
5. Client in VNO B receives the requested content item. Result: PASS
6. vCache B has cached content item B. Result: PASS
Test Verdict: PASS
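From the client side, the behaviour verified in this test (and the repeated-fetch case covered in the next test) can be observed by fetching the same item twice through the VNO's vCache acting as an HTTP proxy. The sketch below assumes a Squid-based vCache that adds an X-Cache response header, and uses placeholder addresses; the actual validation relied on the vCache access logs.

```python
# Sketch of a client-side cache check: the first fetch of an item should be
# a cache MISS (served from the Content Server), the second a HIT served
# locally by the vCache. Proxy address and content URL are assumptions.
import requests

VCACHE_A_PROXY = {"http": "http://10.1.0.10:3128"}      # hypothetical vCache A proxy
CONTENT_ITEM_A = "http://contentserver.example/item_A"  # hypothetical content URL

first = requests.get(CONTENT_ITEM_A, proxies=VCACHE_A_PROXY)
second = requests.get(CONTENT_ITEM_A, proxies=VCACHE_A_PROXY)

assert first.status_code == 200 and second.status_code == 200
print("first fetch :", first.headers.get("X-Cache", "n/a"))   # expected: MISS
print("second fetch:", second.headers.get("X-Cache", "n/a"))  # expected: HIT
```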

Test Description
Identifier: vCache_peering_7
Test Purpose: Caching service validation: this test verifies the correct operation of the caching service. In particular, it verifies that clients can retrieve cached content through their VNO's vCache.
Configuration: This test takes place in a setup of two VNOs, i.e. VNO A and VNO B, operating one vCache each, i.e. vCache A and vCache B, respectively. A shared network has been created for the two VNOs. Network interface eth1 is used by each vCache to communicate over this network. A Content Server (web server) is further deployed for the provisioning of Web content to end users/clients. The Content Server does not belong to any of the two VNO network slices. For all vCaches and clients the eth0 network interface is used to communicate over the created network slice. The test is symmetric for the two VNOs and assumes that each VNO VLAN-based network slice has been created.
References: -
Applicability: This test focuses on the application-level configuration of caches.
Pre-test conditions:
- One network slice has been set up per examined VNO.
- vCache nodes have been instantiated.
- Content items A and B have been cached locally by vCache A and vCache B, respectively.
- The Content Server has been instantiated.

Test Sequence
1. Client in VNO A sends an HTTP query for content item A.
2. Client in VNO A receives the requested content item. Result: PASS
3. The vCache A access log shows a cache hit for content item A. Result: PASS
4. Client in VNO B sends an HTTP query for content item B.
5. Client in VNO B receives the requested content item. Result: PASS
6. The vCache B access log shows a cache hit for content item B. Result: PASS
Test Verdict: PASS

Test Description
Identifier: vCache_peering_8
Test Purpose: Caching service validation: this test verifies the correct operation of the caching service. In particular, it verifies that the vCaches can exchange cached content over the established peering link.
Configuration: This test takes place in a setup of two VNOs, i.e. VNO A and VNO B, operating one vCache each, i.e. vCache A and vCache B, respectively. A shared network has been created for the two VNOs. Network interface eth1 is used by each vCache to communicate over this network. A Content Server (web server) is further deployed for the provisioning of Web content to end users/clients. The Content Server does not belong to any of the two VNO network slices. For all vCaches and clients the eth0 network interface is used to communicate over the created network slice. The test is symmetric for the two VNOs and assumes that each VNO VLAN-based network slice has been created.
References: -
Applicability: This test focuses on the application-level configuration of caches.
Pre-test conditions:
- One network slice has been set up per examined VNO.
- vCache nodes have been instantiated.
- Content items A and B have been cached locally by vCache A and vCache B, respectively.
- The Content Server has been instantiated.

Test Sequence
1. Client in VNO A sends an HTTP query for content item B.
2. Client in VNO A receives the requested content item. Result: PASS
3. The vCache A access log shows a local cache miss and a sibling cache hit for content item B. Result: PASS
4. The vCache B access log shows a cache hit for content item B. Result: PASS
5. Client in VNO B sends an HTTP query for content item A and receives the requested content item.
6. The vCache B access log shows a local cache miss and a sibling cache hit for content item A. Result: PASS
7. The vCache A access log shows a cache hit for content item A. Result: PASS
Test Verdict: PASS
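The access-log checks of steps 3-4 and 6-7 can be automated by parsing the vCache logs. The sketch below assumes a Squid-based vCache with the default access.log format, in which a local miss served by a peer appears with a TCP_MISS result code and a SIBLING_HIT hierarchy code; the log path is an assumption.

```python
# Sketch of the sibling-hit log check used in vCache_peering_8.
ACCESS_LOG = "/var/log/squid/access.log"  # assumed log location on vCache A

def sibling_hit_logged(content_item, log_path=ACCESS_LOG):
    """True if the item was served via a sibling cache hit (local miss)."""
    with open(log_path) as log:
        for line in log:
            if content_item in line and "TCP_MISS" in line and "SIBLING_HIT" in line:
                return True
    return False

assert sibling_hit_logged("item_B"), "expected a sibling cache hit for content item B"
print("sibling cache hit confirmed in the access log")
```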

5. Conclusions

The three demonstrators located at NCSRD (Athens), Telekom Slovenije (Ljubljana), and APFutura (Centelles) have each deployed different 5G infrastructures to demonstrate the three key features that the CHARISMA project is especially designed to exhibit: security in the NCSRD and Telekom Slovenije demonstrators; low latency in the APFutura field trial and the NCSRD demonstrator; and open access in the APFutura and Telekom Slovenije field trials. At the hardware level, two devices specifically designed to reduce latency, the TrustNode router and the SmartNIC acceleration card, have been installed across the three demonstrator locations: the TrustNode at Telekom Slovenije and APFutura, and the SmartNIC at APFutura and NCSRD. From the software point of view, a single software package comprising the Control, Management and Orchestration (CMO) system is deployed at all three field-trial locations. The deployment of further software and of the VNFs at each location depends on what each demonstrator intends to demonstrate and on its 5G use case scenario.

This deliverable D4.2 has presented the evolution and developments from the earlier deliverable D4.1 “Demonstrators design and prototyping”. Each demonstrator has been assigned its own physical and logical infrastructure to run its testbed, all of which has been reported here. The deliverable also explains how to install the various items of software required, and how to configure the hardware equipment used to validate the CHARISMA 5G concept. In providing a comprehensive overview of the installation and testing of the hardware and software, this document can therefore also be used as a guide by third parties interested in deploying the CHARISMA 5G technical solutions. It reflects the experimental and development work that every partner in the project has undertaken over the past 24 months.

The next deliverable in WP4 is D4.3 “Validation field, test results and analysis evaluation”, which will report on the final test results arising from the full validation field trials undertaken at the three locations, in order to verify the 5G performance capabilities of the various use case scenarios. D4.3 will also report on any modifications or improvements applied to the hardware and software technology solutions in the final six months of the CHARISMA project, as well as on any improved 5G architecture solutions proposed by CHARISMA, so as to make the architecture more useful and more robust for any operator interested in applying the CHARISMA technical solution to its own 5G network infrastructure.


Acronyms

ACD Analogue-to-Digital Converter
ACL Access Control List
API Application Programming Interface
APN Access Point Name
ATTD Acceptance-Test-Driven Development
AWG Arbitrary Waveform Generator
BBU BaseBand Unit
BH Backhaul
C-VLAN Customer VLAN
CAL Converged Aggregation Level
CBS Committed Burst Size
CC Cache Controller
CCD Cache Controller Daemon
CE Carrier Ethernet
CDN Content Distribution Network
CFO Carrier Frequency Offset
childMB Child MoBcache
CIDR Classless Inter-Domain Routing
CIR Committed Information Rate
CLI Command Line Interface
CMO Control Management and Orchestration
CN Cache Node
CORS Cross-Origin Resource Sharing
CP Cache Proxy
CPE Customer Premises Equipment
C-RAN Cloud-RAN
CRC Cyclic Redundancy Check
CSS Cascading Style Sheets
CPU Central Processing Unit
DA Destination Address
DAC Digital-to-Analogue Converter
DB Database
DDoS Distributed Denial of Service
DHCP Dynamic Host Configuration Protocol
DNS Domain Name System
DoS Denial of Service
DPI Deep Packet Inspection
DPID Datapath Identifier

DSP Digital Signal Processing
DSO Digital Storage Oscilloscope
DU Digital Unit
DUT Device Under Test
EBS Excess Burst Size
EIR Excess Information Rate
EMC Electromagnetic Compatibility
eNB eNodeB (evolved Node B)
ENET Trademark for Ethernity Network technology
EPC Evolved Packet Core
EPS Evolved Packet System
EVC Ethernet Virtual Connection
EVM Error Vector Magnitude
EVPL Ethernet Virtual Private Line
FCS Frame CheckSum
FFT Fast Fourier Transform
FIFO First In First Out
FOV Field of View
FPGA Field Programmable Gate Array
FTP File Transfer Protocol
FUT Function Under Test
FW Firewall
FWHM Full-Width Half-Maximum
GbE Gigabit Ethernet
GGSN Gateway GPRS Support Node
GPON Gigabit PON
GPRS General Packet Radio Service
GTP GPRS Tunnelling Protocol
GUI Graphical User Interface
GW Gateway
HLS HTTP Live Streaming
HSS Home Subscriber Service
HTTP Hypertext Transfer Protocol
HW Hardware
ICMP Internet Control Message Protocol
ICP Internet Cache Protocol
ID Identification
IDS Intrusion Detection System
IEEE Institute of Electrical and Electronics Engineers
IETF Internet Engineering Task Force
IF Intermediate Frequency
IFFT Inverse Fast Fourier Transform
IMU Intelligent Management Unit
InfP Infrastructure Provider
IO In/Out

IP Internet Protocol
I/Q In-phase/Quadrature
ITS Intelligent Transport System
JSON JavaScript Object Notation
LED Light Emitting Diode
LLDP Link Layer Discovery Protocol
LO Local Oscillator
LTE Long-Term Evolution
M&A Monitoring and Analytics
MAC Media Access Control
MANO Management and Orchestration
MB MoBcache
µDC Micro Data Centre
MIB Management Information Base
MME Mobility Management Entity
MPPS MegaPackets Per Second
MTU Maximum Transfer Unit
NAT Network Address Translation
NFV Network Function Virtualisation
NFVI NFV Infrastructure
NIC Network Interface Card
NS Network Service
NSD Network Service Descriptor
OAM Open Access Manager
ODL OpenDaylight
OF OpenFlow
OFDM Orthogonal Frequency Division Multiplexing
OLP Over Load Protection
OLT Optical Line Termination
OpenWRT Open Wireless Receiver / Transmitter
OS Operating System
OSCP OpenStack VLAN CounterPart
OTT Over-the-top content
OVS Open Virtual Switch (Open vSwitch)
OW Optical Wireless
OWC Optical Wireless Communications
PBS Peak Burst Size
PC Personal Computer
PCIe Peripheral Component Interconnect Express
PD Photodiode
PDN Packet Data Network
PDP Packet Data Protocol
PGW PDN Gateway
PHP Hypertext Preprocessor
PIR Peak Information Rate

PLL Phase Locked Loop
PNF Physical Network Function
PON Passive Optical Network
PoP Point of Presence
PtP Point-to-Point
QAM Quadrature Amplitude Modulation
QoS Quality of Service
QPSK Quadrature Phase Shift Keying
RAN Radio Access Network
RBAC Role-Based Access Control
RESTful Representational State Transfer
RF Radio Frequency
RMON IETF standard for Remote Network Monitoring
rootMB Root MoBcache
RPM RPM Package Manager
RRU Remote Radio Unit
RU Radio Unit
Rx Receiver
S-VLAN Service VLAN
SA Source Address
SDN Software Defined Networks
SFO Signal Frequency Offset
SGW Serving Gateway
SLA Service Level Agreement
SNMP Simple Network Management Protocol
SNR Signal-to-Noise Ratio
SPM Service Policy Manager
SSH Secure Shell
SSID Service Set Identifier
SW Software
TCP Transmission Control Protocol
TeNOR The NFV Orchestrator
Tx Transmitter
UE User Equipment
UI User Interface
UNI User Network Interface
ONT Optical Network Termination
ONU Optical Network Unit
USB Universal Serial Bus
vCache Virtual Cache
vCC Virtual Cache Controller
VCO Voltage Controlled Oscillator
vDNS Virtual Domain Name System
vFW Virtual Firewall
VHDL VHSIC Hardware Description Language

VHSIC Very High Speed Integrated Circuit
vIDS Virtual Intrusion Detection System
VI Virtualized Infrastructure
VIM Virtualized Infrastructure Manager
VL Virtual Link
VLAN Virtual Local Area Network
VM Virtual Machine
VNF Virtual Network Function
VNFC VNF Component
VNFD VNF Descriptor
VNFM VNF Manager
VNO Virtual Network Operator
VSF Virtual Security Function
WiFi Wireless Fidelity
