Faculty of Electrical and Computer Engineering
Communications Laboratory
Deutsche Telekom Chair of Communication Networks

Bachelor Thesis

Improving Simultaneous Migration Time of Virtual Machines using SDN

Anna Triginer Perera
Born on: 28.03.1996 in Barcelona
Matriculation number: 4812628
Matriculation year: 2018

Referee: Prof. Dr.-Ing. Dr. h.c. Frank Fitzek
Supervisor: Dipl.-Ing. Robert-Steve Schmoll

Submitted on: 08.03.2019

Statement of authorship

I hereby certify that I have authored this Bachelor Thesis entitled Improving Simultaneous Migration Time of Virtual Machines using SDN independently and without undue assistance from third parties. No resources or references other than those indicated in this thesis have been used. I have marked both literal and accordingly adopted quotations as such. There were no additional persons involved in the intellectual preparation of the present thesis. I am aware that violations of this declaration may lead to subsequent withdrawal of the degree.

Dresden, 08.03.2019

Anna Triginer Perera

Abstract

Cloud computing is a key technology for providing services in networks and will play an even bigger role in the restructuring required to enable the 5G mobile communication network. For providers, economy of scale is important, as is consolidation of resources. In order to always make optimal use of the available servers, live migration is used intensively for load balancing, fault tolerance, power management, etc. This also applies to the cross-data-center case, where network resources are limited compared to a local migration between hosts, which often support multiple dedicated networks for different tasks. The goal of this thesis is to use a programmable network mechanism in order to decrease the time needed for migrating several Virtual Machines concurrently. For this purpose, a measurement environment has been created with Kernel-based Virtual Machine (KVM) as virtualization technology, and a Ryu Software-defined Network (SDN) controller has been set up, communicating through OpenFlow, to optimize the migration performance in terms of bandwidth allocation.

Contents

List of Figures 4

List of Tables 5

1 Introduction 6

2 Theoretical Background 8
2.1 Virtual Machines ...... 8
2.1.1 Live Migration ...... 9
2.1.1.1 Memory Migration ...... 10
2.2 Software Defined Networking ...... 14
2.2.1 OpenFlow ...... 15
2.3 Related work ...... 17

3 Environment 20
3.1 Hardware Specifications ...... 20
3.2 Network Infrastructure ...... 21
3.3 Software Setup ...... 23
3.3.1 Virtualization Setup ...... 23
3.3.2 Open vSwitch and SDN controller ...... 24
3.3.2.1 Ryu Rest API ...... 26
3.4 Migration Procedure and Measurement Tools ...... 30

4 Experiments and Results 33

5 Conclusions and Future Work 41

References 43

List of Figures

2.1 Hypervisor and Virtual Machines ...... 9
2.2 Pre-copy Flowchart. Image source: [1] ...... 11
2.3 SDN Architecture ...... 15
2.4 Open vSwitch Components. Image source: [2] ...... 16

3.1 Current Network Topology ...... 21
3.2 Distributed Virtual Switch ...... 22

4.1 Single VM Live Migration Throughput ...... 34
4.2 Live Migration Comparison: 2 VMs ...... 35
4.3 Live Migration Comparison: 8 VMs ...... 35
4.4 Single VM Live Migration 50% stressed ...... 36
4.5 Relation between Bandwidth Allocation and Time ...... 37
4.6 Live Migration Comparison: 3 VMs ...... 38
4.7 Burstable Bandwidth Migration of 3 VMs ...... 39
4.8 Comparison between four Live Migration Techniques ...... 40

List of Tables

2.1 Pre-copy Algorithm Migration. VM size = 16GB ...... 13

3.1 Ubuntu0-2 Specifications ...... 20
3.2 Servers ...... 21
3.3 Virtual Machine Characteristics ...... 23
3.4 Queue Setting Example ...... 27
3.5 QoS Rules Example ...... 28

4.1 Live Migration Metrics: 8 VMs ...... 35
4.2 Bandwidth Allocation Queue Settings ...... 37
4.3 Bandwidth Allocation QoS Rules ...... 38
4.4 Burstable Bandwidth Allocation Queue Settings ...... 39
4.5 Live Migration Metrics Comparison ...... 40

1 Introduction

Nowadays, data centers are indispensable to manage the amount of data we generate continuously. Data centers are centralized locations where computing and networking equipment is concentrated for the purpose of collecting, storing, processing, distributing or allowing access to large amounts of data [3]. Due to the need to access this information without a physical interface, these data centers have become Cloud Data Centers. The cloud is a virtual infrastructure accessible from a remote location via a local network or the Internet. Currently, economy of scale and energy consumption in cloud infrastructures are important for cloud providers. The 5G mobile communication network is going to be closely related to cloud computing [4]. This combination will provide substantial richness in capacity, flexibility and functionality to mobile network operators.

Virtualization technology has been used by these data centers to become more efficient and reduce the costs associated with purchase, setup, cooling and maintenance. Many operating system instances can now run on a single physical machine, providing better use of physical resources [5, 6]. The migration of virtual machines (VMs) located in data centers provides more flexibility in managing their infrastructure. The migration procedure is also necessary for load balancing the traffic over the network and preventing congestion and bottlenecks in the infrastructure. Furthermore, migration is used for maintenance purposes, considering that equipment and software need upgrades and replacements regularly.

This thesis proposes to improve live migration performance, in terms of time, when moving VMs from one physical host to another. The focus of this improvement is placed on the network connection. To achieve this, the VMs are migrated simultaneously through the same link under a programmable network. With a Software-defined Network (SDN) controller, bandwidth allocation rules are created to manage the migration process.

The topic of parallel live migration has not yet been deeply investigated, but theoretical studies have demonstrated improvements in the migration with this method. Therefore, this thesis focuses on this research gap while using SDN technology, which is the way networks will be designed, built and operated in the near future.

The thesis content has been separated into two parts: one regarding the theoretical background necessary to carry out simultaneous migrations, and the other regarding the environment developed to test parallel live migration. In the theoretical part, virtualization mechanisms are explained with a focus on live migration of virtualized instances, and Software Defined Networking (SDN) technology is presented with emphasis on the SDN controller used for the experimental development. As for the experimental part, the system design and the hardware and software used are described. Finally, experiments and testing results illustrate the performance of parallel live migration under a programmable network.

2 Theoretical Background

2.1 Virtual Machines

Virtualization technology allows creating multiple simulated environments or dedicated resources from a single physical hardware system [6]. Virtualization provides virtual environments with easier backup and recovery and better scalability, and it decreases energy consumption because fewer physical machines are needed. The hypervisor is the software that connects directly to the hardware and allows splitting one system into separate, distinct and secure environments known as virtual machines (VMs). A virtual machine (VM) is therefore an emulation of an operating system (OS) created and executed by a hypervisor. The hypervisor can emulate multiple virtual hardware platforms isolated from each other (except the KVM hypervisor, which cannot do so by itself). Thus, on the same physical host it is possible to run virtual machines with different operating systems, as shown in Figure 2.1.

Kernel-based Virtual Machine (KVM) is a virtualization module in the Linux kernel that allows the kernel to function as a hypervisor. The KVM kernel module cannot, by itself, create a VM. To do so, it must use QEMU, a user-space process. QEMU is inherently a hardware emulator, provided as open source software (OSS) [7]. The QEMU emulator interacts with the KVM kernel module to execute guest-system processing, while the KVM kernel module handles the VM exits from guest systems and executes VM entry instructions. The tool used for managing the virtualization platform is libvirt [8], an open-source API, daemon and management tool. The libvirt project has developed a virtualization abstraction layer which is able to manage a set of virtual machines across different hypervisors. The goal of libvirt is to provide a library that offers all necessary operations for hypervisor management without implementing hypervisor-specific functionalities [9]. In addition, it is possible to manage the VMs through a graphical user interface (GUI), Virt-manager [10], which calls libvirt functions.

Figure 2.1: Hypervisor and Virtual Machines

2.1.1 Live Migration

The virtual machine live migration process consists of moving a virtual machine from one physical host to another without perceivable interruption of its services. The machine remains on, the network connections remain active and applications continue to run while the VM is relocated. Live migration imposes certain restrictions on the source and destination physical hosts: they require compatible virtualization software, comparable CPU types and membership of the same subnetwork.

During a migration process there is a data transfer taking place between the source and the destination. The data transfer involves three aspects [11]:

• CPU state. These states can be: idle (nothing running), running a user-space program, or running kernel code (servicing interrupts or managing resources).

• Memory content. Includes the memory state of the VM operating system and all the processes running on it. It is the most significant part of a migration and is data-intensive.

• Storage content. Consists of migrating the virtual machine's disk image1. It is the most data-intensive transfer. For this reason, it is generally an optional phase of the migration, because the disk image is often accessible from both the source and destination hosts. If this is not the case, a new disk image needs to be created at the destination and synchronized with the source storage.

2.1.1.1 Memory Migration

As mentioned, memory migration is an important aspect of virtual machine migration. Two main solution techniques have been developed, pre-copy and post-copy, which are explained below. The greater focus is placed on the pre-copy method because it is the most popular and widely used in virtualization platforms [12], and it is the technique used for the migration experiments of this thesis.

Pre-copy. The main idea of pre-copy migration is to send the VM memory using an iterative process [5]. The algorithm steps are summarized in Figure 2.2.

Phase 0: Pre-migration. An active virtual machine on the source host and a pre-selected target host with the guaranteed resources to receive the migration are needed.

Phase 1: Resource reservation. The source announces the migration of the VM to the destination. The destination reserves the necessary resources to receive the new OS accordingly.

Phase 2: Iterative Copy. This is the core step of the pre-copy migration algorithm. In the first iteration, all the VM's memory pages2 are transferred to the target. Then, in subsequent iterations, only those pages dirtied3 during the previous transfer are copied.

Phase 3: Stop-and-Copy. The virtual machine is suspended. The CPU state and the remaining dirty pages are transferred to the destination host. In this phase there is a consistent suspended copy of the VM on both hosts. The copy at the source enables resuming the machine in case of cancellation or failure.

1 disk image: a file with the contents and structure of a disk volume or of an entire data storage device. QEMU uses the qcow2 disk image file format.
2 memory page: a fixed-length contiguous block of virtual memory; the smallest unit of data for memory management in a virtual memory operating system.
3 dirty page: a memory page that has been modified.

Phase 4: Commitment and Resume. The destination host gives notice of the successfully received OS image copy. The source receives this report as commitment of the migration transaction and discards the original VM. Finally, the migrated machine becomes active again on the new host.
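The iterative behavior of phases 2 and 3 can be sketched with a toy model. This is an illustrative sketch, not the thesis's measurement code; page counts, rates and the stop threshold are assumptions chosen for the example:

```python
def precopy_migration(mem_pages, dirty_rate, link_rate, max_rounds=30, stop_threshold=50):
    """Toy pre-copy model: returns (total pages sent, pages sent while suspended).

    dirty_rate and link_rate are in pages per second. The loop models the
    iterative-copy phase until few pages remain or no progress is possible."""
    sent = 0
    to_send = mem_pages                    # round 1 transfers all memory pages
    for _ in range(max_rounds):
        if to_send <= stop_threshold or dirty_rate >= link_rate:
            break                          # stop-and-copy condition reached
        sent += to_send
        duration = to_send / link_rate     # seconds spent on this round
        to_send = min(mem_pages, int(dirty_rate * duration))  # pages dirtied meanwhile
    return sent + to_send, to_send         # remainder is sent during downtime
```

With a dirty rate at or above the link rate the model suspends immediately and the whole memory contributes to downtime, which matches the qualitative discussion of the stop conditions in this chapter.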

Figure 2.2: Pre-copy Flowchart. Image source: [1]

Post-copy. In post-copy, live migration begins with the suspension of the VM at the source host. Once the machine is suspended, the CPU state is transferred to the destination while the memory state remains at the source. Then, the VM starts running at the destination, where it tries to access memory pages that are unavailable because they are still at the source. Each access to a missing page generates a page fault redirected to the source, which responds with the faulted page. Therefore, the memory

pages in the post-copy algorithm are transferred on demand, and each VM memory page is transferred only once. On the other hand, if the destination fails during the migration process, pre-copy can recover the VM, whereas post-copy cannot, because the VM state is distributed between source and destination [13].
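The demand-paging behavior described above can be illustrated with a minimal sketch (an illustrative model, not KVM's implementation):

```python
def postcopy_pages_fetched(access_sequence):
    """Count remote page faults in post-copy: the first access to each page
    still residing at the source faults and transfers that page exactly once."""
    fetched = set()
    faults = 0
    for page in access_sequence:
        if page not in fetched:
            fetched.add(page)   # page fault -> fetch the page from the source
            faults += 1
    return faults
```

However many times a page is touched afterwards, it is transferred only once; this single-transfer property is what distinguishes post-copy from the iterative retransmissions of pre-copy.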

Metrics and Influential Parameters in Migration. In order to track the performance of a live migration process, it is necessary to define metrics and parameters that play a role in the migration. The most important metrics to take into account are total migration time and downtime. Total migration time measures the duration from the moment the migration begins at the origin host until the machine reaches the destination and gets to a state coherent with the original one [11]. Another major performance metric is downtime, which is the time span during which there is a user-perceptible service unavailability at the machine [14]. This means that the services provided by the VM are entirely unavailable. It is currently not possible to migrate a VM without stopping it, even for a short time; the effect of downtime is application degradation. In addition, we can consider as supplementary parameters the total network traffic, the amount of data that needs to be transferred for the migration process, and the energy consumption, which covers the extra energy cost caused by the live migration.

Furthermore, there are several factors that influence the value of the described metrics. Hereafter, the impact of these factors on total migration time and downtime, and how the pre-copy and post-copy algorithms are affected by them, is explained. Migration link bandwidth is one of the most important parameters for migration performance. The link capacity is inversely proportional to the total migration time: a fast link carries more information per second, so less time is needed to send all the migration data. Table 2.1 shows the total migration time at different link bandwidths. The VM has Ubuntu Server 18.04 (LTS) as OS and a 16 GB disk size. The live migration of the same virtual machine at 1 Gbps is 93.29% faster than at 100 Mbps.
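The 93.29% figure can be reproduced directly from the Table 2.1 values:

```python
# Relative reduction of total migration time between the 100 Mbps and 1 Gbps links.
t_100mbps = 160.14   # seconds (Table 2.1)
t_1gbps = 10.74      # seconds (Table 2.1)
reduction = (t_100mbps - t_1gbps) / t_100mbps * 100
print(f"{reduction:.2f}%")   # 93.29%
```

Note that the measured times do not scale exactly with the inverse of the bandwidth; the deviation comes from migration overheads that do not shrink with link speed.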

Link Speed   Total migration time
100 Mbps     160.14 s
450 Mbps     35.55 s
1 Gbps       10.74 s

Table 2.1: Pre-copy Algorithm Migration. VM size = 16 GB

Page dirty rate is another important parameter. It is the rate at which memory pages in the VM are modified. It directly affects the pre-copy algorithm, because a higher page dirty rate results in more data being sent per iteration, which leads to a longer total migration time. In addition, a higher page dirty rate results in a longer VM downtime, as more pages need to be sent in the final transfer round, in which the VM is suspended. The relation between the page dirty rate and the migration performance is not linear because of the stop conditions defined in the algorithm [15]. When the dirty page rate is lower than the link capacity, the migration algorithm is able to transfer all modified pages in every iteration; accordingly, the total migration time and downtime are low. In the case in which the dirty page rate is higher than the link speed, the link does not have enough capacity to send all modified pages and the algorithm is forced to start the stop-and-copy stage. The migration thus reaches the final stage while a large quantity of pages remains to be sent. This results in a longer migration time, because more modified pages have to be sent in each round, and in a higher downtime, due to the number of remaining pages that need to be sent in the stop-and-copy stage.

During the migration process there are additional operations and data that need to be transferred which do not belong to the VM memory content: the pre- and post-migration overheads. These overheads can be treated as static and represent a considerable percentage of the migration data. According to Sherif Akoush's publication [15], pre-migration overheads constitute around 77% of the total migration time on a 10 Gbps link for a 512 MB idle VM. Therefore, over high-speed links the overhead data becomes significant compared to the iterative pre-copy and stop-and-copy stages.

2.2 Software Defined Networking

Today's networks are static, slow to change and dedicated to specific services. Software Defined Networking (SDN) presents an architecture to make networks agile and flexible. With SDN, it is possible to consolidate multiple services in one common infrastructure. Therefore, Software Defined Networks are considered a very promising solution for data centers and cloud environments, and they are in the scope of interest of many research groups and service providers all over the world [16].

The SDN architecture separates the network's control logic from the forwarding plane (hardware), introducing the ability to program the network. This separation brings agility and flexibility to the network by simplifying network management and allowing more control over network traffic flows than before, hence facilitating network evolution.

Figure 2.3 represents the three planes and the interfaces of which the SDN architecture is comprised [17]:

Data Plane. Network infrastructure held by the interconnected forwarding devices.

Control Plane. Represents the centralized SDN controller software. This controller programs the forwarding devices through the Southbound Interface.

Application Plane. Layer formed by cloud, management or business applications, which can place their network demands to the SDN controller through the Northbound Interface.

Southbound Interface. Allows the communication between the forwarding devices and the control plane elements by an API using a communication protocol.

Northbound Interface. Offers an API to the application plane, allowing the communication between the applications and the controller.

Notice that in this architecture the forwarding elements are hardware- or software-based network elements whose exclusive functionality is to forward packets. Any control functionality is removed from them; the control logic is now handled by the SDN controller or the applications above it. The network overview that the controller has at its disposal, together with the centralized control, simplifies the development of new networking functions, services and applications.

Figure 2.3: SDN Architecture

2.2.1 OpenFlow

OpenFlow is an SDN technology proposed to standardize the way a controller communicates with network devices in an SDN architecture [18]: the southbound API. OpenFlow is currently the most commonly deployed Software Defined Networking (SDN) technology [19]. With the OpenFlow protocol, the flows of all the devices of the network can be controlled using flow tables.

An OpenFlow switch has one or more flow tables. Each flow entry matches a subset of the traffic and performs an action on it. The OpenFlow switch, controlled using the OpenFlow protocol, can behave like a switch, router or firewall depending on the flow entries installed on it and on the dropping, forwarding or modifying actions performed by these entries. In particular, Open vSwitch (OVS) is an open-source OpenFlow switch that works as a virtual switch in virtualized environments such as KVM. Figure 2.4 depicts

the eight core OVS components. Ovs-vswitchd is the daemon that implements the switch, and ovsdb-server is a lightweight database server. The remaining components are configuration tools: ovs-dpctl is used to configure the kernel module; ovs-appctl is a utility for controlling the running OVS daemons; ovs-vsctl is used to access and update the ovs-vswitchd configuration; ovs-ofctl shows the cached flows.
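The flow-table lookup described above can be sketched as follows. This is a simplified model with dictionary match fields, not the OpenFlow wire protocol or OVS code:

```python
def lookup(flow_table, packet):
    """Return the action of the highest-priority entry whose match fields all
    equal the packet's fields; on a table miss, punt to the controller."""
    for entry in sorted(flow_table, key=lambda e: -e["priority"]):
        if all(packet.get(f) == v for f, v in entry["match"].items()):
            return entry["action"]
    return "CONTROLLER"    # table-miss behavior

flow_table = [
    {"priority": 10, "match": {"ip_dst": "10.0.1.2"}, "action": "output:1"},
    {"priority": 1,  "match": {},                     "action": "drop"},  # catch-all
]
```

Depending on the installed entries, the same lookup makes the switch forward, filter or rewrite traffic, which is why one OpenFlow switch can act as a switch, a router or a firewall.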

Figure 2.4: Open vSwitch Components. Image source: [2]

2.3 Related work

Over the past few years, live migration performance has been a widely studied topic addressed from different points of view. Nevertheless, there is a gap in this research field that has not been extensively studied in the literature: simultaneous live migration of multiple virtual machines.

Previous studies which have served as precedent and background for this thesis are presented in this section, separated into two categories: single and multiple VM migration studies.

Single machine Live migration.

• S. Akoush [15] studies live migration behavior in pre-copy architectures. He describes the stages involved in pre-copy migration and represents them with equations taking into account the most influential parameters of the migration process.

• U. Deshpande [20] uses a hybrid migration technique which implements a traffic-sensitive migration mechanism: it estimates the network contention between the migration and the application traffic in order to use a combination of pre-copy and post-copy migration techniques that reduces contention.

• The FBA algorithm [21] proposes two optimization strategies based on the pre-copy mechanism: Bandwidth Adaptive Allocation, which avoids the monotonous bandwidth change of plain pre-copy; and Page Judgment, which avoids the transmission of redundant data compared to the pre-copy algorithm.

• The publication Guaranteeing Delay of Live Virtual Machine Migration by Determining and Provisioning Appropriate Bandwidth [22] theoretically analyzes how much bandwidth is required to guarantee the total migration time and downtime of a live VM migration.

Multiple machines Live migration.

• A. Stage and T. Setzer [23] propose a theoretical scheme for classifying VMs based on their workload characteristics, and they also propose adequate resource and

migration scheduling models for each class, taking network bandwidth requirements and network topologies into account.

• G. Sun proposes a new technique for efficient live migration of multiple virtual machines [24]. Queuing models (i.e., M/M/C/C and M/M/C) have been developed to quantify performance metrics and have been evaluated through mathematical analysis.

• Live Gang Migration [25] presents the design and implementation of a de-duplication procedure that reduces the amount of transferred data by exploiting the significant amount of identical memory between co-located VMs. The paper thus focuses on tracking identical memory pages between the VMs and transferring only one copy of these pages.

• VMPatrol [26] is a QoS framework for VM migration. It uses a model that sets a minimal bandwidth for a migration flow such that the network performance of other flows is not degraded, while the migration still completes within a specific time. The framework uses a migration cost model that predicts migration times, assuming the memory size of the VM, the page dirty rate and the link bandwidth as constant parameters. The model provides the minimum bandwidth required to finish the migration within the deadline.

• The VMbuddies publication [12] aims to solve the problem of correlated VM migrations arising in multi-tier applications by coordinating the migrations and letting the inter-cloud network bandwidth be used exclusively by the migration traffic.

• T. K. Sarker [27] demonstrates how different migration sequences and different bandwidth allocations result in different total migration times and total downtimes. The paper concentrates on developing a multiple-VM migration scheduling algorithm such that migration performance is maximized.
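Several of the cited models reduce to the same back-of-the-envelope calculation: given a memory size, a dirty rate and a deadline, find the smallest bandwidth for which the pre-copy series converges in time. The sketch below is illustrative only (its threshold and units are assumptions, not the formulas of [22] or [26]):

```python
def migration_time(mem_gb, dirty_gbps, link_gbps, threshold_gb=0.05, max_rounds=30):
    """Estimated pre-copy time: each round resends the data dirtied during the
    previous round, a geometric series with ratio dirty_gbps / link_gbps."""
    t, to_send = 0.0, mem_gb
    for _ in range(max_rounds):
        duration = to_send / link_gbps
        t += duration
        to_send = dirty_gbps * duration        # data dirtied during this round
        if to_send <= threshold_gb:
            break
    return t + to_send / link_gbps             # final stop-and-copy transfer

def min_bandwidth(mem_gb, dirty_gbps, deadline_s, step=0.01):
    """Smallest link rate (GB/s) meeting the deadline, found by linear search."""
    b = dirty_gbps + step                      # below the dirty rate it diverges
    while migration_time(mem_gb, dirty_gbps, b) > deadline_s:
        b += step
    return b
```

With a zero dirty rate the result degenerates to memory size divided by deadline; as the dirty rate approaches the link rate, the required bandwidth grows quickly, which is the effect the scheduling papers above exploit.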

This selected literature on single live migration provides an overview of its problems and influential parameters. Knowing the metrics that influence migration performance, as well as the network contention between migration data and other service traffic, shows that control over the network and over the influential parameters can help improve live migration. On the other hand, the parallel migration literature provides reasons to perform simultaneous migrations, because co-located VMs (virtual machines located on the same host) may need to cooperate with each other. It can also be

seen that several publications regarding parallel migration are theoretical and have not been evaluated on live systems, but the simulations carried out nevertheless demonstrate an improvement.

3 Environment

In the following chapter, the system implementation is described. Firstly, the hardware devices and the network infrastructure are characterized. Then, the virtualization environment and the software installation needed to profile the live migration procedure are explained.

3.1 Hardware Specifications

In this section, the specification tables of the significant devices used in the setup are presented. The system essentially comprises two servers on which the virtualization environment runs, one more machine used as the SDN controller, and a laptop used to manage the other machines; see Figure 3.1. Table 3.1 specifies the hardware of the three main machines, Ubuntu0 to Ubuntu2, and Table 3.2 describes the operating systems installed on them.

Model    FUJITSU Desktop ESPRIMO Q956
CPU      Intel(R) Core(TM) i7-6700T @ 2.80 GHz
Memory   16 GB

Table 3.1: Ubuntu0-2 Specifications

Server    OS
ubuntu0   Ubuntu Server 18.04
ubuntu1   Ubuntu Server 18.04
ubuntu2   Ubuntu Desktop 16.04

Table 3.2: Servers Operating System

3.2 Network Infrastructure

The environment comprises two networks, as drawn in Figure 3.1. On the one hand, the 10.0.1.0/24 network is where the live migration takes place. On the other hand, the 192.168.0.0/24 network is used for management and configuration purposes and also to provide the Ubuntu0 and Ubuntu1 servers with Internet access. Ubuntu0 and Ubuntu1 are connected to the 10.0.1.0/24 network through a TP-Link 8-Port Gigabit Switch (TL-SG108) via the default Ethernet interface; therefore, the 10.0.1.0 network speed is 1 Gbps. Furthermore, using a USB to Gigabit Ethernet adapter, the servers are connected to the 192.168.0.0 network through the router.

Figure 3.1: Current Network Topology

The current environment is the evolution of a previous setup which did not meet the requirements. Recall that a scenario is needed in which live migrations take place under QoS policies. The first Software Defined Network implemented was built around a Zodiac FX switch. The Zodiac FX is a 100 Mbps SDN switch that currently does not provide QoS functions and does not run Open vSwitch. Since the aim is to replicate a real scenario and to implement QoS SDN rules, the Zodiac FX was not the proper device. Consequently, the present scenario and setup was the one proposed to carry out the experiments.

A distributed virtual switch is the solution proposed to set up an environment that satisfies the requirements. It is inspired by an IBM document [28], which states: "A virtual switch within one server can transparently join with a virtual switch in another server, making migration of VMs between servers (and their virtual interfaces) much simpler, because they can attach to the distributed virtual switch in another server and transparently join its virtual switched network." (M. Tim Jones 2010, p. 3). Figure 3.2 corresponds to the distributed virtual switch implementation.

Figure 3.2: Distributed Virtual Switch

3.3 Software Setup

In the following sections, the software used to execute and profile a parallel live migration process is specified. The first section describes the virtualization software. The second one explains the software and configuration used to implement the SDN network.

3.3.1 Virtualization Setup

In order to create the virtualization environment, the following packages have been installed:

$ sudo apt-get install qemu-kvm libvirt-clients libvirt-daemon-system \
    virt-manager

The hypervisor used to create and manage virtual machines is KVM along with QEMU; for further information see Section 2.1. Additionally, the libvirt library has been installed to manage the hypervisor. Finally, virt-manager [10] has also been installed. It is a desktop user interface for managing virtual machines, which is used to graphically install the operating system of the VMs.

The virtual machines created to perform the migration are initialized with the specifications shown in Table 3.3. The command used to create the VMs is the following:

$ virt-install -n <vm-name> --os-type=Linux --ram=1024 --vcpus=1 \
    --disk size=6 --cdrom <path-to-install-iso>

OS             Ubuntu Server 18.04
RAM            1 GB
vCPUs number   1
Disk size      6 GB

Table 3.3: Virtual Machine Characteristics

3.3.2 Open vSwitch and SDN controller

Open vSwitch is installed in both servers, Ubuntu0 and Ubuntu1, as a distributed virtual switch solution. The following commands are needed to create an OVS (Open vSwitch) bridge and add a port to it. This bridge connects the Virtual Network Interface (vNIC) of a virtual machine to the Network Interface Card (NIC) of each server.

$ sudo ovs-vsctl add-br ovs-br               # create a bridge named "ovs-br"
$ sudo ovs-vsctl add-port ovs-br enp0s31f6   # add the physical interface to the bridge
$ sudo ovs-vsctl show                        # show a summary of the current configuration

Additionally, the configuration of the virtual machines running on Ubuntu0 and Ubuntu1 has to be changed: the virtual interface of each VM has to be attached to the ovs-br bridge created previously. To change the Domain XML file of a VM, the following command has to be used:

$ virsh edit <domain>

The Domain XML file of a virtual machine should be modified as follows:

...
<interface type='bridge'>
  <source bridge='ovs-br'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
...

Once the Open vSwitch bridges are created, both Open vSwitches need to be connected to the standard port of the SDN controller, Ubuntu2. It is also necessary to start OVSDB (the Open vSwitch Database Management Protocol), a configuration protocol designed to manage Open vSwitch. The purpose of the database is to save and manage the configurations.

$ sudo ovs-vsctl set-controller ovs-br tcp:192.168.0.2:6653
$ sudo ovs-vsctl set-manager ptcp:6632

The chosen SDN controller is Ryu [29] because it is open source, based on Python, and supports all OpenFlow versions. Ryu is a component-based software defined networking framework able to manage the Open vSwitches. The Ryu installation comes with Python scripts that automatically connect the Open vSwitch to the Ryu controller. To implement the Quality of Service feature, the configuration steps provided by the Raspberry-Pi based Software Defined Network article [2] have been followed. To run the controller, the following commands need to be executed on Ubuntu2:

$ cd ~/ryu/ryu/app
$ sed '/OFPFlowMod(/,/)/s/)/, table_id=1)/' simple_switch_13.py > \
      qos_simple_switch_13.py
$ ryu-manager rest_qos qos_simple_switch_13 rest_conf_switch

When the QoS controller is executed with the previous commands, the configuration is applied on the local machine and not on the remote Open vSwitches. To solve this problem, the solution proposed in the article mentioned above has been followed [2]: Ryu is installed on the migration servers together with the rpyc Python package, which provides a Remote Procedure Call (RPC) system that allows calling a function available on a remote server.

$ sudo pip3 install ryu   # install Ryu
$ pip3 install rpyc       # install the rpyc package

Finally, the rpyc classic Python script has to be run on both servers where Open vSwitch is running. The command that enables the SDN controller to connect to the remote Open vSwitches is:

$ rpyc_classic.py --host=<server-IP>

At this point the Ryu SDN controller is running and the Open vSwitches can be configured using the Ryu REST API [30]. To check that the previous configuration is working, this command can be run on the Ubuntu0 and Ubuntu1 servers:

$ sudo ovs-vsctl show
c370696a-2a71-49c4-86f7-c8c404df0e10
    Manager "ptcp:6632"
    Bridge ovs-br
        Controller "tcp:192.168.0.102:6653"
            is_connected: true
        Port "enp0s31f6"
            Interface "enp0s31f6"
        Port ovs-br
            Interface ovs-br
                type: internal
    ovs_version: "2.9.2"

It should be mentioned that on Ubuntu2, the machine running the Ryu framework, the configuration did not work when the installed OS was Ubuntu Server 18.04, as on the other Ubuntu servers. When the Ryu application was running, it was unable to join the Ubuntu0 and Ubuntu1 switches even though the joining message was printed at the controller. The configuration started working after changing the Ubuntu2 operating system to an older version, Ubuntu Desktop 16.04.

3.3.2.1 Ryu REST API

This section describes how to set QoS functions in the network using REST. Quality of Service refers to any technology that manages data traffic. In the proposed environment, QoS is used to allocate network resources to the different migration procedures.

Once the controller is running, the first step is to register the OVSDB address of both Open vSwitches:

$ curl -X PUT -d '"tcp:127.0.0.1:6632"' \
      http://localhost:8080/v1.0/conf/switches/<datapath-id>/ovsdb_addr

The ovs-br interfaces of Ubuntu0 and Ubuntu1 are assigned simpler datapath-ids so that they can be configured effortlessly. By default, the datapath-id is based on the MAC address of the OVS bridge.

ubuntu@ubuntu0:~$ ovs-vsctl set bridge ovs-br \
      other-config:datapath-id=0000000000000001
ubuntu@ubuntu1:~$ ovs-vsctl set bridge ovs-br \
      other-config:datapath-id=0000000000000002

The following command can be run to check that both OVS instances are properly connected to the controller. The output should list the switches' datapath-ids.

$ curl -X GET http://localhost:8080/v1.0/conf/switches
["0000000000000001", "0000000000000002"]

The following step is creating and setting multiple queues. For example, two queues are created with the command below: one queue is set to a minimum rate of 700 Mbps and the other one to 300 Mbps. The queues are created on every switch connected to the controller. Table 3.4 shows the queue setting resulting from this command:

$ curl -X POST -d '{"port_name": "enp0s31f6", "type": "linux-htb", \
      "max_rate": "1000000000", "queues": [{"min_rate": "700000000"}, \
      {"min_rate": "300000000"}]}' http://localhost:8080/qos/queue/all

Queue ID   Max rate   Min rate
0          (1 Gbps)   700 Mbps
1          (1 Gbps)   300 Mbps

Table 3.4: Queue Setting Example
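The JSON body of the queue request above can also be assembled programmatically. The following Python helper is only a sketch (the function name and structure are not part of the environment scripts) that builds the same payload posted to the /qos/queue endpoint:

```python
import json

def build_queue_config(port_name, max_rate_bps, min_rates_bps):
    """Build the JSON body for Ryu's REST /qos/queue endpoint.

    One queue entry is created per value in min_rates_bps; queue IDs
    are assigned by Ryu in list order (0, 1, ...).
    """
    return {
        "port_name": port_name,
        "type": "linux-htb",            # hierarchical token bucket queuing
        "max_rate": str(max_rate_bps),  # Ryu expects rates as strings
        "queues": [{"min_rate": str(r)} for r in min_rates_bps],
    }

# Same setting as Table 3.4: 1 Gbps link, queues of 700 and 300 Mbps.
config = build_queue_config("enp0s31f6", 1_000_000_000,
                            [700_000_000, 300_000_000])
print(json.dumps(config))
```

The resulting JSON string is what the curl command above sends with -d.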

Once the queues are created, QoS rules have to be added for these queues. All rules are applied to both switches to ensure the same behaviour in both directions. Following the previous example, migration traffic through the default port 49152 is assigned to queue 0 and migration traffic through port 49153 to queue 1. See Table 3.5.

$ curl -X POST -d '{"match": {"nw_dst": "10.0.1.100", "nw_proto": "TCP", \
      "tp_dst": "49152"}, "actions": {"queue": "0"}}' \
      http://localhost:8080/qos/rules/0000000000000002
$ curl -X POST -d '{"match": {"nw_dst": "10.0.1.101", "nw_proto": "TCP", \
      "tp_dst": "49152"}, "actions": {"queue": "0"}}' \
      http://localhost:8080/qos/rules/0000000000000001
$ curl -X POST -d '{"match": {"nw_dst": "10.0.1.100", "nw_proto": "TCP", \
      "tp_dst": "49153"}, "actions": {"queue": "1"}}' \
      http://localhost:8080/qos/rules/0000000000000002
$ curl -X POST -d '{"match": {"nw_dst": "10.0.1.101", "nw_proto": "TCP", \
      "tp_dst": "49153"}, "actions": {"queue": "1"}}' \
      http://localhost:8080/qos/rules/0000000000000001

Destination address    Protocol   Destination port   Queue ID
10.0.1.100 (Ubuntu0)   TCP        49152              0
10.0.1.101 (Ubuntu1)   TCP        49152              0
10.0.1.100 (Ubuntu0)   TCP        49153              1
10.0.1.101 (Ubuntu1)   TCP        49153              1

Table 3.5: QoS Rules Example
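Because every rule is mirrored on both switches, the rule bodies can be generated instead of typed by hand. A hypothetical Python helper (not part of the environment scripts) could produce the four requests above from two small mappings:

```python
def build_qos_rules(dst_by_switch, port_to_queue):
    """Generate one Ryu REST QoS rule per (switch, migration port).

    dst_by_switch maps a switch datapath-id to the destination IP its
    rules match on; port_to_queue maps a TCP migration port to the
    queue ID it should use. Returns (datapath_id, rule_body) pairs to
    POST to /qos/rules/<datapath_id>.
    """
    rules = []
    for dpid, dst_ip in dst_by_switch.items():
        for tcp_port, queue_id in port_to_queue.items():
            rules.append((dpid, {
                "match": {"nw_dst": dst_ip, "nw_proto": "TCP",
                          "tp_dst": str(tcp_port)},
                "actions": {"queue": str(queue_id)},
            }))
    return rules

# The mappings reproduce Table 3.5.
rules = build_qos_rules(
    {"0000000000000002": "10.0.1.100", "0000000000000001": "10.0.1.101"},
    {49152: 0, 49153: 1},
)
```

Each returned pair corresponds to one of the curl commands shown above.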

With this configuration, the expected behavior of a parallel migration of two virtual machines is the following. Iperf client at ubuntu1:

$ iperf -c ovs0 -i 1 -p 49152

[  3] local 10.0.1.101 port 56636 connected with 10.0.1.100 port 49152
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  77.5 MBytes  650 Mbits/sec
[  3]  1.0- 2.0 sec  77.8 MBytes  652 Mbits/sec
[  3]  2.0- 3.0 sec  77.9 MBytes  653 Mbits/sec
[  3]  3.0- 4.0 sec  75.5 MBytes  633 Mbits/sec
[  3]  4.0- 5.0 sec  77.8 MBytes  652 Mbits/sec

Iperf client at ubuntu1:

$ iperf -c ovs0 -i 1 -p 49153

[  3] local 10.0.1.101 port 38632 connected with 10.0.1.100 port 49153
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  33.1 MBytes  278 Mbits/sec
[  3]  1.0- 2.0 sec  35.8 MBytes  300 Mbits/sec
[  3]  2.0- 3.0 sec  33.1 MBytes  278 Mbits/sec
[  3]  3.0- 4.0 sec  35.0 MBytes  294 Mbits/sec
[  3]  4.0- 5.0 sec  33.1 MBytes  278 Mbits/sec
[  3]  5.0- 6.0 sec  112 MBytes   935 Mbits/sec
[  3]  6.0- 7.0 sec  111 MBytes   934 Mbits/sec

With the iperf tool, TCP packets are sent from Ubuntu1 to Ubuntu0 through ports 49152 and 49153. When both connections take place at the same time, as in a real parallel migration, the tool output shows that the bandwidth given to port 49152 is almost 700 Mbps, as set in queue 0, and the bandwidth of port 49153 is 300 Mbps. When the iperf client at port 49152 finishes, the bandwidth given to port 49153 rises to the maximum. This way, the bandwidth used for the migration is always as high as possible.

With the shown rest_qos request methods it is possible to set as many QoS queues as desired, with the corresponding rules, for any migration case. In this section, only the environment commands used to implement bandwidth allocation are presented; for further information see [30].

3.4 Migration Procedure and Measurement Tools

The live migration procedure takes place in the environment presented in the previous sections. A complete system has been created to perform any live migration technique and to monitor the network throughput. This section describes the migration procedure used to live migrate virtual machines and also the way in which the migration parameters are measured.

Step 1. The disk image of the VMs to be migrated is copied from the source to the destination server using SFTP (Secure File Transfer Protocol). Since the simulated environment is assumed to provide shared storage between servers, the disk image transfer is not counted as part of the migration time or network throughput.

Step 2 (optional). If bandwidth allocation is required for the present migration, the QoS queues have to be configured on the SDN controller (Ubuntu2) using the Ryu REST API explained in Section 3.3.2.1.

Step 3. The tcpdump tool is executed to profile the migration throughput. This tool is a packet analyzer that filters the packets matching a boolean expression. In the set environment, the tcpdump command differs between sequential migration (one or more machines migrated one by one) and parallel migration (multiple machines migrated at the same time).

- Sequential migration. The migration data is sent using the TCP protocol through a single port; the default migration port is 49152. Therefore, only one tcpdump command needs to be executed, filtering the packets by destination port, and the resulting migration packets are saved to a file. For example, the following command saves to the given file the packets generated by a sequential migration via the default port.

$ sudo tcpdump -i ovs-br -n tcp dst port 49152 >> capture-file

- Parallel migration. Virtual machines are migrated simultaneously through different ports. When parallel migration takes place through the default ports, the migrations use port 49152 and the immediately consecutive ones. Therefore, one tcpdump instance has to be started for every migration port. For example, if two virtual machines are to be migrated, the next two commands have to be executed:

$ sudo tcpdump -i ovs-br -n tcp dst port 49152 >> capture-file-49152
$ sudo tcpdump -i ovs-br -n tcp dst port 49153 >> capture-file-49153

Step 4. While the packet analyzer is running, the migration commands are executed in a sequential or parallel manner. The command used to migrate a given VM in this environment is the following:

$ virsh migrate <vm-name> qemu+ssh://<destination-host>/system \
      tcp://<destination-host>:<port>

There is no need to specify the port in the command: if no port parameter is given, the migration takes place through the default port 49152.

To perform a sequential migration, the previous command has to be run multiple times, one by one, with the names of the virtual machines selected for migration. To run a parallel migration, the same commands are used as in the sequential case, but in the background. Executing commands in the background with bash is possible by adding the ampersand symbol (&) at the end of each migration command. Thereby, the migration of all the virtual machines starts at the same time without waiting for the previous machines to finish.
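The bash background launch can also be scripted. The following Python sketch starts every command at once and then waits for all of them, which mirrors appending & to each virsh command followed by wait; the echo commands are stand-ins for the real virsh invocations:

```python
import subprocess

def migrate_parallel(commands):
    """Start all migration commands at once and wait for each to finish.

    Equivalent to launching every command with '&' in bash and then
    running 'wait': all processes start before any is waited on.
    """
    procs = [subprocess.Popen(cmd) for cmd in commands]  # start all at once
    return [p.wait() for p in procs]                     # block until all done

# Hypothetical stand-in commands; in the real environment each entry
# would be the virsh migrate invocation shown above.
cmds = [["echo", f"migrating vm{i}"] for i in range(3)]
exit_codes = migrate_parallel(cmds)
```

The sequential case is the degenerate variant: start one process, wait, then start the next.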

Step 5. Once the migration is finished, all the migration data packets have been filtered and saved by the tcpdump tool. In the case of a sequential migration, all the data is stored in one file because all the virtual machine migrations have taken place through one single port. In the parallel migration case, the number of files generated equals the number of virtual machines migrated, since every machine is migrated through a different port. At this step, a Python parser script reads and analyzes the tcpdump capture files

to get the migration throughput and the total migration time. This is what two lines of a capture file look like:

19:03:15.799734 IP 10.0.1.100.55728 > 10.0.1.101.49152: Flags [.], seq 5353256:5354704, ack 1, win 229, options [nop,nop,TS val 1005428512 ecr 1823258211], length 1448

19:03:15.800235 IP 10.0.1.100.55728 > 10.0.1.101.49152: Flags [.], seq 5354704:5356152, ack 1, win 229, options [nop,nop,TS val 1005428513 ecr 1823258212], length 1448

Using the timestamp of each line and the length in bytes of each packet, the migration data sent in approximately every millisecond interval is obtained. In this way, the migration throughput over time is calculated. In addition, the script draws the throughput shape of the migration.

To sum up, following the provided steps in the described environment, it is possible to migrate as many VMs as desired using sequential migration, parallel migration, or a combination of both, under a QoS-enabled network. Moreover, any migration carried out can be monitored and plotted.

4 Experiments and Results

In this chapter, the experiments carried out in the designed environment are presented. The environment scheme used is shown in Figure 3.1. Sequential and parallel live migration studies have been conducted and the results are presented in order of difficulty, from the simplest case to the most complicated. The following experiments serve to confirm or disprove the hypothesis: parallel live migration can be improved using Software Defined Networks.

In the first instance, Figure 4.1 shows the live migration throughput of a single virtual machine with the characteristics shown in Table 3.3. The single virtual machine migration is the elemental case. A constant throughput over time can be seen, reaching the maximum bandwidth available between both servers. The live migration takes 5.47 s. Note that the virtual machine is on standby: no application is running on it.

In order to use the maximum link capacity between Ubuntu0 and Ubuntu1 with the TCP protocol, it is necessary to disable segmentation offload on the interface used for the migration.

$ sudo ethtool -K enp0s31f6 tso off

Figure 4.1: Single VM Live Migration Throughput

The next step consists of migrating two virtual machines. This migration is carried out using both the sequential and the parallel migration technique. Figure 4.2 shows the comparison between the resulting throughputs of these two procedures. It can be seen that the parallel migration finishes 1.81 seconds earlier than the sequential one. Regarding the throughput shape, the parallel migration graph is worth emphasizing: the two virtual machines share the link bandwidth equally. Moreover, in the sequential migration each virtual machine migration can be identified because of the throughput degradation phenomenon; a "gap" can be seen between them. To lower this effect, the TCP window size on both servers has been set to the maximum.

Figure 4.3 shows a comparison between the sequential and parallel migration of eight VMs. This figure aims to demonstrate that the total live migration time of a larger number of virtual machines migrated in parallel is still shorter than with the traditional sequential technique. Once more, the machines migrated in parallel divide the link capacity equally, while the sequentially migrated machines use the maximum possible throughput. As can be seen in Table 4.1, the total migration time of the sequential migration is 54.02 seconds and of the parallel execution 39.51 seconds. Therefore, the parallel migration finishes 14.51 seconds earlier.

Figure 4.2: Live Migration Comparison: 2 VMs

Figure 4.3: Live Migration Comparison: 8 VMs

                       Sequential migration   Parallel migration
Total migration time   54.02 s                39.51 s
Average bandwidth      917 Mbps               117 Mbps x 8 VMs

Table 4.1: Live Migration Metrics: 8 VMs

The previous experiments do not simulate a real live migration between two data center servers because the machines used were in a standby state. For this reason, in the following experiment the Ubuntu stress tool is run inside the virtual machines to simulate application activity on them.

$ stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.5;}' \
      < /proc/meminfo)k --vm-keep -m 1 --vm-hang 1

With this command, 50% of the virtual machine's memory is used. The vm-keep parameter keeps the memory allocation stable and the vm-hang option changes the memory content every second. Only the virtual machine's memory is stressed because the impact of the VM's CPU state is almost negligible [12]. As shown in Figure 4.4, the stressed machine takes 2.5 seconds longer, corresponding to a 46% time increase compared with the same non-stressed machine.

Figure 4.4: Single VM Live Migration 50% stressed

The following step proposes using Software Defined Networking to apply bandwidth allocation in a parallel live migration. With the aim of improving the live migration time, the theorem presented by H. Liu and B. He [12] is adopted:

Theorem: ”When a set of VMs are migrated concurrently, if there exists an optimal band- width allocation to minimize the maximum migration completion time of the VMs, then the bandwidth resource should be fully used (...) and all VMs have equivalent migration completion time (...).”

Figure 4.5: Relation between Bandwidth Allocation and Time

For the purpose of making all machines finish their migration at the same time, it is necessary to find the correlation between the migration bandwidth and the total migration time. To find this relation, one virtual machine has been migrated at different bandwidth values, ten times at each value. Figure 4.5 shows the average total migration time versus bandwidth for a non-stressed virtual machine and a stressed one. As can be seen, the relation is not linear. This behavior is caused by the pre-copy migration algorithm.
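The non-linearity can be reproduced with a simple pre-copy model: each round retransmits the memory dirtied during the previous round, so the total time grows roughly like M/(B - D) (memory size over bandwidth minus dirty rate) rather than M/B. The following Python sketch uses assumed figures (1 GiB of RAM, a 100 Mbit/s page-dirty rate) that do not correspond to measured values from the environment:

```python
def precopy_migration_time(mem_bytes, dirty_rate_bps, bw_bps,
                           stop_threshold=1_000_000, max_rounds=30):
    """Estimate total pre-copy migration time in seconds.

    Round i transfers v bytes in v / bw seconds; during that time
    dirty_rate * (v / bw) bytes are dirtied and must be re-sent in the
    next round. Iteration stops once the remaining dirty set is small
    enough for the final stop-and-copy phase.
    """
    total, v = 0.0, float(mem_bytes)
    for _ in range(max_rounds):
        t = v / bw_bps
        total += t
        v = dirty_rate_bps * t          # memory dirtied during this round
        if v <= stop_threshold:
            break
    return total + v / bw_bps           # final stop-and-copy transfer

gib, dirty = 2**30, 100e6 / 8           # 1 GiB RAM, 100 Mbit/s dirty rate
for bw_mbps in (300, 500, 700, 900):
    t = precopy_migration_time(gib, dirty, bw_mbps * 1e6 / 8)
    print(f"{bw_mbps} Mbps -> {t:.1f} s")
```

In this model, doubling the bandwidth more than halves the migration time, because fewer pre-copy rounds are needed as the bandwidth moves away from the dirty rate; this is consistent with the curved shape of Figure 4.5.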

In the following experiment, the bandwidth allocation technique is implemented according to the theorem and the bandwidth/time relation. It consists of a three virtual machine migration: one virtual machine is stressed and the other two are not. Tables 4.2 and 4.3 show which QoS queues and rules have been used. In this case, more bandwidth has been provided to the stressed machine, which is migrated through port 49160 to distinguish it from the other two machines. The bandwidth values are chosen so that the three migrations finish at the same time.

Queue ID   Max rate   Min rate
0          -          400 Mbps
1          301 Mbps   300 Mbps
2          301 Mbps   300 Mbps

Table 4.2: Bandwidth Allocation Queue Settings

4 Experiments and Results 37 Destination address Protocol Destination port Queue ID 10.0.1.100 TCP 49152 1 10.0.1.101 TCP 49152 1 10.0.1.100 TCP 49153 2 10.0.1.101 TCP 49153 2 10.0.1.100 TCP 49160 0 10.0.1.101 TCP 49160 0

Table 4.3: Bandwidth Allocation QoS Rules

Figure 4.6 shows the comparison between the plain parallel migration and the new bandwidth allocation experiment. The chosen bandwidth values were supposed to make the three machines finish their migration at the same time. As can be seen, this goal has not been accomplished, and other bandwidth values have also been tested without success. In the figure it can even be seen that the two non-stressed machines do not finish their migration at the same time. Since two identical machines in the same state can differ in migration time, it is hard to apply the theorem in a real environment. Nevertheless, the bandwidth allocation migration finished 0.59 seconds earlier than the default parallel migration.

Figure 4.6: Live Migration Comparison: 3 VMs

Owing to the inaccuracy of the bandwidth allocation method, burstable bandwidth is tested. This technique consists of providing one machine with almost the whole link bandwidth for a period of time; every period, the machine given the high bandwidth changes. In this experiment the same virtual machines as before are used. Two QoS queues have been created as shown in Table 4.4, one set to 750 Mbps and the other one to 100 Mbps. The rules applied to these queues are dynamic: in every period one virtual machine is assigned to queue 0 to get the high bandwidth while the other two are assigned to the lower bandwidth queue. Figure 4.7 shows this burst technique, for which the period lengths have been chosen arbitrarily. In the first period the high level is given to the stressed machine, and in the two following periods the large bandwidth is given to the non-stressed machines. Note that the second and third periods are set one second shorter than the first one because, as demonstrated before, non-stressed machines need less time to migrate at the same bandwidth. Finally, when the stressed machine has finished, the QoS rules are deleted and the non-stressed machines finish their migration in parallel.

Queue ID   Max rate   Min rate
0          1 Gbps     750 Mbps
1          1 Gbps     100 Mbps

Table 4.4: Burstable Bandwidth Allocation Queue Settings
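The rotation of the high-bandwidth queue can be expressed as a schedule before being pushed as QoS rules. The following Python sketch is illustrative only: the port numbers match the experiment (49160 for the stressed machine), while the period lengths are assumed values in the spirit of Figure 4.7, not the ones actually used:

```python
def burst_schedule(ports, periods):
    """Build the burstable-bandwidth rotation plan.

    ports:   migration ports in the order they receive the high-rate
             queue (queue 0); all other ports sit in the low-rate queue 1.
    periods: length in seconds of each port's high-bandwidth window.
    Returns a list of (start_time, high_port, low_ports) tuples.
    """
    plan, t = [], 0.0
    for port, length in zip(ports, periods):
        low = [p for p in ports if p != port]
        plan.append((t, port, low))
        t += length
    return plan

# The stressed VM (port 49160) gets the first and longest window; the
# later windows are one second shorter, as described above.
plan = burst_schedule([49160, 49152, 49153], [4.0, 3.0, 3.0])
for start, high, low in plan:
    print(f"t={start:4.1f}s  queue 0 -> port {high}, queue 1 -> {low}")
```

At each start time, the controller would replace the QoS rules so that the high port maps to queue 0 and the remaining ports to queue 1, and delete all rules after the stressed machine finishes.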

Figure 4.7: Burstable Bandwidth Migration of 3VMs

In summary, three different simultaneous live migration techniques have been tested. The first one consists of migrating a set of virtual machines at the same time; this is the simplest technique and requires no change to the network. The second one consists of giving a specific bandwidth value to each machine according to its workload in order to make all migrations finish at the same time. The third one consists of dynamically giving a high bandwidth level to one machine during a period of time which is likewise adjusted to the machine's workload. Figure 4.8 shows the comparison between these three parallel techniques and the sequential migration. As can be seen in Table 4.5, all the parallel migration procedures improve on the sequential migration performance. The burstable migration turns out to be the best.

Figure 4.8: Comparison between four Live Migration Techniques

Migration technique    Total migration time (s)
Sequential             28.73
Parallel               27.04
Bandwidth allocation   26.45
Burstable bandwidth    24.09

Table 4.5: Live Migration Metrics Comparison

5 Conclusions and Future Work

In this thesis, the simultaneous live migration of virtual machines has been studied, tested, and compared with sequential migration. A complete system has been created to perform any migration and to profile its metrics. This environment uses SDN technology to apply Quality of Service rules to the network traffic.

Three parallel migration techniques have been proposed which improve live migration performance in terms of time. These techniques are applied in the network and do not involve any modification of the migration algorithm on the two migration nodes. As tested, the burstable allocation technique, which consists of dynamically allocating a high bandwidth level to one machine during a period of time, has the best time performance. In contrast, the bandwidth allocation technique attempts to set an optimal bandwidth allocation for the machines in order to minimize the maximum migration time. Bandwidth allocation was expected to perform best but has not shown results as good as the burstable technique. This is an example of how applying a mathematical theorem to a real scenario may not yield the expected results. It has been shown that two virtual machines with the same characteristics, migrated at the same bandwidth level, can have different migration times. Therefore, it can be concluded that the migration time of a virtual machine in the implemented scenario is not completely deterministic. This behaviour may be prompted by the pre-copy algorithm used to migrate the machines. Lastly, the simple parallel technique, which consists of migrating a set of virtual machines at the same time, has shown a substantial time improvement in comparison with sequential migration: in the migration of eight machines, it achieved a 27% improvement.

Additionally, the SDN design implemented in this thesis can be used in environments in which there is network contention between migration traffic and other services. As the migration procedure can be considered critical due to the performance degradation of the applications running on the virtual machines, the necessary bandwidth can be allocated to the migration procedure in order to complete it faster, while the other services temporarily consume a lower bandwidth level.

As future work, it is proposed to model the burstable technique by identifying the relation between the period length of the high-bandwidth machine and the total migration time. By finding this pattern, the technique can be exploited to the maximum.

Since the techniques proposed in this thesis consist of migrating a set of virtual machines at the same time, the combination of sequential and parallel migration can also be investigated.

Moreover, research into the relation between virtual machine characteristics and network features would allow controlling the migration time by choosing the appropriate bandwidth for the migration. This would enable agile migrations in any virtualized environment without studying it beforehand.

References

[1] A. Choudhary, M. Chandra Govil, G. Singh, L. Awasthi, E. Pilli, and D. Kapil, "A critical survey of live virtual machine migration techniques," Journal of Cloud Computing, vol. 6, p. 23, 11 2017.
[2] J. Rischke, "Raspberry-Pi based software defined network," Technische Universität Dresden, Apr 2017.
[3] D. Abts and B. Felderman, "A guided tour through data-center networking," ACM Queue, vol. 10, pp. 10–23, 05 2012.
[4] X. Wang, G. Han, X. Du, and J. J. P. C. Rodrigues, "Mobile cloud computing in 5G: Emerging trends, issues, and challenges [guest editorial]," IEEE Network, vol. 29, no. 2, pp. 4–5, March 2015.
[5] C. Clark, K. Fraser, S. Hand, J. G. Hansen, E. Jul, C. Limpach, I. Pratt, and A. Warfield, "Live migration of virtual machines," in Proceedings of the 2nd Conference on Symposium on Networked Systems Design & Implementation - Volume 2, ser. NSDI'05. Berkeley, CA, USA: USENIX Association, 2005, pp. 273–286. [Online]. Available: http://dl.acm.org/citation.cfm?id=1251203.1251223
[6] "Virtualization." [Online]. Available: https://www.redhat.com/en/topics/virtualization/what-is-virtualization
[7] Y. Goto, "Kernel-based virtual machine technology," vol. 47, 07 2011.
[8] "Libvirt: The virtualization API." [Online]. Available: https://libvirt.org/index.html
[9] M. T. Jones, "Virtual networking in Linux," 2009. [Online]. Available: https://www.ibm.com/developerworks/linux/library/l-hypervisor
[10] "." [Online]. Available: https://virt-manager.org/
[11] H. Ben Arab, "Virtual machines live migration," 03 2015.
[12] H. Liu and B. He, "VMbuddies: Coordinating live migration of multi-tier applications in cloud environments," IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 4, pp. 1192–1205, April 2015.
[13] M. Hines, U. Deshpande, and K. Gopalan, "Post-copy live migration of virtual machines," Operating Systems Review, vol. 43, pp. 14–26, 07 2009.

[14] F. Salfner, P. Tröger, and A. Polze, "Downtime analysis of virtual machine live migration," 01 2011, pp. 100–105.
[15] S. Akoush, R. Sohan, A. Rice, A. W. Moore, and A. Hopper, "Predicting the performance of virtual machine migration," in 2010 IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, Aug 2010, pp. 37–46.
[16] M. Markowski, P. Ryba, and K. Puchala, "Software defined networking research laboratory - experimental topologies and scenarios," in 2016 Third European Network Intelligence Conference (ENIC), Sep. 2016, pp. 252–256.
[17] D. Kreutz, F. M. V. Ramos, P. E. Veríssimo, C. E. Rothenberg, S. Azodolmolky, and S. Uhlig, "Software-defined networking: A comprehensive survey," Proceedings of the IEEE, vol. 103, no. 1, pp. 14–76, Jan 2015.
[18] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: Enabling innovation in campus networks," SIGCOMM Comput. Commun. Rev., vol. 38, no. 2, pp. 69–74, Mar. 2008. [Online]. Available: http://doi.acm.org/10.1145/1355734.1355746
[19] A. Lara, A. Kolasani, and B. Ramamurthy, "Network innovation using OpenFlow: A survey," IEEE Communications Surveys Tutorials, vol. 16, no. 1, pp. 493–512, First 2014.
[20] U. Deshpande and K. Keahey, "Traffic-sensitive live migration of virtual machines," in 2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, May 2015, pp. 51–60.
[21] S. Kai and C. Yan, "Flexible bandwidth allocation algorithm for virtual machine live migration," vol. 4, no. 2, 2016.
[22] J. Zhang, F. Ren, R. Shu, T. Huang, and Y. Liu, "Guaranteeing delay of live virtual machine migration by determining and provisioning appropriate bandwidth," IEEE Transactions on Computers, vol. 65, no. 9, pp. 2910–2917, Sep. 2016.
[23] A. Stage and T. Setzer, "Network-aware migration control and scheduling of differentiated virtual machine workloads," in 2009 ICSE Workshop on Software Engineering Challenges of Cloud Computing, May 2009, pp. 9–14.
[24] G. Sun, D. Liao, V. Anand, D. Zhao, and H. Yu, "A new technique for efficient live migration of multiple virtual machines," Future Generation Computer Systems, vol. 55, pp. 74–86, 2016.
[25] U. Deshpande, X. Wang, and K. Gopalan, "Live gang migration of virtual machines," ser. HPDC '11. New York, NY, USA: ACM, 2011, pp. 135–146.
[26] V. Mann, A. Vishnoi, A. Iyer, and P. Bhattacharya, "VMPatrol: Dynamic and automated QoS for virtual machine migrations," in 2012 8th International Conference on Network and Service Management (CNSM) and 2012 Workshop on Systems Virtualization Management (SVM), Oct 2012, pp. 174–178.
[27] T. K. Sarker and M. Tang, "Performance-driven live migration of multiple virtual machines in datacenters," in 2013 IEEE International Conference on Granular Computing (GrC), Dec 2013, pp. 253–258.

[28] M. T. Jones, "Virtual networking in Linux," IBM developerWorks, Oct 2010. [Online]. Available: https://www.ibm.com/developerworks/linux/library/l-virtual-networking/
[29] "Ryu SDN Framework community." [Online]. Available: https://osrg.github.io/ryu/
[30] "Ryu - QoS functions that can be set using REST." [Online]. Available: https://osrg.github.io/ryu-book/en/html/rest_qos.html
