Mälardalen University, School of Innovation, Design and Engineering, Västerås, Sweden

Thesis for the Degree of Master of Science with Specialization in Embedded Systems, 30.0 credits

Resource for Real-time Industrial Clouds

Sharmin Sultana Sheuly [email protected]

Examiner: Moris Behnam [email protected]

Supervisor: Mohammad Ashjaei [email protected]

June 10, 2016

Abstract

Cloud computing is emerging very fast, as it has the potential to transform the IT industry by replacing local systems as well as by reshaping the design of IT hardware. It helps companies share their infrastructure resources while ensuring better utilization of them. Nowadays developers do not need to deploy expensive hardware, or human resources to maintain it, because of cloud computing. Such elasticity of resources is new in the IT world. With the help of virtualization, clouds meet different types of customer demand as well as ensure better utilization of resources. There are two dominant types of virtualization technique: (i) hardware level or system level virtualization, and (ii) operating system (OS) level virtualization. In industry, system level virtualization is commonly used. However, it introduces some performance overhead because of its heavyweight nature. OS level virtualization is replacing system level virtualization as it is of a lightweight nature and has lower performance overhead. Nevertheless, more research is necessary to compare these two technologies in terms of performance overhead. In this thesis, a comparison is made between these two technologies to find the suitable one for a real-time industrial cloud. XEN is chosen to represent system level virtualization, and Docker and OpenVZ to represent OS level virtualization. The performance criteria considered in the comparison are: migration time, downtime, CPU consumption during migration, and execution time. The evaluation showed that the OS level virtualization technique OpenVZ is more suitable for an industrial real-time cloud, as it has a better migration utility, shorter downtime and lower CPU consumption during migration.

Contents

1 Introduction 5
  1.1 Motivation 6
  1.2 Thesis Contributions 7
  1.3 Thesis Outline 7

2 Background 8
  2.1 Cloud Computing 8
  2.2 System Level Virtualization 9
    2.2.1 XEN 10
    2.2.2 XEN Toolstack 11
    2.2.3 XEN Migration 11
  2.3 Operating System (OS) Level Virtualization 12
    2.3.1 Docker 13
    2.3.2 OpenVZ 14
    2.3.3 OpenVZ Migration 16

3 Problem Formulation 17

4 Solution Method 18

5 System set-up 22
  5.1 Local Area Network (LAN) Set up 22
  5.2 XEN Hypervisor 23
    5.2.1 XEN installation 24
    5.2.2 Setting up Bridged Network 24
    5.2.3 DomU Install with Virt-Manager 24
    5.2.4 Virtual machine migration 24
  5.3 Docker 28
    5.3.1 Docker Installation, Container Creation and Container Migration 28
  5.4 OpenVZ 29
    5.4.1 OpenVZ Installation 29
    5.4.2 Container Creation 29
    5.4.3 Container Migration 30
  5.5 Test Bed Specifications Summary 30

6 Evaluation 31
  6.1 Migration time 31
  6.2 Downtime 35
  6.3 CPU consumption during migration 36
  6.4 Execution Time 36
  6.5 Evaluation Result 37

7 Related work 37

8 Conclusion 43
  8.1 Summary 43
  8.2 Future Work 43

Appendix A 51
  A.1 XEN toolstack 51
    A.1.1 XEND Toolstack 51
    A.1.2 XL Toolstack 52
    A.1.3 Libvirt 52
    A.1.4 Virt-Manager 52

Appendix B 53
  B.1 Docker Installation, Container Creation 53
  B.2 Container Migration 53

List of Figures

1 Different cloud types ([1]) 8
2 XEN architecture (adapted from [2]) 10
3 Major Docker components (High level overview) [3] 14
4 OpenVZ architecture [4] 15
5 Solution method used in this thesis 19
6 Industrial Real time cloud 21
7 LAN set up 23
8 Start of migration of virtual machine vmx 26
9 vmx migrated and paused in the destination host 27
10 Migration of vmx completed 28
11 Algorithm for measuring migration time 33

List of Tables

1 Test bed specifications of XEN 31
2 Test bed specifications of OpenVZ 31
3 Migration time for different RAM sizes 32
4 Migration time changing VM/container number (RAM size 1027 MB) 34
5 Migration time changing host CPU consumption (RAM size 1027 MB) 34
6 Downtime changing RAM size 35
7 CPU consumption during migration (in percent) 36
8 Execution time 36
9 Summary of Evaluation 37

1 Introduction

Cloud computing has emerged as a very convenient term in the networking world. All over the world, users utilize IT services offered by cloud computing. It provides infrastructure, platform, and software as services, and these services are consumed in a pay-as-you-go model. In the industrial world, the cloud has become an alternative to local systems as it performs better [5], [6]. In an industrial real-time cloud, all the required computation is conducted in the cloud upon an external event and, depending on this computation, a response is provided within a predefined time. However, in 2008 a survey was conducted with six data centres, which showed that servers utilize only 10-30% of their total processing capacity. On the other hand, the CPU utilization rate of desktops is on average below 5% [7]. In addition to this, with the growth of VLSI technology, there is a rapid advancement in computing power. Unfortunately, this power is not exploited fully, as a single process running on a system does not exploit that much resource. The solution to the above problem lies in virtualization, which refers to the unification of multiple systems onto a single system without compromising performance and security [8], [9]. Virtualization technologies became dominant in the industrial world for this reason. Server virtualization increases the capability of data centers. The application of virtualization in different areas such as Cloud Environments, Internet of Things, and Network Functions is becoming more extensive [10]. In addition to these, High-Performance Computing (HPC) centres are using this technology to fulfil their clients' needs. For example, some clients may want their application to run in a specific operating system while others may require specific hardware. To address these issues, at first the users specify their needs. Depending on this specification, a virtual environment is set up for them, which is known as plug-and-play computing [11]. There exist three virtualization techniques: (i) hardware level or system level virtualization, (ii) operating system level virtualization and (iii) high-level language virtual machines [12], [8]. Among these techniques the first two are dominant. The difference between these two virtualization techniques lies in the separation of the host (virtualizing part) and the domain (virtualized part) [8]. In hardware level virtualization the separation is at the hardware abstraction layer, while in operating system level virtualization it is at the system call/ABI layer.

1.1 Motivation

Hardware level virtualization is the dominant virtualization technology in the industrial world as it provides good isolation and encapsulation. One virtual machine (VM) cannot access code running in another VM because of the isolated environment. However, this isolation comes at the cost of performance overhead. System calls in VMs are taken care of by the virtual machine monitor, or hypervisor, introducing additional latency. If a guest issues a hardware operation, it must be translated and reissued by the hypervisor. This latency leads to a higher response time for tasks running inside the VM. Another challenge is that there exists a semantic gap between the virtual machine and the service. The reason behind this is that the guest OS provides an abstraction to the VM and VMs operate below it [13]. In addition to this, VM instances should not have a RAM size higher than the host RAM. This fact imposes a restriction on the number of VMs that can run on the host. Every VM instance has its own operating system, which makes it of a heavyweight nature. Because of the above problems, industry is moving towards operating system level virtualization, which is a lightweight alternative to hardware level virtualization. There is no abstraction of hardware in OS level virtualization; rather, there exist partitions among containers. A process running in one container cannot access a process in another container. Containers share the host kernel and no extra OS is installed in each container. Therefore, it is of a lightweight nature. However, in terms of isolation, hardware level virtualization outperforms operating system level virtualization. There exist works where comparisons are made among virtualization techniques considering industrial systems like MapReduce (MR) clusters [14], general usage [15], [16], [17], and high performance scientific applications [18]. No work has been conducted to find a suitable virtualization technique for a real-time industrial cloud. In a real-time system, timely performance is very important. For this reason, virtualization techniques should be compared taking into account performance criteria that affect the real-time constraint (in-time response). In the previous works the authors considered industrial systems, general usage or high performance scientific applications without considering the real-time constraint at the time of comparison. Therefore, in this thesis our focus will be on contrasting hardware level virtualization and operating system level virtualization, taking into account different performance criteria, to find a suitable virtualization technique for a real-time industrial cloud.

1.2 Thesis Contributions

There exists research work comparing different hardware level virtualization techniques. There also exist comparisons between hardware level virtualization and operating system level virtualization. However, no work has been conducted to find a suitable virtualization technique for a real-time industrial cloud. The goal of the thesis is to find a virtualization technique which is suitable to be used in real-time cloud computing in industry. We achieve the main goal by presenting the following contributions:

• We review the work related to cloud performance overhead.

• We investigate virtualization techniques.

• We set up a system consisting of hardware level virtualization and operating system level virtualization consecutively.

• We evaluate performance criteria in the systems to find a suitable virtualization technique for a real-time industrial cloud.

1.3 Thesis Outline

This thesis report consists of 8 sections, organised as follows: Section 2 describes the background of this thesis. It describes cloud computing, the system level virtualization technique XEN, and the operating system level virtualization techniques Docker and OpenVZ. In Section 3 the problem is formulated and the research question is described. Section 4 describes the solution method which is used to answer the research question. Section 5 contains the procedure which is followed to set up the systems. Section 6 describes the evaluation of the systems that are set up. Different performance criteria are measured and the data are presented in this section. Section 7 contains the work related to performance evaluation in virtualized cloud environments. It also includes the differences between this thesis and those works. Section 8 concludes the thesis (summary and future work).

2 Background

In this section the background of this thesis is described, covering cloud computing, system level virtualization and operating system (OS) level virtualization.

2.1 Cloud computing

Cloud computing has emerged as an alternative to on-premise computing services [6]. It means services are provided to the consumer with elements which are not visible to the consumer (as if covered by a cloud), and these services are accessed via the internet [1]. Cloud service refers not only to the software provided to the consumer but also to the hardware in the data centers that is utilized to support those services. Advantages of the cloud are: (i) provision of infinite computing resources on demand, (ii) users can start on a small scale and increase resources when necessary, and (iii) users can release the resources when they are no longer needed [19]. If the service is provided to the public in a pay-as-you-go manner, then it is called a public cloud and the sold services are called utility computing. On the other hand, cloud services provided by an internal data center, which are not available to the public, are called a private cloud [19]. The combination of private and public clouds is called a hybrid cloud [20]. On the other hand, a community cloud is built for a special purpose. Figure 1 shows different types of cloud.

Figure 1: Different cloud types ([1]).

Cloud services are divided into three layers (as a business model): Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) [21].

SaaS refers to the applications provided to the consumer with the help of the cloud platform and infrastructure layers [22]. An example of SaaS is Google Apps (calendar, word and spreadsheet processing, Gmail, etc.), and these applications are accessible via a web browser. The underlying infrastructure (i.e. servers, network, operating systems, etc.) is not manageable by the customer [23, 22]. PaaS provides a platform to consumers where they can write their applications and load the corresponding code into the cloud. It contains all the necessary libraries and services required to develop the application. The developer can control the configuration of the application environment, but not the underlying platform. For example, Grails, Ruby and Java applications can be deployed on such a platform [23, 22]. IaaS facilitates virtualization technology (i.e. provision of processing, network and storage resources). With the help of this service the consumer can run arbitrary software in the cloud. For example, Amazon provides computing resources and storage resources to the consumers [23, 22].

2.2 System Level Virtualization

System level virtualization, which is also known as hardware level virtualization, means virtualizing the hardware to create an environment which behaves like an individual computer with an OS, and many such virtualized computers can be created on top of a single piece of hardware [24]. The abstractions of memory, operation on the CPU, networking and I/O activity are very close to the real machine (sometimes equal) [8]. The reasons for using system level virtualization are: (a) secure separation, (b) server consolidation and (c) portability [2], [25]. The hypervisor has become the fundamental software to realize system level virtualization. It is the entity in a system which is responsible for creating and managing the virtual machines running on top of it. Companies like Red Hat, VMware, xen.org, etc. are well-known providers of this technology. A computer on which virtual machines are running is called the host computer and the virtual machines are called guest machines [9]. There are two types of hypervisor, type I hypervisors and type II hypervisors. As explained earlier, in this work XEN was chosen to represent system level virtualization, and a brief description of it is given below.

2.2.1 XEN

The XEN hypervisor is an x86 virtual machine monitor (VMM) which facilitates efficient utilization of resources by consolidation [26], hosting services in the same location, computation in a secured environment [27] and easy migration of virtual machines [28]. It allows multiple virtual guest OSs to run simultaneously on a single piece of hardware. The XEN architecture is made of two distinct domains, Dom0 and DomU, as shown in Figure 2. Dom0 is an OS which has been modified to serve a special purpose. All the virtual machines, which are also known as DomUs, are created and configured by Dom0, as it provides a control panel to do so (Figure 2) [9].

Figure 2: XEN architecture (adapted from [2]).

Here, we briefly describe three basic aspects of the virtual machine interface (the paravirtualized x86 interface) [2].

Memory management
One of the most difficult parts of paravirtualizing the x86 architecture is memory management, as special techniques are required both in the hypervisor and at the time of migration. If the translation lookaside buffer (TLB) is managed by software, these difficult tasks become easier [29]. The guest's segment descriptors do not have the full privilege that XEN has. In addition to this, they may not have access to the address space reserved by XEN. The guest operating system does not have direct write access to the page tables; a validation process is conducted by XEN before every update.

CPU
In general, no other entity in a system has more privileges than the operating system (OS). However, in a virtualized environment XEN has more privileges than the guest operating systems. For this reason, the guest OSs are modified.

For handling system calls, a fast handler can be installed in the guest OS. This prevents indirection through XEN [2].

Device I/O
Instead of imitating hardware devices, XEN introduces device abstractions which enable isolation. I/O data destined for or coming from the guest OS is redirected through XEN with the help of shared-memory, asynchronous buffer-descriptor rings [2].

2.2.2 XEN Toolstack XEN uses different toolstack to enable user interface, which are:

1. XEND Toolstack

2. XL Toolstack

3. Libvirt

4. Virt-Manager

Detailed descriptions of these toolstacks can be found in Appendix A.

2.2.3 XEN Migration

Virtual machines are loosely coupled to the physical machine and because of this they can be migrated to another physical machine. XEN has the feature of migrating virtual machines from one host to another. Some of the advantages of migration of VMs are:

• Load balancing: If a host becomes overloaded, some VMs can be moved to a host with lower resource usage.

• Failure of hardware: If the host hardware fails, the guest VMs can be migrated to another host, and the failed hardware can be repaired.

• Saving energy: If a host is not running a significant number of VMs, then the VMs can be redistributed to other hosts and the host can be powered off. In this way energy can be saved at times of low usage.

• Lower latency: To have lower latency VMs can be moved to another geographic location [30].

It is not required to restart the server after migration of a virtual machine. In addition to this, the downtime is very low [31]. There are two types of migration: (i) offline migration and (ii) live migration.

Offline Migration
In offline migration, the VM is suspended and a copy of its memory is moved to the destination host. The VM is restarted in the destination host while the memory in the source host is freed. The migration time mainly depends on network bandwidth and latency [30]. Different toolstacks can be used for offline migration; some of them are listed below:

• XM toolstack: xm migrate <domain> <destination host>

• XL toolstack: xl migrate <domain> <destination host>

• Libvirt API: virsh migrate GuestName libvirtURI

Live Migration
In live migration, the VM keeps running while its memory is being transferred. If any modification is made to a page at the time of migration, the change is also sent to the destination, and the already-sent memory is updated to take the change into account. On the new host the registers are loaded and the migrating VM is restarted. Different toolstacks can be used for live migration; some of them are listed below:

• XM toolstack: xm migrate --live <domain> <destination host>

• XL toolstack: xl migrate --live <domain> <destination host>

• Libvirt API: virsh migrate --live GuestName libvirtURI

2.3 Operating System (OS) Level Virtualization

In OS level virtualization the operating system is changed in a way that it acts as a host for virtualization, and it contains a root file system, executables and system libraries [32], [33]. Applications run in containers (domains) and each container has access to virtualized resources. There is no redundancy, as only one OS runs in the system. In system level virtualization, the hypervisor works as an emulator, which tries to emulate the whole computing environment (i.e. RAM, disk drive, processor type, etc.). On the other hand, in OS level virtualization, instead of emulating an actual machine, resources (disk space, kernel, etc.) are shared among the guest instances. In the following subsections Docker and OpenVZ are described as representatives of operating system (OS) level virtualization [34].

2.3.1 Docker

With container-based technology, several domains (containers) can run on a single OS and for this reason it is of a lightweight nature [35]. Docker creates an extra layer of operating system abstraction to deploy applications. It has APIs to create containers that run processes in isolation. The Docker solution consists of the following components (high level overview, Figure 3); a minimal usage sketch follows the figure:

• Docker image: The Docker containers are launched from Docker images, which build the Docker world. An image can be considered as the source code of a Docker container [36]. If one considers the Linux kernel as layer zero, then whenever an image is run, a layer is put on top of it [37].

• The Docker daemon: It is the responsibility of the Docker daemon to create and monitor Docker containers. It also takes care of Docker images. The host operating system launches the daemon [3].

• Docker containers: Each container in Docker acts as a separate operating system which can be booted or shut down. There is no hypervisor or emulation in Docker. Any application can be converted to a lightweight container with the help of Docker [35]. A Docker container not only contains a software component but also all of its dependencies (binaries, libraries, scripts, jars, etc.) [34].

• Docker client: Docker clients communicate with the Docker daemon to manage the containers; for this communication a command-line client binary, docker, is used [36].

• Registries: Docker registries are used for storage and distribution of Docker images. There exist two types of registries: private and public. The public registry is called Docker Hub and is operated by Docker Inc. Private registries are owned by organizations and do not have the overhead of downloading from the internet [3, 36].

Figure 3: Major Docker components (High level overview) [3].
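To make the roles of these components concrete, the following minimal command sequence is a hedged illustration only; the image tag and container name are assumptions and are not taken from the thesis.

    # Hedged Docker usage sketch: image, daemon, container, client and registry in action
    docker pull ubuntu:14.04                              # the client asks the daemon to fetch an image from Docker Hub
    docker run -d --name testct ubuntu:14.04 sleep 3600   # launch a container from the image
    docker ps                                             # list running containers
    docker stop testct && docker rm testct                # shut the container down and remove it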

2.3.2 OpenVZ

OpenVZ is an OS level virtualization technology for Linux. It creates multiple isolated containers on a physical server, which are known as virtual environments (VEs) or containers. It uses a modified version of the Linux kernel. A container has its own root account, IP address, users, files, applications, libraries, CPU quotas and network configuration. All the containers share the resources as well as the kernel of the host. Containers must run the same OS as the physical server. Compared to a real machine, a container has limited functions. There are two types of virtual network interfaces for the containers: (i) the virtual network device (venet) and (ii) the virtual Ethernet device (veth). A virtual network (venet) device has limited functionality and simply creates a connection between the container and the host OS; however, it has lower overhead. On the other hand, the overhead of the virtual Ethernet device (veth) is slightly higher than that of the virtual network device. Its advantage is that it acts as an Ethernet device. The architecture of OpenVZ is shown in Figure 4 [38], [39], [4].

Kernel Isolation

• Process: A namespace is a technology for abstracting a global system resource. A process within a namespace enjoys privileges as if it has its own instance of the resource.


Figure 4: OpenVZ architecture [4].

In OpenVZ, there exists a separate PID namespace for each container. Because of this feature, a set of processes inside a container can be suspended or resumed. In addition to this, every container has its own interprocess communication (IPC) namespace. IPC resources like System V IPC objects and POSIX message queues are separated by the IPC namespace [40].

• Isolating filesystem: In OpenVZ, the filesystem is isolated using chroot. Chroot is a process of changing the root directory of a parent process and its children in order to isolate them from the rest of the computer. It is analogous to putting a process in jail. If a process is executed in such a changed root environment, it cannot access files outside that directory. Using this mechanism, isolation is provided to applications, system libraries, etc. [40], [41].

• Isolating network: In OpenVZ, network isolation is achieved using the net namespace. Each container has its own IP address. There are also routing tables and filters [41].

Management of resources
For the proper functioning of multiple containers on a single host, resource management is necessary. In OpenVZ, there are four primary controllers for resource management:

• User beancounters (and VSwap): This is a set of limits and guarantees that prevents a single container from using all the system resources.

• Disk quota: There can be a disk quota for each container. In addition to this, inside a container there can be per-user/group disk quotas. Therefore, there are two levels of disk quota in OpenVZ.

• CPU fair scheduler: The OpenVZ CPU scheduler is a two-level scheduler. The container which gets the next time slice is determined by the first-level scheduler, and this scheduling is performed taking into account the container's CPU priority. The second-level scheduler decides which process to run inside that container, taking into account process priority [4].

• I/O priorities and I/O limits: Every container has its own I/O priority. The available I/O is distributed according to the assigned priorities [41]. A hedged example of setting such per-container limits is sketched after this list.
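The following is a minimal sketch, not taken from the thesis, of how such limits could be set with the vzctl utility; the container ID 101 and all values are illustrative assumptions.

    # Hedged example: per-container resource limits (container ID and values are assumptions)
    vzctl set 101 --diskspace 10G:11G --save   # two-level disk quota: soft:hard limit
    vzctl set 101 --cpuunits 1000 --save       # weight used by the first-level CPU scheduler
    vzctl set 101 --ioprio 4 --save            # I/O priority (0-7)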

2.3.3 OpenVZ Migration

In OpenVZ, a container can be migrated from one host to another. This is done by checkpointing (an extension of the OpenVZ kernel) the state of a container and restoring it in the destination host. However, the container can also be restored in the same host. Without checkpointing, migration was only possible by shutting down the container and then rebooting it. This mechanism introduces a long downtime, and the downtime is not transparent to the user either. Checkpointing solves this problem. There are two phases in the checkpoint/restore process. First, in the checkpointing phase, the current state of the process is saved. The address space, register set, allocated resources and other process-private data are included in it. In the restoration phase, the checkpointed process is reconstructed from the saved image. After that, the execution is resumed at the point of suspension. OpenVZ provides a special utility called vzmigrate for supporting migration. With the help of this utility, live migration can be performed. Live migration refers to the process where the container freezes only for a very short amount of time during migration and the process of migration remains transparent to the user. Live or online migration can be performed with the command vzmigrate --online <host> <VEID>, where 'host' refers to the IP address of the destination host and 'VEID' refers to the ID of the container. Migration can also be performed without the help of the vzmigrate utility. For this, the vzctl utility and a file system backup tool are necessary. A container can be checkpointed with the command: vzctl chkpnt <VEID> --dumpfile <path>.

With the help of this command, the state of the migrating container is saved to a dump file specified by the path. In addition to this, the container is stopped. The dump file and file system are transferred to the destination node. On the destination node the target container is restored using the command: vzctl restore <VEID> --dumpfile <path>. The file system at the time of checkpointing must be the same as the file system at the time of restoration [42, 43].
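As an illustration only, a manual checkpoint/restore migration along the lines described above could look like the following minimal sketch; the container ID, destination address, paths and the use of rsync/scp are assumptions and not the thesis set-up.

    # Hedged sketch of a manual OpenVZ migration via checkpoint/restore
    vzctl chkpnt 101 --dumpfile /tmp/ct101.dump                     # freeze container 101 and save its state
    rsync -a /vz/private/101/ root@192.168.0.12:/vz/private/101/    # copy the container file system
    scp /etc/vz/conf/101.conf root@192.168.0.12:/etc/vz/conf/       # the container configuration must exist on the destination
    scp /tmp/ct101.dump root@192.168.0.12:/tmp/ct101.dump           # copy the dump file
    ssh root@192.168.0.12 "vzctl restore 101 --dumpfile /tmp/ct101.dump"   # resume execution on the destination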

3 Problem Formulation

For controlled and efficient usage of resources, and for isolation, virtual machines (VMs) are traditionally used. Isolation is achieved by deploying applications in separate VMs, while controlled usage of resources is obtained by creating VMs with resource constraints. VMs create an extra level of abstraction, which comes at a cost in performance. An experiment showed that the system level virtualization technique XEN increases response time by 400% as the number of applications increases from one to four. Another experiment showed that the peak consumption from two nodes was 3.8 CPUs; however, CPU consumption can rise beyond 3.8 CPUs after consolidating two virtual containers because of virtualization overhead. This rate becomes higher because of migration. Experiments showed that OS level virtualization outperforms system level virtualization in terms of performance overhead [44]. However, system level virtualization is traditionally used. Therefore, more experiments should be conducted to compare these two types of virtualization technology in terms of performance overhead. Virtualization technology reduces cost while increasing the probability of lower availability of applications. With the failure of a host, all the VMs running on it fail, stopping the applications running in them. For continuous operation of applications, features like live migration and checkpoint/restore were introduced to virtualized environments. However, in spite of these features, there is still a small amount of time during migration when applications become unavailable. In addition to this, not all virtualization environments provide migration features [16]. On the other hand, for cloud providers it is very crucial to maximize resource usage to achieve a return on investment. Migration of VMs is one of the methods used for maximizing resource usage [45]. An experiment showed that there was a 1% drop in sales of Amazon's EC2 because of 100 ms of latency. In the same way, because of a rise in search response time, profit decreased by

17 20% [46]. For this reason, there should not be any (or as low as possible) presence of latency due to migration. Therefore, analysis of performance overhead (related to migration) in real time industrial cloud is very impor- tant. The research question on which this thesis is based on is: Which virtualization technique is suitable to be used in real-time cloud com- puting in industry?

4 Solution Method

To conduct the thesis, we decided to follow several steps, which are shown in Figure 5.
First Step: In the first step, we reviewed research papers to get an insight into real-time industrial cloud performance overhead.
Second Step: In the second step, a review was conducted to find commonly used OS level virtualization software and system level virtualization software in the same context. For hardware level virtualization, XEN was chosen as the virtual machine (VM) hypervisor. The reason behind choosing XEN was that it provides secure execution of virtual machines without any degradation in performance, while other commercial or free solutions provide one property at the expense of the other [2]. In addition to this, it has a migration utility. XEN is used by popular internet hosting service companies like Amazon EC2, IBM SoftLayer, Liquid Web, Fujitsu Global Cloud Platform, OrionVM and others.

18 Figure 5: Solution method used in this thesis.

To represent operating system level virtualization, Docker and OpenVZ were chosen, which are open source projects. The reason behind choosing Docker was that it is open source and provides easy containerization. In addition to this, companies like Google and Microsoft are adopting this technology [47]. Schibsted Media Group, Dassault Systemes, Oxford University Press, Amadeus, EURECOM, Swisscom, The New York Times, Orbitz, PaaS [48], Red Hat's OpenShift Origin PaaS [49], Apprenda PaaS [50], etc. are some of the customers of Docker. It is a very fast growing technology [51]. However, Docker was excluded from the evaluation as Docker containers could not be migrated. On the other hand, the reason behind choosing OpenVZ is that it is open source and has some advantages like: (i) better efficiency, (ii) better scalability, (iii) higher container density and (iv) better resource management.

In addition to this, it has a utility for conducting migration, while Docker does not provide such a utility. Some of the OpenVZ customers are: Atlassian, Funtoo Linux, FastVPS, MontanaLinux, Parallels, Pixar Animation Studios, Travis CI, Yandex, etc. [52].

Third Step: In the third step, the crucial performance criteria in an industrial real-time cloud were identified. Migration time, downtime, execution time and CPU consumption at the time of migration are some of the performance criteria that affect a real-time industrial cloud. Migration time refers to the time taken by a virtual machine (VM)/container to migrate from one host to another, while downtime refers to the time period when the VM/container service is not available to the user. CPU utilization indicates the usage of the computer's processing power. The execution time of a program refers to the time needed by the system to execute that program. In a real-time system, an action is performed upon an event within a predefined time, and it is crucial to perform the action within the time constraint. In Figure 6, a high level overview of an industrial real-time cloud containing two servers is shown. It performs some computation on the sensor data sent to it and transmits the result to the actuator. For example, consider a real-time task A executing in server 1's VM/container. During the execution, migration of that VM/container is performed. If the downtime and migration time are too high, the task can miss its deadline. Therefore, a good migration utility and low downtime are expected in this case. On the other hand, resource consumption during migration should be kept low to ensure a good turnover. In this work, the considered resource was CPU consumption. High CPU consumption during migration may hamper the functionality of other tasks. For this reason, CPU consumption during migration is considered as one of the performance criteria. If a task's execution time is high, then there is a probability of the task missing its deadline and of degradation of overall performance. For this reason, execution time is considered as one of the performance criteria during the evaluation.

20 Figure 6: Industrial Real time cloud.

Fourth Step: In the fourth step, the method of evaluating the chosen performance criteria was selected by reviewing related work. Akoush et al. [53] conducted pre-copy migration and measured migration time and downtime. In pre-copy migration there are six stages (initialisation, reservation, iterative pre-copy, stop-and-copy, commitment, activation). The authors measured migration time as the time taken by all six stages. Downtime is the time taken by the last three stages (stop-and-copy, commitment, activation) of the migration. However, the authors did not describe the measurement steps in detail. Hu et al. [54] designed an automated testing framework for measuring migration time and downtime. The test actions were conducted by a server. To measure the total migration time, the authors initiated migration using a command line utility and recorded the time before the migration command was issued and after it completed. For measuring the downtime, the authors pinged the target VM. In this work, we followed the above method to measure migration time and downtime. The ping interval was 1 second, as the downtime was in the second range. The migration time and downtime were

measured while changing the RAM size of the migrating VM/container. In addition to this, migration time was measured while changing the number of VMs/containers in the source host while the target VM/container was migrating. Another experiment was conducted where migration was performed while the host CPU consumption was in a normal state and the migration time was measured. Then the source host was loaded as heavily as possible with the burnP6 stress program, the migration was conducted from the loaded source host and the migration time was measured. Shirinbab et al. [55] measured CPU utilization using the xentop command. In this work, we decided to use this command for measuring CPU consumption in the XEN-based system. In OpenVZ the top command was used. To measure the execution time of a program, a clock was started at the beginning of the program and stopped at the end; the time interval indicated by the clock is the execution time.
Fifth Step: In the fifth step, a small scale cloud environment was set up where XEN, Docker and OpenVZ were implemented successively.
Sixth Step: In the last step, the implemented systems were compared taking migration time, downtime, CPU consumption during migration and execution time into account.

5 System set-up

5.1 Local Area Network (LAN) Set up

For the experiments in this thesis we made a LAN with two personal computers connected to a Cisco WS-C2960-24TT-L Catalyst switch (the network is shown in Figure 7).

22 Figure 7: LAN set up.

Express setup was used to configure the switch as no console cable was provided. The IP address "http://10.0.0.1" provided access to the switch. To check whether the LAN was set up correctly, the static IP address of the first laptop was set to 192.168.2.2 and the laptop was disconnected from the internet. Then the first laptop was pinged from the other one, which was also disconnected from the internet. The main idea was to install XEN on the computers shown in Figure 7 and measure all the selected performance criteria, then erase the computers, install Docker and measure all the selected performance criteria again, and finally follow the same procedure for OpenVZ.
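As an illustration only, the connectivity check could be performed as in the following sketch; the interface name and the second laptop's address are assumptions.

    # Hedged sketch of the LAN check
    ifconfig eth0 192.168.2.2 netmask 255.255.255.0 up   # static IP on the first laptop
    # from the second laptop (assumed to be configured as 192.168.2.3):
    ping -c 4 192.168.2.2                                # replies confirm that the switch/LAN works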

5.2 XEN Hypervisor

In this work two computers were used, on which Community Enterprise Operating System (CentOS) 6.7 [56] was installed. Different versions of Ubuntu were initially tried for this setup and Ubuntu 12.04.5 was chosen; on top of Ubuntu, four virtual machines were created. However, we observed crashes of the system during booting after trying VM migration. Therefore, it was decided to set up the system with CentOS, which is based on Red Hat Enterprise Linux (RHEL) and is more compatible with XEN than Ubuntu.

5.2.1 XEN installation

We started with 32-bit CentOS 6.7 as one of the computers has only 2 GB of RAM. Unfortunately, the Xen4CentOS repository was not available for the 32-bit kernel. For this reason, the two computers were set up with 64-bit CentOS 6.7 and the Xen4CentOS stack was installed afterwards [57]. Before booting the XEN-enabled kernel we made sure that the boot configuration file was updated. After rebooting the system it was checked that the new kernel was running and that XEN was running (the 'xl info' and 'xl list' commands were run in the terminal). It was observed that XEN version 4.4 Dom0 was running.
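As an illustration only, the installation described above could look roughly like the following sketch; the exact package names follow the Xen4CentOS instructions and should be treated as assumptions.

    # Hedged sketch of a Xen4CentOS installation on 64-bit CentOS 6.x
    yum install -y centos-release-xen   # enable the Xen4CentOS repository (assumed package name)
    yum install -y xen                  # install the XEN hypervisor and tools
    # make sure /boot/grub/grub.conf boots the XEN-enabled kernel by default, then reboot
    reboot
    xl info                             # after reboot: confirm the hypervisor is active
    xl list                             # Domain-0 should be listed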

5.2.2 Setting up Bridged Network

A bridge is set up to provide networking to the XEN guests. In the /etc/sysconfig/network-scripts directory a file called ifcfg-br0 was created to make a bridge named br0. The main network device was eth0. There was no Ethernet interface configuration file in CentOS 6.7; for this reason, a configuration file ifcfg-eth0 was created in the directory /etc/sysconfig/network-scripts. This configuration file was written so that the eth0 interface points to the bridge interface. On both computers the bridge was named br0.
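As an illustration only, the two configuration files could look roughly as follows; the exact options (boot protocol, etc.) are assumptions and are not taken from the thesis.

    # Hedged sketch of /etc/sysconfig/network-scripts/ifcfg-br0
    DEVICE=br0
    TYPE=Bridge
    BOOTPROTO=dhcp
    ONBOOT=yes

    # Hedged sketch of /etc/sysconfig/network-scripts/ifcfg-eth0, attaching the NIC to the bridge
    DEVICE=eth0
    TYPE=Ethernet
    BRIDGE=br0
    ONBOOT=yes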

5.2.3 DomU Install with Virt-Manager

The virtual machines (DomUs) were created with virt-manager; however, they can also be created with the help of the XL toolstack or the XM toolstack. After the virt-manager interface appeared, it was connected to XEN. With virt-manager, fully virtualized virtual machines were created. In this work, the XEND service was enabled, as migration of VMs using the XL toolstack and libvirt showed some problems.

5.2.4 Virtual machine migration

A virtual machine was migrated from one computer to another, both running XEN. The migration requirements are as follows:

1. If the VM is created using shared storage (both Dom0s see the same disk), the xm migrate command migrates the VM to the destination host. If the VM is not created using shared storage, it is necessary that the destination host gets access to the root filesystem

of the migrating VM. In this work, the VM was not created using shared storage. The root filesystem was contained in a disk image with the path /var/lib/libvirt/images/VMx.img. For a successful migration it is necessary that the destination host has access to the VMx.img image file via the same path /var/lib/libvirt/images/VMx.img. In this work this was done by placing the image on a network file system (NFS) server (the source host) and then mounting it on the NFS client (the destination host).

2. The destination host has adequate memory to accommodate the migrating VM.

3. In this work the source and destination had the same version of the XEN hypervisor.

4. The configuration of the source and destination hosts is changed to allow migration of VMs, since VM migration is disabled by default. The settings in /etc/xen/xend-config.sxp are changed to allow relocation requests from any server.

5. The firewall is disabled, as there were some difficulties in connecting the NFS client to the NFS server [58, 59]. A hedged sketch of these settings is given after this list.
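The following minimal sketch, not taken from the thesis, illustrates how requirements 1, 4 and 5 could be configured on CentOS 6.x; the host addresses, paths and export options are assumptions.

    # On the source host (NFS server): export the image directory
    echo "/var/lib/libvirt/images 192.168.0.12(rw,sync,no_root_squash)" >> /etc/exports
    exportfs -ra
    service nfs restart
    # On the destination host (NFS client): mount it at the same path
    mount -t nfs 192.168.0.11:/var/lib/libvirt/images /var/lib/libvirt/images
    # In /etc/xen/xend-config.sxp on both hosts, enable relocation requests:
    #   (xend-relocation-server yes)
    #   (xend-relocation-port 8002)
    #   (xend-relocation-hosts-allow '')
    # Disable the firewall for the experiment
    service iptables stop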

After setting up the required configuration, the migration command is run in the terminal. Figure 8 shows the virtual machine called vmx starting to migrate from the source host to the destination host (IP address 192.168.0.12). vmx is migrated and paused in the destination host (Figure 9). In Figure 10 it can be seen that the virtual machine vmx has completed its migration and is running in the destination host.

Figure 8: Start of migration of virtual machine vmx.

Figure 9: vmx migrated and paused in the destination host.

Figure 10: Migration of vmx completed.

5.3 Docker

In this subsection, the installation of Docker, container creation and the problems faced during migration are described. Docker was installed on two computers running the Ubuntu 14.04 OS.

5.3.1 Docker Installation, Container Creation and Container Migration

Docker does not have any utility for migration of containers. However, in this work most of the selected performance criteria are migration related. For this reason, a Docker experimental binary was used rather than installing Docker's official release from the Docker web page [60]. Building experimental software gives users the opportunity to use features early and helps maintainers get feedback. The used Docker experimental binary was downloaded from a Git repository and has the extra utilities checkpoint and restore, which are not present in Docker's official release. With the checkpoint utility a process can be frozen, and with the restore utility it can be restored at the point

where it was frozen. These utilities are necessary to checkpoint the migrating container on the host node and restore it on the destination node. In this work, Docker containers could not be migrated because of a problem with a supporting software. Detailed steps of Docker installation, container creation and container migration are provided in Appendix B.

5.4 OpenVZ

In this subsection, the method followed to install OpenVZ, create a container and migrate a container is described.

5.4.1 OpenVZ Installation

For installing OpenVZ, 64-bit CentOS 6.7 was installed on the two computers, as the OpenVZ installation webpage [61] recommends using a RHEL (CentOS, Scientific Linux) 6 platform. Rather than creating a separate partition, the default /vz location was used. The OpenVZ repository was downloaded and placed in the computers' repository configuration, and the OpenVZ GPG key was imported. The OpenVZ kernel was installed, since without it OpenVZ functionality is limited; the kernel is the main part of the OpenVZ software. Before rebooting into the new kernel, it was checked that the new kernel was the default in the grub configuration file. In addition to this, for the proper functioning of OpenVZ some kernel parameters needed proper settings, and this change was made in the /etc/sysctl.conf file. SELinux was disabled on both computers and some user-level tools were installed. Afterwards the computers were rebooted, and OpenVZ was working properly on both nodes [61].
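As an illustration only, the installation described above could look roughly like the following sketch; the repository URL and package names follow the OpenVZ installation guide and should be treated as assumptions.

    # Hedged sketch of an OpenVZ installation on CentOS 6.x
    wget -P /etc/yum.repos.d/ http://ftp.openvz.org/openvz.repo   # add the OpenVZ repository
    rpm --import http://ftp.openvz.org/RPM-GPG-Key-OpenVZ         # import the GPG key
    yum install -y vzkernel                                       # the OpenVZ kernel
    yum install -y vzctl vzquota                                  # user-level tools
    # adjust the kernel parameters in /etc/sysctl.conf, disable SELinux, then reboot
    reboot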

5.4.2 Container Creation

For creating an OpenVZ container, an OS template is necessary. An OS template refers to a Linux distribution which is installed in a container and then packed as a gzipped tarball. With the help of such a cache, container creation can be completed within a minute. A precreated template cache was downloaded [62] and the downloaded tarball was placed in the /vz/template/cache/ directory. To create a container, the command vzctl create <CTID> --ostemplate <osname> was used, where CTID is the container ID. The container was started using the command vzctl start <CTID>. To confirm that the container was created successfully, the command vzlist was used, which shows a list of all the containers [61].
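As an illustration only, the container creation could look like the following sketch; the template name, container ID and IP address are assumptions.

    # Hedged example of creating and starting a container from a precreated template cache
    wget -P /vz/template/cache/ http://download.openvz.org/template/precreated/centos-6-x86_64.tar.gz
    vzctl create 101 --ostemplate centos-6-x86_64   # create container 101 from the template
    vzctl set 101 --ipadd 192.168.0.101 --save      # assign an IP address (venet)
    vzctl start 101
    vzlist -a                                       # confirm that the container exists and is running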

5.4.3 Container Migration

OpenVZ has the utility vzmigrate for migrating containers from one hardware node to another. For connecting to the destination node, the Secure Shell (SSH) network protocol is used. With SSH, a secure channel is created between the SSH client and server. Each time a connection is made between the source host and the destination host with SSH, a password must be entered. To avoid a password prompt each time a migration is made, a key pair is generated on the source host with the command ssh-keygen -t rsa. The created public key was transferred to the destination node and saved in the ~/.ssh/authorized_keys file. From the source host it was tested whether an SSH login could be made without a password. After successfully setting up the SSH public key, the command vzmigrate [--online] <destination address> <CTID> is used for live migration of the container.
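The set-up above can be sketched as follows; the destination address and container ID are assumptions, and ssh-copy-id is used here merely as a shortcut for the manual key copy described in the text.

    # Hedged sketch of passwordless SSH set-up and live migration of container 101
    ssh-keygen -t rsa                      # generate a key pair on the source host
    ssh-copy-id root@192.168.0.12          # append the public key to ~/.ssh/authorized_keys on the destination
    ssh root@192.168.0.12 "hostname"       # verify that no password is requested
    vzmigrate --online 192.168.0.12 101    # live-migrate container 101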

5.5 Test Bed Specifications Summary

In the previous subsections the installation processes of the XEN hypervisor, Docker and OpenVZ were described. In this subsection the test bed specifications are summarized. The configuration of the hosts used for XEN and of the XEN VMs is given in Table 1. On the host which has 2 GB of RAM, the XEN VMs were not all created at the same time, in order to meet the physical RAM constraint. The configuration of the hosts used for OpenVZ and of the OpenVZ containers is given in Table 2. For XEN and OpenVZ, different hardware platforms (computers) were used, as OpenVZ container live migration was not successful between the same hosts used for XEN. According to expert opinion, live OpenVZ migration is sometimes not possible between computers with different processor families. For this reason a different hardware platform was used. This does not affect the validity of the comparison. During migration a file is transferred from the source host to the destination host. File transfer in a wireless network is affected by the WiFi radio, bandwidth, misconfigured network settings, etc.; however, in this work a LAN with the same network bandwidth was used instead of the internet. Besides, the CPU speed and RAM size of hosts in a LAN do not largely affect the transfer of a small file. Therefore, there is no risk of a large change in migration time and downtime because of the different hardware platforms. Execution time could have been affected by the CPU speed and RAM size of the host if a compute-intensive application had been used; however, in this work no compute-intensive application was used to evaluate execution time.

Table 1: Test bed specifications of XEN

Parameter         Host 1                    Host 2
Processor name    Intel(R) Core(TM) i7      Intel(R) Core(TM)2 Duo
Processor speed   1.73 GHz                  2.26 GHz
RAM size          8 GB                      2 GB
OS                CentOS 6.7                CentOS 6.7
XEN version       XEN 4.4                   XEN 4.4
VM 1              1027 MB (Ubuntu 14.04)    1027 MB (Ubuntu 14.04)
VM 2              762 MB (Ubuntu 14.04)     762 MB (Ubuntu 14.04)
VM 3              545 MB (Ubuntu 14.04)     545 MB (Ubuntu 14.04)

Table 2: Test bed specifications of OpenVZ

Parameter         Host 1                    Host 2
Processor name    Intel(R) Core(TM)2 Duo    Intel(R) Core(TM)2 Duo
Processor speed   2.26 GHz                  2.10 GHz
RAM size          2 GB                      2 GB
OS                CentOS 6.7                CentOS 6.7
Container 1       1027 MB                   1027 MB
Container 2       762 MB                    762 MB
Container 3       545 MB                    545 MB

6 Evaluation

In this section, the performance criteria migration time, downtime, execution time and CPU consumption during migration are evaluated for XEN and OpenVZ. As the Docker container could not be migrated, Docker was excluded from the evaluation.

6.1 Migration time

Live migration refers to the migration of a virtual machine/container from one host to another in such a way that the process of migration remains transparent to the user [63]. In this work, virtual machines of different RAM sizes were created in XEN and migrated from one host to another with the command: xm migrate --live <VM name> <destination IP address>. Three virtual machines were created with RAM sizes of 1027 MB, 762 MB and 545 MB. The experiment was

conducted for six hours. Within these six hours, each virtual machine was migrated 12 times and the migration time was measured using the script mig_time (Figure 11). Therefore, in total 36 VM migrations were completed within six hours. In the case of OpenVZ, three containers were created and the RAM size was set to 1027 MB, 762 MB and 545 MB with the command: vzctl set <CTID> --ram <size> --swap <size> --save. The swap memory size was set to zero. Each container was migrated from one host to another 12 times and the migration time was measured using the script mig_time (Figure 11). The experiment was conducted for four hours, in which 36 migration times were measured. The maximum, minimum and median of the recorded migration times for XEN and OpenVZ are given in Table 3. Note that the experiment duration (six hours for XEN and four hours for OpenVZ) does not in itself indicate bad performance of XEN or good performance of OpenVZ.

Table 3: Migration time for different RAM sizes

RAM size   Data type   XEN (sec)   OpenVZ (sec)
1027 MB    maximum     16          23
           minimum     11          21
           median      14          22
762 MB     maximum     14          23
           minimum     9           21
           median      11          22
545 MB     maximum     12          23
           minimum     8           21
           median      10          22

From the evaluated data (Table 3), it can be observed that with decreasing RAM size the XEN VM migration time decreases, but OpenVZ does not show such a characteristic and the migration time remains constant for all the containers. This happens because in XEN, at the time of migration, the whole memory gets copied to the destination host. It is monitored whether any change is made to any memory page during migration, and when a change takes place the corresponding change is transferred to the destination. Therefore, in the case of XEN, the larger the RAM size, the longer it takes to transfer it; in a word, migration time is proportional to RAM size. On the other hand, in OpenVZ, for container migration the state of the processes in the container and the global state of the container are saved to an image file which is transferred

to the destination and restored there later. The state of a process includes its private data, such as: address space, opened files/pipes/sockets, current working directory, timers, terminal settings, System V IPC structures, user identities (uid, gid, etc.) and process identities (pid, pgrp, sid, etc.). Therefore, it is more like taking a snapshot of a program's state, which is known as checkpointing, and transferring the resulting image file. The larger the image file is, the longer it takes to transfer. In a word, migration time is proportional to the overall size of the transferred image file, and this size is proportional to the memory usage of the processes running inside the OpenVZ container. In this work, no processes were running inside the container during migration. For this reason, only the global state of the container was saved in the image file and transferred. Therefore the image file is of the same size in each case, irrespective of the RAM size (as the whole RAM does not get copied) [64, 65, 42]. For this reason, with a change in RAM size the migration time changed in XEN but not in OpenVZ. Another fact that can be observed from Table 3 is that XEN has a lower migration time than OpenVZ. However, the migration time in OpenVZ can be lowered using shared storage [66]; in this work, no shared storage was used in the case of OpenVZ. In addition to this, in the case of XEN, with an increase in RAM size the migration time increases whether or not the corresponding RAM pages are in use. For OpenVZ no such thing happens, and one can set an upper bound on memory usage without affecting the migration time.

Figure 11: Algorithm for measuring migration time.
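The thesis does not reproduce the mig_time script itself; the following minimal shell sketch of the idea shown in Figure 11 uses an assumed VM name and destination address.

    #!/bin/bash
    # Hedged sketch of the migration-time measurement (Figure 11); names are assumptions
    VM=vmx
    DEST=192.168.0.12
    START=$(date +%s)
    xm migrate --live "$VM" "$DEST"     # for OpenVZ: vzmigrate --online "$DEST" <CTID>
    END=$(date +%s)
    echo "Migration time: $((END - START)) seconds"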

Another experiment was conducted in which the number of virtual machines running on the host was changed from 0 to 2 and the effect of this change on migration time was observed. This means that in the first case there was no VM running in the source host except the target VM which was going to be migrated. The target VM was migrated and the migration time was observed. Then, in the second case, there was 1 VM running on the source host besides

the target VM which was going to be migrated, and the migration time was measured. For each case four migrations were conducted within an hour and the average of the four migration times was calculated. In the same way, the OpenVZ container migration time was measured while the number of containers running on the host was changed from 0 to 2. The result of this experiment is given in Table 4.

Table 4: Migration time changing VM/container number (RAM size 1027 MB)

Number of VMs/containers   XEN (sec)   OpenVZ (sec)
0                          14          22
1                          15          22
2                          14          22

From this experiment (Table 4), it can be observed that the number of VMs/containers running on the host does not affect the migration time. The reason behind this is that neither the VMs nor the containers monopolize the host and slow down the migration process; because of this, increasing the VM/container number did not affect the migration time. In the third experiment, the effect of CPU consumption on migration time was observed. For this, the migration time was first observed in the normal state. Then the host CPU was loaded as heavily as possible with burnP6 (a stress tester for Linux) and the migration time was measured by migrating the target VM. Only one core was loaded, with the command burnP6 &, rather than loading all the available cores. In the same way, the OpenVZ host was loaded with burnP6 and the migration time was measured. The result of this experiment is given in Table 5.

Table 5: Migration time changing host CPU consumption (RAM size 1027 MB)

Host CPU state   XEN (sec)   OpenVZ (sec)
No burnP6        14          22
With burnP6      14          22

From these data (Table 5), it can be observed that stressing (utilizing 100% of) only one core does not affect the migration time. It indicates that migration of a VM/container does not consume a large amount of CPU resource. If all the cores had been stressed (utilized at 100%), the migration time would likely have been affected. However,

in this work old computers were used and stressing all the cores could harm the machines. For this reason only one core was stressed.

6.2 Downtime

Downtime is the duration of time during which the migrating virtual machine is not available to the user [67]. To calculate the downtime of the migrating VM, it was pinged. If a ping does not receive any reply, then the virtual machine is not functioning. The number of ping packets that do not receive any reply, multiplied by the ping interval, indicates the downtime. For measuring downtime, a ping sequence with a 1-second interval was used. Three VMs were created with RAM sizes of 1027 MB, 762 MB and 545 MB. For each VM, 10 migrations were conducted to measure the downtime; in total 30 migrations were conducted in 4 hours. In the same way, the downtime of the migrating container was measured in OpenVZ. The maximum, minimum and median of the recorded downtimes are given in Table 6.

Table 6: Downtime changing RAM size

RAM size   Data type   XEN (sec)   OpenVZ (sec)
1027 MB    maximum     6           2
           minimum     3           2
           median      4           2
762 MB     maximum     6           2
           minimum     4           2
           median      5           2
545 MB     maximum     6           2
           minimum     2           2
           median      4           2

In XEN there was a slight decrease in downtime with decreasing RAM size (Table 6). However, in the case of OpenVZ, no such characteristic was observed. The reason behind this is that in the case of OpenVZ a snapshot of the process state is transferred, not the whole memory (as happens in XEN). Therefore, the RAM size does not affect the OpenVZ downtime. Another observation that can be made from Table 6 is that the downtime of the XEN VM was higher than that of the OpenVZ container. In XEN VM migration the transfer and restoration of the whole memory takes place, while in OpenVZ only the image file is transferred and restored. For this reason a longer downtime is involved for the XEN VM.
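The ping-based measurement described above can be sketched as follows; the target address and the 60-second deadline are assumptions, and the sketch runs in parallel with the migration.

    # Hedged sketch of the downtime measurement: ping once per second during migration
    ping -i 1 -w 60 192.168.0.101 | tee /tmp/ping.log
    # The summary line reports "N packets transmitted, M received";
    # downtime is approximately (N - M) * 1 second.
    LOST=$(awk '/packets transmitted/ {print $1 - $4}' /tmp/ping.log)
    echo "Approximate downtime: ${LOST} seconds"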

6.3 CPU consumption during migration

The CPU consumption of the host was recorded when no migration was taking place and also at the time of migration. The output is shown in Table 7.

Table 7: CPU consumption during migration (in percent)

Time               XEN (percent)   OpenVZ (percent)
Before migration   10-20%          10-15%
At migration       40-80%          22-30%
After migration    10-20%          10-11%

From the evaluated data (Table 7) it can be observed that XEN consumes more CPU resources (40-80%) during migration than OpenVZ (22-30%). In XEN the whole memory is copied and transferred during migration, consuming more CPU resources, while in OpenVZ only a snapshot of the current processes is transferred, consuming fewer CPU resources.
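The xentop/top sampling mentioned in the solution method could be run as in the following sketch; the sampling interval and duration are assumptions.

    # Hedged sketch of CPU sampling during migration
    # On the XEN host: batch-mode xentop, one sample per second for 60 seconds
    xentop -b -d 1 -i 60 > /tmp/xen_cpu.log
    # On the OpenVZ host: batch-mode top with the same sampling
    top -b -d 1 -n 60 > /tmp/openvz_cpu.log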

6.4 Execution Time

To find the execution time of an application running in a virtual machine and in a container, two test applications were made. The first application prints the sentence "Hello world" 10,000 times. The other application calls the function gettimeofday() 100,000 times and saves the output in a data structure; after that it performs some operations (finding the maximum and minimum) on the saved data. A clock was started at the beginning of each application and stopped at the end. The output of the evaluation is shown in Table 8.

Table 8: Execution time

Application     XEN (microsec)   OpenVZ (microsec)
Application 1   23123            23122
Application 2   21738            21738

The execution time of an application running in a virtual machine and in a container is effectively the same (Table 8). However, very simple applications were chosen in this work. If a High Performance Computing (HPC) application with intensive calculations had been chosen, as in Adufu et al. [18], a difference in execution time would likely have appeared.
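The source of the two test applications is not reproduced here; the following bash sketch is only an analogue of Application 1 (printing “Hello world” 10,000 times) with a wall-clock measurement, to give a feel for how numbers of the kind shown in Table 8 can be obtained.

#!/bin/bash
# Hedged analogue of test Application 1: print "Hello world" 10,000 times
# and report the elapsed wall-clock time in microseconds.
start=$(date +%s%N)                  # nanoseconds since the epoch
for i in $(seq 1 10000); do
    echo "Hello world"
done > /dev/null                     # redirect once to avoid terminal overhead
end=$(date +%s%N)
echo "Elapsed: $(( (end - start) / 1000 )) microseconds" >&2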

6.5 Evaluation Result

XEN has a lower migration time; however, if the RAM size of the XEN VM were increased further, migration would take longer than in OpenVZ. In OpenVZ no unnecessary RAM pages are transferred, while in XEN the whole RAM is transferred irrespective of its usage, which is inefficient. Therefore, OpenVZ has a better migration utility than XEN. OpenVZ also has a lower downtime than XEN. In addition, it consumes fewer CPU resources during migration, which makes it more economically attractive. Docker can be excluded as it does not have a working migration utility. Deployment of an OpenVZ container is easier and less time consuming than deployment of a XEN VM: creating an OpenVZ container takes approximately 1 second, while XEN VM creation takes approximately 30 minutes as it involves installation of an OS. Another limitation of XEN is that, when creating a VM, its RAM size must not exceed the host RAM size; in OpenVZ no such consideration is necessary. Based on this analysis it can be concluded that OpenVZ is more suitable than XEN for an industrial real-time cloud. A summary of the comparison, indicating the better performer between XEN and OpenVZ, is shown in Table 9.

Table 9: Summary of Evaluation

Performance Criteria                 Better Performer
Migration Time                       OpenVZ
Downtime                             OpenVZ
CPU consumption during migration     OpenVZ
Execution time                       none

7 Related work

Nowadays there is a tendency to use containers, an alternative to traditional hypervisor-based virtualization, in MapReduce (MR) clusters for resource sharing and performance isolation. MapReduce clusters are used for generating and processing large data sets. Xavier et al. [14] conducted an experiment to compare the performance of container-based systems (Linux VServer, OpenVZ and Linux Containers (LXC)) when running MapReduce clusters. Linux VServer, OpenVZ and LXC are similar in terms of security, isolation and performance, while they differ in resource management. The experiment was based on micro and macro benchmarks. A micro benchmark is a comparatively small piece of code for evaluating the performance of one part of a system, while a macro benchmark evaluates the system as a whole; the benchmark code usually consists of test cases. The results obtained from the micro benchmarks are expected to influence the macro benchmarks, which makes them useful for identifying bottlenecks. Four identical nodes, each with two 2.27 GHz processors, 8 MB of L3 cache per core, 16 GB of RAM and one 146 GB disk, were used for the experiment. The chosen micro benchmark was the TestDFSIO benchmark, which gives an idea of the I/O efficiency of the cluster. Throughput is measured in Mbps, based on the time taken by the individual tasks to write and read files. The file size has a direct effect on the result: the behaviour of the system becomes linear when the file size grows beyond 2 GB, and the authors assumed the result was influenced by network throughput. The selected macro benchmarks were Wordcount, Terasort and IBS. Wordcount counts the number of occurrences of each word in a dataset and is well known for comparing performance among Hadoop clusters; the authors created a 30 GB input file by looping the same text file. The results showed that all the container-based systems reach near-native performance. The Terasort benchmark measures the speed of a Hadoop cluster by sorting data as fast as possible; it comprises two steps: (i) input data generation and (ii) sorting the input data. The result was the same as for Wordcount. With the IBS benchmark the authors tested performance isolation in a modified Hadoop cluster: they measured the execution time of an application in one container, then reran the same application side by side with a stress test and measured the execution time again, observing the performance degradation of the chosen application. There was no performance degradation during the CPU stress test; however, during the memory and I/O stress tests there was a small degradation. The authors concluded that all the container-based systems perform similarly, and that LXC outperforms the others in terms of resource restriction among containers.

Felter et al. [15] conducted an experiment where traditional virtual machines were compared with Linux containers by stressing CPU, memory, networking and storage resources. KVM was used as the hypervisor and Docker as the container manager. The authors did not create containers inside a VM or VMs inside a container, to avoid redundancy. The target system resources were saturated by the benchmarks in order to measure overhead. The benchmark software used was Linpack, STREAM, RandomAccess, nuttcp and netperf request-response. Linpack measures a system's computing power by solving a system of linear equations; most of the computations are double-precision floating-point multiplications of a scalar with a vector, whose result is added to another vector. Running Linpack spends a good amount of time on mathematical operations, requires regular memory accesses and stresses the floating-point capability of the cores. Docker outperformed KVM in this case, which was attributed to the hypervisor abstracting away system information. The STREAM benchmark performs simple operations on vectors to measure memory bandwidth; its performance depends mainly on memory bandwidth and, to a lesser extent, on TLB misses. The benchmark was executed in two configurations: (i) the complete computation contained in one NUMA (non-uniform memory access) node and (ii) the computation spread across both nodes. The performance of native Linux, Docker and KVM was the same in both configurations. The RandomAccess benchmark stresses random memory performance: a section of memory is initialized as the working set, and in that section arbitrary 8-byte words are read, modified and written back. RandomAccess performance was evaluated on a single socket (8 cores) and on both sockets (all 16 cores). On a single socket, Linux and Docker performed better than KVM, while on both sockets all three performed identically. To measure network bandwidth between the target system and an identical system, the nuttcp tool was used, with the two systems connected by a direct 10 Gbps Ethernet link. The nuttcp client runs on the SUT (system under test) and the nuttcp server on the other system; in the client-to-server case the SUT acts as transmitter, while in the server-to-client case it is the receiver. In all three systems (native, Docker, KVM) throughput reached 9.3 Gbps in both directions. The authors used the fio tool to measure the overhead introduced by virtualizing block storage, which is used in cloud environments for consistency and better performance. For this test they added a 20 TB IBM FlashSystem 840 flash SSD to the server under test. For sequential reads and writes all the systems performed identically; however, for random reads, writes and mixed workloads (70% read, 30% write) Docker outperformed KVM. For Redis (a data structure store) both the native and the Docker system performed better than KVM. The authors therefore concluded that containers perform equal to or better than VMs in all cases, that very little CPU and memory overhead is introduced by either KVM or Docker, and that both virtualization technologies should be used with extra care for I/O-intensive workloads.

Wubin et al. [16] compared hypervisor-based and container-based platforms from a high availability (HA) perspective (live migration, failure detection, and checkpoint/restore). HA means that a system is continuously functional almost all of the time. The hypervisor-based platforms considered were VMware, Citrix XenServer and Marathon everRun MX; Docker/LXC and OpenVZ were the considered container-based platforms. In VMware, high availability is provided with the help of the Distributed Resource Scheduler (DRS), VMware FT, VMware HA and vMotion. VMware HA is based on a failover clustering strategy: when an ESXi host fails, it stops sending heartbeat signals to the vCenter server, and if the vCenter server does not receive any heartbeat signal it resets the VM. The same happens in case of an application failover. During maintenance, VMs can be migrated with the help of vMotion to achieve HA. Fault tolerance (FT) in VMware is achieved through the vLockstep protocol, which synchronises the primary and secondary copies of an FT-protected VM. XenServer also provides HA services, though only host-level failovers can be handled; third-party software can be used with XenServer to address VM and application failovers. In addition, FT is not supported in XenServer, although Remus can be integrated with it to enable FT support. For container-based systems, a checkpoint/restore utility is necessary to provide HA. In Docker a checkpoint/restore utility is not available, while in OpenVZ this utility is ready to use; moreover, HA features such as live migration are supported in OpenVZ. However, features like automatic state synchronization, failure detection and failover management are not supported in either OpenVZ or Docker/LXC. The authors concluded that the high availability features of container-based platforms are not yet adequate, whereas they are ready to use in hypervisor-based platforms.

Tafa et al. [17] compared the performance of five hypervisor configurations, XEN-PV, XEN-HVM, OpenVZ, KVM-FV and KVM-PV, in terms of CPU consumption, memory utilization, total migration time and downtime. To implement migration, the authors created a script called heartcare, which sends a message to the heartbeat tool that initiates migration. In XEN-PV (XEN paravirtualized) they used the xentop command to measure CPU consumption; the consumption before migrating a virtual machine was lower than during the migration. To measure memory usage in XEN the authors used the MemAccess tool: initially the utilization was 10.6%, which increased to 10.7% after migration. The authors conducted the migration within the same physical host to measure migration time. For this measurement, a counter is started when the migration initiation message is sent to the heartbeat tool; this counter indicates the transfer time. The measured migration time was 2.66 seconds and the measured downtime was 4 ms. The reason behind the very low downtime is that they conducted the migration within the same physical host, the CPU was very fast and the application was not large. The authors also changed the MTU (Maximum Transmission Unit) and evaluated all four performance criteria (transfer time, downtime, memory utilization and CPU consumption); changing the MTU automatically changes the packet size. The MTUs used were 1000 B and 32 B. The results showed that CPU consumption, memory utilization, migration time and downtime increase when the packet size decreases. The same experiment was conducted on a Xen fully virtualized machine with MTU sizes of 1500 B, 1000 B and 32 B. A comparison between the Xen fully virtualized machine (XEN-HVM) and the XEN paravirtualized machine showed that XEN-HVM consumes more CPU and memory than XEN-PV. In OpenVZ the authors read the CPU wasted time in /proc/vz/vstat to measure CPU consumption and used the stream tool to measure memory utilization. For OpenVZ the MTU sizes used were 1500 B, 1000 B and 32 B. The CPU consumption and memory utilization were a little higher in OpenVZ compared to XEN, while the migration time and downtime were smaller than in XEN. For the KVM hypervisor, CPU consumption was evaluated using the SAR utility and memory utilization was measured using a modified open-source stress tool; the MTU sizes were the same as for OpenVZ. The CPU consumption was higher in KVM-HVM compared to XEN-HVM, and the performance of KVM-HVM was lower than that of the other hypervisors. KVM-PV (paravirtualized) performed better than KVM-HVM and showed similarity to XEN-HVM. Finally, the authors concluded that there is no single hypervisor that is good in all of the aforementioned performance criteria.

Sometimes virtualization with a good degree of both isolation and efficiency is required. Soltesz et al. [68] presented an alternative to hypervisor-based systems in scenarios such as HPC clusters, the Grid, hosting centers, and PlanetLab. Prior work on resource containers and security containers applied to general-purpose operating systems is synthesised into a new approach. The container-based system considered in that paper for describing the design and implementation is Linux-VServer; to contrast it, a hypervisor-based system (XEN) is presented. For efficient computation in High Performance Computing (HPC) applications, maximum usage of limited resources is required, and a careful balance between effective resource allocation and execution-time minimization is needed. The comparison showed that XEN supports more than one kernel, allows network stack virtualization, and permits migration, whereas VServer does not support migration but has a small kernel. The performance of I/O-related benchmarks is worse on XEN than on VServer, and for server-type workloads the performance of the container-based system is up to two times better than that of the hypervisor-based system.

Adufu et al. [18] conducted an experiment to see whether container-based technology is suitable for executing high-performance scientific applications. They compared the execution time of an application running in a Docker container and in a VM created with the help of OpenStack. In Docker, more than one user space runs on one host, and CPU, memory and I/O resources are allocated to containers using cgroups. Cloud Management Platforms (CMPs) such as OpenStack are used to manage hypervisor-oriented cloud environments. OpenStack enables efficient usage of cloud resources and includes Nova Compute, which can be used to create new VMs; OpenStack is commonly used for administrating cloud environments containing HPC applications. Millions or billions of tasks make up High Throughput Computing (HTC) and Many Task Computing (MTC) applications. The authors selected autodock3, a simulation software for molecular modeling, which is a CPU-intensive job. Molecular docking refers to Structure-Based Drug Design (SBDD) computation, a compute-intensive experiment. The authors created VMs and containers and executed a single docking process repeatedly in them, the docking application being the autodock3 tools. Each machine used in the experiment has 30 GB of RAM and runs Ubuntu 14.04; in the Docker case the host machine also runs Ubuntu 14.04. Launching a container and executing autodock3 takes 176 seconds on average, whereas launching a VM and executing autodock3 takes 191 seconds. Docker has a shorter time because it is not necessary to start up a guest OS. The authors conducted another experiment where execution time was measured while the total RAM size was changed (12 GB, 24 GB, 36 GB and 48 GB). The result showed that execution time increases with increasing RAM size. With 24 GB of memory the VM outperformed Docker; in all other cases Docker performed better than the VMs. Finally, the authors concluded that the execution time of the selected application is lower in Docker than in the hypervisor-based VM.

From the studies above it can be observed that no experiment has been conducted to find a suitable virtualization technique for an industrial real-time cloud considering the performance criteria migration time, downtime, CPU consumption during migration and execution time. In this work, an experiment focusing on this problem was therefore conducted. The system-level and OS-level virtualization technologies used in the works stated above were considered when choosing the virtualization technologies for this work.

8 Conclusion

In this section, a summary of the work and a roadmap for further research are provided.

8.1 Summary

This thesis is about finding an appropriate virtualization technique for a real-time industrial cloud. The research question on which the thesis is based is: Which virtualization technique is suitable to be used in real-time cloud computing in industry? To answer this question, a cloud environment was set up with XEN, OpenVZ and Docker in turn. However, the Docker container could not be migrated because of technical difficulties, and Docker was therefore excluded from the evaluation. The system was evaluated considering migration time, downtime, CPU consumption during migration and execution time as performance criteria. In a real-time system, timely performance is crucial: a long downtime or migration time can result in deadline misses. For this reason, a short downtime and a good migration utility are expected. In addition, resource usage should be minimal to ensure a better turnover. Based on this work, it can be concluded that the OS-level virtualization technique OpenVZ is suitable to be used in real-time cloud computing in industry, as it provides a better migration utility, shorter downtime and lower CPU resource usage during migration. In addition, it is developer friendly (lower deployment time and fewer constraints). Nevertheless, more research should be conducted considering other performance criteria (e.g. isolation).

8.2 Future Work

In this work, migration time, downtime, CPU consumption during migration and execution time were considered as performance criteria. However, network bandwidth is also an important criterion that affects migration time and was not considered in this work; in the future, this criterion will be included in the evaluation. In addition, only one hypervisor (XEN) and two container-based virtualization techniques (OpenVZ and Docker) were considered. In the future, more virtualization techniques will be considered.

For evaluating execution time, very simple programs were run in this work; in the future a compute-intensive application will be chosen. A virtualization technology should also provide isolation among VMs/containers, and in further research isolation will be considered as a performance criterion.

References

[1] B. Sosinsky, Cloud computing bible, vol. 762. John Wiley & Sons, 2010.

[2] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield, “Xen and the art of virtualization,” SIGOPS Oper. Syst. Rev., vol. 37, pp. 164–177, Oct. 2003.

[3] A. Mouat, “Using docker,” 2015.

[4] Y. Zheng and D. M. Nicol, “A virtual time system for OpenVZ-based network emulations,” in Proceedings of the 2011 IEEE Workshop on Principles of Advanced and Distributed Simulation, PADS ’11, (Washington, DC, USA), pp. 1–10, IEEE Computer Society, 2011.

[5] J. Katzel, “Power of the cloud. (cover story).,” Control Engineering, vol. 58, no. 12, p. 16, 2011.

[6] M. Sahinoglu and L. Cueva-Parra, “Cloud computing,” Wiley Interdisciplinary Reviews: Computational Statistics, vol. 3, no. 1, pp. 47–68, 2011.

[7] A. V. A. F. C. J. A. B. Filho, “Cloud services,” in Cloud Computing and Communications (LatinCloud), 2nd IEEE Latin American Conference, pp. 59–64, IEEE, 2013.

[8] D. C. Van Moolenbroek, R. Appuswamy, and A. S. Tanenbaum, “Towards a flexible, lightweight virtualization alternative,” in Proceedings of International Conference on Systems and Storage, pp. 1–7, ACM, 2014.

[9] A. Desai, R. Oza, P. Sharma, and B. Patel, “Hypervisor: A survey on concepts and taxonomy,” International Journal of Innovative Technology and Exploring Engineering, vol. 2, no. 3, pp. 222–225, 2013.

[10] R. Morabito, J. Kjallman, and M. Komu, “Hypervisors vs. lightweight virtualization: A performance comparison,” in Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 386–393, March 2015.

[11] S. L. Scott, G. G. Vallee, T. Naughton, A. Tikotekar, C. Engelmann, and H. Ong, “System-level virtualization research at oak ridge national laboratory,” Future Generation Computer Systems, vol. 26, no. 3, pp. 304–307, 2010.

[12] M. Rosenblum, “The reincarnation of virtual machines,” Queue, vol. 2, no. 5, p. 34, 2004.

[13] P. M. Chen and B. D. Noble, “When virtual is better than real [operating system relocation to virtual machines],” in Hot Topics in Operating Systems, 2001. Proceedings of the Eighth Workshop on, pp. 133–138, IEEE, 2001.

[14] M. Gomes Xavier, M. Veiga Neves, F. de Rose, and C. Augusto, “A performance comparison of container-based virtualization systems for mapreduce clusters,” in Parallel, Distributed and Network-Based Processing (PDP), 2014 22nd Euromicro International Conference on, pp. 299–306, IEEE, 2014.

[15] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio, “An updated performance comparison of virtual machines and linux containers,” in Performance Analysis of Systems and Software (ISPASS), 2015 IEEE International Symposium On, pp. 171–172, IEEE, 2015.

[16] W. Li and A. Kanso, “Comparing containers versus virtual machines for achieving high availability,” in Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 353–358, IEEE, 2015.

[17] I. Tafa, E. Zanaj, E. Kajo, A. Bejleri, and A. Xhuvani, “The comparison of virtual machine migration performance between xen-hvm, xen-pv, open-vz, kvm-fv, kvm-pv,” IJCSMS International Journal of Computer Science: Management Studies, vol. 11, no. 2, pp. 65–75, 2011.

[18] T. Adufu, J. Choi, and Y. Kim, “Is container-based technology a winner for high performance scientific applications?,” in Network Operations and Management Symposium (APNOMS), 2015 17th Asia-Pacific, pp. 507–510, IEEE, 2015.

[19] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “A view of cloud computing,” Commun. ACM, vol. 53, pp. 50–58, Apr. 2010.

[20] F. Magoulès, J. Pan, and F. Teng, Cloud computing: Data-intensive computing and scheduling. CRC press, 2012.

[21] H. Rajaei and J. Wappelhorst, “Clouds & grids: a network and simulation perspective,” in Proceedings of the 14th Communications and Networking Symposium, pp. 143–150, Society for Computer Simulation International, 2011.

[22] C. Weinhardt, D.-I.-W. A. Anandasivam, B. Blau, D.-I. N. Borissov, D.-M. T. Meinl, D.-I.-W. W. Michalk, and J. Stößer, “Cloud computing – a classification, business models, and research directions,” Business & Information Systems Engineering, vol. 1, no. 5, pp. 391–399, 2009.

[23] P. Mell and T. Grance, “The nist definition of cloud computing,” Communications of the ACM, vol. 53, no. 6, p. 50, 2010.

[24] G. Pék, L. Buttyán, and B. Bencsáth, “A survey of security issues in hardware virtualization,” ACM Comput. Surv., vol. 45, pp. 40:1–40:34, July 2013.

[25] G. Vallée, T. Naughton, C. Engelmann, H. Ong, and S. L. Scott, “System-level virtualization for high performance computing,” in Parallel, Distributed and Network-Based Processing, 2008. PDP 2008. 16th Euromicro Conference on, pp. 636–643, Feb 2008.

[26] C. A. Waldspurger, “Memory resource management in vmware esx server,” SIGOPS Oper. Syst. Rev., vol. 36, pp. 181–194, Dec. 2002.

[27] G. W. Dunlap, S. T. King, S. Cinar, M. A. Basrai, and P. M. Chen, “Revirt: Enabling intrusion analysis through virtual-machine logging and replay,” ACM SIGOPS Operating Systems Review, vol. 36, no. SI, pp. 211–224, 2002.

[28] M. Kozuch and M. Satyanarayanan, “Internet suspend/resume,” in Mobile Computing Systems and Applications, 2002. Proceedings Fourth IEEE Workshop on, pp. 40–46, IEEE, 2002.

[29] D. R. Engler, S. K. Gupta, and M. F. Kaashoek, “Avm: Application-level virtual memory,” in Hot Topics in Operating Systems, 1995. (HotOS-V), Proceedings., Fifth Workshop on, pp. 72–77, IEEE, 1995.

[30] L. YamunaDevi, P. Aruna, D. D. Sudha, and N. Priya, “Security in virtual machine live migration for kvm,” in Process Automation, Control and Computing (PACC), 2011 International Conference on, pp. 1–6, IEEE, 2011.

[31] “Migration.” http://wiki.xenproject.org/wiki/Migration, Access date: 18th April, 2016.

[32] S. Osman, D. Subhraveti, G. Su, and J. Nieh, “The design and implementation of zap: A system for migrating computing environments,” SIGOPS Oper. Syst. Rev., vol. 36, pp. 361–376, Dec. 2002.

[33] S. Soltesz, H. Pötzl, M. E. Fiuczynski, A. Bavier, and L. Peterson, “Container-based operating system virtualization: A scalable, high-performance alternative to hypervisors,” SIGOPS Oper. Syst. Rev., vol. 41, pp. 275–287, Mar. 2007.

[34] J. Fink, “Docker: a software as a service, operating system-level virtualization framework,” Code4Lib Journal, vol. 25, 2014.

[35] P. Raj, J. Chelladhurai, and S. S. Vinod, Learning Docker. Packt Publishing Ltd, June 2015.

[36] J. Turnbull, The Docker Book. Lulu.com, 2014.

[37] O. Hane, Build Your Own PaaS with Docker. Packt Publishing Ltd, 2015.

[38] “Openvz.” https://openvz.org/Main_Page, Access date: 21st April, 2016.

[39] M. Furman, OpenVZ Essentials. Packt Publishing Ltd, 2014.

[40] “Namespaces.” http://man7.org/linux/man-pages/man7/namespaces.7.html, Access date: 22nd April, 2016.

[41] “Openvz linux containers.” http://www.slideshare.net/kolyshkin/openvz-linux-containers, Access date: 22nd April, 2016.

[42] “Checkpointing and live migration.” https://openvz.org/Checkpointing_and_live_migration, Access date: 22nd April, 2016.

[43] “Checkpointing internals.” https://openvz.org/Checkpointing_internals, Access date: 22nd April, 2016.

[44] P. Padala, X. Zhu, Z. Wang, S. Singhal, K. G. Shin, et al., “Performance evaluation of virtualization technologies for server consolidation,” HP Labs Tech. Report, 2007.

[45] F. P. Tso, G. Hamilton, K. Oikonomou, and D. P. Pezaros, “Implementing scalable, network-aware virtual machine migration for cloud data centers,” in 2013 IEEE Sixth International Conference on Cloud Computing, pp. 557–564, IEEE, 2013.

[46] R. Kohavi, R. M. Henne, and D. Sommerfield, “Practical guide to controlled experiments on the web: listen to your customers not to the hippo,” in Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 959–967, ACM, 2007.

[47] “8 proven real-world ways to use docker.” https://www.airpair.com/docker/posts/8-proven-real-world-ways-to-use-docker, Access date: 18th April, 2016.

[48] “Cloud foundry: Diego explained by onsi fakhouri.” http://www.activestate.com/blog/2014/09/cloud-foundry-diego-explained-onsi-fakhouri, Access date: 25th February 2016.

[49] “Red hat to update docker container tech for enterprises.” http://www.computerworld.com/article/2488315/open-source-tools/red-hat-to-update-docker-container-tech-for-enterprises.html, Access date: 25th February 2016.

[50] “Paas and docker containers work together in latest apprenda release.” http://www.datacenterknowledge.com/archives/2015/04/28/paas-and-docker-containers-work-together-in-latest-apprenda-release/, Access date: 25th February 2016.

[51] “Docker customers.” https://www.docker.com/customers, Access date: 18th April, 2016.

[52] “Success stories.” https://openvz.org/Success_stories, Access date: 24th April, 2016.

[53] S. Akoush, R. Sohan, A. Rice, A. W. Moore, and A. Hopper, “Predicting the performance of virtual machine migration,” in Modeling, Analysis & Simulation of Computer and Telecommunication Systems (MASCOTS), 2010 IEEE International Symposium on, pp. 37–46, IEEE, 2010.

[54] W. Hu, A. Hicks, L. Zhang, E. M. Dow, V. Soni, H. Jiang, R. Bull, and J. N. Matthews, “A quantitative study of virtual machine live migration,” in Proceedings of the 2013 ACM Cloud and Autonomic Computing Conference, p. 11, ACM, 2013.

[55] S. Shirinbab, L. Lundberg, and D. Ilie, “Performance comparison of kvm, vmware and xenserver using a large telecommunication application,” in Cloud Computing, IARIA XPS Press, 2014.

[56] “Download centos linux iso images.” https://wiki.centos.org/Download, Access date: 22nd March 2016.

[57] “Xen4 centos quick start.” https://wiki.centos.org/HowTos/Xen/Xen4QuickStart, Access date: 22nd March 2016.

[58] “Migrating xen domu guests between host systems.” http://www.virtuatopia.com/index.php/Migrating_Xen_domainU_Guests_Between_Host_Systems, Access date: 22nd March 2016.

[59] “A live migration example.” https://www.centos.org/docs/5/html/5.2/Virtualization/sect-Virtualization-Virtualization_live_migration-An_example_of_a_configuration_for_live_migration.html, Access date: 22nd March 2016.

[60] “Install docker.” https://docs.docker.com/linux/step_one/, Access date: 4th May 2016.

[61] “Quick installation.” https://openvz.org/Quick_installation, Access date: 23rd April, 2016.

[62] “Download/template/precreated.” https://openvz.org/Download/template/precreated, Access date: 23rd April, 2016.

[63] E. Gustafsson, “Optimizing total migration time in virtual machine live migration,” 2013.

[64] “Xen live migration.” https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Virtualization/chap-Virtualization-Xen_live_migration.html, Access date: 27th April 2016.

[65] “Checkpointing internals.” https://openvz.org/Checkpointing_internals, Access date: 27th April 2016.

[66] Y. Zhao and W. Huang, “Adaptive distributed load balancing algorithm based on live migration of virtual machines in cloud,” in INC, IMS and IDC, 2009. NCM’09. Fifth International Joint Conference on, pp. 170–175, IEEE, 2009.

[67] F. Salfner, P. Tröger, and A. Polze, “Downtime analysis of virtual machine live migration,” in The Fourth International Conference on Dependability (DEPEND 2011), IARIA, pp. 100–105, 2011.

[68] S. Soltesz, H. Pötzl, M. E. Fiuczynski, A. Bavier, and L. Peterson, “Container-based operating system virtualization: a scalable, high-performance alternative to hypervisors,” in ACM SIGOPS Operating Systems Review, vol. 41, pp. 275–287, ACM, 2007.

[69] “Choice of toolstacks.” http://wiki.xen.org/wiki/Choice_of_Toolstacks#Default_.2F_XEND_.28Deprecated_in_Xen_4.1.3B_Removed_in_4.5.29, Access date: 7th April 2016.

[70] “Xm documentation.” http://xenbits.xen.org/docs/4.4-testing/man/xm.1.html, Access date: 7th April 2016.

[71] “Xl.” http://wiki.xen.org/wiki/XL, Access date: 16th April, 2016.

[72] “Libvirt.” http://wiki.xen.org/wiki/Libvirt, Access date: 22nd March 2016.

[73] “Libvirt virtualization api.” https://libvirt.org/goals.html, Access date: 17th April, 2016.

[74] “Manage virtual machines with virt-manager.” https://virt-manager.org/, Access date: 17th April, 2016.

[75] “Checkpoint/restore tool http://criu.org.” https://github.com/xemul/criu, Access date: 4th May 2016.

[76] “tianon/cgroupfs-mount.” https://github.com/tianon/cgroupfs-mount, Access date: 4th May 2016.

[77] “P.haul.” https://criu.org/P.Haul, Access date: 10th May 2016.

[78] “Live migration using criu.” https://github.com/xemul/p.haul, Access date: 4th May 2016.

Appendix A

A.1 XEN toolstack

A.1.1 XEND Toolstack

Previously, XEN provided the XM command line user interface for management purposes, which used the XEND toolstack. The synopsis of the XM user interface is ’xm subcommand [args]’. However, it has now been replaced by the XL user interface, which uses the XL toolstack. XEND is still included in the new versions of XEN, but the service is not enabled by default; by default, the XL toolstack is enabled in new versions of XEN [69], [70].
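For illustration, a couple of typical legacy xm invocations following this synopsis are sketched below. The guest and host names are placeholders and these are not necessarily the exact commands used in this set-up.

# Hedged examples of the deprecated xm interface (XEND toolstack).
xm list                              # list running domains
xm migrate -l vmx destination-host   # -l requests a live migration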

A.1.2 XL Toolstack

The XL toolstack is a lightweight toolstack for the XL command line user interface, built on the libxenlight library (libxl). From XEN Project 4.1 it is the default toolstack; the XEND toolstack is still available but not enabled. The XEND code is fragile: it is prone to bugs at upgrade time and difficult to debug. With the XEND toolstack it is not possible to create PVH guests or integrate Remus. These problems are solved in the XL toolstack [71].
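The xl interface mirrors the xm subcommands; the hedged sketch below shows representative invocations. The guest name, config path and destination host are placeholders rather than the values used in this work.

# Hedged examples of the xl toolstack commands.
xl list                                  # list running domains
xl create /etc/xen/vmx.cfg               # start a guest from its config file
xl migrate vmx destination-host          # live-migrate the guest (over SSH by default)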

A.1.3 Libvirt

Libvirt is a toolkit that can be used to work with the virtualization features of a Linux OS. Its goal is to provide a layer that is sufficient to manage domains. Libvirt can be used to control multiple hypervisors through one interface: it provides APIs for modifying, controlling, creating, monitoring and migrating domains, and multiple nodes can be accessed with its help. These APIs also provide resource operations for managing domains. Companies (e.g. Oracle and SUSE) use libvirt for offering cloud services. The libvirt XEN driver must have access to the XEN daemon for proper functionality; for this reason, the UNIX socket interface is enabled [72], [73].
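As an illustration of the libvirt layer, the sketch below uses libvirt's virsh front end with the XEN driver. The connection URI, guest name and destination host are assumptions for the example, not the exact values used in this set-up.

# Hedged sketch of managing a XEN guest through libvirt's virsh front end.
virsh -c xen:///system list --all                      # list domains via the libvirt XEN driver
virsh -c xen:///system migrate --live vmx xen+ssh://destination-host/system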

A.1.4 Virt-Manager

Virt-manager is a desktop user interface that uses libvirt for managing virtual machines. It was developed by Red Hat and is included in CentOS. Its primary objective is to manage KVM VMs, but it also provides support for XEN and LXC (Linux containers). XEN does not include virt-manager by default, so it was installed separately. It provides services such as:

• Overview of running domains.

• Domains’ resource usage statistics.

• Creating new domains.

• Adjusting domains’ resource allocation [74].

Appendix B

B.1 Docker Installation, Container Creation

• First, CRIU version 2.1 was cloned from its git repository [75]. CRIU is a project implementing checkpoint/restore in Linux. The packages necessary for CRIU (e.g. protobuf, protobuf-c) were installed. To avoid errors from install-man, two additional packages, asciidoc and xmlto, were installed. To place CRIU in the standard paths, the make install command was run in the source directory. The command criu check was then run on the terminal to verify the installation; it returned ”Looks OK”, indicating a successful installation.

• The Docker binary used was a compiled Docker experimental release with the experimental checkpoint and restore feature. The binary was downloaded from its git repository and copied to the /usr/bin/ directory. To give it execution permission, the command sudo chmod +x /usr/bin/docker was run. To avoid a cgroup mounting error, the git repository [76] was cloned and the ./cgroupfs-mount command was run, which mounted the devices cgroup. The command /usr/bin/docker daemon -s aufs was then used to run Docker. To verify that this Docker build has the checkpoint and restore utility, the docker help command was run on the terminal.

• A new container was created with the command: /usr/bin/docker run -d busybox:latest /bin/sh -c ’i=0; while true; do echo $i >> /foo; i=$(expr $i + 1); sleep 1; done’. This command pulls the busybox image from Docker Hub and runs a container that echoes numbers starting from 0 into a file. The steps of this section are collected into a single sketch below.
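The sketch below gathers the steps described above into one script. It is a hedged reconstruction: it assumes the CRIU sources, the experimental Docker binary and the cgroupfs-mount repository have already been downloaded to the paths shown, which are placeholders.

#!/bin/bash
# Hedged sketch collecting the set-up steps of Appendix B.1; the source
# paths below are placeholders for this installation.
set -e

# Build and install CRIU 2.1, then verify it.
cd ~/criu && make && sudo make install
criu check                                   # expected output: "Looks OK"

# Install the experimental Docker binary with checkpoint/restore support.
sudo cp ~/docker-experimental/docker /usr/bin/docker
sudo chmod +x /usr/bin/docker

# Mount the cgroup hierarchies to avoid the cgroup mounting error.
cd ~/cgroupfs-mount && sudo ./cgroupfs-mount

# Start the Docker daemon with the aufs storage driver.
sudo /usr/bin/docker daemon -s aufs &

# Create the test container that keeps echoing a counter into /foo.
/usr/bin/docker run -d busybox:latest /bin/sh -c \
    'i=0; while true; do echo $i >> /foo; i=$(expr $i + 1); sleep 1; done'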

B.2 Container Migration

In this work, P.Haul [77], a project based on CRIU for implementing live migration, was used. It was cloned from its git repository [78] and installed by running the python setup.py install command in the source directory. On the destination node, the command ./p.haul-wrap service was run. To migrate a container with id CTID, the command ./p.haul-wrap client destination ip docker CTID (where destination ip is the IP address of the destination node) was run on the host node. However, P.Haul scans the /var/lib/docker/containers directory to find the full name of the migrating container. Unfortunately, the container names exist in the /var/lib/docker/0.0/containers folder, not in the /var/lib/docker/containers directory. Therefore, the migration stopped with an error, and migration of the Docker container was not possible.
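For completeness, the commands described above are gathered below. The destination address and container id are placeholders, and as explained above the final migration step failed on this set-up.

# Hedged sketch of the attempted P.Haul-based migration.
DEST_IP="192.168.1.20"        # placeholder destination address
CTID="<container id>"         # placeholder Docker container id

# Install P.Haul from the cloned sources (on both nodes).
cd ~/p.haul && sudo python setup.py install

# On the destination node: start the P.Haul service wrapper.
./p.haul-wrap service

# On the source node: ask P.Haul to live-migrate the Docker container.
# This step failed here because P.Haul looked for the container name in
# /var/lib/docker/containers instead of /var/lib/docker/0.0/containers.
./p.haul-wrap client "$DEST_IP" docker "$CTID"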
