Performance Evaluation of Virtualization Technology Juncheng Gu 5191-0572
Abstract—Although virtualization brings numerous benefits, it also incurs performance loss, so it is indispensable to evaluate the performance of virtualization. In this work, we measure and analyze the performance of three major virtualization technologies: LXC, Xen, and KVM. We separately test processor, memory, disk, and network virtualization as well as isolation with specific benchmarks, and then analyze the results by examining each technology's design and implementation. The results show that LXC, the lightweight virtualization, achieves the best performance, and that I/O virtualization (disk, network) is the performance bottleneck of all virtualization technologies. Our work can help users make informed decisions about their choice of hypervisor.

I. INTRODUCTION

1.1 Background

Virtualization technologies have become very important and gained widespread use in cloud computing and big data applications because of their tremendous benefits, such as flexibility, independence, isolation, security, high resource utilization, and power savings. In a virtualization system, the virtualization of the underlying hardware and the concurrent execution of virtual machines are handled by software called the virtual machine monitor (VMM), or hypervisor in Xen's terminology. The VMM abstracts the underlying hardware and presents the same view to every virtual machine, which enables a virtual machine to run on physical machines with different hardware configurations. Classified by the design of the VMM, there are three popular virtualization technologies: KVM, Xen, and Resource Container, which adopt full virtualization, para-virtualization, and container-based virtualization, respectively.

KVM (Kernel-based Virtual Machine) is a full-virtualization VMM and one of the most recent virtualization techniques [1]. With hardware extension support such as Intel VT or AMD-V, it can build an inexpensive virtual machine on x86 hardware with acceptable performance. Intel VT-x is a feature added to x86 hardware that switches the CPU into the hypervisor when a sensitive instruction is detected [2]. It addresses an inherent drawback of the x86 architecture for virtualization: the CPU fails to detect some sensitive instructions when the guest operating system executes them. Intel VT-x solves this problem by separating CPU execution into VMX root mode and VMX non-root mode. As shown in Figure 1.1, VMX non-root mode is the execution mode for the guest system. KVM is mainly responsible for handling VM Exits and executing VM Entry instructions.

Figure 1.1 Intel VT-x
Figure 1.2 KVM/QEMU flow

KVM is a standard kernel module that has been merged into the Linux kernel, so it can take advantage of both the standard Linux kernel and hardware virtualization technology. However, the KVM kernel module cannot create a virtual machine by itself; it requires the support of QEMU, a user-space process that is essentially a hardware emulator [3]. Figure 1.2 shows the cooperation between KVM and QEMU. The KVM kernel module turns the Linux kernel into a hypervisor. For each guest, QEMU emulates the guest's hardware and issues system calls on its behalf. When the guest system starts to execute, QEMU calls ioctl() to instruct the KVM kernel module to start the guest. KVM then performs a VM Entry and begins executing the guest system. When a sensitive instruction occurs, a VM Exit is performed, and KVM identifies the instruction and exits the VM.
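To make this flow concrete, the following is a minimal, illustrative sketch of the ioctl()-based KVM interface described above; it is not taken from QEMU, and guest memory setup, register initialization, and error handling are omitted.

    /* Sketch of a user-space VMM driving KVM through /dev/kvm.
     * Memory and register setup are omitted for brevity. */
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    int main(void)
    {
        int kvm  = open("/dev/kvm", O_RDWR);         /* talk to the KVM module  */
        int vm   = ioctl(kvm, KVM_CREATE_VM, 0UL);   /* create an empty VM      */
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0UL);  /* add one virtual CPU     */

        /* The kernel reports VM-exit information through a mmap'ed kvm_run. */
        long sz = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0UL);
        struct kvm_run *run = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, vcpu, 0);

        for (;;) {
            ioctl(vcpu, KVM_RUN, 0UL);               /* VM Entry: run the guest */
            switch (run->exit_reason) {              /* VM Exit: why we stopped */
            case KVM_EXIT_IO:                        /* guest touched an I/O port */
                /* emulate the device access in user space, then re-enter */
                break;
            case KVM_EXIT_HLT:                       /* guest halted            */
                return 0;
            default:
                fprintf(stderr, "unhandled exit %d\n", run->exit_reason);
                return 1;
            }
        }
    }

Each KVM_RUN ioctl corresponds to a VM Entry; the call returns on the next VM Exit, and exit_reason tells user space which event (for example, an I/O port access) must be emulated before the guest is re-entered.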
This QEMU/KVM flow is repeated for as long as the VM runs [4]. In KVM, the virtual machine (guest O/S) has no privilege to access I/O devices directly. As a feature of full virtualization, the virtual machine has no knowledge of the host operating system, because it is not aware that it is not running on a real machine. KVM's user-space side takes charge of I/O virtualization by employing a lightly modified QEMU to emulate the behavior of I/O devices or, when necessary, to drive the real I/O devices. Any I/O request from the guest O/S is trapped into user space and emulated by QEMU.

Figure 1.3 KVM structure

Xen is a para-virtualization hypervisor, proposed in [5]. Xen needs to modify the kernels of both the host and the guest O/S, but it requires no change to the application binary interface (ABI), so existing applications can run without extra modification. Through para-virtualization, Xen achieves high performance because the guest O/S knows that it is running in a virtualized environment. In Xen, only the hypervisor itself runs in ring 0, and the guest O/S runs in ring 1, which is different from full virtualization. Xen introduces the hypercall, which serves the same function as a syscall: a hypercall is to a hypervisor what a syscall is to a kernel. Guest domains issue hypercalls by raising a software trap into the hypervisor, just as a syscall is a software trap from an application into the kernel, and they use hypercalls to request privileged operations. The structure of Xen is shown in Figure 1.4. A special domain, Domain 0, is added as a control interface and is created when Xen boots. Domain 0 is responsible for creating and managing the other domains; it also schedules physical memory allocation and access to physical disk and network devices. Operating systems with different kernels can run on top of Xen. In our experiments, we use the Linux kernel for both host and guest systems so that the different virtualization technologies are compared under the same experimental conditions.

Figure 1.4 Xen structure

Resource Container is a container-based virtualization approach, also known as operating-system-level virtualization [6]. It works at the O/S level and is a lightweight virtualization technology: it logically contains all the system resources used by an application to carry out a particular independent activity. The difference between container-based and hypervisor-based virtualization is remarkable, as illustrated in Figure 1.5. Hypervisor-based virtualization provides an abstraction of a full guest OS, while container-based virtualization provides abstractions directly to the guest processes. Unlike hypervisor-based virtualization, which carries a high performance overhead, a resource container promises near-native performance. Since containers work at the operating system level, all virtual machines must share the same OS kernel, so the isolation provided by containers is expected to be weaker than that of traditional virtual machines. Here, we mainly consider Linux Containers (LXC). Isolation in LXC employs kernel namespaces, a Linux kernel feature that allows processes to have different views of the system; in addition, LXC mainly relies on an external mechanism, cgroups, for resource management, together with tools for configuring network namespaces and controlling processes (a sketch of these kernel primitives is given below).

Figure 1.5 Container-based virtualization vs Hypervisor-based virtualization
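As an illustration of these building blocks (this sketch is not LXC's own code), the program below uses the raw kernel primitives LXC is built on: clone() with namespace flags gives the child its own view of PIDs, hostname, mounts, and network devices, while resource limits would be applied separately through cgroups. Running it requires root privileges.

    /* Sketch of container-style isolation using Linux namespaces. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char child_stack[1024 * 1024];

    static int child_main(void *arg)
    {
        (void)arg;
        sethostname("container", 9);                  /* visible only in this UTS ns */
        printf("in container: pid=%d\n", getpid());   /* prints 1: new PID namespace */
        execlp("/bin/sh", "sh", (char *)NULL);        /* run a shell as "init"       */
        return 1;
    }

    int main(void)
    {
        /* New PID, mount, UTS and network namespaces for the child. */
        int flags = CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWNET | SIGCHLD;
        pid_t pid = clone(child_main, child_stack + sizeof(child_stack), flags, NULL);
        if (pid < 0) { perror("clone"); return 1; }
        waitpid(pid, NULL, 0);
        return 0;
    }

Inside the new PID namespace the shell sees itself as process 1, and the hostname change is invisible to the host; this per-process view of the system is exactly the isolation that LXC composes with cgroup-based resource limits.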
1.2 Motivation

However, these benefits are not free. Although virtualization technology provides many merits, it inevitably incurs some performance loss. Existing VMMs degrade the performance of certain operations. For example, I/O virtualization is the bottleneck of most VMMs because of the frequent traps and mode switches caused by I/O instructions, so I/O-intensive workloads can be greatly affected by poor I/O virtualization performance. In addition, memory management in a virtualization system is much more complicated than in a normal operating system; for example, address translation requires a two-layer mapping from guest-virtual to guest-physical to host-physical addresses. Although optimization approaches such as shadow page tables remove this complexity in the common case, the overhead is still high when page faults occur. Therefore, it is indispensable to measure and analyze the performance of virtualization technology in comparison with a bare-metal physical machine.

Secondly, as mentioned before, there are many virtualization technologies, each of which has specific advantages and shortcomings. To gain better performance, an application and a virtualization system should match each other's characteristics, so it is essential to know each VMM's features and select the right VMM before deploying applications. That is why measuring and analyzing the performance of VMMs is worthwhile.

II. RELATED WORK

A lot of work has been done on virtualization techniques, especially on comparing Xen and KVM. In [8], [9], and [10], researchers evaluate the performance of virtual machine monitors. P. Barham et al. proposed Xen and compared XenoLinux with native Linux, VMware, and User-mode Linux using SPEC CPU2000, OSDB, dbench, and SPECweb99 [5]. Chen evaluated the performance of OpenVZ, Xen, and KVM with SPEC CPU2006, RAMSPEED, Bonnie++, and NetIO, and analyzed their performance characteristics [11]. [12] evaluated the performance of Xen, VMware, and LXC, and mainly verified that the isolation of hypervisor-based virtualization is better than that of container-based virtualization. [13] made a qualitative comparison among Xen, KVM, and plain Linux in terms of overall performance, implementation details, and general features; that paper mainly measured network performance using Netperf and system performance using UnixBench. Andrea Chierici presented a comparison of Xen and KVM in terms of overall performance, performance isolation, and scalability [14]. That paper tested overall performance using the authors' own benchmark suite, which measured overall performance with a CPU-intensive test, a kernel compile, and IOzone write and read tests. Performance isolation was measured with SPECweb2005, which indicated that Xen has good isolation properties for