Evaluation of Different Hypervisors' Performance Using Different Benchmarks
International Journal of Scientific & Engineering Research, Volume 7, Issue 11, November-2016, ISSN 2229-5518

Evaluation of Different Hypervisors' Performance Using Different Benchmarks

Shrutika Dhargave, Prof. S. C. Karande
Computer Engineering, MIT, Pune

Abstract: Virtualization has become a popular way to make more efficient use of server resources within both private data centers and public cloud platforms. Hypervisors are widely used in cloud environments, and their impact on application performance has been a topic of significant research and practical interest. While recent advances in CPU architectures and new virtualization techniques have reduced the performance cost of virtualization, overheads still exist, particularly when multiple virtual machines are competing for resources. This paper compares some hypervisors on the basis of their performance. The hypervisors to be compared are XEN, VMware, KVM, and Hyper-V. The paper thus gives a brief idea of each hypervisor and its performance at different levels.

Keywords: Virtualization, Hypervisor, Performance, Benchmarks

I. Introduction

The recent growth in cloud environments has accelerated the advancement of virtualization through hypervisors; however, with so many different virtualization technologies, it is difficult to ascertain how different hypervisors impact application performance and whether the same performance can be achieved with each hypervisor. Many different hypervisors (both open source and commercial) exist today, each with its own advantages and disadvantages.

II. Background Knowledge

1. XEN server:
Xen is an open source hypervisor originally developed at the University of Cambridge and now distributed by Citrix Systems, Inc. The first public release of Xen occurred in 2003 [1]. It is designed for various hardware platforms, especially x86, and supports a wide range of guest operating systems, including Windows, Linux, Solaris, and versions of the BSD family. Xen has employed para-virtualization from the very beginning. Through para-virtualization, Xen can achieve very high performance, but this mode has the disadvantage of supporting Linux only; moreover, that Linux guest has to have a modified kernel and bootloader, and a fixed layout with two partitions, one for the hard disk and one for swap. Xen also implements support for hardware-assisted virtualization. In this configuration it does not require modifying the guest OS, which makes it possible to host Windows guests.

2. VMware ESXi:
VMware ESXi is an operating-system-independent hypervisor based on the VMkernel operating system, which interfaces with agents that run atop it. ESXi is the exclusive hypervisor for VMware vSphere 5.x licenses. VMware describes an ESXi system as similar to a stateless compute node; state information can be uploaded from a saved configuration file. ESXi's VMkernel interfaces directly with VMware agents and approved third-party modules. Virtualization administrators can configure VMware ESXi through its console or the VMware vSphere Client, and can check VMware's Hardware Compatibility List for approved, supported hardware on which to install ESXi [2].

3. KVM:
KVM is a hardware-assisted virtualization solution developed by Qumranet, Inc. that was merged into the upstream mainline Linux kernel in 2007, giving the Linux kernel native virtualization capabilities. KVM makes use of the virtualization extensions Intel VT-x and AMD-V. In 2008, Red Hat, Inc. acquired Qumranet. KVM is a kernel module for the Linux kernel which provides the core virtualization infrastructure and turns a Linux host into a hypervisor. Scheduling of processes and memory is handled through the kernel itself, while device emulation is handled by a modified version of QEMU [3]. The guest is actually executed in the user space of the host and looks like a regular process to the underlying host kernel. KVM supports I/O para-virtualization using the virtio subsystem. Virtio is a virtualization standard for device drivers (network, disk, etc.) in which the guest's device driver is aware of running in a virtual environment and communicates directly with the hypervisor. This enables guests to get high-performance network and disk operations.

4. Hyper-V:
Microsoft could not ignore the virtualization trend. Microsoft introduced Hyper-V as a virtualization platform in 2008, and it has continued to release new Hyper-V versions with new Windows Server versions. So far there are a total of four versions, shipped with Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and Windows Server 2008. Since Hyper-V's debut, it has always been a Windows Server feature that could be installed whenever a server administrator decided to do so. It is also available as a separate product called Microsoft Hyper-V Server. Basically, Microsoft Hyper-V Server is a standalone, shortened version of Windows Server in which Microsoft cut out everything irrelevant to virtualization, including services and the Graphical User Interface (GUI), to make the server as small as possible. Plus, without the bells and whistles, the server requires less maintenance time and is less vulnerable, because, for example, fewer components mean less patching. Hyper-V is a hybrid hypervisor, which is installed from the OS (via the Windows wizard for adding roles) [4].
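Xen's hardware-assisted mode, KVM, and Hyper-V all depend on the Intel VT-x / AMD-V extensions described above. As a small illustration (not part of the surveyed papers), the following Python sketch, assuming a Linux host, checks /proc/cpuinfo for these extensions and for the /dev/kvm device that a loaded KVM module exposes:

```python
import os

def hw_virt_support():
    """Check which hardware virtualization extension the CPU advertises."""
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break
    if "vmx" in flags:
        ext = "Intel VT-x"   # required by KVM and Xen HVM on Intel CPUs
    elif "svm" in flags:
        ext = "AMD-V"        # the AMD equivalent
    else:
        ext = None
    # /dev/kvm appears once the kvm kernel module is loaded
    return ext, os.path.exists("/dev/kvm")

if __name__ == "__main__":
    ext, kvm_ready = hw_virt_support()
    print("hardware virtualization:", ext or "not advertised")
    print("/dev/kvm present:", kvm_ready)
```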
III. Performance based on different methods

1. Hadoop Benchmark:
In paper [5], the Hadoop implementation of the MapReduce framework is used. The Hadoop cluster comprises 4 nodes that all reside in the same physical machine. Each node has one virtual core pinned to a different physical core, is allotted 2 GB of memory and 50 GB of disk space, and is set to run at most 2 map tasks or 2 reduce tasks. Three Hadoop benchmarks found in version 1.0.2 are run, namely TestDFSIO Write/Read and TeraSort, along with three benchmarks from the HiBench suite, namely Word Count, K-Means Clustering, and Hivebench. Each of these benchmarks was run ten times and the average was used.

For HiBench, the performance impact when using different hypervisors was negligible. The performance difference between the hypervisors for each benchmark is not very significant, with the highest percentage difference being 15.7% for Hive Aggregate. The results show very high saturation of the CPU for each hypervisor for Word Count, while the disk is utilized at an average of less than 40% for each hypervisor.

For TestDFSIO Write on a 4-node Hadoop cluster, varying the write size to HDFS from 5 GB to 25 GB in 5 GB increments shows that KVM has the slowest write throughput for all sizes, at an average of 42.0 MB/s. The performance variation is due to the efficiency of request merging when processing a large number of write requests to disk. For the read-intensive benchmark TestDFSIO Read, KVM reads data at an average throughput of 23.2 MB/s for the four data sizes. Looking at the physical core utilizations for these VCPUs at the host level, only KVM is heavily utilizing the physical CPUs.

TeraSort is both a CPU- and disk-intensive benchmark. Each worker has to read data from disk and perform the sort (where intermediate data gets written back to disk), then send it to the reducer node, where the reducer again performs an aggregate sort and writes the final result back to disk. For this combination of heavy read- and write-intensive I/O and CPU used for sorting, Xen performed the best and was able to sort the data the quickest, while KVM's performance was slightly worse than Xen's. In TestDFSIO Read, KVM was faster than Xen, but in the map phase of TeraSort, Xen is faster than KVM. This difference is due to the added CPU needed to compute the sort on the data: whereas Xen can use dom0 to offload some of the disk reads, which frees up CPU for the Xen VMs to perform the sort, the KVM VMs must use their own CPU to both read the data and perform the sort. During the reduce phase, we see that Xen completes faster while KVM takes the longest time. In TestDFSIO Write, Xen was found to be slower than KVM.
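As a sketch of how such TestDFSIO runs can be driven and their throughput parsed (an illustration, not the paper's actual scripts; the jar location, file count, and output parsing are assumptions that depend on the Hadoop 1.0.2 installation), consider:

```python
import re
import subprocess

# Location of the Hadoop 1.0.2 test jar -- an assumption; adjust to your install.
TEST_JAR = "/usr/local/hadoop/hadoop-test-1.0.2.jar"

def run_testdfsio(mode, n_files, file_size_mb):
    """Run TestDFSIO (-write or -read) and return the reported throughput in MB/s."""
    cmd = ["hadoop", "jar", TEST_JAR, "TestDFSIO",
           "-" + mode, "-nrFiles", str(n_files), "-fileSize", str(file_size_mb)]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    out = proc.stdout + proc.stderr   # TestDFSIO logs its summary to the console
    m = re.search(r"Throughput mb/sec:\s*([\d.]+)", out)
    return float(m.group(1)) if m else None

if __name__ == "__main__":
    # Mirror the paper's sweep: total write size from 5 GB to 25 GB in 5 GB steps.
    # 8 files matches the cluster's 4 nodes x 2 map slots (an assumption).
    for total_gb in range(5, 30, 5):
        mbs = run_testdfsio("write", n_files=8, file_size_mb=total_gb * 1024 // 8)
        print(total_gb, "GB write:", mbs, "MB/s")
```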
2. GPU Pass-through Performance:
GPU virtualization and GPU pass-through are used within a variety of contexts, from high performance computing to virtual desktop infrastructure. Accessing one or more GPUs within a virtual machine is typically accomplished by one of two strategies: 1) via API remoting with device emulation; or 2) using PCI pass-through. In paper [6], GPGPU performance within virtual machines is characterized across two hardware systems (Bespin and Delta), 4 hypervisors, and 3 application sets.

KVM again performs well across both the Delta and Bespin systems; in the case of the Delta system, in fact, KVM significantly outperforms the base system. VMware performs close to the base system, while Xen achieves between 72% and 90% of the base system's performance. LAMMPS is unique among the benchmarks in that it exercises both the GPU and multiple CPU cores, and it performs well across both hypervisors and systems. Surprisingly, LAMMPS showed better efficiency on the Delta system than on the Bespin system, achieving greater than 98% efficiency across the board, while Xen on the Bespin system occasionally drops as low as 96.5% efficiency. LULESH is a highly compute-intensive simulation with limited data movement between the host/virtual machine and the GPU, making it ideal for GPU acceleration. Overall, very little overhead is seen; there is a slight scaling effect that is most apparent in the case of the Xen hypervisor.

3. SIGAR Framework:
The SIGAR framework is used for Xen. In the CPU utilization test, lower CPU consumption and less variation (in the case of medium and high workloads) is better for a guest OS.
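SIGAR obtains such CPU utilization figures from OS-level counters. As a minimal, framework-free sketch of the same measurement (an illustration assuming a Linux guest, not the benchmark's actual code), the following Python snippet derives utilization from two readings of /proc/stat and samples it repeatedly, so that both the mean consumption and its variation under a workload can be observed:

```python
import time

def read_cpu_times():
    """Return (busy, total) jiffies from the aggregate 'cpu' line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]   # idle + iowait
    total = sum(fields)
    return total - idle, total

def cpu_utilization(interval=1.0):
    """Percentage of CPU time spent busy over the sampling interval."""
    busy1, total1 = read_cpu_times()
    time.sleep(interval)
    busy2, total2 = read_cpu_times()
    return 100.0 * (busy2 - busy1) / (total2 - total1)

if __name__ == "__main__":
    samples = [cpu_utilization() for _ in range(5)]
    mean = sum(samples) / len(samples)
    print("samples:", ["%.1f%%" % s for s in samples])
    print("mean: %.1f%%" % mean)
```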