International Journal of Scientific & Engineering Research, Volume 7, Issue 11, November 2016, ISSN 2229-5518

Evaluation of Different Performances Using Different Benchmarks

Shrutika Dhargave, Computer Engineering, MIT, Pune ([email protected])
Prof. S. C. Karande, Computer Engineering, MIT, Pune ([email protected])

Abstract: Virtualization has become a popular way to make more efficient use of server resources within both private data centers and public cloud platforms. Hypervisors are widely used in cloud environments, and their impact on application performance has been a topic of significant research and practical interest. While recent advances in CPU architectures and new virtualization techniques have reduced the performance cost of virtualization, overheads still exist, particularly when multiple virtual machines are competing for resources. This paper covers the comparison of some hypervisors based on their performance. The hypervisors to be compared are Xen, VMware, KVM, and Hyper-V. Thus, the paper gives a brief idea about each hypervisor and its performance at different levels.

Keywords: Virtualization, Hypervisor, Performance, Benchmarks

I. Introduction

The recent growth in cloud environments has accelerated the advancement of virtualization through hypervisors; however, with so many different virtualization technologies, it is difficult to ascertain how different hypervisors impact application performance and whether the same performance can be achieved with each of them. Many different hypervisors (both open source and commercial) exist today, each with their own advantages and disadvantages.

II. Background Knowledge

1. Xen:
Xen is an open source hypervisor originally developed at the University of Cambridge and now distributed by Citrix Systems, Inc. The first public release of Xen occurred in 2003 [1]. It is designed for various hardware platforms, especially x86, and supports a wide range of guest operating systems, including Windows, Linux, Solaris and versions of the BSD family. Xen has employed para-virtualization from the very beginning. Through para-virtualization, Xen can achieve very high performance, but this mode has the disadvantage of supporting only Linux guests; that Linux has to have a modified kernel and bootloader, and a fixed layout with two partitions, one for the hard disk and one for swap. Xen also implements support for hardware-assisted virtualization. In this configuration it does not require modifying the guest OS, which makes it possible to host Windows guests.
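This hardware-assisted mode of Xen, like KVM described below, relies on the Intel VT-x or AMD-V extensions being present on the host CPU. As a small illustration, not part of any of the cited studies and assuming a Linux host that exposes /proc/cpuinfo, the following Python sketch checks whether those extensions are advertised:

# Sketch: detect the hardware virtualization extensions (Intel VT-x -> "vmx",
# AMD-V -> "svm") that Xen HVM and KVM depend on. Assumes a Linux host.
def hw_virt_extensions(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"vmx", "svm"} & flags
    return set()

if __name__ == "__main__":
    found = hw_virt_extensions()
    if found:
        print("Hardware-assisted virtualization available:", ", ".join(sorted(found)))
    else:
        print("No VT-x/AMD-V flags found; only para-virtualized guests are possible.")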


2. VMware ESXi:
VMware ESXi is an operating-system-independent hypervisor based on the VMkernel operating system, interfacing with agents that run atop it. ESXi is the exclusive hypervisor for VMware vSphere 5.x licenses. VMware describes an ESXi system as similar to a stateless compute node; state information can be uploaded from a saved configuration file. ESXi's VMkernel interfaces directly with VMware agents and approved third-party modules. Virtualization administrators can configure VMware ESXi through its console or the VMware vSphere Client, and check VMware's Hardware Compatibility List for approved, supported hardware on which to install ESXi [2].

3. KVM:
KVM is a hardware-assisted virtualization solution developed by Qumranet, Inc. It was merged into the upstream mainline Linux kernel in 2007, giving the Linux kernel native virtualization capabilities. KVM makes use of the virtualization extensions Intel VT-x and AMD-V. In 2008, Red Hat, Inc. acquired Qumranet. KVM is a kernel module for the Linux kernel which provides the core virtualization infrastructure and turns a Linux host into a hypervisor. Scheduling of processes and memory is handled by the kernel itself, while device emulation is handled by a modified version of QEMU [3]. The guest is actually executed in the user space of the host and looks like a regular process to the underlying host kernel. KVM supports I/O para-virtualization using the virtio subsystem. Virtio is a virtualization standard for device drivers (network, disk, etc.) in which the guest's device driver is aware that it is running in a virtual environment and communicates directly with the hypervisor. This enables guests to achieve high-performance network and disk operations.

4. Hyper-V:
Microsoft could not ignore the virtualization trend. Microsoft introduced Hyper-V as a virtualization platform in 2008 and has continued to release new Hyper-V versions with new Windows Server versions. So far, there are a total of four versions, for Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2 and Windows Server 2008. Since Hyper-V's debut, it has always been a Windows Server feature which can be installed whenever a server administrator decides to do so. It is also available as a separate product called Microsoft Hyper-V Server. Basically, Microsoft Hyper-V Server is a standalone and shortened version of Windows Server in which Microsoft cut out everything irrelevant to virtualization, along with other services and the Graphical User Interface (GUI), to make the server as small as possible. Without the bells and whistles, the server requires less maintenance time and is less vulnerable because, for example, fewer components mean less patching. Hyper-V is a hybrid hypervisor, which is installed from the OS (via the Windows wizard for adding roles) [4].

III. Performance based on different methods

1. Hadoop Benchmark:
In paper [5], the Hadoop implementation of the MapReduce framework is used. The Hadoop cluster comprises 4 nodes that all reside in the same physical machine. Each node has one virtual core pinned to a different physical core, is allotted 2 GB of memory and 50 GB of disk space, and is set to run at most 2 map tasks or 2 reduce tasks. Three different Hadoop benchmarks found in version 1.0.2 are run, including TestDFSIO Write/Read and TeraSort, together with 3 benchmarks from the HiBench suite, namely Word Count, K-Means Clustering, and Hivebench. Each of these benchmarks was run ten times and the average was used.
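As a rough illustration of how such runs could be scripted, the following Python sketch invokes the TestDFSIO write benchmark several times and averages the reported throughput. It is not code from [5]; the jar name, file count, file size and the throughput line it parses are assumptions for a Hadoop 1.0.2 installation with the hadoop command on the PATH.

# Hypothetical driver for repeated TestDFSIO write runs (Hadoop 1.0.2 assumed).
import re
import subprocess

TEST_JAR = "hadoop-test-1.0.2.jar"  # assumed name/location of the benchmark jar

def run_testdfsio_write(n_files=10, file_size_mb=512):
    """Run one TestDFSIO write pass and return the reported throughput (MB/s)."""
    cmd = ["hadoop", "jar", TEST_JAR, "TestDFSIO",
           "-write", "-nrFiles", str(n_files), "-fileSize", str(file_size_mb)]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # TestDFSIO normally reports a line such as "Throughput mb/sec: 42.0"
    match = re.search(r"Throughput mb/sec:\s*([\d.]+)", out.stdout + out.stderr)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    results = [run_testdfsio_write() for _ in range(10)]  # ten runs, as in [5]
    valid = [r for r in results if r is not None]
    if valid:
        print("mean write throughput: %.1f MB/s" % (sum(valid) / len(valid)))
    else:
        print("no throughput figures could be parsed")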


For HiBench, the performance impact of using different hypervisors was negligible. The performance difference between the hypervisors for each benchmark is not very significant, with the highest percentage difference being 15.7% for Hive Aggregate. The results show very high saturation of the CPU for each hypervisor for Word Count, while the disk is utilized at an average of less than 40% for each hypervisor.
For TestDFSIO Write on a 4-node Hadoop cluster, varying the write size to HDFS from 5 GB to 25 GB in 5 GB increments shows that KVM has the slowest write throughput for all sizes, at an average of 42.0 MB/s. The performance variation is due to the efficiency of request merging when processing a large number of write requests to disk. For the read-intensive benchmark TestDFSIO Read, KVM reads data at an average throughput of 23.2 MB/s for the four data sizes. Looking at the physical core utilizations for these VCPUs at the host level, only KVM heavily utilizes the physical CPUs.
TeraSort is both a CPU- and disk-intensive benchmark. Each worker has to read data from disk, perform the sort (where intermediate data gets written back to disk), and send it to the reducer node, where the reducer again performs an aggregate sort and writes the final result back to disk. For this combination of heavy read- and write-intensive I/O and the CPU used for sorting, Xen performed the best and was able to sort the data the quickest, while KVM's performance was slightly worse than Xen's. For TestDFSIO Read, KVM was faster than Xen, but in the map phase of TeraSort Xen is faster than KVM. This difference is due to the added CPU needed to compute the sort on the data: whereas Xen can use dom0 to offload some of the disk reads, which frees up CPU for the Xen VMs to perform the sort, the KVM VMs must use their own CPU both to read and to perform the sort. During the reduce phase, Xen completes fastest while KVM takes the longest time. In TestDFSIO Write, Xen was found to be slower than KVM.

2. GPU Pass-through Performance:
GPU virtualization and GPU pass-through are used in a variety of contexts, from high performance computing to virtual desktop infrastructure. Accessing one or more GPUs within a virtual machine is typically accomplished by one of two strategies: 1) via API remoting with device emulation; or 2) using PCI pass-through. In [6], GPGPU performance within virtual machines is characterized across two hardware systems, 4 hypervisors, and 3 application sets.
KVM again performs well across both the Delta and Bespin systems; in the case of the Delta system, in fact, KVM significantly outperforms the base system. VMware performs close to the base system, while Xen achieves between 72% and 90% of the base system's performance. LAMMPS is unique among the benchmarks in that it exercises both the GPU and multiple CPU cores. LAMMPS performs well across both hypervisors and systems. Surprisingly, LAMMPS showed better efficiency on the Delta system than on the Bespin system, achieving greater than 98% efficiency across the board, while Xen on the Bespin system occasionally drops as low as 96.5% efficiency. LULESH is a highly compute-intensive simulation with limited data movement between the host/virtual machine and the GPU, making it ideal for GPU acceleration. Overall, there is very little overhead; a slight scaling effect is most apparent in the case of the Xen hypervisor.
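To make strategy (2) concrete, the sketch below attaches a physical GPU to a guest by PCI pass-through using the libvirt Python bindings under KVM. It is an illustration rather than the setup used in [6]; the guest name "gpu-guest" and the PCI address 0000:03:00.0 are placeholders that would normally be taken from lspci or virsh nodedev-list.

# Sketch: PCI pass-through of a GPU to a libvirt/KVM guest (placeholders used).
import libvirt

GPU_HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")      # local KVM/QEMU hypervisor connection
dom = conn.lookupByName("gpu-guest")       # hypothetical, previously defined guest
# After attachment the GPU appears as a native PCI device inside the VM, so the
# CUDA/OpenCL stack can drive it directly, without API remoting or emulation.
dom.attachDeviceFlags(GPU_HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()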


3. SIGAR Framework:
SIGAR framework for XenServer: in the CPU utilization test, lower CPU consumption and less variation (in the case of medium and high workloads) is better for a guest OS. In the memory test, more available memory indicates superior performance of a guest OS. In the disk I/O tests, higher sequential read and write rates are the signs of a better guest OS on the XenServer hypervisor. And in network performance, a high transfer rate and less variation (in the case of medium to high workloads) indicate better performance for a guest OS [7].
SIGAR framework for ESXi: in the CPU utilization test, lower CPU consumption and less variation (in the case of medium and high workloads) is good for a guest OS. In the memory test, more available memory indicates superior performance of a guest OS. In the disk I/O tests, higher read and write rates are the signs of a better guest OS on the ESXi hypervisor. And in network performance, a high transfer rate and less variation (in the case of medium to high workloads) indicate better performance for a guest OS [8].

4. FTP and HTTP approach:
Regarding transfer time, the real (non-virtualized) environment performs approximately 2 times better than the hypervisors, which means that a hypervisor introduces a considerable time delay during FTP transmission. The real environment also has better CPU consumption and memory utilization than the hypervisors. The transfer time of the OpenVZ FTP server is smaller than that of the other hypervisors. This happens because the OpenVZ hypervisor is less complex than the others: in OpenVZ each guest OS has the same kernel as the host OS, and each guest looks like a simple process.
The XEN-PV FTP server has better performance in CPU consumption because XEN-PV has better isolation between guest OS and host OS. The memory utilization of the XEN-PV FTP server is better than that of the other hypervisors, with the OpenVZ hypervisor in second place. The worst memory utilization is that of the KVM FTP/Web server, because KVM incurs more overhead than the other hypervisors. The transfer time of the OpenVZ and XEN-PV Web servers is better than that of the other hypervisors. CPU consumption in KVM-PV is better than in KVM-FV because of the implementation of the "virtio" driver in KVM-PV [9].
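As a rough sketch of the kind of measurement behind the SIGAR and FTP/HTTP evaluations above, the snippet below times an HTTP download while sampling CPU usage and reporting available memory. It is not code from [7], [8] or [9]; the psutil library is used here as a stand-in for SIGAR, and the URL and chunk size are illustrative placeholders.

# Sketch: measure transfer time plus CPU and memory during an HTTP download.
import time
import urllib.request
import psutil   # stand-in for the SIGAR framework used in [7] and [8]

URL = "http://server.example/testfile.bin"   # placeholder test file

def timed_http_transfer(url):
    cpu_samples = []
    psutil.cpu_percent(interval=None)            # prime the CPU-usage counter
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        while resp.read(1 << 20):                # pull the file in 1 MiB chunks
            cpu_samples.append(psutil.cpu_percent(interval=None))
    elapsed = time.perf_counter() - start
    return elapsed, cpu_samples, psutil.virtual_memory().available

if __name__ == "__main__":
    elapsed, cpu, avail = timed_http_transfer(URL)
    print("transfer time: %.2f s" % elapsed)
    print("mean CPU during transfer: %.1f %%" % (sum(cpu) / max(len(cpu), 1)))
    print("available memory after transfer: %d MiB" % (avail // 2**20))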


Hadoop Benchmarks", IEEE International Congress on Big Data (BigData Congress) 2013. [6] John Paul Walters, Andrew J. Younge, Dong-In Kang, Ke-Thia Yao, Mikyung Kang, Stephen P. Crago, and Geoffrey C. Fox, "GPU Pass-through Performance: A Comparison of KVM, Xen, VMWare ESXi, and LXC for CUDA and OpenCL Applications" IEEE 7th International Conference on (CLOUD), 2014. [7] Vijaya Vardhan Reddy, Dr. Lakshmi Rajamani, "Performance Evaluation of Operating Systems in the Private Cloud with XenServer Hypervisor Using SIGAR Framework",9th International Conference on Computer Science & Education (ICCSE), 2014. [8] Vijaya Vardhan Reddy, Dr. Lakshmi Rajamani, "Evaluation of Different Operating Systems Performance in the Private Cloud with ESXi Hypervisor using SIGAR Framework", 5th International Conference - Confluence The Next Generation Information Technology Summit (Confluence), 2014. [9] Igli Tafa, ErmalIJSER Beqiri, Hakik Paci, Elinda Kajo, Aleksandër Xhuvani, "The evaluation of Transfer Time, CPU Consumption and Memory Utilization in XEN-PV, XEN-HVM, OpenVZ, KVM-FV and KVM-PV Hypervisors using FTP and HTTP approaches", Third International Conference on Intelligent Networking and Collaborative Systems (INCoS), 2011
