The Impact on the Performance of Co-Running Virtual Machines in a Virtualized Environment

Gildo Torres and Chen Liu
Clarkson University, 8 Clarkson Ave, Potsdam, New York

ABSTRACT

The success of cloud computing technologies heavily depends on the underlying hardware as well as the system software support for virtualization. As hardware resources become more abundant with each technology generation, the complexity of managing the resources of computing systems has increased dramatically. Past research has demonstrated that contention for shared resources in modern multi-core multi-threaded microprocessors (MMMPs) can lead to poor and unpredictable performance. In this paper we conduct a performance degradation study targeting a virtualized environment. First, we present our findings on the possible impact on the performance of virtual machines (VMs) when managed by the default Linux scheduler as regular host processes. Second, we study how the performance of virtual machines can be affected by different ways of co-scheduling at the host level. Finally, we conduct a correlation study in which we strive to determine which hardware event(s) can be used to identify performance degradation of the VMs and the applications running within. Our experimental results show that if not managed carefully, the performance degradation of individual VMs can be as high as 135%. We believe that low-level hardware information collected at runtime can be used to assist the host scheduler in managing co-running virtual machines in order to alleviate contention for resources, thereby reducing the performance degradation of individual VMs as well as improving overall system throughput.

Keywords

Cloud Computing; Virtual Machine Management; Kernel Virtual Machine; Hardware Performance Counters

ARMS-CC'16, July 29, 2016, Chicago, IL, USA. © 2016 ACM. ISBN 978-1-4503-4227-8/16/07. DOI: http://dx.doi.org/10.1145/2962564.2962573

1. INTRODUCTION

Not so long ago, hardware resources were deemed scarce in the era of single-core microprocessors. Managing such resources for multi-programming systems was a matter of distributing the limited CPU time among multiple running threads. At the time, efforts were mainly aimed at balancing each thread's progress while maintaining priorities and enforcing fairness. One of the key factors that architects relied on for achieving better performance, along with innovative architectural improvements, was to increase the speed of the clock. In recent years, however, power-thermal issues have limited the pace at which processor frequency can be increased. In an effort to utilize the abundant transistor real estate available, and at the same time to contain the power-thermal issues, current developments in microprocessor design favor increasing core counts over frequency scaling to improve processor performance and energy efficiency.

As a result, chip multi-processors (CMPs) have been established as the dominant architecture employed by modern microprocessor design. Integrating multiple cores on a chip and multiple threads in a core adds new dimensions to the task of managing available hardware resources. In so-called multi-core multi-threading microprocessors (MMMPs), contention for shared hardware resources becomes a big challenge. For the scheduling algorithms used by the operating system (OS) in multi-core computing platforms, the primary strategy for distributing threads among cores is load balancing, for example, symmetric multiprocessing (SMP). The scheduling policy tries to balance the ready-to-run threads across available resources with the objective of ensuring a fair distribution of CPU time, minimizing idling as well as avoiding overloading of the cores. Threads compete for computation and memory resources if they are sharing the same core; if they are running on separate cores, they will contend for the Last Level Cache (LLC), memory bus or interconnects, DRAM controllers, and prefetchers if sharing the same die [21]. Previous studies [3, 6, 17, 15, 12, 20, 11, 10] have shown that contention on shared hardware resources affects the execution time of co-running threads and the memory bandwidth available to them.

The other side of the story is the flourishing of cloud computing technology. Cloud computing, facilitated by hardware virtualization technologies (e.g., Intel VT and AMD-V) and CMP architectures, has become pervasive and has transformed the way enterprises deploy and manage their IT infrastructures. Common services provided through cloud computing include infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), among others. It provides the foundation for a truly agile enterprise, so that IT can deliver an infrastructure that is flexible, scalable, and most importantly, economical through efficient resource utilization [16].

Virtualization offers users the illusion that their remote machine is running the operating system of their interest on its own dedicated hardware. However, underneath that illusion is a completely different reality, where different OS images (virtual machines) from different users are running concurrently on the same physical server. Because a single virtual machine (VM) normally will not fully utilize the hardware resources available on MMMPs, multiple VMs are put on the MMMP platforms to be executed simultaneously so as to improve the overall resource utilization on the cloud side. In aggregate, this means boosting system throughput in terms of the total number of VMs supported by the cloud service provider (CSP), and even reducing the energy cost of the CSP's infrastructure by consolidating the VMs and turning off the resources that are not being used.

Co-running VMs, however, are not exempt from contention for shared resources in MMMPs. Similar to the thread scenario, VMs compete for computation, memory, and I/O resources. Their performance directly depends on which VMs are put together side by side on the same core. If not managed carefully, this contention can cause a significant performance degradation of the VMs, defeating the original motivation for co-locating them.

Traditionally, load balancing on MMMPs has been under the purview of the OS scheduler. This is still the case in cloud environments that use hosted virtualization such as the Kernel Virtual Machine (KVM) [13]. In the case of bare-metal virtualization, the scheduler is implemented as part of the Virtual Machine Monitor (VMM, a.k.a. hypervisor). Regardless of where the scheduler resides, it tries to evenly balance the workload among existing cores. Normally, these workloads are processes and threads, but in a cloud environment they also include entire virtual machines. On top of that, the VMs (and the processes/threads within them) exhibit different behaviors at different times during their lifetimes, sometimes being computation-intensive, sometimes memory-intensive, sometimes I/O-intensive, and other times following a mixed behavior. The fundamental challenge is the semantic gap: the hypervisor is unaware of the runtime behavior of the concurrent VMs and the potential contention for processor resources they cause, and lacks the mechanism to act accordingly.

2.1 Virtual Machine Monitor

In a virtualized environment, the hypervisor is responsible for creating and managing the virtual machines. In this work we use the Kernel Virtual Machine (KVM) hypervisor. KVM [13] is a full virtualization solution for Linux that can run unmodified guest images. It has been included in the mainline Linux kernel since version 2.6.20 and is implemented as a loadable kernel module that converts the Linux kernel into a bare-metal hypervisor. KVM relies on hardware (CPUs) containing virtualization extensions such as Intel VT-x or AMD-V, leveraging those features to virtualize the CPU.

In the KVM architecture, the VMs are mapped to regular Linux processes (i.e., QEMU processes) and are scheduled by the standard Linux scheduler. This allows KVM to benefit from all the features of the Linux kernel, such as memory management, hardware device drivers, etc. Device emulation is handled by QEMU, which provides an emulated BIOS, PCI bus, USB bus, and a standard set of devices such as IDE and SCSI disk controllers, network cards, etc. [16].

2.2 Hardware Performance Counters

Hardware performance counters (HPCs) are special hardware registers available on most modern processors. These registers can be used to count the number of occurrences of certain types of hardware events, as well as occurrences of specific signals related to the processor's function, such as instructions executed, cache misses suffered, branches mispredicted, etc. These hardware events are counted at native execution speed, without slowing down the kernel or applications, because they use dedicated hardware that does not incur additional overhead. Although originally implemented for purposes such as debugging hardware designs during development, identifying bottlenecks, and tuning performance in program execution, nowadays they are widely used for gathering runtime information of programs and for performance analysis [18, 2]. The types and number of available events that can be tracked, as well as the methodologies for using these hardware counters, vary widely not only across architectures, but also across systems sharing the same Instruction Set Architecture.
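
To make the HPC discussion concrete, the following minimal sketch (ours, not taken from the paper) reads a counter from user space on Linux via the perf_event_open(2) system call, which underlies tools such as perf. It counts retired instructions for a region of code in the calling process; to monitor another process, such as the QEMU process backing a VM, that process's PID would be passed instead of 0.

/*
 * Minimal sketch: count retired instructions for a code region using
 * the Linux perf_event_open(2) interface to hardware performance counters.
 * This is an illustrative example, not the instrumentation used in the paper.
 */
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    /* glibc provides no wrapper, so invoke the raw syscall. */
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_INSTRUCTIONS;  /* retired instructions */
    attr.disabled = 1;
    attr.exclude_kernel = 1;
    attr.exclude_hv = 1;

    /* pid = 0, cpu = -1: measure the calling process on any CPU.
     * Passing a VM's QEMU PID here would count its events instead. */
    int fd = perf_event_open(&attr, 0, -1, -1, 0);
    if (fd == -1) {
        perror("perf_event_open");
        return EXIT_FAILURE;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* Region of interest: arbitrary work to generate events. */
    volatile uint64_t sum = 0;
    for (uint64_t i = 0; i < 10000000; i++)
        sum += i;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t count;
    if (read(fd, &count, sizeof(count)) != (ssize_t)sizeof(count)) {
        perror("read");
        close(fd);
        return EXIT_FAILURE;
    }
    printf("instructions retired: %llu\n", (unsigned long long)count);

    close(fd);
    return EXIT_SUCCESS;
}

Other hardware events (e.g., PERF_COUNT_HW_CACHE_MISSES) can be counted the same way by changing attr.config, which is how the kinds of events examined in the correlation study could be sampled at runtime.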

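Because KVM exposes each VM as an ordinary QEMU process (Section 2.1), host-level co-scheduling decisions can be expressed with standard Linux placement interfaces. The sketch below (again ours, with an illustrative command-line interface, not the paper's tool) pins a given thread, for example a VM's vCPU thread, to a chosen core using sched_setaffinity(2); a contention-aware host manager of the kind the paper envisions could combine such pinning with HPC readings to separate VMs that degrade each other. Note that sched_setaffinity affects only the thread whose ID is passed, so pinning a whole multi-threaded QEMU process means iterating over its threads (e.g., the entries under /proc/<pid>/task).

/*
 * Minimal sketch: pin a thread (e.g., a VM's vCPU thread) to one core.
 * Usage: ./pin <tid> <core>   -- arguments are illustrative.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <tid> <core>\n", argv[0]);
        return EXIT_FAILURE;
    }

    pid_t tid = (pid_t)atoi(argv[1]);   /* e.g., a QEMU vCPU thread ID */
    int core  = atoi(argv[2]);          /* target logical core */

    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(core, &mask);

    /* Restrict the thread identified by 'tid' to 'core'; threads it
     * creates afterwards inherit this mask. */
    if (sched_setaffinity(tid, sizeof(mask), &mask) == -1) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }

    printf("pinned tid %d to core %d\n", (int)tid, core);
    return EXIT_SUCCESS;
}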