Understanding the Effects of Hypervisor I/O Scheduling for Virtual Machine Performance Interference

Ziye Yang, Haifeng Fang, Yingjun Wu, Chunqi Li, Bin Zhao
EMC Labs China
{ziye.yang, fang.haifeng, yingjun.wu, chunqi.li, bin.zhao}@emc.com

H. Howie Huang
The George Washington University
[email protected]

Abstract

In virtualized environments, the customers who purchase virtual machines (VMs) from a third-party cloud would expect that their VMs run in an isolated manner. However, the performance of a VM can be negatively affected by co-resident VMs. In this paper, we propose vExplorer, a distributed VM I/O performance measurement and analysis framework, where one can use a set of representative I/O operations to identify the I/O scheduling characteristics within a hypervisor, and potentially leverage this knowledge to carry out I/O based performance attacks to slow down the execution of the target VMs. We evaluate our prototype on both Xen and VMware platforms with four server benchmarks and show that vExplorer is practical and effective. We also conduct similar tests on Amazon's EC2 platform and successfully slow down the performance of target VMs.

1. Introduction

Cloud providers employ virtualization techniques that allow physical machines to be shared by multiple virtual machines (VMs) owned by different tenants. While resource sharing improves hardware utilization and service reliability, it may also open doors to side-channel or performance interference attacks by malicious tenants. For example, CPU cache based attacks have been studied in cloud environments [1, 2, 3, 4]; these might be mitigated to a lesser degree when each core in new multi-core CPUs is used exclusively by a single VM (at the cost of reduced CPU utilization). On the other hand, I/O resources are mostly shared in virtualized environments, and I/O based performance attacks remain a great threat, especially for data-intensive applications [5, 6, 7]. In this paper, we discuss the possibility of such attacks, and especially focus on the effects of disk I/O scheduling in a hypervisor on VM performance interference.

The premise of virtual I/O based attacks is to deploy malicious VMs that are co-located with target VMs and aim to slow down their performance by over-utilizing the shared I/O resources. Previous work shows the feasibility of co-locating VMs on the same physical machine in a public cloud [1]. In this work, we will demonstrate that a well designed measurement framework can help study virtual I/O scheduling, and that such knowledge can potentially be applied to exploit the usage of the underlying I/O resources.

Extracting the I/O scheduling knowledge of a hypervisor is challenging. Generally, hypervisors can be divided into two classes: open-source hypervisors (e.g., Xen) and closed-source hypervisors (e.g., VMware ESX server). For an open-source hypervisor, while the knowledge of the I/O schedulers is public, which one is in use is unknown. To address this problem, we use a gray-box method in our framework to classify the scheduling algorithm. For a closed-source hypervisor, we use black-box analysis to obtain scheduling properties such as I/O throughput and latency.

With the knowledge of the I/O scheduling algorithm, a malicious user can intentionally slow down co-located (co-resident) VMs by launching various attacking workloads. The main feature of such I/O performance attacks is to deploy non-trivial I/O workloads and manipulate the shared I/O queues to gain an unfair advantage. Note that space and time locality are the two major considerations in I/O schedulers. For example, some scheduling algorithms (e.g., Deadline, and Completely Fair Queuing or CFQ) merge I/O requests that are contiguous in logical block address (LBA) for better space locality, while other algorithms (e.g., Anticipatory Scheduling or AS [8], and CFQ as well) keep a time window to anticipatorily execute incoming I/O requests that are adjacent in LBA to previous requests.

In this work, we design and develop a distributed performance measurement and analysis framework, vExplorer, that allows co-resident VMs to issue a group of I/O workloads to understand the I/O scheduling algorithms in a hypervisor. In particular, two types of representative workloads are proposed in this framework: the Prober workload is responsible for identifying the I/O scheduling characteristics of a hypervisor, which include the algorithm and related properties, and the Attacker workload can be utilized to form I/O performance attacks, where one can dynamically configure the I/O workloads with parameters (e.g., the percentage of read/write operations) based on the extracted scheduling knowledge. To summarize, we make the following contributions in this paper:

• We design and develop vExplorer, which can be used to identify the characteristics of I/O scheduling in a hypervisor. Also, the Prober workloads can be adopted as an I/O profiling benchmark in virtualized environments.
• We discuss the feasibility of VM based I/O performance attacks through a simple mathematical model, and also design a set of Attacker workloads that are shown effective on virtualized platforms such as Xen and VMware. Furthermore, we conduct experiments on the Amazon EC2 platform [9], where several VMs are deployed on a physical host and their virtual disks (local instance store) are mapped onto one local disk. For four benchmarks, we observe significant performance reduction on target VMs.

The remainder of this paper is organized as follows. Section 2 presents the design and implementation of our prototype system, vExplorer. Section 3 presents the profiling work of I/O scheduling on both Xen and VMware. Section 4 demonstrates VM I/O scheduling based attacks with the predefined mathematical model. Section 5 shows a case study of our approach on Amazon EC2, and Section 6 discusses related work. Finally, we conclude in Section 7.

2. System Design and Implementation

The challenge of exploring I/O performance attacks is to control the access patterns of the I/O workloads in various VMs for extracting the scheduling characteristics of a hypervisor. Figure 1 shows the architecture of the vExplorer system, which consists of a distributed I/O controller (DC), I/O measurement daemons (IMDs) and an analytical module (AM). When the measurement begins, the Monitor in the DC interacts with the IMDs within various VMs and directs each IMD to execute the I/O tasks generated by the Workload module; then the outputs produced by each IMD are stored into the Output Container (e.g., a database); finally the DC delivers the results to the AM for knowledge extraction. This process can be repeated iteratively for training and analysis.

Fig. 1. vExplorer System Architecture

2.1. Distributed I/O Controller

The Monitor module is in charge of communicating with each IMD and dispatching the workloads. At the beginning, it waits for the registry requests from each IMD. Upon receiving a registry request, a service process is spawned for information exchange through the network. When the number of IMDs exceeds a threshold (e.g., 3), the Monitor starts to dispatch the I/O workloads. The workloads are defined through a group of single I/O commands (IOs), in the form <sequence id, daemon id, launch time, end time, file info, IO mode, IO offset, IO size>, shown in Table 1.

TABLE 1. IO Description

Sequence id | Unique id of the IO in time sequence
Daemon id   | The execution owner (IMD) of the IO
Launch time | The launch time of the IO, controlled by the DC
End time    | The ending time of the IO, collected by each IMD
File info   | Target file of the IO, e.g., /dev/sda1
IO mode     | Operation mode: read or write, sync or non-sync
IO offset   | The offset of the IO
IO size     | The I/O size, e.g., 4KB, 8KB, etc.

We also define several typical workload modes that will be used in our experiments.

• Sequential mode. Each program sequentially reads or writes a target file (i.e., a raw disk device) from beginning to end. Furthermore, if each adjacent pair of IOs (sorted by issuing time) satisfies the formula IO_j(IO offset) = IO_i(IO offset) + IO_i(IO size), then the workload can be categorized as seq-non-gap mode, which is designed for verifying the scheduling optimization for space locality.
• Burst mode. Each receiver continually runs a given set of I/O tasks in a time interval. This mode can be applied to identify the maximum I/O throughput of the hypervisor.
• Random mode. Among a fixed number of I/O commands, the program randomly reads/writes a target file in a given ratio (ranging from 0% to 100%), and the remaining IOs are sequential I/O commands. The usage of random mode is to measure VM I/O latency at different I/O sizes.

2.2. I/O Measurement Daemon

The IMD, a daemon running within a VM, is responsible for interacting with the Monitor and executing dispatched I/O commands. Once an IMD is adopted as a working node by the Monitor, it spawns several IOworkers according to the requirements from the Monitor. For executing IOs at a specified launch time, two approaches are provided:

• Time synchronization. Each VM holding an IMD must synchronize its time with the DC host through NTP (Network Time Protocol) during working node registration.
• Timer event control. We choose the timer policy proposed in Linux 2.6 due to its flexibility and accuracy, which is not affected by the side effects of process scheduling.

2.3.
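The space-locality optimization described in Section 1 (merging LBA-contiguous requests, as Deadline and CFQ do) can be illustrated by a simplified sketch; this is our own toy model, not the kernel's actual request-merging code:

```python
def merge_contiguous(requests):
    """Coalesce (offset, size) requests whose addresses are back-to-back,
    roughly as Deadline/CFQ merge LBA-contiguous I/O for space locality."""
    merged = []
    for off, size in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == off:
            # Tail of the previous request touches this one: extend it.
            merged[-1] = (merged[-1][0], merged[-1][1] + size)
        else:
            merged.append((off, size))
    return merged

# Two contiguous 4 KB requests coalesce into one 8 KB request;
# the request at offset 16384 is not adjacent and stays separate.
print(merge_contiguous([(0, 4096), (4096, 4096), (16384, 4096)]))
```

A workload that always leaves gaps between consecutive offsets defeats this merging, which is one way a Prober workload can reveal whether the hypervisor's scheduler optimizes for space locality.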
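The IO descriptor of Table 1 and the seq-non-gap condition of Section 2.1 can be made concrete with a few lines of Python; the field and function names below are our own illustration, not vExplorer's actual code:

```python
from dataclasses import dataclass

@dataclass
class IO:
    """One I/O command, mirroring a subset of the fields in Table 1."""
    sequence_id: int
    launch_time: float   # assigned by the DC
    offset: int          # IO offset in bytes
    size: int            # IO size in bytes

def is_seq_non_gap(ios):
    """True iff every adjacent pair (sorted by issuing time) satisfies
    IO_j(IO offset) = IO_i(IO offset) + IO_i(IO size)."""
    ordered = sorted(ios, key=lambda io: io.launch_time)
    return all(nxt.offset == cur.offset + cur.size
               for cur, nxt in zip(ordered, ordered[1:]))

# Three back-to-back 4 KB IOs form a seq-non-gap workload.
ios = [IO(i, float(i), i * 4096, 4096) for i in range(3)]
assert is_seq_non_gap(ios)
assert not is_seq_non_gap(ios + [IO(3, 3.0, 999999, 4096)])
```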
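The random mode of Section 2.1 mixes random and sequential IOs at a configurable ratio. A minimal generator for such a mix might look like the following sketch (parameter names and the interleaving strategy are our assumptions, not part of vExplorer):

```python
import random

def random_mode_offsets(n_ios, random_ratio, disk_bytes, io_size=4096, seed=0):
    """Generate IO offsets: a `random_ratio` fraction of the n_ios commands
    target random io_size-aligned offsets; the rest are sequential."""
    rng = random.Random(seed)
    n_random = int(n_ios * random_ratio)
    offsets, next_seq = [], 0
    for i in range(n_ios):
        if i < n_random:
            offsets.append(rng.randrange(0, disk_bytes // io_size) * io_size)
        else:
            offsets.append(next_seq)
            next_seq += io_size
    rng.shuffle(offsets)  # interleave random and sequential IOs in time
    return offsets

offs = random_mode_offsets(n_ios=10, random_ratio=0.5, disk_bytes=1 << 30)
```

Sweeping `random_ratio` from 0% to 100% while measuring per-IO latency is one way to probe how strongly the hypervisor's scheduler rewards sequentiality.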
