
Kangaroo: A Tenant-Centric Software-Defined Cloud Infrastructure

Kaveh Razavi∗, Ana Ion∗, Genc Tato†, Kyuho Jeong‡, Renato Figueiredo‡, Guillaume Pierre†, and Thilo Kielmann∗
∗VU University Amsterdam, †IRISA / Université de Rennes 1, and ‡University of Florida

Abstract—Applications on cloud infrastructures acquire virtual machines (VMs) from providers when necessary. The current interface for acquiring VMs from most providers, however, is too limiting for the tenants, both in the granularity at which VMs can be acquired (e.g., small, medium, large, etc.) and in the very limited control it gives over their placement. The former leads to VM underutilization, and the latter has performance implications; both translate into higher costs for the tenants. In this work, we leverage nested virtualization and a networking overlay to tackle these problems. We present Kangaroo, an OpenStack-based virtual infrastructure provider, and IPOPsm, a virtual networking switch for communication between nested VMs over different infrastructure VMs. In addition, we design and implement Skippy, the realization of our proposed virtual infrastructure API for programming Kangaroo. Our benchmarks show that through careful mapping of nested VMs to infrastructure VMs, Kangaroo achieves up to an order of magnitude better performance, with only half the cost on Amazon EC2. Further, Kangaroo's unified OpenStack API allows us to migrate an entire application between Amazon EC2 and our local OpenNebula deployment within a few minutes, without any downtime or modification to the application code.

I. INTRODUCTION

Cloud applications are extraordinarily varied, ranging from one-person projects to huge collaborative efforts, and spanning every possible application domain. Many applications start with modest computing resource requirements which vary thereafter according to the success or failure of their authors. However, they all need to define their resources as a combination of instance types selected from a standard list.

Infrastructure providers promote horizontal scaling (the addition or removal of VMs of identical types) as a practical solution for applications which need to vary their resource usage over time. However, although this provides a simple and efficient model for the infrastructure providers, it presents stringent limitations for cloud tenants. First, standard instance types seldom match the exact resource requirements of particular applications. As a consequence, cloud applications rarely utilize all the resources offered by their VMs. Second, cloud platforms offer little support for fine-grained VM placement. However, as we show later in this paper, co-locating compatible workloads in the same physical machine can bring up to an order of magnitude performance improvement.

To address these challenges, this paper proposes Kangaroo, a tenant-centric nested infrastructure. Kangaroo executes cloud applications in VMs running within VMs that are allocated from the infrastructure providers. This nested infrastructure enables fine-grained resource allocation and dynamic resizing.

When a set of nested VMs outgrows the available capacity of their hosting VM, they can be live-migrated to another, possibly new and larger, VM through a programmable interface. This also allows co-locating complementary workloads in the same machine, which in turn increases resource utilization while improving performance. Finally, Kangaroo runs indifferently within VMs hosted by any cloud provider, such as Amazon EC2 and private clouds. It can therefore live-migrate unmodified applications across cloud providers and across different types of virtual machine monitors (VMMs), without disrupting application behavior.

Our evaluations based on microbenchmarks and real-world applications show that Kangaroo's fine-grained resource allocation can significantly reduce resource usage compared to horizontal scaling, while co-locating compatible workloads brings order-of-magnitude performance improvements. Live-migrating workloads across VMs and across cloud providers exhibits acceptable overheads, without imposing any downtime or application modifications.

The remainder of this paper is organized as follows: Section II discusses the background and our motivation for this work. Sections III and IV respectively present Kangaroo's architecture and implementation. Section V presents evaluations and, after discussing the related work in Section VI, Section VII concludes.

II. MOTIVATION AND BACKGROUND

We describe our motivation for this work by formulating some important tenant requirements that are currently not available from infrastructure providers (Section II-A). We then discuss why it is sometimes difficult, or against the interest of the providers, to implement these requirements (Section II-B). After establishing our motivation, we take a detailed look at nested virtualization, the driving technology behind Kangaroo that provides the desired tenant requirements (Section II-C).

A. Tenant requirements

There are three desired tenant requirements that current providers do not support.

1) Fine-grained VM Allocation: Currently, whenever a tenant needs to scale out, she has the choice between a number of instance types that provide various amounts of resources. These instance types are defined statically, and they reflect the resource requirements of various classes of applications. For example, a tenant with a compute-bound workload can acquire a VM from the class of instance types that provide a high number of cores, or a number of GPUs if the workload can run on GPUs.

This "instance type" abstraction creates a static granularity at which resources can be acquired from a provider. From the tenant's point of view, the static nature of this abstraction is limiting: it does not capture the requirements of all possible applications, and for the ones that it does, it is often not an exact match. This results in the provisioning of unnecessary resources for the sake of the necessary ones.

As an example, assume that a tenant needs to allocate 10 cores with 2 GB of memory for a certain application. Since this combination of resources is asymmetrical, it is very unlikely that the provider has an instance type with these exact resource requirements. The tenant has two choices: either 1) allocate one VM from an instance type with the required number of cores but an excessive amount of memory, or 2) allocate a number of VMs from smaller instance types that together provide the required number of cores. The former results in over-provisioning of memory and hence a waste of money, and the latter results in high core-to-core latency, possibly affecting performance. Even with balanced resource requirements, it can easily be that there is no matching instance type.

Since the tenant knows the requirements of her application best, she should be able to define instance types dynamically, and to allocate VMs from them.
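To make the 10-core/2 GB trade-off above concrete, the following is a minimal sketch that quantifies both choices against a small, hypothetical EC2-like instance catalog; the instance names, sizes, and prices are illustrative assumptions, not actual provider data:

```python
# Hypothetical instance catalog: name -> (cores, memory in GB, price per hour).
# All figures are illustrative, not real provider pricing.
CATALOG = {
    "large":   (2,  8, 0.10),
    "xlarge":  (4, 16, 0.20),
    "2xlarge": (8, 32, 0.40),
    "4xlarge": (16, 64, 0.80),
}

def allocation_report(need_cores, need_mem_gb, picks):
    """Summarize waste and cost when covering a requirement with `picks`."""
    cores = sum(CATALOG[p][0] for p in picks)
    mem = sum(CATALOG[p][1] for p in picks)
    cost = sum(CATALOG[p][2] for p in picks)
    assert cores >= need_cores and mem >= need_mem_gb, "requirement not covered"
    return {"picks": picks,
            "wasted_cores": cores - need_cores,
            "wasted_mem_gb": mem - need_mem_gb,
            "usd_per_hour": cost}

# Choice 1: a single large VM covering the 10 cores.
print(allocation_report(10, 2, ["4xlarge"]))
# {'picks': ['4xlarge'], 'wasted_cores': 6, 'wasted_mem_gb': 62, 'usd_per_hour': 0.8}

# Choice 2: multiple smaller VMs; cheaper, but the 10 cores now span two VMs,
# so core-to-core communication crosses the (virtual) network.
print(allocation_report(10, 2, ["2xlarge", "large"]))
# {'picks': ['2xlarge', 'large'], 'wasted_cores': 0, 'wasted_mem_gb': 38, 'usd_per_hour': 0.5}
```

In both cases the tenant pays for memory she never uses, because the static catalog couples memory to cores; a dynamically defined 10-core/2 GB instance type would remove this waste entirely.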
2) Control over VM placement: A tenant has the best knowledge about how the components of her application communicate with each other. Naturally, the closer the components that communicate with each other are, the better the networking performance they observe. This means that tenants should have control over the mapping of their VMs to the underlying infrastructure. Further, the flexibility of controlling VM placement opens up interesting resource management possibilities that we will discuss in Section IV-C.

Currently, there is no or very limited support for controlled placement of VMs on the infrastructure's physical resources. For example, Amazon EC2 does not provide any such support.

3) Unified Provider API: Different providers have adopted different, mutually incompatible APIs for allocating and managing VMs. As a direct result, it is usually tedious to port an application written for one provider to another. To make the matter worse, different providers have adopted different virtualization technologies (e.g., Xen, KVM, Hyper-V, etc.), which means that it is typically not possible to run a VM built for one virtualization technology on another. Thus, even if the API were standardized and adopted by providers, it would still not be straightforward to simply migrate an application from one provider to another.

The final requirement is the ability to avoid vendor lock-in by using a unified API for allocating VMs, together with the possibility of migrating an application without changing its code and without any downtime.
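Kangaroo meets this requirement by exposing the same OpenStack API regardless of which provider hosts the underlying VMs. As an illustration of the provider-agnostic style of allocation that any such unified API enables, here is a minimal sketch using the third-party Apache Libcloud library (not part of Kangaroo); the credentials and the choice of size and image are placeholders:

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def boot_worker(provider, *args, **kwargs):
    """Allocate one VM through the same calling convention on any provider."""
    driver = get_driver(provider)(*args, **kwargs)
    size = driver.list_sizes()[0]    # placeholder: choose a real size
    image = driver.list_images()[0]  # placeholder: choose a real image
    return driver.create_node(name="worker-1", size=size, image=image)

# Application code stays the same; only the driver selection changes.
node = boot_worker(Provider.EC2, "ACCESS_KEY", "SECRET_KEY", region="us-east-1")
# node = boot_worker(Provider.OPENSTACK, "user", "password",
#                    ex_force_auth_url="http://controller:5000",
#                    ex_force_auth_version="3.x_password")
```

Note that a unified allocation API alone does not resolve the VMM incompatibility described above; as the introduction outlines, Kangaroo sidesteps it by nesting its VMs, so the migrated unit does not depend on the provider's own VMM.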
B. Infrastructure Provider Conflicts

Implementing the tenants' requirements is not always straightforward for the providers. As we shall see, it may also not be in their interest.

1) Fine-grained VM Allocation: Static instance types allow for simpler scheduling, physical resource provisioning, and pricing. Providing support for fine-grained VM allocation means more engineering effort, and possibly less flexibility in physical resource management, which directly translates into higher operational cost.

2) Control over VM placement: Without tenant support for VM migration, the provider has complete control over the location of all tenants' VMs. It can migrate the VMs at will for resource management purposes, such as scaling its physical resources out or in. Supporting tenant-controlled VM placement makes it difficult, if not impossible, to perform these simple resource management tasks.

3) Unified Provider API: Infrastructure providers started appearing before any coordinated standardization effort, and so did the virtualization technologies that the providers adopted. It would be a substantial engineering effort to provide a new set of APIs while preserving and maintaining the old one. Further, vendor lock-in helps providers keep a stable user base.