Multi-Hypervisor Virtual Machines: Enabling an Ecosystem of Hypervisor-level Services

Kartik Gopalan, Rohith Kugve, Hardik Bagdi, Yaohui Hu
Computer Science, Binghamton University

Dan Williams, Nilton Bila
IBM T.J. Watson Research Center

This paper is included in the Proceedings of the 2017 USENIX Annual Technical Conference (USENIX ATC '17), July 12–14, 2017, Santa Clara, CA, USA. ISBN 978-1-931971-38-6.
https://www.usenix.org/conference/atc17/technical-sessions/presentation/gopalan

Abstract

Public cloud software marketplaces already offer users a wealth of choice in operating systems, database management systems, financial software, and virtual networking, all deployable and configurable at the click of a button. Unfortunately, this level of customization has not extended to emerging hypervisor-level services, partly because traditional virtual machines (VMs) are fully controlled by only one hypervisor at a time. Currently, a VM in a cloud platform cannot concurrently use hypervisor-level services from multiple third parties in a compartmentalized manner. We propose the notion of a multi-hypervisor VM, which is an unmodified guest that can simultaneously use services from multiple coresident, but isolated, hypervisors. We present a new virtualization architecture, called Span virtualization, that leverages nesting to allow multiple hypervisors to concurrently control a guest's memory, virtual CPU, and I/O resources. Our prototype of Span virtualization on the KVM/QEMU platform enables a guest to use services such as introspection, network monitoring, guest mirroring, and hypervisor refresh, with performance comparable to traditional nested VMs.

1 Introduction

In recent years, a number of hypervisor-level services have been proposed, such as rootkit detection [61], live patching [19], intrusion detection [27], high availability [24], and virtual machine (VM) introspection [30, 53, 26, 49, 42, 60]. By running inside the hypervisor instead of the guest, these services can operate on multiple guests while remaining transparent to the guests. Recent years have also seen a rise in the number of specialized hypervisors that are tailored to provide VMs with specific services. For instance, McAfee Deep Defender [46] uses a micro-hypervisor called DeepSafe to improve guest security. SecVisor [56] provides code integrity for commodity guests. CloudVisor [68] guarantees guest privacy and integrity on untrusted clouds. RTS provides a Real-time Embedded Hypervisor [52] for real-time guests. These specialized hypervisors may not provide guests with the full slate of memory, virtual CPU (VCPU), and I/O management, but rely upon either another commodity hypervisor, or the guest itself, to fill in the missing services.

Currently there is no good way to expose multiple services to a guest. For a guest that needs multiple hypervisor-level services, the first option is for the single controlling hypervisor to bundle all services in its supervisor mode. Unfortunately, this approach leads to a "fat" feature-filled hypervisor that may no longer be trustworthy because it runs too many untrusted or mutually distrusting services. One could instead de-privilege some services to the hypervisor's user space as processes that control the guest indirectly via event interposition and system calls [63, 40]. However, public cloud providers would be reluctant to execute untrusted third-party services in the hypervisor's native user space due to a potentially large user-kernel interface.
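To make the user-space option concrete, the following is a minimal sketch of event interposition through the Linux KVM API (a generic illustration of the mechanism, not code from this paper; QEMU interposes on guest events in essentially this way). The host process creates a VM and a VCPU through /dev/kvm, loads a 7-byte real-mode guest, and services the guest's trapped port-I/O writes from user space:

    /* Minimal KVM user-space interposition sketch (generic Linux KVM
     * API usage, not from the paper). Boots a tiny real-mode guest and
     * handles its trapped events from user space. */
    #include <err.h>
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Guest code: mov $'A',%al; mov $0x3f8,%dx; out %al,(%dx); hlt */
        const uint8_t code[] = { 0xb0, 'A', 0xba, 0xf8, 0x03, 0xee, 0xf4 };

        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0)
            err(1, "/dev/kvm");
        int vmfd = ioctl(kvm, KVM_CREATE_VM, 0UL);

        /* Back 4KB of guest-physical memory with an anonymous mapping. */
        uint8_t *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        memcpy(mem, code, sizeof(code));
        struct kvm_userspace_memory_region region = {
            .slot = 0, .guest_phys_addr = 0x1000,
            .memory_size = 0x1000, .userspace_addr = (uintptr_t)mem,
        };
        ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);

        int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0UL);
        struct kvm_run *run = mmap(NULL,
                                   ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0UL),
                                   PROT_READ | PROT_WRITE, MAP_SHARED,
                                   vcpufd, 0);

        struct kvm_sregs sregs;
        ioctl(vcpufd, KVM_GET_SREGS, &sregs);
        sregs.cs.base = 0;
        sregs.cs.selector = 0;                 /* real mode, flat at 0 */
        ioctl(vcpufd, KVM_SET_SREGS, &sregs);
        struct kvm_regs regs = { .rip = 0x1000, .rflags = 0x2 };
        ioctl(vcpufd, KVM_SET_REGS, &regs);

        for (;;) {
            ioctl(vcpufd, KVM_RUN, 0UL);       /* enter the guest */
            switch (run->exit_reason) {        /* interposed guest event */
            case KVM_EXIT_IO:                  /* trapped port I/O */
                if (run->io.direction == KVM_EXIT_IO_OUT &&
                    run->io.port == 0x3f8)
                    putchar(*((char *)run + run->io.data_offset));
                break;
            case KVM_EXIT_HLT:                 /* guest executed hlt */
                return 0;
            default:
                errx(1, "unhandled exit %d", run->exit_reason);
            }
        }
    }

Every privileged guest action that the in-kernel hypervisor declines to handle surfaces as an exit reason in the shared kvm_run structure; this exit stream is precisely the interface on which a user-space service would be built, and also why such a service cannot reach lower-level state such as page mappings.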
The next option is to de-privilege the services further, running each in a Service VM with a full-fledged OS. For instance, rather than running a single Domain0 VM running Linux that bundles services for all guests, Xen [4] can use several disaggregated [23] service domains for resilience. Service domains, while currently trusted by Xen, could be adapted to run third-party untrusted services. A Service VM has a less powerful interface to the hypervisor than a user space service. However, neither user space services nor Service VMs allow control over low-level guest resources, such as page mappings or VCPU scheduling, which require hypervisor privileges.

One could use nested virtualization [34, 10, 48, 29] to vertically stack hypervisor-level services, such that a trusted base hypervisor at layer-0 (L0) controls the physical hardware and runs a service hypervisor at layer-1 (L1), which fully or partially controls the guest at layer-2 (L2). Nested virtualization is experiencing considerable interest [25, 31, 64, 68, 55, 53, 56, 39, 7, 65, 45]. For example, one can use nesting [21] to run McAfee Deep Defender [46], which does not provide full system and I/O virtualization, as a guest on XenDesktop [20], a full commodity hypervisor, so that guests can use the services of both. Similarly, Bromium [15] uses nesting on a Xen-based hypervisor for security. Ravello [2] and XenBlanket [66, 57] use nesting on public clouds for cross-cloud portability. However, vertical stacking reduces the degree of guest control and visibility to lower layers compared to the layer directly controlling the guest. Also, the overhead of nested virtualization beyond two layers can become rather high [10].

Instead, we propose Span virtualization, which provides horizontal layering of multiple hypervisor-level services. A Span VM, or multi-hypervisor VM, is an unmodified guest whose resources (virtual memory, CPU, and I/O) can be simultaneously controlled by multiple coresident, but isolated, hypervisors. A base hypervisor at L0 provides a core set of services and uses nested virtualization to run multiple deprivileged service hypervisors at L1. Each L1 augments L0's services by adding or replacing one or more services. Since L0 no longer needs to implement every conceivable service, L0's footprint can be smaller than that of a feature-filled hypervisor. Henceforth, we use the following terms:

• Guest or VM refers to a top-level VM, with qualifiers single-level, nested, and Span as needed.
• L1 refers to a service hypervisor at layer-1.
• L0 refers to the base hypervisor at layer-0.
• Hypervisor refers to the role of either L0 or any L1 in managing guest resources.

[Figure 1: Span virtualization: V1 is a traditional single-level VM. V2 is a traditional nested VM. V3, V4, and V5 are multi-hypervisor VMs.]

Figure 1 illustrates possible Span VM configurations. One L0 hypervisor runs multiple L1 hypervisors (H1, H2, H3, and H4) and multiple guests (V1, V2, V3, V4, and V5). V1 is a traditional single-level (non-nested) guest that runs on L0. V2 is a traditional nested guest that runs on only one hypervisor (H1). The rest are multi-hypervisor VMs, including a Span VM that runs on two L1s (H3 and H4). This paper makes the following contributions:

• We examine the solution space for providing multiple services to a common guest and identify the relative merits of possible solutions.
• We present the design of Span virtualization, which enables multiple L1s to concurrently control an unmodified guest's memory, VCPU, and I/O devices using a relatively simple interface with L0.
• We describe our implementation of Span virtualization by extending the nested virtualization support in KVM/QEMU [40] and show that Span virtualization can be implemented within existing hypervisors with moderate changes (a nested-support check is sketched after this list).
• We evaluate Span VMs running unmodified Linux and simultaneously using multiple L1 services, including VM introspection, network monitoring, guest mirroring, and hypervisor refresh. We find that Span VMs perform comparably with nested VMs and within 0–20% of single-level VMs, across different configurations and benchmarks.
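Since the prototype extends the nested virtualization support in KVM/QEMU, a natural first step on any host is to confirm that the kvm module is willing to run a hypervisor inside a guest. The check below uses the standard KVM capability interface; it is a generic sketch, not the paper's code, and KVM_CAP_NESTED_STATE is only defined on reasonably recent kernels:

    /* Probe the host KVM module for nested-virtualization support
     * (generic KVM API usage; not code from the paper). */
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) {
            perror("/dev/kvm");
            return 1;
        }

        /* KVM_CAP_NESTED_STATE is advertised when KVM can save and
         * restore the extra VMCS/VMCB state that a nested (L1)
         * hypervisor creates for its own (L2) guests. */
        int cap = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_NESTED_STATE);
        printf("nested state support: %s\n", cap > 0 ? "yes" : "no");
        close(kvm);
        return 0;
    }

On Intel hosts, nesting itself is toggled by the kvm_intel module parameter nested (for example, modprobe kvm_intel nested=1); older kernels expose only this parameter, readable at /sys/module/kvm_intel/parameters/nested.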
2 Solution Space

Table 1 compares possible solutions for providing multiple services to a guest. These are single-level virtualization, user space services, Service VMs, nested virtualization, and Span.

First, like the single-level and nested alternatives, Span virtualization provides L1s with control over the virtualized instruction set architecture (ISA) of the guest, which includes trapped instructions, memory mappings, VCPU scheduling, and I/O.

Unlike all alternatives, Span L1s support both full and partial guest control. Span L1s can range from full hypervisors that control all guest resources to specialized hypervisors that control only some guest resources, as illustrated in the sketch at the end of this section.

Next, consider the impact of service failures. In single-level virtualization, failure of a privileged service impacts the L0 hypervisor, all coresident services, and all guests. For all other cases, the L0 hypervisor is protected from service failures because services are deprivileged. Furthermore, failure of a deprivileged service impacts only those guests to which the service is attached.

Next, consider the failure impact on coresident services. User space services are isolated by process-level isolation and hence protected from each other's failure. However, process-level isolation is only as strong as the user-level privileges with which the services run. Nested virtualization provides only one deprivileged service compartment. Hence services for the same guest …
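To make the distinction between full and partial guest control concrete, here is a hypothetical sketch of how a Span guest's per-resource attachments might be modeled. All span_* identifiers are invented for this illustration; the paper's actual KVM/QEMU data structures are not shown:

    /* Hypothetical illustration of Span-style partial guest control.
     * All identifiers below are invented for this sketch. */
    #include <stdint.h>
    #include <stdio.h>

    enum span_resource {                /* guest resource planes */
        SPAN_MEMORY = 1u << 0,          /* page mappings (e.g., EPT) */
        SPAN_VCPU   = 1u << 1,          /* VCPU scheduling, trapped instructions */
        SPAN_IO     = 1u << 2,          /* virtual I/O devices */
    };

    struct span_attachment {
        int      hypervisor_id;         /* 0 = L0, otherwise an L1 */
        uint32_t controls;              /* bitmask of span_resource */
    };

    /* A guest like V5 in Figure 1: attached to L1s H3 and H4, each
     * controlling a subset of its resources (one plausible division). */
    static const struct span_attachment v5[] = {
        { .hypervisor_id = 3, .controls = SPAN_MEMORY | SPAN_VCPU },
        { .hypervisor_id = 4, .controls = SPAN_MEMORY | SPAN_IO },
    };

    int main(void)
    {
        for (unsigned i = 0; i < sizeof(v5) / sizeof(v5[0]); i++)
            printf("H%d controls:%s%s%s\n", v5[i].hypervisor_id,
                   v5[i].controls & SPAN_MEMORY ? " memory" : "",
                   v5[i].controls & SPAN_VCPU   ? " vcpu"   : "",
                   v5[i].controls & SPAN_IO     ? " i/o"    : "");
        return 0;
    }

The point of the model is that each attachment's bitmask may cover any subset of the planes, which is exactly the spectrum described above: from full hypervisors controlling every resource down to specialized, single-service ones.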
