HIL: Designing an Exokernel for the Data Center
Jason Hennessey (Boston University), Sahil Tikale (Boston University), Ata Turk (Boston University), Emine Ugur Kaynar (Boston University), Chris Hill (Massachusetts Institute of Technology), Peter Desnoyers (Northeastern University), Orran Krieger (Boston University)

Abstract

We propose a new Exokernel-like layer to allow mutually untrusting physically deployed services to efficiently share the resources of a data center. We believe that such a layer offers not only efficiency gains, but may also enable new economic models, new applications, and new security-sensitive uses. A prototype (currently in active use) demonstrates that the proposed layer is viable, and can support a variety of existing provisioning tools and use cases.

Keywords: cloud computing, datacenter management, bare metal, exokernel, IaaS, PaaS

1. Introduction

There is a growing demand for mechanisms to simplify deployment of applications and services on physical systems, resulting in the development of tools such as OpenStack Ironic, Canonical MaaS, Emulab, GENI, Foreman, xCAT, and others [10, 12, 22, 53, 55].

Each of these tools hides the low-level hardware features (IPMI, PXE, boot devices, etc.) from applications and provides higher-level abstractions such as image deployment, Juju charms, Puppet manifests, etc.—much like an operating system hides disks and address spaces behind file systems and processes. Different tools are better suited for some applications than others; e.g., Ironic is used for OpenStack deployment, while xCAT is well-suited for large High Performance Computing (HPC) deployments.
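To make concrete what "low-level hardware features" means here, the sketch below (ours, not from the paper; the BMC address and credentials are hypothetical placeholders) drives a single node's baseboard management controller directly with ipmitool, the kind of raw IPMI/PXE manipulation that a tool such as Ironic or xCAT performs on a user's behalf:

```python
# Illustrative sketch only: direct IPMI control of one node, i.e., the raw
# operations that provisioning tools wrap in higher-level abstractions.
# The BMC address and credentials below are hypothetical placeholders.
import subprocess

BMC_HOST, BMC_USER, BMC_PASS = "10.0.0.42", "admin", "secret"

def ipmi(*args: str) -> str:
    """Run one ipmitool command against the node's BMC over IPMI-over-LAN."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

ipmi("chassis", "bootdev", "pxe")   # boot from the network on the next restart
ipmi("power", "cycle")              # power-cycle so the node PXE-boots a deployment image
print(ipmi("power", "status"))      # confirm the node came back up
```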
Each of these tools takes control of the hardware it manages, and each provides very different higher-level abstractions. A cluster operator must thus decide between, for example, Ironic or MaaS for software deployment; and the data center operator who wants to use multiple tools is forced to statically partition the data center into silos of hardware. Moreover, it is unlikely that most data centers will be able to transition fully to using a new tool; organizations may have decades of investment in legacy tools, e.g., for deploying HPC clusters, and will be slow to adopt new tools, leading to even more hardware silos.

We believe that there is a need for a new fundamental layer that allows different physical provisioning systems to share the data center while allowing resources to move back and forth between them. We describe our design of such a layer, called the Hardware Isolation Layer (HIL), which adopts an Exokernel-like [19] approach. With this approach, rather than providing higher-level abstractions that virtualize the physical resources, the lowest layer only isolates/multiplexes the resources, and richer functionality is provided by systems that run on top of it. With HIL, we partition physical hardware and connectivity, while enabling direct access to those resources to the physical provisioning systems that use them. This approach allows existing provisioning systems, including Ironic, MaaS, and Foreman, to be adapted with little or no change.
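To illustrate this division of labor, the following sketch (ours, not taken from the paper; the base URL, endpoint paths, and resource names are hypothetical stand-ins for the prototype's REST interface) shows the style of interaction such an isolation layer implies: HIL only groups free nodes into a project and wires their NICs onto a private network, after which an unmodified provisioning system drives PXE/IPMI on those nodes directly.

```python
# Hypothetical sketch of a HIL-style allocation workflow; the base URL and
# endpoint names are placeholders, not the prototype's documented API.
import requests

HIL = "https://hil.example.org/api"   # placeholder service endpoint
AUTH = ("alice", "s3cr3t")            # placeholder credentials

def call(method: str, path: str, **body):
    resp = requests.request(method, HIL + path, json=body or None, auth=AUTH)
    resp.raise_for_status()
    return resp

# 1. Create an isolated allocation ("project") and a private network owned by it.
call("PUT", "/project/neuro-hpc")
call("PUT", "/network/neuro-data", owner="neuro-hpc")

# 2. Pull free nodes into the project and attach one NIC of each to the network.
for node in ("node-17", "node-18", "node-19"):
    call("POST", "/project/neuro-hpc/connect_node", node=node)
    call("POST", f"/node/{node}/nic/eth0/connect_network", network="neuro-data")

# 3. From here on the isolation layer steps aside: whatever provisioning system
#    the project brings along (Ironic, MaaS, xCAT, ...) PXE-boots and manages the nodes.
```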
One major advantage of an Exokernel approach is simplicity. Our current prototype is less than 3,000 lines of code and is sufficiently functional that we use it in production on the Massachusetts Open Cloud (MOC),^1 in a 576-core cluster. Our current usage of HIL is limited to flexibly allocating infrastructure for staging efforts and research as needed; however, the potential value of the HIL approach is much larger. We envision HIL being used to:

• move resources between multiple production clusters in a data center, enabling resources to be moved to match demand, and enabling new models for supporting HPC applications at Cloud economics,
• enable a marketplace model for data centers [11, 17], providing the interface between hardware providers and users of this hardware, who are in turn free to adopt whichever provisioning tool or software deployment mechanism best fits their needs,
• enable application-specific provisioning services; imagine, in a general-purpose data center, deploying an isolated 1000-node cluster in seconds to solve an interactive supercomputing problem such as was shown on specialized hardware by project Kittyhawk [7],
• enable security-sensitive applications that cannot take advantage of public cloud for security or compliance reasons (e.g., HIPAA, military, finance) to be deployed in the middle of a public cloud, and
• enable new scheduling services to move resources between all the services in a data center based on demand, economic models, or to meet power regulations [50].

The contributions of this paper are: (1) definition of a data center isolation layer that enables improved efficiency and utilization of resources, (2) definition of an architecture for realizing this layer and demonstration of a prototype implementation sufficiently functional for production use, (3) analytical and experimental evaluation of the prototype implementation that demonstrates its performance characteristics and value for a number of use cases, and (4) discussion of the changes required to different provisioning tools to exploit this new layer.

Section 2 describes use cases HIL is designed to support; Section 3 provides a high-level view of the HIL functionality and approach; Section 4 discusses design requirements; Section 5 describes the technologies used to implement HIL; and Section 6 discusses the architecture and implementation. In Section 7 we present measurements of HIL in use; Sections 8, 9, and 10 then discuss related work, future directions, and conclusions.

^1 HIL is being developed as part of the Massachusetts Open Cloud (MOC), a collaboration between the Commonwealth of Massachusetts, research universities, and an array of industry partners to create a non-profit Infrastructure as a Service public cloud [2].

2. Physical provisioning use cases

Although hardware virtualization has been widely successful in the data center, there still remain many applications and systems which benefit from, or even require, "bare-metal" hardware.

Cloud/virtualization software: This is one of the most obvious cases, as these systems provide virtualized resources rather than consuming them. These systems are often enormously complicated to deploy, and come with their own provisioning system (e.g., Mirantis Fuel, Red Hat's OpenStack Platform Director).

Big data platforms: Environments such as Hadoop have traditionally been deployed on physical resources because of the impact of virtualization on these workloads [20, 58]. More recently the use of on-demand virtualized deployments [34] such as Amazon EMR [4] and IBM BigInsights [30] has grown, due to their cost-effectiveness and convenience for smaller deployments. In search of higher performance, cloud vendors like IBM [48], Rackspace [44], and Internap [31] have recently started to provide on-demand bare-metal big data platforms.

HPC Systems: Many HPC applications achieve significantly higher performance on physical rather than virtualized infrastructure. Research institutions typically have large shared HPC clusters, deployed on statically configured infrastructure, and in many cases there is more demand than there is infrastructure to satisfy that demand. These systems could usefully exploit any idle cycles on other data center infrastructure if those cycles could be made available to them. Although efficient, large shared clusters present problems for highly specialized communities; for example, an informal survey of two national clusters shows that they support only about 10% of the tools typically used by the neuroscience community [26]. Such specialized users would be better served if they could stand up their own HPC clusters, perhaps for brief periods of time.

Latency-sensitive applications: For certain applications, the higher variation in latency in a virtualized environment may lead directly to reduced performance. Examples of this include tightly-coupled numeric computations [21], as well as certain cyber-physical and gaming applications. In this case the use of dedicated hardware allows extreme measures to be taken, such as disabling most sources of interrupts and installing special-purpose operating systems [47].

Security and compliance: The use of separate hardware may be required due to regulatory or contractual confidentiality requirements, or may be useful in providing enhanced security for sensitive data.

Staging and research: Physical hardware may also be required, typically for brief periods, to enable staging/testing of services