White Paper
Communications and Storage Infrastructure
Container-Based Virtualization

Linux* Containers Streamline Virtualization and Complement Hypervisor-Based Virtual Machines

As strategic approaches such as software-defined networking (SDN) and network function virtualization (NFV) become more mainstream, combinations of OS-level virtualization and hypervisor-based virtual machines (VMs) often provide dramatic cost, agility, and performance benefits throughout the data center.

As virtualization using Linux* containers emerges as a viable option for mainstream computing environments, Intel is investing in enablement through hardware and software technologies that include the following:

• The Intel® Data Plane Development Kit (Intel® DPDK)
• Intel® Solid-State Drive Data Center Family for PCI Express*
• Intel® Virtualization Technology

1 Executive Summary

By decoupling physical computing resources from the software that executes on them, virtualization has dramatically improved resource utilization, enabled server consolidation, and allowed workloads to migrate among physical hosts for usages such as load balancing and failover. The following key techniques for resource sharing and partitioning have been under continual development and refinement since work began on them in the 1960s:

• Hypervisor-based virtualization has been the main focus within the Intel® architecture ecosystem. Increased server headroom and hardware assists from Intel® Virtualization Technology (Intel® VT)¹ have helped broaden its adoption and the scope of workloads for which it is effective on open-standards hardware.

• OS-level virtualization has been developed largely by makers of proprietary operating systems and system architectures. Linux* containers, introduced in 2007, extend these capabilities to highly elastic or latency-sensitive workloads on open-standards hardware.

Both approaches have distinct advantages.
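To make the distinction concrete, the oldest ancestor of OS-level virtualization, the chroot jail (which the paper returns to in Section 2.1), can be sketched in a few lines. This is an illustrative probe only, assuming a Unix-like host; the throwaway temp directory standing in for a jail is hypothetical, and a real jail tree would be populated with the files the confined process needs.

```python
import os
import tempfile

def enter_jail(new_root: str) -> bool:
    """Try to confine this process's filesystem view to new_root.

    This is the classic chroot-jail form of OS-level partitioning:
    the running kernel stays shared, and only the apparent root
    directory of the process changes. chroot(2) requires privilege
    (CAP_SYS_CHROOT on Linux), so the call is expected to fail for
    an unprivileged process.
    """
    try:
        os.chroot(new_root)  # change the apparent filesystem root
        os.chdir("/")        # drop any directory handle outside the jail
        return True
    except (PermissionError, FileNotFoundError):
        return False

if __name__ == "__main__":
    # Throwaway directory standing in for a real jail's directory tree.
    jail_dir = tempfile.mkdtemp()
    print("jailed" if enter_jail(jail_dir) else "chroot denied (unprivileged)")
```

Note the `os.chdir("/")` call: a process that retains a working directory or an open directory descriptor outside the new root can escape the jail, which is one form of the "break-out" weakness discussed in Section 2.1.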
For example, whereas each hypervisor-based virtual machine (VM) runs an entire copy of the OS, multiple containers share one kernel instance, significantly reducing overhead. And while hypervisor-based VMs can be provisioned far more quickly than physical servers (taking only tens of seconds as opposed to several weeks), even that brief lag is unacceptable for workloads that range from real-time data analytics to control systems for autonomous vehicles, where the near-instantaneous startup of containers is the better fit. On the other hand, hypervisor-based VMs allow for multi-OS environments and superior workload isolation (even in public clouds). Enterprises are also finding that new virtualization usage models that draw on both containers and hypervisors help them get more value out of strategic approaches such as public cloud, software-defined networking (SDN), and network function virtualization (NFV).

NOTE: In the context of this paper, the term "container" is a generalized reference for any virtual partition other than a hypervisor-based VM (e.g., a chroot jail, FreeBSD jail, Solaris* container/zone, or Linux container).

This paper introduces technical executives, architects, and engineers to the potential value of Linux containers, particularly in conjunction with hypervisor-based virtualization; it includes the following sections:

• The Historical Context for Containers and Hypervisors traces the development of these technologies from their common origin as an effort to share and partition system resources among workloads.

• Virtualization Using Linux Containers introduces the support for container-based virtualization in Linux, illustrated by representative usage models.

• Enabling Technologies from Intel for Container-Based Virtualization discusses Intel's hardware and software technologies that help improve performance and data protection when using Linux containers.

• Containers in Real-World Implementations examines a few examples of early adopters that are using containers as part of their virtualization infrastructures today.

Table of Contents

1 Executive Summary .................1
2 The Historical Context for Containers and Hypervisors .........2
2.1 Resource-Partitioning Techniques Not Based on Hypervisors .......3
2.2 Emergence of Software-Only, Hypervisor-Based Virtualization for Intel® Architecture ..........4
2.3 Hardware-Assisted, Hypervisor-Based Virtualization with Intel® Virtualization Technology .......4
3 Virtualization Using Linux Containers ...................5
3.1 Usage Model: Platform as a Service on Software-Defined Infrastructure ..................6
3.2 Usage Model: Combined Hypervisor-Based VMs and Containers in the Public Cloud ..7
4 Enabling Technologies from Intel for Container-Based Virtualization ......8
4.1 Intel® Data Plane Development Kit ................8
4.2 Intel® Solid-State Drive Data Center Family for PCI Express* ................9
4.3 Enhanced Workload Isolation with Intel® Virtualization Technology ....................9
5 Containers in Real-World Implementations ................. 10
5.1 Google: Combinations and Layers of Complementary Containers and Hypervisors .............. 10
5.2 Heroku and Salesforce.com: Container-Based CRM, Worldwide ........ 11
6 Conclusion ....................... 12

2 The Historical Context for Containers and Hypervisors

Hypervisors and containers have a joint ancestry, represented in Figure 1, which has been driven by efforts to decrease the prescriptive coupling between software workloads and the hardware they run on. From the beginning, these advances enhanced the flexibility of computing environments, enabling a single system to handle a broader range of tasks. As a result, two related aspects of the interactions between hardware and software have evolved:

• Resource sharing among workloads allows greater efficiency compared to the use of dedicated, single-purpose equipment.

• Resource partitioning ensures that the system requirements of each workload are met and prevents unwanted interactions among workloads (isolating sensitive data, for example).

[Figure 1 depicts four stages: a monolithic compute environment (system example: GE-635/Multics, circa 1959); a supervisor mediating for a user environment (GE-635/Multics, circa 1962); a supervisor shared by multiple user environments (IBM 360, circa 1964); and a control program, or hypervisor, hosting multiple supervisors (IBM 370, circa 1972).]
Figure 1. Early development of approaches to resource sharing and partitioning.

The first stage depicted in Figure 1 is that of the monolithic compute environment, where the hardware and software are logically conjoined in a single-purpose architecture. The ability to make significant changes to either hardware or software, short of replacing the entire system, is severely limited or even nonexistent. The second stage includes the supervisor (progenitor of the kernel), an intermediary between user space and system resources that allows hardware and software to be changed independently of each other. That decoupling is advanced further in the third stage, where the supervisor mediates resources among multiple users, allowing time-sharing among multiple, simultaneous jobs. The fourth stage adds a control program (CP, also known as the "hypervisor") that presents an independent view of the complete environment to each application; applications execute in isolation, unaware of each other. Importantly, various hypervisor-enabled virtual environments can run different operating systems simultaneously on the same hardware, which would be impossible under most approaches to virtualization that are not based on hypervisors. (Running workloads based on different operating systems on the

IBM introduced its first hypervisor for production (but unsupported) use in 1967, to run on System/360* mainframes, followed by a fully supported version for System/370* equipment in 1972. The company is generally recognized as having held the industry's most prominent role in the development of hypervisor-based virtualization from that point through the rest of the twentieth century.

2.1 Resource-Partitioning Techniques Not Based on Hypervisors

Although hypervisors enrich isolation among virtual environments running on a single physical host, they also add complexity and consume resources. Alternate approaches to partitioning resources were developed as interest in Unix* and Unix-like operating systems grew in the late 1970s and early 1980s, driven by organizations that included Bell Laboratories, AT&T, Digital Equipment Corporation, Sun Microsystems, and academic institutions. These technologies (and others) include the following:

• Chroot jails change the apparent root directory of a process and its children, restricting their view of the file system to a designated directory tree. Because the mechanism was not designed primarily as a security boundary, it is vulnerable to intentional efforts by users or programs to "break out" of their jails and gain unauthorized access to resources.

• FreeBSD* jails are similar in concept to chroot jails, but with a greater emphasis on security. The mechanism was introduced as a feature of FreeBSD 4.0 in 2000. FreeBSD jail definitions can explicitly restrict access outside the sandboxed environment by entities such as files, processes, and user accounts (including accounts created by the jail definition specifically for that purpose). While this approach significantly enhances control over resources compared to chroot, it is likewise incapable of supporting full virtualization, because FreeBSD jails must share a single OS kernel.

• Solaris containers (Solaris zones) build on the virtualization capabilities of chroot and FreeBSD jails. Sun Microsystems introduced this feature with the name "Solaris containers" as part of Solaris 10 in 2005, and