Containers and Cloud: from LXC to Docker to Kubernetes
CLOUD TIDBITS

WELCOME TO CLOUD TIDBITS! In each issue, I'll look at a different "tidbit" of technology that I consider unique or eye-catching, and of particular interest to IEEE Cloud Computing readers. Today's tidbit focuses on container technology and how it's emerging as an important part of the cloud computing infrastructure.

Cloud Computing's Multiple OS Capability

Many formal definitions of cloud computing exist. The National Institute of Standards and Technology's internationally accepted definition calls for "resource pooling," where the "provider's computing resources are pooled to serve multiple consumers using a multitenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand."1 It also calls for "rapid elasticity," where "capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand."

Most agree that the definition implies some kind of technology that provides an isolation and multitenancy layer, where computing resources are split up and dynamically shared using an operating technique that implements the specified multitenant model. Two technologies are commonly used here: the hypervisor and the container. You might be familiar with how a hypervisor provides for virtual machines (VMs). You might be less familiar with containers, the most common of which rely on the Linux kernel's containment features, commonly known as LXC (https://linuxcontainers.org). Both technologies support isolation and multitenancy.

Not everyone agrees that a hypervisor or container is required to call a given system a cloud; several specialized service providers offer what is generally called a bare metal cloud, where they apply the referenced elasticity and automation to the rapid provisioning and assignment of physical servers, eliminating the overhead of a hypervisor or container altogether. Although interesting for the most demanding applications, the somewhat oxymoronic term "bare metal cloud" is something Tidbits will perhaps look at in more detail in a later column. Thus, we're left with the working definition that cloud computing, at its core, has hypervisors or containers as a fundamental technology.

DAVID BERNSTEIN, Cloud Strategy Partners, [email protected]
2325-6095/14/$31.00 © 2014 IEEE. SEPTEMBER 2014, IEEE CLOUD COMPUTING.

Cloud Systems with Hypervisors and Containers

Most commercial cloud computing systems, both services and cloud operating system software products, use hypervisors. Enterprise VMware installations, which can rightly be called early private clouds, use the ESXi hypervisor (www.vmware.com/products/esxi-and-esx/overview). Some public clouds (Terremark, Savvis, and Bluelock, for example) use ESXi as well. Both Rackspace and Amazon Web Services (AWS) use the Xen hypervisor (www.xenproject.org/developers/teams/hypervisor.html), which gained tremendous popularity because of its early open source inclusion with Linux. Because Linux has now shifted to support KVM (www.linux-kvm.org), another open source alternative, KVM has found its way into more recently constructed clouds (such as AT&T, HP, Comcast, and Orange). KVM is also a favorite hypervisor of the OpenStack project and is used in most OpenStack distributions (such as RedHat, Cloudscaling, Piston, and Nebula). Of course, Microsoft uses its Hyper-V hypervisor underneath both Microsoft Azure and Microsoft Private Cloud (www.microsoft.com/en-us/server-cloud/solutions/virtualization.aspx).

However, not all well-known public clouds use hypervisors. Google, IBM/Softlayer, and Joyent are all examples of extremely successful public cloud platforms that use containers, not VMs.

Some trace the inspiration for containers back to the Unix chroot command, which was introduced as part of Unix version 7 in 1979. In 1998, an extended version of chroot was implemented in FreeBSD and called jail. In 2004, the capability was improved and released with Solaris 10 as zones. By Solaris 11, a full-blown capability based on zones was completed and called containers. By that time, other proprietary Unix vendors offered similar capabilities, for example, HP-UX containers and IBM AIX workload partitions. As Linux emerged as the dominant open platform, replacing these earlier variations, the technology found its way into the standard distribution in the form of LXC.

Figure 1 compares application deployment using a hypervisor and a container. As the figure shows, the hypervisor-based deployment is ideal when applications on the same cloud require different operating systems or OS versions (for example, RHEL Linux, Debian Linux, Ubuntu Linux, Windows 2000, Windows 2008, Windows 2012). The abstraction must be at the VM level to provide this capability of running different OS versions. With containers, applications share an OS (and, where appropriate, binaries and libraries), and as a result these deployments will be significantly smaller in size than hypervisor deployments, making it possible to store hundreds of containers on a physical host (versus a strictly limited number of VMs). Because containers use the host OS, restarting a container doesn't mean restarting or rebooting the OS.

Figure 1. Comparison of (a) hypervisor-based and (b) container-based deployments. A hypervisor-based deployment is ideal when applications on the same cloud require different operating systems or different OS versions; in container-based systems, applications share an operating system, so these deployments can be significantly smaller in size.

Those familiar with Linux implementations know that there's a great degree of binary application portability among Linux variants, with libraries occasionally required to complete the portability. Therefore, it's practical to have one container package that will run on almost all Linux-based clouds.

Docker Containers

Docker (www.docker.com) is an open source project providing a systematic way to automate the faster deployment of Linux applications inside portable containers. Basically, Docker extends LXC with a kernel- and application-level API that together run processes in isolation: CPU, memory, I/O, network, and so on. Docker also uses namespaces to completely isolate an application's view of the underlying operating environment, including process trees, network, user IDs, and file systems.

Docker containers are created using base images. A Docker image can include just the OS fundamentals, or it can consist of a sophisticated prebuilt application stack ready for launch. When building images with Docker, each action taken (that is, each command executed, such as apt-get install) forms a new layer on top of the previous one. Commands can be executed manually or automatically using Dockerfiles.

Figure 2. Possible layering combinations for application runtimes: applications can sit on PaaS runtimes, virtual appliances, or containers, which in turn run either directly on an OS or on VMs.
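To make the layering concrete, a minimal Dockerfile might look like the following sketch (the base image, package, and file names here are illustrative, not taken from the column); each instruction produces a new image layer on top of the previous one:

```dockerfile
# Base image layer: just the OS fundamentals.
FROM ubuntu:14.04

# Each instruction below forms a new layer on top of the previous one.
# Install a package, exactly as one would manually with apt-get.
RUN apt-get update && apt-get install -y nginx

# Add application artifacts as a further layer.
COPY index.html /usr/share/nginx/html/

# Process to run, in isolation, when a container starts from this image.
CMD ["nginx", "-g", "daemon off;"]
```

Running `docker build -t myapp .` executes these instructions automatically, and `docker run myapp` then launches the container on the shared host OS, with no guest OS to boot.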
Each Dockerfile is a script composed of various commands (instructions) and arguments listed successively to automatically perform actions on a base image in order to create (or form) a new image. Dockerfiles are used to organize deployment artifacts and to simplify the deployment process from start to finish.

Containers can run on VMs too. If a cloud has the right native container runtime (such as some of the clouds mentioned), a container can run directly on the VM. If the cloud only supports hypervisor-based VMs, there's no problem: the entire application, container, and OS stack can be placed on a VM and run just like any other application-to-OS stack.

Abstractions on Top of VMs and Containers

Both VMs and containers provide a rather low-level construct. Basically, both present an operating system interface to the developer. Other approaches, such as the virtual appliance2 and Bitnami (https://bitnami.com), provide application runtime environments that shield the application from the bare OS by giving applications an interface built on higher-level, more portable constructs. Virtual appliances gained popularity with equipment manufacturers who wanted a vehicle for distributing software versions of an appliance, for example, a network load balancer, WAN optimizer, or firewall. Virtual appliances can run on top of a VM or a container (native LXC-based, or itself running on top of a VM).

For even more isolation from the OS, especially desired by application programmers, application runtimes can be reconfigured into total platform-as-a-service (PaaS) runtimes. Readers will remember that last issue I discussed the Cloud Foundry PaaS and mentioned that it uses container technology for deployment, for precisely this reason.

Those who want to deploy applications with the least infrastructure will choose the simple container-to-OS approach. This is why container-based cloud vendors can claim improved performance compared to hypervisor-based clouds. A recent benchmark of a "fast data" NewSQL system claimed that, in an apples-to-apples comparison, running on IBM Softlayer using containers resulted in a fivefold performance improvement over the same benchmark running on Amazon AWS using a hypervisor.3

Software developers tend to prefer using a PaaS, which will use a container for its runtime if one is available, both to maximize performance and to manage application clustering; if not, the PaaS will run a container on a VM. Consequently, as PaaS gains in popularity, so do containers.

However, using containers for security isolation might not be a good idea. In an August 2013 blog,4 one of Dock-