HPC Containers in Use

Jonathan Sparks
SC and Clusters R&D, Cray Inc.
Bloomington, MN, USA
e-mail: [email protected]

Abstract— Linux containers in the commercial world are changing the landscape for application development and deployments. Container technologies are also making inroads into HPC environments, as exemplified by NERSC's Shifter and LBL's Singularity. While the first generation of HPC containers offers some of the same benefits as the existing open container frameworks, like CoreOS or Docker, they do not address the cloud/commercial feature sets such as virtualized networks, full isolation, and orchestration. This paper will explore the use of containers in the HPC environment and summarize our study to determine how best to use these technologies in the HPC environment at scale.

Keywords- Shifter; HPC; Cray; container; virtualization; Docker; CoreOS

I. INTRODUCTION

Containers seem to have burst onto the infrastructure scene for good reason. Due to fast startup times, isolation, a low resource footprint, and encapsulation of dependencies, they are rapidly becoming the tool of choice for large, sophisticated application deployments. To meet the increasing demand for computational power, containers are now being deployed on HPC systems.

Virtualization technologies such as Xen [1] and KVM [2] have become very popular in recent years. These hypervisor-based virtualization solutions bring several benefits, including hardware independence, high availability, isolation, and security, and they have been widely adopted in industry computing environments. For example, Amazon Elastic Compute Cloud (EC2) [3] uses Xen and Google's Compute Engine [4] utilizes KVM. However, their adoption is still constrained in the HPC context due to inherent VM performance overheads and system dependencies, especially in terms of I/O [5].

On the other hand, lightweight container-based virtualization, such as Linux-VServer [6], Linux Containers (LXC) [7], and Docker [8], has attracted considerable attention recently. Containers share the host's resources and provide process isolation, making the container-based solution a more efficient competitor to the traditional hypervisor-based solution. It is anticipated that the container-based solution will continue to grow and influence the direction of virtualized computing. This paper will demonstrate that container-based virtualization is a powerful technology in HPC environments. This study uses several different container technologies and applications to evaluate the performance overhead, deployment, and isolation techniques for each. Our focus is on traditional HPC use cases and applications; more general use cases have been covered by previous work, such as that by Lucas Chaufournier [17].

This paper is organized as follows: Section II provides an overview of virtualization techniques; Section III presents the different container runtime environments; Section IV describes the motivation behind our study; Section V presents the experiments performed in order to evaluate both application performance and the system overhead incurred in hosting the environment; Section VI presents related work. Conclusions and future work are presented in Section VII.

II. VIRTUALIZATION

In this section we provide some background on the two types of virtualization technologies that we study in this paper.

A. Hardware Virtualization

Hardware virtualization involves virtualizing the hardware on a server and creating virtual machine instances (VMs) that provide the abstraction of a physical machine (see Figure 1a). It requires running a hypervisor, also referred to as a virtual machine monitor (VMM), on the bare-metal server. The hypervisor emulates virtual hardware such as the CPU, memory, I/O, and network devices for each virtual machine. Each VM then runs an independent operating system and applications on top of that OS. The hypervisor is also responsible for multiplexing the underlying physical resources across the resident VMs.

Modern hypervisors support multiple strategies for resource allocation and sharing of physical resources. Physical resources may be strictly partitioned (dedicated) to each VM or shared in a best-effort manner. The hypervisor is also responsible for isolation. Isolation among VMs is provided by trapping privileged hardware accesses by the guest operating system and performing these operations in the hypervisor on its behalf.

Figure 1. Hypervisor and container-based virtualization

B. Container-based Virtualization

Container-based virtualization involves virtualizing the operating system rather than the physical hardware (Figure 1b). OS-level virtualizations are also referred to as containers. Each container encapsulates a group of processes that are isolated from other containers or processes on the system. The host OS kernel is responsible for implementing the container environment, isolation, and resources; it allocates CPUs, memory, and network I/O to each container.

Containers provide lightweight virtualization since they do not run their own OS kernels like VMs, but instead rely on the underlying host kernel for OS services. In some cases, the underlying OS kernel may emulate a different OS kernel version to processes within a container, a feature often used to support backward OS compatibility or to emulate different OS APIs, as in Solaris zones [10].

OS virtualization is not a new technology; many OS virtualization techniques exist, including Solaris Zones, BSD jails [28], and Linux LXC (Figure 2). The recent emergence of Docker, a container platform similar to LXC but with a layered file system and added software engineering benefits such as building and debugging, has renewed interest in container-based virtualization.

Figure 2. Container technology

Linux containers in particular employ two key kernel features: control groups and namespaces. Control groups (cgroups) are a kernel mechanism for controlling the resource allocation to process groups [11]. Cgroups exist for each major resource type: CPU, memory, network, block I/O, and devices. The resource allocation for each of these can be controlled individually, allowing the complete resource limits for a process or a process group to be specified.

Namespaces provide an abstraction for a kernel resource that makes it appear to the container that it has its own private, isolated instance of the resource [12]. In Linux, there are namespaces for isolating process IDs, user IDs, file system mount points, networking interfaces, IPC, and host names.

C. Containers in HPC

Implementing existing enterprise container solutions on HPC systems will require modifications to the software stack. HPC systems traditionally already have resource management and job scheduling systems in place, so the container runtime environments will need to integrate into the existing system resource manager. The container infrastructure should follow the core concepts of containers: abstract the application from the host's software stack, yet be able to access host-level resources when necessary, notably the network and storage. Unfortunately, modifying core components of the host to support containers introduces a portability problem. Standard resource managers provide the interfaces required to orchestrate a container runtime environment on the system.

A resource manager such as Moab/Torque on Cray systems utilizes scripts that form the core of the job execution; these job scripts are responsible for configuring the environment and passing the required information at process start. In our experiments we use Cray ALPS as the application launcher to start both parallel MPI applications and serial workloads.

III. CONTAINER RUNTIME ENVIRONMENTS

In this section, we describe the container environments used in our experiments and certain common container attributes pertinent to this study. For the purposes of this paper, we define container environments to be the tools used to create, manage, and deploy a container. We used two container environments adopted by the general community, runC [8] and rkt [9], and two from HPC, Shifter [18] and Singularity [19].

A. runC

The command-line utility runC executes applications packaged according to the Open Container Initiative (OCI) format [25] and is a compliant implementation of the Open Container Initiative specification.

The program, runC, integrates well with existing process supervisors to provide a container runtime environment for applications. It can be used with existing resource management tools, and the container will be executed as a child of the supervisor process.

Containers are configured using the notion of bundles. A bundle for a container is defined as a directory that includes a specification file named config.json and a root file system.

B. rkt

The application container engine rkt is the container manager and execution environment for Linux systems. Designed for security, simplicity, and compatibility within cluster architectures, rkt discovers, verifies, fetches, and

The following container attributes are common across these environments and pertinent to this study:

1) Host Level Access: The ability to access host-level resources from within the container. Typically these are the host's network interfaces, either TCP/IP or the Cray Aries native interface.

2) Privileged Operations: All jobs need to be run by the user and as the user. The system workload managers will enforce this policy. If the container runtime requires root or elevated status, then special system configuration must be applied.

3) Runtime Environment Pass-through: A common workflow
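The bundle notion used by runC can be sketched as a directory containing a `rootfs/` tree plus the following `config.json`. The field names follow the OCI runtime specification, while the concrete values (uid, application path, the bind-mounted `/lus` file system standing in for host-level storage access, attribute 1 above) are illustrative assumptions:

```json
{
  "ociVersion": "1.0.0",
  "process": {
    "user": { "uid": 1000, "gid": 1000 },
    "args": [ "/usr/bin/mpi_app" ],
    "cwd": "/"
  },
  "root": { "path": "rootfs", "readonly": true },
  "hostname": "hpc-container",
  "mounts": [
    { "destination": "/lus", "type": "bind",
      "source": "/lus", "options": [ "rbind", "rw" ] }
  ],
  "linux": {
    "namespaces": [
      { "type": "pid" }, { "type": "mount" }, { "type": "ipc" }
    ]
  }
}
```

Running `runc run <container-id>` from the bundle directory would then start the container as a child of the invoking process, which is what allows runC to sit under an existing supervisor or resource manager.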
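The control groups and namespaces described in Section II.B can be observed directly from any unprivileged process on a modern Linux system. The sketch below (ours, not from the paper) lists the namespaces the current shell belongs to and its cgroup membership via the standard procfs interface.

```shell
# List the namespace memberships of the current process: one symlink
# per namespace type (pid, uts, ipc, mnt, net, user, ...), each naming
# the kernel namespace instance this process belongs to.
ls -l /proc/self/ns

# Show which cgroup hierarchies the process is attached to; resource
# limits (CPU, memory, block I/O, ...) are applied per cgroup.
cat /proc/self/cgroup
```

Two processes share a namespace exactly when the corresponding symlinks resolve to the same inode, which is how tooling decides whether two processes are "inside" the same container.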
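The isolation a namespace provides can be demonstrated with the util-linux `unshare` tool. This sketch (ours, not from the paper) creates new user and UTS namespaces, changes the hostname inside them, and confirms the host's own hostname is untouched; unprivileged namespace creation is disabled on some kernels, so the example degrades gracefully.

```shell
host_before=$(hostname)

# Create new user + UTS namespaces. The root privileges granted by
# --map-root-user exist only inside the new user namespace, so an
# ordinary user may set the hostname there without real root.
unshare --user --map-root-user --uts sh -c \
  'hostname container-view && hostname' 2>/dev/null \
  || echo "unprivileged user namespaces are disabled here"

# The host's own UTS namespace is unaffected by the change above.
[ "$(hostname)" = "$host_before" ] && echo "host hostname unchanged"
```

This is the same mechanism the "run as the user" policy in attribute 2) relies on: privilege gained inside a namespace does not extend to the host.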
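To make the integration path of Section II.C concrete, here is a minimal sketch of a Torque/Moab batch script that launches a containerized MPI application through Cray ALPS. The image name, node geometry, binary path, and Shifter flags are illustrative assumptions, not taken from the paper; sites differ in how the runtime is exposed to jobs.

```shell
# Generate a sketch of a batch job script; all resource requests and
# the application binary are hypothetical placeholders.
cat > container_job.sh <<'EOF'
#!/bin/bash
#PBS -N shifter-demo
#PBS -l nodes=2:ppn=32

cd "$PBS_O_WORKDIR"

# ALPS (aprun) places and starts the ranks on the compute nodes; the
# container runtime (here Shifter) swaps in the image's software stack
# before exec'ing the application binary, as an unprivileged child of
# the launched process.
aprun -n 64 -N 32 shifter --image=docker:centos:7 ./mpi_app
EOF

# Syntax-check the generated script without executing it.
bash -n container_job.sh
```

The point of the sketch is structural: the job script, not the container runtime, owns environment configuration and process start, matching the resource-manager-driven model described above.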