
34th IEEE International Parallel and Distributed Processing Symposium, New Orleans, LA (May 2020)

Fault-Tolerant Containers Using NiLiCon

Diyu Zhou, Computer Science Department, UCLA, [email protected]
Yuval Tamir, Computer Science Department, UCLA, [email protected]

Abstract—Many services deployed in the cloud require high reliability and must thus survive machine failures. Providing such fault tolerance transparently, without requiring application modifications, has motivated extensive research on replicating virtual machines (VMs). Cloud computing typically relies on VMs or containers to provide an isolation and multitenancy layer. Containers have advantages over VMs: smaller size, faster startup, and avoiding the need to manage updates of multiple VMs. This paper reports on the design, implementation, and evaluation of NiLiCon — a transparent container replication mechanism for fault tolerance. To the best of our knowledge, NiLiCon is the first implementation of container replication, demonstrating that it can be used for transparent deployment of critical services in the cloud. NiLiCon is based on high-frequency asynchronous incremental checkpointing to a warm spare, as previously used for VMs. The challenge in accomplishing this is that, compared to VMs, there is much tighter coupling between the container state and the state of the underlying platform. NiLiCon meets this challenge, eliminating the need to deploy services in VMs, with performance overheads that are competitive with those of similar VM replication mechanisms. Specifically, with the seven benchmarks used in the evaluation, the performance overhead of NiLiCon is in the range of 19%-67%. For fail-stop faults, the recovery rate is 100%.

Keywords-fault tolerance; replication

I. INTRODUCTION

Servers commonly host applications in virtual machines (VMs) and/or containers to facilitate efficient, flexible resource management [19], [30], [35]. In some environments, containers and VMs are used together. However, in others, containers alone have become the preferred choice, since their storage and deployment consume fewer resources, allowing for greater agility and elasticity [19], [27].

Many of the applications and services hosted by datacenters require high availability and/or high reliability. Meeting these needs can be left to the application developers, who can develop fully customized solutions or adapt their applications to be compatible with more general-purpose middleware. However, there are obvious benefits to avoiding this extra burden on developers and to supporting legacy software without requiring extensive modifications. This has motivated the development of application-transparent mechanisms for high reliability.

The above considerations have led to the development of a plethora of techniques and products for tolerating VM failures using replication [21], [23], [24], [29], [36], [37]. Most of these techniques involve a primary VM and a "warm backup" (standby spare) VM [23], [36]. The applications typically run in the primary, which is periodically paused so that its state can be checkpointed to the backup. If the primary fails, the applications or services are started on the backup from the checkpointed state. In order for this failover to be transparent to the environment outside the VM (e.g., the service clients), these fault tolerance mechanisms ensure that the backup resumes from a state that is consistent with the final primary state visible to the clients.

Despite the advantages of containers, there has been very little work on high availability and fault tolerance techniques for containers [26]. In particular, there has been limited work on high availability techniques [27], [28]. However, to the best of our knowledge, there are no prior works that report on application-transparent, client-transparent, container-based fault tolerance mechanisms that support stateful applications. NiLiCon (Nine Lives Containers), as described in this paper, is such a mechanism.

The VM-level fault tolerance techniques discussed above do support stateful applications and provide application transparency as well as client transparency. Hence, NiLiCon uses the same basic approach [23]. Since container state does not include the entire kernel, the size of the periodic checkpoints can be expected to be smaller, potentially resulting in lower overhead when the technique is applied at the container level. On the other hand, there is a much tighter coupling between a container and the underlying kernel than between a VM and the underlying hypervisor. In particular, there is much more container state in the kernel (e.g., the list of open file descriptors of the applications in the container) than there is VM state in the hypervisor. Thus, implementing the warm backup replication scheme with containers is a more challenging task. NiLiCon is an existence proof that this challenge can be met.

The starting point of NiLiCon's implementation is a tool called CRIU (Checkpoint/Restore in Userspace) [3], which is able to checkpoint a container under Linux. However, the existing implementation of CRIU and the kernel interface provided by the Linux kernel incur high overhead for some of CRIU's operations. For example, our measurements show that collecting container namespace information may take up to 100ms. Hence, using the unmodified CRIU and Linux, it is not feasible to support the short checkpointing intervals (tens of milliseconds) required for client-server applications. An important contribution of our work is the identification and mitigation of all major performance bottlenecks in the current implementation of container checkpointing with CRIU.

The implementation of NiLiCon has involved significant modifications to CRIU and a few small kernel changes. We have validated the operation of NiLiCon and evaluated its overhead using seven benchmarks, five of which are server applications. For fail-stop faults, the recovery rate with NiLiCon was 100%. The performance overhead was in the range of 19%-67%. During normal operation, the CPU utilization on the backup was in the range of 6.8%-40%.

We make the following contributions: 1) Demonstrate a working implementation of the first container replication mechanism that is client-transparent, application-transparent, and supports stateful applications; 2) Identify the major performance bottlenecks in the current CRIU container checkpoint implementation and the Linux kernel interface and propose mechanisms to resolve them.

Section II presents Remus [23] and CRIU [3], which are the bases for NiLiCon. The key differences between NiLiCon and Remus are explained in §III. The design and operation of NiLiCon are discussed in §IV. Key implementation optimizations are presented in §V. The experimental setup and evaluation results are presented in §VI and §VII, respectively. Related work is discussed in §VIII.

II. BACKGROUND

NiLiCon is built on Remus [23] and CRIU [3]. Algorithmically, NiLiCon operates on containers in the same way that Remus operates on VMs, as described in §II-A. A key part of this technique is the mechanism used to periodically checkpoint the primary state to the backup. CRIU, as described in §II-B, is the starting point for NiLiCon's checkpointing mechanism.

A. Remus: Passive VM Replication

With Remus [23], there is a primary VM that executes the workload and a warm backup VM. As shown in Figure 1, execution on the primary is divided into epochs: the VM executes, is briefly stopped so that the changes to its state can be copied to a local staging buffer (Local state copy), and then resumes execution while the content of the staging buffer is concurrently transferred to the backup VM (Send state).

[Figure 1. Workflow of Remus and NiLiCon on the primary host. Each epoch consists of an Execute phase followed by a Stop phase (Local state copy); the Send state, Wait for ACK, and Release output steps overlap the next epoch's Execute phase.]

In order to identify the changes in VM state since the last state transfer, during the Pause interval of each epoch all the pages within the VM are set to be read-only. Thus, an exception is generated the first time that a page is modified in an epoch, allowing the hypervisor to track modified pages.
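The write-protection technique just described can be made concrete with a short user-space analogue. The sketch below tracks dirty pages with mprotect(2) and a SIGSEGV handler; Remus applies the same idea inside the hypervisor, to the VM's page tables. The region size, bitmap, and helper names are illustrative assumptions, not Remus or NiLiCon code.

#include <signal.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPAGES 1024              /* size of the tracked region (illustrative) */

static uint8_t *region;          /* the memory being tracked */
static long     page_size;
static uint8_t  dirty[NPAGES];   /* dirty bitmap for the current epoch */

/* The first write to a write-protected page faults; mark the page dirty
 * and re-enable writes so later stores to it run at full speed. */
static void on_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    uintptr_t addr = (uintptr_t)si->si_addr;
    uintptr_t base = (uintptr_t)region;
    if (addr < base || addr >= base + NPAGES * page_size)
        _exit(1);                 /* a real crash, not a tracked write */
    uintptr_t page = addr & ~(uintptr_t)(page_size - 1);
    dirty[(page - base) / page_size] = 1;
    mprotect((void *)page, page_size, PROT_READ | PROT_WRITE);
}

/* At the start of each epoch, clear the bitmap and write-protect the
 * whole region, so the first store to any page is trapped. */
static void begin_epoch(void)
{
    memset(dirty, 0, sizeof(dirty));
    mprotect(region, NPAGES * page_size, PROT_READ);
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);
    region = mmap(NULL, NPAGES * page_size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa = { .sa_sigaction = on_fault, .sa_flags = SA_SIGINFO };
    sigaction(SIGSEGV, &sa, NULL);

    begin_epoch();
    region[5 * page_size] = 42;   /* faults once; page 5 is now marked dirty */
    /* At the end of the epoch, only pages with dirty[i] == 1 need to be
     * copied to the staging buffer and sent to the backup. */
    return 0;
}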
A key issue addressed by Remus is the handling of the output of the primary VM to the external world. There are two key destinations of output from the VM: network and disk. For the network, incoming packets are processed normally. However, outgoing packets generated during the Execute phase are buffered. The outgoing packets buffered during an epoch, k, are released (Release output) during epoch k+1, once the backup VM acknowledges the receipt of the primary VM's state changes produced during epoch k. The delay (buffering) of outputs is needed to ensure that, upon failover, the state of the backup is consistent with the most recent outputs observable by the external world. Due to this delay, in order to support client-server applications, the checkpointing interval is short — tens of milliseconds.
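The per-epoch control flow on the primary host can be summarized by the schematic C loop below, written in the terminology of Figure 1. Every helper it calls (pause_guest, copy_dirty_state, and so on) is a hypothetical placeholder for the corresponding Remus/NiLiCon step, not a real Xen or NiLiCon API.

#include <stddef.h>
#include <unistd.h>

#define EPOCH_MS 25                       /* "tens of milliseconds" */

struct state_buf { void *pages; size_t len; };   /* dirty pages, etc. */
static struct state_buf staging;

/* Placeholders standing in for the real mechanisms: */
static void pause_guest(void) {}
static void resume_guest(void) {}
static void copy_dirty_state(struct state_buf *b) { (void)b; }
static void send_state_async(struct state_buf *b) { (void)b; }
static void wait_for_backup_ack(unsigned epoch) { (void)epoch; }
static void release_buffered_output(unsigned epoch) { (void)epoch; }

void replication_loop(void)
{
    for (unsigned epoch = 0; ; epoch++) {
        usleep(EPOCH_MS * 1000);          /* Execute: the guest runs; its
                                             outgoing packets are buffered */
        pause_guest();                    /* Stop */
        copy_dirty_state(&staging);       /* Local state copy */
        resume_guest();                   /* epoch k+1 starts executing... */

        /* ...while the remaining steps overlap its Execute phase: */
        send_state_async(&staging);       /* Send state */
        wait_for_backup_ack(epoch);       /* backup now holds epoch k */
        release_buffered_output(epoch);   /* Release output: packets buffered
                                             during epoch k reach the clients */
    }
}

Note that releasing outputs only after the backup acknowledges epoch k's state is exactly what guarantees that no client ever observes output from a state the backup does not hold.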
Remus handles disk output as changes to internal state. Specifically, the primary and backup VMs have separate disks, whose contents are initially identical. During each epoch, reads from the disk are processed normally. Writes to the disk are directly applied to the primary VM's disk and asynchronously transmitted to the backup VM. The backup VM buffers the disk writes in memory. Disk writes from an epoch, k, are written to the backup disk during epoch k+1, after the backup receives all the state changes performed by the primary VM during epoch k.

B. CRIU: Container Live Migration Mechanism

CRIU (Checkpoint/Restore In Userspace) [3] is a tool that can checkpoint and restore complicated real-world container state on Linux. CRIU can be used to perform live migration of a container. This involves obtaining the container state (checkpointing) on one host and restoring it on another. This requires migrating the user-level memory and register state of the container. Additionally, due to the tight coupling between the container and the underlying operating system, there is critical container state, such as open file descriptors and sockets, within the kernel that must be migrated. For checkpointing and restoring in-kernel container state, CRIU relies on kernel interfaces, such as the proc and sys file systems, as well as system calls, such as ptrace, getsockopt, and setsockopt.
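To illustrate these interfaces, the sketch below reads a process's memory map from procfs and extracts in-kernel TCP sequence state with the Linux TCP_REPAIR socket option, the mechanism CRIU uses for established TCP connections. It is a minimal sketch: the pid and socket descriptor are assumed to come from the container, CAP_NET_ADMIN is required, and error handling is elided.

#include <stdio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

#ifndef TCP_REPAIR
#define TCP_REPAIR       19       /* values from linux/tcp.h */
#endif
#ifndef TCP_REPAIR_QUEUE
#define TCP_REPAIR_QUEUE 20
#endif
#ifndef TCP_QUEUE_SEQ
#define TCP_QUEUE_SEQ    21
#endif
#ifndef TCP_SEND_QUEUE
#define TCP_SEND_QUEUE   2
#endif

/* Dump the memory map of one process in the container via procfs. */
void dump_maps(int pid)
{
    char path[64], line[512];
    snprintf(path, sizeof(path), "/proc/%d/maps", pid);
    FILE *f = fopen(path, "r");
    while (f && fgets(line, sizeof(line), f))
        fputs(line, stdout);      /* address ranges, permissions, backing files */
    if (f) fclose(f);
}

/* Extract the send-queue sequence number of an established TCP socket:
 * in-kernel state that a checkpoint must capture in order to restore
 * the connection transparently on the backup. */
unsigned int tcp_send_seq(int sk)
{
    int on = 1, off = 0, q = TCP_SEND_QUEUE;
    unsigned int seq = 0;
    socklen_t len = sizeof(seq);

    setsockopt(sk, IPPROTO_TCP, TCP_REPAIR, &on, sizeof(on));
    setsockopt(sk, IPPROTO_TCP, TCP_REPAIR_QUEUE, &q, sizeof(q));
    getsockopt(sk, IPPROTO_TCP, TCP_QUEUE_SEQ, &seq, &len);
    setsockopt(sk, IPPROTO_TCP, TCP_REPAIR, &off, sizeof(off));
    return seq;
}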