Fault-Tolerant Containers Using NiLiCon


34th IEEE International Parallel and Distributed Processing Symposium, New Orleans, LA (May 2020)

Diyu Zhou and Yuval Tamir
Computer Science Department, UCLA
[email protected], [email protected]

Abstract—Many services deployed in the cloud require high reliability and must thus survive machine failures. Providing such fault tolerance transparently, without requiring application modifications, has motivated extensive research on replicating virtual machines (VMs). Cloud computing relies on VMs or containers to provide an isolation and multitenancy layer. Containers have advantages over VMs in smaller size, faster startup, and avoiding the need to manage updates of multiple VMs. This paper reports on the design, implementation, and evaluation of NiLiCon — a transparent container replication mechanism for fault tolerance. To the best of our knowledge, NiLiCon is the first implementation of container replication, demonstrating that it can be used for transparent deployment of critical services in the cloud. NiLiCon is based on high-frequency asynchronous incremental checkpointing to a warm spare, as previously used for VMs. The challenge to accomplishing this is that, compared to VMs, there is much tighter coupling between the container state and the state of the underlying platform. NiLiCon meets this challenge, eliminating the need to deploy services in VMs, with performance overheads that are competitive with those of similar VM replication mechanisms. Specifically, with the seven benchmarks used in the evaluation, the performance overhead of NiLiCon is in the range of 19%-67%. For fail-stop faults, the recovery rate is 100%.

Keywords-fault tolerance; replication;
I. INTRODUCTION

Servers commonly host applications in virtual machines (VMs) and/or containers to facilitate efficient, flexible resource management [19], [30], [35]. In some environments containers and VMs are used together. However, in others containers alone have become the preferred choice, since their storage and deployment consume fewer resources, allowing for greater agility and elasticity [19], [27].

Many of the applications and services hosted by datacenters require high availability and/or high reliability. Meeting these needs can be left to the application developers, who can develop fully customized solutions or adapt their applications to be compatible with more general-purpose middleware. However, there are obvious benefits to avoiding this extra burden on developers and to supporting legacy software without requiring extensive modifications. This has motivated the development of application-transparent mechanisms for high reliability.

The above considerations have led to the development of a plethora of techniques and products for tolerating VM failures using replication [21], [23], [24], [29], [36], [37]. Most of these techniques involve a primary VM and a "warm backup" (standby spare) VM [23], [36]. The applications typically run in the primary, which is periodically paused so that its state can be checkpointed to the backup. If the primary fails, the applications or services are started on the backup from the checkpointed state. In order for this failover to be transparent to the environment outside the VM (e.g., the service clients), these fault tolerance mechanisms ensure that the backup resumes from a state that is consistent with the final primary state visible to the clients.

Despite the advantages of containers, there has been very little work on high availability and fault tolerance techniques for containers [26]. In particular, there has been limited work on high availability techniques [27], [28]. However, to the best of our knowledge, there are no prior works that report on application-transparent, client-transparent, container-based fault tolerance mechanisms that support stateful applications. NiLiCon (Nine Lives Containers), as described in this paper, is such a mechanism.

The VM-level fault tolerance techniques discussed above do support stateful applications and provide application transparency as well as client transparency. Hence, NiLiCon uses the same basic approach [23]. Since container state does not include the entire kernel, the size of the periodic checkpoints can be expected to be smaller, potentially resulting in lower overhead when the technique is applied at the container level. On the other hand, there is a much tighter coupling between a container and the underlying kernel than between a VM and the underlying hypervisor. In particular, there is much more container state in the kernel (e.g., the list of open file descriptors of the applications in the container) than there is VM state in the hypervisor. Thus, implementing the warm backup replication scheme with containers is a more challenging task. NiLiCon is an existence proof that this challenge can be met.

The starting point of NiLiCon's implementation is a tool called CRIU (Checkpoint/Restore In Userspace) [3], which is able to checkpoint a container under Linux. However, the existing implementation of CRIU and the kernel interface provided by the Linux kernel incur high overhead for some of CRIU's operations. For example, our measurements show that collecting container namespace information may take up to 100ms. Hence, using the unmodified CRIU and Linux, it is not feasible to support the short checkpointing intervals (tens of milliseconds) required for client-server applications. An important contribution of our work is the identification and mitigation of all major performance bottlenecks in the current implementation of container checkpointing with CRIU.

The implementation of NiLiCon has involved significant modifications to CRIU and a few small kernel changes. We have validated the operation of NiLiCon and evaluated its overhead using seven benchmarks, five of which are server applications. For fail-stop faults, the recovery rate with NiLiCon was 100%. The performance overhead was in the range of 19%-67%. During normal operation, the CPU utilization on the backup was in the range of 6.8%-40%.

We make the following contributions: 1) we demonstrate a working implementation of the first container replication mechanism that is client-transparent, application-transparent, and supports stateful applications; 2) we identify the major performance bottlenecks in the current CRIU container checkpoint implementation and the Linux kernel interface, and propose mechanisms to resolve them.

Section II presents Remus [23] and CRIU [3], which are the bases for NiLiCon. The key differences between NiLiCon and Remus are explained in §III. The design and operation of NiLiCon are discussed in §IV. Key implementation optimizations are presented in §V. The experimental setup and evaluation results are presented in §VI and §VII, respectively. Related work is discussed in §VIII.

II. BACKGROUND

NiLiCon is built on Remus [23] and CRIU [3]. Algorithmically, NiLiCon operates on containers in the same way that Remus operates on VMs, as described in §II-A. A key part of this technique is the mechanism used to periodically checkpoint the primary state to the backup. CRIU, as described in §II-B, is the starting point for NiLiCon's checkpointing mechanism.

A. Remus: Passive VM Replication

With Remus [23], there is a primary VM that executes the applications and a warm backup (standby spare) VM on another host. Execution on the primary is divided into epochs (Figure 1): at the end of each epoch, the VM is briefly stopped and the changes in its state since the previous checkpoint are copied to a local staging buffer (Local state copy). The VM then resumes execution while the content of the staging buffer is concurrently transferred to the backup VM (Send state).

Figure 1. Workflow of Remus and NiLiCon on the primary host. (The VM or container alternates Execute and Stop phases in each epoch, while Remus or NiLiCon performs the corresponding Local state copy, Send state, Wait for ACK, and Release output steps.)

In order to identify the changes in VM state since the last state transfer, during the Pause interval of each epoch all the pages within the VM are set to be read-only. Thus, an exception is generated the first time that a page is modified in an epoch, allowing the hypervisor to track modified pages.
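NiLiCon needs the same dirty-page information for a container that the hypervisor obtains for a VM. As an illustration of the write-protection trick described above (and not of NiLiCon's actual kernel-level mechanism), the following user-space C sketch write-protects a region at the start of an epoch, catches the first write to each page in a SIGSEGV handler, records the page as dirty, and restores write access:

#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPAGES 16

static long page_size;
static char *region;
static int dirty[NPAGES];   /* set to 1 when a page is first written in an epoch */

/* The first write to a write-protected page raises SIGSEGV: record the
 * page as dirty and make it writable so the faulting write can proceed.
 * (Calling mprotect() in a signal handler is not formally
 * async-signal-safe, but it is the conventional way to prototype this
 * on Linux.) */
static void segv_handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    char *addr = (char *)((uintptr_t)info->si_addr & ~(uintptr_t)(page_size - 1));
    if (addr >= region && addr < region + NPAGES * page_size) {
        dirty[(addr - region) / page_size] = 1;
        mprotect(addr, page_size, PROT_READ | PROT_WRITE);
    } else {
        _exit(1);   /* a genuine fault, not one of our tracking traps */
    }
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);
    region = mmap(NULL, NPAGES * page_size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = segv_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    /* Epoch start: clear the dirty set and write-protect every page. */
    memset(dirty, 0, sizeof dirty);
    mprotect(region, NPAGES * page_size, PROT_READ);

    region[0] = 'x';                /* faults once, is recorded, then proceeds */
    region[5 * page_size] = 'y';

    for (int i = 0; i < NPAGES; i++)
        if (dirty[i])
            printf("page %d was modified this epoch\n", i);   /* pages 0 and 5 */
    return 0;
}

At the end of the epoch, only the pages flagged in the dirty set need to be copied to the staging buffer, which is what makes the checkpoints incremental.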
A key issue addressed by Remus is the handling of the output of the primary VM to the external world. There are two key destinations of output from the VM: the network and the disk. For the network, incoming packets are processed normally. However, outgoing packets generated during the Execute phase are buffered. The outgoing packets buffered during an epoch, k, are released (Release output) during epoch k+1, once the backup VM acknowledges the receipt of the primary VM's state changes produced during epoch k. The delay (buffering) of outputs is needed to ensure that, upon failover, the state of the backup is consistent with the most recent outputs observable by the external world. Due to this delay, in order to support client-server applications, the checkpointing interval is short — tens of milliseconds.
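To make the output rule concrete, here is a single-process C sketch of the buffering discipline just described. Checkpointing and the backup's acknowledgment are stubbed out, the release is shown synchronously rather than overlapped with epoch k+1, and all names (emit, checkpoint_and_wait_ack) are illustrative rather than taken from Remus or NiLiCon:

#include <stdio.h>

#define MAX_OUT 64

static char outbuf[MAX_OUT][128];   /* outgoing packets buffered in this epoch */
static int nout;

/* The application "sends" a packet; during the Execute phase it is only
 * queued, never put on the wire. */
static void emit(const char *msg)
{
    snprintf(outbuf[nout++], sizeof outbuf[0], "%s", msg);
}

/* Stand-in for the real work: stop, copy dirty state to the staging
 * buffer, resume, transfer the state to the backup, wait for its ACK. */
static void checkpoint_and_wait_ack(int epoch)
{
    printf("[epoch %d] state checkpointed and ACKed by backup\n", epoch);
}

int main(void)
{
    for (int epoch = 0; epoch < 3; epoch++) {
        nout = 0;
        /* Execute phase: client replies are produced but held back. */
        emit("reply A");
        emit("reply B");

        checkpoint_and_wait_ack(epoch);

        /* Release output: only now may the outside world see epoch k's
         * packets, since the backup can now reproduce any state that
         * those packets reveal. */
        for (int i = 0; i < nout; i++)
            printf("[epoch %d] released: %s\n", epoch, outbuf[i]);
    }
    return 0;
}

The sketch shows why a long epoch directly inflates client-visible latency: a reply emitted early in epoch k still waits until the checkpoint of epoch k is acknowledged, hence the tens-of-milliseconds intervals cited above.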
Remus handles disk output as changes to internal state. Specifically, the primary and backup VMs have separate disks, whose contents are initially identical. During each epoch, reads from the disk are processed normally. Writes to the disk are directly applied to the primary VM's disk and asynchronously transmitted to the backup VM. The backup VM buffers the disk writes in memory. Disk writes from an epoch, k, are written to the backup disk during epoch k+1, after the backup receives all the state changes performed by the primary VM during epoch k.

B. CRIU: Container Live Migration Mechanism

CRIU (Checkpoint/Restore In Userspace) [3] is a tool that can checkpoint and restore complicated real-world container state on Linux. CRIU can be used to perform live migration of a container: the container state is obtained (checkpointed) on one host and restored on another. This requires migrating the user-level memory and register state of the container. Additionally, due to the tight coupling between the container and the underlying operating system, there is critical container state, such as open file descriptors and sockets, within the kernel that must be migrated. For checkpointing and restoring in-kernel container state, CRIU relies on kernel interfaces, such as the proc and sys file systems, as well as system calls, such as ptrace, getsockopt, and setsockopt.
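For instance, the open file descriptors of a process, part of the in-kernel state mentioned above, are exposed through the proc file system as symbolic links under /proc/<pid>/fd. The following minimal C sketch shows the kind of /proc walk involved (illustrative only; CRIU's actual code records far more detail per descriptor):

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *pid = argc > 1 ? argv[1] : "self";  /* a PID, or our own */
    char dirpath[64], linkpath[PATH_MAX], target[PATH_MAX];

    snprintf(dirpath, sizeof dirpath, "/proc/%s/fd", pid);
    DIR *d = opendir(dirpath);
    if (!d) {
        perror("opendir");
        return 1;
    }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;
        snprintf(linkpath, sizeof linkpath, "%s/%s", dirpath, e->d_name);
        ssize_t n = readlink(linkpath, target, sizeof target - 1);
        if (n < 0)
            continue;
        target[n] = '\0';
        /* The link target names the kernel object behind the descriptor,
         * e.g. a file path, "socket:[1234]", or "pipe:[5678]": exactly the
         * kind of in-kernel state a container checkpoint must capture. */
        printf("fd %s -> %s\n", e->d_name, target);
    }
    closedir(d);
    return 0;
}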
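CRIU can also be driven programmatically through libcriu, the C library shipped with the CRIU sources. The sketch below assumes that API (criu_init_opts(), criu_set_pid(), criu_dump(), and related setters) and a CRIU service daemon listening on the default socket (e.g., started with "criu service"); the exact option setters and return-code conventions should be checked against the installed CRIU version:

#include <criu/criu.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hedged sketch: dump a process tree to an image directory via libcriu.
 * Build with -lcriu; run as root with a "criu service" daemon running. */
int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <pid> <image-dir>\n", argv[0]);
        return 1;
    }

    int dir_fd = open(argv[2], O_DIRECTORY);
    if (dir_fd < 0) {
        perror("open image dir");
        return 1;
    }

    if (criu_init_opts() < 0)
        return 1;

    criu_set_pid(atoi(argv[1]));      /* root of the process tree to dump */
    criu_set_images_dir_fd(dir_fd);   /* where the image files are written */
    criu_set_log_file("dump.log");
    criu_set_log_level(4);
    criu_set_leave_running(true);     /* keep the tree alive after the dump */
    criu_set_tcp_established(true);   /* also capture established TCP state */

    int ret = criu_dump();            /* negative return indicates failure */
    printf("criu_dump() returned %d\n", ret);
    return ret < 0;
}

The leave_running option matters in this context: a warm-backup scheme of the kind described in this paper checkpoints the same container repeatedly, so the container must keep executing after every dump, unlike in a one-shot migration.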