Fast and Live Hypervisor Replacement


Spoorti Doddamani, Piush Sinha, Hui Lu, Tsu-Hsiang K. Cheng, Hardik H. Bagdi, and Kartik Gopalan
Binghamton University, New York, USA

Abstract

Hypervisors are increasingly complex and must often be updated to apply security patches, bug fixes, and feature upgrades. However, in a virtualized cloud infrastructure, updates to an operational hypervisor can be highly disruptive: before being updated, virtual machines (VMs) running on a hypervisor must be either migrated away or shut down, resulting in downtime, performance loss, and network overhead. We present a new technique, called HyperFresh, to transparently replace a hypervisor with a new updated instance without disrupting any running VMs. A thin shim layer, called the hyperplexor, performs live hypervisor replacement by remapping guest memory to a new updated hypervisor on the same machine. The hyperplexor leverages nested virtualization for hypervisor replacement while minimizing nesting overheads during normal execution. We present a prototype implementation of the hyperplexor on the KVM/QEMU platform that can perform live hypervisor replacement within 10ms. We also demonstrate how a hyperplexor-based approach can be used for sub-second relocation of containers for live OS replacement.

CCS Concepts • Software and its engineering → Virtual machines; Operating systems.

Keywords Hypervisor, Virtualization, Container, Live Migration

ACM Reference Format:
Spoorti Doddamani, Piush Sinha, Hui Lu, Tsu-Hsiang K. Cheng, Hardik H. Bagdi, and Kartik Gopalan. 2019. Fast and Live Hypervisor Replacement. In Proceedings of the 15th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE '19), April 14, 2019, Providence, RI, USA. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3313808.3313821

1 Introduction

Virtualization-based server consolidation is a common practice in today's cloud data centers [2, 24, 43]. Hypervisors host multiple virtual machines (VMs), or guests, on a single physical host to improve resource utilization and achieve agility in resource provisioning for cloud applications [3, 5–7, 50]. Hypervisors must often be updated or replaced for various purposes, such as applying security/bug fixes [23, 41], adding new features [15, 25], or simply performing software rejuvenation [40] to reset the effects of any unknown memory leaks or other latent bugs.

Updating a hypervisor usually requires a system reboot, especially in the cases of system failures and software aging. Live patching [61] can be used to perform some of these updates without rebooting, but it relies greatly on the old hypervisor being patched, which can be buggy and unstable. To eliminate the need for a system reboot and mitigate service disruption, another approach is to live migrate the VMs from the current host to another host that runs a clean and updated hypervisor. Though widely used, live migrating [18, 27] tens or hundreds of VMs from one physical host to another, i.e., inter-host live migration, can lead to significant service disruption, long total migration time, and large migration-triggered network traffic, which can also affect other unrelated VMs.
In this paper, we present HyperFresh, a faster and less disruptive approach to live hypervisor replacement, which transparently and quickly replaces an old hypervisor with a new instance on the same host while minimizing the impact on running VMs. Using nested virtualization [12], a lightweight shim layer, called the hyperplexor, runs beneath the traditional full-fledged hypervisor on which VMs run. The new replacement hypervisor is instantiated as a guest atop the hyperplexor. Next, the states of all VMs are transferred from the old hypervisor to the replacement hypervisor via intra-host live VM migration.

[Figure 1. Hypervisor replacement in (a) Inter-host (non-nested) and (b) Intra-host (nested) setting.]

However, two major challenges must be tackled with this approach. First, existing live migration techniques [18, 27] incur significant memory copying overhead, even for intra-host VM transfers. Second, nested virtualization can degrade a VM's performance during normal execution, when no hypervisor replacement is being performed. HyperFresh addresses these two challenges as follows.

First, instead of copying a VM's memory, the hyperplexor relocates the ownership of the VM's memory pages from the old hypervisor to the replacement hypervisor. The hyperplexor records the mappings between the VM's guest-physical and host-physical address spaces from the old hypervisor and uses them to reconstruct the VM's memory mappings on the replacement hypervisor. Most of the remapping operations are performed out of the critical path of the VM state transfer, leading to a very low hypervisor replacement time of around 10ms, irrespective of the size of the VMs being relocated. In contrast, traditional intra-host VM migration, involving memory copying, can take several seconds. For the same reason, HyperFresh also scales well when remapping multiple VMs to the replacement hypervisor.
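The bookkeeping behind this remapping can be sketched as follows. This is a minimal illustration of the idea, not HyperFresh's actual code; the structures and helpers (record_mappings, install_mapping) are hypothetical stand-ins for the hyperplexor's page-table interfaces.

```c
#include <stddef.h>
#include <stdint.h>

/* One guest-physical -> host-physical mapping, as recorded from the old
 * hypervisor's second-level (EPT-like) page table. Hypothetical layout. */
struct gpa_hpa_map {
    uint64_t gpa;   /* guest-physical frame address */
    uint64_t hpa;   /* host-physical frame address  */
};

/* Hypothetical hyperplexor hooks. */
extern size_t record_mappings(int old_hv, struct gpa_hpa_map *out, size_t max);
extern int    install_mapping(int new_hv, uint64_t gpa, uint64_t hpa);

/* Relocate ownership of a VM's memory: no page contents are copied; only
 * page-table entries are rebuilt on the replacement hypervisor. Most of
 * this loop can run ahead of the brief stop-and-copy of VCPU/I/O state. */
static int remap_vm(int old_hv, int new_hv, struct gpa_hpa_map *buf, size_t max)
{
    size_t n = record_mappings(old_hv, buf, max);
    for (size_t i = 0; i < n; i++) {
        if (install_mapping(new_hv, buf[i].gpa, buf[i].hpa) != 0)
            return -1;  /* abort without tearing down the old mappings */
    }
    return 0;
}
```

Because only page-table entries cross the boundary, the work grows with the number of mappings rather than with the bytes of guest memory, which is why the replacement time stays flat as VM size grows.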
HyperFresh addresses the second challenge, nesting overhead during normal execution, as follows. In comparison with the traditional single-level virtualization setup, where the hypervisor directly controls the hardware, nested virtualization introduces additional overheads, especially for I/O virtualization. Hence, HyperFresh includes a number of optimizations to minimize nesting overheads, allowing the hypervisor and its VMs to execute mostly without hyperplexor intervention during normal operations. Specifically, HyperFresh uses direct device assignment (VT-d) for an emulation-free I/O path to the hypervisor, dedicates physical CPUs to reduce scheduling overheads for the hypervisor, reduces CPU utilization on the hyperplexor by disabling the polling of hypervisor VCPUs, and eliminates VM Exits due to external device interrupts.

Finally, as a lightweight alternative to VMs, containers [21, 52–54] can be used to consolidate multiple processes. We demonstrate how the hyperplexor-based approach can be used for live relocation of containers to support replacing the underlying OS. Specifically, we demonstrate sub-second live relocation of a container from an old OS to a replacement OS by combining the hyperplexor-based memory remapping mechanism with a well-known process migration tool, CRIU [58]. In this case, the hyperplexor runs as a thin shim layer (hy- […] optimizations to reduce nesting overheads, and live container relocation for OS replacement. In the rest of this paper, we first demonstrate the quantitative overheads of VM migration-based hypervisor replacement, followed by the HyperFresh design, implementation, and evaluation, and finally a discussion of related work and conclusions.

2 Problem Demonstration

In this section, we examine the performance of traditional live migration for hypervisor replacement to motivate the need for a faster remapping-based mechanism.

2.1 Using Pre-Copy For Hypervisor Replacement

Inter-Host Live VM Migration: To refresh a hypervisor, a traditional approach is to live migrate VMs from their current host to another host (or hosts) having an updated hypervisor. As shown in Figure 1(a), we can leverage the state-of-the-art pre-copy live VM migration technique. Pre-copy live VM migration consists of three major phases: iterative memory pre-copy rounds, stop-and-copy, and resumption. During the memory pre-copy phase, the first iteration transfers all memory pages over the network to the destination, while the VM continues to execute concurrently. In subsequent iterations, only the dirtied pages are transferred. After a certain number of iterations, determined by a convergence criterion, the stop-and-copy phase is initiated, during which the VM is paused at the source and any remaining dirty pages, VCPU state, and I/O state are transferred to the destination VM. Finally, the VM is resumed at the destination.

Intra-Host Live VM Migration: As Figure 1(b) shows, with nested virtualization, a base hyperplexor at layer-0 (L0) can run deprivileged hypervisors at layer-1 (L1), which control VMs running at layer-2 (L2). At hypervisor replacement time, […]
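The cost structure of the pre-copy mechanism described above can be sketched as follows. This is a simplification for illustration, not QEMU's implementation, and all helper functions are hypothetical.

```c
#include <stddef.h>

/* Hypothetical helpers standing in for a real migration stack. */
extern size_t all_pfns(size_t *pfns, size_t max);        /* every guest page frame   */
extern size_t get_dirty_pfns(size_t *pfns, size_t max);  /* drain the dirty-page log */
extern void   send_pages(const size_t *pfns, size_t n);  /* copy pages to the target */
extern void   pause_vm(void);
extern void   send_vcpu_and_io_state(void);
extern void   resume_at_destination(void);

/* Pre-copy live migration: iterative dirty-page rounds, then a short
 * stop-and-copy once the remaining dirty set is small enough (or the
 * round budget runs out). */
void precopy_migrate(size_t *pfns, size_t max, size_t dirty_threshold, int max_rounds)
{
    size_t n = all_pfns(pfns, max);       /* round 1: transfer all memory */
    send_pages(pfns, n);

    for (int round = 0; round < max_rounds; round++) {
        n = get_dirty_pfns(pfns, max);    /* later rounds: dirtied pages only */
        if (n <= dirty_threshold)
            break;                        /* convergence criterion met */
        send_pages(pfns, n);
    }

    pause_vm();                           /* stop-and-copy: brief downtime */
    n = get_dirty_pfns(pfns, max);
    send_pages(pfns, n);
    send_vcpu_and_io_state();
    resume_at_destination();
}
```

Every round retransmits whatever the guest dirtied in the meantime, so total migration time and network traffic grow with both VM size and write rate; these are exactly the costs that the remapping approach sidesteps.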
Recommended publications
  • Effective Virtual CPU Configuration with QEMU and Libvirt
  • Understanding Full Virtualization, Paravirtualization, and Hardware Assist
  • Introduction to Virtualization
  • Fast and Scalable VMM Live Upgrade in Large Cloud Infrastructure
  • KVM Based Virtualization and Remote Management
  • Hypervisors Vs. Lightweight Virtualization: a Performance Comparison
  • The Operating System Process in Virtualization for Cloud Computing 1J
  • What's New in the z/VM 6.3 Hypervisor, Session 17515
  • Virtualization Overview
  • Paravirtualization (PV)
  • IBM System z z/VM Basics
  • Live Kernel Patching Using kGraft