Instant OS Updates via Userspace Checkpoint-and-Restart


Instant OS Updates via Userspace Checkpoint-and-Restart

Sanidhya Kashyap, Changwoo Min, Byoungyoung Lee, and Taesoo Kim, Georgia Institute of Technology; Pavel Emelyanov, CRIU and Odin, Inc.

https://www.usenix.org/conference/atc16/technical-sessions/presentation/kashyap

This paper is included in the Proceedings of the 2016 USENIX Annual Technical Conference (USENIX ATC ’16), June 22–24, 2016, Denver, CO, USA. ISBN 978-1-931971-30-0. Open access to the Proceedings of the 2016 USENIX Annual Technical Conference (USENIX ATC ’16) is sponsored by USENIX.

Abstract

In recent years, operating systems have become increasingly complex and thus more prone to security and performance issues. Accordingly, system updates to address these issues have become more frequently available and increasingly important. To complete such updates, users must reboot their systems, resulting in unavoidable downtime and further loss of the states of running applications.

We present KUP, a practical OS update mechanism that employs a userspace checkpoint-and-restart mechanism, which uses an optimized data structure for checkpointing on disk as well as a memory persistence mechanism across the update, coupled with a fast in-place kernel switch. This allows for instant kernel updates spanning across major kernel versions without any kernel modifications.

[Figure 1: Limitation of dynamic kernel hot-patching using kpatch. Only two successful updates (3.13.0.32→34 and 3.19.0.20→21) out of 23 Ubuntu kernel package releases (series 3.13.0-x, 3.16.0-x, and 3.19.0-x; May 2014–Jun 2015). The x-axis represents the before- and after-versions; dotted bars represent failures in executing kpatch, with one legend per failure case: build/diff errors—kpatch failed during the build and diff processes; # static local errors—a patch modifies the value of a static local variable; # layout errors—a patch changes the layout of a data structure. A second y-axis (# lines, 1–1000K) plots the number of changed lines (# num lines) per release.]

Our evaluation shows that KUP can support any type of real kernel patch (e.g., security fixes, minor updates, or even major releases) with large-scale applications such as memcached and mysql, or even in the middle of a Linux kernel compilation, unlike well-known dynamic hot-patching techniques (e.g., Ksplice). Not only that, KUP can update a running Linux kernel in 3 seconds (overall downtime) without losing 32 GB of memcached data, from kernel version v3.17-rc7 to v4.1.

1 Introduction

Today, computer users routinely update their operating systems either to patch security vulnerabilities, fix bugs, or add a new set of features to the existing system. Unfortunately, to fully incorporate a new update, users have no choice but to restart their systems, which results in the loss of the running states of applications. This unavoidable disruption not only brings inconvenience to end users, but also causes an adverse financial impact on businesses. For example, updating a memcached [57] server at Facebook that caches around 120 GB of data in 144 GB of RAM [24] requires a prolonged warm-up phase of 90–100 minutes [30]. The more critical problem is when a system update fails. Such a failure often leads to additional downtime or maintenance costs thereafter, resulting in an immediate loss of active customers [20, 21]. Still, even with the unavoidable disruption and the risks involved in update failures, system updates are necessary to promptly mitigate known security issues and resolve critical bugs that might hurt the correctness of operations [8, 25, 44, 68, 71].

To solve these problems, two large sets of techniques are used in practice: dynamic hot-patching and rolling updates. Dynamic hot-patching (or live update) [4, 67, 73] directly applies patches to the running kernel. As a result, system updates can be performed without incurring application downtime. However, dynamic hot-patching is inherently limited to patches that do not semantically modify data structures. Therefore, it is commonly used to apply simple security patches that contain minor code changes. For instance, kpatch [67], a popular dynamic update tool developed by Red Hat, was able to support only 2 out of 23 minor updates over a year of Ubuntu's kernel releases (see Figure 1).

Rolling updates [20, 60] are another viable technique for large-scale systems.
In rolling updates, system administrators apply an update to a small group of machines; if there is no failure, they apply the same update to the rest of the machines in the data center. This helps administrators minimize the risk of update failures and system downtime. However, this process requires careful planning to minimize the disruption of running services due to the inevitable downtime of applications.

There are several research projects that attempt to solve the OS update problem either by proposing new OS constructions [5, 7, 23, 65, 80], or by providing a transfer function for system updates [28]. In order to keep their internal states switchable, such OSes require radical design changes, thereby making these modifications impractical for commodity OSes [19, 69].

To address these issues, we present KUP—a new update mechanism that allows for prompt kernel updates without modifying a commodity OS. KUP incorporates application checkpoint-and-restart (C/R) for saving and restoring the application states before and after the update, and an in-place kernel switch to quickly boot into a new kernel. This allows KUP to minimize the downtime of running applications during the system update. Unlike dynamic hot-patching mechanisms, KUP can perform a full system update regardless of the complexity of updates, and thus can support a broader range of system updates, including security and bug fixes, the addition of new features, and even major kernel upgrades. To address update failures, KUP provides a safe fallback mechanism that restores the original system, checkpointed right before the update.

KUP works by leveraging OS-provided infrastructure such as procfs and the ptrace system call [37] for obtaining information during the checkpoint of an application, and the process forking (clone, fork) mechanism along with other system calls [15, 37] and netlink sockets to restore the application after the update. To enable an efficient, userspace-based C/R of an application during an update, we design an efficient checkpointing storage format that improves both the checkpoint size and KUP's performance. However, the aforementioned solution still ends up storing RAM data on the disk, which is not suitable for disk-less systems [30, 57]. We further extend KUP and make it generic enough to support disk-less systems by proposing a memory persistence mechanism, which we implement via binary patching and dynamic OS instrumentation. This approach is complementary to our userspace solution.

This paper makes the following three contributions:

• We design a simple, yet robust update mechanism by using application C/R, and implement an open-source prototype of KUP.
• We show KUP's effectiveness by providing an in-depth analysis of the proposed techniques with real applications, micro-benchmarks, and a full software stack representing a generic data center application.
• We devise, implement, and evaluate a safe fallback mechanism for KUP that can be easily realized through application C/R and an in-place kernel switch.

2 Related work

In this section, we compare KUP's approach to previous studies in four areas: dynamic hot-patching, new OS designs, live migration, and application C/R for updates.

Dynamic hot-patching. Industries rely on solutions such as Ksplice [4], kpatch [67], and KGraft [73] to promptly mitigate security vulnerabilities without any significant outage of their production systems. However, as shown in Figure 1, the hot-patching techniques have critical limitations, especially in handling data layout changes, thus making it impossible to guarantee the safety of live updates.
Previous studies have proposed dynamic software update schemes for event-driven systems [1, 27, 28, 36], object-oriented languages [39, 63], and even the C language [4, 35, 53, 54, 56], including formal proofs [34, 55, 72]. However, unlike previous update techniques, KUP relies on a whole kernel switch to apply any kind of patch, regardless of its complexity. KUP allows for system-wide live updates without modifying programs or tracking their state changes, unlike previous works [19, 35].

New operating system designs for live update. Another approach is to design a new OS with well-defined abstractions between interfaces and implementations so that each component can be replaced online without disruption. Representative examples include microkernels (e.g., exokernel [23], K42 [5, 7, 70], Barrelfish [6, 80]) and LibOSes (e.g., Drawbridge [65]). In particular, Proteos [28], based on MINIX 3, and a Linux variant [69] are designed to update their components online and also transform internal data structures to adopt the updated kernel. Unfortunately, with a large amount of code changes (e.g., new features), constructing the state transfer functions either requires manual effort or is infeasible in many cases [5, 7, 11, 28, 69, 70].

Autopod [66] and the work by Siniavine et al. [69] are closest to KUP. Autopod uses a namespace mechanism for process decoupling and migrating applications across machines, whereas Siniavine et al. provide a kernel-space C/R mechanism coupled with an in-place kernel switch to update the OS, employing techniques similar to those of Otherworld [19].
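Since KUP's checkpoint-and-restart runs entirely in userspace, its flavor is easy to convey with CRIU's C client library, libcriu. The sketch below is illustrative rather than KUP's actual code: it assumes a running `criu service` daemon, root privileges, and an existing images directory, and the spot where KUP would perform its in-place kernel switch is only marked with a comment.

/* cr_sketch.c - a minimal userspace checkpoint/restart sketch using
 * libcriu (CRIU's C API). Illustrative only: KUP adds an optimized
 * on-disk image format and an in-place kernel switch between the
 * dump and the restore.
 * Build: gcc cr_sketch.c -o cr_sketch -lcriu
 * Needs: a running `criu service` daemon and root privileges. */
#include <criu/criu.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <pid> <images-dir>\n", argv[0]);
        return 1;
    }

    int dir_fd = open(argv[2], O_DIRECTORY);
    if (dir_fd < 0) { perror("open images-dir"); return 1; }

    if (criu_init_opts() < 0) { fprintf(stderr, "criu_init_opts failed\n"); return 1; }
    criu_set_pid((pid_t)atoi(argv[1])); /* root of the process tree to checkpoint */
    criu_set_images_dir_fd(dir_fd);     /* where image files are written and read */
    criu_set_log_file("cr.log");
    criu_set_shell_job(true);           /* tolerate a terminal-attached task */

    if (criu_dump() < 0) { fprintf(stderr, "dump failed\n"); return 1; }

    /* ... here KUP would kexec into the updated kernel ... */

    if (criu_restore_child() < 0) { fprintf(stderr, "restore failed\n"); return 1; }
    return 0;
}

In KUP itself, of course, the restore happens only after the new kernel has booted, so dump and restore would not share one process as they do in this toy.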
Recommended publications
  • Checkpoint and Restoration of Micro-Service in Docker Containers
3rd International Conference on Mechatronics and Industrial Informatics (ICMII 2015). Checkpoint and Restoration of Micro-service in Docker Containers. Chen Yang, School of Information Security Engineering, Shanghai Jiao Tong University, China 200240. [email protected]

Keywords: Lightweight Virtualization, Checkpoint/Restore, Docker.

Abstract. In the present days of rapid adoption of micro-services, it is imperative to build systems that support and ensure high performance and high availability for micro-services. Lightweight virtualization, also called containerization, has the ability to run multiple isolated sets of processes under a single kernel instance. Because they can achieve low overhead comparable to the near-native performance of a bare server, container techniques such as OpenVZ, LXC, and Docker are widely used for micro-services [1]. In this paper, we address the high availability of micro-services in containers. We investigate the capabilities provided by containers (Docker, OpenVZ) to model and build a micro-service infrastructure, and compare different checkpoint and restore technologies for high availability. Finally, we present preliminary performance results of the infrastructure tuned to micro-services.

Introduction. Lightweight virtualization, also named operating-system-level virtualization, partitions the physical machine's resources, creating multiple isolated user-space instances. Each container acts exactly like a stand-alone server: a container can be rebooted independently and has root access, users, IP addresses, memory, processes, files, etc. Unlike traditional virtualization with a hypervisor layer, containerization takes place at the kernel level. Most modern operating system kernels now support the primitives necessary for containerization, including Linux with OpenVZ, VServer, and more recently LXC; Solaris with Zones; and FreeBSD with Jails [2].
  • Red Hat Enterprise Linux 7 7.1 Release Notes
Red Hat Enterprise Linux 7 7.1 Release Notes: Release Notes for Red Hat Enterprise Linux 7. Red Hat Customer Content Services.

Legal Notice. Copyright © 2015 Red Hat, Inc. This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law. Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
  • The Linux Kernel Module Programming Guide
The Linux Kernel Module Programming Guide. Peter Jay Salzman, Michael Burian, Ori Pomerantz. Copyright © 2001 Peter Jay Salzman. 2007-05-18, ver 2.6.4.

The Linux Kernel Module Programming Guide is a free book; you may reproduce and/or modify it under the terms of the Open Software License, version 1.1. You can obtain a copy of this license at http://opensource.org/licenses/osl.php. This book is distributed in the hope it will be useful, but without any warranty, without even the implied warranty of merchantability or fitness for a particular purpose. The author encourages wide distribution of this book for personal or commercial use, provided the above copyright notice remains intact and the method adheres to the provisions of the Open Software License. In summary, you may copy and distribute this book free of charge or for a profit. No explicit permission is required from the author for reproduction of this book in any medium, physical or electronic. Derivative works and translations of this document must be placed under the Open Software License, and the original copyright notice must remain intact. If you have contributed new material to this book, you must make the material and source code available for your revisions. Please make revisions and updates available directly to the document maintainer, Peter Jay Salzman <[email protected]>. This will allow for the merging of updates and provide consistent revisions to the Linux community. If you publish or distribute this book commercially, donations, royalties, and/or printed copies are greatly appreciated by the author and the Linux Documentation Project (LDP).
  • Flexible Lustre Management
Flexible Lustre management: Making less work for Admins. ORNL is managed by UT-Battelle for the US Department of Energy.

How do we know Lustre condition today • Polling proc / sysfs files – The knocking-on-the-door model – Parse stats, rpc info, etc. for performance deviations. • Constant collection of debug logs – Heavy parsing for common problems. • The death of a node – Have to examine kdumps and/or lustre dumps.

Origins of a new approach • Requirements for Linux kernel integration – No more proc usage – Migration to sysfs and debugfs – Used to configure your file system – Started in Lustre 2.9 and still ongoing. • Two ways to configure your file system – On the MGS server, run lctl conf_param … • Directly accessed proc seq_files. – On the MGS server, run lctl set_param -P • Originally used an upcall to lctl for configuration • Introduced in Lustre 2.4 but was broken until Lustre 2.12 (LU-7004) – Configuring the file system works transparently before and after the sysfs migration.

Changes introduced with the sysfs / debugfs migration • sysfs has a one-item-per-file rule. • Complex proc files moved to debugfs. • Moving to debugfs introduced permission problems – Only debugging files should be there. – Both debugfs and procfs have scaling issues. • Moving to sysfs introduced the ability to send uevents – Item of most interest from the LUG 2018 Linux Lustre client talk. – Both lctl conf_param and lctl set_param -P use this approach • lctl conf_param can set sysfs attributes without uevents. See class_modify_config(). – We get life-cycle events for free – udev is now involved.

What do we get by using udev? • Under the hood – uevents are collected by systemd and then processed by udev rules – /etc/udev/rules.d/99-lustre.rules – SUBSYSTEM=="lustre", ACTION=="change", ENV{PARAM}=="?*", RUN+="/usr/sbin/lctl set_param '$env{PARAM}=$env{SETTING}'" • You can create your own udev rules – http://reactivated.net/writing_udev_rules.html – /lib/udev/rules.d/* for examples – Add udev_log="debug" to /etc/udev.conf if you have problems. • Using systemd for long tasks.
  • Reducing Event Latency and Power Consumption in Mobile Devices by Using a Kernel-Level Display Server
IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. Z, NO. B, AUGUST 2017. Reducing Event Latency and Power Consumption in Mobile Devices by Using a Kernel-Level Display Server. Stephen Marz, Member, IEEE, Brad Vander Zanden, and Wei Gao, Member, IEEE. E-mail: [email protected], [email protected], [email protected].

Abstract—Mobile devices differ from desktop computers in that they have a limited power source, a battery, and they tend to spend more CPU time on the graphical user interface (GUI). These two facts force us to consider different software approaches in the mobile device kernel that can conserve battery life and reduce latency, which is the duration of time between the inception of an event and the reaction to the event. One area to consider is a software package called the display server. The display server is middleware that handles all GUI activities between an application and the operating system, such as event handling and drawing to the screen. In both desktop and mobile devices, the display server is located in the application layer. However, the kernel layer contains most of the information needed for handling events and drawing graphics, which forces the application-level display server to make a series of system calls in order to coordinate events and to draw graphics. These calls interrupt the CPU, which can increase both latency and power consumption, and also require the kernel to maintain event queues that duplicate event queues in the display server. A further drawback of placing the display server in the application layer is that the display server contains most of the information required to efficiently schedule the application, and this information is not communicated to existing kernels, meaning that GUI-oriented applications are scheduled less efficiently than they might be, which further increases power consumption.
  • Linux From Scratch, Version 7.8-systemd
Linux From Scratch, Version 7.8-systemd. Created by Gerard Beekmans, edited by Douglas R. Reno. Copyright © 1999-2015 Gerard Beekmans. All rights reserved. This book is licensed under a Creative Commons License. Computer instructions may be extracted from the book under the MIT License. Linux® is a registered trademark of Linus Torvalds.

Table of Contents: Preface (vii); i. Foreword (vii); ii. Audience (vii); iii. LFS Target Architectures (viii); iv. LFS and Standards (ix); v. Rationale for Packages in the Book (x); vi. Prerequisites
  • Fast and Scalable VMM Live Upgrade in Large Cloud Infrastructure
Fast and Scalable VMM Live Upgrade in Large Cloud Infrastructure. Xiantao Zhang (Alibaba Group), Xiao Zheng (Alibaba Group), Zhi Wang (Florida State University), Qi Li (Tsinghua University), Junkang Fu (Alibaba Group), Yang Zhang (Alibaba Group), Yibin Shen (Alibaba Group). [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected].

Abstract. High availability is the most important and challenging problem for cloud providers. However, the virtual machine monitor (VMM), a crucial component of the cloud infrastructure, has to be frequently updated and restarted to add security patches and new features, undermining high availability. There are two existing live update methods to improve the cloud availability: kernel live patching and Virtual Machine (VM) live migration. However, they both have serious drawbacks that impair their usefulness in the large cloud infrastructure: kernel live patching cannot handle complex changes (e.g., changes to persistent data structures); and VM live migration may incur unacceptably long delays when migrating millions of VMs in the whole cloud, for example, to deploy urgent security patches. […] hand over passthrough devices to the new KVM instance without losing any ongoing (DMA) operations. Our evaluation shows that Orthus can reduce the total migration time and downtime by more than 99% and 90%, respectively. We have deployed Orthus in one of the largest cloud infrastructures for a long time. It has become the most effective and indispensable tool in our daily maintenance of hundreds of thousands of servers and millions of VMs.

CCS Concepts • Security and privacy → Virtualization and security; • Computer systems organization → Availability.

Keywords: virtualization; live upgrade; cloud infrastructure.

ACM Reference Format:
  • The Xen Port of Kexec / Kdump: A Short Introduction and Status Report
The Xen Port of Kexec / Kdump: A short introduction and status report. Magnus Damm, Simon Horman. VA Linux Systems Japan K.K., www.valinux.co.jp/en/. Xen Summit, September 2006.

Outline: Introduction to Kexec (What is Kexec?, Kexec Examples, Kexec Overview); Introduction to Kdump (What is Kdump?, Kdump Kernels, The Crash Utility); Xen Porting Effort (Kexec under Xen, Kdump under Xen, The Dumpread Tool, Partial Dumps); Current Status.

What is Kexec? “kexec is a system call that implements the ability to shutdown your current kernel, and to start another kernel. It is like a reboot but it is independent of the system firmware...” — configuration help text in Linux 2.6.17. Kexec allows you to reboot from Linux into any kernel, as long as the new kernel doesn't depend on the BIOS for setup.
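The mechanism is easy to see from userspace. The sketch below stages a new kernel and jumps into it using the kexec_file_load(2) syscall — a newer interface (Linux 3.17+, CONFIG_KEXEC_FILE) than the kexec_load(2) call these 2006 slides describe via the kexec tool. The kernel/initrd paths and the command line are placeholders, not real file names.

/* kexec_sketch.c - stage a new kernel and reboot into it, skipping
 * the firmware/BIOS. Uses kexec_file_load(2), available since Linux
 * 3.17 on architectures with CONFIG_KEXEC_FILE; the era of these
 * slides used the older kexec_load(2) via the kexec tool.
 * Kernel/initrd paths and the command line below are placeholders.
 * Build: gcc kexec_sketch.c -o kexec_sketch; run as root. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/reboot.h>
#include <stdio.h>
#include <sys/reboot.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    int kernel_fd = open("/boot/vmlinuz-new", O_RDONLY);     /* placeholder path */
    int initrd_fd = open("/boot/initrd-new.img", O_RDONLY);  /* placeholder path */
    if (kernel_fd < 0 || initrd_fd < 0) { perror("open"); return 1; }

    static const char cmdline[] = "root=/dev/sda1 ro quiet"; /* placeholder */

    /* Stage the new kernel in memory; nothing reboots yet.
     * cmdline length must include the trailing NUL, hence sizeof. */
    if (syscall(SYS_kexec_file_load, kernel_fd, initrd_fd,
                sizeof(cmdline), cmdline, 0UL) < 0) {
        perror("kexec_file_load");
        return 1;
    }

    /* Jump straight into the staged kernel, bypassing the BIOS. */
    if (reboot(LINUX_REBOOT_CMD_KEXEC) < 0) {
        perror("reboot(LINUX_REBOOT_CMD_KEXEC)");
        return 1;
    }
    return 0; /* not reached on success */
}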
  • i.MX Linux® Reference Manual
i.MX Linux® Reference Manual. Document Number: IMXLXRM, Rev. 1, 01/2017. NXP Semiconductors.

Contents: Chapter 1, About this Book — 1.1 Audience (27); 1.1.1 Conventions (27); 1.1.2 Definitions, Acronyms, and Abbreviations (27). Chapter 2, Introduction — 2.1 Overview (31); 2.1.1 Software Base (31); 2.1.2 Features (31). Chapter 3, Machine-Specific Layer (MSL) — 3.1 Introduction (37); 3.2 Interrupts (Operation)
  • Proceedings of the Linux Symposium
    Proceedings of the Linux Symposium Volume One June 27th–30th, 2007 Ottawa, Ontario Canada Contents The Price of Safety: Evaluating IOMMU Performance 9 Ben-Yehuda, Xenidis, Mostrows, Rister, Bruemmer, Van Doorn Linux on Cell Broadband Engine status update 21 Arnd Bergmann Linux Kernel Debugging on Google-sized clusters 29 M. Bligh, M. Desnoyers, & R. Schultz Ltrace Internals 41 Rodrigo Rubira Branco Evaluating effects of cache memory compression on embedded systems 53 Anderson Briglia, Allan Bezerra, Leonid Moiseichuk, & Nitin Gupta ACPI in Linux – Myths vs. Reality 65 Len Brown Cool Hand Linux – Handheld Thermal Extensions 75 Len Brown Asynchronous System Calls 81 Zach Brown Frysk 1, Kernel 0? 87 Andrew Cagney Keeping Kernel Performance from Regressions 93 T. Chen, L. Ananiev, and A. Tikhonov Breaking the Chains—Using LinuxBIOS to Liberate Embedded x86 Processors 103 J. Crouse, M. Jones, & R. Minnich GANESHA, a multi-usage with large cache NFSv4 server 113 P. Deniel, T. Leibovici, & J.-C. Lafoucrière Why Virtualization Fragmentation Sucks 125 Justin M. Forbes A New Network File System is Born: Comparison of SMB2, CIFS, and NFS 131 Steven French Supporting the Allocation of Large Contiguous Regions of Memory 141 Mel Gorman Kernel Scalability—Expanding the Horizon Beyond Fine Grain Locks 153 Corey Gough, Suresh Siddha, & Ken Chen Kdump: Smarter, Easier, Trustier 167 Vivek Goyal Using KVM to run Xen guests without Xen 179 R.A. Harper, A.N. Aliguori & M.D. Day Djprobe—Kernel probing with the smallest overhead 189 M. Hiramatsu and S. Oshima Desktop integration of Bluetooth 201 Marcel Holtmann How virtualization makes power management different 205 Yu Ke Ptrace, Utrace, Uprobes: Lightweight, Dynamic Tracing of User Apps 215 J.
  • Container-Based Virtualization for Byte-Addressable NVM Data Storage
2016 IEEE International Conference on Big Data (Big Data). Container-Based Virtualization for Byte-Addressable NVM Data Storage. Ellis R. Giles, Rice University, Houston, Texas. [email protected]

Abstract—Container-based virtualization is rapidly growing in popularity for cloud deployments and applications as a virtualization alternative due to the ease of deployment coupled with high performance. Emerging byte-addressable, non-volatile memories, commonly called Storage Class Memory or SCM, technologies are promising both byte-addressability and persistence near DRAM speeds operating on the main memory bus. These new memory alternatives open up a new realm of applications that no longer have to rely on slow, block-based persistence, but can rather operate directly on persistent data using ordinary loads and stores through the cache hierarchy coupled with transaction techniques. However, SCM presents a new challenge for container-based applications, which typically access persistent data through layers of block-based file isolation.

Storage Class Memory, or SCM, is an exciting new memory technology with the potential of replacing hard drives and SSDs, as it offers high-speed, byte-addressable persistence on the main memory bus. Several technologies are currently under research and development, each with different performance, durability, and capacity characteristics. These include a ReRAM by Micron and Sony; a slower, but very large capacity, Phase Change Memory or PCM by Micron and others; and a fast, smaller spin-torque ST-MRAM by Everspin. High-speed, byte-addressable persistence will give rise to new applications that no longer have to rely on slow, block-based storage devices and to serialize data for …
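To make "ordinary loads and stores through the cache hierarchy" concrete, here is a minimal sketch assuming an SCM region exposed to the process as a DAX-mapped file (the /mnt/pmem path is hypothetical). The clflush-plus-fence pair is only the bare durability primitive; a real system would layer the transaction techniques the abstract mentions on top of it, and the intrinsics are x86-specific.

/* pmem_sketch.c - update persistent data in place with plain stores,
 * then make the update durable by flushing cache lines and fencing.
 * Assumes a byte-addressable SCM/NVM region exposed as a DAX-mapped
 * file; the path below is hypothetical. x86-only (clflush/sfence).
 * Build: gcc pmem_sketch.c -o pmem_sketch */
#include <emmintrin.h>   /* _mm_clflush, _mm_sfence */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define CACHELINE 64

int main(void)
{
    int fd = open("/mnt/pmem/data", O_RDWR);  /* hypothetical DAX mount */
    if (fd < 0) { perror("open"); return 1; }

    char *pmem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

    /* Ordinary store into persistent memory: no write() syscall,
     * no block I/O, no serialization step. */
    const char msg[] = "checkpointed state";
    memcpy(pmem, msg, sizeof(msg));

    /* Durability: flush every touched cache line, then fence so the
     * flushes are ordered before anything that depends on them. */
    for (size_t off = 0; off < sizeof(msg); off += CACHELINE)
        _mm_clflush(pmem + off);
    _mm_sfence();

    munmap(pmem, 4096);
    close(fd);
    return 0;
}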
  • Linux Performance Tools
Linux Performance Tools. Brendan Gregg, Senior Performance Architect, Performance Engineering Team. [email protected], @brendangregg.

This Tutorial • A tour of many Linux performance tools – To show you what can be done – With guidance for how to do it • This includes objectives, discussion, live demos – See the video of this tutorial. Tool types: Observability, Benchmarking, Tuning, Static.

• Massive AWS EC2 Linux cloud – 10s of thousands of cloud instances • FreeBSD for content delivery – ~33% of US Internet traffic at night • Over 50M subscribers – Recently launched in ANZ • Use Linux server tools as needed – After cloud monitoring (Atlas, etc.) and instance monitoring (Vector) tools.

Agenda • Methodologies • Tools • Tool Types: – Observability – Benchmarking – Tuning – Static • Profiling • Tracing.

Methodologies • Objectives: – Recognize the Streetlight Anti-Method – Perform the Workload Characterization Method – Perform the USE Method – Learn how to start with the questions, before using tools – Be aware of other methodologies. My system is slow… DEMO & DISCUSSION.

Methodologies • There are dozens of performance tools for Linux – Packages: sysstat, procps, coreutils, … – Commercial products • Methodologies can provide guidance for choosing and using tools effectively • A starting point, a process, and an ending point.

Anti-Methodologies • The lack of a deliberate methodology… Streetlight Anti-Method 1. Pick observability tools that are: – Familiar – Found on the Internet – Found at random 2. Run tools 3. Look for obvious issues. Drunk Man Anti-Method • Tune things at random until the problem goes away. Blame Someone Else Anti-Method 1. Find a system or environment component you are not responsible for 2. Hypothesize that the issue is with that component 3. Redirect the issue to the responsible team 4.