Kexec/Kdump Implementation in Linux Kernel and Xen Hypervisor: Usage, History and Current Developments

Total Pages: 16

File Type: PDF, Size: 1020 KB

kexec/kdump implementation in the Linux kernel and the Xen hypervisor: usage, history and current developments

Agenda
• What are kexec and kdump?
• History of kexec/kdump development in the Linux kernel and Xen.
• Current developments.
• Future plans.
• Resources.
• Questions and answers.

What are kexec and kdump?
• kexec and kdump are companions.
• kdump uses most of the code from kexec.
• kexec allows a machine restart without jumping into the BIOS.
• kdump eases kernel debugging after the running system has crashed.

What is kexec?
• kexec allows a machine restart without jumping into the BIOS.
• This greatly speeds up a system restart.
• kexec does not care about the current system state.
• It is the administrator's role to prepare the system for the restart.
• The BIOS does not reinitialize the devices.
• This can lead to an unpredictable state when devices were not properly shut down.
• Load the specified kernel and other modules (if the user requires it) into memory (kexec -l|--load <kernel-image> …).
• Jump into the initialization code, which finally calls the new kernel (kexec -e|--exec …).

What is kdump?
• kdump eases kernel debugging after the running system has crashed.
• It requires reserved memory (say 128 MB) that must not be touched.
• It ignores the current state of the machine and just tries to boot.
• Boot the system with the crashkernel option (Linux kernel or Xen hypervisor).
• Load the specified crash kernel into the memory previously reserved by the crashkernel option (kexec -p|--load-panic <kernel-image> …).
• "… the system will reboot into the dump-capture kernel if a system crash is triggered. Trigger points are located in panic(), die(), die_nmi() and in the sysrq handler (ALT-SysRq-c). …" (linux/Documentation/kdump/kdump.txt).

What is kdump? (continued)
• The new system running under the dump-capture kernel exposes a raw image of the crashed system's memory through the /dev/oldmem device interface.
• An ELF-format image of the crashed system's memory is available through /proc/vmcore (see the example after the last slide below).
• The ELF-format image is useful for GDB and the crash tool.
• "… Before analyzing the dump image, you should reboot into a stable kernel. …" (linux/Documentation/kdump/kdump.txt).
• kexec and kdump are excellent tools, albeit with some shortcomings.

What are kexec and kdump? The kexec/kdump infrastructure:
• kexec-tools - the userspace tools.
• The kexec/kdump implementation in the Linux kernel.
• The kexec/kdump implementation in Xen.

Details of the kexec load phase
• kexec -l|--load <kernel-image> …
• Compute a hash (SHA-256) of the loaded kernel.
• kexec calls sys_kexec_load().
• sys_kexec_load() loads the kernel into memory and establishes identity page tables for the kexec kernel.
• On the Xen hypervisor, the dom0 kernel calls the HYPERVISOR_kexec_op(KEXEC_CMD_kexec_load, …) hypercall, which loads the kernel into memory on behalf of the dom0 kernel.

Details of the kexec execution phase
• kexec -e|--exec …
• kexec calls sys_reboot() with the LINUX_REBOOT_CMD_KEXEC command argument, which later calls kernel_kexec() (a minimal userspace sketch follows this section).
• kernel_kexec() calls machine_kexec(), which prepares the arguments for relocate_kernel() and calls it.
• On the Xen hypervisor, the dom0 kernel does not call relocate_kernel() directly. It first calls the HYPERVISOR_kexec_op(KEXEC_CMD_kexec, …) hypercall, which later calls relocate_kernel().

Details of the kexec execution phase (continued)
• relocate_kernel() switches to the identity page tables, relocates the kernel and then jumps into the purgatory code.
• The purgatory code does some arch-specific setup (VGA reset, PIC init, backing up the first 640 KB of memory on the x86 platform if it is a panic kernel, …).
• It verifies the hash (SHA-256) of the loaded kernel established at the load phase and jumps into the kexec kernel.
• The new kernel executes as usual.
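The execution phase described above is driven from userspace through the reboot() system call. The program below is a minimal sketch, not part of the original slides, of what kexec -e effectively does once a kernel image has already been staged with kexec -l; the constants come from <linux/reboot.h>, and the program has to run as root (CAP_SYS_BOOT).

    /* Minimal sketch of the kexec execution trigger. Assumes a kernel has
     * already been loaded with "kexec -l" (or the kexec_load() syscall);
     * otherwise the reboot() call fails with EINVAL. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/reboot.h>

    int main(void)
    {
        /* Flush filesystems first: kexec does not shut devices down cleanly. */
        sync();

        /* LINUX_REBOOT_CMD_KEXEC makes the kernel run kernel_kexec(), which
         * ends in machine_kexec() -> relocate_kernel() -> purgatory. */
        if (syscall(SYS_reboot, LINUX_REBOOT_MAGIC1, LINUX_REBOOT_MAGIC2,
                    LINUX_REBOOT_CMD_KEXEC, NULL) < 0) {
            perror("reboot(LINUX_REBOOT_CMD_KEXEC)");
            return 1;
        }
        return 0; /* not reached when the jump into the new kernel succeeds */
    }

On success the call does not return: control passes through relocate_kernel() and the purgatory code straight into the previously loaded kernel, without going through the firmware.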
History of kexec/kdump development in the Linux kernel and Xen
• kexec/kdump was introduced in Linux kernel 2.6.13.
• Currently (Linux kernel 3.0) kexec/kdump support is available on eight platforms (ARM, IA-64, MIPS, PowerPC, S/390, SH, Tile, x86).
• kexec/kdump in Xen has been available since version 3.0.4.
• kexec/kdump under Xen dom0 is not available with the paravirt (pvops) kernel, but it is with the old-style Xenlinux patches.

Current developments
• The kexec/kdump implementation in the Linux kernel is quite stable.
• The kexec/kdump implementation in Xen is quite stable too.
• The Xenlinux kernels are receiving only fixes (if any).
• A kexec/kdump implementation for the pvops Linux kernel is under development now.
• This work is sponsored by Google under the Google Summer of Code 2011 program.

Current developments (continued)
• Development is based on the 2.6.32 Novell SLES11 Xen kexec/kdump changes.
• An initial implementation patch (dom0 only) will be posted shortly on xen-devel and LKML.
• It still requires a lot of cleanup.
• Currently there are more implementation questions than technical ones.
• kexec-tools also requires some modifications (to be posted shortly on [email protected], xen-devel and LKML).

Future plans
• kexec/kdump support for PV guest domains.
• kexec/kdump support for PV-on-HVM guest domains (some work has been done by Olaf Hering).
• kexec/kdump support for HVM guest domains - it works, but some cleanups would be required.

Future plans: automatic Xen/dom0 kernel crash recovery
• Xen or the dom0 kernel crashes (while guest domains were running).
• A new incarnation of the Xen hypervisor and the dom0 kernel is started.
• The guest domains are restarted from the point at which they were running when the Xen/dom0 kernel crash occurred.

Resources
• http://www.kernel.org/ - The Linux Kernel Archives
• ftp://ftp.kernel.org/pub/linux/utils/kernel/kexec/ - kexec-tools sources
• git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git - kexec-tools development tree
• The kexec man page
• Linux kernel source: linux/Documentation/kdump/
• http://www.xen.org/products/xenhyp.html - Xen Hypervisor
• http://xenbits.xen.org/ - Xen source repositories
• http://people.redhat.com/~anderson/ - the crash tool

Questions and answers?

Thank you for your attention.
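The "What is kdump?" slides above note that the dump-capture kernel exposes the crashed system's memory as an ELF core image through /proc/vmcore. As a small illustration, not part of the original slides, the sketch below only checks that /proc/vmcore looks like an ELF file before a tool such as cp, makedumpfile or crash is pointed at it; it assumes it runs inside a dump-capture kernel booted from the crashkernel reservation.

    /* Hedged sketch: sanity-check /proc/vmcore in the dump-capture kernel. */
    #include <elf.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char ident[EI_NIDENT];
        FILE *f = fopen("/proc/vmcore", "rb");

        if (!f) {
            perror("fopen /proc/vmcore");   /* not running in a capture kernel? */
            return 1;
        }
        if (fread(ident, 1, EI_NIDENT, f) != EI_NIDENT ||
            memcmp(ident, ELFMAG, SELFMAG) != 0) {
            fprintf(stderr, "/proc/vmcore does not look like an ELF image\n");
            fclose(f);
            return 1;
        }
        printf("ELF core image found: %s, %s endian\n",
               ident[EI_CLASS] == ELFCLASS64 ? "64-bit" : "32-bit",
               ident[EI_DATA] == ELFDATA2LSB ? "little" : "big");
        fclose(f);
        return 0;
    }

In practice distributions wrap this step in their kdump service, which copies or filters /proc/vmcore to persistent storage and then reboots into a stable kernel, as recommended in linux/Documentation/kdump/kdump.txt.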
Recommended publications
  • Oracle® Linux Administrator's Solutions Guide for Release 6
  • Kdump, a Kexec-Based Kernel Crash Dumping Mechanism
  • Linux Core Dumps
  • Coreboot - the Free Firmware
  • Supporting Security Sensitive Tenants in a Bare-Metal Cloud (arXiv:1907.06110v1 [cs.DC], 13 Jul 2019)
  • SUSE Linux Enterprise Server 15 SP2 AutoYaST Guide
  • SUSE Linux Enterprise Server 12 Does Not Provide the Repair System Anymore
  • Beyond Init: Systemd Linux Plumbers Conference 2010
  • Embedded Linux Conference Europe 2019
  • Unbreakable Enterprise Kernel Release Notes for Unbreakable Enterprise Kernel Release 3
  • Generating a White List for Hardware Which Works with Kexec/Kdump
  • SUSE Linux Enterprise Server 12 SP4 System Analysis and Tuning Guide