Performance Profiling in a Virtualized Environment


Jiaqing Du                Nipun Sehrawat             Willy Zwaenepoel
EPFL, Switzerland         IIT Guwahati, India        EPFL, Switzerland

Abstract

Virtualization is a key enabling technology for cloud computing. Many applications deployed in a cloud run in virtual machines. However, profilers based on CPU performance counters do not work well in a virtualized environment. In this paper, we explore the possibilities for achieving performance profiling in virtual machine monitors (VMMs) built on paravirtualization, hardware assistance, and binary translation. We present the design and implementation of performance profiling for a VMM based on the x86 hardware extensions, with some preliminary experimental results.

1 Introduction

Virtualization is a key enabling technology for cloud computing. Applications deployed in a virtualization-based cloud, e.g., Amazon’s EC2, run inside virtual machines, which are dynamically mapped onto a cluster of physical machines. By providing workload consolidation and migration, virtualization improves the resource utilization and the scalability of a cloud. In a virtualized environment, the virtual machine monitor (VMM) exports a set of virtual machines to the guest operating systems (hereafter referred to as guests) and manages accesses to the underlying hardware resources shared by multiple guests. In this way virtualization makes it possible to run multiple operating systems simultaneously on a single physical machine [11, 3, 8].

In modern processors, the performance monitoring unit (PMU) is indispensable for performance debugging of complex software systems. It is employed by software profilers to monitor the micro-architectural behavior of a program. Typical hardware events monitored include clock cycles, instruction retirements, cache misses, TLB misses, etc. As an example, Table 1 shows a typical output of a profiler. It presents the top five time-consuming functions of the whole system. Developers can refer to the output of a profiler to identify bottlenecks in a program and tune its performance. PMU-based performance profiling in a non-virtualized environment has been well studied. Mature tools built upon PMUs exist in almost every popular operating system [9, 6]. They are used extensively to tune software performance. However, this is not the case in a virtualized environment.

% CYCLE    Function             Module
98.5529    vmx_vcpu_run         kvm-intel.ko
 0.2226    (no symbols)         libc.so
 0.1034    hpet_cpuhp_notify    vmlinux
 0.1034    native_patch         vmlinux
 0.0557    (no symbols)         bash

Table 1: Top five time-consuming functions from profiling both the VMM and the guest

For applications executing in a public cloud, running a PMU-based profiler directly in a guest does not result in useful output, because, as far as we know, none of the current VMMs expose the PMU programming interfaces properly to a guest¹. As more and more applications are migrated to virtualization-based public clouds, it is necessary to add support for PMU-based profiling in virtual machines, without requiring access to the privileged VMM. Profiling results help customers of public clouds to understand the unique characteristics of their computing environment, identify performance bottlenecks, and fully exploit the hardware and software resources they pay for.

¹It is possible to obtain limited profiling results by running the profiler in the guest in a timer interrupt driven mode.

In a private cloud, it is possible to run a profiler directly in the VMM, but some of its intermediate output cannot be converted to meaningful final output without the cooperation of the guest.
The data in Table 1 result from profiling a computation-intensive application in the guest. The profiler runs in the VMM, and it monitors CPU cycles. The first row shows that the CPU spends more than 98% of its cycles in the function vmx_vcpu_run(), which switches the CPU to run guest code. As the design of the profiler does not consider virtualization, all the instructions consumed by the guest are accounted to this VMM function. Therefore, we cannot obtain detailed profiling data of the guest.

Currently, only XenOprof [10] supports detailed profiling of virtual machines running in Xen, a VMM based on paravirtualization. For VMMs based on hardware assistance and binary translation, no such tools exist. Enabling profiling in the VMM provides users of private clouds and developers of both types of clouds a full-scale view of the whole software stack and its interactions with the hardware. This will help them tune the performance of both the VMM and the applications running in the guest.

In this paper, we address the problem of performance profiling for three different virtualization techniques: paravirtualization, binary translation, and hardware assistance. We categorize profiling techniques in a virtualized environment into two types. Guest-wide profiling discloses the runtime characteristics of the guest kernel and its active applications. It only requires a profiler running in the guest, similar to native profiling, i.e., profiling in a non-virtualized environment. The VMM is responsible for virtualizing the PMU hardware, and changes introduced to the VMM are transparent to the guest. System-wide profiling discloses the runtime behavior of both the VMM and the active guests. It requires a profiler running in the VMM and provides a full-scale view of the whole software stack: both the VMM and the guest.

The contributions of this paper are as follows. (1) We analyze the challenges of achieving both guest-wide and system-wide profiling for each of the three virtualization techniques. Synchronous virtual interrupt delivery to the guest is necessary for guest-wide profiling. The ability to interpret samples belonging to a guest context into meaningful symbols is required for system-wide profiling. (2) We present profiling solutions for virtualization techniques based on hardware assistance and binary translation. (3) We implement both guest-wide and system-wide profiling for a VMM based on the x86 virtualization extensions and present some preliminary experimental results.

The rest of this paper is organized as follows. In Section 2 we review the structure of a conventional profiler. In Section 3, we analyze the difficulties of supporting guest-wide profiling in a virtualized environment, and present solutions for each of the three aforementioned virtualization techniques. We discuss system-wide profiling in Section 4. In Section 5 we present the implementation of both guest-wide and system-wide profiling for a VMM based on the x86 virtualization extensions and discuss some preliminary experimental results. In Section 6 we survey related work, and we conclude in Section 7.

2 Background

As a hardware component, a PMU consists of a set of performance counters, a set of event selectors, and the digital logic to increase a counter after a specified hardware event occurs. When a performance counter reaches a pre-defined threshold, the interrupt controller sends a counter overflow interrupt to the CPU.

Generally, a profiler consists of the following major components:

• Sampling configuration. The profiler registers itself as the counter overflow interrupt handler of the operating system, selects the monitored hardware events, and sets the number of events after which an interrupt should occur. It programs the PMU hardware directly by writing to its registers (see the first sketch after this list).

• Sample collection. When a counter overflow interrupt occurs, the profiler handles it synchronously. In the interrupt context, it records the program counter (PC), the virtual address space of the sampled PC value, and some additional system state.

• Sample interpretation. The profiler converts the sampled PC values into routine names of the profiled process by consulting its virtual memory layout and its binary file compiled with debugging information (see the second sketch after this list).
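As an illustration of the first two components, the following is a minimal sketch of cycle-based sampling using the Intel architectural performance-monitoring MSRs (IA32_PERFEVTSEL0 at 0x186, IA32_PMC0 at 0xC1, per the Intel SDM). The handler hookup and the sample buffer are simplified stand-ins rather than the interface of any particular profiler.

    #include <stddef.h>
    #include <stdint.h>

    #define IA32_PERFEVTSEL0  0x186   /* event-selector MSR, counter 0 */
    #define IA32_PMC0         0x0C1   /* performance-counter MSR 0 */

    /* Event-selector fields (Intel SDM): event 0x3C with umask 0x00
     * counts unhalted core cycles; USR/OS choose the privilege levels
     * counted; INT raises an interrupt on overflow; EN enables counting. */
    #define EVTSEL_CYCLES  0x3CULL
    #define EVTSEL_USR     (1ULL << 16)
    #define EVTSEL_OS      (1ULL << 17)
    #define EVTSEL_INT     (1ULL << 20)
    #define EVTSEL_EN      (1ULL << 22)

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        asm volatile("wrmsr" : : "c"(msr), "a"((uint32_t)val),
                     "d"((uint32_t)(val >> 32)));
    }

    /* Sampling configuration: arm counter 0 to overflow, and therefore
     * interrupt, after `period` cycles.  The counter counts upward, so
     * it is preloaded with the negated period. */
    static void arm_cycle_counter(uint64_t period)
    {
        wrmsr(IA32_PMC0, (uint64_t)-period);
        wrmsr(IA32_PERFEVTSEL0, EVTSEL_CYCLES | EVTSEL_USR |
                                EVTSEL_OS | EVTSEL_INT | EVTSEL_EN);
    }

    /* Sample collection: called from the counter-overflow interrupt with
     * the interrupted program counter and address-space id, both handed
     * in by a low-level interrupt stub. */
    struct sample { uint64_t pc; uint64_t asid; };
    static struct sample samples[1 << 16];
    static size_t nsamples;

    static void on_counter_overflow(uint64_t pc, uint64_t asid,
                                    uint64_t period)
    {
        if (nsamples < sizeof(samples) / sizeof(samples[0]))
            samples[nsamples++] = (struct sample){ .pc = pc, .asid = asid };
        arm_cycle_counter(period);   /* re-arm for the next sample */
    }

Because the counter counts upward, preloading it with the negated period makes it overflow, and thus raise the sampling interrupt, after exactly that many events.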
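Sample interpretation can likewise be pictured as a search over a table of routine start addresses recovered from the profiled binary's symbol and debugging information. The table format below is invented for illustration; real profilers parse ELF symbol tables or DWARF data.

    #include <stddef.h>
    #include <stdint.h>

    /* One entry per routine, recovered from the binary's symbol/debug
     * information and sorted by start address. */
    struct symbol {
        uint64_t    start;
        uint64_t    size;
        const char *name;
    };

    /* Map a sampled PC to the routine containing it: binary-search for
     * the last symbol starting at or below the PC. */
    static const char *interpret_pc(const struct symbol *syms, size_t n,
                                    uint64_t pc)
    {
        size_t lo = 0, hi = n;

        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (syms[mid].start <= pc)
                lo = mid + 1;
            else
                hi = mid;
        }
        if (lo == 0)
            return "(no symbols)";
        const struct symbol *s = &syms[lo - 1];
        return pc < s->start + s->size ? s->name : "(no symbols)";
    }

Counting hits per returned name and dividing by the total number of samples yields output in the shape of Table 1.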
Native profiling only involves one OS instance, where all three profiling components reside. They interact with each other through facilities provided by the OS. All of the exchanged information is located in the OS.

In a virtualized environment, multiple OS instances are involved, namely the VMM and the guest(s). The three components of a profiler may be spread among different instances, and their interactions may require communication between them. In addition, not all the conditions necessary for implementing these three components may be satisfied. We present a detailed discussion of implementing both guest-wide and system-wide profiling for each of the three virtualization techniques in the following two sections.

3 Guest-wide Profiling

Challenges. By definition, guest-wide profiling runs a profiler in the guest and only monitors the guest. Although more information about the whole software stack can be obtained by employing system-wide profiling, sometimes guest-wide profiling is the only way to do performance profiling and tuning in a virtualized environment. As we explained before, users of a public cloud service are not normally granted the privilege to run a profiler in the VMM, which is necessary for system-wide profiling.

To achieve guest-wide profiling, the VMM should provide facilities that enable the implementation of the three profiling components in a guest and PMU multiplexing, i.e., saving and restoring PMU registers (sketched below). Since sample interpretation under guest-wide profiling is the same as …
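A minimal sketch of the PMU multiplexing just mentioned, assuming the same architectural MSRs as above and a simplified two-counter PMU; a real VMM would also handle the global control and overflow-status MSRs.

    #include <stdint.h>

    #define IA32_PERFEVTSEL0 0x186
    #define IA32_PMC0        0x0C1
    #define NR_COUNTERS      2        /* simplified; real PMUs have more */

    /* Per-VCPU snapshot of the PMU programming. */
    struct pmu_state {
        uint64_t evtsel[NR_COUNTERS];
        uint64_t counter[NR_COUNTERS];
    };

    static inline uint64_t rdmsr(uint32_t msr)
    {
        uint32_t lo, hi;
        asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return ((uint64_t)hi << 32) | lo;
    }

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        asm volatile("wrmsr" : : "c"(msr), "a"((uint32_t)val),
                     "d"((uint32_t)(val >> 32)));
    }

    /* On VM exit: capture the guest's PMU programming.  The event
     * selectors and counters occupy consecutive MSR addresses. */
    static void pmu_save(struct pmu_state *s)
    {
        for (int i = 0; i < NR_COUNTERS; i++) {
            s->evtsel[i]  = rdmsr(IA32_PERFEVTSEL0 + i);
            s->counter[i] = rdmsr(IA32_PMC0 + i);
        }
    }

    /* On VM entry: reinstall it before guest code runs, restoring the
     * counter value first so it never ticks under a stale selector. */
    static void pmu_restore(const struct pmu_state *s)
    {
        for (int i = 0; i < NR_COUNTERS; i++) {
            wrmsr(IA32_PMC0 + i, s->counter[i]);
            wrmsr(IA32_PERFEVTSEL0 + i, s->evtsel[i]);
        }
    }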
Hardware assistance. … by setting a field of the control structure. Event delivery to a guest is synchronous, so the guest profiler samples correct system states. We present our implementation of guest-wide profiling based on the x86 hardware extensions in Section 5.1.
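As one concrete reading of "setting a field of the control structure" on Intel VMX: the VMM can have the guest receive the overflow interrupt at the next VM entry by writing the VM-entry interruption-information field of the VMCS. The field encoding and layout below follow the Intel SDM; vmcs_write32() is a hypothetical wrapper around the VMWRITE instruction, not the interface of any particular VMM.

    #include <stdint.h>

    /* VMCS encoding of the VM-entry interruption-information field
     * (Intel SDM, Appendix B). */
    #define VM_ENTRY_INTR_INFO  0x00004016u

    /* Field layout: bits 7:0 vector, bits 10:8 event type
     * (0 = external interrupt), bit 31 valid. */
    #define INTR_TYPE_EXT_INTR  (0u << 8)
    #define INTR_INFO_VALID     (1u << 31)

    /* Hypothetical VMWRITE wrapper; a real VMM adds error checking. */
    static inline void vmcs_write32(uint64_t field, uint32_t value)
    {
        uint64_t v = value;
        asm volatile("vmwrite %0, %1" : : "r"(v), "r"(field) : "cc");
    }

    /* Queue the counter-overflow vector for synchronous delivery to
     * the guest at the next VM entry. */
    static void inject_overflow_interrupt(uint8_t vector)
    {
        vmcs_write32(VM_ENTRY_INTR_INFO,
                     (uint32_t)vector | INTR_TYPE_EXT_INTR | INTR_INFO_VALID);
    }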
Binary translation. For VMMs based on binary translation, synchronous interrupt delivery is also required. Another obstacle is that the sampled PC values point to addresses in the translation cache, not to the memory addresses holding the original instructions.
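One way to picture how such samples could be attributed back to guest code is a reverse map from translation-cache ranges to the guest addresses of the source blocks, consulted during sample interpretation. The data structure and names are hypothetical, not those of any particular binary translator.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* One translated code block: where its output lives in the
     * translation cache and which guest PC the source came from. */
    struct tc_block {
        uint64_t tc_start;    /* start address in the translation cache */
        uint64_t tc_len;
        uint64_t guest_pc;    /* guest address of the source block */
    };

    /* Blocks sorted by tc_start, maintained by the translator.
     * Returns true and the owning guest PC if `sampled_pc` falls
     * inside the translation cache. */
    static bool map_sample_pc(const struct tc_block *blocks, size_t n,
                              uint64_t sampled_pc, uint64_t *guest_pc)
    {
        size_t lo = 0, hi = n;

        while (lo < hi) {                 /* binary search */
            size_t mid = lo + (hi - lo) / 2;
            if (blocks[mid].tc_start <= sampled_pc)
                lo = mid + 1;
            else
                hi = mid;
        }
        if (lo == 0)
            return false;                 /* PC not in the cache */
        const struct tc_block *b = &blocks[lo - 1];
        if (sampled_pc >= b->tc_start + b->tc_len)
            return false;
        *guest_pc = b->guest_pc;
        return true;
    }

Attribution here is per translated block; per-instruction accuracy would require the translator to record a finer-grained mapping.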