Younge Phd-Thesis.Pdf

Total Pages: 16

File Type: PDF, Size: 1020 KB

ARCHITECTURAL PRINCIPLES AND EXPERIMENTATION OF DISTRIBUTED HIGH PERFORMANCE VIRTUAL CLUSTERS

by Andrew J. Younge

Submitted to the faculty of Indiana University in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Computer Science, Indiana University, October 2016. Copyright © 2016 Andrew J. Younge. All Rights Reserved.

INDIANA UNIVERSITY GRADUATE COMMITTEE APPROVAL of a dissertation submitted by Andrew J. Younge. This dissertation has been read by each member of the following graduate committee and by majority vote has been found to be satisfactory.

Date  Geoffrey C. Fox, Ph.D., Chair
Date  Judy Qiu, Ph.D.
Date  Thomas Sterling, Ph.D.
Date  D. Martin Swany, Ph.D.

ABSTRACT

ARCHITECTURAL PRINCIPLES AND EXPERIMENTATION OF DISTRIBUTED HIGH PERFORMANCE VIRTUAL CLUSTERS

Andrew J. Younge, Department of Computer Science, Doctor of Philosophy

With the advent of virtualization and Infrastructure-as-a-Service (IaaS), the broader scientific computing community is considering the use of clouds for its scientific computing needs. This is due to the relative scalability, ease of use, advanced user-environment customization, and the many novel computing paradigms available for data-intensive applications. However, a notable performance gap exists between IaaS and typical high performance computing (HPC) resources. This gap has limited the applicability of IaaS for many potential users: not only those who look to leverage the benefits of virtualization with traditional scientific computing applications, but also the growing number of big data scientists whose platforms are unable to build on HPC's advanced hardware resources.

Concurrently, we are at the forefront of a convergence in infrastructure between Big Data and HPC, the implications of which suggest that a unified distributed computing architecture could provide computing and storage capabilities for both of these differing distributed systems use cases. This dissertation proposes such an endeavor: it takes the performance and advanced hardware of the HPC community and provides it in a virtualized infrastructure using High Performance Virtual Clusters. This will not only enable a more diverse user environment within supercomputing applications, but also bring increased performance and capabilities to big data platform services.

The project begins with an evaluation of current hypervisors and their viability to run HPC workloads within current infrastructure, which helps define existing performance gaps. Next, mechanisms are developed to enable the use of specialized hardware available in many HPC resources, including advanced accelerators such as NVIDIA GPUs and high-speed, low-latency InfiniBand interconnects. The virtualized infrastructure that was developed, which leverages such specialized HPC hardware and applies virtualization best practices using KVM, supports advanced scientific computations common in today's HPC systems. Specifically, we find that example molecular dynamics simulations can run at near-native performance, with only a 1-2% overhead in our virtual cluster. These advances are incorporated into a framework for constructing distributed virtual clusters using the OpenStack cloud infrastructure. With high performance virtual clusters, we look to support a broad range of scientific computing challenges, from HPC simulations to big data analytics, with a single, unified infrastructure.
ABSTRACT APPROVAL

As members of the candidate's graduate committee, we have read the dissertation abstract of Andrew J. Younge and found it in a form satisfactory to the graduate committee.

Date  Geoffrey C. Fox, Ph.D., Chair
Date  Judy Qiu, Ph.D.
Date  Thomas Sterling, Ph.D.
Date  D. Martin Swany, Ph.D.

ACKNOWLEDGMENTS

I am grateful to many, many people for making this dissertation possible.

First, I want to express my sincere gratitude and admiration to my advisor, Geoffrey Fox. Geoffrey has provided invaluable advice, knowledge, insight, and direction in a way that made this dissertation possible. It has been a distinct honor and privilege working with him, something which I will carry with me throughout the rest of my career.

I would like to thank my committee members, Judy Qiu, Thomas Sterling, and Martin Swany. The guidance received in the formation of this dissertation has been of great help and a fruitful learning experience. Conversations with each member throughout the dissertation writing have proven exceptionally insightful, and I have learned a great deal working with each of them. I also want to explicitly thank Judy Qiu for giving me the opportunity to help organize and lecture in the Distributed Systems and Cloud Computing courses over the years. This teaching experience has taught me how to be both a better lecturer and mentor.

Over the years I have had the pleasure of working with a number of members of the Digital Science Center lab and others throughout Indiana University. I would like to thank Javier Diaz-Montes, Thilina Ekanayaka, Saliya Ekanayaka, Xiaoming Gao, Robert Henschel, Supun Kamburugamuve, Nate Keith, Gary Miksik, Jerome Mitchell, Julie Overfield, Greg Pike, Allan Streib, Koji Tanaka, Gregor von Laszewski, Fugang Wang, Stephen Wu, and Yudou Zhou for their support over the years. Many of these individuals have helped me directly throughout my tenure at Indiana University, and their efforts and encouragement are greatly appreciated.

I would like to express my gratitude to the APEX research group at the University of Southern California Information Sciences Institute, where I was a visiting researcher and intern in 2012 and 2013. In particular, I would like to give special thanks to John Paul Walters at USC/ISI, who worked directly with me on many of the research endeavors described within this dissertation. He has provided hands-on mentorship that aided with the basis of this dissertation, and his leadership is one I look to emulate in the future. I would also like to thank the Persistent Systems Fellowship through the School of Informatics and Computing for providing me with a fellowship to fund the last three years of my PhD.

I would also like to give thanks to the many close friends and roommates in Bloomington, Indiana over the years. Specifically, I'd like to thank Ahmed El-Hassany, Jason Hemann, Matthew Jaffee, Aubrey Kelly, Haley MacLeod, Brent McGill, Andrea Sorensen, Dane Sorensen, Larice Thoroughgood, and Miao Zhang. I'd like to make a special acknowledgement of my friends in the Bloomington cycling community, many of whom have become my Bloomington pseudo-family. This includes John Brooks, Michael Wallis Jr, Kasey Poracky, Brian Gessler, Niki Gessler, Davy McDonald, Jess Bare, Ian Yarbrough, Kelsey Thetonia, Rob Slover, Pete Stuttgen, Mike Fox, Tom Schwoegler, and the Chi Omega Little 500 cycling team.
In this, I specifically need to acknowledge two wonderful people, Larice Thoroughgood and John Brooks, for their support, patience, and forbearance during the final stages of my Ph.D. Larice has provided the love and encouragement necessary to see this dissertation through to the end. John has donated countless hours of his time helping me edit and revise this manuscript, along with many after-hours philosophical discussions that proved crucial to the work at hand. Without all of these wonderful people in my life here in Bloomington, this dissertation would have likely been completed many months earlier.

Last but not least, I want to thank my parents Mary and Wayne Younge, my sisters Bethany Younge and Elizabeth Keller, and my extended family for their unwavering love and support over the years. Words cannot fully express the gratitude and love I have for you all.

Contents

Table of Contents
List of Figures

1 Introduction
  1.1 Overview
  1.2 Research Statement
  1.3 Research Challenges
  1.4 Outline

2 Related Research
  2.1 Virtualization
    2.1.1 Hypervisors and Containers
  2.2 Cloud Computing
    2.2.1 Infrastructure-as-a-Service
    2.2.2 Virtual Clusters
    2.2.3 The FutureGrid Project
  2.3 High Performance Computing
    2.3.1 Brief History of Supercomputing
    2.3.2 Distributed Memory Computation
    2.3.3 Exascale

3 Analysis of Virtualization Technologies for High Performance Computing Environments
  3.1 Abstract
  3.2 Introduction
  3.3 Related Research
  3.4 Feature Comparison
    3.4.1 Usability
  3.5 Experimental Design
    3.5.1 The FutureGrid Project
    3.5.2 Experimental Environment
    3.5.3 Benchmarking Setup
  3.6 Performance Comparison
  3.7 Discussion

4 Evaluating GPU Passthrough in Xen for High Performance Cloud Computing
  4.1 Abstract
  4.2 Introduction
  4.3 Virtual GPU Directions
    4.3.1 Front-end Remote API invocation
    4.3.2 Back-end PCI passthrough
  4.4 Implementation
Recommended publications
  • A Case for High Performance Computing with Virtual Machines
    A Case for High Performance Computing with Virtual Machines. Wei Huang†, Jiuxing Liu‡, Bulent Abali‡, Dhabaleswar K. Panda†. †Computer Science and Engineering, The Ohio State University, Columbus, OH 43210, {huanwei, [email protected]}. ‡IBM T. J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532, {jl, [email protected]}.
    ABSTRACT: Virtual machine (VM) technologies are experiencing a resurgence in both industry and research communities. VMs offer many desirable features such as security, ease of management, OS customization, performance isolation, checkpointing, and migration, which can be very beneficial to the performance and the manageability of high performance computing (HPC) applications. However, very few HPC applications are currently running in a virtualized environment due to the performance overhead of virtualization. Further, using VMs for HPC also introduces additional challenges such as management and distribution of OS images. In this paper we present a case for HPC with virtual machines...
    (From the introduction:) ...in the 1960s [9], but are experiencing a resurgence in both industry and research communities. A VM environment provides virtualized hardware interfaces to VMs through a Virtual Machine Monitor (VMM), also called a hypervisor. VM technologies allow running different guest VMs in a physical box, with each guest VM possibly running a different guest operating system. They can also provide secure and portable environments to meet the demanding requirements of computing resources in modern computing systems. Recently, network interconnects such as InfiniBand [16], Myrinet [24] and Quadrics [31] are emerging, which provide very low latency (less than 5 µs) and very high bandwidth (multiple Gbps).
  • Status Update on PCI Express Support in QEMU
    Status Update on PCI Express Support in QEMU. Isaku Yamahata, VA Linux Systems Japan K.K. <[email protected]>. Xen Summit North America, April 28, 2010.
    Agenda ● Introduction ● Usage and Example ● Implementation Details ● Future Work ● Considerations on further development issues
    Introduction: PCI Express native hotplug, the electro-mechanical interlock (EMI), and slot numbering (figures from http://en.wikipedia.org/wiki/PCI_Express and http://docs.hp.com/).
    Eventual goal (diagram): with native passthrough of a PCI Express device hierarchy (root port, upstream port, downstream ports) to a DomU, and with native hot plug support, an error signaled on the hardware PCI Express bus via a PCI Express error message is caught by the Xen VMM, delivered by interrupt to qemu-dm in Dom0, and injected up a virtual PCIe bus into the DomU guest.
    Eventual goal ● More PCI features / PCI Express features: the current emulated chipset (i440FX/PIIX3) is too old, so a new chipset emulator is wanted. ● Xen PCI Express support: PCI Express native hotplug and PCI Express native passthrough; when an error is detected via AER (Advanced Error Reporting), inject the error into the guest. ● These require several steps, so the first step is...
    First Phase Goal ● Make QEMU PCI Express ready: introduce a new chipset emulator (Q35) ● PCI Express native hot plug ● Implement PCI Express port emulators, and make it possible to inject errors into the guest.
    Current status of QEMU/PCI Express: PCIe MMCONFIG merged; Q35 chipset base working; PCIe port emulator working; PCIe native hotplug working; PCIe AER WIP; PCIe error injection WIP; VBE paravirtualization working. Guest firmware (SeaBIOS): mcfg working; e820 working; host bridge initialization working; PCI I/O and memory space initialization working; passing the ACPI table from outside QEMU working; vgabios VBE paravirtualization working. The QEMU/guest firmware enhancement is almost done; the next step is the QEMU upstream merge.
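    The Q35 work described above is what, in modern QEMU, gives a guest a real PCI Express root complex with hot-pluggable root ports. As a rough illustration only (not taken from the talk), the Python sketch below assembles such a command line; the memory size, port IDs, and guest image name are placeholder values.

        # Build a QEMU command line using the Q35 chipset and a PCIe root port,
        # the features this status update was working toward. The disk image,
        # port IDs, and memory size are placeholders.
        import shlex
        import subprocess
        import sys

        qemu_cmd = [
            "qemu-system-x86_64",
            "-machine", "q35",                    # Q35/ICH9 chipset with a PCIe root complex
            "-m", "2048",
            "-device", "pcie-root-port,id=rp1,chassis=1,slot=1",  # hot-pluggable PCIe slot
            "-drive", "file=guest.qcow2,if=virtio",               # placeholder guest image
        ]

        if "--launch" in sys.argv:
            subprocess.run(qemu_cmd, check=True)  # actually boot the guest
        else:
            print(shlex.join(qemu_cmd))           # just show the command line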
  • CS3210: Booting and x86
    CS3210: Booting and x86. Taesoo Kim.
    What is an operating system? • e.g. OSX, Windows, Linux, FreeBSD, etc. • What does an OS do for you? • Abstract the hardware for convenience and portability • Multiplex the hardware among multiple applications • Isolate applications to contain bugs • Allow sharing among applications
    Examples: Intel i386; IBM T42. Abstract model (figure from Wikipedia).
    Abstract model: CPU, Memory, and I/O • CPU: execute instructions, IP → next IP • Memory: read/write, address → data • I/O: talk to the external world, memory-mapped I/O or port I/O (I/O: input and output, IP: instruction pointer)
    Today: Bootstrapping • CPU → what's the first instruction? • Memory → what's the initial code/data? • I/O → whom to talk to?
    What happens after power on? • High level: Firmware → Bootloader → OS kernel • e.g., jos: BIOS → boot/* → kern/* • e.g., xv6: BIOS → bootblock → kernel • e.g., Linux: BIOS/UEFI → LILO/GRUB/syslinux → vmlinuz • Why three steps? • What are the handover protocols?
    BIOS: Basic Input/Output System • QEMU uses an open-source BIOS, called SeaBIOS • e.g., try running qemu (with no arguments)
    From power-on to BIOS in x86 (miniboot) • Set IP → 4 GB - 16 B (0xfffffff0) • e.g., 80286: 1 MB - 16 B (0xffff0) • e.g., SPARC v8: 0x00 (reset vector) • DEMO: x86 initial state on QEMU
    The first instruction • To understand it, we first need to understand: 1. x86 state (e.g., registers) 2. the memory referencing model (e.g., segmentation) 3. BIOS features (e.g., memory aliasing) • (gdb) x/1i 0xfffffff0 → 0xfffffff0: ljmp $0xf000,$0xe05b
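    The Firmware → Bootloader handover in these slides has a concrete, checkable artifact: the BIOS copies the first 512-byte sector of the boot disk to address 0x7C00 and only jumps to it if the sector ends with the 0x55 0xAA signature. The short Python sketch below (not part of the lecture material; the image path is a placeholder) performs that signature check on a disk image.

        # Check whether a raw disk image has a BIOS-bootable first sector:
        # the BIOS loads sector 0 (512 bytes) to 0x7C00 and requires the
        # last two bytes to be the 0x55 0xAA boot signature.
        import sys

        def check_boot_sector(path: str) -> None:
            with open(path, "rb") as img:
                sector = img.read(512)
            if len(sector) < 512:
                print(f"{path}: image is smaller than one sector")
                return
            if sector[510:512] == b"\x55\xaa":
                print(f"{path}: valid boot signature; BIOS would jump to it at 0x7C00")
            else:
                print(f"{path}: missing 0x55AA signature (found {sector[510:512].hex()})")

        if __name__ == "__main__":
            check_boot_sector(sys.argv[1] if len(sys.argv) > 1 else "disk.img")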
  • Exploring Massive Parallel Computation with GPU
    Exploring Massive Parallel Computation with GPU. Ian Bond, Massey University, Auckland, New Zealand. 2011 Sagan Exoplanet Workshop, Pasadena, July 25-29, 2011.
    Assumptions/Purpose: You are all involved in microlensing modelling and you have (or are working on) your own code. This lecture shows how to get started on getting code to run on a GPU; then it's over to you.
    Outline: 1. Need for parallelism 2. Graphical Processor Units 3. Gravitational Microlensing Modelling
    Parallel Computing: Parallel computing is the use of multiple computers, or computers with multiple internal processors, to solve a problem at a greater computational speed than using a single computer (Wilkinson 2002). How does one achieve parallelism?
    Grand Challenge Problems: A grand challenge problem is one that cannot be solved in a reasonable amount of time with today's computers. Examples: modelling large DNA structures, global weather forecasting, the N-body problem (N very large), brain simulation. Has microlensing ...
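    The pattern the lecture builds on is that each point of a microlensing lightcurve can be evaluated independently, so the work maps naturally onto many parallel processors. As a rough CPU-side illustration only (not code from the lecture), the Python sketch below evaluates the standard point-lens magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)) over many separations using one worker process per core; a GPU version would assign each point to a GPU thread instead.

        # Evaluate a per-point magnification over many lens-source separations
        # in parallel; every point is independent, which is exactly the kind of
        # workload that moves well onto a GPU.
        import math
        from multiprocessing import Pool

        def point_lens_magnification(u: float) -> float:
            # Standard point-source, point-lens magnification A(u), where u is
            # the lens-source separation in units of the Einstein radius.
            return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

        if __name__ == "__main__":
            separations = [0.05 + 0.001 * i for i in range(100_000)]  # placeholder sampling
            with Pool() as pool:  # one worker process per CPU core
                magnifications = pool.map(point_lens_magnification, separations)
            print(f"computed {len(magnifications)} magnifications in parallel")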
  • Linux Virtualization Update
    Linux Virtualization Update. Chris Wright <[email protected]>. Japan Linux Symposium, November 2007.
    Intro: virtualization mini-summit, paravirtualization, full virtualization, hardware changes, libvirt, Xen.
    Virtualization Mini-summit: June 25-27, 2007, just before OLS in Ottawa. 18 attendees ● Xen, VMware, KVM, lguest, UML, LinuxOnLinux ● x86, ia64, PPC and S390. Focused primarily on Linux as a guest and on areas of cooperation ● paravirt_ops and virtio. Common interfaces ● Not the best group to design or discuss management interfaces ● Defer to libvirt, CIM, etc. ● CPUID 0x4000_00xx for hypervisor feature detection ● Can we get to a common ABI for a paravirt hybrid guest?
    Virtualization Mini-summit, paravirt_ops ● Make use of existing abstractions wherever possible (clocksource, clockevents or irqchip) ● Could use a common lib/x86_emulate.c ● Open question: performance benefit of shadow vs. direct paging? Distro issues ● Lack of feature parity between bare metal and Xen is difficult for distros ● Single binary kernel image ● Merge upstream. Performance ● NUMA awareness lacking in Xen; difficult for Altix ● Static NUMA representation doesn't map well to a dynamic virtualization environment ● Cooperative memory management: guest memory hints.
    Virtualization Mini-summit, hardware ● x86 and ia64 hardware virtualization roadmap ● PPC virtualization is gaining in the embedded market, with realtime requirements ● S390 "has an instruction for that". Virtio ● Separate the driver from the transport ● Makes the driver small, looks like a Linux driver, and is reusable ● Hypervisor specific ...
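    On the CPUID point above: hypervisors advertise themselves and their paravirtual features through CPUID leaves starting at 0x4000_0000, and the basic "am I virtualized?" bit (CPUID leaf 1, ECX bit 31) is surfaced by the Linux kernel as a "hypervisor" flag in /proc/cpuinfo. The Python sketch below (an illustration, not from the talk) does that simpler guest-side check.

        # Report whether this Linux system sees the "hypervisor" CPU flag,
        # which the kernel sets from CPUID leaf 1 (ECX bit 31) when running
        # under a hypervisor.
        def running_under_hypervisor() -> bool:
            try:
                with open("/proc/cpuinfo") as cpuinfo:
                    for line in cpuinfo:
                        if line.startswith("flags"):
                            return "hypervisor" in line.split()
            except OSError:
                pass  # no /proc/cpuinfo (not Linux); treat as bare metal/unknown
            return False

        if __name__ == "__main__":
            if running_under_hypervisor():
                print("running under a hypervisor")
            else:
                print("no hypervisor flag detected (bare metal, or flag masked)")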
  • dCUDA: Hardware Supported Overlap of Computation and Communication
    dCUDA: Hardware Supported Overlap of Computation and Communication. Tobias Gysi, Jeremia Bär, Torsten Hoefler; Department of Computer Science, ETH Zurich; [email protected], [email protected], [email protected].
    Abstract—Over the last decade, CUDA and the underlying GPU hardware architecture have continuously gained popularity in various high-performance computing application domains such as climate modeling, computational chemistry, or machine learning. Despite this popularity, we lack a single coherent programming model for GPU clusters. We therefore introduce the dCUDA programming model, which implements device-side remote memory access with target notification. To hide instruction pipeline latencies, CUDA programs over-decompose the problem and over-subscribe the device by running many more threads than there are hardware execution units. Whenever a thread stalls, the hardware scheduler immediately proceeds with the execution of another thread ready for execution. This latency hiding technique is key to make best use of the available hardware.
    (From the introduction:) ...utilization of the costly compute and network hardware. To mitigate this problem, application developers can implement manual overlap of computation and communication [23], [27]. In particular, there exist various approaches [13], [22] to overlap the communication with the computation on an inner domain that has no inter-node data dependencies. However, these code transformations significantly increase code complexity, which results in reduced real-world applicability. High-performance system design often involves trading off sequential performance against parallel throughput. The architectural difference between host and device processors perfectly showcases the two extremes of this design space.
  • Novel Memory and I/O Virtualization Techniques for Next Generation Data-Centers Based on Disaggregated Hardware (Maciej Bielski)
    Novel memory and I/O virtualization techniques for next generation data-centers based on disaggregated hardware. Maciej Bielski. Hardware Architecture [cs.AR]. Université Paris-Saclay (COmUE), 2019. English. NNT: 2019SACLT022. HAL Id: tel-02464021, https://pastel.archives-ouvertes.fr/tel-02464021, submitted on 2 Feb 2020.
    HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
    French title page: Nouvelles techniques de virtualisation de la mémoire et des entrées-sorties vers les périphériques pour les prochaines générations de centres de traitement de données basés sur des équipements répartis déstructurés. NNT: 2019SACLT022. Doctoral thesis of Université Paris-Saclay, prepared at Télécom ParisTech, doctoral school n°580 Sciences et Technologies de l'Information et de la Communication (STIC), doctoral specialty: computer science. Thesis presented and defended at Biot Sophia Antipolis on 18/03/2019,
  • On the Virtualization of CUDA Based GPU Remoting on ARM and x86 Machines in the GVirtuS Framework
    On the Virtualization of CUDA Based GPU Remoting on ARM and X86 Machines in the GVirtuS Framework. Montella, R., Giunta, G., Laccetti, G., Lapegna, M., Palmieri, C., Ferraro, C., Pelliccia, V., Hong, C-H., Spence, I., & Nikolopoulos, D. (2017). International Journal of Parallel Programming, 45(5), 1142-1163. https://doi.org/10.1007/s10766-016-0462-1
    Published in: International Journal of Parallel Programming. Document Version: Peer reviewed version. Queen's University Belfast Research Portal: link to publication record in the Queen's University Belfast Research Portal.
    Publisher rights: © 2016 Springer Verlag. The final publication is available at Springer via http://dx.doi.org/10.1007/s10766-016-0462-1
    General rights: Copyright for the publications made accessible via the Queen's University Belfast Research Portal is retained by the author(s) and/or other copyright owners, and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated with these rights.
    Take down policy: The Research Portal is Queen's institutional repository that provides access to Queen's research output. Every effort has been made to ensure that content in the Research Portal does not infringe any person's rights, or applicable UK laws. If you discover content in the Research Portal that you believe breaches copyright or violates any law, please contact [email protected].
    On the virtualization of CUDA based GPU remoting on ARM and X86 machines in the GVirtuS framework. Raffaele Montella, Giulio Giunta, Giuliano Laccetti, Marco Lapegna, Carlo Palmieri, Carmine Ferraro, Valentina Pelliccia, Cheol-Ho Hong, Ivor Spence, Dimitrios S.
  • Ubuntu Server Guide: Basic Installation, Preparing to Install
    Ubuntu Server Guide. Welcome to the Ubuntu Server Guide! This site includes information on using Ubuntu Server for the latest LTS release, Ubuntu 20.04 LTS (Focal Fossa). For an offline version as well as versions for previous releases, see below.
    Improving the Documentation: If you find any errors or have suggestions for improvements to pages, please use the link at the bottom of each topic titled "Help improve this document in the forum." This link will take you to the Server Discourse forum for the specific page you are viewing. There you can share your comments or let us know about bugs with any page.
    PDFs and Previous Releases: Below are links to the previous Ubuntu Server release server guides as well as an offline copy of the current version of this site: Ubuntu 20.04 LTS (Focal Fossa): PDF; Ubuntu 18.04 LTS (Bionic Beaver): Web and PDF; Ubuntu 16.04 LTS (Xenial Xerus): Web and PDF.
    Support: There are a couple of different ways that the Ubuntu Server edition is supported: commercial support and community support. The main commercial support (and development funding) is available from Canonical, Ltd. They supply reasonably-priced support contracts on a per-desktop or per-server basis. For more information see the Ubuntu Advantage page. Community support is also provided by dedicated individuals and companies that wish to make Ubuntu the best distribution possible. Support is provided through multiple mailing lists, IRC channels, forums, blogs, wikis, etc. The large amount of information available can be overwhelming, but a good search engine query can usually provide an answer to your questions.
  • KShot: Live Kernel Patching with SMM and SGX
    KShot: Live Kernel Patching with SMM and SGX. Lei Zhou*†, Fengwei Zhang*, Jinghui Liao‡, Zhengyu Ning*, Jidong Xiao§, Kevin Leach¶, Westley Weimer¶ and Guojun Wang‖.
    *Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China, {zhoul2019, zhangfw, ningzy2019}@sustech.edu.cn. †School of Computer Science and Engineering, Central South University, Changsha, China. ‡Department of Computer Science, Wayne State University, Detroit, USA, [email protected]. §Department of Computer Science, Boise State University, Boise, USA, [email protected]. ¶Department of Computer Science and Engineering, University of Michigan, Ann Arbor, USA, {kjleach, weimerw}@umich.edu. ‖School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou, China, [email protected].
    Abstract—Live kernel patching is an increasingly common trend in operating system distributions, enabling dynamic updates to include new features or to fix vulnerabilities without having to reboot the system. Patching the kernel at runtime lowers downtime and reduces the loss of useful state from running applications. However, existing kernel live patching techniques (1) rely on specific support from the target operating system, and (2) admit patch failures resulting from kernel faults. We present KSHOT, a kernel live patching mechanism based on ...
    (From the introduction:) ...kernel vulnerabilities also merit patching. Organizations often use rolling upgrades [3], [6], in which patches are designed to affect small subsystems that minimize unplanned whole-system downtime, to update and patch whole server systems. However, rolling upgrades do not altogether obviate the need to restart software or reboot systems; instead, dynamic hot patching (live patching) approaches [7]–[9] aim to apply patches to running software without having to restart it.
  • QEMU Version 4.2.0 User Documentation
    QEMU version 4.2.0 User Documentation. Table of Contents:
    1 Introduction
      1.1 Features
    2 QEMU PC System emulator
      2.1 Introduction
      2.2 Quick Start
      2.3 Invocation
        2.3.1 Standard options
        2.3.2 Block device options
        2.3.3 USB options
        2.3.4 Display options
        2.3.5 i386 target only
        2.3.6 Network options
        2.3.7 Character device options
        2.3.8 Bluetooth(R) options
        2.3.9 TPM device options
        2.3.10 Linux/Multiboot boot specific
        2.3.11 Debug/Expert options
        2.3.12 Generic object creation
        2.3.13 Device URL Syntax
      2.4 Keys in the graphical frontends
      2.5 Keys in the character backend multiplexer
      2.6 QEMU Monitor
        2.6.1 Commands
  • Unprivileged GPU Containers on a LXD Cluster
    Unprivileged GPU containers on a LXD cluster: GPU-enabled system containers at scale. Stéphane Graber, LXD project leader (@stgraber, https://stgraber.org, [email protected]) and Christian Brauner, LXD maintainer (@brau_ner, https://brauner.io, [email protected]).
    What are system containers? 01 They are the oldest type of containers: BSD jails, Linux vServer, Solaris Zones, OpenVZ, LXC and LXD. 02 They behave like standalone systems: no need for specialized software or custom images. 03 No virtualization overhead: they are containers after all.
    LXD system architecture: clients such as nova-lxd, the command line tool, or your own client/script talk to the container manager through the LXD REST API; on each host (Host A, Host B, Host C, ...), LXD drives LXC on top of the Linux kernel.
    What LXD is: 01 Simple: clean command line interface, simple REST API and clear terminology. 02 Fast: image based, no virtualization, direct hardware access. 03 Secure: safe by default; combines all available kernel security features. 04 Scalable: from a single container on a laptop to tens of thousands of containers in a cluster.
    What LXD isn't: 01 Another virtualization technology: LXD offers an experience very similar to a virtual machine, but it's still containers, with no virtualization overhead and real hardware. 02 A fork of LXC: LXD uses LXC's API to manage the containers behind the scenes. 03 Another application container manager: LXD only cares about full system containers; you can run whatever you want inside a LXD container, including Docker.
    LXD main components: certificates, cluster, containers (snapshots, backups), events, images (aliases), networks, operations, projects, storage pools, storage volumes (snapshots).
    LXD clustering: 01 Built-in clustering support: no external dependencies; all LXD 3.0 or higher installations can be instantly turned into a cluster.
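    To make the "GPU-enabled system containers" part concrete: LXD exposes a gpu device type that can be added to an unprivileged container, and an nvidia.runtime option that maps the host's NVIDIA userspace libraries into it. The Python sketch below (an illustration, not from the slides; the container name and image alias are placeholders) drives the lxc command-line client to do exactly that.

        # Launch an unprivileged LXD container, attach the host GPU(s) to it,
        # and check that the NVIDIA driver stack is visible inside.
        import subprocess

        def run(*args: str) -> None:
            print("+", " ".join(args))          # echo each command before running it
            subprocess.run(args, check=True)

        if __name__ == "__main__":
            run("lxc", "launch", "ubuntu:20.04", "gpu-demo",
                "-c", "nvidia.runtime=true")    # containers are unprivileged by default
            run("lxc", "config", "device", "add", "gpu-demo",
                "gpu0", "gpu")                  # add a "gpu" device (all host GPUs)
            run("lxc", "exec", "gpu-demo", "--", "nvidia-smi")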