Applications of Virtualization in Systems Design (Jyväskylä Studies in Computing 153)


Nezer Jacob Zaidenberg
Applications of Virtualization in Systems Design

Academic dissertation to be publicly discussed, by permission of the Faculty of Information Technology of the University of Jyväskylä, in Ylistönrinne, hall YlistöKem4, on June 15, 2012 at 12 o'clock noon.

UNIVERSITY OF JYVÄSKYLÄ, JYVÄSKYLÄ 2012

Editor: Timo Männikkö, Department of Mathematical Information Technology, University of Jyväskylä
Pekka Olsbo, Ville Korkiakangas, Publishing Unit, University Library of Jyväskylä

URN:ISBN:978-951-39-4763-7
ISBN 978-951-39-4763-7 (PDF)
ISBN 978-951-39-4762-0 (nid.)
ISSN 1456-5390

Copyright © 2012, by University of Jyväskylä
Jyväskylä University Printing House, Jyväskylä 2012

ABSTRACT

Zaidenberg, Nezer Jacob
Applications of Virtualization in Systems Design
Jyväskylä: University of Jyväskylä, 2012, 288 p.
(Jyväskylä Studies in Computing, ISSN 1456-5390; 153)
ISBN 978-951-39-4762-0 (nid.)
ISBN 978-951-39-4763-7 (PDF)
Finnish summary
Diss.

In recent years, the field of virtualization has attracted considerable attention and has shown massive growth in usage and applications. Recent modifications to the underlying hardware, such as the Intel VT-x and AMD-V instruction sets, have made system virtualization much more efficient. Furthermore, recent advances in compiler technology have led the process virtual machine to dominate not only modern programming languages such as C# and Java but also "ancient" programming languages such as C, C++ and Objective-C.

As a consequence of this trend, virtual services and applications using virtualization have started to spread. The rise of technologies such as storage virtualization, virtualization on clouds and network virtualization in clusters provides system designers with new capabilities that were once impossible without vast investment in development time and infrastructure.

This work describes several problems from the fields of Internet streaming, kernel development, scientific computation, disaster recovery protection and trusted computing. All of these problems were solved using virtualization technologies. The described systems are state of the art and enable new capabilities that were impossible or difficult to implement without virtualization support. We describe the architecture and the system implementations, and we provide open source software implementations.

The dissertation provides an understanding of both the strengths and the weaknesses that come with applying virtualization methods to system design. Virtualization methods, as described in this dissertation, are applicable to future application development.

Keywords: Virtualization, Storage Virtualization, Cloud Computing, Lguest, KVM, QEMU, LLVM, Trusted Computing, Asynchronous Mirroring, Replication, Kernel Development, Prefetching, Pre-execution, Internet Streaming, Deduplication
Author      Nezer J. Zaidenberg
            Department of Mathematical Information Technology
            University of Jyväskylä, Finland

Supervisors Professor Pekka Neittaanmäki
            Department of Mathematical Information Technology
            University of Jyväskylä, Finland

            Professor Amir Averbuch
            Department of Computer Science
            Tel Aviv University, Israel

Reviewers   Professor Carmi Merimovich
            Department of Computer Science
            Tel Aviv-Jaffa Academic College, Israel

            Professor Jalil Boukhobza
            Department of Computer Science
            Université de Bretagne Occidentale, France

Opponent    Professor Juha Röning
            Department of Electrical and Information Engineering
            University of Oulu, Finland

ACKNOWLEDGEMENTS

Without the knowledge and assistance of those mentioned below, this work would not have been possible. I would like to take this opportunity to recognize, thank and acknowledge the following people and institutions.

First and foremost, it was a privilege to have Prof. Amir Averbuch as supervisor. I wish to extend my gratitude to Amir for his support and assistance. I have been working with Amir since 2001, when he tutored me during my M.Sc. thesis. Amir's support throughout the years has made this work possible.

I wish to express my gratitude and warmest thanks to my second supervisor, Prof. Pekka Neittaanmäki, for his review, guidance and assistance with all things big and small.

Chapter 3 presents LgDb, joint work with Eviatar Khen, a co-operation that was very stimulating for both of us. Chapter 4 presents AMirror, joint work with Tomer Margalit; working with Tomer was interesting and mutually beneficial. Chapter 5 presents Truly-Protect, joint work with Michael Kiperberg; the cooperation with Michael was very diligent and will also lead to more future work. Chapter 6 presents LLVM-Prefetch, joint work with Michael Kiperberg and Gosha Goldberg. Chapter 7 presents the PPPC system, joint work with Prof. Amir Averbuch and Prof. Yehuda Roditi; the work is inspired by systems developed by vTrails, a start-up I founded. I would also like to thank the other collaborators whose work is described in Chapter 8: Tamir Erez, Limor Gavish and Yuval Meir.

Furthermore, this work would not have been complete without the insightful, helpful and often frustrating comments of my reviewers, Prof. Jalil Boukhobza and Prof. Carmi Merimovich, the comments of Prof. Tapani Ristaniemi, and those of countless other anonymous reviewers of our peer-reviewed papers and posters.

I would like to express my gratitude to the staff of the Department of Mathematical Information Technology for their help and assistance in solving everyday problems, particularly Timo Männikkö for his review of the manuscript and Kati Valpe and Marja-Leena Rantalainen for all their help.

Finally, I wish to thank my parents, my family and all my friends for their moral support and interest in my work, especially my dearest partner Vered Gatt, for her patience, encouragement and review of the work, and my brother Dor Zaidenberg for his insightful comments.

I greatly appreciate the support of the COMAS graduate school, the STRIMM consortium and the Artturi and Ellen Nyyssönen foundation, which provided funding for the research.

I owe a debt of gratitude to my faithful proof-reader, Ms. Shanin Bugeja. If this monograph is reasonably free from error, it is due in no small measure to her meticulous and ruthless application of the red pen.

LIST OF FIGURES

FIGURE 1   The flow and the functional connections among the chapters in the dissertation
FIGURE 2   Type I and Type II hypervisors
FIGURE 3   Process Virtualization
FIGURE 4   Multiple operating systems running on virtual environments on a single system
FIGURE 5   iPhone development using device and simulator
FIGURE 6   Desktop Virtualization
FIGURE 7   Microsoft Application Virtualization
FIGURE 8   Application virtual machine: OS and Application viewpoints
FIGURE 9   Common Language Runtime (CLR) Environment
FIGURE 10  Android (Dalvik) Environment
FIGURE 11  Using VIP via LVS
FIGURE 12  Virtual Private Network (VPN)
FIGURE 13  Cloud Computing Environments
FIGURE 14  The LgDb System Architecture
FIGURE 15  The LgDb System: x86 Privilege Levels
FIGURE 16  Flow Chart of Code Coverage Operation
FIGURE 17  Flow Chart of Profile Operation
FIGURE 18  Generic Replication System
FIGURE 19  Synchronous Replication System
FIGURE 20  Asynchronous Replication System
FIGURE 21  Schematic Architecture of the AMirror Tool
FIGURE 22  Why Volume Groups are Required
FIGURE 23  Deduplication on the Network Traffic
FIGURE 24  Saving of Bandwidth
FIGURE 25  Just-In-Time Decrypting
FIGURE 26  Just-In-Time Decryption Performance: Program Running Time in Seconds as a Function of Input Size (note the logarithmic scale)
FIGURE 27  Program Decryption Time (in cycles) Using AES and Our Cipher with Different Values of p, ℓ = 4, a = 14
FIGURE 28  Program Decryption Time (in cycles) Using AES and Our Cipher with Different Values of p, ℓ = 4
Recommended publications
  • Comparison of Contemporary Real Time Operating Systems
International Journal of Advanced Research in Computer and Communication Engineering, Vol. 4, Issue 11, November 2015.
Sagar Jape, Mihir Kulkarni and Dipti Pawade, Department of Information Technology, K J Somaiya College of Engineering, Mumbai.

Abstract: With the advancement of the embedded area, the importance of real-time operating systems (RTOS) has increased greatly. Nowadays, low latency, efficient memory utilization and effective scheduling techniques are the basic requirements of every embedded application. In this paper we therefore attempt to compare some of the real-time operating systems. The systems (viz. VxWorks, QNX, eCos, RTLinux, Windows CE and FreeRTOS) have been selected according to the highest-user-base criterion. We enlist the peculiar features of the systems with respect to parameters such as scheduling policies, licensing and memory management techniques, and further compare the selected systems over these parameters. Our effort to formulate the often confused, complex and contradictory pieces of information on contemporary RTOSs into a simple, analytically organized structure will provide the reader with decisive insights into selecting an RTOS according to his requirements.

Keywords: RTOS, VxWorks, QNX, eCos, RTLinux, Windows CE, FreeRTOS

I. INTRODUCTION
An operating system (OS) is a set of software that handles computer hardware. Basically, it acts as an interface between user programs and the computer hardware. ... designed, known as a Real Time Operating System (RTOS). The motive behind RTOS development is to process data ...
  • RCU Usage in the Linux Kernel: One Decade Later
RCU Usage In the Linux Kernel: One Decade Later
Paul E. McKenney (Linux Technology Center, IBM Beaverton), Silas Boyd-Wickizer (MIT CSAIL), Jonathan Walpole (Computer Science Department, Portland State University)

Abstract
Read-copy update (RCU) is a scalable high-performance synchronization mechanism implemented in the Linux kernel. RCU's novel properties include support for concurrent reading and writing, and highly optimized inter-CPU synchronization. Since RCU's introduction into the Linux kernel over a decade ago its usage has continued to expand. Today, most kernel subsystems use RCU. This paper discusses the requirements that drove the development of RCU, the design and API of the Linux RCU implementation, and how kernel developers apply RCU.

1 Introduction
The first Linux kernel to include multiprocessor support is not quite 20 years old. ... unique among the commonly used kernels. Understanding RCU is now a prerequisite for understanding the Linux implementation and its performance. The success of RCU is, in part, due to its high performance in the presence of concurrent readers and updaters. The RCU API facilitates this with two relatively simple primitives: readers access data structures within RCU read-side critical sections, while updaters use RCU synchronization to wait for all pre-existing RCU read-side critical sections to complete. When combined, these primitives allow threads to concurrently read data structures, even while other threads are updating them. This paper describes the performance requirements that led to the development of RCU, gives an overview of the RCU API and implementation, and examines how kernel developers have used RCU to optimize kernel performance.
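The two primitives described above map directly onto the kernel's RCU API. A minimal sketch of the pattern follows; the struct, the global pointer and the update path are illustrative, not taken from the paper, and a real updater would hold its own update-side lock.

```c
/* Sketch of the RCU reader/updater pattern described above.
 * Kernel-module context assumed; struct foo and the update path
 * are illustrative, not from the paper. */
#include <linux/errno.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int value;
};

static struct foo __rcu *global_foo;

/* Reader: accesses the shared pointer inside an RCU read-side
 * critical section; never blocks the updater. */
static int read_value(void)
{
	struct foo *p;
	int v = -1;

	rcu_read_lock();
	p = rcu_dereference(global_foo);
	if (p)
		v = p->value;
	rcu_read_unlock();
	return v;
}

/* Updater: publishes a new version, then waits for all pre-existing
 * read-side critical sections to finish before freeing the old one. */
static int update_value(int new_value)
{
	struct foo *new_p, *old_p;

	new_p = kmalloc(sizeof(*new_p), GFP_KERNEL);
	if (!new_p)
		return -ENOMEM;
	new_p->value = new_value;

	old_p = rcu_dereference_protected(global_foo, 1); /* caller serializes updates */
	rcu_assign_pointer(global_foo, new_p);
	synchronize_rcu();          /* wait for pre-existing readers */
	kfree(old_p);
	return 0;
}
```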
  • Lguest: A Journey of Learning the Linux Kernel Internals
Lguest: A journey of learning the Linux kernel internals
Daniel Baluta <[email protected]>

What is lguest?
● "Lguest is an adventure, with you, the reader, as a Hero"
● Minimal 32-bit x86 hypervisor
● Introduced in 2.6.23 (October 2007) by Rusty Russell & co.
● Around 6000 lines of code
● "Relatively" "easy" to understand and hack on
● Support for learning Linux kernel internals
● Porting to other architectures is a challenging task

Hypervisor types

make Preparation
● Guest
  ○ Life of a guest
● Drivers
  ○ virtio devices: console, block, net
● Launcher
  ○ User-space program that sets up and configures Guests
● Host
  ○ Normal Linux kernel + lg.ko kernel module
● Switcher
  ○ Low-level Host <-> Guest switch
● Mastery
  ○ What's next?

Lguest overview

make Guest
● "Simple creature, identical to the Host [..] behaving in simplified ways"
  ○ Same kernel as Host (or at least, built from the same source)
● Booting trace (when using bzImage)
  ○ arch/x86/boot/compressed/head_32.S: startup_32
    ■ uncompresses the kernel
  ○ arch/x86/kernel/head_32.S: startup_32
    ■ initialization, checks the hardware_subarch field
  ○ arch/x86/lguest/head_32.S
    ■ LGUEST_INIT hypercall, jumps to lguest_init
  ○ arch/x86/lguest/boot.c
    ■ once we are here, we know we are a guest
    ■ override privileged instructions
    ■ boot as a normal kernel, calling i386_start_kernel

Hypercalls, struct lguest_data
● second communication method between Host and Guest
● LHCALL_GUEST_INIT
  ○ hypercall to tell the Host where the lguest_data struct is
● both Guest and Host publish information here
● optimize hypercalls
  ○ irq_enabled
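To make the hypercall path above concrete, here is a simplified sketch of how an lguest-style paravirtualized guest traps to the Host. It is modeled on lguest's hcall() wrapper (a software interrupt with the call number and arguments in registers), but the register layout and the hypercall number shown are illustrative rather than copied from a specific kernel version.

```c
/* Illustrative sketch of a paravirtual hypercall in the style of lguest:
 * the Guest raises a software interrupt that the Host intercepts.  The
 * register assignments and hypercall number are illustrative; the real
 * definitions live in arch/x86/include/asm/lguest_hcall.h. */
#define LHCALL_FLUSH_TLB   5             /* example hypercall number */

static inline unsigned long
hcall(unsigned long call, unsigned long arg1,
      unsigned long arg2, unsigned long arg3)
{
	/* Call number in %eax, arguments in %ebx, %ecx, %edx; the Host
	 * writes the return value back into %eax. */
	asm volatile("int $0x1f"             /* lguest trap vector */
		     : "=a"(call)
		     : "a"(call), "b"(arg1), "c"(arg2), "d"(arg3)
		     : "memory");
	return call;
}

/* A privileged operation the Guest cannot perform directly is replaced
 * by an explicit request to the Host: */
static void guest_flush_tlb(void)
{
	hcall(LHCALL_FLUSH_TLB, 0, 0, 0);
}
```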
  • Lguest: The Little Hypervisor
Lguest: the little hypervisor
Matías Zabaljáuregui, [email protected]
Based on the lguest documentation written by Rusty Russell. Note: this is a draft and incomplete version.

Outline: lguest introduction, launcher, guest kernel, host kernel, switcher.
Not covered: virtio virtual devices, patching technique, async hypercalls, rewriting technique, guest virtual interrupt controller, guest virtual clock, hw-assisted vs. paravirtualization, hybrid virtualization, benchmarks.

Introduction
A hypervisor allows multiple operating systems to run on a single machine. Lguest is paravirtualization within the Linux kernel: a proof of concept (paravirt_ops, virtio...) and a teaching tool.

Guests and hosts
A module (lg.ko) allows us to run other Linux kernels the same way we'd run processes. We only run specially modified Guests. Setting CONFIG_LGUEST_GUEST compiles [arch/x86/lguest/boot.c] into the kernel so it knows how to be a Guest at boot time. This means that you can use the same kernel you boot normally (i.e. as a Host) as a Guest. These Guests know that they cannot do privileged operations, such as disabling interrupts, and that they have to ask the Host to do such things explicitly. Some of the replacements for such low-level native hardware operations call the Host through a "hypercall". [drivers/lguest/hypercalls.c]

High-level overview
launcher, /dev/lguest, guest kernel, host kernel, switcher

Launcher
main(), some implementations; /dev/lguest: write → initialize, read → run guest. [Documentation/lguest/lguest.c] The Launcher is the Host userspace program which sets up, runs and services the Guest. Just to confuse you: to the Host kernel, the Launcher *is* the Guest, and we shall see more of that later. A minimal sketch of this write/read loop follows below.
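The sketch below only illustrates the write-to-initialize / read-to-run protocol described in these slides. The request code and the layout of the initialization arguments are placeholders (the real ones are defined in the kernel's lguest headers and used in Documentation/lguest/lguest.c); only the overall shape of the loop is intended to match.

```c
/* Minimal sketch of an lguest-style launcher loop: initialize the Guest
 * with a write() to /dev/lguest, then run it with blocking read()s that
 * return whenever the Guest needs servicing.  Request codes and argument
 * layout here are placeholders, not the real ABI. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

enum { REQ_INITIALIZE = 0 };   /* placeholder for the real LHREQ_INITIALIZE */

int main(void)
{
	int lguest_fd = open("/dev/lguest", O_RDWR);
	if (lguest_fd < 0) {
		perror("open /dev/lguest");
		return 1;
	}

	/* write → initialize: tell the Host where the Guest's memory is,
	 * how big it is and where to start executing (placeholder values). */
	unsigned long init_args[] = { REQ_INITIALIZE,
				      /* guest memory base   */ 0,
				      /* guest page limit    */ 0,
				      /* guest start address */ 0 };
	if (write(lguest_fd, init_args, sizeof(init_args)) < 0) {
		perror("initialize guest");
		return 1;
	}

	/* read → run guest: each read runs the Guest until it exits or needs
	 * the Launcher (e.g. for device emulation). */
	for (;;) {
		unsigned long notify;
		if (read(lguest_fd, &notify, sizeof(notify)) < 0) {
			perror("run guest");
			break;
		}
		/* ... service the Guest (console, block, net) and loop ... */
	}
	close(lguest_fd);
	return 0;
}
```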
  • Software de Sistemas: Introducción (System Software: Introduction)
SYSTEM SOFTWARE: INTRODUCTION
• The concept of software goes beyond computer programs in their various states; it covers everything intangible that is related to them.

What is it?
• An essential part of classifying operating systems
• Known as base software
• A set of software programs that makes it possible to interact with the hardware and operate it, together with configurations and input/output functions

What does it do?
• It allows the operating system and the computer to be used, including diagnostic tools
• Its purpose is to isolate an application program from the hardware as much as possible

Examples of system software programs
• Potential examples:
  ▪ Loaders
  ▪ Linkers
  ▪ Software utilities
  ▪ Graphical interface
  ▪ Shells
  ▪ BIOS
  ▪ Hypervisors
  ▪ Boot loaders
• Note: if the system software is stored in non-volatile memory it is called firmware

Types
• As such there are not many kinds of system software, but it can be divided into three:
  • Operating system
  • Drivers
  • Utility programs

Types: OPERATING SYSTEM
• The part in charge of administering the hardware, that is, the computer's components, and of making them work together toward a single goal

OPERATING SYSTEMS
• MICROSOFT WINDOWS
  ▪ Kernel: monolithic (MS-DOS based versions) and hybrid (Windows NT based versions)
  ▪ Platforms: ARM, Intel architecture, MIPS, Alpha, PowerPC
• MAC OS
  ▪ Kernel: XNU, based on Mach and BSD, hybrid type
  ▪ Platforms: PowerPC
• LINUX
  ▪ Kernel: Linux kernel
  ▪ Platforms: DEC Alpha, ARM, PowerPC, SuperH, SPARC, ETRAX CRIS, MIPS, MN103, etc.
  • SUSE BU Presentation Template 2014
TUT7317: A Practical Deep Dive for Running High-End, Enterprise Applications on SUSE Linux
Holger Zecha, Senior Architect, REALTECH AG, [email protected]

Table of Contents
• About REALTECH
• About this session
• Design principles
• Different layers which need to be considered

About REALTECH
REALTECH Software: Business Service Management, Service Operations Management, Configuration Management and CMDB, IT Infrastructure Management, Change Management for SAP.
REALTECH Consulting: SAP Mobile, Cloud Computing, SAP HANA, SAP Solution Manager, IT Technology, Virtualization, IT Infrastructure.
Our customers: manufacturing, IT services, healthcare, media, utilities, consumer products, automotive, logistics, finance, retail. (REALTECH Consulting GmbH)

The Inspiration for this Session
• Several performance workshops at customers
• Performance escalations at customers who migrated from UNIX (AIX, Solaris, HP-UX) to Linux
• Presenting the experiences made at these customers in this session
• Protecting the audience from performance degradation caused by:
  – Significant design mistakes
  – Wrong architecture assumptions
  – Having no architecture at all

Performance Optimization: The False Estimation
Upgrading a server with CPUs that are 12.5% faster does not improve application ...
  • LMAX Disruptor
Disruptor: High performance alternative to bounded queues for exchanging data between concurrent threads
Martin Thompson, Dave Farley, Michael Barker, Patricia Gee, Andrew Stewart. May 2011. http://code.google.com/p/disruptor/

Abstract
LMAX was established to create a very high performance financial exchange. As part of our work to accomplish this goal we have evaluated several approaches to the design of such a system, but as we began to measure these we ran into some fundamental limits with conventional approaches.

Many applications depend on queues to exchange data between processing stages. Our performance testing showed that the latency costs, when using queues in this way, were in the same order of magnitude as the cost of IO operations to disk (RAID or SSD based disk system) – dramatically slow. If there are multiple queues in an end-to-end operation, this will add hundreds of microseconds to the overall latency. There is clearly room for optimisation.

Further investigation and a focus on the computer science made us realise that the conflation of concerns inherent in conventional approaches (e.g. queues and processing nodes) leads to contention in multi-threaded implementations, suggesting that there may be a better approach. Thinking about how modern CPUs work, something we like to call "mechanical sympathy", and using good design practices with a strong focus on teasing apart the concerns, we came up with a data structure and a pattern of use that we have called the Disruptor. Testing has shown that the mean latency using the Disruptor for a three-stage pipeline is 3 orders of magnitude lower than an equivalent queue-based approach.
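The following sketch is not the Disruptor itself (which is a Java library with multi-producer sequencing, batching and configurable wait strategies); it is only a simplified single-producer/single-consumer ring buffer in C11 that illustrates the underlying idea of replacing a locked queue with a pre-allocated ring indexed by monotonically increasing sequence counters.

```c
/* Simplified single-producer/single-consumer ring buffer in C11,
 * illustrating the pre-allocated-ring-plus-sequence-counters idea
 * rather than the actual Disruptor implementation. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 1024                      /* must be a power of two */
#define RING_MASK (RING_SIZE - 1)

struct ring {
	long buffer[RING_SIZE];
	_Atomic size_t head;                /* next slot to publish  */
	_Atomic size_t tail;                /* next slot to consume  */
};

/* Producer: claim the next slot, write the event, then publish it by
 * advancing head with release semantics. */
static bool ring_publish(struct ring *r, long event)
{
	size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
	size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

	if (head - tail == RING_SIZE)       /* ring is full */
		return false;
	r->buffer[head & RING_MASK] = event;
	atomic_store_explicit(&r->head, head + 1, memory_order_release);
	return true;
}

/* Consumer: read the next published event, then advance tail so the
 * producer may reuse that slot. */
static bool ring_consume(struct ring *r, long *event)
{
	size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
	size_t head = atomic_load_explicit(&r->head, memory_order_acquire);

	if (tail == head)                   /* ring is empty */
		return false;
	*event = r->buffer[tail & RING_MASK];
	atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
	return true;
}
```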
  • A Thread Synchronization Model for the PREEMPT RT Linux Kernel
A Thread Synchronization Model for the PREEMPT RT Linux Kernel
Daniel B. de Oliveira (RHEL Platform/Real-time Team, Red Hat, Inc., Pisa, Italy; Department of Systems Automation, UFSC, Florianópolis, Brazil; RETIS Lab, Scuola Superiore Sant'Anna, Pisa, Italy), Rômulo S. de Oliveira (Department of Systems Automation, UFSC, Florianópolis, Brazil), Tommaso Cucinotta (RETIS Lab, Scuola Superiore Sant'Anna, Pisa, Italy)

Abstract
This article proposes an automata-based model for describing and validating sequences of kernel events in Linux PREEMPT RT and how they influence the timeline of threads' execution, comprising preemption control, interrupt handling and control, scheduling and locking. This article also presents an extension of the Linux tracing framework that enables the tracing of kernel events to verify the consistency of the kernel execution compared to the event sequences that are legal according to the formal model. This enables cross-checking of a kernel behavior against the formalized one and, in case of inconsistency, it pinpoints possible areas of improvement of the kernel, useful for regression testing. Indeed, we describe in detail three problems in the kernel revealed by using the proposed technique, along with a short summary of how we reported and proposed fixes to the Linux kernel community. As an example of the usage of the model, the analysis of the events involved in the activation of the highest priority thread is presented, describing the delays that occur in this operation at the same granularity used by kernel developers. This illustrates how it is possible to take advantage of the model for analyzing the preemption model of Linux.

Keywords: Real-time computing, Operating systems, Linux kernel, Automata, Software verification, Synchronization.
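The verification idea in this abstract (replaying a trace of observed kernel events through a finite automaton and flagging any transition the model does not allow) can be sketched in a few lines. The states, events and transition table below are invented for illustration; the paper's actual model covers preemption, interrupt, scheduling and locking events in much finer detail.

```c
/* Tiny illustration of automata-based trace checking: feed a sequence of
 * observed events through a deterministic automaton and report the first
 * transition the model does not allow.  States, events and the transition
 * table are invented for illustration only. */
#include <stdio.h>

enum state { PREEMPT_ENABLED, PREEMPT_DISABLED, NUM_STATES };
enum event { EV_PREEMPT_DISABLE, EV_PREEMPT_ENABLE, EV_SCHED_SWITCH, NUM_EVENTS };

/* transition[state][event] gives the next state, or -1 if the event is
 * illegal in that state (e.g. a context switch while preemption is disabled). */
static const int transition[NUM_STATES][NUM_EVENTS] = {
	[PREEMPT_ENABLED]  = { PREEMPT_DISABLED, -1,              PREEMPT_ENABLED },
	[PREEMPT_DISABLED] = { -1,               PREEMPT_ENABLED, -1              },
};

static int check_trace(const enum event *trace, size_t len)
{
	int state = PREEMPT_ENABLED;

	for (size_t i = 0; i < len; i++) {
		int next = transition[state][trace[i]];
		if (next < 0) {
			fprintf(stderr, "illegal event %d in state %d at position %zu\n",
				(int)trace[i], state, i);
			return -1;      /* observed behavior disagrees with the model */
		}
		state = next;
	}
	return 0;
}

int main(void)
{
	/* This trace is illegal: a switch happens while preemption is disabled. */
	enum event trace[] = { EV_PREEMPT_DISABLE, EV_SCHED_SWITCH, EV_PREEMPT_ENABLE };
	return check_trace(trace, sizeof(trace) / sizeof(trace[0])) ? 1 : 0;
}
```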
  • Proceedings of the Linux Symposium
Proceedings of the Linux Symposium, Volume One, June 27th–30th, 2007, Ottawa, Ontario, Canada

Contents
The Price of Safety: Evaluating IOMMU Performance, p. 9 (Ben-Yehuda, Xenidis, Mostrows, Rister, Bruemmer, Van Doorn)
Linux on Cell Broadband Engine status update, p. 21 (Arnd Bergmann)
Linux Kernel Debugging on Google-sized clusters, p. 29 (M. Bligh, M. Desnoyers, & R. Schultz)
Ltrace Internals, p. 41 (Rodrigo Rubira Branco)
Evaluating effects of cache memory compression on embedded systems, p. 53 (Anderson Briglia, Allan Bezerra, Leonid Moiseichuk, & Nitin Gupta)
ACPI in Linux – Myths vs. Reality, p. 65 (Len Brown)
Cool Hand Linux – Handheld Thermal Extensions, p. 75 (Len Brown)
Asynchronous System Calls, p. 81 (Zach Brown)
Frysk 1, Kernel 0?, p. 87 (Andrew Cagney)
Keeping Kernel Performance from Regressions, p. 93 (T. Chen, L. Ananiev, and A. Tikhonov)
Breaking the Chains—Using LinuxBIOS to Liberate Embedded x86 Processors, p. 103 (J. Crouse, M. Jones, & R. Minnich)
GANESHA, a multi-usage with large cache NFSv4 server, p. 113 (P. Deniel, T. Leibovici, & J.-C. Lafoucrière)
Why Virtualization Fragmentation Sucks, p. 125 (Justin M. Forbes)
A New Network File System is Born: Comparison of SMB2, CIFS, and NFS, p. 131 (Steven French)
Supporting the Allocation of Large Contiguous Regions of Memory, p. 141 (Mel Gorman)
Kernel Scalability—Expanding the Horizon Beyond Fine Grain Locks, p. 153 (Corey Gough, Suresh Siddha, & Ken Chen)
Kdump: Smarter, Easier, Trustier, p. 167 (Vivek Goyal)
Using KVM to run Xen guests without Xen, p. 179 (R.A. Harper, A.N. Aliguori & M.D. Day)
Djprobe—Kernel probing with the smallest overhead, p. 189 (M. Hiramatsu and S. Oshima)
Desktop integration of Bluetooth, p. 201 (Marcel Holtmann)
How virtualization makes power management different, p. 205 (Yu Ke)
Ptrace, Utrace, Uprobes: Lightweight, Dynamic Tracing of User Apps, p. 215 (J. ...)
  • Improving Performance of Virtual Machines by Virtio Bridge Bypass
International Journal of Engineering and Computer Science, Vol. 6, Issue 4, April 2017, pp. 20931-20937. DOI: 10.18535/ijecs/v6i4.24
Shirley Kotian, Kirti Menon, Kirti Menon, Utsav Mundada, Neeraj Vilas Auti (PICT)

ABSTRACT
Inspired by the Virtio module of virtualization, we propose an alternate method to communicate directly with PCI devices such as the NIC without the use of any kernel modules. This method uses a specialized module written by us which avoids the bridge mechanism used in Virtio, which increases latency. This module resides in the userspace of the guest OS; we specifically target the e1000 device for this purpose and later plan to make it generic for all PCI devices. Our motivation is to avoid unnecessary communication with the kernel, which slows down the system. As the first step, we do resource mapping to map the PCI device memory into userspace. Then, we expose the PCI configuration space through a userspace module using ACPI tables. Thus, we create a userspace PCI driver which decreases the latency in access time and increases the speed of execution. The applications in the guest OS that request communication with the PCI devices will be redirected to our application. This takes some load off the kernel and reduces its overhead. Finally, we boot a VM that actually talks to our PCI device emulator.

GENERAL TERMS: Virtio, UPCI, e1000, QEMU, virt-manager, UIO.
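The "resource mapping" step described above is commonly done by mmap()ing the device's BAR through sysfs. The sketch below shows that generic mechanism only; the PCI address and register offset are placeholders, and the paper's own UPCI module may implement the mapping differently.

```c
/* Generic sketch of mapping a PCI device's BAR0 into userspace through
 * sysfs, a usual starting point for a userspace PCI driver.  The device
 * address and register offset are placeholders. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	/* BAR0 of a hypothetical e1000 NIC at PCI address 0000:00:03.0 */
	const char *bar0 = "/sys/bus/pci/devices/0000:00:03.0/resource0";

	int fd = open(bar0, O_RDWR | O_SYNC);
	if (fd < 0) {
		perror("open BAR0");
		return 1;
	}

	struct stat st;
	if (fstat(fd, &st) < 0) {
		perror("fstat");
		return 1;
	}

	/* Map the whole BAR; device registers can now be accessed as memory. */
	volatile uint32_t *regs = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
				       MAP_SHARED, fd, 0);
	if (regs == MAP_FAILED) {
		perror("mmap BAR0");
		return 1;
	}

	/* Example: read a 32-bit register at a placeholder offset of 0x08. */
	uint32_t status = regs[2];
	printf("register 0x08 = 0x%08x\n", status);

	munmap((void *)regs, st.st_size);
	close(fd);
	return 0;
}
```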
  • Administration Guide: SUSE Linux Enterprise High Availability Extension 15 SP1, by Tanja Roth and Thomas Schraitle
SUSE Linux Enterprise High Availability Extension 15 SP1: Administration Guide, by Tanja Roth and Thomas Schraitle.

This guide is intended for administrators who need to set up, configure, and maintain clusters with SUSE® Linux Enterprise High Availability Extension. For quick and efficient configuration and administration, the product includes both a graphical user interface and a command line interface (CLI). For performing key tasks, both approaches are covered in this guide. Thus, you can choose the appropriate tool that matches your needs.

Publication Date: September 24, 2021. SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA. https://documentation.suse.com

Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled "GNU Free Documentation License". For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE ...
  • On the Performance Variation in Modern Storage Stacks
On the Performance Variation in Modern Storage Stacks
Zhen Cao (Stony Brook University), Vasily Tarasov (IBM Research—Almaden), Hari Prasath Raman (Stony Brook University), Dean Hildebrand (IBM Research—Almaden), and Erez Zadok (Stony Brook University)
Appears in the proceedings of the 15th USENIX Conference on File and Storage Technologies (FAST'17)

Abstract
Ensuring stable performance for storage stacks is important, especially with the growth in popularity of hosted services where customers expect QoS guarantees. The same requirement arises from benchmarking settings as well. One would expect that repeated, carefully controlled experiments might yield nearly identical performance results—but we found otherwise. We therefore undertook a study to characterize the amount of variability in benchmarking modern storage stacks. In this paper we report on the techniques used and the results of this study. We conducted many experiments using several popular workloads, file systems, and storage devices—and varied many parameters across the entire ...

From the introduction: ...tions on different machines have to compete for heavily shared resources, such as network switches [9]. In this paper we focus on characterizing and analyzing performance variations arising from benchmarking a typical modern storage stack that consists of a file system, a block layer, and storage hardware. Storage stacks have been proven to be a critical contributor to performance variation [18, 33, 40]. Furthermore, among all system components, the storage stack is the cornerstone of data-intensive applications, which become increasingly more important in the big data era [8, 21]. Although our main focus here is reporting and analyzing the variations in benchmarking processes, we believe that our observations pave the way for understanding stability issues in production systems.
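A common way to quantify the run-to-run variability this paper studies is the relative standard deviation of repeated measurements. The short sketch below shows only that calculation, with made-up throughput numbers; it is not taken from the paper's methodology.

```c
/* Quantifying run-to-run variability as the relative standard deviation
 * (sample standard deviation divided by the mean) of repeated benchmark
 * runs.  The throughput numbers are made up for illustration. */
#include <math.h>
#include <stdio.h>

int main(void)
{
	double throughput[] = { 412.0, 405.5, 398.2, 430.1, 407.9 };  /* MB/s */
	size_t n = sizeof(throughput) / sizeof(throughput[0]);

	double sum = 0.0;
	for (size_t i = 0; i < n; i++)
		sum += throughput[i];
	double mean = sum / n;

	double sq = 0.0;
	for (size_t i = 0; i < n; i++)
		sq += (throughput[i] - mean) * (throughput[i] - mean);
	double stddev = sqrt(sq / (n - 1));        /* sample standard deviation */

	printf("mean = %.1f MB/s, stddev = %.1f MB/s, relative stddev = %.1f%%\n",
	       mean, stddev, 100.0 * stddev / mean);
	return 0;
}
```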