Energy Management for Hypervisor-Based Virtual Machines

Total Pages: 16

File Type: PDF, Size: 1020 KB

Energy Management for Hypervisor-Based Virtual Machines
Jan Stoess, Christian Lang, Frank Bellosa
System Architecture Group, University of Karlsruhe, Germany
{stoess, chlang, bellosa}@ira.uka.de

Abstract

Current approaches to power management are based on operating systems with full knowledge of and full control over the underlying hardware; the distributed nature of multi-layered virtual machine environments renders such approaches insufficient. In this paper, we present a novel framework for energy management in modular, multi-layered operating system structures. The framework provides a unified model to partition and distribute energy, and mechanisms for energy-aware resource accounting and allocation. As a key property, the framework explicitly takes into account the recursive energy consumption spent, e.g., in the virtualization layer or in subsequent driver components.

Our prototypical implementation targets hypervisor-based virtual machine systems and comprises two components: a host-level subsystem, which controls machine-wide energy constraints and enforces them among all guest OSes and service components, and, complementary, an energy-aware guest operating system capable of fine-grained, application-specific energy management. Guest-level energy management thereby relies on effective virtualization of physical energy effects provided by the virtual machine monitor. Experiments with CPU and disk devices and an external data acquisition system demonstrate that our framework accurately controls and stipulates the power consumption of individual hardware devices, both for energy-aware and energy-unaware guest operating systems.

1 Introduction

Over the past few years, virtualization technology has regained considerable attention in the design of computer systems. Virtual machines (VMs) establish a development path for incorporating new functionality – server consolidation, transparent migration, secure computing, to name a few – into a system that still retains compatibility with existing operating systems (OSes) and applications. At the very same time, the ever-increasing power density and dissipation of modern servers has turned energy management into a key concern in the design of OSes.

Research has proposed several approaches to OS-directed control over a computer's energy consumption, including user- and service-centric management schemes. However, most current approaches to energy management are developed for standard, legacy OSes with a monolithic kernel. A monolithic kernel has full control over all hardware devices and their modes of operation; it can directly regulate device activity or energy consumption to meet thermal or energy constraints. A monolithic kernel also controls the whole execution flow in the system. It can easily track the power consumption at the level of individual applications and leverage its application-specific knowledge during device allocation to achieve dynamic and comprehensive energy management.

Modern VM environments, in contrast, consist of a distributed and multi-layered software stack including a hypervisor, multiple VMs and guest OSes, device driver modules, and other service infrastructure (Figure 1). In such an environment, direct and centralized energy management is infeasible, as device control and accounting information are distributed across the whole system.

[Figure 1: Increasing number of layers and components in today's virtualization-based OSes – applications on top of guest OSes with virtual devices (vCPU, vNIC, vDISK), service and driver VMs, and the hypervisor multiplexing the physical CPU, NIC, and disk.]

At the lowest level of the virtual environment, the privileged hypervisor and host driver modules have direct control over hardware devices and their energy consumption. By inspecting internal data structures, they can obtain coarse-grained per-VM information on how energy is spent on the hardware. However, the host level does not possess any knowledge of the energy consumption of individual applications. Moreover, with the ongoing trend to restrict the hypervisor's support to a minimal set of hardware and to perform most of the device control in unprivileged driver domains [8,15], hypervisor and driver modules each have direct control over only a small set of devices, but are oblivious to the ones not managed by themselves.

The guest OSes, in turn, have intrinsic knowledge of their own applications. However, guest OSes operate on deprivileged virtualized devices, without direct access to the physical hardware, and are unaware that the hardware may be shared with other VMs. Guest OSes are also unaware of the side effects on power consumption caused by the virtual device logic: since virtualization is transparent, the "hidden", or recursive, power consumption that the virtualization layer itself causes when requiring the CPU or other resources simply vanishes unaccounted in the software stack. Depending on the complexity of the interposition, resource requirements can be substantial: a recent study shows that the virtualization layer requires a considerable amount of CPU processing time for I/O virtualization [5].

The situation is worsened further by the non-partitionability of some of the physical effects of power dissipation: the temperature of a power-consuming device, for example, cannot simply be partitioned among different VMs in a way that each one gets allotted its own share of the temperature. Beyond the lack of comprehensive control over and knowledge of the power consumption in the system, we can thus identify the lack of a model to comprehensively express the physical effects of energy consumption in distributed OS environments.

To summarize, current power management schemes are limited to legacy OSes and are unsuitable for VM environments. Current virtualization solutions disregard most energy-related aspects of the hardware platform; they usually virtualize only a set of standard hardware devices, without any special power management capabilities or support for energy management. Up to now, power management for VMs has been limited to the capabilities of the host OS in hosted solutions and is mostly absent from server-oriented hypervisor solutions.

Observing these problems, we present a novel framework for managing energy in distributed, multi-layered OS environments, as they are common in today's computer systems. Our framework makes three contributions. The first contribution is a model for partitioning and distributing energy effects; our model relies solely on the notion of energy as the base abstraction. Energy quantifies the physical effects of power consumption in a distributable way and can be partitioned and translated from a global, system-wide notion into a local, component- or user-specific one. The second contribution is a distributed energy accounting approach, which accurately tracks the energy spent in the system back to the originating activities. In particular, the presented approach incorporates both the direct and the side-effectual energy consumption spent in the virtualization layers or subsequent driver components. As the third contribution, our framework exposes all resource allocation mechanisms from drivers and other resource managers to the respective energy management subsystems. Exposed allocation enables dynamic and remote regulation of energy consumption so that the overall consumption matches the desired constraints.

We have implemented a prototype that targets hypervisor-based systems. We argue that virtual server environments benefit from energy management within and across VMs; hence the prototype employs management software both at host level and at guest level. A host-level management subsystem enforces system-wide energy constraints among all guest OSes and driver or service components. It accounts the direct and hidden power consumption of VMs and regulates the allocation of physical devices to ensure that each VM does not consume more than a given power allotment. Naturally, the host-level subsystem performs independently of the guest operating system; on the downside, it operates at a low level and in a coarse-grained manner. To benefit from fine-grained, application-level knowledge, we have complemented the host-level part with an optional energy-aware guest OS, which redistributes the VM-wide power allotments among its own, subordinate applications. In analogy to the host level, where physical devices are allocated to VMs, the guest OS regulates the allocation of virtual devices to ensure that its applications do not spend more energy than their allotted budget.

Our experiments with CPU and disk devices demonstrate that the prototype effectively accounts and regulates the power consumption of individual physical and virtual devices, both for energy-aware and energy-unaware guest OSes.

The rest of the paper is structured as follows: In Section 2, we present a generic model for energy management in distributed, multi-layered OS environments. We then detail our prototypical implementation for hypervisor-based systems in Section 3. We present experiments and results in Section 4. We then discuss related approaches in Section 5, and finally draw a conclusion and outline future work in Section 6.

2 Distributed Energy Management

The following ...
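The host-level accounting and allotment scheme outlined in the introduction can be pictured with a small sketch. The following self-contained C program is illustrative only, not the paper's implementation: it charges each VM both for the energy it consumes directly and for the energy that drivers or the hypervisor spend on its behalf (the recursive share), then compares per-epoch power against the VM's allotment. All structures, epoch lengths and numbers are assumed for the example.

    /*
     * Minimal sketch of host-level energy accounting with recursive
     * charging and allotment enforcement.  Illustrative only: the data
     * structures, epoch length and throttling policy are assumptions,
     * not the paper's actual implementation.
     */
    #include <stdio.h>

    #define NUM_VMS    3
    #define EPOCH_MS   1000.0          /* accounting epoch in milliseconds */

    struct vm_account {
        double direct_mj;              /* energy consumed on devices the VM uses directly   */
        double recursive_mj;           /* energy spent on its behalf in drivers/hypervisor  */
        double allotment_mw;           /* per-VM power budget in milliwatts                 */
        int    throttled;
    };

    static struct vm_account vms[NUM_VMS] = {
        { 0, 0, 4000.0, 0 }, { 0, 0, 2000.0, 0 }, { 0, 0, 1000.0, 0 },
    };

    /* A device driver charges the energy of a request directly to the client VM. */
    static void charge_direct(int vm, double millijoules)
    {
        vms[vm].direct_mj += millijoules;
    }

    /*
     * A driver or the hypervisor charges the energy of work it performed
     * on behalf of a client VM (the "hidden" or recursive consumption).
     */
    static void charge_recursive(int client_vm, double millijoules)
    {
        vms[client_vm].recursive_mj += millijoules;
    }

    /* At the end of each epoch, compare consumed power with the allotment. */
    static void enforce_allotments(void)
    {
        for (int i = 0; i < NUM_VMS; i++) {
            double total_mj = vms[i].direct_mj + vms[i].recursive_mj;
            double power_mw = total_mj / (EPOCH_MS / 1000.0);   /* mJ per s == mW */

            vms[i].throttled = power_mw > vms[i].allotment_mw;
            printf("VM%d: %.1f mW (budget %.1f mW)%s\n", i, power_mw,
                   vms[i].allotment_mw, vms[i].throttled ? " -> throttle" : "");

            vms[i].direct_mj = vms[i].recursive_mj = 0;          /* start next epoch */
        }
    }

    int main(void)
    {
        charge_direct(0, 3500.0);     /* VM0 burns CPU energy itself          */
        charge_recursive(0, 900.0);   /* plus disk-driver work on its behalf  */
        charge_direct(1, 1200.0);
        charge_recursive(2, 400.0);
        enforce_allotments();
        return 0;
    }

In the paper's prototype, the corresponding decisions are driven by measured device energy and enforced by regulating the allocation of physical and virtual devices, rather than by a simple flag as in this sketch.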
Recommended publications
  • Effective Virtual CPU Configuration with QEMU and Libvirt
    Effective Virtual CPU Configuration with QEMU and libvirt. Kashyap Chamarthy <[email protected]>. Open Source Summit Edinburgh, 2018. Timeline of recent CPU flaws, 2018: Jan 03 – Spectre v1: Bounds Check Bypass; Jan 03 – Spectre v2: Branch Target Injection; Jan 03 – Meltdown: Rogue Data Cache Load; May 21 – Spectre-NG: Speculative Store Bypass; Jun 21 – TLBleed: Side-channel attack over shared TLBs; Jun 29 – NetSpectre: Side-channel attack over local network; Jul 10 – Spectre-NG: Bounds Check Bypass Store; Aug 14 – L1TF: "L1 Terminal Fault". What this talk is not about, out of scope: internals of various side-channel attacks; how to exploit Meltdown & Spectre variants; details of performance implications; related talks are in the 'References' section. KVM-based virtualization components: OpenStack et al. and libguestfs (guestfish) drive libvirtd via its virt driver; libvirtd manages QEMU processes (VM1, VM2 with Disk1 and Disk2, or a custom appliance) over QMP; QEMU in turn uses ioctl() on Linux with KVM.
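    As a concrete complement to the component stack sketched in this excerpt (management tools driving libvirtd, which controls QEMU/KVM over QMP), the following minimal C program uses the libvirt API to query a guest's vCPU and memory configuration. The URI qemu:///system is the usual local-system endpoint, and the domain name "myguest" is a placeholder; error handling is reduced to the essentials.

    /* Query a KVM guest's vCPU and memory configuration via libvirt.
     * Build (assuming the libvirt development headers are installed):
     *   cc query_vcpus.c -o query_vcpus $(pkg-config --cflags --libs libvirt)
     */
    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");   /* talk to libvirtd */
        if (!conn) {
            fprintf(stderr, "failed to connect to qemu:///system\n");
            return 1;
        }

        virDomainPtr dom = virDomainLookupByName(conn, "myguest");  /* placeholder name */
        if (dom) {
            virDomainInfo info;
            if (virDomainGetInfo(dom, &info) == 0)
                printf("vCPUs: %hu, memory: %lu KiB, state: %d\n",
                       info.nrVirtCpu, info.memory, info.state);
            virDomainFree(dom);
        } else {
            fprintf(stderr, "domain 'myguest' not found\n");
        }

        virConnectClose(conn);
        return 0;
    }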
  • Understanding Full Virtualization, Paravirtualization, and Hardware Assist
    VMware: Understanding Full Virtualization, Paravirtualization, and Hardware Assist. Contents: Introduction; Overview of x86 Virtualization; CPU Virtualization; The Challenges of x86 Hardware Virtualization; Technique 1 – Full Virtualization using Binary Translation; Technique 2 – OS Assisted Virtualization or Paravirtualization; Technique 3 – Hardware Assisted Virtualization; Memory Virtualization; Device and I/O Virtualization; Summarizing the Current State of x86 Virtualization Techniques; Full Virtualization with Binary Translation is the Most Established Technology Today; Hardware Assist is the Future of Virtualization, but the Real Gains Have...
  • Introduction to Virtualization
    z Systems: Introduction to Virtualization. SHARE Orlando, Linux and VM Program. Romney White, IBM, [email protected]. Agenda: Introduction to Virtualization (concept; server virtualization approaches; hypervisor implementation methods; why virtualization matters); Virtualization on z Systems (logical partitions; virtual machines). Virtualization Concept – virtual resources: proxies for real resources with the same interfaces/functions but different attributes; may be part of one physical resource or span multiple physical resources. Virtualization: creates virtual resources and "maps" them to real resources; primarily accomplished with software or firmware; separates the presentation of resources to users from the actual resources; aggregates pools of resources for allocation to users as virtual resources. Resources: components with architecturally-defined interfaces/functions; may be centralized or distributed, usually physical; examples: memory, disk drives, networks, servers. Server Virtualization Approaches: hardware partitioning (the server is subdivided into adjustable partitions, each of which can run an OS), bare-metal hypervisor (the hypervisor provides fine-grained timesharing of all resources), and hosted hypervisor (the hypervisor uses OS services to...
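    The "virtual resources are proxies mapped to real resources" idea from these slides can be made concrete with a toy sketch (not IBM code): each vCPU entry records which guest owns it, which physical CPU backs it, and what share of that CPU it is allotted. All values are invented for illustration.

    /* Toy illustration of the mapping idea from the slides: each virtual CPU
     * is a proxy with the same interface as a real CPU but is backed by a
     * share of a physical CPU chosen by the virtualization layer.
     */
    #include <stdio.h>

    struct vcpu {
        int    guest_id;      /* which virtual machine owns this vCPU     */
        int    vcpu_id;       /* vCPU number as seen by the guest         */
        int    physical_cpu;  /* real CPU the hypervisor dispatches it on */
        double share;         /* fraction of the real CPU allotted to it  */
    };

    static void dispatch(const struct vcpu *v)
    {
        printf("guest %d / vCPU %d -> physical CPU %d (%.0f%% share)\n",
               v->guest_id, v->vcpu_id, v->physical_cpu, v->share * 100.0);
    }

    int main(void)
    {
        /* Two guests time-share the same two physical CPUs. */
        struct vcpu table[] = {
            { 1, 0, 0, 0.50 }, { 1, 1, 1, 0.50 },
            { 2, 0, 0, 0.50 }, { 2, 1, 1, 0.50 },
        };
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            dispatch(&table[i]);
        return 0;
    }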
  • Improving the Reliability of Commodity Operating Systems
    Improving the Reliability of Commodity Operating Systems MICHAEL M. SWIFT, BRIAN N. BERSHAD, and HENRY M. LEVY University of Washington Despite decades of research in extensible operating system technology, extensions such as device drivers remain a significant cause of system failures. In Windows XP, for example, drivers account for 85% of recently reported failures. This paper describes Nooks, a reliability subsystem that seeks to greatly enhance OS reliability by isolating the OS from driver failures. The Nooks approach is practical: rather than guaranteeing complete fault tolerance through a new (and incompatible) OS or driver architecture, our goal is to prevent the vast majority of driver-caused crashes with little or no change to existing driver and system code. Nooks isolates drivers within lightweight protection domains inside the kernel address space, where hardware and software prevent them from corrupting the kernel. Nooks also tracks a driver’s use of kernel resources to facilitate automatic clean-up during recovery. To prove the viability of our approach, we implemented Nooks in the Linux operating system and used it to fault-isolate several device drivers. Our results show that Nooks offers a substantial increase in the reliability of operating systems, catching and quickly recovering from many faults that would otherwise crash the system. Under a wide range and number of fault conditions, we show that Nooks recovers automatically from 99% of the faults that otherwise cause Linux to crash. While Nooks was designed for drivers, our techniques generalize to other kernel extensions. We demonstrate this by isolating a kernel-mode file system and an in-kernel Internet service.
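    The resource-tracking idea described in this abstract can be sketched in a few lines of user-space C. This is only an illustration of the concept, not Nooks' in-kernel implementation: every object allocated on behalf of a driver is recorded in its protection domain, so that recovery can release everything the driver held when it faults.

    #include <stdlib.h>
    #include <stdio.h>

    #define MAX_OBJS 64

    struct driver_domain {
        const char *name;
        void       *objs[MAX_OBJS];   /* objects handed to the driver */
        int         nobjs;
    };

    /* Allocate memory on behalf of a driver and remember it for clean-up. */
    static void *domain_alloc(struct driver_domain *d, size_t size)
    {
        if (d->nobjs >= MAX_OBJS)
            return NULL;
        void *p = malloc(size);
        if (p)
            d->objs[d->nobjs++] = p;
        return p;
    }

    /* Called when the driver is detected to have failed: release everything. */
    static void domain_recover(struct driver_domain *d)
    {
        printf("recovering %s: releasing %d tracked objects\n", d->name, d->nobjs);
        while (d->nobjs > 0)
            free(d->objs[--d->nobjs]);
    }

    int main(void)
    {
        struct driver_domain nic = { .name = "nic-driver" };
        domain_alloc(&nic, 128);      /* e.g. a packet buffer    */
        domain_alloc(&nic, 256);      /* e.g. a descriptor ring  */
        domain_recover(&nic);         /* simulate a driver fault */
        return 0;
    }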
  • KVM Based Virtualization and Remote Management, Srinath Reddy Pasunuru, St. Cloud State University
    KVM Based Virtualization and Remote Management. Srinath Reddy Pasunuru, St. Cloud State University, [email protected]. theRepository at St. Cloud State, Culminating Projects in Information Assurance, Department of Information Systems, 5-2018. Recommended citation: Pasunuru, Srinath Reddy, "KVM Based Virtualization and Remote Management" (2018). Culminating Projects in Information Assurance. 53. https://repository.stcloudstate.edu/msia_etds/53. A Starred Paper submitted to the Graduate Faculty of St. Cloud State University in partial fulfillment of the requirements for the degree Master of Science in Information Assurance, May 2018. Starred Paper Committee: Susantha Herath (Chairperson), Ezzat Kirmani, Sneh Kalia. Abstract: In the recent past, cloud computing has been one of the most significant shifts in computing, and the Kernel-based Virtual Machine (KVM) is the most commonly deployed hypervisor in the IaaS layer of cloud computing systems. The hypervisor provides the complete virtualization environment, virtualizing as much of the hardware and system as possible, including CPUs, memory, network interfaces and so on. Because of virtualization technologies such as KVM and others such as ESXi, there has been a significant decrease in resource usage and in the costs involved.
  • Hypervisors Vs. Lightweight Virtualization: a Performance Comparison
    2015 IEEE International Conference on Cloud Engineering. Hypervisors vs. Lightweight Virtualization: a Performance Comparison. Roberto Morabito, Jimmy Kjällman, and Miika Komu, Ericsson Research, NomadicLab, Jorvas, Finland, [email protected], [email protected], [email protected]. Abstract — Virtualization of operating systems provides a common way to run different services in the cloud. Recently, lightweight virtualization technologies claim to offer superior performance. In this paper, we present a detailed performance comparison of traditional hypervisor-based virtualization and new lightweight solutions. In our measurements, we use several benchmark tools in order to understand the strengths, weaknesses, and anomalies introduced by these different platforms in terms of processing, storage, memory and network. Our results show that containers achieve generally better performance when compared with traditional virtual machines and other recent solutions. Albeit containers offer clearly more dense deployment of virtual machines, the performance difference with other technologies is in many cases relatively small. ... container and alternative solutions. The idea is to quantify the level of overhead introduced by these platforms and the existing gap compared to a non-virtualized environment. The remainder of this paper is structured as follows: in Section II, a literature review and a brief description of all the technologies and platforms evaluated is provided. The methodology used to realize our performance comparison is introduced in Section III. The benchmark results are presented in Section IV. Finally, some concluding remarks and future work are provided in Section V. II. BACKGROUND AND RELATED WORK: In this section, we provide an overview of the different technologies included in the performance comparison.
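    The overhead-quantification methodology described here boils down to running the same workload in each environment and comparing timings. The sketch below is a generic timing harness, not one of the paper's benchmark tools; the loop is a placeholder workload, and the same binary would be run natively, inside a VM, and inside a container to compare results.

    /* Build: cc -O2 bench.c -o bench */
    #include <stdio.h>
    #include <time.h>

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    static volatile double sink;      /* keep the loop from being optimized away */

    static void workload(void)        /* placeholder CPU-bound kernel */
    {
        double x = 0.0;
        for (long i = 1; i <= 50 * 1000 * 1000L; i++)
            x += 1.0 / (double)i;
        sink = x;
    }

    int main(void)
    {
        double best = 1e9;
        for (int run = 0; run < 5; run++) {   /* take the best of several runs */
            double t0 = now_sec();
            workload();
            double dt = now_sec() - t0;
            if (dt < best)
                best = dt;
        }
        printf("best run: %.3f s\n", best);
        return 0;
    }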
  • The Operating System Process in Virtualization for Cloud Computing
    INFOKARA RESEARCH, ISSN NO: 1021-9056. THE OPERATING SYSTEM PROCESS IN VIRTUALIZATION FOR CLOUD COMPUTING. J. Saravanan, M.Phil. Research Scholar, D.B. Jain College (Autonomous), Thoraipakkam, Chennai, India, e-mail: [email protected]; P. Saravanan, Assistant Professor, D.B. Jain College (Autonomous), Thoraipakkam, Chennai, India, e-mail: [email protected]. ABSTRACT: OS-level virtualization is a technology that partitions the operating system to create multiple isolated Virtual Machines (VMs). An OS-level VM is a virtual execution environment that can be forked instantly from the base running environment. OS-level virtualization has been extensively used to improve the security, manageability, and availability of today's complex software environments, with small runtime and resource overhead, and with minimal modifications to the existing computing infrastructure. Keywords: Operating System Virtualization, Virtual Machines, Virtual Environment, Cloud Computing, Virtual Private System. 1. INTRODUCTION: Operating System Virtualization (OS virtualization) is the last form of virtualization in cloud computing. Operating system virtualization is a part of virtualization technology and is a form of server virtualization. In this OS virtualization tutorial, we are going to cover the uses, working, types, kinds of disks, and benefits of operating system virtualization. Operating system virtualization uses a modified form of a normal operating system so that different users can each run their own applications. This entire process is performed on a single computer at a time. In OS virtualization, the virtual environment accepts commands from any of the users operating it and performs the different tasks on the same machine by running different applications.
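    On Linux, the "fork an isolated environment from the base running environment" idea maps onto kernel namespaces. The sketch below is Linux-specific, requires root (or CAP_SYS_ADMIN), and is only a minimal illustration of OS-level virtualization: it clones a child into its own UTS and PID namespaces, so the child sees a private hostname and its own PID 1 while sharing the host kernel.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define STACK_SIZE (1024 * 1024)
    static char child_stack[STACK_SIZE];

    static int child_main(void *arg)
    {
        (void)arg;
        /* The hostname change is private to the child's UTS namespace. */
        sethostname("container", strlen("container"));
        char host[64] = "";
        gethostname(host, sizeof host);
        printf("child: pid=%d hostname=%s\n", getpid(), host);  /* pid is 1 here */
        return 0;
    }

    int main(void)
    {
        pid_t pid = clone(child_main, child_stack + STACK_SIZE,
                          CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
        if (pid == -1) {
            perror("clone");
            return 1;
        }
        waitpid(pid, NULL, 0);

        char host[64] = "";
        gethostname(host, sizeof host);
        printf("parent: hostname is still %s\n", host);
        return 0;
    }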
  • What's New in the z/VM 6.3 Hypervisor (Session 17515)
    What's New in the z/VM 6.3 Hypervisor, Session 17515. John Franciscovich, IBM z/VM Development, Endicott, NY; presented by Bill Bitner, [email protected].
  • Virtualization Overview
    VMware White Paper: Virtualization Overview. Table of Contents: Introduction; Virtualization in a Nutshell; Virtualization Approaches; Virtualization for Server Consolidation and Containment; How Virtualization Complements New-Generation Hardware; Para-virtualization; VMware's Virtualization Portfolio; Glossary. Introduction: Among the leading business challenges confronting CIOs and IT managers today are: cost-effective utilization of IT infrastruc... Virtualization in a Nutshell: Simply put, virtualization is an idea whose time has come. The term virtualization broadly describes the separation...
  • Paravirtualization (PV)
    Full and Para Virtualization. Dr. Sanjay P. Ahuja, Ph.D., Fidelity National Financial Distinguished Professor of CIS, School of Computing, UNF. x86 Hardware Virtualization: The x86 architecture offers four levels of privilege, known as Rings 0, 1, 2 and 3, to operating systems and applications to manage access to the computer hardware. While user-level applications typically run in Ring 3, the operating system needs direct access to the memory and hardware and must execute its privileged instructions in Ring 0. [Figure: x86 privilege-level architecture without virtualization.] Technique 1: Full Virtualization using Binary Translation. This approach relies on binary translation to trap (into the VMM) and to virtualize certain sensitive and non-virtualizable instructions with new sequences of instructions that have the intended effect on the virtual hardware, while user-level code is directly executed on the processor for high-performance virtualization. [Figure: binary translation approach to x86 virtualization.] This combination of binary translation and direct execution provides full virtualization, as the guest OS is completely decoupled from the underlying hardware by the virtualization layer. The guest OS is not aware that it is being virtualized and requires no modification. The hypervisor translates all operating system instructions at run time on the fly and caches the results for future use, while user-level instructions run unmodified at native speed. VMware ESXi and Microsoft Virtual Server are examples of full virtualization. The performance of full virtualization may not be ideal, because it involves binary translation at run time, which is time consuming and can incur a large performance overhead.
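    Whether the "hardware assisted virtualization" technique contrasted with binary translation above is available can be checked from user space with CPUID. The sketch below is x86-specific and assumes GCC or Clang (it uses the compiler's cpuid.h helper): Intel VT-x is reported in leaf 1, ECX bit 5 (VMX), and AMD-V in leaf 0x80000001, ECX bit 2 (SVM).

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID.1:ECX bit 5 indicates Intel VMX. */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            printf("VT-x (VMX) available\n");
        /* CPUID.0x80000001:ECX bit 2 indicates AMD SVM. */
        else if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            printf("AMD-V (SVM) available\n");
        else
            printf("no hardware virtualization extensions reported\n");

        return 0;
    }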
  • IBM System z: z/VM Basics
    IBM System z: z/VM Basics. Arwed Tschoeke, Systems Architect, [email protected]. Introduction: We'll explain basic concepts of System z (terminology, processors, memory, I/O, networking) and see that z/VM virtualizes a System z machine (virtual processors, virtual memory, and so on). Where appropriate, we'll compare or contrast with PR/SM or LPAR, z/OS, and Linux. System z Parts Nomenclature (x86/UNIX term → System z term): memory → storage (though we are moving toward "memory"); disk, storage → DASD (Direct Access Storage Device); processor → processor, engine, PU (processing unit), including IOP (I/O processor); CPU (central processing unit) → CP (central processor), SAP (system assist processor), and specialty engines such as IFL (Integrated Facility for Linux), zAAP (System z Application Assist Processor), and zIIP (System z9 Integrated Information Processor); computer → CEC (central electronics complex), server. IBM System z Virtualization Genetics: over 40 years of continuous innovation in virtualization, refined to support modern business requirements – LPAR, Integrated Facility for Linux, HiperSockets, System z Application Assist Processors, Virtual Switch, System z Information Integration, Guest LANs, Set Observer, across hardware generations (9x21, 9672, zSeries, System z9, System z10) and VM generations (VM/XA, VM/ESA, z/VM V5, 64-bit). [Diagram: timeline of virtualization generations, annotated with qualities such as robustness and flexibility.]
  • Legacy Reuse
    Faculty of Computer Science, Institute for System Architecture, Operating Systems Group. Legacy Reuse – Carsten Weinhold. This lecture: so far – basic microkernel concepts, drivers, resource management; today – how to provide legacy OS personalities, how to reuse existing infrastructure, how to make applications happy. Virtualization: reuse legacy OS + applications and run applications in their natural environment; problem: applications are trapped in VMs – different resource pools and namespaces, cooperation is cumbersome (network, ...), a full legacy OS in a VM adds overhead, and multiple desktops make for a bad user experience. Making the cut: hardware level (next week) – virtualize the legacy OS on top of the new OS; operating system personality – legacy OS interfaces reimplemented on top of, or ported to, the new OS; hybrid operating systems (today) – run the legacy OS virtualized, but tightly integrated with the new OS. OS personality: idea – adapt the OS/application boundary and (re-)implement legacy APIs rather than the whole OS (may need to recompile the application); benefits – get the desired application and established APIs, good integration (namespaces, files, ...), smaller overhead than virtualization; flexible and configurable, but more effort? [Slides on monolithic kernels (system call entry, Ext2, VFAT, IP stack, disk and NIC drivers) and their decomposition follow.]
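    The "OS personality" approach from these slides, reimplementing legacy APIs on top of a new system rather than virtualizing a whole legacy OS, can be illustrated with a small sketch. new_os_ipc_call() below is a hypothetical stand-in for whatever native IPC primitive the new OS would provide (stubbed out here so the example runs); the point is only that a legacy-style read() becomes a thin translation layer.

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    enum { FS_SERVICE = 3, FS_OP_READ = 1 };

    /* Hypothetical native primitive of the new OS (assumption, not a real API),
     * stubbed so the example is self-contained and runnable. */
    static long new_os_ipc_call(int service, int op, void *buf, size_t len, long arg)
    {
        (void)service; (void)op; (void)arg;
        memset(buf, 'x', len);        /* pretend the file-system server filled the buffer */
        return (long)len;
    }

    /* Legacy interface exported to (possibly recompiled) legacy applications. */
    static ssize_t legacy_read(int fd, void *buf, size_t count)
    {
        /* The personality layer translates the call into a message to the FS server. */
        long r = new_os_ipc_call(FS_SERVICE, FS_OP_READ, buf, count, fd);
        return (ssize_t)r;            /* errors would be mapped back to errno-style codes here */
    }

    int main(void)
    {
        char buf[8] = { 0 };
        ssize_t n = legacy_read(0, buf, sizeof buf - 1);
        printf("legacy_read returned %zd bytes: %s\n", n, buf);
        return 0;
    }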