Post-Copy Live Migration of Virtual Machines

Michael R. Hines, Umesh Deshpande, and Kartik Gopalan
Computer Science, Binghamton University (SUNY)
{mhines,udeshpa1,kartik}@cs.binghamton.edu

ABSTRACT

We present the design, implementation, and evaluation of post-copy based live migration for virtual machines (VMs) across a Gigabit LAN. Post-copy migration defers the transfer of a VM's memory contents until after its processor state has been sent to the target host. This deferral is in contrast to the traditional pre-copy approach, which first copies the memory state over multiple iterations followed by a final transfer of the processor state. The post-copy strategy can provide a "win-win" by reducing total migration time while maintaining the liveness of the VM during migration. We compare post-copy extensively against the traditional pre-copy approach on the Xen Hypervisor. Using a range of VM workloads we show that post-copy improves several metrics including pages transferred, total migration time, and network overhead. We facilitate the use of post-copy with adaptive prepaging techniques to minimize the number of page faults across the network. We propose different prepaging strategies and quantitatively compare their effectiveness in reducing network-bound page faults. Finally, we eliminate the transfer of free memory pages in both pre-copy and post-copy through a dynamic self-ballooning (DSB) mechanism. DSB periodically reclaims free pages from a VM and significantly speeds up migration with negligible performance impact on VM workload.

Categories and Subject Descriptors: D.4 [Operating Systems]

General Terms: Experimentation, Performance

Keywords: Virtual Machines, Operating Systems, Process Migration, Post-Copy, Xen

¹ A shorter version of this paper appeared in the ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE), March 2009 [11]. The additional contributions of this paper are new prepaging strategies including dual-direction and multi-pivot bubbling (Section 3.2), proactive LRU ordering of pages (Section 4.3), and their evaluation (Section 5.4).

1. INTRODUCTION

This paper addresses the problem of optimizing the live migration of system virtual machines (VMs). Live migration is a key selling point for state-of-the-art virtualization technologies. It allows administrators to consolidate system load, perform maintenance, and flexibly reallocate cluster-wide resources on-the-fly. We focus on VM migration within a cluster environment where physical nodes are interconnected via a high-speed LAN and also employ a network-accessible storage system. State-of-the-art live migration techniques [19, 3] use the pre-copy approach which works as follows. The bulk of the VM's memory state is migrated to a target node even as the VM continues to execute at a source node. If a transmitted page is dirtied, it is re-sent to the target in the next round. This iterative copying of dirtied pages continues until either a small, writable working set (WWS) has been identified, or a preset number of iterations is reached, whichever comes first. This constitutes the end of the memory transfer phase and the beginning of service downtime. The VM is then suspended and its processor state plus any remaining dirty pages are sent to a target node, where the VM is restarted.

Pre-copy's overriding goal is to keep downtime small by minimizing the amount of VM state that needs to be transferred during downtime. Pre-copy will cap the number of copying iterations to a preset limit since the WWS is not guaranteed to converge across successive iterations. On the other hand, if the iterations are terminated too early, then the larger WWS will significantly increase service downtime. Pre-copy minimizes two metrics particularly well – VM downtime and application degradation – when the VM is executing a largely read-intensive workload. However, even moderately write-intensive workloads can reduce pre-copy's effectiveness during migration because pages that are repeatedly dirtied may have to be transmitted multiple times.
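To make the iteration structure above concrete, here is a minimal, self-contained C sketch of the pre-copy rounds. It is a toy simulation rather than the paper's Xen implementation: the page count, the iteration cap, the WWS threshold, and helpers such as send_page() and count_dirty() are assumptions introduced only for illustration.

/* precopy_sketch.c -- toy illustration of the pre-copy rounds; the page count,
 * iteration cap, WWS threshold, and helpers below are assumptions, not the
 * Xen log-dirty interfaces the paper's implementation uses. */
#include <stdio.h>
#include <stdbool.h>

#define NPAGES        16   /* toy VM size, in pages */
#define MAX_ITERS      5   /* preset cap on copying rounds (assumed) */
#define WWS_THRESHOLD  2   /* "small enough" writable working set (assumed) */

static bool dirty[NPAGES]; /* stand-in for the hypervisor's log-dirty bitmap */

static void send_page(int pfn) { printf("send page %d\n", pfn); }

static int count_dirty(void)
{
    int n = 0;
    for (int pfn = 0; pfn < NPAGES; pfn++)
        if (dirty[pfn]) n++;
    return n;
}

int main(void)
{
    /* Round 0: push every page while the VM keeps executing at the source. */
    for (int pfn = 0; pfn < NPAGES; pfn++) send_page(pfn);

    /* The running guest keeps writing to a small working set. */
    dirty[3] = dirty[7] = dirty[8] = true;

    /* Iterative rounds: re-send pages dirtied in the previous round until the
     * WWS is small enough or the preset iteration limit is reached. */
    int iter = 0;
    while (count_dirty() > WWS_THRESHOLD && iter++ < MAX_ITERS) {
        for (int pfn = 0; pfn < NPAGES; pfn++)
            if (dirty[pfn]) { send_page(pfn); dirty[pfn] = false; }
        dirty[7] = true;  /* page 7 is written again while the round was in flight */
    }

    /* Downtime begins: suspend the VM, send CPU state plus remaining dirty pages. */
    printf("suspend VM, send processor state\n");
    for (int pfn = 0; pfn < NPAGES; pfn++)
        if (dirty[pfn]) send_page(pfn);
    printf("restart VM at target\n");
    return 0;
}

In a real hypervisor the dirty bitmap would come from a log-dirty mechanism and pages would be dirtied concurrently by the running guest; the simulation simply re-dirties one page per round to show why repeatedly written pages can end up being transmitted more than once.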
In this paper, we propose and evaluate the post-copy strategy for live VM migration, previously studied only in the context of process migration. At a high level, post-copy migration defers the memory transfer phase until after the VM's CPU state has already been transferred to the target and resumed there. Post-copy first transmits all processor state to the target, starts the VM at the target, and then actively pushes the VM's memory pages from source to target. Concurrently, any memory pages that are faulted on by the VM at the target, and not yet pushed, are demand-paged over the network from the source. Post-copy thus ensures that each memory page is transferred at most once, avoiding the duplicate transmission overhead of pre-copy.

The effectiveness of post-copy depends on the ability to minimize the number of network-bound page faults (or network faults) by pushing the pages from the source before they are faulted upon by the VM at the target. To reduce network faults, we supplement the active push component of post-copy with adaptive prepaging. Prepaging is a term borrowed from earlier literature [22, 32] on optimizing memory-constrained disk-based paging systems. It traditionally refers to a more proactive form of pre-fetching from storage devices (such as hard disks) in which the memory subsystem tries to hide the latency of high-locality page faults by intelligently sequencing the pre-fetched pages. Modern virtual memory subsystems do not typically employ prepaging due to increasing DRAM capacities. Although post-copy does not deal with disk-based paging, the prepaging algorithms themselves can still play a helpful role in reducing the number of network faults in post-copy. Prepaging adapts the sequence of actively pushed pages by using network faults as hints to predict the VM's page access locality at the target, and actively pushes the pages in the neighborhood of a network fault before they are accessed by the VM. We propose and compare a number of prepaging strategies for post-copy, which we call bubbling, that reduce the number of network faults to varying degrees.
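As a rough illustration of the simplest, single-pivot form of such bubbling, the toy C program below demand-fetches a faulted page and then pushes a symmetric "bubble" of neighboring pages around the fault address, while a background loop actively pushes the remaining pages in address order. The page count, bubble radius, and helper names are assumptions made for this sketch; the paper's implementation intercepts faults through the hypervisor rather than a flag array, and Section 3.2 develops more elaborate dual-direction and multi-pivot variants.

/* postcopy_bubble_sketch.c -- toy illustration of post-copy with single-pivot
 * bubbling; page count, bubble radius, and helpers are assumptions. A real
 * implementation traps faults via the hypervisor instead of a flag array. */
#include <stdio.h>
#include <stdbool.h>

#define NPAGES 16

static bool present[NPAGES]; /* pages that have already reached the target */

static void fetch_page(int pfn, const char *why)
{
    if (pfn < 0 || pfn >= NPAGES || present[pfn])
        return;                /* out of range or already transferred: skip */
    present[pfn] = true;
    printf("page %2d transferred (%s)\n", pfn, why);
}

/* Demand-page the faulted pfn over the network, then prepage a symmetric
 * "bubble" of neighbouring pages around it, using the fault as a hint to the
 * VM's access locality at the target. */
static void handle_network_fault(int pfn, int radius)
{
    fetch_page(pfn, "network fault");
    for (int d = 1; d <= radius; d++) {
        fetch_page(pfn - d, "bubble push");
        fetch_page(pfn + d, "bubble push");
    }
}

int main(void)
{
    printf("send processor state, resume VM at target\n");

    /* The VM touches page 9 before the background push has reached it. */
    handle_network_fault(9, 2);

    /* Concurrently, the source actively pushes everything else in address
     * order; pages already sent are skipped, so each page crosses the
     * network at most once. */
    for (int pfn = 0; pfn < NPAGES; pfn++)
        fetch_page(pfn, "active push");
    return 0;
}

Because fetch_page() skips pages that have already arrived, the demand-fetched and bubbled pages are simply omitted from the later active push, preserving the at-most-once transfer property described above.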
Additionally, we identified a deficiency in both pre-copy and post-copy migration due to which free pages in the VM are also transmitted during migration, increasing the total migration time. To avoid transmitting the free pages, we develop a "dynamic self-ballooning" (DSB) mechanism. Ballooning is an existing technique that allows a guest kernel to reduce its memory footprint by releasing its free memory pages back to the hypervisor. DSB automates the ballooning mechanism so it can trigger periodically (say every 5 seconds) without degrading application performance. Our DSB implementation reacts directly to kernel memory allocation requests without the need for guest kernel modifications. It neither requires external introspection by a co-located VM nor excessive communication with the hypervisor. We show that DSB significantly reduces total migration time by eliminating the transfer of free memory pages in both pre-copy and post-copy.

The original pre-copy algorithm has advantages of its own. It can be implemented in a relatively self-contained external migration daemon to isolate most of the copying complexity to a single process at each node. Further, pre-copy also provides a clean way to abort the migration should the target node ever crash during migration, because the VM is still running at the source. (Source node failure is fatal to both migration schemes.) Although our current post-copy im-

2. RELATED WORK

Process Migration: The post-copy technique has been variously studied in the context of the process migration literature: first implemented as "Freeze Free" using a file server [26], then evaluated via simulations [25], and later via an actual Linux implementation [20]. There was also a recent implementation of post-copy process migration under openMosix [12]. In contrast, our contributions are to develop a viable post-copy technique for live migration of virtual machines. Process migration techniques in general have been extensively researched and an excellent survey can be found in [17]. Several distributed computing projects incorporate process migration [31, 24, 18, 30, 13, 6]. However, these systems have not gained widespread acceptance primarily because of portability and residual dependency limitations. In contrast, VM migration operates on whole operating systems and is naturally free of these problems.

PrePaging: Prepaging is a technique for hiding the latency of page faults (and, in general, of I/O accesses in the critical execution path) by predicting the future working set [5] and loading the required pages before they are accessed. Prepaging is also known as adaptive prefetching or adaptive remote paging. It has been studied extensively [22, 33, 34, 32] in the context of disk-based storage systems, since disk I/O accesses in the critical application execution path can be highly expensive. Traditional prepaging algorithms use reactive and history-based approaches to predict and prefetch the working set of the application. Our system employs prepaging, not in the context of disk prefetching, but for the limited duration of live VM migration to avoid the latency of network page faults from target to source. Our implementation employs a reactive approach that uses any network faults as hints about the VM's working set, with additional optimizations described in Section 3.1.

Live VM Migration: Pre-copy is the predominant approach for live VM migration. Examples include hypervisor-based approaches from VMware [19], Xen [3], and KVM [14], OS-level approaches that do not use hypervisors from OpenVZ [21], as well as wide-area migration [2]. Self-migration of operating systems (which has much in common with process migration) was implemented in [9], building upon prior work [8] atop the L4 Linux microkernel.
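Returning to the dynamic self-ballooning mechanism introduced above, the following toy C program sketches its two halves: a periodic tick that inflates the balloon with whatever pages are currently free, and a deflate path that returns pages when the guest allocates memory. The page totals, interval, and function names are assumptions made for the sketch; the paper's DSB implementation hooks the guest kernel's allocation path and the Xen balloon driver rather than maintaining counters like these.

/* dsb_sketch.c -- toy illustration of dynamic self-ballooning; the counters,
 * page totals, and function names are assumptions. The paper's DSB hooks the
 * guest kernel's allocator and the Xen balloon driver instead. */
#include <stdio.h>

#define GUEST_PAGES 1000

static int in_use    = 400;  /* pages the guest is actually using */
static int ballooned = 0;    /* free pages handed back to the hypervisor */

/* One DSB interval (the paper suggests roughly every 5 seconds): inflate the
 * balloon with whatever is free, so migration never has to transfer it. */
static void dsb_tick(void)
{
    int free_pages = GUEST_PAGES - in_use - ballooned;
    if (free_pages > 0) {
        ballooned += free_pages;
        printf("balloon +%d pages; migratable footprint now %d pages\n",
               free_pages, GUEST_PAGES - ballooned);
    }
}

/* Allocation pressure in the guest: deflate the balloon before the
 * allocation would otherwise fail. */
static void guest_alloc(int pages)
{
    int shortfall = pages - (GUEST_PAGES - in_use - ballooned);
    if (shortfall > 0) {
        ballooned -= shortfall;
        printf("balloon -%d pages to satisfy an allocation\n", shortfall);
    }
    in_use += pages;
}

int main(void)
{
    dsb_tick();        /* reclaims the 600 currently free pages */
    guest_alloc(100);  /* forces 100 pages back out of the balloon */
    dsb_tick();        /* nothing left to reclaim */
    return 0;
}

Shrinking the guest's footprint this way means that neither the pre-copy rounds nor the post-copy push spend network bandwidth on pages that hold no useful data.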