Survey on Mechanisms for Live Virtual Machine Migration and Its Improvements

Total pages: 16

File type: PDF, size: 1020 KB

Hiroshi Yamada, Dept. of Information and Computer Sciences, Tokyo University of Agriculture and Technology

Information and Media Technologies 11: 101-115 (2016), reprinted from: Computer Software 33(2): 101-115 (2016) © Japan Society for Software Science and Technology. Special feature: survey paper (explanatory article). Received October 2, 2015.

Abstract: Live virtual machine (VM) migration (simply live migration) is a powerful tool for managing data center resources. Live migration moves a running VM between different physical machines without losing any state, such as network connections and CPU status. Live migration has attracted the attention of academic and industrial researchers, since replacing running VMs inside data centers by live migration makes it easier to manage data center resources. This paper summarizes live migration basics and the techniques for improving them. Specifically, this survey focuses on software mechanisms for realizing basic live migration, improving its performance, and expanding its applicability. This paper also points out research opportunities that state-of-the-art live migration techniques have not yet covered.

1 Introduction

One of the innovative technologies in computer systems of the last decade is system virtualization, which allows us to run multiple operating systems (OSes) on a physical machine. In system virtualization, the virtual machine monitor (VMM) is the primary software layer that directly manages the underlying hardware, instead of the OS. The VMM provides virtual machines (VMs) on which OSes run as if they were running on physical machines. System virtualization brings several benefits. For example, we can reduce the number of running physical machines by consolidating the VMs running server software onto one physical machine, which improves physical resource utilization and reduces power consumption. An AFCOM survey [1] reported that 72.9% of data centers in the world were virtualized in 2010. Also, virtualization software such as Xen [5], KVM [31], VirtualBox [46], VMware ESXi [55], and Hyper-V [41] is widely available.

Live VM migration (simply live migration) is a powerful tool for managing data center resources. Live migration moves a running VM between different physical machines without losing any state, such as network connections and CPU status. Replacing running VMs inside data centers by live migration makes it easier to manage data center resources. For example, the availability of services can be improved by migrating less loaded VMs to another host in order to assign resources to more heavily loaded VMs. Another typical example is support for physical machine maintenance: a physical machine can be maintained with much less service downtime by migrating all the VMs running on the target machine to other machines. Policies for VM replacement using live migration, including load balancing [21][57][62] and power saving [19][40][54][58], have been widely studied in the research community.

Exploring ways to design and implement effective and/or efficient live migration is still a hot topic in the systems research community. This paper describes a survey of live migration basics and the techniques for improving them. Specifically, this survey focuses on software mechanisms for realizing basic live migration, improving its performance, and expanding its applicability. We believe that this survey helps researchers learn about existing live migration techniques, helps administrators judge which live migration technique is suitable for their services and data centers, and sheds light on the research directions of live migration.

Numerous studies have implicitly assumed that live migration is done on a local area network (LAN); that is, the source and destination are connected to the same network. The focus of this paper is live migration mechanisms used within a data center. Some efforts extend live migration to apply it to wide area networks [9][23][38][66]; surveying these techniques is out of the scope of this paper. We also note that exploring VM replacement policies [19][21][40][54][57][58][62] is an important topic of live migration research, but this paper focuses on the software mechanisms for realizing live migration.

The contributions of this paper are as follows:
• We describe mechanisms of live migration and its improvements. Note that a previous survey of live migration [39] also describes basic mechanisms of live migration; our survey differs from it. The previous survey mainly focuses on the difference between process migration and VM migration, while our main focus is on the differences between live migration mechanisms (Sec. 3, 4, and 5).
• We classify live migration mechanisms into two categories, performance and applicability, and discuss the state-of-the-art live migration mechanisms in terms of these two aspects (Sec. 4 and 5).
• We compare the mechanisms with each other and show some research directions for live migration (Sec. 6).

2 Background

2.1 Virtualization

System virtualization is commonplace in computing environments including high-end data centers, laptops, and embedded systems. To support system virtualization, CPU vendors offer CPU extensions for virtualization; typical examples are Intel VT-x [27], AMD SVM [3], and ARM TrustZone [4].

In virtualized environments, an OS runs on a VM created by the VMM. We refer to an OS running on a VM as the guest OS. The VMM provides the illusion that the guest OSes are running on the physical hardware. The VMM multiplexes the underlying hardware to create virtual hardware such as virtual CPUs and virtual devices, and assigns part of the underlying hardware to the VMs running on top of it. In addition, the VMM achieves isolation between running VMs; even if a guest OS crashes or is hijacked, the other guest OSes are not affected.

The VMM runs in privileged mode to manage and multiplex the underlying physical hardware devices, whereas guest OSes run in non-privileged mode. When a guest OS executes a privileged instruction, such as an access to the MMU or I/O peripherals, a software interrupt occurs and control is transferred to the VMM. At this point, the VMM can capture and regulate all resources because it processes the interrupts before delivering them to the guest OS.

2.2 Live Migration

The goal of live migration is to move a running VM between physical machines without disrupting its services. To achieve this goal, minimizing the downtime during which the VM is stopped is important; this is the main difference from a suspend & resume scheme, which stops a VM, extracts its memory image, and restores it on the destination machine. The details of the algorithms that minimize downtime while moving the VM are described in Sec. 3.

To move a running VM from one physical machine to another, the live migration mechanism running inside the VMM transfers the target VM's hardware state to the destination machine. At the destination machine, the VMM builds the target VM from the received state and runs it after the state restoration. The live migration mechanism typically transfers memory contents, CPU states, CPU register values, and device states. Live migration is supported by open-source virtualization software such as Xen [5] and KVM [31].

The concept of migration is not new. In the systems research community, process-level migration approaches have been studied widely [42]. Compared to this approach, VM-level migration has the following advantages, as described in [11]:
• VM states on the source are eliminated completely: The narrow interface between a guest OS and the VMM makes it easy to avoid the problem of residual dependencies, in which the source machine must remain available and network-accessible to service certain system calls, or even memory accesses, on behalf of migrated processes. Avoiding this problem is valuable when we conduct live migration for maintenance of the source machine.
• The entire VM memory can be migrated: Live migration transfers all of the in-memory state of the VM in a consistent and efficient fashion. This applies to kernel-internal state (e.g., the TCP control block for a currently active connection) as well as application-level state, even when it is shared between multiple cooperating processes. In practical terms, for example, this means that we can migrate an on-line game server.

3.1 Pre-copy Approach

Pre-copy [11] is a widely used approach to transferring VM resources. The basic idea of pre-copy is to iteratively copy, from the source to the destination, the VM's pages that have been dirtied during live migration. This idea is also used by other systems such as VM fault-tolerance systems [13][43][49]. Figure 1 shows the execution flow of pre-copy migration. Pre-copy consists of two phases: the push phase and the stop-and-copy phase. When live migration starts, it first enters the push phase. The VMM copies all pages of the VM from the source to the destination during the first iteration; subsequent iterations copy only those pages dirtied during the previous iteration. To detect dirty pages, the VMM maintains a dirty bitmap that records which pages have become dirty. When the number of dirty pages falls below a predefined threshold, pre-copy live migration starts the stop-and-copy phase.
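The push and stop-and-copy phases described above can be sketched as a short simulation (a toy model, not VMM code: the page count, the dirty-page threshold, and the shrinking per-round write rate are illustrative assumptions):

```python
import random

def precopy_migrate(num_pages=1024, threshold=16, max_rounds=30, seed=7):
    """Simulate the pre-copy push phase followed by stop-and-copy."""
    rng = random.Random(seed)
    dirty = set(range(num_pages))   # round 1 of the push phase sends every page
    writes_per_round = 256          # assumed VM write rate, shrinking per round
    rounds = 0
    pages_sent = 0
    while len(dirty) > threshold and rounds < max_rounds:
        pages_sent += len(dirty)    # push this round's dirty pages
        # While pages are in flight, the still-running VM keeps writing;
        # the VMM's dirty bitmap records those pages for the next round.
        dirty = {rng.randrange(num_pages) for _ in range(writes_per_round)}
        writes_per_round //= 2      # later rounds are shorter, so fewer writes
        rounds += 1
    # Stop-and-copy phase: pause the VM, send the remaining dirty pages
    # plus CPU registers and device state, then resume on the destination.
    downtime_pages = len(dirty)
    pages_sent += downtime_pages
    return rounds, pages_sent, downtime_pages

rounds, pages_sent, downtime_pages = precopy_migrate()
print(f"rounds={rounds} pages_sent={pages_sent} downtime_pages={downtime_pages}")
```

Note that if the write rate kept pace with the transfer link, the dirty set would never fall below the threshold and the loop would only exit at max_rounds, which is why practical pre-copy implementations cap the number of iterations.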
Recommended publications
  • A Performance Study of VM Live Migration Over the WAN
    Master thesis, Electrical Engineering, April 2015. Taha Mohammad and Chandra Sekhar Eati, Department of Communication Systems, Blekinge Institute of Technology, SE-371 79 Karlskrona, Sweden. Advisor: Dr. Dragos Ilie; examiner: Prof. Kurt Tutschku. The thesis is equivalent to 40 weeks of full-time studies.
    Abstract: Virtualization is the key technology that has given cloud computing platforms a new way for small and large enterprises to host their applications by renting the available resources. Live VM migration allows a virtual machine to be transferred from one host to another while the virtual machine is active and running. The main challenge in live migration over the WAN is maintaining network connectivity during and after the migration. We have carried out live VM migration over the WAN, migrating different sizes of VM memory state, and present our solutions based on Open vSwitch/VXLAN and Cisco GRE approaches. VXLAN provides the mobility support needed to maintain network connectivity between the client and the virtual machine.
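The Open vSwitch/VXLAN approach mentioned in the abstract above can be sketched with standard iproute2 commands (a minimal illustration of the idea, not the thesis's actual configuration; the interface names, VNI, and remote address are assumptions):

```shell
# On the destination hypervisor: create a VXLAN tunnel endpoint that
# reaches the source site over the WAN, so the migrated VM stays on
# its original L2 segment (VNI 100) and keeps its IP and connections.
ip link add vxlan100 type vxlan id 100 \
    remote 203.0.113.10 dstport 4789 dev eth0
ip link set vxlan100 up

# Attach the tunnel to the bridge the VM's tap device plugs into.
ip link add br0 type bridge || true
ip link set br0 up
ip link set vxlan100 master br0
```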
  • oVirt Architecture
    oVirt Architecture. Itamar Heim, Director, RHEV-M Engineering, Red Hat.
    oVirt Engine provides large-scale, centralized management for server and desktop virtualization, based on leading performance, scalability, and security infrastructure technologies. Kernel-based Virtual Machine (KVM): included in the Linux kernel since 2006; runs Linux, Windows, and other operating system guests; advanced features include live migration, memory page sharing, thin provisioning, and PCI pass-through. The KVM architecture provides high "feature velocity" because it leverages the power of Linux.
    Linux as a hypervisor: a hypervisor is made up of hardware management, device drivers, an I/O stack, resource management, scheduling, access control, power management, a memory manager, a device model (emulation), and a virtual machine monitor; most of these components are already provided by an operating system kernel. How well does Linux perform as a hypervisor? Isn't Linux a general-purpose operating system? Linux is architected to scale from the smallest embedded systems through to the largest multi-socket servers, from cell phones through to mainframes, and KVM benefits from this mature, time-tested infrastructure, including its powerful, scalable memory manager.
  • Network Issues in Virtual Machine Migration
    Hatem Ibn-Khedher, Emad Abd-Elrahman, and Hossam Afifi (Institut Mines-Telecom (IMT), Telecom SudParis, Saclay, France) and Jacky Forestier (Orange Labs, Issy-les-Moulineaux, France).
    Abstract: Software Defined Networking (SDN) is based on three features: centralization of the control plane, programmability of network functions, and traffic engineering. Network function migration poses interesting problems that we try to expose and solve in this paper; content distribution network virtualization is presented as a use case. Index terms: virtualization, SDN, NFV, QoS, mobility.
    I. Introduction. The virtualization of resources has addressed the network architecture as a potential target. The basic tasks required in the virtualization substrate are the instantiation of new network functions, migration, and switching. These basic tasks are strongly dependent on the underlying network configuration and topology, in a way that makes them tributary to the network conditions.
    A. Network Functions Virtualization. Network Functions Virtualization (NFV) [1] virtualizes network equipment (routers, DPI, firewalls, ...). We will not discuss hardware; we rather consider a software-based NFV architecture. NFV is a concept that decouples network functions from the underlying hardware and enables the software to run in a virtualized, generic environment, so that several virtual appliances can share a single hardware resource. NFV brings several benefits [2], such as reducing CAPEX and OPEX and promoting flexibility and innovation in the virtual network functions already implemented. Moreover, it has been introduced as a new networking facility poised to amend the core structure of telecommunication infrastructure.
  • Oracle Linux Virtualization Manager
    Oracle Linux Virtualization Manager is a server virtualization management platform that can be easily deployed to configure, monitor, and manage an Oracle Linux Kernel-based Virtual Machine (KVM) environment. Oracle Linux KVM and Oracle Linux Virtualization Manager provide a modern, open source, high-performance alternative to proprietary server virtualization solutions, with zero licensing costs. An Oracle Linux Premier Support subscription provides customers access to award-winning Oracle support resources for Oracle Linux Virtualization Manager, KVM, Oracle Linux, zero-downtime patching with Ksplice, cloud native tools such as Kubernetes and Kata Containers, clustering tools, Oracle Linux Manager, and Oracle Enterprise Manager. All this and lifetime software management support is included in a single cost-effective support offering. For customers with an Oracle Cloud Infrastructure subscription, Oracle Linux Premier Support is included at no additional cost. Unlike many other commercial Linux distributions, Oracle Linux is easy to download and completely free to use, distribute, and update.
    Key features: leading price/performance, using a modern, low-overhead architecture based on the KVM hypervisor; a self-hosted engine offering a hyper-converged solution with high availability for the Manager; a full REST API allowing greater automation and interoperability; support for secure live migration and storage live migration; high system availability.
    Oracle Linux KVM: starting with Oracle Linux Release 7 with the Unbreakable Enterprise Kernel (UEK) Release 5, Oracle Linux KVM has been enhanced to deliver leading performance and security for hybrid and multi-cloud deployments. Users can take a previously deployed Oracle Linux VM and turn the operating environment into a KVM host, or a KVM configuration can be set up from a base Oracle Linux installation.
  • Proxmox Virtual Environment
    DATASHEET. Proxmox VE is a complete virtualization management solution for servers: you can virtualize even the most demanding application workloads running on Linux and Windows servers. It combines the leading Kernel-based Virtual Machine (KVM) hypervisor and container-based virtualization on one management platform. Thanks to the unique multi-master design, there is no need for an additional management server; this saves resources and also allows high availability without a single point of failure (no SPOF). With the included web-based management you can easily control all functionality. Full access to all logs from all nodes in a cluster is included, covering task logs such as running backup/restore processes, live migration, or high availability (HA) triggered activities.
    At a glance: complete virtualization solution for production environments; KVM hypervisor; lightweight Linux Containers (LXC); web-based management interface; comprehensive management feature set; multi-node high availability clusters; VM templates and clones; multiple supported storage types such as Ceph, NFS, ZFS, Gluster, and iSCSI; open source license GNU AGPL v3.
    Enterprise-ready: Proxmox VE includes all the functionality you need to deploy an enterprise-class virtualization environment in your company's datacenter. Multiple authentication sources combined with role-based user and permission management enable full control of your virtualization cluster. The RESTful web API enables easy integration with third-party management tools such as custom hosting environments. With the future-proof open source development model, full access to the source code as well as maximum flexibility and security are guaranteed.
    About Proxmox: Proxmox Server Solutions GmbH is a privately held corporation based in Vienna, Austria.
  • Performance Comparison of Linux Containers (LXC) and OpenVZ During Live Migration
    Thesis no. MSCS-2016-14. Pavan Sutha Varma Indukuri, Faculty of Computing, Blekinge Institute of Technology, SE-371 79 Karlskrona, Sweden. Advisor: Sogand Shirinbab, Department of Computer Science and Engineering. This thesis is submitted in partial fulfilment of the requirements for the degree of Master of Science in Computer Science and is equivalent to 20 weeks of full-time studies.
    Abstract. Context: Cloud computing is one of the most widely used technologies all over the world, providing numerous products and IT services. Virtualization is one of the innovative technologies in cloud computing and has the advantages of improved resource utilization and management. Live migration is an innovative feature of virtualization that allows a virtual machine or container to be transferred from one physical server to another. Live migration is a complex process which can have a significant impact on cloud computing when used by cloud-based software. Objectives: In this study, live migration of LXC and OpenVZ containers has been performed, and the performance of LXC and OpenVZ has been compared in terms of total migration time and downtime. Furthermore, CPU utilization, disk utilization, and the average load of the servers are also evaluated during the process of live migration.
  • Proxmox Virtual Environment
    DATASHEET. Proxmox VE is a complete virtualization management solution for servers: you can virtualize even the most demanding application workloads running on Linux and Windows servers. It combines the leading Kernel-based Virtual Machine (KVM) hypervisor and container-based virtualization with OpenVZ on one management platform. The unique multi-master design eliminates the need for an additional management server, as seen in other solutions; this saves resources and also allows high availability without a single point of failure (no SPOF). The included web-based management empowers the user (and admin) to control all functionality easily, including full access to all logs from all nodes in a cluster, covering task logs such as running backup/restore processes, live migration, or High Availability (HA) triggered activities.
    At a glance: complete enterprise virtualization solution; HA without SPOF; VM templates and clones; KVM hypervisor with an enterprise-class management system; OpenVZ container-based virtualization; comprehensive management feature set; open source solution.
    Enterprise-ready: Proxmox VE includes all the functionality you need to deploy an enterprise-class virtualization environment in your company. Multiple authentication sources combined with role-based user and permission management enable full control of your virtualization cluster. The RESTful web API enables easy integration with third-party management tools such as custom hosting environments.
  • Types of Virtualization
    Types of Virtualization (CS677: Distributed OS, Lecture 5).
    • Emulation: the VM emulates/simulates complete hardware, so an unmodified guest OS built for a different PC can be run (Bochs, VirtualPC for Mac, QEMU).
    • Full/native virtualization: the VM simulates "enough" hardware to allow an unmodified guest OS to be run in isolation on the same hardware CPU (IBM VM family, VMware Workstation, Parallels, ...).
    • Para-virtualization: the VM does not simulate hardware; instead it offers a special API that a modified guest OS must use, with hypercalls trapped and serviced by the hypervisor (Xen, VMware ESX Server).
    • OS-level virtualization: the OS allows multiple secure virtual servers to be run; the guest OS is the same as the host OS but appears isolated, so applications see an isolated OS (Solaris Containers, BSD Jails, Linux VServer).
    • Application-level virtualization: an application is given its own copy of components that are not shared (e.g., its own registry files and global objects), and the virtual environment prevents conflicts (JVM).
    Type 1 hypervisor: an unmodified OS runs in user mode (or ring 1) but thinks it is running in kernel mode (virtual kernel mode); privileged instructions trap, sensitive instructions use VT to trap, and the hypervisor is the "real kernel" that, upon a trap, executes the privileged operation or emulates what the hardware would do.
    Type 2 hypervisor (VMware example): upon loading a program, it scans the code for basic blocks; if sensitive instructions are found, they are replaced by VMware procedures (binary translation); the modified basic block is cached in the VMware cache and executed, then the next basic block is loaded, and so on. Type 2 hypervisors work without VT support because sensitive instructions are replaced by procedures that emulate them.
  • oVirt Intro & Architecture
    oVirt Intro & Architecture. Barak Azulay, Manager @ RHEV Engineering, Red Hat, June 2012.
    What is oVirt? Large-scale, centralized management for server and desktop virtualization, based on leading performance, scalability, and security infrastructure technologies. It aims to provide an open source alternative to vCenter/vSphere, with a focus on KVM for the best integration and performance, and on ease of use and deployment.
    Competitive landscape: in the 2011 InfoWorld "shootout", an independent analysis of leading virtualization platforms, oVirt placed 2nd in management functionality (http://bit.ly/virtshootout).
    Goals of the oVirt project: build a community around all levels of the virtualization stack (hypervisor, manager, GUI, API, etc.); deliver both a cohesive complete stack and discretely reusable components for open virtualization management; provide a release of the project on a well-defined schedule; focus on management of the KVM hypervisor, with exceptional guest support beyond Linux; and provide a venue for user and developer communication and coordination.
    Governance: a merit-based, open governance model, built using the best concepts taken from the Apache and Eclipse Foundations, with governance split between the board and the projects; multiple projects sit under the oVirt brand.
  • Proxmox VE Administration Guide
    PROXMOX VE ADMINISTRATION GUIDE, RELEASE 7.0. July 6, 2021. Proxmox Server Solutions GmbH, www.proxmox.com.
    Copyright © 2021 Proxmox Server Solutions GmbH. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".
    Contents: 1 Introduction; 1.1 Central Management; 1.2 Flexible Storage; 1.3 Integrated Backup and Restore; 1.4 High Availability Cluster; 1.5 Flexible Networking; 1.6 Integrated Firewall; 1.7 Hyper-converged Infrastructure; 1.7.1 Benefits of a Hyper-Converged Infrastructure (HCI) with Proxmox VE; 1.7.2 Hyper-Converged Infrastructure: Storage; 1.8 Why Open Source; 1.9 Your Benefits with Proxmox VE; 1.10 Getting Help; 1.10.1 Proxmox VE Wiki; 1.10.2 Community Support Forum.
  • Comparing Live Migration Between Linux Containers and Kernel Virtual Machine
    Master of Science in Computer Science, February 2017. Sai Venkat Naresh Kotikalapudi, Faculty of Computing, Blekinge Institute of Technology, SE-371 79 Karlskrona, Sweden. Advisor: Sogand Shirinbab, Department of Computer Science. This thesis is submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science and is equivalent to 20 weeks of full-time studies.
    Abstract. Context: Virtualization technologies have been extensively used in various cloud platforms. Hardware replacement and maintenance are occasionally required, which leads to business downtime. Live migration is performed to ensure high availability of services, as availability is a major aspect. The performance of live migration in virtualization technologies directly impacts the performance of cloud platforms; hence a comparison is performed between two mainstream virtualization technologies, container-based and hypervisor-based virtualization. Objectives: In the present study, the objective is to perform live migration with hypervisor-based and container-based virtualization technologies, Kernel Virtual Machine (KVM) and Linux Containers (LXC) respectively, and to measure and compare the downtime, total migration time, CPU utilization, and disk utilization of KVM and LXC during live migration. Methods: An initial literature review is conducted to gain in-depth knowledge about live migration in virtualization technologies.
  • Post-Copy Live Migration of Virtual Machines
    Michael R. Hines, Umesh Deshpande, and Kartik Gopalan. Computer Science, Binghamton University (SUNY). {mhines,udeshpa1,kartik}@cs.binghamton.edu.
    Abstract: We present the design, implementation, and evaluation of post-copy based live migration for virtual machines (VMs) across a Gigabit LAN. Post-copy migration defers the transfer of a VM's memory contents until after its processor state has been sent to the target host. This deferral is in contrast to the traditional pre-copy approach, which first copies the memory state over multiple iterations followed by a final transfer of the processor state. The post-copy strategy can provide a "win-win" by reducing total migration time while maintaining the liveness of the VM during migration. We compare post-copy extensively against the traditional pre-copy approach on the Xen Hypervisor.
    From the introduction: ...a cluster environment where physical nodes are interconnected via a high-speed LAN and also employ a network-accessible storage system. State-of-the-art live migration techniques [19, 3] use the pre-copy approach, which works as follows: the bulk of the VM's memory state is migrated to a target node even as the VM continues to execute at a source node; if a transmitted page is dirtied, it is re-sent to the target in the next round; this iterative copying of dirtied pages continues until either a small, writable working set (WWS) has been identified, or a preset number of iterations is reached, whichever comes first. This constitutes the end of the memory transfer phase and the beginning of...
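For contrast with the pre-copy loop, the post-copy strategy summarized in the abstract above can be reduced to a toy model (the page count and fault sequence are illustrative assumptions, not the authors' implementation):

```python
def postcopy_migrate(num_pages=1024, faulted=(3, 7, 3, 42)):
    """Toy post-copy model: the processor state moves first, then memory
    pages are fetched from the source on demand as the VM faults on them."""
    resident = set()              # pages already present at the target
    network_faults = 0
    # Downtime covers only the CPU/device state transfer; the VM then
    # resumes at the target and touches pages in the order `faulted`.
    for page in faulted:
        if page not in resident:
            network_faults += 1   # fetch the missing page from the source
            resident.add(page)
    # The rest can be pushed proactively in the background ("pre-paging").
    background = num_pages - len(resident)
    return network_faults, background

faults, background = postcopy_migrate()
print(faults, background)
```

Each page crosses the network at most once, which is post-copy's key property; the cost is that page faults are serviced over the network while the VM already runs on the target.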