LIVE VIRTUAL MACHINE MIGRATION ON AMD PROCESSORS


AMD WHITE PAPER
Live Migration with AMD-V™ Extended Migration Technology

Virtual Machine migration is a capability being increasingly utilized in today's enterprise environments. With live migration, a Virtual Machine Monitor (VMM) moves a running Virtual Machine (VM) nearly instantaneously from one server to another for a seamless experience to the end user while maintaining guest uptime. However, if the processors running in the computers in the pre and post migration environment are not identical, live migration can result in unexpected behavior of the guest software. Since the introduction of its AMD64 processor technology in 2003, AMD has worked closely with virtualization software developers to define the functionality necessary to ensure that live migration is possible across a broad range of AMD64 processors. This whitepaper discusses the basics of live migration and describes the functionality currently available to enable virtualization software vendors to support live migration across a broad range of AMD64 processors.

Table of Contents
1. Introduction
2. Virtual Machine Migration
3. Determining Processor Features
4. VMM Basics
5. Problems with Live Migration
   5.1 Backward Compatibility Problem
   5.2 Forward Compatibility Problem
6. AMD-V™ Extended Migration Technology
   6.1 Backward Compatibility Assurance
       6.1.1 Controlling Information Returned via CPUID
       6.1.2 CPUID Override Model Specific Registers
   6.2 Forward Compatibility Assurance
       6.2.1 Feature Disable Bits
7. Conclusions
8. References
Appendix: Using AMD-V Extended Migration on VMware ESX 3.0.1

1. INTRODUCTION

System virtualization is the ability to abstract and pool resources on a physical platform. This abstraction decouples software from hardware and enables multiple operating system images to run concurrently on a single physical platform without interfering with each other.

As a technique, system virtualization has existed for decades on mainframes. In the past, industry standard x86-based machines, with their limited computing resources and challenges in correctly virtualizing the Instruction Set Architecture (ISA)[1,2], did not lend themselves well to the technique. However, with the increase in computing resources of x86-based servers in recent years and with new processor extensions such as those found in AMD-V™[3] technology, x86-based virtualization is seeing widespread adoption and is rapidly becoming a pervasive and integral capability in enterprise environments[4].

Virtualization can increase utilization of computing resources by consolidating the workloads running on many physical systems into virtual machines running on a single physical system. Virtual machines can be provisioned on-demand, replicated and migrated. Virtual machine (VM) migration, which is the ability to move a VM from one physical server to another under virtual machine monitor (VMM) control, is a capability being increasingly utilized in today's enterprise environments.

In this whitepaper we discuss how AMD-V Extended Migration Technology is designed to enable virtual machine migration between AMD64 processors.

[Figure 1 – Live virtual machine migration on AMD processors: two AMD servers, each running a hypervisor that hosts virtual machines (kernel plus applications); live migration technology moves a running VM from one server to the other.]
2. VIRTUAL MACHINE MIGRATION

There are many ways to migrate a VM. In static migration, the VM is shut down using OS-supported methods; its static VM image is moved to another VMM and restarted. In cold migration, the VM is suspended using OS-supported or VMM-supported methods. The suspended VM image is moved to a VMM on a different machine and resumed. In live migration, the VMM moves a running VM instance nearly instantaneously from one server to another. Live migration allows for dynamic load balancing of virtualized resource pools, hardware maintenance without downtime and dynamic failover support.

For simplicity, in the following sections we call software running in the VM "guest software."

As long as the hardware in the pre and post migration environment is identical, guest software should behave in exactly the same way before and after the migration. It is when guest software runs in a different hardware environment after a migration that certain challenges can arise.

Note that even though a VMM presents a virtual platform to guest software, there could be certain interfaces, depending on VMM design, which guest software can directly use to determine the underlying hardware's capabilities. We will discuss this more in Section 5.

After the reboot/restart following a static migration, guest software should go through its platform discovery phase and be able to adjust to any differences in the underlying (virtual) hardware.

Following a cold migration, guest software may continue to maintain an identical view of the pre and post migration hardware. When suspended using OS-supported methods, some operating systems will re-scan the hardware upon resume. Depending on their policies and the hardware differences between the current and previous hardware, the OSes may refuse to resume and require a reboot.

After a live migration, guest software continues to maintain an identical view of the pre and post migration hardware. In this whitepaper we discuss how AMD64 processors provide support for a VMM to hide differences in software-visible processor features during live and cold migration. For simplicity we will use "live migration" to refer to both migration methods.

3. DETERMINING PROCESSOR FEATURES

Software should use CPUID to determine processor features and use this information to select appropriate code paths. CPUID is an x86 instruction that can be executed from the kernel and from applications. The following example shows CPUID being called from an application.

//Example 1: Console application on Windows® XP using the CPUID instruction to determine
//processor revision and feature information.

#include "stdafx.h"
#include <stdio.h>

int _tmain(int argc, _TCHAR* argv[])
{
    int version;
    int features_1;
    int features_2;

    __asm {
        mov eax, 1
        cpuid
        mov version, eax
        mov features_1, ecx
        mov features_2, edx
    }

    printf("Processor version: 0x%x features_1: 0x%08x features_2: 0x%08x\n",
           version, features_1, features_2);
    return 0;
}

However, a small number of existing commercial software determines the presence of certain processor features using non-standard ways (such as using a try-catch code block to check if certain instructions are supported) or assumes the presence of certain features without testing for them. With such software, it is difficult to provide assurance about its behavior under live migration scenarios. Note that such software is equally likely to fail in non-virtualized environments with different processors.
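As an illustrative aside that is not part of the original white paper, the same leaf-1 feature query can also be written with the __get_cpuid() helper that GCC and Clang provide in <cpuid.h>. The sketch below tests a single feature bit (SSE3, reported in ECX bit 0 of CPUID leaf 1) before selecting a code path; this is the CPUID-based discipline recommended above, in contrast to probing an instruction and catching the resulting fault.

//Illustrative sketch (not from this paper): CPUID-based feature detection with GCC/Clang.

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    // Leaf 1 returns the processor version in EAX and feature flags in ECX/EDX.
    // __get_cpuid() returns 0 if the requested leaf is not supported.
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 1 not available\n");
        return 1;
    }

    // SSE3 support is reported in ECX bit 0 of leaf 1.
    if (ecx & (1u << 0))
        printf("SSE3 supported: selecting the SSE3 code path\n");
    else
        printf("SSE3 not supported: selecting the fallback code path\n");

    return 0;
}

Because a VMM typically virtualizes CPUID, a check like this reports whatever feature set the hypervisor chooses to expose, which is what makes CPUID-based detection well behaved across a live migration.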
4. VMM BASICS

To understand the implications of VMM design on VM migration it is useful to briefly review various VMM design alternatives.

It is important to note that, for performance reasons, a VMM may only want to interfere in the execution of guest software when the guest code performs privileged operations [1,2].

The first method is called para-virtualization, where the guest's kernel is modified to cooperate with the VMM when performing privileged operations. The second method is called binary translation, where the VMM modifies the guest's binary image at runtime to get processor control when guest software attempts to perform a privileged operation [2]. The VMM can then emulate the privileged operation and return control to guest software. The third method is hardware-assisted virtualization, where the VMM uses processor extensions such as AMD-V to intercept and emulate privileged operations in the guest.

5. PROBLEMS WITH LIVE MIGRATION

In the following sections we call the processor in the pre-migration environment the "current" processor. The one in the post-migration environment is called the "target" processor.

When performing a live migration, a VM is suspended; its current (guest and VMM) state is captured, moved to another VMM, and resumed on it. The VMM preserves the state and identity of virtual devices after the migration. This process happens quickly to present a seamless experience to the end user and to maintain any guest uptime guarantees.

As indicated previously, when the hardware in the pre and post migration environment is different, live migration can result in unexpected guest behavior. To help ensure that guest software does not malfunction after a live migration, the virtualization infrastructure should address the following problems.

5.1 BACKWARD COMPATIBILITY PROBLEM

This is the case when the target processor does not implement a feature which is implemented on the current processor. In such a case, after live migration, use of such a feature may result in unexpected software behavior. For example, use of an instruction not implemented on the target processor will result in a #UD fault.

5.2 FORWARD COMPATIBILITY PROBLEM

This is the case when the target processor implements features not supported on the current processor. This could be problematic in cases where the target processor uses an opcode (to implement an instruction) which is not defined on the current processor, and software on the current processor expects use of this undefined opcode to generate a #UD fault. In such cases, after a live migration, software will observe different processor behavior, i.e. it will not observe a #UD fault upon use of the (previously undefined) opcode.

Fortunately, a very small number of existing commercial software depends on such processor behavior. AMD actively works with its software partners to discourage use of such behaviors.

6. AMD-V EXTENDED MIGRATION TECHNOLOGY

Since the introduction of its AMD64 processor technology in 2003, AMD has worked closely with virtualization software developers to define the functionality necessary so that live migration is possible across a broad range of products. AMD-V™ Extended Migration Technology provides various capabilities to address the problems associated with live migration.
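The remainder of the paper (Sections 6.1 and 6.2) details the specific mechanisms: controlling the information returned via CPUID, CPUID override model specific registers, and feature disable bits. As a purely illustrative sketch of the backward compatibility idea, and assuming a hypothetical VMM whose CPUID intercept handler can be shaped this way (every name and value below is invented for illustration and is not an AMD or vendor API), a hypervisor can report to its guests only the intersection of the leaf-1 feature bits implemented by every processor in the migration pool:

//Hypothetical sketch (not from this paper): a VMM masks CPUID leaf-1 feature flags
//down to the least common denominator of a migration pool.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct host_features {
    uint32_t leaf1_ecx;   // CPUID leaf 1 feature flags (ECX) on one host in the pool
    uint32_t leaf1_edx;   // CPUID leaf 1 feature flags (EDX) on one host in the pool
};

struct pool_baseline {
    uint32_t leaf1_ecx;
    uint32_t leaf1_edx;
};

// Intersect the feature sets of every host a VM may be migrated to, so a guest never
// sees a feature that some target processor lacks (the backward compatibility problem
// of Section 5.1).
struct pool_baseline compute_baseline(const struct host_features *hosts, size_t n)
{
    struct pool_baseline b = { ~0u, ~0u };
    for (size_t i = 0; i < n; i++) {
        b.leaf1_ecx &= hosts[i].leaf1_ecx;
        b.leaf1_edx &= hosts[i].leaf1_edx;
    }
    return b;
}

// Would be called from the VMM's CPUID intercept: mask the hardware's leaf-1 answer
// down to the pool baseline before returning it to the guest.
void filter_cpuid_leaf1(const struct pool_baseline *b, uint32_t *ecx, uint32_t *edx)
{
    *ecx &= b->leaf1_ecx;
    *edx &= b->leaf1_edx;
}

int main(void)
{
    // Two hypothetical hosts whose leaf-1 feature flags differ (values are made up).
    struct host_features pool[] = {
        { 0x0000e3bd, 0xbfebfbff },
        { 0x000021b9, 0xbfebfbff },
    };
    struct pool_baseline b = compute_baseline(pool, 2);

    // What the current host would report, and what the guest is actually shown.
    uint32_t ecx = pool[0].leaf1_ecx;
    uint32_t edx = pool[0].leaf1_edx;
    filter_cpuid_leaf1(&b, &ecx, &edx);
    printf("guest-visible CPUID leaf 1: ecx=0x%08x edx=0x%08x\n", (unsigned)ecx, (unsigned)edx);
    return 0;
}

Conceptually, this least-common-denominator masking is the kind of control that the CPUID override capability described in Section 6.1 gives a VMM, so that guest software which follows the CPUID discipline of Section 3 never selects a code path that a migration target cannot execute.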