Xen and the Art of Virtualization


Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho, Rolf Neugebauer, Ian Pratt & Andrew Warfield
SOSP 2003
Additional source: Ian Pratt on Xen (Xen source): www.cl.cam.ac.uk/research/srg/netos/papers/2005-xen-may.ppt

Paravirtualization

Virtualization approaches
- Full virtualization
  - OS sees the exact h/w
  - OS runs unmodified
  - Requires a virtualizable architecture, or a workaround
  - Example: VMware
  (Stack: OS on VMM on H/W)
- Paravirtualization
  - OS knows about the VMM
  - Requires porting (source code changes)
  - Execution overhead
  - Examples: Xen, Denali
  (Stack: OS on VMM on H/W)

The Xen approach
- Support for unmodified binaries (but not an unmodified OS) is essential
  - Important for app developers
  - The virtualized system exports the same Application Binary Interface (ABI)
- Modify the guest OS to be aware of virtualization
  - Gets around the problems of the x86 architecture
  - Allows better performance to be achieved
- Expose some effects of virtualization
  - A "translucent" VM: the OS can use this to optimize for performance
- Keep the hypervisor layer as small and simple as possible
  - Resource management and device drivers run in the privileged VMM
  - Enhances security and resource isolation

Paravirtualization
- Solution to the issues with the x86 instruction set
- Don't allow the guest OS to issue sensitive instructions
- Replace sensitive instructions that don't trap with ones that will trap
- The guest OS makes "hypercalls" (like system calls) to interact with system resources
  - Allows the hypervisor to provide protection between VMs
- Exceptions are handled by registering a handler table with Xen
  - A fast handler for OS system calls is invoked directly
  - The page fault handler is modified to read the faulting address from a replica location
- Guest OS changes are largely confined to architecture-specific code
  - Compile for ARCH=xen instead of ARCH=i686
  - The original port of Linux required only 1.36% of the OS to be modified

Para-virtualization in Xen
- Arch xen_x86: like x86, but Xen hypercalls are required for privileged operations
  - Avoids binary rewriting
  - Minimizes the number of privilege transitions into Xen
  - Modifications are relatively simple and self-contained
- Modify the kernel to understand the virtualized environment
  - Wall-clock time vs. virtual processor time: Xen provides both types of alarm timer
  - Expose real resource availability: enables the OS to optimise its behaviour

x86 CPU virtualization
- Xen runs in ring 0 (most privileged)
- Rings 1/2 for the guest OS, ring 3 for user space
- General Protection Fault if the guest attempts to use a privileged instruction
- Xen lives in the top 64MB of the linear address space
  - Segmentation is used to protect Xen, since switching page tables is too slow on standard x86
- Hypercalls jump to Xen in ring 0
- The guest OS may install a 'fast trap' handler
  - Directs user-space to guest-OS system calls
- MMU virtualisation: shadow vs. direct mode

x86_32
- Xen reserves the top of the 4GB virtual address space
- Segmentation protects Xen from the kernel
- System call speed is unchanged
- Xen 3.0 now supports >4GB of memory with Physical Address Extension (PAE), 64-bit support, etc.
(Figure: 32-bit address-space layout — Xen segment (S) at the top of the 4GB space, kernel segment (S) below 3GB, user segment (U) from 0GB; Xen runs in ring 0, the guest kernel in ring 1, user space in ring 3.)

Xen VM interface: CPU
- The guest runs at lower privilege than the VMM
- Exception handlers must be registered with the VMM
- A fast system call handler can be serviced without trapping to the VMM
- Hardware interrupts are replaced by a lightweight event notification system
- Timer interface: both real and virtual time

Xen virtualizing the CPU
- Many processor architectures provide only 2 privilege levels (0/1)
  - Guest and apps in 1, VMM in 0
  - Run the guest and its apps as separate processes
  - The guest OS can use the VMM to pass control between address spaces
  - Use a software TLB with address-space tags to minimize context-switch overhead

Xen: virtualizing the CPU on x86
- x86 provides 4 rings (even the VAX processor provided 4)
- Leverages the availability of multiple "rings"
- Intermediate rings have not been used in practice since OS/2; this is x86-specific
- An OS written to use only rings 0 and 3 can be ported; the kernel must be modified to run in ring 1

CPU virtualization
- Exceptions that are raised often:
  - Software interrupts for system calls
  - Page faults
- Improvement: allow the "guest" to register a 'fast' exception handler for system calls that is invoked directly by the CPU in ring 1, without switching to ring 0/Xen
  - The handler is validated before being installed in the hardware exception table, to make sure nothing executes with ring 0 privilege
- This doesn't work for page faults
  - Only code in ring 0 can read the faulting address from the register (CR2)

Some Xen hypercalls
- See http://lxr.xensource.com/lxr/source/xen/include/public/xen.h
#define __HYPERVISOR_set_trap_table 0
#define __HYPERVISOR_mmu_update 1
#define __HYPERVISOR_sysctl 35
#define __HYPERVISOR_domctl 36
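These hypercall numbers play the same role for Xen that system-call numbers play for an OS kernel. The following is a minimal sketch of how a paravirtualized 32-bit guest might wrap a hypercall and hand its trap table to Xen; it assumes the classic int $0x82 calling convention and a simplified trap_info layout, so treat the names and fields as illustrative rather than as a copy of the real public headers.

#include <stdint.h>

#define __HYPERVISOR_set_trap_table 0   /* from the list above */

/* Simplified descriptor for one guest exception handler. */
struct trap_info {
    uint8_t       vector;    /* exception/interrupt vector number       */
    uint8_t       flags;     /* e.g. lowest privilege allowed to raise  */
    uint16_t      cs;        /* code segment of the handler             */
    unsigned long address;   /* handler entry point inside the guest    */
};

/* One-argument hypercall: number in EAX, argument in EBX, then trap to
 * Xen in ring 0 (classic 32-bit convention; illustrative only). */
static inline long hypercall1(unsigned long nr, unsigned long arg)
{
    long ret;
    __asm__ __volatile__("int $0x82"
                         : "=a"(ret)
                         : "a"(nr), "b"(arg)
                         : "memory");
    return ret;
}

/* Hand the guest's exception table to Xen; Xen validates every entry
 * (for example, that no handler requests ring 0) before installing it. */
static long register_guest_traps(struct trap_info *table)
{
    return hypercall1(__HYPERVISOR_set_trap_table, (unsigned long)table);
}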
Xen VM interface: Memory
- The guest cannot install highest-privilege-level segment descriptors; the top end of the linear address space is not accessible to it
- The guest has direct (untrapped) read access to the hardware page tables; writes are trapped and handled by the VMM
- Physical memory presented to the guest is not necessarily contiguous

Memory virtualization choices
- The TLB is the challenging part:
  - A software-managed TLB can be virtualized without flushing TLB entries between VM switches
  - Hardware TLBs tagged with address-space identifiers can also be leveraged to avoid flushing the TLB between switches
  - The x86 TLB is hardware-managed and has no tags...
- Decisions:
  - Guest OSes allocate and manage their own hardware page tables, with minimal involvement of Xen, for better safety and isolation
  - The Xen VMM lives in a 64MB section at the top of every VM's address space that is not accessible from the guest

Xen memory management
- The x86 TLB is not tagged
  - Context switches must be optimised: allow the VM to see physical addresses
  - Xen is mapped into each VM's address space
- PV: the guest OS manages its own page tables
  - It allocates new page tables and registers them with Xen
  - It can read them directly
  - Updates are batched, then validated and applied by Xen

Memory virtualization
- The guest OS has direct read access to the hardware page tables, but updates are validated by the VMM
  - Through "hypercalls" into Xen
  - The same applies to segment descriptor tables
- The VMM must ensure that access to the Xen 64MB section is never allowed
- The guest OS may batch update requests to amortize the cost of entering the hypervisor

Virtualized memory management
- Each process in each VM has its own virtual address space
- The guest OS deals with "real" (pseudo-physical) pages; Xen maps pseudo-physical pages to machine pages
- For PV, the guest OS uses hypercalls to interact with memory
- For HVM, Xen keeps shadow page tables (VT instructions help)
(Figure: per-process virtual-to-pseudo-physical page mappings in VM1 and VM2, and Xen's pseudo-physical-to-machine mapping.)

TLB when VM1 is running
VP  PID  PN
1   1    ?
2   1    ?
1   2    ?
2   2    ?

MMU virtualization: shadow mode
(Figure: shadow-mode MMU virtualization.)

Shadow page table
- The hypervisor is responsible for trapping accesses to the virtual page table
- Updates need to be propagated back and forth between the guest OS and the VMM
- This increases the cost of managing page-table flags (modified and accessed bits)
- The guest can view physical memory as contiguous
- Needed for full virtualization

MMU virtualization: direct mode
- Takes advantage of paravirtualization
- The OS can be modified so the hypervisor is involved only in page-table updates
- Restrict guest OSes to read-only access to page tables
- Classify page frames, identifying the frames that hold page tables
  - Once registered as a page-table frame, the frame is made read-only
- Avoids the use of shadow page tables

Single PTE update
(Figure: guest reads go straight to the virtual-to-machine page table; on the first guest write to a PTE, Xen traps the write and decides whether to emulate it before it reaches the hardware MMU.)

Bulk update
- Useful when creating new virtual address spaces
  - A new process via fork, and context switches
  - Requires the creation of several PTEs
- A multipurpose hypercall can:
  - Update PTEs
  - Update the virtual-to-machine mapping
  - Flush the TLB
  - Install a new page-table base register (PTBR)
- (A code sketch of this batched interface follows the figures below.)

Batched update interface
(Figure: guest reads go directly to the page directory and page tables; guest writes are collected and validated by the Xen VMM before being applied to the hardware MMU.)

Writeable page tables: create new entries
(Figure: Xen disconnects a page table that the guest is writing to, so the guest can create new entries directly.)

Writeable page tables: first use — validate mapping via TLB
(Figure: the first use of the disconnected page table causes a page fault, which lets Xen validate the new mappings.)

Writeable page tables: re-hook
(Figure: after validation, Xen re-hooks the page table so that guest accesses again go through the validated virtual-to-machine mapping.)
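As promised above, here is a rough sketch of the batch-then-validate update path. The struct mirrors the idea behind Xen's mmu_update interface (the machine address of a PTE plus the value to write into it), but the two-argument wrapper is a simplification: the real hypercall takes additional arguments, so treat this as an illustration of the batching idea rather than of the exact ABI.

#include <stdint.h>

#define __HYPERVISOR_mmu_update 1     /* from the hypercall list earlier */
#define MMU_UPDATE_BATCH        64    /* illustrative batch size          */

/* One queued page-table update: which PTE (by machine address) and what
 * to write into it. */
struct mmu_update {
    uint64_t ptr;   /* machine address of the PTE */
    uint64_t val;   /* new PTE contents           */
};

static struct mmu_update update_queue[MMU_UPDATE_BATCH];
static unsigned num_queued;

/* Two-argument hypercall wrapper in the same style as the hypercall1
 * sketch earlier (again illustrative, not the exact ABI). */
static inline long hypercall2(unsigned long nr, unsigned long a1,
                              unsigned long a2)
{
    long ret;
    __asm__ __volatile__("int $0x82"
                         : "=a"(ret)
                         : "a"(nr), "b"(a1), "c"(a2)
                         : "memory");
    return ret;
}

/* One trap into Xen validates and applies every queued update, amortizing
 * the cost of entering the hypervisor. */
static void flush_pte_updates(void)
{
    if (num_queued == 0)
        return;
    hypercall2(__HYPERVISOR_mmu_update,
               (unsigned long)update_queue, num_queued);
    num_queued = 0;
}

/* Queue a PTE write instead of performing it directly; the frame holding
 * the page table is read-only to the guest, so a direct write would trap
 * anyway. */
static void queue_pte_update(uint64_t pte_maddr, uint64_t new_val)
{
    update_queue[num_queued].ptr = pte_maddr;
    update_queue[num_queued].val = new_val;
    if (++num_queued == MMU_UPDATE_BATCH)
        flush_pte_updates();
}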
Physical memory
- The memory allocation for each VM is specified at boot
  - Statically partitioned; no overlap in machine memory
  - Strong isolation
  - Non-contiguous (sparse) allocation
- Balloon driver
  - Adds machine memory to, or removes it from, the guest OS

Xen memory management
- Xen does not swap out memory allocated to domains
  - Provides consistent performance for domains
  - By itself this would create an inflexible system (static memory allocation)
- The balloon driver allows guest memory to grow and shrink
  - The memory target is set as a value in the XenStore
  - If the guest is above the target, it frees or swaps out pages, then releases them to Xen
  - If the guest is below the target, it can increase its usage
  - (A sketch of the balloon control loop appears at the end of these notes.)
- Hypercalls allow guests to see and change the state of memory
  - Physical-to-real (machine) mappings
  - "Defragment" allocated memory

Xen VM interface: I/O
- Virtual devices (device descriptors) are exposed to guests as asynchronous I/O rings
- Event notification is by means of an upcall, as opposed to interrupts

I/O
- Handle interrupts
- Data transfer
  - Data is written to I/O buffer pools in each domain
  - These page frames are pinned by Xen

Details: I/O
- The I/O descriptor ring:
(Figure: the I/O descriptor ring, with request producer and request consumer (Xen) pointers.)
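To make the balloon mechanism described above a little more concrete, here is a sketch of the control loop such a driver might run. The helper functions (reading the target from the XenStore, returning frames to Xen, reclaiming frames) are hypothetical stand-ins for the real XenStore and memory-operation interfaces.

/* Illustrative balloon-driver loop: compare the guest's current memory
 * reservation with the target published in the XenStore and grow or
 * shrink accordingly.  All four helpers below are hypothetical. */

unsigned long read_xenstore_target(void);         /* target, in pages     */
unsigned long guest_current_pages(void);          /* current reservation  */
void release_frames_to_xen(unsigned long count);  /* free, then give back */
void reclaim_frames_from_xen(unsigned long count);

static void balloon_adjust(void)
{
    unsigned long target  = read_xenstore_target();
    unsigned long current = guest_current_pages();

    if (current > target) {
        /* Above target: free (or swap out) pages inside the guest, then
         * hand the underlying machine frames back to Xen. */
        release_frames_to_xen(current - target);
    } else if (current < target) {
        /* Below target: ask Xen for frames and return them to the
         * guest's page allocator. */
        reclaim_frames_from_xen(target - current);
    }
}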
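The I/O descriptor ring shown in the final figure can be summarized as a fixed-size circular buffer in memory shared between the guest and Xen, with free-running producer and consumer indices for requests and responses. The layout and helper below are a simplified illustration in the spirit of Xen's split-driver rings, not the actual ring definitions.

#include <stdint.h>

#define RING_SIZE 32                    /* must be a power of two */

struct io_request  { uint64_t id; uint64_t buffer_maddr; uint32_t op; };
struct io_response { uint64_t id; int32_t  status; };

/* Shared ring: the guest produces requests and consumes responses; Xen
 * (or a backend driver domain) consumes requests and produces responses. */
struct io_ring {
    volatile uint32_t req_prod;   /* advanced by the guest          */
    volatile uint32_t req_cons;   /* advanced by Xen / the backend  */
    volatile uint32_t rsp_prod;   /* advanced by Xen / the backend  */
    volatile uint32_t rsp_cons;   /* advanced by the guest          */
    struct io_request  req[RING_SIZE];
    struct io_response rsp[RING_SIZE];
};

/* Guest side: place a request in the next free slot and publish it by
 * advancing req_prod.  An event-channel notification (not shown) tells
 * the other side that new work is available. */
static int ring_put_request(struct io_ring *r, const struct io_request *rq)
{
    if (r->req_prod - r->req_cons >= RING_SIZE)
        return -1;                              /* ring full */
    r->req[r->req_prod & (RING_SIZE - 1)] = *rq;
    __sync_synchronize();                       /* publish the slot first */
    r->req_prod++;
    return 0;
}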