Multi-Hypervisor Virtual Machines


Multi-Hypervisor Virtual Machines: Enabling an Ecosystem of Hypervisor-level Services
Kartik Gopalan, Rohith Kugve, Hardik Bagdi, Yaohui Hu (Binghamton University); Dan Williams, Nilton Bila (IBM T.J. Watson Research Center). Funded by NSF.

Enabling Efficient Hypervisor-as-a-Service Clouds with Ephemeral Virtualization
Dan Williams†, Yaohui Hu‡, Umesh Deshpande (IBM Almaden Research Center), Piush K. Sinha‡, Nilton Bila†, Kartik Gopalan‡, Hani Jamjoom†. († IBM T.J. Watson Research Center, ‡ Binghamton University.) Funded in part by the NSF.

Hypervisors
• A thin and secure layer in the cloud -- or --
• Feature-filled cloud differentiators:
  • Migration
  • Checkpointing
  • High availability
  • Live guest patching
  • Network monitoring
  • Intrusion detection
  • Other virtual machine introspection (VMI) services

Lots of third-party interest in hypervisor-level services
• Ravello, Bromium, Xen-Blanket, McAfee DeepDefender, SecVisor, CloudVisor, and more...
• But the base hypervisor provides only limited support for third-party services.

How can a guest use multiple third-party hypervisor-level services?
• Our solution: Span virtualization, in which one guest is controlled by multiple coresident hypervisors.

Outline
• Why multi-hypervisor virtual machines?
• Design of Span virtualization
• Evaluations
• Related work
• Conclusions and future work

Option 1: Fat hypervisor
• All services run at the most privileged level.
• But the hypervisor cannot trust third-party services in privileged mode.

Option 2: Native user space services
• Services run natively in the user space of the hypervisor (e.g., QEMU with KVM, uDenali).
• Services control the guest indirectly via the hypervisor, through event interposition and system calls.
• But the user-kernel interface is potentially large, and cloud providers are reluctant to run third-party services natively, even in user space.

Option 3: Service VMs
• Run services inside deprivileged VMs; services control the guest indirectly via hypercalls and events.
• A single trusted service VM can run all services (e.g., Domain0 in Xen), or multiple service VMs can each run one service, deprivileged and restartable (e.g., service domains in Xoar).
• But service VMs lack direct control over ISA-level guest state: memory mappings, VCPU scheduling, port-mapped I/O, etc.

Option 4: Nested virtualization
• Services run in a deprivileged L1 hypervisor, which runs on L0.
• Services control the guest at the virtualized ISA level.
• But multiple services must reside in the same L1, i.e., a fat L1.
• Vertically stacking L1 hypervisors? More than two levels of nesting is inefficient.

Our solution: Span virtualization
• Allow multiple coresident L1s to concurrently control a common guest, i.e., horizontal layering of L1 hypervisors.
• The guest is a multi-hypervisor virtual machine.
• Each L1:
  • offers guest services that augment L0's services, and
  • controls one or more guest resources.

Design goals of Span virtualization
• Guest transparency: the guest remains unmodified.
• Service isolation: L1s controlling the same guest are unaware of each other.

Guest control operations
• L0 supervises which L1 controls which guest resource (memory, VCPUs, and I/O).
• L0 and L1s communicate via traps/faults (implicit) and messages (explicit) over a message channel.
• Operations (see the sketch after the next list):
  • Attach an L1 to a specified guest resource.
  • Detach an L1 from a guest resource.
  • Subscribe an attached L1 to receive guest events (currently memory events).
  • Unsubscribe an L1 from a subscribed guest event.

Control over guest resources
• Guest memory (shared): all hypervisors have the same consistent view of guest memory.
• Guest VCPUs (exclusive): all guest VCPUs are controlled by one hypervisor at any instant.
• Guest I/O devices (exclusive): different virtual I/O devices of a guest may be controlled by different hypervisors.
• Control transfer: control over guest VCPUs and I/O devices can be transferred from one L1 to another via L0.
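The four control operations and three resource classes above suggest a small message format on the L0 channel. The following C sketch is illustrative only: neither the talk nor the paper specifies a wire format, and every name here (span_msg, span_send_msg, the enums) is an assumption, with a plain file descriptor standing in for the real L0-to-L1 transport.

```c
/* A minimal sketch of the L0<->L1 message channel, under assumed names;
 * none of these identifiers come from the Span prototype. */
#include <stdint.h>
#include <unistd.h>

enum span_resource {            /* guest resources an L1 can control  */
    SPAN_RES_MEMORY,
    SPAN_RES_VCPU,
    SPAN_RES_IO_DEVICE,
};

enum span_msg_type {            /* explicit messages on the channel   */
    SPAN_MSG_ATTACH,            /* attach an L1 to a guest resource   */
    SPAN_MSG_DETACH,            /* detach an L1 from a guest resource */
    SPAN_MSG_SUBSCRIBE,         /* subscribe to guest memory events   */
    SPAN_MSG_UNSUBSCRIBE,       /* cancel an event subscription       */
};

struct span_msg {
    enum span_msg_type type;
    uint32_t guest_id;          /* which Span guest                       */
    enum span_resource res;     /* which guest resource                   */
    uint64_t event_mask;        /* for subscriptions: which memory events */
};

/* An L1 sends a request to L0 over the bidirectional channel; a plain
 * write() on a file descriptor stands in for the real transport. */
static int span_send_msg(int channel_fd, const struct span_msg *msg)
{
    ssize_t n = write(channel_fd, msg, sizeof(*msg));
    return n == (ssize_t)sizeof(*msg) ? 0 : -1;
}
```

Keeping attach/detach and subscribe/unsubscribe as explicit messages (rather than traps) matches the design point made below: explicit requests are easier to implement when the L1 initiates the action.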
Isolation and Communication: Another design goal is to compartmentalize L1 services, from each other and from L0. First, L1s must have lower execution privilege compared to L0. Second, L1s must remain isolated from each other. These two goals are achieved by deprivileging L1s using nested virtualization and executing them as separate guests on L0. Finally, L1s must remain unaware of each other during execution. This goal is achieved by requiring L1s to receive any subscribed guest events that are generated on other L1s only via L0.

There are two ways that L0 communicates with L1s: implicitly via traps and explicitly via messages. Traps allow L0 to transparently intercept certain memory management operations by L1 on the guest. Explicit messages allow an L1 to directly request guest control from L0. An Event Processing module in L0 traps runtime updates to guest memory mappings by any L1 and synchronizes guest mappings across different L1s. The event processing module also relays guest memory faults that need to be handled by an L1. A bidirectional Message Channel relays explicit messages between L0 and L1s, including attach/detach requests, memory event subscription/notification, guest I/O requests, and virtual interrupts. Some explicit messages, such as guest I/O requests and virtual interrupts, could be replaced with implicit traps. Our choice of which to use is largely based on ease of implementation on a case-by-case basis.

Continuous vs. Transient Control: Span virtualization allows an L1's control over guest resources to be either continuous or transient. Continuous control means that an L1 exerts uninterrupted control over one or more guest resources for an extended period of time. For example, an intrusion detection service in L1 that must monitor guest system calls, VM exits, or network traffic would require continuous control of guest memory, VCPUs, and the network device. Transient control means that an L1 acquires full control over guest resources for a brief duration, provides a short service to the guest, and releases guest control back to L0. For instance, an L1 that periodically checkpoints the guest would need transient control of guest memory, VCPUs, and I/O devices.

4 Memory Management

A Span VM has a single guest physical address space which is mapped into the address space of all attached L1s. Thus any memory write on a guest page is immediately visible to all hypervisors controlling the guest. Note that all L1s have the same visibility into the guest memory.

[Figure 3: Memory translation for (a) single-level, (b) nested, and (c) Span VMs. VA = Virtual Address; GPA = Guest Physical Address; L1PA = L1 Physical Address; HPA = Host Physical Address.]

4.1 Traditional Memory Translation

In modern x86 processors, hypervisors manage the physical memory that a guest can access using a virtualization feature called Extended Page Tables (EPT) [37], also called Nested Page Tables in AMD-V [5].

Single-level virtualization: Figure 3(a) shows that for single-level virtualization, the guest page tables map virtual addresses to guest physical addresses (VA to GPA in the figure). The hypervisor uses an EPT to map guest physical addresses to host physical addresses (GPA to HPA). Guest memory permissions are controlled by the combination of permissions in the guest page table and the EPT.

Whenever the guest attempts to access a page that is either not present or protected in the EPT, the hardware generates an EPT fault and traps into the hypervisor, which handles the fault by mapping a new page, emulating an instruction, or taking other actions. On the other hand, the hypervisor grants the guest complete control over the traditional paging hardware. A guest OS is free to maintain the mappings between its virtual and guest physical address space and update them as it sees fit, without trapping into the hypervisor.
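To make the two-stage translation concrete, below is a toy C model of the single-level case in Figure 3(a). It collapses each multi-level page-table walk into one flat array, so the structures are assumptions rather than the real x86 formats; what it preserves is the composition of the two stages and the fact that effective permissions are the intersection of guest page table and EPT permissions.

```c
/* Toy model of Figure 3(a): two independent translation stages.
 * Flat single-level "tables" stand in for the real multi-level walks;
 * the layout is illustrative, not Intel's actual EPT format. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define NPAGES     16

typedef struct {
    bool     present;
    bool     writable;
    uint64_t pfn;      /* page frame number in the next address space */
} pte_t;

static pte_t guest_pt[NPAGES]; /* VA  -> GPA, maintained by the guest OS   */
static pte_t ept[NPAGES];      /* GPA -> HPA, maintained by the hypervisor */

/* Translate a guest virtual address to a host physical address.
 * Returns false where real hardware would raise a guest page fault
 * (first stage) or an EPT fault trapping to the hypervisor (second). */
static bool translate(uint64_t va, bool is_write, uint64_t *hpa)
{
    pte_t g = guest_pt[(va >> PAGE_SHIFT) % NPAGES];
    if (!g.present || (is_write && !g.writable))
        return false;                    /* guest page fault            */

    pte_t e = ept[g.pfn % NPAGES];
    if (!e.present || (is_write && !e.writable))
        return false;                    /* EPT fault: trap to hypervisor */

    /* Effective permissions are the intersection of both stages. */
    *hpa = (e.pfn << PAGE_SHIFT) | (va & ((1 << PAGE_SHIFT) - 1));
    return true;
}

int main(void)
{
    guest_pt[1] = (pte_t){ .present = true, .writable = true,  .pfn = 5 };
    ept[5]      = (pte_t){ .present = true, .writable = false, .pfn = 9 };

    uint64_t hpa;
    printf("read : %s\n", translate(0x1234, false, &hpa) ? "ok" : "fault");
    printf("write: %s\n", translate(0x1234, true,  &hpa) ? "ok" : "fault");
    return 0;
}
```

Note how the write faults at the EPT stage even though the guest page table permits it; intercepting exactly such faults is how a hypervisor implements services like dirty-page tracking or memory introspection without modifying the guest.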
Nested virtualization: Figure 3(b) shows that for nested virtualization, the guest is similarly granted control over the traditional paging hardware to map virtual addresses to its guest physical address space. L1 maintains a Virtual EPT to map the guest pages to pages in L1's physical address space, or L1 pages. Finally, one more translation is required: L0 maintains EPT_L1 to map L1 pages to physical pages. However, x86 processors can translate only two levels of addresses in hardware, from guest virtual to guest physical to host physical addresses. L0 therefore shadows L1's Virtual EPT: the Virtual EPT (GPA to L1PA) and EPT_L1 (L1PA to HPA) are combined into a single Shadow EPT (GPA to HPA) that the hardware can walk directly, as shown in Figure 3(b).
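Since the hardware walks only one EPT, L0 must refresh the affected Shadow EPT entries whenever L1 updates its Virtual EPT, updates which L0 intercepts by trapping. The sketch below shows that composition step for a single guest page; the names and flat tables are assumptions for illustration, not the Span prototype's data structures.

```c
/* Sketch of shadowing an L1's Virtual EPT, as in Figure 3(b): L0 folds
 * two software tables (GPA->L1PA and L1PA->HPA) into one Shadow EPT
 * (GPA->HPA). Illustrative layout only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NPAGES 16

typedef struct {
    bool     present;
    bool     writable;
    uint64_t pfn;
} epte_t;

/* Rebuild the shadow entry for one guest page after either input table
 * changes; L0 would call this when it traps an L1 Virtual EPT update. */
static void shadow_one(const epte_t virt_ept[],  /* GPA  -> L1PA, by L1 */
                       const epte_t ept_l1[],    /* L1PA -> HPA,  by L0 */
                       epte_t shadow[],          /* GPA  -> HPA,  by L0 */
                       uint64_t gpfn)
{
    epte_t v = virt_ept[gpfn % NPAGES];
    if (!v.present) {
        shadow[gpfn % NPAGES].present = false;   /* fault will reach L1 */
        return;
    }
    epte_t h = ept_l1[v.pfn % NPAGES];
    shadow[gpfn % NPAGES] = (epte_t){
        .present  = h.present,
        /* permissions are the intersection of both levels */
        .writable = v.writable && h.writable,
        .pfn      = h.pfn,
    };
}

int main(void)
{
    epte_t vept[NPAGES] = {0}, eptl1[NPAGES] = {0}, shadow[NPAGES] = {0};
    vept[3]  = (epte_t){ .present = true, .writable = true, .pfn = 7 };
    eptl1[7] = (epte_t){ .present = true, .writable = true, .pfn = 42 };
    shadow_one(vept, eptl1, shadow, 3);
    printf("gpfn 3 -> hpfn %llu (writable=%d)\n",
           (unsigned long long)shadow[3].pfn, shadow[3].writable);
    return 0;
}
```

For a Span guest, Figure 3(c) extends this picture: each attached L1 maintains its own Virtual EPT, and L0 must keep all the resulting translations consistent so that every L1 sees the same guest memory.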