Mythbusting Goes Virtual: Debunking Four Common vSphere “Truths”


Written by Scott D. Lowe

Introduction

The information being presented in this paper comes courtesy of the great minds of Eric Sloof, a VMware Certified Instructor, vExpert, consultant and active VMware community member; and Mattias Sundling, vExpert and Dell evangelist focused on the virtualization space. The information presented here was discussed in depth during an April 2, 2012 webcast with Mattias Sundling and Eric Sloof.

Regardless of the underlying technology solution, as anything becomes increasingly popular and widespread in use, certain pieces of sometimes inaccurate information about that product become accepted as fact, often taking on legend-like status. Moreover, as a product matures, it changes; it evolves by taking on new features, shedding old ones and improving the functionality everywhere else. However, no matter how much a product matures and no matter how much it evolves, many products carry with them myths that follow through the ages: myths that may or may not have once been true, but are used as truisms nonetheless even as the version count rises ever higher. In this white paper, we will expose four such myths about vSphere.

Myth #1: RDMs have better performance than VMFS.

What is RDM?

A raw device mapping (RDM) is created when a vSphere administrator configures a virtual machine’s virtual disk to point directly to, for example, a LUN (logical unit number) on a storage array. With an RDM in place, a virtual machine can access that storage just as if it were any other disk.

RDMs operate as follows: The virtual machine’s initial access to an RDM virtual disk results in the virtual machine being pointed to a small mapping file. This mapping file is a symbolic link containing the raw ID of the intended storage on the storage array. Once it learns that raw ID, the virtual machine points directly to the raw ID on the storage array and no longer needs to make use of the mapping file, as illustrated in Figure 1.

Figure 1. A VM initially accesses an RDM virtual disk using a mapping file, but subsequently uses the raw ID.

RDMs have two modes: virtual and physical

When considering the use of RDMs, bear in mind that they come in two different flavors:

• Virtual compatibility mode—When an RDM is configured in virtual mode, it appears to the guest operating system just like a virtual disk does when it’s housed inside a VMFS volume. With this mode, administrators are still able to enjoy the benefits that come with the use of VMFS, including advanced file locking and snapshots. Further, because virtual mode continues to provide a level of hardware abstraction, it is more portable across storage hardware than physical mode.

• Physical compatibility mode—When an RDM is in physical mode, the volume takes on the characteristics of the mapped device, which provides the greatest flexibility in managing the volume using native SAN tools. However, physical RDMs lose some of the features found with virtual volumes, including the ability to be snapshotted, cloned, made into a template, or migrated if the migration involves copying the disk.
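To make the mechanics concrete, the sketch below uses pyVmomi (VMware's Python SDK for the vSphere API) to attach a LUN to an existing virtual machine as an RDM in either compatibility mode. It is illustrative only: the vCenter address, credentials, VM name, LUN device path and controller key are assumptions, not values from the webcast.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-style connection; certificate checking is disabled for brevity.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "testvm")  # hypothetical VM name

# The backing object points the virtual disk at the raw LUN; vSphere
# represents it on a datastore as the small mapping file from Figure 1.
backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
backing.compatibilityMode = "virtualMode"       # or "physicalMode"
backing.deviceName = "/vmfs/devices/disks/naa.6000..."  # raw LUN path (assumed)
backing.diskMode = "persistent"

disk = vim.vm.device.VirtualDisk()
disk.backing = backing
disk.controllerKey = 1000                       # first SCSI controller (assumed)
disk.unitNumber = 1
disk.key = -101                                 # placeholder key for a new device

change = vim.vm.device.VirtualDeviceSpec()
change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
change.device = disk

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
Disconnect(si)
```

Note that even here the RDM is still represented by a mapping file on a datastore, exactly as Figure 1 describes.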
The source of the myth

Because the virtual machine is accessing storage directly and not going through some of the abstraction that takes place when the hypervisor is placed in the middle, there is a myth that RDMs have superior performance over virtual storage devices that make use of vSphere Virtual Machine File System (VMFS) datastores. Evidence of this myth abounds in forum articles and other resources outlining administrators’ attempts to use RDMs to eke out as much performance as possible for storage-intensive workloads, such as those supporting databases.

It’s not unreasonable to assume that “raw” would translate into increased performance for the virtual machine, but this myth has been well and truly busted: in fact, RDMs operate with performance characteristics on par with VMFS storage.

This is demonstrated as one starts to peer under the covers at what’s happening with the host system and, in particular, how these storage calls interact with the hypervisor kernel. By monitoring the kernel, the entire story of how storage operates becomes clear. Through this monitoring, an administrator can watch the “hidden” story of storage activities and what impact these activities have on overall performance.

Testing the myth

To evaluate this myth, Eric performed tests using three distinct scenarios, two involving RDMs and one using VMFS as primary storage. The tests used a single virtual machine configured with a SCSI adapter, but with four different volumes:

• Virtual RDM
• Physical RDM
• VMDK file on VMFS
• A disk that connects to an iSCSI target through the Microsoft iSCSI initiator that ships with all current editions of Windows

Otherwise, the environment was configured as follows:

• vSphere 5.0, virtual machine hardware version 8
• The virtual machine was running Windows Server 2008
• It was configured with 4 gigabytes of memory
• A single virtual CPU was added to the virtual machine
• The virtual machine was connected to the local area network’s Cisco 2960 switch
• The storage being used was an Iomega PX6

In measuring latency directly as storage commands make their way through the kernel, Eric discovered that there isn’t much of a difference among the storage configurations, since they all have to go through the kernel. The exception is the iSCSI option, which just goes out over the network and connects to an iSCSI target directly. However, at 1 Gbps, iSCSI had a top throughput rate of 112.6 MBps.

The results? Busted!

In testing, Eric discovered that there was very little difference between either of the RDM configurations and the VMFS configuration. In other words, while there may be other reasons to choose an RDM-based volume over a VMFS-based volume, doing so for performance reasons alone isn’t necessary.

VMware’s test results

Even VMware has busted this myth in a pretty big way, as shown in Figure 2. The PDF file from which the chart was sourced includes a wide variety of test cases that fully debunk the RDM vs. VMFS myth.

Figure 2. Random mixed I/O per second (higher is better)

Reasons to choose VMFS over RDMs

Now, understanding that performance isn’t a reason to choose RDMs, what are some better reasons to choose VMFS? VMware has spent years improving VMFS and, with vSphere 5, has made tremendous improvements to this robust, cluster-aware file system, with features such as:

• Storage I/O control
• Storage vMotion
• Storage DRS
• Large volume size: 64 TB
• Large VMDK file size: 2 TB
• Changed block tracking (CBT) support (CBT tracks all of the storage blocks in a virtual machine that have changed since a point in time)
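Before weighing these trade-offs, it helps to know what a given virtual machine is already using. The following sketch, under the same connection assumptions as before and with a hypothetical helper name, walks a VM's virtual hardware and reports whether each disk is an RDM (and in which compatibility mode) or an ordinary VMDK.

```python
from pyVmomi import vim

def describe_disks(vm):
    """Print a one-line summary for each virtual disk on a VM."""
    for dev in vm.config.hardware.device:
        if not isinstance(dev, vim.vm.device.VirtualDisk):
            continue
        b = dev.backing
        if isinstance(b, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
            kind = "RDM (%s)" % b.compatibilityMode  # 'virtualMode' or 'physicalMode'
        elif isinstance(b, vim.vm.device.VirtualDisk.FlatVer2BackingInfo):
            kind = "VMDK on a datastore"
        else:
            kind = type(b).__name__
        # Even an RDM reports a fileName: that is its mapping file.
        print("%s: %s, %s" % (dev.deviceInfo.label, kind,
                              getattr(b, "fileName", "no backing file")))
```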
When to choose RDMs over VMFS

Even though RDMs don’t offer better performance, there are times when an RDM should be considered. When a virtual machine needs access to a particularly large single volume—one that is greater than 2 TB in size—an administrator might consider using a physical RDM, which provides direct access to a volume of up to 64 TB in size and is not subject to VMDK file size limits, which remain at 2 TB. Note that this 64 TB capability is valid only for physical RDMs; virtual RDMs are still limited to a size of 2 TB.

Another time when RDMs may reasonably come into play is when there is a need to perform SAN snapshotting, which results in snapshots not supported by vSphere. Before a SAN can take a snapshot, the virtual machine must be quiesced, which means that the virtual machine needs to flush buffered data to disk and prepare for the snapshot. If you are using SAN snapshots, which do not communicate with the vSphere layer, then you need to use RDMs with native file systems, such as NTFS or EXT3. Another scenario that requires the use of RDMs comes when there is a need to cluster virtual machines with Microsoft Clustering Services.

Of the VMFS features above, changed block tracking deserves a closer look. CBT tracks all of the storage blocks in a virtual machine that have changed since a point in time. This feature is incredibly powerful because backup and replication technologies can rely on vSphere’s own vStorage application programming interfaces (APIs), rather than either on drivers and software developed from scratch or on traditional full and incremental backup methodologies for data protection.

Requirements for using CBT

A number of requirements must be met for CBT to operate:

• Since CBT was introduced in vSphere 4, the host must be running at least that version of vSphere.
• CBT must actually be enabled for the virtual machine. This will be discussed below.
• The virtual machine being tracked must be running virtual hardware version 7 or above.
• The virtual machine must be using a storage mechanism that runs through the vSphere storage stack. Such mechanisms include VMFS, NFS and RDMs in virtual compatibility mode; an RDM in physical compatibility mode is not supported. iSCSI initiators installed inside a virtual machine do not work with CBT, either.

Benefits of CBT

By using CBT, administrators can drastically shrink their organization’s backup windows, since the backup application doesn’t need to scan the VMDK files for block changes when doing incremental or differential backups; the disk subsystem is also less heavily utilized during backups. Even when a full backup is performed, CBT can still be useful.
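As a rough illustration of the workflow just described, the hedged sketch below enables CBT on a virtual machine and then asks vSphere which disk areas have changed since a saved change ID. The helper names, snapshot handling and change-ID bookkeeping are assumptions; real backup products layer considerably more on top of these two calls.

```python
from pyVmomi import vim

def enable_cbt(vm):
    # Requires virtual hardware version 7 or above; the setting takes
    # effect after the next stun/unstun cycle (power-on, snapshot, vMotion).
    spec = vim.vm.ConfigSpec(changeTrackingEnabled=True)
    return vm.ReconfigVM_Task(spec=spec)

def changed_extents(vm, snapshot, disk_key, change_id="*"):
    # "*" returns all in-use areas, which suits an initial full backup;
    # pass the changeId recorded at the previous backup for incrementals.
    # A production tool would loop, advancing startOffset until the whole
    # disk has been covered; a single call is shown here for brevity.
    info = vm.QueryChangedDiskAreas(snapshot=snapshot, deviceKey=disk_key,
                                    startOffset=0, changeId=change_id)
    return [(area.start, area.length) for area in info.changedArea]
```

The second call is what lets a backup application copy only changed blocks instead of scanning entire VMDK files.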