Maximizing Virtual Machine Performance: An Introduction to Performance Tuning


Written by Mattias Sundling, Evangelist, Dell

Introduction

VM performance is ultimately determined by the underlying physical hardware and the hypervisor that serves as the foundation for your virtual infrastructure. The construction of this foundation has become simpler over the years, but there are still several areas that should be fine-tuned in order to maximize VM performance in your environment. While some of the content here is generic to any hypervisor, this document focuses on VMware vSphere 5.0.

This is an introduction to performance tuning and is not intended to cover everything in detail. Most topics have links to sites that contain deep-dive information if you wish to learn more.

Requirements for top VM performance

System requirements

To ensure top performance from your VMs, your system must have the following:

• VMware vSphere 5.0 or later—If you are running an older version, you must upgrade. Performance and scalability have increased significantly since versions 3 and 4.
• Virtual machine hardware version 8—This hardware version introduces features to increase performance. If you are not running virtual hardware version 8, upgrade VMware Tools first, and then shut down the VM's guest OS. In the vSphere client, right-click the VM and select Upgrade Virtual Hardware.

Warning: Once you upgrade the virtual hardware version to 8, you will lose backward compatibility with versions prior to vSphere 5.0. Therefore, if you have a mixed environment, make sure to upgrade all vSphere hosts first.

Virtual hardware and guest OS configuration

The sections below make recommendations for configuring the various hardware components for best performance, as well as for optimizations that can be done inside the guest OS.
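Before tuning individual components, it can help to know which VMs still run an older virtual hardware version. The version is recorded in each VM's .vmx configuration file under the key virtualHW.version; as an illustrative sketch (not part of the original text, and using a made-up .vmx fragment), a few lines of Python can flag VMs below version 8:

```python
def virtual_hw_version(vmx_text: str) -> int:
    """Extract the virtualHW.version value from .vmx file contents."""
    for line in vmx_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "virtualHW.version":
            return int(value.strip().strip('"'))
    raise ValueError("virtualHW.version not found")

def needs_hw_upgrade(vmx_text: str, target: int = 8) -> bool:
    """True if the VM's hardware version is older than the target (8 for vSphere 5.0)."""
    return virtual_hw_version(vmx_text) < target

# Hypothetical .vmx fragment for a VM named "web01":
sample_vmx = 'config.version = "8"\nvirtualHW.version = "7"\ndisplayName = "web01"\n'
print(needs_hw_upgrade(sample_vmx))  # a version-7 VM should be upgraded
```

Remember that the actual upgrade still follows the order given above: VMware Tools first, then shut down, then Upgrade Virtual Hardware.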
CPU

Start with one vCPU.

Start with one vCPU; most applications work well with that. If you start with multiple vCPUs and then realize that you have over-provisioned, it may be cumbersome to remove the unnecessary vCPUs, depending on your OS. Therefore, start with one vCPU; later you can evaluate CPU utilization and application performance. If the application response is poor, you can add vCPUs as needed.

Select the correct hardware abstraction layer in the guest OS.

Make sure you select the correct hardware abstraction layer (HAL) in the guest OS. The HAL drives the OS for the CPU; the choices are "Uni-Processor (UP), single processor" or "Symmetric Multiprocessing (SMP), multiple processors." Windows 2008 uses the same HAL for both UP and SMP, which makes it easy to reduce the number of CPUs. Note the following:

• Windows 2003 and earlier have different HAL drivers for UP versus SMP. Windows automatically changes the HAL driver when going from UP to SMP, but it can be very complicated to go from SMP to UP, depending on the OS and version.
• If you have a VM running Windows 2003 SP2 or later that has been reduced from two vCPUs to one vCPU, you will still have the multiprocessor HAL in the OS. This results in slower performance than a system with the correct HAL. The HAL driver can be manually updated; however, Windows versions prior to Windows 2003 SP2 cannot be easily corrected. I have personally experienced systems with an incorrect HAL driver: they consume more CPU, which can often peak to unnecessarily high CPU-utilization percentages when the system gets stressed.
• Make sure your multi-processor VMs have an OS and application that support multi-threading and take advantage of it. If they don't, you'll be wasting resources.

Figure 1. Foglight vOPS Enterprise from Dell looks beyond the hypervisor into the application layer.

Be aware that CPU scheduling varies depending on the version of VMware ESX or ESXi.

VMware ESX 2 used strict co-scheduling, which required a two-vCPU VM to have two physical CPUs (pCPUs) available at the same time. pCPUs had a single or dual core, leading to slow performance when hosting too many VMs.

ESXi 3 introduced relaxed co-scheduling, which allows a two-vCPU VM to be scheduled even though there were not two pCPUs available at the same time.

ESXi 4 refined the relaxed co-scheduler even further, increasing performance and scalability. Intel SMT was introduced, which exposes two hardware contexts from a single core. This can increase performance 10–30 percent, depending on the workload.

ESXi 5 has some enhancements around Intel SMT to ensure high efficiency and performance for mission-critical applications. The number of cores in each pCPU can be up to 10, which makes CPU scheduling easier.

Watch CPU % Ready; a value of 5–10 percent indicates CPU congestion.

The best indication that a VM is suffering from CPU congestion on a vSphere host is when CPU % Ready reaches 5–10 percent over time. In this range, further analysis might be needed. Values higher than 10 percent definitely show critical contention. This means the VM has to wait for the vSphere host to schedule its CPU requests, due to CPU resource conflicts with other VMs. This performance metric is one of the most important ones to monitor in order to understand the overall performance in a virtual environment. It can be seen only at the hypervisor level and not inside the guest OS.

Figure 2. This example shows a VM with almost the same CPU utilization across all vCPUs. That means the OS and application are multi-threaded.
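Note that vCenter's real-time charts report CPU Ready as a summation in milliseconds per sample, while the guidance above is a percentage. An illustrative sketch of the usual conversion (assuming the 20-second real-time sampling interval; adjust for other intervals), with the 5–10 percent thresholds from the text:

```python
def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0, vcpus: int = 1) -> float:
    """Convert a CPU Ready summation (ms per sample) into CPU % Ready.

    The summation accumulates across all vCPUs, so divide by the vCPU
    count to get an average per-vCPU figure.
    """
    return ready_ms / (interval_s * 1000.0 * vcpus) * 100.0

def classify(ready_pct: float) -> str:
    """Apply the 5-10 percent guidance from the text."""
    if ready_pct > 10:
        return "critical contention"
    if ready_pct >= 5:
        return "possible congestion - analyze further"
    return "ok"

# 2,000 ms of ready time in a 20 s sample on a 1-vCPU VM is 10 percent:
pct = cpu_ready_percent(2000)
print(pct, classify(pct))
```

For example, a 2-vCPU VM would need 4,000 ms of accumulated ready time in the same interval to reach the same 10 percent per-vCPU figure.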
Figure 3. CPU % Ready is an important metric for understanding VM performance.

Virtual NUMA can improve performance.

Virtual NUMA (vNUMA) exposes the host NUMA topology to the guest OS. If the guest OS and applications are NUMA-aware, they can use the underlying NUMA architecture more efficiently, which will improve performance. This requires virtual hardware version 8.

Memory

The memory limit setting often hurts more than it helps.

When you create a VM, you allocate it a certain amount of memory. There is a feature in the VM settings known as the memory limit; it often hurts more than it helps. This setting limits the hypervisor memory allocation to a value other than what is actually assigned. The guest OS will still see the full amount of allocated memory; however, the hypervisor will allow use of physical memory only up to the memory limit amount.

The only situation I have found for using the memory limit is an application that requires, for example, 16 GB of memory to install or start, but only 4 GB in operation. In a case like that, you can create a memory limit at a much lower value than the actual memory allocation. The guest OS and application will see the full 16 GB of memory, but the vSphere host limits the physical memory to 4 GB.

In reality, the memory limit often gets set on VMs that were not intended to be limited. This can happen when you move VMs across different resource pools or perform a P2V of a physical system. The worst-case scenario, which I have seen in the field multiple times, is setting this memory limit to a value (such as 512 MB) on VM templates, since all VMs deployed from the templates will inherit the memory limit setting. The best practice is to set the memory limit to unlimited.

Figure 4. Foglight vOPS Enterprise enables you to detect, diagnose and resolve VM problems.
Memory definitions

• Granted: Physical memory granted to the VM by the ESX(i) host
• Active: Physical memory actively being used by the VM
• Ballooned: Memory being used by the VMware Memory Control Driver to allow the VM's OS to selectively swap memory
• Swapped: Memory being swapped to disk

Figure 5. Memory utilization (active memory) in this example is very low over time, making it safe to decrease the memory setting without affecting VM and application performance.

LUNs that are too big result in too many VMs, SCSI reservation conflicts, and potentially lower disk I/O due to metadata locking.

For example, if you allocate 2 GB of memory to a VM and there is a limit of 512 MB, the guest OS will see 2 GB of memory, but the vSphere host will allow only 512 MB of physical memory. If the guest OS requires more than 512 MB, the memory balloon driver will start to inflate to let the guest OS decide which pages are actively being used. If the balloon can't reclaim any more memory, the guest OS will start to swap. If the balloon can't deflate, or if memory usage is too high on the vSphere host, the host will start to use memory compression, then VMkernel swapping as a last resort. Ballooning is a first warning signal; guest OS and VMkernel swapping will definitely hurt VM performance, and they also load the vSphere host and the storage subsystem, which have to serve as virtual memory.

To determine the correct amount of memory, you'll need to monitor active-memory utilization over at least 30–90 days in order to see patterns. Some systems might be in use only during a certain part of the 90-day period, but be used very heavily during that time.

Understand vSphere's memory reclamation techniques.

It's widely considered a best practice to right-size memory allocation in order to avoid placing extra load on vSphere hosts due to memory reclamation. You'll want to run as many VMs as possible and will probably over-commit memory (allocate more than you have). There are several techniques that
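One way to turn the active-memory history described above into a right-sizing suggestion (an illustrative sketch, not a method from the original text) is to take a high percentile of the observed samples and add headroom, so that periodic peaks like the heavy-use window mentioned above are still covered:

```python
def recommend_memory_mb(active_samples_mb, percentile=0.95, headroom=1.25):
    """Suggest a memory allocation from observed active-memory samples (MB).

    Uses a high percentile rather than the maximum so that one-off spikes
    do not dominate, then multiplies by a headroom factor for caches and growth.
    """
    if not active_samples_mb:
        raise ValueError("need at least one sample")
    ordered = sorted(active_samples_mb)
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    return ordered[idx] * headroom

# Hypothetical 90 days of daily peak active memory for a VM allocated 4096 MB,
# with heavy use only during a 10-day period:
samples = [700] * 80 + [1400] * 10
print(recommend_memory_mb(samples))  # well below 4096 MB, so it is safe to reduce
```

The percentile and headroom values here are assumptions to tune per workload; the point is that the decision should come from the 30–90 day active-memory pattern, not from a single snapshot.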