Architectural Decisions for LinuxONE Hypervisors

Architectural Decisions for LinuxONE Hypervisors
July 2019 Webcast – Virtualization options for Linux on IBM Z & LinuxONE

Richard Young, Executive IT Specialist, Virtualization and Linux, IBM Systems Lab Services
Wilhelm Mild, IBM Executive IT Architect for Mobile, IBM Z and Linux, IBM R&D Lab, Germany

Agenda
➢ Benefits of virtualization
• Available virtualization options
• Considerations for virtualization decisions
• Virtualization options for LinuxONE & Z
  • Firmware hypervisors
  • Software hypervisors
  • Software containers
• Firmware hypervisor decision guide
• Virtualization decision guide
• Summary

Why do we virtualize? What are the benefits of virtualization?
▪ Simplification – use of standardized images, virtualized hardware, and automated configuration of virtual infrastructure
▪ Migration – one of the first uses of virtualization; enables coexistence and phased upgrades and migrations. It can also simplify hardware upgrades by making changes transparent.
▪ Efficiency – reduced hardware footprints, better utilization of available hardware resources, and reduced time to delivery. Reuse of deprovisioned or relinquished resources.
▪ Resilience – run new and old versions in parallel, avoiding service downtime
▪ Cost savings – having fewer machines translates to lower costs in server hardware, networking, floor space, electricity, and administration (perceived)
▪ To accommodate growth – virtualization allows the IT department to be more responsive to business growth, hopefully avoiding interruption

Agenda ➢ Available virtualization options

What hypervisors and virtualization options are available for Linux on IBM Z & LinuxONE?
❑ IBM PR/SM (traditional) or DPM (Dynamic Partition Manager) – firmware-based virtualization to securely share and partition hardware resources. DPM provides graphical and REST interfaces with simplified management, automation, and dynamic capabilities.
❑ IBM z/VM – IBM-developed, software-based mainframe virtualization that can be traced back to the beginning of virtualization in computing.
❑ Linux KVM – open-source, software-based virtualization that supports multiple hardware architectures. Kernel-based virtual machines started in the mid-2000s.
❑ Containers – system containers and application containers. Via Linux cgroups and namespaces, they provide an isolated and managed environment for applications to run. Containers share a single host kernel.
❑ LXD containers – LXD is a system container manager providing unprivileged containers with a CLI and API. It also has OpenStack integration.
❑ Docker-based containers – simplified containers with a toolset for the container image build process, an API and CLI, and a registry. Clustering was added with Swarm. (A minimal API sketch follows this list.)
❑ IBM Secure Service Container (SSC) – fully encrypted workload in a partition. Traditional system administrator access is removed; network access is limited and encrypted. Primarily deployed with IBM Cloud Private (ICP) – SSC for ICP, a Kubernetes-based deployment/orchestration solution.
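The Docker bullet above mentions an image build toolset, a registry, and an API and CLI. As a minimal, hedged sketch of what driving that API looks like from Python, the snippet below uses the `docker` SDK (`pip install docker`) against a local Docker daemon; the image tag and command are illustrative placeholders, not something defined in this deck.

```python
import docker

# Connect to the local Docker daemon (the same engine the docker CLI talks to).
client = docker.from_env()

# Pull an image and run a short-lived container. For multi-architecture images
# the registry manifest selects the s390x variant automatically on LinuxONE.
output = client.containers.run("ubuntu:20.04", "uname -m", remove=True)
print(output.decode().strip())  # expected to print "s390x" on IBM Z / LinuxONE

# List the containers the engine currently knows about.
for c in client.containers.list(all=True):
    print(c.short_id, c.name, c.status)
```

The same script runs unchanged on any architecture the Docker engine supports, which is the point of the "containers share a single host kernel" model described above.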
Agenda ➢ Considerations for virtualization decisions

Considerations for virtualization decisions
❑ Software supported in combination with it
❑ Open vs. proprietary
❑ Hardware support – e.g. NVMe, CTC, ISM
❑ Outage avoidance – live migration/relocation
  ➢ Live relocation requirements x, y, z
❑ Current in-house standards – distros
❑ Feature/function and requirements
❑ Available skill set in house to manage
❑ Ability to hire talent with needed skills
❑ Dynamic by design – no outages to change
❑ Learning curve / duration to become fluent or expert
❑ Performance / scalability – simplicity vs. complexity
❑ Ecosystem – documentation, training, 3rd-party solutions and support
❑ Level of isolation / security
❑ Certifications & multitenancy requirements
❑ Cost – direct / indirect for additional features
❑ Monitoring, security, automation, auditing, time to train
❑ Automation capability – REST APIs or 3rd-party tooling, e.g. Kickstart deployment, OpenStack, or Ansible

Agenda ➢ Virtualization options for LinuxONE & IBM Z

IBM Z and LinuxONE Virtualization – Built-in, Shared-Everything Architecture
IBM® Z & LinuxONE™ systems provide hardware-assisted virtualization:
• Cores are designed to run at near 100% utilization nearly 100% of the time
• Provisioning of virtual servers in seconds
• High granularity of resource sharing (<1%)
• Upgrade of physical resources without taking the system down
• Scalability of up to 1000s of virtual servers
• More with less: more virtual servers per core, sharing of physical resources
• Extensive life-cycle management
• HW-supported isolation, highly secure (EAL5+ or EAL4+ certified)
Virtualization levels: (1) LPAR – PR/SM or IBM DPM* – up to 85 logical partitions; (2+3) KVM and z/VM – 1000s of virtual machines.

Architectural Options
1. Firmware hypervisor management
  ❑ Traditional PR/SM
  ❑ IBM Dynamic Partition Manager
2. Optionally, one or more software hypervisors
  ❑ IBM z/VM
  ❑ KVM
3. Optionally, one or more container technologies
  ❑ Docker
  ❑ IBM SSC for ICP
  ❑ OKD

IBM LinuxONE Virtualization – Simplified view of virtualization options on IBM LinuxONE
[Figure: real cores (P1, P2, …) managed by traditional PR/SM or PR/SM plus IBM DPM; on top, LPAR1–LPAR8 run KVM or IBM z/VM, each hosting SLES, RHEL, and Ubuntu guests on virtual CPUs.]
• All Linux images are capable of hosting containers.
• There are typically dozens, even hundreds of Linux servers in a KVM or z/VM LPAR.
• P1–P12 are physical cores, also known as Integrated Facility for Linux (IFL) processors.
• * Only one shared pool of cores per system.
(A minimal KVM guest-management sketch follows this figure.)
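Because a single KVM LPAR commonly hosts dozens or hundreds of Linux guests, day-to-day management is usually scripted against libvirt rather than done by hand. The sketch below is a minimal illustration using the libvirt Python bindings; the connection URI is the standard local QEMU/KVM one, and the guest name `linux01` is a hypothetical placeholder rather than anything defined in this deck.

```python
import libvirt

# Connect to the local QEMU/KVM hypervisor instance.
conn = libvirt.open("qemu:///system")

# Report every defined guest and whether it is currently running.
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(f"{dom.name()}: {'running' if running else 'stopped'}")

# Boot one guest by name if it is not already active (hypothetical guest name).
guest = conn.lookupByName("linux01")
if not guest.isActive():
    guest.create()  # starts the defined virtual machine

conn.close()
```

The same bindings work whether the KVM host is an x86 server or an s390x LPAR, which reflects the "supports multiple hardware architectures" point made for KVM earlier.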
What is IBM Dynamic Partition Manager?
• Built on existing PR/SM technology capabilities
• Simplified, consumable, enhanced partition life-cycle and integrated dynamic I/O management capabilities
• Provides the technology foundation that enables APIs for IaaS and secure, private KVM clouds (a REST automation sketch appears at the end of this section)
[Figure: Linux and KVM partitions running on PR/SM with DPM, managed through a "powerful and easy" HMC.]

Technical Specifications for DPM
IBM z14, z13, z13s, IBM LinuxONE Emperor I & II or Rockhopper
– HW for DPM: Feature Code #0016
– Two dedicated OSA-Express6S 1000BASE-T Ethernet #0426 or OSA-Express5S 1000BASE-T Ethernet #0417
Supported operating environments
– Linux/KVM – FCP and FICON
– z/VM 6.4 and newer – FCP and FICON
– IBM Secure Service Container appliances – FCP and FICON
Supported I/O adapters
– FICON Express including 16S+ (types FCP & FICON)
– FCP Express32S
– OSA-Express5S, 6S, and 7S
– Crypto Express5S and Crypto Express6S
– zEDC Express
– RoCE Express and RoCE Express2
– HiperSockets
• Support for auto-configuration of devices to simplify Linux installation, where Linux distribution installers exploit the function
• Secure FTP through the HMC for boot and installation of an operating system via FTP
• Optionally specify VLANs to use on configured OSA adapters
No support yet for
• GDPS® Virtual Appliance
• FICON CTC (required for z/VM SSI LGR)
• FICON-attached tape
• ISM (SMC-D)
• Internal NVMe SSDs

z/VM Virtualization – Overview
➢ Virtualizes CPUs, memory, I/O devices, disks, networks, and switches, with possible overcommitment (a CP query sketch from a Linux guest appears at the end of this section)
➢ Highly effective and granular sharing and resource-shifting definitions for Linux guests
➢ Clusters up to four z/VM images or physical systems as members of a Single System Image (SSI) cluster
➢ Live Guest Relocation (LGR) of Linux guests between the members of an SSI cluster
➢ Contains LDAP and RACF security capabilities

Combine LPARs with z/VM CPU Pooling
▪ LPAR with 5 Linux CPUs / IFLs
▪ Create 2 pools – one with 4 CPUs / cores and one with 1 CPU / core
▪ Place the four WAS guests in the 4-core pool and the two DB2 guests in the 1-core pool
  • Requires a 4-core WAS entitlement
  • Requires a 1-core DB2 entitlement
[Chart: PVU entitlements compared for an LPAR with 5 cores vs. a 5-core LPAR with CPU pooling; the four WAS guests (2 virtual cores each) sit in a pool capped at 4 cores and the two DB2 guests (1 core each) in a pool capped at 1 core.]
▪ Avoids entitling all five cores of the LPAR for each product

© Copyright IBM Corporation 2018, 2019
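For the z/VM environment just described, a Linux guest can interrogate its hypervisor directly through the CP interface. The snippet below is a small, hedged sketch assuming the `vmcp` utility from s390-tools is installed in the guest and its kernel module is loaded; it is illustrative only and not part of the original deck.

```python
import subprocess

def cp(command: str) -> str:
    """Issue a CP command from a Linux guest via the vmcp interface (s390-tools)."""
    result = subprocess.run(["vmcp", command], check=True,
                            capture_output=True, text=True)
    return result.stdout.strip()

# Which z/VM user ID and system this Linux guest is running under.
print(cp("QUERY USERID"))

# Virtual CPUs defined for this guest, e.g. to sanity-check a share/pooling setup.
print(cp("QUERY VIRTUAL CPUS"))
```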
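The DPM sections above highlight REST interfaces as the foundation for automation. One way to drive them from Python is the open-source zhmcclient library; the sketch below is a hedged example that assumes an HMC in DPM mode is reachable, and the host name, credentials, CPC name, and partition name are all placeholders.

```python
import zhmcclient

# Open a session against the HMC Web Services (REST) API.
session = zhmcclient.Session("hmc.example.com", "apiuser", "apipassword")
client = zhmcclient.Client(session)

# Pick a CPC (machine) and, if it runs in DPM mode, list its partitions.
cpc = client.cpcs.find(name="LINUXONE1")
if cpc.dpm_enabled:
    for partition in cpc.partitions.list():
        print(partition.name, partition.get_property("status"))

    # Start one partition by name (placeholder name).
    part = cpc.partitions.find(name="LNXPART1")
    if part.get_property("status") == "stopped":
        part.start()

session.logoff()
```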