Network Function Virtualization Seminar

Total pages: 16. File type: PDF. Size: 1,020 KB.

Network Function Virtualization Seminar
Matthias Falkner, Distinguished Engineer, Technical Marketing
Nikolai Pitaev, Technical Marketing Engineer
TECSPG-2300 © 2020 Cisco and/or its affiliates. All rights reserved. Cisco Public

Questions? Use Cisco Webex Teams to chat with the speaker after the session:
1. Find this session in the Cisco Events Mobile App
2. Click "Join the Discussion"
3. Install Webex Teams or go directly to the team space
4. Enter messages/questions in the team space

Agenda: TECSPG-2300 Network Function Virtualization – A Use-Case-Based Technology Deep-Dive
• Introduction: 08:45 – 09:05 (Matt)
• NFV Primer: 09:05 – 10:00 (Matt)
• Virtualizing Branch Infrastructure: 10:00 – 10:45 (Nikolai)
• Break
• SP/Cloud Virtualization: 11:00 – 11:30 (Nikolai)
• Connecting to Multiple Clouds: 11:30 – 12:15 (Nikolai)
• Multi-Tenanted SMB Services: 12:15 – 12:50 (Matt)
• Conclusion: 12:50 – 13:00 (Matt)

Introduction

Virtualization of Network Functions (NFV) – Current State
• The idea of de-coupling software from hardware is not new
• Closely linked to automation / orchestration
• Increased focus on simplifying Enterprise architectures, particularly on L4-7 services
• SPs drive adoption, but Enterprises are following suit
• Both consumption models (MSP, self-managed) are considered

Why Virtualize?
Motivations for the Enterprise (OPEX and CAPEX)
• Deployment flexibility
• Deploy on standard x86 servers
• Reduction of the number of network elements
• Economies of scale
• Reduction of on-site visits
• Service elasticity: deploy as needed
• Deployment of standard on-premise hardware
• Simpler architectural paradigm (is HA still needed?)
• Simplification of the physical network architecture
• Leveraging virtualization benefits (hardware oversubscription, vMotion, ...)
• Best-of-breed functions
• Increased potential for automated network operations
• Re-alignment of organizational boundaries

The 4 Layers of a Virtualized System Architecture
4. Automation / Orchestration (Cisco DNA Center, NSO)
3. VNFs: Virtual Router (ISRv, CSR), Virtual Firewall (ASAv, NGFWv), Virtual Wireless LAN Controller (vWLC), Virtual WAN Optimization (vWAAS), 3rd-party VNFs
2. Network Functions Virtualization Infrastructure Software (NFVIS)
1. Compute: ISR 4000 + UCS E-Series, UCS C-Series, CSP-5444 / Enterprise Network Compute System

NFV Primer

This section covers basic VNF technologies. It is all about virtual network functions; we are not talking about generic virtualization techniques. Topics covered next:
• IO: SR-IOV, virtual switches, service chaining
• CPU: hyperthreading, vCPU pinning, NUMA socket allocation
• Putting it all together: NFV performance insights
• VNF virtualization vs. containerization
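As an illustrative sketch of the vCPU pinning topic listed above (not taken from the seminar), the snippet below builds a libvirt-style <cputune> XML fragment that pins each guest vCPU to a dedicated physical CPU. The pinning map and the helper name are assumptions for the example, not a Cisco-recommended layout.

```python
# Illustrative sketch: render a vCPU -> pCPU pinning map as a libvirt-style
# <cputune> fragment. The mapping below is a made-up example layout.

def cputune_fragment(pinning):
    """Render {vcpu: pcpu} pinning as a libvirt <cputune> XML fragment."""
    lines = ["<cputune>"]
    for vcpu, pcpu in sorted(pinning.items()):
        lines.append(f"  <vcpupin vcpu='{vcpu}' cpuset='{pcpu}'/>")
    lines.append("</cputune>")
    return "\n".join(lines)

# Pin a 4-vCPU VNF to pCPUs 2-5, leaving pCPUs 0-1 for the host.
fragment = cputune_fragment({0: 2, 1: 3, 2: 4, 3: 5})
print(fragment)
```

A fragment like this would be placed inside the libvirt domain definition; hypervisor-specific tooling (ESXi, NFVIS) expresses the same idea differently.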
A System Architecture View with 3 VNFs
• Example: 3 CSR 1000v VMs scheduled on a 2-socket, 8-core x86 server; different CSR footprints are shown (4-vCPU, 1-vCPU, and 2-vCPU)
• Each guest runs the CSR software stack (IOS, Fman/PPE, HQF, packet scheduler, Rx IRQ handling, vNICs) on VM Linux, under its own guest OS scheduler
• Type 1 hypervisor: no additional host OS is represented; a vSwitch connects the vNICs
• The HV scheduler algorithm governs how vCPU/IRQ/vNIC/VM-kernel processes are allocated to pCPUs
• Note the various schedulers (guest OS schedulers and the HV scheduler) running ships-in-the-night

Packet Path from the Physical Interface into a VNF (by example of FD.io VPP)
(Diagram: the packet arrives at the pNIC and is copied into memory by the pNIC driver; FD.io VPP is kicked and switches the packet in host user space; the packet is placed into the shared packet memory of the vNIC (vHost-user) interface and QEMU is notified; a packet pointer is passed to the DPDK-virtio driver in the guest, and the VNF is interrupted for packet feature processing.)

Why does this matter?
• It illustrates contention for shared resources
• Each packet move consumes resources
• Packet pointer buffers have limited depth and can cause drops
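The limited-depth packet pointer buffers mentioned above can be illustrated with a toy model (all numbers are made up for the example; this is not VPP code): when packets arrive faster than the VNF drains its ring, the ring fills up and tail drops begin.

```python
from collections import deque

# Toy model of a bounded packet-pointer ring between the vSwitch and a VNF.
# Ring depth and per-tick rates are illustrative assumptions only.
RING_DEPTH = 256

def simulate(arrivals_per_tick, drains_per_tick, ticks):
    """Return (delivered, dropped) packet counts after `ticks` time steps."""
    ring = deque()
    delivered = dropped = 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(ring) < RING_DEPTH:
                ring.append(object())    # enqueue a packet pointer
            else:
                dropped += 1             # ring full: tail drop
        for _ in range(min(drains_per_tick, len(ring))):
            ring.popleft()               # VNF consumes a packet
            delivered += 1
    return delivered, dropped

# The VNF drains slower than packets arrive: drops appear once the ring fills.
delivered, dropped = simulate(arrivals_per_tick=100, drains_per_tick=80, ticks=50)
print(delivered, dropped)
```

The same back-pressure behavior is why vHost-user ring sizing and VNF scheduling latency both show up directly as packet loss at high rates.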
Potential Bottlenecks in a Virtualized System
(x86 host with OVS-DPDK or FD.io/VPP.) Bottlenecks can occur at each stage of the IO path:
• Intra-VM processing (e.g. features) in the guest application
• The IO driver in guest user space
• The hypervisor / virtualization layer (QEMU) and its vNICs
• The virtual switch / host IO path
• The pNIC drivers and physical interfaces

VNF Architecture Matters!
• A VNF can be associated with multiple vCPUs ...
• ... and consumes memory
• The VNF software architecture can impact performance
• Example: CSR 1000v vCPU allocations via resource templates (available in 3.16.02 and later)

CSR 1000v resource templates (vCPU allocation per plane; a 1-vCPU footprint shares the single vCPU across all planes):

Template                    | Plane           | 2 vCPU | 4 vCPU | 8 vCPU
Default (Data Plane Heavy)  | Control/Service |   1    |   1    |   1
                            | Data            |   1    |   3    |   7
Control Plane Heavy         | Control/Service |   1    |   2    |   2
                            | Data            |   1    |   2    |   6
Service Plane Medium        | Control/Service |   1    |   2    |   2
                            | Data            |   1    |   2    |   6
Service Plane Heavy         | Control/Service |   1    |   2    |   4
                            | Data            |   1    |   2    |   4
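The per-template vCPU splits can be captured as a simple lookup. A hedged sketch follows; the numbers are transcribed from the slide as best they can be read and should be treated as illustrative, not authoritative.

```python
# (control/service, data) vCPU split per CSR 1000v resource template,
# keyed by footprint size. Numbers transcribed from the slide; illustrative.
TEMPLATES = {
    "default":              {2: (1, 1), 4: (1, 3), 8: (1, 7)},
    "control_plane_heavy":  {2: (1, 1), 4: (2, 2), 8: (2, 6)},
    "service_plane_medium": {2: (1, 1), 4: (2, 2), 8: (2, 6)},
    "service_plane_heavy":  {2: (1, 1), 4: (2, 2), 8: (4, 4)},
}

def split(template, vcpus):
    """Return (control/service vCPUs, data-plane vCPUs) for a footprint.

    A 1-vCPU footprint shares its single vCPU across all planes and is
    therefore not represented as a split here.
    """
    return TEMPLATES[template][vcpus]

# Sanity check: each split sums to the footprint size.
assert all(sum(split(t, n)) == n for t in TEMPLATES for n in (2, 4, 8))
print(split("default", 8))
```

The takeaway is independent of the exact numbers: the same footprint size can yield very different data-plane capacity depending on the template chosen.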
NUMA (Non-Uniform Memory Access)
• NUMA is a memory-sharing technology that allows a processor core to use memory associated with other cores
• Accessing remote memory happens over the NUMA interconnect, which is typically slower than local memory access
• Benefit: each core can access its own memory, which allows simultaneous memory access
(Diagram: a 2-socket server with NUMA nodes 0 and 1, cores 0-7, per-core L1 data caches, shared L1 instruction and L2 caches per core pair, and a shared last-level L3 cache, memory controller, and interconnect per node; VPP and VM instances whose memory may sit on either node, or in external memory.)
• Performance implications:
  • Higher latency for remote memory access
  • Variable performance
  • Application memory may not be local to the core

CSR 1000v Performance: Polaris 16.10.01b, ESXi / SR-IOV / Single Feature / IMIX

Throughput (Mbps):
vCPUs | CEF   | ACL   | IPSec (Single AES) | NAT  | L4 FW | Basic QoS
1     | 6546  | 4656  | 781                | 3448 | 3741  | 5342
2     | 7093  | 5093  | 843                | 3516 | 3794  | 6250
4     | 8606  | 7075  | 1218               | 3844 | 3787  | 7590
8     | 15624 | 14494 | 2312               | 4546 | 4547  | 15396

Traffic profile: IMIX {64 bytes (58.33%), 594 bytes (33.33%), 1518 bytes (8.33%)}
PDR (Packet Drop Rate): 0.01%
*The maximum throughput license offered today is 10 Gbps; please contact us if you have a use case that requires more than 10G.
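Coming back to the NUMA discussion above: a NUMA-aware placement policy tries to keep all of a VNF's vCPUs (and its memory) on one node, so no memory access crosses the interconnect. A minimal sketch, assuming the 2-node, 8-cores-per-node topology from the diagram:

```python
# Minimal sketch of NUMA-aware vCPU placement: put all vCPUs of a VNF on a
# single node so its memory stays local. Topology is assumed for illustration.

def place_vnf(free_cores, vcpus):
    """Pick one NUMA node with enough free cores; return (node, cores) or None."""
    for node in sorted(free_cores):
        cores = free_cores[node]
        if len(cores) >= vcpus:
            chosen, free_cores[node] = cores[:vcpus], cores[vcpus:]
            return node, chosen
    return None  # no single node fits: placement would force remote memory access

free = {0: list(range(0, 8)), 1: list(range(8, 16))}
print(place_vnf(free, 4))   # fits entirely on node 0
print(place_vnf(free, 8))   # node 0 has only 4 cores left, so node 1 is chosen
```

Returning None rather than splitting across nodes mirrors the slide's point: a split placement still runs, but with higher and more variable memory latency.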
CSR 1000v Performance: Polaris 16.10.01b, KVM (RHEL) / SR-IOV / Single Feature / IMIX

Throughput (Mbps):
vCPUs | CEF   | ACL   | IPSec (Single AES) | NAT  | L4 FW | Basic QoS
1     | 3812  | 4312  | 750                | 3302 | 3575  | 4753
2     | 7148  | 3624  | 843                | 3536 | 3813  | 6304
4     | 8643  | 6911  | 1218               | 3781 | 3673  | 7786
8     | 16718 | 15023 | 2405               | 7276 | 7120  | 14740

Traffic profile: IMIX {64 bytes (58.33%), 594 bytes (33.33%), 1518 bytes (8.33%)}
PDR (Packet Drop Rate): 0.01%
*The maximum throughput license offered today is 10 Gbps; please contact us if you have a use case that requires more than 10G.
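The IMIX profile used in both tables implies an average frame size of roughly 362 bytes, which lets you convert the Mbps figures into packets per second. A quick worked example (using an 8-vCPU CEF figure from the tables; the conversion ignores inter-frame gap and preamble overhead):

```python
# Convert IMIX throughput (Mbps) to packets/second using the traffic profile
# from the tables: 64 B (58.33%), 594 B (33.33%), 1518 B (8.33%).
IMIX = [(64, 0.5833), (594, 0.3333), (1518, 0.0833)]

avg_bytes = sum(size * share for size, share in IMIX)   # ~361.8 bytes

def mbps_to_pps(mbps):
    """Packets per second at the IMIX average frame size (frame bytes only)."""
    return mbps * 1_000_000 / (avg_bytes * 8)

print(round(avg_bytes, 1))          # 361.8
print(round(mbps_to_pps(16718)))    # 8-vCPU CEF on KVM, roughly 5.8 Mpps
```

This is a useful sanity check when comparing the ESXi and KVM tables: small differences in Mbps correspond to large absolute differences in packet rate.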