Techniques for Monitoring and Measuring Virtualized Networks

June 22, 2018
Techniques for Monitoring and Measuring Virtualized Networks
Aman Shaikh & Vijay Gopalakrishnan, AT&T Labs - Research
© 2018 AT&T Intellectual Property. All rights reserved. AT&T, Globe logo, Mobilizing Your World and DIRECTV are registered trademarks and service marks of AT&T Intellectual Property and/or AT&T affiliated companies. All other marks are the property of their respective owners.

Outline
• Introduction and Background
  – Service-provider and datacenter networks
  – Introduction to Network Function Virtualization (NFV) and Software Defined Networking (SDN)
• Challenges
  – General challenges in network measurements
  – Challenges specific to NFV
• Measurement and monitoring
  – What and how of data collection
  – Data collection and tools in today's cloud
  – Measurement applications: troubleshooting, performance improvement, verification
  – Leveraging SDN and end-hosts for measurements
• Summary and additional resources

Introduction and Background: Why the Move towards SDN and NFV

Network Traffic Trends*
• Unprecedented growth in data traffic: a ten-fold increase in traffic volume between 2014 and 2019
• Large number of diverse devices: devices becoming mobile; 11.5 billion devices, including 3 billion IoT devices, by 2019
• Multitude of demanding applications: three-quarters of data traffic will be video by 2019
• Growth of cloud computing: variable demand, transient applications
*Source: Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2014-2019

Traditional Networking
Introducing, modifying, or scaling networking services is difficult because of:
• The proprietary, fixed-function, hardware nature of network appliances
• High spatial and temporal variability in "demand", especially closer to the edge
• The cost of reserving space and energy for a variety of network equipment
• The cost and scarcity of skilled professionals to integrate and maintain network services

Concrete Example: Cellular Network Architecture
• Hardware appliances supporting cellular network functions: specialized and expensive
• Statically provisioned at a few specific central locations: load at the edge is hard to predict, and capacity cannot be repurposed

Problems with Traditional Deployments
• Inefficient load distribution
• Path inflation: traffic from a user in City A must traverse the cellular network to the data center in City B hosting the S/P-GWs before reaching an Internet server
• Performance impacts
• Limited customization: diverse devices (smartphones, fleet tracking, smart meters), diverse applications, and diverse network functions (transcoder, IDS, firewall) must all be supported

Moving towards SDN and NFV
• Take advantage of advances in cloud technology: network elements become applications running on a common hardware platform
• Reduce capital and energy expenses by consolidating network elements
• Use software as a driver to change the innovation cycle
• Reduce time to market for customer-targeted and tailored services

How does it work?
[Figure: virtual network functions (VNFs) run inside virtual machines on a common server platform, connected through a virtual switch to the physical network.]

Key NFV Technology Strengths and Weaknesses

NFV Design Principles
• Separation of software from hardware: enable software to evolve independently from hardware, and vice versa
• Flexible deployment of network functions: automatically deploy network-function software on a pool of hardware resources; run different functions at different times in different data centers
• Dynamic service provisioning: scale NFV performance dynamically and on a grow-as-you-need basis, with fine-granularity control based on current network conditions

Technical Requirements
• Performance
  – Keep the degradation as small as possible
  – Understand the maximum achievable performance of the underlying programmable hardware
• Manageability
  – Different from data center networking, where the hardware resources are almost equivalent
  – Support sharing of spare resources and elastic provisioning of network services effectively
• Reliability and Stability
  – Minimize service reliability and service-level-agreement impacts when evolving to NFV
  – Ensure service stability when reconfiguring or relocating software-based virtual appliances
• Security
  – Sharing of underlying networking and storage, running in others' data centers, outsourcing to third parties
  – Introduction of new elements, such as orchestrators and hypervisors

Problem with Software-Based Packet Processing

Enter DPDK (Data Plane Development Kit)
• User-space software libraries for accelerating packet-processing workloads on Commercial Off The Shelf (COTS) hardware platforms
• High-performance, community-driven solution: delivers a 25x performance jump over Linux [Figure: L3 forwarding performance per core, 1.1 Mpps with Linux vs. 31.2 Mpps with DPDK]
• Open source, BSD license; vibrant community support and adoption
• Comprehensive NFV and Intel architecture support
• User-space application development; multiple models
• Over two dozen pre-built sample applications
[Figure: DPDK architecture: user-space core libraries (EAL, MALLOC, MBUF, MEMPOOL, RING, TIMER), packet framework and classification libraries (ETHDEV, LPM, EXACT MATCH, ACL, DISTRIB, POWER, METER, SCHED/QoS, KNI, IVSHMEM, PCAP), and poll-mode drivers, native and virtual (E1000, IGB, IXGBE, I40e, FM10K, VMXNET3, XENVIRT, Cisco VIC, Mellanox, Broadcom), plus the KNI, IGB_UIO and VFIO kernel modules.]
• Host of other techniques to improve performance: software prefetch, core-thread affinity, vector instructions, algorithmic optimizations, hardware offload, bulk functions

How does DPDK work?
• The user-space DPDK driver constantly polls the NIC for packets (poll-based rather than event-based); the packet-processing core is pegged at 100% use
• Takes advantage of multiple cores on modern CPUs: not all cores are used for computation; dedicate a core (or a few) to packet processing
• Gets around the limitations of interrupt-driven packet processing used by traditional NIC drivers (interrupt on DMA, skb_buf copies, and read/write system calls through the kernel socket API)
• Provides memory-management libraries for efficient packet copy and DMA: allocate a large chunk of memory in the application, mapped to the NIC, so that raw packets are copied directly into application memory

Need for Interface Sharing
Physical servers each owned their network interface; how do multiple virtualized servers on one host share it?
Software-Based Sharing
• Uses emulation techniques to provide a logical I/O hardware device to the VM: interposes itself between the driver running in the guest OS and the underlying hardware (via emulation or a split driver)
• Features
  – Parses the I/O commands
  – Translates guest addresses into host physical addresses
  – Ensures that all referenced pages are present in memory
  – Maps multiple I/O requests from VMs into a single I/O stream for the underlying hardware
• Examples: Open vSwitch (layer 2), Juniper Contrail (layer 3)
• Drawback: poor performance
*Source: Intel white paper on SR-IOV

Direct Assignment
• Bypasses the VM I/O emulation layer to write directly to the memory space of a virtual machine: akin to a server's ability to safely DMA data directly to/from host memory
• Uses enhancements such as Intel VT-d
• Results in throughput improvement for the VMs
• Drawback: limited scalability; a physical device can only be assigned to one VM

Single Root I/O Virtualization (SR-IOV)
• PCI-SIG standard that finds a middle ground between direct assignment and software-based sharing
• Standardizes a way to bypass hypervisor involvement in data movement
  – An SR-IOV-capable device appears as multiple devices: physical or virtual functions (PF/VF)
  – Provides independent memory space, interrupts, and DMA streams for each virtual machine
  – Assigns one or more VFs to a virtual machine
  – Enables hypervisor bypass by letting VMs attach directly to a VF

Using SR-IOV and DPDK
Existing VNF implementations rely on this approach for performance!
Pros and Cons of These Approaches
Software-based sharing
• High level of flexibility and control, easy extensibility
• Preferred approach with virtualization because of recovery and migration
• Recent efforts on getting better performance
• Supports only a subset of the functions provided by the network card (NIC)
• Significant CPU overhead
SR-IOV
• Provides good network performance and sharing of the network interface
• Requires support in the BIOS as well as in the operating system/hypervisor
• Orchestration limitations; no easy way to migrate VMs
• Networking restrictions in a cloud environment (because of hypervisor bypass)

NFV Platforms and Building Blocks
• Vector Packet Processor (VPP): originally from Cisco, now open source (fd.io)
  – DPDK-based data plane; features implemented as a processing graph; large code base
