Flexible, Software-Defined Networking Infrastructure


Red Hat Enterprise Linux enabling the clouds — Clouds Need Red Hat Enterprise Linux
Rashid Khan, Russell Bryant — Networking Services, Office of Technology
Wednesday, May 03, 2017

OS Matters
Red Hat Enterprise Linux (RHEL) enables, empowers, and excels in the enterprise cloud.

OS Matters!
The core operating system needs support:
● Evaluation of patches for stability and impact (hardening)
● Single point of support (no tennis match of bugs between vendors)
● Minimized downtime through a balance of stability and security (CVE handling)
● Guarantee of API and ABI:
  ○ Applications will keep working after minor upgrades
  ○ Third-party kernel modules under the kABI program will continue to work
● Synchronization of user space with kernel features
● Some things, like HW acceleration, simply cannot be done without the OS!
● Integration with layered products, Ansible, and a whole portfolio of products
Dedicated to RHEL:
● ~700 developers
● ~400 QA engineers
● In addition: layered-product developers and QA, plus support services
● ~14,000 people ready to ensure your success

Network Security, Isolation, Tunnels
Security and isolation for multi-tenancy, fairness, and enterprise readiness:
● Robust firewalling
  ○ Connection tracking with NAT in OVS
  ○ Netfilter
● Network namespaces
● L2 security via MACsec
● L3 security via IPsec

Tunnels and Isolation
● VLANs (limited identifiers)
● VXLAN, with HW offload and IPv6 support
● Geneve (more flexible)
● QinQ 802.1ad (great results)
[Diagram: VMs and containers on two RHEL hosts connected by VXLAN, Geneve, and QinQ tunnels]

Packets to/from Virtual Machines
[Diagram: three packet paths from the IP cloud to guests — (1) kernel networking: device driver → kernel Open vSwitch → vhost-net/QEMU → VM; (2) DPDK + vhost-user: DPDK PMD driver → userspace Open vSwitch → vhost-user/QEMU → VM; (3) device assignment: SR-IOV NIC → VM directly]

Ready for NFV and Enterprise
● Fully integrated with:
  ○ Red Hat OpenStack Platform
  ○ Red Hat Virtualization
● Full support for:
  ○ Network namespaces
  ○ VXLAN
  ○ Geneve
  ○ Conntrack
  ○ NAT
  ○ Multiqueue (MQ)
  ○ CPU pinning
  ○ Kernel datapath / DPDK on the host
  ○ DPDK inside the guest
  ○ Optimized for 10G
  ○ Tuning
Zero packet loss is absolutely possible! DPDK is NOT "experimental" anymore.
Measured with DPDK on an Intel 82599 NIC (max ~23 Mpps):

Frame size | Mpps @0.002% loss | Gbps @0.002% loss | Mpps/core @0.002% loss | Mpps @0% loss
64 bytes   | 22.93             | 15.41             | 5.73                   | 13.46
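To make the tunnel options above concrete, here is a minimal sketch that builds an OVS bridge with VXLAN and Geneve tunnel ports via ovs-vsctl. The bridge name, peer address, and VNI are illustrative placeholders, not values from the deck; it assumes Open vSwitch is installed and the script runs as root.

```python
#!/usr/bin/env python3
"""Minimal sketch: an OVS bridge with VXLAN and Geneve tunnel ports."""
import subprocess

BRIDGE = "br-int"          # hypothetical integration bridge
REMOTE_IP = "192.0.2.10"   # placeholder address of the peer hypervisor
VNI = "5001"               # placeholder VXLAN/Geneve network identifier

def sh(*args):
    """Run a command, echoing it first, and fail loudly on error."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# Create the bridge (idempotent thanks to --may-exist).
sh("ovs-vsctl", "--may-exist", "add-br", BRIDGE)

# VXLAN tunnel port: encapsulates tenant L2 traffic toward the peer.
sh("ovs-vsctl", "--may-exist", "add-port", BRIDGE, "vxlan0",
   "--", "set", "interface", "vxlan0", "type=vxlan",
   f"options:remote_ip={REMOTE_IP}", f"options:key={VNI}")

# Geneve tunnel port: same idea, but with Geneve's extensible headers.
sh("ovs-vsctl", "--may-exist", "add-port", BRIDGE, "gnv0",
   "--", "set", "interface", "gnv0", "type=geneve",
   f"options:remote_ip={REMOTE_IP}", f"options:key={VNI}")
```

Both ports carry the same tunnel key here; in practice each tenant network gets its own key, which is what gives VXLAN and Geneve far more identifiers than VLAN's 12-bit space.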
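Since the performance figures above come from the DPDK + vhost-user path, a companion sketch of the publicly documented OVS-DPDK knobs involved. The bridge and port names and the CPU mask are assumptions for illustration; it presumes an OVS build with DPDK support, hugepages already configured, and root privileges.

```python
#!/usr/bin/env python3
"""Minimal sketch: DPDK datapath in OVS plus a vhost-user port for a VM."""
import subprocess

def sh(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# Enable DPDK inside ovs-vswitchd and pin PMD threads to two cores.
sh("ovs-vsctl", "set", "Open_vSwitch", ".",
   "other_config:dpdk-init=true")
sh("ovs-vsctl", "set", "Open_vSwitch", ".",
   "other_config:pmd-cpu-mask=0x6")  # cores 1 and 2; adjust to your host

# A DPDK-backed bridge must use the userspace (netdev) datapath.
sh("ovs-vsctl", "--may-exist", "add-br", "br-dpdk",
   "--", "set", "bridge", "br-dpdk", "datapath_type=netdev")

# vhost-user port: QEMU connects to this socket, so guest virtio rings
# are serviced directly by OVS PMD threads without kernel context switches.
sh("ovs-vsctl", "--may-exist", "add-port", "br-dpdk", "vhost-user-1",
   "--", "set", "Interface", "vhost-user-1", "type=dpdkvhostuser")
```

The pinned PMD cores are exactly the CPU cost the next slide asks about: userspace polling buys line rate at small packet sizes by dedicating cores to it.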
CPUs Are Needed for SW Solutions
What about 25G? 40G? 50G? Is it game over?

OVS HW Offload Emerging as an Answer
Red Hat is working with:
● Mellanox
● Netronome
● Cavium
● Chelsio
● Others
Red Hat value add:
● Open-source upstream solution (must)
● Integrated solution with layered products (must)
● Unified / common API for kernel and DPDK (goal)
● No vendor lock-in (goal)

HW Partner A
● Initial results look very promising
● A fully integrated solution with layered products is achievable
● Physical-to-virtual-to-physical in this test
● Crawl, walk, run
● Huge potential for success
● RHEL is a must for success

HW Partner B
● Initial results look very promising
● A fully integrated solution with layered products is achievable
● Physical-to-virtual-to-physical in this test
● Crawl, walk, run
● Huge potential for success
● RHEL is a must for success

Containers Networking (BTW, Containers Are Linux!)

Container Versus VM Networking
[Diagram: packet flow through a DPDK-accelerated NIC and userspace Open vSwitch — containers run their apps directly against the host kernel's networking stack, while each VM carries its own OS and networking stack and connects over vhost-user]

Containers: DPDK or not DPDK?
[Diagram: with the kernel datapath, container traffic crosses the kernel Open vSwitch with additional context switches; with a DPDK-accelerated userspace Open vSwitch those context switches are avoided]

Container Networking
[Diagram: three nodes — containers in private network namespaces attached via veth pairs to Open vSwitch or a Linux bridge in the host network namespace, with SR-IOV as an alternative path, all connected to the IP cloud]
● Namespaces provide private network stacks inside each container
● The kernel enforces privacy without the need for heavyweight virtualization
● Standard kernel network interfaces, iptables, routing, and OVS can all be used
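A minimal sketch of the namespace-plus-veth pattern just described, built by hand with iproute2 and OVS. It assumes root privileges and an existing OVS bridge; all names and addresses are illustrative placeholders.

```python
#!/usr/bin/env python3
"""Minimal sketch: per-'container' network namespaces wired into OVS."""
import subprocess

BRIDGE = "br-int"  # hypothetical bridge the "containers" plug into

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

for name, addr in [("c1", "10.1.0.1/24"), ("c2", "10.1.0.2/24")]:
    # Each namespace gets its own private network stack,
    # exactly as a container runtime would arrange it.
    sh(f"ip netns add {name}")
    # veth pair: one end inside the namespace, one end on the host.
    sh(f"ip link add veth-{name} type veth peer name vp-{name}")
    sh(f"ip link set veth-{name} netns {name}")
    sh(f"ip netns exec {name} ip addr add {addr} dev veth-{name}")
    sh(f"ip netns exec {name} ip link set veth-{name} up")
    sh(f"ip netns exec {name} ip link set lo up")
    # Host end attaches to the virtual switch.
    sh(f"ip link set vp-{name} up")
    sh(f"ovs-vsctl --may-exist add-port {BRIDGE} vp-{name}")

# The two "containers" can now reach each other through OVS:
sh("ip netns exec c1 ping -c 1 10.1.0.2")
```

Container platforms such as OpenShift drive these same kernel primitives through CNI plugins rather than by hand, but the isolation model is identical.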
Containers for NFV
Container NFV needs / requests:
● Multiple networks (SDN, physical, SAN, etc.)
● Physical NICs and SR-IOV (DPDK inside the container)
● Flexible IP addressing and overlapping IP networks (multi-tenancy)
● Flat architecture for line-rate processing and low latency
● NUMA and CPU affinity for containerized VNFs
● Coordination across widely separated premises
● IPv6 support and availability, especially in public cloud
● Existing orchestration plus optional micro-service features

Container NFV Proof-of-Concept Using OpenShift
[Diagram: vCPE proof of concept — customer premises behind a GPON NID demarcation point, a metro/GPON network, and a datacenter server running management, VF, and service-function-chain containers]
Already done:
● Static IPv6 addressing
● Service function chains
● Flexible IP addressing
● No bridging or SDN in packet fast paths
● Multiple interfaces per container (NIC, VLAN, SDN)
● Robust SR-IOV
Under investigation:
● Overlapping IP networks
● NUMA/CPU affinity
● IPv6 SLAAC addressing
● Dynamic service function chains / NSH

OVN / SDN Virtual Networking
● Decouples the logical network topology from the physical network
[Diagram: a logical topology of switches and a router (A–H) mapped onto two hypervisors (HV1, HV2) across the physical cloud network]

Open Virtual Network (OVN)
● Provides a common virtual networking implementation as part of the base platform (RHEL)
● Comes with OVS and introduces no new dependencies
● Applies lessons learned from previous OVS-based virtual networking implementations
● Started upstream in 2015 and has now matured

Context: OpenStack
● API and common services
● Backend-specific orchestration
● Backend-specific host technology

Context: OpenStack Neutron
● REST API: neutron-server
● Orchestration layer: the ML2/OVS driver and Neutron agents (OVS, L3, DHCP, metadata) translate Neutron configuration into the configuration of a network across a deployment
● Per-host programmable virtual switch: Open vSwitch, programmed by the orchestration layer

Context: OpenStack Neutron with OVN
The ML2/OVS driver and Neutron agents (OVS, L3, DHCP, metadata) are replaced by the ML2/OVN driver and the OVN services (ovn-northd, ovn-controller, OVN DBs); Open vSwitch remains the per-host virtual switch in both cases.

OVN: Virtual Networking Across Products
● OpenStack: neutron-server with the ML2/OVN driver
● OpenShift (Kubernetes): CNI plugin plus a Kubernetes network-provider resource watcher
● RHV (oVirt): OVN network provider
Each consumes the same OVN services (ovn-northd, ovn-controller, OVN DBs) on top of Open vSwitch.

OVN Architecture
1) Logical configuration is written to the northbound database (e.g., Neutron with networking-ovn → OVN Northbound DB)
2) ovn-northd translates it, populating the southbound database (OVN Northbound DB → ovn-northd → OVN Southbound DB)
3) ovn-controller on each host generates physical flows (OVN Southbound DB → ovn-controller → OVS on every host)

Thank You! Questions?

Backup: Differences Between the Packet Paths

Kernel networking
● Pros: feature rich / robust solution; ultra flexible; integration with SDN and OVS; supports live migration; supports overlay networking; full isolation support (namespaces, multi-tenancy)
● Cons: non-line-rate performance for tiny packets

DPDK + vhost-user
● Pros: packets delivered directly to user space; line-rate performance even with tiny packets; integration with OVS
● Cons: everything has to be in user space; less control from the host

Device assignment
● Pros: line-rate performance; packets delivered directly to VMs; HW-based isolation
● Cons: limited number of VMs; not as flexible; no OVS; no SDN; no live migration of VMs; no overlay networking
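For the device-assignment column above, a small sketch of how virtual functions are typically created through the standard Linux sysfs interface. The interface name and VF count are hypothetical; it assumes root privileges and an SR-IOV capable NIC, and note that a PF whose VF count is already nonzero must be reset to 0 before a new value is accepted.

```python
#!/usr/bin/env python3
"""Minimal sketch: enable SR-IOV VFs for device assignment via sysfs."""
from pathlib import Path

PF = "enp3s0f0"  # hypothetical physical function (PF) interface name
dev = Path("/sys/class/net") / PF / "device"

total = int((dev / "sriov_totalvfs").read_text())
print(f"{PF} supports up to {total} virtual functions")

# Writing to sriov_numvfs asks the driver to spawn that many VFs; each
# VF can then be handed to a VM (e.g. via vfio-pci) for line-rate I/O.
wanted = min(4, total)
(dev / "sriov_numvfs").write_text(str(wanted))
print(f"created {wanted} VFs; run 'lspci' to see the new PCI functions")
```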
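Finally, to ground the three-step OVN architecture described earlier, a minimal sketch that writes a logical switch into the northbound database and inspects the propagated result. It assumes a running OVN deployment (ovn-northd plus ovn-controller on each host); the switch and port names and MAC/IP values are illustrative.

```python
#!/usr/bin/env python3
"""Minimal sketch: the northbound -> southbound -> OpenFlow pipeline."""
import subprocess

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

# Step 1: describe the *logical* topology in the northbound DB.
sh("ovn-nbctl ls-add ls0")
sh("ovn-nbctl lsp-add ls0 ls0-vm1")
sh("ovn-nbctl lsp-add ls0 ls0-vm2")
# List form here because the address argument contains a space.
subprocess.run(["ovn-nbctl", "lsp-set-addresses", "ls0-vm1",
                "00:00:00:00:00:01 10.0.0.11"], check=True)
subprocess.run(["ovn-nbctl", "lsp-set-addresses", "ls0-vm2",
                "00:00:00:00:00:02 10.0.0.12"], check=True)

# Step 2: ovn-northd translates this into logical flows in the
# southbound DB -- inspect them:
sh("ovn-sbctl lflow-list ls0")

# Step 3: each host's ovn-controller compiles those logical flows into
# OpenFlow on its local Open vSwitch (see: ovs-ofctl dump-flows br-int).
```

The same northbound schema is what the Neutron ML2/OVN driver, the OpenShift CNI plugin, and the oVirt OVN provider all program, which is the point of the "virtual networking across products" slide.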