
Introduction to the Cloud Computing Network Control Plane: Architecture and Protocols for Data Center Networks

Outline
❒ Data Center Networking Basics
❒ Problem Solving with Traditional Design Techniques
❒ Virtual/Overlay Network Functional Architecture
❒ Virtual/Overlay Network Design and Implementation

Data Center Networking Basics

Lots of Servers
❒ Data centers consist of massive numbers of servers
❍ Up to 100,000s
❒ Each server has multiple processors
❍ 8 or more
❒ Each processor has multiple cores
❍ 32 max for commodity processors, more coming
❒ Each server has multiple NICs
❍ Usually at least 2 for redundancy
❍ 1G common, 10G on the upswing
Source: http://img.clubic.com/05468563-photo-google-datacenter.jpg

Mostly Virtualized
❒ Hypervisor provides a compute abstraction layer
❍ Looks like hardware to the operating system
❍ OSes run as multiple Virtual Machines (VMs) on a single server
❒ Hypervisor maps VMs to processors
❍ Virtual cores (vCores)
❒ Virtual switch provides networking between VMs and to the DC network
❍ Virtual NICs (vNICs)
❒ Without oversubscription, usually as many VMs as cores
❍ Up to 256 for 8p x 32c
❍ Typical is 32 for 4p x 8c
❒ VMs can be moved from one machine to another
(Figure: VM1–VM4, each with a vNIC attached to the virtual switch, running on the hypervisor over the server hardware with NIC1 and NIC2.)

Data Center Network Problem
❒ For a single virtualized data center built with cheap commodity servers:
❍ 32 VMs per server
❍ 100,000 servers
❍ 32 x 100,000 = 3.2 million VMs!
❒ Each VM needs a MAC address and an IP address
❒ Infrastructure needs IP and MAC addresses too
❍ Routers, switches
❍ Physical servers for management
❒ Clearly a scaling problem!

Common Data Center Network Architectures: Three Tier
❒ Server NICs connected directly to edge switch ports
❍ Top of Rack (ToR) switch, sometimes an End of Row switch
❒ Aggregation layer switches connect multiple edge switches
❒ Top layer switches connect aggregation switches
❍ Top layer can also connect to the Internet (these can be IP routers)
❒ Usually some redundancy
❒ Pluses
❍ Common
❍ Simple
❒ Minuses
❍ Top layer massively over-subscribed
❍ Reduced cross-sectional bandwidth
• 4:1 oversubscription means only 25% of bandwidth available
❍ Scalability at top layer requires expensive enterprise switches (for more €s)
Source: K. Bilal, S. U. Khan, L. Zhang, H. Li, K. Hayat, S. A. Madani, N. Min-Allah, L. Wang, D. Chen, M. Iqbal, C.-Z. Xu, and A. Y. Zomaya, "Quantitative Comparisons of the State of the Art Data Center Architectures," Concurrency and Computation: Practice and Experience, vol. 25, no. 12, pp. 1771-1783, 2013.

Common Data Center Network Architectures: Fat Tree
❒ CLOS network, with origins in the 1950s telephone network
❒ Data center divided into k pods
❒ Each pod has k switches
❍ k/2 access, k/2 aggregation
❒ Core has (k/2)² switches
❒ 1:1 oversubscription ratio and full bisection bandwidth
❍ Maximum # of pods = # switch ports
❒ Pluses
❍ No oversubscription
❍ Full bisection bandwidth
❒ Minuses
❍ Need specialized routing and addressing scheme
❍ Number of pods limited to number of ports on a switch
(Figure: k=4 example. Source: Bilal, et al.)
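The fat-tree arithmetic above can be checked mechanically. Below is a minimal sketch (an illustration, not from the slides or Bilal et al.), assuming the standard k-ary fat tree in which each access switch also connects k/2 servers, so each pod hosts (k/2)² servers:

```python
# Sketch of k-ary fat-tree sizing: k pods, k/2 access + k/2 aggregation
# switches per pod, (k/2)^2 core switches; assumes each access switch
# also serves k/2 servers (a detail not stated on the slide).
def fat_tree_sizes(k: int) -> dict:
    if k % 2 != 0:
        raise ValueError("k must be even")
    half = k // 2
    return {
        "pods": k,
        "access_switches": k * half,
        "aggregation_switches": k * half,
        "core_switches": half * half,
        "servers": k * half * half,  # full bisection bandwidth, 1:1 oversubscription
    }

print(fat_tree_sizes(4))   # the k=4 example: 4 core switches, 16 servers
print(fat_tree_sizes(48))  # 48-port commodity switches: 27,648 servers
```

With commodity 48-port switches this already reaches tens of thousands of servers, which is why the pods-limited-by-port-count minus matters in practice.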
Problem Solving with Traditional Design Techniques

Problem #1: ARP/ND Handling
❒ IP nodes use ARP (IPv4) and Neighbor Discovery (ND, IPv6) for resolving an IP address to a MAC address
❍ Broadcast (ARP) and multicast (ND)
❒ Problem:
❍ Broadcast forwarding load on large, flat L2 networks can be overwhelming
Source: http://www.louiewong.com/wp-content/uploads/2010/09/ARP.jpg

Problem #2: VM Movement
❒ Data center operators need to move VMs around
❍ Reasons: server maintenance, server optimization for energy use, performance improvement, etc.
❍ MAC address can stay fixed (provided it is unique in the data center)
❍ If the subnet changes, the IP address must change because it is bound to the VM's location in the topology
• For "hot" migration, the IP address cannot change
❒ Problem:
❍ How broadcast domains are provisioned affects where VMs can be moved
Source: http://www.freesoftwaremagazine.com/files/nodes/1159/slide4.jpg

Solutions Using Traditional Network Design Principles: IP Subnets
Where to put the last hop router?
❒ ToR == last hop router
❍ Subnet (broadcast domain) limited to rack
❍ Good broadcast/multicast limitation
❍ Poor VM mobility
❒ Aggregation switch == last hop router
❍ Subnet limited to racks controlled by the aggregation switch
❍ Complex configuration
• Subnet VLAN to all access switches and servers on served racks
❍ Moderate broadcast/multicast limitation
❍ Moderate VM mobility
• To any rack covered
❒ Core switch/router == last hop router
❍ Poor broadcast/multicast limitation
❍ Good VM mobility
Note: these solutions only work if the data center is single tenant!
Source: Bilal, et al.

Problem #3: Dynamic Provisioning of Tenant Networks
❒ Virtualized data centers enable renting infrastructure to outside parties (aka tenants)
❍ Infrastructure as a Service (IaaS) model
❍ Amazon Web Services, Microsoft Azure, Google Compute Engine, etc.
❒ Customers get dynamic server provisioning through VMs
❍ Expect the same dynamic "as a service" provisioning for networks too
❒ Characteristics of a tenant network
❍ Traffic isolation
❍ Address isolation
• From other tenants
• From infrastructure

Solution Using Traditional Network Design Principles
❒ Use a different VLAN for each tenant network
❒ Problem #1
❍ There are only 4096 VLAN tags for 802.1q VLANs* (see the sketch below)
❍ Forces tenant network provisioning along physical network lines
❒ Problem #2
❍ For fully dynamic VM placement, each ToR-server link must be dynamically configured as a trunk
❒ Problem #3
❍ Can only move VMs to servers where the VLAN tag is available
• Ties VM movement to physical infrastructure
*except for carrier Ethernet, about which more shortly

Summary
❒ Configuring subnets based on a hierarchical switch architecture always results in a tradeoff between broadcast limitation and VM movement freedom
❍ On top of which, traffic isolation for multitenant networks cannot be achieved
❒ Configuring multitenant networks with VLAN tags for traffic isolation ties tenant configuration to the physical data center layout
❍ Severely limits where VMs can be provisioned and moved
❍ Requires complicated dynamic trunking
❒ For multitenant, virtualized data centers, there is no good solution using traditional techniques!
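The 4096-tag ceiling cited above follows directly from the 802.1Q header layout: the VLAN ID is a 12-bit field, and two values are reserved. A minimal sketch (illustrative only, not part of the original slides):

```python
import struct

# Build the 4-byte 802.1Q tag: 16-bit TPID (0x8100) followed by the 16-bit TCI,
# which packs 3 bits of priority (PCP), 1 drop-eligible bit (DEI), and the
# 12-bit VLAN ID. With VIDs 0 and 4095 reserved, at most 4094 tenant networks
# can be distinguished, no matter how large the data center grows.
def dot1q_tag(vid: int, pcp: int = 0, dei: int = 0) -> bytes:
    if not 1 <= vid <= 4094:
        raise ValueError("VID must be a usable 12-bit value (1-4094)")
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", 0x8100, tci)

print(dot1q_tag(100).hex())  # '81000064'
# dot1q_tag(5000) would raise: the per-tenant ID space simply runs out.
```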
Virtual/Overlay Network Functional Architecture

Virtual Networks through Overlays
❒ Basic idea of an overlay:
❍ Tunnel tenant packets through the underlying physical Ethernet or IP network
❍ The overlay forms a conceptually separate network providing a separate service from the underlay
❒ L2 service like VPLS or EVPN
❍ Overlay spans a separate broadcast domain
❒ L3 service like BGP IP VPNs
❍ Different tenant networks have separate IP address spaces
❒ Dynamically provision and remove overlays as tenants need network service
❒ Multiple tenants with separate networks on the same server
(Figure: blue tenant network and yellow tenant network overlaid on the same physical servers. Source: Bilal, et al.)

Advantages of Overlays
❒ Tunneling is used to aggregate traffic
❒ Addresses in the underlay are hidden from the tenant
❍ Inhibits unauthorized tenants from accessing data center infrastructure
❒ Tenant addresses in the overlay are hidden from the underlay and other tenants
❍ Multiple tenants with the same IP address space
❒ Overlays can potentially support large numbers of tenant networks
❒ Virtual network state and end node reachability are handled in the end nodes

Challenges of Overlays
❒ Management tools to coordinate overlay and underlay
❍ Overlay networks probe for bandwidth and packet loss, which can lead to inaccurate information
❍ Lack of communication between overlay and underlay can lead to inefficient usage of network resources
❍ Lack of communication between overlays can lead to contention and other performance issues
❒ Overlay packets may fail to traverse firewalls
❒ Path MTU limits may cause fragmentation
❒ Efficient multicast is challenging

Functional Architecture: Definitions
❒ Virtual Network
❍ Overlay network defined over the Layer 2 or Layer 3 underlay (physical) network
❍ Provides either a Layer 2 or a Layer 3 service to the tenant
❒ Virtual Network Instance (VNI) or Tenant Network
❍ A specific instance of a virtual network
❒ Virtual Network Context (VNC)
❍ A tag or field in the encapsulation header that identifies the specific tenant network

Functional Architecture: More Definitions
❒ Network Virtualization Edge (NVE)
❍ Data plane entity that sits at the edge of an underlay network and implements L2 and/or L3 network virtualization functions
• Example: virtual switch, aka Virtual Edge Bridge (VEB)
❍ Terminates the virtual network towards the tenant VMs and towards outside networks
❒ Network Virtualization Authority (NVA)
❍ Control plane entity that provides information about reachability and connectivity for all tenants in the data center

Overlay Network Architecture
(Figure: tenant systems attach to NVEs over LAN or point-to-point links, or via end-system NVE integration; NVEs interconnect across the data center L2/L3 network in the data plane and consult the NVA over the control plane.)

Virtual/Overlay Network Design and Implementation

Implementing Overlays: Tagging or Encapsulation?
❒ At or above Layer 2 but below Layer 3:
❍ Insert a tag at a standards-specified place in the pre-Layer 3 header
❒ At Layer 3:
❍ Encapsulate the tenant packet with an encapsulation protocol header and an IP header
❒ Tenant network identified by the Virtual Network Context
❍ Tag, for tagging
❍ Context identifier in the protocol header, for encapsulation (see the sketch below)

L2 Virtual Networks: Tagging Options
❒ Simple 802.1q VLANs
❍ 4096 limit problem
❍ Trunking complexity
❒ MPLS
❍ Nobody uses MPLS directly on the switching
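To make the encapsulation option above concrete, here is a minimal sketch of what an NVE might do on transmit. It is illustrative only: the NVA is modeled as a plain lookup table, and the Virtual Network Context is carried as a 24-bit VNI in a VXLAN-style header, one common (but not the only) Layer-3 encapsulation.

```python
import struct

# Hypothetical NVA state (illustrative): (VNI, tenant destination MAC) maps to
# the underlay IP address of the NVE currently hosting that tenant VM.
NVA_TABLE = {
    (5001, "52:54:00:aa:bb:01"): "10.0.1.7",
    (5002, "52:54:00:aa:bb:01"): "10.0.3.9",  # same MAC in another tenant: no conflict
}

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN-style header: a flags byte with the VNI-valid bit
    set, reserved bits, then the 24-bit VNI (the Virtual Network Context)."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

def nve_transmit(tenant_frame: bytes, dst_mac: str, vni: int):
    """Consult the NVA mapping, then prepend the encapsulation header.
    The outer UDP/IP headers toward the returned underlay address would be
    added by the sending NVE's normal IP stack."""
    remote_nve_ip = NVA_TABLE[(vni, dst_mac)]
    return remote_nve_ip, vxlan_header(vni) + tenant_frame

print(nve_transmit(b"\x00" * 64, "52:54:00:aa:bb:01", 5001)[0])  # 10.0.1.7
print(nve_transmit(b"\x00" * 64, "52:54:00:aa:bb:01", 5002)[0])  # 10.0.3.9
```

Because the underlay forwards only on the outer headers, two tenants can reuse identical MAC and IP addresses without conflict; decoupling the tenant context from the physical topology is what removes the VLAN and subnet constraints summarized earlier.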