Introduction to the Cloud Computing Network Control Plane Architecture and Protocols for Data Center Networks Outline


Introduction to the Cloud Computing Network Control Plane: Architecture and Protocols for Data Center Networks

Outline
❒ Data Center Networking Basics
❒ Problem Solving with Traditional Design Techniques
❒ Virtual/Overlay Network Functional Architecture
❒ Virtual/Overlay Network Design and Implementation

Data Center Networking Basics

Lots of Servers
❒ Data centers consist of massive numbers of servers
  ❍ Up to 100,000's
❒ Each server has multiple processors
  ❍ 8 or more
❒ Each processor has multiple cores
  ❍ 32 max for commodity processors, more coming
❒ Each server has multiple NICs
  ❍ Usually at least 2 for redundancy
  ❍ 1G common, 10G on the upswing
Source: http://img.clubic.com/05468563-photo-google-datacenter.jpg

Mostly Virtualized
❒ Hypervisor provides a compute abstraction layer
  ❍ Looks like hardware to the operating system
  ❍ OSes run as multiple Virtual Machines (VMs) on a single server
❒ Hypervisor maps VMs to processors
  ❍ Virtual cores (vCores)
❒ Virtual switch provides networking between VMs and to the DC network
  ❍ Virtual NICs (vNICs)
❒ Without oversubscription, usually as many VMs as cores
  ❍ Up to 256 for 8p x 32c
  ❍ Typical is 32 for 4p x 8c
❒ VMs can be moved from one machine to another
[Figure: VMs (VM1-VM4) with vNICs attached to a virtual switch in the hypervisor, running over server hardware with NIC1 and NIC2]

Data Center Network Problem
❒ For a single virtualized data center built with cheap commodity servers:
  ❍ 32 VMs per server
  ❍ 100,000 servers
  ❍ 32 x 100,000 = 3.2 million VMs!
❒ Each VM needs a MAC address and an IP address
❒ Infrastructure needs IP and MAC addresses too
  ❍ Routers, switches
  ❍ Physical servers for management
❒ Clearly a scaling problem!

Common Data Center Network Architectures: Three Tier
❒ Server NICs connected directly to edge switch ports
❒ Aggregation layer switches connect multiple edge switches
❒ Top layer switches connect aggregation switches
  ❍ Top layer can also connect to the Internet
    • These can be IP routers
❒ Usually some redundancy (for more €s)
❒ Pluses
  ❍ Common
  ❍ Simple
❒ Minuses
  ❍ Top layer massively over-subscribed
  ❍ Reduced cross-sectional bandwidth
    • 4:1 oversubscription means only 25% of bandwidth is available
  ❍ Scalability at top layer requires expensive enterprise switches
[Figure: three-tier topology; edge layer built from Top of Rack (ToR) switches, sometimes End of Row switches]
Source: K. Bilal, S. U. Khan, L. Zhang, H. Li, K. Hayat, S. A. Madani, N. Min-Allah, L. Wang, D. Chen, M. Iqbal, C.-Z. Xu, and A. Y. Zomaya, "Quantitative Comparisons of the State of the Art Data Center Architectures," Concurrency and Computation: Practice and Experience, vol. 25, no. 12, pp. 1771-1783, 2013.

Common Data Center Network Architectures: Fat Tree
❒ CLOS network, with origins in the 1950's telephone network
❒ Data center divided into k pods
❒ Each pod has (k/2) x 2 switches
  ❍ k/2 access, k/2 aggregation
❒ Core has (k/2)² switches
❒ 1:1 oversubscription ratio and full bisection bandwidth
❒ Pluses
  ❍ No oversubscription
  ❍ Full bisection bandwidth
❒ Minuses
  ❍ Need specialized routing and addressing scheme
  ❍ Number of pods limited to number of ports on a switch
    • Maximum # of pods = # of switch ports
[Figure: k=4 example]
Source: Bilal, et al.
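A quick way to sanity-check the fat-tree numbers above is to compute them. The Python sketch below is illustrative only (the fat_tree_counts helper is hypothetical, not from the slides); it derives the component counts for a standard k-ary fat tree built from k-port switches, where the host count k³/4 is the standard result of k pods times (k/2)² hosts per pod.

def fat_tree_counts(k: int) -> dict:
    """Component counts for a standard k-ary fat tree of k-port switches."""
    assert k % 2 == 0, "k must be even: each switch splits ports up/down"
    return {
        "pods": k,                         # max # of pods = # of switch ports
        "switches_per_pod": (k // 2) * 2,  # k/2 access + k/2 aggregation
        "core_switches": (k // 2) ** 2,
        "hosts": k ** 3 // 4,              # k pods x (k/2)^2 hosts per pod
    }

# The k=4 example from the figure: 4 pods, 4 switches per pod,
# 4 core switches, and 16 hosts, with full bisection bandwidth.
print(fat_tree_counts(4))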
Problem Solving with Traditional Design Techniques

Problem #1: ARP/ND Handling
❒ IP nodes use ARP (IPv4) and Neighbor Discovery (ND, IPv6) to resolve IP addresses to MAC addresses
  ❍ Broadcast (ARP) and multicast (ND)
❒ Problem:
  ❍ Broadcast forwarding load on large, flat L2 networks can be overwhelming
Source: http://www.louiewong.com/wp-content/uploads/2010/09/ARP.jpg

Problem #2: VM Movement
❒ Data center operators need to move VMs around
  ❍ Reasons: server maintenance, server optimization for energy use, performance improvement, etc.
  ❍ MAC address can stay fixed (provided it is unique in the data center)
  ❍ If the subnet changes, the IP address must change, because it is bound to the VM's location in the topology
    • For "hot" migration, the IP address cannot change
❒ Problem:
  ❍ How broadcast domains are provisioned affects where VMs can be moved
[Figure: a VM migrating between hypervisors]
Source: http://www.freesoftwaremagazine.com/files/nodes/1159/slide4.jpg

Solutions Using Traditional Network Design Principles: IP Subnets
Where to put the last hop router?
❒ ToR == last hop router
  ❍ Subnet (broadcast domain) limited to rack
  ❍ Good broadcast/multicast limitation
  ❍ Poor VM mobility
❒ Aggregation switch == last hop router
  ❍ Subnet limited to racks controlled by the aggregation switch
  ❍ Complex configuration
    • Subnet VLAN to all access switches and servers on served racks
  ❍ Moderate broadcast/multicast limitation
  ❍ Moderate VM mobility
    • To any rack covered
❒ Core switch/router == last hop router
  ❍ Poor broadcast/multicast limitation
  ❍ Good VM mobility
Note: These solutions only work if the data center is single tenant!
Source: Bilal, et al.

Problem #3: Dynamic Provisioning of Tenant Networks
❒ Virtualized data centers enable renting infrastructure to outside parties (aka tenants)
  ❍ Infrastructure as a Service (IaaS) model
  ❍ Amazon Web Services, Microsoft Azure, Google Compute Engine, etc.
❒ Customers get dynamic server provisioning through VMs
  ❍ They expect the same dynamic "as a service" provisioning for networks too
❒ Characteristics of a tenant network
  ❍ Traffic isolation
  ❍ Address isolation
    • From other tenants
    • From the infrastructure

Solution Using Traditional Network Design Principles
❒ Use a different VLAN for each tenant network
❒ Problem #1
  ❍ There are only 4096 VLAN tags for 802.1q VLANs*
  ❍ Forces tenant network provisioning along physical network lines
❒ Problem #2
  ❍ For fully dynamic VM placement, each ToR-server link must be dynamically configured as a trunk
❒ Problem #3
  ❍ Can only move VMs to servers where the VLAN tag is available
    • Ties VM movement to the physical infrastructure
*Except for carrier Ethernet, about which more shortly

Summary
❒ Configuring subnets based on the hierarchical switch architecture always results in a tradeoff between broadcast limitation and VM movement freedom
  ❍ On top of which, traffic isolation for multitenant networks can't be achieved
❒ Configuring multitenant networks with VLAN tags for traffic isolation ties tenant configuration to the physical data center layout
  ❍ Severely limits where VMs can be provisioned and moved
  ❍ Requires complicated dynamic trunking
❒ For multitenant, virtualized data centers, there is no good solution using traditional techniques!
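To see concretely why per-tenant VLANs tie VM placement to the physical layout, here is a minimal toy model in Python (the switch names, tag values, and placement_ok helper are hypothetical, not from the slides): a VM can only be placed under a ToR whose server-facing trunks already carry its tenant's tag, and the tag space itself is only 12 bits.

# 802.1q VID is 12 bits; values 0x000 and 0xFFF are reserved
MAX_USABLE_VLANS = 2**12 - 2  # 4094 tenant networks, at most

# Hypothetical trunk state: ToR switch -> VLAN tags trunked to its servers
trunks = {
    "tor-1": {100, 200},
    "tor-2": {100},
}

def placement_ok(tenant_vlan: int, tor: str) -> bool:
    """A tenant's VM can land under `tor` only if its VLAN is trunked there."""
    return tenant_vlan in trunks.get(tor, set())

print(placement_ok(200, "tor-1"))  # True
print(placement_ok(200, "tor-2"))  # False: the move first requires re-trunking tor-2

Fully dynamic placement therefore means reconfiguring trunks on every VM move, which is exactly the operational complexity the overlay approach in the next section avoids.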
Virtual/Overlay Network Functional Architecture

Virtual Networks through Overlays
❒ Basic idea of an overlay:
  ❍ Tunnel tenant packets through the underlying physical Ethernet or IP network
  ❍ The overlay forms a conceptually separate network, providing a separate service from the underlay
❒ L2 service, like VPLS or EVPN
  ❍ Overlay spans a separate broadcast domain
❒ L3 service, like BGP IP VPNs
  ❍ Different tenant networks have separate IP address spaces
❒ Dynamically provision and remove overlays as tenants need network service
❒ Multiple tenants with separate networks can share the same server
[Figure: blue and yellow tenant networks overlaid on the same physical servers]
Source: Bilal, et al.

Advantages of Overlays
❒ Tunneling is used to aggregate traffic
❒ Addresses in the underlay are hidden from the tenant
  ❍ Inhibits unauthorized tenants from accessing data center infrastructure
❒ Tenant addresses in the overlay are hidden from the underlay and from other tenants
  ❍ Multiple tenants can use the same IP address space
❒ Overlays can potentially support large numbers of tenant networks
❒ Virtual network state and end node reachability are handled in the end nodes

Challenges of Overlays
❒ Management tools must co-ordinate the overlay and the underlay
  ❍ Overlay networks probe for bandwidth and packet loss, which can lead to inaccurate information
  ❍ Lack of communication between overlay and underlay can lead to inefficient usage of network resources
  ❍ Lack of communication between overlays can lead to contention and other performance issues
❒ Overlay packets may fail to traverse firewalls
❒ The path MTU limit may cause fragmentation
❒ Efficient multicast is challenging

Functional Architecture: Definitions
❒ Virtual Network
  ❍ Overlay network defined over the Layer 2 or Layer 3 underlay (physical) network
  ❍ Provides either a Layer 2 or a Layer 3 service to the tenant
❒ Virtual Network Instance (VNI) or Tenant Network
  ❍ A specific instance of a virtual network
❒ Virtual Network Context (VNC)
  ❍ A tag or field in the encapsulation header that identifies the specific tenant network

Functional Architecture: More Definitions
❒ Network Virtualization Edge (NVE)
  ❍ Data plane entity that sits at the edge of an underlay network and implements L2 and/or L3 network virtualization functions
    • Example: virtual switch, aka Virtual Edge Bridge (VEB)
  ❍ Terminates the virtual network towards the tenant VMs and towards outside networks
❒ Network Virtualization Authority (NVA)
  ❍ Control plane entity that provides information about reachability and connectivity for all tenants in the data center

Overlay Network Architecture
[Figure: tenant systems attach to NVEs over LAN or point-to-point links, or via end-system integration; NVEs tunnel tenant traffic across the data center L2/L3 network (data plane) and consult the NVA (control plane)]

Virtual/Overlay Network Design and Implementation

Implementing Overlays: Tagging or Encapsulation?
❒ At or above Layer 2 but below Layer 3:
  ❍ Insert a tag at a standards-specified place in the pre-Layer 3 header
❒ At Layer 3:
  ❍ Encapsulate the tenant packet with an encapsulation protocol header and an IP header (see the sketch after this section)
❒ Tenant network identified by the Virtual Network Context
  ❍ The tag, for tagging
  ❍ A context identifier in the protocol header, for encapsulation

L2 Virtual Networks: Tagging Options
❒ Simple 802.1q VLANs
  ❍ 4096 limit problem
  ❍ Trunking complexity
❒ MPLS
  ❍ Nobody uses MPLS directly on the switching
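As a concrete instance of the Layer 3 encapsulation option above, here is a minimal sketch in Python of VXLAN-style encapsulation (RFC 7348): the Virtual Network Context is a 24-bit identifier carried in an 8-byte header, and the encapsulated frame is then sent over an outer UDP (destination port 4789) and IP header between NVEs. The tenant frame below is a placeholder, and a real NVE would build the outer headers as well.

import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

def vxlan_encap(context_id: int, tenant_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header carrying the 24-bit virtual network context."""
    assert 0 <= context_id < 2**24, "24-bit context -> ~16M tenant networks vs. 4096 VLANs"
    flags = 0x08 << 24  # I bit set in the flags byte: a valid identifier follows
    # Header layout: 8 flag bits, 24 reserved bits, 24-bit identifier, 8 reserved bits
    return struct.pack("!II", flags, context_id << 8) + tenant_frame

# Placeholder tenant Ethernet frame; the destination NVE's underlay address
# would be learned from the NVA and used for the outer IP/UDP headers.
packet = vxlan_encap(0x1234, b"\x00" * 64)
print(len(packet))  # 8-byte overlay header + tenant frame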