Introduction to LISP and VXLAN - Scalable Technology Overlays for Switching
Victor Moreno, Distinguished Engineer

Agenda

• Overlays and Switching Fabrics

• VXLAN encapsulation

• The EVPN Control Plane

• The LISP Control Plane

• Deployment Considerations

• Conclusion

Flexible Switching Fabrics

Create Virtual Networks on top of an efficient IP network

• Mobility
• Segmentation + Policy
• Scale
• Automated & Programmable
• Full Cross-Sectional Bandwidth
• L2 + L3 Connectivity
• Physical + Virtual Hosts

Use VXLAN to Create Overlay Networks on Switching Fabrics

Overlay Taxonomy

• Service = Virtual Network Instance (VNI)
• Identifier = VN Identifier (VNID)
• NVE = Network Virtualisation Edge
• VTEP = VXLAN Tunnel End-Point

An Overlay Control Plane runs between the edge devices (NVEs/VTEPs), which encapsulate traffic from the hosts (end-points) across the Underlay Network.

An Underlay Control Plane runs in the transport network.

VXLAN Fundamentals
VXLAN is an Overlay Encapsulation

Two learning modes:
• Data Plane Learning: flood and learn over a multidestination distribution tree joined by all edge devices
• Protocol Learning: advertise hosts in a protocol amongst edge devices


VXLAN Packet Structure - Encapsulation in IP with a Shim for Scalable Segmentation

• Outer MAC Header - 14 bytes (+ 4 bytes optional 802.1Q tag): Dest. MAC Address (48 bits, next-hop), Src. MAC Address (48 bits, source VTEP), VLAN Type 0x8100 (16), VLAN ID Tag (16), Ether Type 0x0800 (16)
• Outer IP Header - 20 bytes: Misc. Data (72), Protocol 0x11 = UDP (8), Header Checksum (16), Source IP (32), Dest. IP (32)
• Outer UDP Header - 8 bytes: Source Port (16), VXLAN Port (16), UDP Length (16), Checksum 0x0000 (16)
• VXLAN Header - 8 bytes: VXLAN Flags RRRRIRRR (8), Reserved (24), VNI (24), Reserved (8)
• Original Layer 2 Frame (Ethernet payload)
• FCS
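A minimal Python sketch of the 8-byte VXLAN shim described above (illustrative only: the byte layout follows the field list, and the ephemeral source-port range used for ECMP entropy is an assumption, not a mandated value):

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned destination UDP port for VXLAN

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte (I bit set),
    24 reserved bits, 24-bit VNI, 8 reserved bits."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit field (16M segments)"
    flags = 0x08  # RRRRIRRR: only the I (valid-VNI) bit set
    return struct.pack("!B3xI", flags, vni << 8)  # VNI in the top 24 bits of the last word

def parse_vxlan_header(header: bytes) -> int:
    """Return the VNI carried in an 8-byte VXLAN header."""
    flags, last_word = struct.unpack("!B3xI", header)
    assert flags & 0x08, "I bit must be set for a valid VNI"
    return last_word >> 8

def entropy_source_port(inner_headers: bytes) -> int:
    """Outer UDP source port derived from a hash of the inner L2/L3/L4
    headers, giving per-flow entropy for ECMP in the underlay."""
    return 49152 + (hash(inner_headers) & 0x3FFF)
```

Because only the source port varies per flow, underlay routers can load-balance VXLAN traffic across equal-cost paths without parsing the encapsulated frame.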

Key points:
• Outer IP: Src and Dst addresses are those of the VTEPs; the outer Src MAC is the source VTEP's, the outer Dest. MAC is the next-hop's
• Destination UDP port 4789; the source port is a hash of the inner L2/L3/L4 headers of the original frame, enabling entropy for ECMP load balancing in the network (tunnel entropy)
• The 24-bit VNI allows for 16M possible segments - large-scale segmentation
• 50 (54) bytes of encapsulation overhead

Data Plane Learning
Dedicated Multicast Distribution Tree per VNI

VTEPs send PIM Joins for the multicast group of each VNI they serve (e.g. Group 239.1.1.1 for the Web segment, Group 239.2.2.2 for the DB segment), so each VNI gets its own multicast distribution tree.

Data Plane Learning
Dedicated Multicast Distribution Tree per VNI

Data Plane Learning
Learning on Broadcast Source - ARP Request Example

VM1 sends an ARP Request (IP A -> group G). The local VTEP encapsulates it to the VNI's multicast group; all VTEPs joined to the group decapsulate and forward the ARP Request locally. From the flooded packet, the remote VTEPs learn the mapping in their MAC/IP address tables: VM 1 -> VTEP 1.

Data Plane Learning
Learning on Unicast Source - ARP Response Example

VM2 sends a unicast ARP Response, which is encapsulated from VTEP 2 to VTEP 1. From this unicast packet VTEP 1 learns the mapping VM 2 -> VTEP 2 (it already holds VM 1 -> VTEP 1).

VXLAN Evolution

Multicast Independent
• Head-end replication enables unicast-only mode
• Control Plane provides dynamic VTEP discovery

Protocol Learning
• Workload MAC addresses learnt by VXLAN NVEs
• Advertising L2/L3 address-to-VTEP associations in a protocol prevents floods

External Connectivity
• VXLAN HW Gateways to other encaps/networks
• VXLAN HW Gateway redundancy
• Enable hybrid overlays

IP Services
• VXLAN IP Services
• Distributed IP Gateways

VXLAN Evolution: Using a Control Protocol - VTEP Discovery

1. VTEPs (e.g. TOR 1 at IP A, TOR 2 at IP B, TOR 3 at IP C) advertise their VNI membership in BGP
2. A BGP Route Reflector consolidates and propagates the VTEP list for each VNI
3. Each VTEP obtains the list of VTEP neighbours for each VNI (e.g. TOR 1 learns Overlay Neighbours TOR 2 at IP B and TOR 3 at IP C)
4. The VTEP can then perform Head-End Replication

VXLAN Data Plane Learning in Unicast Mode - Head-end Replication
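A minimal sketch of head-end replication of a BUM frame: one unicast VXLAN copy is produced per overlay neighbour of the VNI (names and addresses are illustrative placeholders, not from any Cisco implementation):

```python
def head_end_replicate(bum_frame: bytes, vni: int, neighbor_vteps: dict, local_ip: str):
    """Replicate a BUM frame as one unicast VXLAN copy per overlay
    neighbour of the VNI. The neighbour list comes from static
    configuration or from control-plane VTEP discovery."""
    copies = []
    for vtep_ip in neighbor_vteps.get(vni, []):
        # Each copy would be VXLAN encapsulated with outer IP local_ip -> vtep_ip
        copies.append({"outer_src": local_ip, "outer_dst": vtep_ip,
                       "vni": vni, "payload": bum_frame})
    return copies

# Hypothetical neighbour list for TOR 1 ("IP A"), as in the discovery example
neighbors = {5000: ["IP B", "IP C"]}
copies = head_end_replicate(b"arp-request", 5000, neighbors, "IP A")
```

The trade-off versus a multicast tree is bandwidth at the ingress VTEP: the source sends N copies instead of one, in exchange for a unicast-only underlay.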

*Broadcast, Unknown Unicast or Multicast
**Information statically configured or dynamically retrieved via control plane (VTEP discovery)

1. A host sends a L2 BUM* frame
2. The ingress VTEP (TOR 1, IP A) retrieves its list of Overlay Neighbours** (TOR 2 at IP B, TOR 3 at IP C)
3. The VTEP performs Head-End Replication
4. Each copy is VXLAN encapsulated
5. The frames are unicast to the neighbours (IP A -> IP B, IP A -> IP C)

Agenda

• Overlays and Switching Fabrics

• VXLAN encapsulation

• The EVPN Control Plane

• The LISP Control Plane

• Deployment Considerations

• Conclusion

BGP EVPN Control Plane for VXLAN
Host and Subnet Route Distribution

Route-Reflectors deployed for scaling purposes

iBGP adjacencies run between the leaf VTEPs and the Route Reflectors.

• Host Route Distribution decoupled from the Underlay protocol
• Use MP-BGP on the leaf nodes to distribute internal host/subnet routes and external reachability information

BGP EVPN Control Plane
Host Advertisement

1. Host 1 attaches (VLAN 10)
2. The attachment NVE advertises the host's MAC (+IP) through the BGP RR - NLRI: Host MAC1, IP1; NVE IP L1/MAC L1; VNI 5000; Ext. Community: Encapsulation (VXLAN, NVGRE), Sequence 0
3. The choice of encapsulation is also advertised

Resulting entry: MAC 1 | IP 1 | VNI 5000 | Next-Hop IP L1 / MAC L1 | Encap VXLAN | Seq 0

BGP EVPN Control Plane
Host Moves

1. Host 1 moves to NVE3
2. NVE3 detects Host 1 and advertises it with seq# 1 - NLRI: Host MAC1, IP1; NVE IP L3/MAC L3; VNI 5000; Ext. Community: Encapsulation (VXLAN, NVGRE), Sequence 1
3. NVE1 sees the more recent route and withdraws its advertisement

Updated entry: MAC 1 | IP 1 | VNI 5000 | Next-Hop IP L3 / MAC L3 | Encap VXLAN | Seq 1

VXLAN L2 and L3 Gateways
Connecting VXLAN to the broader network
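The EVPN mobility logic above - a later advertisement with a higher sequence number supersedes the stale one, and the losing NVE withdraws - can be sketched as follows (a simplified model, not the BGP implementation itself):

```python
def update_evpn_table(table: dict, mac: str, advert: dict) -> bool:
    """Accept an EVPN host advertisement only if it is new or carries a
    higher mobility sequence number; the stale entry is replaced
    (the losing NVE withdraws its route)."""
    current = table.get(mac)
    if current is None or advert["seq"] > current["seq"]:
        table[mac] = advert
        return True
    return False

table = {}
# Initial attachment at NVE1 (sequence 0)
update_evpn_table(table, "MAC1", {"ip": "IP1", "vni": 5000, "next_hop": "IP L1", "seq": 0})
# Host moves to NVE3: sequence 1 wins, next-hop becomes IP L3
update_evpn_table(table, "MAC1", {"ip": "IP1", "vni": 5000, "next_hop": "IP L3", "seq": 1})
```

A replayed or delayed advertisement with the old sequence number is simply rejected, which is what makes the move converge deterministically.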

L2 Gateway: VXLAN to VLAN Bridging
• Ingress VXLAN packet on the Orange segment
• Egress interface chosen (the bridge may .1Q tag the packet)

L3 Gateway: VXLAN to X Routing
• Ingress VXLAN packet on the Orange segment
• Destination is in another segment; the packet is routed via an SVI to the new segment - VXLAN (Orange to Blue) or VLAN (VLAN 100, VLAN 200)
• Egress interface chosen (the bridge may .1Q tag the packet)

Platform support for the L2 and L3 gateway roles spans the N1KV, Nexus 3K, N5K/6K, N7K with F3 LC, N9K, ASR1K/ASR9K and CSR 1000V. Most platforms support both roles; the N1KV relies on the VXLAN Gateway on the Nexus 1110 for the L2 role and on the CSR 1000V for the L3 role.

VTEP Redundancy in a VXLAN Fabric

• vPC provides MAC state synchronisation and HSRP peering
• Redundant VTEPs share an anycast VTEP IP address in the underlay
• L3 Gateway redundancy based on vPC and HSRP (2 nodes)
• L2 Gateway redundancy based on vPC (anycast vMAC -> emulated VTEP address)

HW VTEP Redundancy - L2 Gateway

VXLAN Overlay on IP Fabric: each pair of VXLAN L2 Gateways shares an anycast VTEP address - VTEP Anycast IP A (members IP A1, IP A2) and VTEP Anycast IP B (members IP B1, IP B2) - toward the WAN/DCI.

HW VTEP Redundancy - L3 Gateway

• L2 VXLAN Overlay - Overlay: HSRP VIP; Underlay: VTEP Anycast IP B (IP B1, IP B2)
• L3 VXLAN Overlay - Overlay: no HSRP; Underlay: independent VTEPs (IP A1, IP A2)
• IP over p2p IP links / Port Channels toward the WAN/DCI

Distributed Gateway Function in L3 Overlays

Traditional L2 - centralised L2/L3 boundary:
• Always bridge, route only at an aggregation point
• Large amounts of state converge at the L3 boundary
• Scale problem for large numbers of L2 segments
• Traditional L2 and L2 overlays

L2/L3 fabric (or overlay) - distributed L3 boundary:
• Always route (at the leaves), bridge when necessary
• Distribute and disaggregate the necessary state
• Optimal scalability
• Enhanced forwarding and L3 overlays

Distributed IP Anycast Gateway
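The route-at-the-leaf model above reduces to a simple per-packet decision at the ingress leaf, sketched here under the assumption that "same subnet" means "bridge" and everything else is routed locally:

```python
import ipaddress

def leaf_forwarding_decision(src_ip: str, dst_ip: str, local_subnets: list) -> str:
    """In an L2/L3 fabric the leaf always routes: traffic between hosts in
    the same subnet is bridged, anything else is routed at the ingress
    leaf rather than hairpinned through a central aggregation point."""
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    for subnet in map(ipaddress.ip_network, local_subnets):
        if src in subnet and dst in subnet:
            return "bridge"
    return "route"
```

Because every leaf makes this decision with the same anycast gateway address, the first routed hop is always one hop away from the host, wherever it attaches.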

• The same "Anycast" SVI IP/MAC is used at all VTEPs/ToRs
• A host will always find its SVI anywhere it moves
• Example: every VXLAN L3 Gateway presents SVI MAC 0000.dead.beef / IP 10.1.1.1

Distributed IP Anycast Gateway
Detailed View

• Each leaf combines an L2 Gateway (VTEP) with an L3 Gateway (SVI A, SVI B) toward the underlay / IP core
• VLAN A' <-> VNI A <-> VLAN A and VLAN B' <-> VNI B <-> VLAN B: VLAN-IDs are locally significant
• Consistent Anycast SVI IP / MAC address at all leaves

Bridged Traffic in VXLAN EVPN
802.1Q Tagged Traffic to VNI Mapping


• VLANs are stretched over L2 VNIs

• VLANs (VLAN A) are mapped to a VNI (VNI A) at each VTEP: VLAN A' -> VNI A -> VLAN A

• Bridged traffic is forwarded over the L2 VNIs

Distributed IP Anycast Gateway
Packet-Walk - IP Forwarding within the Same Subnet aka Bridging (ARP)
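The bridged-traffic VLAN-to-VNI mapping (VLAN A' -> VNI A -> VLAN A) can be sketched as a pair of per-VTEP translation tables; the VLAN and VNI numbers below are illustrative:

```python
def bridge_across_fabric(frame_vlan: int, ingress_map: dict, egress_map: dict):
    """Translate a locally significant VLAN to the fabric-wide L2 VNI at
    the ingress VTEP, then back to the egress VTEP's local VLAN:
    VLAN A' -> VNI A -> VLAN A."""
    vni = ingress_map[frame_vlan]                             # VLAN A' -> VNI A
    egress_vlan = {v: k for k, v in egress_map.items()}[vni]  # VNI A -> VLAN A
    return vni, egress_vlan

# Hypothetical per-VTEP maps: the same VNI uses different local VLAN-IDs
vtep1_vlan_to_vni = {10: 30000}   # VLAN A' on VTEP1
vtep2_vlan_to_vni = {20: 30000}   # VLAN A on VTEP2
```

Only the VNI is fabric-wide; this is what lets each leaf reuse VLAN-IDs independently.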

1. PM1 sends an ARP request for the Default Gateway - 10.10.10.1

2. The ARP request is suppressed at the TOR and punted to the Supervisor, where the MAC and IP are learned and distributed (entry: MAC PM1_MAC, IP 10.10.10.20, L2 VNI 10000, L3 VNI 50000)

3. The TOR responds with the Gateway MAC to PM1; PM1 ARP cache: 10.10.10.1 -> GW_MAC. This is the standard behaviour of an end-host (virtual or physical) ARPing for its default gateway.

Distributed IP Anycast Gateway
Packet-Walk - IP Forwarding within the Same Subnet aka Bridging (ARP)

4. VM1 sends an ARP request for PM1 – 10.10.10.20

5. The ARP request is suppressed at the TOR and punted to the Supervisor, where the MAC and IP are learned and distributed (entries: VM1_MAC, 10.10.10.10, L2 VNI 10000, L3 VNI 50000 and PM1_MAC, 10.10.10.20, L2 VNI 10000, L3 VNI 50000)

6. Assuming PM1 is known and a valid route exists in the Unicast RIB, the TOR responds to the ARP with PM1_MAC as the source MAC; VM1 can build its ARP cache: 10.10.10.20 -> PM1_MAC

If there is a Unicast RIB miss on the TOR, the ARP request is forwarded to all ports except the original sending port (ARP snooping). The ARP response is punted to the Supervisor of the destination TOR for Unicast RIB population (learn) and subsequently forwarded to the source TOR.

Distributed IP Anycast Gateway
Packet-Walk - IP Forwarding within the Same Subnet aka Bridging (Data Packet)

7. VM1 generates a data packet with PM1_MAC as the destination MAC (VLAN 123, SMAC VM1_MAC, SIP 10.10.10.10, DIP 10.10.10.20)

8. The TOR receives the packet and performs a Layer-2 lookup for the destination (VLAN 123 <-> VNI 10000; PM1_MAC -> DTOR_L0, VNI 10000)

9. The TOR adds the VXLAN header information (DVTEP DTOR_L0, SVTEP STOR_L0, VNI 10000) and forwards the packet across the Layer-3 fabric, picking one of the equal-cost paths available via the multiple Spines

10. The destination TOR receives the packet, strips off the VXLAN header (PM1_MAC -> eth1/23) and performs lookup and forwarding toward PM1

If VM1 is not known to PM1, PM1 would ARP for VM1 and the destination TOR would proxy for VM1 - no silent-host discovery problem.

Distributed Routing with EVPN
Routed Traffic to VNI Mapping


• A common VNI (VNI X) is provisioned amongst the different VTEPs to carry routed traffic

• Routed traffic between VTEPs will be encapsulated in VNI X

• Standard longest prefix match routing takes place:
  • Host routes for all known remote hosts are installed at every VTEP -> forward over VNI X
  • Local hosts are covered by the directly connected prefix; a host route will not be present

Distributed IP Anycast Gateway with EVPN
Packet Flow - Unknown Host (H4) in Remote Subnet
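The longest-prefix-match behaviour above - /32 host routes over the L3 VNI winning over the connected subnet prefix - can be sketched with a toy RIB (addresses and names are illustrative):

```python
import ipaddress

def route_lookup(dst_ip: str, rib: dict):
    """Longest-prefix match over a RIB mixing /32 host routes (remote
    hosts, reached over the L3 VNI) and connected subnet prefixes
    (local hosts, no host route present)."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [p for p in rib if dst in ipaddress.ip_network(p)]
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return rib[best]

# Hypothetical VTEP1 RIB: a known remote host's /32 points over VNI X,
# while the local connected prefix covers everything else on-net
rib = {"10.1.1.0/24": ("connected", "VLAN A'"),
       "10.1.1.44/32": ("VNI X", "VTEP2")}
```

Until a host is learnt, only the connected prefix matches, which is exactly why traffic toward an unknown host falls back to bridging/flooding in the local subnet.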


• H1 -> H4 - routed via SVI B (VTEP1) to VLAN A (VTEP1) -> bridged over VNI A (as unknown unicast flood)
• H4 -> H1 - routed via SVI A (VTEP2) -> VNI X -> SVI B (VTEP1) -> VLAN B' (as H1 is known based on the response)
• Standard longest prefix match routing: as long as H4 is not learnt by VTEP1, the only path to H4 is the locally connected subnet (VLAN A')

Distributed IP Anycast Gateway with EVPN
Packet Flow - Unknown Host (H1) in Remote Subnet


• H4 -> H1 - routed via SVI A (VTEP2) to VLAN B (VTEP1) -> routed over VNI X (as the destination subnet is known)
• H1 -> H4 - routed via SVI B (VTEP1) -> VNI X -> SVI A (VTEP2) -> VLAN A' (as H1 is known based on the response)

Distributed IP Anycast Gateway
Packet-Walk - IP Forwarding between Different Subnets aka Routing (ARP)

1. VM1 sends an ARP request for the Default Gateway - 10.10.10.1

2. The ARP request is received at the TOR and punted to the Supervisor, where the MAC and IP are learned and distributed (entry: VM1_MAC, 10.10.10.10, L2 VNI 10000, L3 VNI 50000). The anycast SVIs in VRF Blue use MAC 0000.dead.beef with IP 10.10.10.1 and IP 20.20.20.1 at every TOR.

3. The TOR acts as the regular Default Gateway and sends an ARP response with GW_MAC to VM1; VM1 ARP cache: 20.20.20.20 -> GW_MAC

Distributed IP Anycast Gateway
Packet-Walk - IP Forwarding between Different Subnets aka Routing (Data Packet)

4. VM1 generates a data packet destined to PM2's IP (20.20.20.20) with GW_MAC as the destination MAC (VLAN 123, SMAC VM1_MAC, SIP 10.10.10.10)

5. The TOR receives the packet and performs a Layer-3 lookup for the destination, which is known (20.20.20.20 -> DTOR_L0, VNI 50000)

6. The TOR adds the VXLAN header information (DVTEP DTOR_L0, SVTEP STOR_L0, VNI 50000; outer DMAC DTOR_MAC, outer SMAC STOR_MAC) and forwards the packet across the Layer-3 fabric, picking one of the equal-cost paths available via the multiple Spines

7. The destination TOR receives the packet, strips off the VXLAN header, rewrites the headers (20.20.20.20 -> PM2_MAC, PM2_MAC -> eth1/32, VLAN 321) and performs lookup and forwarding toward PM2

BGP-EVPN Configuration Summary

Global features and distributed gateway MAC:

feature nv overlay
feature vn-segment-vlan-based
feature bgp
nv overlay evpn
fabric forwarding anycast-gateway-mac 0002.0002.0002

VXLAN NVE interface:

interface nve1
  no shutdown
  source-interface loopback1
  host-reachability protocol bgp
  member vni 6000
    suppress-arp
    mcast-group 235.1.1.1
  member vni 39000 associate-vrf

Tenant VRF:

vrf context customer1
  vni 39000
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn

L2 VNIs:

vlan 3002
  vn-segment 6000

evpn
  vni 6000 l2
    rd auto
    route-target import auto
    route-target export auto

L3 VNI and core-facing SVI:

vlan 555
  vn-segment 39000

interface vlan 555
  no shutdown
  vrf member customer1

Distributed Gateway SVI:

interface vlan 3002
  no shutdown
  vrf member customer1
  ip address 10.10.10.1/24
  fabric forwarding mode anycast-gateway

iBGP configuration:

router bgp 65000
  address-family ipv4 unicast
  address-family l2vpn evpn
  neighbor x.x.x.x remote-as 65000
    update-source loopback1
    address-family ipv4 unicast
    address-family l2vpn evpn
      send-community extended
  vrf customer1
    address-family ipv4 unicast
      advertise l2vpn evpn
      redistribute direct route-map bar

Agenda

• Overlays and Switching Fabrics

• VXLAN encapsulation

• The EVPN Control Plane

• The LISP Control Plane

• Deployment Considerations

• Conclusion

Locator/ID Separation Protocol (LISP)
• What do we mean by "Location" and "Identity"?

Today's IP Behaviour - Loc/ID "Overloaded" Semantic: the device's IPv4 or IPv6 address represents both identity and location. When the device moves across the IP core (10.1.0.1 -> 20.2.0.9), it gets a new IPv4 or IPv6 address for its new identity and location.

LISP Behaviour - Loc/ID "Split": the device's IPv4 or IPv6 address (10.1.0.1) represents its identity only; its location is a separate locator (1.1.1.1 -> 2.2.2.2). When the device moves, it keeps its IPv4 or IPv6 address - it has the same identity; only the location changes.

LISP Operations

LISP :: Mapping Resolution "Level of Indirection"

• LISP's "Level of Indirection" is analogous to a DNS lookup
• DNS resolves IP addresses for URLs - answering the "WHO IS" question:
  [ who is lisp.cisco.com ]? host -> DNS Server (Name-to-IP URL Resolution) -> [ 153.16.5.29, 2610:D0:110C:1::3 ]

• LISP resolves locators for queried identities - answering the "WHERE IS" question:
  [ where is 2610:D0:110C:1::3 ]? LISP router -> LISP Mapping System (Identity-to-locator Mapping Resolution) -> [ locator is 128.107.81.169, 128.107.81.170 ]

A LISP Packet Walk
• How Does LISP Operate?
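The DNS-like "where is" resolution above can be sketched as a prefix lookup against a mapping system; the EID prefix and locators are taken from the example, while the code itself is only an illustrative model:

```python
import ipaddress

def map_request(mapping_system: dict, eid: str):
    """Answer the 'where is' question: resolve an EID to its RLOC set,
    much as DNS resolves a name to an address ('who is')."""
    addr = ipaddress.ip_address(eid)
    for prefix, rlocs in mapping_system.items():
        if addr in ipaddress.ip_network(prefix):
            return rlocs
    return None  # in practice a Negative Map-Reply would be returned

# EID prefix and locators from the example above
mapping_system = {"2610:D0:110C:1::/64": ["128.107.81.169", "128.107.81.170"]}
```

The indirection is the point: the querying router caches the answer and encapsulates toward a locator, so host addresses never need to appear in the core routing table.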

1. Source S in LISP site 10.1.0.0/24 resolves D.abc.com; the DNS entry returns A 10.2.0.1
2. S sends the packet 10.1.0.1 -> 10.2.0.1, which reaches the site's ITR (RLOC 1.1.1.1)
3. The ITR obtains the mapping entry for EID-prefix 10.2.0.0/24 - locator-set 2.1.1.1 (priority 1, weight 50, D1) and 2.1.2.1 (priority 1, weight 50, D2); this policy is controlled by the destination site
4. The ITR encapsulates with the outer header 1.1.1.1 -> 2.1.1.1 and sends the packet across the IP network
5. The ETR decapsulates and delivers 10.1.0.1 -> 10.2.0.1 to D in 10.2.0.0/24 (West site)

A LISP Packet Walk
• How About Non-LISP Sites?

1. Source S in a non-LISP site resolves D.abc.com; the DNS entry returns A 10.2.0.1
2. S sends the packet 192.3.0.1 -> 10.2.0.1, which reaches a Proxy ITR (PITR, RLOC 4.4.4.4)
3. The PITR obtains the mapping entry for EID-prefix 10.2.0.0/24 - locator-set 2.1.1.1 (priority 1, weight 50, D1) and 2.1.2.1 (priority 1, weight 50, D2)
4. The PITR encapsulates with the outer header 4.4.4.4 -> 2.1.2.1
5. The ETR decapsulates and delivers 192.3.0.1 -> 10.2.0.1 to D in 10.2.0.0/24

LISP Mapping Database
• The Basics - Registration and Resolution

• Database Mapping Entries (on the ETRs): West 10.2.0.0/16 -> (2.1.1.1, 2.1.2.1); East 10.3.0.0/16 -> (3.1.1.1, 3.1.2.1) - registered with the Map Server / Resolver (5.1.1.1)
• The Map Server / Resolver answers the ITR's Map-Request with a Map-Reply: 10.2.0.0/16 -> (2.1.1.1, 2.1.2.1)
• Mapping Cache Entry (on the ITR): 10.2.0.0/16 -> (2.1.1.1, 2.1.2.1)

Basic LISP Configuration

Map Servers / Resolvers (5.1.1.1, 5.2.2.2):

ip lisp map-resolver
ip lisp map-server
lisp site west
  authentication-key 0 s3cr3t
  eid-prefix 10.2.0.0/24

Border Routers Between Backbones (PITR, 5.3.3.3):

ip lisp proxy-itr
ip lisp itr map-resolver 5.3.3.3

Branch Routers (ITR, 1.1.1.1):

ip lisp itr-etr
ip lisp itr map-resolver 5.3.3.3

DC Aggregation Routers (ETR, 2.1.1.1 / 2.1.2.1):

ip lisp itr-etr
ip lisp database-mapping 10.2.0.0/24 2.1.1.1 p1 w50
ip lisp database-mapping 10.2.0.0/24 2.1.2.1 p1 w50
ip lisp etr map-server 5.1.1.1 key s3cr3t
ip lisp etr map-server 5.2.2.2 key s3cr3t

Usually devices will be configured as ITRs and ETRs to handle traffic in both directions; we illustrate only one direction for simplicity.

LISP Roles and Address Spaces
• What are the Different Components Involved?

LISP Roles:
• Tunnel Routers - xTRs: edge devices that perform encap/decap; Ingress/Egress Tunnel Routers (ITR/ETR)
• Proxy Tunnel Routers - PxTRs: coexistence between LISP and non-LISP sites; Ingress/Egress: PITR, PETR (the PxTR also holds non-LISP prefix -> next-hop entries)

Address Spaces:
• EID = End-point Identifier: host IP or prefix in the EID space
• RLOC = Routing Locator: IP address of routers in the backbone (RLOC space)
• EID-to-RLOC Mapping DB (servers and resolvers): mappings such as a.a.a.0/24 -> w.x.y.1, b.b.b.0/24 -> x.y.w.2, c.c.c.0/24 -> z.q.r.5, d.d.0.0/16 -> z.q.r.5

LISP Use Cases

Efficient Multi-Homing:
• IP Portability
• Ingress Traffic Engineering without BGP

IPv6 Transition Support:
• v6-over-v4, v6-over-v6
• v4-over-v6, v4-over-v4

Multi-Tenancy and VPNs:
• Reduced CapEx/OpEx
• Large-scale Segmentation

Host-Mobility:
• Cloud / Layer 3 VM moves
• Segmentation

Host-Mobility Scenarios

Moves Without LAN Extension (IP Mobility Across Subnets):
• Disaster Recovery
• Cloud Bursting
• Application members in one location

Moves With LAN Extension (Routing for Extended Subnets):
• Active-Active Data Centres
• Distributed Clusters
• Application members distributed (broadcasts across sites)

LISP Host-Mobility - Move Detection
• Monitor the Source of Received Traffic

• The new xTR checks the source of received traffic

• Configured dynamic-EIDs define which prefixes may roam

lisp dynamic-eid roamer
  database-mapping 10.2.0.0/24 p1 w50
  map-server 5.1.1.1 key abcd

interface vlan 100
  lisp mobility roamer

Received a packet... it's from a "new" host... it's in the dynamic-EID allowed range... it's a move! Register the /32 with LISP.

LISP Host-Mobility - Traffic Redirection
• Update Location Mappings for the Host System-Wide
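The move-detection check described above - a new source, inside the configured dynamic-EID range, triggers a /32 registration - can be sketched as follows (a simplified model; prefix and message text are illustrative):

```python
import ipaddress

def detect_move(src_ip: str, dynamic_eid_prefix: str, known_local_hosts: set):
    """On receiving a data packet, the xTR checks whether the source is a
    new host and whether it falls in the configured dynamic-EID range.
    If both hold, it is a move and the /32 is registered with LISP."""
    if src_ip in known_local_hosts:
        return None                                   # already local, nothing to do
    if ipaddress.ip_address(src_ip) in ipaddress.ip_network(dynamic_eid_prefix):
        known_local_hosts.add(src_ip)
        return f"Map-Register {src_ip}/32"            # register the host route
    return None                                       # prefix is not allowed to roam here
```

Limiting detection to configured dynamic-EID prefixes keeps an xTR from registering arbitrary spoofed sources as roamed hosts.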

• When a host move is detected, updates are triggered:
  • The host-to-location mapping in the Database is updated to reflect the new location (e.g. 10.2.0.0/16 -> RLOC A, B; 10.2.0.2/32 -> RLOC C, D)
  • The old ETR is notified of the move
  • ITRs are notified to update their Map-caches
• Ingress routers (ITRs or PITRs) now send traffic to the new location

Host Mobility without LAN Extensions

LISP Host-Mobility - First Hop Routing
• No LAN Extension

• SVI (Interface VLAN x) and HSRP configured as usual
• Consistent GWY-MAC configured across all dynamic subnets

• The lisp mobility command enables proxy-arp functionality on the SVI
• The LISP-VM router services first hop routing requests for both local and roaming subnets

• Moving hosts always talk to a local gateway with the same MAC

Sample SVI configuration (one SVI per site shown; each site uses its local subnet's addresses with the same GWY-MAC):

interface vlan 100
  ip address 10.2.0.5/24
  lisp mobility roamer
  ip proxy-arp
  hsrp 101
    mac-address 0000.0e1d.010c
    ip 10.2.0.1

interface vlan 200
  ip address 10.3.0.7/24
  lisp mobility roamer
  ip proxy-arp
  hsrp 201
    mac-address 0000.0e1d.010c
    ip 10.3.0.1

Host-Mobility and Multi-homing
• ETR Updates - Across LISP Sites

Null0 host routes indicate the host is "away"

1. Host 10.2.0.2 moves from the West site (10.2.0.0/16, xTRs A/B) to the East site (10.3.0.0/16, xTRs C/D)
2. The East xTRs detect the move and install 10.2.0.2/32 - Local (alongside 10.3.0.0/16 - Local and 10.2.0.0/24 - Null0)
3. The West xTRs mark the host "away": 10.2.0.2/32 - Null0 (alongside 10.2.0.0/16 - Local)
4. The East xTR sends a Map-Register for 10.2.0.2/32
5. The Mapping DB (5.1.1.1, 5.2.2.2) now holds 10.2.0.0/16 -> RLOC A, B and 10.2.0.2/32 -> RLOC C, D
6. Map-Notify messages for 10.2.0.2/32 update the peer xTRs at both sites

Refreshing the Map Caches

Map Cache @ ITR: 10.2.0.0/16 -> RLOC A,B, later refined to 10.2.0.2/32 -> RLOC C,D

1. ITRs and PITRs with cached mappings continue to send traffic to the old locators
2. The old xTR knows the host has moved (Null0 route) and sends Solicit Map Request (SMR) messages to any encapsulators sending traffic to the moved host
3. The ITR then initiates a new map request process
4. An updated Map-Reply is issued from the new location
5. The ITR Map Cache is updated and traffic is now re-directed

• SMRs are an important integrity measure to avoid unsolicited map responses and spoofing

On-subnet Server-Server Traffic
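The SMR-driven cache refresh just described can be sketched as follows (a simplified model that collapses the SMR / Map-Request / Map-Reply exchange into one step; names and RLOCs are illustrative):

```python
def refresh_map_cache(itr_cache: dict, old_etr_null0: set, mapping_db: dict, dst_eid: str):
    """When encapsulated traffic reaches the old ETR for a host it knows
    has moved (Null0 route), the ETR solicits a new map request (SMR);
    the ITR re-resolves against the mapping database and updates its
    cache, after which traffic is re-directed to the new locators."""
    if dst_eid in old_etr_null0:                  # old xTR: host is 'away'
        itr_cache[dst_eid] = mapping_db[dst_eid]  # SMR -> Map-Request -> Map-Reply
    return itr_cache[dst_eid]

cache = {"10.2.0.2/32": ["RLOC A", "RLOC B"]}      # stale entry at the ITR
null0 = {"10.2.0.2/32"}                            # old xTR has marked the host away
db = {"10.2.0.2/32": ["RLOC C", "RLOC D"]}         # updated registration
```

Note the ITR re-queries the mapping system itself rather than trusting the SMR's contents - that is the anti-spoofing property the slide calls out.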

In this example X (10.2.0.8) remains in the West 10.2.0.0/24 subnet while Y (10.2.0.9) has moved to the East site.

West-to-East:
• X ARPs for Y; the /32 Null0 entry for Y triggers proxy-ARP on the West xTRs to ensure traffic is steered there
• Note: the entry for Y in X's ARP cache is cleared by a GARP message originated by the West xTRs
• Traffic to Y is LISP encapsulated

East-to-West:
• Y ARPs for X; the /24 Null0 entry for the 'home subnet' triggers proxy-ARP on the East DC xTRs to ensure traffic is steered there
• Note: the assumption is that the ARP cache on Y is refreshed after the move
• Traffic to X is LISP encapsulated

Host Mobility in Extended Subnets

LISP Host-Mobility - First Hop Routing
• With Extended Subnets

• Consistent GWY-IP and GWY-MAC configured across all sites
• Consistent HSRP group number across sites -> consistent GWY-MAC

• Servers can move anywhere and always talk to a local gateway with the same IP/MAC

Sample SVI configuration (the same subnet is extended across sites via the LAN Extension; each xTR uses its own SVI address):

interface vlan 100
  ip address 10.2.0.5/24
  lisp mobility roamer
  lisp extended-subnet-mode
  hsrp 101
    ip 10.2.0.1

Host-Mobility and Multi-homing
• ETR Updates - Extended Subnets

Null0 host routes indicate the host is "away"

10.2.0.0/24 is the dynamic-EID; the Mapping DB (5.1.1.1, 5.2.2.2) holds 10.2.0.0/16 -> RLOC A, B and, after the move, 10.2.0.2/32 -> RLOC C, D.

1. Host 10.2.0.2 moves; the new xTR installs 10.2.0.2/32 - Local
2. A Map-Register is sent for 10.2.0.2/32
3. Map-Notify messages for 10.2.0.2/32 reach the other xTRs (the subnet is extended via OTV)
4. Each xTR's routing table carries 10.2.0.0/16 - Local, 10.2.0.0/24 - Null0 and either 10.2.0.2/32 - Local (new site) or 10.2.0.2/32 - Null0 ("away")

Refreshing the Map Caches

Map Cache @ ITR: 10.2.0.3/32 -> RLOC A,B and 10.2.0.2/32 -> RLOC A,B, the latter refreshed to 10.2.0.2/32 -> RLOC C,D

1. ITRs and PITRs with cached mappings continue to send traffic to the old locators
2. The old xTR knows the host has moved (Null0 route) and sends Solicit Map Request (SMR) messages to any encapsulators sending traffic to the moved host
3. The ITR then initiates a new map request process
4. An updated Map-Reply is issued from the new location
5. The ITR Map Cache is updated and traffic is now re-directed

• SMRs are an important integrity measure to avoid unsolicited map responses and spoofing

LISP VM-Mobility Configuration
• With Extended Subnets - "extended-subnet-mode"

West xTRs (A, B):

ip lisp itr-etr
ip lisp database-mapping 10.2.0.0/16 ...

lisp dynamic-eid roamer
  database-mapping 10.2.0.0/24 ...
  map-server 1.1.1.1 key abcd
  map-server 2.2.2.1 key abcd
  map-notify-group 239.10.10.10

interface vlan 100
  ip address 10.2.0.10/16
  lisp mobility roamer
  lisp extended-subnet-mode
  hsrp 101
    ip 10.2.0.1

East xTRs (C, D): the same, with database-mapping 10.3.0.0/16 and SVI address 10.2.0.11/16.

Mapping DB: 1.1.1.1, 2.2.2.2; the 10.2.0.0/16 subnet is extended between West and East.

On-subnet Server-Server Traffic
• On-Subnet Traffic Across L3 Boundaries

With LAN Extension:
• Live moves and cluster member dispersion
• Traffic between X & Y uses the LAN Extension
• Link-local multicast handled by the LAN Extension

Without LAN Extensions:
• Cold moves, no application dispersion
• X-Y traffic (10.2.0.3 -> 10.2.0.2) is sent to the LISP-VM router and LISP encapsulated
• LAN extensions are needed for link-local multicast traffic

Dissection of LISP Mobility Functionality
Modular Functionality: Fixed Menu or a-la-carte

1. Overlay Control Plane (Signalling)*
2. Overlay Data Plane (Encap-decap)*
3. Mobility detection**
4. First hop routing fix-ups**

* Site Gateway (SG): LISP encap/decap, LISP signalling
** First-Hop Router (FHR): Move Detection, Local Routing Fix-up

Menu of functionality: consume some or all; the roles may be distributed over multiple hops between the mobile host and the IP core.

LISP Mobility Consumption Models
Modular Functionality: Fixed Menu or a-la-carte

Models: Single-Hop (SH); Multi-Hop (MH, separate SG and FHR with host routes advertised between them); IGP Assist (SH or MH, redistribute LISP to IGP); Fabric Integrated (combined SG+FHR).

                            SH   MH   IGP Assist   Fabric Integrated
SG: Encap/decap             ✓    ✓    ✗            ✓
SG: LISP Signalling         ✓    ✓    ✓            ✓
FHR: Move Detection         ✓    ✓    ✓            ✗
FHR: Local Routing Fix-up   ✓    ✓    ✓            ✗

LISP - Standalone Fabric Integration

• Benefits:
  • Ingress Routing Optimisation
  • Edge Router Scale
  • WAN / Multi-homing
• Host detection is done by the fabric
• LISP mobility is triggered at the Site Gateway (SG) by reception of host routes
• MP-BGP to LISP handoff: VXLAN EVPN (Standalone) or VPNv4
• SG: LISP encap/decap and LISP signalling; fabric-based mobility: Move Detection and Local Routing Fix-up via BGP host advertisement

LISP Fabric Integration - Host Mobility Signalling Across Sites

1. A "roamer" (10.2.0.2) lands in a foreign network; the new leaf installs 10.2.0.2/32 - Local
2. iBGP host routes propagate within the fabric; eBGP host routes with a sequence community are exchanged between the fabrics (BGP AS 65001, AS 65002), and the old fabric withdraws its route
3. The Site Gateways send LISP Map-Registers for 10.2.0.2/32; the Map-System updates the mapping from 10.2.0.2/32 -> RLOC A,B,C,D to 10.2.0.2/32 -> RLOC E,F,G,H
4. Map-Notify messages for 10.2.0.2/32 complete the signalling; the losing site installs 10.2.0.2/32 - Null0-LISP

LISP Fabric Integration
Fabric Host Route Distribution @ SG

vrf context UNDERLAY-RLOC-VRF    (LISP Infrastructure)
  ip lisp itr-etr
  ip lisp itr map-resolver
  ip lisp etr map-server key

vrf context TENANT-A-VRF    (Tenant Mobility)
  ip lisp itr-etr
  ip lisp locator-vrf UNDERLAY-RLOC-VRF
  lisp instance-id
  lisp dynamic-eid
    database-mapping dyn-eid-prefix p1 w50
    register-route-notifications tag    (Register hosts with LISP)

interface ethernet 2/4    (SG Interface)
  ip address x.x.x.x
  no shutdown

Multi-tenancy/Segmentation in LISP
• LISP Multi-tenancy

Virtualised Mapping Service:
• EID entries carry instance-id semantics, e.g. (Instance Green, EID A) -> East; (Instance Blue, EID A) -> West; (Instance Yellow, EID C) -> East then West
• Control packets also contain the instance-id - "coloured" Map-Requests/Replies
• Maps to MPLS VPNs, VRF-lite or separate networks on one side and to LISP on the other
• A single RLOC space is shared by multiple instances (e.g. Instance 1, 2 and 3 each resolving 1.1.0.1 -> 10.2.0.2 independently)

"Coloured" Traffic:
• Instance-ID tag in the LISP data header
• Instance-ID encoded in LISP control packets

Virtualised Map Cache (xTRs):
• Mappings cached in different VRFs per instance-id
• Interoperable with other VRF features/solutions

LISP Extensibility
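The multi-tenant mapping service above amounts to keying the database by (instance-id, EID) instead of EID alone; a minimal sketch with illustrative tenant numbers and addresses:

```python
def register(mapping_db: dict, instance_id: int, eid: str, rlocs: list):
    """Register an EID-to-RLOC mapping per instance-id, so overlapping
    tenant address space stays unambiguous while a single RLOC space
    is shared by all instances."""
    mapping_db[(instance_id, eid)] = rlocs

def resolve(mapping_db: dict, instance_id: int, eid: str):
    """A 'coloured' Map-Request: the lookup key includes the instance-id."""
    return mapping_db.get((instance_id, eid))

db = {}
register(db, 1, "1.1.0.1", ["10.2.0.2"])  # tenant 1
register(db, 2, "1.1.0.1", ["10.2.0.2"])  # tenant 2: same EID, no clash
```

On the xTRs the same separation appears as per-instance VRF map caches, while the locators themselves all live in one shared underlay.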

• LISP Canonical Address Family (LCAF)

• Extensible encoding allows inclusion of metadata in the control plane (without having to get IETF workgroup approval each time it changes)

• Attributes can be added to extend EIDs or RLOCs. For example:
  • Instance IDs for segmentation are encoded in LCAF as part of the EID
  • Another example is geo-coordinates carried as part of the RLOC
  • MAC addresses are easily implemented using LCAF
  • SGTs could be registered as part of the RLOC for an EID

Summary

• BGP is a full-featured routing protocol. It has been extended to support overlay location management

• LISP is a Directory Service.

• Both mechanisms are currently implemented to interoperate cleanly

• Operators may prefer one over the other or the combination

• Cisco is committed to delivering end-to-end solutions in both areas

Q & A

Complete your online session evaluation - Cisco Live Melbourne, March 2016 (www.CiscoLiveAPAC.com)

Thank you