NSX-T Meets FRRouting – Part 1

Original posts: Part 1 – https://rutgerblom.com/2020/01/17/nsx-t-meets-frrouting-part-1/ | Part 2 – https://rutgerblom.com/2020/01/20/nsx-t-meets-frrouting-part-2/amp/

rutgerblom – January 17, 2020

Until recently I always used pfSense with the OpenBGPD package as the NSX-T Edge counterpart in my lab environment. It’s quick and easy to set up and works well enough. But pfSense is not what I typically find in a customer’s production environment.

I started to investigate other virtualized “top-of-rack solutions” for the lab that would be a bit more similar to what I see in the enterprise. Right now I’m testing out FRRouting and I must say that I’m pretty impressed with this solution so far. At least it’s good enough to be the subject of a blog post or two.

I’m going to walk through deploying and configuring a pair of FRRouting instances, the NSX-T Edge, and BGP in a lab environment. Follow along if you want.

Target topology

The diagram below shows a logical L3 design for the NSX-T Edge – FRRouting solution that we’ll be building:

There’s nothing much out of the ordinary here. We have a Tier-0 gateway backed by two Edge nodes, and BGP routing. At the top of the diagram things look a bit less familiar with two routers powered by FRRouting.

That’s a nice sketch. Now let’s see if we can make it work too.

Bill of materials

The following is used to build this environment:

- NSX-T 2.5.1
- vSphere 6.7 U3
- Debian Linux 10.2
- FRRouting 7.2

Deploy FRRouting

This first part is about getting the FRR instances up and running, which begins with installing two Linux servers. Let’s get right to it.

Install Linux servers

Debian Linux is a good fit here as there is an official FRR Debian repository which makes installing FRR a lot easier.

Each server is configured with two NICs.

The ens192 interface is configured as the primary interface and will be the “north-facing” port. The ens224 interface is the “SDDC-facing” port. At this point we only assign a static IP address to the ens192 interface.

The only additional components we need to install are the SSH server and standard system utilities:

Complete the Debian installation on both servers.

Install VLAN support

The servers will soon be configured with some VLAN interfaces. To add support for this we install the VLAN package:

apt install vlan -y

Add the following line to /etc/modules so that VLAN (802.1Q) support is loaded during boot:

8021q
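
If you would rather do this from the command line, something along these lines appends the entry and loads the module right away:

echo "8021q" >> /etc/modules
modprobe 8021q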

Enable IPv4 packet forwarding

We want the Linux servers to become Linux routers and as a part of that we need to enable IPv4 packet forwarding in /etc/sysctl.conf:

net.ipv4.ip_forward=1

Reboot the servers after making this change.
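
If you prefer to avoid a reboot, the same setting can be applied on the fly and verified like this (assuming the line was added to /etc/sysctl.conf as above):

sysctl -p
sysctl net.ipv4.ip_forward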

Configure network interfaces

Time to configure the network interfaces on the Linux routers. The following shows the interface configuration per Linux router.

frr-01:

Interface      IP address          Comment
ens192         10.2.129.101/24     Primary interface, north-facing
ens224         –                   Secondary interface, SDDC-facing
ens224.1611    172.16.11.253/24    Management VLAN
ens224.1657    172.16.57.1/29      BGP peering VLAN
ens224.1659    172.16.59.253/24    Overlay transport VLAN

Which results in the following /etc/network/interfaces for frr-01:

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface - north-facing port
auto ens192
allow-hotplug ens192
iface ens192 inet static
    address 10.2.129.101/24
    gateway 10.2.129.1
    dns-nameservers 10.2.129.10
    dns-search demo.local

# The secondary network interface - SDDC-facing port
auto ens224
allow-hotplug ens224
iface ens224 inet manual
    mtu 9000

# The VLAN 1611 interface - Management
auto ens224.1611
iface ens224.1611 inet static
    address 172.16.11.253/24

# The VLAN 1657 interface - BGP peering
auto ens224.1657
iface ens224.1657 inet static
    address 172.16.57.1/29

# The VLAN 1659 interface - Overlay transport
auto ens224.1659
iface ens224.1659 inet static
    address 172.16.59.253/24

frr-02:

Interface      IP address          Comment
ens192         10.2.129.102/24     Primary interface, north-facing
ens224         –                   Secondary interface, SDDC-facing
ens224.1611    172.16.11.254/24    Management VLAN
ens224.1658    172.16.58.1/29      BGP peering VLAN
ens224.1659    172.16.59.254/24    Overlay transport VLAN

The corresponding /etc/network/interfaces for frr-02:

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface - north-facing
auto ens192
allow-hotplug ens192
iface ens192 inet static
    address 10.2.129.102/24
    gateway 10.2.129.1
    dns-nameservers 10.2.129.10
    dns-search demo.local

# The secondary network interface - SDDC-facing
auto ens224
allow-hotplug ens224
iface ens224 inet manual
    mtu 9000

# The VLAN 1611 interface - Management
auto ens224.1611
iface ens224.1611 inet static
    address 172.16.11.254/24

# The VLAN 1658 interface - BGP peering
auto ens224.1658
iface ens224.1658 inet static
    address 172.16.58.1/29

# The VLAN 1659 interface - Overlay transport
auto ens224.1659
iface ens224.1659 inet static
    address 172.16.59.254/24

Restart the network to activate the new network interface configuration:

systemctl restart networking

Run the ip address command to verify that the new interface configuration is active:
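
A handy shortcut here is the brief output format, which lists every interface with its state and addresses on one line:

ip -br address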

Install VRRP

As you may have noticed, we are “stretching” the management VLAN (1611) and the overlay transport VLAN (1659) between the Linux routers. Either router can act as the default gateway for these VLANs at any given time. To make use of this capability we’ll set up VRRP with Keepalived.

Install the package:

apt install keepalived -y

Create the Keepalived configuration file /etc/keepalived/keepalived.conf. Below is the Keepalived configuration for each server.

frr-01 (VRRP master):

global_defs {
    # Email Alert Configuration
    notification_email {
        # Email To Address
        [email protected]
    }
    # Email From Address
    notification_email_from [email protected]
    # SMTP Server Address / IP
    smtp_server 127.0.0.1
    # SMTP Timeout Configuration
    smtp_connect_timeout 60
    router_id frr-01
}

vrrp_sync_group VG1 {
    group {
        1611
        1659
    }
}

vrrp_instance 1611 {
    # State = Master or Backup
    state MASTER
    # Interface ID for VRRP to run on
    interface ens224.1611
    # VRRP Router ID
    virtual_router_id 10
    # Highest Priority Wins
    priority 250
    # VRRP Advert Interval 1 Second
    advert_int 1
    # Basic Inter Router VRRP Authentication
    authentication {
        auth_type PASS
        auth_pass VMware1!VMware1!
    }
    # VRRP Virtual IP Address Config
    virtual_ipaddress {
        172.16.11.1/24 dev ens224.1611
    }
}

vrrp_instance 1659 {
    # State = Master or Backup
    state MASTER
    # Interface ID for VRRP to run on
    interface ens224.1659
    # VRRP Router ID
    virtual_router_id 11
    # Highest Priority Wins
    priority 250
    # VRRP Advert Interval 1 Second
    advert_int 1
    # Basic Inter Router VRRP Authentication
    authentication {
        auth_type PASS
        auth_pass VMware1!VMware1!
    }
    # VRRP Virtual IP Address Config
    virtual_ipaddress {
        172.16.59.1/24 dev ens224.1659
    }
}

frr-02 (VRRP backup):

global_defs {
    # Email Alert Configuration
    notification_email {
        # Email To Address
        [email protected]
    }
    # Email From Address
    notification_email_from [email protected]
    # SMTP Server Address / IP
    smtp_server 127.0.0.1
    # SMTP Timeout Configuration
    smtp_connect_timeout 60
    router_id frr-02
}

vrrp_sync_group VG1 {
    group {
        1611
        1659
    }
}

vrrp_instance 1611 {
    # State = Master or Backup
    state BACKUP
    # Interface ID for VRRP to run on
    interface ens224.1611
    # VRRP Router ID
    virtual_router_id 10
    # Highest Priority Wins
    priority 150
    # VRRP Advert Interval 1 Second
    advert_int 1
    # Basic Inter Router VRRP Authentication
    authentication {
        auth_type PASS
        auth_pass VMware1!VMware1!
    }
    # VRRP Virtual IP Address Config
    virtual_ipaddress {
        172.16.11.1/24 dev ens224.1611
    }
}

vrrp_instance 1659 {
    # State = Master or Backup
    state BACKUP
    # Interface ID for VRRP to run on
    interface ens224.1659
    # VRRP Router ID
    virtual_router_id 11
    # Highest Priority Wins
    priority 150
    # VRRP Advert Interval 1 Second
    advert_int 1
    # Basic Inter Router VRRP Authentication
    authentication {
        auth_type PASS
        auth_pass VMware1!VMware1!
    }
    # VRRP Virtual IP Address Config
    virtual_ipaddress {
        172.16.59.1/24 dev ens224.1659
    }
}

Restart the Keepalived service on both routers to activate the new configuration:

systemctl restart keepalived

We can now verify VRRP operation by running systemctl status keepalived:

Running the ip address command will hopefully show the virtual IP address on the two VLAN interfaces:

And a ping to the virtual IP address from the VRRP backup node (frr-02 in this case) should be successful:
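
For reference, a couple of equivalent command-line checks (the first on the VRRP master, the second from the backup node) would be:

ip address show ens224.1611
ping -c 3 172.16.11.1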

Install FRRouting

With Linux installed and configured we continue with the FRRouting installation.

Begin by adding the FRR Debian repository:

curl -s https://deb.frrouting.org/frr/keys.asc | apt-key add -

FRRVER="frr-stable"

echo deb https://deb.frrouting.org/frr $(lsb_release -s -c) $FRRVER | tee -a /etc/apt/sources.list.d/frr.list

apt update && apt install frr frr-pythontools -y

FRRouting is now installed.
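
A quick way to confirm which FRR version landed on the system is to ask vtysh, FRR’s CLI shell, which is installed along with the packages:

vtysh -c "show version"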

Configure FRRouting capabilities

We only enable the routing protocols that are needed. To make FRR a good match for the NSX-T Edge we would like the instances to be capable of doing BGP and BFD. So we simply enable these daemons in /etc/frr/daemons.

bgpd=yes

bfdd=yes
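
Editing the file by hand works fine; if you prefer a one-liner, something like this should achieve the same result (assuming the daemons are still set to "no", as they are on a fresh install):

sed -i 's/^bgpd=no/bgpd=yes/; s/^bfdd=no/bfdd=yes/' /etc/frr/daemons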

Restart the FRR service and verify that the BGP and BFD daemons are active:

systemctl restart frr

systemctl status frr
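
From the FRR side, vtysh can also confirm which daemons it sees running; bgpd and bfdd should be in the list (exact output varies a bit between FRR versions):

vtysh -c "show daemons"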

This is looking good. The FRR instances are now ready for control plane configuration.

Summary

This completes part 1 of the series on NSX-T and FRRouting. We’ve been quite productive:

- Installed two Debian Linux servers
- Installed VLAN support
- Enabled packet forwarding
- Configured network interfaces
- Installed and configured VRRP
- Installed FRRouting

In the next part we’ll continue with deploying the NSX-T Edge and setting up BGP routing between NSX-T and the FRR instances. Thanks for reading and stay tuned!

NSX-T Meets FRRouting – Part 2

Welcome back! We’re in the process of building an NSX-T Edge – FRRouting environment.

In part 1 we prepared the FRR routers by doing the following:

- Installed two Debian Linux servers
- Installed VLAN support
- Enabled packet forwarding
- Configured network interfaces
- Installed and configured VRRP
- Installed FRRouting

In this second part we will first deploy the NSX-T Edge components and then set up BGP routing. There’s a lot to do, so let’s get started!

Target topology

As a refresher, here is the big picture once more:

We’ll use this diagram as our blueprint. Scroll back up here any time you wonder what the heck it is we’re doing down there.

Deploy NSX-T Edge

Let’s begin by getting the NSX-T Edge on par with the FRR routers.

Create NSX-T segments

The FRR routers, frr-01 and frr-02, were configured with local “peering” VLANs 1657 and 1658 respectively. Corresponding VLAN-backed segments are needed for L2 adjacency with the FRR routers.

Creating the “vlan-1658” segment:

Both segments in place:

Uplink profile

Create an uplink profile for the edge transport nodes containing settings for teamings, transport VLAN, and MTU:

The transport VLAN has ID 1659 and the MTU size is 9000.

Deploy Edge VMs

Instead of walking through the Edge node deployment, the table below summarizes the settings I used during the deployment. Have a look at the Single N-VDS per Edge VM article for a detailed Edge node deployment walkthrough.

Setting           Edge Node 1                    Edge Node 2
Name              en01                           en02
FQDN              en01.lab.local                 en02.lab.local
Form Factor       Small                          Small
Mgmt IP           172.16.11.61/24                172.16.11.62/24
Mgmt Interface    PG-MGMT (VDS)                  PG-MGMT (VDS)
Default Gateway   172.16.11.1                    172.16.11.1
Transport Zone    TZ-VLAN, TZ-OVERLAY            TZ-VLAN, TZ-OVERLAY
Static IP List    172.16.59.71, 172.16.59.81     172.16.59.72, 172.16.59.82
Gateway           172.16.59.1                    172.16.59.1
Mask              255.255.255.0                  255.255.255.0
DPDK Interface    Uplink1 > Trunk1 (VDS)         Uplink1 > Trunk1 (VDS)
                  Uplink2 > Trunk2 (VDS)         Uplink2 > Trunk2 (VDS)

The two Edge nodes are up and running:

We add both Edge nodes to an Edge cluster:

Create Tier-0 gateway

With the Edge nodes in place we can create a Tier-0 gateway. I’m configuring it with Active-Standby HA Mode:

We add four external interfaces to the Tier-0:

Name          IP address        Segment     Edge Node
en1-uplink1   172.16.57.2/29    vlan-1657   en1
en1-uplink2   172.16.58.2/29    vlan-1658   en1
en2-uplink1   172.16.57.3/29    vlan-1657   en2
en2-uplink2   172.16.58.3/29    vlan-1658   en2

The four Tier-0 interfaces are in place:

Test connectivity

Now is a good time to verify the L2 adjacency between the FRR routers and the Tier-0 interfaces.

A ping from frr-01 to the Tier-0 interfaces in VLAN 1657:

And a ping from frr-02 to the Tier-0 interfaces in VLAN 1658:

Successful pings. We’re good!

Configure BGP

Moving up an OSI layer, we continue with setting up BGP.

Tier-0 gateway

The Tier-0 is configured with the following BGP settings:

Setting            Value
Local AS           65000
BGP                On
Graceful Restart   Disable
ECMP               On

The settings in NSX Manager:

We add two BGP neighbors to the Tier-0: 172.16.57.1 (frr-01) and 172.16.58.1 (frr-02). Make sure to enable BFD for these neighbors too:

The neighbor status will be “Down” at this point which is expected as we didn’t configure BGP on the FRR routers yet.

For route re-distribution I choose to re-distribute from all the available sources into the BGP process:

FRR routers

Configuration of BGP in FRRouting can be done by editing configuration files directly or through VTY shell, which is FRRouting’s CLI frontend. We’ll use VTY shell today.

frr-01

Run the vtysh command to start VTY shell:

After changing to the configuration mode with conf t, we enable the BGP process with:

router bgp 65001

Next, we configure the router ID and the BGP/BFD neighbors, which are the Tier-0’s interfaces in VLAN 1657, on frr-01:

bgp router-id 172.16.57.1
neighbor 172.16.57.2 remote-as 65000
neighbor 172.16.57.2 bfd
neighbor 172.16.57.3 remote-as 65000
neighbor 172.16.57.3 bfd

We want frr-01 to advertise itself as the default gateway to its BGP neighbors, which is accomplished with:

address-family ipv4 unicast
neighbor 172.16.57.2 default-originate
neighbor 172.16.57.3 default-originate

Run end followed by wr to save the configuration:
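
If you want to double-check what was saved, show running-config in VTY shell should show something along these lines for frr-01 (mirroring the frr-02 configuration listed further down):

router bgp 65001
 bgp router-id 172.16.57.1
 neighbor 172.16.57.2 remote-as 65000
 neighbor 172.16.57.2 bfd
 neighbor 172.16.57.3 remote-as 65000
 neighbor 172.16.57.3 bfd
 !
 address-family ipv4 unicast
  neighbor 172.16.57.2 default-originate
  neighbor 172.16.57.3 default-originate
 exit-address-family
!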

If all went well we should now see active BGP and BFD sessions between frr-01 and the Tier-0 interfaces in VLAN 1657. Let’s verify this with:

show bgp summary

BGP neighbor sessions are looking good. How about BFD?

show bfd peers

BFD sessions are up.

frr-02

We repeat the exact same configuration steps on frr-02. The configuration for frr-02 looks like this:

router bgp 65001
 bgp router-id 172.16.58.1
 neighbor 172.16.58.2 remote-as 65000
 neighbor 172.16.58.2 bfd
 neighbor 172.16.58.3 remote-as 65000
 neighbor 172.16.58.3 bfd
 !
 address-family ipv4 unicast
  neighbor 172.16.58.2 default-originate
  neighbor 172.16.58.3 default-originate
 exit-address-family

Let’s check the BGP/BFD status at frr-02:

show bgp summary

show bfd peers

BGP and BFD sessions are looking good.

Routing

After a lot of deploying and configuring it’s finally time to see if we can actually route any traffic.

FRR routing tables

We begin by having a look at the FRR routing tables. Run the following command in VTY shell on the FRR routers:

show ip route bgp

frr-01:

frr-02:

The FRR routers have learned about each other’s /29 subnets via the NSX-T Tier-0. More specifically, they were learned from neighbors 172.16.57.2 and 172.16.58.2 respectively. This tells us that the active Tier-0 SR is hosted on Edge node 1.

Is the standby Tier-0 SR completely out of the picture then? Let’s see:

show bgp detail

The standby Tier-0 SR on Edge node 2 also advertises routes for the same /29 subnets, but as you can see the ASN (65000) is added to the path three more times and packets won’t be routed over these longer paths.

Tier-0 routing table

Run the following command on the Edge node hosting the active Tier-0 SR:

get route bgp

Here we see two equal cost routes for 0.0.0.0/0, one to each FRR router. This tells us that “default-originate” did its job. Both routes also ended up in the FIB which means ECMP is working.

From overlay to physical

It’s now time for the ultimate test. We create an overlay segment, 192.168.10.0/24, connected to the Tier-0 gateway:

The BGP process on the Tier-0 advertises the 192.168.10.0/24 network to its neighbors. Let’s check if it ended up there:

show ip route bgp

frr-01:

frr-02:

A route to the overlay network is indeed present in both FRR routers’ routing tables. Now we connect a VM to the overlay segment and run a traceroute from this VM to an IP address north of the FRR routers:

traceroute 10.2.129.10 -n -q 2

The VM on the overlay segment can reach the physical network. By doing two probes per hop we also see that the Tier-0 offers two paths to the destination: one via frr-01 (172.16.57.1) and one via frr-02 (172.16.58.1).

It’s a wrap

It’s been quite a project, but we got ourselves a working NSX-T Edge – FRRouting environment and it wasn’t that hard to set up, right?

This all started with me looking for a more enterprise-like virtual top-of-rack solution for my NSX-T lab. Having these FRR routers north of the Tier-0 certainly feels like a big step towards that goal. Perhaps not fully showcased in these articles, but FRRouting’s feature set is pretty much on par with today’s data center leaf-spine switches. As a matter of fact it’s already being used there. Have a look at Cumulus Networks for example.

For more information about features and possibilities surrounding BGP have a look at the official NSX-T and FRRouting documentation. Most of all I recommend that you set this up yourself. Hopefully these two articles will help you get started with that.

Thanks for reading!