
Data Centre using Nokia SR Linux

This repository features multiple network topologies using Nokia Service Router Linux (SR Linux). The topologies demonstrate the configuration of VxLAN and the use of various tools to collect and visualize network statistics.

Configuration Overview

The following is a brief description of the router configuration in this lab. This is still a work in progress and more detail will be added later. Please consult the router documentation for more information.

Note: This lab uses Release 21.11, which is not the latest release, but its container image is smaller.

Main Lab Topology

The main clab configuration file creates a Data Centre fabric using BGP, EVPN, and VxLAN technologies. Here is a description of the overall topology:

The goal of the configuration is to allow connectivity among all servers over the DC fabric.

Main Lab Topology

Configuration Workflow

The following describes the configuration procedure, which is divided into three stages: fabric configuration, EVPN configuration, and VxLAN configuration.

Notes:

Fabric Configuration

Prior to configuring the EVPN-based overlay, a routing protocol needs to be deployed in the fabric to advertise the reachability of all the leaf VXLAN Termination End Point (VTEP) addresses throughout the IP fabric.

Therefore, we need to configure BGP between five autonomous systems (AS). One AS includes the spine switches, and each of the other ASs includes one leaf or border switch. The purpose of configuring eBGP is to create an underlay infrastructure that shares all system0 (loopback) IP addresses, which will be used later.

The configuration includes the following steps (the order is not important as long as all steps are completed before committing the configuration):

At the end of this stage, you should be able to see all BGP neighbours and the advertised routes.
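
For illustration, a minimal underlay sketch for one leaf is shown below. The AS numbers, the neighbour address, and the group name are placeholders (the actual values come from the lab configuration files), an export policy named export-loopbacks that advertises the system0.0 address is assumed to exist, and the exact CLI paths may differ slightly between SR Linux releases.

# enter candidate
# set / network-instance default protocols bgp autonomous-system 65001
# set / network-instance default protocols bgp router-id 10.0.0.1
# set / network-instance default protocols bgp ipv4-unicast admin-state enable
# set / network-instance default protocols bgp group eBGP-underlay peer-as 65100
# set / network-instance default protocols bgp group eBGP-underlay export-policy export-loopbacks
# set / network-instance default protocols bgp neighbor 192.168.11.1 peer-group eBGP-underlay
# commit now

The same pattern is repeated on every leaf, border, and spine switch, each with its own AS number and the point-to-point addresses of its fabric links.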

EVPN Configuration

The previous configuration enables us to establish iBGP EVPN sessions between the leaf and spine routers. In this stage we create the iBGP configuration by placing all routers in one AS (65500). The configuration is the same for all routers, except that the spine switches also act as route reflectors.

Follow these steps on the leaf and border switches:

On the spine switches:
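
For illustration, a minimal overlay sketch is shown below; the group name and the neighbour address are placeholders, and the exact CLI paths may differ slightly between releases. On every router:

# enter candidate
# set / network-instance default protocols bgp group iBGP-overlay peer-as 65500
# set / network-instance default protocols bgp group iBGP-overlay local-as 65500
# set / network-instance default protocols bgp group iBGP-overlay evpn admin-state enable
# set / network-instance default protocols bgp neighbor 10.0.2.1 peer-group iBGP-overlay
# commit now

The spine switches additionally act as route reflectors for the group:

# set / network-instance default protocols bgp group iBGP-overlay route-reflector client true
# set / network-instance default protocols bgp group iBGP-overlay route-reflector cluster-id 10.0.2.1

On the leaf and border switches the iBGP neighbours are the spine system0 addresses; on the spines they are the leaf and border system0 addresses.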

VxLAN Configuration

The configuration of VxLAN is different for each router. In summary, we will need a VxLAN tunnel interface (vxlan1), a mac-vrf network-instance (vrf-1) that binds the tunnel interface together with the server-facing interfaces, and a bgp-evpn/bgp-vpn instance that sets the EVI and route-target values (a configuration sketch follows the server address table below).

The IP addresses of the servers:

Server   IP Address
h1       192.168.1.11/24
h2       192.168.1.12/24
h3       192.168.1.13/24
h4       192.168.2.11/24
sflow    192.168.3.11/24
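
The following sketch illustrates the VxLAN and mac-vrf configuration on one leaf, under a few assumptions: vrf-1, vxlan1, and bgp-instance 1 are the names used by the verification commands later in this document, while the server-facing subinterface (ethernet-1/1.1), the VNI, the EVI, and the route-target value are placeholders for this example.

# enter candidate
# set / tunnel-interface vxlan1 vxlan-interface 1 type bridged
# set / tunnel-interface vxlan1 vxlan-interface 1 ingress vni 1
# set / network-instance vrf-1 type mac-vrf
# set / network-instance vrf-1 interface ethernet-1/1.1
# set / network-instance vrf-1 vxlan-interface vxlan1.1
# set / network-instance vrf-1 protocols bgp-evpn bgp-instance 1 vxlan-interface vxlan1.1
# set / network-instance vrf-1 protocols bgp-evpn bgp-instance 1 evi 1
# set / network-instance vrf-1 protocols bgp-vpn bgp-instance 1 route-target export-rt target:65500:1
# set / network-instance vrf-1 protocols bgp-vpn bgp-instance 1 route-target import-rt target:65500:1
# commit now

The server-facing subinterfaces must be created with type bridged before they can be attached to the mac-vrf. Note that this sketch covers only the Layer-2 (mac-vrf) part; connectivity between different server subnets additionally requires a Layer-3 (IRB/ip-vrf) configuration, which is not shown here.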

Verification and Troubleshooting

Ensure that a BGP session is established:

# /show network-instance default protocols bgp neighbor <neighbor>

To verify the tunnel interface configuration:

# /show tunnel-interface vxlan-interface brief

For the bridge table:

# /show network-instance vrf-1 bridge-table mac-table all

Once configured, the bgp-vpn instance can be checked to verify that the RT/RD values are set:

# /show network-instance vrf-1 protocols bgp-vpn bgp-instance 1

When BGP-EVPN is configured in the mac-vrf instance, the leafs start to exchange EVPN routes, which we can verify with the following command:

# /show network-instance default protocols bgp neighbor <neighbor>

The IMET/RT3 routes can be viewed in summary and detailed modes:

# /show network-instance default protocols bgp routes evpn route-type 3 summary
# /show network-instance default protocols bgp routes evpn route-type 3 detail

When the IMET routes from leaf2 are imported into the vrf-1 network-instance, the corresponding multicast VXLAN destinations are added and can be checked with the following command:

# /show tunnel-interface vxlan1 vxlan-interface 1 bridge-table multicast-destinations destination *

After receiving EVPN routes with VXLAN encapsulation from the remote leafs, SR Linux creates VXLAN tunnels towards the remote VTEPs, whose addresses are received in the EVPN IMET routes. The state of a remote VTEP, as seen from the leaf1 switch, is shown below.

# /show tunnel vxlan-tunnel all

Once a VTEP is created in the vxlan-tunnel table with a non-zero allocated index, an entry in the tunnel-table is also created for the tunnel.

# /show network-instance default tunnel-table all

When the leafs have exchanged only EVPN IMET routes, they build the BUM flooding tree (also known as multicast destinations), but the unicast destinations are still unknown, which can be checked with the command below:

# /show tunnel-interface vxlan1 vxlan-interface 1 bridge-table unicast-destinations destination *

Note: as of the time of this writing, you can ping from h1 to h2 and h3 (on the same subnet) and from h1 to h4 and the sFlow server as expected. However, pings from h2 and h3 to servers on remote subnets are not successful. I am still troubleshooting the issue.

Alternative Lab Topology

To be able to experiment with SR Linux quickly, a smaller topology file is included. The smaller DC topology consists of one spine switch, two leaf switches, and four servers connected to the leaf switches as follows:

Tiny Lab Topology

The goal of the configuration is to allow connectivity among all servers over the DC fabric.

Similar to the main topology, the routers are configured to run eBGP to create an underlay infrastructure. EVPN and VxLAN are configured using iBGP as an overlay. Unlike the main topology, a route reflector is not needed for this topology.
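
Without a route reflector, the two leaf switches can simply form a direct iBGP EVPN session with each other over their system0 addresses, for example (the group name and address are placeholders):

# set / network-instance default protocols bgp group iBGP-overlay evpn admin-state enable
# set / network-instance default protocols bgp neighbor 10.0.0.2 peer-group iBGP-overlay

Everything else (the overlay AS, the VxLAN tunnel interface, and the mac-vrf) follows the same pattern as in the main topology.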

The IP addresses of the servers:

Server   IP Address
h1       192.168.1.11/24
h2       192.168.1.12/24
h3       192.168.3.11/24
h4       192.168.4.11/24

Note: as of the time of this writing, you can ping from h1 to h2 (on the same subnet) and from h1 to h3 and h4 as expected. However, pings from h2 to h3 and h4 are not successful. I am still troubleshooting the issue.

References

The above configuration is based on the following tutorials: