BIG-IP L2 Virtual Wire LACP Passthrough Deployment with IXIA Bypass Switch and Network Packet Broker (Single Service Chain - Active / Active)

Introduction

This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign or configuration change. For more information about bypass switch devices, refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility and https://www.gigamon.com/campaigns/next-generation-network-packet-broker.html. This article series introduces network designs that forward traffic to inline tools at Layer 2 (L2).

F5’s BIG-IP hardware appliances can be inserted in L2 networks. This can be achieved using either Virtual Wire (vWire) or by bridging two VLANs using a VLAN group.

This document covers the design and implementation of the IXIA Bypass Switch/Network Packet Broker in conjunction with the BIG-IP i5800 appliance and Virtual Wire (vWire).

This document focuses on the IXIA Bypass Switch and Network Packet Broker. For an architecture overview of the bypass switch and network packet broker, refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?tab=series&page=1.

This article is a continuation of https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1, using the latest versions of the BIG-IP and IXIA devices. It also covers various combinations of BIG-IP and IXIA configurations.


Network Topology

The diagram below represents the actual lab network and shows the deployment of the BIG-IP with the IXIA Bypass Switch and Network Packet Broker.

Figure 1 - Deployment of BIG-IP with IXIA Bypass Switch and Network Packet Broker

Please refer to the Lab Overview section in https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1 for more details on the lab topology and connections.


Hardware Specification

The hardware used in this article is:

  • IXIA iBypass DUO (Bypass Switch)
  • IXIA Vision E40 (Network Packet Broker)
  • BIG-IP i5800
  • Arista DCS-7010T-48 (all four switches)


Software Specification

The software used in this article is:

  • BIG-IP 16.1.0
  • IXIA iBypass DUO 1.4.1
  • IXIA Vision E40 5.9.1.8
  • Arista 4.21.3F (North Switches)
  • Arista 4.19.2F (South Switches)


Switch Configuration

LAG, or link aggregation, is a way of bonding multiple physical links into a combined logical link. MLAG, or multi-chassis link aggregation, extends this capability by allowing a downstream switch or host to connect to two switches configured as an MLAG domain. This provides redundancy by giving the downstream switch or host two uplink paths, as well as full bandwidth utilization, since the MLAG domain appears as a single switch to Spanning Tree (STP).

The Lab Overview section in https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1 shows the MLAG configuration in both switches. This article focuses on LACP deployment for tagged packets. For more details on MLAG configuration, refer to https://eos.arista.com/mlag-basic-configuration/#Verify_MLAG_operation


Step Summary

Step 1 : Configuration of MLAG peering between both the North Switches

Step 2 : Verify MLAG Peering in North Switches

Step 3 : Configuration of MLAG Port-Channels in North Switches

Step 4 : Configuration of MLAG peering between both the South Switches

Step 5 : Verify MLAG Peering in South Switches

Step 6 : Configuration of MLAG Port-Channels in South Switches

Step 7 : Verify Port-Channel Status


Step 1 : Configuration of MLAG peering between both the North Switches

The MLAG configuration in North Switch 1 and North Switch 2 is as follows:

North Switch 1:

  • Configure Port-Channel
interface Port-Channel10
  switchport mode trunk
  switchport trunk group m1peer
  • Configure VLAN
interface Vlan4094
  ip address 172.16.0.1/30
  • Configure MLAG
mlag configuration
  domain-id mlag1
  heartbeat-interval 2500
  local-interface Vlan4094
  peer-address 172.16.0.2
  peer-link Port-Channel10
  reload-delay 150

North Switch 2:

  • Configure Port-Channel
interface Port-Channel10
  switchport mode trunk
  switchport trunk group m1peer
  • Configure VLAN
interface Vlan4094
  ip address 172.16.0.2/30
  • Configure MLAG
mlag configuration
  domain-id mlag1
  heartbeat-interval 2500
  local-interface Vlan4094
  peer-address 172.16.0.1
  peer-link Port-Channel10
  reload-delay 150

 

Step 2 : Verify MLAG Peering in North Switches

North Switch 1:

North-1#show mlag
MLAG Configuration:
domain-id             :              mlag1
local-interface       :           Vlan4094
peer-address          :         172.16.0.2
peer-link             :     Port-Channel10
peer-config           :          consistent

MLAG Status:
state                 :             Active
negotiation status    :          Connected
peer-link status      :                 Up
local-int status      :                 Up
system-id             :  2a:99:3a:23:94:c7
dual-primary detection :           Disabled

MLAG Ports:
Disabled              :                  0
Configured            :                  0
Inactive              :                  6
Active-partial        :                  0
Active-full           :                  2

North Switch 2:

North-2#show mlag
MLAG Configuration:
domain-id             :              mlag1
local-interface       :           Vlan4094
peer-address          :         172.16.0.1
peer-link             :     Port-Channel10
peer-config           :          consistent

MLAG Status:
state                 :             Active
negotiation status    :          Connected
peer-link status      :                 Up
local-int status      :                 Up
system-id             :  2a:99:3a:23:94:c7
dual-primary detection :           Disabled

MLAG Ports:
Disabled              :                  0
Configured            :                  0
Inactive              :                  6
Active-partial        :                  0
Active-full           :                  2


Step 3 : Configuration of MLAG Port-Channels in North Switches

North Switch 1:

interface Port-Channel513
   switchport trunk allowed vlan 513
   switchport mode trunk
   mlag 513
interface Ethernet50
   channel-group 513 mode active

North Switch 2:

interface Port-Channel513
   switchport trunk allowed vlan 513
   switchport mode trunk
   mlag 513
interface Ethernet50
   channel-group 513 mode active


Step 4 : Configuration of MLAG peering between both the South Switches

The MLAG configuration in South Switch 1 and South Switch 2 is as follows:

South Switch 1:

  • Configure Port-Channel
interface Port-Channel10
  switchport mode trunk
  switchport trunk group m1peer
  • Configure VLAN
interface Vlan4094
  ip address 172.16.1.1/30
  • Configure MLAG
mlag configuration
  domain-id mlag1
  heartbeat-interval 2500
  local-interface Vlan4094
  peer-address 172.16.1.2
  peer-link Port-Channel10
  reload-delay 150

South Switch 2:

  • Configure Port-Channel
interface Port-Channel10
  switchport mode trunk
  switchport trunk group m1peer
  • Configure VLAN
interface Vlan4094
  ip address 172.16.1.2/30
  • Configure MLAG
mlag configuration
  domain-id mlag1
  heartbeat-interval 2500
  local-interface Vlan4094
  peer-address 172.16.1.1
  peer-link Port-Channel10
  reload-delay 150


Step 5 : Verify MLAG Peering in South Switches

South Switch 1:

South-1#show mlag
MLAG Configuration:
domain-id           :               mlag1
local-interface     :            Vlan4094
peer-address        :          172.16.1.2
peer-link           :      Port-Channel10
peer-config         :          consistent


MLAG Status:
state               :              Active
negotiation status  :           Connected
peer-link status    :                  Up
local-int status    :                  Up
system-id           :   2a:99:3a:48:78:d7


MLAG Ports:
Disabled            :                   0
Configured          :                   0
Inactive            :                   6
Active-partial      :                   0
Active-full         :                   2

South Switch 2:

South-2#show mlag
MLAG Configuration:
domain-id           :               mlag1
local-interface     :            Vlan4094
peer-address        :          172.16.1.1
peer-link           :      Port-Channel10
peer-config         :          consistent


MLAG Status:
state               :              Active
negotiation status  :           Connected
peer-link status    :                  Up
local-int status    :                  Up
system-id           :   2a:99:3a:48:78:d7


MLAG Ports:
Disabled            :                   0
Configured          :                   0
Inactive            :                   6
Active-partial      :                   0
Active-full         :                   2


Step 6 : Configuration of MLAG Port-Channels in South Switches

South Switch 1:

interface Port-Channel513
   switchport trunk allowed vlan 513
   switchport mode trunk
   mlag 513
interface Ethernet50
   channel-group 513 mode active

South Switch 2:

interface Port-Channel513
   switchport trunk allowed vlan 513
   switchport mode trunk
   mlag 513
interface Ethernet50
   channel-group 513 mode active


The LACP modes are as follows:

  1. On
  2. Active
  3. Passive

LACP connection establishment will occur only for the following configurations:

  • Active in both the North and South Switches
  • Active in either the North or South Switch and Passive in the other
  • On in both the North and South Switches

Note: In this case, all the interfaces of both the North and South Switches are configured with LACP mode Active.
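
To confirm the negotiated LACP state from the switch side, including whether the partner is running in active or passive mode, the standard Arista EOS LACP show command can be used, for example:

North-1#show lacp neighbor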


Step 7 : Verify Port-Channel Status

North Switch 1:

North-1#show mlag interfaces detail
                                                              local/remote
   mlag             state       local       remote            oper        config                 last change    changes
---------- ----------------- ----------- ------------ --------------- ------------- --------------------------- -------
    513       active-full       Po513        Po513           up/up       ena/ena         4 days, 0:34:28 ago        198


North Switch 2:

North-2#show mlag interfaces detail
                                                              local/remote
   mlag             state       local       remote            oper        config                 last change    changes
---------- ----------------- ----------- ------------ --------------- ------------- --------------------------- -------
    513       active-full       Po513        Po513           up/up       ena/ena         4 days, 0:35:58 ago        198

South Switch 1:

South-1#show mlag interfaces detail
                                                              local/remote
   mlag             state       local       remote            oper        config                 last change    changes
---------- ----------------- ----------- ------------ --------------- ------------- --------------------------- -------
    513       active-full       Po513        Po513           up/up       ena/ena         4 days, 0:36:04 ago        190

South Switch 2:

South-2#show mlag interfaces detail
                                                              local/remote
   mlag             state       local       remote            oper        config                 last change    changes
---------- ----------------- ----------- ------------ --------------- ------------- --------------------------- -------
    513       active-full       Po513        Po513           up/up       ena/ena         4 days, 0:36:02 ago        192
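
In addition to the MLAG view above, the aggregate state of each port-channel and its member links can be checked directly on each switch, for example:

South-2#show port-channel summary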


Ixia iBypass Duo Configuration

For detailed insight, refer to the IXIA iBypass Duo Configuration section in https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?page=1


Figure 2 - Configuration of iBypass Duo (Bypass Switch)


Heartbeat Configuration

Heartbeats are configured on both bypass switches to monitor the tools in their primary and secondary paths. If a tool failure is detected, the bypass switch forwards traffic to the secondary path. Heartbeats can be configured using multiple protocols; here, Bypass Switch 1 uses DNS and Bypass Switch 2 uses IPX for the heartbeat.


Figure 3 - Heartbeat Configuration of Bypass Switch 1 ( DNS Heartbeat )


In this infrastructure, the VLAN ID is 513, which is represented as hex 0201 (513 decimal = 0x0201).


Figure 4 - VLAN Representation in Heartbeat


Figure 5 - Heartbeat Configuration of Bypass Switch 1 ( B Side )


Figure 6 - Heartbeat Configuration of Bypass Switch 2 ( IPX Heartbeat )


Figure 7 - Heartbeat Configuration of Bypass Switch 2 ( B Side )


IXIA Vision E40 Configuration

Create the following resources with the information provided.

  • Bypass Port Pairs
  • Inline Tool Pair
  • Service Chains


Figure 8 - Configuration of Vision E40 ( NPB )


This article focuses on deployment of the Network Packet Broker with a single service chain, whereas the previous article is based on two service chains.


Figure 9 - Configuration of Tool Resources


In a single Tool Resource, two Inline Tool Pairs are configured, which allows both Bypass Port Pairs to be served by a single Service Chain.


Figure 10 - Configuration of VLAN Translation


From the switch configuration, the source VLAN is 513; it is translated to 2001 and 2002 for Bypass 1 and Bypass 2, respectively.

For more insight into VLAN translation, refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?page=1

For tagged packets, VLAN translation should be enabled. LACP frames are untagged, so they should be bypassed and routed to the other port-channel. In this case, LACP traffic does not reach the BIG-IP; instead, it is routed directly from the NPB to the other pair of switches.


LACP bypass Configuration


The network packet broker is configured to forward (or bypass) the LACP frames directly from the north switch to the south switch and vice versa. LACP frames bear the EtherType 0x8809. This filter is configured as part of the Bypass Port Pair configuration.


Note: There are other ways to configure this filter, such as using service chains and filters, but this is the simplest approach for this deployment.


Figure 11 - Configuration to redirect LACP


BIG-IP Configuration

Step Summary

  • Step 1 : Configure interfaces to support vWire
  • Step 2 : Configure trunk in passthrough mode
  • Step 3 : Configure Virtual Wire

Note: The steps mentioned above are specific to the topology in Figure 1. For more details on Virtual Wire (vWire), refer to https://devcentral.f5.com/s/articles/BIG-IP-vWire-Configuration?tab=series&page=1 and https://devcentral.f5.com/s/articles/vWire-Deployment-Configuration-and-Troubleshooting?tab=series&page=1


Step 1 : Configure interfaces to support vWire

To configure the interfaces to support vWire, do the following:

  1. Log into BIG-IP GUI
  2. Select Network -> Interfaces -> Interface List
  3. Select the specific interface and, under the vWire configuration, select Virtual Wire as the Forwarding Mode
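
Equivalently, the forwarding mode can be set from tmsh. The lines below are a minimal sketch; interfaces 1.1, 1.3 and 1.4 are named elsewhere in this article, while 1.2 is assumed to be the fourth vWire member:

modify net interface 1.1 port-fwd-mode virtual-wire
modify net interface 1.2 port-fwd-mode virtual-wire
modify net interface 1.3 port-fwd-mode virtual-wire
modify net interface 1.4 port-fwd-mode virtual-wire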


Figure 12 - Example GUI configuration of interface to support vWire


Step 2 : Configure trunk in passthrough mode

To configure the trunks, do the following:

  1. Log into BIG-IP GUI
  2. Select Network -> Trunks
  3. Click Create to configure a new trunk. Disable LACP for LACP passthrough mode
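
The trunks can also be created from tmsh. This is a minimal sketch: North_Trunk with members 1.1 and 1.3 matches the configuration referenced later in this article, while South_Trunk with members 1.2 and 1.4 is assumed; LACP is left disabled for passthrough mode:

create net trunk North_Trunk interfaces add { 1.1 1.3 } lacp disabled
create net trunk South_Trunk interfaces add { 1.2 1.4 } lacp disabled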

 

Figure 13 - Configuration of North Trunk in Passthrough Mode


Figure 14 - Configuration of South Trunk in Passthrough Mode


Step 3 : Configure Virtual Wire

To configure the Virtual Wire, do the following:

  1. Log into BIG-IP GUI
  2. Select Network -> Virtual Wire
  3. Click Create to configure Virtual Wire
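
For reference, the tagged VLANs referenced by the vWire can also be created from tmsh. This is only an illustrative sketch: the VLAN names and the tag-to-trunk assignments below are assumptions based on the translated VLAN IDs 2001 and 2002 used in this lab, and the Virtual Wire itself is created through the GUI as described above (Figure 15):

create net vlan North_VLAN_2001 interfaces add { North_Trunk { tagged } } tag 2001
create net vlan South_VLAN_2001 interfaces add { South_Trunk { tagged } } tag 2001
create net vlan North_VLAN_2002 interfaces add { North_Trunk { tagged } } tag 2002
create net vlan South_VLAN_2002 interfaces add { South_Trunk { tagged } } tag 2002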

 

Figure 15 - Configuration of Virtual Wire


As VLAN 513 is translated to 2001 and 2002, the vWire is configured with explicit tagged VLANs. It is also recommended to include an untagged VLAN in the vWire to allow any untagged traffic.

For LACP passthrough mode, enable the multicast bridging sys db variable as below:

modify sys db l2.virtualwire.multicast.bridging value enable

Note: Make sure the sys db variable remains enabled after a reboot or an upgrade. For LACP mode, the multicast bridging sys db variable should be disabled.
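
A quick way to confirm the variable (for example, after a reboot or an upgrade) is to list it from tmsh:

list sys db l2.virtualwire.multicast.bridging

For LACP mode, the same variable can be set back to its disabled value:

modify sys db l2.virtualwire.multicast.bridging value disable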


Scenarios

As LACP passthrough mode is configured in the BIG-IP, LACP frames pass through the BIG-IP and LACP is established between the North and South Switches. ICMP traffic is used to represent network traffic from the north switches to the south switches.


Scenario 1: Traffic flow through BIG-IP with North and South Switches configured in LACP active mode

The configurations above show that all four switches are configured in LACP active mode.


Figure 16 - MLAG after deployment of BIG-IP and IXIA with Switches configured in LACP ACTIVE mode

Figure 16 shows that port-channel 513 is active on both the North and South Switches.

Figure 17 - ICMP traffic flow from client to server through BIG-IP

Figure 17 shows that ICMP traffic is reachable from the client to the server through the BIG-IP. This verifies scenario 1: LACP is established between the switches and traffic passes through the BIG-IP successfully.


Scenario 2: Active BIG-IP link goes down with link state propagation enabled in BIG-IP

Figure 15 shows Propagate Virtual Wire Link Status enabled in the BIG-IP. Figure 17 shows that interface 1.1 of the BIG-IP is the active incoming interface and interface 1.4 is the active outgoing interface. Disabling BIG-IP interface 1.1 brings the active link down, as shown below.


Figure 18 - BIG-IP interface 1.1 disabled


Figure 19 - Trunk state after BIG-IP interface 1.1 disabled


Figure 19 shows that the trunks are up even though interface 1.1 is down. As per the configuration, North_Trunk has two member interfaces, 1.1 and 1.3, and since one of them is still up, the North_Trunk status remains active.


Figure 20 - MLAG status with interface 1.1 down and Link State Propagation enabled

Figure 20 shows that port-channel 513 is active on both the North and South Switches. This shows that the switches are not aware of the link failure; it is handled by the IXIA configuration.


Figure 21 - IXIA Bypass Switch after 1.1 interface of BIG-IP goes down


As shown in Figure 8, a single Service Chain is configured, and it will go down only if both Inline Tool Port Pairs are down in the NPB. Bypass will therefore be enabled only if the Service Chain goes down in the NPB. Figure 21 shows that bypass is still not enabled in the IXIA Bypass Switch.


Figure 22 - Service Chain and Inline Tool Port Pair status in IXIA Vision E40 ( NPB )


Figure 22 shows that the Service Chain is still up because BIG IP2 (Inline Tool Port Pair) is up while BIG IP1 is down. Figure 1 shows that P09 of the NPB is connected to interface 1.1 of the BIG-IP, which is down.


Figure 23 - ICMP traffic flow from client to server through BIG-IP

Figure 23 shows that traffic still flows through the BIG-IP even though interface 1.1 is down. The active incoming interface is now 1.3 and the active outgoing interface is 1.4. Low-bandwidth traffic is still allowed through the BIG-IP because bypass is not enabled and IXIA handles the rate-limiting process.


Scenario 3: When North_Trunk goes down with link state propagation enabled in BIG-IP


Figure 24 - BIG-IP interface 1.1 and 1.3 disabled


Figure 25 - Trunk state after BIG-IP interface 1.1 and 1.3 disabled


Figure 15 shows that Propagate Virtual Wire Link Status is enabled, and thus both trunks are down.


Figure 26 - IXIA Bypass Switch after interfaces 1.1 and 1.3 of BIG-IP go down


Figure 27 - ICMP traffic flow from client to server bypassing BIG-IP


Conclusion


This article covered a BIG-IP L2 Virtual Wire LACP passthrough deployment with IXIA, with IXIA configured using a single Service Chain. Observations from this deployment are as follows:

  1. VLAN translation in the IXIA NPB converts the real VLAN ID (513) to the translated VLAN IDs (2001 and 2002).
  2. The BIG-IP receives packets with the translated VLAN IDs (2001 and 2002).
  3. VLAN translation requires all packets to be tagged; untagged packets will be dropped.
  4. LACP frames are untagged, so a bypass is configured in the NPB for LACP.
  5. Tool Sharing needs to be enabled to allow untagged packets, which adds an extra tag. This type of configuration and testing will be covered in upcoming articles.
  6. With a single Service Chain, if any one of the Inline Tool Port Pairs goes down, low-bandwidth traffic is still allowed to pass through the BIG-IP (tool).
  7. If any Inline Tool link goes down, IXIA decides whether to bypass or rate-limit. The switches remain unaware of the change.
  8. With a single Service Chain, if the Tool Resource is configured with both Inline Tool Port Pairs in Active-Active state, load balancing occurs and both paths are active at any point in time.
  9. Multiple Service Chains in the IXIA NPB can be used instead of a single Service Chain to remove the rate-limiting process. This type of configuration and testing will be covered in upcoming articles.
  10. If the BIG-IP goes down, IXIA enables bypass and ensures there is no packet drop.


Published Dec 09, 2021
Version 1.0
