BIG-IP L2 Deployment with Bypass, Network Packet Broker and LACP

Introduction

This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign or configuration change. For more information about bypass switch devices, refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility. The article series introduces network designs that forward traffic to the inline tools at layer 2 (L2). In this installment, we will cover the deployment of the bypass switch (BP), network packet broker (NPB) and BIG-IP in Virtual Wire (vWire) mode with LACP (ref. https://en.wikipedia.org/wiki/Link_aggregation).


Design Overview

The insertion of inline network tools at L2 reduces deployment complexity because no configuration change is required on the routing or switching infrastructure. Figure 1 below is an example of an L2 insertion. It shows switches to the north and south of the bypass switches; in other networks these devices may be routers, firewalls or any other devices capable of using LACP to provide greater throughput and/or network resilience. In normal operation, traffic passing through the bypass switches is forwarded to the network packet brokers and to the BIG-IP on the primary path (solid lines). The BIG-IP is configured in vWire mode. The bypass switches monitor the tools' availability using heartbeats. In the event of a failure of the primary path/tool, the bypass switches will forward traffic using the secondary path (dotted lines). If both BIG-IP devices fail, the bypass switches enter bypass mode and permit traffic to flow directly from north to south.


Figure 1. Topology Overview


LACP Bypass

A Link Aggregation Group (LAG) combines multiple physical ports into a single high-bandwidth connection by load balancing traffic over the individual ports. It also offers the benefit of resiliency: as ports fail within the aggregate, bandwidth is reduced but the connection remains up and continues passing traffic.
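
To make the load-balancing behavior concrete, the short Python sketch below models a typical LAG hashing decision (illustrative only; real switches hash in hardware and the exact fields used vary by vendor and configuration):

    import zlib

    def select_member(src_mac: str, dst_mac: str, members: list) -> str:
        # Hash the flow identifiers so every frame of a given flow takes
        # the same member port, preserving per-flow frame order.
        key = f"{src_mac}-{dst_mac}".encode()
        return members[zlib.crc32(key) % len(members)]

    lag = ["1/1", "1/2"]
    print(select_member("00:aa:bb:cc:dd:01", "00:aa:bb:cc:dd:02", lag))
    # If port 1/2 fails, the same flow simply re-hashes over the survivors:
    print(select_member("00:aa:bb:cc:dd:01", "00:aa:bb:cc:dd:02", ["1/1"]))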


Link Aggregation Control Protocol (LACP) provides a method to control the aggregation of multiple ports into a LAG. Network devices configured with LACP ports send LACP frames to their peers to dynamically build LAGs.
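
For reference, LACPDUs are slow-protocol frames sent to the multicast address 01:80:C2:00:00:02 with EtherType 0x8809. A minimal sketch of one, built with Scapy (assuming its contrib LACP module is available; field values are left at their defaults):

    from scapy.all import Ether
    from scapy.contrib.lacp import SlowProtocol, LACP

    # LACPDUs are addressed to the slow-protocols multicast MAC and
    # carried in EtherType 0x8809 frames; switches exchange these to
    # negotiate LAG membership.
    lacpdu = Ether(dst="01:80:c2:00:00:02", type=0x8809) / SlowProtocol() / LACP()
    lacpdu.show()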

 

Common network designs leverage link aggregation across multiple chassis (MLAG, aka Virtual Port Channel or vPC on Cisco devices). This allows a LAG to terminate on two or more devices. For more information about MLAG, refer to https://en.wikipedia.org/wiki/MC-LAG.

 

By default, the BIG-IP device participates in the LACP peering. It processes the LACP frames but does NOT forward them. This means the LAGs are formed between the switches and the BIG-IP, not between the north and south switches, which may not suit all deployments. In cases where LACP peering is required between the north and south switches, LACP packets need to bypass the inline tool (BIG-IP) and be forwarded to the next hop unaltered.

 

Figure 2 illustrates how LACP traffic is handled by the NPBs. LACP packets sent from the north switches are forwarded to the NPBs by the BP switches. The NPBs are configured to filter and bypass frames with EtherType 0x8809 (LACP). The LACP packets are returned to the BP switches and forwarded to the south switches. LACP peering is therefore established between the north and south switches.


Figure 2. LACP Bypass
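
The NPB's handling of these frames reduces to a single classification rule. The sketch below (plain Python, not the Ixia configuration syntax) models the decision:

    LACP_ETHERTYPE = 0x8809  # slow protocols (LACP and marker PDUs)

    def npb_forward(ethertype: int) -> str:
        # LACP frames skip the tool and return to the BP switch, so the
        # north and south switches peer directly with each other.
        if ethertype == LACP_ETHERTYPE:
            return "bypass: return to BP switch toward the far-side switch"
        # Everything else traverses the service chain (the BIG-IP vWire).
        return "tool: forward to the BIG-IP inline pair"

    print(npb_forward(0x8809))  # LACP -> bypassed
    print(npb_forward(0x0800))  # IPv4 -> inspected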


Heartbeats

Monitoring the paths and the tools is critical to minimizing service interruption, and heartbeats provide this monitoring function. In Figure 3, heartbeats are configured on BP1 to monitor the path from the BP to the tool. In normal operation, heartbeats are sent out on BP1 port 1 (top solid blue line) and received on BP1 port 2 (bottom solid blue line). Heartbeats are also configured to monitor the reverse path, sent from BP1 port 2 to BP1 port 1. This ensures the network connections are up and the tools are processing traffic initiated in both directions. If the heartbeats are not received on the primary path, BP1 starts forwarding traffic over the secondary path. If both paths are detected to be down, BP1 is configured to bypass the NPB and BIG-IP for all traffic, meaning all traffic is permitted to traverse the BP from north to south and vice versa. Heartbeats are configured on all four paths; see Figure 4.

Figure 3. Heartbeat Path


Figure 4. Heartbeats monitor paths and tools
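
The heartbeat logic amounts to a per-path liveness counter. The sketch below (illustrative Python; the actual intervals, retry counts and frame format are set on the iBypass switch, and MISS_LIMIT is an assumed value) captures the decision the BP makes:

    MISS_LIMIT = 3  # assumed: consecutive lost heartbeats before a path is declared down

    def bp_state(missed: dict) -> str:
        # missed maps a path name to its count of consecutive lost heartbeats.
        if missed["primary"] < MISS_LIMIT:
            return "forward on primary path (solid lines)"
        if missed["secondary"] < MISS_LIMIT:
            return "forward on secondary path (dotted lines)"
        # Both paths down: fail open so traffic keeps flowing north-south.
        return "bypass mode: north <-> south directly"

    print(bp_state({"primary": 0, "secondary": 0}))  # normal operation
    print(bp_state({"primary": 3, "secondary": 0}))  # primary tool failed
    print(bp_state({"primary": 3, "secondary": 3}))  # both failed -> bypass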


Lab Overview

The following describes the lab setup used to validate this design. The objective of this lab is twofold (refer to Figure 5 for details):

  • Demonstrate tool failure detection by the BP switch using heartbeats
  • Demonstrate LACP traffic bypass by the NPB, focusing on the primary (active) path


Note 1: In this environment, a single bypass switch is used to simulate two bypass switches.

Note 2: This article is part of a series. The steps below are general configurations; for step-by-step instructions, please refer to the lab section of the article L2 Deployment of vCMP guest with Ixia network packet broker.


Figure 5. Primary Path


The lab equipment consists of the following:


  • L3 switches – A pair north and a pair south of the insertion point; each pair is configured as MLAG peers, and each pair has a LACP LAG connecting it to the other pair.
  • Ixia iBypass Switch (BP) – Provides L2 insertion capabilities with fail-to-wire configured. Also configured to monitor paths and tools.
  • Ixia Network Packet Broker (NPB) – Configured to provide the filter and bypass functions.
  • BIG-IP i5800 – To test traffic flow, the BIG-IP was configured to forward traffic, with no tool configuration applied. It operates in vWire mode.


Figure 6 below shows the lab configuration and cabling.


Figure 6. Lab Configuration


Ixia iBypass Switch Configuration


The following shows the configuration of the Ixia BP.



Bypass Switch Heartbeat Configuration


The heartbeat configuration is identical to the one mentioned in the xxx guide with the exception of the VLAN ID. In this infrastructure, the VLAN ID is 513, represented as hex 0201.
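
The 0201 value is simply the VLAN ID written as the 16-bit hexadecimal field carried in the 802.1Q tag, as a quick Python check confirms:

    vlan_id = 513
    print(f"{vlan_id:04x}")  # -> 0201, the value entered in the heartbeat frame
    # The 802.1Q TCI carries the VLAN ID in its low 12 bits, so 513
    # (0x0201) fits comfortably.
    assert vlan_id == 0x0201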




Network Packet Broker Configuration


Create the following resources with the information provided.

Bypass Port Pairs


Tool Resources


Service Chains


The final config should look like the following:



 

LACP Bypass Configuration


The network packet broker is configured to forward (or bypass) the LACP frames directly from the north to the south switch and vice versa. LACP frames bear the EtherType 0x8809. This filter is configured during the Bypass Port Pair configuration.


Note: There are other methods to configure this filter, such as using service chains and filters, but this is the simplest for this deployment.
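
One way to verify the filter is working is to capture on a south-side link and confirm that LACPDUs sourced by the north switches arrive untouched. A small Scapy sketch (the interface name eth1 is a placeholder for the capture port):

    from scapy.all import sniff

    # Capture only slow-protocol frames (EtherType 0x8809). Seeing the
    # north switches' source MACs here confirms the LACP frames bypassed
    # the BIG-IP and reached the south side.
    frames = sniff(iface="eth1", filter="ether proto 0x8809", count=5, timeout=30)
    for f in frames:
        print(f.src, "->", f.dst, hex(f.type))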


 

BIG-IP Configuration


Two vWire groups are created, one for each link of the LACP LAG.
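
As a sanity check, the resulting objects can be listed over iControl REST. The sketch below uses the standard net module endpoint (the management address and credentials are placeholders, and the exact representation of virtual wire objects varies by TMOS version):

    import requests

    BIGIP = "https://192.0.2.10"  # placeholder management address
    AUTH = ("admin", "admin")     # placeholder credentials

    # List vlan-group objects; in a virtual wire deployment the two
    # vWire groups (one per LAG link) should appear here.
    resp = requests.get(f"{BIGIP}/mgmt/tm/net/vlan-group", auth=AUTH,
                        verify=False)  # lab only: self-signed certificate
    resp.raise_for_status()
    for group in resp.json().get("items", []):
        print(group["name"], group.get("members"))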


 



Testing

Pings were used to represent network traffic from the north switches to the south switches. To simulate a tool failure, the vWire2 configuration was removed. In this failure simulation, the interfaces associated with vWire2 remained up but the tool no longer processed traffic; see Figure 7. The BP heartbeats detected the tool failure and put the ports in bypass mode. The north and south switches renegotiated the LACP LAG. In the process of renegotiating, approximately 200 ping packets were lost over a period of a few seconds. The failure and bypass mode are displayed on the BP dashboard in Figure 8. The LACP status of port 50, as seen from the North Switch 2 CLI, is shown in Figure 9.


Figure 7. Remove vWire2 configuration

 

Figure 8. Bypass Mode Enabled


Figure 9. North Switch 2 CLI Output for LACP Peer
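
The outage window reported above can be reproduced by counting missed replies while the failover occurs. A simple sketch (the target address is a placeholder for a host behind the far-side switches; -W is the Linux ping reply timeout):

    import subprocess
    import time

    TARGET = "192.0.2.1"  # placeholder: host reached through the insertion point
    lost, in_outage, started = 0, False, 0.0

    for _ in range(600):  # probe once per second for ~10 minutes
        ok = subprocess.run(["ping", "-c", "1", "-W", "1", TARGET],
                            stdout=subprocess.DEVNULL).returncode == 0
        if not ok:
            lost += 1
            if not in_outage:
                in_outage, started = True, time.time()
        elif in_outage:
            print(f"outage: {lost} probes lost over {time.time() - started:.1f}s")
            lost, in_outage = 0, False
        time.sleep(1)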


When the vWire2 configuration is restored, the BP detects that the tool has been restored and resumes the traffic flow. The traffic on half of the LAG is interrupted for about 400 pings (a few seconds) during the renegotiation of the LAG (Figure 10). The BP dashboard (Figure 11) shows operation has returned to normal.


Figure 10. vWire2 configuration restored.


Figure 11. Bypass switch returns to normal operating state

Published Jan 16, 2020
Version 1.0
