I use a lot of internal resources to solve problems at F5 -- I'm sure there's a support engineer or two who might even suggest I abuse my rights as a customer with access to their internal email. However, I'm not a programmer, and I've never really claimed to be. Simple bash or Perl scripts to automate a task or two? Sure. iptables commands off the top of my head? Maybe. bigpipe commands are almost second nature now, but iRules syntax is coming to me slowly. So I was awfully proud of myself this week when I ran into an issue deploying a Link Controller configuration in one of our international offices and was able to solve it quickly -- by turning to DevCentral, finding the syntax I thought I needed, and putting it in place. I'm sure this could be optimized or made more resilient; Colin, Joe, (Hoolio?) any time you're ready. :)

 

There are a couple of problems being solved here.

  1. The current telco provider has been unreliable for the last six months to a year, and the office's reliance on SaaS applications has multiplied this year.
  2. High availability is more important than high throughput -- what we have is OK, but having it always available is what matters.
  3. The current network structure already exists and we don't want to re-IP it; all we want is to drop in a second circuit to make Internet access highly available.
  4. There are external customer-facing dependencies that we don't want to muck with right now.

 

The solution? Link Controller, VLAN groups, and iRules.

 

Here's the simplified existing diagram:

 

[Diagram: no_lc]

 

Here's where we're going:

 

[Diagram: bridge_lc]

 

The biggest challenge? Inbound traffic to the DMZ from the new network? That's easy; virtuals are no big deal. Using opaque VLAN groups to bridge the existing network? Challenging, but straightforward. Forwarding outbound corporate traffic so that unproxy-able protocols like H.323 don't traverse a BIG-IP NAT, while protocols that need HA links, like IPsec and HTTPS, still work? OK, that's a few more virtuals to create. Selecting the NAT for outbound traffic, so that traffic destined for the existing 10.10.10.1 router is not NATted by the BIG-IP, but traffic being sent to the 10.10.20.1 router is?
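For the outbound forwarding piece, the "few more virtuals" are just wildcard IP-forwarding virtuals listening on the internal VLANs. Purely as an illustration -- this is tmsh syntax, and the virtual and VLAN names are made up for the example, not pulled from my actual config -- one might look something like:

  # A wildcard IP-forwarding virtual for outbound corporate traffic.
  # "outbound_forward" and "corp_vlan" are hypothetical placeholder names.
  create ltm virtual outbound_forward destination 0.0.0.0:any mask any ip-forward profiles add { fastL4 } vlans-enabled vlans add { corp_vlan }

One of these per traffic class keeps the unproxy-able protocols flowing through the box without a full proxy in the path.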

 

OK, that takes an iRule. I can't iRule to save my life. However, with the help of the search function and the iRule Editor, here's the code I came up with:

  when LB_SELECTED {
    # Traffic bound for the existing 10.10.10.1 router stays un-NATted
    if { [LB::server addr] eq "10.10.10.1" } {
        snat none
    } elseif { [LB::server addr] eq "10.10.20.1" } {
        # Traffic bound for the new 10.10.20.1 router gets a SNAT,
        # chosen by the local address the connection arrived on
        switch -glob [IP::local_addr] {
            "10.10.10.10" { snatpool outbound_ipsec }
            "10.10.10.20" { snatpool outbound_corp }
            "10.10.10.21" { snatpool outbound_lab }
            "10.10.10.22" { snatpool outbound_guest }
            default { snatpool default_auto }
        }
    }
  }

 

It's not a work of art, but it's less than 20 lines, Colin!

 

Cheers~

 

Side note: Notice I've created a snatpool called default_auto. I do that on all of my LTMs instead of using SNAT Automap -- Automap translates to the self IP, which is also the source of monitor traffic, so a dedicated translation address means that when I'm troubleshooting later, tcpdump captures won't have client traffic and monitor traffic mixed together.
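If you want to steal that trick, the snatpool itself is a one-liner; something like this (tmsh syntax -- the 10.10.20.5 translation address is a hypothetical spare IP on the outbound subnet, not from my config):

  # A dedicated snatpool so client traffic gets its own translation
  # address instead of sharing the self IP with monitor traffic.
  # 10.10.20.5 is a hypothetical spare address on the egress subnet.
  create ltm snatpool default_auto members add { 10.10.20.5 }

With that in place, a tcpdump filtered on the snatpool address shows only translated client traffic.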