Forum Discussion

Ossar_178453
Nov 26, 2014

Config sync between two L3 separated active/standby pairs

Hello,

 

I am trying to create a dual-DC solution where I have two pairs of LTMs in active/standby configuration. Internally, each pair of course shares the same L2 network. The pairs are L3-separated; there is no stretching of L2 between sites.

 

We currently use a fairly complex FW solution which acts kind of like a GTM but works inline and NATs the traffic. So external requests are NATed by the FWs and sent towards the correct LTM pair. The failover is done automagically in the FW, so a failure to respond from the "active" LTM pair will cause the FW to send the traffic to the LTM pair in the other site.

 

What I need is a simple way of syncing the two inter-site LTM pairs' VIPs, self IPs, pools, nodes, iRules, certificates etc. Sounds simple, but the two clusters are L3-separated, so at least the VIPs, self IPs and routes differ, as the two sites have different IP space assigned to them.

 

I realise I would need some kind of mapping scheme for VIPs, self IPs etc., but I want to remove the need to manually configure two LTM pairs each time a change to a service is needed.

 

The only thing I can think of is some kind of config sync hook that extracts the config on one pair, modifies it according to the IP mapping scheme and pushes it to the other pair whenever a change is made. Seems very ugly and not very portable to future versions.
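Just to illustrate what I mean by that hook (everything below is a made-up sketch, not something we run): dump the relevant objects on pair A with something like tmsh list, run the text through an address map, and merge the result onto pair B with something like tmsh load sys config merge. The mapping step itself could be as dumb as:

    # Rewrite a dumped site-A config into site-B addressing before merging it
    # onto the other pair; the address map and file names are just examples.
    set addr_map {
        10.1.0.   10.2.0.
        10.1.10.  10.2.10.
    }
    set in  [open "siteA-ltm.conf" r]
    set out [open "siteB-ltm.conf" w]
    puts $out [string map $addr_map [read $in]]
    close $in
    close $out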

 

3 Replies

  • R_Marc

    Here's one way I think you could make it work. Put both sites' IPs on both clusters, so you'd have a self IP from each site, and then put a layered "passthru" virtual in front of each "real" virtual. That way each virtual will respond to either site's IP addresses. On the pool side, just add in each site's backend addresses. On the "passthru" virtual, do some name-based mapping using an iRule; I do this for IPv4 -> IPv6. Both a pro and a con of this approach is that naming conventions become very important if you want to make things as generic as possible. I enforce this by using an iApp to maintain consistency, which I would highly recommend.

    I put in a feature request to allow multiple VIPs on any given virtual, which I'm hoping makes it into 11.6 at some point (no ETA on that yet); that would help in your use case as well, I think.

    Here's the iRule in question for the above scenario:

        when CLIENT_ACCEPTED {
            set vname [virtual name]
            # app name: the text between the last "/" (partition prefix) and "-ipv4"
            set name [string range $vname [expr {[string last "/" $vname] + 1}] [expr {[string first "-ipv4" $vname] - 1}]]
            # service type: the text between "-ipv4-" and "-passthru"
            set type [string range $vname [expr {[string first "-ipv4" $vname] + 6}] [expr {[string first "-passthru" $vname] - 1}]]
            # hand the connection to the matching "real" IPv6 virtual
            virtual $name-ipv6-$type-virtual
        }
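    For the dual-site case in the question, the same name-based handoff could be adapted. The sketch below is only an illustration and assumes a made-up naming convention (app-passthru-virtual handing off to app-site1-virtual / app-site2-virtual) plus a per-pair setting saying which site is local:

        when CLIENT_ACCEPTED {
            # which site this pair lives in; set differently on each pair,
            # e.g. via a site-specific iRule or datagroup (made-up convention)
            set local_site "site1"
            set vname [virtual name]
            # e.g. /Common/myapp-passthru-virtual -> myapp
            set app [string range $vname [expr {[string last "/" $vname] + 1}] [expr {[string first "-passthru" $vname] - 1}]]
            # hand off to e.g. myapp-site1-virtual
            virtual $app-$local_site-virtual
        }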
    
  • You should be able to do this with a Sync-Only Device Group (DG). DG1 (sync-failover) would contain one pair, DG2 (sync-failover) the other pair, and DG3 (sync-only) would contain all four devices.

     

    You'd put all the objects you want synchronised in a dedicated administrative partition (AP) and sync just that partition, thus leaving alone anything not in that AP that you don't want changed. Of course things get a bit tricky when an object in one AP references something in another, but hopefully there's a way around that. Roughly, the setup could look like the sketch below.
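    Purely as an illustration (the device, partition and group names here are invented, and the exact tmsh syntax should be checked against your version), the layout might be built along these lines:

        # two sync-failover groups, one per site (pair A = bigip-a1/a2, pair B = bigip-b1/b2)
        create cm device-group dg-site-a devices add { bigip-a1 bigip-a2 } type sync-failover
        create cm device-group dg-site-b devices add { bigip-b1 bigip-b2 } type sync-failover
        # one sync-only group spanning all four devices
        create cm device-group dg-all-sites devices add { bigip-a1 bigip-a2 bigip-b1 bigip-b2 } type sync-only
        # a dedicated partition whose contents sync via the sync-only group
        create auth partition APP_SYNC
        modify sys folder /APP_SYNC device-group dg-all-sites traffic-group none
        # push the shared objects out
        run cm config-sync to-group dg-all-sites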

     

  • It does sync all relevant objects. Self IPs and VLANs are never sync'd anyway.

     

    Regardless, due to the way you want to split things, this won't work. If it were the virtuals you wanted to sync then it would; it just can't work the other way around, as the static objects must be in Common and the sync'd ones in another partition. Hey ho.

     

    Why don't you use the same IPs for the virtuals at both sites, using an unrouted dummy subnet that only the next-hop device at each site is aware of, or perhaps NAT?
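    As a rough sketch of that idea (the subnet, names and port below are invented), both pairs could carry an identical virtual, with the FW or next hop at each site routing or NATing to the dummy address:

        # identical on both pairs; 192.0.2.0/24 is never routed internally,
        # only the per-site FW / next hop knows how to reach it
        create ltm virtual /APP_SYNC/app1-https-virtual destination 192.0.2.10:443 ip-protocol tcp profiles add { tcp } pool /APP_SYNC/app1-pool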