Forum Discussion

Chris_Phillips
Sep 28, 2010

GTM instead of LTM HA

Hi,

I'm looking at deploying a new environment with dedicated 6900 LTMs and have concerns about the single point of failure they represent in an HA config. Sure, it's "HA" and it's two machines, but it's still a single heartbeat etc., and I've seen that cause significant issues on 6400s over the previous few years.

The environment will potentially contain up to 100 web servers serving up to 500,000 live media streams, backed by a clustered caching layer and an Oracle RAC data store. All other areas scale nicely, but putting a single pair of LTMs in front of all of it worries me. As such I was thinking about using GTM to provide two pairs, or two 6900s in the same environment with each LTM instance holding connectivity to half of the servers behind it. I plan to use iControl so these web servers automatically register themselves into the appropriate pools and such, so I'm comfortable with most of the possible downsides here.
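
For what it's worth, the registration side boils down to something like the sketch below. This uses F5's bigsuds Python wrapper for the iControl SOAP API; the management address, credentials, pool name and member details are placeholders rather than our real config:

    import bigsuds

    # Placeholders only: in practice the address, credentials and member
    # details would come from the provisioning logic on each web server.
    b = bigsuds.BIGIP(hostname='ltm1.example.com',
                      username='admin',
                      password='admin')

    pool = 'media_web_pool'
    new_member = {'address': '10.0.10.21', 'port': 80}

    # LocalLB.Pool.add_member() takes parallel lists: one entry per pool,
    # and for each pool a list of member structures to add.
    b.LocalLB.Pool.add_member([pool], [[new_member]])

    # Read the membership back to confirm the new server registered.
    print(b.LocalLB.Pool.get_member([pool]))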

Any thoughts on how to improve the theoretical and practical resilience of this environment would be appreciated.

5 Replies

  • If you're worried about availability, obviously 2 pairs will be better than 1. Would these be placed in a manner that would require you to use network failover, or would you be using hard-wired (serial) failover?

    I've been running 6900s for a couple of years now and have been plenty satisfied with them. I can definitely say, though, that I have had issues with network failover, but with MAC masquerading you can get away with it. That said, you will obviously have a shared config on a single pair, making human error that much more of a risk.

    If you have the money, I'd definitely go with 2 pairs. I haven't played with 6400s, but I have played with 6800s and can say the 6900s are much more stable, so I wouldn't be too concerned on that front.
  • If there is a level of LTM HA then it would primarily be serial failover, so no issues there. Are there significant downsides to this from the GTM perspective? Is there a view from which it isn't really adding anything, and could actually be reducing resilience? It's more software, more complexity, etc., and the wide IP side of things, which I've no experience of, is surely still in some way a single point of failure.

    Thanks
  • I would definitely chat with a Sales/Systems Engineer about the GTM implications. GTM will allow you to leverage DNS to add a bit of resiliency, not to mention tolerating layer 3 failures... If you're simply using an LTM pair, you're going to be using "floating" IP addresses, which obviously have to be in the same subnet.
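
    To see what that buys you from the client side: with GTM, clients resolve a name (the wide IP) and get back the virtual server address of whichever site GTM currently considers available, rather than being tied to a single floating IP in one subnet. A rough sketch with dnspython 2.x; the listener address and wide IP name are made-up examples:

        import dns.resolver

        # Query a GTM listener directly (placeholder address).
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = ['192.0.2.53']

        # Placeholder wide IP name. GTM hands back the address of an
        # available virtual server, so a failed site simply stops being
        # returned to clients.
        answer = resolver.resolve('stream.example.com', 'A')
        for record in answer:
            print(record.address)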
  • Different subnets may be attractive. The environment is supposedly going to be entirely behind a number of single security devices, e.g. FWSM blade pairs in Cisco 6509s, which might undermine the principles of what I'm after, as I have no influence in that area; however, I would trust two 6509s more than two LTM 6900s.
  • Hamish

    Mmm... Sorry, diverging from F5 a bit here... I've used FWSMs in the past... They have their good points, but also some severe limitations depending on the configuration chosen. For example, when running in multi-context mode they don't do multicast... And the only way you can run them active/active is with multi-context enabled. Sadly the offload mode was pulled just before they released it too... And they have a pretty hard limit on bandwidth (6Gbps). And to get a Safe Harbor version you'll have to run a different version of IOS on your 6500s with FWSM cards than on your 6500s without.

    Other than that they are pretty cool... But expensive... And to my mind it's hard to justify losing a 6500 slot (or two) for them. I feel it's probably better to run the new VSXs and put 10Gbps interfaces into them...

    Having said that about cost, the cost of the FWSMs can probably be offset by buying cheaper supervisor cards (Sup32s vs Sup720s: FWSMs can't use the fabric switching in the Sup720s, so you're better off with Sup32s, and you save quite a bit of money there).

    Now back to F5... if you run 10.x on your 6900s you get multiple heartbeats (if you configure them). It's only 9.x that has the single-heartbeat problem.
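
    If you want to sanity-check which unit is active while you're testing those heartbeats, you can poll each box over iControl. A quick sketch with the bigsuds Python wrapper (management addresses and credentials are placeholders):

        import bigsuds

        # Placeholder management addresses for the two units in the pair.
        for host in ('bigip1-mgmt.example.com', 'bigip2-mgmt.example.com'):
            b = bigsuds.BIGIP(hostname=host, username='admin', password='admin')
            # System.Failover reports whether this unit is active or standby.
            state = b.System.Failover.get_failover_state()
            print('%s: %s' % (host, state))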

    1 pair or 2 pairs... Comes down to cost really...

    The other option is to run everything in a mini-cloud with VMware and use virtual LTMs (LTM VE)... You'd have to do the numbers, though, to see how the performance would go, but if you can scale sideways you might be surprised (sorry, left-field suggestion here). Especially if you use something like Cisco UCS with Nexus virtual switching etc...

    H