Forum Discussion

nitass
Mar 17, 2011

active/active LTM deployment on different subnet

Hi there,

 

 

I am thinking about an active/active LTM deployment in which each LTM is located in a different data center and uses a different subnet - that is, without using GTM at all.

 

 

For example

data center A - subnet A
data center B - subnet B

 

 

I think I could create VIPs on subnets C and D and then configure the upstream routers to route traffic for subnets C and D to the appropriate data center (e.g. via dynamic routing, static routes, etc.). If one of the data centers fails, the upstream routers would be notified and move traffic accordingly.
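For illustration, here is a minimal sketch of how that could look on the LTM in data center A, assuming route health injection is used so the VIP's host route is only advertised while the VIP is up. All names, addresses, and AS numbers are hypothetical, and data center B would mirror this on its own subnet.

    # tmsh, on the LTM in data center A (treating 192.0.2.0/24 as "subnet C" purely as an example)
    create ltm pool pool_app_dcA members add { 10.1.1.10:80 10.1.1.11:80 } monitor http
    create ltm virtual vs_app_dcA destination 192.0.2.10:80 ip-protocol tcp pool pool_app_dcA

    # advertise 192.0.2.10/32 into the routing table only while the virtual address is available
    modify ltm virtual-address 192.0.2.10 route-advertisement enabled

    # the upstream router can then learn the route statically (static route for subnet C toward
    # data center A) or dynamically, e.g. via the BIG-IP's ZebOS routing module (imish):
    # "router bgp 65001" + "redistribute kernel", so the host route disappears when the VIP
    # goes down and traffic can be steered to the other data center.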

 

 

Is this possible? What do you think?

 

Has anyone used this configuration?

 

 

Thanks a lot!

 

Nitass

 

16 Replies

  • Hi Nitass,

    I believe what you are asking is possible. I lab-tested something similar several years ago, wanting to avoid using a GTM or similar service, and found that it increased complexity across the board and was not fully automated - some manual manipulation was required.

     

     

    My lab setup was the following:

    Datacenter A
      Circuit A
      VIP A (Active)
      VIP B (Dormant)
      Servers A

    Datacenter B
      Circuit B
      VIP B (Active)
      VIP A (Dormant)
      Server B

    I ran BGP peering between Datacenter A and B (which used two different ISPs). I created VIP A and VIP B in Datacenter A and the same in Datacenter B; however, only the VIPs being advertised through their respective ISPs ever responded. I then created route pools that custom health checked IP addresses on the upstream routers (checking for latency and ICMP). Since the ADCs in both data centers were connected to switch routers (Cisco 6509s), I created secondary addresses on the subnets (VIP A and B).
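    As a rough illustration only (not the actual lab configuration), one way such a "route pool" with a custom upstream health check can be expressed in tmsh - the names, addresses, and timer values here are made up:

        # ICMP gateway monitor against an address on the upstream router (illustrative values)
        create ltm monitor gateway-icmp mon_upstream_icmp interval 5 timeout 16
        # pool of upstream next hops; port "any" because these are gateways, not services
        create ltm pool pool_upstream_gw members add { 198.51.100.1:any } monitor mon_upstream_icmp
        # send outbound/default traffic through the monitored gateway pool
        create net route default_via_isp network 0.0.0.0/0 pool pool_upstream_gw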

     

     

    This worked well in the lab; however, I found one large issue. Because BGP was being used, failover could take anywhere between 30 seconds and 5 minutes depending on your ISP's defaults. I even looked into beefing up BGP with PfR (Performance Routing), but that required the ISP to support it, and not all ISPs offer this type of configuration. I also looked at lowering the BGP timers - again, the ISP had to support it on their end, which is highly unlikely when you deviate from the known standards.
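    To make the timer point concrete (a sketch only, in the Cisco-like syntax that the BIG-IP's ZebOS routing shell, imish, also uses; the AS numbers, neighbor address, and values are illustrative): with the common defaults of a 60-second keepalive and a 180-second hold time, a silently failed peer can take up to three minutes just to be declared down, before any route withdrawal even propagates.

        router bgp 65001
         neighbor 203.0.113.1 remote-as 64500
         ! request keepalive 10s / hold time 30s; the session uses the lower hold time of the two
         ! peers, and many ISPs will not agree to aggressive values
         neighbor 203.0.113.1 timers 10 30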

     

     

    In conclusion, I decided against this approach because it simply could not guarantee consistent failover within a reasonable amount of time, on top of the complexity of maintaining the configuration. Thus we decided that the GTM would provide the best solution overall.

     

     

     

    I hope this helps.

    Bhattman

  • That's the catch: even if you use a flavor of peering, there is a lag in failover - even if it's internal and you can control it. Still not the right way to do it.
  • Hi Nitass,

    I am stuck in the same scenario you were in at that time. The only difference is that we have a separate load balancer pair for the second DC.

    The client has some sites in DC-1 and some applications hosted in DC-2, so we can call this an active/active data center setup (from an LB perspective). For GTM, we decided to use the listener IPs (in both data centers) in the public DNS for the CNAME/delegation entries. For example:

    Listener IP DC-1 : 10.10.10.1
    Listener IP DC-2 : 10.20.20.1
    Application-1 : www.xyz.com (at DC-2)
    Application-2 : www.123.com (at DC-1)

     

    We'll make that CNAME/delegation record entry on the public DNS with both DCs' listener IPs for both applications, i.e.:

    www.xyz.com -> 10.10.10.1, 10.20.20.1
    www.123.com -> 10.10.10.1, 10.20.20.1

     

    We then set the member order/ratio for each application at the respective DCs accordingly in the GSLB wide IP members:

    -> 0 for application-1 at DC-2
    -> 0 for application-2 at DC-1

    This assumes the public DNS is performing round-robin load balancing only.

     

    Highlighting an important point: the VIP IP pool and the server IP pool would be different at these data centers.
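    For what it's worth, a minimal tmsh sketch of that idea (delegation to the listeners plus a wide IP that prefers the hosting DC). Every name below is hypothetical, and the exact syntax varies by version (newer releases require the record type, e.g. "gtm wideip a"):

        # on the public DNS, delegate a GSLB subdomain to the two listeners (zone-file style, shown as comments)
        #   gslb.example.com.   IN NS     gtm-dc1.example.com.   ; gtm-dc1 -> 10.10.10.1
        #   gslb.example.com.   IN NS     gtm-dc2.example.com.   ; gtm-dc2 -> 10.20.20.1
        #   www.xyz.com.        IN CNAME  www.xyz.gslb.example.com.

        # on the GTMs: a pool per application whose first-ordered member is the hosting DC
        # (vs_xyz_dr is a standby VIP in DC-1, if one exists)
        create gtm pool pool_xyz members add { dc2-bigip:vs_xyz { member-order 0 } dc1-bigip:vs_xyz_dr { member-order 1 } }
        modify gtm pool pool_xyz load-balancing-mode global-availability
        create gtm wideip www.xyz.gslb.example.com pools add { pool_xyz }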

     

    Could you please share the approach you ended up adopting, or else suggest another effective solution?

    • nitass

      Sorry for the delay, I have been a bit busy lately. I do not remember what it ended up being 😅 - it has been a while. Let me dig it up and I will let you know.