Forum Discussion

MahendraRaj_131
Sep 11, 2013

DNS Persistence in GTM

Hi

 

We are planning to set up an Active/Active data center (DC) pair for our website. All inbound traffic to our website comes from Akamai edge servers. Our requirement is to load balance this inbound traffic across the two DCs (50-50% load sharing). We also want DNS persistence.

 

The problem I see with this solution is that Akamai has N edge servers. A single user session is load balanced within the Akamai cloud and may reach our web servers through a different edge server each time. In that case, even if I enable DNS persistence on the GTM, I don't think it will work, because the source will change every time and that will break the user session.

 

Has anyone deployed a similar kind of solution to tackle this DNS persistence issue? If so, could you please advise me on how to achieve this?

 

Thanks

 

We are planning to place a GTM in our DC to load balance the inbound traffic from Akamai to one of our DCs.

 

11 Replies

  • The GTMs have an option for persistence in the pool load balancing. When an LDNS requests an IP address, it gets the same IP each time for as long as the persistence TTL is set to last. This is a drop-down item in the GUI (a rough command-line sketch follows at the end of this reply).

     

    As far as Akamai cloud persistence goes, the GTM replies to LDNS requests. So if a user session coming from the Akamai cloud uses the same LDNS, it will get persistence to the DC specified through the GTM's persistence setting, as mentioned above.

     

    If every edge server does its own LDNS request and a user session switches edge servers mid-session, you could attempt a topology-based solution, provided you know that the edge servers for a session will come from a specific IP range. If the LDNS source IPs are going to be random or too many to enumerate, you'll need to coordinate with Akamai.
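
    For reference, on a v11-era GTM the persistence drop-down mentioned above corresponds roughly to the wideip settings below. This is a hedged sketch, assuming a wideip named www.example.com and a one-hour TTL; neither value comes from this thread, and exact property names can vary by software version.

        # Hedged sketch - wideip name and TTL are placeholders, not from this thread.
        # Enable LDNS persistence on the wideip and set how long an answer sticks.
        tmsh modify gtm wideip www.example.com \
            persistence enabled \
            ttl-persistence 3600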

     

  • Thanks for your quick response Jason...

     

    Yes, we have the list of IP ranges of the Akamai edge servers. If I do a topology-based solution, for example...

     

    Akamai IP Range 1: 10.10.0.0/16
    Akamai IP Range 2: 10.20.0.0/16

     

    Let's say the above two ranges are from Akamai. Based on topology load balancing, Range 1 goes to DC1 and Range 2 goes to DC2 in the ideal scenario. What will happen if DC1 fails? Will the GTM redirect traffic for Range 1 to DC2 as well?

     

    Thanks

     

    • Jason_40733
      Let's say Range1 goes to Pool1 (an IP address for DC1) and Range2 goes to Pool2 (an IP address for DC2). If you define Pool1 to have the DC1 IP address (or VIP, if using an LTM) as the primary member and the DC2 IP address (or VIP...) as the next member, and set Pool1's load balancing to "Global Availability", you should get what you want. All incoming LDNS requests from Range1 would receive the DC1 IP address. If the DC1 IP address were down, Global Availability load balancing would return the DC2 IP address instead.

      You'd do the same for Pool2, except Pool2 would return the DC2 address and fall back to the DC1 address if DC2 were unavailable. (A rough tmsh sketch of the two pools follows below.)

      Please note: we use LTMs to provide a VIP (i.e. IP and port availability) to supply status to our GTMs. If you're not using an LTM to send the DC1 and DC2 status up to the GTMs, test and re-test the monitoring solution that determines whether DC1/DC2 is available. Also note that the TTL on the returned DNS answer can affect failover times on top of your monitoring intervals.
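
      A rough tmsh sketch of what's described above, assuming a v11-era GTM where each DC is represented by a GTM server object with a virtual server. All object names are placeholders, and the pool-member syntax can vary by software version, so treat this as a starting point rather than a definitive config.

          # Hedged sketch - server/virtual-server names are placeholders.
          # Pool1: prefer DC1, fall back to DC2 only if DC1 is marked down.
          tmsh create gtm pool pool_dc1 \
              load-balancing-mode global-availability \
              members { server_dc1:vs_www { order 0 } server_dc2:vs_www { order 1 } }

          # Pool2: prefer DC2, fall back to DC1.
          tmsh create gtm pool pool_dc2 \
              load-balancing-mode global-availability \
              members { server_dc2:vs_www { order 0 } server_dc1:vs_www { order 1 } }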
  • Sorry, I am a little confused here!

     

    We can choose either topology-based or Global Availability load balancing, right? Or can we do both at the same time? Please clarify.

     

    Thanks Mahendra

     

    • Jason_40733
      You will do both at the same time. You'll create two pools for the wideip and use topology-based routing to select which pool. Within each pool you'll use Global Availability, which selects between the DC1 and DC2 IP address(es).

      In this flow, we'll assume the request comes from Range1 and should select Pool1, which points to DC1 (or DC2 if DC1 is down): Range1 request to GTM -> topology selects Pool1 -> based on Global Availability, Pool1 always returns the DC1 IP address (Pool1 returns the DC2 IP address ONLY if the DC1 IP address shows down from some monitor). A rough sketch of the wideip and topology records follows below.
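
      Continuing the hedged tmsh sketch from above, this is roughly how a wideip and topology records could tie the two pools together. The wideip name is a placeholder, the subnets are just the example ranges from this thread, and exact syntax varies by version.

          # Hedged sketch - wideip name is a placeholder; ranges are the examples above.
          tmsh create gtm wideip www.example.com \
              pool-lb-mode topology \
              pools { pool_dc1 { order 0 } pool_dc2 { order 1 } }

          # Map each Akamai LDNS range to its preferred pool.
          tmsh create gtm topology ldns: subnet 10.10.0.0/16 server: pool pool_dc1 score 100
          tmsh create gtm topology ldns: subnet 10.20.0.0/16 server: pool pool_dc2 score 100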
  • Mahendra, though I've not done this with Akamai specifically, we did a targeted setup for a few ISPs that did not seem to get updated in the GeoLocation DB. Specifying a given range for a specific ISP and then designating the data center worked fine. That said, if that data center had to come down for maintenance, yes, the GTMs, set to Topology, would then fail over to the other, but with Global Availability as the alternate.

     

    The only odd thing was when BGP would flap on occasion. We had to choose whether to let it flap some customers back and forth a bit during those times or extend the timeout before Site A was recognized as down. But to answer your question: choosing Topology as primary with Global Availability as the alternate does allow failover. It works well.

     

  • Thanks all for your input. It helps me a lot to understand this now.

     

    However, I was also searching DevCentral for the same topic, and I have another question based on a hint I found there: can we achieve this persistence using an iRule in some way?

     

    Sorry for asking too many questions. I just want to go to management and our Solution Architect team with at least two options, or the best option, to achieve this.

     

    Please help me on this. Thanks Mahendra

     

    • Jason_40733
      I'd go with one of the two solutions in Stephan's reply. Accomplishing this with an iRule is more problematic and subject to more issues than using the built-in options Stephan enumerated. Plus, both options Stephan listed accomplish what you seem to be looking for.
  • I used to work with a customer facing a similar requirement. Indeed, it can always happen that, due to different name resolution, a client will be directed to the 'wrong' data center.

     

    That's why these folks implemented site-specific cookies, allowing redirects to site-specific hostnames.

     

    As a result, an LTM in DC1 would notice a client request with a cookie belonging to DC2. In that case it redirected the client to a hostname associated with a virtual server in DC2 only.

     

    In my example above I forgot to discuss a fallback for the case where there is no match for the topology records.

     

  • Hi Stephan

     

    Sorry, I didn't quite get what you are saying. Would you be able to give me an example?

     

    Thanks

     

  • As I understand it, you want to make sure the same client is always directed to the same site for recurring requests. This is required for session consistency at the application level, e.g. to maintain a shopping basket, because your sites do not synchronize this information in real time.

     

    If for some reason (e.g. the client switches to a DNS resolver in a different region) an A record is returned that points to the other site, you may want to direct the client back to the proper site.

     

    This can be done by creating site-specific hostnames basket1.domain.bit and basket2.domain.bit alongside www.domain.bit.

     

    If the client was initially handled by site 1, the load balancer or application server sets a site-specific cookie. With each new request the client will send this cookie, which indicates which site actually holds its shopping basket.

     

    Now the LTM can redirect the client to basket1.domain.bit or basket2.domain.bit if it gets a request with a cookie belonging to the other site. A rough iRule sketch of this idea is below.
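
    A minimal iRule sketch of that cookie check, as it might run on the DC1 LTM. The cookie name "app_site", its values "dc1"/"dc2", and the basket2.domain.bit hostname are assumptions for illustration; the real names would come from your application.

        # Hedged sketch for the DC1 LTM - cookie name/values and hostname are placeholders.
        when HTTP_REQUEST {
            if { [HTTP::cookie exists "app_site"] } {
                if { [HTTP::cookie "app_site"] eq "dc2" } {
                    # The client's session lives in DC2 - send it to the DC2-only hostname.
                    HTTP::redirect "https://basket2.domain.bit[HTTP::uri]"
                }
                set need_cookie 0
            } else {
                set need_cookie 1
            }
        }
        when HTTP_RESPONSE {
            # First contact with an untagged client: mark it as belonging to this site.
            if { $need_cookie } {
                HTTP::cookie insert name "app_site" value "dc1" path "/"
            }
        }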