Forum Discussion

arokh_137577
Nov 21, 2013

Problem with session persistence using CARP when load balancing a McAfee Web Gateway cluster that uses a progress page for downloads

We have a cluster of 14 McAfee Web Gateways and about 15,000 users connecting to them from a few dozen Citrix farms. Previously we used source-address persistence, which works fine until one of the pool members is taken offline and then brought back online: all clients get load balanced to the other available pool members, and the member that was offline receives no traffic after that.

 

Enter hash persistence using CARP. The idea is simple: take something like the Host header, hash it, and load balance using the CARP algorithm. This also works great, except when downloading files. McAfee Web Gateway works like this: it downloads the file and scans it for malware before delivering it to the client, and meanwhile displays a progress page to the client.
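At its core this is just a one-line persist call in an iRule; a minimal sketch of the idea (my full rule is further down in this thread):

    when HTTP_REQUEST {
      # CARP-hash the Host header so all requests for a given site
      # land on the same Web Gateway
      persist carp [HTTP::host]
    }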

 

The problem is that with hash persistence the progress page quite often shows an error. This is because I get load balanced to a different pool member than the one showing me the progress page.

 

I really would like to use hash persistence, but I'm not sure there is a proper workaround for this. Any suggestions?

 

What are you guys doing for persistence to web caches?

 

5 Replies

  • Hi arokh, based on what you wrote, you are doing exactly what is done in the field when deploying transparent caches.

     

    You need to configure the hash on the CARP persistence profile. What kind of hash did you select? Are you using an iRule and taking the client IP as the hash input, for example?
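    For example, a client-IP based CARP rule is just a couple of lines; a minimal sketch:

    when HTTP_REQUEST {
      # CARP-hash the client IP so each source address consistently
      # maps to the same cache as long as the pool membership is stable
      persist carp [IP::client_addr]
    }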

     

  • Yes, I am using HTTP::host as the hash input. Here's my iRule:

    when HTTP_REQUEST {
      # Set to 1 to enable debug logging
      set debug 0
      set host [HTTP::host]
      set client [IP::client_addr]

      if { $host ne "" } {
        # CARP-hash the Host header when one is present
        if { $debug == 1 } { log local0. "Persisting on host $host" }
        persist carp $host
      } else {
        # No Host header: fall back to CARP-hashing the client IP
        if { $debug == 1 } { log local0. "Persisting on client $client, User-Agent is: [HTTP::header value User-Agent]" }
        persist carp $client
      }
    }
    

    As long as there's a Host header it will hash that; otherwise it will hash the source IP. It works fine when I test it on a separate virtual server, but once I try it out in production it doesn't work reliably. The HTTP::host for the progress page is the same as for the download page.

    CARP persistence should not keep any records, right? Could it be that it somehow remembers the previous persistence even after I switch to my CARP hash profile?
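    To rule that out, I could log the actual load-balancing decision per request. A rough sketch, assuming the standard LB_SELECTED event and LB::server commands are available on this version:

    when HTTP_REQUEST {
      set host [HTTP::host]
    }
    when LB_SELECTED {
      # Log which pool member was picked for this request
      log local0. "Host $host from [IP::client_addr] -> [LB::server addr]:[LB::server port]"
    }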

  • Hi Arokh, are you also using OneConnect? Without OneConnect, the hash can give a different result for every HTTP request, and the server-side connection will move from one server to another without waiting for the final HTTP response. Just thinking out loud here.

     

  • No, I am not using OneConnect. How come the hash gives a different result without it? I thought OneConnect was just for re-using TCP connections; shouldn't the hash be the same as long as HTTP::host is the same?

     

  • If your HTTP proxies are explicit (not transparent), the same client connection can carry multiple HTTP requests without waiting for their respective responses (HTTP/1.1 pipelining). In that case, without OneConnect, when CARP selects another pool member the BIG-IP disconnects from the pool member handling the existing connection and connects to the new one. With OneConnect, the new pool member receives a new connection, and the existing connection with the "1st" pool member is kept.
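    One way to see this on the wire is to log every server-side connection the BIG-IP opens. Just a troubleshooting sketch, assuming the standard SERVER_CONNECTED event and LB::server commands are available on your version; if it logs a new connection in the middle of a download, you are hitting exactly this case:

    when SERVER_CONNECTED {
      # Fires each time the BIG-IP opens a new server-side connection;
      # without OneConnect, a CARP re-hash mid-stream shows up here as
      # a brand new connection to a different pool member
      log local0. "New serverside connection to [LB::server addr]:[LB::server port]"
    }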

     

    I haven't checked whether the behavior is still the same since version 10.

     

    HTH