Forum Discussion

darragh_19954
Nimbostratus
Dec 13, 2007

Saving pools to persistence table when it's not the LB default

I'm trying to persist cookie-disabled users to a particular pool using their IP/User-Agent combination, saved as a universal key in the persistence table.

In the BIG-IP GUI, I've set up poolA as the default pool with universal as the default persistence profile.

When a new user arrives there will be no entry in the persistence table, so we randomly sample between pool A and pool B and save the outcome in the persistence table against their IP/UA key. When a subsequent request comes from that user, we check the persistence table and send them to the pool previously sampled.

What I'm finding is that if I use pool A as the default for the load balancer, and a request gets directed to B, then some of the subsequent requests (for page elements like stylesheets etc.) don't find an entry in the persistence table, so they get re-sampled. The result is that the HTML comes from pool A and the stylesheet comes from pool B.

The code is below. I don't know what I'm doing wrong, but it looks like there is some delay between adding an entry for the non-default pool to the persistence table and being able to query it. Any thoughts on this would be much appreciated.

when HTTP_REQUEST {
    # Build the key from client IP + User-Agent, with spaces stripped
    set SessionID [string map {" " ""} "[IP::client_addr][HTTP::header User-Agent]"]

    # Check for an existing universal persistence record
    set persistence [session lookup uie $SessionID]

    if { $persistence != "" } {
        # Known client: reuse the pool stored in the persistence table
        set PoolVal $persistence
        set message "Persistence on $persistence"
    } else {
        # New client: sample 50/50 between the two pools
        if { rand() < 0.5 } {
            set PoolVal "poolA"
        } else {
            set PoolVal "poolB"
        }
        set message "Pool sampled as $PoolVal"
    }

    log local0. "$PoolVal,$message,[HTTP::path]"
    pool $PoolVal
}

when HTTP_RESPONSE {
    # Only add a record for clients that didn't already have one
    if { $persistence == "" } {
        session add uie $SessionID $PoolVal 60
    }
}

5 Replies

  • I'd expect that the first request should be for a document. That document would reference other content like images and style sheets. So the client should make a single request where the persistence info is saved. The client shouldn't actually make any further requests until they get the response.

    What do you see in the logs when a failure occurs?

    Aaron
  • Once the LB has selected pool A, the document is returned from that pool. The client does wait until this is returned, and most of the subsequent requests for images, CSS and JS files are honoured correctly (an entry is found in the persistence table and the request is directed to the pool that served the parent document).

    However, a few of the requests fail to find a persistence entry and get re-sampled, so some end up going to pool B. That pool doesn't know anything about the make-up of the document (it's a different version of the site), so it actually returns 404s to the client.

    The client definitely waits because often the 2nd and 3rd request will succeed as expected. But then the 4th might fail and find nothing. And the 5th will work ... a mystery. I sometimes find that the persistence table ends up with 2 entries for the same key but different pool values. Shouldn't it only store 1 value per key?
  • I'm not sure how there would ever be two persistence records with different values for the same client (as defined by the source IP and user-agent string). I could see that the timeout of 60 seconds might be too short and a client could make a request more than a minute after the previous one. That would mean the previous persistence record had been removed, and the subsequent request could be sent to the other pool.
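
    If the 60-second idle timeout turns out to be the problem, it's a one-line change in the add; the 1800 below is only an example value, not something from the original rule:

    session add uie $SessionID $PoolVal 1800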

    Can you reproduce this consistently enough to capture a tcpdump of the failure? If so, you might consider opening a support case on the issue. It would probably help to add logging to the rule for the client IP address, user-agent string, the persistence record that's found, and the selected pool and pool member (see the sketch further down). The output from 'b persist all show all' might help as well. You can write the persistence records out to a file every 5 seconds using a while loop on the CLI:

    interval=5; output=/var/tmp/b.persist.out; while true; do echo -e "\n======================`date +%Y%m%d-%H%M`======================\n" >> $output; b persist all show all >> $output; sleep $interval; done &

    Make sure to get it on a single line. The script will run in the background. It will write out the output from ‘b persist all show all’ every 5 seconds (or whatever number you set interval to). The output is written to /var/tmp/b.persist.out (or whatever you set output to).

    The & at the end of the line sets the command to run in the background, so you can log out of your SSH session and it won’t affect the running of the command.

    To stop the command from running, log back into the BIG-IP via SSH and run ‘kill %1’ (without the quotes); be careful to type exactly kill %1 so you don't kill anything else.
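
    For the extra logging mentioned above, something along these lines would do it. This is just a sketch using the variables from the posted rule; the LB_SELECTED event is used for the pool member because the selected member isn't known until after load balancing has run:

    when HTTP_REQUEST {
        # ... existing lookup/sampling code ...
        log local0. "client=[IP::client_addr] ua=[HTTP::header User-Agent] persist=$persistence pool=$PoolVal path=[HTTP::path]"
    }
    when LB_SELECTED {
        log local0. "selected member [LB::server addr]:[LB::server port] in pool [LB::server pool]"
    }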

    Aaron
  • Thanks for all the help.

    I finally got it working. We're running in an environment with many virtual servers and many pools, so it turns out I need to save the entry in the persistence table against a particular pool. Keying the add and the lookup to the same pool is what makes it work.

    So to add an entry, I did the following:

    session add uie [list $SessionID pool poolA] $PoolVal 60

    To lookup an entry, I did the following:

    session lookup uie [list $SessionID pool poolA]
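
    Putting it together, a sketch of the full rule with pool-scoped records might look like the following. This assumes, as above, that both the add and the lookup key the record against poolA; everything else is unchanged from the rule posted at the top:

    when HTTP_REQUEST {
        set SessionID [string map {" " ""} "[IP::client_addr][HTTP::header User-Agent]"]
        # Look the record up against the specific pool instead of relying on the default
        set persistence [session lookup uie [list $SessionID pool poolA]]
        if { $persistence != "" } {
            set PoolVal $persistence
        } else {
            if { rand() < 0.5 } { set PoolVal "poolA" } else { set PoolVal "poolB" }
        }
        pool $PoolVal
    }
    when HTTP_RESPONSE {
        # Store the record against the same pool used for the lookup
        if { $persistence == "" } {
            session add uie [list $SessionID pool poolA] $PoolVal 60
        }
    }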

  • Do you have match across virtuals and/or pools enabled? I would have thought you would only see a conflict if these options were enabled on the persistence profile.
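
    (For reference: on later TMOS versions that include tmsh, you could check and change those options on the profile with something like the commands below. The profile name my_universal is just a placeholder for whatever your universal persistence profile is called.)

    # Show the match-across settings on the profile
    tmsh list ltm persistence universal my_universal match-across-pools match-across-virtuals
    # Turn them off if they aren't needed
    tmsh modify ltm persistence universal my_universal match-across-pools disabled match-across-virtuals disabled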

    Aaron