Forum Discussion

Michael_Harwoo1
Nimbostratus
Sep 25, 2015

Only send connections to server2 if connection limit reached on server1

I have 4 HTTP web caching servers in a pool, and we want to try to increase the cache hit rate (the likelihood that the server has already cached the requested page) by focusing all connections on one server until it reaches a high number of connections, at which point new connections should be shared with the other servers in the pool instead.

E.g. I want to send all connections to server1 only, until server1 reaches a connection limit I have set (say 500, TBC). Then, only while server1 is at its configured limit, I want new connections to go to server2. Server2 will have a limit set as well, say 500 again, and if server1 and server2 have both reached their limits, new connections will go to server3, then server4 if needed. Perhaps server4 could have no limit set, so that the pool as a whole never reaches its limit, as it would be better to respond slowly if needed than to stop responding completely.

Could this be done with a combination of Priority Group Activation and connection limits? E.g. Priority Group Activation = Less than 1 available member:

* Server1 = connection limit 500 + priority group 4
* Server2 = connection limit 500 + priority group 3
* Server3 = connection limit 500 + priority group 2
* Server4 = connection limit 0 + priority group 1

If so, great, but if there is a better way to do it, with an iRule perhaps, let me know.
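Here is a minimal sketch of that pool in bigip.conf terms, assuming placeholder member addresses (10.0.0.1-4), an example pool name, and the limits above; min-active-members 1 corresponds to the "Less than 1" Priority Group Activation setting, and connection-limit 0 means no limit. Exact member sub-properties vary slightly by TMOS version, so treat this as illustrative rather than definitive.

```
# Sketch only: pool name and member addresses are placeholders.
ltm pool cache_pool {
    # Activate the next priority group when fewer than 1 member
    # of the current highest group is available ("Less than 1")
    min-active-members 1
    members {
        10.0.0.1:80 {
            priority-group 4
            connection-limit 500
        }
        10.0.0.2:80 {
            priority-group 3
            connection-limit 500
        }
        10.0.0.3:80 {
            priority-group 2
            connection-limit 500
        }
        10.0.0.4:80 {
            # lowest priority, no limit so the pool never goes fully unavailable
            priority-group 1
            connection-limit 0
        }
    }
}
```

With this layout, traffic stays on the highest-numbered priority group until its only member hits its connection limit, at which point the next group starts taking new connections; once the busy member drops back under its limit, it takes over again.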

Thanks in advance.

2 Replies

  • I am stepping outside my area of expertise and hope to learn from this discussion. My understanding is that the goal of having caching servers in a pool is to allow balancing across the pool while using some method (such as destination-address persistence) to ensure that requests for a single object are always sent to the same caching server; a small persistence sketch is included after these replies. I'm sure there are other/more granular ways to control which resource requests go to which caching servers. Are you already employing some sort of logic to 'persist' requests for the same resource to the same caching server, and having problems with one cache server being 'overloaded'?
  • Yes, I believe you could use priority activation for this: when a pool member reaches its connection limit it becomes "unavailable" (it will show a yellow triangle), which means the next priority group can be activated if that member was the only one in the higher priority group.

    One quick question: if you are trying to tune for caching content, have you looked at using the caching built into LTM, i.e. Web Acceleration profiles? That could save your servers some work; a quick sketch follows the link below.

    https://support.f5.com/kb/en-us/solutions/public/14000/900/sol14903.html
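Regarding the Web Acceleration suggestion, here is a minimal sketch of enabling LTM's built-in caching, assuming hypothetical profile (wa_cache) and virtual server (vs_web_cache) names; the default parent profile is normally called webacceleration, but check the linked article for the options relevant to your version.

```
# Sketch only: profile and virtual server names are hypothetical.
# Create a Web Acceleration (caching) profile from the built-in parent.
tmsh create ltm profile web-acceleration wa_cache defaults-from webacceleration

# Attach it to the virtual server (an HTTP profile must also be assigned).
tmsh modify ltm virtual vs_web_cache profiles add { wa_cache }
```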
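And on the destination-address persistence mentioned in the first reply, a minimal sketch, again with hypothetical names. Note that this persists on the destination address of the request, so whether it is appropriate depends on whether the caches sit behind a single virtual server (where a hash or universal persistence iRule keyed on the URI may be a better fit) or act as transparent forward proxies.

```
# Sketch only: profile and virtual server names are hypothetical.
# Create a destination-address affinity persistence profile.
tmsh create ltm persistence dest-addr cache_dst_persist defaults-from dest_addr

# Apply it as the default persistence profile on the virtual server.
tmsh modify ltm virtual vs_web_cache persist replace-all-with { cache_dst_persist }
```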