Thanks, but I'm confused by setting the variable to zero in RULE_INIT. Isn't that a per-connection event, so every new connection (each execution of the iRule) would just reset it to zero, killing any state value (0 or 1) carried between client connections?
Or does RULE_INIT only fire the very first time the VS comes up from a down state and takes its first connection?
I thought it fired on every new connection to an iRule-enabled VS.
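In other words, with something like this (the variable name is just an example):

```tcl
when RULE_INIT {
    # Is this re-run on every new connection, wiping out whatever
    # the connection-level events stored, or only at config load?
    set static::toggle 0
}
```

That's the part I'm unsure about: whether the `set` above runs once when the iRule is loaded, or on every connection.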
We are evaluating two different paths. One is to use GTM and balance across our two LTMs/pools (each pool with an LTM VS at its head); however, we may have unrelated challenges using GTM, so we are looking at an LTM-only solution as plan B. Both LTMs are in the same data center. We are using GTM to manage code risk across two banks of multi-layer app servers, with no crosstalk once traffic enters a bank.
One bank is end-of-life legacy physical hardware that still takes the majority of the site load. The second bank is all VM guest servers, but is currently only built to handle 15% of the load. We are slowly building out the VM cluster and incrementally decommissioning the physical cluster. The goal is to remove all dependency on the legacy pools, VSs, servers, etc., so we can gradually modulate traffic over to the new pool and eventually just shut off/delete the legacy objects.
So, given the disparity in resources between the two, I would probably have to start with something really safe, like: every 500 connections, send 1 to the new pool, and then calibrate from there for our peak spikes.
So the global counter would incr up to, say, 500 (each local connection contributing to the incr), incrementing each time before the connection is pooled to the larger pool; once it hits 500, it gets reset to 0 and that connection is pooled to the new pool instead.
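Just to put my mental model in code, a minimal sketch of that counter idea (the pool names `legacy_pool` and `new_pool` and the 500 threshold are placeholders, and I'm assuming `static::` variables really do persist across connections):

```tcl
when RULE_INIT {
    # Assumed to run once at iRule load/modify, not per connection
    set static::conn_count 0
}

when CLIENT_ACCEPTED {
    if { $static::conn_count >= 500 } {
        # Roughly every 500th connection goes to the new (smaller) bank
        set static::conn_count 0
        pool new_pool
    } else {
        incr static::conn_count
        pool legacy_pool
    }
}
```

Later we would just tune the threshold (500) down as the VM bank is built out, until the legacy pool can be retired.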
Sorry if I'm being redundant; I just wanted to be clear.
Thanks for your help!