Forum Discussion

smraj77_152133
Oct 14, 2015

No persistence to pool members

Hi Guys, we have one pool (A) with 4 members and another pool (B) with 2 members, both using least-connections load balancing. An iRule and a source-address persistence profile (7-hour timeout) are assigned to the VS. In the iRule's CLIENT_ACCEPTED event, connections from a particular IP (X) are sent to pool B and all other connections are forwarded to pool A. That part works correctly. Our app uses sessions, so the same source IP must always connect to the same pool member. However, source IP X is connecting to both members of pool B. The persistence records show the persistence mode is source address affinity, yet the same source IP persists to 2 members. What is the root cause, and how do we resolve this?

 

Thanks in Advance!

 

4 Replies

  • Hello,

     

    To give you some ideas to further troubleshoot your issue, please provide the following:

     

    1. F5 version and hotfix in use

       

    2. Text-format configurations for the VS, pools, iRule, and persistence profile. Replace any real public DNS names and external IP addresses with dummy ones

       

  • Thanks Hannes Rapp for troubleshooting this issue. Details below; I have replaced the real names and IPs with dummy ones.

     

    1. BIG-IP 10.2.4 Build 817.0 Hotfix HF7

    2. virtual VS_test {
          snatpool snat_test
          pool pool_test1
          destination 10.x.x.x:http
          ip protocol tcp
          rules iRule_test
          persist test_SRCIP
          profiles {
             http_capps {}
             tcp-lan-optimized { serverside }
             tcp-wan-optimized { clientside }
          }
          vlans VLAN_x enable
       }

     

    pool pool_test1 {
       lb method member least conn
       action on svcdown reselect
       monitor all http
       members {
          10.x.x.6:http {}
          10.x.x.7:http {}
          10.x.x.8:http {}
          10.x.x.9:http {}
       }
    }

     

    pool pool_test2 {
       lb method member least conn
       monitor all http
       members {
          10.x.x.10:http {}
          10.x.x.11:http {}
       }
    }

     

    iRule_test:

    when CLIENT_ACCEPTED {
       if { [IP::addr [IP::client_addr] equals 10.x.x.x] } {
          if { [active_members pool_test2] < 1 } {
             pool pool_test1
          } elseif { [active_members pool_test2] > 0 } {
             pool pool_test2
          }
       } else {
          pool pool_test1
       }
    }

     

    test_SRCIP:
       Persistence Type: Source Address Affinity
       Parent Profile: source_addr
       Timeout: 25200 seconds
       Map Proxies: Enabled

     

  • Hello,

     

    One leading question I would ask: what happens if one member in pool_test1 becomes unavailable (marked down by its monitor) and then available again before the TCP idle timeout hits (i.e. a short-term "flapping" event)? The same question for pool_test2: what happens there?

     

    In the case of pool_test1, you have defined "action on svcdown reselect", which instructs the F5 to remap the serverside connection stream of any active connections to another available pool member. In the case of pool_test2, that is not so. I've explained in more detail what happens with persistence and active connections when a pool member becomes available again in this Q&A thread: https://devcentral.f5.com/questions?pid=42097answer125656

     

    What to do about your issue?

     

    - If persistent sessions are required, you should remove the "action on svcdown reselect" setting from the pool configuration and replace it with Reject (see the sketch after this list).

     

    - If persistent sessions are not required, "action on svcdown reselect" is a good option.
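
     

    For illustration, here is a sketch of pool_test1 with the reselect action swapped out, using the same bigpipe-style syntax as your posted config. One assumption to verify on your 10.2.4 build: the GUI label "Reject" typically appears as "reset" in the configuration file.

    pool pool_test1 {
       lb method member least conn
       action on svcdown reset
       monitor all http
       members {
          10.x.x.6:http {}
          10.x.x.7:http {}
          10.x.x.8:http {}
          10.x.x.9:http {}
       }
    }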

     

    Relevant: if you have mixed requirements (i.e. persistence required for pool_test1 but not for pool_test2), then you should improve your iRule with the "persist none" command, applying it whenever you select the pool where persistent connections are not required (see the sketch below).
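
     

    A minimal iRule sketch for that mixed-requirements case, reusing the names from the configs above (an illustration, not a tested drop-in):

    when CLIENT_ACCEPTED {
       if { ( [IP::addr [IP::client_addr] equals 10.x.x.x] ) && ( [active_members pool_test2] > 0 ) } {
          # pool_test2 does not need stickiness: disable persistence for this connection
          persist none
          pool pool_test2
       } else {
          # pool_test1 keeps the source-address persistence assigned on the VS
          pool pool_test1
       }
    }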

     

    Note: the configuration change will not take effect for any established connections, only for new ones. Since you're using terribly long-lived connections, you will also have to remove any existing persistence records and delete the clientside connections bound to pool_test1 and pool_test2. You can do both in TMSH during a maintenance window; a sketch follows.
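
     

    For reference, on more recent TMOS versions (11.x and later) the cleanup commands look roughly like the following. On 10.2.4 the bigpipe equivalents differ, so verify the exact syntax for your build before running anything:

    # delete existing source-address persistence records for the affected pools
    tmsh delete ltm persistence persist-records pool pool_test1
    tmsh delete ltm persistence persist-records pool pool_test2
    # drop established clientside connections from the affected client IP
    tmsh delete sys connection cs-client-addr 10.x.x.x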

     

    --

     

    Persistence method tip:

     

    Since your service is using the HTTP protocol, please consider using cookie-based persistence instead of source IP persistence.
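
     

    A hedged sketch in newer tmsh syntax (the profile name cookie_test is made up for the example; adapt the commands to your version):

    # create an insert-mode cookie persistence profile
    tmsh create ltm persistence cookie cookie_test defaults-from cookie method insert
    # attach it to the virtual server in place of the source-address profile
    tmsh modify ltm virtual VS_test persist replace-all-with { cookie_test }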

     

    iRule tip:

     

    Your current iRule has unnecessarily nested IF-ELSEIF clauses. The code below does the same with less.

     

    when CLIENT_ACCEPTED {
      if { ( [IP::addr [IP::client_addr] equals 10.x.x.x] ) && ( [active_members pool_test2] > 0 ) } {
        pool pool_test2
      } else { 
        pool pool_test1
      }
    }
    
  • Once again, thanks Hannes for your tips and suggested actions. Based on our investigation there was no evidence showing that a node went down or came back up; but to answer your question: I think BIG-IP deletes the persistence records.

     

    Currently one member in the test2 pool is disabled as an interim resolution. I will consider your suggestions and escalate the pool configuration change / cookie-based persistence as well as the iRule enhancement, and will post an update about the issue. It will take some time, though, since we don't have a test environment to simulate and verify the fix.