Forum Discussion

Kevin_Conaway_5
Nimbostratus
Aug 04, 2009

Priority Group Activation not working as expected

I have 2 nodes in a pool that I would like to configure in an active -> failover configuration, i.e. all traffic is directed at the active node until it is down, and then all traffic is directed to the failover node.

I tried setting this up by modifying the priority values of my nodes to be 1 and 2, but the traffic keeps getting round-robined between them. I set the priority group activation (min active members) to 1.

Here is my pool configuration from the BigIP console:

pool show:

   
 POOL PoolName  LB METHOD round robin  MIN/CUR ACTIVE MEMBERS: 1/2   
 |        conns (cur, max, limit, tot) = (0, 2, 0, 18)   
 |        (pkts,bits) in = (181, 963160), out = (144, 95712)   
 +-> POOL MEMBER PoolName/10.10.10.126:8006  ACTIVE,UP   SESSIONS ENABLED   
 |   |        priority 1    ratio 1    dynamic ratio 1   
 |   |        conns (cur, max, limit, tot) = (0, 1, 0, 9)   
 |   |        (pkts,bits) in = (91, 481792), out = (72, 50208)   
 |   |        requests (total) = 9   
 +-> POOL MEMBER PoolName/10.10.10.127:8006  ACTIVE,UP   SESSIONS ENABLED   
 |        priority 2    ratio 1    dynamic ratio 1   
 |        conns (cur, max, limit, tot) = (0, 1, 0, 9)   
 |        (pkts,bits) in = (90, 481368), out = (72, 45504)   
 |        requests (total) = 9   

pool list:

   
 pool PoolName {
    min active members 1
    monitor all gateway_icmp
    member 10.10.10.126:8006
    member 10.10.10.127:8006 priority 2
 }

Version: BIG-IP 9.0.4 Build 118.5
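
To spell out what I'm expecting: with min active members at 1, all traffic should go to the highest priority member (.127, priority 2) while it is up, and only go to the priority 1 member (.126) when .127 is down. In other words, a pool along these lines, with the priority on .126 written out explicitly (the show output above reports it as 1) rather than left at the default:

    pool PoolName {
       min active members 1
       monitor all gateway_icmp
       member 10.10.10.126:8006 priority 1
       member 10.10.10.127:8006 priority 2
    }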

8 Replies

  • That's one of the first few releases in the v9 train; it may be worth upgrading to at least the latest maintenance release of 9.1. That said, do you want the failover node to demote after the primary node is back, or does it matter? If it doesn't, you can set single node persistence with a simple iRule and no priority:

     
     when CLIENT_ACCEPTED {
       # every client persists on the same universal key, so all traffic
       # sticks to one member until that member goes down
       persist uie 1
     }
     

  • If the failover node becomes the primary, I would like it to stay as the "primary" until it fails.
  • Thanks.

    In regard to my original posting, is there something I'm missing? I don't understand why priority group activation is not working with that configuration.
  • Might just be a bug with priority in your version of code; the config is simple and works just fine on my version (10.0.1):

     [root@golgotha:Active] config b pool priority_testPool
     POOL priority_testPool LB METHOD round robin MIN/CUR ACTIVE MEMBERS 1/2
     | (cur, max, limit, tot) = (0, 1, 0, 5)
     | (pkts,bits) in = (35, 38960), out = (30, 25040)
     +-> POOL MEMBER priority_testPool/10.10.20.1:http active,up
     | | session enabled priority 10 ratio 1
     | | (cur, max, limit, tot) = (0, 1, 0, 5)
     | | (pkts,bits) in = (35, 38960), out = (30, 25040)
     | | requests (total) = 5
     +-> POOL MEMBER priority_testPool/10.10.20.247:http active,up
     | session enabled priority 5 ratio 1
     | (cur, max, limit, tot) = (0, 0, 0, 0)
     | (pkts,bits) in = (0, 0), out = (0, 0)
     | requests (total) = 0

  • Hello Citizen Elah,

    I have a question in regard to your post and was wondering if you could assist me.

    I have a pool with 4 members, each with their own priority activation value:

    server 1 has a value of 4
    server 2 has a value of 3
    server 3 has a value of 2
    server 4 has a value of 1

    When server 1 fails, it fails over all connections to server 2; when server 2 fails, it fails all connections to server 3, etc. This works as expected.

    My question is: if I'm on server 4 and a member with a higher priority activation value comes back up, I stay on server 4 until I close the session. Is there a way to automatically demote the member once the primary one comes back up?

    For example, if I'm on server 4 (priority activation value of 1) and server 1 comes back up, or any member with a higher value, how can I have it automatically move me to that server?
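
    For reference, the pool I'm describing looks roughly like this in the bigpipe syntax posted earlier in the thread (the pool name, addresses, and port are made up for illustration):

       pool app_pool {
          min active members 1
          monitor all gateway_icmp
          member 10.10.10.1:80 priority 4
          member 10.10.10.2:80 priority 3
          member 10.10.10.3:80 priority 2
          member 10.10.10.4:80 priority 1
       }

    With min active members at 1, the priority 4 member takes all traffic while it is up; when it fails, the priority 3 member takes over, and so on down the list.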

  • New non-persistent connections will fall back to the higher priority servers. If you want to move persistent non-active connections, you'll need to force the pool member offline. To move the active connections, you'll need to drop the pool member by assigning a monitor that will fail the node, at which point your session should return to an active higher priority pool member.
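
    To make that a bit more concrete, the member-level state change can be driven from the GUI or the CLI; a rough bigpipe sketch is below, where the exact spelling is from memory and may differ on your version, so check the built-in bigpipe help. Keep in mind that disabling a member only keeps new, non-persistent connections off of it; forcing it offline, or failing its monitor as described above, is what moves the persistent and active connections:

       b pool <pool_name> member <member_ip>:<port> session disable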
  • Thanks for your response.

    "To move the active connections, you'll need to drop the pool member by assigning a monitor that will fail the node, at which point your session should return to an active higher priority pool member."

    Is this handled by an iRule?