Forum Discussion

jgranieri_42214
Nimbostratus
Feb 09, 2012

Priority Group Activation (Preemption question)

I have a working config for priority group activation on a two-node pool.

Due to the nature of my application, I don't want the higher-priority pool member to receive traffic when it comes back online. In essence, I want to disable preemption and keep traffic on the backup (lower-priority) pool member until we manually force it back.

I am assuming this will need to be done via an iRule; I don't see any settings in the GUI that would set this up.

I guess I need an iRule that sets the priority on the primary member lower than the backup's when it goes down, so that it doesn't take traffic back when it is restored.

Using persistence would NOT work for our application, because users from the same company need to stay on the same pool member and could come from different source IPs.

Any help would be appreciated.

3 Replies

  • It sounds like Manual Resume might be the right option for you.

    Setting the Manual Resume attribute

    By default, when a monitor detects that a resource (that is, a node or a pool member) is unavailable, the BIG-IP system marks the resource as down and routes traffic to the next appropriate resource as dictated by the active load balancing method. When the monitor next determines that the resource is available again, the BIG-IP system marks the resource as up and immediately considers the resource to be available for load balancing connection requests. While this process is appropriate for most resources, there are situations where you want to manually designate a resource as available, rather than allow the BIG-IP system to do that automatically. You can manually designate a resource as available by configuring the Manual Resume attribute of the monitor.

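    In case it helps, here is roughly how that could be set up from the command line. This is only a sketch using tmsh; the monitor and pool names (app_tcp_monitor, app_pool) are placeholders, and the exact options can vary by software version:

     # create a TCP monitor with Manual Resume enabled, then assign it to the
     # pool (in the GUI this is the monitor's Manual Resume setting under the
     # advanced configuration)
     tmsh create ltm monitor tcp app_tcp_monitor manual-resume enabled
     tmsh modify ltm pool app_pool monitor app_tcp_monitor

     # after the primary member recovers, it stays marked down until an
     # administrator manually re-enables it, so traffic will not fail back on
     # its own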

  • That did it, thanks. Still learning all the little features on F5. Moving off old CSS/Foundry load balancers!
  • This is just another example using an iRule. Please feel free to revise.

    [root@ve1023:Active] config  b virtual bar list
    virtual bar {
       snat automap
       destination 172.28.19.79:22
       ip protocol 6
       rules myrule
    }
    [root@ve1023:Active] config  b pool foo1 list
    pool foo1 {
       monitor all tcp
       members 200.200.200.101:22 {}
    }
    [root@ve1023:Active] config  b pool foo2 list
    pool foo2 {
       monitor all tcp
       members 200.200.200.102:22 {}
    }
    [root@ve1023:Active] config  b rule myrule list
    rule myrule {
       when RULE_INIT {
          # foo1 is the preferred pool, foo2 is the backup
          set static::pool1 foo1
          set static::pool2 foo2
       }

       when CLIENT_ACCEPTED {
          set vs "[IP::local_addr]:[TCP::local_port]"

          # seed the shared session-table entry with the preferred pool
          if {[table lookup current_pool] eq ""} {
             table set current_pool $static::pool1 indef indef
          }

          # switch pools only when the currently selected pool has no active
          # members, so traffic does not move back to foo1 automatically when
          # it comes back up
          if {[active_members [table lookup current_pool]] < 1} {
             if {[table lookup current_pool] eq $static::pool1} {
                set new $static::pool2
             } elseif {[active_members $static::pool1] > 0} {
                set new $static::pool1
             } else {
                reject
                return
             }
             table set current_pool $new 0 0
          }
          pool [table lookup current_pool]
       }

       when SERVER_CONNECTED {
          log local0. "[IP::client_addr]:[TCP::client_port] -> $vs -> [IP::remote_addr]:[TCP::remote_port]"
       }
    }
    
    [root@ve1023:Active] config  cat /var/log/ltm
    Feb  9 09:40:24 local/tmm info tmm[4822]: Rule myrule : 192.168.204.8:62065 -> 172.28.19.79:22 -> 200.200.200.101:22
    Feb  9 09:40:50 local/ve1023 notice mcpd[3746]: 01070638:5: Pool member 200.200.200.101:22 monitor status down.
    Feb  9 09:40:50 local/tmm err tmm[4822]: 01010028:3: No members available for pool foo1
    Feb  9 09:40:59 local/tmm info tmm[4822]: Rule myrule : 172.28.19.253:58019 -> 172.28.19.79:22 -> 200.200.200.102:22
    Feb  9 09:41:21 local/tmm err tmm[4822]: 01010221:3: Pool foo1 now has available members
    Feb  9 09:41:24 local/ve1023 notice mcpd[3746]: 01070727:5: Pool member 200.200.200.101:22 monitor status up.
    Feb  9 09:41:46 local/tmm info tmm[4822]: Rule myrule : 172.28.19.251:50606 -> 172.28.19.79:22 -> 200.200.200.102:22
    
    The last log line shows that a new connection was still sent to pool foo2 even though pool foo1 was back up.
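
    When you do want to move traffic back to foo1, one option (a sketch only, not something tested in this thread) is to clear the shared session-table entry so the rule above re-seeds it with foo1 on the next connection, for example with a small iRule attached to a separate, access-restricted virtual server:

     # sketch only: deleting the shared "current_pool" entry makes the rule
     # above fall back to $static::pool1 (foo1) on the next client connection
     when CLIENT_ACCEPTED {
        table delete current_pool
        log local0. "current_pool entry cleared by [IP::client_addr]"
        reject
     }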