Forum Discussion

Valentine_96813
Nimbostratus
Jan 30, 2012

Forcing Priority Group Usage

When I was searching through priority group posts, I found a reference someone made to an iRule that will force incoming connections back from the secondary server when the primaries come back up. Here is what I mean:

sticky = sourceIP 10 min
lb method = least conn

Pool webapp
  appserver1 priority 10
  appserver2 priority 10
  appserver3 priority 10
  SorryPage server priority 1

So when all is good, users get the appservers. When the appservers go down, they get the SorryPage server. The problem is that when the appservers come back up, users have to time out against the SorryPage server before they get the appservers again.

Since I do not care about breaking the connection to the SorryPage server, as it's static and does not require login, stickiness, etc., I want an iRule that will force connections back to the higher-priority servers immediately when the monitors mark them active.

I know I can do this with a pool redirect if I move the SorryPage server, but I don't want to rebuild everything for all the VSs. Anyone got this?

6 Replies

  • Hi Valentine,

    If you only want to use the sorry server(s) when the virtual server's default pool has no members, you can use an iRule like the one below. The VS pool will be checked on each HTTP request instead of per connection like you see with priority group activation. Update sorry_pool to the name of a second pool you create containing the sorry server(s).

    
    when CLIENT_ACCEPTED {
       # Save the name of the VS default pool
       set default_pool [LB::server pool]
    }
    when HTTP_REQUEST {
       # Check if the VS default pool has any active members
       if { [active_members $default_pool] } {
          pool $default_pool
       } else {
          pool sorry_pool
       }
    }
    

    Aaron
  • Thank you for your reply. However, I really do not want to redirect to another pool if I can help it. I would really like to see the iRule that references the priority group number and forces connectivity to the higher-priority members.
  • Is there a reason you don't want to use two separate pools? I think the above iRule is probably simpler than using priority group activation. You might want to explicitly disable persistence for the sorry_pool using persist none. If you're using persistence for the default pool, you'd also want to enable it using the persist command.

    You can't kill a different active connection from an iRule that is executing on another connection, but you can affect which server is selected for the current connection. We discussed a similar scenario in this thread:

    http://devcentral.f5.com/Community/GroupDetails/tabid/1082223/asg/50/aft/1178810/showtab/groupforums/Default.aspx

    Aaron
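
    For reference, a sketch of what that combination might look like (this assumes the 10-minute source-IP stickiness described in the original post; adjust the persist line to match your actual persistence settings):

    when CLIENT_ACCEPTED {
       # Save the name of the VS default pool
       set default_pool [LB::server pool]
    }
    when HTTP_REQUEST {
       if { [active_members $default_pool] } {
          pool $default_pool
          # Source-IP persistence; 600 s matches the 10 min stickiness in the original post
          persist source_addr 600
       } else {
          pool sorry_pool
          # No persistence needed for the static sorry page
          persist none
       }
    }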
  • Baron_of_Strath
    Historic F5 Account

    I have just created this and tested it. It is not the cleanest code, but it works reliably. Perhaps someone with better programming know-how can tweak it.

     

    when CLIENT_ACCEPTED {
       # Backup node - requires the IP address; couldn't find a variable for the lower-priority node IP
       set backup_node "172.29.2.95"
       # Interval is in milliseconds - 60 is very aggressive
       set interval 60
       # min is the minimum number of servers that must be active in the pool while allowing connection to this object
       set min 1
       set DEBUG 1

       scan [LB::select] %s%s%s%s%d command current_pool command2 current_member current_port
       eval [LB::select]

       if { $DEBUG equals 1 } { log local0. "Pool Member Selected $current_member" }

       # Start conditional - only run when connected to the backup
       if { $current_member equals $backup_node } {
          after $interval -periodic {
             if { [active_members $current_pool] > $min } {
                if { $DEBUG equals 1 } { log local0. "Resetting connection" }
                TCP::close
             } else {
                log local0. "Number of active members in $current_pool: [active_members $current_pool]"
             }
          }
       } else {
          log local0. "Sent to primary node $current_member"
       }

       # Send user to the selected member - this will ALWAYS be the one active priority group member
       pool $current_pool
    }

  • Baron_of_Strath
    Historic F5 Account

    Forgot to say that the pool consists of 2 objects and there is a reject-on-service-down action set on the pool. The use case was an HL7 app that needed to be forced to stick to the primary node, holds connections open indefinitely, and incurs great delays if the TCP stream is passively dropped due to node failure.

     

    • David_Vega_01_1
      Nimbostratus
      Baron, What version of LTM are you using? I am getting errors trying this iRule. I have similar issues with PG and need a solution to failback to primary node after recovery. Any help is appreciated. Thanks.