Forum Discussion

Narendra_26827 (Nimbostratus)
Jul 22, 2011

Persist connection to new node when LB_FAILED event occurs

Hi,

We are load balancing across members of several pools using a hash-and-modulo scheme: we hash a value from each request, take it modulo the number of pool members, and the remainder decides which node the request is directed to.
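
Conceptually, the selection works like the minimal sketch below (the pool name and event wiring here are illustrative; the actual rule is posted later in this thread):

    when HTTP_REQUEST {
        # Hash a request value, then map it onto the sorted member list by modulo.
        set hash [crc32 [HTTP::header objectId]]
        set idx [expr {$hash % [llength [members -list my_pool]]}]
        set m [lindex [lsort [members -list my_pool]] $idx]
        pool my_pool member [lindex $m 0] [lindex $m 1]
    }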

If one of the nodes goes down, health monitors detect the failure and we mark the same node down in the other pools as well. The request is then shifted to a new node picked in the LB_FAILED event.

We want to avoid the following condition: when the previously down node comes back up, all requests shift back to it from the new node (per the hash and modulo logic), breaking the existing client connections a second time.

Can this condition be avoided by using cookie persistence, or in some other way? Any help or suggestions would be appreciated.

Thanks.

4 Replies

  • Cookie persistence ties the client to a specified server in the pool.

    If for some reason that server becomes unavailable and a re-select of a new node is executed, a new cookie will be issued, persisting the client to the new node for the remainder of the client session regardless of the status of the old node: the client will persist to the node specified in the current persistence cookie.

    The same type of logic applies when Priority Activation Groups are used with some sort of persistence. A rough iRule sketch of cookie persistence follows below.
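
    For what it's worth, cookie insert persistence can also be applied from an iRule rather than only from a profile; a minimal sketch, assuming a cookie persistence profile is permitted on the virtual server (the cookie name and the "0" session-lifetime timeout are illustrative, not from this thread):

    when HTTP_REQUEST {
        # Issue/refresh a persistence cookie for the member chosen by load
        # balancing; after a re-select the cookie is re-issued for the newly
        # chosen member, so the client stays there even if the old node returns.
        persist cookie insert "my_persist_cookie" "0"
    }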
  • Would the cookie persistence profile take precedence over the iRule? Would the iRule's modulo logic then be ignored in that scenario?

    Thanks.

  • I am not familiar with what you are calling "iRule modulo logic" (normally, iRule-directed persistence will override the default cookie persistence profile). Can you post a copy of the iRule that you are referring to, or a link to where you got it? I would like to take a look so that I have a better understanding of what you have versus what you need.

  • I have the following iRule, which calculates the CRC32 of the objectId present in the HTTP header and takes it modulo the number of members in the pool. With this logic, if one server goes down, requests are retried to another server via the LB_FAILED event; but if the previously down server comes back up, all connections are re-established (a second time) from the new member back to the old one. Can this be avoided by some kind of persistence? (One possible approach is sketched at the end of this post.)

    when HTTP_REQUEST {
        set uri [string tolower [HTTP::uri]]

        if { $uri contains "/api/gateway" or $uri contains "/api/channel" or $uri contains "/api/space" } {
            # Hash the objectId header, then map it onto a pool index by modulo.
            set orgid [crc32 [HTTP::header objectId]]
            set key [expr {$orgid % [llength [members -list default_pool]]}]

            # Sort each member list so the index maps to a stable member.
            set default_member [lsort [members -list default_pool]]
            set channel_member [lsort [members -list channel-pool]]
            set gateway_member [lsort [members -list gateway-pool]]
            set space_member [lsort [members -list space-pool]]

            log "CRC32 value $orgid"
            log "[llength [members -list default_pool]]"
            log "Mod value $key"

            switch -glob $uri {
                "/api/channel*" {
                    set m [lindex $channel_member $key]
                    log "[lindex $m 0] [lindex $m 1]"
                    pool channel-pool member [lindex $m 0] [lindex $m 1]
                }
                "/api/space*" {
                    set m [lindex $space_member $key]
                    log "[lindex $m 0] [lindex $m 1]"
                    pool space-pool member [lindex $m 0] [lindex $m 1]
                }
                "/api/gateway*" {
                    set m [lindex $gateway_member $key]
                    log "[lindex $m 0] [lindex $m 1]"
                    pool gateway-pool member [lindex $m 0] [lindex $m 1]
                }
                default {
                    set m [lindex $default_member $key]
                    log "[lindex $m 0] [lindex $m 1]"
                    pool default_pool member [lindex $m 0] [lindex $m 1]
                }
            }
        }
    }

    when LB_FAILED {
        set uri [string tolower [HTTP::uri]]

        if { [HTTP::header exists "objectId"] } {
            # Recompute the index against only the active members, then
            # reselect so the retried request lands on a healthy node.
            set orgid_new [crc32 [HTTP::header objectId]]
            set newkey [expr {$orgid_new % [active_members default_pool]}]

            set default_member_new [lsort [active_members -list default_pool]]
            set channel_member_new [lsort [active_members -list channel-pool]]
            set gateway_member_new [lsort [active_members -list gateway-pool]]
            set space_member_new [lsort [active_members -list space-pool]]

            switch -glob $uri {
                "/api/channel*" {
                    set m [lindex $channel_member_new $newkey]
                    log "[lindex $m 0] [lindex $m 1]"
                    LB::reselect pool channel-pool member [lindex $m 0] [lindex $m 1]
                }
                "/api/space*" {
                    set m [lindex $space_member_new $newkey]
                    log "[lindex $m 0] [lindex $m 1]"
                    LB::reselect pool space-pool member [lindex $m 0] [lindex $m 1]
                }
                "/api/gateway*" {
                    set m [lindex $gateway_member_new $newkey]
                    log "[lindex $m 0] [lindex $m 1]"
                    LB::reselect pool gateway-pool member [lindex $m 0] [lindex $m 1]
                }
                default {
                    set m [lindex $default_member_new $newkey]
                    log "[lindex $m 0] [lindex $m 1]"
                    LB::reselect pool default_pool member [lindex $m 0] [lindex $m 1]
                }
            }
        }
    }
    
     

    Thanks.
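
    One way to get the stickiness asked about above is to layer universal (UIE) persistence, keyed on the same objectId, over the hash logic: once a key has been bound to a member, the persistence record wins over the recomputed modulo, so traffic does not move back when the old node returns. A rough sketch, assuming a universal persistence profile is attached to the virtual server (the 1800-second timeout is illustrative):

    when HTTP_REQUEST {
        if { [HTTP::header exists "objectId"] } {
            set pkey [HTTP::header objectId]
            # If a persistence record already exists for this key, honor it
            # instead of recomputing the hash/modulo selection.
            if { [persist lookup uie $pkey] ne "" } {
                persist uie $pkey 1800
                return
            }
            # ... existing hash/modulo pool selection from the rule above ...
            # Then bind the key to whichever member was just selected:
            persist uie $pkey 1800
        }
    }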