Forum Discussion

GaryZ_31658
Historic F5 Account
Jul 14, 2008

VIP target VIP

All,

I have a scenario where I want to load balance "pods" of servers using a master VIP. I could have many nodes in each pool and do not want to disable each individual node when we take a "pod" offline. It would be far simpler to disable the pool or the VIP as a whole rather than each individual node.

iRule logic: In the iRule below, we round-robin connections to a specific VIP (one defined for each pod). Once at the pool, we set a unique cookie and persist back to the pod as required (each VIP inserts a cookie "my_cookiePodx").

Question: Can I monitor the pool (or VIP) in the iRule for availability, so that if a pod is offline the master VIP won't send traffic to the disabled pod virtual? I was thinking of using when LB_FAILED, but the docs suggest failure detection takes between 9 and 45 seconds. My thought is that if the pod VIP is offline, LTM would send a reset and the browser would simply retry. That seems faster, but it also seems a little dirty.

when RULE_INIT {
    # global counter used to round-robin new clients across the pod VIPs
    set ::cnt 0
}

when HTTP_REQUEST {
    # look for an existing pod cookie (each pod VIP inserts "my_cookiePodx")
    set reqcookie [findstr [HTTP::cookie names] "my_cookie"]

    if { $reqcookie starts_with "my_cookie" } {
        # returning client: look up the pod VIP for this cookie in the datagroup
        set podcookie [findstr $reqcookie "my_cookie" 9 " "]
        set podvip [findclass "$podcookie" $::pod_vip " "]
        virtual $podvip
    } else {
        # new client: round-robin across the three pod VIPs
        incr ::cnt
        if { $::cnt > 3 } {
            # wrap back to the first pod once the counter passes the pod count
            set ::cnt 1
        }
        switch $::cnt {
            1 { virtual pod1 }
            2 { virtual pod2 }
            3 { virtual pod3 }
        }
    }
}
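
Something along these lines is what I am hoping is possible: check the pod's pool before targeting its virtual. This is only a rough sketch; pod1_pool and default_pool are made-up names, not objects from my config.

when HTTP_REQUEST {
    # rough sketch only: pod1_pool and default_pool are placeholder names
    if { [active_members pod1_pool] > 0 } {
        # pod1 has at least one available member, hand the connection to it
        virtual pod1
    } else {
        # pod1 is offline or disabled, send traffic somewhere else instead
        pool default_pool
    }
}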

13 Replies

  • Deb_Allen_18
    Historic F5 Account
    It's worth mentioning that the [LB::server pool] command does NOT return the configured default pool. It returns the last selected pool, which means this code:

    } else {
        # if original pool is down, use the default pool..
        # User will lose session, but will get page.
        HTTP::cookie remove "pod"
        pool [LB::server pool]
    will still return the previously selected pool (not the default) if it is a subsequent request on a keepalive connection. It would be better to specify the pool name for cases like this.
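
    For example, a rough sketch with the fallback named explicitly (pod1_pool and my_default_pool are placeholders):

    when HTTP_REQUEST {
        if { [active_members pod1_pool] < 1 } {
            # the pod's pool is down: drop the pod cookie and use an explicitly
            # named fallback pool instead of relying on [LB::server pool]
            HTTP::cookie remove "pod"
            pool my_default_pool
        } else {
            pool pod1_pool
        }
    }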

     

     

    HTH

     

    /deb
  • brice
    Nimbostratus

    Posted By deb on 07/22/2008 12:11 PM

    It's worth mentioning that the [LB::server pool] command does NOT return the configured default pool. It returns the last selected pool... It would be better to specify the pool name for cases like this.

    /deb

    Thank you for that. I was thinking it would return the default pool.

     

     

    --brice
  • brice
    Nimbostratus

    Posted By deb on 07/22/2008 12:07 PM

     

    This may be a simpler & more efficient approach, esp. if the default pool idea is working well and you'd like to keep supporting it:

     

    If you use two cookies, one for pod choice and one for LTM persistence, you can let LTM's built-in persistence options do the heavy lifting on persistence management, and use your existing rule logic to set the pod cookie. To enforce persistence across multiple pools, apply a persistence profile with a custom cookie name. That way the system will look for, or insert, the same cookie name on both the default pool and the pod-specific pools...

     

    That way you'd avoid the overhead of having to parse both values from the same cookie, and of building & managing the persistence cookie.

     

    HTH

     

    /deb
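
    A rough sketch of the two-cookie split described above (the "pod_choice" cookie name and the $chosen_pod variable are placeholders, not names from the rule in the original post):

    when HTTP_REQUEST {
        # placeholder: however the pod is picked (cookie lookup or round robin),
        # remember the choice so the response side can set the pod cookie
        set chosen_pod "pod1"
    }

    when HTTP_RESPONSE {
        # the iRule handles only the pod-choice cookie; member-level stickiness
        # is left to a cookie persistence profile with its own custom cookie name
        if { ![HTTP::cookie exists "pod_choice"] } {
            HTTP::cookie insert name "pod_choice" value $chosen_pod
        }
    }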

     

     

    So we actually don't want member persistence, but rather pod/pool persistence. All pool members in a pod are session aware, and we would like to make sure we load balance across these servers as equally as possible. We are not using any persistence profile on the VIP; all persistence is handled in the iRule above via the "pod" cookie. With that said, do you see any pitfalls or trouble with that design? Thanks again, in advance...

     

     

    --brice