Forum Discussion

Nirmal_67412
Nimbostratus
Feb 26, 2010

Configuring stickyness when using multiple pools with one virtual server

Hello experts,

I am relatively a newbie (I used F5 back in version 3.x and a bit in 4.x). I am trying to set up a DR environment with just one VIP.

Here is my setup:

 

=================

1. A pool of servers with (say) 3 nodes

2. 3 such pools at 3 different geographies

3. Each node across the 3 pools has a unique token / identifier

4. All configured through a single virtual IP / server

 

 

Requirement:

============

1. When a connection comes into the virtual IP, it can be routed to any of the 3 pools

2. Once a connection has been routed to a pool, subsequent connections should always go to the same pool

3. If a node accepts a connection, it should continue to accept subsequent connections

4. If that node goes down, other peer nodes in the same pool should pick up the connection

5. If the whole pool goes down, then it can fail over to the other pools

 

 

Here is how I was trying to achieve this:

=========================================

Under the virtual server, I defined a default pool called "DefaultPool".

DefaultPool contains *all* nodes as members (namely the nodes identified by 101, 102, 103, 201, 202, 203, 301, 302, 303).

The other pools used will be:

Pool1 contains the member nodes identified by 101, 102, 103

Pool2 contains the member nodes identified by 201, 202, 203

Pool3 contains the member nodes identified by 301, 302, 303

The iRule used at the virtual server level is:

 

 

    when HTTP_REQUEST {
        if { [HTTP::header exists "token"] } {
            set key [HTTP::header "token"]
            if { $key equals "101" || $key equals "102" || $key equals "103" } {
                if { $key equals "101" } {
                    pool pool1_1
                } elseif { $key equals "102" } {
                    pool pool1_2
                } elseif { $key equals "103" } {
                    pool pool1_3
                }
            } elseif { $key equals "201" || $key equals "202" || $key equals "203" } {
                if { $key equals "201" } {
                    pool pool2_1
                } elseif { $key equals "202" } {
                    pool pool2_2
                } elseif { $key equals "203" } {
                    pool pool2_3
                }
            } elseif { $key equals "301" || $key equals "302" || $key equals "303" } {
                if { $key equals "301" } {
                    pool pool3_1
                } elseif { $key equals "302" } {
                    pool pool3_2
                } elseif { $key equals "303" } {
                    pool pool3_3
                }
            } else {
                pool DefaultPool
            }
        } else {
            pool DefaultPool
        }
    }

 

 

Please let me know if this is the right approach, or whether there is a better way. Unfortunately, I don't have much time and am looking for help or pointers in the right direction ASAP.

Thanks in advance,

Nirmal R.

5 Replies

  • Hi Nirmal,

    I think you are on the right track based on your explanation, and thank you for including your iRule - it does help us understand your thoughts and approach.

    Here is an iRule I quickly worked up based on your description.

    I am assuming that the token information is provided to the client once the initial connection has been made through DefaultPool.

     
      when HTTP_REQUEST {
          if { [HTTP::header exists "token"] } {
              set key [HTTP::header "token"]
              switch $key {
                  101 { pool pool1_1 }
                  102 { pool pool1_2 }
                  103 { pool pool1_3 }
                  201 { pool pool2_1 }
                  202 { pool pool2_2 }
                  203 { pool pool2_3 }
                  301 { pool pool3_1 }
                  302 { pool pool3_2 }
                  303 { pool pool3_3 }
                  default { pool DefaultPool }
              }
          } else {
              pool DefaultPool
          }
      }

      when LB_FAILED {
          if { [HTTP::header exists "token"] } {
              set key [HTTP::header "token"]
              switch $key {
                  101 -
                  102 -
                  103 {
                      if { [active_members pool1] > 0 } {
                          LB::reselect pool pool1
                      }
                  }
                  201 -
                  202 -
                  203 {
                      if { [active_members pool2] > 0 } {
                          LB::reselect pool pool2
                      }
                  }
                  301 -
                  302 -
                  303 {
                      if { [active_members pool3] > 0 } {
                          LB::reselect pool pool3
                      }
                  }
                  default {
                      LB::reselect pool DefaultPool
                  }
              }
          } else {
              LB::reselect pool DefaultPool
          }
      }

    This assumes:

    - you have 9 pools, each containing a single node

    - you have 3 pools, each containing a group of 3 nodes

    - you have 1 pool containing all 9 nodes

    How the iRule works (at least based on my understanding of your requirements):

    The initial request will not contain the "token" header, so it will be forwarded to DefaultPool (containing all 9 nodes). Once a node is selected, the client's "token" header will be created/updated by that node. Subsequent requests will then be forwarded to the poolx_x containing the node that issued the token. If that node fails, the request will go to the one of the 3 group pools whose nodes match the token in the header. Once a new node is selected, the client's token header will be updated so that subsequent requests again go to that same node. If the entire pool is down, the request will go to DefaultPool, where the whole process restarts. If the client request loses its token in the course of a node failure, it will likewise be sent to DefaultPool to restart the process.

    Note: this has not been tested, so I can't guarantee what the results will be - expect to make adjustments.

    I hope this helps,

    Bhattman
  • Thanks for validating the iRule and optimizing it for me. Most of what you explained is in sync with what I said as well. There is one thing to add - the token does not get set in the initial response. As part of the initialization process there are a few handshakes that happen. In my case the initial handshake goes through Tomcat, so I was thinking of using the JSESSIONID, but to keep it simple I switched to using persistence by source address for about 3 minutes (180 sec as the config setting). Do you think this will help?

    Thanks in advance,

    Nirmal R.
  • Hi Nirmal,

    I think going down the JSESSIONID route is the better choice, only because it will stay in sync with the application. You can then reduce or even eliminate the iRule, because persistence will always follow the JSESSIONID and ensure clients stay on the same node.

    Bhattman
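    As a rough, untested sketch of what that could look like: this is the common universal-persistence pattern for JSESSIONID (it assumes a Universal persistence profile referencing the iRule is attached to the virtual server, and that the cookie is literally named "JSESSIONID" - adjust to your Tomcat setup):

      when HTTP_REQUEST {
          # If the client already presents a JSESSIONID cookie, persist on it
          if { [HTTP::cookie exists "JSESSIONID"] } {
              persist uie [HTTP::cookie "JSESSIONID"]
          }
      }

      when HTTP_RESPONSE {
          # Record the JSESSIONID the server hands out so later requests match it
          if { [HTTP::cookie exists "JSESSIONID"] } {
              persist add uie [HTTP::cookie "JSESSIONID"]
          }
      }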
  • I think you misunderstood the flow - initially, during the handshake, the JSESSIONID could be used; this goes to a certain type of server. Once the initial handshake is done, traffic should go to the counterpart server that uses tokens, as described above. I want to ensure that the connection continues to go to the server node where the initial handshake happened until a token is acquired, and then continues to go to the same node as long as the token is available and valid and the server is up.

    Hope this helps,

    Nirmal R.
  • Hi Nirmal,

    In that case, yes, source-address persistence would again be your next option. I would also think about setting the persistence timeout to match the connection/idle timeouts of the servers in the pool - or a bit less. I.e., if the servers' idle timeout is 300s, then you would want to set persistence to 290s - basically, you either want to time out exactly with the server OR you want to time out a few seconds before it (which is my preference).

    I hope that helps,

    Bhattman
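    For what it's worth, on versions with tmsh a custom source-address persistence profile along these lines could capture that (the profile name, virtual server name, and the 290s value are all illustrative, and this is untested):

      # Create a source-address persistence profile with a 290s timeout
      create ltm persistence source-addr token_src_persist { timeout 290 }
      # Attach it to the virtual server as its default persistence profile
      modify ltm virtual my_virtual { persist replace-all-with { token_src_persist { default yes } } }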