Forum Discussion

Chris_Miller
Altostratus
Apr 26, 2010

Best Way to do Cookie Persistence here?

I have one VIP and 5 "pools" that can service the VIP. Each pool is made up of 3 clustered servers, so pool persistence is required, but not server persistence within the pool. So, if a user hits our site, their first request should be sent via least connections load balancing to a pool. I planned on creating datagroups with the IPs of each pool and, based on that, setting a cookie. So, if the user got load balanced to pool 1, which was served by 1.1.1.1-1.1.1.3, I'd create a datagroup for that IP range and would send a cookie in the response for pool 1. When traffic returned, I'd check whether the cookie existed and send to the corresponding pool. So, 1 VIP, 1 iRule, 5 pools, 3 servers per pool...what's the best way with an iRule to do this?
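
Something along these lines is what I had in mind for requests that already carry the cookie, just as a rough sketch (the cookie name "pool_id" and the pool names are placeholders, not our real config):

    when HTTP_REQUEST {
       # If the client already has our pool cookie, keep them in that pool.
       if { [HTTP::cookie exists "pool_id"] } {
          switch -- [HTTP::cookie "pool_id"] {
             "pool1" { pool pool1 }
             "pool2" { pool pool2 }
             "pool3" { pool pool3 }
             "pool4" { pool pool4 }
             "pool5" { pool pool5 }
          }
       }
       # No cookie (or an unrecognized value) falls through to the virtual
       # server's default pool and least connections load balancing.
    }

Setting the cookie on the response based on the datagroup lookup is the part I'm less sure how to structure.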

8 Replies

  • You could separate the traffic to each pool of servers based on the content type or on the request URI:

    when HTTP_REQUEST {
       if { [string tolower [HTTP::uri]] starts_with "/content1" } {
          pool pool.for.content1
       } elseif { [string tolower [HTTP::uri]] starts_with "/content2" } {
          pool pool.for.content2
       } elseif { [string tolower [HTTP::uri]] starts_with "/content3" } {
          pool pool.for.content3
       } elseif { [string tolower [HTTP::uri]] starts_with "/content4" } {
          pool pool.for.content4
       } elseif { [string tolower [HTTP::uri]] starts_with "/content5" } {
          pool pool.for.content5
       } else {
          # Default action when none of the above content is matched:
          # most often a redirect to a different site, or to a URL that
          # would be matched by this iRule.
          HTTP::redirect http://[getfield [HTTP::host] ":" 1]/content1
       }
    }

    Once the traffic is separated into each pool, I'm not sure you would need to go any further than a custom F5 cookie persistence setting to do the rest. These are the possible settings:

    Cookie Hash: Specifies that the server provides the cookie, which the system then maps consistently to a specific node.

    HTTP Cookie Insert: Specifies that the system inserts server information, in the form of a cookie, into the header of the server response. This is the default setting.

    HTTP Cookie Passive: Specifies that the server provides the cookie, formatted with the correct server information and timeout.

    HTTP Cookie Rewrite: Specifies that the system intercepts the BIGipCookie header, sent from the server, and overwrites the name and value of that cookie.

    Hope this helps.
  • Alternatively, you can use the switch command in the iRule:

    
    when HTTP_REQUEST {
       switch -glob [string tolower [HTTP::uri]] {
         "/content1*" { pool pool.for.content1 }
         "/content2*" { pool pool.for.content2 }
         "/content3*" { pool pool.for.content3 }
         "/content4*" { pool pool.for.content4 }
         "/content5*" { pool pool.for.content5 }
         default  { HTTP::redirect http://[getfield [HTTP::host] ":" 1]/content1 }
       }
    }
    

    I hope this helps

    Bhattman
  • The content on all the "pools" is the same. I misspoke above about the first request.

    As I mentioned, we have 5 pools, each with 3 servers. We also have a pool that contains all 15 servers. The iRule should check to see whether the cookie exists, and if it doesn't, send the request via least connections to the pool containing all 15 servers. Then, the iRule should check the IP of the chosen server, determine which pool it's in based on a datagroup or something else, and set the corresponding cookie.
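
    Roughly, as a sketch only: assume all_servers_pool is the 15-member least connections pool and cluster_map is a datagroup that maps each server address to its pool name (both names are made up here):

    when HTTP_REQUEST {
       if { [HTTP::cookie exists "pool_id"] } {
          set need_cookie 0
          # Cookie present: select the matching pool as in the sketch in the
          # original question.
       } else {
          set need_cookie 1
          # No cookie yet: least connections across all 15 servers.
          pool all_servers_pool
       }
    }

    when HTTP_RESPONSE {
       # First response only: look up which pool the chosen server belongs to
       # and hand the client a cookie naming it.
       if { [info exists need_cookie] && $need_cookie } {
          set which_pool [class match -value [IP::server_addr] equals cluster_map]
          if { $which_pool ne "" } {
             HTTP::cookie insert name "pool_id" value $which_pool path "/"
          }
          set need_cookie 0
       }
    }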

  • I believe that you are trying to replace the default behavior of the F5 Persistence Cookie and compensate for Pool Member failures with an iRule. You shouldn't have to do either since those features are available through Virtual Server and Pool setup options.

    Use Default Cookie Behavior (Tracks the Pool and Pool Member selected by whatever Load Balancing Method is configured - Round Robin, Least Connections (Node), Least Connections (Member), etc):

    HTTP Cookie Insert: Specifies that the system inserts server information, in the form of a cookie, into the header of the server response. This is the default setting.

    If Pool Member Fails:

    Set Pool setting "Action On Service Down" to "Reselect".

    This should give you the behavior that you are looking for and decrease the setup complexity.
  • Posted By Michael Yates on 04/27/2010 08:14 AM

    I believe that you are trying to replace the default behavior of the F5 Persistence Cookie and compensate for Pool Member failures with an iRule. You shouldn't have to do either since those features are available through Virtual Server and Pool setup options.

    Use Default Cookie Behavior (Tracks the Pool and Pool Member selected by whatever Load Balancing Method is configured - Round Robin, Least Connections (Node), Least Connections (Member), etc):

    HTTP Cookie Insert: Specifies that the system inserts server information, in the form of a cookie, into the header of the server response. This is the default setting.

    If Pool Member Fails:

    Set Pool setting "Action On Service Down" to "Reselect".

    This should give you the behavior that you are looking for and decrease the setup complexity.

    Michael - in this case, the separate pools have shared resources - i.e., pool 1 has 3 servers but all 3 depend on a certain Java resource. I need pool persistence, but not node persistence. So, I'm not compensating for pool member failure...something else does that for us...I'm compensating for complete pool failure. For instance, if you get sent to pool 1, you need to stay in pool 1. If pool 1 doesn't have any active nodes, now it's time to find a different pool.
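
    In iRule terms, the fallback I'm after is something like this (just a sketch; the names are placeholders and all_servers_pool is the combined 15-server pool):

    when HTTP_REQUEST {
       if { [HTTP::cookie exists "pool_id"] } {
          set persisted_pool [HTTP::cookie "pool_id"]
          # Stay in the persisted pool only while it still has active members.
          if { [catch { active_members $persisted_pool } up_count] == 0 && $up_count > 0 } {
             pool $persisted_pool
             return
          }
       }
       # No cookie, an unrecognized pool name, or the whole cluster is down:
       # fall back to the combined pool and pick up a fresh cookie on the response.
       pool all_servers_pool
    }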
  • I haven't read through and fully understood the example you provided, but I can see at least one issue with the overall scenario. I don't think you'll be able to use LTM's least connections load balancing algorithm, as it sounds like you want to logically combine the connection counts for clusters of three servers.

    You could try to manually track the connections to each "cluster" of three servers yourself. If you did that, you could track which of the clusters had the lowest connection count and select that cluster to handle the request. Once a cluster had been selected you could select the server with the lowest connection count. For persistence, you could track which cluster was selected and set a cookie based on that. You'd want to skip the cluster/server selection in subsequent requests if the client presented a valid persistence cookie.

    If that seems doable, could you confirm which LTM version you're running? Also, is there any simple way you can think of to correlate the servers in a "cluster"? Can you come up with an IP addressing scheme that would make it simple to identify the servers in cluster 1 versus cluster 2, etc.? If so, and you're running 10.x, you could add all of the servers to a single pool and then use the members or active_members command to get a list of the pool members.

    You could then use an array or list of lists to store the connection count info for each cluster and cluster members. If you use lists, you could use lsort -integer -index X to get the cluster and cluster member with the lowest connection count.
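
    For example, once you had the counts gathered, the selection step could look something like this (the pool names and numbers are just placeholders):

    when HTTP_REQUEST {
       # Each element pairs a cluster's pool name with its current connection count.
       set cluster_counts [list {pool1 12} {pool2 7} {pool3 22} {pool4 7} {pool5 15}]

       # Sort ascending on the count (index 1); the first element is the least
       # loaded cluster. Tcl's lsort is stable, so ties keep their original order.
       set least_loaded [lindex [lsort -integer -index 1 $cluster_counts] 0]

       # Send the request to that cluster's pool ("pool2" with the sample numbers).
       pool [lindex $least_loaded 0]
    }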

    If you can't use a correlation between the IP addresses and cluster members, you'd probably need to use a difficult to maintain configuration using datagroups with the cluster memberships and update that each time the cluster members change.

    Aaron
  • Posted By hoolio on 04/28/2010 09:52 AM

    I haven't read through and fully understood the example you provided, but I can see at least one issue with the overall scenario. I don't think you'll be able to use LTM's least connections load balancing algorithm, as it sounds like you want to logically combine the connection counts for clusters of three servers.

    You could try to manually track the connections to each "cluster" of three servers yourself. If you did that, you could track which of the clusters had the lowest connection count and select that cluster to handle the request. Once a cluster had been selected you could select the server with the lowest connection count. For persistence, you could track which cluster was selected and set a cookie based on that. You'd want to skip the cluster/server selection in subsequent requests if the client presented a valid persistence cookie.

    If that seems doable, could you confirm which LTM version you're running? Also, is there any simple way you can think of to correlate the servers in a "cluster"? Can you come up with an IP addressing scheme that would make it simple to identify the servers in cluster 1 versus cluster 2, etc.? If so, and you're running 10.x, you could add all of the servers to a single pool and then use the members or active_members command to get a list of the pool members.

    You could then use an array or list of lists to store the connection count info for each cluster and cluster members. If you use lists, you could use lsort -integer -index X to get the cluster and cluster member with the lowest connection count.

    If you can't use a correlation between the IP addresses and cluster members, you'd probably need to use a difficult to maintain configuration using datagroups with the cluster memberships and update that each time the cluster members change.

    Aaron

    Hoolio - I'm currently using IP schemes to identify the servers in each pool. For instance, 1.1.1.1-1.1.1.3 would be pool 1, 1.1.1.4-1.1.1.6 would be pool 2, etc...I'm doing that with datagroups today. Good point on least connections...I'm running 10.1. I'm not too concerned with which node in the pool/cluster has the most connections since all 3 share the same backend resources. I'm very interested in sending to the "pool" with the least connections. So, yes, I'd want to track the connections to the 3 nodes in each cluster and make my initial selection based on that.
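
    For the tracking piece, I'm thinking of something like this with the 10.1 table command (cluster_map being the address datagroup that maps each server to its pool name; just a sketch that ignores table entry timeouts, OneConnect, and race conditions):

    when LB_SELECTED {
       # Count one more connection against whichever cluster this server is in.
       set my_cluster [class match -value [LB::server addr] equals cluster_map]
       if { $my_cluster ne "" } {
          table incr "conns_$my_cluster"
       }
    }

    when SERVER_CLOSED {
       # Server-side connection is done: take it back off that cluster's count.
       if { [info exists my_cluster] && $my_cluster ne "" } {
          set current [table lookup "conns_$my_cluster"]
          if { $current ne "" && $current > 0 } {
             table set "conns_$my_cluster" [expr {$current - 1}]
          }
       }
    }

    Those counters could then feed the lsort selection you described to pick the cluster on a client's first request.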