Forum Discussion

Brady_11518
Nimbostratus
Apr 23, 2013

irule based pool selection *and* priority group activation?

Hello!

I am using an iRule with a switch command on a single virtual server in LTM 11.3 to split traffic among several pools for the same set of hosts:

http://www.abc.com/site1
http://www.abc.com/site2
http://www.abc.com/site3

site1-pool, site2-pool, and site3-pool are all identical. Each has 5 members at priority 2 and 5 members at priority 1, with priority group activation set to "less than 1" available member.

The virtual server has no default pool; it has the following iRule:

 

    when HTTP_REQUEST {
        switch -glob -- [string tolower [HTTP::path]] {
            "/site2*" {
                pool site2-pool
                return
            }
            "/site3*" {
                pool site3-pool
                return
            }
            default {
                pool site1-pool
                return
            }
        }
    }

 

Functionally, all traffic is indeed getting sent to the correct pool. My problem is that pool member priorities are not being respected: traffic is being sent to the priority 1 members of each pool even though all of the priority 2 members are up. My other "normal" virtual servers with a single pool correctly send traffic only to the priority 2 servers; it is only the one where the pool is selected by the iRule that does not respect the priority.

Am I missing something? Is there something I need to specify in the iRule to make it respect the priority group activation settings on the pool the traffic is sent to? Or do I have to (ugh) reinvent the priority group activation logic within the iRule itself?
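For what it's worth, the "reinvent it" fallback could look something like the sketch below. It assumes each priority tier is split into its own pool (the `-pri2-pool`/`-pri1-pool` names are hypothetical) and uses the `active_members` command to emulate activation by hand:

    when HTTP_REQUEST {
        # Hypothetical sketch only: split each priority tier into its own
        # pool (the -pri2-pool / -pri1-pool names are made up) and emulate
        # priority group activation manually with active_members.
        switch -glob -- [string tolower [HTTP::path]] {
            "/site2*" { set base "site2" }
            "/site3*" { set base "site3" }
            default   { set base "site1" }
        }
        # Prefer the high-priority pool while it has at least one active member
        if { [active_members "${base}-pri2-pool"] >= 1 } {
            pool "${base}-pri2-pool"
        } else {
            pool "${base}-pri1-pool"
        }
    }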

 

6 Replies

  • This is mine:

    root@(ve11a)(cfg-sync Changes Pending)(Active)(/Common)(tmos) list ltm virtual bar
    ltm virtual bar {
        destination 172.28.20.16:80
        ip-protocol tcp
        mask 255.255.255.255
        profiles {
            http { }
            tcp { }
        }
        rules {
            myrule
        }
        source 0.0.0.0/0
        source-address-translation {
            type automap
        }
        vlans-disabled
    }
    root@(ve11a)(cfg-sync Changes Pending)(Active)(/Common)(tmos) list ltm rule myrule
    ltm rule myrule {
        when HTTP_REQUEST {
       pool foo
    }
    when HTTP_RESPONSE {
       log local0. "client [IP::client_addr]:[TCP::client_port] server [IP::server_addr]:[TCP::server_port]"
    }
    }
    root@(ve11a)(cfg-sync Changes Pending)(Active)(/Common)(tmos) list ltm pool foo
    ltm pool foo {
        members {
            200.200.200.101:80 {
                address 200.200.200.101
                priority-group 2
                session monitor-enabled
                state up
            }
            200.200.200.111:80 {
                address 200.200.200.111
                priority-group 1
                session monitor-enabled
                state up
            }
        }
        min-active-members 1
        monitor gateway_icmp
    }
    
    Test:

    [root@ve11a:Active:Changes Pending] config # tail -f /var/log/ltm
    Apr 23 13:07:06 ve11a info tmm1[8163]: Rule /Common/myrule : client 172.28.19.251:34211 server 200.200.200.101:80
    Apr 23 13:07:06 ve11a info tmm[8163]: Rule /Common/myrule : client 172.28.19.251:34212 server 200.200.200.101:80
    Apr 23 13:07:06 ve11a info tmm1[8163]: Rule /Common/myrule : client 172.28.19.251:34213 server 200.200.200.101:80
    Apr 23 13:07:06 ve11a info tmm[8163]: Rule /Common/myrule : client 172.28.19.251:34214 server 200.200.200.101:80
    Apr 23 13:07:06 ve11a info tmm1[8163]: Rule /Common/myrule : client 172.28.19.251:34215 server 200.200.200.101:80
    Apr 23 13:07:06 ve11a info tmm[8163]: Rule /Common/myrule : client 172.28.19.251:34216 server 200.200.200.101:80
    Apr 23 13:07:06 ve11a info tmm1[8163]: Rule /Common/myrule : client 172.28.19.251:34217 server 200.200.200.101:80
    Apr 23 13:07:06 ve11a info tmm[8163]: Rule /Common/myrule : client 172.28.19.251:34218 server 200.200.200.101:80
    Apr 23 13:07:06 ve11a info tmm1[8163]: Rule /Common/myrule : client 172.28.19.251:34219 server 200.200.200.101:80
    Apr 23 13:07:06 ve11a info tmm[8163]: Rule /Common/myrule : client 172.28.19.251:34220 server 200.200.200.101:80
    
  • This is when not using priority groups. Is there anything I missed?

    root@(ve11a)(cfg-sync Changes Pending)(Active)(/Common)(tmos) list ltm pool foo
    ltm pool foo {
        members {
            200.200.200.101:80 {
                address 200.200.200.101
                priority-group 2
                session monitor-enabled
                state up
            }
            200.200.200.111:80 {
                address 200.200.200.111
                priority-group 1
                session monitor-enabled
                state up
            }
        }
        monitor gateway_icmp
    }
    
    [root@ve11a:Active:Changes Pending] config # tail -f /var/log/ltm
    Apr 23 13:09:07 ve11a info tmm1[8163]: Rule /Common/myrule : client 172.28.19.251:34231 server 200.200.200.101:80
    Apr 23 13:09:07 ve11a info tmm[8163]: Rule /Common/myrule : client 172.28.19.251:34232 server 200.200.200.101:80
    Apr 23 13:09:07 ve11a info tmm1[8163]: Rule /Common/myrule : client 172.28.19.251:34233 server 200.200.200.111:80
    Apr 23 13:09:07 ve11a info tmm[8163]: Rule /Common/myrule : client 172.28.19.251:34234 server 200.200.200.111:80
    Apr 23 13:09:07 ve11a info tmm1[8163]: Rule /Common/myrule : client 172.28.19.251:34235 server 200.200.200.101:80
    Apr 23 13:09:07 ve11a info tmm[8163]: Rule /Common/myrule : client 172.28.19.251:34236 server 200.200.200.101:80
    Apr 23 13:09:07 ve11a info tmm1[8163]: Rule /Common/myrule : client 172.28.19.251:34237 server 200.200.200.111:80
    Apr 23 13:09:07 ve11a info tmm[8163]: Rule /Common/myrule : client 172.28.19.251:34238 server 200.200.200.111:80
    Apr 23 13:09:07 ve11a info tmm1[8163]: Rule /Common/myrule : client 172.28.19.251:34239 server 200.200.200.101:80
    Apr 23 13:09:07 ve11a info tmm[8163]: Rule /Common/myrule : client 172.28.19.251:34240 server 200.200.200.101:80
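
    The only configuration difference between the two tests above is min-active-members. For reference, it can be toggled from tmsh like so (pool name taken from the example above; setting 0 disables priority group activation):

    tmsh modify ltm pool foo min-active-members 1
    tmsh modify ltm pool foo min-active-members 0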
    
    
  • Boy, that sure makes it seem like it should be working as expected.

    My VS also has the following characteristics:

    lb: observed (node)
    tcp (default settings)
    tcp-wan-optimized (client, default settings)
    tcp-lan-optimized (server, default settings)
    http (default settings)
    oneconnect (default settings)
    wan-optimized-compression (default settings)
    webacceleration (default settings)

    I'll try removing everything but http and tcp tomorrow, switch to member-based LB on the pools, and see if that fixes it.

    If one of these seems like it would obviously be the culprit, let me know!

    Thanks!
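
    For reference, the member-based LB change mentioned above can be made in one tmsh command per pool (pool name assumed):

    tmsh modify ltm pool site1-pool load-balancing-mode observed-member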

     

     
