Forum Discussion

Brian_Gibson_30
Nimbostratus
Aug 15, 2013

Changes to pool priority groups

We recently upgraded a pair of our Viprions from 10.2.4 to 11.4. Since then, one of our services has been behaving oddly.

We have two priority groups in a pool. One service, the higher-priority one, goes up and down throughout the day. However, the LB occasionally sends traffic to the secondary priority group even though the server in the higher-priority group is still running. Here is the config:

ltm pool /Common/PROD-pool-GL-Snap-54700 {
    load-balancing-mode fastest-node
    members {
        /Common/10.129.136.21:54700 {
            address 10.129.136.21
            priority-group 2
        }
        /Common/10.129.136.22:54700 {
            address 10.129.136.22
            priority-group 2
        }
        /Common/10.130.40.33:54700 {
            address 10.130.40.33
            priority-group 2
        }
        /Common/10.130.40.34:54700 {
            address 10.130.40.34
            priority-group 2
        }
        /Common/10.130.40.41:54700 {
            address 10.130.40.41
            priority-group 2
        }
        /Common/10.130.40.42:54700 {
            address 10.130.40.42
            priority-group 2
        }
        /Common/10.130.40.43:54700 {
            address 10.130.40.43
            priority-group 2
        }
        /Common/10.130.40.44:54700 {
            address 10.130.40.44
            priority-group 2
        }
        /Common/174.234.194.154:80 {
            address 174.234.194.154
            priority-group 1
        }
    }
    min-active-members 1
    monitor /Common/tcp_half_open
}

This only started happening after the upgrade. I don't see the service being marked down in the logs.
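
(For reference, BIG-IP prefers the members with the highest priority-group value, so with min-active-members 1 this pool should send all traffic to the eight priority-group 2 members and only spill over to the single priority-group 1 member when none of them is available. A quick way to check current member availability from the CLI, as a sketch using the pool name from the config above:

    # Show current availability and traffic statistics for each pool member
    tmsh show ltm pool /Common/PROD-pool-GL-Snap-54700 members
)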

6 Replies

  • BinaryCanary_19
    Historic F5 Account
    You need to post the virtual server config too. Persistence might influence the patterns you observe, as could long-lived connections.
  • OK. Not sure what that does, but no problem...

    ltm virtual /Common/PROD-vs-GL-Snap-54700 {
        destination /Common/X.X.X.X:80
        ip-protocol tcp
        mask 255.255.255.255
        pool /Common/PROD-pool-GL-Snap-54700
        profiles {
            /Common/http { }
            /Common/tcp { }
        }
        source 0.0.0.0/0
        source-address-translation {
            type automap
        }
        translate-address enabled
        translate-port enabled
    }
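
    (For what it's worth, no persistence profile appears in this virtual server, so persistence itself shouldn't be steering traffic. One way to confirm whether connections are actually landing on the priority-group 1 member is to filter the connection table by its server-side address, as a sketch using the member address and port from the pool config above:

        # List active flows whose server-side destination is the low-priority member
        tmsh show sys connection ss-server-addr 174.234.194.154 ss-server-port 80
    )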
    
    • BinaryCanary_19
      Historic F5 Account
      Are these typically long-lived connections? Without persistence, the only explanation I can think of for connections going to lower-priority pool members is that the high-priority pool member was down at the time the load-balancing decision was made.
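
      (One way to test that theory is to look for monitor state transitions around the time of the redirects; the BIG-IP logs a line in /var/log/ltm whenever a monitor marks a member up or down. A sketch using the pool name from the config above:

          # Search current and rotated LTM logs for state changes on this pool
          zgrep PROD-pool-GL-Snap-54700 /var/log/ltm*
      )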
  • The connections are not long-lived, and the redirects happen hours after the primary server is back up and running.

    • BinaryCanary_19
      Historic F5 Account
      If you have packet captures that demonstrate this and you're unable to make sense of them, try opening a support case. If you are going to take packet captures, use:

          tcpdump -i 0.0:nnn -s0 -w /var/tmp/filename.cap host or dst port 54700

      This might generate a large file depending on your traffic. Once you feel you've caught the problem, note the time and send the capture for review if you're unable to find the cause. Send a new qkview too.
    • BinaryCanary_19
      Historic F5 Account
      Meh. This site is stripping special characters. The tcpdump command above should have the SNAT IP address after "host". You're using automap, so this would be the floating self-IP on the VLAN where the pool members are reachable.
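
      (For readers following along, the intended capture command presumably looks like the line below, with a placeholder where the forum stripped the address; substitute the actual floating self-IP used by SNAT automap:

          tcpdump -i 0.0:nnn -s0 -w /var/tmp/filename.cap host <floating-self-IP> or dst port 54700
      )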