Forum Discussion

dannyk81_365606
Jun 28, 2018

Traffic shifting (based on ratio) between two pools from a single VS

Hey everyone,

We are using an F5 BigIP LTM as an external load balancer for ingress traffic to our Kubernetes clusters. I'm trying to build a solution that performs controlled traffic shifting between two Kubernetes clusters (i.e. Blue/Green).

We need a single Virtual Server plus logic to "divert" some percentage of traffic to a different pool so we can test how it works, with the goal of gradually moving all traffic there (i.e. from the Blue cluster to the Green cluster).

I found the following post (https://devcentral.f5.com/questions/two-pool-ratio-configuration) and tried to follow that approach; however, all traffic ends up being sent to just one of the pools, regardless of the Ratio setting I define.

I defined the following dummy nodes:

    ltm node /DEV_SRE_APP/blue-ingress-dummy {
        address 128.1.1.1
        monitor /Common/none
    }

    ltm node /DEV_SRE_APP/green-ingress-dummy {
        address 128.2.2.2
        monitor /Common/none
    }

and a pool with these nodes as members:

    ltm pool /DEV_SRE_APP/kub_ingress_virtual_pool {
        load-balancing-mode ratio-member
        members {
            /DEV_SRE_APP/blue-ingress-dummy:80 {
                address 128.1.1.1
                monitor /Common/none
                ratio 100
            }
            /DEV_SRE_APP/green-ingress-dummy:80 {
                address 128.2.2.2
                monitor /Common/none
            }
        }
    }
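To sanity-check the split I'm expecting, here's a quick Python sketch (just a model of ratio-based selection, not BIG-IP internals) — I'm assuming green's unset ratio defaults to 1, so blue at ratio 100 should get roughly 100/101 ≈ 99% of picks:

```python
import random

# Ratios from the pool config above; green's ratio is assumed
# to default to 1 since none is set on that member.
RATIOS = {"128.1.1.1": 100, "128.2.2.2": 1}

def pick(ratios, rng):
    """Pick an address with probability proportional to its ratio weight."""
    total = sum(ratios.values())
    r = rng.uniform(0, total)
    for addr, weight in ratios.items():
        r -= weight
        if r <= 0:
            return addr
    return addr  # floating-point edge case: fall back to the last member

rng = random.Random(1)
picks = [pick(RATIOS, rng) for _ in range(10_000)]
blue_share = picks.count("128.1.1.1") / len(picks)
print(f"blue share: {blue_share:.3f}")  # roughly 100/101, i.e. ~0.99
```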

This is the iRule:

    ltm rule /DEV_SRE_APP/kub_ingress_blue_green_rule {
        when CLIENT_ACCEPTED {
            set members [active_members -list [LB::server pool]]
            log local0. "Active members are: $members"

            eval [LB::select]

            switch [getfield [LB::server addr] "%" 1] {
                "128.1.1.1" {
                    log local0. "Sending to BLUE cluster"
                    pool blue_kub_ingress_pool
                }
                "128.2.2.2" {
                    log local0. "Sending to GREEN cluster"
                    pool green_kub_ingress_pool
                }
            }
        }
    }

With the above, I was expecting ~99% of the requests to hit dummy node "128.1.1.1" and be diverted to pool "blue_kub_ingress_pool"; however, LB::select always returns "128.2.2.2", which picks pool "green_kub_ingress_pool".

    Jun 28 13:02:21 mad-bigip01 info tmm[11791]: Rule /DEV_SRE_APP/kub_ingress_blue_green_rule : Active members are: {128.2.2.2%791 80} {128.1.1.1%791 80}
    Jun 28 13:02:21 mad-bigip01 info tmm[11791]: Rule /DEV_SRE_APP/kub_ingress_blue_green_rule : Sending to GREEN cluster

Only if I manually disable node "128.2.2.2" does it select node 128.1.1.1, and traffic then ends up in pool "blue_kub_ingress_pool".

Perhaps I'm trying something that is no longer supported? Or maybe I'm missing something?

--Danny