Forum Discussion

smeisenzahl
Aug 28, 2013

iRule to disable VIP with less than X members in a pool

Create an iRule to disable a VIP, or stop sending traffic to a VIP, when the LTM pool has fewer than 3 members. I have a WIP test.company.com with two VIPs: vip1 in datacenter1 (dc1) and vip2 in datacenter2 (dc2). The LTMs in the two datacenters do not communicate or share the same IP space, but the GTMs in the two DCs are in the same sync group and can communicate. Since the LTMs can't communicate, I can't use priority groups and need a way to make the VIP fail over to the other DC.

Datacenter 1: HA GTMs in sync group "group"; HA LTMs with no communication to the LTMs in datacenter 2.

Datacenter 2: HA GTMs in sync group "group"; HA LTMs with no communication to the LTMs in datacenter 1.

Scenario - WIP name = test.company.com, WIP pool = "test_pool". The LB method on the pool is Global Availability; the members of this pool are the VIP from dc1 and the VIP from dc2, and all traffic needs to go to the VIP in dc1 unless it is down.

The LTMs in dc1 have a VIP = testvip-dc1 and a pool = testpool-dc1 with 4 members in the pool (different pool members than dc2).

The LTMs in dc2 have a VIP = testvip-dc2 and a pool = testpool-dc2 with 4 members in the pool (different pool members than dc1).

Due to capacity issues, when "testpool-dc1" has only two members available, I need to stop sending traffic to the VIP in dc1 and start sending traffic to the VIP in dc2.

I have written some valid iRules and applied them to the VIPs, but they don't seem to work. I think I need an iRule on the GTM, but I'm not sure. Below are some examples I have created for the VIPs:

when HTTP_REQUEST {
    if { [active_members [LB::server pool]] <= 2 } {
        pool testpool-dc1
    } else {
        LB::down
    }
}

when CLIENT_ACCEPTED {
    if { [active_members testpool-dc1] <= 2 } {
        LB::down
    }
}

I'm probably way off on my iRules, so any help on writing the iRule and where to assign it (LTM or GTM) would be great.

6 Replies

  • Hi buddy,

    Are you looking for this: https://devcentral.f5.com/wiki/iRules.Monitor_pools_from_external_monitors.ashx

  • That's an interesting use case, and it's a little sad that this isn't just an option in the config. There are probably a few ways to do this, but here's what I've come up with using an external monitor. Add this monitor script to a pool that already has a good working monitor, and make sure the pool's Availability Requirement setting is set to All.

    #!/bin/sh
    pidfile="/var/run/$MONITOR_NAME.$1..$2.pid"
    # Send signal to the process group to kill our former self and any children,
    # as external monitors are run with SIGHUP blocked
    if [ -f $pidfile ]
    then
        kill -9 -`cat $pidfile` > /dev/null 2>&1
    fi
    echo "$$" > $pidfile
    
    pool=local-pool
    minup=2
    virtual=access-test-vs
    
    # Parse the current number of active members from tmsh output
    upmembers=`tmsh show /ltm pool $pool members |grep "Current Active Members" |awk -F" : " '{ print $2 }'`
    
    if [ "$upmembers" -ge "$minup" ]
    then
        rm -f $pidfile
        # Re-enable the virtual if it was previously disabled
        state=`tmsh show /ltm virtual $virtual |grep "State" |awk -F" : " '{ print $2 }'`
        if [ "$state" = "disabled" ]
        then
            logger -p local0.info -t MONITOR-ALERT "Pool $pool Monitor UP - enabling virtual $virtual"
            tmsh modify /ltm virtual $virtual enabled
        fi
        echo "up"
    else
        rm -f $pidfile
        # Disable the virtual if it is currently enabled
        state=`tmsh show /ltm virtual $virtual |grep "State" |awk -F" : " '{ print $2 }'`
        if [ "$state" = "enabled" ]
        then
            logger -p local0.info -t MONITOR-ALERT "Pool $pool Monitor DOWN - disabling virtual $virtual"
            tmsh modify /ltm virtual $virtual disabled
        fi    
        echo "up"
    fi
    

    The script itself always reports "up" to the monitor because its job is simply to watch the number of available pool members, compare that to the "minup" value, and then enable or disable the VIP if it was previously disabled or enabled, respectively. Also notice that I hard-coded the pool, minup, and virtual values:

    pool=local-pool
    minup=2
    virtual=access-test-vs
    

    You could very easily turn those into arguments in the external monitor config:

    ltm monitor external test_nodesup_monitor {
        args "local-pool 2 access-test-vs"
    }
    

    That way the script could use $3 (the pool), $4 (minup), and $5 (virtual server) instead of the hard-coded values. You'll still need to create a separate external monitor config for each VIP (and pool) that you want to work with.
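
    A minimal sketch of that change, relying on the external monitor convention (confirmed by the $3/$4/$5 numbering above) that $1 and $2 carry the node IP and port, so the user-supplied args begin at $3:

    # Read the watched pool, member threshold, and virtual server from the
    # monitor args rather than hard-coding them ($1 = node IP, $2 = port)
    pool=$3
    minup=$4
    virtual=$5
    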

  • Is there a way to get this script to look in a different partition? Here is the error I get:

    Nov 28 18:40:45 LTM-3900-1 err tmsh[1527]: 01420006:3: 01020036:3: The requested Virtual Server (/Common/my_vip_443) was not found.

    my_vip_443 exists in the /Test/ partition.

    Any suggestions?

    • AlexDeMarco

      So I have a workaround by cd'ing into the partition.. is this a good thing to do??
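
      An untested sketch of two ways the script could reach the /Test partition, using the names from the error above; tmsh supports both cd'ing into a folder and fully qualified object paths:

      # Option 1: cd into the partition within a single tmsh invocation
      tmsh -c "cd /Test; modify ltm virtual my_vip_443 disabled"

      # Option 2: reference the virtual by its fully qualified path
      tmsh modify /ltm virtual /Test/my_vip_443 disabled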

       
