Forum Discussion

Peter_Fellwock_
Nimbostratus
May 03, 2006

primary-secondary in an iRule

So here is my issue to resolve: we have a site that manages sessions poorly (actually, it just does not manage sessions at all), and we need a load-balancing scheme that sends all traffic to one machine until it dies, then to the next. So how can I tell whether a node is down according to a particular monitor?

set primary [ 10.1.2.100 ]
set secondary [ 10.1.2.200 ]

when HTTP_REQUEST {
   if { primary node is down by monitor my_mon } {
      node secondary 80
   } else {
      node primary 80
   }
}

thanx
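
One way to express the "is the primary down according to its monitor" check directly in an iRule is to route by pool rather than by individual node and test the pool's active member count. A minimal sketch, assuming two hypothetical single-member pools, primary_pool and secondary_pool, each watched by the my_mon monitor:

when HTTP_REQUEST {
    # active_members returns the number of monitor-up members in a pool,
    # so the primary is considered down when its pool reports zero.
    if { [active_members primary_pool] < 1 } {
        pool secondary_pool
    } else {
        pool primary_pool
    }
}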

9 Replies

  • If you only want 1 server at a time instead of a load balancing scheme, why not use pool member priorities instead of an iRule?

    
    pool testpool {
       action on svcdown reselect
       min active members 1
       member 10.10.10.1:http priority 4
       member 10.10.10.2:http priority 3
       member 10.10.10.3:http priority 2
       member 10.10.10.4:http
    }
  • Deb_Allen_18
    Historic F5 Account
    Priority works unless you really truly need traffic to go to only 1 pool member. With just priority, and with persistence of any kind enabled, when the higher prio nodes come back up after failing, you will see traffic distributed across multiple pool members until old connections/sessions die off.

    Here's a really slick way to stick to one and only one server in a pool.

    Apply to the VS a universal persistence profile using an iRule like this:

    rule PriorityFailover {
      when CLIENT_ACCEPTED { persist uie 1 }
    }

    The first connection will create a single universal persistence record with a key of "1". All subsequent connections will look up persistence using "1" as the key, resulting in truly universal persistence for all connections. (Use 1 or any constant value; 0 will have the same effect as using 1. One of my customers uses "persist uie [TCP::local_port]".)

    When one node fails, the other is persisted to by all comers. When the 2nd node fails, the 1st again becomes the preferred node for all, ad infinitum.

    Doesn't offer the capability of manual resume after failure, or true designation of a "primary" and "secondary" instance (sometimes required for db applications), but it sure does solve the problem of "only use one node at a time, I don't care which one, please" (You can use priority to gravitate towards the top of a list...)

    HTH

    /deb
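
    For reference, the supporting configuration would look something like this in tmsh syntax (a sketch based on the listings later in this thread; the rule, profile, virtual, and pool names and the destination address are placeholders, and the timeout is arbitrary):

        ltm rule single_node_persistence {
            when CLIENT_ACCEPTED {
                # one constant key, so every connection shares one persistence record
                persist uie 1
            }
        }
        ltm persistence universal single_node {
            defaults-from universal
            rule single_node_persistence
            timeout 3600
        }
        ltm virtual my_virtual {
            destination 10.0.0.10:80
            persist {
                single_node {
                    default yes
                }
            }
            pool my_pool
        }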
  • Is this rule still valid for 9.4.5? I can't even save it without an error:

        01070151:3: Rule [single_node_persistence] error:
        line 1: [undefined procedure: rule] [rule PriorityFailover {
            when CLIENT_ACCEPTED { persist uie 1 }
        }]
  • If you're adding the rule via the GUI or the iRuler, leave off the first and last lines as these define the iRule name. Just use this:

     
     when CLIENT_ACCEPTED { 
        persist uie 1 
     } 
     

    Aaron
  • I'm going to test this scenario; I've already created the config shown below.

    Would this mean that I need to change my Pool Priorities to make both Servers "priority-group 1" rather than 2 & 1?

    I guess with this setup the first server (Server A) to take a connection gets that connection and all subsequent ones; when Server A breaks, Server B takes all connections; and when Server A comes back online, old and new connections still remain with Server B. This is what I want: only one server processing client traffic. However, I need to make sure that when both servers start up Monday morning and the health checks kick in, all connections go to Server A first, not Server B. What I'm asking is: does that mean Server A needs to be started first and take the first client connection before it becomes the target for all client connections? I can't allow Server B to take connections unless Server A dies. (See also the note after the config below.)

    Monday morning, always has to be Server A.

    Thanks.

    (Active)(/Common)(tmos) list ltm rule "FIX"
    ltm rule FIX {
        when CLIENT_ACCEPTED {
        persist uie 1
    }
    }
    
    (Active)(/Common)(tmos) list ltm persistence
    ltm persistence global-settings { }
    ltm persistence universal FIX {
        app-service none
        defaults-from universal
        rule FIX
        timeout 3600
    }
    
    (Active)(/Common)(tmos) list ltm pool "FIX-19003"
    ltm pool FIX-19003 {
        load-balancing-mode least-connections-member
        members {
            fixomln1d01.zit.commerzbank.com:19003 {
                address 10.167.20.20
                priority-group 1
                session monitor-enabled
                state up
            }
            fixomln1d03.zit.commerzbank.com:19003 {
                address 10.167.20.11
                priority-group 2
                session monitor-enabled
                state up
            }
        }
        min-active-members 1
        monitor FIX
    }
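
    If Monday morning always has to start on Server A, one option (a sketch, not from this thread; it assumes the single-node-persistence rule plus the priority groups above, and that clearing persistence during the maintenance window is acceptable) is to delete the universal persistence record before traffic resumes, so the next connection is load balanced again and re-persists to the highest-priority member:

        # x.x.x.x is the address of the member currently holding the
        # persistence record (e.g. Server B after a failover).
        tmsh delete ltm persistence persist-records node-addr x.x.x.x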
    
  • Would this mean that I need to change my Pool Priorities to make both Servers "priority-group 1" rather than 2 & 1?

    If you want to give a specific server higher priority (e.g. Server A), I think priority groups (combined with the single node persistence iRule) may be helpful.

    sol8968: Enabling persistence for a virtual server allows returning clients to bypass load balancing

    https://support.f5.com/kb/en-us/solutions/public/8000/900/sol8968.html

  • I've tried single node persistence with priority groups and this doesn't work well (my current setup). When Server A fails, new connections go to Server B; the problem is that when Server A comes back up, while existing connections remain with Server B, new connections go to Server A, as in the example below (see also the note after the pool listing).

    Active)(/Common)(tmos) list ltm virtual "FIX-19003"
    ltm virtual FIX-19003 {
        destination 10.167.21.16:19003
        ip-protocol tcp
        mask 255.255.255.255
        mirror enabled
        persist {
            source_addr {
                default yes
            }
        }
        pool FIX-19003
        profiles {
            tcp { }
        }
        vlans-disabled
    }
    
    )(Active)(/Common)(tmos) list ltm pool "FIX-19003"
    ltm pool FIX-19003 {
        load-balancing-mode least-connections-member
        members {
            fixomln1d01.zit.commerzbank.com:19003 {
                address 10.167.20.20
                priority-group 1
                session monitor-enabled
                state up
            }
            fixomln1d03.zit.commerzbank.com:19003 {
                address 10.167.20.11
                priority-group 2
                session monitor-enabled
                state up
            }
        }
        min-active-members 1
        monitor FIX
    }
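
    One observation based on the listings above: this virtual server is attached to the source_addr persistence profile rather than the FIX universal persistence profile defined earlier, so the single node persistence iRule is not actually in effect here. For that approach, the persist section of the virtual would reference the universal profile instead, roughly:

        ltm virtual FIX-19003 {
            destination 10.167.21.16:19003
            ip-protocol tcp
            mask 255.255.255.255
            mirror enabled
            persist {
                FIX {
                    default yes
                }
            }
            pool FIX-19003
            profiles {
                tcp { }
            }
            vlans-disabled
        }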
    
  • I've tried single node persistence with priority groups and this doesn't work well (my current setup). When Server A fails, new connections go to Server B; the problem is that when Server A comes back up, while existing connections remain with Server B, new connections go to Server A

     

    did you see server B's persistence record when server A came back up?

     

  • "show ltm persistence persist-records node-addr x.x.x.x" ......showed Persistent connections if I remember (tested 3 months ago)....need to test again to be sure.