Forum Discussion

atomicdog_7107
Nimbostratus
Feb 07, 2008

Forced Failover via iRule?

We have an application team that is requesting something odd, and I believe the only way to set this up is via iRules. Here's the scenario:

We have one physical primary server with 4 JVM instances on it, and another physical server (on standby) with 4 more JVM instances on it. The request is to load-balance only the 4 instances on the primary server; when 2 or more JVM instances fail health checks on the primary server, the load balancer should fail over completely to the 4 ALSB instances on the secondary server (and kill the remaining sessions to the primary server's instances). They don't want both servers taking traffic simultaneously.

We've attempted to set this up using server priorities and the 'min active members' setting with a very simple TCP-based health check. Our current pool configuration is outlined below:

    pool JVM_UAT_HTTP {
       lb method member least conn
       min active members 3
       monitor all tcp
       members
          192.168.28.131:http
          192.168.28.132:http
          192.168.28.133:http
          192.168.28.134:http
          192.168.28.141:http
             priority 2
          192.168.28.142:http
             priority 2
          192.168.28.143:http
             priority 2
          192.168.28.144:http
             priority 2
    }

The problem that we're running into is this: when the primary server loses 2 of its JVMs, the BIG-IP begins to use ALL of the remaining available JVMs (the remaining 2 from the primary server AND the 4 from the secondary server) instead of using only the 4 on the secondary server as requested.

Can this be done via iRules (or even a different configuration)? If so, does anyone have an iRule that I can try?

Thanks for your help!

4 Replies

  • At its most basic:

    when CLIENT_ACCEPTED {
      if { [active_members myPrimaryPool] < 3 } {
        pool myStandbyPool
      } else {
        pool myPrimaryPool
      }
    }

    You can build from here.
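    To also cover the "kill the remaining sessions" part of the original request, one hedged extension (a sketch, assuming an HTTP virtual server and the same hypothetical pool names) is to re-evaluate on every request and detach any server-side connection still pinned to the primary pool:

    when HTTP_REQUEST {
      if { [active_members myPrimaryPool] < 3 } {
        # fewer than 3 primary members left: detach the existing
        # server-side connection and force everything to standby
        LB::detach
        pool myStandbyPool
      } else {
        pool myPrimaryPool
      }
    }

    Setting the pool's "Action On Service Down" to Reject is another way to tear down connections to members that fail their monitor.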
  • How about multiple WideIPs talking to the same HTTPS VIPs, where an iRule directs traffic to destination pools based on the host section of the decrypted URL? How do I feed back to the GTM that a WideIP should not use the site if its pool is down, but continue to use the site where the other pools are up?

    I.e.:

    WideIPs:
    abc.here.com
    123.here.com

    Both go to the VIPs:
    Site 1 LTM - 10.1.1.1
    Site 2 LTM - 10.1.2.1

    Each VIP has an iRule that checks the requested URL and sends traffic to Pool_abc or Pool_123 per the hostname.

    If Pool_abc is down at site 1, I want the VIP to be removed from the abc.here.com WideIP but to remain in the 123.here.com WideIP.

    Is this possible? I'm thinking HTTPS health checks on the WideIP pool members, but is it possible via an iRule also?
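    One hedged way to get that per-pool feedback (a sketch; the monitor name, URI, and receive string below are placeholders, not from the thread) is to give each WideIP pool member an HTTPS monitor whose Host header targets the specific application. The GTM then probes through the LTM VIP, exercising the same iRule path a real client would, and marks the VIP down for abc.here.com only when Pool_abc stops answering:

    monitor https_abc {
       defaults from https
       send "GET /healthcheck HTTP/1.1\r\nHost: abc.here.com\r\nConnection: Close\r\n\r\n"
       recv "200 OK"
    }

    The same idea with Host: 123.here.com covers the 123.here.com WideIP independently.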
  • Apologies first for asking irrelevant question related to this thread but I have stuck in one issue where I need to monitor JVM service from F5.

     

    At present ,I have set up http monitoring to monitor our backend servers which are essentially WEB SERVER(apache).These web servers talk to JVM in backend.problem is since we are monitoring web services only so when content server(JVM) is down still F5 ends up sending request to backend web server instead of redirecting it other web server.Is there any way we can monitor JVM service too on top of monitoring our web servers ?