Forum Discussion

Chris_125062
Nimbostratus
May 03, 2013

Dynamic Active/Active/Standby Load Balancing

Our F5 is an 8900 LTM running 10.2.4 HF6. I installed these devices two years ago and have administered them since.

Today is the day I try to create my first custom iRule... but I have no clue where to start.

In this example there are a total of 8 database servers. Five servers (named servers 1, 2, 3, 4, and 5) are located in data center 1. The remaining three servers (named servers 6, 7, and 8) are located in data center 2.

Servers 1 & 2 are replicated to servers 3 & 4, which are replicated to servers 6 & 7.

Server 5 is replicated to server 8.

>> I would like to set up this profile so that servers 1, 2, 3, 4, and 5 are always used under normal operating circumstances.

>> In the event that servers 1 AND 3 are down, I would like server 6 to begin receiving traffic as well.

>> Also, in the event that servers 2 AND 4 are down, I would like server 7 to begin receiving traffic as well.

>> If server 5 is down, I would like server 8 to begin receiving traffic as well.

Is this possible? How would you suggest that I start attacking this issue?


7 Replies

  • Are servers 1, 3, and 6 different from servers 2, 4, and 7, and those different from servers 5 and 8? In other words, are these 3 sets of different services in 3 separate pools? If so, then priority group load balancing would be the easiest to implement.
  • Just wondering if a "reverse" health monitor would be useful here. For example, assign reverse monitors that check servers 1 and 3 to server 6.

  • Kevin -

    How would you suggest going about this? Note that in normal operations, I would like servers 1, 2, 3, 4, and 5 online and load balancing traffic. Servers 6, 7, and 8 would only activate during the fail conditions detailed above.

    Servers 1, 3, & 6 = DB A

    Servers 2, 4, & 7 = DB B

    Servers 5 & 8 = Archive

    nitass -

    Is it possible to monitor multiple nodes using a reverse monitor?
  • Can I assume that DB A, DB B, and Archive are three SEPARATE services/pools, and can I also assume they are attached to three separate virtual servers as well? If they are separate pools, then priority group load balancing allows you to assign different priorities to members of a pool. On the Members tab of a given pool, set the Priority Group Activation option to something like Less than 1. Then click each of the member nodes and set a priority group number. So for example, set servers 1, 2, 3, 4, and 5 to priority 2 (higher numbers mean higher priority), and set servers 6, 7, and 8 to priority 1. With this setting, only the priority 2 (higher) members will be used until all (less than 1) priority 2 members become unavailable (per pool). You'll also of course need a monitor on the pools to determine that availability.
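    On 10.2.x, those priority-group settings can also be sketched in bigpipe pool syntax. As an illustration only, here is what the DB A pool might look like, with hypothetical pool name, member addresses, and port; the DB B and Archive pools would follow the same pattern. "min active members 1" corresponds to the "Less than 1" Priority Group Activation setting in the GUI:

    pool db_a_pool {
       monitor all myhttp
       min active members 1
       members {
          10.10.1.1:1433 { priority 2 }
          10.10.1.3:1433 { priority 2 }
          10.10.2.6:1433 { priority 1 }
       }
    }

    With this, member .2.6 (server 6) only receives traffic once fewer than 1 of the priority-2 members remains available.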
  • "Is it possible to monitor multiple nodes using a reverse monitor?" For example:

    I assume 200.200.200.101, .102, and .111 are servers 1, 3, and 6 respectively. Anyway, the problem I found is that this does not work when the server is shut down (i.e. port 80 is not listening). I am not sure if that is expected (actually, I think it is not).

    [root@ve10:Active] config  b pool foo list
    pool foo {
       monitor all myhttp
       members {
          200.200.200.101:80 {}
          200.200.200.102:80 {}
          200.200.200.111:80 {
             monitor monitor101 and monitor102
          }
       }
    }
    [root@ve10:Active] config  b monitor myhttp list
    monitor myhttp {
       defaults from http
       interval 1
       timeout 4
       recv "UP"
       send "GET /status.html HTTP/1.1\r\nHost: \r\nConnection: Close\r\n\r\n"
    }
    [root@ve10:Active] config  b monitor monitor101 list
    monitor monitor101 {
       defaults from http
       interval 1
       timeout 4
       dest 200.200.200.101:80
       reverse
       recv "UP"
       send "GET /status.html HTTP/1.1\r\nHost: \r\nConnection: Close\r\n\r\n"
    }
    [root@ve10:Active] config  b monitor monitor102 list
    monitor monitor102 {
       defaults from http
       interval 1
       timeout 4
       dest 200.200.200.102:80
       reverse
       recv "UP"
       send "GET /status.html HTTP/1.1\r\nHost: \r\nConnection: Close\r\n\r\n"
    }
    
     when .101 and .102 return "UP"
    
    [root@ve10:Active] config  curl http://200.200.200.101/status.html
    UP
    [root@ve10:Active] config  curl http://200.200.200.102/status.html
    UP
    [root@ve10:Active] config  b pool foo|grep -i pool\ member
    +-> POOL MEMBER foo/200.200.200.101:80   active,up
    +-> POOL MEMBER foo/200.200.200.102:80   active,up
    +-> POOL MEMBER foo/200.200.200.111:80   inactive,down
    
     when .101 and .102 do not return "UP" (i.e. they return "DOWN")
    
    [root@ve10:Active] config  curl http://200.200.200.101/status.html
    DOWN
    [root@ve10:Active] config  curl http://200.200.200.102/status.html
    DOWN
    [root@ve10:Active] config  b pool foo|grep -i pool\ member
    +-> POOL MEMBER foo/200.200.200.101:80   inactive,down
    +-> POOL MEMBER foo/200.200.200.102:80   inactive,down
    +-> POOL MEMBER foo/200.200.200.111:80   active,up
    
    
  • Kevin -

    I believe that I understand the logic you're suggesting... the issue I have is: how do I load balance traffic from a single VIP to multiple pools? Oh... or can I load balance from one VIP to three VIPs that all exist on the same LTM?

    nitass -

    Curious. Which server are you referring to as shut down?

  • Load balancing from a single virtual server to multiple pools can be accomplished in a number of ways, depending on how you determine the load balancing selection. For example:

    
    when HTTP_REQUEST {
         switch -glob [string tolower [HTTP::uri]] {
              "/foo*" { pool foo_pool }
              "/bar*" { pool bar_pool }
              default { pool default_pool }
         }
    }
    

    This example assumes the client is requesting a specific URI (the trigger), so your implementation may be a little different. In any case, because you've defined different pools with their own priority groups, something like this should meet your requirements. You can also, as you mentioned, load balance from one VIP to three other VIPs, which is an excellent solution if you have to apply different profiles and/or iRules to the traffic.
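    If the three database services are distinguished by destination port rather than URI, a similar sketch could key off the port instead. The port numbers and pool names below are purely hypothetical placeholders, so adjust to match your environment:

    when CLIENT_ACCEPTED {
         switch [TCP::local_port] {
              "1433" { pool db_a_pool }
              "1434" { pool db_b_pool }
              default { pool archive_pool }
         }
    }

    Because this runs at CLIENT_ACCEPTED rather than HTTP_REQUEST, it works for non-HTTP traffic as well, and each selected pool still applies its own priority-group failover.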