Forum Discussion

Manikanta
Jun 19, 2019

VIP monitoring multiple Pools

Our application has a specific requirement: based on the URI, we route requests between multiple pools using an iRule.

 

The problem we are having is with the health check: currently the VIP monitors the health of the default pool only.

 

In this scenario, even though all members of a non-default pool are down, the VIP still directs traffic to that pool.

 

We are trying to see whether the VIP health status can be determined from the health of multiple pools. Is there any way we can achieve that?

14 Replies

  • Hello Manikanta.

    Check this video; you'll find the answer at the end:

    https://youtu.be/4uRZDAZNPRI

    KR,

    Dario.

  • JG

    A very basic example:

    when HTTP_REQUEST {
        if { [ active_members pool_1 ] < 1 or [ active_members pool_2 ] < 1 or [ active_members pool_3 ] < 1 } {
            HTTP::respond 503 content {
                <html>
                    <head>
                        <title>Service Error</title>
                    </head>
                    <body>
                        <p><font color="red">We are sorry, but the site you are trying to access is currently unavailable.</font></p>
                    </body>
                </html>
            } "Content-Type" "text/html"
        }
    }
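    A variant (a sketch; the pool names and URI prefixes here are hypothetical) that fails the request only when the pool the URI maps to has no active members, rather than when any pool is down:

    when HTTP_REQUEST {
        # Select the target pool from the URI (names are illustrative).
        switch -glob [string tolower [HTTP::uri]] {
            "/app1*" { set target pool_1 }
            "/app2*" { set target pool_2 }
            default  { set target pool_default }
        }
        if { [active_members $target] < 1 } {
            # The selected pool has no healthy members.
            HTTP::respond 503 content "Service temporarily unavailable" "Content-Type" "text/plain"
            return
        }
        pool $target
    }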
  • Thanks Dario and JG.

    In my use case, I don't want to respond to the client; we have a backup site. I just want to mark the VIP down if any of the pools is down, so that our BIG-IP DNS will not hand out that IP to clients.

    Basically, our setup looks like this:

    VIP

    Default pool: A

    Other pools: B, C, D

    Traffic between the pools is handled by an iRule with URI mapping.

    We would like to know if there is any way to mark the VIP down (or tell BIG-IP DNS that it is down) when the active_members count of pool A, B, C, or D is < 1.

    Can we do that?

     

    • Are you running an old release?

      This is not normal behavior now.

      My config...

      ltm virtual VS-TEST_2000 {
          destination 10.130.40.150:sieve-filter
          ip-protocol tcp
          mask 255.255.255.255
          profiles {
              http { }
              tcp { }
          }
          rules {
              RULE_MarkDown
          }
          source 0.0.0.0/0
          source-address-translation {
              type automap
          }
          translate-address enabled
          translate-port enabled
          vs-index 18
      }

      I'm forcing the pool state down using a UDP monitor.

      ltm pool P-ABC_80 {
          members {
              N-WEB1_10.1.1.1:http {
                  address 10.1.1.1
                  session monitor-enabled
                  state down
              }
          }
          monitor udp 
      }
      ltm pool P-DEF_80 {
          members {
              N-WEB2_10.1.1.2:http {
                  address 10.1.1.2
                  session monitor-enabled
                  state down
              }
          }
          monitor udp 
      }

      iRule

      when HTTP_REQUEST {
          set uri [HTTP::uri]
          if { $uri starts_with "bla" } {
              pool /Common/P-ABC_80
          } elseif { $uri starts_with "ble" } {
              pool /Common/P-DEF_80
          } else {
              drop
          }
      }

      KR,

      Dario.

  • Hi Dario,

     

    Which pool did you assign to the VIP: P-ABC_80 or P-DEF_80? If either of the pools (P-ABC_80 or P-DEF_80) is down, will the entire VIP be marked down?

     

    How are you doing this? I thought the VIP health status was based only on the default pool's health status.

     

    By the way, we are running version 12.1.3.

    • No, the status of the VS depends on all the pools assigned to it.

      In the previous example I'm not using a default pool, but the behavior is the same with one.

      ltm virtual VS-TEST_2000 {
          destination 10.130.40.150:sieve-filter
          ip-protocol tcp
          mask 255.255.255.255
          pool P-GHI_80
          profiles {
              http { }
              tcp { }
          }
          rules {
              RULE_MarkDown
          }
          source 0.0.0.0/0
          source-address-translation {
              type automap
          }
          translate-address enabled
          translate-port enabled
          vs-index 18
      }

      In 12.1.3 the behavior is the same...

      Please share your config (VS, pool, iRule, ...).

      KR,

      Dario.

  • Hi,

     

    I don't see an option to add multiple pools to one VIP. How are you adding them?

    • Pool P-GHI_80 was added as the default pool; the rest of them (P-ABC_80 and P-DEF_80) were referenced only in the iRule.

       

      As I said, share your config (VS, Pool, iRule).

  • JG

    You have not specified how your DNS server monitors the virtual server status. An iRule is used to respond to end-user requests only.

  • JG,

     

    BIG-IP DNS monitors the health status of virtual servers located in different data centers. Based on their availability, it provides an IP for the request using the specified load-balancing method.

  • Dario,

     

    Here is the config,

     

    VS:

     

    ltm virtual test_443_VIP {
        destination 10.xx.xx.xx:443
        ip-protocol tcp
        mask 255.255.255.255
        pool abc_POOL
        profiles {
            rdmh_2018 {
                context clientside
            }
            http { }
            tcp { }
        }
        rules {
            uri_forward
        }
        source 0.0.0.0/0
        source-address-translation {
            pool UP_snatpool
            type snat
        }
        translate-address enabled
        translate-port enabled
        vs-index 19
    }

     

     

    Pools:

     

    ltm pool ghi_POOL {
        members {
            x1:8082 {
                address 10.xx.xx.xx
                session monitor-enabled
                state down
            }
            x2:8082 {
                address 10.xx.xx.xx
                session monitor-enabled
                state down
            }
        }
        monitor http_keepalive_html
    }

    ltm pool jkl_POOL {
        members {
            x3:4082 {
                address 10.xx.xx.xx
                session monitor-enabled
                state down
            }
            x4:4082 {
                address 10.xx.xx.xx
                session monitor-enabled
                state down
            }
        }
        monitor http_keepalive_html
    }

    ltm pool abc_POOL {
        members {
            a1:8081 {
                address 10.xx.xx.xx
                session monitor-enabled
                state up
            }
            a2:8081 {
                address 10.xx.xx.xx
                session monitor-enabled
                state up
            }
        }
        monitor http_keepalive_html
    }

    ltm pool def_POOL {
        members {
            b1:8084 {
                address 10.xx.xx.xx
                session monitor-enabled
                state up
            }
            b2:8084 {
                address 10.xx.xx.xx
                session monitor-enabled
                state up
            }
        }
        monitor http_keepalive_html
    }

     

    iRule:

    when HTTP_REQUEST {
        switch -glob [string tolower [HTTP::uri]] {
            "/abc*" {
                pool abc_POOL
            }
            "/def*" {
                pool def_POOL
            }
            "/ghi*" {
                pool ghi_POOL
            }
            "/jkl*" {
                pool jkl_POOL
            }
            default {
                reject
            }
        }
    }

     

     

    In my config the default pool is abc_POOL; the other pools are referenced in the iRule only. Even though ghi_POOL and jkl_POOL are down, the VS still shows UP.

     

    I am looking for a way to mark the virtual server down if any of the pools (abc_POOL, def_POOL, ghi_POOL, jkl_POOL) is down.
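    One approach that may work here (a sketch, not verified on 12.1.3; pool names are taken from the config above): have the iRule reset connections whenever any of the four pools has no active members, so that a monitor probing the VIP, including a BIG-IP DNS monitor, fails and marks it down:

    when HTTP_REQUEST {
        # If any of the four pools has no active members, reset the
        # connection; a BIG-IP DNS monitor probing this VIP would then
        # fail and mark the virtual server down.
        foreach p { abc_POOL def_POOL ghi_POOL jkl_POOL } {
            if { [active_members $p] < 1 } {
                reject
                return
            }
        }
        switch -glob [string tolower [HTTP::uri]] {
            "/abc*" { pool abc_POOL }
            "/def*" { pool def_POOL }
            "/ghi*" { pool ghi_POOL }
            "/jkl*" { pool jkl_POOL }
            default { reject }
        }
    }

    Note this only affects monitors that probe through the VIP itself; it should not change the LTM status icon of the virtual server.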

     

  • JG

    You can set up monitors that poll the URLs that reach each of the Web service pools, and then apply these monitors to your DNS health checking system.
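    For example (a sketch with illustrative monitor, host, and GTM server names; adjust the send strings and object names to your environment), you could create one HTTP monitor per pool-specific URI and require all of them on the BIG-IP DNS side:

    # tmsh: one GTM monitor per pool-specific URI (names are illustrative).
    create gtm monitor http mon_abc send "GET /abc HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n" recv "200"
    create gtm monitor http mon_ghi send "GET /ghi HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n" recv "200"

    # Require all monitors to pass ("and") before BIG-IP DNS considers
    # the virtual server available.
    modify gtm server dc1_bigip virtual-servers modify {
        test_443_VIP { monitor mon_abc and mon_ghi }
    }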