A very common use of iRules is to choose an appropriate destination based on the current traffic or request details. In this article I'll review the iRules commands you should have in your repertoire for selecting the right pool, pool member, or destination address under specific conditions.

Selecting a destination

Command: pool

In most cases, the destination you want to specify is simply a pool of servers serving the same content.

The command to choose a pool is simply "pool":

pool <poolname>


You can either specify a literal pool name:

pool HTTP_pool


or use a variable to specify one:

pool $myPool


You can also choose a specific pool member using the pool command:

pool HTTP_pool member 10.10.10.1 80
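

Combining these forms, the pool name can also be computed at request time and passed in via a variable. The naming convention below is an assumption for illustration only; a pool matching the computed name (e.g. "pool_www.example.com") would have to exist in your configuration:

when HTTP_REQUEST {
  # Hypothetical convention: one pool per Host header value
  set myPool "pool_[HTTP::host]"
  pool $myPool
}
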


Command: node

The "node" command is useful if you want to send traffic to a specific IP/port combination that is not defined as a pool member:

when HTTP_REQUEST {
  if { [HTTP::uri] starts_with "/admin" } {
    node 10.1.1.200 8080
  } else {
    pool HTTP_pool
  }
}


When is the "default pool" not the default pool?

When configuring a standard virtual server, you can specify a default pool and/or any number of iRules as resources for the virtual server. iRules applied to the virtual may or may not select a pool or pool member. If not, the default pool configured on the virtual server will be used for all traffic. If the iRule does select a pool for a connection or request, the selected pool (rather than the configured default pool) then becomes the default pool for the remainder of that connection unless another pool is specifically selected. That's important to remember for transaction-based protocols such as HTTP for which traffic is often split per request.

Consider the following example (simplified for demonstration purposes):

A single keepalive HTTP connection to the virtual server is established.
The first HTTP request on that connection is for an HTML page, then several subsequent requests are made for graphics and stylesheets, then another HTML page, then more graphics, etc.

HTML pages are hosted on one set of servers, graphics on another, and style sheets on a third set. The virtual server configuration includes html_pool as the default pool, and the following iRule to distribute traffic to each pool based on content:

when HTTP_REQUEST {
  switch -glob [HTTP::path] {
    *.css { pool css_pool }
    *.jpg { pool jpg_pool }
  }
}


Here's the breakdown of where traffic would be sent using this iRule:

index.html  --> html_pool
logo.jpg    --> jpg_pool
style.css   --> css_pool
page2.html  --> css_pool
logo2.jpg   --> jpg_pool
style2.css  --> css_pool


Notice anything odd?
The request for page2.html didn't go to the configured default pool "html_pool". Because no pool selection condition matched that request, it followed the last selection made on that connection to "css_pool" (which most likely resulted in a 404 "Not Found" error, meaning the stylesheets and images that page would have referenced were never requested).

A simple addition to the iRule is all that is required to enforce the desired "default" pool for all requests not matching *.css or *.jpg:

when HTTP_REQUEST {
  switch -glob [HTTP::path] {
    *.css { pool css_pool }
    *.jpg { pool jpg_pool }
    default { pool html_pool }
  }
}


If you're using "if / elseif / else" instead of "switch", you can accomplish the same thing by specifying the default pool in the final "else" clause:

when HTTP_REQUEST {
  if { [HTTP::path] ends_with ".css" } {
    pool css_pool
  } elseif { [HTTP::path] ends_with ".jpg" } {
    pool jpg_pool
  } else {
    pool html_pool
  }
}


Multiple pool selections

In BIG-IP v4.x, iRule processing ended when you chose a pool.
That's no longer true in LTM 9.x: you can choose a target destination at any of several decision points, and the last one selected will be the one to which the traffic is sent.
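
For example, a pool chosen at connection time in CLIENT_ACCEPTED can be overridden per request in HTTP_REQUEST; the last selection made before load balancing occurs wins. (The pool names here are assumptions for illustration.)

when CLIENT_ACCEPTED {
  # Initial selection when the TCP connection is established
  pool default_pool
}
when HTTP_REQUEST {
  # A later selection overrides the earlier one for this request
  if { [HTTP::uri] starts_with "/api" } {
    pool api_pool
  }
}
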

Checking destination status before sending traffic

There are a couple of iRules commands you can use to check the status of the destination before sending traffic. (Note: Application health monitors capable of accurately determining the health of the load balanced service must be applied to the pool members before these commands will return actual server status.)

Command: active_members

Before sending traffic to a pool, you might want to check if the pool has any members available. You can do that with the "active_members" command, and if the pool has no active members, take an alternative action as in this codeshare entry:

when CLIENT_ACCEPTED {
  if { [active_members PoolHTTPS] < 1 } {
    SSL::disable
    reject
  } else {
    pool PoolHTTPS
  }
}

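
Another common pattern is to fail over to a backup pool rather than rejecting the connection when the primary pool is empty. This is a sketch; "primary_pool" and "backup_pool" are hypothetical names:

when HTTP_REQUEST {
  if { [active_members primary_pool] < 1 } {
    # Primary pool has no available members; use the backup
    pool backup_pool
  } else {
    pool primary_pool
  }
}
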

Command: LB::status

Before sending traffic to a specific pool member, you can use the "LB::status" command to verify it is available for load balancing, and if not, take an alternative action:

when HTTP_REQUEST {
  set pserver [persist lookup uie [HTTP::cookie PersistCookie]] 
  if { [LB::status pool http_pool member $pserver 80] eq "up" } {
    pool http_pool member $pserver 80
  } else {
    log "Persist server $pserver:80 down! Redirecting"
    HTTP::redirect "http://server.domain.com/BrokenPersistence.html"
  }
}

(Note: LB::status was added in LTM 9.2.0, and backported to 9.1.2 HF4)

Catching destination failures after sending traffic

If your iRule chooses a pool or member without first checking its status, or if you use the node command to send traffic to an unmonitored destination, the selected destination may not be available to service the request. In that case, the LB_FAILED event is triggered, and you can include logic in your iRule to handle such failures.

Here's a modification of an example I used earlier which uses LB_FAILED to specify fallback logic when the admin server fails to respond:

when HTTP_REQUEST {
  if { [HTTP::uri] starts_with "/admin" } {
    set admin 1
    node 10.1.1.200 8080
  } else {
    set admin 0
    pool HTTP_pool
  }
}
when LB_FAILED {
  if { $admin > 0 } {
    switch $admin {
      1 {
        log local0. "Admin server 10.1.1.200:8080 not responding"
        node 10.1.1.201 8080
      }
      2 {
        log local0. "Admin server 10.1.1.201:8080 not responding"
        node 10.1.1.202 8080
      }
      3 {
        log local0. "Admin server 10.1.1.202:8080 not responding"
        reject
      }
    }
    incr admin
  }
}
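
If you simply want to retry against the members of a pool rather than walking through a list of specific addresses, the LB::reselect command can re-run load balancing after a failure. A minimal sketch, assuming HTTP_pool is the intended fallback:

when LB_FAILED {
  # Re-run load balancing against the remaining members of the pool
  LB::reselect pool HTTP_pool
}
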