We get a lot of posts about the best way to use LTM for name-based virtual hosting: conserving routable IP addresses by hosting multiple websites on the same virtual server and load balancing requests to the appropriate webserver based on the hostname requested.

Here's an article explaining our best-practice recommendations for the basic LTM configuration, plus the health monitors and iRule you can use to do the job right.  In it you'll learn exactly how to configure LTM to support three name-based virtual hosts running on the same virtual server.

Problem Definition: Simple example

Let's assume you have a BIG-IP LTM and 3 webservers, and you are hosting 3 websites: "iz.hotkittehs.com", "www.bukkitsgalor.org", and "icanhaz.devcentral.f5.com". You want each site to be as highly available as possible using the smallest possible number of IP addresses. You've decided to configure hostname-based virtual hosting on each of the 3 webservers, and you want to set up a similar configuration on LTM: a single IP address hosting 3 different hostnames, with each request directed to the appropriate server.

As of LTMv9, multiple instances of the same pool member can be independently monitored, so the best way to accomplish the goal is to create 3 separate pools, all with the same members, and monitor each with a single Host-header specific monitor. A separate pool and monitor for each site is the key to optimizing this configuration: You don't want to mark all 3 sites down on a server if only 1 is not responding. More on that in a minute.

To build the LTM configuration, you'll start from the bottom and build up to the virtual server, first defining the monitors for each site, then a pool for each site, then the iRule required to split the traffic, then finally the virtual server to which the 3 hostnames correspond.

Site/Application Specific Monitor Configuration

First you'll use the built-in HTTP monitor template to configure a separate monitor for each site. For each monitor, specify a different hostname in the Host header so each tests only the health of a specific site.

Each monitor should make an HTTP request that effectively tests that specific site's functionality, one that will only succeed if the site is fully functional. It can be a request for a static page if that's all the site serves. If the site hosts an application, though, the monitor should request a dynamic page on each webserver which forces a transaction with the application to verify its health and returns a specific phrase upon success. For application monitoring, the recommended best practice is to create such a script specific to your application, configure the monitor Send string to call that script, and set the Receive string to match that phrase.

The Receive string should be a specific string that would only be returned if the requested page is returned as expected. We don't recommend using a single dictionary word or a number, as some of those strings may be found in error responses and result in false positives (and requests being sent to a site that's gone belly up). For example, if you follow a common practice of specifying "200" to look for the "200 OK" server message, it will also match on the HTTP date header containing "2007" and mark the pool member up even on a server error. Using the string "200 OK" would be a better choice, but still only tests whether the HTTP service is responding.

For "www.bukkitsgalor.org", which hosts an ecommerce application, the Send string for the monitor will look something like this:

GET /path/to/test.script HTTP/1.1\r\nHost: www.bukkitsgalor.org\r\nConnection: close\r\n\r\n

and the Host header sent would be "www.bukkitsgalor.org".

The test.script at /path/to/test.script would transact with the application to retrieve some inventory data. If the transaction fails, indicating the server is not healthy, the script returns no data. If the transaction succeeds, indicating the server is healthy, the server returns a string with the requested data: "We haz this many BuKkiTs: 42"
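The script itself is entirely application-specific, but as a rough sketch of the pattern (assuming a Python CGI script, with a hypothetical get_bukkit_count() standing in for the real inventory transaction), it might look like:

```python
#!/usr/bin/env python
# Hypothetical sketch of the health-check script described above.
# get_bukkit_count() is a placeholder for a real transaction with
# your application (database query, inventory lookup, etc.).

def get_bukkit_count():
    # Placeholder: in production this would transact with the
    # ecommerce application to retrieve live inventory data.
    return 42

def health_check():
    # Return the success phrase only when the transaction completes.
    # On failure, return an empty body so the monitor's Receive
    # string never matches and the pool member is marked down.
    try:
        count = get_bukkit_count()
    except Exception:
        return ""
    return "We haz this many BuKkiTs: %d" % count

if __name__ == "__main__":
    # Minimal CGI-style response to the monitor's GET request.
    print("Content-Type: text/plain")
    print("")
    print(health_check())
```

The key design point is that the phrase is emitted only after a successful transaction, so a matching Receive string proves end-to-end application health, not just a responsive HTTP daemon.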

To mark the server up when a response containing the expected inventory data is received, configure the Receive string to match the expected response phrase:

We haz this many BuKkiTs: 

(For more information on configuring HTTP monitors, you can check the reference guide on AskF5 for your version, or AskF5 Solution 3224.)
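Putting the Send and Receive strings together, the resulting monitor definition might look roughly like this in bigip.conf syntax (the monitor name and timing values here are illustrative, and exact syntax varies by version):

```
monitor bukkitsgalor {
   defaults from http
   interval 10
   timeout 31
   send "GET /path/to/test.script HTTP/1.1\r\nHost: www.bukkitsgalor.org\r\nConnection: close\r\n\r\n"
   recv "We haz this many BuKkiTs: "
}
```

You'd create two more monitors along the same lines, one per site, each sending the appropriate Host header.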

Pool Pool Pool Configuration

You could just configure a single pool containing the 3 webservers configured for name based virtual hosting, load balance all requests to the 3 servers and let the webservers figure it out. But that's not the most highly-available approach you can take. With a single pool serving all the sites, you can monitor all 3 sites, but you'd have to mark the server down if any of the 3 site monitors failed: In other words, with a single pool, you will have to mark all 3 sites down on a server if only 1 site is not healthy. Since each site could be unavailable or unhealthy independent of the others for any number of reasons, the recommended best practice is to monitor each application separately.

We couldn't do that in BIG-IP v4.x, but as of LTM v9, the pool object became a container for pool members, making each copy of a pool member in a different pool a unique object whose availability can be separately maintained. That means we can now create virtual copies of the same server by adding it to multiple pools, then monitor each copy using different criteria, and set that copy's availability independent of the status of the other copies.

So configure 3 pools, each containing the same pool members, and apply a different site-specific custom monitor to each pool.
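For example, the bukkitsgalor pool might look something like this in bigip.conf syntax (the member addresses are hypothetical placeholders for your 3 webservers):

```
pool bukkitsgalor {
   monitor all bukkitsgalor
   member 10.1.1.1:http
   member 10.1.1.2:http
   member 10.1.1.3:http
}
```

The hotkittehs and icanhaz pools would contain the same 3 members, each with its own site-specific monitor applied.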

One rule to rule them all...

Now that you have separate pools of servers available for each application, a very simple rule is all that's required to distribute traffic to the right pool:

rule eenie_meenie_minee_Host {
  when HTTP_REQUEST {
    switch [string tolower [HTTP::host]] {
      iz.hotkittehs.com { pool hotkittehs }
      www.bukkitsgalor.org { pool bukkitsgalor }
      icanhaz.devcentral.f5.com { pool icanhaz }
      default { reject }
    }
  }
}

(Note: You could also use HTTP classes instead of an iRule.)

...and one Virtual Server to bind them

Create a single standard virtual server and apply an HTTP profile (and whatever other profiles make sense for your deployment: a ClientSSL profile if hosting HTTPS and a OneConnect profile for connection pooling are two of the more commonly used profiles for web hosting).

Apply the iRule created above as a resource for the virtual server, set persistence if desired, and a default pool if you want.  (The default pool will never be used, but you can set one if you don't want to see the virtual server status reflected as "Unknown".)
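Assembled, the virtual server definition might look something like this in bigip.conf syntax (the name and destination address are hypothetical, and exact profile syntax varies by version):

```
virtual vs_namebased_hosting {
   destination 10.10.10.10:http
   ip protocol tcp
   profiles http tcp
   rules eenie_meenie_minee_Host
}
```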

Here's what the entire configuration would look like once you have it all built out:

virtual server (standard, HTTP profile applied)
  resource = rule eenie_meenie_minee_Host

rule eenie_meenie_minee_Host
  selects pool based on Host header; rejects unknown hosts

pool hotkittehs
  monitor hotkittehs (sends "iz.hotkittehs.com" Host header)

pool bukkitsgalor
  monitor bukkitsgalor (sends "www.bukkitsgalor.org" Host header)

pool icanhaz
  monitor icanhaz (sends "icanhaz.devcentral.f5.com" Host header)