Quick Start: Application Delivery Fundamentals

On DevCentral we often focus on the out-of-the-box solutions. iRules, iControl, iApps and more are fantastic and exciting technologies, but there's a lot that goes into making an F5 device work before you even get to play with those more advanced features. Things like configuring a pool or a virtual server often get taken for granted on DevCentral, and that is something we'd like to change. The reality is that new users are coming to the world of F5 all the time, and not everyone is an expert. Not only is the user base expanding, so is the feature base with every new release. As we add more features, and more users who are still cutting their teeth, it becomes significantly more important to continue educating not only the advanced users, but also the newly initiated.

As a means of digging into the basic, entry-level concepts and knowledge required to successfully navigate the waters of a freshly licensed F5 device, let's take a look at a plausible, and frankly quite common, scenario:

Sam is a network admin. Sam works with many products, from many vendors, and is constantly expanding his skill set. As such, he never quite knows what he will be doing from day to day. As it turns out, today his manager has tasked him with migrating from a technology that is no longer supported, and that their company is therefore moving away from, to a newer, currently supported platform. That platform happens to be F5, with which Sam's experience can be tidily summed up as "Open box. Plug in." This means Sam has some learning to do, and the first question is where to start.

First things first, he needs to gather the appropriate information for the application. Things such as the site IP, site name, server IP space, and the required VLANs all need to be in hand before rolling up his sleeves and getting started.

Once he has this information, it's time to log into the F5 device, with his freshly changed admin password, and start the configuration process. Ideally he has followed the configuration wizard and has the management configuration completed, which allows him to move on to configuring the production components. Upon logging into the device he knows his first task is to get things talking, which means, in this day and age of VLAN-separated links, he'll be starting with the VLANs. An IP address can live nearly anywhere, but a server can't communicate with anyone until the path is laid out, and that makes the VLANs the most logical place to start.

To create a VLAN he would navigate in the GUI to Network -> VLANs, and here he'll be able to create whatever he needs. All he needs to configure is the name, a description of what this VLAN is used for, a VLAN tag (which is technically optional, but will be required in a VLAN-separated network) and the F5 interface on which the traffic is coming in (i.e., where's the cable plugged in?). In Sam's case, as you can probably tell from the information above, he will need three separate VLANs to support the different networks required for his deployment. Once he has this simple task completed, he'll be able to move on from VLANs to the next logical step in the configuration, which is to define a Self IP address on the F5 device.
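
For those who prefer the command line, the same objects can also be built from tmsh, the TMOS shell. Below is a minimal sketch of the three VLANs; the names, tags, and interface numbers are placeholders, since the article does not list the actual values, so substitute your own.

# tmsh -- placeholder VLAN names, tags and interfaces
create net vlan external interfaces add { 1.1 { tagged } } tag 10
create net vlan internal interfaces add { 1.2 { tagged } } tag 20
create net vlan services interfaces add { 1.3 { tagged } } tag 30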

A Self IP is the IP address the F5 device itself uses on each network it is connected to. These addresses allow it to communicate with each of the defined networks once they are configured and applied to the appropriate VLAN. Since we have three networks to communicate with, and three VLANs to represent those three networks, we'll use three Self IP addresses, one for each VLAN the F5 device is able to route to. To create a Self IP Sam would just navigate to Network -> Self IPs and again select "create".

In this screen he'll assign a Name, an IP address and a Netmask, then select the appropriate VLAN for each IP address. Once this is finished his box is functionally routable across all three networks. To double-check that all is working as intended he can easily SSH into the box and send a few pings out across the different networks. Assuming all is well, it's time to start configuring his application objects.
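
In tmsh, the Self IPs look something like the following. The addresses here are placeholders (the article only pins down the virtual server and server addresses used later), and the port-lockdown setting is an assumption:

# tmsh -- placeholder addresses, one Self IP per VLAN
create net self self_external address 10.10.2.5/24 vlan external allow-service default
create net self self_internal address 10.10.3.5/24 vlan internal allow-service default
create net self self_services address 10.10.4.5/24 vlan services allow-service default

# a quick reachability test from the bash shell, against a placeholder gateway
ping -c 3 10.10.2.1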

At this point Sam wants to begin defining his servers. Within the F5 there are a couple of ways to do this. He could begin creating nodes and directly defining server addresses to be used later in the deployment. However, in this case, it is more efficient for him to begin at the pool level, because by going through the pool creation process he will be defining the nodes as he creates his pool members. Nodes are directly configured server objects; a node is simply an IP address that identifies a server. A pool member, however, is an IP:port combination that defines a destination and is tied to a particular pool. The pool is a collection of pool members that collectively serve the application. The pool level is where things such as the load balancing method, pool monitors and more are configured. This is the object that will effectively be the internal destination of the inbound traffic for this application.

To create a pool, and subsequent pool members, Sam would navigate to Local Traffic -> Pools and again select create. At this screen he will assign a pool name, which is required, and will have the option of further configuring the pool with such items as a description, health monitors, load balancing method and more. In Sam's case he defines his first pool as pool1. Inside pool1, under Resources, he begins to add his server objects and the service port. This is the port on which the servers will be listening. Once he's added the appropriate new members it's time to make a decision on which load balancing method he'll be using for this pool.
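
A rough tmsh equivalent is shown below. The first member's address and port (10.10.2.46:80) come from the capture example later in the article; the second member is purely an assumption to give the pool something to balance across. Creating the pool members also creates the underlying node objects automatically.

# tmsh -- second member address is hypothetical
create ltm pool pool1 members add { 10.10.2.46:80 10.10.2.47:80 }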

Load balancing is simply the concept of distributing traffic amongst multiple pool members (servers) via a pre-defined algorithm. When it comes to load balancing on an F5 device, there are several methods available, ranging from simple and classic, such as Round Robin, to far more advanced, like priority group activation and weighted least connections; there is no shortage of ways to slice and dice traffic. In Sam's case he's just interested in basic traffic distribution, which leads him to choose Round Robin, which will evenly balance traffic amongst pool members. The only concern in this scenario is the possibility of one of those pool members being unavailable while being sent traffic. To guard against this it's time for Sam to configure a health monitor.
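
In tmsh the choice is a single property on the pool. Round Robin also happens to be the default load balancing mode, so this line mostly makes the decision explicit:

# tmsh
modify ltm pool pool1 load-balancing-mode round-robin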

A health monitor is a scheduled check on resource availability. This could be a simple ICMP request, or a more advanced monitor that makes a specific request and evaluates resource health based on the response. Basic HTTP monitors are already defined on the F5 device. To attach one you would simply go back to the pool that was just created and, in the "Health Monitors" section, move the desired monitor, in this case http, to Active. Sam has a different requirement. He wants to perform a query against the web-based application in his pool and ensure that the appropriate response data is received. This is easily accomplished on the F5 device, as you can see below:

1. In the Local Traffic -> Monitors section hit create.

2. Name it, and select HTTP from the Type drop-down.

3. Create a Send String. This is the query the monitor is going to send to the server. In this case, Sam sends a
GET /server-status.html

4. Create a Receive String. This is what the monitor needs to find in the response for it to consider the resource available. Sam uses "Server ok" here.

5. Hit create, and the new monitor can now be attached to the pool (a tmsh equivalent follows these steps).
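
The same monitor can be sketched in tmsh and attached to pool1. The monitor name is a placeholder, and the send and receive strings are taken straight from the steps above (some servers will want a fuller HTTP/1.1 request line with a Host header, which is left out here for simplicity):

# tmsh -- monitor name is a placeholder
create ltm monitor http app_status send "GET /server-status.html\r\n" recv "Server ok"
modify ltm pool pool1 monitor app_status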

So at this point the device has a defined traffic destination on the server side, but clients are still unable to connect. This is because, while the pool defines the server-side destination for traffic, there still needs to be a client-side destination so the F5 device knows to listen for client connections on the desired IP:port. To accomplish this Sam needs to create a Virtual Server. A Virtual Server (or VIP) is a client-side IP:port combination that allows a client to connect to one of the resources behind the device. Without a Virtual Server, no client connection can be established. To configure a Virtual Server Sam would navigate to Local Traffic -> Virtual Servers and once again select create. On the Virtual Server creation screen there are many options to customize your application deployment to fit your particular needs. The core information required to configure a Virtual is a name, a destination (IP address), and a service port (the port on which the client will connect). This gets you a basic TCP Virtual Server with no bells and whistles.
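
A minimal tmsh sketch of that basic Virtual Server follows. The destination address 10.10.2.30 comes from the capture example below; the virtual's name and the choice of port 80 are assumptions, based on this being a plain HTTP application.

# tmsh -- name and port are assumptions
create ltm virtual vs_app destination 10.10.2.30:80 ip-protocol tcp profiles add { tcp } pool pool1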

In Sam's case, however, he is dealing with HTTP traffic, which means he'll want to go into the Virtual's configuration and select the profile named "http" under the "HTTP Profile" section. This will enable HTTP parsing and optimization at the Virtual layer. A profile within the F5 device is a way to create an abstraction layer between configuration objects and configuration options. This allows a user to create a customized set of options, for instance an HTTP profile that handles traffic in the specific way they desire, and to re-use those options easily across multiple objects, e.g. Virtual Servers. After assigning this profile Sam is ready to begin testing traffic being passed to his application.
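
Adding the stock http profile to the Virtual Server sketched above is a one-liner in tmsh:

# tmsh
modify ltm virtual vs_app profiles add { http }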

Unfortunately, there's a problem. While the connections are being established correctly and data appears to be passing through the F5 device to the servers, the responses from the servers, which a simple tcpdump will show are leaving the servers, never seem to arrive back at the client. Some further analysis shows that the requests received by the server still have the client's IP as the source. This is problematic, because the server then attempts to respond directly to the client using its own IP as the source. Even if that response gets routed directly to the client, the client will reject it, as it is expecting the response to originate from the address it used as the destination of the request. This issue can be seen below:

Client: 10.10.2.10
F5 VS: 10.10.2.30 (no SNAT)
Server: 10.10.2.46:80

The image shows 3 concurrent captures, taken on the client, F5, and the server.

You can see under the yellow circles where the connection goes wrong. The connection comes in to the F5 from the client. The F5 completes the handshake and begins the handshake with the server. The server, seeing the source IP as 10.10.2.10, sends its SYN/ACK directly back to the client.

The client gets it and sends a RST, because from its point of reference it has already completed the handshake it wanted to. The new SYN/ACK doesn't match anything in its current network stack.
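
To reproduce this kind of three-point capture, a standard tcpdump on the client and server will do, and the BIG-IP ships with its own tcpdump where interface 0.0 captures across all VLANs. The interface name on the client and server is an assumption:

# on the BIG-IP (bash)
tcpdump -nni 0.0 host 10.10.2.10
# on the client and server (assuming eth0)
tcpdump -nni eth0 host 10.10.2.10 or host 10.10.2.30 or host 10.10.2.46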

To solve this issue all Sam needs to do is ensure that all traffic returning to the client originates from the same IP address that the client used for the request. The easiest way to do this is to enable the SNAT feature within the Virtual Server. This will re-write the source address of both the request from the client to the server and the response from the server to the client, which ensures that the traffic for both the request and the response traverses the F5 device. In the case of automap SNAT it will automatically use the Self IP on the appropriate VLAN for traffic bound to the pool members, and the Virtual Server's external IP for traffic bound to the client. For clarification see the image below:

On the top portion of the drawing we can see the asymmetric network path. The client sends a request through the F5 virtual, but without SNAT the server attempts to respond straight back to the client. The client at that point is not listening for a response from the server's source IP, so it simply drops it.

The bottom drawing shows what automap SNAT does. The source IPs are adjusted to ensure that all traffic going to and from the server traverses the F5. Bam, problem solved.
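
The corresponding change on the tmsh-sketched Virtual Server is a single property (v11-era syntax):

# tmsh
modify ltm virtual vs_app source-address-translation { type automap }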

After this small configuration change and some further testing Sam will find that traffic is now flowing as expected and the application is serving content. With this, Sam's task of transitioning an application deployment to the F5 device is completed.

The above scenario is common, but obviously not overly complex. There are many further options, toys and tricks available to you when configuring your device. This, however, should get you from a new box to passing traffic without much hassle. For further options and more advanced scenarios dig into the Advanced Design & Config section on DevCentral, as well as the rest of what the community has to offer.

This article was a collaboration with DevCentral's Josh Michaels and Colin Walker. ENJOY!

Published Dec 07, 2012
Version 1.0


3 Comments

  • Now that you are all in the know on the fundamentals of application delivery, the cool thing is you can take the information from the first whiteboard pic above and answer a few questions and have the BIG-IP create it all for you. We call these iApps, which is a tab in the GUI just under Statistics on the left menu. To create an iApp similar to the example in the article, just navigate to iApps->Application Services, click create, and select the f5.http template. Bonus! The double bonus here is that the application service, once created, "owns" the objects for your application, so an operator can't come along and accidentally delete members from your pool or wipe out your virtual server. Powerful! For more details: https://devcentral.f5.com/wiki/iApp.HomePage.ashx
  • Hi Josh,

    I have 3 questions :)

    1. Please can you provide me any comparison concerning the difference in load balancing between an F5 Link Controller and an F5 LTM?

    2. Does F5 have any video to watch in order to learn more about configuration examples for the F5 Link Controller?

    3. Is there any lab document or configuration examples that help us in learning the F5 Link Controller in an easy way, other than the configuration guide?

    Thank you a lot.