Using application fluency and layer 7 routing to implement an efficient, scalable, and cost-effective application architecture

There is a subtle difference between the words balance and distribute. Balancing implies a simple decision process. If I have three boxes and three people, I give one box to each person in order - regardless of the weight of those boxes and the ability of the people to carry them. Distribution, on the other hand, implies some form of intelligence behind the decision process. I give the boxes to the people most capable of carrying their weight, so that no person gets overloaded and all the boxes arrive at their destination as quickly as possible.

Just as many people use the terms balance and distribute interchangeably, many also interchange the terms load balancing and application delivery. Load balancing is certainly at the heart of any application delivery solution, has some amount of intelligence behind its algorithms, and can improve the performance, security, and availability of your web applications. But there's even more intelligence behind application delivery, and it gives you additional options for scaling your application infrastructure and reducing costs.

Layer 4 vs Layer 7

Traditional layer 4 load balancing offers horizontal scalability and ensures availability of applications by balancing requests across a number of servers. While there are several industry-standard algorithms such as round-robin and least connections - and their weighted variations - that can be used to determine how those requests are distributed, in the end you still need a pool of identical servers from which the application delivery controller can choose to retrieve content. If you're relying on third-party web application servers, this can become costly as your capacity needs grow. You'll need to purchase additional licenses for each and every server you deploy, not to mention the potential additional costs of SSL certificates and application-specific costs such as JDBC/ODBC drivers or integration software.
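
To make the distinction concrete, here's a minimal Python sketch of two of those layer 4 algorithms over an in-memory pool. The server names and connection counts are purely illustrative and not tied to any particular load balancer.

    from itertools import cycle

    # Hypothetical pool of identical application servers (names are illustrative).
    servers = ["app-01", "app-02", "app-03"]

    # Current connection counts per server; a real controller tracks this state
    # continuously, here it's just a static example.
    active_connections = {"app-01": 12, "app-02": 4, "app-03": 9}

    # Round-robin: hand requests out in a fixed rotation, regardless of load.
    _rotation = cycle(servers)

    def pick_round_robin() -> str:
        return next(_rotation)

    # Least connections: pick whichever server currently has the fewest
    # active connections.
    def pick_least_connections() -> str:
        return min(servers, key=lambda s: active_connections[s])

    if __name__ == "__main__":
        print([pick_round_robin() for _ in range(4)])  # app-01, app-02, app-03, app-01
        print(pick_least_connections())                # app-02

Note that neither algorithm looks at what the request is asking for - only at which server should take the next one - which is exactly the limitation layer 7 routing removes.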

Using layer 7 - application-fluent - features, you can architect a solution that not only ensures availability and scales well, but also reduces the costs associated with web application servers, such as additional licensing fees.

Layer 7 routing, a.k.a. content switching, a.k.a. content-based routing, allows you to examine application-specific content and headers to determine which server or pool of servers should process a given request. You can examine every aspect of a request - URI, headers, application content - and make decisions to intelligently distribute the request. You can easily distribute all .JPG or .GIF files to one server, all .HTML files to another server, and all .ASP or .JSP requests to yet another. You can look at application-specific content - virtually a requirement for content-based routing of web service (SOA) and XML requests - and direct requests as required by your application delivery needs.
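
As a rough illustration of the idea - not any particular product's configuration language - here's a Python sketch of such a content-switching decision. The pool names, file extensions, and the Content-Type check for XML/SOA traffic are all hypothetical examples of rules you might define.

    from urllib.parse import urlparse

    # Hypothetical server pools, partitioned by the kind of content they serve.
    POOLS = {
        "images":   ["img-01", "img-02"],   # inexpensive servers for .JPG/.GIF files
        "static":   ["web-01"],             # plain .HTML pages
        "services": ["soa-01"],             # XML / web service (SOA) requests
        "dynamic":  ["app-01", "app-02"],   # licensed web application servers (.ASP/.JSP)
    }

    def choose_pool(request_uri: str, headers: dict) -> str:
        """Pick a pool by inspecting the URI and headers of a request."""
        path = urlparse(request_uri).path.lower()
        if path.endswith((".jpg", ".gif")):
            return "images"
        if path.endswith((".html", ".htm")):
            return "static"
        # Header (or body) inspection handles cases the URI alone can't decide,
        # such as SOAP/XML requests posted to a generic endpoint.
        if headers.get("Content-Type", "").startswith("text/xml"):
            return "services"
        return "dynamic"

    if __name__ == "__main__":
        print(choose_pool("/catalog/photo.jpg", {}))                    # images
        print(choose_pool("/about.html", {}))                           # static
        print(choose_pool("/ws/orders", {"Content-Type": "text/xml"}))  # services
        print(choose_pool("/store/checkout.jsp", {}))                   # dynamic

The point of the sketch is simply that the routing decision happens before a server is chosen, so each pool can be sized, priced, and configured for the content it actually serves.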

Certainly the web application server must process requests and must remain at the core of your application architecture. But you can use simple application-aware routing to move images and static content to other, less expensive servers and thus reduce the overall cost of maintaining your application infrastructure. This has the added benefit of increasing the capacity of your web application servers, since they will no longer be expending valuable (and expensive) resources on serving up images and static content. Moving static content and images to less expensive servers not only reduces costs, it also lets you configure those servers specifically to serve up these content types without affecting how the core application content is served or requiring that you maintain complex application server configurations to handle these different content types.

And while it's undoubtedly more efficient to design an application from the start with this capability in mind, the beauty of application delivery controllers is that you can add this functionality at any time without needing to rewrite the applications involved. Because the application delivery controller is an intermediary, it can examine every request and decide where to route it. Thus you can virtually rearchitect your application infrastructure in real time without incurring the costs of rewriting or redeploying existing applications.

Side Effects

By architecting your delivery infrastructure to take advantage of content-based routing, you gain additional benefits offered by application delivery controllers. Caching and compression can be applied to offload servers, increasing capacity while improving the end-user experience through better application performance. Security functions such as DoS and SYN flood protection, as well as application firewalling functionality, can be moved to the application delivery controller in order to protect your entire application infrastructure from a single, manageable entry point.

Additional features found in application delivery controllers that optimize and accelerate the delivery of content can also be applied to all servers without the need to tweak and modify each and every server configuration, saving time and money that can be refocused on other tasks.

While load balancing is certainly at the heart of any application delivery network, taking advantage of application fluency can dramatically improve the scalability, security, and performance of your application infrastructure beyond what traditional layer 4 load balancing can offer.

Imbibing: Coffee