Back in the day when I was actually allowed to write code for customers, the pat answer to any code being returned from QA because of problems was a flat “but it works on my machine.” Alright, alright, I’ll be honest; it wasn’t flat at all, it was usually a plaintive whine. This isn’t an uncommon scenario, as differences in environments and interactions with other applications may be enough to cause problems on one machine and not another. Troubleshooting such subtle issues was painful, to say the least, and not something anyone wanted to do.

Now comes the time of the network architect. Everything is networked and it’s easy enough to dismiss performance problems for one or more customers simply because, well, it works fine on your network.

But your network is not my network and application performance is about end-to-end performance, not just network perimeter to end performance. Accounting for the differences in networks can positively impact application performance but it isn’t something that’s easily accomplished without the right tools. 

It’s easy to dismiss external network problems in the face of poor application performance because (a) you lack visibility into external networks and (b) you have no control over them even if you do find a problem. There are very few good solutions to (a) and no good solutions to (b), unless you have a lot of friends and a lot of favors to pull in across the various network providers over whose networks your application needs to communicate.

A harrowing array of variables can be the culprit for poor application performance, some of which you can’t affect and some of which you can. The first step, however, is to figure out which variables are affecting the application and it’s nearly impossible to determine what those are when you aren’t physically able to see the packets and behavior from the end-user’s perspective.

Worse is that even if you figure out that there’s something peculiar about one application or one particular user connection type, what are you going to do about it? Changing the configuration of the server to address a challenge with one application may impact all applications deployed on that server (and is likely to on most systems, where the operating system drives the TCP/IP stack). Changing the configuration to address issues specific to one user connection type (e.g. mobile, remote, local network, VPN) is also likely to impact all users and all applications. Similar issues exist at layer 7, where the nuances in HTTP implementations across applications may need to be tweaked for one, but not another.

The obvious solution is different servers with different configurations that match all the possible combinations of users and applications. In a traditional environment this is never going to happen. The cost of hardware, software, network, and general management of so many pieces of hardware and applications would drive the cost of business through the roof. Even in a virtualized environment the complexity involved in such an undertaking is overwhelming and, to be honest, impractical.

What is practical and manageable is to tweak configurations based on protocol and application in a centralized way. This affords architects and administrators the ability to configure protocol behaviors unique to both client- and server-side networks, thus addressing the differences in behavior between the two environments.


Because a solution with a full-proxy architecture is the endpoint, for all intents and purposes, it has information about the user and the network that user is on. While this won’t provide a complete view of performance from the end-user’s perspective, it can provide insight into the variables over which the intermediary has some control, i.e., network variables the intermediary can adjust to improve performance. The solution also has information about the server-side network variables. That means it can adjust variables on both sides of the equation simultaneously, without one side impacting the other.

But that’s just the network. What about the application? After all, it’s application-specific tweaks, necessary for one application but not another, that we want to affect. For example, RPC-based protocols often run into performance problems due to a conflict between Nagle’s algorithm and delayed acknowledgements. When the two collide – which is often in applications using RPC-based protocols, like Microsoft Outlook – they can cause a deadlock that imposes delays of at least 200ms and as high as 500ms. Depending on the number of calls being made, that delay can add up to many seconds – unacceptable by just about every user’s definition.
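The arithmetic here is simple but worth making explicit. A quick back-of-the-envelope sketch in Python (the 200ms stall is the common minimum delayed-ACK timer; the call count is purely an illustrative assumption):

```python
# Rough estimate of the aggregate latency added by Nagle/delayed-ACK stalls.
# 200 ms is the minimum delayed-ACK timer on most TCP stacks; the number of
# RPC round trips below is a hypothetical figure for illustration only.
stall_ms = 200        # per-call stall from the delayed-ACK timer
rpc_calls = 50        # hypothetical number of RPC calls in one user action
added_delay_s = rpc_calls * stall_ms / 1000
print(f"{added_delay_s:.0f} extra seconds")  # prints "10 extra seconds"
```

Fifty calls is not many for a chatty RPC protocol, and ten extra seconds is well past what any user will tolerate.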

The typical solution to this problem is to disable Nagle’s algorithm on the server, but that degrades performance for any other application that might be running on the same server. Catch-22. If the mediator is smart enough, however, it can disable Nagle’s algorithm on a per application basis. This means that only the RPC-based application is affected while all other applications – and users – continue to enjoy acceptable performance levels.
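In socket terms, “disabling Nagle’s algorithm per application” means setting the TCP_NODELAY option on the sockets that one application uses, rather than flipping a host-wide switch. A minimal Python sketch of the idea (the function name is mine, not anyone’s API):

```python
import socket

def make_rpc_socket():
    """Create a TCP socket with Nagle's algorithm disabled.

    TCP_NODELAY is scoped to this single socket, so other applications
    on the same host keep the default Nagle behavior.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock

s = make_rpc_socket()
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)  # True
s.close()
```

An intelligent intermediary does effectively the same thing on the connections it proxies for the RPC-based application, leaving every other connection untouched.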

This type of granular, protocol- and application-specific policy-based optimization is not universally implemented by providers of intermediaries like load balancers and application delivery controllers. It requires a full proxy, for one, and it further requires the ability to intelligently apply policies based on a variety of environmental conditions, such as application and connection type. But for those solutions that can provide this level of granularity, the benefits are huge in terms of performance, security, and really just about any application delivery-related functionality.

The separation of client-side policies from server-side policies allows features and functionality like compression and specific TCP-related optimizations to be “on” or “off”, as is best suited to the particular environment, on a per application basis.
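One way to picture this separation is as a per-application policy table that the intermediary consults for each connection. The sketch below is purely illustrative; the application names and policy fields are hypothetical, not any vendor’s configuration schema:

```python
# Hypothetical per-application policy table an intermediary might consult.
# Client-side and server-side settings are kept separate, so each side of
# the proxied connection can be tuned independently.
POLICIES = {
    "outlook-rpc": {"client_compression": False, "server_nagle": False},
    "web-portal":  {"client_compression": True,  "server_nagle": True},
}

DEFAULT_POLICY = {"client_compression": False, "server_nagle": True}

def policy_for(app):
    """Return the policy for an application, falling back to the default."""
    return POLICIES.get(app, DEFAULT_POLICY)

print(policy_for("outlook-rpc")["server_nagle"])  # prints "False"
```

The point isn’t the data structure; it’s that the decision is made per application and per side of the connection, instead of once for the whole server.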

