When being chased by a dragon, you don't need to be faster than the dragon. You just need to be faster than the halfling behind you.

I had a lot of discussions at RSA this past week, and of course some of them centered on performance. One of the challenges often associated with pure proxy-based application anything is dealing with the argument that proxies degrade performance, especially in something as intense as an application firewall. That's because of the computational cost of buffering input, reassembling packets, and parsing data, on top of the requirement to manage TCP connections on both the client and server sides of the proxy.
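For a sense of where those cycles go, consider that a full proxy can hold client-side data for inspection before releasing it toward the server. In iRule terms it might look something like this (purely illustrative, not a production rule):

    when CLIENT_ACCEPTED {
        # Hold the client-side TCP payload instead of forwarding it on
        TCP::collect
    }
    when CLIENT_DATA {
        # The collected bytes are now available for inspection; this is
        # the buffering and parsing work the proxy pays for up front.
        log local0. "buffered [TCP::payload length] bytes from [IP::client_addr]"
        # Release the payload toward the server-side connection
        TCP::release
    }

Each of those steps (buffering, inspecting, releasing) costs cycles that a packet-level device never spends.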

That's a lot of work. And in some cases it does impede performance, by necessity. The real questions are how much it impedes performance, whether the benefits of a pure proxy model outweigh the cost in computational cycles, and whether the device is faster than the application for which it is proxying. You must be faster than what you're in front of by a significant margin, but that also means the definition of "fast" is variable, based more on what you're proxying than on raw speeds and feeds.

The benefits of a pure proxy model are something we've known for a long time. Mediation, transformation, transport- and application-level protocol sanitization and security, full content-based routing: these capabilities are enabled by a full-proxy architecture and cannot be fully realized with a half-proxy or packet-focused solution. In order to protect an application, or provide optimization or content-based routing, you first need to understand the application. It all comes back to application fluency and the need to understand what you're trying to manipulate in the first place. Without that fluency, a device is about as useful as an elevator with a bunch of blank buttons. You eventually get where you're going, but a lot of time is wasted playing guessing games.
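To make that concrete, here's a minimal sketch of what content-based routing can look like in an iRule; the pool names (api_pool, web_pool) and the URI prefix are hypothetical stand-ins for whatever your architecture actually defines:

    when HTTP_REQUEST {
        # Because the full proxy terminates the client connection, it can
        # inspect the complete HTTP request before choosing a pool.
        if { [HTTP::uri] starts_with "/api" } {
            # Application-aware decision: route API calls to their own pool
            pool api_pool
        } else {
            # Everything else goes to the general web pool
            pool web_pool
        }
    }

Nothing fancy, but it illustrates the point: the routing decision requires the device to understand HTTP, not just TCP ports and IP addresses.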

Does it slow down the device? Yes. Of course it does. And the more you want to do with it (feature concurrency), the more performance will suffer. As always, YMMV. What is generally guaranteed is that the device is going to be an order of magnitude faster than what it sits in front of, and is fully capable of handling hundreds of thousands of simultaneous requests without issue. As long as the core networking and traffic management features run at wire speed, the variability in performance is reduced to the application for which you are proxying and what you want to do with it.

It's easy for me to discount performance, as BIG-IP is still the fastest application delivery controller (ADC) there is, but even if it weren't, I could ignore the minimal impact on performance based solely on the fact that BIG-IP would still be way faster than the platforms serving up the applications it delivers. The benefits outweigh any degradation that might be incurred, and what's funny is that in most cases that slight performance hit is offset by an increase in overall performance due to the optimization and acceleration capabilities of the application delivery controller. Yes, slower == faster. How's that for a paradox?

As we move into an era where XML comprises a significant portion of the traffic being delivered, we're all going to see a change in the raw number of transactions per second (TPS) handled by intermediaries and proxies that need to process XML. And that change is not going to be in a positive direction for anyone in the immediate future. That's why consolidation, centralization of services, and smarter network devices are necessary as you move forward with (re)architecting your application infrastructure to support XML and SOA delivery networks. Consider moving common, shared services to a network platform capable of providing the flexibility required to secure and deliver applications reliably. Use an extensible language like iRules to move common processing off of servers and into the network (a small example follows below), and explore the possibility of making your network infrastructure a first-class citizen in your application architecture rather than just a bump in the wire. Doing so will result in a better-performing architecture overall, meaning that any performance gains achieved by tweaking one service or one platform won't be nullified by another component of your infrastructure somewhere downstream.
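As an example of moving shared processing into the network, take something as routine as forcing all traffic onto HTTPS. This is a hedged sketch, not the only way to do it, but an iRule like the following handles the redirect at the proxy so that none of the servers behind it ever burns cycles on it:

    when HTTP_REQUEST {
        # Answer directly from the proxy with a permanent redirect;
        # the request never reaches a server, freeing those cycles
        # for actual application work.
        HTTP::respond 301 Location "https://[HTTP::host][HTTP::uri]"
    }

Multiply that by every redundant task currently duplicated across every server (redirects, header normalization, compression) and the consolidation argument starts to look less like philosophy and more like arithmetic.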

A well-architected, well-managed infrastructure can enhance the overall performance of all your applications, whether it's faster than the dragon, or just the halfling.

Imbibing: Mountain Dew