The importance of a full-proxy architecture to application delivery, security, cloud computing, and virtualization


People often describe the act of changing focus from one related but distinct task to another as “wearing two different hats”: moving from “developer” to “administrator,” for example, when you’re trying to deploy an application in a testing environment. You’re the developer, but then you have to “switch gears” and become a server administrator to ensure that the application server and its environment are configured properly before you can actually test the application you just wrote.

But the metaphor of “switching hats” is not entirely accurate in the world of application delivery, because it implies that you can wear only one at a time; your focus is either on the application or on its delivery. After all, no one wears two different hats at the same time. Yet the skills necessary to implement a successful application delivery strategy require that one bring both network and application expertise to bear at the same time. You can’t really switch hats, because you have to understand how the two affect one another in order to tweak, optimize, and configure the systems involved in an application delivery network.

But wearing two different socks, now that’s a different story.


No one has ever really asked why I called this blog “Two Different Socks.” Maybe that’s because they’re polite and they know that I’m just an oddball, but really, there is a very good reason for it. But then you knew there would be, didn’t you?

First, I do actually wear two different socks. My socks are usually hidden beneath shoes and the ends of my jeans anyway, so I’m not too fastidious about making sure they match. They’re all essentially the same anyway, they just have different colored toes and heels so it’s not completely crazy and no one’s ever (to my knowledge) noticed. Go ahead: the next time you see me in person ask. You won’t be disappointed.

But the wearing of two different socks is more than just a physical quirk; it’s a representation of constantly being in two worlds at the same time: applications and networking. It’s standing on both sides of the fence at the same time and keeping in mind that the two of them work together in myriad ways to actually serve up an application to users that is fast, secure, and available. It’s understanding the intricate footsteps of the application delivery dance that occurs each and every time a user accesses a web application regardless of where they might be or what device they may be using.

But like a really good metaphor, this one goes deeper than just saying something about me; it also says something about the technology – application delivery – on which I like to write so often.


“Two different socks” is probably the most accurate (and simplest) description of a full-proxy based application delivery platform, at least if you’re a developer and have an understanding of network-oriented programming. If you’ve ever written even a simple TCP-based application in, well, just about any environment and have had to reference examples, you’ll recall that sample code often uses the variable “sock” to represent a reference to an accepted TCP connection over a socket.
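For anyone who hasn’t written one lately, here’s what that convention looks like in practice: a minimal, self-contained Python echo server and client over the loopback interface, with the accepted connection stored – as tradition demands – in a variable named “sock.” The helper name and ephemeral-port setup are illustrative, not taken from any particular codebase.

```python
import socket
import threading

def run_echo_server(listener):
    # Accept one connection; "sock" is the conventional name for the
    # accepted TCP connection, just as in most textbook examples.
    sock, addr = listener.accept()
    data = sock.recv(1024)
    sock.sendall(data)          # echo the bytes straight back
    sock.close()

# Bind to an ephemeral loopback port so the example is self-contained.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

server = threading.Thread(target=run_echo_server, args=(listener,))
server.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.join()
listener.close()
print(reply)  # b'hello'
```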

I know. Developers can be extremely unimaginative at times, can’t we?

In any case, the concept of a proxy is pretty well understood: it sits between a client and a server and mediates or brokers a connection between the two. A full proxy, however, is a little more complex in that it actually has two different network stacks and thus always uses two different sock(et)s for any given connection. It is the existence of two complete networking stacks that provides many of the capabilities associated with application delivery platforms, such as acceleration, optimization, efficiencies in managing TCP connections, and security. This applies not just to web application traffic, but to any traffic – SMTP, FTP, anything TCP-based – that might be flowing through the solution.
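To make the “two sock(et)s” concrete, here’s a minimal sketch of a full proxy in Python, under the simplifying assumption of a single request/response exchange: the proxy accepts the client on one socket and opens a completely separate second socket to the backend, so neither side ever shares a TCP connection with the other. The helper names and the uppercase-echo backend are invented purely for illustration.

```python
import socket
import threading

def echo_backend(listener):
    # A trivial backend server: echoes one request back in uppercase.
    sock, _ = listener.accept()
    sock.sendall(sock.recv(1024).upper())
    sock.close()

def full_proxy(listener, backend_addr):
    # Client-side stack: accept the client's connection (socket #1).
    client_sock, _ = listener.accept()
    # Server-side stack: a second, completely separate TCP connection
    # to the backend (socket #2); the client never touches it.
    server_sock = socket.create_connection(backend_addr)
    server_sock.sendall(client_sock.recv(1024))   # relay the request
    client_sock.sendall(server_sock.recv(1024))   # relay the response
    server_sock.close()
    client_sock.close()

def listen_on_loopback():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))
    s.listen(1)
    return s, s.getsockname()[1]

backend_listener, backend_port = listen_on_loopback()
proxy_listener, proxy_port = listen_on_loopback()

backend_thread = threading.Thread(target=echo_backend,
                                  args=(backend_listener,))
proxy_thread = threading.Thread(target=full_proxy,
                                args=(proxy_listener,
                                      ("127.0.0.1", backend_port)))
backend_thread.start()
proxy_thread.start()

client = socket.create_connection(("127.0.0.1", proxy_port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
backend_thread.join()
proxy_thread.join()
backend_listener.close()
proxy_listener.close()
print(reply)  # b'HELLO'
```

Note where the proxy could do its real work: between the two relay lines it holds complete, independent connections on each side, which is exactly what lets a full proxy inspect, rewrite, or secure traffic in transit.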

It is important to note that even though there are two different stacks, there are only two connections – one with the client and one with the server(s). There are no “sock(et)s” between the client and server-side stacks on the platform itself. At least, that’s the way it should be in an optimized application delivery platform. This is not the same thing as putting both the web server and the application server on the same machine and pointing the web server’s application “connector” at the application server. That’s more akin to chaining proxies, an architecture in which it really doesn’t matter whether the applications are physically located on the same machine or not. A truly unified, integrated application delivery platform communicates internally over something more akin to a secured bus than a TCP connection. The former is fast and incurs almost no latency while the latter incurs all the latency and overhead typically associated with opening a TCP connection. That network latency is minimal, of course, but it is higher than communicating directly and the TCP management overhead remains a constant.


The separation of client-side from server-side network stacks provides the means by which network and application layer protocols can be “sanitized” and secured. It offers a way to “spool” requests in a queue-like fashion, so that even if the servers are overloaded, clients can still make connections and will be served as quickly as possible because the application delivery platform can hold content and requests as necessary to ensure full service. It enables TCP multiplexing, one of the most valuable and important optimizations for drastically increasing efficiency in traditional and emerging data center models. It makes termination of SSL-secured connections a simple task. It allows developers and administrators alike to apply network-side scripting functionality to requests and responses, and it provides the context that is so important in today’s dynamic network and application environments.
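TCP multiplexing, in particular, is easy to sketch: the proxy keeps one long-lived server-side connection open and funnels requests from many short-lived client interactions over it, so the server pays TCP setup and teardown costs once rather than once per client. This toy Python version (the backend and its “ok:” response framing are invented for illustration) shows the connection reuse, though it omits the real work of parsing and framing concurrent requests.

```python
import socket
import threading

def backend(listener):
    # The backend sees ONE long-lived TCP connection and serves many
    # requests over it -- not one connection per client.
    sock, _ = listener.accept()
    while True:
        data = sock.recv(1024)
        if not data:            # empty read: peer closed the connection
            break
        sock.sendall(b"ok:" + data)
    sock.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
backend_thread = threading.Thread(target=backend, args=(listener,))
backend_thread.start()

# The proxy holds a single server-side connection open...
server_sock = socket.create_connection(("127.0.0.1", port))

# ...and funnels requests from many (simulated) client interactions
# over it, amortizing TCP handshake costs on the server side.
replies = []
for client_request in [b"a", b"b", b"c"]:
    server_sock.sendall(client_request)
    replies.append(server_sock.recv(1024))

server_sock.close()
backend_thread.join()
listener.close()
print(replies)  # [b'ok:a', b'ok:b', b'ok:c']
```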

The separation of client from server-side network stacks also allows the virtual presentation of an application or service to be abstracted from its physical implementation. This abstraction supports traditional, virtualized, and cloud computing architectures with equal alacrity and with no apparent difference to the client. It allows organizations to move from a static architecture based on traditional data center models to a more dynamic one based on emerging technologies without disruption of service. Such abstraction enables the delivery of applications hosted in virtualized containers in an on-demand environment by managing those stacks separately, so that client connections always reach available and responsive application instances without requiring manual intervention of any kind.

Two different sock(et)s turns out to be a whole lot more important than you might think to application delivery and enterprise architecture no matter what interpretation of the metaphor you might prefer.

As Paul Harvey used to say: And now you know the rest of the story.
