The concept of an “intercloud” is floating around the tubes and starting to gather some attention. According to Greg Ness, you can “Think of the intercloud as an elastic mesh of on demand processing power deployed across multiple data centers. The payoff is massive scale, efficiency and flexibility.”

Basically, the intercloud is the natural evolution of global application delivery. The intercloud is about delivering applications (services) from one of many locations based on a variety of parameters that will be, one assumes, user/organization defined. Some of those parameters could be traditional ones: application availability, performance, or user-location. Others could be more business-focused and based on such tangibles as cost of processing.
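To make the idea concrete, here is a minimal, purely illustrative sketch of that kind of parameter-driven selection. The data center attributes, weights, and threshold below are hypothetical, not any vendor's API; the point is simply that both IT and business parameters feed the decision.

```python
# Hypothetical sketch of intercloud-style data center selection.
# All parameter names and weights are illustrative, not a real API.

from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    available: bool          # is the application/service up here?
    avg_response_ms: float   # measured application performance
    distance_km: float       # rough proxy for user location/proximity
    cost_per_request: float  # business-defined cost of processing

def choose_data_center(candidates, max_response_ms=500):
    """Pick the 'best' data center using both IT and business parameters."""
    viable = [dc for dc in candidates
              if dc.available and dc.avg_response_ms <= max_response_ms]
    if not viable:
        raise RuntimeError("no viable data center for this request")
    # Weighted score: lower is better. Weights would be user/organization defined.
    return min(viable, key=lambda dc: (0.5 * dc.avg_response_ms
                                       + 0.3 * dc.distance_km
                                       + 0.2 * dc.cost_per_request * 1000))

# Example: a request is steered to whichever site scores best right now.
sites = [DataCenter("us-east", True, 120, 800, 0.0004),
         DataCenter("eu-west", True, 95, 6500, 0.0007)]
print(choose_data_center(sites).name)
```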

Greg, playing off Hoff, explains:

For example, services could be delivered from one location over another because of short term differentials in power and/or labor costs. It would also give enterprises more viable options for dealing with localized tax or regulatory changes.

The intercloud doesn’t yet exist, however. It has at least one missing piece: the automation of manual tasks at the core of the network. The intercloud requires automation of network services, the arcane collection of manual processes required today to keep networks and applications available.

Until there is network service automation, all intercloud bets are off.

What I find eminently exciting about the intercloud concept is that it requires a level of intelligence, of contextual awareness, that is the purview of application delivery. We’re calling them services again, as we did when SOA was all the rage, but in the end even a service can be considered an application: it’s a self-contained piece of code that executes a specific function for a specific business purpose. If it makes it easier to grab onto, just call “application delivery” “service delivery” because there really isn’t much of a difference. But intercloud requires a bit more awareness than global application delivery; specifically, it requires more business and data center specific awareness than we have available today.

On the surface intercloud sounds a lot like what we do today in a globally load balanced environment: application services are delivered from the data center that makes the most sense based on variables (context) surrounding the request including the user, the state of the data center, the networks involved, and the applications themselves. Global application delivery decisions are often made based on availability or location, but when the global application delivery infrastructure is able to collaborate with the local application delivery infrastructure the decision making process is able to get a lot more granular. Application performance, network conditions, capacity – all can be considered as part of the decision regarding which data center should service any given request.
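Here is a rough sketch of that global/local collaboration, with hypothetical metric names and thresholds standing in for whatever a local application delivery tier might actually report up to the global tier:

```python
# Illustrative sketch: each site's local application delivery tier reports
# metrics, and the global tier folds them into its routing decision.
# Field names and thresholds are hypothetical, not a product's data model.

def global_decision(reports):
    """Global tier: pick a site using the granular, locally reported context."""
    usable = [r for r in reports
              if r["app_healthy"] and r["capacity_used"] < 0.85]
    # Favor the site that is currently fastest end to end.
    return min(usable, key=lambda r: r["net_latency_ms"] + r["app_response_ms"])

# Metrics a local tier might share with the global tier (made-up values).
reports = [
    {"site": "dc-east", "app_healthy": True, "capacity_used": 0.62,
     "net_latency_ms": 38, "app_response_ms": 140},
    {"site": "dc-west", "app_healthy": True, "capacity_used": 0.90,
     "net_latency_ms": 22, "app_response_ms": 95},
]
# dc-west is faster but near capacity, so the request goes east.
print(global_decision(reports)["site"])
```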

I rarely disagree with Greg, and on the surface at least he is absolutely right that we need to automate processes before the intercloud can come to fruition. But we are also missing one other piece: the variables peculiar to the businesses and data centers comprising the intercloud, and the integration/automation that will allow global application delivery infrastructure to take advantage of those variables efficiently. That data is likely assumed in the call for automation, because without it there is not nearly enough information to automate decisions across data centers in the way Greg and Hoff expect such systems to.


WHAT’S DIFFERENT ABOUT INTERCLOUD?
What makes the intercloud differ from today’s global application delivery architectures is the ability to base the data-center decision on business (non-IT) data. This data is necessary to construct the rules against which request-routing decisions can be evaluated. While global application delivery systems today are capable of understanding a great many variables, there are data points they don’t yet have, such as the cost to serve up an application (service), labor costs, or a combination of time of day and almost any other variable.
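To make that concrete, here is a sketch of what such a rule might look like once business data is available alongside IT metrics. The feed, field names, and the rule itself are purely hypothetical:

```python
# Hypothetical rule: business (non-IT) data folded into the routing decision.

from datetime import datetime, timezone

def business_rule(site_metrics, business_data, now=None):
    """Prefer the cheaper site during off-peak hours, otherwise the faster one."""
    now = now or datetime.now(timezone.utc)
    off_peak = now.hour < 6 or now.hour >= 22
    if off_peak:
        key = lambda s: business_data[s["site"]]["cost_per_kwh"]
    else:
        key = lambda s: s["app_response_ms"]
    return min(site_metrics, key=key)["site"]

# Made-up business data and IT metrics for two sites.
business_data = {"dc-east": {"cost_per_kwh": 0.11},
                 "dc-west": {"cost_per_kwh": 0.07}}
site_metrics = [{"site": "dc-east", "app_response_ms": 140},
                {"site": "dc-west", "app_response_ms": 95}]
print(business_rule(site_metrics, business_data))
```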

Don’t get me wrong – an intelligent global application delivery system can be configured with such information today, but it’s a manual process and manual processes don’t scale well. This is why Greg insists (correctly) that automation is the key to the intercloud. If the cost of power, for example, changes throughout the day and may in fact be volatile in general, then the global application delivery system would have to be manually reconfigured every time it changed. That simply wouldn’t be feasible. A system for providing that information – and any other information that would become the basis for request routing across distributed data centers – needs to be constructed and then integrated into the massive management system that will drive the intercloud.
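As a rough sketch of that kind of automation, imagine a small control loop that polls a power-price feed and recomputes routing weights without anyone touching a configuration. The feed, field names, and weighting scheme here are all illustrative assumptions, not a real integration:

```python
# Illustrative automation sketch: poll a (hypothetical) power-price feed and
# recompute routing weights programmatically instead of by hand.

import time

def fetch_power_prices():
    """Placeholder for an external price feed; returns $/kWh per site."""
    return {"dc-east": 0.11, "dc-west": 0.07}

def recompute_weights(prices):
    """Cheaper power gets a proportionally higher share of traffic."""
    total = sum(1 / p for p in prices.values())
    return {site: round((1 / p) / total, 2) for site, p in prices.items()}

def control_loop(poll_every_s=300, iterations=1):
    """Poll the feed and push new weights; no human reconfiguration involved."""
    for i in range(iterations):
        weights = recompute_weights(fetch_power_prices())
        print("pushing weights to global controller:", weights)
        if i < iterations - 1:
            time.sleep(poll_every_s)

control_loop()
```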

It makes a certain amount of sense, if you think about it, that global application delivery would also need to evolve into something more: something capable of context awareness from a higher vantage point than local application delivery. Global application delivery will be the foundation for the intercloud because it’s already performing the basic function – we just lack the variables and the automation necessary for global application delivery solutions to take the next step and become intercloud controllers.

But they will get there.
