Can intercloud intelligence eliminate the impact of intercontinental latency?

Ken has always posited that it would be not only kewl but highly efficient if your data center could “follow the sun.” We all know that application performance is affected – positively and negatively – by distance. So if you’re a global organization with one primary data center, that means some folks are going to have to settle for poorer application performance. That pesky speed-of-light law absolutely must be obeyed, for now at least, and intercontinental traffic has high latency, period.

So let’s introduce the concept of Intercloud into the equation, shall we, and see what happens…

If you’ve got very geographically diverse locations from which large numbers of employees and customers access a web application, you might be able to consider the possibility of using intercloud concepts to migrate the physical location of that application as the day moves through its cycle.

Consider that folks in Asia are many, many hours ahead of those of us in the United States. They’re up and working while we’re snoozing away. But they’re probably using the same web applications that we do. And if the web application (and data center) is physically located in the United States, they necessarily experience poorer application performance. There are solutions we can deploy to counter this – web application acceleration, caching, optimizations – but these do not remove the latency inherent in packets traveling across the seas.

So how do you get rid of that latency? You move the web application closer to the users who are accessing it. With intercloud, you move the application to a cloud in Asia during their work day, and then move it back to the United States as their day ends and ours starts. There may even be an intermediate move to somewhere in Europe as Europeans start their day sometime in between Asia and the US. Your data center follows the sun.
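As a minimal sketch of that follow-the-sun idea, the schedule below picks a cloud region based on UTC time of day. The region names and the hour boundaries are illustrative assumptions, not anything from the article; a real deployment would also have to handle the data migration itself, which is the hard part.

```python
from datetime import datetime, timezone

# Hypothetical follow-the-sun schedule keyed on UTC hour.
# (start_hour, end_hour, region) -- boundaries are assumptions for illustration.
SCHEDULE = [
    (0, 8, "asia-east"),     # Asian business day
    (8, 14, "europe-west"),  # European business day
    (14, 24, "us-east"),     # US business day
]

def active_region(now_utc: datetime) -> str:
    """Return the cloud region whose local business day is underway."""
    hour = now_utc.hour
    for start, end, region in SCHEDULE:
        if start <= hour < end:
            return region
    return "us-east"  # fallback, should be unreachable

print(active_region(datetime(2010, 6, 1, 3, tzinfo=timezone.utc)))   # asia-east
print(active_region(datetime(2010, 6, 1, 16, tzinfo=timezone.utc)))  # us-east
```

The point of the sketch is only that region selection is a trivial policy decision; moving the application and its data on that schedule is where intercloud infrastructure earns its keep.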

The big deal is that you’ve eliminated the latency inherent in intercontinental data transfers, thereby improving application performance. That latency averages 100 to 300 milliseconds [1] and doesn’t take into account any localized latency. Eliminating it can have a very positive impact on the overall user experience, which makes users happier and thus more productive.
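To make that concrete, a quick back-of-envelope calculation: a page that requires several serial round trips pays the intercontinental latency on each one. The round-trip count below is an illustrative assumption, not a measurement.

```python
# Back-of-envelope cost of intercontinental latency on page load time.
rtt_ms = 200       # midpoint of the 100-300 ms range above
round_trips = 20   # assumed: handshakes plus serial object fetches

penalty_s = rtt_ms * round_trips / 1000.0
print(f"Added load time: {penalty_s:.1f} s")  # Added load time: 4.0 s
```

Cut the RTT to a local 20 ms by moving the application next to its users and that same page sheds seconds of load time, which is the whole argument for following the sun.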

But what if you want to get more flexible? What if you want to move applications based on the evaluation of specific business or application or network characteristics, in real time?


Intercloud is really a flashy way of describing the process of integrating global application delivery (global load balancing) with the business. Consider, for example, that you have multiple call centers. Assume that one is suddenly becoming overwhelmed with calls. The blockage rate is rapidly increasing and the rate of completed calls is falling off dramatically. What you need to do, somehow, is reroute calls to a data center that isn’t overwhelmed. You need to be able to base call routing on call center metrics, not necessarily bandwidth or application performance.

This is where the integration capabilities of dynamic infrastructure come in handy. Using those capabilities, you can construct a system in which the global application delivery infrastructure collaborates with Business Activity Monitoring (BAM) systems to determine real-time call center performance. Using business rules, you create a policy that instructs the global application delivery infrastructure to reroute calls to less burdened call centers when any one of them passes the established thresholds.
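A minimal sketch of that policy, assuming a BAM feed that reports per-center metrics: any center past its thresholds is excluded, and traffic goes to the least-loaded survivor. The metric names, threshold values, and sample numbers are all hypothetical.

```python
# Hypothetical per-call-center metrics as a BAM system might report them.
CENTERS = {
    "us-east": {"blockage_rate": 0.02, "queue_depth": 12},
    "us-west": {"blockage_rate": 0.31, "queue_depth": 95},  # overwhelmed
    "eu-west": {"blockage_rate": 0.05, "queue_depth": 20},
}

# Assumed business-rule thresholds; a center over any limit is excluded.
THRESHOLDS = {"blockage_rate": 0.10, "queue_depth": 50}

def healthy(metrics: dict) -> bool:
    """A center is eligible only while every metric is within its threshold."""
    return all(metrics[k] <= limit for k, limit in THRESHOLDS.items())

def route_call() -> str:
    """Steer the call to the least-queued center that is still healthy."""
    candidates = {name: m for name, m in CENTERS.items() if healthy(m)}
    return min(candidates, key=lambda name: candidates[name]["queue_depth"])

print(route_call())  # us-east
```

The interesting part isn’t the routing logic, which is trivial; it’s that the inputs are business metrics from a BAM system rather than the bandwidth and response-time numbers global load balancing traditionally consumes.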

This rerouting must be dynamic and based on current conditions; you can’t base it on yesterday or last week or even time of day because it’s nearly impossible to predict when a call center might be overwhelmed. It requires an intelligent, adaptable infrastructure capable of collaborating with business and application-focused monitoring and management systems because that’s where the business data resides.

The power of intercloud will be in its ability to adapt and react based on actionable business and application and network metrics, which necessarily requires a dynamic infrastructure. And not only must it be dynamic, it must be intelligent enough to take direction from those metrics and make decisions in real-time in order to optimize not just the applications, but the business.

[1] Ledlie, J., Gardner, P., and M. Seltzer, “Network Coordinates in the Wild”, USENIX Symposium on Networked Systems Design and Implementation (NSDI), April 2007.
