Reuven Cohen of the Elastic Vapor blog, in this article, puts forth the notion that infrastructure is required to enable cloudbursting and then asks an excellent question:

"To truly enable a capable cloudbursting infrastructure, I feel there needs to be a common consensus on how this may be achieved and by what means. So the question in the short term is: what are some of the practical approaches, technologies and architectures needed to make this kind of hybrid cloud infrastructure feasible?"

The general premise of cloudbursting is to allow the cloud to act as overflow resources in the event your own infrastructure becomes overloaded. It's an active fail-over model, in a way, that ensures applications are available and performing well even when capacity in the local data center is exhausted.

The problem with an architecture that supports cloudbursting is that you want to be able to leverage both the local data center and the cloud simultaneously.

This keeps the cost of using the cloud to a minimum and maximizes the return on investment in your own infrastructure.

Sounds like quite a trick. But I don't think it's nearly as complicated as it originally sounds, unless I'm missing something or the sleep deprivation caused by a sick child is affecting me more than I think. 

Global load balancing has been quietly performing these kinds of tricks for years, rarely receiving much attention except in those rare cases when it happens to "save the day": a primary data center is destroyed, and global load balancing simply redirects everyone to the secondary data center because, well, that's what it's supposed to do.

If you combine a global load balancing solution with an intelligent local application delivery solution, you can begin to see a cloudbursting architecture take shape. The application delivery network not only delivers applications but also manages them in terms of performance and capacity. So when the global load balancing solution checks on the local instance of the application and hears "oh my god, we're nearing capacity, we're slowing down" from the application delivery network, it can react instantly and start directing new requests to the cloud.

In this version of the architecture, the global load balancer simply treats the cloud as a secondary data center. There's really no magic here, except for the requirement that the local application delivery network be able to understand the capacity and performance of the applications it is delivering and relay that information back to the global load balancing solution.
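The routing decision itself is simple enough to sketch. The following is a minimal, hypothetical illustration (all names and thresholds are my own assumptions, not any particular product's API): the global load balancer prefers the local data center until the capacity reported by the local application delivery network crosses a burst threshold, at which point new requests go to the cloud.

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    """A data center as the global load balancer sees it (illustrative)."""
    name: str
    active_connections: int
    max_connections: int

    def utilization(self) -> float:
        # Capacity as reported by the local application delivery network.
        return self.active_connections / self.max_connections

def route_request(local: DataCenter, cloud: DataCenter,
                  burst_threshold: float = 0.9) -> DataCenter:
    """Treat the cloud as a secondary data center: prefer the local one
    until its reported utilization crosses the burst threshold."""
    if local.utilization() < burst_threshold:
        return local
    return cloud

local = DataCenter("local-dc", active_connections=950, max_connections=1000)
cloud = DataCenter("cloud-dc", active_connections=10, max_connections=10_000)
print(route_request(local, cloud).name)  # local is at 95%, so burst to the cloud
```

The key point the sketch makes is that the decision logic lives in the global load balancer, and all it needs from the local side is an honest capacity signal.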

I'm sure there are other architectures and solutions that would work as well, but it seems to me that the really difficult question to answer is whether the application lives in the cloud all the time or is pushed out to the cloud as part of a workflow initiated when the local application delivery network senses it is near capacity and needs the help of the cloud.

Ensuring that both the local and remote (cloud) instances of an application can be used simultaneously in a hybrid architecture to maximize the investment in and capacity of an application seems fairly simple if you stop looking at the cloud as a magical, mystical thing and view it as just another data center.

In the end, when viewed as part of a hybrid cloud computing architecture, that's all the cloud really is. 
