Cloud computing can’t assure availability of applications in the face of a physical network outage, can it?

Cloud computing providers focus on providing an efficient, scalable environment in which applications can be deployed, and they provide for application availability with load balancing services, health monitoring, and elastic scalability. But they can’t assure availability of your network. The Rackspace outage late last year was allegedly caused by a peering issue. You know, a network problem.

UPDATE: “The issues resulted from a problem with a router used for peering and backbone connectivity located outside the data center at a peering facility, which handles approximately 20% of Rackspace’s Dallas traffic,” Rackspace said in an incident report on its blog. “The problems stemmed from a configuration and testing procedure made at our new Chicago data center, creating a routing loop between the Chicago and Dallas data centers. This activity was in final preparation for network integration between the Chicago and Dallas data centers. The network integration of the facilities was scheduled to take place during the monthly maintenance window outside normal business hours, and today’s incident occurred during final preparations.”

We spend so much time worrying about application availability that we often overlook – both purposefully and accidentally – one of the most basic facts on which applications are built today: the existence of a working, reliable core network.

NO NETWORK, NO APPS

One of the most basic solutions to ensuring availability at the network layer is network redundancy. That is to say, most organizations that determine availability is a number one priority will maintain multiple connections to the Internet – via different providers – and then utilize “link load balancing” to route, re-route, and balance traffic across those connections. This redundancy is supposed to ensure that if one connection (provider) is hit with an outage, or is simply experiencing poor performance, another provider can be used to ensure customers and users can access applications.
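To make the idea concrete, here’s a minimal Python sketch of the decision a link load balancer is making on your behalf. The two-provider setup, gateway addresses, and TCP probe are assumptions for illustration only; a real link load balancer works at the routing layer with far richer health checks and traffic distribution.

```python
import socket

# Purely illustrative two-provider setup; the gateway addresses and probe
# port are assumptions, not details from the article.
LINKS = {
    "provider_a": "198.51.100.1",
    "provider_b": "203.0.113.1",
}

def link_is_up(gateway, port=53, timeout=2.0):
    """Crude health probe: can we open a TCP connection to the gateway?"""
    try:
        with socket.create_connection((gateway, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_link(preferred="provider_a"):
    """Use the preferred provider while it's healthy; fail over to any other that answers."""
    if link_is_up(LINKS[preferred]):
        return preferred
    for name, gateway in LINKS.items():
        if name != preferred and link_is_up(gateway):
            return name
    return None  # every link is down -- nothing link load balancing can do

if __name__ == "__main__":
    print("route new traffic via:", choose_link())
```

Note the last case: when every physical connection is down, no amount of re-routing helps, which is exactly the scenario this post is concerned with.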

This would seem to mean, at first glance, that cloud computing has no part to play in network availability. You can’t outsource your physical connectivity to “the cloud”, after all.

That’s true. From a network perspective, cloud can’t help. From an internal user/customer perspective, cloud can’t help.

But from an external customer/user perspective, perhaps cloud can be of service (sorry for that one, really) after all.

The reason to keep connectivity available is, ultimately, to deliver applications. While cloud computing cannot address a problem with basic physical connectivity, it can be leveraged in such a way as to help ensure that applications remain available in the unlikely event that an organization’s physical connectivity is interrupted. Using the cloud as a secondary data center, essentially, provides the means by which at least customers external to the network problem can still access applications in the face of an interruption. Cloud as a secondary data center is a fairly mundane and perhaps even boring use of cloud computing, and yet it’s probably one of the more well-understood and cost-effective examples of how cloud computing can be leveraged by organizations of all sizes – particularly smaller ones that may not previously have had the option of a “second” data center due to prohibitive costs.
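The decision logic itself is not complicated; what matters is where it runs. As a rough sketch – the addresses are placeholders, and a TCP probe stands in for real application health monitoring – the failover choice amounts to something like this:

```python
import socket

# Placeholder addresses: the application's public VIP in the primary data
# center and the same application deployed with a cloud provider.
PRIMARY_DC = "192.0.2.10"
CLOUD_DC = "203.0.113.20"

def application_answers(address, port=443, timeout=3.0):
    """Stand-in for real health monitoring: is anything listening at this address?"""
    try:
        with socket.create_connection((address, port), timeout=timeout):
            return True
    except OSError:
        return False

def address_for_external_users():
    """Send external users to the primary site while it answers, to the cloud copy otherwise."""
    return PRIMARY_DC if application_answers(PRIMARY_DC) else CLOUD_DC

if __name__ == "__main__":
    print("direct external users to:", address_for_external_users())
```

The catch, as discussed next, is that whatever executes this check and hands out the answer has to live somewhere external customers can still reach.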

The only problem – and it is a problem – in this entire scenario is that the global application delivery solution (global server load balancer, or GSLB) must remain available too. That may mean deployment at the local data center is not an option because, well, if there’s no connectivity to the applications, there’s no connectivity to the GSLB, either. The reason this is a problem is that the GSLB is typically deployed locally, under the control of the organization. In order to take advantage of cloud computing as a secondary data center to combat the potential loss of physical network service, the GSLB would have to be deployed externally so that it remains accessible to external customers and users.

IS THIS A JOB FOR INTERCLOUD?

Perhaps an external GSLB “service” is what’s required; an external catalog of services that’s based on GSLB and provides core DNS services on an “organizational” scale. A domain “locator” that’s not quite DNS but yet is. Or perhaps we’re simply looking at a solution that’s more along the lines of a third-party DNS service, where DNS is outsourced to a managed provider and GSLB is an extension or additional option that can be provisioned. Perhaps it, itself, is a cloud-based service that only kicks in when/if you need it.
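One plausible way to wire such a “kicks in when you need it” service with today’s pieces is a health monitor, hosted outside your own network, that pushes an RFC 2136 dynamic DNS update to a managed DNS provider when the primary site stops answering. A hedged sketch using the dnspython library follows; the zone, TSIG key, server, and addresses are all placeholders:

```python
import dns.query
import dns.tsigkeyring
import dns.update

# Every name, key, and address here is a placeholder for illustration.
ZONE = "example.com"
DNS_SERVER = "192.0.2.53"      # managed/authoritative DNS hosted outside the affected network
CLOUD_DC = "203.0.113.20"      # address of the application's cloud-hosted copy

def point_www_at_cloud():
    """Repoint www.<zone> at the cloud data center via an RFC 2136 dynamic update."""
    keyring = dns.tsigkeyring.from_text({"failover-key.": "c2VjcmV0LXNoYXJlZC1rZXk="})
    update = dns.update.Update(ZONE, keyring=keyring)
    # Short TTL so resolvers stop handing out the unreachable address quickly.
    update.replace("www", 60, "A", CLOUD_DC)
    return dns.query.tcp(update, DNS_SERVER)

if __name__ == "__main__":
    print(point_www_at_cloud())
```

The short TTL is doing as much work as the update itself; without it, resolvers keep serving the dead address long after the record changes. Whether this gets bolted onto an existing managed DNS offering or delivered as a GSLB service in its own right is the architectural question at hand.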

There is almost certainly a solution to the problem of maintaining network-level availability that involves “the cloud”, but it is architectural, not technological. It’s not a tangible solution like link load balancing that physically addresses the challenges associated with maintaining network connectivity. It’s a deployment model, an architectural model, that will be necessary to solve this problem. The pieces of the puzzle already exist, generally speaking, so cobbling together a solution today would not, strictly speaking, be impossible. But it may be desirable to envision a solution that is based on standards (Intercloud may actually help with this one) or standard practices, and that’s something the cloud doesn’t address today.
