If you’re looking at standardization and interoperability efforts only as they relate to providers or end-users, then you’re not thinking long term, nor are you really considering the potential of cloud computing and virtualization to revolutionize data center architectures. In a nutshell, if you equate “cloud” with “providers like Amazon and Google,” then you don’t really get the big picture.

While the ultimate goal of cloud specifications and standards is to enable interoperability and ease of migration for the end-user, approaching the creation of such standards from the point of view of the end-user will result in a huge shipment of fail arriving at the doors of everyone involved.

A bigger issue is the belief that cloud standardization efforts should be designed specifically with the needs of end-users in mind, defining, as Jeff Boles of ComputerWorld suggests, “a few core ‘activities’ that are targeted more at interoperability than uniform services and structure.”

This view of cloud standardization is short-sighted and selfish.

First, it fails to recognize that a “few core ‘activities’” cannot be properly defined or modeled without underlying specifications. Without a common model, and therefore a common definition, around which such core activities can be framed, we cannot achieve true interoperability. Unless we agree, as an industry, on which components are in fact necessary to the cloud, we cannot properly define the APIs or specifications needed for a “few core activities” in any meaningful, interoperable fashion.

Certainly it’s possible to give the end-user a specification with an interface to the cloud containing standardized APIs like “deploy application,” “spawn application,” “stop application,” and so on, but that will not enable interoperability between cloud providers, because the very definition of “provider” varies from Google to Amazon to Joyent to BlueLock. “Deploy application” means something very different to each provider, and therefore a simple specification for a few “core activities” would be meaningless.
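To make the problem concrete, here is a minimal sketch, in TypeScript with entirely hypothetical names, of what such a “core activities” interface might look like and where it breaks down:

```typescript
// A hypothetical "core activities" interface; names are illustrative only.
interface CloudProvider {
  deployApplication(artifact: string): Promise<string>; // returns an app ID
  stopApplication(appId: string): Promise<void>;
}

// For a PaaS-style provider, "deploy" might mean pushing source code
// into a managed runtime that the provider builds and hosts...
class PaasProvider implements CloudProvider {
  async deployApplication(artifact: string): Promise<string> {
    // push source, build remotely, bind to a managed runtime
    return `paas-app-${artifact}`;
  }
  async stopApplication(appId: string): Promise<void> {
    // scale the managed runtime to zero instances
  }
}

// ...while for an IaaS-style provider, "deploy" means booting a virtual
// machine image: a completely different artifact and lifecycle.
class IaasProvider implements CloudProvider {
  async deployApplication(artifact: string): Promise<string> {
    // here "artifact" must be a VM image ID, not source code; the
    // shared signature quietly hides the incompatibility
    return `vm-${artifact}`;
  }
  async stopApplication(appId: string): Promise<void> {
    // terminate the virtual machine
  }
}
```

The method signatures match, but the artifacts, lifecycles, and side effects do not; without a common underlying model of what an “application” and a “deployment” are, the shared interface is only an illusion of portability.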

You might stop and say, “Wait a minute, that’s the point of a standardized API. How each of these providers implements ‘deploy application’ is not my concern; it’s all abstracted-like, right, ’cause it’s the cloud.” True, it’s not your concern, but it is the provider’s concern, and it is the concern of anyone who has to actually implement “deploy application.” Standards beneath this simplistic top-level API are required to provide the means by which providers can implement the interface in such a way that it is portable across implementations.
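What that underlying layer might look like, very roughly, is a shared model of the nouns: application, artifact, runtime. This is a speculative sketch, not any existing specification, and every name in it is hypothetical:

```typescript
// A hypothetical common model: standardize the nouns first. Every
// provider must express its offering in these shared terms before a
// verb like "deploy" can mean the same thing everywhere.
type ArtifactKind = "source" | "container-image" | "vm-image";

interface ApplicationModel {
  name: string;
  artifact: { kind: ArtifactKind; uri: string };
  runtime: {
    kind: "managed" | "vm" | "container";
    resources: { cpu: number; memoryMb: number };
  };
}

// Only on top of a common model can the top-level verbs be portable.
interface PortableCloudApi {
  deploy(app: ApplicationModel): Promise<string>;
  stop(appId: string): Promise<void>;
}
```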

Second, it completely ignores the fact that cloud computing and virtual data center architectures are not just for providers. Organizations will eventually implement some form of these emerging data center architectures locally, and whether we call it “cloud” or “fog bank” is really irrelevant. The same core principles will necessarily be used because they make sense from an efficiency standpoint. But before organizations can take full advantage of these emerging models, there needs to be standardization across a lot more than just a “few core activities,” primarily because, without broader standards, the systems organizations need to orchestrate and automate these emerging data center models will simply engender more vendor lock-in.

The end-user is not always the primary focus of cloud computing interoperability efforts. Not really. Sorry if it hurts your feelings to be ripped out of the center of the cloud computing universe, but interoperability efforts are focused on two things: a common model and a common interface. That model, and the interface built upon it, must define a common “stack” of components in order to facilitate collaboration with all the other application- and network-focused IT management systems. Any specification that ignores this need will fail to provide any kind of meaningful interoperability between cloud implementations.
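Purely as an illustration of the point (nothing here reflects an actual specification): if the common model names the layers of the stack, management systems can address a layer rather than a particular provider:

```typescript
// A hypothetical common "stack" of cloud components. External IT
// management systems (monitoring, orchestration, billing) address a
// layer of the model, not a particular provider.
enum StackLayer {
  Network = "network",
  Compute = "compute",
  Storage = "storage",
  Platform = "platform",
  Application = "application",
}

interface StackEvent {
  layer: StackLayer;
  resourceId: string;
  kind: "provisioned" | "scaled" | "failed" | "retired";
  timestamp: Date;
}

// Any management system that understands the common model can subscribe,
// regardless of which cloud implementation emits the events.
type ManagementHook = (event: StackEvent) => void;
```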

Lastly, the focus on a few “core activities” ignores the very real fact that every business and IT organization has processes into which it will incorporate cloud-based applications. Those processes may appear simple, but they often involve an intricate dance between myriad components in the data center: not just network and application infrastructure, but applications and, in some cases, human interactions as well. Those processes, those “core activities,” should not be prescribed by some specification. Such processes must be unique to the organization and thus require a top-level collaborative specification that allows processes to be specified dynamically, much in the way BPEL provides dynamic specification of business processes in the business process management (BPM) world.
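As a loose analogy, with TypeScript standing in for a BPEL-style process language and every name below hypothetical: the specification standardizes the vocabulary of activities, while each organization composes its own process from them:

```typescript
// The standard defines the vocabulary of activities; the organization,
// not the specification, defines the sequence, much as BPEL lets each
// business compose its own process from standard constructs.
type Activity =
  | { kind: "deploy"; app: string }
  | { kind: "verify"; healthCheckUrl: string }
  | { kind: "approve"; approver: string } // a human-interaction step
  | { kind: "notify"; channel: string };

type Process = Activity[];

// One organization's release process, expressed in shared terms;
// another organization's process could differ entirely.
const releaseProcess: Process = [
  { kind: "deploy", app: "billing-service" },
  { kind: "verify", healthCheckUrl: "https://example.com/health" },
  { kind: "approve", approver: "change-advisory-board" },
  { kind: "notify", channel: "ops-team" },
];

async function run(process: Process): Promise<void> {
  for (const step of process) {
    // dispatch each step to whatever system implements that activity
    console.log(`executing ${step.kind}`);
  }
}

run(releaseProcess);
```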

Jeff speaks for the end-user:

The naysayers have a good claim that standardization could stifle innovation, but what I care about, as an end user, is really carrying out a couple of key steps, in the same way, regardless of who the provider is.

If you really care about carrying out a couple of key steps, in the same way, regardless of who the provider is, then you have to recognize that (1) the provider may be your own IT organization, and (2) the disparity among the types of cloud computing that exist at the moment (PaaS, IaaS, SaaS) makes that desire nearly impossible to satisfy until a full ontology of the cloud can be agreed upon and used as the basis for delivering the desired standards.

Achieving these goals is difficult enough without complicating the necessary work by prematurely defining some simple set of “core activities” to placate end-users, some of whom are apparently woefully uninformed about the nature and potential of cloud computing in the first place.

 
