The saying goes that to forget (or in some cases blatantly ignore) the mistakes of the past is to be doomed to repeat them.

ODBC. BPEL. JDBC.

All three are extensible standards in the software industry that cause no end of headaches and increased management overhead for the folks who have to deal with them. None of them are interoperable; you can't use the ODBC driver for Oracle to hook up to a SQL Server database, nor can you use the same BPEL produced by one BPM solution within another. Why? Because they're "extensible," and that extensibility leads, almost inevitably, to interoperability issues.
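
As a concrete (and hedged) illustration in Python: the pyodbc calls below are real API usage, but the driver names and connection-string keywords are illustrative and vary by installation - which is precisely the point. The "standard" part of the standard ends where the vendor-specific part begins.

```python
import pyodbc  # assumes an ODBC driver manager and the vendor drivers are installed

# The API call is identical; everything that actually makes the connection work
# is vendor-specific. Driver names and keywords here are illustrative.
sqlserver = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=db.example.com;DATABASE=app;UID=user;PWD=secret"
)

oracle = pyodbc.connect(
    "DRIVER={Oracle ODBC Driver};"   # actual name depends on the installed client
    "DBQ=db.example.com:1521/ORCL;UID=user;PWD=secret"
)

# Swapping the DRIVER lines between these two connections does not work: each
# driver speaks only to its own database and expects its own keywords, however
# "standard" the surrounding API may be.
```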

Proprietary extensions to open standards: square pegs in round holes

Extensibility ostensibly provides a mechanism through which vendors can offer value-added features and functionality in products making use of these "open" standards. Take SIP (Session Initiation Protocol), used by VoIP (Voice over IP) solutions like Skype. Vendors and even providers of VoIP services extend SIP with proprietary, "value added" data that destroys the standard's interoperability with other SIP-enabled products.

It's like trying to shove a square peg into a round hole. You can do it, but not well, and it isn't easy.

Oh, it's great news for folks like F5, whose support for network-side scripting on a programmable application delivery platform can ease the pain of getting two disparate SIP-enabled products, say a client and a server, to work together. But it's bad for the industry in general, because it inhibits adoption of standards that could, if actually standardized, become ubiquitous enough to launch new and exciting ways to leverage the technology.

That's one of the reasons why, when we start talking about Infrastructure 2.0 and coordination and orchestration, we have to be very careful to consider the potential ramifications. While standards are obviously a good thing, standards that result in non-interoperable products - from infrastructure vendors or third parties - would only make the situation worse. Proprietary extensions to "open" standards result not only in vendor lock-in, but in non-interoperable products that only add complexity to the already complex web of management we're suffering from today.

As Greg Ness points out in "Static Networks meet Billowing Expectations":

Connectivity intelligence enables real-time tracking and interconnectedness between networks, applications and endpoints. The lack of connectivity intelligence has driven up networking costs and heightened pressures on already tight budgets.

That interconnectedness cannot be achieved without communication first at the people layer, or layer 8 as it's often called. And what has to come out of that communication is consensus; consensus that any "extensible" standard must be interoperable at the core, with proprietary extensions being just that: extensions, options, add-ons, but never, never requirements for interoperability or, as Greg puts it, interconnectedness.

But as often as we mention standards at the infrastructure layer, there's another aspect of cloud computing that will eventually require standards: deployment. Even though two cloud computing providers may both be built upon a virtual computing model, that does not mean the mechanisms customers use to deploy those virtual images are the same. Web services might be used by one, a Web 2.0 style REST API by another, and a proprietary mechanism by a third. Without some standard for deploying applications into the cloud, customers today risk vendor lock-in. Processes must be built around, and on top of, those proprietary deployment models, which results in lock-in similar to that experienced by many organizations with EAI (enterprise application integration) solutions before the advent of SOA and the ESB (enterprise service bus).
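
To make that concrete - and only as an illustration; the provider names, endpoints, payloads, and command-line tool below are hypothetical, not any real cloud API - here is what "deploy this virtual image" can look like three different ways:

```python
import subprocess
import requests  # HTTP calls against hypothetical endpoints, for illustration only

IMAGE = "https://images.example.com/app-server.ovf"

def deploy_provider_a(image_url):
    """Provider A: a SOAP-style web service expecting an XML envelope."""
    envelope = (
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        f"<soap:Body><DeployImage><Location>{image_url}</Location></DeployImage></soap:Body>"
        "</soap:Envelope>"
    )
    requests.post("https://api.provider-a.example/deploy",
                  data=envelope, headers={"Content-Type": "text/xml"})

def deploy_provider_b(image_url):
    """Provider B: a REST/JSON API with its own resource model and field names."""
    requests.post("https://api.provider-b.example/v1/instances",
                  json={"image": image_url, "count": 1})

def deploy_provider_c(image_url):
    """Provider C: a proprietary command-line tool that has to be scripted around."""
    subprocess.run(["providerc-deploy", "--image", image_url], check=True)
```

Every script, process, and runbook built around one of these mechanisms has to be rebuilt to move to another provider - and that's the lock-in.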

A standardized method of deploying virtual images into cloud computing environments would have additional benefits for customers if they, too, took advantage of such standards when implementing their own private cloud computing environments. Leveraging standard methods of deployment internally means a much smoother move into the cloud if it becomes required, as well as a more seamless experience when expanding from a purely private implementation to a hybrid architecture employing both internal and external clouds.
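
Continuing that (still entirely hypothetical) sketch, a common deployment interface would let the calling code stay the same whether the target is the internal cloud or an external provider; only the endpoint changes:

```python
import requests  # endpoint and payload shape are hypothetical, standing in for a future standard

def deploy_image(endpoint, image_url):
    """One standardized call, whatever happens to sit behind the endpoint."""
    requests.post(f"{endpoint}/deployments", json={"image": image_url})

# Same code path for the private cloud and for an external provider:
# deploy_image("https://cloud.internal.example", "https://images.example.com/app-server.ovf")
# deploy_image("https://api.provider.example",   "https://images.example.com/app-server.ovf")
```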

And despite the fact that standards often remove the vendor lock-in that providers might count on, consider this: with a standard in place, providers could offer services not only to those interested in leveraging the cloud to deliver applications and services, but to other providers as well, leasing out unused compute cycles in much the same way that large telcos have always leased their core networks to local and smaller providers.

It's a win-win-win scenario, in which everyone can benefit if we can reach consensus that such standards are necessary, communicate with one another to make them happen, and keep them open without proprietary extensions getting in the way.
