Infrastructure 2.0 requires collaboration. Collaboration requires the ability to communicate. The ability to communicate requires integration. But how that integration will happen may shape the future of infrastructure and network architecture.


There is a growing recognition of the basic problems associated with the rapid rate of change inherent in on-demand (cloud) architectures and the complexity that comes from virtualized data centers: challenges such as IP address and application management, visibility, and, last but not least, integration.

Yes, that most dreaded of all technology concepts has finally come to the network.

The answer to the growing challenges of managing rapid change is automation and orchestration, but building such solutions requires the ability to integrate infrastructure – both with other infrastructure solutions and with the management systems and platforms that will actually control the orchestration of the data center. Awareness of the need for this infrastructure integration is rising. That’s a Good Thing. But questions remain regarding how that integration should be achieved; what form should it take?

While traditional EAI (enterprise application integration) technology originally took the form of API-based integration – libraries of functions that could be invoked to execute specific tasks – in later years, with the advent of SOA and Web Services, metadata-based integration patterns became much more popular. Metadata-based integration reduced vendors’ cost to create, maintain, and support integration libraries and insulated customers from changes and the nitty-gritty details.

But then Web 2.0 and social networking became all the rage, and integration between those sites reverted to the traditional API-based method with a slight twist. Rather than relying on completely proprietary data formats – i.e., formats created specifically for the application – they began to offer both JSON (JavaScript Object Notation) and XML to exchange data. While not completely interoperable – the data itself is not compatible across applications – the format is, at least across platforms, languages, and implementations.
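
To make that distinction concrete, here’s a minimal sketch (the field names are invented for illustration) of the same record serialized as JSON and as XML. Any platform can parse either format; what the fields actually mean is still up to each application.

```python
import json
import xml.etree.ElementTree as ET

# The same "status update" record, serialized two ways.
# The field names are invented for illustration; every
# application defines its own.
update = {"user": "lori", "text": "Infrastructure 2.0 needs standards"}

# JSON: parseable by any JSON library on any platform.
print(json.dumps(update))
# {"user": "lori", "text": "Infrastructure 2.0 needs standards"}

# XML: likewise parseable by any XML library.
root = ET.Element("update")
for key, value in update.items():
    ET.SubElement(root, key).text = value
print(ET.tostring(root, encoding="unicode"))
# <update><user>lori</user><text>Infrastructure 2.0 needs standards</text></update>

# The *format* is interoperable; the *data* -- what "user" and
# "text" mean to a given application -- is not.
```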

Infrastructure 2.0 needs to look at what has – and hasn’t – worked in the application space, and learn from it to lay a solid but extensible foundation for the future of infrastructure integration.


DIFFERENT STROKES FOR DIFFERENT FOLKS

Infrastructure solutions today use a variety of mechanisms to collaborate. The primary purpose has been to allow third-party development of management applications for specific applications or platforms, though there has also been a smattering of enterprise use for specific integration with data center management systems. Infrastructure has also generally accepted XML as a standard format, though whether that’s exposed via a RESTful API or a SOAPy one very much depends on the vendor’s view of the world and what its typical users demand.

There are a lot of infrastructure solutions that are API-enabled (and even more announced or coming after VMworld this year). The thing is that, just like Web 2.0 and social networking APIs, no two APIs are the same. That means configuring a VLAN on a Cisco switch, an F5 BIG-IP, or an HP ProCurve switch is a completely different process on each. The API calls themselves, the data required, the process – each is unique to the product. This complicates application portability across clouds (or data centers) because the orchestration and automation that enable a dynamic infrastructure are implicitly tightly coupled to the infrastructure. That’s okay for the cloud provider, because they’re probably – like you – standardizing on certain vendors, so it isn’t going to be a problem for them. And the granularity offered by the various APIs gives them the ability to build out automation and orchestration solutions tailored to their environment. The more that can be automated, the more simplified the provisioning process, which in turn offers value to customers and the ability to differentiate in the market.
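
To illustrate – and only to illustrate, these are hypothetical sketches and not any vendor’s actual API – here’s what “create a VLAN” might look like against two different products:

```python
# Hypothetical sketches only: NOT the actual Cisco, F5, or HP APIs,
# just an illustration of how two vendors might expose the same task
# (create a VLAN) through entirely incompatible calls.

def create_vlan_vendor_a(session, vlan_id, name, ports):
    """Vendor A: imperative, one call per step, ports as interface names."""
    session.send(f"vlan {vlan_id}")
    session.send(f"name {name}")
    for port in ports:
        session.send(f"interface {port} switchport access vlan {vlan_id}")

def create_vlan_vendor_b(client, vlan):
    """Vendor B: declarative, one call with a structured document."""
    client.call("VLAN.create", {
        "tag": vlan["id"],          # "tag" here, "vlan_id" there
        "label": vlan["name"],      # different field names...
        "members": vlan["ports"],   # ...and numeric port indexes
    })

# Same intent, different calls, different data, different order of
# operations: automation written against one is useless on the other.
```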

But when you try to take an application that may require services from security, acceleration, load balancing, IPS, IDS, firewall, and other supporting infrastructure solutions from one environment to another, that’s where the differences in the APIs become apparent – and problematic. You can’t automate the migration via the API, because the product in one environment may differ from the product in the next, rendering such a method useless. The answer, of course, is to somehow just share the configuration data, but today that data is just as tightly coupled to products as the APIs are.

There’s no standard way to share the metadata – the configuration – that describes those requirements across vendor lines. When you request configuration data from product B, it’s completely different from that of product A, and neither one can completely understand the other. So what’s needed is a standardized but extensible metadata format – and a way to share and consume that metadata across clouds and data centers. That’s the concept behind constructing a mechanism through which metastructure data can be published, shared, and consumed, anyway.
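
As a minimal sketch of the idea – the schema below is invented, not a proposed standard – the metadata describes what an application requires, and each vendor supplies an adapter that translates it into product-specific calls:

```python
# An invented, illustrative schema -- not a proposed standard. The
# metadata captures *what* the application needs; each vendor's
# adapter translates it into *how* its product provides it.
service_metadata = {
    "application": "storefront",
    "requires": {
        "load_balancing": {"algorithm": "round-robin", "port": 443},
        "firewall": {"allow": ["443/tcp"], "deny": ["*"]},
    },
}

def provision(adapter, metadata):
    # The adapter is product-specific; the metadata document is not,
    # so it can travel between clouds and data centers unchanged.
    for service, config in metadata["requires"].items():
        adapter.apply(service, config)
```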


NOT MUTUALLY EXCLUSIVE METHODS OF INTEGRATION

When it comes down to it, the use of metadata and APIs to integrate and collaborate in a dynamic infrastructure is not an either-or proposition. On the contrary, both will be critical to the success of Infrastructure 2.0 in solving the challenges associated with implementing a truly dynamic infrastructure.

APIs will be necessary to automate and orchestrate specific data center operational and business processes, while metastructure hubs will be necessary for portability, upgrades, and reconfiguration efforts. While it certainly appears, at least at first glance, that metastructure hubs and the metadata integration approach would work well for both design-time (configuration, a.k.a. governance) and run-time dynamism, metadata integration does not enforce any order of operations. It can’t, and shouldn’t. Infrastructure interested in certain events or data subscribes to a topic or channel and receives (or pulls, depending on the model) updates at varying rates. Processes, by contrast, generally require that certain tasks complete before the next one begins, and thus require more control. That control comes from an API and a management system capable of executing specific automations across the infrastructure in the specified order, at the specified time, under specified conditions.
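
The contrast might look something like this (all names invented for illustration): a subscriber reacts to metadata updates whenever they arrive, while an orchestrator drives API calls in an explicit, gated sequence.

```python
# Illustrative only, with invented names. A metadata hub pushes updates
# with no ordering guarantees; an orchestrator sequences API calls.

def on_config_update(event, apply_local_config):
    # Pub/sub: react whenever an update arrives. Fine for keeping
    # configuration in sync; useless for sequencing a process.
    apply_local_config(event["metadata"])

def provision_application(orchestrator, app):
    # API-driven orchestration: each task completes before the next
    # begins, in the order the process demands.
    network = orchestrator.create_network(app)            # step 1
    servers = orchestrator.deploy_servers(app, network)   # step 2 needs step 1
    orchestrator.configure_load_balancer(app, servers)    # step 3 needs step 2
```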

Though it might appear at first that there are two competing methods of integration to enable the dynamic infrastructure, nothing could be further from the truth. Both metadata integration and API-based integration will be required to build out a truly portable, dynamic infrastructure. And if we look at what’s happened in the web application space, we see that it, too, has settled on a combination of metadata (standardizing on XML and JSON) and APIs to enable the cross-application sharing of data and functions that essentially makes up today’s “social networking web”.

Interestingly enough, it seems to be working for Web 2.0. Hopefully we’ll see the same kind of success and adoption if we enable similar integration mechanisms for Infrastructure 2.0.
