The biggest disadvantage organizations have when embarking on a “we’re going cloud” initiative is that they’re already saddled with existing infrastructure and legacy applications. That’s no surprise, as longer-lived enterprises are almost always bound to have some “legacy” applications and infrastructure sitting around that’s still running just fine (and is a source of pride for many administrators; it’s no small feat to still have a Novell file server running, after all). Applications themselves are almost certainly bound to rely on some of that “legacy” infrastructure and integration, and let’s not even discuss the complex web of integration that binds applications together across time and servers.

So it is highly unlikely that an organization is going to go from its existing morass of infrastructure that comprises the data center to an elegant, efficient “cloud-based” architecture overnight. Like raising children, it takes an investment of time, effort, and yes, money.

But for that investment the organization will eventually get from point A (legacy architecture) to point Z (cloud computing) and realize the benefits associated with an on-demand, automated data center.

There are some milestones along the path between here and there that enterprise data centers will easily recognize; steps, if you will, on the journey to free the data center from its previously static and brittle infrastructure and processes on its way to a truly dynamic infrastructure. There are, you guessed it, five steps, and they all end with (how’d you ever guess?) “ate”:

1. SEPARATE test and development

2. CONSOLIDATE servers

3. AGGREGATE capacity on demand

4. AUTOMATE operational processes

5. LIBERATE the data center with a cloud computing model

And for your efforts in raising up this data center you’ll achieve a dynamic infrastructure that makes applications scalable, reliable, and available. Yes, the three “ables”. Modern “math” says five “ates” = three “ables”, at least in the realm of the data center.

To get there a new paradigm in data center and networking design is required: one that allows customers, on their terms, to add, remove, grow, and shrink application and data/storage services on demand. It’s the type of network that can understand the context of the user, location, situation, device, and application and dynamically adjust to those conditions. It’s the type of network that can be provisioned in hours, not weeks or months, to support new business applications. It’s an Infrastructure 2.0 enabled data center: integrated, collaborative, and services-based.

What’s necessary is a new architecture and a new way of looking at infrastructure. But to build that architecture you first need a blueprint, a map, that helps you get there – building codes that guide the construction of a dynamic infrastructure capable of responding to demand based on the operational and business processes that have always been the real competitive advantage IT brings to the business table. That blueprint, the architecture, is infinitely more important than its individual components. It’s not just the components; it’s the way in which the components are networked together that brings the dynamic data center to life.

And it’s those architectural blueprints, the building codes, that we’re bringing to Interop.