Connectivity Intelligence Dynamo

There’s been increasing interest in Infrastructure 2.0 of late, which is encouraging to those of us who’ve been pushing it uphill, against the focus on cloud computing and virtualization, for quite some time now. The most frustrating part of raising awareness of the concept has been that cloud computing is one of the most tangible examples of both what Infrastructure 2.0 is and what it can do, and virtualization is certainly one of the larger technological drivers of Infrastructure 2.0-capable solutions today. So despite the frustration of watching cloud computing and virtualization steal the stage, as it were, the spotlight is certainly helping to bring the issues Infrastructure 2.0 is attempting to address to the fore. As it gains traction, one of the first challenges that must be addressed is defining what we mean when we say “Infrastructure 2.0.”

Like Web 2.0 – go ahead and try to define it simply – Infrastructure 2.0 remains, as James Urquhart put it recently, a “squishy term.”

James Urquhart in “Understanding Infrastructure 2.0”:

    Right now, Infrastructure 2.0 is one of those "squishy" terms that can potentially incorporate a lot of different network automation characteristics. As is hinted at in the introduction to Ness' interview, there is a working group of network luminaries trying to sort out the details and propose an architectural framework, but we are still very early in the game. [link to referenced interview added]

What complicates Infrastructure 2.0 is that not only is the term “squishy” but so is the very concept. After all, Infrastructure 2.0 is mostly about collaboration, about integration, about intelligence. These are not off-the-shelf “solutions” but rather enabling technologies designed to drive the flexibility and agility of enterprise networks forward in such a way as to alleviate the pain points associated with the brittle, fragile network architectures of the past.

Greg Ness summed up the concept, at least, very well more than a year ago in “The beginning of the end of static infrastructure” when he said, “The issue comes down to static infrastructure incapable of keeping up with all of the new IP addresses and devices and initiatives and movement/change already taking place in large enterprises” and then noted that “the notion of application, endpoint and network intelligence thus far has been hamstrung by the lack of dynamic connectivity, or connectivity intelligence.”

What Greg identified as missing is context, and perhaps even more importantly the ability to share that context across the entire infrastructure. I could, and have, gone on and on about this subject, so for now I’ll just stop and offer up a few links to some of the insightful posts that shed more light on Infrastructure 2.0 – its drivers, its requirements, its breadth of applicability, and its goals – to date:
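To make that contrast concrete, here is a minimal, purely illustrative sketch of the difference between static, hand-edited device configuration and dynamically shared connectivity context. Every name here (`ConnectivityRegistry`, `Endpoint`, the toy load-balancer pool) is hypothetical and invented for this example; it is not any real product's API.

```python
# Hypothetical sketch: "connectivity intelligence" modeled as a shared
# registry that infrastructure components subscribe to, so context about
# new and departing endpoints propagates automatically instead of being
# re-keyed into each device by hand.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Endpoint:
    name: str
    ip: str
    metadata: Dict[str, str] = field(default_factory=dict)  # shared context


class ConnectivityRegistry:
    """A single source of truth for who-is-where on the network."""

    def __init__(self) -> None:
        self._endpoints: Dict[str, Endpoint] = {}
        self._subscribers: List[Callable[[str, Endpoint], None]] = []

    def subscribe(self, callback: Callable[[str, Endpoint], None]) -> None:
        # Load balancers, firewalls, DNS, etc. register interest in changes.
        self._subscribers.append(callback)

    def register(self, endpoint: Endpoint) -> None:
        # A new VM or service announces itself; every subscriber learns of
        # it immediately, with no manual change ticket per device.
        self._endpoints[endpoint.name] = endpoint
        for notify in self._subscribers:
            notify("register", endpoint)

    def deregister(self, name: str) -> None:
        endpoint = self._endpoints.pop(name)
        for notify in self._subscribers:
            notify("deregister", endpoint)


# Usage: a toy "load balancer" keeps its pool in sync automatically.
pool: Dict[str, str] = {}

def lb_sync(event: str, ep: Endpoint) -> None:
    if event == "register":
        pool[ep.name] = ep.ip
    else:
        pool.pop(ep.name, None)

registry = ConnectivityRegistry()
registry.subscribe(lb_sync)
registry.register(Endpoint("app-01", "10.0.0.5", {"tier": "web"}))
registry.register(Endpoint("app-02", "10.0.0.6", {"tier": "web"}))
registry.deregister("app-01")
```

The point of the sketch is the publish/subscribe shape: in a static architecture each device holds its own stale copy of this information, while here the context lives in one place and flows outward as things change.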

James believes "Infrastructure 2.0" will “evolve into a body of standards that will have the same impact as BGP or DNS” and I share that belief. The trick is going to be in developing standards that allow for the “squishiness” that is required to remain flexible and adaptable across myriad architectures and environments while being able to standardize how that happens.
