The term “Infrastructure 2.0” seems to be about as well understood as the term “cloud computing.” It means different things to different people, apparently, and depends heavily on the context and roles of those involved in the conversation. This shouldn’t be surprising; the term “Web 2.0” is also variable and often depends on the context of the conversation. The use of the versioning moniker is meant, in both cases, to represent a fundamental shift in the way the technologies are leveraged by people. In the case of Web 2.0 it’s about the shift toward interactive, integrated web applications used to collaborate (share) data with people. In the case of Infrastructure 2.0, it’s about a shift toward interactive, integrated infrastructure used to collaborate (share) data with infrastructure.

The two are surprisingly similar in evolution, in key components, in defining attributes, in intended effects. Web 2.0 evolved to address a specific set of key pain points observed and felt by users for years. Infrastructure 2.0 is the data center equivalent; it’s the evolution of infrastructure to address a set of key pain points observed and felt by IT for years. It’s about enabling infrastructure components – network, application network, endpoints – with the ability to share data and the intelligence to make decisions based on that data in the proper context. It’s the ability to adapt to change and manage the massive volume of information that each individual infrastructure component inherently collects but rarely shares.

The rapid growth and emergence of sites like FriendFeed and TwitterFeed that exist solely to augment the usability of other Web 2.0 sites is a perfect example of the way in which Web 2.0 allows users to shift the burden of managing the ebb and flow of data across sites onto technology. The automation of sharing across Web 2.0 sites is still primitive; it doesn’t really take context into consideration, and the rules by which content is pushed to one site or another are primarily simple and event-based: “When X happens, push Y.” What happened is that it became clear applications couldn’t adapt fast enough to the rapid changes occurring, and that gap was addressed with technology. Strategic points of control began to emerge that allowed for automation of sharing and feedback across web sites and ultimately people.
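To make the “When X happens, push Y” pattern concrete, here is a minimal sketch in Python. The event names, site names, and push_to_site function are hypothetical, invented purely to show the shape of these simple, context-free rules.

    # Hypothetical event-based sharing rules: no context, just "when X, push Y".
    RULES = {
        "new_blog_post": ["twitter", "friendfeed"],  # when a post is published, push it here
        "new_photo":     ["friendfeed"],
    }

    def push_to_site(site, content):
        # Placeholder for whatever API call the target site actually exposes.
        print(f"pushing {content!r} to {site}")

    def on_event(event_name, content):
        # Fire the rule for this event; note there is no awareness of what is
        # already on the target site or whether pushing makes sense right now.
        for site in RULES.get(event_name, []):
            push_to_site(site, content)

    on_event("new_blog_post", "Infrastructure 2.0 is the beginning, not the end")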

This is where Infrastructure 2.0 begins as well: simple, event-based integration between the various layers of network, application network, and endpoints. In some cases, such as IF-MAP and IP address management, this is accomplished in much the same way as Web 2.0: a third-party “manager” collects the information provided and fires off an “event” that shares the information with anyone “subscribed” to it. But it needs to build on that momentum and continue to evolve toward what everyone wants to happen but is afraid to mention, as if we might jinx efforts to get there if we talk about it, or be branded a renegade (or worse) for even thinking it might happen.
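The “manager” pattern described above is essentially publish/subscribe. The sketch below is a generic Python illustration of that pattern, not the IF-MAP protocol itself; the component names, topics, and data are assumptions made up for the example.

    class MetadataManager:
        """Third-party manager: collects published data and notifies subscribers."""

        def __init__(self):
            self.subscribers = {}  # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self.subscribers.setdefault(topic, []).append(callback)

        def publish(self, topic, data):
            # Fire an "event" to every component subscribed to this topic.
            for callback in self.subscribers.get(topic, []):
                callback(topic, data)

    manager = MetadataManager()

    # A firewall subscribes to IP address assignments so it can update its rules.
    manager.subscribe("ip-assignment",
                      lambda topic, data: print(f"firewall saw {data}"))

    # The IP address management system publishes a new assignment.
    manager.publish("ip-assignment", {"host": "app-server-3", "ip": "10.0.1.7"})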


INFRASTRUCTURE 2.0 IS THE BEGINNING, NOT THE END

What we are working toward, as we should be in Web 2.0 as well, is the ability of the infrastructure to meet specified business and operational goals automatically. We shouldn’t have to sit down with a slide rule and determine the right combination of network speeds, current application loads, and available RAM to meet a business-defined SLA (service level agreement) or performance guarantee.

What we eventually want to end up with, what we’re truly working toward, is the ability to specify a single policy that says “Application A Response Time Must be Less than 5ms” and let the infrastructure figure out how to meet that goal. The problem is that this decision can’t be made by any single component; it has to be made in the context of all the components. A router can’t decide to route to network A just because the response time over that link would be better; the application instance it ends up routing to may already be so bogged down that its response time alone would push the total response time over the agreed-upon performance metrics. That’s what makes context so important to the decision-making process, and what makes it imperative that context be shared across the infrastructure and that the whole infrastructure be capable of integration, so that the “big picture” can be understood and acted upon in such a way as to bring about the desired result: an application response time under the specified business limits.
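As a rough illustration of why the decision has to consider the whole picture, here is a hedged Python sketch. The route names, numbers, and the simple additive response-time model are all assumptions invented for the example; real infrastructure would need far richer context than this.

    SLA_MS = 5.0  # "Application A response time must be less than 5ms"

    # Candidate routes: hypothetical network latency plus current processing
    # time at the application instance each route leads to (milliseconds).
    candidates = [
        {"route": "network A", "network_ms": 1.0, "instance_ms": 6.0},  # fast link, overloaded instance
        {"route": "network B", "network_ms": 2.0, "instance_ms": 2.5},  # slower link, healthy instance
    ]

    # A router looking only at its own metric would pick network A (lowest latency).
    naive_choice = min(candidates, key=lambda c: c["network_ms"])

    # A context-aware decision estimates end-to-end response time and checks it
    # against the business-defined SLA before choosing a route.
    def total_ms(c):
        return c["network_ms"] + c["instance_ms"]

    meeting_sla = [c for c in candidates if total_ms(c) < SLA_MS]
    contextual_choice = min(meeting_sla, key=total_ms) if meeting_sla else None

    print("naive:", naive_choice["route"], "->", total_ms(naive_choice), "ms")
    print("contextual:", contextual_choice["route"], "->", total_ms(contextual_choice), "ms")

The naive choice ends up at 7.0ms and blows the SLA; the contextual choice lands at 4.5ms, because it weighed the instance’s condition along with the network’s.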

We’re closer to that than you might think, but the implementation is scattered around the infrastructure in varying degrees of readiness to meet what is, I’ll admit, a lofty goal. But if the infrastructure has visibility into the various factors that affect performance, is integrated with the infrastructure responsible for managing each of those factors, and can make decisions based on that information, then we can get to the point where the burden of managing the disparate pieces of the grand data center puzzle is shifted off of people and onto technology.

Infrastructure 2.0 is just the beginning of the story. It’s not the goal; the real goal is a dynamic infrastructure. Infrastructure 2.0 is the way we’re going to make that happen. Eventually.
