VM sprawl is predicted to be one of the outcomes of early adoption of, and excitement over, virtualization. Just as IT struggled to manage the explosion of PCs and servers across the enterprise, it will now need to find a way to manage the explosion of virtual machines as they pop up all over the enterprise with surprising alacrity.

Part of the difficulty in managing any new technology is rogue deployment. Whether the rogue deployments are physical or virtual servers is irrelevant; the challenges associated with managing what are essentially unmanaged applications and servers, deployed outside normal organizational processes, are the same.

One of the reasons these rogue deployments are so difficult to manage is that they are, effectively, invisible to the management systems and IT staff tasked with controlling them. They simply come into existence on what appears to be a whim, taking over network resources such as IP addresses and ports. This spontaneous existence is problematic, because those network resources may be needed for other, business-critical uses.

DHCP makes the task of network configuration a breeze. Plug a machine into a switch and fire up the OS - it obtains an IP address. Launch a virtual machine - it obtains an IP address. And if it doesn't or can't, anyone savvy enough to deploy a virtual machine is likely competent enough to set a static IP address and be on their way, heedless of the potential for conflicts with other physical and virtual servers, as well as with other pieces of critical infrastructure.
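To make the conflict risk concrete, here is a minimal sketch of how an address could be checked before it is claimed statically. It assumes the Scapy packet library and root privileges, and the candidate address is purely illustrative.

```python
# Minimal sketch: probe an IP with ARP before claiming it statically.
# Assumes the scapy library (pip install scapy) and root privileges;
# the address below is purely illustrative.
from scapy.all import ARP, Ether, srp

candidate_ip = "10.0.1.42"  # hypothetical address an admin wants to assign

# Broadcast a who-has ARP request; any answer means the IP is already in use.
answered, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=candidate_ip),
    timeout=2,
    verbose=False,
)

if answered:
    mac = answered[0][1].hwsrc
    print(f"{candidate_ip} is already claimed by {mac} -- pick another address")
else:
    print(f"No reply for {candidate_ip}; it appears to be free")
```

Of course, the point of the problem described above is that the rogue deployer rarely bothers with even this much diligence.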

AUTO DISCOVERY FOR NETWORK INFRASTRUCTURE

DHCP is innately dynamic, but it is not always well integrated with the rest of the network infrastructure. It is often integrated with DNS, to be sure, as it is frequently a DNS server that also provides DHCP services, and it is thus configured to handle the dynamic nature of changing IP addresses.
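That DHCP/DNS integration is typically built on dynamic DNS updates (RFC 2136). As a rough illustration, the sketch below uses the dnspython library to push the kind of record update a DHCP server would send when it grants a lease; the zone, host name, server address, and TSIG key are placeholders.

```python
# Sketch of the RFC 2136 dynamic update a DHCP server sends to DNS
# when it grants a lease. Uses the dnspython library; the zone name,
# host name, server address, and TSIG key below are placeholders.
import dns.query
import dns.tsigkeyring
import dns.update

keyring = dns.tsigkeyring.from_text(
    {"dhcp-update-key": "c2VjcmV0LWtleS1tYXRlcmlhbA=="}
)

update = dns.update.Update("example.com", keyring=keyring)
update.replace("rogue-vm-17", 300, "A", "10.0.1.42")  # host just leased 10.0.1.42

response = dns.query.tcp(update, "10.0.0.53")  # authoritative DNS server
print("DNS update rcode:", response.rcode())
```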

But the rest of the infrastructure is not. The application delivery infrastructure, along with other essential perimeter devices such as firewalls and security solutions, is not necessarily so actively aware of these changes. Making it aware is part of the evolution to Infrastructure 2.0 necessary for IT and data centers to adapt to this new, dynamic paradigm.

What we need, and what would alleviate some of the pain associated with rogue (and planned) VM deployments, is some kind of auto-discovery mechanism that could enable the connectivity awareness needed for infrastructure collaboration in the next-generation data center.

IF-MAP is one such solution, providing the means by which enabled clients and servers can communicate through event-driven messaging at the network level. Routers, switches, IP address management infrastructure, and perimeter security solutions could potentially take advantage of this emerging specification to "auto-discover" changes in the underlying network ecosystem and, one hopes, enforce organizational policy regarding access and assignment in the process.
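Conceptually, an IF-MAP client publishes metadata about a change - a new ip-mac binding, for example - and interested infrastructure subscribes to be notified when it happens. The sketch below is not wire-accurate IF-MAP (the real protocol is SOAP/XML over HTTPS); the MapClient class and its methods are hypothetical stand-ins used only to show the publish/subscribe shape of the interaction.

```python
# Conceptual publish/subscribe sketch in the spirit of IF-MAP. This is
# NOT a wire-accurate implementation of the TCG specification; the
# MapClient class and its methods are hypothetical stand-ins used only
# to show the shape of the interaction.

class MapClient:
    """Hypothetical IF-MAP-style client: publish metadata, notify subscribers."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        # e.g. a firewall or ADC registering interest in ip-mac changes
        self._subscribers.append(callback)

    def publish_ip_mac(self, ip, mac):
        # e.g. a DHCP server announcing a new lease
        event = {"type": "ip-mac", "ip": ip, "mac": mac}
        for callback in self._subscribers:
            callback(event)


def firewall_policy_hook(event):
    print(f"firewall: new binding {event['ip']} -> {event['mac']}, applying policy")


mapc = MapClient()
mapc.subscribe(firewall_policy_hook)
mapc.publish_ip_mac("10.0.1.42", "00:50:56:ab:cd:ef")
```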

But while IF-MAP may be one of the means through which the coming network evolution is achieved at the network infrastructure level (today's implementations are focused on security), it does not yet adequately address the challenge of managing dynamic data centers at the application level.

AUTO DISCOVERY FOR APPLICATIONS

[Image: Infrastructure 2.0 graphic]

When a server - physical or virtual - is deployed, it is rarely deployed for the sake of merely existing. It is deployed in order to run some application needed by IT or the business. The application, like its underlying infrastructure, must also be managed, which means application delivery and security infrastructure must necessarily be made aware of its existence.

We need, if you will, auto-discovery for applications.

It is rare to find auto-discovery technology that is truly "auto". Almost all implementations of such solutions, usually in the form of asset management, require some form of agent (or client-side daemon) in order for the asset to be discovered. Even SOA, with its loose-coupling and late-binding techniques, still requires some form of metadata or agent in order to properly discover services. WS-Inspection (WSIL) is most often utilized as the means to achieve this end without a separately deployed and managed agent, but most systems still rely upon an agent to "automatically" discover new applications.
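In practice that agent is often nothing more than a small daemon that announces the asset to a management endpoint at boot and on a heartbeat. Here is a minimal sketch using only the Python standard library; the registration URL and the metadata fields are hypothetical.

```python
# Minimal sketch of an agent-style discovery beacon: announce this host
# to an asset-management endpoint at startup. The registration URL and
# metadata fields are hypothetical; only the standard library is used.
import json
import socket
import urllib.request

REGISTRY_URL = "https://assets.example.com/api/register"  # placeholder endpoint

payload = {
    "hostname": socket.gethostname(),
    "ip": socket.gethostbyname(socket.gethostname()),
    "role": "web-app",      # illustrative metadata
    "platform": "virtual",
}

request = urllib.request.Request(
    REGISTRY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request, timeout=5) as response:
    print("registration status:", response.status)
```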

These techniques do not speak at all to the way in which the application delivery infrastructure, the "application intelligent" infrastructure Greg Ness often points to when discussing Infrastructure 2.0, discovers and subsequently manages dynamic application instances. It is simply assumed that it will discover the applications; how is left as an exercise for the implementers. Too often the result is a manual, after-the-fact exercise for application delivery administrators.

Providing this application intelligence in an Infrastructure 2.0 world is enabled today through the use of standards-based APIs capable of communicating with and directing application delivery infrastructure. The instrumentation of applications is necessary because managing dynamic applications is not the same as managing dynamic network resources such as IP addresses. An application is not just a node on the network; it requires specific configuration in order to be secured, optimized, and made highly available.

Sometimes those configurations can be automated through profiles, when the application is well known and understood, such as Microsoft Exchange, OWA, SharePoint, Oracle, or SAP applications. But for custom applications this is rarely possible. And regardless of the ease with which acceleration, optimization, and security policies can be applied to applications, they cannot be based on IP address or host name, because more than one application could easily reside on any given IP address or host in a dynamic data center over the course of a given day. Which brings us right back to the manual application delivery configuration processes that, like their IP- and network-oriented cousins, are an increasingly costly proposition.
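In concrete terms, directing application delivery infrastructure through an API means that when a new application instance comes online, it (or the orchestration that launched it) can be added to the right virtual server pool with the right policies attached. The sketch below assumes a hypothetical REST-style management API on the application delivery controller; the endpoint, pool name, and profile names are illustrative only and do not correspond to any specific vendor's API.

```python
# Rough sketch: register a newly deployed application instance with an
# application delivery controller via a REST-style management API.
# The endpoint, pool name, and profile names are purely illustrative,
# not any specific vendor's API.
import json
import urllib.request

ADC_API = "https://adc.example.com/api"   # hypothetical management endpoint
POOL = "sharepoint-pool"                  # pool serving this application

member = {
    "address": "10.0.1.42",
    "port": 8080,
    # Delivery policy is tied to the application, not just the address,
    # because tomorrow a different app may occupy this IP.
    "profiles": ["http-optimization", "waf-baseline"],
}

request = urllib.request.Request(
    f"{ADC_API}/pools/{POOL}/members",
    data=json.dumps(member).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request, timeout=5) as response:
    print("ADC responded:", response.status)
```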

This is a large part of what will drive the coming network evolution: the need to provide the means by which all applications can be easily instrumented such that they can be automatically discovered and managed appropriately by the infrastructure tasked with delivering and securing them. 
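One lightweight way to instrument an application for discovery is to have it expose a small self-description that delivery and security infrastructure can read and act on, so that policy follows the application rather than the IP address it happens to occupy today. Below is a minimal sketch of such a descriptor served by the application itself; the path and the descriptor fields are assumptions for illustration, not a standard.

```python
# Minimal sketch: an application serves a self-description that delivery
# and security infrastructure can discover and act on. The path and the
# descriptor fields are assumptions for illustration, not a standard.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

APP_DESCRIPTOR = {
    "name": "order-entry",
    "version": "1.4.2",
    "protocol": "http",
    "port": 8080,
    "delivery_hints": {"compression": True, "session_persistence": "cookie"},
}


class MetadataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/app-metadata":   # well-known path is an assumption
            body = json.dumps(APP_DESCRIPTOR).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MetadataHandler).serve_forever()
```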

Today it's fairly easy for administrators to manage the delivery of applications and maintain control over their deployment. But the ease with which applications can be deployed within virtual machines and in the cloud speaks to a time in the near future when application sprawl will be as large a problem as VM sprawl and PC sprawl before it, putting more urgency into the evolution of the network.

IMAGE COURTESY GREG NESS/INFOBLOX
