As a telecommuter – and one who lives in that technological mecca of the midwest, Green Bay – I don’t often get the chance to talk face to face with, well, anyone. Being conscripted into booth duty at Interop this week means I get to talk to people with real problems – the kind that can quickly bring anyone with their head in the clouds back down to earth.

Imagine, if you will, an application. A real, honest-to-goodness client-server application. Not web-based, but client-server; the kind we wrote in Delphi and Visual Basic back in the 90s. Now imagine that this application is used by customers in places across the country with names you’ve never heard of. It’s pervasive, but more than that it’s critical – and not just to the business but to people’s very lives.

Now certainly this application could move into the cloud, right? The cloud is not prejudiced in any way against client-server applications; it’s not as if “The Cloud” requires a web-based interface. Certainly this application could be moved into a cloud environment for disaster recovery and overdraft protection (which was the intention) without any massive changes.

Here’s the rub: the IP addresses of the server, a critical component to be sure, are hardcoded. 

Yeah. That’s what I said while standing at the whiteboard, trying to figure out how a traditional global load balancing infrastructure was going to deal with that, let alone how something like “the cloud” – which assumes a certain level of dynamism – would handle it.
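To make the problem concrete, here’s a minimal sketch – hypothetical names and addresses, not anything from the actual application – of why a hardcoded address defeats DNS-based global load balancing: the client never asks a question the infrastructure can answer differently.

```python
import socket

# Hypothetical legacy client, for illustration only: the server's address
# is baked in, the way plenty of 90s client-server apps did it.
LEGACY_SERVER_IP = "203.0.113.10"  # hardcoded; changing it means shipping new clients

def connect_legacy():
    # Connects straight to the literal IP. DNS is never consulted, so a
    # DNS-based global load balancer has nothing to intercept or redirect.
    return socket.create_connection((LEGACY_SERVER_IP, 9000))

def connect_modern():
    # A resolvable name is the layer of indirection that lets GSLB steer
    # traffic to a recovery site or cloud instance without touching clients.
    return socket.create_connection(("app.example.com", 9000))
```

With no name to resolve, the only remaining point of intervention is the network itself – something has to keep answering on that hardcoded address and quietly translate it to wherever the server actually lives. That’s essentially what the whiteboard exercise boiled down to.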


THE LEGACY of LEGACY

This particular conversation is a good reminder that “the cloud” just isn’t going to be the right choice for all applications. More than that, the “cloud model” itself isn’t necessarily going to be a good fit for all applications. This isn’t about control or security or compliance or features – it’s about decisions that were made 10, 20, and in some cases 40 years ago. Decisions that, once made, cannot easily be unmade, no matter how grand it might be to do so or how much sense it might make technically.

This is really about reality, and about the fact that organizations aren’t green fields; they can’t just start from scratch and build out an infrastructure that’s going to be all puppies and rainbows in the cloud. In order for organizations to move to a cloud model – or even put their applications in an off-premise cloud – there are real changes that need to be made. Changes that need to be made in such a way that the applications that cannot move to the cloud can still be supported, even as new applications run in new environments and take advantage of new architectures.

It’s about the legacy of, well, legacy applications. Applications written using technology that has long since been discarded, is no longer supported, and in some cases may be impossible to modify for lack of skills or tools. Applications for which no one can justify the cost and effort of “modernizing” but that are critical and must remain available, secure, and able to integrate with new, cloud-based applications.

Some applications aren’t going to run in a cloud no matter where that cloud may physically exist. They aren’t. But these applications still need to be supported by the infrastructure and co-exist in an environment that might also need to support cloud-based applications.

This very scenario – a hybrid environment supporting both legacy and modern applications – is where we are really headed. Because even if some day everyone’s data center ends up in the cloud, we still have to get there. And getting there means we’re going to spend some time – probably many, many years – in a world where cloud and traditional infrastructures intermingle and exist in the same space.

That’s going to mean that network and application network infrastructure has to support both types of applications at the same time. It’s going to have to enable a more careful move from one model to another while maintaining the availability and security of legacy applications. It’s going to have to be able to adapt to a data center in which multiple types of applications are delivered over the same network, and be able to deal with that in a way that’s efficient without increasing complexity.
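As a rough illustration of what “supporting both at the same time” might look like – a sketch only, with hypothetical names and addresses – the infrastructure essentially has to maintain two routing models side by side: a static one for legacy applications pinned to fixed addresses, and a dynamic one for cloud applications whose instances come and go.

```python
# A minimal sketch of hybrid-aware routing. Everything here is hypothetical;
# a real device would add health checks, load balancing, and security policy.

LEGACY_ROUTES = {
    # Hardcoded and effectively permanent: the infrastructure must keep
    # answering on this address for as long as the legacy clients exist.
    "203.0.113.10": "datacenter-rack-7",
}

cloud_registry = {
    # Populated at runtime by whatever provisioning the cloud side uses;
    # entries appear and disappear as instances are launched and retired.
    "app.example.com": ["10.0.4.21", "10.0.4.22"],
}

def route(destination: str) -> str:
    """Pick a backend for a request, honoring both models at once."""
    if destination in LEGACY_ROUTES:
        # Legacy path: static and address-based; no dynamism allowed.
        return LEGACY_ROUTES[destination]
    instances = cloud_registry.get(destination)
    if instances:
        # Cloud path: dynamic and name-based; naively picks the first
        # instance where a real device would apply smarter selection.
        return instances[0]
    raise LookupError(f"no route for {destination}")
```

The hard part isn’t the lookup; it’s honoring the static entries forever while everything around them stays fluid – without the two models tripping over each other or adding operational complexity.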

Yes, what I’m saying in a nutshell is that Infrastructure 2.0 is going to have to be backward compatible because it’s going to be a long time before we see “pure” cloud architectures.

That’s assuming we ever will. And even though we’re in Vegas where the mood of gambling seems to infect everything, that’s not a bet I’m willing to take.
