I was chatting with my mother a couple of weeks ago about cloud (she’s a used-to-be programmer turned project manager for a Fortune 500. Don’t look at me like that, I keep telling you it runs in the family) and one of the problems she lamented was that folks don’t seem to understand how entrenched COBOL and the mainframe are in the organization. They’re so entrenched that, given the choice between a client-server application and a COBOL application that did the same thing, the organization chose the COBOL program because it was less expensive and they had the skills on staff (COBOL, mainframe) to deal with it.

This was recently. Like, this year. Like in the past couple of weeks.

So when folks start preaching about moving applications and, in some cases, the entire infrastructure to “the cloud”, I laugh. Ruefully. ’Cause the core of their critical systems runs on the mainframe and is written in COBOL, and it’s not going anywhere.

As @jamesurquhart points out not-so-delicately via Twitter, forklift replacement has never worked. Neither has “rip and replace” or any other “all or nothing” approach to change in IT.

Rather, new technology is often used for new applications where it makes sense, just as @monkchips explains was his initial intent. So yes, this Fortune 500 company is going to experiment with cloud bursting, and it also has a wide variety of web services, leverages Java and COBOL at the same time, and implements some web applications using .NET. There’s something for everyone in its data center.


The reality is that, like the Fortune 500 my mother works for, organizations have a dizzying array of architectures and technologies mashed up together to support their business. COBOL applications, client-server, web applications, web services, and cloud are going to continue to work together in the same data center for the foreseeable future. The prediction that no one will need to run their own data center because of cloud is simply a pipe dream.

That means network and application network infrastructure has to support a wide variety of technology and architectures, and it must do it simultaneously. Even organizations who plan to move everything to a cloud environment – internal or external – aren’t going to do so “overnight.” There is no rip-and-replace for cloud, and even if there were it wouldn’t likely be a painless transition. So organizations are going to move slowly, as they are always wont to do, and need an infrastructure that is flexible enough to support whatever is thrown at it.

Unified application delivery and data services is one of the ways in which an organization can gain the agility necessary to move toward a hybrid – or pure if that’s the goal, though I doubt it – environment. The ability to integrate (collaborate) with applications and other systems in the overall IT ecosystem means that a unified infrastructure solution can allow IT to benefit both traditional and emerging data center models at the same time.

As organizations try to move toward a more efficient, virtual infrastructure there needs to be a layer of abstraction in the infrastructure that insulates users and in some cases systems from the rapid rate of change that occurs as architecture is changed. Unified application delivery provides that abstraction by supporting virtual application services at the same time as it supports traditional application services. Indeed, unified application delivery can support an application comprised of both virtual and physical services at the same time in sort of an “internal cloudbursting” architecture.

Consider that you may have a dedicated set of physical resources for an application, but need to meet seasonal or event-based demand. A unified application delivery solution provides the “interface” to the customer/user for the application on the physical resources, but can also just as easily handle additionally provisioned virtual resources necessary to meet that demand.
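The scenario above can be sketched in a few lines of Python. This is a minimal, illustrative model only – the class and member names are my own invention, not any particular vendor’s API – showing a unified pool that always fronts a fixed physical tier and grows or shrinks a virtual tier as demand crosses physical capacity:

```python
class BurstPool:
    """A toy 'internal cloudbursting' pool: clients see one unified
    set of members, regardless of whether each is physical or virtual."""

    def __init__(self, physical, capacity_per_node):
        self.physical = list(physical)     # dedicated, always-on servers
        self.virtual = []                  # provisioned only on demand
        self.capacity = capacity_per_node  # requests one node can absorb

    def members(self):
        # The "interface" to the user: one pool, origin irrelevant.
        return self.physical + self.virtual

    def handle_demand(self, active_requests):
        # How many nodes does current demand require? (ceiling division)
        needed = -(-active_requests // self.capacity)
        burst = max(0, needed - len(self.physical))
        # Grow the virtual tier to cover the shortfall...
        while len(self.virtual) < burst:
            self.virtual.append(f"vm-{len(self.virtual)}")  # stand-in for real provisioning
        # ...and drain it again when demand subsides.
        del self.virtual[burst:]
        return self.members()

pool = BurstPool(["web1", "web2"], capacity_per_node=100)
print(len(pool.handle_demand(150)))  # baseline: physical tier suffices -> 2
print(len(pool.handle_demand(450)))  # seasonal spike: 3 virtual members join -> 5
print(len(pool.handle_demand(120)))  # spike over: back to physical only -> 2
```

The point is the shape, not the arithmetic: the client-facing view (`members()`) never changes, while the composition behind it shifts between purely physical and mixed physical/virtual.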

That’s just one niche case where the traditional architecture mingles freely with emerging data center models and needs to be supported as a single, unified resource. Imagine this now as a step toward a fully on-demand architecture: you can replace the traditional resources with on-demand ones without any disruption of service to clients, customers, or partners. Folks familiar with SOA will recognize this concept, as will developers familiar with polymorphic methods: abstraction in the infrastructure works the same way and provides many of the same benefits.
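For the developers in the audience, the polymorphism analogy looks like this (a sketch with made-up names, not a real product API): the caller codes against one interface, and the implementation behind it can be swapped – physical today, on-demand tomorrow – without the caller changing at all.

```python
class AppService:
    """The abstraction the client codes against."""
    def serve(self, request):
        raise NotImplementedError

class PhysicalService(AppService):
    # Traditional, dedicated infrastructure behind the interface.
    def serve(self, request):
        return f"physical:{request}"

class OnDemandService(AppService):
    # On-demand (cloud) infrastructure behind the same interface.
    def serve(self, request):
        return f"on-demand:{request}"

def client(service: AppService) -> str:
    # The client never changes, whichever implementation sits behind it.
    return service.serve("GET /")

print(client(PhysicalService()))   # physical:GET /
print(client(OnDemandService()))   # on-demand:GET /
```

Swapping `PhysicalService` for `OnDemandService` is exactly the kind of change an abstraction layer in the infrastructure absorbs on behalf of clients.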

It is not realistic to expect that existing data centers will ever fully move to an external cloud nor will they internally ever become fully “cloudized”. It’s just not feasible given the vast array of technology and architectures that have been put into place over the past fifty years. But certainly some aspects of the data center will move to new models, and the infrastructure must be able to support both at the same time. Even better if the infrastructure can abstract resources of any kind to provide a unified view of the applications which IT is tasked with delivering and securing.

