There is a common myth that the reason legacy code continues to run in businesses around the world is that no one understands it; that IT and businesses are afraid to replace it because they don’t know what it does.

Once again, living in the mainframe capital of the world (the insurance-industry-heavy Midwest), I get to talk to IT folks who deal with legacy software and hardware all the time. Do not doubt that they know exactly what that legacy software does and how it works, and, perhaps frighteningly to proponents of change and the benefits of emerging technology, those IT organizations are still developing software for those legacy platforms.


THE TIGHT COUPLING OF SOFTWARE AND THE BUSINESS

I’m sure there are plenty of organizations out there with some legacy software running that no one understands and for which the organization has no one with the skill set to migrate to a more modern system. But I’m of the opinion that there are far more organizations out there with legacy software running that has simply become too expensive and laden with risk to migrate. Software that is core to the business, that essentially has become the business over time, carries with it an exceedingly high risk: if it fails, the business fails, and in a way that’s very real and very costly. Critical systems accumulate a high level of not only software integration but business integration over time, and it is that integration and dependence on specific legacy systems that deters organizations from even considering a migration to a more modern system.

These IT organizations know what the legacy software does; in fact, they continue to develop new functionality and systems that integrate with and depend on those systems, and they invest in human capital by specifically training new developers and architects on those systems and legacy platforms. The original developers have long since moved on and up, becoming project and program managers who are more than willing to evaluate new platforms and systems but who also understand the risks to the operational effectiveness of both IT and the business should something go wrong.

Legacy systems developed in the early 1970s are still being maintained because the risks outweigh the potential benefits. These decades-old systems are so integral to the continued operation of IT and the business that the risk inherent in migration is simply too high to justify such an undertaking. With so much time and money invested in these systems, they are as close to perfection as software gets, and any migration to new platforms runs the risk of introducing errors and new flaws that would need to be worked out over time. Additionally, every integrated system would need to be updated, which carries the risk that those systems, too, would develop errors and issues. It’s mind-boggling to consider the effort that would be required to accomplish such a Herculean task.


IF IT AIN’T BROKE, DON’T FIX IT

That’s not to say these organizations have not invested in new architectures, solutions, and platforms over time. Indeed, they have, and there are plenty of heterogeneous environments out there with a good mix of both legacy and modern software not only in production, but integrated and orchestrated together in a big melting pot of application development environments. Modern systems are used to interface with users and customers, but the core software upon which these businesses rely is ancient legacy software that may never be replaced. As long as the supplier of both the hardware and the software development environments continues to support it, there’s no reason for these organizations to change.

With many organizations wholly reliant on legacy software, it is far more likely that comfort, rather than ignorance, drives continued reliance on these systems. It is exactly because they understand and trust the systems in place that they continue to build on them, to integrate them, and to rely upon them to power their businesses day in and day out.

It is hard to justify migrating an entire business to a new platform and new software that might be error-prone and would require millions of man-hours to migrate when a comfortable system that just works already exists. The cost of training new developers and architects is by and large a smaller investment than attempting to re-architect an entire ecosystem of applications while that ecosystem is evolving. Because of the time it would take to migrate such systems, and to prove their correctness, there is no way to “go dark” until it’s done; the migration would have to be accomplished while new systems are being integrated and put into place. You’d almost need two IT organizations to get it done: one to work on and maintain the old architecture and one to build out and migrate to the new one. And that’s not counting the costs of investing in a completely new architecture requiring new platforms, new hardware, new software, and new developers.


SOA ENABLED CONTINUED RELIANCE

What’s ironic is that SOA was purported to provide the means by which migrations could occur but instead enabled continued reliance on legacy systems. Organizations implemented web services interfaces to legacy systems but never took the next step: they did not use the inherent decoupling of interface from implementation provided by SOA and its standards to replace the implementation. Instead, web services became exactly what some feared: little more than a method of integration, a bridge between two worlds. And thus it has remained. With service-enabled interfaces it is easy enough for organizations to update presentation-layer and user-interface technology and take advantage of emerging web application models without giving up the comfort and trust they have in their core systems.
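To make that decoupling concrete, here is a minimal sketch in Java. All names here (PolicyLookupService, LegacyPolicyLookupService, and so on) are hypothetical and stand in for whatever the real service contract and mainframe bridge would be; the point is simply that consumers bind to the contract, and the implementation behind it could in theory be swapped without anyone downstream noticing. In practice, most organizations stopped at the facade.

// A minimal sketch of the decoupling SOA promised: callers depend on the
// service contract, not on the implementation behind it. All names are
// hypothetical, not taken from any real system.

interface PolicyLookupService {
    String lookupPolicyHolder(String policyNumber);
}

// Facade over the existing legacy system. In practice this is where a web
// service would bridge to the mainframe (message queue, transaction
// gateway, screen-scraping, etc.).
class LegacyPolicyLookupService implements PolicyLookupService {
    @Override
    public String lookupPolicyHolder(String policyNumber) {
        return "holder-from-legacy-system:" + policyNumber;
    }
}

// A replacement platform could implement the same contract, letting the
// implementation be swapped without touching any consumer of the service.
class ModernPolicyLookupService implements PolicyLookupService {
    @Override
    public String lookupPolicyHolder(String policyNumber) {
        return "holder-from-new-platform:" + policyNumber;
    }
}

public class ServiceFacadeDemo {
    public static void main(String[] args) {
        // Consumers only ever see the interface; the binding below is the
        // single point that would change in a migration.
        PolicyLookupService service = new LegacyPolicyLookupService();
        System.out.println(service.lookupPolicyHolder("POL-12345"));
    }
}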

Even if these organizations started a migration today, it would still be years if not decades before every legacy software system was replaced with something more “modern”. That’s why it’s important, amidst the hype of cloud computing and social networking and web 2.0/3.0/4.0, that we not lose sight of the fact that not every organization is simply going to rip and replace its entire architecture in favor of the latest and “greatest” new data center model. We need to continue to support internal hybrid models of application architecture as well as hybrid data center architectures. In the application delivery space this is easier than it might be for some: applications are applications, protocols are protocols, and network traffic is network traffic. Application delivery platforms, at least, are capable of supporting both legacy software and modern implementations at the same time, on the same solution, with equal alacrity. Whether that enables organizations to move from legacy to modern, or offers a means by which more modern technology can be applied to legacy systems without requiring modification to those applications, is up to the organization.

Legacy software isn’t going away, and, whether we like it or not, the number of legacy systems is growing each year because organizations can’t afford the risk to their business and can’t justify the investment to change. So even as we continue to look forward to how emerging data center and application architectural models can provide benefits and solve problems, we need to continue to evaluate how to support aging technologies and provide them, as well, with the tools necessary to keep their applications secure and available.
