I was doing a video blog last night and got to pondering the state of project management these days. With the PMI and several other groups trying to rein in the beast that is runaway projects, you’d think we’d have come further than we have.

The thing that got me thinking about this – and the corollary I came up with later – is that projects follow a predictable pattern. Staff who are already very busy with five projects more important than yours (or that got into the queue before yours, however your organization ranks them) ignore your project until the due date for their part is near, then scurry to put their piece into place. Eventually, someone’s scurrying doesn’t get them to the finish line, or the finish line is moved before they get there – or the lines painted on the track veer off in their own direction, away from the finish line entirely. And then you’re behind.

Straight-up, I do not think this is a staff problem – when your desk is piled with projects, only the stuff due yesterday gets done – nor do I view it as a problem with the business. We get paid a lot of money to complete projects, and it behooves our employer to make certain we’re not sitting around playing WoW waiting for the next one – indeed, if you often find yourself with free time, I would be looking for a new job. This is the nature of the beast, to some extent. Even if you plan out three months’ worth of projects so there’s time to get things done, the unforeseen will bite you in the rear.

And that’s where my corollary comes in. I’ll be expounding upon it in the near future, but the simple fact is that our systems are outrageously complex, which means there will be unforeseen problems. In fact, I’m pondering the possibility that the best sales pitch for a cloud vendor is “let us worry about the complexity, you worry about your application!” because we’re so far into the uber-complex that sometimes we don’t even notice it. There are thousands of circuits on a single chip in your server. There are hundreds, maybe thousands, maybe tens of thousands of computers in your building. They all have switches and routers, cables running here and there, overlapping wireless access points, a connection to the Internet that boils down to a couple of wires, Application Delivery Controllers directing traffic, millions of lines of code running in your data center before you deploy a single application… and that doesn’t even touch on security products. We should be grateful that the entire complex system works at all, not surprised when there is an unforeseen problem.

Add to this the fact that there is constant change in the data center. Products with moving parts tend to break more than products without, yet your data center is constantly in flux. Even in these belt-tightening times, I’ll bet money you’ve got massive upgrades of several systems on the schedule for 2009. That’s the IT equivalent of moving parts. It’s far easier to run into integration problems when integration is a constantly moving target.

We used to “pad our estimates” to account for this Data Center Uncertainty Principle; now most project management methodologies try to compensate by putting in milestones with dead space after them – which is really just an advanced estimate-padding methodology, right? Granted, it’s more accountable and granular because it’s part of the plan, but it’s not significantly different. Some project managers still pad, but go about it in a different manner – telling the staff working on the project one due date, and the people waiting for completion a different, somewhat later one.
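To make that comparison concrete, here’s a minimal sketch – task names and percentages are mine, purely illustrative – showing that padding each estimate and parking the same slack as dead space after a milestone produce the same total schedule; the milestone version just makes the slack visible:

```python
# Illustrative only: per-task estimate padding vs. an explicit
# milestone buffer ("advanced estimate padding").
tasks = {"design": 10, "build": 15, "integrate": 8}  # honest estimates, in days

# Old school: quietly pad every estimate, e.g. by 25%.
padded = {name: days * 1.25 for name, days in tasks.items()}
padded_total = sum(padded.values())

# Milestone style: plan against honest estimates, then schedule the
# same 25% of slack as explicit dead space after the milestone.
buffer_days = sum(tasks.values()) * 0.25
milestone_total = sum(tasks.values()) + buffer_days

print(f"padded schedule:    {padded_total:.2f} days")     # 41.25
print(f"milestone schedule: {milestone_total:.2f} days")  # 41.25
# Same total either way; the buffer is simply accounted for in the
# plan instead of being hidden inside each task's estimate.
```
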

I don’t have the easy answers. We’re working in a highly complex environment, and even if a purchased product works as advertised or a developed app is perfectly written to spec, there will always be infrastructure and integration issues. Perhaps purchasing your network from a single vendor, the same way you buy a computer, would have worked 20 years ago, but not likely today – it’s cheap to switch laptop vendors and replace users’ machines as they turn over; it’s much more difficult to replace your entire network. This is the model that has SAN vendors in so much trouble right now – you can replace your SAN vendor, but the cost is prohibitive, so many companies choose to forgo SANs altogether and go with NAS, possibly with something like ARX in front of it.

My best advice, though some will vehemently disagree with me, is to follow a set of standards like PMI’s and do post-project analysis of both failed and overly-successful projects. Then at least you can get a feel for what worked and what did not. Another bit of advice is to simplify. Our products simplify your environment by virtualizing network objects and handling content-layer routing, or by making all of your NAS resources appear as a single directory tree without terribly obfuscating the back end. Other vendors’ products, as I’ve mentioned before, offer unused functionality already sitting in your data center that’s worth exploring.

In the end, delivering to the business is why IT exists. Lots of IT staff don’t appreciate that, and lots of business people don’t appreciate that most IT staff really try. Make certain you’re trying, make certain your customers know you’re trying, and that’s about the best you can do in the current state of affairs. Making it a point to help the business understand the limitations and risks is huge too – that’s what our IT staff did up front on our recent DC-China project, and we were then able to plan around them. It certainly earned our respect, and if you watch the RealIT video series, you’ll think so too when that part comes up.

That’s it for now, more to come on this topic though!