Bob owns a widget shop. Now this widget shop is not your ordinary widget shop, because the widgets are made from Swarovski crystal. Very expensive stuff. Bob is aware that losing any number of his widgets would be financially devastating, and the negative press he'd receive would darken his shop's reputation. So he's invested in a very modern physical security system that utilizes electronic locks on all the doors, and includes all the newest laser motion detection technology. It's further connected to a monitoring service just in case, so he'll know if security has been breached and can immediately react.

One night the system fails. But because Bob didn't want the system to hamper his ability to serve his customers, he insisted the entire system fail open. That's right, when the electronic lock system failed all the doors automatically unlocked, the laser motion detection system shut down, and the monitoring service was disconnected.

If you're thinking that's the dumbest thing you've heard in a while, you wouldn't be alone. It's highly unlikely that anyone is going to demand that a physical security system - whether it protects a storefront, a car, or a data center - fail open. Anyone who does demand this arguably crazy security scheme is going to be labeled a nut-case. And when the system does fail and Bob's precious widgets are lost, we're likely to shake our heads and say, "He was asking for it with that crazy fail-open security system."

But we might be hypocritical in saying such a thing, because it's likely that in our own security implementations we've demanded products that fail open.

It's a common question to ask of vendors whose products are deployed transparently or as proxies: do they fail open or fail closed? And in many cases we want the answer to be "fail open" rather than "fail closed".

Granted, there are some cases where we do want a product to fail open because its loss will not negatively affect the security of our systems. Monitoring solutions - those deployed transparently to collect data and report on it - should fail open. The loss of data and visibility is regrettable, but it won't affect the security of our data and applications. Web analytics solutions, too, should probably fail open. Though we like our statistics and campaign management functionality, no one is going to be adversely affected if they fail, unless their failure prevents customers and users from accessing an application. The answer: fail open.

But when you start talking about security solutions - firewalls, web application firewalls, authentication and authorization systems - it's a very different story. A story that ends with your crystal widgets being stolen if one of those security systems fails open. Or worse, your customers' crystal widgets.
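The difference between the two policies comes down to a single design decision: what happens when the inspection component itself breaks? A minimal sketch, in Python, with every name (inspect, FirewallError, the handlers) purely hypothetical for illustration:

```python
class FirewallError(Exception):
    """Raised when the inspection engine itself fails."""


def inspect(request):
    # Stand-in for a real inspection engine; here it simply fails,
    # simulating the outage scenario discussed above.
    raise FirewallError("inspection engine down")


def handle_fail_open(request):
    # Monitoring-style policy: if inspection breaks, let traffic through.
    # Visibility is lost, but users are unaffected.
    try:
        return inspect(request)
    except FirewallError:
        return "ALLOW"


def handle_fail_closed(request):
    # Security-style policy: if inspection breaks, block traffic.
    # Availability suffers, but the widgets stay protected.
    try:
        return inspect(request)
    except FirewallError:
        return "DENY"


print(handle_fail_open({"path": "/widgets"}))    # ALLOW
print(handle_fail_closed({"path": "/widgets"}))  # DENY
```

Note that nothing else about the two handlers differs; the exception branch alone decides whether a failure opens the doors or locks them.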

We tend not to view network and application security systems in the same light as physical security. Perhaps that's because if someone physically steals a widget, it's gone. That's not true of digital data: a thief generally steals a copy of the data but rarely manages to destroy or take the original. So you still have all your widgets, and so do your customers, and so do the bad guys. A digital theft is more easily recoverable than a physical theft from Bob's widget shop.

But let's not forget the legal ramifications of such a breach. There are very real consequences for executives of organizations that fail to properly secure customer widgets (data) and protect them from theft. HIPAA, SOX, SB1386, Basel II. The list of regulations attempting to force sanity into the protection of digital data seems to grow every year, and the consequences seem to be growing more dire as well.

Bob and his colleagues would never consider implementing a security system that fails open. It's too much of a risk.

So why do we?
