Apparently if you’re attending the USENIX Security conference (August 12-14, 2009, in Montreal, Canada) you can participate in the Security Grand Challenge. What is that, you ask? Here’s how the organizers describe it:

The concept is very simple. The participant teams will have to use their science and technical skill to create an environment where a server can function with integrity and minimum required service levels even when under attack.

On the day of the competition, each participant team will receive a virtualized server, with a number of services. The services might be implemented in different languages (e.g., C, Java, or Python) and may be web-based or stand-alone. However, each service will have a number of hidden security flaws, which have been implanted by the organizers. These flaws might be used by an attacker to disrupt the service. The services are part of a mission-critical system (e.g., a life-support system) and need to be always functioning correctly or some catastrophic event will happen.

The task of the participants is to modify and improve their servers so that they become resilient to attacks.

Now this is a very noble quest, but the focus on exploitation of vulnerabilities is too narrow, in my opinion, given the numerous ways in which a service might be disrupted without the existence of any “hackable” vulnerabilities.

There are any number of ways an attacker might render a service unavailable without hacking or exploiting a hidden – or obvious – security flaw. The very nature of network and transport protocols – from IP to TCP to HTTP – provides more than enough fodder for attackers bent on disrupting a service. The trust inherent in an established TCP connection, for example, provides ample opportunity for mischief. When that doesn’t work, the trust inherent in an established HTTP session – over which a varying but often high volume of requests is expected – can just as easily be exploited to disrupt a service.
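To make the TCP-trust point concrete, here is a minimal simulation – not real attack code, and every name in it is hypothetical – of how perfectly valid connections that simply linger can exhaust a server’s fixed pool of connection slots, turning away well-behaved clients without a single malformed packet:

```python
# Toy model: a server with a fixed pool of worker slots. Clients that
# open connections and then hold them (sending data slowly, or not at
# all) occupy slots far longer than normal clients, starving everyone
# else. No exploit, no malformed traffic -- just trusted connections.

class Server:
    def __init__(self, max_slots):
        self.max_slots = max_slots
        self.active = []          # list of [client_id, remaining_ticks]

    def connect(self, client_id, hold_ticks):
        """Accept a connection if a slot is free; otherwise refuse."""
        if len(self.active) >= self.max_slots:
            return False          # a legitimate client is turned away
        self.active.append([client_id, hold_ticks])
        return True

    def tick(self):
        """Advance time one step; finished connections free their slot."""
        for conn in self.active:
            conn[1] -= 1
        self.active = [c for c in self.active if c[1] > 0]

server = Server(max_slots=10)

# Ten "slow" clients each hold a connection for 100 ticks. Each one is
# a perfectly valid TCP connection the server has agreed to trust.
for i in range(10):
    server.connect(f"slow-{i}", hold_ticks=100)

# A normal client needing one tick of service is now refused outright.
accepted = server.connect("normal-user", hold_ticks=1)
print(accepted)  # False: the pool is exhausted by legitimate connections
```

The asymmetry is the whole attack: holding a slot open costs the attacker almost nothing, while the server pays for it with scarce capacity.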

Disrupting a service is often far easier done by exploiting the trust relationship established between client and server over a TCP connection. Why bother sending malicious requests or viruses over the connection in an attempt to exploit a potentially unpatched vulnerability when it is so much easier to slip past a service’s defenses with legitimate requests that consume resources rapidly and effectively disable the service?
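The same asymmetry applies one layer up: a request can be completely legitimate yet cost the server far more work than it cost the client to send. A hypothetical sketch (the budget numbers and request names are illustrative, not drawn from any real system) of how a handful of expensive-but-valid requests can consume a work budget that would otherwise serve dozens of ordinary ones:

```python
# Each request is well-formed and legitimate; some are just expensive
# to serve (think unbounded search queries or report generation). A few
# of them can drain the server's work budget before normal traffic is
# handled. Costs and budget are assumed values for illustration.

SERVER_BUDGET = 1000   # units of work available per interval (assumed)

def serve(requests, budget=SERVER_BUDGET):
    """Process requests in arrival order until the work budget runs out."""
    served, dropped = [], []
    for name, cost in requests:
        if cost <= budget:
            budget -= cost
            served.append(name)
        else:
            dropped.append(name)
    return served, dropped

# Five expensive-but-valid requests arrive ahead of fifty ordinary ones.
attack = [("search-all", 200)] * 5
normal = [(f"page-{i}", 10) for i in range(50)]

served, dropped = serve(attack + normal)
print(len(served), len(dropped))  # 5 expensive requests served, all 50 normal ones dropped
```

Nothing in any single request looks hostile; only the cost profile of the traffic as a whole reveals the problem.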

The belief that security is about patching vulnerabilities is shortsighted. Patching is certainly part of the equation, but it’s not the whole story. Every layer of the network and application stack can be exploited without a specific vulnerability existing and, in fact, there is often no solution available to address a particular “vulnerability” in the network stack. That’s because in some cases, such as layer 7 DoS attacks, nothing in the packets or data flow indicates an attack at all. Detecting such stealthy attacks requires much higher-level visibility: an external device capable of viewing all ongoing communication between clients and servers, so that it can perceive the anomalies that indicate an attack is occurring.
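Because each individual request is well-formed, a layer 7 DoS only shows up in the aggregate: some client’s request rate sits far outside the norm. A minimal sketch of the kind of cross-connection visibility described above – flagging clients whose request counts deviate sharply from the population mean – with the threshold and all traffic figures being assumptions for illustration:

```python
# Simple statistical anomaly detection over aggregate traffic: no single
# packet is suspicious, but one client's request count is an extreme
# outlier relative to the whole population. The k=3 threshold is an
# arbitrary illustrative choice, not a recommendation.

from statistics import mean, stdev

def flag_anomalies(requests_per_client, k=3.0):
    """Return clients whose request count exceeds mean + k * stdev."""
    counts = list(requests_per_client.values())
    mu, sigma = mean(counts), stdev(counts)
    threshold = mu + k * sigma
    return [c for c, n in requests_per_client.items() if n > threshold]

# Thirty normal clients making 20-24 requests each, plus one hot client.
traffic = {f"client-{i}": 20 + (i % 5) for i in range(30)}
traffic["attacker"] = 500

print(flag_anomalies(traffic))  # ['attacker']
```

This is only a sketch of the principle; a real device would baseline rates over time and per-URL cost, but the point stands that the signal lives in the traffic as a whole, not in any one connection.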

So while the quest for an “unhackable” server is laudable – and a lack of vulnerabilities in applications and servers would certainly go a long way toward shoring up any application’s defenses – let’s not forget that application and service security requires a much broader interpretation of “vulnerability” if we are to understand what is necessary to prevent disruption of services.

Believing that patching application, language, and server vulnerabilities makes a service secure against disruption is a rat hole down which it would not be prudent to go. An unhackable server is still a vulnerable server.

