Risks with virtualization: the same as they ever were, but different

Hoff made a good point about cloud security last month in his “The Cloud is a Fickle Mistress: DDoS&M,” which was, if I may quote, that “it’s the oldies and goodies that will come back to haunt us.” In other words, it’s the well-known, well-understood protocol-based attacks of uncloud computing that will be problematic for cloud computing.

Security in virtualized environments and “the cloud” is indeed the “same as it ever was.” And yet it’s different, too.


While it’s true that the oldies and goodies are likely the vulnerabilities with which we need to be most concerned, it is the shared nature of cloud computing that frightens organizations, especially when considering the oldies and goodies based on old skool protocols and daemons and concepts that operate beneath the visible layers of the application. It is exactly the possibility of planting a backdoor, a denial of service, or a malware infection at the underlying system layers that makes cloud computing and security inherently more scary.

Consider that your application may be running in a virtual container on the same machine as three other applications from three different organizations. Consider then that a denial of service directed at the core network layer (and thus the operating system) may be successful. The point of a denial of service is to consume so many resources – network, CPU, memory – that other processes, like your application, starve and are unable to execute.
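The starvation effect described above can be sketched with a toy scheduler. This is purely illustrative – the tenants, the fixed pool of “CPU units,” and the naive first-come, first-served allocation are all invented for the example – but it shows how one flooded tenant on shared hardware can leave the others with nothing:

```python
# Toy illustration (not a real attack): four tenants share one fixed pool
# of "CPU units" per scheduling interval. When one tenant is flooded, it
# drains the shared pool and its neighbors starve.

def schedule(requests, capacity):
    """Naively serve tenant requests in order until shared capacity runs out."""
    served = {}
    remaining = capacity
    for tenant, amount in requests:
        granted = min(amount, remaining)
        served[tenant] = granted
        remaining -= granted
    return served

# Normal day: each tenant asks for a modest share of the 100-unit pool.
normal = schedule([("a", 20), ("b", 20), ("you", 20), ("d", 20)], capacity=100)

# Attack day: tenant "a" is the DoS target and its requests consume the
# entire pool first; "you" gets nothing despite never being the target.
attack = schedule([("a", 100), ("b", 20), ("you", 20), ("d", 20)], capacity=100)

print(normal)  # every tenant gets its 20 units
print(attack)  # "you" is granted 0 units
```

Real hypervisors schedule far more intelligently than this, of course, but the collateral-damage dynamic is the same: contention for a shared resource doesn’t care who the intended victim was.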

An attack targeted at the underlying systems may be capable of achieving that goal. Your application is, essentially, collateral damage. Your application wasn’t the target, but because of the shared nature of cloud computing you were just too close to the intended target.

Similarly, the possibility that an attacker may be able to compromise the underlying virtualization infrastructure or operating systems means that it is possible to infect or otherwise compromise the applications deployed on that infrastructure. Could be your application; could be someone else’s application. The point is that if the underlying infrastructure is compromised, the possibility exists that your application will somehow be affected. And likely not in a positive way. And just imagine what happens when attackers figure out how to break out of containment, attacking through an application and tunneling into the underlying system through some as-yet-unknown hole in virtual containers. The underlying hypervisor could be compromised, along with the operating system and every application that makes use of that physical machine’s compute resources, until the exploit is discovered and remediated.

The introduction of the virtualization layer into cloud computing architectures opens up some very interesting possibilities in terms of attack vectors, many of which have not yet been fully considered, explored, or even understood. It is in part this “unknown” that gives many organizations pause to consider whether “security” in the cloud is “good enough” yet for them to entrust providers with critical business applications.


Securing applications against known vulnerabilities is easy. We know how to find them, we know how to fix them, we know how to mitigate them through a variety of mechanisms: secure coding, virtual patching, application firewalls, protocol security, host-based security, etc. But we don’t know so much about virtual environments and their vulnerabilities yet. It is that unknown that should give us pause to consider what we’re doing.
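One of those mechanisms, virtual patching, can be sketched in a few lines. This is a deliberately naive illustration, not a real WAF rule set – the signatures and request format are invented – but it captures the idea: block requests matching known-bad patterns before they ever reach the application, without touching the application’s code:

```python
import re

# Toy sketch of "virtual patching": screen incoming request strings against
# signatures of well-known ("oldies-but-baddies") attacks. The patterns
# below are simplistic placeholders, not production-grade rules.
KNOWN_BAD = [
    re.compile(r"(?i)<script"),          # naive XSS probe
    re.compile(r"(?i)union\s+select"),   # naive SQL injection probe
    re.compile(r"\.\./"),                # path traversal attempt
]

def virtual_patch(request):
    """Return True if the request matches a known-bad signature and should be blocked."""
    return any(sig.search(request) for sig in KNOWN_BAD)

print(virtual_patch("/search?q=widgets"))            # False: allowed through
print(virtual_patch("/search?q=1 UNION SELECT pw"))  # True: blocked
print(virtual_patch("/files/../../etc/passwd"))      # True: blocked
```

The value of this approach in a virtualized environment is exactly the point of the paragraph above: the screen sits in front of the application, so it keeps working no matter which window – old or new – the attack came in through.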

As Hoff points out, it’s likely that most attacks are going to be “traditional,” i.e. well-known, well-understood, mitigatable attacks. The oldies-but-baddies. Given that, it makes sense to ensure that applications in any virtualized infrastructure are secured against those attacks. Doing so means that even if someone finds a new way to execute the attack, your infrastructure is still well protected.

Treat virtual environments (for now, at least) like a newfangled window. If you’ve already implemented a security system based on motion detection, it likely doesn’t matter whether an intruder came in through the new window or an old one. The system will still detect the motion and an alarm will still go off. You’re protected against the “unknown,” in a way, because you’re protected against the already known.

Ultimately we want to find a way to stop people from coming in the new window, but until we know how they might abuse that window, we have to content ourselves with being protected against the attack. Eventually virtualization and its supporting technology may give rise to new, more complex attacks. But at this point it is merely another attack surface through which the oldies-but-baddies can be executed.

So if you’re protected against the oldies-but-baddies, then you’ve mitigated the risk as much as you can at this time. And in the security game, that’s about as much as you can do.

