You can put technology in place to mitigate and defend against the effects, but you can’t stop the attack from happening.

In the wake of attacks that disrupted service to many popular sites in December, the question on many folks’ minds was: how do you prevent such an attack?

My answer to that question was – and continues to be – you can’t. You also can’t prevent an SQLi attack, or an XSS-based attack, or a DDoS directed at your DNS infrastructure. You cannot prevent an attack any more than you can prevent a burglar from targeting your house. You can make it less appealing to them, you can enact policies that make it less likely that an attack (or the burglar) will be successful, but you can’t stop either from trying in the first place.

The only thing you can do is try to mitigate the impact, to manage it, and to respond to it when it does happen.

In the past, infrastructure and operating systems evolved to include defenses against typical network-based attacks, i.e. flooding-type attacks that attempt to consume resources through massive amounts of traffic directed at a particular service. DNS (remember, taking down Twitter was as easy as D.N.S.), the network stack, and other core services have all been – and will continue to be – targets of these attacks. But with most infrastructure now able to detect and mitigate their impact, they are less and less effective. The increase in bandwidth availability and Moore’s law have combined to create a formidable defense against flooding-based attacks, and thus sites of significant size are able to fend them off.

The more recent application-layer attacks – those directed at application protocols like HTTP – however, are neither as common nor as easily detected as their flooding-based predecessors. Network infrastructure is rarely natively capable of detecting such attacks, and many network components simply lack the awareness necessary to detect and mitigate them. This potentially leaves application infrastructure vulnerable and easily disrupted by an attack.

THE BEST DEFENSE IS A GOOD OFFENSE

Attacks that take advantage of the protocol’s native behavior, i.e. slow HTTP attacks, are the most difficult to address because they aren’t easily detected. No matter what your chosen method of dealing with such attacks may be, you’re still going to need a context-aware solution. That’s because defending against such attacks requires recognizing the attack, which means you must be able to examine the client and its behavior to determine whether or not it’s transferring data at a rate commensurate with its network connection.

For example, if a client’s network connection – as determined during the TCP handshaking process – is clearly high-speed, low-latency broadband, there is little reason its transfer rate should suddenly drop to that of a 1200 baud modem. It could be a sudden change in network conditions, but it may be the beginning of a denial of service attack. How you decide to deal with that situation may depend on many factors and may include a data center pow-wow with security and network operations teams, but in order to get to that point you first have to be able to recognize the situation – which requires context-aware infrastructure.
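That kind of context-aware check can be sketched in a few lines. This is a minimal illustration, not any product’s actual heuristic: the function names, the RTT-based capacity estimate, and the suspicion threshold are all assumptions chosen for the example.

```python
# Illustrative sketch: flag clients whose observed transfer rate is wildly
# out of line with what their connection should sustain.

def estimate_capacity_bps(rtt_ms: float) -> float:
    """Rough link-class estimate from handshake round-trip time.
    Low-latency connections are assumed to be broadband-class.
    These tiers are illustrative assumptions only."""
    if rtt_ms < 50:
        return 10_000_000   # broadband-class
    if rtt_ms < 200:
        return 1_000_000    # mid-tier link
    return 56_000           # high-latency / constrained link

def looks_suspicious(rtt_ms: float, observed_bps: float,
                     ratio_threshold: float = 0.001) -> bool:
    """Flag a connection transferring at a tiny fraction of its estimated
    capacity, e.g. a broadband client trickling data at modem speed."""
    capacity = estimate_capacity_bps(rtt_ms)
    return (observed_bps / capacity) < ratio_threshold
```

A 10 ms-RTT broadband client trickling 1200 bps would be flagged, while the same 1200 bps over a genuinely slow, high-latency link would not – which is exactly the distinction that requires context.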

One of the ways you could deal with the situation is to start manually dropping those connections and perhaps even blacklisting the IP addresses from which the connections are initiated for a period of time. That would alleviate any potential negative impact, but it’s a manual process that takes time and it may inadvertently reject legitimate connections.
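The blacklisting approach described above can be sketched as a simple expiring block list. The class name and the expiry window are illustrative assumptions; the sketch also shows the drawback noted in the text, since any legitimate client behind a flocked IP (a shared NAT, for instance) is rejected for the duration of the block.

```python
import time

class TemporaryBlacklist:
    """Time-limited IP block list: entries age out after a TTL.
    Illustrative only; real deployments would do this at the
    firewall or load balancer, not in application code."""

    def __init__(self, ttl_seconds: float = 600.0):
        self.ttl = ttl_seconds
        self._entries = {}          # ip -> expiry timestamp

    def block(self, ip: str, now: float = None) -> None:
        now = time.time() if now is None else now
        self._entries[ip] = now + self.ttl

    def is_blocked(self, ip: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        expiry = self._entries.get(ip)
        if expiry is None:
            return False
        if now >= expiry:           # entry aged out; allow the IP again
            del self._entries[ip]
            return False
        return True
```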

A better solution – the only real solution at this point in time – is to mitigate the impact of those slow transfers on the application infrastructure; on the web and application servers that are ultimately the target of such slow HTTP-based attacks. To do that you need a mediating technology, an application front-end, if you will, that acts like an offensive guard in front of the web and application servers and protects them from the negative impact of the exploitation of HTTP.

An offensive line in football is, after all, very defensive in purpose. Its goal is to keep the opposing team from reaching the quarterback (the application). But it doesn’t just stand around repelling attacks; it often dynamically adjusts its position and actions based on the attackers’ moves. If the offensive line fails, the quarterback is often sacked or fails to successfully complete the play, which can result in the offense going nowhere – or losing ground. That’s a lot like applications and DDoS. If the infrastructure fails to meet an attack head on and adjust its defense based on the attack, the result can be a loss of resources, and the application is unable to complete the play (response). In football you often see the attackers coming, but as with slow HTTP-based attacks, the offensive line can be blindsided by an attacker sneaking around and flanking the quarterback.

As it turns out, a slow HTTP-based attack is more like a defensive lineman masquerading as part of the offensive line and sneaking through. You don’t recognize he isn’t part of the team until it’s too late.

EVERY REQUEST COULD BE PART OF AN ATTACK – OR NOT

The reason slow HTTP-based attacks are so successful and insidious is that it is nearly impossible to detect them, and even more difficult to prevent them from impacting application availability, unless you have the proper tools in place. The web or application server doesn’t recognize it is under attack; in fact, it sees absolutely nothing out of the ordinary. It can’t recognize that its queues are filling up, and even if it could, what could it do about it? Drop the connection? Empty the queue? Try to force the transfer of data? None of these options is viable, even if the web/application server could detect it was under attack.

There is no way for a web/application server to detect and subsequently protect itself against such an attack. Left to its own defenses, it will eventually topple over from the demand placed upon it and be unable to service legitimate clients. Mission accomplished.

What an application delivery controller does is provide a line of defense against such attacks. Because it – and not the web/application server – is the endpoint to which the client connects, its queues fill as a result of an attack while the web/application servers’ do not. The web/application server continues to serve responses as fast as the application delivery controller can receive them – which is very fast – and thus the application infrastructure never sees the impact of a slow HTTP-based attack. The application delivery controller sees the impact, but because it is generally capable of managing millions of connections simultaneously (as opposed to thousands in the application infrastructure) it is not impacted by such an attack.
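The mechanism at work here is full-proxy buffering: the intermediary absorbs the slow client and only hands the backend a complete request. A minimal sketch, assuming a deliberately simplified notion of “complete” (headers terminated by a blank line); real proxies also buffer responses, enforce timeouts, and parse HTTP properly.

```python
def buffer_then_forward(client_chunks, forward):
    """Accumulate a request from a (possibly very slow) client and only
    call the backend once the request is complete. The backend is thus
    occupied for microseconds rather than for the entire slow transfer.
    'Complete' here means headers ending in a blank line -- a
    simplification of real HTTP message framing."""
    buffered = b""
    for chunk in client_chunks:        # chunks may arrive seconds apart
        buffered += chunk
        if b"\r\n\r\n" in buffered:    # request complete: release it at once
            return forward(buffered)
    return None                         # client never finished; backend untouched

# Demonstration backend that records what it actually receives:
seen = []
def backend(request):
    seen.append(request)
    return b"HTTP/1.1 200 OK\r\n\r\n"
```

Note that a client which trickles headers forever (the classic slow HTTP pattern) ties up only the proxy’s buffer and never touches the backend at all.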

The entire theory behind slow HTTP-based attacks is to tie up the limited resources available in the application infrastructure. By leveraging an application delivery controller as an intermediary, you (1) increase the resources (connections) available and (2) mitigate the impact of slow consumption because, well, you have enough connections available to continue to serve legitimate users and the web/application infrastructure isn’t bogged down by dealing with malicious users.
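The resource math behind that argument can be made concrete. Using illustrative capacity figures (these numbers are assumptions for the example, not measurements of any product), the attacker’s required effort grows by orders of magnitude:

```python
# Back-of-the-envelope math for the paragraph above. Capacities are
# illustrative assumptions only.

server_capacity = 10_000       # concurrent connections a server pool might hold
adc_capacity = 5_000_000       # concurrent connections an intermediary might hold

def connections_to_exhaust(capacity: int) -> int:
    """Each slow HTTP connection occupies one connection slot for its
    entire (deliberately long) lifetime, so exhausting the target takes
    one attacking connection per slot."""
    return capacity

effort_multiplier = connections_to_exhaust(adc_capacity) // connections_to_exhaust(server_capacity)
# With these figures the attacker needs hundreds of times more concurrent
# connections to exhaust the intermediary than the server pool alone.
```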

WHAT ABOUT CLOUD?

An alternative approach to mitigating the impact of a slow HTTP-based attack is, of course, to simply provision additional resources as necessary to keep up with demand.

Cloud computing and highly automated, virtualized architectures can certainly be used to implement such a strategy. It leverages auto-scaling techniques to automatically launch additional instances when it becomes clear that existing instances cannot “keep up” with demand. While this strategy is likely to successfully mitigate a disruption in service – because it continually increases the available resources and forces the attacker to keep up – it is also likely to significantly increase the cost of an attack to the organization.

Organizations that require load balancing services should instead evaluate whether the solution providing those services also offers application-layer DDoS protection and TCP multiplexing. A solution that mediates between clients and servers through a full-proxy architecture can natively mitigate much of the impact of an application-layer attack without requiring additional instances of the application to be launched. Such a solution is valid in both cloud computing and traditional architectures, and while traditionally viewed as an optimization technique, it can be a valuable tool in any organization’s security toolbox as a means to mitigate the impact of an application-layer attack.
