Cloud Changes Cost of Attacks
August 05, 2009 by Lori MacVittie

For some companies there has never been a quantifiable financial impact from attacks. Cloud may change that.

One of the frustrations with information security is that it's always difficult – if not impossible – to quantify risk. Without the ability to quantify risk, it's often the case that solutions that would mitigate the risk go unimplemented, because there's no way to prove that the risk would turn into a breach, downtime, or other revenue-impacting incident.

Take the recent PayPal outage. Estimates are that the hour of downtime for the payment processing king may have cost businesses in the area of $7.2 million. But that assumes every customer affected by the outage abandoned their order in disgust and went elsewhere, which is unlikely. Because there is no way to determine the actual dollar impact with any certainty, we have to estimate based on what we do know, and that's a $2,000-per-second transaction rate for PayPal that was effectively cut off for the better part of an hour.

Similarly, experts like to estimate the financial cost of a breach at over $200 per record impacted. The annual Cost of Data Breach report found that the "total average costs of a data breach grew to $202 per record compromised, an increase of 2.3% since 2007 ($197 per record) and 11% compared to 2006 ($182 per record)." (Data breach costs rise as firms brace for next loss, Feb 2009) This is a generalized value; it does not reflect the type of customer data that may have been exposed, or even whether the data was exposed at all (it may not have been). It's just a number that's useful for information security folks to base the value of risk on when they're recommending solutions.
Most organizations are not PayPal. They aren't financial networks that lose millions per second during an outage, and they actually store very little customer data that could be exposed in the event of a breach. The value of their risk is, at most, tied to reputation and the potential loss of customers – which, while certainly costly in its own way, is yet another numbers game, just as fuzzy to compute as the value of risk itself. The introduction of cloud, however, may very well change the way "normal" organizations calculate risk – and how they protect their systems.

THE COST OF AN ATTACK

In an environment ripe with web applications there continue to be a number of exploitative attacks that are not breaches or vulnerabilities, but are rather exploitations of trust and of flaws in de facto standard protocols, like HTTP. Consider, for example, Slow Loris. No, not me and my clones, but the attack.

Slow Loris is a brilliant exploitation of HTTP and the way in which servers process the protocol. It exploits persistent connections, allowing a client to open a connection to a server and keep it open while never requesting any real data. It's designed to overwhelm the server with connections – consuming resources – without requiring a massive network of clients. And it completely adheres to the protocol. There's no malicious data involved, no obvious wrongdoing in the requests. The client just keeps sending HTTP headers at intervals that keep the connection open for as long as possible without any real data being exchanged. It's a DoS (Denial of Service) attack that is very effective and nearly impossible for a web server to detect.

Now, if your applications are behind an application delivery controller, a.k.a. load balancer, you're probably protected from this attack.
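The traffic pattern described above can be sketched for illustration (this is not the original Slow Loris tool, just a model of the fragments such a client emits – the function name and header values are made up):

```python
def slowloris_fragments(host="example.com", extra_headers=3):
    """Yield the byte fragments a Slow Loris-style client trickles out.

    Every fragment is valid HTTP on its own; the trick is that the
    blank line ending the header block is never sent.
    """
    yield b"GET / HTTP/1.1\r\n"
    yield b"Host: " + host.encode() + b"\r\n"
    # In the real attack, each of these is sent after a long pause --
    # just often enough to keep the server's read timeout from firing.
    for i in range(extra_headers):
        yield ("X-a: %d\r\n" % i).encode()
    # Note: the final blank line (b"\r\n") is never yielded, so the
    # request is never complete and the server keeps waiting.

request_so_far = b"".join(slowloris_fragments())
assert request_so_far.startswith(b"GET / HTTP/1.1\r\n")
assert b"\r\n\r\n" not in request_so_far  # the header block never terminates
```

Nothing in those bytes is malformed, which is exactly why a web server parsing them sees a legitimate client that simply hasn't finished yet.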
Most application delivery controllers will only pass valid, complete HTTP requests on to the servers they are virtualizing, and because Slow Loris never actually finishes sending an HTTP request, the web server's resources – and your application – are protected.

But consider that you deployed your application in the cloud, in an environment in which you have no visibility into – or control over – whether the load balancing/application delivery solution is capable of protecting your applications against such an attack. Or protecting itself, for that matter. A simple layer 4 load balancer, for example, won't stop Slow Loris, because all a layer 4 load balancer cares about is TCP connections. It just passes everything else right on through – which means your web server/application will be consuming resources while waiting for an HTTP request until it is over capacity.

And what happens when it reaches capacity? You launch a second instance of the application to handle the load. That's the on-demand beauty of cloud, isn't it? What? You say you wouldn't launch a second one because it's an attack? That's what makes Slow Loris so dangerous – there's almost no way of detecting that it is an attack. It looks like valid users, valid connections, and valid HTTP requests – as far as those requests go. You'd have to parse through the logs very carefully and match up request patterns with clients to see what's happening. In the meantime, your application is at capacity and users are starting to get angry because they can't use it.

See, you'll launch a second instance because you'll really have very little choice. And that's when the ball game changes. Every time you launch a new instance in the cloud, you pay for it. An attack such as Slow Loris could cost an organization real dollars in additional fees as operators (or automation) launch additional instances to handle the increasing load.
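The layer 7 defense boils down to one idea: buffer the client's headers yourself, with a deadline, and only involve the backend once the request is actually complete. A minimal sketch of that guard, using Python's asyncio streams (the timeout and size limits are illustrative values, not any product's defaults):

```python
import asyncio

HEADER_END = b"\r\n\r\n"   # blank line that terminates an HTTP header block
HEADER_TIMEOUT = 10.0      # seconds allowed to deliver complete headers (assumed)
MAX_HEADER_BYTES = 8192    # cap on buffered header size (assumed)

async def read_full_headers(reader):
    """Buffer a client's request headers before touching the backend.

    A full (layer 7) proxy absorbs the connection itself: a Slow Loris
    client that never sends the terminating blank line trips the timeout
    here, instead of tying up a web server worker.
    """
    buf = b""
    while HEADER_END not in buf:
        chunk = await asyncio.wait_for(reader.read(1024), timeout=HEADER_TIMEOUT)
        if not chunk:
            raise ConnectionError("client closed before completing headers")
        buf += chunk
        if len(buf) > MAX_HEADER_BYTES:
            raise ConnectionError("header block too large")
    return buf

async def demo():
    # Simulate a well-behaved client whose complete headers arrive at once.
    reader = asyncio.StreamReader()
    reader.feed_data(b"GET / HTTP/1.1\r\nHost: example\r\n\r\n")
    reader.feed_eof()
    return await read_full_headers(reader)

headers = asyncio.run(demo())
assert headers.endswith(HEADER_END)
```

A layer 4 device never runs logic like this: it stitches TCP segments through to the backend as they arrive, so the incomplete request consumes a server connection instead of a proxy buffer.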
That's new for most organizations, because there have rarely been hard dollars attached to downtime before; there were soft dollars, to be sure, but never hard, quantifiable cash. Now there are: instance fees, bandwidth fees, IP address fees, and whatever other fees might be attached to executing an application in the cloud.

INFRASTRUCTURE MATTERS

I've said it, oh, about a million times, but I'll say it again: this is why infrastructure matters. This is why it's important for the consumer of cloud computing – that's IT – to be not only cognizant but choosy about the infrastructure supporting their cloud computing provider. A simple layer 4 load balancing solution is not going to be able to detect – let alone stop – a subtle layer 7 attack like Slow Loris. And Slow Loris isn't the only abuse of TCP and HTTP out there; it's just the latest, and certainly not the last.

Information security isn't just about protecting information from being stolen; it's about ensuring that information is available to the right people at the right time. That means protective measures that ward off attacks designed to prevent legitimate access as much as measures to prevent illegitimate and unauthorized access to that data. Infrastructure – network and application network – can be either a tool that helps you achieve that goal or a hindrance, depending on how carefully you choose the solutions that support your applications.

You need to concern yourself with what's under the hood, as it were, if you're going to make an informed decision about which cloud provider to choose. Choosing a provider with less-than-optimal infrastructure might cost less up front, but in the long run it could cost you a lot more.
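To make the hard-dollar point concrete, here's a back-of-the-envelope sketch. The prices are hypothetical assumptions for illustration, not any provider's actual rates:

```python
# Hypothetical per-unit prices -- illustrative assumptions, not real rates.
INSTANCE_PER_HOUR = 0.10   # dollars per instance-hour
BANDWIDTH_PER_GB = 0.12    # dollars per GB transferred

def attack_cost(extra_instances, hours, extra_gb):
    """Hard-dollar cost of the capacity launched to absorb an attack."""
    instance_fees = extra_instances * hours * INSTANCE_PER_HOUR
    bandwidth_fees = extra_gb * BANDWIDTH_PER_GB
    return instance_fees + bandwidth_fees

# Ten extra instances for six hours, plus 50 GB of attack traffic:
cost = attack_cost(extra_instances=10, hours=6, extra_gb=50)
assert round(cost, 2) == 12.00
```

Small numbers per hour, but an attack that keeps autoscaling pinned for days – or recurs – turns that line item into a real, invoiced cost of the attack.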
Related:
- Layer 4 vs Layer 7 DoS Attack
- An Unhackable Server is Still Vulnerable
- The IT Security Flowchart
- Get your SaaS off my cloud
- New TCP vulnerability about trust, not technology
- 4 Reasons We Must Redefine Web Application Security