Forum Discussion

rameshr_132303
Feb 03, 2016

How does one decide whether SSL offloading needs to be done or not?

Hi All,

 

I'm currently putting in place a few servers (two at each DC site) which will be accessed over HTTPS.

 

There are two pairs of servers in total, one pair at each site. The two sites are load balanced by a GTM, and the servers within each site are load balanced by an LTM.

 

Now, how do I decide whether SSL offloading needs to be done for those servers? I understand that it's generally recommended to offload SSL, but how do we check whether terminating SSL would put too much load on the servers? In other words, based on which parameters do we decide whether offloading is required?

 

Thanks!

 

Ramesh

 

7 Replies

  • A great question that also comes up with my clients quite often. My answer is that unless you have specific regulatory or design requirements, always decrypt in BigIP (or even before traffic arrives in BigIP).

     

    For now, you may be able to route undecrypted traffic directly to your end-servers (and gain F5 performance by doing so). However, unless your end-servers are bare metal with SSL ASICs, or VMs that offload SSL transactions to an external HSM, you will almost certainly lose overall performance. Most of the world runs on virtualization, and as you know, virtualized hosts are poor at cost-effective SSL handling.

     

    But what if your F5s are also virtual? That would reduce the significance of the overall-performance aspect. VE BigIP does have software-based SSL acceleration, but it's nowhere near as good as hardware acceleration. Even in the case of VE BigIP, I would choose to decrypt in BigIP (or before traffic arrives in BigIP) because of better control. For instance, there's a lot more you can do with iRules if you decrypt in BigIP. In your case, you would gain full visibility of HTTP headers and payload; all of that information can be used to make balancing decisions, reject certain clients, or even implement temporary workarounds to mitigate the effects of poorly coordinated releases in your enterprise (a rough sketch of the idea follows below). Some BigIP modules (e.g. ASM) can only be used if you decrypt client-side traffic.
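    To make the "decrypt, then decide" point concrete, here is a minimal sketch in plain Python rather than an actual iRule; the certificate files, addresses, port and the X-Beta-User header are hypothetical placeholders. Once the proxy terminates TLS, it can read the request headers and pick a backend pool:

    ```python
    import socket
    import ssl

    # Hypothetical pool members; on a real LTM these would be pool definitions.
    BACKENDS = {"beta": ("10.0.0.20", 8080), "default": ("10.0.0.10", 8080)}

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("proxy.crt", "proxy.key")  # cert/key live only on the proxy

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with ctx.wrap_socket(listener, server_side=True) as tls_listener:
            conn, _ = tls_listener.accept()        # TLS is terminated here
            request = conn.recv(65535).decode("latin-1")
            # Full header visibility: steer flagged users to a different pool.
            pool = "beta" if "x-beta-user: yes" in request.lower() else "default"
            print("would forward decrypted request to", BACKENDS[pool])
            conn.close()
    ```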

     

    • Kai_Wilke
      Hi Hannes, modern CPUs have built-in support for certain SSL-related hardware acceleration (AES-NI), and F5's Virtual Editions (v11.4+) are able to use those CPU extensions. It's definitely not the same performance gain that can be achieved by using specialized crypto cards, but the gain is already very impressive for AES bulk encryption. Cheers, Kai
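      A quick way to check whether a given Linux host exposes the AES-NI capability Kai mentions is to look for the "aes" CPU flag and compare OpenSSL's accelerated code path; this is a generic sketch, not F5 tooling:

      ```python
      import subprocess

      # Linux only: the "aes" flag in /proc/cpuinfo indicates AES-NI support.
      with open("/proc/cpuinfo") as f:
          has_aesni = "aes" in f.read().split()
      print("AES-NI flag present:", has_aesni)

      # The -evp code path uses AES-NI when available; compare with plain "aes-128-cbc".
      subprocess.run(["openssl", "speed", "-evp", "aes-128-cbc"], check=False)
      ```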
  • I think the biggest reason you would want to offload SSL for the site is if you need to do anything to the traffic as it passes through. Unless you offload SSL, you aren't able to inspect or manipulate the traffic in any way (e.g. within an iRule or a Local Traffic Policy).

     

    If you're currently terminating SSL on the web server(s), then the resource load for decryption is on the servers themselves, and you must maintain the certificates and private keys on both of those servers. If you offload SSL on the F5, then you can keep the certificate there and let it do all the SSL decryption processing. Of course, if you still wanted HTTPS between the LTM and the web servers, you could set that up as well to ensure end-to-end encryption; but if that's not a security concern, you could just offload SSL at the F5 and go plain HTTP to the back end (a short sketch of the two options follows).
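    The two server-side options reduce to this, sketched in Python with placeholder addresses (not F5 configuration): after decrypting the client side, the proxy either forwards in clear text or opens a second TLS session to the pool member.

    ```python
    import socket
    import ssl

    def connect_to_pool_member(host="10.0.0.10", port=80, re_encrypt=False):
        """After client-side decryption: plain HTTP (offload) or re-encrypt (end-to-end).
        Pass port=443 (or the server's HTTPS port) when re-encrypting."""
        sock = socket.create_connection((host, port))
        if not re_encrypt:
            return sock                      # pure SSL offload: clear text to the server
        ctx = ssl.create_default_context()   # end-to-end: new TLS session toward the server
        return ctx.wrap_socket(sock, server_hostname=host)
    ```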

     

    So personally, I think the two main questions are whether you want to intercept the traffic at all, and whether you would rather have the F5 do the heavy lifting for SSL decryption.

     

  • Hi Rameshr,

     

    As a security guy, I would recommend not SSL-offloading everywhere unless there is a valid reason to do so (e.g. a network IDS, etc.).

     

    Note: By SSL offloading I'm referring to HTTPS-to-HTTP bridging scenarios. Using SSL offloading modules (hardware acceleration cards) and Layer 7 SSL inspection in HTTPS-to-HTTPS bridging scenarios is fine and not covered by my opinions...

     

    Pro SSL-Offload

     

    • Less CPU overhead on your F5 and backend servers (the saved CPU cycles are somewhat less, especially when OneConnect is used)
    • No administration overhead for deploying/maintaining SSL-Certs on backend servers.

    Contra SSL-Offload

     

    • A more complex content-switching rule set (e.g. enabling X-Forwarded-Proto/Front-End-Https support via additional headers, and possibly header and STREAM rewrites for incompatible applications; a small backend-side sketch follows this list)
    • You start sending confidential data (e.g. passwords, cookies) in clear text within your internal network.
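    For the X-Forwarded-Proto point above, this is roughly what the backend side ends up doing once SSL is offloaded in front of it; a hedged WSGI sketch with a hypothetical redirect policy, not F5 configuration:

    ```python
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        # The SSL-offloading proxy is expected to inject X-Forwarded-Proto.
        proto = environ.get("HTTP_X_FORWARDED_PROTO", "http")
        if proto != "https":
            host = environ.get("HTTP_HOST", "localhost")
            start_response("301 Moved Permanently",
                           [("Location", "https://" + host + environ.get("PATH_INFO", "/"))])
            return [b""]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"request arrived via the SSL-offloading proxy\n"]

    if __name__ == "__main__":
        make_server("0.0.0.0", 8080, app).serve_forever()
    ```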

    Quote from the Internet: In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10 KB of memory per connection and less than 2% of network overhead. Many people believe that SSL/TLS takes a lot of CPU time and we hope the preceding numbers (public for the first time) will help to dispel that. If you stop reading now you only need to remember one thing: SSL/TLS is not computationally expensive any more. -- Adam Langley (Google)

     

    Cheers, Kai

     

  • If you assume that ECC ciphers are in play, or you're using servers with SSL acceleration hardware built in (as alluded to in Adam Langley's quote), then the implications of SSL CPU usage and throughput are perhaps less of a factor in the offloading decision today, but certainly not something to completely ignore. In truth, you really have to consider the implications of not handling SSL at a "trusted proxy". If you're creating an "end-to-end" SSL session from the client to the server, the following things become either lost to you, or exponentially more difficult:

     

    1. Intelligent load balancing - without application-layer visibility, an ADC (i.e. a load balancer) is largely reduced to persisting on source addresses for browser-based communications.

       

    2. Insight - malware generally exists at the application layer, and an entire industry of products (IDS, IPS, AV, WAF, etc.) has been built to address this challenge. In the absence of application-layer visibility, these security controls have to happen at the server and/or the client, which is a completely unreasonable request in most cases.

       

    3. SSL intelligence - if you've reviewed the SSL Labs grading criteria lately, you'll notice that the requirements are fairly complex. Quite a few of the items on that list are simply harder to accomplish (and maintain) on a set of "stock" web servers than at a single secure ADC entry point (the small probe below shows the kind of per-endpoint check you otherwise end up repeating).
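    As an illustration of that last point, centralizing SSL at the ADC means protocol and cipher policy is verified in one place instead of per server. The probe below just reports what an endpoint negotiates; the host name is a placeholder, and this is a generic Python sketch, not an SSL Labs client:

    ```python
    import socket
    import ssl

    host = "www.example.com"   # placeholder endpoint
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("negotiated protocol:", tls.version())
            print("negotiated cipher  :", tls.cipher())
    ```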

       

    I'd point out here an interesting difference between SSL offloading and SSL management. In an age where we weren't as concerned about the inside of the network, and SSL was expensive on commodity servers, offloading simply made more sense. But I think we've evolved a bit. SSL is definitely cheaper these days, and malware is slipping right through that open port 443 on your firewall. SSL management is therefore not the (IMHO, reckless) pursuit of "end-to-end" SSL, but rather an evolutionary state where SSL is indeed maintained between every point in the network, but also managed intelligently at a central point, where security "layers" (IDS, IPS, WAF, etc.) are given privileged visibility to unencrypted data and the SSL process itself is controlled at the highest possible standard for each party involved. SSL management also implies a return to intelligent load balancing and robust logging capabilities that you won't get otherwise.