Last month I ran across a blog entry by Extreme Geekboy discussing a patch (now in the most recent nightly builds of the forthcoming Firefox 4.0) he submitted that implements the user agent components of HTTP Strict Transport Security.  Strict Transport Security, or HSTS (or STS, if that extra character is taxing to type), is an internet draft that allows site owners to specify https as the only acceptable means of accessing the site.  This is accomplished by the site inserting a header that the browser evaluates; for the number of seconds specified in the header, the browser rewrites all requests to the site, whether entered by the user or returned in a link from the site, to https.  This first part is good, but it is only half of the implementation.  If you are under a man-in-the-middle attack, it matters not that your data is encrypted, because the attacker has the keys and is quite happy to decrypt your session unbeknownst to you.  This is where the second half of the draft comes in: it disallows the use of untrusted certificates (self-signed, untrusted-CA signed, etc).  Any link to an untrusted destination should result in an error in the browser.

The goals of the draft are to thwart passive and active network attackers as well as imperfect web developers.  It does not address phishing or malware.  For details on the threat vectors, read section 2.3 of the draft.
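The whole mechanism rides on a single response header.  As a quick illustration (the max-age value below is an arbitrary one year, not a recommendation from the draft), here is the kind of header a conforming https site returns and what a client would pick out of the response:

```shell
# Illustrative only: a conforming https response carries the
# Strict-Transport-Security header. On seeing it, the browser rewrites
# all requests for this host to https for the next max-age seconds
# (31536000 seconds = one year, an arbitrary value for this sketch).
printf 'HTTP/1.1 200 OK\r\nStrict-Transport-Security: max-age=31536000\r\n\r\n' |
    grep -i '^strict-transport-security'
```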

Implementation of this draft is actually quite trivial.  To get there, I’ll walk you through configuring your own certificate authority for use in testing, a BIG-IP (Don’t have one? Get the VE trial!), and a server.  All this testing is completely contained on my laptop, utilizing Joe’s excellent article on laptop load balancing configuration with LTM VE and VMware, though in full disclosure: I deployed Apache instead of IIS.

Working with Certificates

I’ve worked with certificates on Windows and Linux, but for this go I’ll create the certificate authority on my Linux virtual machine and prepare the certificates there.  Many have mad CLI skills with the openssl command switches, but I do not.  So I’m a big fan of the CA.pl script for working with certificates, which hides a lot of the magic.

  1. Make a directory and copy a couple of tools into it for testing (file locations from my Ubuntu system, ymmv)
    • jrahm@jrahm-dev:~$ mkdir catest
    • jrahm@jrahm-dev:~$ cd catest
    • jrahm@jrahm-dev:~/catest$ cp /usr/lib/ssl/misc/CA.pl .
    • jrahm@jrahm-dev:~/catest$ cp /usr/lib/ssl/openssl.cnf .
  2. Create the certificate authority.  The questions are pretty self-explanatory; make sure the common name is the name you want the CA to be referenced as.
    • jrahm@jrahm-dev:~/catest$ ./CA.pl -newca
  3. Create the certificate and sign it.  Similar questions to the CA process; the common name should be the name of your site.  In my case, this is test.testco.com
    • jrahm@jrahm-dev:~/catest$ ./CA.pl -newreq
    • jrahm@jrahm-dev:~/catest$ ./CA.pl -sign
  4. Export the root certificate to a Windows-compatible format (I had to use the openssl command directly for this one)
    • jrahm@jrahm-dev:~/catest$ openssl x509 -in cacert.pem -outform DER -out ca.der
  5. Copy the files to the desktop (using pscp)
    • C:\Users\jrahm>pscp jrahm@10.10.20.200:/home/jrahm/catest/*.pem .
    • C:\Users\jrahm>pscp jrahm@10.10.20.200:/home/jrahm/catest/demoCA/ca.der .
  6. Install the root certificate in Windows
  7. Install the test.testco.com key and certificate to BIG-IP
  8. Create the SSL Profile for CA-signed certificate
  9. Create a self-signed certificate in BIG-IP for host test.testco.com
  10. Create an additional clientssl profile for the self-signed certificate
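If you’re curious what CA.pl is doing under the hood, the CA creation, signing, and export steps above can be approximated with raw openssl commands.  This is only a sketch of the equivalent workflow, run in a throwaway directory; the subject names mirror my test setup and your paths and options will differ from the CA.pl defaults:

```shell
set -e
dir=$(mktemp -d)    # throwaway working directory for the sketch
cd "$dir"

# Step 2 equivalent: create the certificate authority (self-signed root)
openssl req -x509 -newkey rsa:2048 -nodes -keyout cakey.pem \
    -out cacert.pem -days 365 -subj "/CN=Test CA"

# Step 3 equivalent: create the site key/request and sign it with the CA
openssl req -newkey rsa:2048 -nodes -keyout newkey.pem \
    -out newreq.pem -subj "/CN=test.testco.com"
openssl x509 -req -in newreq.pem -CA cacert.pem -CAkey cakey.pem \
    -CAcreateserial -out newcert.pem -days 365

# Step 4 equivalent: export the root in DER format for Windows import
openssl x509 -in cacert.pem -outform DER -out ca.der

# Sanity check: the signed certificate chains to the new root
openssl verify -CAfile cacert.pem newcert.pem
```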


Preparing the BIG-IP Configuration

To test this properly we need four virtual servers, a single pool, and a couple of iRules.  The first two virtuals are for the “good” site and support the standard ports for http and https.  The second two virtuals are for the “bad” site, which will represent our man-in-the-middle attacker.  The iRules support a response rewrite on the good site’s http virtual (as recommended in the draft), and the insertion of the HSTS header on the https virtual only (as required by the draft).  Not specified in the draft is an appropriate length for the max-age.  I’m adding logic to expire the max-age a day in advance of the certificate expiration date, but you could set a static length of time; I read on one blog that a user was setting it for 50 years.  It’s not necessary in my example, but I’m setting includeSubDomains as well, which instructs browsers to securely request and link from test.testco.com and any subdomains of this site (e.g., my.test.testco.com).

### iRule for HSTS HTTP Virtuals ###
#
when HTTP_REQUEST {
    HTTP::respond 301 Location "https://[HTTP::host][HTTP::uri]"
}

### iRule for HSTS HTTPS Virtuals ###
#
when RULE_INIT {
    set static::expires [clock scan 20110926]
}
when HTTP_RESPONSE {
    HTTP::header insert Strict-Transport-Security "max-age=[expr {$static::expires - [clock seconds]}]; includeSubDomains"
}
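The max-age arithmetic in the HTTPS iRule can be sanity-checked outside the BIG-IP.  A rough shell equivalent using GNU date (20110926 is the hardcoded expiration date from the iRule above; in a real deployment you would derive it from your certificate):

```shell
# Equivalent of [clock scan 20110926] minus [clock seconds] in the iRule:
# the remaining number of seconds the header should advertise.
expires=$(date -d 20110926 +%s)   # GNU date; epoch seconds for that day
now=$(date +%s)
max_age=$((expires - now))        # goes negative once the date has passed
echo "Strict-Transport-Security: max-age=${max_age}; includeSubDomains"
```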


HSTS & MITM Virtuals


### "Good" Virtuals ###
#
virtual testco_http-vip {
   snat automap
   pool testco-pool
   destination 10.10.20.111:http
   ip protocol tcp
   rules hsts_redirect
   profiles {
      http {}
      tcp {}
   }
}
virtual testco_https-vip {
   snat automap
   pool testco-pool
   destination 10.10.20.111:https
   ip protocol tcp
   rules hsts_insert
   profiles {
      http {}
      tcp {}
      testco_clientssl {
         clientside
      }
   }
}

### "Bad" Virtuals ###
#
virtual testco2_http-vip {
   snat automap
   pool testco-pool
   destination 10.10.20.112:http
   ip protocol tcp
   profiles {
      http {}
      tcp {}
   }
}
virtual testco2_https-vip {
   snat automap
   pool testco-pool
   destination 10.10.20.112:https
   ip protocol tcp
   profiles {
      http {}
      tcp {}
      testco-bad_clientssl {
         clientside
      }
   }
}

The Results

I got the expected results on both Firefox 4.0 and Chrome.  Once I switched the virtual from the known good site to the bad site, both browsers presented error pages that I could not click through.


Great!  Where is it Supported?

Support already exists in the latest releases of Google Chrome, and if you use the NoScript add-on with current Firefox releases you have support as well.  As mentioned in the introduction, Firefox 4.0 will support it natively when it is released.

Conclusion

HTTP Strict Transport Security is a promising development in thwarting some attack vectors between client and server, and is a simple yet effective deployment in iRules.  One additional thing worth mentioning is the ability of the user agent (browser or browser add-on) to “seed” known HSTS servers.  This provides additional protection for the initial http connection users might make before being redirected to the https site where the STS header is delivered.  Section 12.2 of the draft discusses this bootstrap vulnerability when no seeds are in place prior to the first connection to the specified site.