You shouldn't be surprised to learn that when we create Reference Architectures we actually test them. The settings you find in the Configuration Best Practice Guides have been created, tested and documented pretty carefully to work well in most environments.

Recently I've moved my testing environment to a new cloud provider. It's a great service, providing exactly what you need from a cloud: elasticity, speed, and ease of use. I don't need to ask anyone to create me a new network, or provision me a server, and I've got access to a cool catalog of application and infrastructure images. My first job was to recreate my application acceleration test rig. Not a problem, I've documented the setup and it's deliberately simple - my reference architecture is big on returns and small on investment (in both your time and money). In an hour or so I have my design up and running - this cloud stuff is a real boon for the lazy.

Now to run some testing.

Hmm. That's not right - my acceleration test page is behaving pretty inconsistently - it's sometimes only marginally faster than the baseline, un-accelerated version. Now I know from my previous test results that this setup, despite being pretty simple (TCP optimization, a SPDY gateway and a very basic layer 7 acceleration policy), can give good, measurable improvements.
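For context, a rig like the one described above is simple enough to sketch in a few lines of tmsh. Everything below is a hypothetical illustration, not my actual configuration: the object names and addresses are invented, and the exact profile options vary between TMOS versions. The shape, though, is just three profiles (an optimized TCP profile, a SPDY profile, a web acceleration profile) attached to a virtual server:

```shell
# Hypothetical sketch only - names and addresses are invented, and
# syntax varies across TMOS versions. A SPDY profile also needs an
# HTTP profile and (typically) client SSL on the same virtual server.
tmsh create ltm profile tcp accel-tcp defaults-from tcp-wan-optimized
tmsh create ltm profile spdy accel-spdy
tmsh create ltm profile web-acceleration accel-wa
tmsh create ltm pool web-pool members add { 10.0.0.20:80 }
tmsh create ltm virtual accel-vs destination 10.0.0.10:443 ip-protocol tcp \
    profiles add { accel-tcp accel-spdy accel-wa http clientssl } pool web-pool
```

That really is the whole point of this design: a handful of profiles, big on returns and small on investment.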

To be honest, at this point I'm starting to worry. I'm about to push out a reference architecture that suddenly isn't working right.

So back to my help desk and sysadmin days (see, I used to have a proper job). First thing to check when something breaks - what have you changed? In this case the list is quite long - I've moved environments entirely, I'm using a different version of the web server O/S and a different WAN emulation component, plus I've altered my test page to have more JavaScript. The only thing that's 100% the same is my BIG-IP version and configuration. Looks like I’ll have to actually work out what’s going on.

Now I’m of the opinion that any time you are breaking out analysis tools like Wireshark or HTTPWatch (both excellent tools, of course) you’re having a bad day. But in this case there seemed to be no alternative. Running the tests in a new browser session, the SPDY advantage is clear: all the images load almost concurrently, and the trace shows all the page objects being fetched together. However, when I used CTRL + F5 to reload the page, bypassing the browser cache, the behavior reverted to the classic HTTP 1.1 waterfall: a few objects requested in parallel, then another few. No wonder my test results were bad. Shutting down the browser and reloading the page restores the proper SPDY behavior, and my test page loads about twice as fast through the accelerated configuration.
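The difference between those two traces is easy to put into rough numbers. Here's a hypothetical back-of-envelope model, not my actual test rig: it assumes each HTTP 1.1 connection fetches one object per round trip, using the typical six connections per host, while an idealised multiplexed SPDY session requests every object at once. All the figures are invented for illustration.

```python
import math

# Assumed numbers for illustration only - not measured results.
RTT_SECONDS = 0.1        # round-trip time to the server
NUM_OBJECTS = 30         # objects on the test page
HTTP11_CONNECTIONS = 6   # typical per-host connection limit

# HTTP 1.1 waterfall: objects fetched in batches of six,
# one batch per round trip.
http11_time = math.ceil(NUM_OBJECTS / HTTP11_CONNECTIONS) * RTT_SECONDS

# Idealised SPDY: all requests multiplexed on one connection, so
# (ignoring bandwidth and server think time) one round trip covers the lot.
spdy_time = RTT_SECONDS

print(f"HTTP 1.1 waterfall: {http11_time:.1f}s")
print(f"SPDY multiplexed:   {spdy_time:.1f}s")
```

Crude as it is, the model matches what the trace shows: the batched waterfall is where the time goes, which is exactly why a broken SPDY session drags the accelerated results back towards the baseline.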

I’m going to test this behavior across some different browsers and versions, but for now I can sleep at night knowing that our forthcoming acceleration Reference Architecture actually works, and I’ve learnt some valuable lessons in testing methodology.