I know I've talked about testing before, and here I am talking about it again. So why another post dedicated to testing? Is it because I couldn't come up with another T word, or is it because testing is a critical part of deploying an acceleration solution? If you chose the latter, you are correct. Testing is an often overlooked step in choosing and deploying an acceleration solution. It's not that people think it's unimportant; it's mostly a matter of time and resources. People will often budget half a day for testing when a week may be more appropriate.

Let's get the burning question out of the way: "Is there a right way and a wrong way to test the benefits of an acceleration solution?" The answer is a resounding yes; however, it's not so simple to say what is right and what is wrong. The performance metric used to judge whether or not the test is successful will define how the test should be conducted. Here are some tips to keep in mind.

  • If the primary goal is to improve performance for end users in remote offices, then testing should incorporate the variety of latencies and link speeds those users access the application from. Testing shouldn't be performed only on the LAN (see the network emulation sketch after this list).
  • The correct way to test the performance of a repeat visitor to a web site is to close the browser and re-open it. Do not hit the refresh button or F5 Networks key to reload the page; this forces the browser to revalidate all the content currently in cache, ignoring any Expires values that have been set (the second sketch after this list shows the difference).
  • Make sure the test environment resembles production as much as possible in terms of server configuration, content being requested, and load levels. If the pages on the test system are 300 KB but production pages are 750 KB, it is not possible to determine the true gains (a quick page-weight check follows this list). Likewise, testing on a system that is not under load will not provide a true representation of performance with and without acceleration.
  • Always establish a baseline of the environment under test; do not try to compare metrics from production to metrics from the test bed.
  • Execute multiple test runs to get an average; don't just take metrics from a single run (a minimal example follows this list).
  • When testing the performance of a web-based application, measurements should be taken from a browser or a tool that accurately emulates one. If using a browser, I rely on HttpWatch and HttpFox to capture response times when only a single page is being tested. If testing multiple pages or transactions within an application, I use TrueSpeed from Symphoniq.
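
On the first tip: here is a minimal sketch of how the latency and link-speed matrix can be automated on a Linux test client, using tc/netem driven from Python. The interface name, test URL, and delay/rate pairs are all assumptions; tune them to match where your users actually sit, and note that netem's rate option needs a reasonably recent kernel.

```python
# Sketch: sweep one test page across emulated WAN conditions with tc/netem.
# Assumptions: Linux client, root privileges, iproute2 installed, interface
# "eth0", and a hypothetical test URL. Timings here are crude whole-page
# numbers; use a browser-level tool for the detailed waterfall.
import subprocess
import time
import urllib.request

IFACE = "eth0"                          # client-side interface (assumption)
URL = "http://test.example.com/page"    # hypothetical page under test

# (delay, rate) pairs approximating remote-office links; adjust to your users
CONDITIONS = [
    ("0ms",   "1000mbit"),  # LAN baseline
    ("40ms",  "10mbit"),    # regional office
    ("100ms", "1.5mbit"),   # overseas T1
    ("250ms", "512kbit"),   # satellite / worst case
]

def set_netem(delay, rate):
    # Replace any existing root qdisc with the new delay/rate shaping
    subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root",
                    "netem", "delay", delay, "rate", rate], check=True)

def clear_netem():
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=False)

def timed_fetch(url):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

try:
    for delay, rate in CONDITIONS:
        set_netem(delay, rate)
        print(f"{delay:>6} / {rate:>8}: {timed_fetch(URL):.2f}s")
finally:
    clear_netem()
```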
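
On the repeat-visitor tip, this small sketch (hypothetical URL) shows why a refresh skews the numbers: a refresh revalidates every object with a conditional GET, while a true repeat visit serves unexpired objects straight from cache with no request at all.

```python
# Sketch: what a refresh actually does versus a true repeat visit.
# Assumptions: hypothetical cacheable object URL; standard library only.
import urllib.error
import urllib.request

URL = "http://test.example.com/logo.png"  # hypothetical cacheable object

# First visit: plain GET; note the freshness headers the server hands back
with urllib.request.urlopen(URL) as resp:
    print("First visit status:", resp.status)
    for h in ("Expires", "Cache-Control", "Last-Modified", "ETag"):
        print(f"  {h}: {resp.headers.get(h)}")
    last_modified = resp.headers.get("Last-Modified")

# A refresh revalidates with a conditional GET even if the cached copy is
# still fresh: one extra round trip per object, which is exactly the
# behavior that distorts repeat-visitor measurements
if last_modified:
    req = urllib.request.Request(URL,
                                 headers={"If-Modified-Since": last_modified})
    try:
        with urllib.request.urlopen(req) as resp:
            print("Refresh-style revalidation:", resp.status)
    except urllib.error.HTTPError as err:
        print("Refresh-style revalidation:", err.code)  # 304 Not Modified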
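
On matching production content: a quick way to sanity-check page weight is to add up the HTML plus the assets it references. This is a rough standard-library sketch with hypothetical URLs; the regex-based asset discovery is deliberately crude, and a tool like HttpWatch will give you the authoritative number.

```python
# Sketch: compare page weight between the test bed and production.
# Assumptions: hypothetical URLs; only picks up simple src/href asset
# references, so treat the output as a sanity check rather than gospel.
import re
import urllib.parse
import urllib.request

ASSET_RE = re.compile(r'(?:src|href)="([^"]+\.(?:js|css|png|jpe?g|gif))"')

def page_weight(url):
    with urllib.request.urlopen(url) as resp:
        html = resp.read()
    total = len(html)
    for ref in ASSET_RE.findall(html.decode("utf-8", errors="ignore")):
        with urllib.request.urlopen(urllib.parse.urljoin(url, ref)) as resp:
            total += len(resp.read())
    return total

for name, url in [("test bed",   "http://test.example.com/page"),
                  ("production", "http://www.example.com/page")]:
    print(f"{name}: {page_weight(url) / 1024:.0f} KB")
```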
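
And on averaging: something along these lines (again, a hypothetical URL) takes several timed runs and reports the mean, median, and standard deviation instead of trusting a single sample. A large standard deviation is itself a finding; it usually means the environment isn't stable enough to test in yet.

```python
# Sketch: never trust a single run; collect several and summarize.
# Assumptions: hypothetical URL; whole-page fetch time as the metric.
import statistics
import time
import urllib.request

URL = "http://test.example.com/app/page"  # hypothetical page under test
RUNS = 10

samples = []
for _ in range(RUNS):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    samples.append(time.perf_counter() - start)

print(f"runs:   {RUNS}")
print(f"mean:   {statistics.mean(samples):.3f}s")
print(f"median: {statistics.median(samples):.3f}s")
print(f"stdev:  {statistics.stdev(samples):.3f}s")
```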

TrueSpeed is a synthetic agent that can play back a sequence of steps within a website, which is very useful when you need to test a series of pages across multiple line speeds and latencies. TrueSpeed can emulate either a first-time visitor (by automatically deleting the browser cache prior to test execution) or a repeat visitor (by firing off a test run to populate the cache before starting to collect metrics). Automating the process of running through a series of pages is a real time saver, especially when running a number of different configurations such as with vs. without acceleration, first-time visitor vs. repeat visitor, or performance at various latencies. Test runs can easily be compared and exported to a PDF or spreadsheet, providing an easy way to share the results of the testing with others.

There is no reason to limit yourself to a single test tool or a single metric. TrueSpeed can be used in conjunction with load-generating software to provide a comprehensive picture of the performance gains: the load testing software will show the improvement in hits per second achievable on the back end, while TrueSpeed will show the actual end-user performance. A rough sketch of that back-end half is below.
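
To illustrate, here is a minimal load-generator sketch; the URL, worker count, and duration are all assumptions, and a real load tool will do far more. Run it with and without acceleration while the browser-level tool collects the end-user numbers, and the two views together tell the whole story.

```python
# Sketch: a bare-bones load generator reporting hits per second.
# Assumptions: hypothetical URL; 25 virtual users for a 30-second window.
import concurrent.futures
import time
import urllib.request

URL = "http://test.example.com/app/page"  # hypothetical page under test
WORKERS = 25       # concurrent virtual users
DURATION = 30      # seconds per measurement window

def hammer(deadline):
    # Each virtual user fetches the page in a loop until time runs out
    hits = 0
    while time.time() < deadline:
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        hits += 1
    return hits

deadline = time.time() + DURATION
with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    totals = pool.map(hammer, [deadline] * WORKERS)

print(f"hits/sec: {sum(totals) / DURATION:.1f}")
```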