Note: As of 11.4, WebAccelerator is now a part of BIG-IP Application Acceleration Manager.

This is article five of ten in a series on DevCentral’s implementation of WebAccelerator. Join Colin Walker and product manager Dawn Parzych as they discuss the ins and outs of WebAccelerator. Colin discusses his take on implementing the technology firsthand (with an appearance each from Jason Rahm and Joe Pruitt), while Dawn provides industry insight and commentary on the need for various optimization features.

Generally speaking, there are two ways people optimize their web applications: reduce the overall amount of data the client has to download, and reduce the number of round trips required. Caching, and particularly Intelligent Browser Referencing (IBR), takes care of the round trips, which can have a massive impact, and compression takes care of the data reduction…doesn’t it?

At a very simplistic level, standard (gzip/deflate) compression works by looking for duplicate strings in the data at hand and replacing every instance beyond the first with a pointer back to the first, within a given range (block). This effectively lets the mechanism remove content entirely and send a partial file to the user, whose browser undoes that voodoo on the receiving end until it has a whole, readable file again. As you can imagine, because the target of this type of algorithm is repeated strings, text is an ideal candidate for compression.
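To make that concrete, here is a quick sketch using Python’s zlib module, which implements the same DEFLATE algorithm behind gzip; the sample string is just an illustration and has nothing to do with WebAccelerator itself:

```python
# Repetitive text compresses extremely well because DEFLATE replaces
# repeated strings with short back-references to the first occurrence.
import zlib

text = b"The quick brown fox jumps over the lazy dog. " * 100
compressed = zlib.compress(text, 6)

print(f"original:   {len(text)} bytes")
print(f"compressed: {len(compressed)} bytes "
      f"({len(compressed) / len(text):.1%} of original)")

# Decompressing restores the exact original bytes on the receiving end.
assert zlib.decompress(compressed) == text
```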

Dawn Says...

According to the HTTP Archive, the size of images transferred for the top 100 URLs increased by 51% between February 2012 and February 2013. What is more interesting is that the number of images remained constant during that period. It seems that somewhere along the way the best practice of keeping images under a certain size was thrown out the window. The thinking goes that as connectivity has increased, the size of images isn’t as important. What people seem to be forgetting is that many users these days are connecting on a mobile device, and those devices do not have the high-speed connection you may have at home; average connection speeds may be going up in some geographies, but not in all.

Akamai tracks average connection speeds in their quarterly State of the Internet reports. The Q3 2012 report found that the global average connection speed was 2.8 Mbps, a 6.8% drop from the previous quarter. The report defines broadband connectivity as a connection of 4 Mbps or greater, and globally only 41% of users qualify as broadband. This means the majority of users have less than a 4 Mbps connection, and the size of images is having a significant impact on the end user experience. Lowering the quality of a JPEG from 90 to 70 can reduce the file size by up to 50%, and the majority of end users will not see any visible difference between the two objects. As long as mobile carriers enforce limits on data downloads, and until more than 50% of users are accessing from broadband connections, image optimization should still be implemented for web applications.

What about other things, though, besides text? What about something that isn’t at all text-like and can’t simply have repeated strings removed and re-added at will? Something like, say, an image file? Well, for images standard compression does very little. Frankly, relative to text, standard compression does very little for most binary files. So you have a web application that is image heavy, you’ve tuned your TCP profile, you’ve turned on compression and IBR, and you’re looking for even more performance. What’s left to do?
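As a rough illustration (again, nothing WebAccelerator-specific, and "photo.jpg" is just a placeholder for any JPEG you have lying around), running DEFLATE over markup and over an already-encoded image shows the difference:

```python
# Compare DEFLATE on HTML text vs. JPEG bytes. The JPEG barely shrinks
# because its data is already entropy-coded and looks essentially random
# to a string-matching compressor.
import zlib

with open("photo.jpg", "rb") as f:          # placeholder file name
    jpeg_bytes = f.read()

html_bytes = b"<div class='item'><span>Example</span></div>\n" * 2000

for label, data in (("HTML text", html_bytes), ("JPEG image", jpeg_bytes)):
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label}: {ratio:.1%} of original size after DEFLATE")
```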

Well, as of version 11.2, WebAccelerator has a new and more powerful than ever image optimization engine. This engine has two functions. The first is EXIF header removal. This is a lossless optimization: the quality of the image isn’t impacted, but the size is still reduced. EXIF header removal applies to JPEGs, particularly photographs, which carry a bunch of embedded data the browser does not need to render the image, such as the type of camera used, the focal length of the lens, the software used to edit, and so on. Removing this data can significantly reduce the size of the images without impacting quality. The other, more complex component addresses images in much the same way that gzip was designed to address text. It can more effectively compress images, but perhaps even more importantly, it can actually modify the image itself.
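If you want to see the EXIF effect for yourself outside of WebAccelerator, here is a minimal sketch using the Pillow library; the file names are hypothetical, and note that re-saving with Pillow re-encodes the pixels, so for strictly lossless stripping you would reach for a tool like jpegtran or exiftool instead:

```python
# Re-save a JPEG without carrying its EXIF blob along, then compare sizes.
import os
from PIL import Image

src = "camera_photo.jpg"            # hypothetical input
dst = "camera_photo_stripped.jpg"   # hypothetical output

img = Image.open(src)
# quality="keep" reuses the original quantization tables; the EXIF data is
# simply not passed through to the new file.
img.save(dst, "JPEG", quality="keep")

print(f"before: {os.path.getsize(src):,} bytes")
print(f"after:  {os.path.getsize(dst):,} bytes (metadata dropped)")
```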

With this feature you can not only compress images, but also do what is called “image scaling”. This lets you reduce the overall image quality by whatever amount you are comfortable with in order to increase end user performance, sometimes dramatically. The reality is that a huge portion of the images used by web applications were designed with an eye toward appearance, not performance. Some tweaking was likely done to make them web acceptable, but if performance is truly your primary concern, you may want to de-tune the quality a bit more yourself. This is especially true since the vast majority of users would never notice image quality reduced by a small amount, but they just might notice the performance difference.
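Here is a back-of-the-napkin way to see that trade-off for a single image, again using Pillow rather than WebAccelerator itself; "hero_image.jpg" and the quality values are purely illustrative:

```python
# Re-encode the same image at several quality settings and compare sizes.
import io
from PIL import Image

img = Image.open("hero_image.jpg")   # hypothetical source image

for quality in (90, 80, 70):
    buf = io.BytesIO()
    img.save(buf, "JPEG", quality=quality)
    print(f"quality {quality}: {buf.tell():,} bytes")

# In many cases the drop from 90 to 70 roughly halves the payload while
# remaining visually indistinguishable to most users.
```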

The image optimization engine in WA makes this amazingly easy. You can tune by image type, to specific values, to ensure you get the exact balance of performance vs. appearance that you want. You can even convert images to a particular format if you so choose, though that feature warrants extensive testing, as images can often display differently once converted. Still, it has proved useful in some specific cases.
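To give a feel for what per-type tuning and format conversion look like conceptually, here is a standalone Pillow sketch; this is not the WebAccelerator policy configuration, and the settings table and file names are purely illustrative:

```python
# Apply a different target quality per source type, converting PNGs to JPEG.
import os
from PIL import Image

SETTINGS = {
    ".jpg": {"format": "JPEG", "quality": 75},
    ".png": {"format": "JPEG", "quality": 80},   # conversion: PNG -> JPEG
}

def optimize(src: str, dst: str) -> None:
    ext = os.path.splitext(src)[1].lower()
    opts = SETTINGS.get(ext)
    if opts is None:
        return                                   # leave unknown types alone
    img = Image.open(src).convert("RGB")         # JPEG has no alpha channel
    img.save(dst, opts["format"], quality=opts["quality"])

optimize("banner.png", "banner_optimized.jpg")   # hypothetical files
```

As the article notes, conversions like PNG-to-JPEG deserve careful testing, since transparency and rendering can change once the format does.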

In short, WebAccelerator’s image optimization engine gives you the ability to increase performance, which directly improves user experience, without adding any more complexity or strain to your server environment. If you’re looking for more speed and you’re already reducing round trips, caching as much as possible, compressing text content, and have your TCP tuning set up properly, this is the next logical step as far as I’m concerned.