I just got back from the Velocity Web Performance and Operations Conference, and I’m excited to share with everyone some of the performance and operations features I’ve added to Fiddler over the last year. This is the first in a series of posts on this topic.
To improve the load time of your website, decreasing the amount of time spent on the network is crucial. There’s no better way to do that than to make sure all of your resources are cached appropriately, and Fiddler’s Caching Response Inspector can help explain how browsers will cache your resources.
Of course, optimizing your resources’ cacheability only helps when a user revisits your site—to reduce the first-visit load time, you need to use fewer resources, and/or make those resources smaller.
After you’ve stripped away everything you can (even when using a popular library like jQuery), your next step, as we know from YSlow and High Performance Websites, is using gzip to compress your script, CSS, and HTML.
Unfortunately, like many best practices, the simple “Gzip Components” directive is a bit too simple. Many web developers simply tick a box on their server and call it done; frankly, they’re still doing far better than those who forget to enable compression at all! The problem is that not all gzip tools are created equal. The GZIP format internally uses the DEFLATE compression format described in RFC 1951; this format combines the LZ77 algorithm with Huffman encoding. While these formats are fascinating to study on their own, all you really need to know is that the compression ratio achieved varies widely based on the parameters and quality of the compressor. The popular gzip utility on Linux, for instance, allows you to specify the -1 argument for fastest compression, or the -9 argument for maximum compression.
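You can see this variability for yourself without leaving Python: the standard zlib module exposes the same DEFLATE compression levels, and the fast and maximum settings produce streams of different sizes that nonetheless inflate back to identical bytes. (This is just an illustrative sketch; the payload below is made-up sample markup.)

```python
import zlib

# Repetitive text compresses well, much like typical HTML/CSS/JS.
data = b"<div class='item'>Lorem ipsum dolor sit amet</div>\n" * 200

fast = zlib.compress(data, 1)   # fastest setting, roughly gzip -1
best = zlib.compress(data, 9)   # maximum effort, roughly gzip -9

print(len(data), len(fast), len(best))

# Both settings are valid DEFLATE: they inflate to the same original bytes.
assert zlib.decompress(fast) == zlib.decompress(best) == data
```

The decompressor neither knows nor cares which effort level produced the stream—only the size (and the time spent compressing) differs.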
However, even gzip -9 does not achieve the full potential of DEFLATE. Recognizing this, researchers at Google created a new compressor called Zopfli. Zopfli trades a massive increase in compression time (~80x longer) for an improvement of between 3% and 8% in the final file size. The resulting file is fully compatible with all DEFLATE decompressors, and it decompresses just as quickly as streams compressed by lesser compressors. In their paper, Google reports the following data when compressing huge data sets:
Just looking at my Fiddler Session List right now, there are clear opportunities to use Zopfli to shave bytes:
See some other “real world” results here.
Some readers might think: meh, a few KB here and there? Who cares? Let me answer that for you—everyone! Performance is one of the very few universal goods in software—everyone appreciates faster software and websites.
If your site delivers a static file a million times per day, saving one KB per download will conserve nearly a gigabyte of bandwidth, every single day.
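The back-of-the-envelope math is straightforward (the request volume here is just the hypothetical figure from the sentence above):

```python
# Hypothetical traffic figures for the savings estimate above.
requests_per_day = 1_000_000   # one million downloads of a static file
bytes_saved = 1024             # one kilobyte shaved off by better compression

daily_savings_gib = requests_per_day * bytes_saved / 1024**3
print(f"~{daily_savings_gib:.2f} GiB saved per day")
```

And that is for a single resource; multiply across every script and stylesheet you serve and the savings add up quickly.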
Fiddler’s Transformer tab allows you to easily see the impact of compression on the size of a resource. By default, it includes a DEFLATE implementation on par with the gzip -9 option. However, if you download Zopfli.exe and put it inside a Tools subfolder in the Fiddler installation directory:
… the next time Fiddler is started, a new Use Zopfli checkbox appears on the Transformer tab:
By ticking this box, you can determine the improvement that using Zopfli will yield:
When every byte counts, using Zopfli to compress your static resources is absolutely the way to go. You can easily integrate Zopfli.exe in your build/deploy scripts to help ensure that users have the fastest possible experience on your website.
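As a rough sketch of what that integration might look like, a deploy step could shell out to the Zopfli binary when it is available and fall back to standard maximum-effort gzip otherwise. The binary name `zopfli` on the PATH, and its behavior of writing `<file>.gz` next to the input, are assumptions about your environment—adjust for your own build setup:

```python
import gzip
import shutil
import subprocess
from pathlib import Path

def compress_asset(path: Path) -> bytes:
    """Return a gzip-compatible stream for one static asset.

    Prefers the Zopfli CLI when it is on the PATH (assumed to write
    `<path>.gz` alongside the input); otherwise falls back to the
    standard zlib-based gzip at its maximum compression level.
    """
    if shutil.which("zopfli"):
        subprocess.run(["zopfli", str(path)], check=True)
        return Path(str(path) + ".gz").read_bytes()
    return gzip.compress(path.read_bytes(), compresslevel=9)
```

Either way the output is an ordinary gzip stream, so every browser (and every standard DEFLATE decompressor) can inflate it with no changes on the client side.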
PS: Because many file formats (PNG, WOFF, etc.) internally use DEFLATE, reencoding the data streams within such files using Zopfli can prove very effective. Google recently shrank their WOFF font library by an average of 6% by reencoding the internal DEFLATE streams using Zopfli. Unfortunately, reencoding non-textual files is somewhat trickier than simply running Zopfli.exe; it’s a topic I’ll cover in a future post.