When you’re trying to squeeze every last millisecond of performance from your website, ensuring that it uses the network as efficiently as possible is critical. Most sites spend most of their load time waiting for the network, so sending fewer bytes is one great way to ensure that your pages load as quickly as possible.
For many webpages, images account for the largest share of bytes transferred; you should ensure that you send no more images than necessary, and that each image is configured to cache appropriately if it will ever be reused. Beyond those simple steps, you should also ensure that your images are encoded as efficiently as possible.
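On the caching side, a response's headers tell you whether an image can be reused. Here is a minimal Python sketch of that check; the helper name and the heuristic are mine, and real browser caching rules (heuristic freshness, Vary, revalidation) have considerably more nuance:

```python
def is_long_cacheable(headers, min_age=86400):
    """Rough check: does this response look cacheable for at least min_age seconds?"""
    cc = headers.get("Cache-Control", "").lower()
    if "no-store" in cc or "no-cache" in cc:
        return False
    for directive in cc.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            try:
                return int(directive.split("=", 1)[1]) >= min_age
            except ValueError:
                return False
    # No max-age directive: fall back to the presence of an Expires header.
    return "Expires" in headers
```

Running this over every image response in a capture quickly surfaces assets that will be re-downloaded on every visit.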
I’ve previously written about image optimization, but in today’s post I want to show how you can quickly identify inefficiently-encoded image files with Telerik Fiddler. The new Fiddler Custom Columns feature allows you to add columns with data about image responses; simply right-click the column headers and choose Customize Columns:
In the Collection dropdown, choose Miscellaneous to find the image-related fields you can display as columns. The ImageRGB and ImageFingerprint fields are useful for finding similar or duplicated images (even if they were resized). The ImageDimensions and PixelCount fields are useful for finding your largest images. When looking for inefficiently-encoded images, however, the recently-introduced Bytes/Pixel field is the most useful choice.
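If you aren't using Fiddler, you can compute the same ratio yourself from a response's body size and the image's dimensions. A minimal sketch (the helper names and the 4-bytes/pixel threshold default are mine, per the rule of thumb discussed below):

```python
def bytes_per_pixel(body_size, width, height):
    """Response body size divided by the image's pixel count."""
    return body_size / (width * height)

def looks_bloated(body_size, width, height, threshold=4.0):
    # 4 bytes/pixel would hold *uncompressed* 32-bit RGBA, so any
    # compressed format exceeding that deserves investigation.
    return bytes_per_pixel(body_size, width, height) > threshold
```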
All of the popular raster web image formats utilize compression internally, and for all practical purposes, 32-bits-per-pixel is sufficient for perfect visual fidelity. Thus, any image using over 4 bytes per pixel should immediately be considered suspect. There are four common scenarios for seeing high numbers of bytes-per-pixel:
Of these, issue #4 appears to be extremely prevalent, especially for PNG files. That’s because PNG is the native file format for several popular image editing tools (including major Adobe products) and these products store their editing data within the file. The expectation is that the graphic designer will use the tool’s “Save for Web” feature to export an image stripped of all of the unnecessary data, but all too often this step is skipped and the bloated asset is instead published to the live website.
For instance, consider the image at http://a.fsdn.com/sd/sf-logo.png. This 192x32 PNG image is 25760 bytes, for a bytes/pixel ratio of 4.193. Using the Fiddler ImageView analyzer, you’ll find that 67% of the bytes of this response contain XML metadata, and the file also includes a 2.6kb color-correction profile which is very likely unneeded:
TweakPNG shows that only 5550 bytes of the 25760 byte file contain the compressed pixel information (the IDAT chunk):
If you’d like, you can even delete the unneeded chunks (e.g. iTXt, iCCP) directly in TweakPNG, save the result, and update your site to serve the smaller PNG. You should also consider using an optimizer that can yield higher compression ratios for the actual image data.
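If you'd rather script this kind of cleanup, PNG's chunked structure is simple to walk: after an 8-byte signature, each chunk is a 4-byte big-endian length, a 4-byte type, the data, and a 4-byte CRC. A minimal sketch (function names are mine) that lists a file's chunks and copies it without common ancillary metadata chunks:

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def iter_chunks(data):
    """Yield (type, offset, total_size, body) for each chunk in a PNG."""
    assert data[:8] == PNG_SIG, "not a PNG"
    pos = 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8].decode("ascii")
        body = data[pos + 8:pos + 8 + length]
        yield ctype, pos, length + 12, body   # 12 = length, type, and CRC fields
        pos += length + 12
        if ctype == "IEND":
            break

def strip_chunks(data, drop=("tEXt", "zTXt", "iTXt", "iCCP", "tIME")):
    """Copy a PNG, omitting the named ancillary chunks."""
    out = bytearray(PNG_SIG)
    for ctype, offset, total, _ in iter_chunks(data):
        if ctype not in drop:
            out += data[offset:offset + total]
    return bytes(out)
```

Note that this only removes metadata; a dedicated optimizer can additionally recompress the IDAT data itself.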
Small images tend to suffer the most extreme overhead. For instance, http://winsupersite.com/sites/all/themes/winsupersite/images/contact-icon.png is a 16x16 PNG weighing in at 47kb, a whopping 185 bytes per pixel; only 459 bytes of the image (just under 1%) are useful image data. Another common offender is the 1x1 tracking pixel that many sites use: because of the overhead in the file headers, most of these one-pixel images weigh in at around 50 bytes, although you can get down to 26 bytes if you want.
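To see why tiny images carry so much relative overhead, you can build a valid 1x1 PNG by hand with nothing but the Python standard library; virtually every byte is fixed framing rather than pixel data (a sketch for illustration, not production code):

```python
import struct
import zlib

def chunk(ctype, body):
    """A PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

def tiny_png():
    """A valid 1x1 8-bit grayscale PNG built by hand."""
    sig = b"\x89PNG\r\n\x1a\n"                                           # 8 bytes
    ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))  # 25 bytes
    idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 gray pixel
    iend = chunk(b"IEND", b"")                                           # 12 bytes
    return sig + ihdr + idat + iend

# Even though the image holds a single pixel, the file is dozens of bytes:
# the signature, IHDR, IEND, and the zlib/CRC framing around IDAT dominate.
```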
While PNGs are one of the top culprits, they’re not the only ones. One 24x24 profile JPEG I encountered on Twitter weighs in at 75820 bytes, a hefty 131 bytes per pixel. Stripping metadata and reencoding this JPEG efficiently (even at full quality) drops it to a much more reasonable 1014 bytes. There are many such optimizers available, but I like RIOT Optimizer because it has a simple GUI and it’s easily added to Fiddler.
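If you want to automate the metadata-stripping part rather than use a GUI tool, a JPEG's header is also easy to walk: each marker segment begins with 0xFF plus a marker byte, followed by a 2-byte length. A hedged sketch (function name is mine) that drops APPn/COM metadata segments while copying the compressed image data through untouched; a real optimizer like RIOT also re-encodes the image data itself:

```python
import struct

def strip_jpeg_metadata(data):
    """Remove APP1..APP15 (EXIF, XMP, ICC, ...) and COM segments from a JPEG.

    Everything from the SOS marker onward (the entropy-coded image data)
    is copied through verbatim.
    """
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    pos = 2
    while pos < len(data):
        assert data[pos] == 0xFF, "corrupt segment"
        marker = data[pos + 1]
        if marker == 0xDA:                      # SOS: copy the rest as-is
            out += data[pos:]
            break
        (seglen,) = struct.unpack(">H", data[pos + 2:pos + 4])
        # Keep APP0 (JFIF) and all structural segments (quant tables, SOF, ...);
        # drop APP1..APP15 and COM.
        if not (0xE1 <= marker <= 0xEF or marker == 0xFE):
            out += data[pos:pos + 2 + seglen]
        pos += 2 + seglen
    return bytes(out)
```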
Fiddler does not currently have analyzers for SVG files, but Fiddler will allow you to see whether they’re served with gzip Content-Encoding, and you can use the SyntaxView Inspector to scan for editing metadata that should be stripped.
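As a stopgap, you can scan SVG sources yourself for telltale editor metadata. The marker list below is illustrative, not exhaustive; namespaces vary by tool:

```python
import re

# Patterns that commonly indicate editor metadata left in an exported SVG.
EDITOR_MARKERS = [
    r"<metadata[\s>]",   # RDF/Dublin Core metadata blocks
    r"\bsodipodi:",      # Inkscape
    r"\binkscape:",      # Inkscape
    r"\bi:pgf",          # Adobe Illustrator private data
    r"<!--",             # comments
]

def find_editor_metadata(svg_text):
    """Return the marker patterns present in the SVG source, if any."""
    return [p for p in EDITOR_MARKERS if re.search(p, svg_text)]
```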
When browsing with the Bytes/Pixel column enabled, you’ll sometimes find some surprising results.
Back in August, I noted that Twitter was appending anywhere from dozens to thousands of junk (0x20) bytes to the end of the JPEG and PNG files they serve for users’ profile images. It turns out that, although this padding is technically invalid, Twitter added it deliberately as an information-hiding mechanism. The HTTPS protocol encrypts all of the data transferred over it, but it doesn’t hide the length of that data. Because Twitter profile pictures can appear in predictable patterns on third-party pages, an otherwise “blind” network attacker could infer which pages a user was visiting based on the lengths of the responses from Twitter’s image server. To thwart this, Twitter selected a number of common byte-length thresholds and pads images up to those sizes. For instance, a 10174-byte image is padded with 6124 bytes of data to yield a 16298-byte response, while an 11502-byte image is padded with 4796 bytes to yield a matching 16298-byte response. Fiddler now detects this padding at the end of PNG files and surfaces it:
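Detecting this sort of padding yourself is straightforward, because a well-formed PNG ends with a fixed 12-byte IEND chunk; anything after it is slack. A minimal sketch (the helper name is mine):

```python
# A zero-length IEND chunk: length 0, type "IEND", and its fixed CRC.
IEND = b"\x00\x00\x00\x00IEND\xaeB`\x82"

def trailing_padding(data):
    """Bytes appended after the PNG's IEND chunk, or -1 if no IEND is found."""
    end = data.rfind(IEND)
    if end == -1:
        return -1
    return len(data) - (end + len(IEND))
```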
Unfortunately for Twitter, it seems that their Performance team didn’t know that their security team had undertaken this padding step, so they mistakenly enabled GZIP compression for these images. That change improves performance, but circumvents the length-blinding padding effort.
Eric Lawrence (@ericlaw) has built websites and web client software since the mid-1990s. After over a decade of working on the web for Microsoft, Eric joined Telerik in October 2012 to enhance the Fiddler Web Debugger on a full-time basis. With his recent move to Austin, Texas, Eric has now lived in the American South, North, West, and East.