Looking at traffic stats and Google’s mod_pagespeed, I thought of a way to optimize my blog a bit: replace some of the small images with inline data: URLs. According to Wikipedia this works in all modern web browsers. IE7 is out of luck, but it works (with limits) in IE8.
For example, this Creative Commons logo is a 1082 byte GIF. Small, but there’s a lot of overhead in doing an HTTP request just for that one image used in one place on my page. Fetching it with wget takes 11 packets and 3 or 4 round trips. (It’s a little better in a real browser with HTTP keepalives.)
As a base64 data: URL it’s only 1467 bytes. That adds 1-2 TCP packets to the request and 0-1 round trips. The downside of inlining is that the image is now served with every single HTML page. There’s no caching, and bots have to pay for it too. In the end I think it’s more work for my server and less work for the client, which is a good tradeoff for small images. (It may even be less work for the server overall, since there are fewer client requests per page.)
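The size math checks out: base64 expands every 3 bytes of input into 4 characters of output, and the URL carries a short scheme-and-MIME prefix on top. A quick sketch of the arithmetic (the helper name is mine, not from any library):

```python
def data_url_size(raw_len, mime="image/gif"):
    # base64 turns each 3-byte group into 4 chars, padding up to a multiple of 4
    b64_len = 4 * ((raw_len + 2) // 3)
    # plus the "data:<mime>;base64," prefix
    return len(f"data:{mime};base64,") + b64_len

print(data_url_size(1082))  # 1466, in line with the ~1467 bytes observed
```

So a roughly 35% size penalty over the raw GIF, paid in exchange for skipping a whole request.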
I arbitrarily decided 3k was the cutoff for converting to data URLs. That results in 5 images:
I converted the images via this online encoder. I’m too lazy to measure the actual impact of the change. The HTML size of my blog went up 10k (46k vs 36k), but I’m making 4 fewer HTTP requests (16 vs 20) and not loading about 7k worth of image binaries. I’m optimistic.
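For anyone who’d rather not paste files into an online encoder, the same conversion is a few lines of Python. A minimal sketch (the function name is mine):

```python
import base64
import mimetypes

def to_data_url(path):
    # Guess the MIME type from the file extension; fall back to a generic type
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{payload}"

# Drop the result straight into an <img> tag:
# <img src="data:image/gif;base64,R0lGOD..." alt="cc license">
```

Run it over each image under the cutoff and paste the output into the template.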