I instrumented the Leaflet version of my vector tile client to show me the size of the vector tiles for the page. Chrome developer tools will do this measurement, but it’s easier to have it in console.log. All sizes are uncompressed and all times are uncached, and the data is my relatively limited set of rivers (based on Strahler filtering). I intend to add more data to the vector tiles; I’m measuring this as a baseline.
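For reference, here’s a minimal sketch of the kind of instrumentation I mean, written against today’s fetch API and assuming a hypothetical loadTile() wrapper around the tile requests; it’s not the actual client code.

    // Minimal sketch of per-page tile accounting. loadTile() is a
    // hypothetical wrapper; the real client's loading code differs.
    let totalBytes = 0;
    let tilesLoaded = 0;
    const pageStart = performance.now();

    function loadTile(url) {
      return fetch(url)
        .then(response => response.text())
        .then(text => {
          totalBytes += text.length;  // uncompressed size; ~bytes for ASCII GeoJSON
          tilesLoaded += 1;
          const elapsed = performance.now() - pageStart;
          console.log(`${tilesLoaded} tiles, ${Math.round(totalBytes / 1024)}kB, ` +
                      `${Math.round(elapsed)}ms since page start`);
          return JSON.parse(text);
        });
    }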
All measurements were taken with a browser window of about 1000 × 1000 pixels, loading about 25 tiles per view. I slowly zoomed in on St. Louis, a fairly dense area to load.
    Size     Time     View                  Cutoff
    1113kB   1237ms   4/38.89/-98.75        strahler >= 6
    1117kB   1494ms   5/37.892/-94.241
    1741kB   1798ms   6/38.100/-92.230      strahler >= 5
     830kB   1047ms   7/38.376/-91.214
     334kB    479ms   8/38.565/-90.722
     283kB    427ms   9/38.6340/-90.4738    strahler >= 4
     239kB    332ms   10/38.7701/-90.3962   strahler >= 3
      90kB    233ms   11/38.8445/-90.5332
      60kB    301ms   12/38.8691/-90.5842   strahler >= 2
      53kB    686ms   13/38.8697/-90.6169   strahler >= 1
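Written out as code, the cutoff column amounts to a zoom-to-minimum-Strahler mapping like the sketch below. This is hypothetical, reconstructed from the table; the actual filtering happens when the tiles are generated, not in the client.

    // Hypothetical mapping from zoom level to minimum Strahler number,
    // reconstructed from the table above. The real thresholds are applied
    // server-side when the tiles are built (and have since been adjusted).
    function minStrahlerForZoom(z) {
      if (z <= 5) return 6;
      if (z <= 8) return 5;
      if (z <= 9) return 4;
      if (z <= 11) return 3;
      if (z <= 12) return 2;
      return 1;
    }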
I guess I knew that the map got lighter weight as I zoomed in, but not by how much. Too bad; I’m most excited about loading data in the zoomed-out views. Still, looking at this I can move the cutoffs some and show more rivers. Update: I have since adjusted some of the thresholds between z=8 and z=13.
A large tile like 4/3/5.json gzips from 384k to 92k. A smaller tile like 8/40/97.json gzips from 29k to 7k. So let’s say GeoJSON gzips to about 25% of its original size; the heaviest view above is about 1.7MB, so the biggest views will take roughly 450–500k of data on the wire to download. That’s pretty reasonable.
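Checking a ratio like that yourself is easy with Node’s built-in zlib module; this is just a sanity-check sketch, with the tile path taken from above.

    // Sanity check of the gzip ratio on one tile file, using Node's zlib.
    const fs = require('fs');
    const zlib = require('zlib');

    const raw = fs.readFileSync('4/3/5.json');   // tile path from above
    const gzipped = zlib.gzipSync(raw);
    console.log(`${raw.length} bytes -> ${gzipped.length} bytes ` +
                `(${Math.round(100 * gzipped.length / raw.length)}%)`);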
A whole separate question is render time. The Leaflet render is really slow and I don’t know why. The new D3 client is so fast it gives me some hope.