Looking at coronavirus.data.gov.uk
Update 5th May: They have removed stuff they don’t need from the JSON, so it is now down to 700K (3.3MB total, still up from when I originally wrote this blog post). Hopefully they will get gzipping working on it soon; I’ve lost broadband today.
Update 29th April: They have added a 33K Markdown parser to the page, to convert the About text (written in Markdown) into HTML in the browser. I instead use a server-side Markdown parser, converting the Markdown to HTML once.
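For illustration, that one-off conversion might look something like this. This is a minimal sketch in Node; the marked package and the file names are my assumptions, not what either site actually uses:

```js
// One-off, build-time Markdown conversion, so no parser ships to the browser.
// Sketch only: "marked" and the file names are assumptions, not the real setup.
const fs = require('node:fs');
const { marked } = require('marked');

const html = marked.parse(fs.readFileSync('about.md', 'utf8'));
fs.writeFileSync('about-fragment.html', html);
```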
Update 28th April: They have added lower-tier and male/female data to the JSON data, adding over a megabyte to the JSON file request; it's now 1633K, for a total transfer of 4.2MB. Adding the same feature to my site increased it by 2K: a new HTML table and a bit of inline JS for the chart.
The UK government already has a lot of good guidance on this, e.g. Building a resilient frontend using progressive enhancement and How to test frontend performance, plus a lovely blog post by Matt Hobbs, Why we focus on frontend performance. Having that guidance followed by everyone, across all the variations of procurement and delivery, is however much trickier than publishing it, in my book!
I have just loaded the page (on the afternoon of 26th April 2020) on my laptop, and here is the Network tab of the Developer Tools in Firefox:
For those of you who haven't seen one before, this shows each resource requested for a webpage: where it came from, what triggered the request, what type of resource it is, its transfer size and actual size (the transfer may be compressed), and a waterfall display of the timings of the responses.
- That involves a 588K JSON file containing the data (not gzipped by the server, which would reduce it by about 90% – reported to them), and then three GeoJSON files for the map boundaries and circle positions (365K, although the site requests the third file twice; as the two requests are made simultaneously, the second does not get a cached copy from the first, making it 531K in total – also reported to them). Only one GeoJSON file is needed for the initial display.
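As a quick illustration of that 90% figure, you can measure the saving yourself. A sketch for Node 18+ (which has a global fetch); the URL is a placeholder, not the real endpoint:

```js
// Compare raw vs gzipped size of a JSON file, to estimate the saving
// that gzipping on the server would give. Node 18+; placeholder URL.
const zlib = require('node:zlib');

async function main() {
  const res = await fetch('https://example.org/data.json'); // placeholder
  const raw = Buffer.from(await res.arrayBuffer());
  const gz = zlib.gzipSync(raw);
  const saving = 100 * (1 - gz.length / raw.length);
  console.log(`raw ${raw.length} bytes, gzipped ${gz.length} bytes, saving ${saving.toFixed(1)}%`);
}

main();
```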
In total, on a cold page load of the official site, a desktop browser will transfer 3.19MB of data. On mobile the map is not displayed, so it “only” loads 1.33MB of data. Ignoring the data, the stuff necessary to display anything at all (even just a header and footer) comes to c. 768K.
| Resource | Total transfer size |
| --- | --- |
| Map overlay data | 531K |
| Map tile data | 272K |
My version can be seen at http://dracos.co.uk/made/coronavirus.data.gov.uk/. Please note that I am not saying any of the below is “best practice”; I put this together in a few hours on a Sunday. It is the intent and the structure that I think are important. Here is a network diagram for my version:
My version transfers 407KB in total on load, including the needed data:
| Resource | My version | Official site | Difference |
| --- | --- | --- | --- |
| Map overlay data | 141K | 531K | -390K |
| Map tile/font data | 86K | 1369K | -1283K |

(40K map, 67K chart)
[I am not allowed to use the same font as GOV.UK, but if I were, mine would take 64K more to load it, the same as the official site. I've not included the font in the table above.]
What I changed, front end
- The map – well, I didn't want to spend time installing my own tiles or similar, so I used an existing free tileserver from Stamen. This is a raster tile server, but even ignoring the megabyte font used by the official site, the raster tiles are quite a bit smaller than the vector ones. In an overview situation such as this, if there isn't a particular need, raster tiles may well be a better choice than vector ones (see the sketch after this list).
- Map overlay data
- I set it up to only load the one GeoJSON file needed to display the on-load map. The other GeoJSON files are requested only when you click the tabs that switch the map to need that data. If you were worried about that load taking a while, you could instead e.g. fetch them in the background once the main page had loaded, or when someone starts to hover over the table, or something.
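To make those last two points concrete, here is roughly what that looks like. A sketch assuming Leaflet as the map library; the element IDs, file paths and layer names are made up for illustration:

```js
// Raster tiles plus lazily-loaded GeoJSON overlays, assuming Leaflet.
// IDs, paths and layer names are illustrative, not the site's real ones.
var map = L.map('map').setView([54.5, -3], 6);

// Raster tiles from Stamen's free tile server.
L.tileLayer('https://stamen-tiles-{s}.a.ssl.fastly.net/toner-lite/{z}/{x}/{y}.png', {
  attribution: 'Map tiles by Stamen Design, data © OpenStreetMap contributors'
}).addTo(map);

var layers = {}; // cache, so each GeoJSON file isn't re-fetched
var current;

function showLayer(name) {
  if (current) map.removeLayer(current);
  if (layers[name]) {
    current = layers[name].addTo(map);
    return;
  }
  fetch('/boundaries/' + name + '.geojson') // only fetched when first needed
    .then(function (r) { return r.json(); })
    .then(function (geojson) {
      layers[name] = L.geoJSON(geojson);
      current = layers[name].addTo(map);
    });
}

// Only the layer for the initial view is loaded up front...
showLayer('countries');

// ...the rest wait until their tab is clicked (or e.g. hovered, as above).
document.querySelectorAll('.map-tab').forEach(function (tab) {
  tab.addEventListener('click', function () { showLayer(tab.dataset.layer); });
});
```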
On the server, it currently works as follows. Every 10 minutes, it asks the official server whether the latest data JSON file has changed since it last fetched it. If it has, it downloads the new file, and generates new static HTML files by requesting a dynamic PHP file and saving the output to disc (originally, and if there were no traffic expectation or this blog post involved, you could simply have the PHP file produce the output directly as a normal index.php). The use of PHP is not important; you could generate the pages in any language, processing the data however you would wish. The important thing is to have a resilient static page as the output, with any client-side JavaScript reading the existing HTML to do its thing.
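For illustration, that flow might look like this. A minimal sketch in Node rather than PHP (since, as said, the language doesn't matter), run from cron every 10 minutes; all URLs and paths are placeholders:

```js
// Poll for new data and regenerate static HTML, per the description above.
// Node 18+; the URLs and paths are placeholders, not the real ones.
const fs = require('node:fs');

const DATA_URL = 'https://example.org/covid/data.json'; // placeholder
const STAMP = '/var/cache/covid/last-modified.txt';     // placeholder

async function main() {
  const last = fs.existsSync(STAMP) ? fs.readFileSync(STAMP, 'utf8') : '';

  // Conditional GET: a 304 means the data hasn't changed since last time.
  const res = await fetch(DATA_URL, {
    headers: last ? { 'If-Modified-Since': last } : {},
  });
  if (res.status === 304) return;

  fs.writeFileSync('/var/www/data.json', Buffer.from(await res.arrayBuffer()));
  fs.writeFileSync(STAMP, res.headers.get('last-modified') || new Date().toUTCString());

  // Render the dynamic page once, and save the output as static HTML.
  const page = await fetch('http://localhost/render.php'); // placeholder renderer
  fs.writeFileSync('/var/www/index.html', await page.text());
}

main();
```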
I am keeping abreast of changes made to the official site at present – text tweaks, the new stacked graph, and so on – but it's not something I wish to keep doing for a long time, unlike, say, traintimes.org.uk :) (performance notes on that are available too!). I have hopefully contributed something back: a couple of bug reports and one pull request (to make the table headers sticky). Sadly, making a slimline static version of the existing site using its own code was not really possible for an outsider to do, unlike small bugs/features, because it would involve team buy-in, server maintainability, and so on.