I have to say, I’m finding the size of the SVG graphs to be more and more painful as time goes on. Take, say, this perfectly ordinary graph:
The visible part of the graph is not that large or complex (because it’s truncated, with an x-min of 2020-06-15), and as a PNG the image is 24KB. As an SVG, however, it’s 30 times larger, at 720KB.
This is quite painful. I don’t know why my connection to bmndr.s3.amazonaws.com is so slow, but this is the result:
(Perhaps it’s a matter of the region the bucket is in? I’m probably on the other side of the world from it. I don’t think it’s a problem with my internet connection: fast.com shows a download speed of 3.5MB/s. And even if it were a slow connection, the fact remains that in practice, the larger SVGs are painfully slow for me.)
By the way: gzipping this particular SVG reduces it from 720KB all the way down to 76KB. And brotli compression yields an even better result, weighing in at only 48KB.
If I optimize the SVG with svgo (the command line version of the same tool @phi was talking about), that reduces the uncompressed size to 134KB; or 16KB gzipped; or 12KB compressed with brotli! (Half the size of the PNG version.)
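For the curious, here’s roughly how you could reproduce those numbers yourself. This is just a sketch, assuming Node.js with svgo installed (`npm install svgo`); zlib is built in, and `graph.svg` is a stand-in for the actual downloaded graph file:

```ts
// measure-svg.ts — rough sketch of the size measurements above.
import { readFileSync } from "node:fs";
import { gzipSync, brotliCompressSync } from "node:zlib";
import { optimize } from "svgo";

const raw = readFileSync("graph.svg", "utf8");
// Run svgo's optimizer over the SVG markup (multipass squeezes a bit more out).
const optimized = optimize(raw, { multipass: true }).data;

const kb = (data: Buffer | string) =>
  `${(Buffer.byteLength(data) / 1024).toFixed(0)}KB`;

for (const [label, svg] of [["raw", raw], ["svgo", optimized]] as const) {
  console.log(
    `${label}: ${kb(svg)} plain, ${kb(gzipSync(svg))} gzipped, ${kb(brotliCompressSync(svg))} brotli`
  );
}
```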
So at the very least, it seems compressing it would be a really big win. For graphs that are harder to compress, there are other optimization tricks you could try: for instance, since the PNGs are quite lightweight, a decent strategy might be to load the PNG first, fetch the SVG in the background with JavaScript, and swap the PNG out for the SVG once it has loaded.
This uses a tiny bit more bandwidth (loading both the PNG and the SVG), but the page feels much faster to load, even in cases where the SVG is large after compression.
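Here’s a minimal sketch of what that swap could look like. It assumes each graph is rendered as something like `<img class="graph" src="graph.png" data-svg="graph.svg">`; the class name and `data-svg` attribute are made up for illustration:

```ts
// Upgrade each graph image from its PNG to its SVG once the SVG has loaded.
function upgradeGraphs(): void {
  document
    .querySelectorAll<HTMLImageElement>("img.graph[data-svg]")
    .forEach((img) => {
      const svgUrl = img.dataset.svg!;
      // Fetch the SVG in the background; the PNG stays visible meanwhile.
      const loader = new Image();
      loader.onload = () => {
        img.src = svgUrl; // Swap only once the SVG is fully loaded.
      };
      loader.src = svgUrl;
    });
}

document.addEventListener("DOMContentLoaded", upgradeGraphs);
```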
I’d also look into how the S3 bucket is configured. If I’m right that the slowness is a matter of me being very far from the AWS region where the bucket is hosted, there are ways to improve this. You could set up some sort of cross-region replication to host the graphs in multiple regions around the world, or better yet, you could put CloudFront (or another CDN) in front of the bucket.
CloudFront, being AWS’s CDN, integrates really well with S3. It’s easy to set up a CloudFront distribution that just serves content out of an S3 bucket, and you can also turn on things like automatic gzipping. The pricing works out fairly similar too: by my calculations, S3 and CloudFront charge about the same for network transfer at the file sizes we’re talking about here.
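For what it’s worth, here’s roughly what that looks like with the AWS CDK. This is a sketch, not something tested against your actual setup: the stack and construct names are made up, and “bmndr” is just my guess at the bucket name from the bmndr.s3.amazonaws.com URL:

```ts
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";

export class GraphStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Reference the existing bucket rather than creating a new one.
    const bucket = s3.Bucket.fromBucketName(this, "GraphBucket", "bmndr");

    new cloudfront.Distribution(this, "GraphCdn", {
      defaultBehavior: {
        origin: new origins.S3Origin(bucket),
        // Serve gzip/brotli automatically to clients that accept it.
        compress: true,
        cachePolicy: cloudfront.CachePolicy.CACHING_OPTIMIZED,
      },
    });
  }
}
```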
I strongly suspect (but can’t prove) that using CloudFront would help a lot. And of course, you could do some or all of these optimizations at once: they should all play well with one another.