I know it's said a lot, but this seems like as good a time as any to show my appreciation for the performance of this very website, news.ycombinator.com! It's so good that I use it as my default sanity check of internet connectivity. I think it's actually slightly faster when logged out, but even logged in it's fantastic. It performs better than both the OP's website and my tiny personal site hosted on GitHub Pages.
By the way, here's a small anecdote that might be useful to some. I recently noticed that my own website was loading more slowly than I felt it should, so I investigated and found the culprit: some fonts and @font-face declarations I was loading from an external CDN. The fix was simply to move everything into my site's own assets and self-host it. How I discovered it: I opened the Network tab in Firefox's developer tools and reloaded the site, which showed me the loading timeline and each individual asset as it was requested and delivered, and then the problem was obvious.
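If anyone wants the same information without eyeballing the waterfall, the browser's standard Resource Timing API exposes it programmatically. Here's a minimal TypeScript sketch (paste it into the dev-tools console, dropping the type cast; the 200 ms cutoff is an arbitrary threshold I picked for illustration) that lists the slowest assets, which is exactly the kind of thing that would have pointed at those CDN fonts:

    // List every asset the page loaded, slowest first, using the
    // Resource Timing API (the same data the Network tab shows visually).
    const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];

    resources
      .map(r => ({ url: r.name, ms: Math.round(r.duration), type: r.initiatorType }))
      .filter(r => r.ms > 200)            // only the slow ones (arbitrary budget)
      .sort((a, b) => b.ms - a.ms)
      .forEach(r => console.log(`${r.ms}ms  ${r.type}  ${r.url}`));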
The Web pages of colleges and health care providers are usually limited by the back end in my experience, but modern Web apps can be the worst of both worlds: ClickUp, for example, is both very slow to load and sluggish once loaded. I've found myself going back to plain form submission instead of XHR as a small protest against the tide.
I wonder how much of Ferris' latency is related to network speed and the rendering speed / computational horsepower of their local machine. I've seen several evaluation regimes that try to objectively measure website "responsiveness" independent of network and rendering speed. Maybe the OP could use one of them to show how different sites use techniques that unduly hurt their perceived performance.
Think about what happens if you're on a site that loads hundreds of images and has keep-alive explicitly turned off. Or if you're doing a TLS handshake across a high-latency network (TLS, for all its benefits, requires at least one round trip to set up the secure transport before any "content" is sent). Or you have a weird / inefficient dynamic loading process. Each of these causes perceived latency, but each has a different solution.
I tend to agree with the OP; it certainly seems like sites I use on a regular basis are laggy. But we may need a more objective evaluation framework than "ugh, the web is slow." And who knows, maybe most of the problem could be solved by getting the OP a better ISP and a faster machine.
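As a small step toward something more objective, the Navigation Timing API breaks a page load into DNS, connection, TLS, time-to-first-byte and download phases, which helps separate the causes above. A rough TypeScript sketch (browser console again, minus the cast; the field names are the standard PerformanceNavigationTiming ones):

    // Break the page load into phases so you can see whether DNS, the
    // TCP/TLS handshake, the server, or the download is the slow part.
    const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];

    console.table({
      dns:      Math.round(nav.domainLookupEnd - nav.domainLookupStart),
      connect:  Math.round(nav.connectEnd - nav.connectStart),          // includes TLS
      tls:      nav.secureConnectionStart > 0
                  ? Math.round(nav.connectEnd - nav.secureConnectionStart)
                  : 0,
      ttfb:     Math.round(nav.responseStart - nav.requestStart),
      download: Math.round(nav.responseEnd - nav.responseStart),
      total:    Math.round(nav.loadEventEnd - nav.startTime),
    });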
This rant doesn't have any specifics, but... I've got 1G fiber again, and tons of web pages are still slow as heck.
A 1-2 second load for most users is not hard to hit if you care, and if most of your users aren't on 2G on the other side of the world from your hosting. At least for the pages people are likely to enter the site on.
The rant points to PageSpeed, which is a good start. If you serve your HTML in 200ms or less (measured on your server), have a reasonable TLS 1.2 or 1.3 setup, and address the easy fixes PageSpeed flags, you'll probably have a faster-than-average site.
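That "measured on your server" part is easy to instrument yourself. A minimal sketch using Node's built-in http module (TypeScript; the placeholder page, port, and 200ms budget are made up for illustration) that times HTML generation and reports it via the standard Server-Timing header, so the number also shows up in the browser's dev tools:

    import { createServer } from "node:http";

    // Time how long it takes to produce the HTML and expose it via the
    // Server-Timing header; warn when we blow the 200ms budget.
    const server = createServer((req, res) => {
      const start = process.hrtime.bigint();
      const html = "<!doctype html><title>hello</title><p>hello</p>"; // placeholder page
      const ms = Number(process.hrtime.bigint() - start) / 1e6;

      if (ms > 200) console.warn(`slow render: ${ms.toFixed(1)}ms for ${req.url}`);

      res.writeHead(200, {
        "content-type": "text/html; charset=utf-8",
        "server-timing": `render;dur=${ms.toFixed(1)}`,
      });
      res.end(html);
    });

    server.listen(8080);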
Sure, but think about the past thirty years of web browser development. Every time software developers make a faster browser with cool new features, content developers make content that uses up all that new capability. It's sort of the content equivalent of Wirth's Law (software is getting slower more rapidly than hardware is getting faster).
Developers almost always have reasonably beefy hardware (because the software they use requires plenty of memory and compute). Does the OP's observation imply there's a wider range of hardware out there than developers assume? Maybe the people building the pages they're complaining about assume everyone is on a kick-ass machine with the best GPU money can buy and a low-latency / high-bandwidth network. Maybe the real observation is that too many web developers don't consider users in more mundane circumstances.
Also... I use Lynx and EWW a lot. The web seems pretty zippy when you're ignoring the images and JavaScript. But yeah, that's not a general solution; too many sites require JavaScript to function.
It's not JS that's the problem per se; it's what JS is being loaded, and very often that's a metric ton of marketing crap, i.e. scripts for retargeting, tracking, and many other modern horrors.
And half the problem is not that there's one piece of software doing tracking; it's that there are fourteen different packages that different people added at some point, all doing similar things.
Yep, that too. Plenty of people have measured this over the last ten-ish years, and the marketing stuff outweighs the actual content by something like 250:1 at least. On some websites it was 10x that.
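If anyone wants a rough version of that ratio for a page they care about, the Resource Timing API can split first-party from third-party bytes. A TypeScript sketch (console, minus the cast): note that transferSize is 0 for cross-origin resources that don't send Timing-Allow-Origin, so treat the third-party total as a lower bound.

    // Rough tally of first-party vs third-party bytes on the current page.
    const here = location.hostname;
    let firstParty = 0;
    let thirdParty = 0;

    for (const r of performance.getEntriesByType("resource") as PerformanceResourceTiming[]) {
      const host = new URL(r.name).hostname;
      if (host === here || host.endsWith("." + here)) firstParty += r.transferSize;
      else thirdParty += r.transferSize;
    }

    console.log(`first-party: ${(firstParty / 1024).toFixed(0)} KiB`);
    console.log(`third-party: ${(thirdParty / 1024).toFixed(0)} KiB`);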