Original author here! Thanks for (re)sharing. We previously discussed this at length in https://news.ycombinator.com/item?id=36503983.
I think this article has "aged well" in the sense that... nothing has changed for the better :( Since I wrote it, I did upgrade my machine: I now have a 24-core 13th Gen i7 laptop with a fast NVMe drive and... well, Windows 11 is _still_ visibly laggy throughout. Comparing it to KDE on the same machine is like night and day in terms of general desktop snappiness (and yes, KDE has its own bloat too, but it seems to have evolved in a more "manageable" manner).
I've also gotten an M2 laptop for work since then, and same issue there: I remember how transformative the M1 felt at launch with everything being extremely quick in macOS, but the signs of bloat are _already_ showing up. Upgrading anything takes ages because every app is a monster that weighs hundreds of MBs, and reopening apps after a reboot is painfully slow. Still, though, macOS feels generally better than Windows on modern hardware.
About the article itself, I'll say that there was a complaint back then (and I see it now here too) about my blaming of .NET rewrites being misplaced. Yes, I'll concede that; I was too quick to write that, and likely wrong. But don't let that distract you from the rest of the article. Modern Notepad is inexplicably slower than older Notepad, and for what reason? (I honestly don't know and haven't researched it.)
And finally, I'll leave you with this other article that I wrote as a follow-up to that one, with a list of things that I feel developers just don't think about when writing software, and that inevitably leads to the issues we see industry-wide: https://jmmv.dev/2023/09/performance-is-not-big-o.html
The author mentions rewriting core applications in C# on windows but I don’t think this is the problem. Write a simple hello world app in c#, compile it and see how long it takes to run vs a rust app or a python script - it’s almost native. Unity is locked to a horrifically ancient version of mono and still manages to do a lot of work in a small period of time. (If we start talking JavaScript or python on the other hand…)
I agree with him though. I recently had a machine that I upgraded from Win10 to Win11 and it was like someone kneecapped it. I don’t know if it’s modern app frameworks, or the OS, but something has gone horribly wrong on macOS and windows (iOS doesn’t suffer from this as much for whatever reason IME)
My gut instinct is that the shift to everything being asynchronous, combined with development on zero-latency networks in isolated environments, means that when you compound “wait for windows defender to scan, wait for the local telemetry service to respond, incrementally async load 500 icon or text files and have them run through all the same slowness” with frameworks that introduce latency and context switching, and are thin wrappers that spend most of their time FFI’ing things to native languages, and then deploy them in non-perfect conditions, you get the mess we’re in now.
> Unity is locked to a horrifically ancient version of mono and still manages to do a lot of work in a small period of time
Unity is the great battery killer!
The last example I remember is that I could play the first xcom remake (which had a native mac version) on battery for 3-4 hours, while I was lucky to get 2 hours for $random_unity_based_indie with less graphics.
In my defence, I never said it was efficient! Games always suck for battery because they’re constantly rendering.
> incrementally async load 500 icon or text files and have them run through all the same slowness”
This really shouldn't be slower when done asynchronously compared to synchronously. I would expect, actually, that it would be faster (all available cores get used).
> I would expect, actually, that it would be faster (all available cores get used).
And I think this assumption is what's killing us. Async != parallel for a start, and parallel IO is not guaranteed to be fast.
If you write a function:

    async Task<ImageFile> LoadFile(string path)
    {
        var f = await load_file(path);
        return new ImageFile(f);
    }

And someone comes along and makes it into a batch operation;

    async Task<List<ImageFile>> LoadFiles(List<string> paths)
    {
        var results = new List<ImageFile>();
        foreach (var path in paths) {
            var f = await load_file(path);
            results.Add(new ImageFile(f));
        }
        return results;
    }

and provides it with 2 files instead of 1, you won't notice it. Over time 2 becomes 10, and 10 becomes 500. You're now at the mercy of whatever is running your tasks. If you yield alongside await [0] in an event loop, you introduce a loop iteration of latency in proceeding, meaning you've now introduced 500 loops of latency.

In case you say "but that's bad code", well yes, it is. But it's also very common. When I was reading for this reply, I found this stackoverflow [0] post that has exactly this problem.
[0] https://stackoverflow.com/questions/5061761/is-it-possible-t...
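To make the contrast concrete, here is a minimal sketch (mine, not from the comment above) of the batched alternative, reusing the hypothetical load_file and ImageFile from the example: all the loads are started up front and awaited as a group, so you don't pay the per-iteration scheduling latency 500 times in a row. This still isn't guaranteed to be fast; concurrency isn't parallelism and parallel IO can still be slow, but it at least avoids serializing the waits.

    // Sketch only: load_file and ImageFile are the hypothetical helpers from the
    // example above. Needs using System.Linq; and System.Threading.Tasks.
    async Task<List<ImageFile>> LoadFilesBatched(List<string> paths)
    {
        var tasks = paths.Select(path => load_file(path)).ToList();  // kick off every load first
        var files = await Task.WhenAll(tasks);                       // one combined wait
        return files.Select(f => new ImageFile(f)).ToList();
    }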
It depends how the application is written in C#. A lot of Modern C# relies on IoC frameworks. These do some reflection shenanigans and this has a performance impact.
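For anyone wondering what that looks like, here is a minimal sketch (my example; the comment doesn't name a framework, so I'm assuming Microsoft.Extensions.DependencyInjection, which needs the NuGet package of the same name). Building the provider and resolving services goes through reflection/factory machinery that a plain `new Greeter()` would skip, and larger apps repeat this for hundreds of registrations at startup.

    // Sketch assuming Microsoft.Extensions.DependencyInjection (not named in the comment).
    using Microsoft.Extensions.DependencyInjection;

    public interface IGreeter { string Greet(); }
    public class Greeter : IGreeter { public string Greet() => "hello"; }

    public static class Program
    {
        public static void Main()
        {
            var services = new ServiceCollection();
            services.AddTransient<IGreeter, Greeter>();

            // Building the provider and the first GetRequiredService call pay for
            // reflection/factory work that a direct constructor call would not.
            using var provider = services.BuildServiceProvider();
            var greeter = provider.GetRequiredService<IGreeter>();
            System.Console.WriteLine(greeter.Greet());
        }
    }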
This is literally what I said:
> The author mentions rewriting core applications in C# on windows but I don’t think this is the problem. Write a simple hello world app in c#, compile it and see how long it takes to run vs a rust app or a python script - it’s almost native <...>
> My gut instinct is that the shift to everything being asynchronous, combined <...> with frameworks that introduce latency and context switching, and are thin wrappers that spend most of their time FFI’ing things to native languages, and then deploy them in non-perfect conditions, you get the mess we’re in now.
Back then, programmers had to care about performance. The field of programming was less accessible, so the skills needed to clear the barrier to entry were higher, and people were, on average, better programmers. The commercial incentives of today to reach market with something half-assed and then never fix it don’t help.
In 2002 I ran OpenBSD on my laptop (thus sacrificing wifi). The memory footprint of running X11, a browser, a terminal, and an editor: 28MB
Browsers are the big problem. Security and compatibility push upgrades to the latest, very heavy, ones.
There was plenty of software that ran like absolute garbage “back then” but OS’s didn’t.
My take on it - performance decays when engineering management doesn’t prioritize it.
Modern example: Laptops boot in seconds. My servers take about 5 minutes to get to Linux boot, with long stretches of time taken by various subsystems, while Coreboot (designed to be fast) boots them nearly as quickly as a laptop.
Old example: early in my career we were developing a telecom system with a 5 min per year (5 9s) downtime target. The prototype took 30 minutes to boot, and engineers didn’t care because management hadn’t told them to make it boot faster. It drove me nuts. (a moot point, as it eventually got cancelled and we all got laid off)
> Notepad had been a native app until very recently, and it still opened pretty much instantaneously. With its rewrite as a UWP app, things went downhill. The before and after are apparent, and yet… the app continues to be as unfeatureful as it had always been. This is extra slowness for no user benefit.
We now have HUGE (/s) advancements in Notepad, like tabs and uh... Copilot
Don't forget dark mode!
I remember Windows 3.1 where I could change not only the color of the buttons, but the color of the light and shadow edge.
More than CPU speed, I think the increase in storage and RAM is to blame for the slow decay in latency. When you have only a few KB/MB of RAM and storage, you can't really afford to add much more to the software than the core feature. Your binary needs to be small, which leads to faster loading into RAM, and to do less, which means fewer things to run before the actual program.
When size is not an issue, it's harder to say no when the business demands a telemetry system, an auto-update system, a crash handler with automatic reporting, and a bunch of features, a lot of which need to be initialized at the start of the program, introducing significant latency at startup.
It's also complexity - added more than necessary, and at a faster pace than hardware can keep up with.
Take font rendering: in early machines, fonts were small bitmaps (often 8x8 pixels, 1 bit/pixel), hardcoded in ROM. As screen resolutions grew (and varied between devices), OSes stored fonts in different sizes. Later: scalable fonts, chosen from a selection of styles / font families, rendered to sub-pixel accuracy, sub-pixel configuration adjustable to match hw construction of the display panel.
Yeah this is very flexible & can produce good-looking fonts (if set up correctly), which scale nicely when zooming in or out.
But it also makes rendering each single character a lot more complex. And thus eats a lot more CPU, RAM & storage than an 8x8 fixed-size, 1bpp font.
Or the must-insert-network-request-everywhere bs. No, I don't need the search engine to start searching & providing suggestions after I've typed 1 character & haven't hit "search" yet.
There are many examples like the above, I won't elaborate.
Some of that complexity is necessary. Some of it isn't, but is lightweight & very useful. But much of it is just a pile of unneeded crap of dubious usefulness (if any).
Imho, software development really should return to 1st principles. Start with a minimum viable product, that only has the absolute necessary functionality relevant to end-users. Don't even bother to include anything other than the absolute minimum. Optimise the heck out of that, and presto: v1.0 is done. Go from there.
> But it also makes rendering each single character a lot more complex.
Not millions of times more complex.
Except for some outliers that mess up everything (like anything from Microsoft), almost all of the increased latency between keypress and character rendering we see on modern computers comes from optimizing for modularity and generalization instead of specialized code for handling the keyboard.
Not even our hardware reacts fast enough to give you the latency computers had in the 90s.
I suspect (in a common pattern) the main thing that blocks making performance a priority is that it equates to reordering various ranks among developers and product managers.
When performance supersedes "more features", developers are gatekeepers and manager initiatives can be re-examined. The "solution" is to make performance a non-priority and paint complainers as stale and out-of-fashion.
Probably more common is that software isn't developed with end-users as #1 priority; it's developed to advance business goals.
Those 2 goals align about as often as planets in our solar system.
To some degree this is true for open source software as well. Developers may choose to work on features they find interesting (or projects done in a language they feel comfortable with), vs. looking at user experience first. Never mind that optimizing UX is hard (& fuzzy) as it is.
Or all the work done on libre software to cater to various corporate interests. As opposed to power users fixing & improving things for their needs.
Or you can have technical managers that understand what they are managing.
So is there any hope for improvement?
Personally I've decided to just vote with my feet and avoid using poorly performing software as much as possible, but that's frequently impractical or not worth the cost of missing out. I also doubt this will change the behavior of companies; as we see with, for example, TV advertising, they give no shits about degrading the consumer experience over the long term.
There doesn't seem to be much hope on the technical side either, as software complexity is only increasing. Maybe longer term AI has a role to play in auto-optimization?
Are mobile devices slow/unresponsive? I haven't experienced that unless I realllllly cheap out, or after 4 years of OS updates on Apple devices for some reason. Androids seem OK in this regard.
I switched from android back to iOS last year. There seems to be some sort of inherent latency in either android or Samsung’s UI that causes the UI thread to lag behind your inputs by a noticeable amount, and for the UI thread to block app actions in many cases.
Things like summoning a keyboard causing my 120hz galaxy phone to drop to sub-10fps playing the intro animation for GBoard were just rampant. All non-existent in iOS.
FWIW I updated my phone to a relatively budget samsung recently, and had a similar noticeable delay bringing up the keyboard; installing 'simple keyboard' from F-droid seems to have helped. I wouldn't be surprised if it is missing features compared to the samsung/google ones, where their absence will annoy a power user, but for whatever subset I use it works fine and doesn't appear as though my phone hangs.
I begin to wonder if all the commenters in this thread have compromised devices. I'm on a 5 year old Samsung (the model is five years old; I bought it new six months ago) - my Linux machines are fast (gentoo) and my windows 10 and 11 machines are fast. My kid's computer is an i3 7350k and he plays roblox, Minecraft, teardown on it with no issues. That computer is a couple years older than he is at 9-10 years old. That computer's twin is my nas backup with 10gbe running windows - the drive array refused to work at any decent speed on Linux and I didn't want jbod, I wanted RAID.
Some things are slow, like discord on windows takes ~12 seconds before it starts to pop in ui elements after you double click. My main computer is beefy, though, 32 thread 128GB; but no NVMe. Sata spindle and SSDs. But I have Ryzen 3600s that run windows 11 fine.
Did you watch the videos linked in the article?
> I'm on a 5 year old Samsung (the model is, I bought it new six months ago)
How quick is the Share menu on your samsung? On mine, it takes about 3 seconds of repainting itself before it settles down. On iOS there's about a 100ms pause and the drawer pops up, fully populated. I found [0] which is a perfect example of this sort of bloat.
> My main computer is beefy, though, 32 thread 128GB; but no NVMe. Sata spindle and SSDs. But I have Ryzen 3600s that run windows 11 fine.
My main computer is a 24 core i9 with 64GB ram on NVMe. It runs windows fine. But, I saw exactly the same behaviour out of the box on this machine (and on the machine I replaced) as the linked article shows. I can compile, play games, do AV transcoding. But using apps like slack or discord is like walking through molasses, and even launching lightweight apps like Windows Terminal and Notepad has a noticeable delay from button press to the window appearing on screen. There's just something a bit broken about it.
[0] https://www.androidpolice.com/2018/05/05/google-please-fix-a...
But but ... Samsung has bigger numbers in the spec sheet! It must be faster!
I have a top end Pixel phone running stock Android and encounter latency all the time. App first-start time is usually a couple of seconds. Switching back to an open app is fast in many cases but some still have to do a refresh (which I suspect involves communication to servers, although that's still not much of an excuse).
Similarly (linked in a footnote): http://danluu.com/input-lag/
I’ve recently noticed this in an especially well-used app on my iPhone 14, with a stupid animation that regularly annoys me.
Google Authenticator’s filter box: when you tap it, there is a very noticeable delay between tapping the filter box and the keyboard showing.
And what makes it worse is that if you switch away from the app, it auto clears the filter.
This isn’t a complex app and it’s slow at doing a use case easily performed millions of times a day.
"Makes me sick, motherfucker, how far we done fell." --Det. William 'Bunk' Moreland
Too many developers relying on bloatware, not enough “implement it yourself”, because in reality every hash map solves a unique problem that a plain hash map is not necessarily suited for. Embrace NIH
> How does this all happen? It’s easy to say “Bloat!”
Bloat!
It was pretty easy indeed.
Startup time has always been a bit of a sketchy metric as modern OSs and languages do a lot of processing on application launch. Some scan for viruses. On Macs you have checks for x86 vs Apple silicon and loading of Rosetta if required. Managed runtime environments have various JITs that get invoked. And apps are now huge webs of dependencies, so lots of dynamically linked code being loaded. A better metric is their performance once everything is in memory. That said, I still think we’re doing poorly at that metric as well. As resources have ballooned over the last decade, we’ve become lazy and we just don’t care about writing tight code.
Why not both?
The applications are launched at startup because they are slow to start; they are slow to start because of the JIT runtime/deps/DLLs.
At the end of the day, end users pay the cost of developer convenience (JIT and deps most of the time, even though there are some cases where dynamic linking is alright) because developers don't ship native apps.
Offloading everything at startup is a symptom IMO.
Replying to your specific point about virus scans. For some (naive) reason, I expect them to run a single time against a binary app that never changes. So in theory it shouldn't even be a problem, but reality says otherwise.