This is well expressed; I've never heard anyone present this perspective before.
This framework neatly explains why the experiences I've had working for big companies have been consistently dismal, and why that's likely to be true at any big company regardless of their culture or the nature of their business: the kind of high-complexity, marginal-profit-seeking work described here is completely at odds with the values which motivate me, and with the goals I had when I went to work for those places.
(I spent my entire stint at Microsoft working on one single "wicked feature", initially described as a simple, six-month starter project; it grew and grew, ultimately pulling in ten or fifteen people across half a dozen teams and putting them through a brutal sync-meeting schedule for literally years. The whole thing amounted to a single checkbox in some obscure settings dialog, meant to be clicked by a handful of institutional customers who wanted to save some disk space while porting their rickety piles of internal scripts from one ancient legacy platform to a newer legacy platform: for that, we made an unholy mess of permanent, irreducible complexity all over a couple of different codebases. I found it impossible to believe that the value of this feature could ever justify the cost of its creation, or its maintenance, and quit that job feeling that all I had accomplished there was to make things a little bit worse.)
I think Microsoft's "value proposition" is keeping legacy stuff running, even if the cost is enormous.
Where I work we sometimes use Access '98 because we have some old Access '98 databases. Access '98 installs just fine on Windows 11; even the stuff it does to take over the desktop still works, though it looks a little funny because borderless windows behave differently now. Clippy works great!
You are not going to run a binary from '98 on a Linux system, and a '98 binary on macOS was built for a different OS running on a different instruction set!
Windows has a system that recognizes particular applications and makes libraries behave the old way, even if they are buggy, to keep them working:
https://www.rockpapershotgun.com/windows-95-had-special-code...
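The idea behind that mechanism can be sketched as a lookup table keyed on executable name, mapping known applications to wrappers that restore old (even deliberately buggy) behavior. Everything below is invented for illustration; the real Windows implementation lives in the AppCompat shim database (apphelp.dll and friends), not in Python, and the shim names here are only loosely modeled on it:

```python
# Toy sketch of an application-compatibility shim database. A known
# executable name selects wrappers that make APIs behave the old way.
SHIM_DB = {
    # The linked article describes Windows 95 shipping a special heap
    # mode just to keep SimCity's use-after-free bug working.
    "simcity.exe": ["EmulateHeap"],
    "msaccess.exe": ["VersionLie95"],
}

def get_version():
    """The 'modern' API: reports the current OS version."""
    return (11, 0)

def apply_shims(exe_name, api):
    """Return a possibly-wrapped copy of the API table for this app."""
    shims = SHIM_DB.get(exe_name.lower(), [])
    api = dict(api)
    if "VersionLie95" in shims:
        # Lie to the app: claim to be the ancient OS it expects.
        api["get_version"] = lambda: (4, 0)
    return api

api = apply_shims("MSACCESS.EXE", {"get_version": get_version})
print(api["get_version"]())  # the shimmed app sees the old version
```

The point of the design is that the fix lives outside the application: the app's binary is untouched, and the OS quietly routes it through the old behavior.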
Index case: Kubernetes
It took Google a year (and presumably many engineers) to rewrite the database behind Freebase so they could run it inside their hyperscale system.
It took me a month (after a long time of thinking about it) to 'crack the code' of Freebase to turn it into an RDF file with perfect referential integrity that you could query with SPARQL.
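The "perfect referential integrity" property above can be checked mechanically: every entity id that appears as the object of a triple must also appear as the subject of at least one triple, so no link dangles. A minimal sketch, with ids in the style of Freebase MIDs ("/m/...") and data that is entirely made up:

```python
# Check that no triple points at an entity id that is never defined.
def dangling_references(triples):
    subjects = {s for s, _, _ in triples}
    return {o for _, _, o in triples
            if o.startswith("/m/") and o not in subjects}

triples = [
    ("/m/alice", "/type/object/name", "Alice"),
    ("/m/alice", "/people/person/knows", "/m/bob"),
    ("/m/bob", "/type/object/name", "Bob"),
]
assert dangling_references(triples) == set()  # every referenced id is defined
```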
Google's case is harder because (1) anything is harder to do at hyperscale; (2) they settled on answers for how to do things at hyperscale that were the best available at the time, but had they had the chance to chew on the problem for a long time and try a number of different ways to build the foundation [1], they could have developed something that made developers productive; and (3) Google doesn't care, because they're so profitable: at very large scale you can use devs wastefully and still pay them a lot.
[1] My really productive month was built on a long stretch of trying things that didn't work, pursued at a low level of intensity.
The time to limit features was before the manual processes were automated.
The claim that features "make money", i.e. that they earn more than they cost, depends entirely on the distinction between capital and operating budgets.