I found this talk to be great. It goes through the history of OOP and how some of the ideas for the more modern ECS were embedded in the culture at the formation of OOP in the 1960s to 1980s but somehow weren't adopted.
It was pretty clear, even 20 years ago, that OOP had major problems in terms of what Casey Muratori now calls "hierarchical encapsulation" of problems.
One thing that really jumped out at me was his quote [0]:
> I think when you're designing new things, you should focus on the hardest stuff. ... we can always then take that and scale it down ... but it's almost impossible to take something that solves simple problems and scale it up into something that solves hard [problems]
I understand the context but this, in general, is abysmally bad advice. I'm not sure about language design or system architecture but this is almost universally not true for any mathematical or algorithmic pursuit.
[0] https://www.youtube.com/watch?v=wo84LFzx5nI&t=8284s
> I'm not sure about language design or system architecture but this is almost universally not true for any mathematical or algorithmic pursuit.
I don't agree. While starting with the simplest case and expanding out is a valid problem-solving technique, it is also often the case in mathematics that we approach a problem by solving a more general problem and getting our solution as a special case. It's a bit paradoxical, but a problem that would be completely intractable if attacked directly can be trivial if approached with a sufficiently powerful abstraction. And our problem-solving abilities grow with our toolbox of ever more powerful and general abstractions.
Also, it's a general principle in engineering that the initial design decisions, the assumptions underlying everything else, are themselves the least expensive part of the process but have an outsized influence on the entire rest of the project. The civil engineer who, halfway through the construction of his bridge, discovers a flaw in his design is having a very bad day (and likely year). With software things are more flexible, so we can build our solution incrementally from a simpler case and swap bits out as our understanding of the problem changes; but even there, if we discover there is something wrong with our fundamental architectural decisions, with how we model the problem domain, we can't fix it just by rewriting some modules. That's something that can only be fixed by a complete rewrite, possibly even in a different language.
So while I don't agree with your absolute statement in general, I think it is especially wrong given the context of language design and system architecture. Those are precisely the kinds of areas where it's really important that you consider all the possible things you might want to do, and make sure you're not making some false assumption that will massively screw you over at some later date.
I thought it was very interesting how Alan Kay and Bjarne Stroustrup may have been applying wisdom from their old fields of expertise, and how that affected their philosophy.
There is an appeal to building complexity through emergence, where you design several small self-contained pieces that have rich interactions with each other, and through those rich interactions you can accomplish more complex things. It's how the universe seems to work. But I also think that the kinds of tools we have make designing things like this largely impossible. Emergence tends to produce things that we don't expect, and for precise computation and engineering, it feels like we are not close to accomplishing this.
So the idea that we need a sense of 'omniscience' for designing programs on individual systems feels like it is the right way to go.
Waaait, but I thought OOP was carefully crafted to "scale with big teams", and that's why it works so... ahem... "well". Turns out it was just memetic spillover from the creators' previous work?
And we absolutely needed 30-45 minutes to learn that that wasn't why it was created. The first part is a history of OOP languages to debunk something I'd never heard even claimed until I watched this video. The history was interesting, but also wrong in a few places. It was amusing to hear him talk about Arpanet being used in the 90s, though.
> but also wrong in a few places
Would you be so kind as to elaborate how/where? (Other than the "arpanet in the 90s")
If I get bored with life I'll rewatch and take notes, that was the main one that made me chuckle and stuck with me. It was details around Lisp and a couple other things that were outside his explicit research scope (he specifically researched C++, Smalltalk, and Simula per his blog). Like claiming that everything in Lisp was based on lists (even in 1960 that wasn't true).
I'd just expect someone who takes 30+ minutes to debunk a claim that doesn't matter, and that most people have never heard, to be more careful about getting the details correct.
On the first part of the video, to be more constructive: it does not matter why a language or tool was made. The claim he debunks is that OO languages were made to be good for working with teams. Whether they were made for that is immaterial, and no one needs 30 minutes of mostly historically correct video to get to The Truth(tm) of the matter. What's more interesting, and what he never bothered to get into, is whether OO is actually good for working with teams. (I can go either way; I've dealt with enough garbage OO programs to know that OO itself does not help things, but enough good OO programs to know that it can help things.)
To anyone who has not yet watched the video, the second half is interesting, the first half is mostly a waste of time.
> Whether it was made for that is immaterial
This is in the talk; he explicitly says that it's often brought up that "OOP is made for large teams", "you're not using it as intended", "it's not made to model your domain hierarchy", etc. The first 30 minutes is his reaction to that, disproving it.
Whether that's true or interesting is a different question, but it's explicitly stated in the video, at the start, before he goes into the history.
I dunno man even just learning that Bjarne thought Simula's classes were cool specifically because of the domain of what he was working on—and learning that he ran into the same “unity build” problem that anyone who's worked on a large C++ project has encountered, years before literally anyone else in the world had—was fascinating, something I'd never heard before, and very interesting context in the broader scope of “OOP.”
Wow, definitely much more than I expected from the title. Really enjoyed the surprise mini-talk about the origin of entity-component-system in the Q&A section as well.
Great video, I knew like 0.1% of those things before watching.
I know a two-and-a-half hour video is a hard sell for most people, but I found this talk to be absolutely fascinating. It's not yet another tired “let's all shit on OOP just for the sake of it”-type thing—instead, it's basically nothing but solid historical information (presented with evidence!) as to how “OOP”, as we now know it, came to be. The specific context in which these various decisions were made is something that nobody ever cares to teach, such that it's basically long-since forgotten today—yet here it is, in an easily-digestible format!
Amusingly, an hour into the video he complains about information being hidden behind hours of video. It would make a better paper, but apparently he hasn't written one. It would probably be a 20-30 minute read instead of 2.5 hours (or 1.25, since I'm running it at double speed).
I don't know if it matters to you, but the "video" is just a recording of a conference talk. It wasn't made with the sole intention of making a "video". I agree a text format version of the same information would be useful.
To be fair, though, the video has an uncommonly high (by modern standards!) information density/signal-to-noise ratio—there's minimal filler, and it's very straightforward and to-the-point with regards to its subject matter!