Liskov substitution will not save you. One of the worst cases of inheritance I've ever seen was in a hierarchy that was a perfect Liskov fit -- an even better fit than traditional examples like "a JSON parser is a parser". See https://news.ycombinator.com/item?id=42512629.
The fundamental problem with inheritance, and one not shared by any other kind of polymorphism, is that you can make both upcalls and downcalls within the same hierarchy. No one should ever use inheritance in any long-term production use case without some way of enforcing strict discipline, ensuring that calls can only go one way -- up or down, but not both. I don't know to what extent tooling to enforce this discipline exists.
> No one should ever use inheritance in any long-term production use case without some way of enforcing strict discipline, ensuring that calls can only go one way -- up or down, but not both. I don't know to what extent tooling to enforce this discipline exists.
Disagree with your first part. Inheritance used to express subtyping is different from inheritance used for code reuse, which is different again from inheritance used to implement a framework's skeleton structure. You have to disambiguate them carefully when using it. See the article linked to by user "Fannon" here - https://news.ycombinator.com/item?id=42789466
As for tooling, you have to enforce the contract using pre/post/inv clauses following Meyer's DbC and also explicit documentation.
Thanks for that article -- I have to agree with Jacob Zimmerman in the comments to the article:
> I don’t get it. I read one part of the article, think I get it, then I read a different part and what I read there doesn’t jive with what I thought I understood. And I can’t figure out how to reconcile them.
---
> As for tooling, you have to enforce the contract using pre/post/inv clauses following Meyer's DbC and also explicit documentation.
I think we call them asserts and type-level state machines :)
I don't really believe in documentation as enough of a barrier to doing worse things. It must, at a structural level, be easier to do better things.
There is no confusion if you understand that inheritance is just a "mechanism" to express three (and maybe more) different kinds of "policies". A single class may implement any or all of them, in which case it becomes important to disambiguate which methods/functions express which "policies". There is an abstract concept and a syntactical expression of that concept, and the two need to be kept clear in one's mind.
Again, asserts are just the "mechanism" to express pre/post/inv "policies" in code. Without having an understanding of pre/post/inv from the pov of Hoare Logic, merely using asserts will not give you much benefit. Documentation is quite important here.
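For what it's worth, a minimal Kotlin sketch (class and names mine, not from OOSC2) of pre/post/inv as executable checks -- require() for preconditions, check() for postconditions and invariants:

class Account(initial: Long) {
    var balance = initial
        private set

    private fun invariant() = check(balance >= 0) { "inv: balance >= 0" }

    fun withdraw(amount: Long) {
        require(amount > 0) { "pre: amount > 0" }              // caller's obligation
        require(amount <= balance) { "pre: sufficient funds" } // caller's obligation
        balance -= amount
        check(balance >= 0) { "post: no overdraft" }           // implementor's promise
        invariant()
    }
}

The failure mode even tells you who broke the contract: a failed require() blames the caller, a failed check() blames the implementation.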
Both the above can be seen in the design of the Eiffel language, where they are integrated into proper syntactical mechanisms. Once you understand the concepts here, you can apply them explicitly even if your language does not support the needed syntax (e.g. contracts). See Bertrand Meyer's OOSC2 for details - https://bertrandmeyer.com/oosc2/ - specifically "Design-by-Contract (DbC)", "Inheritance Techniques" and "Using Inheritance well".
I agree that inheritance does too many things and has too many degrees of flexibility. I think other kinds of polymorphism like typeclasses don't have this issue, and are better due to that. Automation is highly preferable to documentation.
I think the discussion would benefit from you concretely working through an example. What change(s) are you proposing to how inheritance is done in C++ or Java, and how would they prevent spaghetti code and nested upcalls/downcalls?
I am not sure that you understood what I wrote. Inheritance's flexibility is its very strength, one that allows you to express different concepts elegantly. Also, no amount of automation/tooling/etc. can substitute for documentation explaining the intent behind the code.
The main thing I would like to see in C++/Java/whatever is support for "Design-by-Contract" (DbC) similar to that given in Eiffel. There is already a proposal for C++; see this recent HN discussion (https://news.ycombinator.com/item?id=42131473). Basically this is a way to apply Hoare Logic to functions/methods directly in the implementation language itself. Now use this across the types/classes in an inheritance hierarchy and you can enforce the semantics that you want to express.
Regarding a concrete example: I have already pointed you to Bertrand Meyer's OOSC2 and three specific chapters to read; they walk you through proper examples with explanations. Additionally, see his Applying Design by Contract paper (pdf) here - https://se.inf.ethz.ch/~meyer/publications/computer/contract.... If you would like to see complete application code using a C++ OO framework, I suggest creating a sample MFC app using the Visual Studio C++ wizard and just looking at the generated code without adding anything of your own. It uses a Document/View architecture (a variation of MVC) where your app-specific classes derive from classes provided by the MFC framework. The framework invokes your derived methods (i.e. downcalls) which can, as needed, call back to the base's methods (i.e. upcalls). There is a strong coupling between the framework classes and your app-specific ones by design. You can see how different usages of inheritance are implemented to give a powerful app framework; see the documentation starting here - https://learn.microsoft.com/en-us/cpp/mfc/document-view-arch...
Sorry I don't think your response really gets to the point. I'm aware of various techniques like contracts, but you're speaking in generalities rather than specifics. So yes, I haven't quite understood what you meant.
This is a common frustration I have with OOP discourse, it tends to be really up-in-the-air and not grounded in concrete specifics. (The article you linked also has this issue.) Meanwhile, users suffer in ways that just don't happen with typeclass-based polymorphism, and none of this discourse is required in my world. So why should I not recommend everyone use typeclass-based polymorphism?
> I am not sure that you understood what I wrote. Inheritance's flexibility is its very strength
No, being too flexible is a weakness, not a strength. At scale, rigorous discipline enforced by tooling is required.
> Also, no amount of automation/tooling/etc. can substitute for documentation explaining the intent behind the code.
Yes, of course documentation is required. What I'm saying is that if it can be automated, it should be, and that relying on documentation alone is foolish.
In particular, invariants like "no downcalls" or "no upcalls" should 100% be enforced by automation. Documentation is not enough at scale and under pressure.
> I suggest creating a sample MFC app using the Visual Studio C++ wizard
I'd rather not?
> The framework invokes your derived methods (i.e. downcalls) which can, as needed, call back to the base's methods (i.e. upcalls).
This sounds really bad to me at scale and under pressure.
I pointed you to a specific book, i.e. OOSC2, and three specific chapters in it (to start with) which explain the concepts well, with the examples you asked for. How much more specific can one get? If you already know contracts, then it should be easy to translate the concepts to any language of your choice. Meyer provides a thorough rationale and is extremely detailed in his examples. Furthermore, I also pointed you to one of the largest and commercially most successful class libraries and application frameworks (i.e. MFC) where you can see classic OOD/OOP (including upcalls/downcalls) in action; and yet you say I am "speaking in generalities"! It seems you are not willing to read/study, but expect a couple of paragraphs to explain everything, which is not going to happen.
E.g.: Base.method() has {pre1} and {post1} as contracts. Derived.method() has {pre2} and {post2} as contracts. What should the relationship be between {pre2} and {pre1}, and between {post2} and {post1}, to enforce proper subtyping?
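To make it concrete: the standard DbC/LSP answer is that {pre2} may only be weaker than (implied by) {pre1}, and {post2} may only be stronger than (implying) {post1}. A Kotlin sketch with runtime checks (toy classes mine):

open class Base {
    open fun method(x: Int): Int {
        require(x in 1..10)                          // pre1
        return (x * 2).also { check(it > 0) }        // post1: result > 0
    }
}

class Derived : Base() {
    override fun method(x: Int): Int {
        require(x in 1..100)                         // pre2: weaker -- accepts every x that pre1 accepts
        return (x * 2).also { check(it in 2..200) }  // post2: stronger -- implies post1
    }
}

Any caller written against Base's contract keeps working when handed a Derived.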
> This is a common frustration I have with OOP discourse, it tends to be really up-in-the-air and not grounded in concrete specifics
It is not up-in-the-air when ideas and specific books by authors like Bertrand Meyer and Barbara Liskov (both researchers and practitioners) are being pointed out. Trying to compress their concepts into a couple of paragraphs would invariably miss important nuances and lead to misinterpretations (the bane of most HN discussions based on trivial articles/blog posts). Hence it is better that they are studied directly, and then we can have a discussion if you would like.
> Meanwhile, users suffer in ways that just don't happen with typeclass-based polymorphism, and none of this discourse is required in my world. So why should I not recommend everyone use typeclass-based polymorphism?
Sure, there are other types of polymorphism which can be better in certain scenarios. But that is not under discussion here; we are talking about "traditional" dynamic runtime-dispatch polymorphism, which is far easier to understand and implement even in small languages like C.
> No, being too flexible is a weakness, not a strength. At scale, rigorous discipline enforced by tooling is required.
Flexibility increases your "design space" and hence is never a weakness. Rigorous discipline is needed throughout development, but tooling can only do so much.
> In particular, invariants like "no downcalls" or "no upcalls" should 100% be enforced by automation.
This depends on the concept you are trying to express and cannot be the same in all scenarios (except for direct ones like "interface implementation").
> I'd rather not?
Well, you did ask for a concrete example, and I showed you MFC apps.
> This sounds really bad to me at scale and under pressure.
Saying something is "bad" or "spaghetti" without understanding the design concepts behind the implementation is wrong. MFC is one of the largest and most successful application frameworks in the industry and has proven itself in all sorts of applications at scale; studying it teaches one lots of OOD/OOP techniques (good/bad/ugly) needed in real-life industry apps.
> Flexibility increases your "design space" and hence is never a weakness.
This is just objectively false. Constraints liberate and liberties constrain.
> Rigorous discipline is needed throughout development, but tooling can only do so much.
Have you used Rust? I would recommend building some kind of non-trivial command line tool with it -- you will quickly see how low your expectations for tooling have been.
> E.g.: Base.method() has {pre1} and {post1} as contracts. Derived.method() has {pre2} and {post2} as contracts. What should the relationship be between {pre2} and {pre1}, and between {post2} and {post1}, to enforce proper subtyping?
As someone who understands variance etc quite well, my answer is to simply not have subtypes. You absolutely do not need inheritance subtyping to build production software. (Rust has subtyping and variance only for lifetime parameters, and that's confusing enough.)
> Sure, there are other types of polymorphism which can be better in certain scenarios. But that is not under discussion here; we are talking about "traditional" dynamic runtime-dispatch polymorphism, which is far easier to understand and implement even in small languages like C.
I use traits for runtime dispatch in Rust all the time?
Inheritance is only traditional because C++ and Java made it so. I think it's been a colossal mistake.
> This is just objectively false. Constraints liberate and liberties constrain.
You are completely wrong here. Flexibility by definition means an increase in the allowed degrees of freedom along one or more axes, which in turn allows one to mix and match feature sets to express more design concepts (e.g. multi-paradigm). Your second line is a silly slogan which presumably means constraints make the job of picking one choice from a set easier because less thought is needed. It is applicable to inexperienced developers but certainly not to experienced ones, who need all the flexibility that a language can give.
> As someone who understands variance etc quite well, my answer is to simply not have subtypes. You absolutely do not need inheritance subtyping to build production software. (Rust has subtyping and variance only for lifetime parameters, and that's confusing enough.)
You have not understood the example. Variance is used to constrain types, but pre/post are predicates relating subsets of values from the types; this constrains the state space (the Cartesian product of the types) itself. Second, your statement not to use subtyping is silly. Subtype relationships arise naturally amongst concepts in any non-trivial system, which you can group in a hierarchy based on commonality (towards the top) and variability (towards the bottom). Inheritance is just a direct way of expressing it.
> Inheritance is only traditional because C++ and Java made it so. I think it's been a colossal mistake.
Statements like these betray an ignorance of the subject. I have already shown that inheritance can be used for different purposes, of which subtyping in the LSP sense is the one everybody agrees on. The other uses need experience and discipline but are very powerful when done clearly. Inheritance was first introduced in Simula 67, based on an idea presented by Tony Hoare in 1966. C++ popularized it and others simply copied it. See Wikipedia for more details - https://en.wikipedia.org/wiki/Inheritance_(object-oriented_p...
PS: This discussion reminded me of "The Blub Paradox" by Paul Graham (https://paulgraham.com/avg.html), which I think most Rust evangelicals suffer from. Just from my cursory look at Rust, I have seen nothing compelling to make me want to study it in depth over my preferred language of C++. With the addition of more features into "Modern C++" to support functional programming, it has become even more flexible and powerful, albeit with a steeper learning curve.
> Your second line is a silly slogan which presumably means constraints make the job of picking one choice from a set easier because less thought is needed
That is absolutely not what it means, and it is not a silly slogan -- it is a basic law of reality.
As an example, if your build system is monadic (build nodes can add new nodes dynamically) then the number of nodes in it is not known upfront. If the build system is not monadic, the number of nodes is determined at the start of the build process.
As another example, the constraints that Rust sets around & and &mut mean that the compiler can do really aggressive noalias optimizations that no one would even dream about doing in C or C++.
> It is applicable to inexperienced developers but certainly not to experienced ones, who need all the flexibility that a language can give.
I'm quite an experienced developer, and I've tended to use more constrained languages over time. I love the fact that Rust constrains me by not having uncoordinated shared mutable state.
> This discussion reminded me of "The Blub Paradox" by Paul Graham (https://paulgraham.com/avg.html), which I think most Rust evangelicals suffer from
At Oxide we use Rust and would never have been able to achieve this level of rigor in C++. Hell, try writing anything like my tool https://nexte.st/ in C++ (be sure to get the signal handling exactly right). Rust tooling is at a completely different quality level from earlier-generation languages.
Again, these are all your preferences/opinions, which you are stating as some sort of acknowledged truth; that is most definitely not the case. While there are many good points about Rust, it is quite over-hyped with evangelical zeal, which is why a lot of software engineers are turned off of it. Graydon Hoare himself has said he took the good ideas from old languages and put them together. That in itself is obviously not a bad thing (imo, the industry killed research in programming languages/OS from the mid-nineties, when Java was marketed up the wazoo by Sun throwing ungodly amounts of money at it), but the "saviour complex" being pushed is a strict no-no with experienced C/C++ developers.
I don't think there really is any reasonable way to disagree with "constraints liberate, liberties constrain", sorry. Anyone who has spent any amount of time with algebraic structures in mathematics will grasp this intuitively, as will anyone who has written code in a type-safe style. It really is a basic law of nature, similar to other basic principles like Bayes' law.
I only brought in Rust because it does polymorphism in a non-OO style.
I've never seen a case where inheritance was superior to composition with a shared interface. Worst case with composition, a method just forwards to the injected class's method directly. And it really shines when you apply the Liskov substitution principle.
I think Python's pattern of using inheritance for mixins is probably a good candidate. But Python does have a culture of "inheritance is only for sharing code; user beware if you try to use it for other things". Python's ABC classes for collections are also a good use of inheritance: inherit from MutableMapping, implement the required methods, boom, you get all the other mapping methods for free.
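The same trick exists in Kotlin: extend AbstractMutableMap, implement the two required members (entries and put), and the rest of the MutableMap API comes for free. A toy sketch (class mine):

class CountingMap<K, V> : AbstractMutableMap<K, V>() {
    private val backing = HashMap<K, V>()
    var puts = 0
        private set
    override val entries: MutableSet<MutableMap.MutableEntry<K, V>>
        get() = backing.entries
    override fun put(key: K, value: V): V? { puts++; return backing.put(key, value) }
    // get, containsKey, remove, iteration, etc. are all inherited
}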
Pydantic / dataclass inheritance is elegant for building up different collections of fields. That being said it does use codegen / metaclass hackery to do it.
I think values should generally only be combined into a structure at the end (no half-formed structures with null data, no calls on methods that work on half-formed structures).
Destructors are more complicated; there are definitely times when you have to violate invariants that otherwise always hold.
Here to recommend this article; it really helped me understand inheritance better. Liskov substitution is just one aspect/type of inheritance and may conflict with others.
Again, kudos to Uncle Bob for reminding me about the importance of good software architecture in his classic Clean Architecture! That book is my primary inspiration for this series. Without clean architecture, we’ll all be building firmware (my paraphrased summary).
What does clean architecture have to do with building firmware or not? Plenty of programmers make a living building firmware. Just because they don't need/can't/want to apply clean architecture in their code, doesn't mean they are inferior to those who do.
Furthermore, after a snippet which I suppose is in Kotlin, there is this:
> While mathematically a square is a rectangle, in terms of behavior substitutability, it isn't. The Square class violates LSP because it changes the behavior that clients of Rectangle expect.
> Instead of inheritance, we can use composition and interfaces
The Liskov principle is about one of the three types of polymorphism (so far): subtyping polymorphism, which is about inheritance. Composition is _not_ subtyping. And interfaces (be they Java's or Kotlin's) are another type of polymorphism: ad hoc. Even Wikipedia[1] has the correct definition:
> Ad hoc polymorphism: defines a common interface for an arbitrary set of individually specified types.
Therefore, the examples using interfaces aren't compliant with LSP either.
I understand the good intentions behind the article, but it left much to be desired. Proper research, to at least fix the glaring errors, should have been done beforehand.
I’m in the middle of reading Clean Architecture right now. The square/rectangle example is directly from the book.
The firmware statement is a paraphrase of an argument made (differently) in the book: software is called soft because it's easy to change, while firmware is harder to change because of its tight coupling and dependencies (to the hardware). Software that is hard to change due to tight coupling and dependencies could almost be considered firmware -- just as brand-new code without tests can almost be considered legacy.
Like most articles on "inheritance", this one is clueless about providing any real meaning/understanding. People take the soundbites (e.g. Uncle Bob's SOLID) provided as mnemonics to be the end-all, don't fully understand the nuances, and then usually arrive at a wrong conclusion.
LSP (https://en.wikipedia.org/wiki/Liskov_substitution_principle) has to do with behavioural subtyping guaranteeing semantic interoperability between types in a hierarchy. It involves not just the syntax of function signatures but their semantic meaning, involving variance/invariance/covariance/contravariance and guarantees expressed using an extension of Hoare Logic, i.e. preconditions/postconditions/invariants (derived from Meyer's DbC). Without enforcing the latter (which is generally done via documentation, since there is no syntax for expressing pre/post/inv directly in most languages), the former is incomplete, so the complete contract is easily missed/forgotten, leading to the mistaken belief that "inheritance is bad". The LSP Wikipedia page links to all the concepts, the original papers and more for further clarification.
Finally, see Barbara Liskov's own book (with John Guttag), Program Development in Java: Abstraction, Specification, and Object-Oriented Design, for a "correct approach" to OOP. Note that Java is just used as an example language; the principles are language independent.
If I remember correctly, Liskov didn't talk about inheritance but subtyping in a more general way. Java, C++ and other, especially statically typed, compiled languages often use inheritance to model subtyping but Liskov/Wing weren't making any statements about inheritance specifically.
This is correct. I read her paper closely. One example I give is how SICP provides two implementations for complex numbers[1], the rectangular form and the polar form.
(make-rectangular (real-part z) (imag-part z))
and
(make-polar (magnitude z) (angle z))
then on page 138 provides this interface that both satisfy
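(real-part z) (imag-part z) (magnitude z) (angle z)

i.e. the four generic selectors already used above, each of which works on either representation.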
> use inheritance to model subtyping but Liskov/Wing weren't making any statements about inheritance specifically.
Right. Inheritance is just one mechanism to realize Subtyping. When done with proper contract guarantees (i.e. pre/post/inv clauses) it is a very powerful way to express semantic relationships through code reuse.
I don't feel the rectangle/square example is valid, given that the two alternatives follow different designs - there's no Shape base class in the inheritance example. Moreover, I don't think that switching from a base (abstract) class to an interface is enough on its own to call it composition.
The two issues the article mentions have, imho, less to do with the LSP itself and more with the limitations that different programming languages have when it comes to defining contracts through interfaces (not the same thing), like the lack of exception specs or non-nullability enforcement.
What if instead of the Rectangle class we had ReadonlyRectangle and Rectangle? Square could then inherit from ReadonlyRectangle, so code expecting only to read some properties and not write them could accept Square objects as ReadonlyRectangle. Alternatively, if we really want to have only Square and Rectangle classes, there could be some language feature such that whenever you want to cast Square to Rectangle you get a "const Rectangle" (const as in C++), so again we would only be allowed to use the "safe" subset of object methods.
I think what you mean is that if a Square that is also a Rectangle can't be made to be non-square, then inheritance works. Which, fair enough, but I think there's still other good reasons that inheritance is a bad approach. Interfaces (and traits) are still way better.
What is "ReadonlyRectangle"? Is it just an interface that only exposes read-only methods; or is it an explicit promise that the rectangle is immutable?
Perhaps we could go with even more classes. "Rectangle" and "Square" for the read-only methods, without any implications about mutability. "MutableRectangle" and "MutableSquare" for mutable implementations; "ImmutableRectangle" and "ImmutableSquare" for immutable implementations.
- "Rectangle" has methods "getWidth" and "getHeight".
- "Square" has a method "getSide".
- "ImmutableRectangle" implements "Rectangle".
- "ImmutableSquare" implements "Rectangle" and "ImmutableRectangle" and "Square".
- "MutableRectangle" implements "Rectangle"; has extra methods "setWidth" and "setHeight".
- "MutableSquare" implements "Rectangle" and "Square"; has an extra method "setSide".
...or you could just give up, and declare two classes "Square" and "Rectangle" (mutable) that either have nothing in common, or they just both extend some "Shape" class that can paint them and calculate the area.
I've written Rust full time for the last 8 years, being part of teams that have shipped several large, transformative, and basically correct projects. No OOP in sight. It's wonderful!
At work we sadly have to implement very OOP-y standards with all the bullshit that entails. 11 levels of inheritance with overrides all over the place sure isn't fun to deal with.
But for things I do myself I use objects and interfaces strictly as a tool to solve specific problems, not as the overall program structure.
Most of the time you just need to turn some bits into other bits with a function, no need to overcomplicate things.
The question of whether a square is a rectangle, or a rectangle is a square, is the sort of thing that comes from OOP brain-rot. They're just data, and their "isa" relationship is likely not even relevant to the problem you're actually trying to solve, like displaying them onscreen.
A "square" could be a function that makes a rectangle out of a single float. A "rectangle" could be a function that produces a polygon. The concepts need not be modeled as types or objects at all.
The problem I see inheritance solving is not having to distribute subtype-specific logic throughout your code base - functions can interact with objects in a generic manner and you get to keep subtype-specific code all in one spot. That's a win.
Inheritance isn't the only means for achieving this capability, though. You can also use interfaces and protocols. I prefer to use interfaces. If my class needs to implement an interface, then that's explicit: it implements the interface. I can use inheritance if the class really IS-A variant of another class, and it can use that base class in fulfilling its obligation to implement that interface. That's an implementation detail. But the fact that it has responsibility for implementing that interface is made explicit.
Yeah but that's what I mean by using it as a tool, you're not trying to model some weird taxonomy with your classes (like the square/rectangle situation), you're using a language feature to enable generic code and/or for code reuse, any real-world relationship between the concepts is irrelevant.
Inheritance is bad because it enforces a subtyping relationship alongside code reuse and data reuse. It is extremely rare that you want all 3 together, and even when you think you do, there's a rude awakening coming your way, especially if you did the "model real-world concepts as a class hierarchy" thing.
The object-relational mismatch is a weakness of the object side, not of the relational. Use the right tool for the job and forget about stupid programming paradigms.
Better to think of LSP as more of a gray scale than all or nothing. The more the APIs match, the more substitutability you gain.
Switching to composition has its advantages but you do lose all substitutability and often need to write forwarding methods that have to be kept in sync as the code evolves over time.
SOLID and clean code are not some universal bible that is followed everywhere; I spend a considerable amount of effort reasoning juniors and mid-levels out of some of the bad habits they get from following these principles blindly.
For example, the only reason DI became so popular is that you could not mock static in Java at the time. In the FB codebase DI was also used in PHP until they found a way to mock static, after which the DI framework was deprecated and codemods started coming in removing DI. There is literally nothing wrong with using a factory method or constructing what you need on demand. These days static can also be mocked in Java, and if you really think about it you see Spring Boot adds a lot of accidental complexity (but sure, it's convenient and well tested, so it's OK to use); concepts like beans and bean factories are not essential for solving any business problem.
Which brings me to the S in SOLID, which I think is probably one of the top 2 worst principles in software engineering (the no. 1 spot goes to DRY). Somehow it came from the early-2000s TDD crowd and the test pyramid; it makes sense if you embrace TDD, mocking, the test pyramid and unit tests as a good thing. In reality that style of software is really hard to understand: every problem is split into 1000 small pieces invoking each other in usually undefined ways, and no flow can be understood without building a mental model of the entire 1000-object spaghetti. The tests themselves mostly just end up setting up a bunch of mocks and then pretty much coupling the impl and the test at the method-call level; any change to the impl will cause the tests to break for the sole reason that a new method call was not mocked. After going through all this ceremony the tests are not even guaranteeing the thing will work at runtime, since the db, kafka or http was mocked out and all the filters, listeners and db validations were skipped. These days so-called integration tests with docker compose are a lot better (use the actual db or kafka, wiremock the http level); that way you have a reasonable chance of catching things like whether this mysql jdbc driver upgrade broke anything.
I have to mention DRY also; the amount of sins committed in the name of DRY by juniors is crazy. Similar-looking lines get moved into a common function/method/util all the time, and coupling is introduced between 2 previously independent parts of the system. As the code evolves and morphs into something different, the original function starts getting more args to behave one way in one case and another way in another; if it had been left as separate files, each could have evolved separately. I don't really know how to explain this better than: coupling should not be introduced to save a few lines of typing or boilerplate. In fact, any abstraction or indirection should only be introduced when it's really needed; the default mode should be copy/paste and no coupling (the person adding a cross-cutting PR will likely not be a jr and has enough experience to know how and when to use grep).
Anyhow, I have enough experience to know people are usually too convinced that all this SOLID, clean code stuff is peak software, so I won't expect to change anyone's thinking with 1 HN post; it usually takes me 2 years or so to train a person out of this and back to just putting the damn json in the db without ceremony. Also need to make sure LLMs have some good data that is based on experience and not dogmas to learn from :)
The rationale for Dependency Injection was never _just_ about making it easier to test static methods. In fact, dependency injection was never about static methods at all. No DI advocate -- not even the radical Uncle Bob -- will tell you to stop using Math.round() or Math.sqrt(), even though they are static methods.
The main driver for dependency injection was always to avoid strong coupling of unrelated classes. Strong coupling can be introduced by cases like Class A always instantiating a class B which is a particular subtype of class S (i.e. giving up the Liskov substitution principle), Class A initializing class B with particular parameters that cannot be extended or overridden, Class A calling a static method or a singleton method which modifies or reads a global value.
Strong coupling makes you lose on flexibility, reusability and code readability. If you need to modify how either class A or class B behaves later, you may now need to painstakingly scan all the places BOTH classes are used (and all the places other classes touching them are used) and modify the way they are constructed. If you want to enable OrderProcessor to accept bank transfers, but it was built to always call "new CreditCardProcessor()" internally inside its constructor, you will now have to find every place CreditCardProcessor is constructed and modify it. The worst offenders I've seen are pure logic classes that have no business having side effects, but still end up opening multiple files or doing a bunch of HTTP requests that you cannot avoid, because their authors just thought: "Cool, I can mock all this stuff with PowerMock while testing!"
The other issue I mentioned is code readability. This is especially an issue with singletons or static methods that mutate global state. You basically get the dreaded action-at-a-distance[1]. You might initially write a class that uses a singleton UserSessionManager object to keep track of the current user session. The class only operates in simple single-threaded scenarios, but at one point some other developer decides to use your class in a multi-threaded context. And boom. Since the singleton UserSessionManager wasn't a part of the interface of your class, the developer wasn't aware that it's being used and that the class is not ready for multi-threaded contexts[2]. But had you used DI, the dependencies of the class would have been explicit.
That's the true gist of DI really. DI is not about one heavyweight framework or another (in most cases you could do it quite easily without any framework). It's also not a pure OOP technique (it is commonly used in functional languages too, e.g. with the Reader monad). Dependency injection is really just about making your dependencies explicit and configurable rather than implicit and fixed.
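A minimal sketch of that gist in Kotlin (names borrowed from the OrderProcessor example above, otherwise mine):

interface PaymentProcessor { fun charge(cents: Long) }

class CreditCardProcessor : PaymentProcessor {
    override fun charge(cents: Long) = println("card: $cents")
}
class BankTransferProcessor : PaymentProcessor {
    override fun charge(cents: Long) = println("transfer: $cents")
}

// the dependency is explicit and swappable; no framework, no singleton, no static state
class OrderProcessor(private val processor: PaymentProcessor) {
    fun process(totalCents: Long) = processor.charge(totalCents)
}

// usage (script-style):
val prod = OrderProcessor(CreditCardProcessor())
val bank = OrderProcessor(BankTransferProcessor())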
As a tangent, mocking static methods has been possible for a rather long time. PowerMock (which allows mocking statics with EasyMock and Mockito) was available at least since 2008, and JMockit is even earlier, available at least in 2006[3]. So mocking static methods in Java has been possible for a very long time, probably before even 5% of Java programmers had started using mock objects.
But it's not always ideal. Unfortunately, tools like PowerMock or JMockit static/final mocking work by messing with the JVM internals. These libraries often broke when a new version of Java was released, and you had to wait until the compatibility issue was fixed. They also relied on tricks like custom classloaders, Java instrumentation agents and bytecode manipulation. These low-level tricks don't play well with many other things, for instance when you are using a framework which needs its own custom class loader, or another tool which needs bytecode manipulation. I was personally bitten by this when I wanted to implement mutation testing[4] in Java, and I couldn't get it to work with static mocking. Since I believe mutation testing carries more value than the convenience of being able to mock statics for testing, it was an easy choice to dump PowerMock.
Do yourself a favor and wean yourself off all this SOLID, Uncle Bob, Object Oriented, Clean Code crap.
Don't ever use inheritance. Instead of things inheriting from other things, flip the relationship and make things HAVE other things. This is called composition and it has all the positives of inheritance but none of the negatives.
Example: imagine you have a school system where there are student users and there are employee users, and some features like grading that should only be available for employees.
Instead of making Student and Employee inherit from User, just have a User class/record/object/whatever you want to call it that constitutes the account
data class User(val id: Int, val name: String, val email: String)
and for those Users who are students, create a Student that points to the user
data class Student(val userId: Int /* , blabla student-specific attributes */)
and vice versa for the Employees
data class Employee(val userId: Int /* , blabla employee-specific attributes */)
then, your types can simply and strongly prevent Students from being sent into functions that are supposed to operate on Employees etc etc (but for those cases where you really want functions that operate on Users, just send in each of their Users! nothing's preventing you from that flexibility if that's what you want)
and for those users who really are both (after all, students can graduate and become employees, and employees can enroll to study), THE SAME USER can be BOTH a Student and an Employee! (this is one of the biggest footguns with inheritance: in the inheritance world, a Student can never be an Employee, even though that's just an accident of using inheritance and in the real world there's actually nothing that calls for that kind of artificial, hard segregation)
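a quick sketch of that last point, reusing the toy classes above:

val u = User(id = 1, name = "Sam", email = "sam@example.edu")
val student = Student(userId = u.id)   // the same account enrolled as a student...
val employee = Employee(userId = u.id) // ...and employed, at the same time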
Once you see it you can't unsee it. The emperor has no clothes. Type-wise, functional programmers have had solutions for all these made-up problems for decades. That's why these days even sane object-oriented language designers like Josh Bloch, Brian Goetz, the Kotlin devs etc. are taking their languages in that direction.
Never using inheritance is pushing the dogma to the other extreme, imo. The whole "prefer composition over inheritance" is meant to help people avoid the typical OO overuse of inheritance. It doesn't mean the is-a relation doesn't/shouldn't exist. It just means that when there is data, prefer using composition to give access to that data.
There will be times when you want to represent an is-a relationship - especially when you want to guarantee a specific interface/set of functions your object will have - irrespective of the under-the-hood details.
Notifier n = GetNotifier(backend_name);
Here what you care about is notifier providing a set of functions (sendNotification, updateNotification, removeNotification) - irrespective of what the implementation details are - whether you're using desktop notifications or SMS notifications.
I see no need for inheritance there; that can and should be done using interfaces
e.g. in the contrived example I gave, any or all three of User, Student and Employee can implement the interface (and if needed, Student could simply delegate to its internal User = "no override", while Employee could "override" by providing its own, different implementation)
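a sketch in Kotlin (names assumed), where `by`-delegation gives you both the "no override" and the "override" case without any inheritance:

interface Notifier { fun sendNotification(msg: String) }

class DesktopNotifier : Notifier {
    override fun sendNotification(msg: String) = println("desktop: $msg")
}
class SmsNotifier : Notifier {
    override fun sendNotification(msg: String) = println("sms: $msg")
}

// call sites see only the interface:
fun getNotifier(backend: String): Notifier =
    if (backend == "sms") SmsNotifier() else DesktopNotifier()

// "no override": forward everything to the delegate; "override": redefine just one method
class QuietNotifier(delegate: Notifier) : Notifier by delegate {
    override fun sendNotification(msg: String) { /* suppressed */ }
}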
Inheriting from a class creates dispatch problems, and there are instance variables/fields/whatever to deal with. Never mind multiple inheritance, as that gets hairy real fast. With interfaces there is no hierarchy; all there is is a type and data, and you either pass them around together, you monomorphize to avoid the need to pass two pointers instead of one, or you have the data point to its type in a generic way. With class hierarchies you have to have operations to cast data up and down the hierarchy, trading one dispatch table for another.
Interfaces manifest the object's responsibility. Functions accepting objects as parameters should work with interfaces, not type instances. That way the responsibilities and capabilities are clear to the user of the interface and the implementor.
As far as how the implementor may fulfill its interface obligation, it may use inheritance, if it truly has an IS-A or subtype relationship with the base object.
If you have a sufficiently statically typed language then the is-a concern goes away -- certainly in the example you gave, since the compiler/linker knows to look for a `GetNotifier()` that returns a `Notifier`. Now, you might still want to know whether the notifier you got satisfies other traits than just those of `Notifier`, but you do that by using a type that has those traits rather than `Notifier`, and now you have little need for `instanceof` or similar operators. (You still might want to know what kind of thing you got because you might care about semantics that are not expressed in the interface. For example you might care to know whether a logger is "fast" as in local or "slow" as in remote, as that might cause you to log less verbosely to avoid drops or blocking or whatever. But the need for this `instanceof` goes down dramatically if you have interfaces and traits.)
Never is a good starting point as a guide but of course there are cases where is-a makes sense too.
Composition alone is generally not good enough to model all cases without resorting to some AOP or metaprogramming, but in that case the is-a or a base class would arguably be the simpler approach; at least it can be reasoned about and debugged, as opposed to some AOP/meta spaghetti that can probably only be understood from docs
Just because you see OO languages starting to favor composition over inheritance does not mean inheritance has no place, and indeed, interfaces as a form of composition have existed in many bog-standard OO languages for decades.
Your example doesn't compute, at least in most languages, because derived objects would not have the same shape as one another, only the shape of the base class. I.e. functions expecting a User object would of course accept either an Employee or a Student (both subclasses of User), but functions expecting a Student object or an Employee object would not accept the other object type just because they share a base class. Indeed, that's the whole point. And as another poster mentioned, you are introducing a burden by now having no way to determine whether a User is an Employee or a Student without passing additional information.
Listen, I'll be the first to admit that the oo paradigm went overboard with inheritance and object classification to the n-th degree by inventing ridiculous object hierarchies etc, but inheritance (even multiple inheritance) has a place- not just when reasoning about and organizing code, but for programmer ergonomics. And with the trend for composition to disallow data members (like traits in Rust), it can seriously limit the expressiveness of code.
Sometimes inheritance is better, and if used properly, there's nothing wrong with that. The alternative is that you wind up implementing the same 5-10 interfaces repeatedly for every different object you create.
It should never be all or nothing. Inheritance has its place. Composition has its place.
And if you squint just right they're two sides of the same coin. "Is A" vs "Can Do" or "Has".
> The alternative is that you wind up implementing the same 5-10 interfaces repeatedly for every different object you create.
if both Student and Employee need to implement those interfaces, it's probably User that should have and implement them, not Student and Employee (and if they truly do need to have and implement them, they can simply delegate to their internal User = "no override", or provide a unique implementation = "override") (let alone that, unless I'm misremembering, in Kotlin interfaces can have default implementations)
Inheritance has no place in production codebases -- unless there is strict discipline, enforced by tooling, ensuring calls only go in one direction. This Liskov stuff has zero bearing.
You are conflating two completely separate things.
The Liskov principle basically defines when you want inheritance versus when you don't. If your classes don't respect the Liskov principle, then they must not use inheritance.
The problems from your story relate to the implementation of some classes that really needed inheritance. The spaghetti you allude to was not caused by inheritance itself; it was caused by people creating spaghetti. The fact that child classes were calling parent class methods and then the parent class would call child class methods and so on is symptomatic of code that does too much, and of people taking code reuse to the extreme.
I've seen the same things happen with procedural code - a bunch of functions calling each other with all sorts of control parameters, and people adding just one more bool and one more if, because an existing function already does most of what I want, it just needs to do it slightly different at the seventh step. And after a few rounds of this, you end up with a function that can technically do 32 different things to the same data, depending on the combination of flags you pass in. And everyone deeply suspects that only 7 of those things are really used, but they're still afraid to break any of the other 25 possible cases with a change.
People piggybacking on existing code and changing it just a little bit is the root of most spaghetti. Whether that happens through flags or unprincipled inheritance or a mass of callbacks, it will always happen if enough people are trying to do this and enough time passes.
I don't believe in free will. People (including me) are going to do the easiest thing possible given the constraints. I don't think that's bad in any way -- it's best to embrace it. In this view, the tools should optimize for right thing to do aligning with the easy thing.
So if you have two options, one better than the other, then the better way to do things should be easier than the worse way. With inheritance as traditionally implemented, the worse way is easier than the better way.
And this can be fixed! You just need to make sure all the calls go one way (making the worse outcome harder), and then very carefully consider and work through the consequences of that. Maybe this fix ends up being unworkable due to the downstream consequences, but has anyone tried?
> I've seen the same things happen with procedural code - a bunch of functions calling each other with all sorts of control parameters, and people adding just one more bool and one more if, because an existing function already does most of what I want, it just needs to do it slightly different at the seventh step. And after a few rounds of this, you end up with a function that can technically do 32 different things to the same data, depending on the combination of flags you pass in. And everyone deeply suspects that only 7 of those things are really used, but they're still afraid to break any of the other 25 possible cases with a change.
Completely agree, that's a really bad way to do polymorphism too. A great way to do this kind of closed-universe polymorphism is through full-fledged sum types.
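In Kotlin terms, a sealed hierarchy plus when gives you the closed universe with compiler-checked exhaustiveness (toy types mine):

sealed interface Step
data class Fetch(val url: String) : Step
data class Retry(val attempts: Int) : Step
object Done : Step

fun describe(s: Step): String = when (s) {
    is Fetch -> "fetching ${s.url}"
    is Retry -> "retrying ${s.attempts} times"
    Done -> "done"  // add a new Step variant and every non-exhaustive `when` fails to compile
}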
> People piggybacking on existing code and changing it just a little bit is the root of most spaghetti. Whether that happens through flags or unprincipled inheritance or a mass of callbacks, it will always happen if enough people are trying to do this and enough time passes.
You're absolutely right. Where I go further is that I think it's possible to avoid this, via the carrot of making the better thing be easy to do, and rigorous discipline against doing worse things enforced by automation. I think Rust, for all its many imperfections, does a really good job at this.
edit: or, if not avoid it, at least stem the decline. This aligns perfectly with not believing in free will -- if it's all genetic and environmental luck all the way down, then the easiest point of leverage is to change the environment such that people are lucky more often than before.
Respectfully, I think you're throwing the baby out with the bathwater.
I read your story, and I can certainly empathize.
But just because someone has made a tangled web using inheritance doesn't mean inheritance itself is to blame. Show me someone doing something stupid with inheritance and I can demonstrate the same stupidity with composition.
I mean, base classes should not be operating on derived class objects outside of the base class interface, like ever. That's just poorly architected code no matter which way you slice it. But just like Dijkstra railing against goto, there is a time and a place for (measured, intelligent & appropriate) use.
Even the Linux kernel uses subclassing extensively. Sometimes Struct B really is a Struct A, just with some extra bits. You shouldn't have to duplicate code or nest structures to attain those ergonomics.
Anything can lead to a rube-goldberg mess if not handled with some common sense.
I believe the problem is structural to all traditional class-based inheritance models.
The sort of situation I described is almost impossible with trait/typeclass-based polymorphism. You have to go very far out of the idiom with a weird mix of required and provided methods to achieve it, and I have never seen anyone do this in practice. The idiomatic way is for whatever consumes the trait to pass in a context type. There is a clear separation between the call-forward and the callback interfaces. In Rust, & and &mut mean that there's an upper limit to how tangled the interface can get.
I'm fine with inheritance if there's rigorous enforcement of one-directional calls in place (which makes it roughly equivalent to trait-based composition). The Liskov stuff is a terrible distraction from this far more serious issue. Does anyone do this?
> You shouldn't have to duplicate code or nest structures to attain those ergonomics.
What's wrong with nesting structures? I like nesting structures.
> Don't ever use inheritance. Instead of things inheriting from other things, flip the relationship and make things HAVE other things. This is called composition and it has all the positives of inheritance but none of the negatives.
Bah. There are completely legitimate uses of inheritance where it's a really great fit. I think you'll find that being dogmatic about avoiding a programming pattern will eventually get you twisted up in other ways.
Inheritance can be used in a couple of ways that achieve a very-specific kind of code reuse. While I went through the early 2000's Java hype cycle with interfaces and factories and builders and double-dispatch and visitors everywhere, I went through a period where I hated most of that crap and swore to never use the visitor pattern again.
But hey, within the past two years I found an unbeatable use case where the visitor pattern absolutely rocks (it's there: https://github.com/titzer/wizard-engine/blob/master/src/util...). If you can come up with another way by which you can deal with 550 different kinds of animals (the Wasm instructions) and inherit the logic for 545 and just override the logic for 5 of them, then be my guest. (And yes, you can use ADTs and pattern-matching, which I do, liberally--but the specifics of how immediates and their types are encoded and decoded just simply cannot be replicated with as little code as the visitor pattern).
So don't completely swear off inheritance. It's like saying you'll never use a butter knife because you only do pocket knives. After all, butter knives are dull and not good for anything but butter.
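For flavor, the shape of that visitor in miniature -- a Kotlin toy (not the linked Wizard code), showing "inherit the default, override the exceptions":

interface Instr { fun <R> accept(v: Visitor<R>): R }

open class Visitor<R>(private val default: (Instr) -> R) {
    open fun visitCall(i: Call): R = default(i)
    open fun visitReturn(i: Ret): R = default(i)
    // ...imagine hundreds more, all falling through to `default`...
}

class Call : Instr { override fun <R> accept(v: Visitor<R>) = v.visitCall(this) }
class Ret : Instr { override fun <R> accept(v: Visitor<R>) = v.visitReturn(this) }

// a concrete pass overrides only the cases it cares about:
class CallCounter : Visitor<Int>({ 0 }) {
    override fun visitCall(i: Call) = 1
}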
If you can use functions in the same way objects are used, there’s no need for visitor objects.
There’s a reason why everything is a Lisp. All of the patterns are obvious with its primitives, while higher level primitives like classes, interfaces hide that there’s data and there’s behavior/effects.
Visitor objects are needed when you want, at runtime, to decide what code to execute based on the types of two parameters of a function (regular OOP virtual dispatch can only do this based on the type of one argument, the one before the dot). While you can model this in different ways, there is nothing in "plain" Lisp (say, R7RS Scheme) that makes this particularly simple.
Common Lisp does have a nicer solution to this, in the form of CLOS generic functions. In CLOS, methods are defined based on the classes of all arguments, not just the first one like in traditional object systems. Combined with inheritance, you can implement the whole thing with the minimal amount of code. But it's still an OOP system designed specifically for this.
The Visitor Pattern is one of the ones that actually does not go away when you have CLOS. That is to say, the traversal and visitation part of it doesn't go away; just all the boilerplate around simulating the double dispatch does, like needing the two methods accept and visit and whatnot.
Like say we want to visit the elements of a list, which are objects, and involve them with a visiting object:
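;; e.g. something like this, where GENERIC-FUN dispatches on both arguments:
(mapcar (lambda (elem) (generic-fun visitor elem)) list)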
We write all the method specializations of generic-fun for all combinations of visitor and element type we need and that's it.
Importantly, the traversal function doesn't have to know anything about the visitor stuff. Here we have mapcar, which dates back to before object orientation.
The traversal is not really part of the visitor pattern. The element.accept(visitor) function together with the visitor.visitElementType(element) are the identifying part of the visitor pattern, and they completely disappear with CLOS.
A classic example is different parsers for the same set of expression types. The expressions likely form a tree, you may not need a list of expressions at all, so no mapcar.
The motivating scenario for the Visitor pattern is processing an AST that has polymorphic nodes, to achieve different kinds of processing based on the visiting object, where special cases in that processing are based on the AST node kind.
Even if we have multiple dispatch, the methods we have to write for all the combinations do not disappear.
Additionally, there may actually be a method analogous to accept which performs the recursion.
Suppose that the AST node is so abstract that only it knows where/what its children are. Then you have some:
;; accept renamed to recurse; visitor to fun
;; visit is funcall
(defmethod recurse ((node additive-expr) fun)
  (recurse (additive-left-child node) fun)  ; pass FUN down the recursion
  (recurse (additive-right-child node) fun)
  (funcall fun node))
(We might want recurse-bottom-up and recurse-top-down.)
If all the AST classes derive from a base that uniformly maintains a list of n children, then this would just be in the base: (for-each-child ch (recurse ch fun)) or whatever.
Suppose we don't want to use a function, but an object (and not to use that object as funcallable). Then we need (let's integrate the base class idea also):
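;; a sketch of where this was headed (assumes a VISIT generic specialized
;; on both the visitor's and the node's class, plus the base-class traversal):
(defmethod recurse ((node ast-node) visitor)
  (for-each-child ch (recurse ch visitor))
  (visit visitor node))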
I happily admit it's more than possible to come up with examples that make inheritance shine. After all, that's what the authors of these books and articles do.
But most of them put the cart before the horse (deliberately design a "problem" that inheritance "solves") and don't seriously evaluate pros and cons or even consider alternatives.
Even then, some of the examples might be legitimate, and what you're referring to might be a case of one. (though I doubt there's no equally elegant and succinct way to do it without inheritance)
But none of that changes the fact that inheritance absolutely shouldn't be the default go-to solution for modeling any domain, which is what it has become (and what we are taught to understand it as),
or that it's exceedingly uncommon to come across situations like yours where you have 500+ shared cases of behavior and you only want to "override" 5
or that inheritance is overwhelmingly used NOT for such niche edge cases but as a default tool to model even the most trivial relationships, with zero justification or consideration
I agree that examples matter a lot, and for some reason a lot of introductory OO stuff has really bad examples. Like the whole Person/Employee/Employer/Manager dark pattern. In no sane world would a person's current role be tied to their identity--how do you model a person being promoted? They suddenly move from being an employee to a manager...or maybe they start their own business and lose their job? And who's modeling these people and what for? That's never shown. Are we the bank? The IRS? An insurance company? Because all of these have a lot of other data modeling to do, and how you represent the identities of people will be wrapped up in that. E.g.--maybe a person is both an employee and a client at the same time? It's all bonkers to try to use inheritance and subtyping and interfaces for that.
Algebraic data types excel at data modeling. It's like their killer app. And then OO people trot out these atrocious data modeling examples which functional languages can do way better. It's a lot of confusion all around.
You gotta program in a lot of different paradigms to see this.
Yeah, there are some real good experts on various subjects here on HN. One thing I would recommend is to contact anybody directly if needed (through the email ids in their profile or otherwise) with any questions you might have. That way you can have a longer discussion and/or learn more on specific subjects. Most people are willing to help generously when approached in a knowledge-seeking manner. I always look at HN threads/discussions as merely giving me an idea of different concepts/subjects, and ask for pointers to more knowledge, either books/papers or experts. Hopefully I also do the same with my comments, thus helping the overall s/n ratio of this site.
I will agree that the problem that class hierarchies attempt to solve is a problem one usually does not really have, but in your example you have not solved it at all.
It matches a relational database well, but once you have just a user reference, you can not narrow it down to an employee without out-of-band information. If a user should be just one type that can be multiple things at once, you can give them a set of roles.
I agree that there are better ways to model roles and FGA
the point of the example wasn't to be an idiomatic solution to that problem, but to illustrate the pointlessness of inheritance in general, and User Student Employee was the first thing that came to mind that was more "real" than the usual Animal examples
in any case, as for only having a User reference, you don't ever have only a User reference - overwhelmingly you would usually load a Student or Employee,
and each of them would have those attributes on them that characterize each of them respectively (and those attributes would not be shared - if they were, such shared attributes would go on the User object)
only for those functions that truly operate on just the User would you send in each of their User objects
the problem with what you're advocating is that you end up with functions like this:
fun doSomething(user: User) { if (user is Student) { /* do this */ } else { /* do that */ } }
which is insane
One example for when you might want something like a class hierarchy is something like a graph/tree structure, where you have lots of objects of different types and different attributes carrying references to each other that are not narrowly typed. You then have a set of operations that mostly do not care about the narrow types, and other operations that do care. You have things that behave mostly the same with small variations, in many cases.
> and each of them would have those attributes on them that characterize each of them respectively (and those attributes would not be shared - if they were, such shared attributes would go on the User object)
Suppose you want to compute a price for display, but its computation is different for Students and Employees. The display doesn't care if it's a Student or an Employee, it doesn't want to need to know if it's a Student or an Employee, it just wants the correct value computed. It can't just be a simple shared attribute. At some point you will need to make that distinction, whether it's a method, a conditional, or some other transform. It's not obvious how your solution could solve this more elegantly.
> which is insane
Not at all; in my view this kind of pattern matching is generally preferable over dynamic dispatch with methods, because you see what can happen in all cases, at a glance. But again, you do not even have a contrived example where you would need dynamic dispatch in the first place, so naturally its use is not obvious.
Your price could simply be an interface that both Student and Employee implement, and at the call site, you simply call foo.calculatePrice - Student will calculate it one way and Employee another. There is zero need for inheritance, or for the call site to know which object is which of the two types - all it needs to know is that it has the calculatePrice interface.
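A minimal Kotlin sketch of that (names and prices are invented):

interface Priced {
    fun calculatePrice(): Int
}

class Student : Priced {
    override fun calculatePrice() = 50    // student discount
}

class Employee : Priced {
    override fun calculatePrice() = 100
}

// the display code never needs to know which concrete type it holds
fun display(item: Priced) = println(item.calculatePrice())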
I also prefer pattern matching over dynamic dispatch. But I maintain that conjuring up attributes out of thin air (which is 100% what you're doing when your function accepts one type, and then inside, conjures up another, that has additional attributes from the input!) is insane. There are so many good reasons not to want to do this I don't even know where to begin. For one, it's a huge footgun for when you add another type that inherits - the compiler will NOT force you to add new branches to cover that case, even though you may want it to. Also, such "reusable" if-this-then-do-this-else-do-that methods, when allowed to propagate through the stack, mean every function is no longer one function; it's actually n functions, so ease of grokking and maintainability goes out the window. It's much better to relegate such ifs to the edges of the system (eg the http resource endpoint layer) and from there call methods that operate on what you want them to operate on, and do not require ifs.
> I also prefer pattern matching over dynamic dispatch. But I maintain that conjuring up attributes out of thin air (which is 100% what you're doing when your function accepts one type, and then inside, conjures up another, that has additional attributes from the input!) is insane.
Does it though? In one case, you have:
type S = A|B
func calculate(x:S):number
In the other, you have:
interface S { calculate(): number }
A::calculate():number
B::calculate():number
but also implicitly the possible:
Z::calculate():number
The latter case is more flexible, but also harder to reason about. Whoever calls S::calculate can never be quite sure what's in the bag.
> For one, it's a huge footgun for when you add another type that inherits - the compiler will NOT force you to add new branches to cover that case, even though you may want it to.
Lack of exhaustiveness checks is a problem many languages have, and if that's the case for you, that's an argument for preferring methods.
> it's actually n functions, so ease of grokking and maintainability goes out the window
...but that's exactly what interface methods are: N functions. That's what your problem demands; you can't get around it. Moreover, if most of your types do not have a distinct implementation for that method, you will still need to define them all. This is where both the "super-function" and inheritance (or traits) can save you quite a bit of code.
It's sad that late-binding languages like Objective-C never got the love they should have, and instead people have favored strongly (or stronger) typed languages. In Objective-C, you could have your User class take a delegate. The delegate could either handle messages for students or employees. And you could code the base User object to ignore anything that couldn't be handled by its particular delegate.
This is a very flexible way of doing things. Sure, you'll have people complain that this is "slow." But it's only slow by computer standards. By human standards—meaning the person sitting at a desktop or phone UI—it's fast enough that they'll never notice.
I will complain that it is hard to reason about. You better have a really good reason to do something like this, like third party extensibility as an absolute requirement. It should not be your go to approach for everything.
Inheritance is nothing more or less than composition + implementing the same interface + saving a bit of boilerplate.
When class A inherits from class B, that is equivalent to class A containing an object of class B, exposing the same interface as class B, and automatically generating method stubs for every method of the interface that call that same method on your B.
That is, these two are perfectly equivalent:
class B {
    public void foo() {
    }
}

class A_inheritance extends B {
}

class A_composition implements B {
    private B super;  // pseudo-code: the composed "parent" object

    public void foo() {
        super.foo();  // auto-generated forwarding stub
    }
}
This is pseudo-code because Java distinguishes between Interfaces and Classes, and doesn't have a way to refer to the Interface of a Class. In C++, which doesn't make this distinction, it's even more equivalent. Especially since C++ allows multiple inheritance, so it can even model a class composing multiple objects.
The problem with inheritance is that people get taught to use it for modeling purposes, instead of using it for what it actually does - polymorphism with virtual dispatch, which isn't all that common in practice. That is, inheritance should only be used if you need to have, somewhere in your code base, a collection of different objects that get called in the same way but have to execute different code, and you can only find out at runtime which is which.
For your students and employees and users example, the only reason to make Student and Employee inherit from User would be if there is a reason to have a collection of Users that you need to interact with, and you expect that different things will happen if a User is an Employee than if that User is a Student. This also implies that a Student can't be an Employee and vice versa (but you could have a third type, StudentEmployee, which may need to do something different from either a Student or an Employee). This is pretty unlikely for this scenario, so inheritance is unlikely to be a good idea.
Note also that deep class hierarchies are also extremely unlikely to be a good idea by this standard: if classes A and B derive from C, that doesn't mean class D can derive from A - that is only useful if you have a collection of As that can be either A or D, and you actually need them to do different things. If you simply need a third type of behavior for Cs, D should be a subclass of C instead.
I'd also note that my definition applies to inheritance in the general case - whether the parent is an interface-only class or a concrete class with fields and methods and everything. I don't think the distinction between "interface inheritance" and "implementation inheritance" is meaningful.
This is not quite correct, at least not in most commonly-used languages. The difference comes when a "superclass" method calls self.whatever(), and whatever() is implemented in both the "superclass" and the "subclass". In the implementation-inheritance-based version, self.whatever() will call the subclass method. In the composition-based version, self.whatever() will call the superclass method. The implementation-inheritance-based version is sometimes called "open recursion" (a sketch follows below). This is why you need implementation inheritance for the GoF Template Method pattern. See for instance this paper:
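A minimal Kotlin sketch of the difference (class and method names are invented):

open class Base {
    open fun whatever() = println("base whatever")
    fun template() = whatever()    // the "superclass" method calls self.whatever()
}

class Derived : Base() {
    override fun whatever() = println("derived whatever")
}

class Composed {    // composition-based "subclass"
    private val base = Base()
    fun whatever() = println("composed whatever")
    fun template() = base.template()    // Base.template() only ever sees Base.whatever()
}

fun main() {
    Derived().template()     // prints "derived whatever": open recursion downcalls
    Composed().template()    // prints "base whatever": no downcall with composition
}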
for one, see the reply from the handle cryptonector further up
> Inheriting from a class creates dispatch problems, and there's instance variables/fields/whatever to deal with. Never mind multiple inheritance, as that gets hairy real fast. With interfaces there is no hierarchy, and all there is is a type and a data, and you either pass them around together, you monomorphize to avoid the need to pass two pointers instead of one, or you have the data point to its type in a generic way. With class hierarchies you have to have operations to cast data up and down the hierarchy, meaning trade one dispatch table for another.
I agree with all of this. With interfaces, you get all the benefits, but none of the downsides.
Liskov substitution will not save you. One of the worst cases of inheritance I've ever seen was in a hierarchy that was a perfect Liskov fit -- an even better fit than traditional examples like "a JSON parser is a parser". See https://news.ycombinator.com/item?id=42512629.
The fundamental problem with inheritance, and one not shared by any other kind of polymorphism, is that you can make both upcalls and downcalls within the same hierarchy. No one should ever use inheritance in any long-term production use case without some way of enforcing strict discipline, ensuring that calls can only go one way -- up or down, but not both. I don't know to what extent tooling to enforce this discipline exists.
(Also I just realized I got punked by LLM slop.)
> No one should ever use inheritance in any long-term production use case without some way of enforcing strict discipline, ensuring that calls can only go one way -- up or down, but not both. I don't know to what extent tooling to enforce this discipline exists.
Disagree with your first part. Inheritance used to express Subtyping is different from that used for Code-reuse and yet again different from that used for implementing Framework skeleton structure. You have to disambiguate them carefully when using it. See the article linked to by user "Fannon" here - https://news.ycombinator.com/item?id=42789466
As for tooling, you have to enforce the contract using pre/post/inv clauses following Meyer's DbC and also explicit documentation.
Thanks for that article -- I have to agree with Jacob Zimmerman in the comments to the article:
> I don’t get it. I read one part of the article, think I get it, then I read a different part and what I read there doesn’t jive with what I thought I understood. And I can’t figure out how to reconcile them.
---
> As for tooling, you have to enforce the contract using pre/post/inv clauses following Meyer's DbC and also explicit documentation.
I think we call them asserts and type-level state machines :)
I don't really believe in documentation as enough of a barrier to doing worse things. It must, at a structural level, be easier to do better things.
There is no confusion if you understand that Inheritance is just a "mechanism" to express three (and maybe more) different kinds of "policies", and a single class may implement any or all of them, in which case it becomes important to disambiguate which methods/functions express which "policies". There is an abstract concept and a syntactical expression of that concept, which needs to be clear in one's mind.
Again, asserts are just the "mechanism" to express pre/post/inv "policies" in code. Without having an understanding of pre/post/inv from the pov of Hoare Logic, merely using asserts will not give you much benefit. Documentation is quite important here.
Both the above can be seen in the design of the Eiffel Language where they are integrated into proper syntactical mechanisms. Once you understand the concepts here, you can apply them explicitly even if your language does not support the needed syntax (eg. Contracts). See Bertrand Meyer's OOSC2 for details - https://bertrandmeyer.com/oosc2/ Specifically "Design-by-Contract (DbC)" and "Inheritance Techniques" and "Using Inheritance well".
Also relevant is my other comment here - https://news.ycombinator.com/item?id=42788947
I agree that inheritance does too many things and has too many degrees of flexibility. I think other kinds of polymorphism like typeclasses don't have this issue, and are better due to that. Automation is highly preferable to documentation.
I think the discussion would benefit from you concretely working through an example. What change(s) are you proposing to how inheritance is done in C++ or Java, and how would they prevent spaghetti code and nested upcalls/downcalls?
I am not sure that you understood what I wrote. Inheritance's flexibility is its very strength, which allows you to express different concepts elegantly. Also no amount of Automation/Tooling/etc. can substitute for documentation explaining the intent behind the code.
The main thing I would like to see in C++/Java/whatever is support for "Design-by-Contract" (DbC) similar to that given in Eiffel. There is already a proposal for C++; see this recent HN discussion - https://news.ycombinator.com/item?id=42131473 Basically this is a way to apply Hoare Logic to functions/methods directly in the implementation language itself. Now use this across the types/classes in an inheritance hierarchy and you can enforce the semantics that you want to express.
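For a flavor of what this looks like without dedicated syntax, here's a minimal Kotlin sketch where require/check stand in for real contract clauses (class names are invented):

open class CashMachine {
    // pre: only positive multiples of 10; post: dispensed exactly what was asked
    open fun dispense(amount: Int): Int {
        require(amount > 0 && amount % 10 == 0) { "pre: positive multiple of 10" }
        val dispensed = amount
        check(dispensed == amount) { "post: dispensed exactly what was asked" }
        return dispensed
    }
}

// behavioral subtyping a la Meyer/Liskov: the override may *weaken* the precondition
// (accept more inputs) and *strengthen* the postcondition (promise more), never the reverse
class CoinCashMachine : CashMachine() {
    override fun dispense(amount: Int): Int {
        require(amount > 0) { "weaker pre: any positive amount is accepted" }
        val dispensed = amount
        check(dispensed == amount) { "post: at least as strong as the base's" }
        return dispensed
    }
}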
Regarding a concrete example: I have already pointed you to Bertrand Meyer's OOSC2 and three specific chapters to read; they walk you through proper examples with explanations. Additionally, see his Applying Design by Contract paper (pdf) here - https://se.inf.ethz.ch/~meyer/publications/computer/contract.... If you would like to see complete application code using a C++ OO framework, I suggest creating a sample MFC app using the Visual Studio C++ wizard and just looking at the generated code without adding anything of your own. It uses a Document/View architecture (a variation of MVC) where your app-specific classes derive from classes provided by the MFC framework. The framework invokes your derived methods (i.e. downcalls), which can as needed call back to the base's methods (i.e. upcalls). There is a strong coupling between the framework classes and your app-specific ones by design. You can see how different usages of inheritance are implemented to give a powerful app framework; see the documentation starting here - https://learn.microsoft.com/en-us/cpp/mfc/document-view-arch...
Sorry I don't think your response really gets to the point. I'm aware of various techniques like contracts, but you're speaking in generalities rather than specifics. So yes, I haven't quite understood what you meant.
This is a common frustration I have with OOP discourse, it tends to be really up-in-the-air and not grounded in concrete specifics. (The article you linked also has this issue.) Meanwhile, users suffer in ways that just don't happen with typeclass-based polymorphism, and none of this discourse is required in my world. So why should I not recommend everyone use typeclass-based polymorphism?
> I am not sure that you understood what I wrote. Inheritance's flexibility is its very strength
No, being too flexible is a weakness, not a strength. At scale, rigorous discipline enforced by tooling is required.
> Also no amount of Automation/Tooling/etc. can substitute for documentation explaining the intent behind the code.
Yes, of course documentation is required. What I'm saying is that if it can be automated, it should be, and that relying on documentation alone is foolish.
In particular, invariants like "no downcalls" or "no upcalls" should 100% be enforced by automation. Documentation is not enough at scale and under pressure.
> I suggest creating a sample MFC app using the Visual Studio C++ wizard
I'd rather not?
> The framework invokes your derived methods (i.e. downcalls), which can as needed call back to the base's methods (i.e. upcalls).
This sounds really bad to me at scale and under pressure.
I pointed you to a specific book, i.e. OOSC2, and three specific chapters in it (to start with) which explain the concepts well, with the examples you asked for. How much more specific can one get? If you already know contracts then it should be easy to translate the concepts to any language of your choice. Meyer provides a thorough rationale and is extremely detailed in his examples. Furthermore, I also pointed you to one of the largest and commercially most successful class libraries and application frameworks (i.e. MFC), where you can see classic OOD/OOP (including upcalls/downcalls) in action; and yet you say I am "speaking in generalities"! It seems you are not willing to read/study but expect a couple of paragraphs to illuminate everything, which is not going to happen.
Eg: Base.method() has {pre1} and {post1} as contracts. Derived.method() has {pre2} and {post2} as contracts. What should be the relationship between {pre2}&{pre1} and {post2}&{post1} to enforce proper subtyping?
> This is a common frustration I have with OOP discourse, it tends to be really up-in-the-air and not grounded in concrete specifics
It is not up-in-the-air when ideas and specific books by authors like Bertrand Meyer and Barbara Liskov (both researchers and practitioners) are being pointed out. Trying to simplify their concepts into a couple of paragraphs would invariably miss important nuances and lead to misinterpretations (the bane of most HN discussions based on trivial articles/blog posts). Hence it is better they are studied directly and then we can have a discussion if you would like.
> Meanwhile, users suffer in ways that just don't happen with typeclass-based polymorphism, and none of this discourse is required in my world. So why should I not recommend everyone use typeclass-based polymorphism?
Sure, there are other kinds of polymorphism which can be better in certain scenarios. But that is not under discussion here; we are talking about "traditional" dynamic runtime dispatch based polymorphism, which is far easier to understand and implement even in small languages like C.
> No, being too flexible is a weakness, not a strength. At scale, rigorous discipline enforced by tooling is required.
Flexibility increases your "design space" and hence is never a weakness. Rigorous discipline is needed throughout development but tooling can only do so much.
> In particular, invariants like "no downcalls" or "no upcalls" should 100% be enforced by automation.
This depends on the concept you are trying to express and cannot be the same in all scenarios (except for direct ones like "interface implementation").
> I'd rather not?
Well, you did ask for a concrete example and I showed you MFC apps.
> This sounds really bad to me at scale and under pressure.
Saying something is "bad" or "spaghetti" without understanding the design concepts behind the implementation is wrong. MFC is one of the largest and most successful application frameworks in the industry and has proven itself in all sorts of applications at scale; studying it teaches one lots of OOD/OOP techniques (good/bad/ugly) needed in real-life industry apps.
> Flexibility increases your "design space" and hence is never a weakness.
This is just objectively false. Constraints liberate and liberties constrain.
> Rigorous discipline is needed throughout development but tooling can only do so much.
Have you used Rust? I would recommend building some kind of non-trivial command line tool with it — you will quickly see how low your expectations for tooling have been.
> Eg: Base.method() has {pre1} and {post1} as contracts. Derived.method() has {pre2} and {post2} as contracts. What should be the relationship between {pre2}&{pre1} and {post2}&{post1} to enforce proper subtyping?
As someone who understands variance etc quite well, my answer is to simply not have subtypes. You absolutely do not need inheritance subtyping to build production software. (Rust has subtyping and variance only for lifetime parameters, and that's confusing enough.)
> Sure, there are other kinds of polymorphism which can be better in certain scenarios. But that is not under discussion here; we are talking about "traditional" dynamic runtime dispatch based polymorphism, which is far easier to understand and implement even in small languages like C.
I use traits for runtime dispatch in Rust all the time?
Inheritance is only traditional because C++ and Java made it so. I think it's been a colossal mistake.
> This is just objectively false. Constraints liberate and liberties constrain.
You are completely wrong here. Flexibility by definition means an increase in the allowed degrees of freedom along one or more axes, which in turn allows one to mix and match feature sets to express more design concepts (eg. multi-paradigm). Your second line is a silly slogan which presumably means constraints make the job of picking one choice from a set easier due to less thought needed. It is applicable to inexperienced developers but certainly not to experienced ones who need all the flexibility that a language can give.
> As someone who understands variance etc quite well, my answer is to simply not have subtypes. You absolutely do not need inheritance subtyping to build production software. (Rust has subtyping and variance only for lifetime parameters, and that's confusing enough.)
You have not understood the example. Variance is used to constrain types, but pre/post are predicates relating subsets of values from the types; this constrains the state space (the cartesian product of the types) itself. Second, your statement not to use subtyping is silly. Subtype relationships arise naturally amongst concepts in any non-trivial system, which you can group in a hierarchy based on commonality (towards the top) and variability (towards the bottom). Inheritance is just a direct way of expressing it.
> Inheritance is only traditional because C++ and Java made it so. I think it's been a colossal mistake.
Statements like these betray an ignorance of the subject. I have already shown that Inheritance can be used for different purposes, of which Subtyping in the LSP sense is the one everybody agrees on. The other uses need experience and discipline but are very powerful when done clearly. Inheritance was first introduced in Simula 67, based on an idea presented by Tony Hoare in 1966. C++ popularized it and others simply copied it. See wikipedia for more details - https://en.wikipedia.org/wiki/Inheritance_(object-oriented_p...
PS: This discussion reminded me of "The Blub Paradox" by Paul Graham (https://paulgraham.com/avg.html), which I think most Rust evangelicals suffer from. Just from my cursory look at Rust, I have seen nothing compelling to make me want to study it in depth over my preferred language of C++. With the addition of more features into "Modern C++" to support Functional Programming, it has become even more flexible and powerful, albeit with a steeper learning curve.
> Your second line is a silly slogan which presumably means constraints make the job of picking one choice from a set easier due to less thought needed
That is absolutely not what it means, and it is not a silly slogan — it is a basic law of reality.
As an example, if your build system is monadic (build nodes can add new nodes dynamically), then the number of nodes is not known upfront. If the build system is not monadic, the number of nodes is determined at the start of the build process, which is exactly what lets the tool plan, schedule, and report progress accurately.
As another example, the constraints that Rust sets around & and &mut mean that the compiler can do really aggressive noalias optimizations that no one would even dream about doing in C or C++.
See https://m.youtube.com/watch?v=GqmsQeSzMdw for more examples.
> It is applicable to inexperienced developers but certainly not to experienced ones who need all the flexibility that a language can give.
I'm quite an experienced developer, and I've tended to use more constrained languages over time. I love the fact that Rust constrains me by not having uncoordinated shared mutable state.
> This discussion reminded me of "The Blub Paradox" by Paul Graham (https://paulgraham.com/avg.html), which I think most Rust evangelicals suffer from
At Oxide we use Rust and would never have been able to achieve this level of rigor in C++. Hell, try writing anything like my tool https://nexte.st/ in C++ (be sure to get the signal handling exactly right). Rust tooling is at a completely different quality level from earlier-generation languages.
Again, these are all your preferences/opinions which you are stating as some sort of acknowledged truth, which is most definitely not the case. While there are many good points about Rust, it is quite over-hyped, with an evangelical zeal that turns a lot of software engineers off of it. Graydon Hoare himself has said he took the good ideas from old languages and put them together. That in itself is obviously not a bad thing (imo, the industry killed research in programming languages/OS from the mid-nineties, when Java was marketed up the wazoo by Sun throwing ungodly amounts of money at it), but the "saviour complex" being pushed is a strict no-no with experienced C/C++ developers.
I don't think there really is any reasonable way to disagree with "constraints liberate, liberties constrain", sorry. Anyone who has spent any amount of time with algebraic structures in mathematics will grasp this intuitively, as will anyone who has written code in a type-safe style. It really is a basic law of nature, similar to other basic principles like Bayes' law.
I only brought in Rust because it does polymorphism in a non-OO style.
I've never seen a case where inheritance was superior to composition with a shared interface. Worst case with composition, you just forward to the injected object's method directly. The beauty is that this really shines when you apply the Liskov substitution principle.
I think Python's pattern of using inheritance for mixins is probably a good candidate. But Python does have a culture of "inheritance is only for sharing code; user beware if you try to use it for other things." Python's ABC classes for collections are also a good use of inheritance. Inherit from MutableMapping, implement the required methods, boom - you get all the other mapping methods for free.
Pydantic / dataclass inheritance is elegant for building up different collections of fields. That being said it does use codegen / metaclass hackery to do it.
Same for Ruby, and you don't even need to inherit. You include the Enumerable module, implement each, and your instances are suddenly iterable
Can constructors and destructors still go counter to this idea if needed?
I think values should generally only be combined into a structure at the end (no half-formed structures with null data, no calls on methods that work on half-formed structures).
Destructors are more complicated; there are definitely times where you have to violate invariants that otherwise always hold.
And functional programmers would argue that contravariance is the real meaning of Liskov’s substitution principle: https://apocalisp.wordpress.com/2010/10/06/liskov-substituti...
> So LSP just says “predicates are contravariant”
Maybe just leave out the "just" for a pleasant journey?
Since the interesting part of Barbara's uh, guideline, as "almost" pointed out by your link, is "almost" the opposite of "almost" trivial..
Don't mind me, I'm imbecilic :)
(See your link's comments, at least those imbeciles "almost" get it)
A practical rephrasing of LSP: subclasses should be subtypes.
Here to recommend this article, really helped me to understand inheritance better. Liskov Substitution is just one aspect / type of it and may conflict with others.
https://www.sicpers.info/2018/03/why-inheritance-never-made-...
Very Good; Gives the "Correct" overview of different usages (i.e. policy) of Inheritance (i.e. mechanism).
Quote: Inheritance was never a problem: trying to use the same tree for three different concepts was the problem.
From the article:
Again, kudos to Uncle Bob for reminding me about the importance of good software architecture in his classic Clean Architecture! That book is my primary inspiration for this series. Without clean architecture, we’ll all be building firmware (my paraphrased summary).
What does clean architecture have to do with building firmware or not? Plenty of programmers make a living building firmware. Just because they don't need to, can't, or don't want to apply clean architecture in their code doesn't mean they are inferior to those who do.
Furthermore, after a snippet which I suppose is in Kotlin, there is this:
> While mathematically a square is a rectangle, in terms of behavior substitutability, it isn’t. The Square class violates LSP because it changes the behavior that clients of Rectangle expect. Instead of inheritance, we can use composition and interfaces
The Liskov principle is about one of the three types of polymorphism (so far): subtyping polymorphism, which is about inheritance. Composition is _not_ subtyping. And interfaces (be it Java's or Kotlin's) are another type of polymorphism: ad-hoc. Even Wikipedia[1] has the correct definition:
Ad hoc polymorphism: defines a common interface for an arbitrary set of individually specified types.
Therefore, the article's interface examples don't fall under LSP either.
I understand the good intentions behind the article, but it left much to be desired. Proper research should have been done beforehand, to at least fix the glaring errors.
[1]: https://en.wikipedia.org/wiki/Polymorphism_%28computer_scien...
I’m in the middle of reading Clean Architecture right now. The square/rectangle example is directly from the book.
The firmware statement is an argument made (differently) in the book that software is called soft because it’s easy to change. Firmware is harder to change because of its tight coupling and dependencies (to the hardware). Software that is hard to change due to tight coupling and dependencies could almost be considered firmware—like brand new code without tests can almost be considered legacy.
You shouldn’t believe what you read, especially from the book Clean Code. The principles are somewhat okay; the examples are terrible.
Like most articles on "Inheritance", this one is clueless about providing any "real meaning/understanding". People always take the soundbites (eg. Uncle Bob's SOLID) provided as a mnemonic as being the end-all, don't fully understand the nuances, and then usually arrive at a wrong conclusion.
LSP (https://en.wikipedia.org/wiki/Liskov_substitution_principle) has to do with behavioural subtyping guaranteeing semantic interoperability between types in a hierarchy. It involves not just the syntax of function signatures but their semantic meaning, involving Variance/Invariance/Covariance/Contravariance and guarantees expressed using an extension of Hoare Logic, i.e. Preconditions/Postconditions/Invariants (derived from Meyer's DbC). Without enforcing the latter (which is generally done via documentation, since there is no syntax for expressing pre/post/inv directly in most languages), the former is incomplete, the complete contract is easily missed/forgotten, and this leads to the mistaken belief that "Inheritance is bad". The LSP wikipedia page links to all the concepts, the original papers and more for further clarification.
See also Bertrand Meyer's "Using Inheritance Well" from his book Object Oriented Software Construction, second edition - https://archive.eiffel.com/doc/manuals/technology/oosc/inher...
Finally, see Barbara Liskov's own book (with John Guttag), Program Development in Java: Abstraction, Specification, and Object-Oriented Design, for a "correct approach" to OOP. Note that Java is just used as an example language; the principles are language independent.
If I remember correctly, Liskov didn't talk about inheritance but subtyping in a more general way. Java, C++ and other, especially statically typed, compiled languages often use inheritance to model subtyping but Liskov/Wing weren't making any statements about inheritance specifically.
> subtyping in a more general way
This is correct. I read her paper closely. One example I give is how SICP provides two implementations for complex numbers[1], the rectangular form and the polar form,
and then on page 138 provides this interface that both satisfy.
[1] https://mitp-content-server.mit.edu/books/content/sectbyfn/b...
> use inheritance to model subtyping but Liskov/Wing weren't making any statements about inheritance specifically.
Right. Inheritance is just one mechanism to realize Subtyping. When done with proper contract guarantees (i.e. pre/post/inv clauses) it is a very powerful way to express semantic relationships through code reuse.
I don't feel the rectangle/square example is valid, given that both alternatives follow different designs - there's no Shape base class in the inheritance example. Moreover, I don't think that switching from a base (abstract) class to an interface is enough in itself to call it composition.
The two issues the article mentions have, imho, less to do with the LSP itself and more with the limitations that different programming languages have when it comes to defining contracts through interfaces (not the same thing), like the lack of exception specs or non-nullability enforcement.
> class Square : Rectangle() { ...
What if instead of Rectangle class we would have ReadonlyRectangle and Rectangle? Square could then inherit from ReadonlyRectangle, so code expecting only to read some properties and not write them could accept Square objects as ReadonlyRectangle. Alternatively if we really want to have only Square and Rectangle classes, there could be some language feature that whenever you want to cast Square to Rectangle it must be "const Rectangle" (const as in C++), so again we would be allowed to only use the "safe" subset of object methods.
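A minimal Kotlin sketch of the ReadonlyRectangle idea (the details are invented):

interface ReadonlyRectangle {
    val width: Int
    val height: Int
    fun area() = width * height
}

// the mutable rectangle exposes the read-only view plus setters (vars)
class Rectangle(override var width: Int, override var height: Int) : ReadonlyRectangle

// a square can always be *read* as a rectangle; it just can't be mutated into a non-square
class Square(var side: Int) : ReadonlyRectangle {
    override val width get() = side
    override val height get() = side
}

// code that only reads accepts both; code that mutates must ask for Rectangle explicitly
fun describe(r: ReadonlyRectangle) = println("${r.width} x ${r.height}, area ${r.area()}")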
I think what you mean is that if a Square that is also a Rectangle can't be made non-square, then inheritance works. Which, fair enough, but I think there are still other good reasons that inheritance is a bad approach. Interfaces (and traits) are still way better.
What is "ReadonlyRectangle"? Is it just an interface that only exposes read-only methods; or is it an explicit promise that the rectangle is immutable?
Perhaps we could go with even more classes. "Rectangle" and "Square" for the read-only methods, without any implications about mutability. "MutableRectangle" and "MutableSquare" for mutable implementations; "ImmutableRectangle" and "ImmutableSquare" for immutable implementations.
- "Rectangle" has methods "getWidth" and "getHeight".
- "Square" has a method "getSide".
- "ImmutableRectangle" implements "Rectangle".
- "ImmutableSquare" implements "Rectangle" and "ImmutableRectangle" and "Square".
- "MutableRectangle" implements "Rectangle"; has extra methods "setWidth" and "setHeight".
- "MutableSquare" implements "Rectangle" and "Square"; has an extra method "setSide".
...or you could just give up, and declare two classes "Square" and "Rectangle" (mutable) that either have nothing in common, or they just both extend some "Shape" class that can paint them and calculate the area.
I thought there were more but these are the only two interesting prior threads I could find. Others?
A better explanation of the Liskov Substitution Principle - https://news.ycombinator.com/item?id=38182278 - Nov 2023 (1 comment)
The Liskov Substitution Principle (2019) - https://news.ycombinator.com/item?id=23245125 - May 2020 (93 comments)
A problem that only exists in OOP codebases. Just don't do it and avoid the issue entirely.
What do you work on that avoids OOP code bases?
I've written Rust full time for the last 8 years, being part of teams that have shipped several large, transformative, and basically correct projects. No OOP in sight. It's wonderful!
At work we sadly have to implement very OOP-y standards with all the bullshit that entails. 11 levels of inheritance with overrides all over the place sure isn't fun to deal with.
But for things I do myself I use objects and interfaces strictly as a tool to solve specific problems, not as the overall program structure.
Most of the time you just need to turn some bits into other bits with a function, no need to overcomplicate things.
The question of whether a square is a rectangle or a rectangle is a square is the sort of thing that comes from OOP brain-rot. They're just data, and their "isa" relationship is likely not even relevant to the problem you're actually trying to solve, like displaying them onscreen.
A "square" could be a function that makes a rectangle out of a single float. A "rectangle" could be a function that produces a polygon. The concepts need not be modeled as types or objects at all.
It depends on the actual use case.
That's an interesting perspective on inheritance.
The problem I see inheritance solving is not having to distribute subtype-specific logic throughout your code base - functions can interact with objects in a generic manner and you get to keep subtype-specific code all in one spot. That's a win.
Inheritance isn't the only means for achieving this capability, though. You can also use interfaces and protocols. I prefer to use interfaces. If my class needs to implement an interface then that's explicit: it implements the interface. I can use inheritance if the class really IS-A variant of another class, and it can use that base class in fulfilling its obligation of implementing that interface. That's an implementation detail. But the fact it has responsibility for implementing that interface is made explicit.
Yeah, but that's what I mean by using it as a tool: you're not trying to model some weird taxonomy with your classes (like the square/rectangle situation), you're using a language feature to enable generic code and/or code reuse; any real-world relationship between the concepts is irrelevant.
Inheritance is bad because it bundles a subtyping relationship together with code reuse and data reuse. It is extremely rare that you want all 3 together, and even when you think you do, there's a rude awakening coming your way, especially if you bought into the "model real-world concepts as a class hierarchy" thing.
The object-relational mismatch is a weakness of the object side, not of the relational. Use the right tool for the job and forget about stupid programming paradigms.
Better to think of LSP as more of a gray scale than all or nothing. The more the APIs match, the more substitutability you gain.
Switching to composition has its advantages but you do lose all substitutability and often need to write forwarding methods that have to be kept in sync as the code evolves over time.
SOLID and clean code are not some universal bible that is followed everywhere; I spend a considerable amount of effort reasoning juniors and mid-levels out of some of the bad habits they get from following these principles blindly.
For example, the only reason DI became so popular is that you could not mock statics in Java at the time. In the FB codebase DI was also used in PHP, until they found a way to mock statics, after which the DI framework was deprecated and codemods started coming in removing DI. There is literally nothing wrong with using a factory method or constructing what you need on demand. These days statics can also be mocked in Java, and if you really think about it you see Spring Boot adds a lot of accidental complexity (but sure, it's convenient and well tested, so it's ok to use); concepts like beans and bean factories are not essential for solving any business problem.
Which brings me to the S in SOLID, which I think is probably among the top 2 worst principles in software engineering (the no. 1 spot goes to DRY). Somehow it came from the early-2000s TDD crowd and the test pyramid; it makes sense if you embrace TDD, mocking, the test pyramid and unit tests as a good thing. In reality that style of software is really hard to understand: every problem is split into 1000 small pieces invoking each other, usually in some undefined ways, and no flow can be understood without building a mental model of the entire 1000-object spaghetti. The tests themselves mostly just end up setting up a bunch of mocks and then pretty much coupling the impl and the test at the method-call level; any change to the impl will cause the tests to break for the sole reason that the new method call was not mocked. After going through all this ceremony, the tests are not even guaranteeing the thing will work at runtime, since the db, kafka or http was mocked out and all the filters, listeners and db validations were skipped. These days, so-called integration tests with docker compose are a lot better (use the actual db or kafka, wiremock the http level); that way you have a reasonable chance of catching things like whether this mysql jdbc driver upgrade broke anything.
I have to mention DRY also; the amount of sins caused in the name of DRY by juniors is crazy: similar-looking lines get moved into a common function/method/util all the time, and coupling is introduced between 2 previously independent parts of the system. As the code evolves and morphs into something different, the original function starts getting more args to behave one way in one case and another way in another; if it had been left as separate files, each could have evolved separately. I don't really know how to explain this better than: coupling should not be introduced to save a few lines of typing or boilerplate. In fact, any abstraction or indirection should only be introduced when it's really needed; the default mode should be copy/paste and no coupling (the person adding a cross-cutting PR will likely not be a jr and has enough experience to know how and when to use grep).
Anyhow, I have enough experience to know people are usually too convinced that all this SOLID, clean code stuff is peak software, so I won't expect to change anyone's thinking with 1 HN post; it usually takes me 2 years or so to train a person out of this and back to just putting the damn json in the db without ceremony. Also need to make sure LLMs have some good data that is based on experience and not dogmas to learn from :)
As for the L, I have no strong beef with it; it's OK.
> sins caused in the name of DRY by juniors
A discussion of clones that can be OK and more: <https://cormack.uwaterloo.ca/~migod/papers/2008/emse08-Clone...>
The rationale for Dependency Injection was never _just_ about making static methods easier to mock for testing. In fact, dependency injection was never about static methods at all. No DI advocate - not even the radical Uncle Bob - will tell you to stop using Math.round() or Math.sqrt(), even though they are static methods.
The main driver for dependency injection was always to avoid strong coupling of unrelated classes. Strong coupling can be introduced by cases like Class A always instantiating a class B which is a particular subtype of class S (i.e. giving up the Liskov substitution principle), Class A initializing class B with particular parameters that cannot be extended or overridden, Class A calling a static method or a singleton method which modifies or reads a global value.
Strong coupling makes you lose out on flexibility, reusability and code readability. If you need to modify how either class A or class B behaves later, you may now need to painstakingly scan all the places BOTH classes are used (and all the places other classes touching them are used) and modify the way they are constructed. If you want to enable OrderProcessor to accept bank transfers, but it was built to always call "new CreditCardProcessor()" internally inside its constructor, you will now have to find every place CreditCardProcessor is constructed and modify it. The worst offenders I've seen are pure logic classes that have no business having side effects, but still end up opening multiple files or doing a bunch of HTTP requests that you cannot avoid, because their authors just thought: "Cool, I can mock all this stuff with PowerMock while testing!"
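To make the OrderProcessor point concrete, a minimal Kotlin sketch (BankTransferProcessor and the method names are invented for illustration):

interface PaymentProcessor {
    fun charge(cents: Long)
}

class CreditCardProcessor : PaymentProcessor {
    override fun charge(cents: Long) { /* ... */ }
}

class BankTransferProcessor : PaymentProcessor {
    override fun charge(cents: Long) { /* ... */ }
}

// strongly coupled: hard-wires one concrete subtype, giving up substitutability
class CoupledOrderProcessor {
    private val processor = CreditCardProcessor()
    fun checkout(cents: Long) = processor.charge(cents)
}

// dependency-injected: the caller decides which processor to use; no framework required
class OrderProcessor(private val processor: PaymentProcessor) {
    fun checkout(cents: Long) = processor.charge(cents)
}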
The other issue I mentioned is code readability. This is especially an issue with singletons or static methods that mutate global state. You basically get the dreaded action-at-a-distance[1]. You might initially write a class that is using a singleton UserSessionManager object to keep track of the current user session. The class only operates in simple single-threaded scenarios, but at one point some other developer decides to use your class in a multi-threaded context. And boom. Since the singleton UserSessionManager wasn't a part of the interface of your class, the developer wasn't aware that it's being used and that the class is not ready for multi-threaded contexts[2]. But if you had used DI, the dependencies of the class would have been explicit.
That's the true gist of DI, really. DI is not about one heavyweight framework or another (in most cases you could do it quite easily without any framework). It's also not a pure OOP technique (it is commonly used in functional languages too, e.g. with the Reader monad). Dependency injection is really just about making your dependencies explicit and configurable rather than implicit and fixed.
As a tangent, mocking static methods has been possible for a rather long time. PowerMock (which allows mocking statics with EasyMock and Mockito) has been available since at least 2008, and JMockit even earlier, at least since 2006[3]. So mocking static methods in Java has been possible for a very long time, probably before even 5% of Java programmers had started using mock objects.
But it's not always ideal. Unfortunately, tools like PowerMock or JMockit static/final mocking work by messing with the JVM internals. These libraries often broke down when a new version of Java was released, and you had to wait until the compatibility issue was fixed. They also relied on tricks like custom classloaders, Java instrumentation agents and bytecode manipulation. These low-level tricks don't play well with many other things, for instance if you are using a framework which needs its own custom class loader, or another tool which needs bytecode manipulation. I was personally bitten by this when I wanted to implement mutation testing[4] in Java and couldn't get it to work with static mocking. Since I believe mutation testing carries more value than the convenience of being able to mock statics for testing, it was an easy choice to dump PowerMock.
[1] https://en.wikipedia.org/wiki/Action_at_a_distance_(computer...
[2] https://testing.googleblog.com/2008/08/by-miko-hevery-so-you...
[3] http://butunclebob.com/ArticleS.MichaelFeathers.ItsTimeToDep...
[4] https://en.wikipedia.org/wiki/Mutation_testing
BTW, if not everyone knows who MIT's Liskov is - Turing award winner - https://en.wikipedia.org/wiki/Barbara_Liskov
Sh*t, just realized that Liskov SP collides with LSP from Language Server Protocol :(
Liskov Substitution is good sometimes actually
Do yourself a favor and wean yourself off all this SOLID, Uncle Bob, Object Oriented, Clean Code crap.
Don't ever use inheritance. Instead of things inheriting from other things, flip the relationship and make things HAVE other things. This is called composition and it has all the positives of inheritance but none of the negatives.
Example: imagine you have a school system where there are student users and there are employee users, and some features like grading that should only be available for employees.
Instead of making Student and Employee inherit from User, just have a User class/record/object/whatever you want to call it that constitutes the account
and for those Users who are students, create a Student that points to the User, and vice versa for the Employees. Then your types can simply and strongly prevent Students from being sent into functions that are supposed to operate on Employees, etc. (but for those cases where you really want functions that operate on Users, just send in each of their Users! nothing's preventing you from that flexibility if that's what you want)
and for those users who really are both (after all, students can graduate and become employees, and employees can enroll to study), THE SAME USER can be BOTH a Student and an Employee! (this is one of the biggest footguns with inheritance: in the inheritance world, a Student can never be an Employee, even though that's just an accident of using inheritance, and in the real world there's actually nothing that calls for that kind of artificial, hard segregation)
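A minimal Kotlin sketch of the above (the field names are invented):

data class User(val id: Long, val name: String)

class Student(val user: User, val enrolledCourses: List<String>)
class Employee(val user: User, val salaryCents: Long)

fun grade(grader: Employee) { /* grading is only available to employees */ }

fun main() {
    val sam = User(1, "Sam")
    val student = Student(sam, listOf("CS101"))
    val employee = Employee(sam, 5_000_00)    // the SAME user is both at once
    grade(employee)
    // grade(student)    // does not compile: the types prevent it
}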
Once you see it you can't unsee it. The emperor has no clothes. Type-wise functional programmers have had solutions for all these made up problems for decades. That's why these days even sane Object Oriented language designers like Josh Bloch, Brian Goetz, the Kotlin devs etc are taking their languages in that direction.
Never using inheritance is pushing the dogma to the other extreme, imo. The whole "prefer composition over inheritance" idea is meant to help people avoid the typical OO overuse of inheritance. It doesn't mean the Is-A relation doesn't/shouldn't exist. It just means that when there is data, prefer using composition to give access to that data.
There will be times when you want to represent Is-A relationship - especially when you want to guarantee a specific interface/set of functions your object will have - irrespective of the under the hood details.
Here what you care about is the notifier providing a set of functions (sendNotification, updateNotification, removeNotification), irrespective of what the implementation details are - whether you're using desktop notifications or SMS notifications.
I see no need for inheritance there; that can and should be done using interfaces
eg in the contrived example I gave, any or all three of User, Student and Employee can implement the interface (and if needed, Student could simply delegate to its internal User, while Employee could "override" by providing its own, different implementation)
What difference do you see between implementing an interface and inheriting from a class, that makes one good and the other bad?
I'm asking beyond the arbitrary distinctions that some languages like Java or C# bake in.
Inheriting from a class creates dispatch problems, and there's instance variables/fields/whatever to deal with. Never mind multiple inheritance, as that gets hairy real fast. With interfaces there is no hierarchy, and all there is is a type and a data, and you either pass them around together, you monomorphize to avoid the need to pass two pointers instead of one, or you have the data point to its type in a generic way. With class hierarchies you have to have operations to cast data up and down the hierarchy, meaning trade one dispatch table for another.
this! with interfaces, you get all the benefits and none of the negatives
Interfaces manifest the object's responsibility. Functions accepting objects as parameters should work with interfaces, not type instances. That way the responsibilities and capabilities are clear to the user of the interface and the implementor.
As far as how the implementor may fulfill its interface obligation, it may use inheritance, if it truly has an IS-A or subtype relationship with the base object.
If you have a sufficiently statically typed language then the is-a concern goes away -- certainly in the example you gave, since the compiler/linker knows to look for a `GetNotifier()` that returns a `Notifier`. Now, you might still want to know whether the notifier you got satisfies other traits than just those of `Notifier`, but you do that by using a type that has those traits rather than `Notifier`, and now you have little need for `instanceof` or similar operators. (You still might want to know what kind of thing you got because you might care about semantics that are not expressed in the interface. For example you might care to know whether a logger is "fast" as in local or "slow" as in remote, as that might cause you to log less verbosely to avoid drops or blocking or whatever. But the need for this `instanceof` goes down dramatically if you have interfaces and traits.)
Never is a good starting point as a guide but of course there are cases where is-a makes sense too.
Composition generally is not good enough to model all cases without resorting to some AOP or metaprogramming, but in that case the is-a or a base class would arguably be the simpler approach; at least it can be reasoned about and debugged, as opposed to some AOP/meta spaghetti that can probably only be understood from docs
Just because you see OO languages starting to favor composition over inheritance does not mean inheritance has no place, and indeed, interfaces as a form of composition have existed in many bog-standard OO languages for decades.
Your example doesn't compute, at least in most languages, because derived objects would not have the same shape as one another, only the shape of the base class. I.e. functions expecting a User object would of course accept either an Employee or a Student (both subclasses of User), but functions expecting a Student object or an Employee object would not accept the other object type just because they share a base class. Indeed, that's the whole point. And as another poster mentioned, you are introducing a burden by now having no way to determine whether a User is an Employee or a Student without having to pass additional information.
Listen, I'll be the first to admit that the OO paradigm went overboard with inheritance and object classification to the n-th degree by inventing ridiculous object hierarchies etc, but inheritance (even multiple inheritance) has a place - not just when reasoning about and organizing code, but for programmer ergonomics. And with the trend for composition to disallow data members (like traits in Rust), it can seriously limit the expressiveness of code.
Sometimes inheritance is better, and if used properly, there's nothing wrong with that. The alternative is that you wind up implementing the same 5-10 interfaces repeatedly for every different object you create.
It should never be all or nothing. Inheritance has its place. Composition has its place.
And if you squint just right they're two sides of the same coin. "Is A" vs "Can Do" or "Has".
> The alternative is that you wind up implementing the same 5-10 interfaces repeatedly for every different object you create.
if both Student and Employee need to implement those interfaces, it's probably User that should have and implement them, not Student and Employee (and if they truly do need to have and implement them, they can simply delegate to their internal User = "no override", or provide a unique implementation = "override") (let alone that, unless I'm misremembering, in Kotlin interfaces can have default implementations)
Inheritance has no place in production codebases—unless there is strict discipline, enforced by tooling, ensuring calls only go in one direction. This Liskov stuff has zero bearing.
You are conflating two completely separate things.
The Liskov principle basically defines when you want inheritance versus when you don't. If your classes don't respect the Liskov principle, then they must not use inheritance.
The problems from your story relate to the implementation of some classes that really needed inheritance. The spaghetti you allude to was not caused by inheritance itself; it was caused by people creating spaghetti. The fact that child classes were calling parent class methods and then the parent class would call child class methods and so on is symptomatic of code that does too much, and of people taking code reuse to the extreme.
I've seen the same things happen with procedural code - a bunch of functions calling each other with all sorts of control parameters, and people adding just one more bool and one more if, because an existing function already does most of what I want, it just needs to do it slightly different at the seventh step. And after a few rounds of this, you end up with a function that can technically do 32 different things to the same data, depending on the combination of flags you pass in. And everyone deeply suspects that only 7 of those things are really used, but they're still afraid to break any of the other 25 possible cases with a change.
People piggybacking on existing code and changing it just a little bit is the root of most spaghetti. Whether that happens through flags or unprincipled inheritance or a mass of callbacks, it will always happen if enough people are trying to do this and enough time passes.
I don't believe in free will. People (including me) are going to do the easiest thing possible given the constraints. I don't think that's bad in any way -- it's best to embrace it. In this view, the tools should optimize for the right thing to do being the easy thing.
So if you have two options, one better than the other, then the better way to do things should be easier than the worse way. With inheritance as traditionally implemented, the worse way is easier than the better way.
And this can be fixed! You just need to make sure all the calls go one way (making the worse outcome harder), and then very carefully consider and work through the consequences of that. Maybe this fix ends up being unworkable due to the downstream consequences, but has anyone tried?
> I've seen the same things happen with procedural code - a bunch of functions calling each other with all sorts of control parameters, and people adding just one more bool and one more if, because an existing function already does most of what I want, it just needs to do it slightly different at the seventh step. And after a few rounds of this, you end up with a function that can technically do 32 different things to the same data, depending on the combination of flags you pass in. And everyone deeply suspects that only 7 of those things are really used, but they're still afraid to break any of the other 25 possible cases with a change.
Completely agree, that's a really bad way to do polymorphism too. A great way to do this kind of closed-universe polymorphism is through full-fledged sum types.
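For example, a Java 21 sketch (hypothetical shapes): sealed interfaces plus records give you a sum type, and the switch is checked for exhaustiveness:

```java
sealed interface Shape permits Circle, Rect {}
record Circle(double r) implements Shape {}
record Rect(double w, double h) implements Shape {}

class Geometry {
    static double area(Shape s) {
        // Add a new variant and every switch that misses it
        // fails to compile -- the universe stays closed and visible.
        return switch (s) {
            case Circle c -> Math.PI * c.r() * c.r();
            case Rect r -> r.w() * r.h();
        };
    }
}
```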
> People piggybacking on existing code and changing it just a little bit is the root of most spaghetti. Whether that happens through flags or unprincipled inheritance or a mass of callbacks, it will always happen if enough people are trying to do this and enough time passes.
You're absolutely right. Where I go further is that I think it's possible to avoid this, via the carrot of making the better thing be easy to do, and rigorous discipline against doing worse things enforced by automation. I think Rust, for all its many imperfections, does a really good job at this.
edit: or, if not avoid it, at least stem the decline. This aligns perfectly with not believing in free will -- if it's all genetic and environmental luck all the way down, then the easiest point of leverage is to change the environment such that people are lucky more often than before.
Respectfully, I think you're throwing the baby out with the bathwater.
I read your story, and I can certainly empathize.
But just because someone has made a tangled web using inheritance doesn't mean inheritance itself is to blame. Show me someone doing something stupid with inheritance and I can demonstrate the same stupidity with composition.
I mean, base classes should not be operating on derived class objects outside of the base class interface, like ever. That's just poorly architected code no matter which way you slice it. But just like Dijkstra railing against goto, there is a time and a place for (measured, intelligent & appropriate) use.
Even the Linux kernel uses subclassing extensively. Sometimes Struct B really is a Struct A, just with some extra bits. You shouldn't have to duplicate code or nest structures to attain those ergonomics.
Anything can lead to a Rube Goldberg mess if not handled with some common sense.
I believe the problem is structural to all traditional class-based inheritance models.
The sort of situation I described is almost impossible with trait/typeclass-based polymorphism. You have to go very far out of the idiom with a weird mix of required and provided methods to achieve it, and I have never seen anyone do this in practice. The idiomatic way is for whatever consumes the trait to pass in a context type. There is a clear separation between the call-forward and the callback interfaces. In Rust, & and &mut mean that there's an upper limit to how tangled the interface can get.
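Roughly this shape, sketched in Java for familiarity (hypothetical names): the engine only ever calls down, and the only way back up is the explicit context it hands over:

```java
interface Context {
    void emit(String event); // the entire callback surface
}

interface Handler {
    void handle(Context ctx, String input);
}

class Engine implements Context {
    public void emit(String event) { System.out.println(event); }

    void run(Handler h, String input) {
        h.handle(this, input); // call-forward; the handler sees only Context
    }
}
```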
I'm fine with inheritance if there's rigorous enforcement of one-directional calls in place (which makes it roughly equivalent to trait-based composition). The Liskov stuff is a terrible distraction from this far more serious issue. Does anyone do this?
> You shouldn't have to duplicate code or nest structures to attain those ergonomics.
What's wrong with nesting structures? I like nesting structures.
> Don't ever use inheritance. Instead of things inheriting from other things, flip the relationship and make things HAVE other things. This is called composition and it has all the positives of inheritance but none of the negatives.
Bah. There are completely legitimate uses of inheritance where it's a really great fit. I think you'll find that being dogmatic about avoiding a programming pattern will eventually get you twisted up in other ways.
Inheritance can be used in a couple of ways that achieve a very specific kind of code reuse. I went through the early-2000s Java hype cycle with interfaces and factories and builders and double-dispatch and visitors everywhere, and I came out of it hating most of that crap, swearing never to use the visitor pattern again.
But hey, within the past two years I found an unbeatable use case where the visitor pattern absolutely rocks (it's there: https://github.com/titzer/wizard-engine/blob/master/src/util...). If you can come up with another way to deal with 550 different kinds of animals (the Wasm instructions), inherit the logic for 545, and just override the logic for 5 of them, then be my guest. (And yes, you can use ADTs and pattern-matching, which I do, liberally -- but the specifics of how immediates and their types are encoded and decoded simply cannot be replicated with as little code as the visitor pattern.)
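The linked code is Virgil, but the shape translates to a hedged Java analog (hypothetical opcode set): default methods carry the shared logic, and a concrete visitor overrides only the exceptions:

```java
interface InstrVisitor {
    default void visitInstr(String opcode) { /* shared fallback logic */ }
    default void visitAdd()  { visitInstr("add"); }
    default void visitSub()  { visitInstr("sub"); }
    default void visitCall() { visitInstr("call"); }
    // ...hundreds more, all funneling into visitInstr by default
}

class TracingVisitor implements InstrVisitor {
    // Override just the special cases; everything else keeps the default.
    @Override public void visitCall() { System.out.println("call site!"); }
}
```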
So don't completely swear off inheritance. It's like saying you'll never use a butter knife because you only do pocket knives. After all, butter knives are dull and not good for anything but butter.
If you can use functions in the same way objects are used, there’s no need for visitor objects.
There’s a reason why everything is a Lisp. All of the patterns are obvious with its primitives, while higher-level primitives like classes and interfaces hide the fact that there’s data and there’s behavior/effects.
Visitor objects are needed when you want, at runtime, to decide what code to execute based on the types of two parameters of a function (regular OOP virtual dispatch can only do this based on the type of one argument, the one before the dot). While you can model this in different ways, there is nothing in "plain" Lisp (say, R7RS Scheme) that makes this particularly simple.
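In a single-dispatch OO language you simulate it with the accept/visit pair -- roughly this Java sketch (hypothetical element types), where two single dispatches stand in for one double dispatch:

```java
interface Element {
    void accept(Visitor v);
}

interface Visitor {
    void visit(Circle c);
    void visit(Square s);
}

class Circle implements Element {
    // First dispatch picks this method by the element's runtime type;
    // the overloaded v.visit(this) then dispatches on the visitor's type.
    public void accept(Visitor v) { v.visit(this); }
}

class Square implements Element {
    public void accept(Visitor v) { v.visit(this); }
}
```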
Common Lisp does have a nicer solution to this, in the form of CLOS generic functions. In CLOS, methods are defined based on the classes of all arguments, not just the first one like in traditional object systems. Combined with inheritance, you can implement the whole thing with the minimal amount of code. But it's still an OOP system designed specifically for this.
The Visitor Pattern is one of the ones that actually does not go away when you have CLOS. That is to say, the traversal and visitation part of it doesn't go away -- just all the boilerplate around simulating the double dispatch, like needing the two methods `accept` and `visit` and whatnot.
Like say we want to visit the elements of a list, which are objects, and involve them with a visiting object: we write all the method specializations of `generic-fun` for all combinations of visitor and element type we need, and that's it. Importantly, the traversal function doesn't have to know anything about the visitor stuff -- it can just be `mapcar`, which dates back to before object orientation.
The traversal is not really part of the visitor pattern. The `element.accept(visitor)` method together with the `visitor.visitElementType(element)` methods are the identifying part of the visitor pattern, and they completely disappear with CLOS.
A classic example is different parsers for the same set of expression types. The expressions likely form a tree, you may not need a list of expressions at all, so no mapcar.
The motivating scenario for the Visitor pattern is processing an AST that has polymorphic nodes, to achieve different kinds of processing based on the visiting object, where special cases in that processing are based on the AST node kind.
Even if we have multiple dispatch, the methods we have to write for all the combinations do not disappear.
Additionally, there may actually be a method analogous to accept which performs the recursion.
Suppose that the AST node is so abstract that only it knows where/what its children are. Then each node class specializes some `recurse` generic function that walks its own children, applying a function to each. (We might want `recurse-bottom-up` and `recurse-top-down`.) If all the AST classes derive from a base that uniformly maintains a list of n children, then this would just be in the base: `(for-each-child ch (recurse ch fun))` or whatever.
Suppose we don't want to use a function, but an object (and not to use that object as funcallable). Then we need a `do-action` generic function dispatched on both the agent and the node (let's integrate the base class idea also). Now we have a myriad method specializations of `do-action`. By doing it this way, we also get rid of a lambda shim: instead of wrapping the agent in a lambda that we hand to `recurse`, we can just pass the agent itself. It's only not the Visitor Pattern because I used `recurse` instead of `accept`, `do-action` instead of `visit`, and `agent` instead of `visitor`.
I happily admit it's more than possible to come up with examples that make inheritance shine. After all, that's what the authors of these books and articles do.
But most of them put the cart before the horse (deliberately design a "problem" that inheritance "solves") and don't seriously evaluate pros and cons or even consider alternatives.
Even then, some of the examples might be legitimate, and what you're referring to might be a case of one. (though I doubt there's no equally elegant and succinct way to do it without inheritance)
But none of that changes the fact that inheritance absolutely shouldn't be the default go-to solution for modeling any domain -- which is what it has become (and how we are taught to understand it),
or that it's exceedingly uncommon to come across situations like yours where you have 500+ shared cases of behavior and you only want to "override" 5
or that inheritance is overwhelmingly used NOT for such niche edge cases but as a default tool to model even the most trivial relationships, with zero justification or consideration
I agree that examples matter a lot, and for some reason a lot of introductory OO stuff has really bad examples. Like the whole Person/Employee/Employer/Manager dark pattern. In no sane world would a person's current role be tied to their identity--how do you model a person being promoted? They suddenly move from being an employee to a manager...or maybe they start their own business and lose their job? And who's modeling these people and what for? That's never shown. Are we the bank? The IRS? An insurance company? Because all of these have a lot of other data modeling to do, and how you represent the identities of people will be wrapped up in that. E.g.--maybe a person is both an employee and a client at the same time? It's all bonkers to try to use inheritance and subtyping and interfaces for that.
Algebraic data types excel at data modeling. It's like their killer app. And then OO people trot out these atrocious data modeling examples which functional languages can do way better. It's a lot of confusion all around.
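For a taste, a sketch using Java records and sealed interfaces standing in for ADTs (an ML-family language would be terser; the domain is made up): identity stays put, and roles are plain data:

```java
import java.util.Set;

sealed interface Role {
    record Employee(String employer) implements Role {}
    record Manager(String employer, int reports) implements Role {}
    record Client(String accountId) implements Role {}
}

// A person can be an employee and a client at once, and a promotion
// is a data update -- not a change of class.
record Person(String name, Set<Role> roles) {}
```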
You gotta program in a lot of different paradigms to see this.
I love your concrete examples!
Thanks for sharing the pointer to your wasm engine. Is that part of a course you teach, or something born out of an auto-didactic pursuit?
User "titzer" is Ben Titzer; co-founder of WebAssembly - https://s3d.cmu.edu/people/core-faculty/titzer-ben.html
TIL, thank you!
Yeah, there are some real good experts on various subjects here on HN. One thing i would recommend is to contact anybody directly if needed (through their email ids in their profile or otherwise) with any questions you might have. That way you can have a longer discussion and/or learn more on specific subjects. Most people are willing to help generously when approached in a knowledge-seeking manner. I always look at HN threads/discussion as merely giving me an idea of different concepts/subjects and ask for pointers to more knowledge either books/papers or experts. Hopefully i also do the same with my comments thus helping the overall s/n ratio of this site.
> Algebraic data types excel at data modeling.
Any good resources you can point to for this?
I will agree that the problem that class hierarchies attempt to solve is a problem one usually does not really have, but in your example you have not solved it at all.
It matches a relational database well, but once you have just a user reference, you cannot narrow it down to an employee without out-of-band information. If a user should be just one type that can be multiple things at once, you can give them a set of roles.
I agree that there are better ways to model roles and FGA.
The point of the example wasn't to be an idiomatic solution to that problem, but to illustrate the pointlessness of inheritance in general; User/Student/Employee was the first thing that came to mind that was more "real" than the usual Animal examples.
In any case, as for only having a User reference: you don't ever have only a User reference -- overwhelmingly you would load a Student or an Employee,
and each of them would have those attributes on them that characterize each of them respectively (and those attributes would not be shared - if they were, such shared attributes would go on the User object),
and only those functions that truly operate on just a User would receive their User objects.
The problem with what you're advocating is that you end up with functions like this (sketched below with hypothetical `getProgram`/`getDepartment` accessors):
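```java
class User {}

class Student extends User {
    String getProgram() { return "CS"; } // hypothetical accessor
}

class Employee extends User {
    String getDepartment() { return "HR"; } // hypothetical accessor
}

class Example {
    // The shape under criticism: accept the base type, then branch and
    // downcast to recover attributes the signature never promised.
    static String describe(User u) {
        if (u instanceof Student s) {
            return "student in " + s.getProgram();
        } else if (u instanceof Employee e) {
            return "employee of " + e.getDepartment();
        }
        return "just a user";
    }
}
```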
which is insane.
One example of when you might want something like a class hierarchy is a graph/tree structure, where you have lots of objects of different types with different attributes carrying references to each other that are not narrowly typed. You then have a set of operations that mostly do not care about the narrow types, and other operations that do care. You have things that behave mostly the same, with small variations, in many cases.
> and each of them would have those attributes on them that characterize each of them respectively (and those attributes would not be shared - if they were, such shared attributes would go on the User object)
Suppose you want to compute a price for display, but its computation is different for Students and Employees. The display doesn't care if it's a Student or an Employee, it doesn't want to need to know if it's a Student or an Employee, it just wants the correct value computed. It can't just be a simple shared attribute. At some point you will need to make that distinction, whether it's a method, a conditional, or some other transform. It's not obvious how your solution could solve this more elegantly.
> which is insane
Not at all; in my view this kind of pattern matching is generally preferable over dynamic dispatch with methods, because you see what can happen in all cases at a glance. But again, you do not even have a contrived example where you would need dynamic dispatch in the first place, so naturally its use is not obvious.
Your price could simply be an interface that both Student and Employee implement; at the call site, you simply call foo.calculatePrice -- Student will calculate it one way and Employee another. There is zero need for inheritance, or for the call site to know which of the two types the object is -- all it needs to know is that it has the calculatePrice interface.
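In Java, roughly (numbers made up, obviously):

```java
interface Priced {
    double calculatePrice();
}

class Student implements Priced {
    public double calculatePrice() { return 5.0; }  // student pricing
}

class Employee implements Priced {
    public double calculatePrice() { return 10.0; } // employee pricing
}

class Display {
    // The call site depends only on the capability, not the subtype.
    static void show(Priced p) {
        System.out.println("Price: " + p.calculatePrice());
    }
}
```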
I also prefer pattern matching over dynamic dispatch. But I maintain that conjuring up attributes out of thin air (which is 100% what you're doing when your function accepts one type, and then inside, conjures up another, that has additional attributes from the input!) is insane. There are so many good reasons not to want to do this I don't even know where to begin. For one, it's a huge footgun for when you add another type that inherits - the compiler will NOT force you to add new branches to cover that case, even though you may want it to. Also, such "reusable" if this then do this else do that methods when allowed to propagate through the stack means every function is no longer a function, it's actually n functions, so ease of grokking and maintainability goes out the window. It's much better to relegate such ifs to the edges of the system (eg the http resource endpoint layer) and from there call methods that operate on what you want them to operate on, and do not require ifs.
> I also prefer pattern matching over dynamic dispatch. But I maintain that conjuring up attributes out of thin air (which is 100% what you're doing when your function accepts one type, and then inside, conjures up another, that has additional attributes from the input!) is insane.
Does it though? In one case, you have an explicit, closed set of branches -- sketched below with hypothetical types:
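```java
// Hypothetical types for the comparison. Case one: an exhaustive
// match over a closed set -- everything that can happen is visible
// at a glance at the call site.
sealed interface User permits Student, Employee {}
record Student() implements User {}
record Employee() implements User {}

class Pricing {
    static double price(User u) {
        return switch (u) {
            case Student s  -> 5.0;  // student-specific computation
            case Employee e -> 10.0; // employee-specific computation
        };
    }
}
// Case two is the virtual call u.calculate(): each subtype supplies
// its own method, and any future subtype may quietly add another.
```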
In the other, you have a virtual call -- but also, implicitly, the possibility of any number of overrides. The latter case is more flexible, but also harder to reason about: whoever calls `S::calculate` can never be quite sure what's in the bag.
> For one, it's a huge footgun for when you add another type that inherits - the compiler will NOT force you to add new branches to cover that case, even though you may want it to
Lack of exhaustiveness checks is a problem many languages have, and if that's the case for you, that's an argument for preferring methods.
> it's actually n functions, so ease of grokking and maintainability goes out the window
...but that's exactly what interface methods are: n functions. That's what your problem is; you can't get around it. Moreover, if most of your types do not have a distinct implementation for that method, you will still need to define them all. This is where both the "super-function" and inheritance (or traits) can save you quite a bit of code.
It's sad that late-binding languages like Objective-C never got the love they should have, and that people have instead favored more strongly typed languages. In Objective-C, you could have your User class take a delegate. The delegate could handle messages for either students or employees. And you could code the base User object to ignore anything that couldn't be handled by its particular delegate.
This is a very flexible way of doing things. Sure, you'll have people complain that this is "slow." But it's only slow by computer standards. By human standards -- meaning the person sitting at a desktop or phone UI -- it's fast enough that they'll never notice.
I will complain that it is hard to reason about. You better have a really good reason to do something like this, like third party extensibility as an absolute requirement. It should not be your go to approach for everything.
Inheritance is nothing more or less than composition + implementing the same interface + saving a bit of boilerplate.
When class A inherits from class B, that is equivalent to class A containing an object of class B, implementing the same interface as class B, and automatically generating method stubs for every method of the interface that call the same method on the contained B.
That is, these two are perfectly equivalent (sketched below with a hypothetical `IB` standing in for "the interface of B"):
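```java
// Hypothetical IB stands in for "the interface of class B".
interface IB {
    void frob();
}

class B implements IB {
    public void frob() { /* ... */ }
}

// Form 1: inheritance.
class A1 extends B {}

// Form 2: composition + the same interface + generated delegation stubs.
class A2 implements IB {
    private final B b = new B();
    public void frob() { b.frob(); } // the auto-generated stub
}
```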
This is pseudo-code insofar as Java distinguishes between Interfaces and Classes and doesn't have a way to refer to the Interface of a Class -- an `IB` interface has to be written by hand. In C++, which doesn't make this distinction, it's even more equivalent, especially since C++ allows multiple inheritance, so it can even model a class composing multiple objects.
The problem with inheritance is that people get taught to use it for modeling purposes, instead of using it for what it actually does: polymorphism with virtual dispatch, which isn't all that common in practice. That is, inheritance should only be used if you need to have, somewhere in your code base, a collection of different objects that get called in the same way but have to execute different code, and you can only find out at runtime which is which.
For your students and employees and users example, the only reason to make Student and Employee inherit from User would be if there is a reason to have a collection of Users that you need to interact with, and you expect that different things will happen if a User is an Employee than if that User is a Student. This also implies that a Student can't be an Employee and vice versa (but you could have a third type, StudentEmployee, which may need to do something different from either a Student or an Employee). This is pretty unlikely for this scenario, so inheritance is unlikely to be a good idea.
Note also that deep class hierarchies are also extremely unlikely to be a good idea by this standard: if classes A and B derive from C, that doesn't mean class D can derive from A - that is only useful if you have a collection of As that can be either A or D, and you actually need them to do different things. If you simply need a third type of behavior for Cs, D should be a subclass of C instead.
I'd also note that my definition applies to inheritance in the general case -- whether the parent is an interface-only class or a concrete class with fields and methods and everything. I don't think the distinction between "interface inheritance" and "implementation inheritance" is meaningful.
This is not quite correct, at least not in most commonly-used languages. The difference comes when a "superclass" method calls self.whatever(), and the whatever() is implemented in both the "superclass" and the "subclass". In the implementation-inheritance-based version, self.whatever() will call the subclass method. In the composition-based version, self.whatever() will call the superclass method. The implementation-inheritance-based version is sometimes called "open recursion". This is why you need implementation inheritance for the GoF Template pattern. See for instance this paper:
https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...
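Concretely, a minimal Java sketch of the divergence:

```java
class Base {
    void template() { whatever(); }  // implicitly this.whatever()
    void whatever() { System.out.println("base"); }
}

class Sub extends Base {
    @Override void whatever() { System.out.println("sub"); }
}

class Wrapper {
    private final Base inner = new Base();
    void template() { inner.template(); } // inner calls its own whatever()
    void whatever() { System.out.println("wrapper"); }
}

class Demo {
    public static void main(String[] args) {
        new Sub().template();     // prints "sub": open recursion downcall
        new Wrapper().template(); // prints "base": no open recursion
    }
}
```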
For one, see the reply from the handle cryptonector further up.
> Inheriting from a class creates dispatch problems, and there's instance variables/fields/whatever to deal with. Never mind multiple inheritance, as that gets hairy real fast. With interfaces there is no hierarchy, and all there is is a type and a data, and you either pass them around together, you monomorphize to avoid the need to pass two pointers instead of one, or you have the data point to its type in a generic way. With class hierarchies you have to have operations to cast data up and down the hierarchy, meaning trade one dispatch table for another.
I agree with all of this. With interfaces, you get all the benefits, but none of the downsides.
Amen.
Reading the comments from top to bottom, the above is exactly what I wanted to write.