Examples of inheritance hierarchies are always totally useless shit like this: what if a cow is an animal? What if a cow is a mammal, all mammals are animals, and all mammals have a lactate() method?
The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said “Master, I have heard that objects are a very good thing - is this true?” Qc Na looked pityingly at his student and replied, “Foolish pupil - objects are merely a poor man’s closures.”
Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire “Lambda: The Ultimate…” series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.
On his next walk with Qc Na, Anton attempted to impress his master by saying “Master, I have diligently studied the matter, and now understand that objects are truly a poor man’s closures.” Qc Na responded by hitting Anton with his stick, saying “When will you learn? Closures are a poor man’s object.” At that moment, Anton became enlightened.
Should’ve listed the source. Bad commenter.
Source: https://wiki.c2.com/?ClosuresAndObjectsAreEquivalent
(Sorry dude, my time was limited by how long I could spare poopin in-between meetings.)
I LOVE TRAITS. YOU’LL HAVE TO TAKE THEM FROM MY COLD DEAD HANDS
[Insert SpongeBob screaming meme]
TRAITS ARE SO USEFUL AND STRUCTS ARE EASILY REPRESENTED IN MEMORY AND WORKED WITH, COMBINED THEY WILL TAKE OVER THE LINUX KERNEL AND THE WORLD
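For the record, that combination fits in a few lines of Rust. This is just a minimal sketch with made-up names (Point, Describe): a plain struct with a predictable layout, plus a trait carrying the shared behavior instead of a base class.

    // A plain-old-data struct: no hidden base class, no vtable inside the value itself.
    #[repr(C)]
    struct Point {
        x: f64,
        y: f64,
    }

    // Shared behavior lives in a trait instead of an inheritance hierarchy.
    trait Describe {
        fn describe(&self) -> String;
    }

    impl Describe for Point {
        fn describe(&self) -> String {
            format!("Point({}, {})", self.x, self.y)
        }
    }

    fn main() {
        let p = Point { x: 1.0, y: 2.0 };
        println!("{}", p.describe());
    }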
I’ll say this now.
Inheritance is the most misused capability of OOP which programmers think makes their code look smart, but most of the time just makes a giant fucking mess.
Aggregation > composition > inheritance
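Roughly what that ordering means, sketched in Rust with invented types (Car, Engine, Garage); Rust has no implementation inheritance, so only the first two show up: composition owns its part outright, while aggregation just refers to things that live on their own.

    use std::rc::Rc;

    struct Engine {
        horsepower: u32,
    }

    // Composition: the Car owns its Engine; they live and die together.
    struct Car {
        engine: Engine,
    }

    // Aggregation: the Garage refers to cars that exist independently of it.
    struct Garage {
        cars: Vec<Rc<Car>>,
    }

    fn main() {
        let car = Rc::new(Car { engine: Engine { horsepower: 120 } });
        let garage = Garage { cars: vec![Rc::clone(&car)] };
        println!("{} hp, {} car(s) parked", car.engine.horsepower, garage.cars.len());
    }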
It’s the best/worst thing about OOP, no matter what language.
We had a rule at work that if you are 3 levels or more down an inheritance tree, then you are too far. The cognitive load is just too much, plus everything stops making sense.
One level can be great (MVC all have great conventions, MCP as well). Two can be pushing it (the Strategy pattern when you have physical devices that can’t be connected all the time, certain kinds of business logic that repeat hundreds of times, etc…). But even there you are kinda pushing it.
I need code that I can look at a month from now and know WTF is happening. And sometimes it’s better to have less DRY and more comprehension. Or maybe I’m just a forever-mediocre dev and don’t see the “light”. I dunno.
This is exactly how I feel too. A little bit of repetition is totally worth it, versus having inappropriate coupling, or code that jumps in and out of parent/child classes everywhere so you can hardly keep in your head what’s going on.
I freely accept that I AM a mediocre dev, but if that leads me to prefer code that is comprehensible and maintainable, then I think being mediocre is doing my team a favour, honestly.
@tiramichu
It’s this mentality that shows you aren’t mediocre. Simplicity requires more skill, not less.
@mesamunefire That’s kind of you to say 😀
But if I have to make an Array I have to inherit from Indexable which inherits from Collection which inherits from Object! How else am I supposed to implement an Array?
I remember some crazy stuff back when I had to work with a Java + ember.js project. Everything was like that.
PTSD flashbacks to the codebase I started on in 2008 which had… I don’t even remember. Like six or seven levels. Fucking nightmare. I did a printout of the analysis Oxygen gave me and it ended up as a 4x3 meters poster ;_;
WTF
I totally agree on this. I found that things that appeared to need inheritance at first glance often didn’t once I gave them deeper thought.
Granted I was working on much smaller projects rather than crazy huge multi team enterprise apps, but I’d guess that even then this is a good “rule of thumb”.
Cool, good to know someone else has the same experience.
I’ve been on a couple of multi-year projects and they are NOT fun with OOP plus a developer who went crazy with the patterns they were experimenting with at the time. It’s what made the “rule” pop up to begin with.
Hold on, I’m in the middle of drawing an inheritance graph so I know how Dog is related to AircraftCarrier.
using System;
using System.Collections.Generic;

public interface ICanTravelThroughTheAir { }

public class Flea : ICanTravelThroughTheAir { }

public class AircraftCarrier
{
    private readonly List<ICanTravelThroughTheAir> _aircraft = new();

    public void AddAircraft(ICanTravelThroughTheAir flyingThing)
    {
        _aircraft.Add(flyingThing);
    }
}

// Obviously a Dog is-an AircraftCarrier.
public class Dog : AircraftCarrier
{
    public void Woof()
    {
        Console.WriteLine("Bitch I'm an aircraft carrier!");
    }
}

public static class Program
{
    public static void Main(string[] args)
    {
        var dog = new Dog();

        // Load the carrier with its air wing.
        for (var i = 0; i < 10000; i++)
        {
            dog.AddAircraft(new Flea());
        }

        dog.Woof();
    }
}

Needs more AbstractDefaultProxyBeanFactoryFactories

And dependency injection!
Every class needs to use DI for calling other classes, even though we’ll only ever have a single implementation for each of them anyway, and we won’t be using this flexibility at all (not even for tests, where it might be useful for mocking dependencies).
Methods calling other methods? Heresy! There needs to be two or three interceptors on there, and some indirection over RabbitMQ using spring-integration, at the very least.
This is how you write proper enterprise-level software: https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition
Not even unnecessarily templated, B- at best
I learned about it in school and we did a few assignments for it
But… never seen or heard anyone mention it outside of that
I guess we can make up some niche cases
I did hear it can be useful for video games though. But then again I’m sure people can manage fine without it, as well
It’s wildly useful when you store a lot of similar stuff, or handle a lot of similar stuff, etc.
Like a GUI, a rendering engine, or scientific software. Video games, any software with users, and so on.
I get that people misuse it, but to me it’s wild that people think you should program without it entirely, at all costs.
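The “store a lot of similar stuff” case looks roughly like this; a hedged Rust sketch with invented widget types, using trait objects as the stand-in for the polymorphism being described:

    trait Widget {
        fn draw(&self);
    }

    struct Button { label: String }
    struct Slider { value: f32 }

    impl Widget for Button {
        fn draw(&self) { println!("[ {} ]", self.label); }
    }

    impl Widget for Slider {
        fn draw(&self) { println!("--o-- ({})", self.value); }
    }

    fn main() {
        // One container holds many different kinds of widget; drawing doesn't care which.
        let widgets: Vec<Box<dyn Widget>> = vec![
            Box::new(Button { label: "OK".to_string() }),
            Box::new(Slider { value: 0.5 }),
        ];
        for w in &widgets {
            w.draw();
        }
    }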
We use it a lot when we have a solution that works for 95% of customers and a few need random things.
Otherwise we would have multiple markets changing the same function with a thousand if/elses.
The main issue is that it means some functions never get generalized, like when customer A wanted an equals filter, customer B wanted an IN filter, and the rest have no filtering capability at all.
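A rough sketch of that setup, in Rust rather than whatever their codebase actually uses, with invented customer names and the filter example from above: a default trait method covers the ~95% case, and only the unusual customers override it, instead of one shared function full of if/else.

    struct Record { amount: i64 }

    trait CustomerFilter {
        // Default behavior for ~95% of customers: no filtering at all.
        fn filter(&self, records: Vec<Record>) -> Vec<Record> {
            records
        }
    }

    struct DefaultCustomer;
    impl CustomerFilter for DefaultCustomer {}

    // Customer A wants an "equals" filter.
    struct CustomerA;
    impl CustomerFilter for CustomerA {
        fn filter(&self, records: Vec<Record>) -> Vec<Record> {
            records.into_iter().filter(|r| r.amount == 100).collect()
        }
    }

    // Customer B wants an "IN" filter.
    struct CustomerB;
    impl CustomerFilter for CustomerB {
        fn filter(&self, records: Vec<Record>) -> Vec<Record> {
            let allowed = [10, 20, 30];
            records.into_iter().filter(|r| allowed.contains(&r.amount)).collect()
        }
    }

    fn main() {
        let records = vec![Record { amount: 10 }, Record { amount: 100 }];
        println!("{}", CustomerA.filter(records).len());
    }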
And polymorphism is the only way you could expose those composite interfaces as bindings in C-API-based languages. And polymorphism is part of OOP.
If we take the textbook definition of OOP, then the kernel is OOP …
Composition over inheritance every day, all day
In over ten years of professional programming, I have never used inheritance without regretting it.
When it’s the right tool, it’s incredibly useful. When it’s the wrong tool, and it often is, it racks up tech debt at an incredible rate.
It works great for technical constructs, e.g. a Button is a UI element. But for anything business-logic related, yeah, it’ll suck.
It might be nice to use in some very specific cases (e.g. an addition operation is a binary-operation AST node, which is an AST node).
In most cases it just creates noise though, and you can usually do something different anyway to implement the same feature. For example, in Rust, just use enums and list all the possible cases, and it’s even nicer to use than inheritance.
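For the AST case mentioned above, the enum version looks roughly like this (a minimal sketch; node names are made up):

    // All the possible node kinds are listed in one place instead of a class hierarchy.
    enum AstNode {
        Number(f64),
        Add(Box<AstNode>, Box<AstNode>),
        Mul(Box<AstNode>, Box<AstNode>),
    }

    fn eval(node: &AstNode) -> f64 {
        match node {
            AstNode::Number(n) => *n,
            AstNode::Add(lhs, rhs) => eval(lhs) + eval(rhs),
            AstNode::Mul(lhs, rhs) => eval(lhs) * eval(rhs),
        }
    }

    fn main() {
        // (1 + 2) * 3
        let expr = AstNode::Mul(
            Box::new(AstNode::Add(
                Box::new(AstNode::Number(1.0)),
                Box::new(AstNode::Number(2.0)),
            )),
            Box::new(AstNode::Number(3.0)),
        );
        println!("{}", eval(&expr));
    }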
And not once have I regretted removing inheritance.
That’s wild. What did you use it for?
Deref is for smart pointers and not for inheritance.
The windows crate is full of Deref, because the Windows API is full of inheritance. It may not be what the trait was intended for, but I’m glad we have it to interface with APIs that have actual inheritance.
Let me introduce you to this horror story: Deref Polymorphism https://rust-unofficial.github.io/patterns/anti_patterns/deref.html
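For anyone who doesn’t want to click through, the anti-pattern in that link boils down to something like this (a condensed sketch, not the page’s exact code): a “derived” struct implements Deref to its “base” struct, so the base’s methods appear to be inherited.

    use std::ops::Deref;

    struct Animal {
        name: String,
    }

    impl Animal {
        fn speak(&self) {
            println!("{} makes a noise", self.name);
        }
    }

    struct Dog {
        animal: Animal,
    }

    // The anti-pattern: abuse Deref so Dog "inherits" Animal's methods.
    impl Deref for Dog {
        type Target = Animal;
        fn deref(&self) -> &Animal {
            &self.animal
        }
    }

    fn main() {
        let dog = Dog { animal: Animal { name: "Rex".to_string() } };
        dog.speak(); // resolves via Deref, which is why it feels like inheritance
    }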
Also sometimes newtypes
Separating data structure from implementation has benefits.
In languages with classic OOP classes and objects, it’s often necessary to write wrappers or adapters to allow new operations on existing objects. This adds overhead and requires more code.
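A rough Rust illustration of that point (the Summarize trait is invented for the example): a new operation can be implemented directly for an existing type, where a classic class-based language would typically reach for a wrapper or adapter class.

    // A new operation, defined after the fact.
    trait Summarize {
        fn summarize(&self) -> String;
    }

    // Implemented directly for an existing standard-library type: no wrapper object needed.
    impl Summarize for Vec<i32> {
        fn summarize(&self) -> String {
            format!("{} items, sum {}", self.len(), self.iter().sum::<i32>())
        }
    }

    fn main() {
        let numbers = vec![1, 2, 3];
        println!("{}", numbers.summarize());
    }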
I wish I could take Go’s interfaces and drop them into Zig, and that’s all the object-oriented concepts I need.









