views: 390
answers: 5

I was having a conversation with a co-worker a few days ago. When I mentioned that a new coding technology he's interested in doesn't seem to offer anything that couldn't already be done fairly easily with existing tools, he replied a bit dismissively, "That's what they said about object-oriented programming, too. It's just procedural programming extended with a few syntax tricks." The conversation went off in a different direction, but that statement made me think.

This is the same guy who's seemed a bit confused during code reviews about why I made certain operations standalone functions instead of methods of some class. When I explained that they didn't deal with any specific object or class and had no need for a "self" parameter (aka "this" in the C++ language family), he seemed to think that they ought to be part of a class anyway because that made for "better OOP".

I'll freely admit that he's a better programmer than me. He's older and has a lot more real-world experience, and I've learned a lot by working with him. But I can't help but think that he's got a fundamentally bad perspective on this, and it's something that I see becoming a lot more common these days: abstraction for the sake of abstraction.

Seems to me that what "they" said about OOP is right, with one exception. The only truly new thing that the object paradigm introduces is type inheritance. Everything else, including polymorphism (assuming you could have polymorphism without type inheritance), can be done in a purely procedural language. I've seen some very interesting examples of object-oriented C code, for example.
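
For anyone who hasn't run into it, here's a rough sketch of the kind of thing I mean. The names are invented for illustration, but the idea is just a struct of function pointers standing in for a vtable, which gets you polymorphic dispatch without any type inheritance:

    #include <stdio.h>

    /* A hand-rolled "interface": nothing but a struct of function pointers. */
    typedef struct Shape {
        double (*area)(const struct Shape *self);
        void   (*describe)(const struct Shape *self);
    } Shape;

    /* A "circle" that embeds the interface as its first member. */
    typedef struct {
        Shape  base;    /* filled in with circle-specific functions below */
        double radius;
    } Circle;

    static double circle_area(const Shape *self)
    {
        const Circle *c = (const Circle *)self;   /* safe: base is the first member */
        return 3.14159265358979 * c->radius * c->radius;
    }

    static void circle_describe(const Shape *self)
    {
        printf("circle, area %.2f\n", self->area(self));
    }

    static Circle make_circle(double radius)
    {
        Circle c = { { circle_area, circle_describe }, radius };
        return c;
    }

    int main(void)
    {
        Circle c = make_circle(2.0);
        Shape *s = &c.base;   /* polymorphic handle, no type inheritance in sight */
        s->describe(s);       /* dispatches through the function pointer */
        return 0;
    }

Add a Square with its own function table and the call site doesn't change. It's clumsier than a class, sure, but the mechanism is the same one an OO compiler generates for you behind the scenes.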

But now languages where everything must be an object are gaining popularity. Problem is, a lot of concepts just don't map well to the object paradigm, and so you end up with really ugly code. Just look at the heavy (ab)use of static methods in Java to see what happens when you decide by fiat that all holes must be round, regardless of the shape of your pegs. Back in the day, they used to call that abstraction inversion. Now it just seems to be accepted as normal.

The problem is that the easier an abstraction makes it to write code, the harder that code tends to be to debug and maintain, because you have to drill down through more layers of abstraction to find out what's really happening and where.

One example: At work, we've got a set of classes that builds SQL queries dynamically. Once you figure out the interface, it's easy to build a complex query by chaining together a bunch of little query-builder objects into a logical tree. But if the requirements change later on and someone else has to make it get a different result set from the database, it's a real pain to work out what the existing tree actually does. Debugging is even worse. Have you ever tried browsing an expression tree in a symbolic debugger? Or stepping through its execution, trying to find the point where the evaluation of some unknown node produces a specific known response? It's not pretty. Trying to view the output in a SQL profiler isn't much better: what comes out may work, but it tends to be ugly and requires 5-10 minutes of refactoring and reformatting before it begins to resemble a human-readable SQL query.
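
To give a feel for it (our real classes aren't C, and every name below is invented, so treat this as a loose sketch rather than the actual code): little condition nodes get chained into a tree, and the tree gets rendered into SQL.

    #include <stdio.h>

    /* Hypothetical stand-in for the query builder: condition nodes in a tree. */
    typedef struct Cond {
        const char        *leaf;       /* e.g. "age > 30"; NULL for interior nodes */
        const char        *op;         /* "AND" / "OR" for interior nodes */
        const struct Cond *lhs, *rhs;
    } Cond;

    static Cond where(const char *expr)
    {
        Cond c = { expr, NULL, NULL, NULL };
        return c;
    }

    static Cond join(const char *op, const Cond *lhs, const Cond *rhs)
    {
        Cond c = { NULL, op, lhs, rhs };
        return c;
    }

    /* Recursively render the tree; every level adds another pair of parens. */
    static void render(const Cond *c, char *out, size_t cap)
    {
        if (c->leaf) {
            snprintf(out, cap, "(%s)", c->leaf);
        } else {
            char l[512], r[512];
            render(c->lhs, l, sizeof l);
            render(c->rhs, r, sizeof r);
            snprintf(out, cap, "(%s %s %s)", l, c->op, r);
        }
    }

    int main(void)
    {
        Cond active = where("status = 'ACTIVE'");
        Cond adult  = where("age > 30");
        Cond eu     = where("region = 'EU'");
        Cond both   = join("AND", &active, &adult);
        Cond any    = join("OR", &both, &eu);

        char sql[2048];
        render(&any, sql, sizeof sql);
        printf("SELECT * FROM customers WHERE %s\n", sql);
        return 0;
    }

Building the query that way is genuinely pleasant. Reconstructing the intent from the tree six months later, or single-stepping through render() to figure out which node produced which fragment, is the part that hurts.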

I'm not opposed to using abstractions. I just think that because they tend to leak, and because they often don't reduce complexity so much as make the complexity someone else's problem, they're only useful up to a certain point, and a lot of coders these days don't seem to know where that point is. I'm not sure I do myself, really. I like to go by what I call the First Commandment of Abstractions: Thou shalt not make unto thee any abstraction that cannot be overridden if necessary, that thy code be free of ugly abstraction inversions. But I'm not sure if that's enough.
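
To make that commandment concrete (again, a hypothetical sketch, not anything from our actual codebase): the abstraction below has a sensible default, but it also leaves a documented way to reach underneath it when the default stops being good enough.

    #include <stdlib.h>

    /* A hypothetical buffer-allocation abstraction with an escape hatch. */
    typedef void *(*alloc_fn)(size_t size);

    static alloc_fn current_alloc = malloc;    /* default: plain malloc */

    /* The call everyone uses day to day... */
    static void *buffer_alloc(size_t size)
    {
        return current_alloc(size);
    }

    /* ...and the override point for the day the default isn't good enough. */
    static void buffer_set_allocator(alloc_fn fn)
    {
        current_alloc = fn ? fn : malloc;
    }

    int main(void)
    {
        void *p = buffer_alloc(64);       /* goes through the default */
        free(p);

        buffer_set_allocator(NULL);       /* NULL just restores the default;
                                             a custom pool allocator would go here */
        return 0;
    }

The override is boring and will almost never be used, but its existence is the whole point: when the abstraction can't express what you need, you can go around it instead of inverting it.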

Does anyone have any good principles for knowing when to add more abstraction and, more importantly, when not to? I realize this is a really subjective question. I'm making it CW, and I'd like to see some discussion on the subject. Hopefully I'll learn something that ends up making me (and maybe some other users) a better programmer.

A: 

I can't say that, in my limited experience, I've ever had a really good reason to use abstraction. The only abstraction (if you can call it that) I have needed would be an interface.

My rule of thumb is that I don't use abstraction unless I'm solving a specific problem. If I'm doing it "just in case" then it is adding unnecessary complexity.

Joe Philllips
So you program exclusively in assembler then? Pretty much everything you do is an abstraction. When you create a class, it is an abstraction. Functions are abstractions. Your programming language is an abstraction.
jalf
We obviously have different definitions of abstraction.
Joe Philllips
Mhm; pity only one of them is correct.
ehdv
In computer science, Abstraction is "a mechanism and practice to reduce and factor out details so that one can focus on few concepts at a time".
Daniel Daranas
By "use" I mean that I don't create abstraction.
Joe Philllips
How can you... uh... "use" someone else's "mechanism and practice to reduce and factor out details"? I can't agree that it makes any sense.
Joe Philllips
Then your variable, module, and function names are meaningless random character strings?
le dorfier
Maybe that would fit into the "solving a specific problem" category? I don't abstract just for the sake of abstracting.
Joe Philllips
A: 

Is this long blog posting just a claim that the SOLID principles are important?

That "too much abstraction" breaks or bends the "Open Closed Principle"?

Or that there's a limit to the "Dependency Inversion Principle"?

S.Lott
Nice article. I think the idea that there's a limit to the dependency inversion principle is what I was getting at. In the linked PDF, it says "high level modules simply should not depend on low level modules in any way." That seems to me to be exactly as possible as building a skyscraper with no foundation or ground floor. Somewhere beneath the abstraction onion, something very specific has to be happening, or you have a program that does nothing.
Mason Wheeler
A: 

My opinion:

There is enough abstraction when you can accomplish your business goals now and in the foreseeable future - at least long enough that you get satisfactory ROI from the project(s) at hand. From a purely academic standpoint, I really do not think that a value can be solidly put on this. What matters is that your customers are happy and business moves on.

Happy coding!

Andrew Sledge
A: 

Code should be closed to change but open to extension. The object-oriented paradigm was invented to provide exactly that. If I were you, my main question would be how expandable my code needs to be. If you're still not feeling sure about OOP, take a look at design patterns. Most of your design-time problems have already been solved by others.

Burcu Dogan
A: 

I think that the benefit of abstraction "for abstraction's sake," as you put it, is that over time, as abstractions become less leaky, increased abstraction makes it easier to write code and to isolate problems as they arise. The other benefit is (at the risk of sounding cliché) code reuse. A big advantage of abstract tools is that you can change how they're used or what they're used on to get completely different effects. That modularity is harder to replicate with procedural programming. Not impossible, but harder.

On a somewhat unrelated note, Java takes this a little bit too far, as Steve Yegge humorously points out. So the answer is that there's a definite middle ground: OO, like any other tool, has its place; it should not be ubiquitous, but neither should it be eschewed merely because it is possible to get the same results using other methods.

ehdv