I was having a conversation with a co-worker a few days ago. When I mentioned that a new coding technology he's interested in doesn't seem like anything that couldn't already be done with existing tools rather easily, he replied a bit dismissively, "that's what they said about object-oriented programming, too. It's just procedural programming extended with a few syntax tricks." The conversation went off in a different direction, but that statement made me think.
This is the same guy who's seemed a bit confused during code reviews about why I made certain operations standalone functions instead of methods of some class. When I explained that they didn't deal with any specific object or class and had no need for a "self" parameter (aka "this" in the C++ language family), he seemed to think that they ought to be part of a class anyway because that made for "better OOP".
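To make the disagreement concrete, here's a made-up C++ sketch (the names are hypothetical, not our actual code) of the kind of function I mean: it touches no object state, so the implicit "this" buys nothing.

    #include <cctype>
    #include <string>

    // Standalone version: the signature tells you everything it depends on.
    std::string toSnakeCase(const std::string& name) {
        std::string result;
        for (char c : name) {
            if (std::isupper(static_cast<unsigned char>(c))) {
                if (!result.empty()) result += '_';
                result += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
            } else {
                result += c;
            }
        }
        return result;
    }

    // "Better OOP" version: the class exists only to hold the method,
    // and the implicit this pointer is never touched.
    class NameFormatter {
    public:
        std::string toSnakeCase(const std::string& name) const;  // same body as above
    };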
I'll freely admit that he's a better programmer than me. He's older and has a lot more real-world experience, and I've learned a lot by working with him. But I can't help but think that he's got a fundamentally bad perspective on this, and it's something that I see becoming a lot more common these days: abstraction for the sake of abstraction.
Seems to me that what "they" said about OOP is right, with one exception. The only truly new thing that the object paradigm introduces is type inheritance. Everything else, including polymorphism (assuming you could have polymorphism without type inheritance), can be done in a purely procedural language. I've seen some very interesting object-oriented C code, for example.
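For what it's worth, here's a rough sketch of the kind of thing I mean, written in the C subset of C++ since that's the whole point (the names are mine, not from any real codebase): a struct full of function pointers gives you polymorphic dispatch without any inheritance in the language at all.

    #include <stdio.h>

    /* A "class" is just a struct; the "virtual method" is a function pointer. */
    typedef struct Shape {
        double (*area)(const struct Shape* self);
        double width, height;
    } Shape;

    static double rect_area(const Shape* s) { return s->width * s->height; }
    static double tri_area(const Shape* s)  { return 0.5 * s->width * s->height; }

    int main(void) {
        Shape shapes[2] = {
            { rect_area, 3.0, 4.0 },   /* a "rectangle" */
            { tri_area,  3.0, 4.0 },   /* a "triangle"  */
        };
        /* Polymorphic dispatch: the call site never names the concrete type. */
        for (int i = 0; i < 2; i++) {
            printf("%f\n", shapes[i].area(&shapes[i]));
        }
        return 0;
    }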
But now languages where everything must be an object are gaining popularity. Problem is, a lot of concepts just don't map well to the object paradigm, and so you end up with really ugly code. Just look at the heavy (ab)use of static methods in Java to see what happens when you decide by fiat that all holes must be round, regardless of the shape of your pegs. Back in the day, they used to call that abstraction inversion. Now it just seems to be accepted as normal.
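The Java case doesn't translate literally here, but the same smell shows up in C++ terms as a class that exists only to be a bucket of static methods (the names below are hypothetical). A namespace of free functions says the same thing without pretending there's an object involved.

    #include <string>

    // The round-hole version: a "class" nobody ever instantiates,
    // kept around purely as a bucket for static methods.
    class StringUtils {
    public:
        StringUtils() = delete;
        static bool isBlank(const std::string& s) {
            return s.find_first_not_of(" \t\r\n") == std::string::npos;
        }
    };

    // The procedural shape of the same thing, stated honestly.
    namespace strutil {
        inline bool isBlank(const std::string& s) {
            return s.find_first_not_of(" \t\r\n") == std::string::npos;
        }
    }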
The problem is that the easier it is to write code, the harder it is to debug and maintain it, because you have to drill through more layers of abstraction to find out what's really happening and where.
One example: At work, we've got a set of classes that builds SQL queries dynamically. Once you figure out the interface, it's easy to build a complex query by chaining together a bunch of little query-builder objects into a logical tree. But if the requirements change later on and someone else has to make it return a different result set from the database, it's a real pain to work out what the existing tree is actually asking for. Debugging is even worse. You ever try browsing an expression tree in a symbolic debugger? Or stepping through its execution to find the point where the evaluation of an unknown node produces a specific known response? It's not pretty. Trying to view the output in a SQL profiler isn't much better: what comes out may work, but it tends to be ugly and require 5-10 minutes of refactoring and reformatting before it begins to resemble a human-readable SQL query.
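I can't post our actual classes, so here's a stripped-down sketch of the general shape of such a builder (all names hypothetical). Even this toy version shows the problem: the WHERE clause only exists as a tree of little node objects, so answering "what SQL does this actually produce, and which node produced the bad part?" means walking the tree in a debugger.

    #include <memory>
    #include <string>
    #include <utility>

    // Base node of the (hypothetical) query expression tree.
    struct QueryNode {
        virtual ~QueryNode() = default;
        virtual std::string toSql() const = 0;
    };

    // Leaf node: a single comparison like  status = 'open'
    struct Comparison : QueryNode {
        std::string column, op, value;
        Comparison(std::string c, std::string o, std::string v)
            : column(std::move(c)), op(std::move(o)), value(std::move(v)) {}
        std::string toSql() const override { return column + " " + op + " " + value; }
    };

    // Interior node: combines two subtrees with AND.
    struct And : QueryNode {
        std::unique_ptr<QueryNode> left, right;
        And(std::unique_ptr<QueryNode> l, std::unique_ptr<QueryNode> r)
            : left(std::move(l)), right(std::move(r)) {}
        std::string toSql() const override {
            return "(" + left->toSql() + " AND " + right->toSql() + ")";
        }
    };

    // Building "status = 'open' AND age > 30" takes three heap-allocated nodes;
    // changing the result set later means figuring out which nodes to rewire.
    std::unique_ptr<QueryNode> buildWhereClause() {
        return std::make_unique<And>(
            std::make_unique<Comparison>("status", "=", "'open'"),
            std::make_unique<Comparison>("age", ">", "30"));
    }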
I'm not opposed to using abstractions. I just think that because they tend to leak, and because they often don't reduce complexity so much as make the complexity someone else's problem, they're only useful up to a certain point, and a lot of coders these days don't seem to know where that point is. I'm not sure I do myself, really. I like to go by what I call the First Commandment of Abstractions: Thou shalt not make unto thee any abstraction that cannot be overridden if necessary, that thy code be free of ugly abstraction inversions. But I'm not sure if that's enough.
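To show what I mean by an abstraction that can be overridden, here's one way it could look, continuing the hypothetical QueryNode sketch from the earlier example: one node type that accepts raw SQL, so when the builder can't express something cleanly you can drop down a level instead of contorting the object model around it.

    // Escape hatch, added to the hypothetical QueryNode tree from above:
    // a node that passes raw SQL straight through.
    struct RawSql : QueryNode {
        std::string sql;
        explicit RawSql(std::string s) : sql(std::move(s)) {}
        std::string toSql() const override { return sql; }
    };

    // When the builder falls short, override it at one spot and move on,
    // instead of inverting the abstraction everywhere else.
    std::unique_ptr<QueryNode> auditFilter() {
        return std::make_unique<RawSql>(
            "EXISTS (SELECT 1 FROM audit_log a WHERE a.user_id = users.id)");
    }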
Does anyone have any good principles for knowing when to add more abstraction and, more importantly, when not to? I realize this is a really subjective question. I'm making it CW and I'd like to see some discussion on the subject. Hopefully I'll learn something that ends up making me (and maybe some other users) a better programmer.