views:

1225

answers:

16

I started college two years ago, and since then I keep hearing "design your classes first". Sometimes I really ask myself: should my solution be a bunch of objects in the first place? Some say that you don't see its benefits because your codebase is very small - university projects. The project-size excuse just doesn't sit well with me. If a solution fits a project well, I believe it should also be the right one for the macro-version of that project.

I am not saying OOP is bad; I just feel it is abused in classrooms where students like me are told day and night that OOP is the right way.

IMHO, the proper answer shouldn't come from a professor; I'd prefer to hear it from real engineers in the field.

Is OOP always the right approach?

When is OOP the best approach?

When is OOP a bad approach?

This is a very general question. I am not asking for definite answers, just some real design experience from the field.

I don't care about performance here; I am asking about design. I know that in real life it comes down to engineering trade-offs.

==================================================================================

Thanks for all the contributions. I chose Nosredna's answer because she addressed my questions in general and convinced me that I was wrong about the following: "If a solution fits a project well, I believe it should also be the right one for the macro-version of that project."

+11  A: 

OOP is the right approach when your data can be well structured into objects.

For instance, for an embedded device that's processing an incoming stream of bytes from a sensor, there might not be much that can be clearly objectified.
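To make that scenario concrete, here's a minimal procedural sketch of sensor-stream handling in Python. The frame format (0xAA start byte, length, payload, XOR checksum) and all names are invented for illustration; the point is that it's just flat functions over bytes, with nothing that obviously wants to be an object:

```python
# A procedural sketch of sensor-stream handling: flat functions over bytes,
# with no obvious "object" in sight. The frame format (0xAA start byte,
# length, payload, XOR checksum) is invented for illustration.

def checksum(payload):
    """XOR all payload bytes together."""
    c = 0
    for b in payload:
        c ^= b
    return c

def parse_frames(data):
    """Extract payloads from bytes framed as [0xAA, length, *payload, checksum]."""
    frames = []
    i = 0
    while i + 2 <= len(data):
        if data[i] != 0xAA:        # resynchronize on the start byte
            i += 1
            continue
        length = data[i + 1]
        end = i + 2 + length
        if end + 1 > len(data):    # incomplete frame at the tail
            break
        payload = data[i + 2:end]
        if checksum(payload) == data[end]:
            frames.append(bytes(payload))
            i = end + 1
        else:
            i += 1                 # bad checksum: resync byte by byte
    return frames

raw = [0xAA, 2, 0x10, 0x20, 0x10 ^ 0x20]   # one valid frame
print(parse_frames(raw))
```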

Also in cases where ABSOLUTE control over performance is critical (when every cycle counts), an OOP approach can introduce costs that might be nontrivial to compute.

In the real world, most often, your problem can be VERY well described in terms of objects, although the law of leaky abstractions must not be forgotten!

Industry generally resolves, eventually, for the most part, to using the right tool for the job, and you can see OOP in many many places. Exceptions are often made for high-performance and low-level. Of course, there are no hard and fast rules.

You can hammer in a screw if you stick at it long enough...

Dave Gamble
I'd say OOP is the right approach *for the subset of your application which can be well structured into objects*. I can't think of any application where everything maps well to objects (any more than I can think of a language that uses all nouns and no verbs), but every application has a few bits and pieces that can be well represented by OOP.
jalf
I think OOP is not only about modelling your data; it also gives you guidance on how to structure your program to make it easy to change and to understand.
Janusz
+5  A: 
  1. No...OOP is not always the best approach.

  2. (A true) OOP design is the best approach when your problem can best be modeled as a set of objects that accomplish your goals by communicating with and using one another.

  3. Good question...but I'm guessing Scientific/Analytic applications are probably the best example. The majority of their problems can best be approached by functional programming rather than object oriented programming.

...that being said, let the flaming begin. I'm sure there are holes and I'd love to learn why.

Justin Niessner
Unfortunately, your second point is meaningless.
David Thornley
I'd go with vague rather than meaningless...if you're not able to define your objects easily, it's a good sign that your problem is not well suited to OOP (or you just don't understand object design...which is another problem).
Justin Niessner
Meaningless? I took it to be an example of an important computer science area that wasn't penetrated by O-O very deeply. How is that meaningless? +1 from me for a good answer.
duffymo
OOP is when a problem is modeled as a set of objects that accomplish your goals by communicating with and using one another. It's a definition, and so it reduces to "an OOP design is the best approach when the problem can best be modeled using OOP."
David Thornley
A: 

OOP is usually an excellent approach, but it does come with a certain amount of overhead, at least conceptual. I don't do OO for small programs, for example. However, it's something you really do need to learn, so I can see requiring it for small programs in a University setting.

If I have to do serious planning, I'm going to use OOP. If not, I won't.

This is for the classes of problems I've been doing (which includes modeling, a few games, and a few random things). It may be different for other fields, but I don't have experience with them.

David Thornley
+1  A: 

I've seen some of the best results of using OOP when adding new functionality to a system or maintaining/improving a system. Unfortunately, it's not easy to get that kind of experience while attending a university.

Alex B
+4  A: 

Is OOP always the right approach?

Nope.

When is OOP the best approach?

When it helps you.

When is OOP a bad approach?

When it impedes you.

That's really as specific as it gets. Sometimes you don't need OOP, sometimes it's not available in the language you're using, sometimes it really doesn't make a difference.

I will say this, though: when it comes to techniques and best practices, continue to double-check what your professors tell you. Just because they're teachers doesn't mean they're experts.

Spencer Ruport
-1 You've essentially written a tautology. OOP is good when it's good, and it isn't when it isn't. You could say the same thing about gas prices, the weather, dog food, you name it.
rtperson
I've seen glorious procedural programming and ugly object oriented programming. I don't feel that OOP is ever RIGHT or WRONG, only unavailable, unnecessarily time consuming, or simply unnecessary.
Spencer Ruport
+1 this is as simple as "when is a screwdriver the best approach?" when you need to turn a screw. "when is a screwdriver a bad approach?" when you need to do something other than turn a screw. When you need to work in objects, use OOP. Done. Don't overthink it.
Rex M
@rtperson - no, he said it's good when it helps. That's different, and gets to the heart of the matter. Unfortunately, it's not really possible to give a better answer, because the circumstances of each project vary so much.
Michael Kohne
@Michael - Let me illustrate my point a little. When is fire good? When it helps you. When is fire bad? When it hurts you. Two very unhelpful answers. If I were explaining the pros and cons of fire to a three-year-old, I'd probably say something about how it can either give you a bad burn (the hurt part) or cook your food (the helpful part). Spencer's response isn't an answer -- it's a shrug.
rtperson
+1  A: 

I have yet to work on a project in the industry that was not a combination of both functional and OOP. It really comes down to your requirements and what the best (maybe cheapest?) solutions for them are.

northpole
I assume you mean "procedural," rather than "functional." I worked on many, many commercial projects that had no OO at all (assembly and C). So I'm willing to say that whatever works, works. It'd be silly to throw OO out as a tool, though. Most large programs get complicated enough to benefit from OO.
Nosredna
Functional, or procedural? I wasn't aware that functional programming was all that widespread 'in the wild'.
Harper Shelby
heh, ya procedural.
northpole
+62  A: 

The professors have the disadvantage that they can't put you on huge, nasty programs that go on for years, being worked on by many different programmers. They have to use rather unconvincing toy examples and try to trick you into seeing the bigger picture.

Essentially, they have to scare you into believing that when an HO gauge model train hits you, it'll tear your leg clean off. Only the most convincing profs can do it.


"If the solution goes well with the project, I believe it should be the right one also with the macro-version of that project."

That's where I disagree. A small project fits into your brain. The large version of it might not. To me, the benefit of OO is hiding enough of the details so that the big picture can still be crammed into my head. If you lack OO, you can still manage, but it means finding other ways to hide the complexity.

Keep your eye on the real goal--producing reliable code. OO works well in large programs because it helps you manage complexity. It also can aid in reusability.

But OO isn't the goal. Good code is the goal. If a procedural approach works and never gets complex, you win!
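A tiny Python sketch of that "hiding details" point (the class and names are invented for illustration): the caller's big picture is three method names, while the bookkeeping stays inside.

```python
# The caller never sees the running total or the count -- that state is
# hidden so the big picture (make one, feed it, read it) stays small.

class RollingAverage:
    """Keeps a running average without the caller tracking count or total."""
    def __init__(self):
        self._total = 0.0
        self._count = 0

    def add(self, value):
        self._total += value
        self._count += 1

    def value(self):
        return self._total / self._count if self._count else 0.0

avg = RollingAverage()
for reading in [3.0, 5.0, 7.0]:
    avg.add(reading)
print(avg.value())  # 5.0
```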

Nosredna
+1 from me. Very nicely written, Nosredna. Not the first of your answers that has been quite worthwhile, and not the last.
duffymo
Yup. This is one of the answers that makes me wish I could upvote one answer several times.
David Thornley
"The benefit of OO is hiding enough of the details so that the big picture can still be crammed into my head." Perfect explanation! :o)
Gary Willoughby
I wish I could add more than one upvote for this. You made the sneaky remark about procedural code that is the entire point of OO [simplify things].
monksy
+1  A: 

My experience is that OOP is mostly useful on a small scale - defining a class with certain behavior that maintains a number of invariants. Then I essentially just use that as yet another datatype with generic or functional programming.

Trying to design an entire application solely in terms of OOP just leads to huge bloated class hierarchies, spaghetti code where everything is hidden behind 5 layers of indirection, and even the smallest, most trivial unit of work ends up taking three seconds to execute.

OOP is useful --- when combined with other approaches.
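A small Python sketch of that small-scale use, with invented names: one class enforces a single invariant (angles normalized to [0, 360)), and is then handed to ordinary generic/functional code that treats it as just another value type.

```python
# The invariant lives in exactly one place -- the class. The generic code
# below (sorted, comprehensions) knows nothing about it.

from functools import total_ordering

@total_ordering
class Angle:
    """An angle normalized to [0, 360); the invariant is enforced here only."""
    def __init__(self, degrees):
        self.degrees = degrees % 360

    def __eq__(self, other):
        return self.degrees == other.degrees

    def __lt__(self, other):
        return self.degrees < other.degrees

# Generic/functional code that just uses Angle as a value type:
angles = sorted(Angle(d) for d in [370, -45, 90])
print([a.degrees for a in angles])  # [10, 90, 315]
```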

jalf
+1  A: 
  1. OOP is not always the best approach. However, it is the best approach in the majority of applications.
  2. OOP is the best approach in any system that lends itself to objects and the interaction of objects. Most business applications are best implemented in an object-oriented way.
  3. OOP is a bad approach for small one-off applications where the cost of developing a framework of objects would exceed the needs of the moment.

Learning OOA, OOD & OOP skills will benefit most programmers, so it is definitely useful for universities to teach them.

Jeffrey Hines
+1  A: 

The title asks one question, and the post asks another. What do you want to know?

OOP is a major paradigm, and it gets major attention. If metaprogramming becomes huge, it will get more attention. Java and C# are two of the most used languages at the moment (see: SO tags by number of uses). I think it's ignorant to state either way that OOP is a great/terrible paradigm.

I think your question can best be summarized by the old adage: "When the hammer is your tool, everything looks like a nail."

Hooked
Oh, wow, I don't like the SO markup.
Hooked
I've seen the proverb on Stackoverflow before. I think you are giving the bowdlerized version. The true version is, "When your only tool is C++, everything looks like a thumb."
Nosredna
+1 for making me look that up on Google. That version is a corruption of the true, true version, which is what I posted.
Hooked
+5  A: 

It might be helpful to think of the P of OOP as Principles rather than Programming. Whether or not you represent every domain concept as an object, the main OO principles (encapsulation, abstraction, polymorphism) are all immensely useful at solving particular problems, especially as software gets more complex. It's more important to have maintainable code than to have represented everything in a "pure" object hierarchy.
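For instance, polymorphism alone buys maintainability without any deep "pure" hierarchy; a minimal Python sketch with invented shape classes:

```python
# total_area relies only on the shared area() abstraction -- it never needs
# to know which shapes exist, so new shapes can be added without touching it.

import math

class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

def total_area(shapes):
    return sum(s.area() for s in shapes)

print(total_area([Square(2), Square(3)]))  # 13
```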

dahlbyk
+1 ... I believe this has been referred to as POOP. Seriously, I learned in personally traumatizing fashion that MS Access and VBA weren't going to let me really do OOP. But I came away with OO principles that changed all my VBA code for the better.
Smandoli
@Smandoli I learned OOP after having used VBA and was also let down by its lack of a full implementation. However, I started incorporating the *principles* of OOP into my VBA projects and that changed them for the better.
Ben McCormack
+10  A: 

My 5 cents:

OOP is just one instance of a larger pattern: dealing with complexity by breaking down a big problem into smaller ones. Our feeble minds are limited to a small number of ideas they can handle at any given time. Even a moderately sized commercial application has more moving parts than most folks can hold a complete mental picture of at once. Some of the more successful design paradigms in software engineering capitalize on this notion of dealing with complexity: breaking your architecture into layers, splitting your program into modules, doing a functional breakdown of actions, using pre-built components, leveraging independent web services, or identifying objects and classes in your problem and solution spaces. These are all tools for taming the beast that is complexity.

OOP has been particularly successful in several classes of problems. It works well when you can think about the problem in terms of "things" and the interactions between them. It works quite well when you're dealing with data, with user interfaces, or building general purpose libraries. The prevalence of these classes of apps helped make OOP ubiquitous. Other classes of problems call for other or additional tools. Operating systems distinguish kernel and user spaces, and isolate processes in part to avoid the complexity creep. Functional programming keeps data immutable to avoid the mesh of dependencies that occur with multithreading. Neither is your classic OOP design and yet they are crucial and successful in their own domains.
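The immutability point can be sketched in a few lines of Python (the record type and names are invented): a frozen value is safe to share across threads because "updates" produce new objects rather than mutating shared state.

```python
# A frozen dataclass cannot be mutated after construction, so any thread
# holding r1 can rely on it never changing underneath it.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Reading:
    sensor: str
    value: float

r1 = Reading("temp", 20.5)
r2 = replace(r1, value=21.0)   # a new object; r1 is untouched

print(r1.value, r2.value)      # 20.5 21.0
```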

In your career, you are likely to face problems and systems that are larger than you could tackle entirely on your own. Your teachers are not only trying to equip you with the present tools of the trade; they are also trying to convey that there are patterns and tools available for you to use when you attempt to model real-world problems. It's in your best interest to accumulate a collection of tools for your toolbox and choose the right tool(s) for the job. OOP is a powerful tool to have, but it is by far not the only one.

Oren Trutner
+1  A: 

The relevance and history of OOP run back to the Simula languages of the 1960s, conceived as a way to engineer software conceptually, with the developed code defining both the structure of the source and the general permissible interactions with it. Obvious advantages are that a well-defined, well-created object is self-justifying and consistently repeatable as well as reliable; ideally it is also able to be extended and overridden.

The only time I know of that OOP is a 'bad approach' is during embedded-systems programming efforts where resource availability is restricted; of course, that's assuming your environment gives you access to OO features at all (as was already stated).

Hardryv
+3  A: 

Like most questions of this nature, the answer is "it depends."

Frederick P. Brooks said it best in "The Mythical Man-Month": there is no single strategy, technique, or trick that will exponentially raise the productivity of programmers. You wouldn't use a broadsword to make a surgical incision, and you wouldn't use a scalpel in a sword fight.

There are amazing benefits to OOP, but you need to be comfortable with the pattern to take advantage of these benefits. Knowing and understanding OOP also allows you to create a cleaner procedural implementation for your solutions because of the underlying concepts of separation of concerns.

Babak Naffas
A: 

My opinion, freely offered, worth as much...

OOD/OOP is a tool. How good of a tool depends on the person using it, and how appropriate it is to use in a particular case depends on the problem. If I give you a saw, you'll know how to cut wood, but you won't necessarily be able to build a house.

The buzz that I'm picking up on is that functional programming is the wave of the future because it's extremely friendly to multi-threaded environments, so OO might be obsolete by the time you graduate. ;-)

Trueblood
+10  A: 

OOP is a real world computer concept that the university would be derelict to leave out of the curriculum. When you apply for jobs, you will be expected to be conversant in it.

That being said, pace jalf, OOP was primarily designed as a way to manage complexity. University projects, written by one or two students on homework time, are not a realistic setting for projects of that scale, so the examples feel like (and are) toy examples.

Also, it is important to realize that not everyone sees OOP the same way. Some see it as being about encapsulation, and make huge classes that are very complex but hide their state from any outside caller. Others want to make sure that a given object is only responsible for doing one thing, and make a lot of small classes. Some seek an object model that closely mirrors the real-world abstractions the program is trying to relate to; others see the object model as a way to organize the technical architecture of the solution rather than the real-world business model. There is no one true way with OOP, but at its core it was introduced as a way of managing complexity and keeping larger programs maintainable over time.

Yishai
Thoughtful, deep answer. Awareness of the diversity within OOP is helpful in understanding how easily it can be converted into an empty buzzword.
Smandoli
Indeed. Anybody who writes a Java program with everything in the main function of one class isn't using OOP. That being said, there are better and worse ways of using objects.
David Thornley