Possible Duplicate:
Pitfalls of Object oriented programming

I recently had an argument with a friend who believed that every application should be designed in an OOP manner. I know that there are definitely applications where OOP would not fit well, but I could not think of any specific example. Can someone give me an example where taking an object-oriented approach to a problem would be cumbersome or otherwise suboptimal?

+5  A: 

I've run into a few scenarios where OO doesn't hold together, but I still find it a very powerful, if widely misunderstood, approach. Some weak areas I've observed are:

  • Applications with little to no business logic that amount to just a bunch of CRUD screens; these are better handled with declarative designs.
  • System boundaries where there is a forced transition to procedural/relational, service-oriented, or message-queuing interfaces, though for this I often use a layer of "active", thread-bearing objects at the interface.
  • Environments where continuous realignment of the system architecture is impeded by, e.g., code ownership or backward-compatibility considerations. This can lead to very contrived code structures more quickly in an OO design than in a procedural/relational one.
Jeffrey Hantin
A: 

The real world is really populated with real objects.

Any software that purports to solve any real problem (any problem that really exists in the real world) must be object-oriented.

You can write such programs with non-object languages (like C).

However, when you read C code, you find that it contains objects. They just aren't formalized well by the language. But even a lowly C-language struct is -- in effect -- a shabby, poorly-supported object. Even in classic BASIC, a collection of parallel DIM arrays is a kind of list of objects.
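
To make the parallel-arrays point concrete, here is a minimal sketch (in Python, purely for illustration; the names are invented):

    # "Parallel arrays", the classic BASIC idiom: record i is scattered
    # across several lists...
    names = ["ada", "bob"]
    ages = [36, 41]
    print(names[0], ages[0])  # ada 36 -- an informal, unformalized "object"

    # ...which is exactly the data a formal object would bundle:
    class Person:
        def __init__(self, name, age):
            self.name, self.age = name, age

    people = [Person(n, a) for n, a in zip(names, ages)]
    print(people[0].name, people[0].age)  # ada 36 -- the same record, formalized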

I know that there are definitely applications where OOP would not fit well

Such a thing can't exist and still be an "application".

Even a "pure" mathematical function still deals with "number" objects.

You could object and say "an integer isn't an object". You'd be wrong, however. An integer is an object. It has methods (+, -, *, /) and an attribute (its value). Actually, it has several class-level attributes (maximum integer value, minimum integer value, etc.).
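
In a language where this is literally true, the claim is easy to demonstrate. A minimal sketch in Python:

    n = 5
    print(n.__add__(3))        # 8 -- the "+" operator is really a method call
    print(n.bit_length())      # 3 -- integers carry their own methods
    print(isinstance(n, int))  # True -- every integer is an instance of a class
    # Fixed-width integer types elsewhere carry the class-level bounds mentioned
    # above (e.g. Java's Integer.MAX_VALUE); Python 3 ints happen to be unbounded.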

The only way to escape from the tyranny of objects is to escape from the tyranny of data.

To have a "purely procedural" thing, you could create a "meta-procedure" or "template" or "macro" that had no concrete data type. That would be the closest you could come to a non-object thing. But to do work, it would have to be bound to a data type, meaning that you now have objects.

S.Lott
Your answer is "never"? Of course, many things can often be considered "objects", but that's a far cry from a design/architecture approach.
Justin L.
I think perhaps you're overapplying object-oriented philosophy. Not all values are objects, unless you're working in a language such as Ruby that doesn't have machine primitives.
Jon Purdy
@Jon: I think you don't get it. C doesn't have first class OO, yet the way to write any good C code is basically OO as a design pattern. For example, file functions follow the principle of doing only what they need to do, and they do it on a file descriptor (`this`/`self`/etc).
Longpoke
Justin, Jon, I wouldn't take this answer too seriously if I were you. Every now and then Mr Lott seems to like answering certain questions as if the OP is an idiot and has asked the "wrong question".
Ash
@Ash: I'm sorry if you think I'm treating the OP as an idiot. My point is simple: OO is impossible to escape. All values **are** objects, even simple integers in C. Our heads (and our languages) are limited by the "object-ness" of the world. C has shabby, horrible, informal objects. But the OO principles apply to C as well as Python.
S.Lott
-1: 'The real world is really populated with real objects.' What a load of nonsense. This can only be true in a tautological, and therefore useless, sense. And while I'm giving this answer a kicking, what is the maximum value of an integer?
High Performance Mark
@High Performance Mark: 4294967295.
S.Lott
-1 This doesn't answer the question and it is incorrect in some places. (Maximum value of an integer? Someone with your rep should at least know what an integer is)
Yacoby
@Yacoby: In Python 3 integers do not have a maximum value, that's true. But in most other languages (including Python 2) the integer class clearly has a maximum value. Sometimes it's "implementation dependent". Other times it's defined as part of the language.
S.Lott
+1 for having the guts to be refreshing and thinking beyond the mere code.
CesarGon
@CesarGon: Actually, thinking beyond the code is precisely what is not needed here. When the OP asked about using OOP, he meant as in "using classes and instantiating them". Of course the real world is modeled as objects. But the OP asked a different thing, and this answer is not relevant.
Javier Badia
@Javier: I disagree. The OP talks about *designing* an application and *taking an approach to a problem*. Words like "code" or "programming" do not appear in the OP. And, in any case, burying your head in code is the last thing that one needs when programming.
CesarGon
I can empathise with SL's point here, which, if I'm not mistaken, is an OO version of Greenspun's Tenth Rule (http://en.wikipedia.org/wiki/Greenspun%27s_Tenth_Rule). However, OOP objects differ from real-world objects in that, to be properly OO, they should be rather 'dumb', or at least appear that way to external objects. No such condition exists for real-world objects.
CurtainDog
Well, back in the 80s people were claiming that OO was good because it allowed you to "see" the "real" objects in the "real" world. I want to think that we have matured past that naive conception of OO. I did not upvote this answer because I agree with it but because (as I said), for once, it thinks beyond the code. :-)
CesarGon
@CesarGon: My point is that all real programming is done with objects, and that there is no escape from the objects used to write real programs. You can claim you're not using an OO language, but the objects continue to actually exist. The counter-examples: "reading files" is still an object with methods and attributes. You may be using a shabby language (like C) but the underlying file object still exists. It doesn't have syntactically graceful methods. It has shabby function calls. But it's still an object.
S.Lott
@S.Lott: Well, I could argue that all real programming is done with Pascal-like data records and functions, and that a C# or Java class is just a shabby merge of a record plus some functions. It all depends on what paradigm you feel is natural. I am an OO person, and I have a pure OO mindset, so I tend to agree with you *intuitively*. However, I am a researcher in conceptual modelling and semantic technologies as well, and I know that modelling/programming paradigms shape your mind as much as natural languages. Despite my intuitive inclination to agree with you, I must sit on the fence.
CesarGon
@CesarGon: After 30 years, it appears to me that "Pascal-like data records and functions" are just shabby, incomplete objects with ratty syntax. Folks appear to reject this idea, but seem unable to propose any alternative. You can sit on the fence, but I still don't see any actual alternative that demonstrates that there is actual non-object programming.
S.Lott
@S.Lott: I sympathise. And I use OO every day.
CesarGon
A lot of the things we model in programming are not real-life objects. When did you last see an integer? And I don't mean the textual representation of an integer, I mean "the real-life object". When did you last see the time? And I mean "the real-life object". Those are abstract ideas made up by man. They don't have a max/min value; they don't have constraints. They are simply abstract ideas, just as money is. $100 is not the same as a $100 bill. There are many objects that can represent $100 (account balance, bills, etc.) but none of them _are_ $100. Just as '1' is not an integer but a character :)
Rune FS
@Rune FS: In 30+ years of professional programming I have never "modeled" an integer. I have used computer integers to model real life objects. I think you're entirely misconstruing what I'm saying. Computer objects must model real-world objects. Real world objects are not "abstractions". They're real things.
S.Lott
@Lott you're missing my point (which has nothing to do with my 25+ years of experience, which are as irrelevant to this topic as your 30+ years :) ). My point is that a lot of what we model _is_ abstractions, such as monetary value, time, project plans, etc. If you went out and sought the real-life object of time or money, you would have as much success as the holy knights had searching for the grail. It doesn't exist. A blog or a forum like SO do not exist as objects in the real world; they are made-up constructions with no real-life objects. They only exist as part of a model in a system.
Rune FS
@Rune FS: "money", "time", "task" must be declared concrete in order for our software to model something that really exists. Software is a model of *something*. This mysterious *something* must be declared concrete in order to have some basis for defining classes that model this underlying *reality*. We don't model *abstractions*. We use abstractions to model *reality*. That's kind of definitional. To say otherwise is to model abstractions with other abstractions -- removing the usual sense of "model" as "simplification" and replacing it with "similar to".
S.Lott
I really have to disagree that it needs to be concrete before we can model it. That would mean that computers cannot be used to compute in the mathematical sense. Math is by definition abstract. Take f(g, x) = 1/g(x); that's a function definition with provable properties, but it's not something concrete from the real world. You can't say whether the result increases with increasing x's; you can, however, model it perfectly in any FP language such as F#. You could use OO, but not before you've created a class to represent functions and a class defining the function, and neither would model the function above.
Rune FS
@Rune FS: An OO design is a "model" -- a model simplifies some aspect of the thing it represents. One can model the "real" world in OO. One can also model math in OO. Math exists. In a context where we model Math, Math becomes the "concrete thing". The OO design is a *model* -- a simplification -- of that concrete thing. The point of an OO model is to be a simplification of some "concrete" or "real" thing. Math (or ANYTHING else) can be modeled. Indeed ALL things can be modeled with OO, since ALL things have some fundamental detailed reality which can be simplified.
S.Lott
@S.Lott you're the one who said one could not model Math. Wikipedia on abstraction and math: "Abstraction in mathematics is the process of extracting the underlying essence of a mathematical concept, removing any dependence on real world objects." So, in other words, Math can be used to remove any dependencies on real-world objects, and still we can model it (which is to simulate, not necessarily to simplify); however, FP excels in modelling Math compared to OO.
Rune FS
@Rune FS: I did not say one couldn't model math. I said that I haven't ever designed an object that modeled a mathematical concept. Not in 30 years of programming. Math == Programming == Objects == Model of Real World. That's the point I'm trying to make. Modelling math can be done, but isn't practical and doesn't occur in practical applications.
S.Lott
+1  A: 

Realistically, I'm sure most problems can be boiled down to some sort of OO pattern. Even higher-order functions (functions which take functions as arguments) can be emulated by assigning those functions to a specific object which is passed around (I believe this is called the Visitor pattern?), and currying (partial function application) can be emulated, again, via objects.
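
A sketch of both emulations in Python (strictly speaking, the first is closer to the Strategy/Command pattern than to Visitor); all names are invented for illustration:

    # Passing behaviour as an object instead of as a function:
    class Doubler:
        def apply(self, x):
            return 2 * x

    def map_with(op, items):
        # 'op' is any object exposing .apply(); no first-class functions needed
        return [op.apply(i) for i in items]

    print(map_with(Doubler(), [1, 2, 3]))  # [2, 4, 6]

    # Emulating partial application with an object that stores an argument:
    class Adder:
        def __init__(self, n):  # bind the first argument now...
            self.n = n
        def apply(self, x):     # ...and supply the second one later
            return self.n + x

    add5 = Adder(5)
    print(add5.apply(10))  # 15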

That said, generally, those feel like ugly hacks. At the end of the day, it really comes down to the developer's skill and tendencies; the compiled code should perform similarly regardless of the paradigm it's implemented in.

frio
+15  A: 

Sometimes, you just need something quick & dirty to get the job done. I do this a lot with perl, for example - suck data in, process it, and spit it out. No OO needed for that.
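
For example, a throwaway filter of that shape, sketched here in Python rather than perl (the tab-separated layout is an assumption):

    #!/usr/bin/env python
    # Suck data in, process it, spit it out -- no classes defined anywhere.
    import sys

    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")  # assumed tab-separated input
        if len(fields) > 1 and fields[1]:       # keep rows with a second field
            print(fields[1].upper())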

chris
Somehow I always felt that the "object-oriented" features of Perl were rather contrived, provided to satisfy a *perceived* rather than an actual need, and in large part unnecessary in the context of what Perl is most often *used* to do.
Jon Purdy
I'll give you one for that. QnD shell scripts are another area where I do exactly that.
paxdiablo
@Jon it's about as bad as making bash object oriented... oh wait, that's PowerShell :)
Earlz
@Jon - absolutely agree. I'll use an OO package when it offers me functionality I need, but other than that, forget it - if I'm doing something that needs that degree of OOness, then I'll pick a different language!
chris
@Earlz - true, but remember that for Windows admins, the alternative is to remote into dozens of machines and do a bunch of clicking.
chris
So "suck data in" (reading from file objects), and "spit it out" (writing to a file object) isn't object oriented? I would think that anything that involves I/O involves file objects, and is -- therefore OO. Even "process it" seems to be accessing attributes of the object read in. The object read in can be a string, but it's still an object with properties and methods. How is this **not** object-oriented? Can you please expand your example to show more clearly how there are no objects involved in this kind of processing?
S.Lott
@S.Lott: It might not be working with a file "object", it could be working with pipes. But regardless, I think you're stretching the definition of OO to call files "objects". I'm building my application without using any OO design. And in perl, I don't have to deal with the files directly - just the data.
chris
@chris: How is a pipe not an object? When you say "just the data" regarding PERL, aren't you talking about a String object? With specific methods and attributes implemented by PERL's syntax? How is this not an object?
S.Lott
@S.Lott: In OO, an object is an instance of a class. In my example, I am not defining any classes, I am not instantiating any objects, etc. - no "OO features" apply. You're simply playing with the semantics around the definition of an "object" and ignoring the specifics around what constitutes "object-oriented". Regardless of whether data comes from a file or a pipe, I'm not instantiating it, I can't subclass it, I can't do any of the object-y things with it. Data might be a string, it might be a number; that all depends on what the data looks like and the context in which it's used.
chris
@S.Lott - For future reference, Perl is not spelled with all-caps, any more than PYTHON is. Perl is [not an acronym](http://perldoc.perl.org/perlfaq1.html#What%27s-the-difference-between-%22perl%22-and-%22Perl%22%3f).
ire_and_curses
@chris: How is "reading" not an object-y thing? I'm still lost on how reading and writing are not object-oriented. Please explain further. I write lots of Python scripts that involve pre-built objects that I neither instantiate nor subclass. I'm not "playing" with semantics; I'm trying to understand the semantics you're alluding to. Somehow your file object is not an object. I can't see how that happens. Strings and numbers are object classes as I understand the definition of object. I can't see what distinction you're making.
S.Lott
@ire_and_curses: for future reference, PERL, is an acronym according to popular usage: http://www.acronymfinder.com/PERL.html.
S.Lott
@S.Lott - Incorrect. Did you not follow the link to the official FAQ I provided? I'll quote it for you: "But never write "PERL", because perl is not an acronym, apocryphal folklore and post-facto expansions notwithstanding."
ire_and_curses
@ire_and_curses: You have provided your data (twice). I've provided data that seems to contradict it. You don't need to continue providing the same data again. I've read it. The Norma Loquendi seems to disagree. Bonus, one cannot edit old comments. What -- exactly -- is your point in repeating your data?
S.Lott
@S.Lott: Tom Christiansen was one of the "inventors" of perl. I think that is a lot more believable than some random site with questionable content, whose only purpose is to make money off ads. Just because something is on a web site doesn't make it true, or correct.
chris
@chris: Sorry for quoting popular opinion. I value the ideas of others. Again, since the comment can't be edited, what -- exactly -- is your point?
S.Lott
@S.Lott: The point is that the source of information matters. The site you mentioned had nothing to back up the claim; the site that @ire mentioned had a quote from someone with a long association with the perl community - therefore, much more believable.
chris
@chris: We're both using Appeal to Authority as an argument. Why is one authority "better" than another? Why is "long association" better than "common understanding"? And what is the point of continuing to complain about this?
S.Lott
Doh, and then use LWP or Curl and you're back in OO land.
Longpoke
+2  A: 

A Proof of Concept (PoC) might fall into this camp: if you're trying to prove something specific, then adhering to OOP might not be on the "critical path", if you know what I mean.

For one project I worked on, we deliberately built the PoC in such a way as to discourage it being turned into the PROD system, because we knew we were doing a "quick-n-dirty" for specific ends and that it wasn't production quality.

Adrian K
I'll give you one for that. I'm currently working on a PoC/RAD demo for an application written in WPF/C# and I'd be scared of showing the code to a "real" C# developer for fear of being tarred, feathered, and run out of town :-)
paxdiablo
+1  A: 

I think one of the factors you consider in designing an application is the size and scope of the work that needs to be done. As the size of the project increases, I think OOP leads to more manageable and flexible software development. You wouldn't really think of using OOP when creating an application which only computes the sin of a number, aye!? Just a thought!
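
Indeed, the whole "application" fits in two lines; a sketch in Python:

    import math, sys
    print(math.sin(float(sys.argv[1])))  # the entire program -- no classes needed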

ultrajohn
+2  A: 

I think it has more to do with the programmer than with the problem. That's not to say some problems aren't inherently more suited to OOP than others, but just that I think the programmer's preferences and skills are generally more important in such design decisions.

For example, right now I would probably write any program more complex than "Hello, World!" using at least some degree of object-orientedness. But that's because OOP has become quite comfortable and second nature to me. Other programmers with different backgrounds and experiences might choose to write the same programs in completely different ways. There are lots of ways to frame and model every problem, and what's easiest for one programmer may not be easiest for another.

C. Dragon 76
+7  A: 

Others have mentioned quick-and-dirty programs. I would go further. I would not use OO when I'm writing a small utility to do one thing well, even if it's going to be around awhile and not considered "quick and dirty". OO only makes sense when you need the flexibility that things like encapsulation and polymorphism bring to the table.

dsimcha
+4  A: 

I would avoid using much OO in heavily parallel, performance-critical projects. The problem is that if you need to scale to 24 cores (I do scientific computing work, and I sometimes do), then you need to keep the number of dynamic allocations down, or GC/memory-management overhead will eat you alive. For the most part, OO can't be done properly if you're not willing to make liberal use of the heap and garbage collector.
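
A hedged sketch of that allocation discipline, using Python/numpy for illustration (the particle simulation is invented, not a benchmark):

    import numpy as np

    N = 1_000_000
    # OO-style would be [Particle(x, y) for ...]: millions of small heap
    # objects and corresponding GC pressure. Data-style: flat buffers,
    # allocated once and updated in place.
    positions = np.zeros((N, 2))
    velocities = np.zeros((N, 2))

    def step(positions, velocities, dt):
        positions += velocities * dt  # in-place update; no per-step allocations

    step(positions, velocities, 0.01)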

dsimcha
A: 

Data entry applications are usually better written with just procedural code. At least, that's been my experience.

Esteban Araya
+5  A: 

Any program that needs to be formally proven to be correct. Formal proofs of programs that rely on assignment are much harder than proofs of functional programs.
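
A small illustration of why, sketched in Python: a pure function supports equational reasoning, while assignment makes the "same" expression depend on evaluation history:

    # Pure: square(3) can be replaced by its value anywhere, so algebra applies.
    def square(x):
        return x * x

    assert square(3) + square(3) == 2 * square(3)  # always holds

    # Stateful: the "same" expression now depends on when it is evaluated.
    counter = 0
    def next_id():
        global counter
        counter += 1
        return counter

    assert next_id() + next_id() != 2 * next_id()  # 1 + 2 != 2 * 3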

Stephen C
+3  A: 

DSLs are a good example of languages that do not need to be OO, and that often have advantages in not being OO languages. DSLs need to be tailored to the problem they are solving without adding lots of cruft. DSLs that attempt to have all of the fixings of modern general-purpose languages end up not being very domain-specific!

Another set of problems are logic programs. These are needed when we have a set of constraints and we need to find a solution (and maybe optimize it). In a general-purpose OO language we have to do a lot of heavy lifting to solve these kinds of problems, but logic programming languages like Prolog do them naturally.
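
For instance, a toy constraint that Prolog would state declaratively has to be enumerated by hand in a general-purpose language; a sketch in Python:

    # Find digit pairs with x + y == 10 and x < y: in Prolog this is a
    # declaration; here we enumerate the search space ourselves.
    from itertools import product

    solutions = [(x, y)
                 for x, y in product(range(10), repeat=2)
                 if x + y == 10 and x < y]
    print(solutions)  # [(1, 9), (2, 8), (3, 7), (4, 6)]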

Chris
+1  A: 

How about whenever you have to use a relational database? Though my answer in such situations would be to turf the database, it's often completely unfeasible.

CurtainDog
+1  A: 

I see a real need not to go with OOP when resources (in particular CPU and memory) are scarce. This is the case, for example (but not limited to), for networked devices (switches, routers, ...) and embedded devices used in industrial computing.

The fake interview of Bjarne Stroustrup talks very well about it: http://harmful.cat-v.org/software/c++/I_did_it_for_you_all

landrain
+1  A: 

Depends on the language. Some programming languages are inherently or syntactically OOP-only. Others have basic data types and APIs that suggest a procedural coding style.

Also, sometimes you want to mix and match. It doesn't make sense to wrap functions into classes if they just compute individual answers without ever looking at object attributes.
If, however, there are already multiple domain classes in an application, it makes sense to wedge the remaining functions into objects as well. A coherent paradigm makes everything more readable.
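
A minimal sketch of that mix, in Python (the domain names are invented):

    # A plain function for a stateless computation...
    def shipping_cost(weight_kg, rate=4.50):
        return round(weight_kg * rate, 2)

    # ...and a class only where there is state and behaviour worth bundling.
    class Order:
        def __init__(self, items):
            self.items = items  # (name, weight) pairs: state worth encapsulating
        def total_weight(self):
            return sum(w for _, w in self.items)

    order = Order([("book", 0.4), ("kettle", 1.1)])
    print(shipping_cost(order.total_weight()))  # 6.75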

Personally, I mostly mix and match. If you shoehorn everything into objects, you're most likely following cargo-cult programming, not object orientation.

mario
A: 

Anything written in assembly or machine code.

chris
+1  A: 

In my opinion, a lot of applications that are heavily reliant on complex relational data are not best suited to object-oriented programming. Sure, you can try to abstract it all away with a heavy ORM, but when you've got two dozen tables to join, possibly with some subqueries or recursive joins, and then you have to display this data in tabular format, you might as well just do it procedurally.
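
A sketch of the procedural alternative, in Python with the standard-library sqlite3 module (the two-table schema stands in for the two dozen):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER, name TEXT);
        CREATE TABLE orders (id INTEGER, customer_id INTEGER);
        INSERT INTO customers VALUES (1, 'ada');
        INSERT INTO orders VALUES (100, 1);
    """)
    # Let the database do the join and render the rows as the tabular data
    # they are, instead of hydrating a graph of mapped objects first.
    rows = conn.execute("""
        SELECT o.id, c.name
        FROM orders o JOIN customers c ON c.id = o.customer_id
    """).fetchall()
    for order_id, name in rows:
        print(order_id, name)  # 100 ada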

Lotus Notes
+2  A: 

At its heart, programming is the art and science of solving problems within a set of constraints. I believe most people recognize that multiple modes of thinking exist for solving problems, object orientation being one such paradigm. In practice, a single problem/program may require multiple modes of problem solving, and may benefit from using multiple approaches to modeling different aspects of the problem. In answering this question, I also think it's important to differentiate a style of problem solving from a particular kind of language. One can certainly program in an object-oriented or functional style in a procedural language like C.

The "object-oriented" paradigm is but one approach of decomposing a problem into a set of smaller problems that are easier to reason about and whose correctness we wish to ensure. OO is characterized by the identification of discrete actors (objects) with well-defined responsibilities, which interact by passing messages (themselves often consisting of objects) between one another. OO has proven effective at addressing problems "in the large" - such designing solutions for large scale systems whose responsibilities cover a large domain.

Other modes of thinking about problems certainly exist - and offer different vantage points from which to examine how to solve them. For instance, some problems can be viewed as a transformation of information from one representation to another. Functional programming paradigms can be an effective way of tackling such problems - essentially, the problem is decomposed into a sequence of transformations from the source representation to the output representation. Functional programming, by avoiding state and mutable data, lends itself better to problems where we can exploit parallelism and sub-computation caching. Not that such programs cannot be created in other styles (they certainly can) - it's just that the constructs and idioms provided by functional languages are often easier to compose with one another for such problems.
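
A sketch of the transformation view, in Python (the log format is invented):

    # A problem framed as a pipeline of transformations over a sequence,
    # rather than as interacting objects.
    lines = ["2024-01-01 ERROR disk full",
             "2024-01-01 INFO ok",
             "2024-01-02 ERROR net down"]

    errors = (line.split(" ", 2)[2]                 # source -> message
              for line in lines
              if " ERROR " in line)                 # filter stage
    report = sorted(msg.upper() for msg in errors)  # transform stage
    print(report)  # ['DISK FULL', 'NET DOWN']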

We should also be cognizant of the limitations and overhead that are introduced by certain programming styles. The approach of treating a problem as a set of interacting "objects" (whether they are materialized using language constructs or merely conceptualized in the mind of the programmer) requires establishing boundaries and constraints on how a program behaves. Such constraints may actually prove to be a barrier to the effective construction of certain solutions. Take device drivers, for instance. A common expectation of device drivers is that they be efficient in their operation. OO design does not necessarily improve efficiency of operations - in fact, OO tends to introduce additional layers of indirection and abstraction primarily to aid maintainability and extensibility - neither of which is necessarily as important in the creation of device drivers. Similarly, programs designed in a functional style often require more memory or steps to process information due to the immutable nature of how such programs represent data.

So, to summarize my thoughts above, I would say that:

  1. Different kinds of problems lend themselves to different kinds of problem-solving approaches (data transformation vs. request/response vs. process-oriented).
  2. Some problems composed of multiple subproblems may benefit from multiple modalities of problem solving techniques (OO, functional, procedural, etc).
  3. The right tools make tackling a problem easier, while the wrong tools can make a problem much, much harder - if not intractable.
  4. Recognizing which problems lend themselves to which techniques requires experience, open-mindedness, and a willingness to think creatively.
  5. Different languages lend themselves to certain style of programming, with varying degrees of flexibility and clarity.

Finally, let's address the core of your question: where would an object oriented approach to a problem be cumbersome or otherwise suboptimal?

For specific examples, I would suggest that:

  1. Extremely small or trivial programs may gain less from an OO approach than the overhead it imposes.
  2. Programming problems that demand native interaction with hardware or have extreme performance requirements (e.g., device drivers) may be suboptimal fits for OO models.
  3. Problems which are better modeled as transformations or pipelined operations on a sequence of inputs may be better expressed in functional forms.
  4. Problems where insufficient time or requirements are present to clearly identify the underlying actor/interaction scheme may actually incur a penalty from inappropriate OO design.
LBushkin