Do you think it's worth trading off some performance for code quality and maintainability? I remember a post by Jeff Atwood arguing that hardware is cheap and developers are not. I think I'd like to change that to "hardware is cheap, time is not."

I've noticed with an MVC project I've been working on lately that sometimes I lose DAYS just trying to squeeze a little extra performance out of my app, and I'm starting to think it's just not worth it. I've also found myself having trouble designing an ASP.NET MVC application. I love IQueryable to death because it lets me keep appending to a query, which gives me fluent code to work with. But composing queries like that seems to push more responsibility onto the controller/BLL.
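
To make that concrete, here's roughly the kind of composition I mean (the Product type and the names below are made-up stand-ins, not my actual code):

    using System;
    using System.Linq;

    // Hypothetical entity, just to show the pattern.
    public class Product
    {
        public string Name { get; set; }
        public int UnitsInStock { get; set; }
    }

    public static class ProductQueries
    {
        public static IQueryable<Product> Filter(IQueryable<Product> products,
                                                 bool onlyInStock, string nameFilter)
        {
            // Each Where() appends to the underlying query; nothing executes yet.
            if (onlyInStock)
                products = products.Where(p => p.UnitsInStock > 0);

            if (!string.IsNullOrEmpty(nameFilter))
                products = products.Where(p => p.Name.Contains(nameFilter));

            // The caller decides when to execute (ToList(), Count(), ...),
            // so one composed query hits the database instead of several.
            return products;
        }
    }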

So what do you think? In the case of web applications, would you be OK with trading off some performance for maintainable, cleaner code? Do you think it's worth trying to prematurely optimize everything you can? Because, as we've seen, you cannot predict all requirements.

+5  A: 

I really do not believe this is an either/or. If you write clean, simple code that does all its processing exactly the number of times it should, you will have some of the best-performing code you can write. It's really that simple.

krosenvold
I wasn't doubting that you can have both. The question was really aimed at whether it's worth trading off some performance for the other. Sorry, I worded that wrong.
Chad Moran
I always pretend I am my own client that I'm designing the app for... As long as it works, and works well, and nobody else is looking at the code, who cares how clean it is? Just as long as it works.
dswatik
Touché, salesman.
Chad Moran
There's no doubt you can often get fast, efficient code by writing it clean and maintainable. But if you need to squeeze the last few percent of performance out of your code, you may have to sacrifice maintainability or simplicity. So no, it's not *always* that simple.
jalf
+3  A: 

The obvious answer is it depends. If your app is slow enough that it affects usability significantly, and you have measurements to prove that your optimizations actually help, then sacrificing maintainability can be a reasonable tradeoff. On the other hand, if you haven't measured or the app isn't slow enough to hurt usability, always go for readability, maintainability and flexibility. This just boils down to premature optimization being the root of all evil.

Note: design-time algorithmic and architectural optimizations aren't necessarily bad if you know performance is going to matter for your app, but in the case of your question you clearly appear to be talking about micro-optimization, to which the above applies.

Also, in your specific case, if you can't tell whether your app is slow enough to hurt usability then it's premature. If you can then it's not.
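
A minimal sketch of what "measure first" can look like (Stopwatch is in System.Diagnostics; the workload method here is just a placeholder):

    using System;
    using System.Diagnostics;

    class TimingDemo
    {
        static void Main()
        {
            // Time the code you *suspect* is slow before touching it.
            var sw = Stopwatch.StartNew();
            DoSuspectedSlowWork();
            sw.Stop();
            Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);
            // Only if this number is big enough to hurt users is it worth
            // trading readability for speed here.
        }

        // Placeholder for the code under suspicion.
        static void DoSuspectedSlowWork()
        {
            System.Threading.Thread.Sleep(50);
        }
    }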

dsimcha
I guess that was the root of my question which I'll edit. Premature optimization.
Chad Moran
+10  A: 
  1. Make it work
  2. If performance is questionable, profile and identify the problem
  3. Fix the problem.
  4. Repeat steps 1-3 if necessary
  5. ???
  6. Profit
Ben Robbins
Simple, usually the best answers are.
Chad Moran
Step 5 is pure magic; it's very important.
Ólafur Waage
It's supposed to be 4 question marks, bro! (you just lost the game)
Daniel
A: 

I definitely do value my own time over application performance on the server side. If I notice that my site is not performing well enough on database requests etc., upgrading the server hardware is an alternative solution that could (at least in the short term) solve my problem without looking at the code.

However, if the app is extremely network-inefficient, I would spend quite some time trying to improve that part. Sending large chunks of data affects my users, no matter what I do with my own server and uplink - and if they don't like the performance, they won't come back.
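
For example, one cheap network-side win in ASP.NET is gzipping responses for clients that ask for it. A rough sketch (real code would also skip content that's already compressed, like images):

    using System;
    using System.IO.Compression;
    using System.Web;

    // Sketch for Global.asax: gzip the response when the client supports it.
    public class Global : HttpApplication
    {
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string acceptEncoding = Request.Headers["Accept-Encoding"] ?? "";
            if (acceptEncoding.Contains("gzip"))
            {
                Response.Filter = new GZipStream(Response.Filter,
                                                 CompressionMode.Compress);
                Response.AppendHeader("Content-Encoding", "gzip");
            }
        }
    }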

But as several others have said, it is not a matter of either/or. It depends a lot on the situation: how severe the performance issue is, where in the application it occurs, and so on.

Tomas Lycken
+1  A: 

Neither quality (meaning easy to read) nor performance is the most important - CORRECTNESS is!

anon
What if the incorrectness is minor and only occurs in a few edge cases? A hypothetical example could be some cosmetic issue that would require lots of special case logic, or a small amount of rounding error in a math function. IMHO, even correctness isn't a sacred cow that's immune to tradeoffs.
dsimcha
So you won't mind if the code your bank uses to calculate your account's interest drops a few $$$ here and there?
anon
Well, this wouldn't quite pass as "minor" incorrectness. I wouldn't mind if the formatting for my bank's website looked screwy once in a while if it meant more features, better performance, passing on lowered development costs as higher interest rates, etc. Banks don't use floating point anyhow.
dsimcha
@zabzonk: You only prove that correctness is _sometimes_ paramount, and that the question is ill stated because it depends on the problem domain.
dmckee
cont'd: In other problem domains, rounding error is more tolerable and there are legitimate tradeoffs between performance and floating point accuracy in things like scientific computing. Sometimes an approximate answer is good enough.
dsimcha
See the floating point on older Cray supercomputers for a real example of trading correctness for speed :)
ShuggyCoUk
Re floating point. I've worked for several investment banks and I can assure you they do (or did, one was Lehmans) use it.
anon
+9  A: 

Sir Tony Hoare famously said, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."

The first part of the quote has been all but forgotten (it doesn't roll off the tongue as easily), and thus many inexperienced engineers don't take performance into consideration during the design phase of a software project. This is almost always a fatal mistake, as later on a badly designed application is very difficult to optimise due to fundamental design flaws. At the same time, there is no point trying to save CPU cycles by using clever tricks when the performance bottlenecks aren't known yet.

As to your question, I think an application properly designed to cope with its particular performance requirements won't need to be coded in an unmaintainable or "unclean" way. It's only when those performance bottlenecks are discovered (e.g. you discover your application spends 90% of its time in 10% of the code) that you might want to consider sparingly applying optimisation tricks in small amounts of your code, so that it remains maintainable and easy to understand.

The great thing about many Web applications is that performance can be drastically improved using various caching techniques. As you control the server environment (and, like you say, hardware is cheap) you can make sure you cache the hell out of those commonly-used parts of your Web app. This doesn't really make for unmaintainable code if you use an abstraction layer. Facebook is a good example of a Web application that famously exploits caching (memcached) to its advantage.
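
As a sketch of what such an abstraction layer could look like (the ICache interface and names here are purely illustrative; HttpRuntime.Cache is ASP.NET's built-in cache, and you could put memcached behind the same interface later):

    using System;
    using System.Web;
    using System.Web.Caching;

    // Illustrative abstraction: controllers depend on ICache, not on a
    // concrete cache, so swapping in memcached later doesn't touch them.
    public interface ICache
    {
        T GetOrAdd<T>(string key, Func<T> loader, TimeSpan ttl) where T : class;
    }

    public class AspNetCache : ICache
    {
        public T GetOrAdd<T>(string key, Func<T> loader, TimeSpan ttl) where T : class
        {
            var cached = HttpRuntime.Cache[key] as T;
            if (cached != null)
                return cached;

            T value = loader(); // e.g. the expensive database query
            HttpRuntime.Cache.Insert(key, value, null,
                                     DateTime.UtcNow.Add(ttl),
                                     Cache.NoSlidingExpiration);
            return value;
        }
    }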

Jason Davies
Great answer, thanks.
Chad Moran
The first part is forgotten because the 97% of that time has become 99.9% now.
Paco
+1  A: 

Agree with this to an extent. Developer time is costly, and profiling and optimizing code is a very expensive way to gain what is probably not very much performance. Having said that, it depends on the type of application and the environment you're working in.

If you're working on a web application, then you can make massive improvements by fixing a few simple issues (mainly on the client-side). Things like reducing HTTP requests by concatenating CSS/JS files, building image sprites, etc... will give you huge gains compared to actually profiling code, and are a very good use of developer time.
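
As a rough sketch of the concatenation idea (the file names are made up, and production code would add caching headers and minification):

    using System.IO;
    using System.Web;

    // Sketch: serve several stylesheets as one response to save HTTP requests.
    // Register it in web.config against a path like "all.css".
    public class CombinedCssHandler : IHttpHandler
    {
        private static readonly string[] Files =
            { "reset.css", "layout.css", "theme.css" }; // made-up file names

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/css";
            foreach (string file in Files)
            {
                string path = context.Server.MapPath("~/styles/" + file);
                context.Response.Write(File.ReadAllText(path));
                context.Response.Write("\n");
            }
        }

        public bool IsReusable { get { return true; } }
    }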

I don't know that I agree with the 'hardware is cheaper than developers' quote though. Of course hardware can help you scale your application and give it more performance oomph, but the last thing you want to do is rely on beefy hardware. If your software is too tightly coupled to your hardware you lose a lot of flexibility in terms of moving to new data centers, upgrading servers, etc... and not having that flexibility can be very costly in the longer term. Say you decide that the way to scale your application efficiently is to move to Amazon's EC2 infrastructure. If your application requires 32GB of RAM on each server you're going to find a move like this might require a re-write.

Andy Hume
+1  A: 

All good answers. The choice between speed and clean code is a false dichotomy.

I haven't seen you work, but I've watched others, and it's always the same story:

"It's not fast enough. I think the problem is in the XXX code. I think I'll tweak that and see if it helps."

  • You don't know the problem is there.
    You're guessing.

  • Never do anything based on a guess.
    (Of course you would never do that, would you? But most people do.)

You could profile the code.

My favorite method is to just halt it a few times while it's being slow, and ask it what the heck it's doing.

What it's doing is usually a surprise that one couldn't have guessed.

Mike Dunlavey
A: 

A standard definition of quality is "conformance to client expectations (requirements)". If you have done good requirements gathering, then you have agreed to certain performance criteria. If your application meets those criteria, then you are wasting your, or the client's, time and money trying to do better.

Writing loosely coupled, cohesive, and easy to read code just reduces the risk and cost associated with bugs and changes to the requirements. If you are prepared to accept the risk of 'ball of mud' coding, then go ahead. Me, I like to make a profit.

Matthew
+1  A: 

Before talking about performance you should really learn about big O notation; you can look it up in any book about algorithms or on Wikipedia.

Big O notation describes how a function's running time grows with the size of its input. For instance, looping over a list of N items (say, counting from 0 to 100) is O(N); no matter how high you count, the notation stays the same. Such a function has linear runtime, and a plain traversal cannot be improved upon.

Now if, for each item in that list, you loop over the whole list again, you get O(N^2), which is N times the work and has a much worse runtime than O(N).

When writing applications that have to perform well, we talk about achieving a good runtime in O notation terms. Whether a window takes <0.1 seconds or >1 second doesn't really matter in those terms if both versions use the same algorithms.

That means the seconds you shave off probably don't change the O notation at all, so you're not really optimizing your code in a fundamental way. So for you, writing MVC in ASP.NET, I would recommend focusing on clean and readable code instead :)

Once you have learned about O notation you will be able to choose algorithms (how to sort lists, populate them, retrieve data) that use the least runtime in O notation, and that knowledge will probably make your code much faster than shaving seconds off by writing tight loops ever will.
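
To make that concrete, here's a small illustration (hypothetical data) of why picking the right structure beats shaving seconds: looking items up in a List is O(N) per lookup, while a HashSet is O(1) on average:

    using System;
    using System.Collections.Generic;

    class BigODemo
    {
        static void Main()
        {
            var ids = new List<int>();
            for (int i = 0; i < 100000; i++)
                ids.Add(i);

            // O(N) per lookup: List.Contains scans the list front to back.
            bool slow = ids.Contains(99999);

            // O(1) per lookup on average: HashSet hashes straight to the item.
            var idSet = new HashSet<int>(ids);
            bool fast = idSet.Contains(99999);

            // With thousands of lookups, the HashSet wins by orders of
            // magnitude; no amount of loop-tweaking gets the List close.
            Console.WriteLine("{0} {1}", slow, fast);
        }
    }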

Makach
A: 

Good design often sacrifices some performance to improve the overall program. For example, writing your code in layers has a cost, but we do it anyway because it makes the code easier to change over the long term. We use remote app servers not because they are the most efficient approach, but because they scale.

I recall that in Code Complete 2, McConnell gives an example where making the code horribly difficult to read was a necessary optimization. That particular example was an encryption algorithm: the program was collapsed into one method to eliminate the overhead of calling a function. So there is indeed a time and place for this, but I believe it to be rare.
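
For a flavor of that kind of trade-off (an illustration only, not McConnell's actual code), compare a readable helper call with its hand-inlined equivalent:

    // Illustration only; not McConnell's actual code.
    static class Scrambler
    {
        // Readable version: the intent is clear, one call per byte.
        static byte Mix(byte b, byte key)
        {
            return (byte)((b ^ key) + 7);
        }

        public static void EncodeReadable(byte[] data, byte key)
        {
            for (int i = 0; i < data.Length; i++)
                data[i] = Mix(data[i], key);
        }

        // Hand-inlined version: avoids call overhead, but buries the intent.
        public static void EncodeInlined(byte[] data, byte key)
        {
            for (int i = 0; i < data.Length; i++)
                data[i] = (byte)((data[i] ^ key) + 7);
        }
    }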

As for solving performance problems: in most cases I have found performance issues to be either database/IO related or a bug (such as a memory leak). As others have suggested, profiling is the way to go, though it can still be tricky to track down many such issues.

As for the hardware issue: faster hardware relaxes the need for optimized code but does not eliminate it. Really, it just allows us to use less-than-optimal languages and do really nice GUI stuff.

Greg Ogle