After my last question, I want to understand when an optimization is really worth the time a developer spends on it.

Is it worth spending 4 hours to have queries that are 20% quicker? Yes, no, maybe, yes if...?

Is 'wasting' 7 hours to switch a task to another language to save about 40% of the CPU usage 'worth it'?

My normal iteration for a new project is:

  1. Understand what the customer wants, and what he needs;
  2. Plan the project: what languages and where, database design;
  3. Develop the project;
  4. Test and bug-fix;
  5. Final analysis of the running project and final optimizations;
  6. If the project requires it, a further analysis of the real-life usage of resources, followed by further optimization.

"Write good, maintainable code" is implied.

Obviously, the big 'optimization' part happens at point #2, but often when reviewing the code after the project is over I find some sections that, even if they do their job well, could be improved. This is the rationale for point #5.

To give a concrete example of the last point: suppose I expect 90% of queries to be SELECTs and 10% to be INSERT/UPDATEs, so I index the table heavily. But after 6 months I see that in real life there are 10% SELECTs and 90% INSERT/UPDATEs, so query speed is no longer the thing to optimize: every write now pays the cost of maintaining those indexes. This is the first example that comes to my mind (and obviously this is more a 'patch' to an initial mis-design than an optimization ;).
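
As a rough illustration of that trade-off (a minimal sketch using Python's built-in sqlite3 module; the orders table and its indexes are invented for the example), the same batch of INSERTs gets measurably slower once the table carries extra indexes:

    import sqlite3
    import time

    def time_inserts(indexed, rows=50000):
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
        if indexed:
            # The kind of indexing you'd add for a SELECT-heavy workload
            conn.execute("CREATE INDEX idx_customer ON orders (customer)")
            conn.execute("CREATE INDEX idx_total ON orders (total)")
        start = time.perf_counter()
        conn.executemany(
            "INSERT INTO orders VALUES (?, ?, ?)",
            ((i, "cust%d" % (i % 100), i * 1.5) for i in range(rows)),
        )
        conn.commit()
        return time.perf_counter() - start

    print("no indexes:  %.3fs" % time_inserts(False))
    print("two indexes: %.3fs" % time_inserts(True))  # every write maintains both indexes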

Please note that I'm a developer, not a businessman - but I like to have a clear conscience by giving my clients the best, where possible.

I mean, I know that if I spend 50 hours to gain a 5% speed-up on an application used by 10 users, it maybe isn't worth the time... but what about when it is?

When do you think that an optimization is crucial?

What formula do you usually apply, knowing that the time spent optimizing (and the final gain) is not always quantifiable on paper?

EDIT: Sorry, but I can't accept an answer like 'until people complain about it, optimization is not needed'. That may be a business view (questionable, IMHO), but it is not a developer's answer, nor (IMHO too) a common-sense one. I know this question is really subjective.

I agree with Cheeso that performance optimization should be deferred until after some analysis of the project's real-life usage and load, but a small'n'quick optimization can be done immediately after the project is over.

Thanks to all ;)

+7  A: 

YAGNI. Unless people complain, a lot.


EDIT: I built a library that was slightly slower than the alternatives out there. It was still gaining usage and share because it was nicer to use and more powerful. I continued to invest in features and capability, deferring any work on performance.

At some point there were enough features, and performance bubbled to the top of the list; I finally spent some time working on perf improvement, but only after considering the effort for a long time.

I think this is the right way to approach it.

Cheeso
Somewhat agree, but you can't know exactly from the beginning how many visitors your site will have, or how much data the project will deal with in the next 3 years ;)
DaNieL
Worry about what happens in 3 years in 2.5 years; today it is a waste of money to worry about something that you can't know.
fuzzy lollipop
Ah, I see, so just assume your product is good quality until you start getting bug reports. How Agile of you.
Aaronaught
There's a problem in assuming people will actually lodge complaints rather than simply not using your product. A typical user might not be able to perceive a problem as being performance-related, but still get frustrated over it. Not necessarily the best metric for determining whether you need to improve performance.
Bryan M.
You have a reasonable viewpoint. I still think performance should be deferred, in general.
Cheeso
Totally agree with Bryan
DaNieL
+5  A: 

Optimization is worth it when it is necessary.

If we have promised the client response times of 5 seconds or less on holiday package searches, with the system running on a single Oracle server (of whatever spec), and the searches are taking 30 seconds at peak load, then the optimization is definitely worth it - because we're not going to get paid otherwise.

When you are initially developing a system, you (if you are a good developer) are designing things to be efficient without wasting time on premature optimization. If the resulting system isn't fast enough, you optimize.

But your question seems to suggest that there's some hand-wavy additional optimization that you might do if you feel it's worth it. That's not a good way to think about it, because it implies that you haven't got a firm target for what is acceptable to begin with. You need to discuss it with the stakeholders and set some kind of target before you start worrying about what kind of optimizations to do.

U62
++ This is good common-sense (and other answers have it too). Personally, where I part company with most people is in the step where performance is becoming an issue and something needs to be done about it. That is where I rely on the little-known magic technique of stackshots (http://stackoverflow.com/questions/406760/whats-your-most-controversial-programming-opinion/1562802#1562802) to find out what is going on and what needs to be fixed.
Mike Dunlavey
A: 

If the client doesn't see a need to do performance optimization, then there's no reason to do it.

Defining a measurable performance SLA with the client near the beginning of the project (e.g., 95% of queries complete in under 2 seconds) lets you know whether you're meeting that goal or have more optimization to do. Performance testing at current and estimated future loads gives you the data you need to see if you're meeting the SLA.
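
A minimal sketch in Python of checking such an SLA, assuming you already collect per-query latencies somewhere (the sample numbers below are made up):

    def p95(latencies):
        """Return the 95th-percentile latency (one simple definition of it)."""
        ordered = sorted(latencies)
        idx = max(0, int(len(ordered) * 0.95) - 1)
        return ordered[idx]

    samples = [0.4, 0.6, 0.7, 0.9, 1.1, 1.2, 1.3, 1.5, 1.8, 2.6]  # seconds
    threshold = 2.0  # seconds, from the SLA
    status = "met" if p95(samples) <= threshold else "MISSED"
    print("p95 = %.2fs, SLA %s" % (p95(samples), status))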

Kaleb Brasee
A: 

Optimization is rarely worth it before you know what needs to be optimized. Remember that if I/O is basically idle and CPU usage is low, you're not getting anything out of the computer anyway. Obviously you don't want the CPU pegged all the time and you don't want to run out of I/O bandwidth, but realize that expecting the machine to sit basically idle all day while still performing intense operations is unrealistic.

Wait until you start to reach a predefined threshold (80% utilization is the mark I usually use, others think that's too high/low) and then optimize at that point if necessary. Keep in mind that the best solution may be scaling up or scaling out and not actually optimizing the software.
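
A rough sketch of that kind of threshold check in Python, assuming the third-party psutil package is installed (pip install psutil):

    import psutil

    THRESHOLD = 80.0  # percent - the mark mentioned above; tune to taste

    cpu = psutil.cpu_percent(interval=1)   # CPU usage sampled over 1 second
    mem = psutil.virtual_memory().percent  # current memory usage

    for name, value in (("CPU", cpu), ("memory", mem)):
        if value >= THRESHOLD:
            print("%s at %.0f%% - time to profile, scale up, or scale out" % (name, value))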

Brian Hasden
A: 

As everyone said in the other answers: when it makes monetary sense to change something, it needs changing. In most cases, good enough wins the day. If the customers aren't complaining, then it is good enough; if they are complaining, fix it enough so they stop. Agile methodologies will give you some guidance on how to know when enough is enough.

Who cares if something is using 40% more CPU than you think it needs to, if it is working and the customers are happy? Really simple: get it working and maintainable, then wait for complaints that will probably never come.

If what you are worried about were really a problem, no one would ever have started using Java to build mission-critical server-side applications - or Python or Erlang or anything else that isn't C, for that matter. And if they hadn't, nothing would get done in a time frame that would even win that first customer you are so worried about losing. You will know that you need to change something well before it becomes a problem.

fuzzy lollipop
A: 

Optimization is worth the time spent on it when you get good speedups for little time spent optimizing (obviously). To get that, you need tools/techniques that lead you very quickly to the code whose optimization yields the most benefit.

It is common to think that the way to find that code is by measuring the time spent by functions, but to my mind that only provides clues - you still have to play detective. What takes me straight to the code is stackshots. Here is an example of a 40-times speedup, achieved by finding and fixing several problems. Others on SO have reported speedup factors from 7 to 60, achieved with little effort.*

*(7x: Comment 1. 60x: Comment 30.)
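
For a flavor of the technique, here is one low-tech way to take stackshots of a running Python program (a sketch only - the technique is usually described with a debugger's pause button, and this variant is Unix-specific):

    import signal
    import traceback

    def dump_stack(signum, frame):
        # Print the full call stack at the moment of interruption;
        # each dump is one "stackshot".
        traceback.print_stack(frame)

    signal.signal(signal.SIGQUIT, dump_stack)  # Ctrl-\ in a Unix terminal

Take a handful of shots while the program feels slow; if the same call appears on, say, 4 of 5 of them, that call accounts for roughly 80% of the running time and is the thing to fix.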

Mike Dunlavey
A: 

Use Amdahl's law. It shows you the overall improvement you get from optimizing a certain part of a system.
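
For reference: if the part you optimize accounts for a fraction p of the total running time, and you make that part s times faster, the overall speedup is

    S = 1 / ((1 - p) + p / s)

So doubling the speed (s = 2) of a routine that takes 40% of the runtime (p = 0.4) gives S = 1 / (0.6 + 0.2) = 1.25 - only a 25% overall improvement, which puts a hard ceiling on what a local optimization can buy you.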

Also: If it ain't broke, don't fix it.

lmsasu
+1  A: 

Good postings, everyone.

Have you looked at unnecessary use of transactions for simple SELECTs? I got burned by that one a few times... I also did some code cleanup and found MANY object graphs being returned when maybe 10 records were needed... and so on. Sometimes it's not YOUR code per se, but someone cutting corners... Good luck!
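
A minimal sketch (Python's stdlib sqlite3 again, with a made-up orders schema) of the over-fetching half of that:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(i, "cust%d" % i, i * 2.0) for i in range(1000)])

    # Wasteful: pull the whole table across, then keep 10 rows
    rows = conn.execute("SELECT * FROM orders").fetchall()[:10]

    # Better: let the database do the limiting
    rows = conn.execute(
        "SELECT id, customer, total FROM orders ORDER BY id LIMIT 10"
    ).fetchall()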

Josh Molina
++ Welcome to SO!
Mike Dunlavey
+4  A: 

There are (at least) two categories of "efficiency" to mention here:

  • UI applications (and their dependencies), where the most important measure is the response time to the user.

  • Batch processing, where the main indicator is total running time.


In the first case, there are well-documented rules about response times. If you care about product quality, you need to keep response times short. The shorter the better, of course, but the breaking points are about:

  • 100 ms for an "immediate" response; animation and other "real-time" activities need to happen at least this fast;

  • 1 second for an "uninterrupted" response. Any more than this and users will be frustrated; you also need to start thinking about showing a progress screen past this point.

  • 10 seconds for retaining user focus. Any worse than this and your users will be pissed off.

If you're finding that several operations are taking more than 10 seconds, and you can fix the performance problems with a sane amount of effort (I don't think there's a hard limit but personally I'd say definitely anything under 1 man-month and probably anything under 3-4 months), then you should definitely put the effort into fixing it.

Similarly, if you find yourself creeping past that 1-second threshold, you should be trying very hard to make it faster. At a minimum, compare the time it would take to improve the performance of your app with the time it would take to redo every slow screen with progress dialogs and background threads that the user can cancel - because it is your responsibility as a designer to provide that if the app is too slow.

But don't make a decision purely on that basis - the user experience matters too. If it'll take you 1 week to stick in some async progress dialogs and 3 weeks to get the running times under 1 second, I would still go with the latter. IMO, anything under a man-month is justifiable if the problem is application-wide; if it's just one report that's run relatively infrequently, I'd probably let it go.
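
A minimal stdlib-only sketch of that "background thread plus cancellable progress" pattern (a real app would wire this to a GUI progress dialog instead of printing):

    import threading
    import time

    def long_task(cancel):
        for step in range(1, 101):
            if cancel.is_set():
                print("cancelled by user")
                return
            time.sleep(0.05)               # stand-in for real work
            if step % 20 == 0:
                print("%d%% done" % step)  # progress callback would go here

    cancel = threading.Event()
    worker = threading.Thread(target=long_task, args=(cancel,))
    worker.start()

    time.sleep(2)   # pretend the user clicks Cancel after 2 seconds
    cancel.set()
    worker.join()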

If your application is real-time - graphics-related for example - then I would classify it the same way as the 10-second mark for non-realtime apps. That is, you need to make every effort possible to speed it up. Flickering is unacceptable in a game or in an image editor. Stutters and glitches are unacceptable in audio processing. Even for something as basic as text input, a 500 ms delay between the key being pressed and the character appearing is completely unacceptable unless you're connected via remote desktop or something. No amount of effort is too much for fixing these kinds of problems.


Now for the second case, which I think is mostly self-evident. If you're doing batch processing then you generally have a scalability concern. As long as the batch is able to run in the time allotted, you don't need to improve it. But if your data is growing, if the batch is supposed to run overnight and you start to see it creeping into the wee hours of the morning and interrupting people's work at 9:15 AM, then clearly you need to work on performance.

Actually, you really can't wait that long; once it fails to complete in the required time, you may already be in big trouble. You have to actively monitor the situation and maintain some sort of safety margin - say a maximum running time of 5 hours out of the available 6 before you start to worry.
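
A sketch of that safety-margin check; run_nightly_batch() is a hypothetical stand-in for whatever the real job does:

    import time

    WINDOW_HOURS = 6.0           # the overnight slot available
    SAFETY_BUDGET_HOURS = 5.0    # start worrying past this point

    def run_nightly_batch():
        time.sleep(1)            # hypothetical stand-in for the real batch

    start = time.perf_counter()
    run_nightly_batch()
    elapsed_hours = (time.perf_counter() - start) / 3600.0

    if elapsed_hours > SAFETY_BUDGET_HOURS:
        # e.g. page the on-call person or open a ticket automatically
        print("WARNING: batch used %.1fh of its %.0fh window"
              % (elapsed_hours, WINDOW_HOURS))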

So the answer for batch processes is obvious. You have a hard requirement that the batch must finish within a certain time. Therefore, if you are getting close to the edge, performance must be improved, regardless of how difficult or costly it is. The question then becomes: what is the most economical means of improving the process?

If it costs significantly less to just throw some more hardware at the problem (and you know for a fact that the problem really does scale with hardware), then don't spend any time optimizing, just buy new hardware. Otherwise, figure out what combination of design optimization and hardware upgrades is going to get you the best ROI. It's almost purely a cost decision at this point.


That's about all I have to say on the subject. Shame on the people who respond to this with "YAGNI". It's your professional responsibility to know or at least find out whether or not you "need it." Assuming that anything is acceptable until customers complain is an abdication of this responsibility.

Simply because your customers don't demand it doesn't mean you don't need to consider it. Your customers don't demand unit tests, either, or even reasonably good/maintainable code, but you provide those things anyway because it is part of your profession. And at the end of the day, your customers will be a lot happier with a smooth, fast product than with any of those other developer-centric things.

Aaronaught