Hi, following the textbook, I do measure performance whenever I try to optimize my code. Sometimes, however, the performance gain is rather small and I can't decide whether I should implement that optimization.

For example, when a fix shortens an average response time from 100ms to 90ms under some conditions, should I implement it? What if it shortens 200ms to 190ms? How many conditions should I test before I can conclude that the fix will be beneficial overall?

I guess it's not possible to give a straightforward answer to this, as it depends on too many things, but is there a good rule of thumb I should follow? Are there any guidelines or best practices?
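For concreteness, this is roughly how I measure (a minimal Python sketch; old_version, new_version, and the input sizes are hypothetical stand-ins for my real code and conditions):

    import statistics
    import timeit

    # Hypothetical stand-ins for the current and the optimized implementation.
    def old_version(n):
        return [i * i for i in range(n)]

    def new_version(n):
        return list(map(lambda i: i * i, range(n)))

    # Hypothetical "conditions": different input sizes.
    for n in (100, 10_000, 1_000_000):
        old = timeit.repeat(lambda: old_version(n), number=10, repeat=5)
        new = timeit.repeat(lambda: new_version(n), number=10, repeat=5)
        # Compare medians of repeated runs so one-off noise doesn't dominate.
        speedup = statistics.median(old) / statistics.median(new)
        print(f"n={n}: median speedup {speedup:.2f}x")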

EDIT: Thanks for the great answers! I guess the moral of the story is that there is no easy way to tell whether you should, but there ARE guidelines that can aid the process: things you should consider, things you shouldn't do, etc. This particular time I ended up implementing the fix, even though it turned a few lines of code into 20-30 lines, because our app is very performance-critical and the fix was a consistent 10% gain in various realistic cases.

+2  A: 

If it speeds up your program at all, why not implement it? You have already done the work of creating the new implementation, so applying it costs you nothing extra.

Unless the code is THAT much harder to understand.

Also, 100 ms to 90 ms is a 10% gain in performance. A 10% gain should not be taken lightly.

The real question is, if it only took 100 ms to run in the first place, what was the point in trying to optimize it?

Kevin Crowell
@Kevin - great answer overall, but your last question has an obvious answer. If the server needs to do 1MM of these calculations, a 10ms difference per call is the difference between 100k seconds and 90k seconds of total runtime. Pretty significant.
DVK
@DVK Right, but the OP alluded to the fact that it was an insignificant gain. I was only pointing out that the optimization effort may have been better spent somewhere else.
Kevin Crowell
Kevin - yeah, I agree completely that a more impactful optimization is preferred over a less impactful one. I merely pointed out that 10% should not be discounted on the basis of a small absolute saving unless you know that the small absolute saving is not multiplied by a very large factor.
DVK
+5  A: 

Donald Knuth made the following two statements on optimization:

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" [2]

and

"In established engineering disciplines a 12 % improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"[5]

src: http://en.wikipedia.org/wiki/Program_optimization

darlinton
+2  A: 

As long as it's fast enough, you don't need to optimise any more. But then, you wouldn't even bother profiling if that were the case...

Dean Harding
+7  A: 

I think the rule of thumb (at least for me) is two-fold:

  1. "It matters if it matters"--in the business world, this generally means that it matters if the clients care. That is, if the end users will "notice" the difference between 100ms and 90ms (I'm not being facetious here), then it matters.
  2. If "it matters," then you will want to test your code thoroughly against a realistic variety of use cases that are likely to arise or at least may arise. If an optimization speeds up code in 50% of cases, but actually runs slower than what you previously had the other 50% of the time, obviously, it may not be worth implementing.

Regarding point 1 above: by suggesting an end user of your software might "notice" a 10ms difference, I don't mean to suggest that they will actually visibly see a difference. But if your app runs on a server with millions of connections and every little speed increase takes a substantial load off the server, that might matter to the client running the server. Or if your app performs extremely time-critical work, this is another case where the result of a 10ms speedup might be noticeable, even if the speedup itself isn't.
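A minimal sketch of the kind of check point 2 describes (Python; the implementations and the scenarios are hypothetical stand-ins):

    import time

    # Hypothetical implementations; imagine optimized_impl is the candidate change.
    def current_impl(items):
        return sorted(items)

    def optimized_impl(items):
        return sorted(items, key=lambda x: x)  # stand-in for the "optimized" path

    # Hypothetical scenarios; the point is to cover the variety of inputs the
    # code will actually see, not just the favourable ones.
    scenarios = {
        "already sorted": list(range(10_000)),
        "reverse sorted": list(range(10_000, 0, -1)),
        "scrambled": [i * 7919 % 10_000 for i in range(10_000)],
    }

    for name, data in scenarios.items():
        # In practice, repeat each measurement (e.g. with timeit) to average
        # out noise; single runs keep this sketch short.
        start = time.perf_counter()
        current_impl(data)
        t_old = time.perf_counter() - start

        start = time.perf_counter()
        optimized_impl(data)
        t_new = time.perf_counter() - start

        verdict = "faster" if t_new < t_old else "slower"
        print(f"{name}: {t_old * 1000:.1f}ms -> {t_new * 1000:.1f}ms ({verdict})")

If the candidate loses on any scenario your users actually hit, that is a strong argument against it.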

Dan Tao
+1  A: 

In most cases, your time is more valuable than the computer's. If you think it'll take you half an hour longer to work out what the code is doing later (say if there's a bug in it), and it's only saved you a few seconds, ever, you're at a net loss.

kibibu
+7  A: 

The only sensible approach to your question is something along the lines of "when the benefit is large enough to warrant the time you invest in exploring, implementing and testing the optimization."

The "benefit is large enough" is extremely subjective. Can you or your employer sell more units of software if you make this change? Will your user base notice? Will it give you personal gratification to have the fastest-possible code? Which of those or similar questions apply is something only you can know.

By and large, most of the software I have written (in a 20+ year career) has been "fast enough" out of the box, and the code I cared to optimize presented itself as an obvious bottleneck to the end users: Queries taking a long time, scrolling too slow, that sort of thing.

Eric J.
+3  A: 
  • Is the optimization obfuscating your code too much?
  • Do you really need the optimization? If your app runs fine as it is, readability of the code is probably more important
  • Did you work on the general design and algorithms of your application before trying small hacky optimizations?
f4
+2  A: 

If the performance gain is small, consider the other factors: maintainability, risk of making the change, understandability, etc. If it reduces the ability to maintain or understand the code, it probably isn't worth doing. If it improves those attributes, then it's more reason to implement the change.

Michael Dean
+3  A: 

You should focus optimisation efforts on the parts of code that account for the most runtime. If a particular piece of code takes up 80% of the total runtime, then optimising it to reduce the time it takes by 5% will have as much impact as reducing the time of the rest of the code by 20% (0.80 x 5% = 4% of the total, the same as 0.20 x 20%).

In general, optimisations make code less readable (not always, but often). Therefore you should avoid optimising until you are sure that there is a problem.
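To find the parts that account for the most runtime before changing anything, here is a minimal sketch using Python's built-in cProfile (the profiled program is a hypothetical stand-in):

    import cProfile
    import pstats

    # Hypothetical program: one hot function and one cheap one.
    def hot_path():
        return sum(i * i for i in range(2_000_000))

    def cold_path():
        return sum(range(1_000))

    def main():
        for _ in range(5):
            hot_path()
            cold_path()

    profiler = cProfile.Profile()
    profiler.runcall(main)
    # Sort by cumulative time to see which functions dominate the runtime;
    # those are the ones worth optimising first.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)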

KarstenF
+1  A: 

It depends very much on the usage scenario. I'll assume here that the code in question has been profiled and thus it is known to be the bottleneck--i.e. not just "this could be faster", but "the program would give results/finish running faster if this were faster". In situations where this is not the case--e.g. if you spend 99% of your time waiting for more data to come over an Ethernet connection--then you should care about correctness but not optimize for speed.

  • If you are writing a piece of user interface code, what you care about is perceived speed. Generally anything under ~100 ms is perceived as "instant"--no point speeding it up.
  • If you are writing a piece of code for a giant server farm, then if the cost of your salary to make the code fast is less than the cost of the extra electricity for the server farm, it's worthwhile. (But be sure to prioritize your time.)
  • If you are writing a piece of code that is used rarely or when unattended, as long as it completes in a semi-sane duration, don't worry about it. Install scripts tend to be of this sort (unless you start running into many minutes, at which point users might start abandoning the install because it's taking too long).
  • If you are writing code to automate a task for someone else, then if (your time spent coding + their time spent using the optimized code) is less than (their time spent using the slow code), it's worthwhile. If you're doing this in a commercial setting, weight this by your respective salaries (a break-even sketch follows this list).
  • If you are writing library code that will be used by many thousands of people, always make it faster if you have time to.
  • If you are under time pressure to simply have something working e.g. as a demo, don't optimize (except through sensible choice of algorithms from libraries) unless the result would be so slow that it isn't even "working".
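The server-farm and task-automation bullets are just break-even arithmetic; here is a sketch with entirely made-up numbers (every figure is hypothetical):

    # Break-even check for the "automate a task for someone else" case.
    # Every figure below is hypothetical.
    dev_hours = 8                 # time you'd spend on the optimization
    dev_rate = 75.0               # your cost per hour, in dollars
    runs_per_year = 2_000         # how often the task runs
    seconds_saved_per_run = 10.0
    user_rate = 50.0              # the user's cost per hour, in dollars

    cost = dev_hours * dev_rate
    yearly_saving = runs_per_year * seconds_saved_per_run / 3600 * user_rate

    print(f"one-time cost ${cost:.0f}, saves ${yearly_saving:.0f}/year")
    if yearly_saving >= cost:
        print("pays for itself within the first year")
    else:
        print(f"breaks even after {cost / yearly_saving:.1f} years")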

One of the biggest annoyances for me personally is finding software that perhaps initially fell into one category and then later fell into another, but for which nobody went back to do the needed optimizations. Until recently, JavaScript performance was a great example of this. Moral of the story: don't just decide once; revisit the issue as the situation demands.

Rex Kerr