views:

306

answers:

7

When writing application code, it's generally accepted that premature micro-optimization is evil and that profiling first is essential, though there is some debate about how much higher-level optimization, if any, to do up front. However, I haven't seen any guidelines for when or how to optimize generic code that will be part of a library or framework, where you never know exactly how your code will be used in the future. What are some guidelines for this? Is premature micro-optimization still evil? How should performance be balanced with other design goals such as ease of use, ease of demonstrating correctness, ease of implementation, and flexibility?

+4  A: 

I would say that optimization must take a back seat to other design goals such as ease of use, ease of demonstrating correctness, ease of implementation, and flexibility.

Try to write your code intelligently, using good practices and avoiding the obvious pitfalls. Still, don't optimize until you can do so with a profiler and real use cases.
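To make "profile with real use cases" concrete, here is a minimal Python sketch (the workload function is hypothetical, standing in for whatever your library actually does):

```python
import cProfile
import io
import pstats

def build_report(n):
    # Hypothetical workload standing in for a real use case of the library.
    return ",".join(str(i * i) for i in range(n))

# Profile a realistic invocation rather than a guessed-at hot spot.
profiler = cProfile.Profile()
profiler.enable()
build_report(10_000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())  # top functions by cumulative time
```

Only the functions that actually dominate the report are worth optimizing.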

You will still encounter use cases you never anticipated, but you can't optimize for cases you never thought of.

A well-designed framework will usually be a reasonably performing one, too.

Software Monkey
+4  A: 

"How should performance be balanced with other design goals...?"

  1. Get it to work.

  2. Optimize it until it cannot be optimized further.

Note the order. "Avoid premature optimization" means optimize it after it works.

Optimization is still very, very important. Premature optimization does not mean NO optimization. It means optimize after it works.
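A tiny Python illustration of "make it work, then make it fast" (a toy example, not from the discussion): the first version is the obviously correct one, and it then serves as the oracle when you swap in a faster algorithm.

```python
def fib_naive(n):
    # First pass: obviously correct, exponential time -- "get it to work".
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_fast(n):
    # Second pass: same results, linear time -- optimized after it works.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The working version validates the optimized one.
assert all(fib_naive(i) == fib_fast(i) for i in range(15))
```

Note that the big win here comes from a better algorithm, not from micro-tuning the naive one.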

S.Lott
I would argue you should optimize only until no further optimization is required. Over-optimizing is just as bad as premature optimization.
Jon B
@Jon B: I'm hard-pressed to imagine what over-optimization can even mean. Optimization has limits, formally defined by the algorithmic complexity.
S.Lott
Spending more time making your code run faster when it already runs fast enough is over-optimization.
Jon B
When you're creating a library for general consumption how do you ever know if it's "fast enough". I mean really, most code can only be realistically optimized to a certain point, and if it's for general use I'd say that common sense should tell you how much is enough.
ctacke
@ctacke - I agree on the common sense point. We do need some kind of goal though. Otherwise, it's done when I'm just not smart enough to make it better, and I'm free to spend as long as I like making it faster just for the sake of it.
Jon B
There is no over-optimization. http://en.wikipedia.org/wiki/Computational_complexity_theory. There are limits beyond which you cannot optimize based entirely on the nature of the algorithm. There is pointless fussing, but after a while it isn't optimization.
S.Lott
@S.Lott - I think you're missing my point. If I need a function that executes in < 100ms and you have it down to 50ms, we're done. It doesn't matter that you could get it down to 10ms - that would be wasted effort.
Jon B
@Jon B: Agreed on "fast enough". Sometimes you can't get below 100ms because the algorithmic complexity means NOTHING can go faster. If you have an O(n**3) algorithm, you need a better algorithm. You can't "over-optimize" because you have the wrong algorithm.
S.Lott
One might argue that switching to a better algorithm is another form of optimizing - depending on how wide your definition of "optimize" is.
Baginsss
@Baginsss: Agreed. That's why you can't over-optimize. You can optimize or you can switch to something better.
S.Lott
A: 

You're right that it's not always clear where the best bang for the buck is for your time. Your best bet is to be a user of your framework as well as its designer.

Employ your own framework in a non-trivial application and try to exercise the whole range of functionality. The more you use it, the clearer it will become which things most need to be optimal.

Also, get feedback and suggestions from other users as frequently as possible. You will inevitably find that other people want to do things with your framework that you would never think of.

Bill Karwin
A: 

I think the best approach is to have a really good set of use cases for how your framework will be exercised. Only then will you have any good idea of whether the performance is adequate for its intended use.

Sure, you're never going to know how somebody will use your framework in the future (in the early years of my career, it never failed to amaze me how creatively users put my software to use - in ways I'd never envisaged!), but thinking hard about how you expect it to be used should get you most of the way there.

Craig Shearer
A: 

If there are sufficiently different uses of your framework as to require conflicting optimizations, you should consider refactoring some of your code to de-couple these uses. For example, the .NET framework has Array.Copy and Buffer.BlockCopy. It would seem like Array.Copy is the only function you need, but Buffer.BlockCopy is optimized specifically for arrays of primitive types.

To get the best of both worlds, sometimes you actually need both worlds.
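The same decoupling can be sketched in Python (function names here are hypothetical, chosen to mirror the Array.Copy/Buffer.BlockCopy split): a generic path that works for any sequence, plus a specialized path for flat byte buffers.

```python
def copy_items(src, dst, count):
    # Generic path: works for any mutable sequence, element by element
    # (analogous to Array.Copy handling arbitrary element types).
    for i in range(count):
        dst[i] = src[i]

def block_copy(src, dst, count):
    # Specialized path for flat byte buffers: one bulk slice assignment
    # (analogous to Buffer.BlockCopy's raw-memory copy of primitives).
    dst[:count] = src[:count]

src = bytearray(b"hello world")
generic_dst = [0] * len(src)
fast_dst = bytearray(len(src))
copy_items(src, generic_dst, 5)
block_copy(src, fast_dst, 5)
```

The callers who need flexibility use the generic function; the callers who need speed on primitive buffers use the specialized one, and neither API compromises the other.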

Jon B
+2  A: 

I heard an interesting and very enlightening discussion about the famous Knuth quote on a podcast recently (I think it was Deep Fried Bytes), which I'll try to summarize:

Everyone knows the famous quote: Premature optimization is the root of all evil..
However, that's only half of it. The full quote is:

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Look at this carefully - say about 97% of the time.
The other side of that statement is about 3% of the time, "small" efficiencies are critical.

My monitor displays about 50 lines of code. Statistically, at least 1-2 lines of code on every screen will contain something performance-sensitive! Following the common wisdom of 'do it now, optimize it later' doesn't seem like such a cunning plan when you consider that every screen may hold a possible performance issue.

IMHO you should always be thinking about performance. You shouldn't expend a great deal of effort or sacrifice maintainability for it until proven by profiling/testing, but you should definitely have it in the back of your mind.

I'd personally apply this to generic code like this:
You are bound to have some code somewhere which, when you wrote it, made you think "this will be slow" or "this is a dumb algorithm, but it's not important right now, so I'll fix it later." Since you're writing a shared library and can't assert that method A will only ever be called with 5 items, you should go in and clean all this stuff up.

Once you've sorted those things out, I wouldn't bother going much further. Maybe run the profiler over your unit tests to make sure nothing dumb has snuck through, but otherwise wait for feedback from the consumers of your library.
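Running a profiler over unit tests can be as simple as the following Python sketch (the test case is a hypothetical stand-in for a library's real suite):

```python
import cProfile
import pstats
import unittest

class TestLibrary(unittest.TestCase):
    # Hypothetical test standing in for a library's real test suite.
    def test_sorting(self):
        self.assertEqual(sorted([3, 1, 2]), [1, 2, 3])

def profile_test_suite():
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLibrary)
    profiler = cProfile.Profile()
    profiler.enable()
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    profiler.disable()
    # Show the few most expensive calls; anything "dumb" surfaces here.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(3)
    return result

result = profile_test_suite()
```

The stats report doubles as a cheap regression check: if a previously quiet function suddenly dominates, something dumb probably did sneak through.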

Orion Edwards
Exactly. Efficiency should always be one of the main things one thinks about (after correctness and ease of use, of course). +8742568347534
TraumaPony
+1  A: 

My rule of thumb is:

don't optimize

The full rule is actually:

if you don't have a metric, don't optimize

This means that if you haven't measured the performance and generated a concrete metric, you shouldn't be doing anything to make the code perform better.

After all: without a metric, how do you know what to optimize?

Once you have done some profiling, you may actually be surprised by where the performance bottlenecks of your system are ... in my experience it is often the case that relatively minor changes can have a drastic impact.
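One cheap way to get such a metric in Python is `timeit`, comparing the current implementation against a candidate (both functions below are hypothetical examples):

```python
import timeit

def join_with_plus(items):
    # Existing implementation: repeated string concatenation.
    out = ""
    for item in items:
        out += item
    return out

def join_builtin(items):
    # Candidate optimization: a single bulk join.
    return "".join(items)

items = ["x"] * 1000
# The numbers are the metric: no change ships without beating the baseline.
baseline = timeit.timeit(lambda: join_with_plus(items), number=200)
candidate = timeit.timeit(lambda: join_builtin(items), number=200)
print(f"baseline={baseline:.4f}s candidate={candidate:.4f}s")
```

Without such a before/after number, "I optimized it" is just an assertion.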

Toby Hede
I agree, but I think that 'don't optimize' is far too often taken to mean "don't waste time thinking about performance", which I strongly believe is wrong.
Orion Edwards