views: 676
answers: 15

They say "Premature optimization is the root of all evil", so what's the best point in time for optimization?

+11  A: 
  1. Don't do it.
  2. (For experts only) Don't do it yet.

--Michael A. Jackson

Adam Rosenfield
+1 Clearly from someone who's had to maintain optimized code!
Adam Liss
Appears to be a direct quote....
Mitch Wheat
As would accurate typing :)
Mitch Wheat
Err sorry, I knew someone else said it, but I didn't know who. Fixed now.
Adam Rosenfield
+19  A: 

You should generally optimise once all functionality has been completed. Only then can you properly profile your code to see where the bottlenecks are, and optimisation is useless unless it's targeting specific bottlenecks.

In addition, optimisation of code followed by rewriting of it due to changing functionality means that the optimisation effort has been wasted. This is why everyone mentions "premature optimisation" whenever a question like this is asked.

By targeting your efforts, I mean there's no point doubling the speed of code that only runs for 1% of the time (a 0.5% improvement overall) if you can get a 10% speed increase in code that's used for 40% of the time (4% overall).

Or, better yet, doubling the speed of something that takes 90% of the time (45% overall), though it's rare to find those cases.
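
The arithmetic behind those percentages is just Amdahl's law. A minimal sketch in Java (the helper name is mine, invented for illustration):

    public class Amdahl {
        // Fraction of total runtime saved when a region accounting for
        // timeFraction of the total has its own time cut by localSaving
        // (doubling a region's speed cuts its time by 0.5).
        static double overallSaving(double timeFraction, double localSaving) {
            return timeFraction * localSaving;
        }

        public static void main(String[] args) {
            System.out.println(overallSaving(0.01, 0.5)); // 0.005 -> 0.5% overall
            System.out.println(overallSaving(0.40, 0.1)); // 0.04  -> 4% overall
            System.out.println(overallSaving(0.90, 0.5)); // 0.45  -> 45% overall
        }
    }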

"Get it running first, then get it running fast" is one of my mantras. Another is: "Incorrect code is the least optimised code you can get."

paxdiablo
With the caveat that some algorithmic optimisation may be justified at the design stage. For example, if you know that servicing an http request will require a sort of 100,000 records, you may be able to predict without profiling that Bogosort will be inadequate, even using agile methodologies...
Steve Jessop
No, it's always good to be lazy with optimizations.
Cheery
And optimizing shouldn't be done at the expense of readability: if it takes 25 lines of comments to explain 2 lines of code (and it's not APL), you *may* have gone overboard (you probably have)
warren
+3  A: 

The quick answer to a quick question is: When you've profiled your application and have determined where the bottleneck(s) are.

Greg Hewgill
+4  A: 

After you run profilers and examine where the bottlenecks in your application really are.

mipadi
A: 

After the functionality exists.

First get the code to work, then look for ways to improve performance or memory usage. Trying to optimize before this can lead to analysis paralysis.

Vincent Ramdhanie
+1  A: 

After the first revision has met with the enemy... I mean, users. Any profiling you do beforehand is limited to how you think the application will be used.

Once you have real world usage data then you'll know where you need to spend some time.

Chris Lively
+9  A: 

For low-level optimizations, I agree with the general consensus: optimize once you've profiled and know where the bottleneck is. However, in some cases performance will have to factor into your high-level design. For example, if performance is a real priority (and I suggest you make absolutely sure it is before you do something like this), you might want to design your program in a way that minimizes memory allocations. Things like this can fundamentally affect the way you decompose your problem. They may necessitate, for example, tighter coupling between classes so that they can share certain information rather than each "owning" its own copy.

Bottom line: Factor any high-level optimizations that affect the interaction of several functions or classes into your initial design, but save all the low-level stuff that stays within the bounds of a single function or class for after you have a working, profiled program.
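
For instance, here's a sketch of the allocation-minimizing coupling I mean (Java; the Parser class, its buffer size, and the sharing scheme are invented purely for illustration):

    // Sketch only: a hot-path class that reuses one scratch buffer across
    // calls instead of allocating per call. Deciding who owns such buffers
    // is a design-time question, not something you can patch in later.
    final class Parser {
        private final byte[] scratch = new byte[64 * 1024];

        int parse(byte[] input) {
            int n = Math.min(input.length, scratch.length);
            System.arraycopy(input, 0, scratch, 0, n);
            // ... work on scratch[0..n) with no per-call allocation ...
            return n;
        }
    }

The trade-off is the one described above: callers are now coupled to the shared buffer, and the class is no longer safe to use from multiple threads without more work.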

dsimcha
+4  A: 

My answer:

It depends on what you call "optimization". There's "optimization" as in, "I know what the queries are going to be on this database table, and I know the usage patterns, so I'll design the proper indexing strategy for those patterns". That's a design-time performance optimization that I think is best done during the implementation, before performance problems arise. That kind of optimization should be left to the stone-cold database experts. Having someone new or overzealous do an index design early on in an implementation can actually hurt performance and maintainability.

Then there's the kind of "optimization" that implies adding a bit of cruft to an application to eke out some more speed. For example, using bit-shifting in low-level code instead of multiplication, adding low-level assembly code here and there in a C program, or straying from clear, standard APIs in favor of reinventing the wheel to get better performance (e.g., "the standard String API isn't fast enough, so I'm going to do X..."). These types of optimizations should be done only when you sense a palpable performance difference, visible to end users, that is a serious detriment to the perceived quality of your application. And you'd better know what you're doing before you embark upon such tasks! Woe to him who attempts to read your code only to find an unnecessary optimization!
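
To make the bit-shifting example concrete (Java shown; note that modern compilers and JITs typically perform this strength reduction on their own, which is part of why hand-doing it is cruft):

    public class ShiftDemo {
        public static void main(String[] args) {
            int x = 1234;
            int clear = x * 8;   // states the intent directly
            int cruft = x << 3;  // the micro-optimized version
            // In Java the two agree for every int (both wrap mod 2^32).
            System.out.println(clear == cruft); // true
        }
    }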

All that being said, really and truly GROK your code before starting to optimize it. Focus on solving your business problem before your performance problems. Optimize because you HAVE TO, not because you're bored or just want to. Readability, functionality, and maintainability trump performance until performance starts affecting functionality.

Dave Markle
+2  A: 

Short answer: when it's too slow.

The real answer: when it's too slow for your users. That means you have someone using the feature and it's not fast enough for them. The first part is important: a slow feature that nobody uses isn't yet a problem worth solving.

Rich
+4  A: 
  1. The first rule of Optimization Club is, you do not Optimize.
  2. The second rule of Optimization Club is, you do not Optimize without measuring.
  3. If your app is running faster than the underlying transport protocol, the optimization is over.
  4. One factor at a time.
  5. No marketroids, no marketroid schedules.
  6. Testing will go on as long as it has to.
  7. If this is your first night at Optimization Club, you have to write a test case.

http://xoa.petdance.com/Rules_of_Optimization_Club
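
In the spirit of rules 2 and 7, a minimal measure-and-assert harness might look like the sketch below (the workload is a stand-in; a real project would reach for a proper benchmarking tool such as JMH):

    public class MeasureFirst {
        // Stand-in for whatever code is under suspicion.
        static long workUnderTest() {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
            return sum;
        }

        public static void main(String[] args) {
            // Warm up so the JIT compiles the hot path before timing it.
            for (int i = 0; i < 10; i++) workUnderTest();

            long start = System.nanoTime();
            long result = workUnderTest();
            long elapsed = System.nanoTime() - start;

            // The test case: pin down the behavior you must not break...
            if (result != 499_999_500_000L) throw new AssertionError("wrong answer");
            // ...and record the number you are trying to improve.
            System.out.printf("took %d ns%n", elapsed);
        }
    }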

Andy Lester
+2  A: 

My personal take on optimization is a little against the common wisdom.

First, you should follow the advice of avoiding optimization before you can profile. BUT each time you do, develop optimization patterns that you can reuse for a given type/class of problem. A simple example in .NET: if you are writing a function with a lot of string concatenations, code the function with a StringBuilder from the start.
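
The same pattern is just as out-of-the-box in Java, for what it's worth:

    public class Concat {
        public static void main(String[] args) {
            // Repeated += copies the whole string each iteration: O(n^2).
            String slow = "";
            for (int i = 0; i < 10_000; i++) slow += i;

            // StringBuilder appends into a growable buffer: roughly O(n).
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 10_000; i++) sb.append(i);
            String fast = sb.toString();

            System.out.println(slow.equals(fast)); // true; only the cost differs
        }
    }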

Now keep in mind that this knowledge does not transfer. You cannot take an optimization pattern and apply it to another framework without determining the impact first. Languages and frameworks do optimizations in different ways: some optimize switch/case statements automatically, while some require 8 branches before they do. But you don't know until you learn the system.

Also, you are blessed if your optimization patterns transcend versions of the language or tool you use, but you can't take that for granted. However, if you are going to spend a lot of time in, say, the .NET 2.0 framework, then you should take the time to learn its patterns of optimization.

Edit: Please consider this a supplement to Dave Markle's post, which is very good.

torial
A: 

First principle: "optimized code" is clear and simple. If you improve performance but reduce clarity, you have failed to optimize. OK, there are rare exceptions, in which case the obscure code needs to be kept in a tiny black box with a clear interface.

This "when to optimize" question is more profound than most realize. The answer "never" has some merit, but doesn't really explain why. The answer "only when profile proves where improvement is needed" is partially correct, but it doesn't explain the pitfalls of the other assumptions you are making. Programmers control the code, but seldom the environment. Hardware and data can have a huge impact on performance, so what environment are you optimizing for? A clever algorithm may be 10x better in theory, but 10x slower in reality.

Consider this specific example: memory cache can make a short sequential search much faster than any other clever search, especially in the forward direction. In other words, you sometimes need to know about the hardware too, and that is certainly a nasty dependency to have in your code. Also, optimizing for one data set may slow performance for another. Assumptions need to be made explicit.
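
As a toy illustration of the sequential-search point (Java; whether the plain scan actually wins depends entirely on the hardware and data, which is exactly the caveat being made):

    import java.util.HashSet;
    import java.util.Set;

    public class SmallSearch {
        public static void main(String[] args) {
            int[] small = {3, 1, 4, 1, 5, 9, 2, 6};

            // "Dumb" forward scan: a tiny, cache-resident array, no hashing.
            boolean foundScan = false;
            for (int v : small) if (v == 9) { foundScan = true; break; }

            // "Clever" O(1) structure: hashing, boxing, and pointer chasing
            // can easily cost more than the scan at this size.
            Set<Integer> set = new HashSet<>();
            for (int v : small) set.add(v);
            boolean foundSet = set.contains(9);

            System.out.println(foundScan + " " + foundSet);
        }
    }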

Too bad, because trying to optimize performance can be a lot of fun. Just be very careful, and document all assumptions so that when they change, the code can be re-optimized.

dongilmore
+1  A: 

You should only think about optimization when it's actually slow, and after you've profiled it.

For database applications, I would fill the tables with lots of sample data (around 10x the average expected usage) and test with that.
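
A sketch of that idea with JDBC (everything here is a placeholder: the in-memory H2 URL assumes the H2 driver is on the classpath, and the table and row counts are invented):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class LoadSampleData {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test")) {
                conn.createStatement().execute(
                    "CREATE TABLE orders (id INT PRIMARY KEY, note VARCHAR(100))");

                int expectedRows = 50_000;        // average expected usage
                int testRows = expectedRows * 10; // test at ~10x that volume

                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO orders VALUES (?, ?)")) {
                    for (int i = 0; i < testRows; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "sample row " + i);
                        ps.addBatch();
                    }
                    ps.executeBatch();
                }
                // Now run the application's real queries against this volume.
            }
        }
    }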

This doesn't mean you should do any sloppy coding; writing sensible code that you expect to have good performance is not optimization.

Osama ALASSIRY
A: 

It's not so much a matter of "when".

It's a matter of "how", and that includes "when", "why", and "what".

First of all, what do you mean by "optimize"?

If you are like most programmers in my experience, you mean 1) expound what you've heard about "measurement" and "profiling", 2) say that's too hard, so eyeball your code for what you think needs optimizing, and 3) fix that, get a marginal speedup, and say you optimized it.

The thing to do is diagnose, before you fix anything. With experience, you won't put performance problems into your code to begin with.

FOR EXAMPLE: Practically everybody says "I don't optimize prematurely - only a moron would do that!" Then there's a bad performance problem, and I get called in to straighten it out. Know what I find? Massive layers of redundant data-structure abstractions, dictionaries, hash tables, event handlers - a regular cornucopia straight out of the data structures, algorithms, and OO courses. And you know what the reason for these is, implicitly if not explicitly? They are considered MORE EFFICIENT!

Every single time.
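
One crude but cheap way to do that diagnosis is to take a few stack samples while the program is slow and see what keeps showing up. A sketch in Java (the workload thread is a stand-in, and a real sampling profiler would do this properly):

    import java.util.Map;

    public class PoorMansSampler {
        public static void main(String[] args) throws InterruptedException {
            // Stand-in for the slow workload being diagnosed.
            Thread work = new Thread(() -> {
                double sink = 0;
                for (long i = 0; i < Long.MAX_VALUE; i++) sink += Math.sqrt(i);
                if (sink == -1) System.out.println(sink); // keep the loop live
            }, "work");
            work.setDaemon(true); // let the JVM exit when sampling is done
            work.start();

            // Whatever appears on most samples is, with high probability,
            // where the time is actually going.
            for (int sample = 1; sample <= 5; sample++) {
                Thread.sleep(200);
                for (Map.Entry<Thread, StackTraceElement[]> e
                        : Thread.getAllStackTraces().entrySet()) {
                    if (!"work".equals(e.getKey().getName())) continue;
                    System.out.println("-- sample " + sample);
                    for (StackTraceElement frame : e.getValue())
                        System.out.println("   at " + frame);
                }
            }
        }
    }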

Mike Dunlavey
+1  A: 

What others have written is true: optimizing "in the small" should be done after functionality is complete. But I believe that optimizing "in the large" ought to be done in the planning stages, as part of the system design.

I occasionally work with a well-known vertical-market software package for running dental offices. It's old software that has been maintained by several organizations over its lifetime. All of its data is stored in a database (as it should be) on a central file server.

The problem is that all of the clients access the database files directly, over a Windows file share; there's no central server process. There are dozens of such files, and they are all large, ranging from a few dozen to a few hundred MB each. Each client manages its own locking and unlocking of the files.

This works fine for users on the LAN. Database access is almost instantaneous. But it's absolutely rotten for VPN users. Launching the software takes 15-30 minutes as the OS opens each of the database files by downloading them. Changing a single record takes 2-10 minutes as the affected DB files are pushed back to the server over a slow DSL line. It's a huge bottleneck.

There's no way to optimize this without completely tearing down the software and rebuilding it as a proper client-server system. Doing so would probably speed up database access for everyone by a factor of 10x-100x (or more), but it's simply not feasible. It should have been optimized at the design stage.

Barry Brown
Yes, Barry, I think your first paragraph puts it very succinctly. The rest of it is not an area I'm familiar with, but it sounds like exactly the right point.
Mike Dunlavey