views: 779
answers: 6
If I use "-O2" flag, the performance improves, but the compilation time gets longer.

How can I decide, whether to use it or not?

Maybe O2 makes the most difference in some certain types of code (e.g. math calculations?), and I should use it only for those parts of the project?
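For concreteness, a minimal GNU Make sketch of per-file optimization (the file names math.c and ui.c are hypothetical, purely for illustration):

```make
# Hypothetical layout: only the math-heavy translation unit gets -O2;
# everything else builds unoptimized, for faster compiles.
CFLAGS := -Wall

math.o: CFLAGS += -O2   # target-specific variable: optimize this file only

math.o: math.c
	$(CC) $(CFLAGS) -c -o $@ $<

ui.o: ui.c
	$(CC) $(CFLAGS) -c -o $@ $<
```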

EDIT: I want to emphasize that setting -O2 for all components of my project increases the total compilation time from 10 minutes to 30 minutes.

+4  A: 

Always, except while you're actively developing and just want a quick build to test something you just wrote.

Georg
+9  A: 

I'm in bioinformatics, so my advice may be biased. That said, I always use the -O3 switch (for release and test builds, that is; not usually for debugging). True, it has certain disadvantages, namely increased compile time and, often, a larger executable.

However, the first drawback can be partially mitigated by a good build strategy and other tricks that reduce the overall build time; two common ones are sketched below. Also, since most of the compilation is really I/O bound, the increase in compile time is often not that pronounced.
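A sketch of two such tricks, assuming GNU Make and the ccache compiler cache (the job count is illustrative; pick one per core):

```sh
# Run compile jobs in parallel instead of serializing everything.
make -j4

# ccache keys cached object files on the preprocessed source and the
# flags, so unchanged files recompile almost instantly, even after a clean.
export CC="ccache gcc"
export CXX="ccache g++"
make
```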

The second disadvantage, the executable's size, often simply doesn't matter at all.

Konrad Rudolph
Exactly - structuring your build properly and paying attention to dependencies in your makefiles usually makes compilation time a non-issue. The same usually helps with executable size, though I've seen people do very stupid stuff resulting in huge programs.
Nikolai N Fetissov
I like this answer, but I would add the caveat that it should be easy to switch to a build with no optimization, since GDB works so much more nicely that way.
dicroce
It's my experience that compile time is "easy" to fix compared to link time. (Time to search for "reducing link time" questions!)
leander
+5  A: 

Never.

Use -O3 -Wall -Werror -std=[whatever your code base should follow]
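For example, an invocation of that shape (the -std value and file name here are placeholders for illustration, not a recommendation):

```sh
# Hypothetical: a C99 code base, single translation unit foo.c.
gcc -O3 -Wall -Werror -std=c99 -c foo.c
```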

Christoffer
+15  A: 

I would recommend using -O2 most of the time; benefits include:

  • Usually reduces the size of the generated code (unlike -O3).
  • More warnings (some warnings require data-flow analysis that is only performed during optimization); see the sketch after this list.
  • Often measurably improved performance (which may not matter).
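A sketch of the warnings point: older GCC releases in particular only perform the data-flow analysis behind -Wuninitialized when optimizing, so the same -Wall build can warn at -O2 yet stay silent at -O0 (file name hypothetical):

```sh
$ cat maybe_uninit.c
int f(int x) { int y; if (x > 0) y = x * 2; return y; }

$ gcc -O0 -Wall -c maybe_uninit.c   # no uninitialized-use warning
$ gcc -O2 -Wall -c maybe_uninit.c   # warns that 'y' may be used uninitialized
```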

If release-level code will have optimization enabled, it's best to have optimization enabled throughout the development/test cycle.

Source-level debugging is more difficult with optimizations enabled, so it is occasionally helpful to disable optimization when debugging a problem.

Lance Richardson
If you need to reduce code size, why not use -Os instead of -O2?
Christoffer
Using -Os makes sense if code size is the most important thing to optimize for - usually it's not (except in small embedded systems...)
Lance Richardson
I have more than once seen compilers, when told to optimize for size, produce larger binaries than when told to optimize for speed. I always just use -O2, sometimes -O3, and accept the risks that go with it. One way to check on your own code base is sketched below.
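A quick way to compare the two on a given translation unit (file name hypothetical; size(1) reports the section sizes of each object file):

```sh
gcc -O2 -c foo.c -o foo_O2.o
gcc -Os -c foo.c -o foo_Os.o
size foo_O2.o foo_Os.o   # compare the text (code) columns
```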
dwelch
+2  A: 

We usually have our build environment set up so that we can build debug builds using -O0 and release builds using -O3. The build environment preserves the objects and libraries of all configurations, so that one can switch easily between them. During development one mostly builds and runs the debug configuration, for faster builds and more accurate debug information, and less frequently builds and tests the release configuration. A minimal sketch of such a layout follows.
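A minimal GNU Make sketch of such a dual-configuration setup (single hypothetical source file main.c; a real project would generalize the pattern to all sources):

```make
CFLAGS_debug   := -O0 -g
CFLAGS_release := -O3

# Objects for each configuration live in their own directory, so
# switching between debug and release never forces a full rebuild.
obj/%/main.o: main.c | obj/%
	$(CC) $(CFLAGS_$*) -c -o $@ $<

obj/debug obj/release:
	mkdir -p $@

debug:   obj/debug/main.o
release: obj/release/main.o
.PHONY: debug release
```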

lothar
+2  A: 

Is the increased compilation time really noticeable? I use -O2 all the time as the default; anything less just leaves a lot of "friction" in your code. Also note that the optimization levels -O1 and -O2 tend to be the best tested, as they are the most interesting. -O0 tends to be more buggy, and in my experience you can debug pretty well at -O2, provided you have some idea of what a compiler can do in terms of code reordering, inlining, etc.

-Werror and -Wall are necessary.

jakobengblom2
Well, somehow setting -O2 for all components of my project changes the total compilation time from 10 minutes to 30 minutes.
Igor Oks
10 minutes to 30 minutes. That hurts enough to make you avoid it. Point taken.
jakobengblom2