views: 113
answers: 6
We develop a product for internal customers. We don't have a QA team, and don't use assertions. Performance is important, application size isn't.

Is it a good idea to have a single build configuration (instead of separate Debug and Release configurations) that contains debug information (PDBs) and also enables performance optimizations?

Are there any cons to this approach?
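For concreteness, a merged configuration like the one described could look roughly like this under MSVC (a sketch; the flags below are standard Visual C++ options, not something given in the question):

    rem Hypothetical single configuration: full optimization plus PDBs.
    rem /O2 optimizes for speed, /Zi emits debug info into a PDB,
    rem /DEBUG makes the linker produce the PDB, and /OPT:REF /OPT:ICF
    rem re-enable the linker optimizations that /DEBUG turns off.
    cl /O2 /Zi /MD /DNDEBUG app.cpp /link /DEBUG /OPT:REF /OPT:ICF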

+6  A: 

Keep both. There is a reason for having two configurations! Use the Debug one for debugging and the Release one for everyday use.

The cons of "merging" configurations are obvious: you won't get the best optimizations you could with a clean Release configuration, and debugging will be awkward. The few seconds (or minutes) needed to rebuild the project in a different configuration are worth it, trust me.
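To make the difference concrete, these are roughly the defaults Visual Studio gives the two configurations (a sketch, not an exhaustive flag list):

    rem Debug: no optimization, runtime checks, debug C runtime
    cl /Od /Zi /MDd /RTC1 app.cpp /link /DEBUG

    rem Release: speed optimization, release C runtime, assertions compiled out
    cl /O2 /MD /DNDEBUG app.cpp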

PeterK
Why can't I have the best optimization in this configuration?
Igor Oks
@Igor: You either have nice debugging or performance. Debugging optimized code, while possible, is no fun at all.
sbi
@Igor, because Release optimizations make debugging difficult: the compiler may reorder or optimize away instructions, inline calls, etc. This makes it hard even to follow which line of code you are actually executing.
Péter Török
The two are mutually exclusive. Debugging needs to preserve code that the compiler/optimiser may decide to remove, move, or merge to build faster code. If it's optimised, it's hell to debug; if it's debuggable, it isn't optimised.
Binary Worrier
+2  A: 

I would say that you should always keep debug and release versions separate. Release versions are for your customers; Debug versions are for your developers. You say that you don't use assertions: perhaps you should be using them? Even if you don't use assertions in your own code, you can still trigger assertions in the underlying library code, e.g. when using invalid iterators. These give the developer a warning that something's wrong. What would the user do if they saw such a message: panic, call tech support, do nothing?
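For instance (a hypothetical snippet; the iterator check is what MSVC's debug runtime performs by default):

    #include <cassert>
    #include <vector>

    int first_positive(const std::vector<int>& v)
    {
        // Checked only in Debug builds; compiled out when NDEBUG is defined.
        assert(!v.empty() && "caller must pass a non-empty vector");
        return v[0];
    }

    int main()
    {
        std::vector<int> v;
        std::vector<int>::iterator it = v.begin();
        ++it; // Debug: MSVC's checked iterators assert here.
              // Release: silent undefined behaviour.
        return first_positive(v);
    }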

The debug version is there to provide you with extra tools to fix problems before you ship the release version. You should use every tool available to you to increase the quality of your product.

the_mandrill
A: 

You should have at least two. One for release (performance) and one for debugging - or do you write perfect code, first time every time?

graham.reeds
+1  A: 

The debug info will be mostly worthless in an optimized build, because the optimizer will transform the program into something unrecognizable. Also, errors related to undefined behavior are easier to expose if you have a second configuration with different optimization flags.
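A classic illustration (hypothetical; the exact outcome depends on the compiler and version) is code whose undefined behaviour only surfaces under optimization:

    #include <cstdio>

    int main()
    {
        int count = 0;
        // Signed overflow is undefined behaviour. An unoptimized build
        // typically wraps and exits after about 31 iterations; an
        // optimizer may assume i > 0 always holds and loop forever.
        for (int i = 1; i > 0; i *= 2)
            ++count;
        std::printf("%d iterations\n", count);
        return 0;
    }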

Luther Blissett
+1  A: 

Debugging and optimization tend to work against each other. The compiler's optimizations typically make debugging a pain (functions can be inlined, loops unrolled, etc.), and the strictness that makes debug info worthwhile ties the compiler's hands so it can't optimize as well. Basically, if you combine the two, you get the worst of both worlds.

Performance of the finished product thus pretty much demands that it be a "release" version, not a debug version, and certainly not some odd mix of the two.
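For instance (a hypothetical snippet), a breakpoint set inside a small helper often never hits in an optimized build:

    static int square(int x)
    {
        return x * x; // In a Release build this call is usually inlined:
                      // a breakpoint here may never be hit, and the
                      // debugger may report 'x' as optimized away.
    }

    int main()
    {
        int total = 0;
        for (int i = 0; i < 100; ++i) // may be unrolled or vectorized
            total += square(i);
        return total > 0 ? 0 : 1;
    }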

cHao
A: 

Is it OK to have a single configuration, rather than separating Debug and Release (in our case)?

It may be OK. It depends heavily on your case, but from the details you give, I think it is very much not OK.

We don't have a QA team, and don't use assertions.

Assertions are not the point of a debug build; they are just another tool you can use (or not).

Having a QA team or not should not heavily influence the decision between debug and release builds (but if you do have a QA team, sooner or later you will probably want a debug version of your product).

A QA team will affect the quality of your product heavily. Without dedicated QA (by someone other than the people who develop the application) you have no guarantee of the quality or stability of your product, you can provide no guarantee that it does what it's supposed to do (or that it's fit for any purpose), and you cannot make meaningful measurements on your product in lots of areas.

It may be you actually don't need a QA team, but in most cases you're just depriving your development team and customers (internal or not) of a lot of necessary data.

A debug build should make it easier to - well - debug your product, track issues, and fix them. If you are doing no organized QA, you may not even know what your main issues to fix are.

Methinks you actually do have a QA team, you just don't see it as such: your internal customers (who may even be you) are your QA team. This is a bad idea, to the degree that your application's function is important.

Working with no QA team is like building a car by yourself and taking it on the road for testing: you have no idea whether the wheels are held on properly, or whether the brakes work, until you are in traffic. Maybe you won't kill anyone, but I wouldn't put your company's critical data in your untested application, unless that data isn't really critical.

Performance is important, application size isn't.

If performance is important, who measures it? Does the measurement code belong in your released application? Do you add it to and remove it from the released code?
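If the measurement code is to stay in the shipped build, it is usually gated by a compile-time switch rather than added and removed by hand. A sketch (the ENABLE_TIMING macro is hypothetical):

    #include <chrono>
    #include <cstdio>

    // Hypothetical switch: define ENABLE_TIMING in whichever build
    // configuration should measure; leave it undefined elsewhere.
    #ifdef ENABLE_TIMING
    struct ScopedTimer
    {
        const char* name;
        std::chrono::steady_clock::time_point start;
        explicit ScopedTimer(const char* n)
            : name(n), start(std::chrono::steady_clock::now()) {}
        ~ScopedTimer()
        {
            long long us = std::chrono::duration_cast<std::chrono::microseconds>(
                std::chrono::steady_clock::now() - start).count();
            std::fprintf(stderr, "%s: %lld us\n", name, us);
        }
    };
    #  define TIMED_SCOPE(name) ScopedTimer scopedTimer(name)
    #else
    #  define TIMED_SCOPE(name) ((void)0)
    #endif

    void process()
    {
        TIMED_SCOPE("process"); // measured only when ENABLE_TIMING is defined
        // ... actual work ...
    }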

It sounds like you're doing ad-hoc development, and with a performance-critical application, no QA team, and no dedicated debugging, I'd have a lot of doubts that your team can actually deliver.

I don't know your situation, and there may be a lot I don't see here, so maybe it's OK.

Are there any cons to this approach?

Yes: you will either end up with diagnostics code in your release version, or have to remove the diagnostics code after fixing each problem and add it back when working on the next one.
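The usual way around that is to leave the diagnostics in but compile them only into the Debug configuration, keyed off the standard NDEBUG macro. A sketch (the DBG_LOG macro is hypothetical):

    #include <cstdio>

    // Compiled into Debug builds only; Release defines NDEBUG,
    // so these calls vanish instead of being removed by hand.
    #ifndef NDEBUG
    #  define DBG_LOG(...) std::fprintf(stderr, __VA_ARGS__)
    #else
    #  define DBG_LOG(...) ((void)0)
    #endif

    int divide(int a, int b)
    {
        DBG_LOG("divide(%d, %d)\n", a, b);
        return a / b;
    }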

You should not drop the Debug configuration just to get optimization, though. That's not a valid argument, since you can optimize your Release version and leave the Debug version as it is.

utnapistim