views:

1565

answers:

8

Hi.

I've encountered the following paragraph:

“Debug vs Release setting in the IDE when you compile your code in Visual Studio makes almost no difference to performance… the generated code is almost the same. The C# compiler doesn’t really do any optimisation. The C# compiler just spits out IL… and at the runtime it’s the JITer that does all the optimisation. The JITer does have a Debug/Release mode and that makes a huge difference to performance. But that doesn’t key off whether you run the Debug or Release configuration of your project, that keys off whether a debugger is attached.”

The source is here and the podcast is here.

Can someone direct me to a Microsoft article that actually proves this?

Googling "C# debug vs release performance" mostly returns results saying "Debug has a big performance hit, release is optimized, don't deploy debug to production".

+4  A: 

I can’t comment on the performance, but the advice “don’t deploy debug to production” still holds, simply because debug code usually does quite a few things differently in large products. For one thing, you might have debug switches active, and for another there will probably be additional redundant sanity checks and debug output that don’t belong in production code.
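To make that concrete, here is a hypothetical sketch of the kind of debug-only code meant above; the OrderProcessor and ValidateInvariants names are invented for illustration:

    using System.Diagnostics;

    public class Order
    {
        public int Id { get; set; }
    }

    public class OrderProcessor
    {
        public void Process(Order order)
        {
    #if DEBUG
            // Extra sanity check that exists only in debug builds and
            // never runs in a release deployment.
            ValidateInvariants(order);
    #endif
            // Debug.WriteLine is itself conditional on the DEBUG symbol,
            // so this call disappears entirely from a release build.
            Debug.WriteLine("Processing order " + order.Id);

            // ... actual production logic ...
        }

    #if DEBUG
        private static void ValidateInvariants(Order order)
        {
            Debug.Assert(order != null, "order must not be null");
            Debug.Assert(order.Id > 0, "order id must be positive");
        }
    #endif
    }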

Konrad Rudolph
I agree with you on that issue, but this doesn't answer the main question.
sagie
@sagie: yes, I’m aware of that but I thought the point was still worth making.
Konrad Rudolph
+15  A: 

Partially true. In debug mode the compiler emits debug symbols for all variables and compiles the code as-is. In release mode some optimizations are included:

  • unused variables do not get compiled at all
  • some loop variables are taken out of the loop by the compiler if they are proven to be invariant
  • code written under an #if DEBUG directive is not included, etc.

The rest is up to the JIT. (The first two points are illustrated in the sketch below.)

Edit: Full list of optimizations here, courtesy of Eric Lippert.
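For illustration only, a small sketch of code where the first two points can show up; whether a given transformation actually happens depends on the compiler and JIT version:

    public static class OptimizationExample
    {
        public static int SumLengths(string[] items)
        {
            // With /optimize+ (the Release default) a local that is never
            // read, like this one, can be dropped from the generated IL.
            int unused = 42;

            int sum = 0;
            for (int i = 0; i < items.Length; i++)
            {
                // items.Length does not change inside the loop, so with
                // optimizations enabled it may be evaluated once rather
                // than on every iteration.
                sum += items[i].Length;
            }
            return sum;
        }
    }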

AZ
And don't forget about Debug.Assert calls! In a DEBUG build, if they fail, they halt the thread and pop up a message box. In release they are not compiled in at all. The same applies to all methods marked with [ConditionalAttribute].
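A minimal, hypothetical sketch of that behaviour (the Log.Trace helper is invented for illustration):

    using System.Diagnostics;

    public static class Log
    {
        // Calls to this method are removed from the caller's IL when the
        // DEBUG symbol is not defined, so they cost nothing in a release
        // build. Debug.Assert and Debug.WriteLine work the same way.
        [Conditional("DEBUG")]
        public static void Trace(string message)
        {
            Debug.WriteLine(message);
        }
    }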
Ivan Zlatanov
The C# compiler does not do tail call optimizations; the jitter does. If you want an accurate list of what the C# compiler does when the optimize switch is on, see http://blogs.msdn.com/ericlippert/archive/2009/06/11/what-does-the-optimize-switch-do.aspx
Eric Lippert
Oops, you're right Eric. I'll remove it from the post.
AZ
A: 

From the MSDN site...

Release vs. Debug configurations

While you are still working on your project, you will typically build your application by using the debug configuration, because this configuration enables you to view the value of variables and control execution in the debugger. You can also create and test builds in the release configuration to ensure that you have not introduced any bugs that only manifest on one type of build or the other. In .NET Framework programming, such bugs are very rare, but they can occur.

When you are ready to distribute your application to end users, create a release build, which will be much smaller and will usually have much better performance than the corresponding debug configuration. You can set the build configuration in the Build pane of the Project Designer, or in the Build toolbar. For more information, see Build Configurations.

hallie
+2  A: 

From MSDN social:

It is not well documented; here's what I know. The compiler emits an instance of System.Diagnostics.DebuggableAttribute. In the debug version the IsJITOptimizerDisabled property is True; in the release version it is False. You can see this attribute in the assembly manifest with ildasm.exe.

The JIT compiler uses this attribute to disable optimizations that would make debugging difficult, such as the ones that move code around, like loop-invariant hoisting. In selected cases this can make a big difference in performance, though not usually.

Mapping breakpoints to execution addresses is the job of the debugger. It uses the .pdb file and the info generated by the JIT compiler that provides the IL-to-native-code address mapping. If you were to write your own debugger, you'd use ICorDebugCode::GetILToNativeMapping().

Basically, a debug deployment will be slower since the JIT compiler optimizations are disabled.
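If you want to check this on one of your own assemblies without ildasm.exe, here is a small sketch that reads the attribute via reflection (assumes .NET 4.5+ for the GetCustomAttribute<T> extension method):

    using System;
    using System.Diagnostics;
    using System.Reflection;

    class Program
    {
        static void Main()
        {
            // Inspect the attribute the compiler emitted for this assembly.
            // In a typical Debug build IsJITOptimizerDisabled is true; in a
            // typical Release build it is false (or the attribute is absent).
            var attr = Assembly.GetExecutingAssembly()
                               .GetCustomAttribute<DebuggableAttribute>();

            Console.WriteLine(attr == null
                ? "No DebuggableAttribute emitted."
                : "IsJITOptimizerDisabled = " + attr.IsJITOptimizerDisabled);
        }
    }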

Neil
+2  A: 

What you read is quite valid. A release build is usually leaner thanks to JIT optimization, the exclusion of debug-only code (#if DEBUG or [Conditional("DEBUG")]), and minimal debug symbol loading; often overlooked is the smaller assembly, which reduces loading time. The performance difference is more obvious when running the code in VS because of the more extensive PDB and symbol information that is loaded, but if you run it independently the differences may be less apparent. Some code will optimize better than other code, using the same kind of optimizing heuristics found in other languages.

Scott has a good explanation of inline method optimization here.

See this article, which gives a brief explanation of why the debug and release settings behave differently in an ASP.NET environment.

Fadrian Sudaman
The inline explanation is very good
sagie
A: 

One thing you should note, regarding performance and whether or not the debugger is attached: it's something that took us by surprise.

We had a piece of code, involving many tight loops, that seemed to take forever to debug, yet ran quite well on its own. In other words, no customers or clients were experiencing problems, but when we were debugging it seemed to run like molasses.

The culprit was a Debug.WriteLine in one of the tight loops, which spat out thousands of log messages left over from a debug session a while back. It seems that when the debugger is attached and listening to such output, there's overhead involved that slows down the program. For this particular code, runtime was on the order of 0.2-0.3 seconds on its own, and 30+ seconds with the debugger attached.
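Roughly the shape of the problem, with a made-up stand-in for the real code:

    using System.Diagnostics;

    static class HotLoop
    {
        public static long SumSquares(int n)
        {
            long total = 0;
            for (int i = 0; i < n; i++)
            {
                // Harmless when running standalone, but with a debugger
                // attached and listening for output, every call crosses
                // into the debugger and a tight loop like this crawls.
                Debug.WriteLine("i = " + i);
                total += (long)i * i;
            }
            return total;
        }
    }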

The solution was simple, though: just remove the debug messages that were no longer needed.

Lasse V. Karlsen
A: 

To a large extent, that depends on whether your app is compute-bound, and it is not always easy to tell, as in Lasse's example. If I've got the slightest question about what it's doing, I pause it a few times and examine the stack. If there's something extra going on that I didn't really need, that spots it immediately.

Mike Dunlavey
+10  A: 

There is no article which "proves" anything about a performance question. The way to prove an assertion about the performance impact of a change is to try it both ways and test it under realistic-but-controlled conditions.

You're asking a question about performance, so clearly you care about performance. If you care about performance then the right thing to do is to set some performance goals and then write yourself a test suite which tracks your progress against those goals. Once you have such a test suite you can then easily use it to test for yourself the truth or falsity of statements like "the debug build is slower".

And furthermore, you'll be able to get meaningful results. "Slower" is meaningless because it is not clear whether it's one microsecond slower or twenty minutes slower. "10% slower under realistic conditions" is more meaningful.

Spend the time you would have spent researching this question online on building a device which answers the question. You'll get far more accurate results that way. Anything you read online is just a guess about what might happen. Reason from facts you gathered yourself, not from other people's guesses about how your program might behave.
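As one very small sketch of the kind of "device" meant here, a Stopwatch-based harness; the Work method is just a placeholder for whatever scenario you actually care about:

    using System;
    using System.Diagnostics;

    class PerfCheck
    {
        static void Main()
        {
            Work(); // warm up so the JIT compiles the code before timing

            double sink = 0;
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < 100; i++)
            {
                sink += Work();
            }
            sw.Stop();

            // Run this once from a Debug build and once from a Release build,
            // outside the debugger (Ctrl+F5), and compare against your goals.
            Console.WriteLine("Average per iteration: " +
                              (sw.ElapsedMilliseconds / 100.0) + " ms" +
                              "   (sink=" + sink + ")");
        }

        // Stand-in for the real work you want to measure; it returns a value
        // so the optimizer cannot simply discard the whole computation.
        static double Work()
        {
            double x = 0;
            for (int i = 1; i < 1000000; i++)
            {
                x += Math.Sqrt(i);
            }
            return x;
        }
    }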

Eric Lippert