The title says it all.

This has never happened to me before. In Visual Studio, I have a piece of code that is executed 300 times; I time every iteration with the performance counter and then average the results. If I run the code in the debugger I get an average of 1.01 ms; if I run it without the debugger I get 1.8 ms.

I closed all other apps, I rebooted, I tried it many times: Always the same timing.

I'm trying to optimize my code, but before throwing myself into changing it I want to be sure of my timings, so that I have something to compare against.

What can cause that strange behaviour?

Edit:

Some clarification:

I'm running the same compiled piece of code: the release build. The only difference is F5 vs. Ctrl-F5, so compiler optimization should not be involved.

Since each calculated time was very small, I changed the way I benchmark: I now time all 300 iterations together and then divide by 300. I get the same result.
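
For reference, the timing loop looks roughly like this (a simplified sketch; ProcessImage is just a placeholder for the actual cross-correlation step):

```cpp
#include <windows.h>

// Placeholder for the real per-image work (cross correlation).
void ProcessImage(int i);

// Time all 300 iterations with the performance counter, then average.
double AverageIterationMs()
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&start);
    for (int i = 0; i < 300; ++i)
        ProcessImage(i);               // a different image each iteration
    QueryPerformanceCounter(&stop);

    double totalMs = (stop.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
    return totalMs / 300.0;            // average time per iteration
}
```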

About caching: the code does some image cross correlation, with a different image at each iteration. The steps of the processing are not modified by the data in the images, so I don't think caching is the problem.

A: 

It's probably a compiler optimization that's actually making your code worse. This is extremely rare these days but if you're doing odd, odd stuff, this can happen.

Some debuggers / IDEs, like Visual Studio, will automatically zero out memory for you in Debug mode; this may be a contributing factor.

Broam
+1  A: 

You are likely to get very erroneous results by doing it this way ... you should be using a profiler. You should read this article entitled The Perils of MicroBenchmarking:
http://blogs.msdn.com/shawnhar/archive/2009/07/14/the-perils-of-microbenchmarking.aspx

Joel Martinez
A: 

Are you running the exact same code in the debugger and outside the debugger, or running a debug build in the debugger and a release build outside? If it's the latter, the code isn't the same. If you're running debug and release builds and seeing the difference, you could turn off optimization in release and see what that does, or run your code in a profiler in both debug and release and see what changes.

JLWarlow
The same release build is run.
jslap
A: 

The debug version initializes variables to 0 (usually), while a release binary does not initialize variables (unless the code explicitly does). This may affect what the code is doing: the size of a loop, or a whole host of other possibilities.

Set the warning level to the highest level (level 4; the default is level 3).
Set the flag that treats warnings as errors.

Recompile and re-test.
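
As an illustration (a made-up sketch, not your code), something like this can behave very differently between a debug build and a release build, and warning level 4 with warnings-as-errors (/W4 /WX in MSVC) will refuse to compile it:

```cpp
// Illustrative only: 'count' is never initialized.
// In a debug build the stack slot is filled with a known pattern (or zero)
// and the runtime checks may flag the read; in a release build the loop
// bound is whatever garbage happens to be on the stack.
int SumFirst(const int* data)
{
    int count;          // /W4 warns about this; /WX turns the warning into an error
    int sum = 0;
    for (int i = 0; i < count; ++i)
        sum += data[i];
    return sum;
}
```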

Martin York
A: 

Before you dive into an optimization session get some facts:

  • does it make a difference? Does the application run twice as slow when measured over a reasonable length of time?
  • how are the debug and release builds configured?
  • what is the state of this project? Is it complete software, or are you profiling a single function?
  • how are you running the debug and release builds? Are you sure you are testing under the same conditions (e.g. process priority settings; see the sketch below)?

Suppose you do optimize the code: what do you have in mind?
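
For the last point, a minimal sketch (using standard Win32 calls; the exact priority values are just an example) for keeping the run conditions comparable:

```cpp
#include <windows.h>

// Pin down scheduling before timing so debugger and non-debugger runs
// compete with the rest of the system in the same way.
void PrepareForBenchmark()
{
    SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
    SetThreadAffinityMask(GetCurrentThread(), 1);   // optional: stay on one core
}
```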

Alon
+2  A: 

I don't think anyone has mentioned this yet, but the debug build may not only affect the way your code executes, but also the way the timer itself executes. This can lead to the timer being inaccurate / slower / definitely not reliable. I would recommend using a profiler as others have mentioned, and compare only similar configurations.

Marcin
+3  A: 

I think I figured it out.

If I add a Sleep(3000) before running the tests, they give the same result.

I think it has something to do with the loading of miscellaneous DLLs. In the debugger, the DLLs were loaded before any code was executed. Outside the debugger, the DLLs were loaded on demand, and one or more were loaded after the timer was started.
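
So the practical fix is simply to warm things up before starting the timer, along these lines (ProcessImage is the same placeholder as in the question; an untimed warm-up iteration should work just as well as the sleep):

```cpp
#include <windows.h>

void ProcessImage(int i);   // placeholder for the real work, as above

void WarmUpThenBenchmark()
{
    Sleep(3000);        // what I added to confirm the theory
    ProcessImage(0);    // or: one untimed iteration, so on-demand DLL
                        // loading happens before the timer starts

    // ... start the performance counter and run the 300 timed iterations
}
```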

Thanks all.

jslap
A: 

Having read your additional data, a distant bell started to ring...

When running a program in the debugger, it will catch both C++ exceptions and structured exceptions (Windows SEH).

One event that will trigger a structured exception is a divide by zero. It is possible that the debugger quickly catches and dismisses this event (as first-chance exception handling), while the release code takes a bit longer before doing something about it.

So if your code might be generating such or similar exceptions, it is worthwhile to look into it.
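
If you want to verify whether such exceptions are being raised at all during the run, one rough sketch (using the Win32 vectored exception handler API; the counter name is just an example) is:

```cpp
#include <windows.h>

// Counts every structured exception raised in the process without
// handling it; EXCEPTION_CONTINUE_SEARCH lets normal handling proceed.
static volatile LONG g_exceptionCount = 0;

static LONG CALLBACK CountingHandler(PEXCEPTION_POINTERS /*info*/)
{
    InterlockedIncrement(&g_exceptionCount);
    return EXCEPTION_CONTINUE_SEARCH;
}

// Install before the benchmark, then inspect g_exceptionCount afterwards.
void InstallExceptionCounter()
{
    AddVectoredExceptionHandler(1 /*call first*/, CountingHandler);
}
```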

Alon