views: 75
answers: 3

I have an algorithm that I'm timing with System.Diagnostics.Stopwatch.

It works well, but one thing I have noticed is that the first time I run the algorithm it takes around 52 milliseconds.

The second time I run the algorithm it takes only a fraction of that time.

Is this due to the nature of .NET?

Each time I run the algorithm with a new set of data I re-initialise it. In other words, I create a new object rather than re-use the old reference, so I'm not sure why this still occurs. Normally I wouldn't care about something like this, but for this assignment I must measure the efficiency and speed of my algorithms, so it is important for me to understand why this is happening.

A simplified version of the code showing how I'm using the timer is below:

    using System.Diagnostics;

    class Algorithm
    {
        // One stopwatch shared by both methods, so it accumulates their total time.
        public Stopwatch Stopwatch { get; set; }

        public Algorithm()
        {
            Stopwatch = new Stopwatch();
        }

        public void MethodA()
        {
            Stopwatch.Start();
            // Do work.
            Stopwatch.Stop();
        }

        public void MethodB()
        {
            Stopwatch.Start();
            // Do work.
            Stopwatch.Stop();
        }
    }

After both methods are called in my runner, I get the stopwatch and inspect the time.
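Roughly, the runner amounts to this (the real work is omitted):

    var algorithm = new Algorithm();
    algorithm.MethodA();
    algorithm.MethodB();

    // Both methods accumulate into the same stopwatch, so this is their combined time.
    Console.WriteLine(algorithm.Stopwatch.Elapsed.TotalMilliseconds);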

The algorithm

The algorithm performs tactical waypoint reasoning for computer-controlled AI opponents. I tried to keep the example above as simple as possible.

Results

19.7847
0.0443
0.0102
0.0159
0.0091
0.0073
0.0079
0.0079
0.0079
0.0079
0.0079
0.0079
0.0136
0.0079
0.0073
0.0079
0.0079
0.0079
0.0079
0.0073
...

Should I just ignore the first time the algorithm is run? Otherwise I'll end up with an average that is dominated by that first run.

+1  A: 

The first time it runs, the bytecode has to be JIT-compiled by the CLR, which incurs an overhead. Subsequent executions do not incur this cost.
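One way to keep that out of your numbers is to call the method once as a warm-up and reset the stopwatch before the run you actually measure; a minimal sketch against the Algorithm class from the question:

    var algorithm = new Algorithm();

    algorithm.MethodA();            // Warm-up call: pays the JIT cost.
    algorithm.Stopwatch.Reset();    // Throw away the warm-up timing.

    algorithm.MethodA();            // Timed call: the code is already compiled.
    Console.WriteLine(algorithm.Stopwatch.Elapsed.TotalMilliseconds);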

spender
I'd +1 this as it was first; it's just important to consider that any number of things can have an influence, whereas this answer suggests that JIT time is the only possible explanation.
Ruben Bartelink
+5  A: 

If you're only timing for 52 milliseconds, any number of things could be happening - that's a very small amount of time to measure.

It could well be that it's due to JIT compilation of the method and everything it touches, for example.

In general, to get useful measurements you should time multiple iterations to get a longer period - this reduces the noise due to (for example) some other event in your operating system taking the CPU away briefly.
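For example, something along these lines, with a separate stopwatch wrapped around the whole loop (the iteration count is arbitrary; scale it until the total run is long enough to be meaningful):

    const int iterations = 10000;
    var algorithm = new Algorithm();

    var timer = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
        algorithm.MethodA();
    }
    timer.Stop();

    Console.WriteLine("Average per call: {0} ms",
        timer.Elapsed.TotalMilliseconds / iterations);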

Jon Skeet
I didn't think of this. But I did notice that it would vary each time when I hammered the mouse button down. I'll do what you and Mark suggested: put it in a loop and take the average.
Finglas
"Fusion" also tends to account for a large amount of first-time cost.
Marc Gravell
+2  A: 

Repeat your tests thousands of times in a loop to get an average. You should try not to allocate and deallocate objects when you do this, so you reduce the possibility of a garbage collection.
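Something along these lines, reusing the one Algorithm instance and its accumulated stopwatch (the GC calls just try to get any pending collections out of the way before the loop starts):

    var algorithm = new Algorithm();   // Created once, outside the loop, so no per-iteration allocation.

    GC.Collect();                      // Encourage any outstanding collection to happen now
    GC.WaitForPendingFinalizers();     // rather than in the middle of the measurement.

    const int iterations = 10000;
    for (int i = 0; i < iterations; i++)
    {
        algorithm.MethodA();           // Each call adds to the same stopwatch.
    }

    Console.WriteLine("Average: {0} ms",
        algorithm.Stopwatch.Elapsed.TotalMilliseconds / iterations);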

Mark Bertenshaw
+1, didn't think of this.
Finglas
++ What I do is time it with my wristwatch. If I want milliseconds, I run it 1000 times. Microseconds - 10^6 times, etc.
Mike Dunlavey