I have a benchmarking application to test the performance of some APIs that I have written. In this application I am using QueryPerformanceCounter: I take the difference between the QPC values captured after and before calling into the API, and divide it by the frequency reported by QueryPerformanceFrequency. But the benchmarking results seem to vary if I run the application (the same executable running against the same set of DLLs) from different drives. Also, on a particular drive, running the application for the 1st time, closing it and re-running it produces different benchmarking results. Can anyone explain this behavior? Am I missing something here?
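For reference, the measurement is essentially equivalent to this sketch (QpcTimer and TimeSeconds are illustrative names, not my actual code):

    using System;
    using System.Runtime.InteropServices;

    static class QpcTimer
    {
        [DllImport("kernel32.dll")]
        static extern bool QueryPerformanceCounter(out long value);

        [DllImport("kernel32.dll")]
        static extern bool QueryPerformanceFrequency(out long value);

        public static double TimeSeconds(Action apiCall)
        {
            long freq, start, end;
            QueryPerformanceFrequency(out freq);
            QueryPerformanceCounter(out start);
            apiCall();                            // call into the API under test
            QueryPerformanceCounter(out end);
            return (end - start) / (double)freq;  // tick delta / ticks per second
        }
    }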
Some more useful information:
The behavior is like this: run the application, close it and rerun it; the benchmarking results improve on the 2nd run and thereafter remain the same. This behavior is more prominent when running from the C drive. I should also mention that my benchmark app has an option to rerun/retest a particular API without having to close the app. I do understand that there is JITting involved, but what I don't understand is this: on the 1st run of the app, when you rerun an API multiple times without closing the app, the performance stabilizes after a couple of runs; yet when you close the app and rerun the same test, the performance improves further.
Also, how do you account for the performance change when run from different drives?
[INFORMATION UPDATE]
I ran ngen on the assemblies and now the performance difference between different runs from the same location is gone, i.e. if I open the benchmark app, run it once, close it and rerun it from the same location, it shows the same values.
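For completeness, the command was along these lines (MyBenchmark.exe stands in for my actual assembly name):

    ngen install MyBenchmark.exe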
But I have encountered another problem now. When I launch the app from the D drive and run a couple of iterations of the APIs within the same launch of the benchmark program, then from the 3rd iteration onwards the performance of all APIs falls by around 20%. If I then close and relaunch the app and run it, the first 2 iterations give correct values (the same values as obtained from C), and then the performance falls again beyond that. This behavior is not seen when running from the C drive: from C, no matter how many runs you take, it is pretty consistent.
I am using large double arrays to test my API performance. I was worried that the GC would kick in between the tests, so I am calling GC.Collect() and GC.WaitForPendingFinalizers() explicitly before and after each test (see the sketch below). So I don't think it has anything to do with the GC.
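In outline, each test run is bracketed like this (ApiUnderTest and the array size are stand-ins for my real code; QpcTimer is the helper sketched above):

    const int N = 10000000;              // stand-in for my real array size
    double[] data = new double[N];       // large double array used as test input

    GC.Collect();                        // force a full collection up front
    GC.WaitForPendingFinalizers();       // and let any finalizers drain
    double seconds = QpcTimer.TimeSeconds(() => ApiUnderTest(data));
    GC.Collect();                        // clean up after the test as well
    GC.WaitForPendingFinalizers();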
I tried using AQtime to see what is happening from the 3rd iteration onwards, but the funny thing is that when I run the application under AQtime profiling, the performance does not fall at all.
The performance counters also do not suggest any unusual I/O activity.
Thanks, Niranjan