Given a C process running at the highest priority that requests the current time: is the time returned adjusted for the amount of time it takes the call to return to user space? Is it already out of date by the time you get it? As a rough measurement, timing a loop of a known number of assembly instructions, asking for the time before and after, could give an approximation of the error. I know this must be an issue in scientific applications, though I don't plan to write software involving any super colliders any time in the near future. The few articles I have read on the subject do not indicate that any correction is made to push the time you are given slightly ahead of the time the system actually read. Should I lose sleep over other things?
Yes, they are almost definitely "wrong".
For Windows, the timing functions do not take into account the time it takes to transition back to user mode. Even if they did, they could not account for the case where the function returns but your code hits a page fault, gets swapped out, etc., before it captures the return value.
In general, when timing things you should snap a start and an end time around a large number of iterations to weed out this sort of uncertainty.
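A rough sketch of that pattern, assuming Windows and the QueryPerformanceCounter/QueryPerformanceFrequency API (busy_work is just a hypothetical placeholder workload, not anything from the question):

    /* Sketch: time a large batch of iterations, then divide to estimate per-call cost. */
    #include <windows.h>
    #include <stdio.h>

    static volatile long sink;          /* keeps the compiler from deleting the loop */

    static void busy_work(void)         /* hypothetical workload being measured */
    {
        for (int i = 0; i < 1000; ++i)
            sink += i;
    }

    int main(void)
    {
        const int iterations = 100000;
        LARGE_INTEGER freq, start, end;

        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);

        for (int i = 0; i < iterations; ++i)
            busy_work();

        QueryPerformanceCounter(&end);

        double total = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
        printf("total: %f s, per call: %g s\n", total, total / iterations);
        return 0;
    }

Because the timer is read only twice around the whole batch, the transition overhead is paid once at each end rather than per iteration.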
Yes, the answer you get will be off by a certain (smallish) amount; I have never heard of a timer function compensating for the average return time, because such a thing is nearly impossible to predict well. Such things are usually implemented by simply reading a register in the hardware and returning the value, or a version of it scaled to the appropriate timescale.
That said, I wouldn't lose sleep over this. The accepted way of keeping this overhead from affecting your measurements in any significant way is not to use these timers for short events. Usually, you will time several hundred, thousand, or million executions of the same thing, and divide by the number of executions to estimate the average time. Such a thing is usually more useful than timing a single instance, as it takes into account average cache behavior, OS effects, and so forth.
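As an illustration of the same idea turned on the timer itself, here is a sketch, assuming a POSIX system with clock_gettime(CLOCK_MONOTONIC), that estimates the average cost of a single timer read by averaging a million back-to-back reads:

    /* Sketch: estimate the per-call cost of the timer by averaging many reads. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const long iterations = 1000000;
        struct timespec start, end, scratch;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < iterations; ++i)
            clock_gettime(CLOCK_MONOTONIC, &scratch);   /* the call being measured */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double total_ns = (end.tv_sec - start.tv_sec) * 1e9
                        + (end.tv_nsec - start.tv_nsec);
        printf("average cost per clock_gettime call: %.1f ns\n", total_ns / iterations);
        return 0;
    }

The value this prints approximates the constant per-read overhead the question is asking about.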
No, you should not lose sleep over this. No amount of adjustment or other software trickery will yield perfect results on a system with a pipelined processor with multi-layered memory access running a multi-tasking operating system with memory management, devices, interrupt handlers... Not even if your process has the highest priority.
Plus, taking the difference of two such times will cancel out any constant overhead anyway: if both readings arrive some fixed delta late, then (t_end + delta) - (t_start + delta) = t_end - t_start.
Edit: I mean yes, you should lose sleep over other things :).
Most real-world uses of high-resolution timers are for profiling, where the time is read once at START and once at FINISH. Since roughly the same delay is involved in both readings, it cancels out in the difference, and the approach works fine.
Now, for nuclear reactors, Windows or many other operating systems with generic timing functions may not be suitable. I would guess they use real-time operating systems, which might give more accurate time values than desktop operating systems.