Hi,

We're getting the following problem while using System.Threading.Timer (.NET 2.0) from a Windows service.

  1. There are around 12 different timer objects.
  2. Each timer has a due time and an interval, and these are set correctly.
  3. After 3 to 4 hours of running, the timers start signaling before their interval elapses. For example, if a timer is supposed to signal at 4:59:59, it signals at 4:59:52, 7 seconds early.

Can someone tell me what causes this behavior and how to fix it?

Thanks, Swati

A: 

Maybe the re-scheduling is done after an operation that takes a few seconds to complete. Or is that not something you control?

thelost
A: 

This can't be controlled, as it depends on the speed of the system. On a very fast machine the drift may be too small to notice, but on an ordinary PC the timer interval will drift after a long period of execution.

Hope that will help.

Asim Sajjad
+6  A: 

Great question... and here's the reason:

"Timing" is a tricky thing when it comes to computers... you can never rely on an "interval" to be perfect. Some computers will 'tick' the timer only every 14 to 15 milliseconds, some more frequently than that, some less frequently.

So:

Thread.Sleep(1);

Could take anywhere from 1 to 30 milliseconds to run.

Instead, if you need a more precise timer, capture the DateTime when you begin; then, in your timer callback, subtract that start time from DateTime.Now to decide whether it is "time to run" your code.

Here's some sample code of what you need to do:

DateTime lastRun = DateTime.Now;

Then, start your timer with a short interval (say, 1 millisecond). Then, in your callback:

if (DateTime.Now.Subtract(lastRun).TotalSeconds >= 3)
{
    lastRun = DateTime.Now;
    // This code runs every 3 seconds.
}

(Note that comparing with >= matters here: TotalSeconds is a double, so a check like `TotalSeconds % 3 == 0` would almost never be exactly true.) Because the check is anchored to the wall clock rather than to accumulated timer ticks, the error cannot build up: you can leave it running indefinitely and it will keep firing very close to every 3 seconds.

Timothy Khouri
I know it's only an example, but wouldn't you increase the timer interval to something like 100 or even 500 msec? You'd lose a bit of accuracy, but wouldn't that be better than having a timer fire 1000 times a second when you only need a result every 3 seconds?
Matt Warren
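
In the spirit of Matt's comment, the same wall-clock check works with a much coarser polling interval. A sketch (class and variable names are my own, not from the answers):

```csharp
using System;
using System.Threading;

class CoarseTimer
{
    static DateTime _lastRun = DateTime.Now;

    static void Main()
    {
        // Poll every 250 ms instead of every millisecond; the work
        // still runs about every 3 seconds, with far fewer callbacks.
        using (Timer t = new Timer(Tick, null, 0, 250))
        {
            Thread.Sleep(10000);
        }
    }

    static void Tick(object state)
    {
        // Anchor the decision to the wall clock so timer-tick error
        // cannot accumulate across callbacks.
        if ((DateTime.Now - _lastRun).TotalSeconds >= 3)
        {
            _lastRun = DateTime.Now;
            // ... work that should run roughly every 3 seconds ...
        }
    }
}
```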
+1  A: 

Timothy's explanation is compelling, but System.Threading.Timer already works this way. At least in the Rotor implementation, it uses the value returned by GetTickCount() to calculate when the next callback is due. It isn't terribly likely that the real CLR does this as well; it is more likely to use CreateTimerQueueTimer(). Nevertheless, that API function is also likely to depend on the internal tick incremented by the clock tick interrupt.

At issue is how well the internal clock tick (as returned by GetTickCount() and Environment.TickCount) stays in sync with the absolute wall time (as returned by DateTime.Now). Unfortunately, that's an implementation detail of the HAL on your machine, the Hardware Abstraction Layer. The machine builder can provide a custom HAL, one that was tweaked to work with the machine's BIOS and chipset.
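
To see how far the tick counter and the wall clock diverge on a particular machine, a quick check like this can help (a sketch of my own, not from the answer; run it for longer than 5 seconds to observe real drift):

```csharp
using System;
using System.Threading;

class DriftCheck
{
    static void Main()
    {
        // Sample both clocks at the start...
        int startTicks = Environment.TickCount;
        DateTime startWall = DateTime.Now;

        Thread.Sleep(5000);

        // ...and again later. If the two elapsed values diverge, the
        // internal tick (what the timer uses) is drifting away from
        // the wall time (what you compare against).
        double tickElapsedMs = Environment.TickCount - startTicks;
        double wallElapsedMs = (DateTime.Now - startWall).TotalMilliseconds;
        Console.WriteLine("drift: {0:F1} ms", wallElapsedMs - tickElapsedMs);
    }
}
```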

I'd expect trouble if the machine has two timing sources: a clock chip designed to track the wall time, which keeps ticking even when you unplug the machine, and another frequency available on the chipset that can be divided to provide the clock interrupt. The original IBM AT PC worked that way. The clock chip can usually be trusted; the clock would be grossly off if it weren't accurate. The chipset cannot: cutting corners on the quality of the oscillators is an easy way to save a penny.

Solve your problem by avoiding long periods on your timer and recalculating the next due-time from DateTime.Now in the callback.
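
A minimal sketch of that advice, using a one-shot timer that re-arms itself from DateTime.Now in each callback (names are mine; the millisecond overloads used here exist in .NET 2.0):

```csharp
using System;
using System.Threading;

class WallClockTimer
{
    static Timer _timer;
    static DateTime _nextDue;
    static readonly TimeSpan Period = TimeSpan.FromSeconds(3);

    static void Main()
    {
        _nextDue = DateTime.Now + Period;
        // One-shot timer: the period is Timeout.Infinite, so it fires
        // once and we re-arm it ourselves in the callback.
        _timer = new Timer(Callback, null,
                           (long)Period.TotalMilliseconds, Timeout.Infinite);
        Thread.Sleep(10000);
    }

    static void Callback(object state)
    {
        // ... the actual work goes here ...

        // Recompute the next due time from the wall clock, so any
        // tick-count drift is corrected on every callback instead of
        // accumulating over hours.
        _nextDue += Period;
        TimeSpan delay = _nextDue - DateTime.Now;
        if (delay < TimeSpan.Zero) delay = TimeSpan.Zero;
        _timer.Change((long)delay.TotalMilliseconds, Timeout.Infinite);
    }
}
```

Keeping the due times short means each callback gets a fresh chance to correct for drift, which is exactly what the long fixed intervals in the question cannot do.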

Hans Passant
Thanks for your help. Re-calculating the timers may solve my problem. I'm going to try that. Thanks once again!
Swati Ghaisas