views: 475

answers: 6

I have a program that was built in C++ (MFC, Visual Studio 6.0) several years ago and has been running on a certain Windows machine for quite some time (more than 5 years). The PC was replaced a month ago (the old one died), and since then the program's timing behavior changed. I need help understanding why.

The main functionality of the program is to respond to keystrokes by sending out ON and OFF signals to an external card, with very accurate delay between the ON and the OFF. An example program flow:

> wait for keystroke...
> ! keystroke occurred
> send ON message
> wait 150ms
> send OFF message

Different keystrokes have different waiting periods associated with them, between 20ms and 150ms (a very deterministic time that depends on the specific keystroke). The timing is very important. The waiting is executed using a simple Sleep(). On the old PC the sleep accuracy was within a 1-2ms deviation. I can measure the timing externally to the computer (on the external card), so my measurement of the sleep time is very accurate. Please take into account that this machine executed such ON-sleep-OFF cycles thousands of times a day for years, so the accuracy data I have is sound.
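To make the timing-critical path concrete, it is essentially equivalent to the following sketch (SendOnMessage/SendOffMessage are just placeholder stubs for the real calls to the card):

#include <windows.h>
#include <cstdio>

void SendOnMessage()  { printf("ON\n");  }   // stub for the real card call
void SendOffMessage() { printf("OFF\n"); }   // stub for the real card call

int main()
{
    // Timing-critical path: the delay (20-150 ms) depends on the keystroke.
    SendOnMessage();
    Sleep(150);          // the accuracy of this Sleep() is what changed between PCs
    SendOffMessage();
    return 0;
}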

Since the PC was replaced, the timing deviation has been more than 10ms.

I did not install the previous PC, so it may have had some additional software packages installed. Also, I'm ashamed to admit I don't remember whether the previous PC was Windows 2000 or Windows XP. I'm quite sure it was XP, but not 100% (and I can't check now...). The new one is Windows XP.

I tried changing the sleeping mechanism to be based on timers, but the accuracy did not improve.

Can anything explain this change? Is there a software package that may have been installed on the previous PC that may fix the problem? Is there a best practice to deal with the problem?

A: 

Sleep is dependent on the system clock. Your new machine probably has a different system clock resolution than your previous machine. From the documentation:

This function causes a thread to relinquish the remainder of its time slice and become unrunnable for an interval based on the value of dwMilliseconds. The system clock "ticks" at a constant rate. If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on. To increase the accuracy of the sleep interval, call the timeGetDevCaps function to determine the supported minimum timer resolution and the timeBeginPeriod function to set the timer resolution to its minimum. Use caution when calling timeBeginPeriod, as frequent calls can significantly affect the system clock, system power usage, and the scheduler. If you call timeBeginPeriod, call it one time early in the application and be sure to call the timeEndPeriod function at the very end of the application.

The documentation seems to imply that you can attempt to make it more accurate, but I wouldn't try that if I were you. Just use a timer.

What timers did you replace it with? If you used SetTimer(), that timer sucks too.
The correct solution is to use the higher-resolution timer-queue timers (CreateTimerQueueTimer).
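A rough sketch of a one-shot timer-queue timer (the callback body and the printf stand-ins are illustrative only, not your actual ON/OFF calls):

#include <windows.h>
#include <cstdio>

// Runs on a thread-pool thread when the timer fires.
VOID CALLBACK SendOffCallback(PVOID /*param*/, BOOLEAN /*timerFired*/)
{
    printf("OFF\n");   // stand-in for the real "send OFF message"
}

int main()
{
    printf("ON\n");    // stand-in for the real "send ON message"

    HANDLE hTimer = NULL;
    // One-shot timer: fire once after 150 ms (Period = 0).
    if (!CreateTimerQueueTimer(&hTimer, NULL, SendOffCallback, NULL,
                               150, 0, WT_EXECUTEONLYONCE))
        return 1;

    Sleep(1000);       // keep the process alive long enough for the callback

    // Waits for any in-progress callback, then deletes the timer.
    DeleteTimerQueueTimer(NULL, hTimer, INVALID_HANDLE_VALUE);
    return 0;
}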

jeffamaphone
I'm interested in the difference... Do you know what can cause Sleep to behave differently? I would rather not use timers at all...
Roee Adler
A: 

Is your new PC multi-core and the old one single-core? The difference in timing accuracy may be due to the use of multiple threads and context switching.

Larry Watanabe
The new one is indeed dual core and the old one was not. It makes sense that this will cause timing differences, but I would be surprised if efficiency went down by moving to dual-core. Do you base this on any concrete information?
Roee Adler
+2  A: 

If your main concern is precision, consider using a spinlock (busy-wait). The Sleep() function is only a hint to the scheduler not to re-schedule the given thread for at least x ms; there's no guarantee that the thread will sleep for exactly the time specified.
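A minimal busy-wait sketch built on the high-resolution performance counter might look like this (plain Win32, nothing MFC-specific; the helper name is made up):

#include <windows.h>

// Busy-wait for roughly the requested number of milliseconds.
void SpinWaitMs(double ms)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    const LONGLONG target = (LONGLONG)(ms * freq.QuadPart / 1000.0);
    do {
        QueryPerformanceCounter(&now);
    } while (now.QuadPart - start.QuadPart < target);
}

It burns a full core while waiting, so it only makes sense for short delays on a machine that has CPU to spare.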

arul
Is there a spinlock implementation in C++/MFC libraries?
Roee Adler
Good times. The CPU usage will spike, and possibly starve other applications... but such is the life of a 10ms requirement which cannot be guaranteed by a non-real-time OS.
Kieveli
@Kieveli: I only have this software running on the PC, so I can pay in CPU currency for accuracy... (still, what's the simplest implementation of spinlock in C++/MFC?)
Roee Adler
+3  A: 

The time resolution on XP is around 10ms - the system basically "ticks" every 10ms. Sleep is not a very good way to do accurate timing for that reason. I'm pretty sure Win2000 has the same resolution, but if I'm wrong that could be the reason.

You can change that resolution, at least down to 1ms - see http://technet.microsoft.com/en-us/sysinternals/bb897569.aspx or use http://www.lucashale.com/timerresolution/ - there's probably a registry key as well (Windows Media Player will change that timer too, probably only while it's running).

It could be that the resolution was somehow altered on your old machine.
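You can also request the finer resolution from your own code via the multimedia timer APIs mentioned in the MSDN quote above; a minimal sketch (link with winmm.lib), which restores the default resolution on exit:

#include <windows.h>
#include <mmsystem.h>   // timeGetDevCaps/timeBeginPeriod; link with winmm.lib
#include <cstdio>

int main()
{
    TIMECAPS tc;
    if (timeGetDevCaps(&tc, sizeof(tc)) != TIMERR_NOERROR)
        return 1;

    // Request the finest resolution the machine supports (typically 1 ms).
    timeBeginPeriod(tc.wPeriodMin);

    Sleep(20);   // now rounds to ~1 ms rather than the default 10-15 ms tick

    timeEndPeriod(tc.wPeriodMin);   // always pair with timeBeginPeriod
    return 0;
}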

nos
I downloaded Lucas Hale's software and will try it tomorrow on the set-up. I'll update, thanks.
Roee Adler
They should have renamed the function to sleepish(). It's always been iffy on how long it will actually sleep... I'm amazed your previous configuration gave you accurate results!
Kieveli
The documentation is clear about this. I don't see why you're surprised.
jeffamaphone
I ran Lucas Hale's software and it works. Many thanks. Here's the link: http://www.lucashale.com/timerresolution/
Roee Adler
+1  A: 

Usually Sleep() will result in a delay of ~15 ms, or a multiple of ~15 ms, depending on the sleep value. One of the good ways to see how it works is the following loop:

#include <windows.h>
#include <cstdio>
int main() {
    for (;;) {
        printf("%lu\n", GetTickCount());  // advances in clock-tick-sized jumps
        Sleep(1);
    }
}

It will also show that the behavior of this code differs between, say, Windows XP and Vista/Windows 7.

Eugene N.
A: 

As others have mentioned, Sleep() has coarse accuracy.

I typically use Boost.Asio for this kind of timing:

#include <boost/asio.hpp>

// Set up the io_service and deadline_timer
boost::asio::io_service io;
boost::asio::deadline_timer timer(io);

// Configure the wait period and block until the timer expires
timer.expires_from_now(boost::posix_time::millisec(5));
timer.wait();

Asio uses the most effective implementation for your platform; on Windows I believe it uses overlapped IO.

If I set the time period to 1 ms and loop the timer calls (expires_from_now/wait) 10000 times, the total duration is typically about 10005-10100 ms. Very accurate, cross-platform code (though accuracy differs on Linux) and very easy to read.
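Roughly the kind of measurement loop I mean (the clock here is only used to time the whole batch):

#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer timer(io);

    boost::posix_time::ptime start = boost::posix_time::microsec_clock::universal_time();
    for (int i = 0; i < 10000; ++i) {
        timer.expires_from_now(boost::posix_time::millisec(1));
        timer.wait();
    }
    boost::posix_time::time_duration elapsed =
        boost::posix_time::microsec_clock::universal_time() - start;
    std::cout << "10000 x 1 ms waits took "
              << elapsed.total_milliseconds() << " ms\n";
    return 0;
}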

I can't explain why your previous PC was so accurate though; Sleep has been +/- 10ms whenever I've used it - worse if the PC is busy.

MattyT