views:

120

answers:

8

I have a machine which uses an NTP client to sync up to internet time, so its system clock should be fairly accurate.

I've got an application which I'm developing which logs data in real time, processes it and then passes it on. What I'd like to do now is output that data every N milliseconds, aligned with the system clock. So for example, if I wanted to do 20ms intervals, my outputs ought to be something like this:

13:15:05:000
13:15:05:020
13:15:05:040
13:15:05:060

I've seen suggestions for using the Stopwatch class, but that only measures time spans, as opposed to looking for specific time stamps. The code to do this is running in its own thread, so it shouldn't be a problem if I need to make some relatively blocking calls.

Any suggestions on how to achieve this to a reasonable precision (close to or better than 1ms would be nice) would be very gratefully received.

+1  A: 

Your best bet is using inline assembly and writing this chunk of code as a device driver.

That way:

  • You have control over instruction count
  • Your application will have execution priority
George Edison
If you needed microsecond accuracy, yes. Millisecond accuracy is achievable in user mode.
Ben Voigt
As stated in the question "close to or better than 1ms precision would be nice"
George Edison
1ms = 1000 microseconds
Ben Voigt
+2  A: 

Don't know how well it plays with C++/CLI, but you probably want to look at multimedia timers.
Windows isn't really real-time, but this is as close as it gets.

Martin Beckett
Only the kernel really gets a say on what executes and when.
George Edison
But 50fps assuming they aren't writing to disk should be possible on a decent machine without too much extra going on.
Martin Beckett
Buffering can solve any disk-writing problems.
George Edison
IDE disks can interrupt for a very long time even if you are only writing to their cache. We had to switch to SCSI for video apps to avoid dropped frames - though I don't know how bad SATA is.
Martin Beckett
A: 

CreateWaitableTimer/SetWaitableTimer and a high-priority thread should be accurate to about 1ms. I don't know why the millisecond field in your example output has four digits, the max value is 999 (since 1000 ms = 1 second).

Ben Voigt
Good spot on the four digits. Just a typo :-)
Jon Cage
+1  A: 

Ultimately you can't guarantee what you want because the operating system has to honour requests from other processes to run, meaning that something else can always be busy at exactly the moment that you want your process to be running. But you can improve matters using timeBeginPeriod to make it more likely that your process can be switched to in a timely manner, and perhaps being cunning with how you wait between iterations - eg. sleeping for most but not all of the time and then using a busy-loop for the remainder.

Kylotan
I'd been thinking along the same lines
Jon Cage
+1  A: 

Try doing this in two threads. In one thread, use something like this to query a high-precision timer in a loop. When you detect a timestamp that aligns to (or is reasonably close to) a 20ms boundary, send a signal to your log output thread along with the timestamp to use. Your log output thread would simply wait for a signal, then grab the passed-in timestamp and output whatever is needed. Keeping the two in separate threads will make sure that your log output thread doesn't interfere with the timer (this is essentially emulating a hardware timer interrupt, which would be the way I would do it on an embedded platform).

bta
A: 

Since, as you said, this doesn't have to be perfect, there are some things that can be done.

As far as I know, there doesn't exist a timer that syncs with a specific time. So you will have to compute your next time and schedule the timer for that specific time. If your timer only has delta support, then that is easily computed, but it adds more error, since you could easily be kicked off the CPU between the time you compute your delta and the time the timer is entered into the kernel.

As already pointed out, Windows is not a real-time OS. So you must assume that even if you schedule a timer to go off at ":0010", your code might not actually execute until well after that time (for example, ":0540"). As long as you properly handle those issues, things will be "ok".

Torlack
Waitable timers do indeed let you set an alarm time, not just a delta.
Ben Voigt
A: 

20ms is approximately the length of a time slice on Windows. There is no way to hit 1ms timings in Windows reliably without some sort of RT add-on like INtime. In Windows proper, I think your options are WaitForSingleObject, SleepEx, and a busy loop.

stonemetal
But quantum != timer interrupt rate. A timer in a high priority thread has the approximate precision of the timer interrupt (so long as even higher priority tasks aren't active), which is considerably better than the rate at which round-robin context switching is done.
Ben Voigt
+2  A: 

You can get a pretty accurate time stamp out of timeGetTime() when you reduce the timer period. You'll just need some work to get its return value converted to a clock time. This sample C# code shows the approach:

using System;
using System.Runtime.InteropServices;

class Program {
    static void Main(string[] args) {
        timeBeginPeriod(1);     // request a 1 msec timer resolution
        uint tick0 = timeGetTime();
        var startDate = DateTime.Now;
        uint tick1 = tick0;
        for (int ix = 0; ix < 20; ++ix) {
            uint tick2 = 0;
            do {  // Burn 20 msec
                tick2 = timeGetTime();
            } while (tick2 - tick1 < 20);
            var currDate = startDate.Add(new TimeSpan((tick2 - tick0) * 10000));  // msec -> 100-nsec ticks
            Console.WriteLine(currDate.ToString("HH:mm:ss:ffff"));
            tick1 = tick2;
        }
        timeEndPeriod(1);       // restore the default timer resolution
        Console.ReadLine();
    }
    [DllImport("winmm.dll")]
    private static extern int timeBeginPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern int timeEndPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern uint timeGetTime();
}

On second thought, this is just measurement. To get an action performed periodically, you'll have to use timeSetEvent(). As long as you use timeBeginPeriod(), you can get the callback period pretty close to 1 msec. One nicety is that it will automatically compensate when the previous callback was late for any reason.

Hans Passant