tags:

views: 106

answers: 2

I have a buffer containing a UTC timestamp in C, and I broadcast that buffer every ten seconds. The problem is that the time difference between two packets is not consistent: after 5 to 10 iterations the difference becomes 9, then 11, and then 10 again. Kindly help me sort out this problem.

I am using <time.h> for UTC time.

+3  A: 

If your time stamp has only 1 second resolution then there will always be +/- 1 uncertainty in the least significant digit (i.e. +/- 1 second in this case).

Clarification: if you only have a resolution of 1 second then your time values are quantized. The real time, t, represented by such a quantized value has a range of t..t+0.9999. If you take the difference of two such times, t0 and t1, then the maximum error in t1-t0 is -0.999..+0.999, which when quantized is +/-1 second. So in your case you would expect to see difference values in the range 9..11 seconds.
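For illustration, here is a minimal sketch of this quantization effect. The sub-second send times below are hypothetical, but note that the real gaps are all roughly 10 seconds while the 1-second timestamps differ by 10, 11 and 9:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical real (sub-second) send times, each roughly 10 s apart. */
    double real[] = { 100.98, 110.99, 121.00, 130.99 };

    for (int i = 1; i < 4; i++) {
        /* A 1-second-resolution timestamp only keeps the whole seconds. */
        int stamp_prev = (int)floor(real[i - 1]);
        int stamp_curr = (int)floor(real[i]);
        printf("real gap %.2f s -> stamped gap %d s\n",
               real[i] - real[i - 1], stamp_curr - stamp_prev);
    }
    return 0;
}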

Paul R
@Paul R, kindly explain what you mean by 1 second resolution? And how can I solve this problem?
Arman
@Arman: see above for further explanation.
Paul R
+1  A: 

A thread that sleeps for X milliseconds is not guaranteed to sleep for precisely that many milliseconds. I am assuming that you have a statement that goes something like:

while(1) {
  ...
  sleep(10); // Sleep for 10 seconds.
  // fetch timestamp and send
}

You will get a more accurate gauge of time if you sleep for shorter periods (say 20 milliseconds) in a loop, checking each time whether the interval has expired. When you sleep for 10 seconds, your thread gets moved further out of the immediate scheduling priority of the underlying OS.

You might also take into account that the time taken to send the timestamps may vary, depending on network conditions, etc. If you do a sleep(10) -> send -> sleep(10) type of loop, the time taken to send will be added onto the next sleep(10) in real terms.

Try something like this (forgive me, my C is a little rusty):

#include <stdbool.h>
#include <time.h>
#include <unistd.h>

bool expired = false;
double last, current;
double t1, t2;
double difference = 0;

while(1) {
   ...
   last = (double)clock();
   while(!expired) {
      usleep(20000); // sleep for 20 milliseconds (usleep takes microseconds)
      current = (double)clock();
      // Note: clock() is wall-clock time on Windows but CPU time on POSIX,
      // where clock_gettime(CLOCK_MONOTONIC, ...) is the better choice.
      if(((current - last) / (double)CLOCKS_PER_SEC) >= (10.0 - difference))
        expired = true;
   }
   t1 = (double)clock();
   // Set and send the timestamp.
   t2 = (double)clock();
   //
   // Calculate how long it took to send the stamps
   // and take that away from the next sleep cycle.
   //
   difference = (t2 - t1) / (double)CLOCKS_PER_SEC;
   expired = false;
 }

If you are not tied to the standard C library, you could look at the high-resolution timer functionality of Windows, such as the QueryPerformanceFrequency/QueryPerformanceCounter functions.

#include <windows.h>

LARGE_INTEGER freq;
LARGE_INTEGER t2, t1;
//
// Get the resolution (ticks per second) of the timer.
//
QueryPerformanceFrequency(&freq);

// Start task.
QueryPerformanceCounter(&t1);

... Do something ....

QueryPerformanceCounter(&t2);

// Very accurate duration in seconds.
double duration = (double)(t2.QuadPart - t1.QuadPart) / (double)freq.QuadPart;
Adrian Regan
P.S. You might want to use something more accurate than difftime, as its resolution is 1 second; something like clock() with the CLOCKS_PER_SEC constant.
Adrian Regan
@Adrian, is CLOCKS_PER_SEC available on Windows?
Arman
Yes, it's part of the C standard library: CLOCKS_PER_SEC. To get a more accurate time reading: double t = (double)clock() / (double)CLOCKS_PER_SEC;
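A minimal, self-contained sketch of that suggestion (the timed work here is just a placeholder):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();

    /* ... the work being timed goes here ... */

    clock_t end = clock();

    /* Elapsed clock ticks converted to seconds. */
    double seconds = (double)(end - start) / (double)CLOCKS_PER_SEC;
    printf("elapsed: %f s\n", seconds);
    return 0;
}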
Adrian Regan
I've edited the post to reflect the clock() way of doing things, and added the possibility of using the Windows QueryPerformance... functions.
Adrian Regan