
How do I take two timestamps, t1 and t2, and get the difference between them in milliseconds in C?

+4  A: 

This will give you the current time in seconds plus microseconds:

#include <sys/time.h>

struct timeval tv;
gettimeofday(&tv, NULL);
// tv.tv_sec  -- seconds since the Epoch
// tv.tv_usec -- microseconds
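
For the millisecond difference the question asks about, a minimal sketch built on two such calls:

#include <sys/time.h>

struct timeval t1, t2;

gettimeofday(&t1, NULL);
/* ... work to be timed ... */
gettimeofday(&t2, NULL);

/* elapsed milliseconds; 1000LL keeps the arithmetic in 64 bits */
long long elapsed_ms = (t2.tv_sec - t1.tv_sec) * 1000LL
                     + (t2.tv_usec - t1.tv_usec) / 1000;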
Arkaitz Jimenez
+1  A: 

Use gettimeofday(), or better, clock_gettime().
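
A minimal sketch of the clock_gettime() approach (CLOCK_MONOTONIC is unaffected by wall-clock adjustments; older glibc versions need -lrt at link time):

#include <time.h>

struct timespec t1, t2;

clock_gettime(CLOCK_MONOTONIC, &t1);
/* ... work to be timed ... */
clock_gettime(CLOCK_MONOTONIC, &t2);

/* elapsed milliseconds */
long long elapsed_ms = (t2.tv_sec - t1.tv_sec) * 1000LL
                     + (t2.tv_nsec - t1.tv_nsec) / 1000000;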

Nikolai N Fetissov
+1  A: 

Here is a good link on timestamps in C. I hope it helps.

gmcalab
This is a C++ lib, not a C one.
Glen
I've noticed that too, it was easy to fix the link.
Kirill V. Lyadvinsky
Cool, there's a C version as well. Now, why do we need a library when the language has support for doing exactly what the OP wants?
Glen
Hmmm, I meant to point to the C version, not the C++ one. I was almost sure I pointed to this exact page... Thanks for taking away rep points, it was definitely warranted.
gmcalab
A: 

You can try the routines in the C time library (time.h). Also take a look at clock() in the same header: it returns the number of clock ticks elapsed since the program started. Save its value before the operation you want to measure, capture the clock ticks again afterwards, and the difference between the two gives you the elapsed time, as sketched below.
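
A minimal sketch of that approach (keep in mind that clock() measures processor time, so time spent sleeping or blocked may not be counted):

#include <time.h>

clock_t start = clock();
/* ... the operation you want to measure ... */
clock_t end = clock();

/* clock ticks -> milliseconds */
double elapsed_ms = (double)(end - start) * 1000.0 / CLOCKS_PER_SEC;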

deostroll
+3  A: 

Use @Arkaitz Jimenez's code to get two timevals:

#include <sys/time.h>
//...
struct timeval tv1, tv2, diff;

// get the first time:
gettimeofday(&tv1, NULL);

// do whatever it is you want to time
// ...

// get the second time:
gettimeofday(&tv2, NULL);

// get the difference:

int result = timeval_subtract(&diff, &tv2, &tv1);

// the difference (tv2 - tv1) is stored in diff now.
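
From there, a one-line conversion gives milliseconds (assuming the difference came out non-negative):

long long msec = diff.tv_sec * 1000LL + diff.tv_usec / 1000;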

Sample code for timeval_subtract can be found in the GNU libc manual:

 /* Subtract the `struct timeval' values X and Y,
    storing the result in RESULT.
    Return 1 if the difference is negative, otherwise 0.  */

 int
 timeval_subtract (result, x, y)
      struct timeval *result, *x, *y;
 {
   /* Perform the carry for the later subtraction by updating y. */
   if (x->tv_usec < y->tv_usec) {
     int nsec = (y->tv_usec - x->tv_usec) / 1000000 + 1;
     y->tv_usec -= 1000000 * nsec;
     y->tv_sec += nsec;
   }
   if (x->tv_usec - y->tv_usec > 1000000) {
     int nsec = (x->tv_usec - y->tv_usec) / 1000000;
     y->tv_usec += 1000000 * nsec;
     y->tv_sec -= nsec;
   }

   /* Compute the time remaining to wait.
      tv_usec is certainly positive. */
   result->tv_sec = x->tv_sec - y->tv_sec;
   result->tv_usec = x->tv_usec - y->tv_usec;

   /* Return 1 if result is negative. */
   return x->tv_sec < y->tv_sec;
 }
Bill
The code in timeval_subtract is evil because it modifies the input value y. It wouldn't be bad if the inputs were two struct timeval values - as opposed to pointers. But when evaluating 'x - y', you don't normally expect the computation to alter the value stored in 'y'.
Jonathan Leffler
@Jonathan, true. Though simply changing it to a pass-by-copy implementation would solve that problem.
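A minimal, untested sketch of that variant (the name timeval_subtract2 is just illustrative):

/* Like timeval_subtract, but takes x and y by value, so the borrow
   arithmetic below modifies local copies, not the caller's values.
   Computes x - y; returns 1 if the difference is negative, else 0. */
int timeval_subtract2(struct timeval *result, struct timeval x, struct timeval y)
{
  /* Perform the carry for the later subtraction by updating y. */
  if (x.tv_usec < y.tv_usec) {
    int nsec = (y.tv_usec - x.tv_usec) / 1000000 + 1;
    y.tv_usec -= 1000000 * nsec;
    y.tv_sec += nsec;
  }
  if (x.tv_usec - y.tv_usec > 1000000) {
    int nsec = (x.tv_usec - y.tv_usec) / 1000000;
    y.tv_usec += 1000000 * nsec;
    y.tv_sec -= nsec;
  }

  result->tv_sec = x.tv_sec - y.tv_sec;
  result->tv_usec = x.tv_usec - y.tv_usec;

  /* Return 1 if the result is negative. */
  return x.tv_sec < y.tv_sec;
}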
Glen
I agree. I'd fix it, but I can't double-check that my changes would compile at the moment, so I figured I'd leave it as-is.
Bill
+2  A: 

If you want to find elapsed time, this method will work as long as you don't reboot the computer between the start and end (and, strictly speaking, only for intervals under about 49.7 days, since GetTickCount() wraps a 32-bit millisecond counter).

In Windows, use GetTickCount(). Here's how:

DWORD dwStart = GetTickCount();
...
... process you want to measure elapsed time for
...
DWORD dwElapsed = GetTickCount() - dwStart;

dwElapsed is now the number of elapsed milliseconds.

In Linux, use clock() and CLOCKS_PER_SEC to do roughly the same thing, though note that clock() measures the CPU time your process has used rather than wall-clock time.

If you need timestamps that last through reboots or across PCs (which would need quite good synchronization indeed), then use the other methods (gettimeofday()).

Also, in Windows at least you can get much better than the standard time resolution. Usually, if you call GetTickCount() in a tight loop, you'll see it jump by 10-50 ms each time it changes. That's because of the time quantum used by the Windows thread scheduler, which is more or less the amount of time each thread gets to run before the scheduler switches to something else. If you do a:

timeBeginPeriod(1);

at the beginning of your program or process and a:

timeEndPeriod(1);

at the end, then the quantum will change to 1 ms, and you will get much better time resolution on the GetTickCount() call. This does make a subtle change to how your entire computer runs processes, so keep that in mind. That said, Windows Media Player and many other programs do this routinely anyway, so I don't worry too much about it.
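
Putting that together, a minimal sketch (Sleep() is just a stand-in for the work being measured; timeBeginPeriod()/timeEndPeriod() require linking against winmm.lib):

#include <stdio.h>
#include <windows.h>

int main(void)
{
    timeBeginPeriod(1);                /* request 1 ms timer resolution */

    DWORD dwStart = GetTickCount();
    Sleep(25);                         /* stand-in for the measured work */
    DWORD dwElapsed = GetTickCount() - dwStart;

    timeEndPeriod(1);                  /* restore the default resolution */

    printf("%lu ms elapsed\n", dwElapsed);  /* DWORD is unsigned long */
    return 0;
}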

There's probably some way to do the same in Linux (likely with much better control, or maybe with sub-millisecond quanta), but I haven't needed to do that in Linux yet.

darron
A: 

Standard C99:

#include <time.h>

time_t t0 = time(0);
// ...
time_t t1 = time(0);
double datetime_diff_ms = difftime(t1, t0) * 1000.;

clock_t c0 = clock();
// ...
clock_t c1 = clock();
double runtime_diff_ms = (c1 - c0) * 1000. / CLOCKS_PER_SEC;

The precision of the types is implementation-defined, i.e. the datetime difference might only resolve to full seconds.

Christoph
The datetime difference returns full seconds at best. If I interpreted the Standard correctly, time(), when it doesn't return (time_t)-1, is not guaranteed to return new values every second: it can have a resolution of 5 seconds or 1 minute for example.
pmg
@pmg: the precision is implementation-defined, e.g. on my system `time()` has a `1s` resolution; the precision of `clock()` is normally as high as possible, but it measures runtime and not datetime
Christoph
A: 

Also be aware of the interaction between clock() and usleep(): usleep() suspends the program, while clock() only measures the time the program is actually running.
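
A small demonstration of the pitfall (a sketch; usleep() comes from <unistd.h> on POSIX systems):

#include <stdio.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timeval w1, w2;

    clock_t c1 = clock();
    gettimeofday(&w1, NULL);

    usleep(500000);   /* sleep for 0.5 s */

    clock_t c2 = clock();
    gettimeofday(&w2, NULL);

    /* clock() counts CPU time, so this stays near 0 ms */
    printf("clock():        %.0f ms\n", (c2 - c1) * 1000.0 / CLOCKS_PER_SEC);

    /* gettimeofday() counts wall time, so this shows roughly 500 ms */
    printf("gettimeofday(): %lld ms\n",
           (w2.tv_sec - w1.tv_sec) * 1000LL + (w2.tv_usec - w1.tv_usec) / 1000);
    return 0;
}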

You might be better off using gettimeofday(), as mentioned in the answers above.