Hello, currently I'm getting the execution wall time of my program in seconds by calling:

#include <time.h>

time_t startTime = time(NULL);
//section of code
time_t endTime = time(NULL);
double duration = difftime(endTime, startTime);

Is it possible to get the wall time in milliseconds? If so, how? Thanks in advance! alex

+6  A: 

If you're on a POSIX-ish machine, use gettimeofday() instead; that gives you reasonable portability and microsecond resolution.
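
For example, a minimal sketch of millisecond timing with gettimeofday() might look like this (the variable names are only illustrative):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    /* section of code */
    gettimeofday(&end, NULL);

    double elapsed_ms = (end.tv_sec  - start.tv_sec)  * 1000.0
                      + (end.tv_usec - start.tv_usec) / 1000.0;
    printf("elapsed: %.3f ms\n", elapsed_ms);
    return 0;
}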

Slightly more esoteric, but also in POSIX, is the clock_gettime() function, which gives you nanosecond resolution.
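
Again as a sketch, the same measurement with clock_gettime() (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* section of code */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed_ms = (end.tv_sec  - start.tv_sec)  * 1000.0
                      + (end.tv_nsec - start.tv_nsec) / 1.0e6;
    printf("elapsed: %.3f ms\n", elapsed_ms);
    return 0;
}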

On many systems, you will find a function ftime() that actually returns you the time in seconds and milliseconds. However, it is no longer in the Single Unix Specification (roughly the same as POSIX). You need the header <sys/timeb.h>:

#include <stdio.h>
#include <sys/timeb.h>

struct timeb mt;
if (ftime(&mt) == 0)
{
    printf("%ld seconds, %u milliseconds\n", (long)mt.time, (unsigned)mt.millitm);
}

This dates back to Version 7 (or 7th Edition) Unix at least, so it has been very widely available.

I also have notes in my sub-second timer code on times() and clock(), which use yet other structures and headers. I also have notes about Windows using clock() with 1000 clock ticks per second (millisecond timing), and about an older interface, GetTickCount(), which is noted as necessary on Windows 95 but not on NT.

Jonathan Leffler
A: 

On Windows, use QueryPerformanceCounter and the associated QueryPerformanceFrequency. They don't give you a time that is translatable to calendar time, so if you need that, ask for the time using a CRT API and then immediately call QueryPerformanceCounter. You can then do some simple addition/subtraction to calculate the calendar time, with some error due to the time it takes to execute the APIs consecutively. Hey, it's a PC, what did you expect???
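
As a rough sketch (error handling omitted), the usual pattern is:

#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;

    QueryPerformanceFrequency(&freq);   /* counter ticks per second */
    QueryPerformanceCounter(&start);
    /* section of code */
    QueryPerformanceCounter(&end);

    double elapsed_ms = (double)(end.QuadPart - start.QuadPart) * 1000.0
                      / (double)freq.QuadPart;
    printf("elapsed: %.3f ms\n", elapsed_ms);
    return 0;
}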

David Gladfelter
-1 for flaming the PC platform
Charlie Somerville
QueryPerformanceCounter does not work reliably under CPU speed-switching modes, which are so popular nowadays. The underlying performance counter frequency changes rapidly, and it can only be used for small measurements.
Pavel Radzivilovsky
@Charlie, that's ridiculous. It is not a flame to point out that a non-real-time OS can't do accurate time measurements of events.
David Gladfelter
@Pavel: Your comment is inconsistent with the OS documentation, which states that the performance counter frequency is constant on a running OS. There have been defects that have caused the behavior you describe, but they've been patched. Please see MSDN: http://msdn.microsoft.com/en-us/library/ms644905%28VS.85%29.aspx
David Gladfelter
@David: Seen the doc, interesting. It looks like there are Intel CPUs that let this work despite speed switches (most HALs forward the call to a CPU instruction that queries the clock counter). I do not see how this is possible on other CPUs.
Pavel Radzivilovsky
@David PC =/= Windows
Charlie Somerville
A: 

The open-source GLib library has a GTimer system that claims to provide microsecond accuracy. That library is available on Mac OS X, Windows, and Linux. I'm currently using it to do performance timings on Linux, and it seems to work perfectly.
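
A minimal sketch with GTimer (assuming GLib is installed and the file is built with the flags from pkg-config --cflags --libs glib-2.0):

#include <stdio.h>
#include <glib.h>

int main(void)
{
    GTimer *timer = g_timer_new();   /* the timer starts running immediately */

    /* section of code */

    gdouble seconds = g_timer_elapsed(timer, NULL);
    printf("elapsed: %.3f ms\n", seconds * 1000.0);

    g_timer_destroy(timer);
    return 0;
}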

Bob Murphy
+1  A: 

If you can do this outside of the program itself, on Linux you can use the time command (time ./my_program).
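
Typical output looks something like this (the exact format differs between the bash built-in time and /usr/bin/time, and the numbers here are only illustrative):

$ time ./my_program

real    0m1.234s
user    0m1.180s
sys     0m0.040s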

rascher
+2  A: 

I recently wrote a blog post that explains how to obtain the time in milliseconds cross-platform.

It works like time(NULL), but returns the number of milliseconds since the Unix epoch instead of seconds, on both Windows and Linux.

Here is the code:

#if defined(_WIN32) || defined(WIN32)
#include <Windows.h>
#else
#include <sys/time.h>
#include <ctime>
#endif

#include <stdint.h>

/* Returns the number of milliseconds elapsed since the UNIX epoch. Works on both
 * Windows and Linux. */

int64_t GetTimeMs64()
{
#if defined(_WIN32) || defined(WIN32)
 /* Windows */
 FILETIME ft;
 LARGE_INTEGER li;
 uint64_t ret;

 /* Get the number of 100-nanosecond intervals elapsed since January 1, 1601 (UTC)
  * and copy it to a LARGE_INTEGER structure. */
 GetSystemTimeAsFileTime(&ft);
 li.LowPart = ft.dwLowDateTime;
 li.HighPart = ft.dwHighDateTime;

 ret = li.QuadPart;
 ret -= 116444736000000000ULL; /* Convert from the FILETIME epoch (1601) to the UNIX epoch (1970). */
 ret /= 10000;                 /* From 100-nanosecond (10^-7) to millisecond (10^-3) intervals. */

 return ret;
#else
 /* Linux */
 struct timeval tv;
 uint64_t ret;

 gettimeofday(&tv, NULL);

 ret = tv.tv_usec;
 /* Convert from microseconds (10^-6) to milliseconds (10^-3). */
 ret /= 1000;

 /* Add the seconds (10^0) after converting them to milliseconds (10^-3);
  * cast first so the multiplication cannot overflow a 32-bit time_t. */
 ret += (uint64_t)tv.tv_sec * 1000;

 return ret;
#endif
}

You can modify it to return microseconds instead of milliseconds if you want.
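
For example, a minimal (illustrative) way to time a section of code with it:

int64_t start = GetTimeMs64();
/* section of code */
int64_t end = GetTimeMs64();
printf("elapsed: %lld ms\n", (long long)(end - start));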

Andreas Bonini
A: 

gprof, which is part of the GNU toolkit, is an option. Most POSIX systems will have it installed, and it's available under Cygwin for Windows. Tracking the time yourself using gettimeofday() works fine, but it's the performance equivalent of using print statements for debugging. It's good if you just want a quick and dirty solution, but it's not quite as elegant as using proper tools.

To use gprof, you must specify the -pg option when compiling with gcc as in:

gcc -o prg source.c -pg

Then you can run gprof on the generated program as follows:

gprof prg > gprof.out

By default, gprof reports the overall runtime of your program, as well as the amount of time spent in each function, the number of times each function was called, the average time per call, and similar information.

There are a large number of options you can set with gprof. If you're interested, there is more information in the man pages or through Google.

Swiss