I'm currently using an explicit cast to unsigned long long and %llu to print it, but since size_t has the %zu specifier, why doesn't clock_t have one? There isn't even a macro for it. Maybe I can assume that on an x64 system (OS and CPU) size_t is 8 bytes long (and even in this case, they have provided %zu), but what about clock_t?
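For reference, a minimal sketch of the cast-and-print approach described above (the variable name is just illustrative):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t t = clock();
    /* Explicit cast, because there is no dedicated conversion
       specifier (or macro) for clock_t. */
    printf("%llu\n", (unsigned long long)t);
    return 0;
}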
It's probably because clock ticks are not a very well-defined unit. You can convert them to seconds and print the result as a double:
double time_in_seconds = (double)time_in_clock_ticks / (double)CLOCKS_PER_SEC;
printf("%g seconds", time_in_seconds);
The CLOCKS_PER_SEC macro expands to an expression representing the number of clock ticks in a second.
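A self-contained sketch of this approach; the loop is only a placeholder workload so that clock() has something to measure:

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();
    for (volatile long i = 0; i < 100000000L; i++)  /* placeholder workload */
        ;
    clock_t end = clock();

    /* Convert the tick difference to seconds using CLOCKS_PER_SEC. */
    double seconds = (double)(end - start) / (double)CLOCKS_PER_SEC;
    printf("%g seconds\n", seconds);
    return 0;
}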
As far as I know, the way you're doing it is the best, except that clock_t may be a real-floating type rather than an integer:

time_t and clock_t shall be integer or real-floating types.
http://www.opengroup.org/onlinepubs/009695399/basedefs/sys/types.h.html
The C standard has to accommodate a wide variety of architectures, which makes it impossible to give any further guarantee beyond the fact that the internal clock type is arithmetic.
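One practical consequence: since clock_t is only guaranteed to be an arithmetic type, casting to double before printing works whether the underlying type is integer or real-floating, whereas an integer cast is only safe if clock_t happens to be an integer type. A minimal sketch of that idea:

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t t = clock();
    /* double can hold the value whether clock_t is integer or floating */
    printf("%f raw clock ticks\n", (double)t);
    return 0;
}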
In most cases, you're interested in time intervals, so I'd convert the difference in clock ticks to milliseconds. An unsigned long is large enough to represent an interval of nearly 50 days even if it's 32 bits, so it should be large enough for most cases:
clock_t start = clock();
/* ... code being timed ... */
clock_t end = clock();
unsigned long millis = (unsigned long)((end - start) * 1000 / CLOCKS_PER_SEC);
One way is to use the gettimeofday function; the difference between two timestamps can be computed with a helper like this:
#include <stdio.h>
#include <sys/time.h>

/* Returns the elapsed time in microseconds between first and second. */
unsigned long diff(struct timeval second, struct timeval first)
{
    struct timeval lapsed;
    unsigned long t;

    /* Borrow a second if the microsecond field would go negative. */
    if (first.tv_usec > second.tv_usec) {
        second.tv_usec += 1000000;
        second.tv_sec--;
    }
    lapsed.tv_usec = second.tv_usec - first.tv_usec;
    lapsed.tv_sec = second.tv_sec - first.tv_sec;

    t = lapsed.tv_sec * 1000000 + lapsed.tv_usec;
    printf("%ld,%ld - %ld,%ld = %ld,%ld\n",
           (long)second.tv_sec, (long)second.tv_usec,
           (long)first.tv_sec, (long)first.tv_usec,
           (long)lapsed.tv_sec, (long)lapsed.tv_usec);
    return t;
}
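For completeness, a hedged sketch of how the helper above might be called (error checking on gettimeofday is omitted; it assumes the diff() function is defined in the same file):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval before, after;

    gettimeofday(&before, NULL);
    /* ... work to be timed ... */
    gettimeofday(&after, NULL);

    unsigned long usec = diff(after, before);
    printf("elapsed: %lu microseconds\n", usec);
    return 0;
}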