I'm trying to find a way to get the execution time of a section of code in C. I've already tried both time() and clock() from time.h, but time() returns seconds and clock() seems to give me milliseconds (or centiseconds?). I'd like something more precise. Is there a way to grab the time with at least microsecond precision?

This only needs to compile on Linux.

+1  A: 

You might want to google for an instrumentation tool.

Daren Thomas
+9  A: 

You want a profiler application.

Search keywords on SO and in search engines: linux profiling

Thorsten79
A profiler gives statistical information, which isn't the same as an actual measurement.
Lee-Man
+2  A: 

Try "bench.h"; it lets you put a START_TIMER; and STOP_TIMER("name"); into your code, allowing you to arbitrarily benchmark any section of code (note: only recommended for short sections, not things taking dozens of milliseconds or more). Its accurate to the clock cycle, though in some rare cases it can change how the code in between is compiled, in which case you're better off with a profiler (though profilers are generally more effort to use for specific sections of code).

It only works on x86.

Dark Shikari
Nice one; we have a similar one, with a useful addition: PERF_MARK. It lets you mark multiple points, which are stored in a static array. Our version can also save a string with each mark to make the results easier to read; the array holds 100 entries by default, but that can be changed. PERF_STOP dumps the results.
Ilya
To whoever added the note about this failing on multi-core systems: I deleted it, because it is simply incorrect. The macro automatically handles context switches and other sudden changes in RDTSC values, so no such problem exists. I use it exclusively on multi-core machines and it works fine.
Dark Shikari
+3  A: 

Have a look at gettimeofday, clock_*, or get/setitimer.
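
As a rough sketch of the getitimer/setitimer route: ITIMER_PROF counts down in process CPU time, so the drop from its starting value is the CPU time the section consumed. The starting value below is arbitrary; it just has to outlast the run, since SIGPROF terminates the process if the timer expires.

#include <stdio.h>
#include <sys/time.h>

int main(void) {
    /* One-shot timer: interval {0,0}, countdown from 100000 s (arbitrary). */
    struct itimerval start = { {0, 0}, {100000, 0} };
    struct itimerval left;

    setitimer(ITIMER_PROF, &start, NULL);  /* decrements only while this process uses CPU */

    /* ... section of code to time ... */

    getitimer(ITIMER_PROF, &left);
    double used = (start.it_value.tv_sec  - left.it_value.tv_sec)
                + (start.it_value.tv_usec - left.it_value.tv_usec) / 1e6;
    printf("CPU time used: %.6f s\n", used);
    return 0;
}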

ysth
+14  A: 

You referred to clock() and time() - were you looking for gettimeofday()? That will fill in a struct timeval, which contains seconds and microseconds.

Of course the actual resolution is up to the hardware.
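
For example, a minimal sketch of timing a section with gettimeofday() (error checking omitted):

#include <stdio.h>
#include <sys/time.h>

int main(void) {
    struct timeval start, end;

    gettimeofday(&start, NULL);
    /* ... section of code to time ... */
    gettimeofday(&end, NULL);

    long usec = (end.tv_sec - start.tv_sec) * 1000000L
              + (end.tv_usec - start.tv_usec);
    printf("Elapsed: %ld microseconds\n", usec);
    return 0;
}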

Andrew Edgecombe
+1  A: 

You won't find a library call which lets you get past the clock resolution of your platform. Either use a profiler (man gprof), as another poster suggested, or, quick and dirty, put a loop around the offending section of code to execute it many times, and use clock().
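
A sketch of that quick-and-dirty loop (the iteration count here is arbitrary; pick one large enough that the total run dwarfs the timer's resolution):

#include <stdio.h>
#include <time.h>

#define ITERATIONS 1000000L

int main(void) {
    clock_t start = clock();
    for (long i = 0; i < ITERATIONS; i++) {
        /* ... section of code to time ... */
    }
    clock_t stop = clock();

    /* Average CPU time per iteration, well below clock()'s own resolution. */
    printf("%.9f seconds per iteration\n",
           ((double)(stop - start)) / CLOCKS_PER_SEC / ITERATIONS);
    return 0;
}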

fizzer
A: 

If you are developing on x86 or x64, why not use the Time Stamp Counter (RDTSC)?

It will be more reliable than ANSI C functions like time() or clock(), as RDTSC is a single atomic instruction. Using C functions for this purpose can introduce problems, as you have no guarantee that the thread they are executing in will not be switched out; if it is, the value they return will not be an accurate description of the actual execution time you are trying to measure.

With RDTSC you can measure this better. You will need to convert the tick count back into a human-readable H:M:S format, which will depend on the processor's clock frequency, but Google around and I am sure you will find examples.

However, even with RDTSC you will be including the time your code was switched out of execution. While this is a better solution than using time()/clock(), if you need an exact measurement you will have to turn to a profiler that will instrument your code and take into account when your code is not actually executing due to context switches or whatever.
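
A minimal sketch of reading the counter with GCC inline assembly on x86 (illustrative only; on CPUs where the TSC frequency varies with power management, converting ticks to seconds is approximate):

#include <stdint.h>
#include <stdio.h>

static inline uint64_t rdtsc(void) {
    uint32_t lo, hi;
    /* RDTSC puts the low 32 bits in EAX and the high 32 bits in EDX. */
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void) {
    uint64_t start = rdtsc();
    /* ... section of code to time ... */
    uint64_t stop = rdtsc();
    printf("%llu ticks\n", (unsigned long long)(stop - start));
    return 0;
}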

QAZ
A: 

For what it's worth, here's one that's just a few macros:

#include <stdio.h>   /* printf */
#include <stdlib.h>  /* exit */
#include <time.h>    /* clock, CLOCKS_PER_SEC */
clock_t startm, stopm;
#define START if ( (startm = clock()) == (clock_t)-1 ) { printf("Error calling clock"); exit(1); }
#define STOP  if ( (stopm  = clock()) == (clock_t)-1 ) { printf("Error calling clock"); exit(1); }
#define PRINTTIME printf("%6.3f seconds used by the processor.\n", ((double)(stopm - startm)) / CLOCKS_PER_SEC);

Then just use it with:

int main(void) {
  START;
  // Do stuff you want to time
  STOP;
  PRINTTIME;
  return 0;
}

From http://ctips.pbwiki.com/Timer

PhirePhly
A: 

It's good; working very well.

A: 

It depends on the conditions. Profilers are nice for general global views, but if you really need an accurate view, my recommendation is KISS: simply run the code in a loop so that it takes a minute or so to complete, then compute a simple average from the total run time and the number of iterations executed (see the sketch after this list).

This approach allows you to:

  1. Obtain accurate results with low-resolution timers.

  2. Avoid issues where instrumentation interferes with the high-speed caches (L2, L1, branch predictors, etc.) close to the processor. Note, however, that running the same code in a tight loop can also produce optimistic results that may not reflect real-world conditions.
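
A sketch of the averaging approach; here even the one-second resolution of time() is enough, because dividing a minute-long total by the iteration count yields sub-microsecond per-iteration figures (the iteration count is a placeholder to tune):

#include <stdio.h>
#include <time.h>

int main(void) {
    const long iterations = 50000000L;  /* tune so the loop runs a minute or so */
    time_t start = time(NULL);

    for (long i = 0; i < iterations; i++) {
        /* ... code under test ... */
    }

    double total = difftime(time(NULL), start);
    printf("%g seconds total, %.9f seconds per iteration\n",
           total, total / iterations);
    return 0;
}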

Einstein
A: 

I don't know which environment/OS you are working on, but your timing may be inaccurate if another thread, task, or process preempts your timed code in the middle. I suggest exploring mechanisms such as mutexes or semaphores to prevent other threads from preempting your process.

ivanpro
A: 

gettimeofday() provides you with a resolution of microseconds, whereas clock_gettime() provides you with a resolution of nanoseconds.

int clock_gettime(clockid_t clk_id, struct timespec *tp);

The clk_id argument identifies the clock to be used. Use CLOCK_REALTIME if you want a system-wide clock visible to all processes, CLOCK_PROCESS_CPUTIME_ID for a per-process timer, and CLOCK_THREAD_CPUTIME_ID for a thread-specific timer.
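
A minimal sketch (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec start, stop;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
    /* ... section of code to time ... */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &stop);

    long long ns = (stop.tv_sec - start.tv_sec) * 1000000000LL
                 + (stop.tv_nsec - start.tv_nsec);
    printf("Elapsed: %lld ns of CPU time\n", ns);
    return 0;
}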

krakit