How can you measure the amount of time a function will take to execute?

This is a relatively short function and the execution time would probably be in the millisecond range.

This particular question relates to an embedded system, programmed in C or C++.

+1  A: 
start_time = timer;               /* snapshot a free-running timer */
function();
exec_time = timer - start_time;   /* elapsed timer ticks */
Galen
A: 

This is basically impossible to answer without more details. Which platform are you developing on? Which language? Etc...

senfo
I hope you'd put these in the comments instead of in an answer ;)
Jon Limjap
+2  A: 

Invoke it in a loop with a ton of invocations, then divide by the number of invocations to get the average time.

so:

// begin timing
for (int i = 0; i < 10000; i++) {
    invokeFunction();
}
// end time
// divide by 10000 to get actual time.
Mike Stone
Note: I've only ever done this when the time it takes to invoke the function is bigger than the granularity of the machine's clock... which is usually pretty obvious when all the results are either 0 or 15 milliseconds.
Mike Stone
The only problem is, you also have to account for the loop overhead if you're serious about getting an accurate timing.
Ates Goral
Loop overhead is pretty minimal; to compensate for it, you could simply time an empty loop and then subtract that time from the function time.
Aaron
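Combining the loop-averaging idea with Aaron's empty-loop baseline, here is a minimal C sketch; invokeFunction, the clock() timer, and the iteration count are illustrative assumptions, not anything from the original answer:

#include <stdio.h>
#include <time.h>

/* Stand-in for the function being measured (hypothetical). */
static void invokeFunction(void)
{
    for (volatile int k = 0; k < 1000; k++)
        ;
}

int main(void)
{
    enum { N = 10000 };

    /* Time N calls of the function. */
    clock_t begin = clock();
    for (int i = 0; i < N; i++)
        invokeFunction();
    clock_t with_calls = clock() - begin;

    /* Time an empty loop to estimate the loop overhead;
       the volatile counter keeps it from being optimized away. */
    begin = clock();
    for (volatile int i = 0; i < N; i++)
        ;
    clock_t overhead = clock() - begin;

    printf("average per call: %f s\n",
           (double)(with_calls - overhead) / N / CLOCKS_PER_SEC);
    return 0;
}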
+6  A: 

The best way to do that on an embedded system is to set an external hardware pin when you enter the function and clear it when you leave the function. This is done preferably with a couple of assembly instructions so you don't skew your results too much.

Edit: One of the benefits is that you can do it in your actual application and you don't need any special test code. External debug pins like that are (should be!) standard practice for every embedded system.
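As a sketch of the idea (the register address, pin mask, and function_under_test below are hypothetical placeholders; check your part's datasheet for the real GPIO registers):

/* Hypothetical memory-mapped GPIO output register. */
#define DEBUG_PORT (*(volatile unsigned char *)0x40001000u)
#define DEBUG_PIN  (1u << 3)

void function_under_test(void);

void timed_call(void)
{
    DEBUG_PORT |= DEBUG_PIN;    /* pin high: rising edge starts the pulse */
    function_under_test();
    DEBUG_PORT &= ~DEBUG_PIN;   /* pin low: pulse width = execution time */
}

Trigger the scope or logic analyzer on the rising edge; the pulse width is the execution time.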

cschol
Amen Brother! Of course, the presumption is that the programmer is handy with hardware...not always a safe assumption :-)
Benoit
"Handy with hardware" is the "embedded" part in "Embedded Software Development". That better be a safe assumption. :)
cschol
Embedded Linux...need I say more?
Benoit
A: 

If you're looking for sub-millisecond resolution, try one of these timing methods. They'll all get you resolution in at least the tens or hundreds of microseconds:

If it's embedded Linux, look at Linux timers:

http://linux.die.net/man/3/clock_gettime

For embedded Java, look at nanoTime(), though I'm not sure it's available in the embedded edition:

http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#nanoTime()

If you want to get at the hardware counters, try PAPI:

http://icl.cs.utk.edu/papi/

Otherwise you can always go to assembler. You could look at the PAPI source for your architecture if you need some help with this.
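For the clock_gettime route above, a minimal sketch on embedded Linux (function_under_test is a placeholder; older glibc needs -lrt at link time):

#include <stdio.h>
#include <time.h>

/* Stand-in for the code being timed (hypothetical). */
static void function_under_test(void)
{
    for (volatile int i = 0; i < 1000000; i++)
        ;
}

/* Current monotonic time in nanoseconds. */
static long long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
    long long start = now_ns();
    function_under_test();
    printf("elapsed: %lld ns\n", now_ns() - start);
    return 0;
}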

tgamblin
A: 

In OS X terminal (and probably Unix, too), use "time":

time python function.py
stalepretzel
You are correct; it's a shell built-in command.
Bernard
A: 

If the code is .NET, use the Stopwatch class (.NET 2.0+), NOT DateTime.Now. DateTime.Now isn't updated accurately enough and will give you crazy results.

+2  A: 

If you're using Linux, you can time a program's run time by typing at the command line:

time [program_name]

If you run only the function in main() (assuming C++), the rest of the app's time should be negligible.

joe
+7  A: 

There are three potential solutions:

Hardware Solution:

Use a free output pin on the processor and hook an oscilloscope or logic analyzer to the pin. Initialize the pin to a low state. Just before calling the function you want to measure, assert the pin to a high state, and just after returning from the function, deassert the pin.


    *io_pin = 1;   /* assert the debug pin just before the call */
    myfunc();
    *io_pin = 0;   /* deassert just after; the pulse width is the execution time */

Bookworm solution:

If the function is fairly small and you can manage the disassembled code, you can crack open the processor architecture databook and count the cycles it will take the processor to execute every instruction. This will give you the number of cycles required.
Time = # cycles / processor clock frequency

This is easier to do for smaller functions, or code written in assembler (for a PIC microcontroller for example)

Timestamp counter solution:

Some processors have a timestamp counter which increments at a rapid rate (every few processor clock ticks). Simply read the timestamp before and after the function. This will give you the elapsed time, but beware that you might have to deal with the counter rollover.
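As one concrete example, on x86 the timestamp counter can be read with the __rdtsc() intrinsic (GCC/Clang; myfunc is a placeholder for the function being measured):

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() on GCC/Clang */

static void myfunc(void)
{
    /* code being timed (placeholder) */
}

int main(void)
{
    uint64_t before = __rdtsc();   /* read the timestamp counter */
    myfunc();
    uint64_t after = __rdtsc();

    /* Unsigned subtraction stays correct across a single counter
       rollover; divide by the TSC frequency to convert to seconds. */
    printf("elapsed cycles: %llu\n", (unsigned long long)(after - before));
    return 0;
}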

Benoit
+1  A: 

Windows XP/NT Embedded or Windows CE/Mobile

You can use QueryPerformanceCounter() to get the value of a VERY FAST counter before and after your function. Then you subtract those 64-bit values to get a delta in "ticks". Using QueryPerformanceFrequency() you can convert the delta ticks to an actual time unit. You can refer to the MSDN documentation about those Win32 calls.
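A minimal sketch of that approach (myfunc is a placeholder for the function being measured):

#include <windows.h>
#include <stdio.h>

static void myfunc(void)
{
    /* code being timed (placeholder) */
}

int main(void)
{
    LARGE_INTEGER freq, before, after;
    QueryPerformanceFrequency(&freq);   /* counter ticks per second */

    QueryPerformanceCounter(&before);
    myfunc();
    QueryPerformanceCounter(&after);

    double seconds = (double)(after.QuadPart - before.QuadPart)
                     / (double)freq.QuadPart;
    printf("elapsed: %f s\n", seconds);
    return 0;
}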

Other embedded systems

Without operating systems or with only basic OSes you will have to:

  • program one of the internal CPU timers to run and count freely.
  • configure it to generate an interrupt when the timer overflows, and in the interrupt routine increment a "carry" variable (this is so you can actually measure times longer than the resolution of the chosen timer).
  • before your function, save BOTH the "carry" value and the value of the CPU register holding the running ticks for the timer you configured.
  • do the same after your function.
  • subtract the two to get a delta in counter ticks.
  • from there it is just a matter of knowing how long a tick lasts on your CPU/hardware, given the external clock and the prescaler you configured while setting up your timer; you multiply that "tick length" by the "delta ticks" you just got (see the sketch after the notes below).

VERY IMPORTANT Do not forget to disable interrupts before, and restore them after, reading those timer values (both the carry and the register value); otherwise you risk saving inconsistent values.

NOTES

  • This is very fast because it takes only a few assembly instructions to disable interrupts, save two integer values and re-enable interrupts. The actual subtraction and conversion to real time units occurs OUTSIDE the zone of time measurement, that is, AFTER your function.
  • You may wish to put that code into a function so you can reuse it all around, but it may slow things down a bit because of the function call and the pushing of all the registers to the stack, plus the parameters, then popping them again. In an embedded system this may be significant. In C it may be better to use MACROS instead, or to write your own assembly routine saving/restoring only the relevant registers.
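A minimal sketch of the free-running-timer approach described above, assuming a 16-bit timer; TIMER_COUNT, disable_interrupts(), and restore_interrupts() are placeholders for your hardware registers and toolchain intrinsics:

/* All hardware names here are placeholders for your platform. */
#define TIMER_COUNT        (*(volatile unsigned int *)0x40002000u)
extern void disable_interrupts(void);
extern void restore_interrupts(void);

volatile unsigned long timer_carry;   /* incremented by the overflow ISR */

typedef struct {
    unsigned long carry;
    unsigned int  ticks;
} timestamp_t;

static timestamp_t capture(void)
{
    timestamp_t t;
    disable_interrupts();      /* keep carry and counter consistent */
    t.carry = timer_carry;
    t.ticks = TIMER_COUNT;
    restore_interrupts();
    return t;
}

/* Delta in timer ticks, assuming a 16-bit timer (65536 ticks per carry). */
static unsigned long delta_ticks(timestamp_t before, timestamp_t after)
{
    return (after.carry - before.carry) * 65536UL
           + after.ticks - before.ticks;
}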
Philibert Perusse
A: 

Depends on your embedded platform and what type of timing you are looking for. For embedded Linux, there are several ways you can accomplish this. If you wish to measure the amount of CPU time used by your function, you can do the following:

#include <time.h>
#include <stdio.h>
#include <stdlib.h>

#define SEC_TO_NSEC(s) ((s) * 1000 * 1000 * 1000)

int work_function(int c) {
    // do some work here
    int i, j;
    int foo = c;
    for (i = 0; i < 1000; i++) {
        for (j = 0; j < 1000; j++) {
            foo ^= i + j;
        }
    }
    return foo;
}

int main(int argc, char *argv[]) {
    struct timespec pre;
    struct timespec post;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &pre);
    work_function(0);
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &post);

    printf("time %d\n",
        (SEC_TO_NSEC(post.tv_sec) + post.tv_nsec) -
        (SEC_TO_NSEC(pre.tv_sec) + pre.tv_nsec));
    return 0;
}

You will need to link this with the realtime library; just use the following to compile your code:

gcc -o test test.c -lrt

You may also want to read the man page on clock_gettime; there are some issues with running this code on an SMP-based system that could invalidate your testing. You could use something like sched_setaffinity() or the command-line cpuset to force the code onto only one core.

If you are looking to measure user and system time, then you could use times(), which returns something like jiffies. Or you can change the parameter for clock_gettime() from CLOCK_THREAD_CPUTIME_ID to CLOCK_MONOTONIC... but be careful of wraparound with CLOCK_MONOTONIC.

For other platforms, you are on your own.

Drew Frezell
+1  A: 

I always implement an interrupt-driven ticker routine. This routine updates a counter that counts the number of milliseconds since start-up. The counter is then accessed with a GetTickCount() function.

Example:

#define TICK_INTERVAL 1    // milliseconds between ticker interrupts
static unsigned long tickCounter;

interrupt ticker (void)  
{
    tickCounter += TICK_INTERVAL;
    ...
}

unsigned long GetTickCount(void)
{
    return tickCounter;
}

In your code you would time the code as follows:

void function(void)
{
    unsigned long start = GetTickCount();

    // do something ...

    printf("Time is %lu", GetTickCount() - start);
}
selwyn
This is good if the resolution of the Tick count is a good enough approximation. If you really need to know the exact amount of time a function takes, it might not work well because of the jitter.
Benoit
+1  A: 

I repeat the function call a lot of times (millions) but also employ the following method to discount the loop overhead:

start = getTicks();

for (i = 0; i < n; i++) {
    myFunction();
    myFunction();
}

lap = getTicks();

for (i = 0; i < n; i++) {
    myFunction();
}

finish = getTicks();

// overhead + function + function
elapsed1 = lap - start;

// overhead + function
elapsed2 = finish - lap;

// (overhead + function + function) - (overhead + function) = function
ntimes = elapsed1 - elapsed2;

once = ntimes / n; // Average time for one function call, sans loop overhead

Instead of calling myFunction() twice in the first loop and once in the second loop, you could call it just once in the first loop and not at all (i.e. an empty loop) in the second; however, the empty loop could be optimized out by the compiler, giving you negative timing results :)

Ates Goral
Interesting approach to discounting loop overhead. If the code being measured is amenable to this, it's a good way.
Benoit