views: 103
answers: 5

I want to make a call - either a function call or a check of some condition - repeatedly for a PERIOD of time ... typically 10 - 20 seconds.

I would get some user input for the amount of time and do that ...

What is the proper function to use on Linux/Unix systems?

gettimeofday seems to be the way to go ... or perhaps time_t time(time_t *t), which seems simpler. Which is preferred?
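For reference, the two candidates look roughly like this (just a sketch of the calls, not the timing logic):

#include <time.h>     // time()
#include <sys/time.h> // gettimeofday()

int main()
{
    // Option 1: one-second resolution, simple and portable.
    time_t now = time(NULL);

    // Option 2: microsecond resolution, POSIX-specific.
    struct timeval tv;
    gettimeofday(&tv, NULL);
}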

A: 

try asio
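A rough sketch of what that might look like with a Boost.Asio deadline_timer (reading "asio" as Boost.Asio, and assuming it is installed and linked):

#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

void myFunc()
{
    /* work to repeat */
}

int main()
{
    boost::asio::io_service io;

    // A timer whose expiry is 15 seconds from now.
    boost::asio::deadline_timer deadline(io, boost::posix_time::seconds(15));

    // Keep calling myFunc() until the expiry time passes.
    while (boost::posix_time::microsec_clock::universal_time() < deadline.expires_at())
        myFunc();
}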

plan9assembler
A: 

This can be done like so:

#include <ctime>

void myFunc()
{
    /* your function here */
}

int main()
{
    double TimeToRunInSecs = 15.0; /* placeholder; read this from user input */
    clock_t c = clock();
    while (double(clock() - c) / CLOCKS_PER_SEC < TimeToRunInSecs)
    {
        myFunc();
    }
}

the standard clock() function returns the number of clock ticks of processor time consumed since the process started. In one second there are CLOCKS_PER_SEC ticks :)

HTH

Armen Tsirunyan
This is almost certainly not what the OP wants.
Amigable Clark Kant
@Amigable: Why do you think so? I think this is exactly what he wants.
Armen Tsirunyan
CPU time depends heavily on CPU performance, load, and a lot of other factors. I may be wrong, but in any case I think the question needs work.
Amigable Clark Kant
A: 

I could do a

time_t current_time = time(0);

and measure off of that ... but is there a preferred way? Mainly, this is a best-practices kind of question ...


Xofo
+3  A: 

So is it something like this you want? This will repeatedly call myfunc() for the next 20 seconds, so it could make one call (if myfunc() takes at least 20 seconds to run) or hundreds of calls (if myfunc() takes a few milliseconds to complete):

#include <time.h>

void myfunc()
{
    /* do something */
}    

int main()
{
    time_t start = time(NULL);
    time_t now = time(NULL);
    while ((now - start) <= 20) {
        myfunc();
        now = time(NULL); /* update the existing variable; redeclaring 'now' here would shadow the loop variable and never exit */
    }
}

It's probably worth asking what you're ultimately trying to achieve. If this is for profiling (e.g., what's the average amount of time function f takes to execute), then you might want to look at other solutions - e.g., using the built-in profiling that gcc gives you (when building code with the "-pg" option), and analyzing with gprof.

Chris J
Best answer for a poorly formulated question, +1.
Amigable Clark Kant
A: 

Couple of things..

If you want to ensure that the function takes a time X to complete, irrespective of how long the actual code within the function takes, do something like this (highly pseudo-code):

#include <sys/time.h>   // gettimeofday(), struct timeval
#include <sys/select.h> // select(), fd_set

class Delay
{
  public:
    Delay(long long delay) : _delay(delay) // in microseconds
    {
       ::gettimeofday(&_start, NULL); // grab the start time...
    }

    ~Delay()
    {
      struct timeval end;

      ::gettimeofday(&end, NULL); // grab the end time
      long long ts = _start.tv_sec * 1000000LL + _start.tv_usec;
      long long tse = end.tv_sec * 1000000LL + end.tv_usec;

      long long diff = tse - ts;
      if (diff < _delay)
      {
        // need to sleep for the difference...
        // do this using select;
        // construct a struct timeval (same as required for gettimeofday)
        fd_set rfds;
        struct timeval tv;

        FD_ZERO(&rfds);

        diff = _delay - diff; // calculate the time to sleep

        tv.tv_sec = diff / 1000000;
        tv.tv_usec = diff % 1000000;

        // with no descriptors set, select() just sleeps for tv
        ::select(0, &rfds, NULL, NULL, &tv);
        // should only get here when this times out...
      }
    }
  private:
    long long _delay;      // requested minimum duration, in microseconds
    struct timeval _start;
};

Then define an instance of this Delay class at the top of the function you want to delay - that should do the trick... (this code is untested and could have bugs in it; I just typed it to give you an idea..)
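Usage would then look something like this (doWork() is just a made-up example function):

void doWork()
{
    Delay d(15 * 1000000LL); // pad this function out to at least ~15 seconds

    // ... the actual work ...

}   // d's destructor runs here and sleeps off whatever time remains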

Nim