views:

615

answers:

10

Hi, I have a system that spends 66% of its time in a time(NULL) call.

Is there a way to cache or optimize this call?

Context: I'm playing with Protothreads for C++, trying to simulate threads with state machines, so I can't use native threads.

Here's the header:

#ifndef TIMER_H   // identifiers with leading double underscores are reserved
#define TIMER_H

#include <time.h>
#include <iostream>

class Timer
{
private:
    time_t initial;
public:
    Timer();
    unsigned long passed();
};

#endif

and the source file:

#include "Timer.h"

using namespace std;

Timer::Timer()
{
    initial = time(NULL);
}

unsigned long Timer::passed()
{
    time_t current = time(NULL);
    return (current - initial);
}

UPDATE: Final solution! The CPU cycles are going somewhere anyway, and if I spend them on being correct, that is not so bad after all.

#define start_timer() (timer_start = time(NULL))
#define timeout(x)    ((time(NULL) - timer_start) >= (x))
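To make the intended use concrete, here is a minimal sketch of how these macros might drive one state machine. The function and state names are illustrative, not from the Protothread library; `timer_start` is assumed to be a file-scope variable as in the macros above.

```cpp
#include <ctime>

static time_t timer_start;

#define start_timer() (timer_start = time(NULL))
#define timeout(x)    ((time(NULL) - timer_start) >= (x))

// One step of a hypothetical state machine: arm the timer on entry,
// then poll timeout() on every subsequent visit. Returns 1 when done.
int machine_step(int *state) {
    switch (*state) {
    case 0:
        start_timer();      // arm the timer on first entry
        *state = 1;
        return 0;           // not done yet
    case 1:
        if (timeout(2))     // two seconds elapsed?
            *state = 2;
        return 0;
    default:
        return 1;           // done
    }
}
```

Note that with one shared `timer_start` only one machine can be timing at once; a real design would give each machine its own `time_t`.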
+1  A: 

Caching will not help if you always need the current time. Can you post some code?

dirkgently
I guess if I only make the call half the time and use the cached value the rest of the time, I will save some cycles.
Flinkman
Well, that is the whole point -- how frequently you need to know the current time. Tinker around.
dirkgently
+3  A: 

Call it less often - unless you really need the current time hundreds of times a second, you shouldn't be calling it that often.

EDIT: After trying it, I'm even more curious. I realize you might be on a small embedded system, but on my system I had no problem running 10,000,000 calls to time() in a second. You're likely doing something seriously wrong, given that time() is only going to change once a second. What exactly are you trying to achieve?

Eclipse
I'm building state machines. Every state machine needs to check whether its timer is done, so that's where the calls come from. At the moment I'm benchmarking with 1M state machines.
Flinkman
Ah, I see - then I'd say add an extra thread or a timer that sets an integer to the current time every 250 ms or so, and just check against that integer.
Eclipse
+1  A: 

It really depends, but saving the result won't help if you always want the current time. time( NULL ) likely results in a system call, which will take time since you have to switch to/from kernel mode.

What you can do is read the TSC at the same time that you get the current time, then read the TSC again when you want the current time, and add (elapsed cycles / CPU frequency) to your saved time.

There are some answers about rdtsc on here that should help you.

Edit: see my answer in http://stackoverflow.com/questions/638269/timer-to-find-elapsed-time-in-a-function-call-in-c for more information about rdtsc.

Also note that I don't particularly recommend this unless you absolutely have to. It is highly likely that calling rdtsc, subtracting from the previous rdtsc, and converting that to a fractional number of seconds by dividing by your CPU speed will be slower than just calling time() again.
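As a rough sketch of the idea, assuming GCC/Clang on x86 with an invariant TSC: one real time() reading is paired with a TSC reading, and later calls estimate the time from the cycle delta. The clock frequency passed in is an assumption; in practice it must be calibrated.

```cpp
#include <x86intrin.h>   // __rdtsc() intrinsic (GCC/Clang, x86 only)
#include <ctime>

// Estimate the current time from a single time() baseline plus elapsed
// TSC cycles. Assumes a steady, invariant TSC and a known frequency.
struct TscClock {
    time_t base_time;
    unsigned long long base_tsc;
    unsigned long long cycles_per_sec;   // must be calibrated; assumed here

    explicit TscClock(unsigned long long hz) : cycles_per_sec(hz) {
        base_time = time(nullptr);   // one real system clock reading
        base_tsc  = __rdtsc();       // paired cycle counter reading
    }

    // Approximate current time without another system call.
    time_t now() const {
        unsigned long long elapsed = __rdtsc() - base_tsc;
        return base_time + static_cast<time_t>(elapsed / cycles_per_sec);
    }
};
```

As the answer warns, the division alone can cost more than a vDSO-backed time() call, so measure before adopting this.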

FreeMemory
+5  A: 

That sounds like a lot, given that time() only has a precision of one second. It sounds like you call it far too often. One possible improvement would be to call it only every 500 ms; that way you will still catch every change of the second.

So instead of calling it 100 times a second, start off a timer that rings every 500ms, taking the current time and storing it into an integer. Then, read that integer 100 times a second instead.
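One way to sketch the "timer that rings every 500 ms" idea is POSIX setitimer plus a SIGALRM handler that refreshes a cached timestamp (a POSIX-specific assumption; names here are illustrative). The hot path then only reads a variable.

```cpp
#include <csignal>
#include <ctime>
#include <sys/time.h>

// Cached seconds-since-epoch, refreshed by the SIGALRM handler.
// std::sig_atomic_t is safe to write from a handler; assuming the
// epoch seconds value fits in it, which holds on common platforms.
static volatile std::sig_atomic_t cached_now = 0;

extern "C" void refresh_cached_now(int) {
    cached_now = static_cast<std::sig_atomic_t>(time(nullptr));
}

void start_time_cache() {
    std::signal(SIGALRM, refresh_cached_now);
    cached_now = static_cast<std::sig_atomic_t>(time(nullptr));  // seed once
    itimerval iv = {};
    iv.it_value.tv_usec    = 500000;  // first ring after 500 ms
    iv.it_interval.tv_usec = 500000;  // then every 500 ms
    setitimer(ITIMER_REAL, &iv, nullptr);
}
```

Reading `cached_now` 100 times a second then costs a plain load instead of 100 time() calls.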

Johannes Schaub - litb
+4  A: 

As pointed out, you cannot cache it, as the whole point of time() is to give you the current time, which obviously changes all the time.

The real question however probably is: Why is the program calling time() so frequently? I can't think of any good reason to do so.

Is it polling time()? In that case sleep() might be more appropriate.

sleske
+9  A: 

I presume you are calling it within some loop which is otherwise stonkingly efficient.

What you could do is keep a count of how many iterations your loop goes through before the return value of time changes.

Then don't call it again until you've gone through that many iterations again.

You can dynamically adjust this count upwards or downwards if you find you're going adrift, but you should be able to engineer it so that on average, it calls time() once per second.

Here's a rough idea of how you might do it (there are many variations on this theme):

time_t lasttime = time(NULL);
int iterations_per_sec = 10; // wild guess
int iterations = 0;

while (looping)
{
    // do the real work

    // check our timing
    if (++iterations > iterations_per_sec)
    {
        time_t t = time(NULL);
        if (t == lasttime)
        {
            iterations_per_sec++;
        }
        else
        {
            iterations_per_sec = iterations / (int)(t - lasttime);
            iterations = 0;
            lasttime = t;

            // do whatever else you want to do on a per-second basis
        }
    }
}
Paul Dixon
Thanks. Sounds like you've done it before. Now I have some new ideas.
Flinkman
Not too bad and it's auto adjusting, including quickly in the right direction if you're not calling time() often enough [inaccuracy] and slowly in the other direction, calling it too often [inefficiency but still accuracy]. +1 for a clever solution.
paxdiablo
Tried this on my main loop and it improves by 185%!
Flinkman
OK, I could not use it in the main loop after all. It was too useful to have an accurate main loop, since then I could measure the effectiveness of the other parts.
Flinkman
This only works as long as we have a constant rate. If the rate drops to a half or a tenth, then for a short period we will only get a new time(NULL) every ten seconds.
Flinkman
A: 

Typically what you can do is save the result of time() into a local variable, and then use that as your current time until you perform some blocking call or some long-running, CPU-intensive section of code.
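Applied to the question's workload, that pattern might look like the following sketch: one time() call per outer round is shared by every machine in that round. The function and parameter names are illustrative, not from the asker's code.

```cpp
#include <ctime>
#include <functional>

// Run `machines` state machines for `rounds` iterations, reading the
// clock once per round instead of once per machine. Within a single
// round the timestamp cannot be more than one iteration stale.
void run_machines(const std::function<void(int, time_t)> &step,
                  int machines, int rounds) {
    for (int r = 0; r < rounds; ++r) {
        time_t now = time(nullptr);      // one call per round...
        for (int m = 0; m < machines; ++m)
            step(m, now);                // ...shared by every machine
    }
}
```

With 1M machines per round, this turns 1M time() calls into one.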

What are you doing that you need to call time this often and can you post some code?

Greg Rogers
+2  A: 

If you're on Unix, you may consider using gettimeofday (http://www.opengroup.org/onlinepubs/000095399/functions/gettimeofday.html) - it's faster and has better precision.
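For illustration, a small helper built on gettimeofday, which reports microseconds and so also enables sub-second timeouts (the function name is mine, not from the linked page):

```cpp
#include <sys/time.h>   // gettimeofday (POSIX)

// Seconds elapsed since `start`, with microsecond resolution.
double seconds_since(const timeval &start) {
    timeval now;
    gettimeofday(&now, nullptr);
    return (now.tv_sec - start.tv_sec) +
           (now.tv_usec - start.tv_usec) / 1e6;
}
```

On Linux, gettimeofday is typically serviced through the vDSO, avoiding a full kernel transition.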

Arkadiy
A: 

You could create a thread which called time() a few times a second and then slept, updating a shared variable.

A quick skim of Protothread suggests it doesn't use OS threads, so you might get away without memory barriers. Otherwise, something like an efficient read/write lock should keep the cost negligible.

Pete Kirkham
A: 

You could use a separate thread which would run an endless loop that would sleep() for 1 second (or less if you need finer granularity) and then update the timestamp value.

Other threads would just check this timestamp value without any performance penalty.
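A sketch of this with C++11 primitives (the original answer predates std::thread; pthreads would work the same way). The updater refreshes an atomic timestamp; readers pay only an atomic load. Names are illustrative.

```cpp
#include <atomic>
#include <chrono>
#include <ctime>
#include <thread>

std::atomic<time_t> cached_time{0};
std::atomic<bool>   keep_running{true};

// Background loop: refresh the cached timestamp a few times per second.
// Readers call cached_time.load() instead of time(NULL).
void time_updater() {
    while (keep_running.load()) {
        cached_time.store(time(nullptr));
        std::this_thread::sleep_for(std::chrono::milliseconds(250));
    }
}
```

As the comment below notes, sleep durations are a minimum, so the cached value can lag further than 250 ms under scheduling pressure; for one-second granularity that slack is usually acceptable.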

Milan Babuškov
If a thread requests sleeping for 1 second, isn't that a *minimum*? Couldn't the OS decide not to schedule the thread for longer if it desired? Then it seems your time could end up arbitrarily inaccurate.
Joseph Garvin