tags:
views: 183
answers: 9

Hey folks,

I need some way in C++ to keep track of the number of milliseconds since program execution started. And I need the precision to be in milliseconds. (In my googling, I've found lots of folks who said to include time.h and then multiply the output of time() by 1000 ... this won't work.)

+1  A: 

See clock().

Dave18
I love the sample program in that link, especially the while loop.
glowcoder
CLOCKS_PER_SEC is not guaranteed to be in milliseconds.
jim mcnamara
Yes, it would be OS-dependent.
Dave18
@jim mcnamara: it's guaranteed to expand to the number of clock ticks per second. So it might not be in milliseconds, but it'll get you seconds, and it shouldn't be too hard to get milliseconds from that...
Randolpho
I think what Jim means is that it will be in round seconds - which it won't necessarily be. If there are (random numbers picked out of the air) 15,000 clocks a second, and your program has taken 45 clocks, you will take 45, divide it by 15000 and get 0.003 seconds. You then multiply by 1000 to get 3 milliseconds. Done. Unless I've failed maths. Which is possible.
Stephen
-1 clock only counts time while your program is active. Any blocking calls are not counted (filesystem access, networking, sleep, etc.).
caspin
This is not a good solution. clock() works, but it will cause problems for future use, as others have said.
0A0D
A: 

This isn't C++ specific (nor portable), but you can do:

SYSTEMTIME systemDT;

In Windows.

From there, you can access each member of the systemDT struct.

You can record the time when the program started and compare the current time to the recorded time (systemDT versus systemDTtemp, for instance).

To refresh, you can call GetLocalTime(&systemDT);

To access each member, you would do systemDT.wHour, systemDT.wMinute, systemDT.wMilliseconds.

For more information, see the SYSTEMTIME documentation.
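A minimal sketch of that idea (Windows-only, untested here; the helper name is just illustrative). Two SYSTEMTIME values can't be subtracted directly, so each is converted to a FILETIME (100-nanosecond units) first:

#include <windows.h>
#include <iostream>

// Convert a SYSTEMTIME to 100-nanosecond ticks so two readings can be subtracted.
static ULONGLONG toTicks(const SYSTEMTIME& st)
{
    FILETIME ft;
    SystemTimeToFileTime(&st, &ft);
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart;
}

int main()
{
    SYSTEMTIME systemDT, systemDTtemp;
    GetLocalTime(&systemDT);            // recorded at program start

    Sleep(250);                         // ... the rest of the program runs ...

    GetLocalTime(&systemDTtemp);        // refreshed later
    ULONGLONG elapsedMs = (toTicks(systemDTtemp) - toTicks(systemDT)) / 10000;
    std::cout << "elapsed: " << elapsedMs << " ms\n";
}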

0A0D
A: 

Include time.h, and then use the clock() function. It returns the number of clock ticks elapsed since the program was launched. Just divide it by CLOCKS_PER_SEC to obtain the number of seconds; you can then multiply by 1000 to obtain the number of milliseconds.
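A minimal sketch of that calculation (the names are illustrative; keep in mind the caveats elsewhere in this thread about clock()'s resolution and about CPU time versus wall time):

#include <ctime>
#include <iostream>

int main()
{
    std::clock_t start = std::clock();              // ticks when timing begins

    for (volatile int i = 0; i < 100000000; ++i);   // the work being timed

    std::clock_t end = std::clock();
    double ms = 1000.0 * (end - start) / CLOCKS_PER_SEC;   // ticks -> seconds -> milliseconds
    std::cout << "elapsed: " << ms << " ms\n";
}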

Jesse Emond
Surely you mean multiply by 1000?
Stephen
The question said he needs precision, not the divide-by-1000 solutions :)
anijhaw
I meant Multiply btw :P
anijhaw
haha that was my bad ;) thanks
Jesse Emond
Also, surely you mean divide by CLOCKS_PER_SECOND? Or are my maths skills failing me? (My in-head example was: "I run 10m/s. I have run for 2 metres. Therefore, it has taken 0.2s").
Stephen
Nope, MY math skills are failing me.. -_- thanks, again :)
Jesse Emond
A: 

This small class may help you.

http://dl.dropbox.com/u/6882617/timer.zip

Tony Alexander Hild
+1  A: 
loentar
gettimeofday() is not supported on Windows. When compiling the code on Windows, QueryPerformanceCounter will be used instead. It's a high-resolution timer. This method is very fast and precise.
loentar
A: 

Do you want wall clock time, CPU time, or some other measurement? Also, what platform is this? There is no universally portable way to get more precision than time() and clock() give you, but...

  • on most Unix systems, you can use gettimeofday() and/or clock_gettime(), which give at least microsecond precision and access to a variety of timers (see the sketch just after this list);
  • I'm not nearly as familiar with Windows, but one of these functions probably does what you want.
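A minimal sketch of the gettimeofday() route on a POSIX system (clock_gettime(CLOCK_MONOTONIC, ...) would be the drop-in alternative if you want a clock that can't be adjusted backwards); the helper name is just illustrative:

#include <sys/time.h>
#include <iostream>

// Current wall-clock time as a millisecond count.
static long long nowMs()
{
    timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec * 1000LL + tv.tv_usec / 1000;
}

int main()
{
    long long start = nowMs();      // record at program start
    // ... program runs ...
    long long elapsed = nowMs() - start;
    std::cout << "elapsed: " << elapsed << " ms\n";
}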
Zack
+1  A: 

The most portable way is to use the clock function. It usually reports the time that your program has been using the processor, or an approximation thereof. Note however the following:

  • The resolution is not very good for GNU systems. That's really a pity.

  • Take care to cast everything to double before doing divisions and assignments (see the sketch at the end of this answer).

  • The counter is held as a 32-bit number on 32-bit GNU systems, which can be pretty annoying for long-running programs because it can wrap around.

There are alternatives using "wall time" which give better resolution, both in Windows and Linux. But as the libc manual states: If you're trying to optimize your program or measure its efficiency, it's very useful to know how much processor time it uses. For that, calendar time and elapsed times are useless because a process may spend time waiting for I/O or for other processes to use the CPU.
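To illustrate the casting point above, a small sketch (the variable names are made up):

#include <ctime>
#include <iostream>

int main()
{
    std::clock_t start = std::clock();
    // ... work being measured ...
    std::clock_t end = std::clock();

    // Integer division truncates toward zero, so anything under a second becomes 0.
    long wrong = (end - start) / CLOCKS_PER_SEC;

    // Cast to double first, then divide, then scale to milliseconds.
    double right = static_cast<double>(end - start) / CLOCKS_PER_SEC * 1000.0;

    std::cout << wrong << " vs " << right << '\n';
}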

dignor.sign
+2  A: 

clock() has been suggested a number of times. This has two problems. First of all, it often doesn't have a resolution even close to a millisecond (10-20 ms is probably more common). Second, some implementations of it (e.g., Unix and similar) return CPU time, while others (e.g., Windows) return wall time.

You haven't really said whether you want wall time or CPU time, which makes it hard to give a really good answer. On Windows, you could use GetProcessTimes. That will give you the kernel and user CPU times directly. It will also tell you when the process was created, so if you want milliseconds of wall time since process creation, you can subtract the process creation time from the current time (GetSystemTime). QueryPerformanceCounter has also been mentioned. This has a few oddities of its own -- for example, in some implementations it retrieves time from the CPU's cycle counter, so its frequency varies when/if the CPU speed changes. Other implementations read from the motherboard's 1.024 MHz timer, which does not vary with the CPU speed (and the conditions under which each are used aren't entirely obvious).
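A hedged sketch of the GetProcessTimes idea (Windows-only; untested here). The creation time it reports is a UTC FILETIME, so subtracting it from the current UTC time gives wall-clock milliseconds since the process started, and the user/kernel values are the CPU times:

#include <windows.h>
#include <iostream>

// Reinterpret a FILETIME as a 64-bit count of 100-nanosecond ticks.
static ULONGLONG asTicks(const FILETIME& ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart;
}

int main()
{
    FILETIME creationTime, exitTime, kernelTime, userTime, now;
    GetProcessTimes(GetCurrentProcess(), &creationTime, &exitTime, &kernelTime, &userTime);
    GetSystemTimeAsFileTime(&now);      // current UTC time in the same units

    std::cout << "wall ms since creation: " << (asTicks(now) - asTicks(creationTime)) / 10000 << '\n';
    std::cout << "user CPU ms:   " << asTicks(userTime)   / 10000 << '\n';
    std::cout << "kernel CPU ms: " << asTicks(kernelTime) / 10000 << '\n';
}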

On Unix, you can use gettimeofday() to just get the wall time with (at least the possibility of) relatively high precision. If you want time for a process, you can use times or getrusage (the latter is newer and gives more complete information that may also be more precise).
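A minimal sketch of the getrusage() route (POSIX), which reports the user and system CPU time of the calling process as timevals with microsecond fields:

#include <sys/resource.h>
#include <iostream>

int main()
{
    rusage ru;
    getrusage(RUSAGE_SELF, &ru);

    long long userMs = ru.ru_utime.tv_sec * 1000LL + ru.ru_utime.tv_usec / 1000;
    long long sysMs  = ru.ru_stime.tv_sec * 1000LL + ru.ru_stime.tv_usec / 1000;

    std::cout << "user CPU: " << userMs << " ms, system CPU: " << sysMs << " ms\n";
}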

Bottom line: as I said in my comment, there's no way to get what you want portably. Since you haven't said whether you want CPU time or wall time, even for a specific system, there's not one right answer. The one you've "accepted" (clock()) has the virtue of being available on essentially any system, but what it returns also varies just about the most widely.

Jerry Coffin
He wants a time delta, which is neither wall time nor CPU time.
caspin
What he really needs to do is simply record the time when his program starts; when he needs to know how long it has been running, he can get the current time again and subtract the two values. That will be about as precise as you can get.
0A0D
@Caspin: First, I question whether his question is sufficiently clear to say it's a delta with any certainty. Second, even if it is a delta, it still has to be a delta of wall or CPU time (i.e., "How long since the process started?" or "How much CPU time has the process used?").
Jerry Coffin
+1  A: 

Here is a C++0x solution, and an example of why clock() might not do what you think it does.

#include <chrono>
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <unistd.h>   // needed for sleep(); POSIX only

int main()
{
   // note: monotonic_clock was renamed steady_clock in the final C++11 standard
   auto start1 = std::chrono::monotonic_clock::now();
   auto start2 = std::clock();

   sleep(1);   // blocked time: the wall clock keeps running, but clock() does not

   for( int i=0; i<100000000; ++i);   // busy work: counted by both clocks

   auto end1 = std::chrono::monotonic_clock::now();
   auto end2 = std::clock();

   auto delta1 = end1-start1;
   auto delta2 = end2-start2;

   std::cout << "chrono: " << std::chrono::duration_cast<std::chrono::duration<float>>(delta1).count() << std::endl;

   std::cout << "clock: " << static_cast<float>(delta2)/CLOCKS_PER_SEC << std::endl;
}

On my system this outputs:

chrono: 1.36839
clock: 0.36

You'll notice the clock() method is missing a second. An astute observer might also notice that clock() appears to have less resolution; on my system it ticks in 12-millisecond increments, which is terrible resolution.

If you are unable or unwilling to use C++0x, take a look at Boost.DateTime's ptime microsec_clock::universal_time().
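For completeness, a pre-C++0x sketch of that Boost.DateTime suggestion (assumes Boost is installed):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    boost::posix_time::ptime start = boost::posix_time::microsec_clock::universal_time();

    // ... program runs ...

    boost::posix_time::ptime now = boost::posix_time::microsec_clock::universal_time();
    boost::posix_time::time_duration elapsed = now - start;
    std::cout << "elapsed: " << elapsed.total_milliseconds() << " ms\n";
}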

caspin