tags:

views:

393

answers:

4

Is there a way to measure time with high precision in Python --- more precise than one second? I doubt that there is a cross-platform way of doing that; I'm interested in high-precision time on Unix, particularly Solaris running on a Sun SPARC machine.

timeit seems to be capable of high-precision time measurement, but rather than measure how long a code snippet takes, I'd like to directly access the time values.

+4  A: 

You can simply use the standard time module:

>>> import time
>>> time.time()
1261367718.971009
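
For interval measurement, time.time() gives sub-second resolution directly. A minimal sketch (the variable names are just illustrative):

import time

start = time.time()
# ... code being timed ...
elapsed = time.time() - start
print 'elapsed: %.6f seconds' % elapsed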
daf
+1  A: 

You can also use time.clock(). On Unix it counts the CPU time used by the process; on Windows it counts the wall-clock time since the first call to it. On Windows it's more precise than time.time().

It's the function usually used to measure performance.

Just call

import time

start = time.clock()
# Your code here
print 'Time in function', time.clock() - start

EDIT: Oops, I misread the question; you want to know the exact time, not the time spent...

Khelben
+1  A: 

time.clock() has 13 decimal places on Windows but only two on Linux. time.time() has 17 decimal places on Linux and 16 on Windows, but the actual precision is different.

This is described in http://docs.python.org/library/time.html. I don't agree with the documentation's advice that time.clock() should be used for benchmarking on Unix/Linux; it is not precise enough there.

So which timer to use depends on the operating system.

On Linux the time resolution is high in time.time():

>>> time.time(), time.time()
(1281384913.4374139, 1281384913.4374161)

On Windows, however, the time functions seem to return the same value for repeated calls:

>>> time.time()-int(time.time()), time.time()-int(time.time()), time.time()-time.time()
(0.9570000171661377, 0.9570000171661377, 0.0)

Even if I write the calls on different lines on Windows, they still return the same value, so the real precision is lower than the number of decimal places suggests.

So for serious measurements a platform check (import platform; platform.system()) has to be done to determine whether to use time.clock() or time.time(), as in the sketch below.
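
A minimal sketch of such a selection (high_res_timer is just an illustrative name; this mirrors what timeit.default_timer does in Python 2):

import platform
import time

# On Windows, time.clock() has the finer resolution; elsewhere time.time() does.
if platform.system() == 'Windows':
    high_res_timer = time.clock
else:
    high_res_timer = time.time

start = high_res_timer()
# ... code being measured ...
print 'elapsed: %.6f seconds' % (high_res_timer() - start)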

(Tested on Windows 7 and Ubuntu 9.10 with Python 2.6 and 3.1.)

David
A: 

Python tries hard to use the most precise time function for your platform to implement time.time():

/* Implement floattime() for various platforms */

static double
floattime(void)
{
    /* There are three ways to get the time:
      (1) gettimeofday() -- resolution in microseconds
      (2) ftime() -- resolution in milliseconds
      (3) time() -- resolution in seconds
      In all cases the return value is a float in seconds.
      Since on some systems (e.g. SCO ODT 3.0) gettimeofday() may
      fail, so we fall back on ftime() or time().
      Note: clock resolution does not imply clock accuracy! */
#ifdef HAVE_GETTIMEOFDAY
    {
        struct timeval t;
#ifdef GETTIMEOFDAY_NO_TZ
        if (gettimeofday(&t) == 0)
            return (double)t.tv_sec + t.tv_usec*0.000001;
#else /* !GETTIMEOFDAY_NO_TZ */
        if (gettimeofday(&t, (struct timezone *)NULL) == 0)
            return (double)t.tv_sec + t.tv_usec*0.000001;
#endif /* !GETTIMEOFDAY_NO_TZ */
    }

#endif /* !HAVE_GETTIMEOFDAY */
    {
#if defined(HAVE_FTIME)
        struct timeb t;
        ftime(&t);
        return (double)t.time + (double)t.millitm * (double)0.001;
#else /* !HAVE_FTIME */
        time_t secs;
        time(&secs);
        return (double)secs;
#endif /* !HAVE_FTIME */
    }
}

( from http://svn.python.org/view/python/trunk/Modules/timemodule.c?revision=81756&view=markup )
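
If you want to read the raw seconds/microseconds pair yourself rather than the float that time.time() builds from it, a ctypes call into libc could work. This is only a sketch under assumptions: a Unix libc that exposes gettimeofday() and a timeval with long-sized fields (common on Unix ABIs, but verify on your Solaris/SPARC box):

import ctypes
import ctypes.util

class timeval(ctypes.Structure):
    # Assumes long-sized fields, as on common Unix ABIs.
    _fields_ = [('tv_sec', ctypes.c_long),
                ('tv_usec', ctypes.c_long)]

libc = ctypes.CDLL(ctypes.util.find_library('c'))

def gettimeofday():
    # Returns (seconds, microseconds) since the epoch.
    tv = timeval()
    if libc.gettimeofday(ctypes.byref(tv), None) != 0:
        raise OSError('gettimeofday() failed')
    return tv.tv_sec, tv.tv_usec

Keeping the two integers separate avoids the small rounding that occurs when both are packed into a single double.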

Joe Koberg