I can give time.sleep() floating point numbers, such as

time.sleep(0.5)

but how accurate is it? If I give it

time.sleep(0.05)

will it really sleep about 50 ms?

+10  A: 

From the documentation:

On the other hand, the precision of time() and sleep() is better than their Unix equivalents: times are expressed as floating point numbers, time() returns the most accurate time available (using Unix gettimeofday where available), and sleep() will accept a time with a nonzero fraction (Unix select is used to implement this, where available).

And more specifically w.r.t. sleep():

Suspend execution for the given number of seconds. The argument may be a floating point number to indicate a more precise sleep time. The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal’s catching routine. Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.
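
A quick, Unix-only sketch of the "caught signal" case the docs describe. Note this relies on the old behaviour of CPython before PEP 475; since Python 3.5, sleep() is automatically resumed after a non-raising signal handler:

import signal
import time

def handler(signum, frame):
    pass  # merely catching the signal interrupts sleep() on older Pythons

signal.signal(signal.SIGALRM, handler)
signal.alarm(1)           # ask the kernel to deliver SIGALRM in ~1 second
start = time.time()
time.sleep(5)             # may return after ~1 s instead of 5 s
print "slept %.2fs" % (time.time() - start)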

Stephan202
+11  A: 

The accuracy of the time.sleep function depends on your underlying OS's sleep accuracy. For non-realtime OSes like a stock Linux kernel or Windows, the smallest interval you can sleep for is about 10-13 ms. When sleeping for more than that minimum, I have seen sleeps accurate to within several milliseconds of the requested time.

Update: As mentioned in the docs cited below, it's common to do the sleep in a loop that goes back to sleep if it has woken up early.
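
A minimal sketch of that loop (the function name is mine; time.time() stands in for a monotonic clock, which newer Python versions provide as time.monotonic()):

import time

def sleep_at_least(duration):
    # Sleep until `duration` seconds have elapsed, resuming the sleep
    # if time.sleep() returned early (e.g. because of a caught signal).
    deadline = time.time() + duration
    remaining = duration
    while remaining > 0:
        time.sleep(remaining)
        remaining = deadline - time.time()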

I should also mention that if you are running Ubuntu you can try out a pseudo-realtime kernel by installing the rt kernel package.

Joe
Actually, Linux kernels have defaulted to a higher tick rate for quite a while, so the "minimum" sleep is much closer to 1ms than 10ms. It's not guaranteed--other system activity can make the kernel unable to schedule your process as soon as you'd like, even without CPU contention. That's what the realtime kernels are trying to fix, I think. But, unless you really need realtime behavior, simply using a high tick rate (kernel HZ setting) will get you not-guaranteed-but-high-resolution sleeps in Linux without using anything special.
Glenn Maynard
Yes, you are right. I tried with Linux 2.6.24-24 and was able to get pretty close to 1000 Hz update rates. At the time I was doing this I was also running the code on Mac and Windows, so I probably got confused. I know Windows XP at least has a tick rate of about 10 ms.
Joe
A: 

You can't really guarantee anything about sleep(), except that it will make a best effort to sleep as long as you told it (signals can kill your sleep before the time is up, and lots more things can make it run long). The minimum you can get on a standard desktop operating system is going to be around 16 ms (timer granularity plus time to context switch), but chances are that the deviation from the requested argument will be significant, percentage-wise, when you're trying to sleep for tens of milliseconds. Signals, other threads holding the GIL, kernel scheduling fun, processor speed stepping, etc. can all play havoc with how long your thread/process actually sleeps.

Nick Bastin
The documentation says otherwise: "The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal's catching routine."
Glenn Maynard
Ah, fair point; fixed the post. That said, getting a longer sleep() is much more likely than a shorter one.
Nick Bastin
+2  A: 

Why don't you find out:

from datetime import datetime
import time

def check_sleep(amount):
    # Time how long time.sleep() actually takes.
    start = datetime.now()
    time.sleep(amount)
    end = datetime.now()
    delta = end - start
    # Trailing dot forces float division in Python 2.
    return delta.seconds + delta.microseconds / 1000000.

# Average absolute error over 100 trials of a 50 ms sleep:
# sum(...) / 100 trials * 1000 ms/s == sum(...) * 10.
error = sum(abs(check_sleep(0.050) - 0.050) for i in xrange(100)) * 10
print "Average error is %0.2fms" % error

For the record, I get around 0.1 ms error on my HTPC and 2 ms on my laptop, both Linux machines.

Ants Aasma
Empirical testing will give you a very narrow view. There are many kernels, operating systems and kernel configurations that affect this. Older Linux kernels default to a lower tick rate, which results in a greater granularity. In the Unix implementation, an external signal during the sleep will cancel it at any time, and other implementations might have similar interruptions.
Glenn Maynard
Well, of course the empirical observation is not transferable. Aside from operating systems and kernels, there are a lot of transient issues that affect this. If hard real-time guarantees are required, then the whole system design, from the hardware up, needs to be taken into consideration. I just found the results relevant given the statements that 10 ms is the minimum accuracy. I'm not at home in the Windows world, but most Linux distros have been running tickless kernels for a while now. With multicores now prevalent, it's pretty likely to get scheduled really close to the timeout.
Ants Aasma
+1 for the nice empirical code
Cawas