tags: 
views: 370
answers: 3

Hi there!

Based on the ideas presented in link, I implemented several different "sleep methods". One of these methods was the "binary sleep", which looks like this:

while (System.currentTimeMillis() < nextTimeStamp)
{
    sleepTime -= (sleepTime / 2); // halve the remaining sleep interval on every pass
    sleep(sleepTime);
}
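
For reference, here is the loop in a self-contained, runnable form (a sketch: the snippet above doesn't show it, but I'm assuming sleepTime is initialized to the full remaining interval and that sleep() delegates to Thread.sleep()):

public class BinarySleepDemo
{
    // Sleep until nextTimeStamp, halving the sleep interval on every pass.
    static void binarySleep(long nextTimeStamp) throws InterruptedException
    {
        // Assumption: start with the full remaining interval in milliseconds.
        long sleepTime = nextTimeStamp - System.currentTimeMillis();
        while (System.currentTimeMillis() < nextTimeStamp)
        {
            sleepTime -= (sleepTime / 2);
            Thread.sleep(sleepTime);
        }
    }

    public static void main(String[] args) throws InterruptedException
    {
        long expected = 50; // ms
        long start = System.currentTimeMillis();
        binarySleep(start + expected);
        long real = System.currentTimeMillis() - start;
        System.out.println("error (expected - real): " + (expected - real) + " ms");
    }
}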

Because the check whether the next timestamp has already been reached takes place at the beginning, I would expect the method to run too long. But the cumulative distribution of the simulation error (expected time - real time) looks like this:

[plot: cumulative distribution of the simulation error]

Does somebody have an idea why I'm getting these results? Maybe the method System.currentTimeMillis() does not really return the current time?

BR,

Markus

@irreputable

When I made the evaluation I also created a bell curve using a German statistics program. Because it was not possible to change the captions, here is the English translation of all relevant items:

Häufigkeit = frequency

Fehler = error

Mittelwert = average

Std-Abw = standard deviation

[histogram: bell curve of the error, with mean and standard deviation]

+9  A: 

No, it does not. Its younger brother System#nanoTime() has much better precision than System#currentTimeMillis().

Apart from the answers in their Javadocs (see the links above), this subject has been discussed here several times as well. Search for "currenttimemillis vs nanotime" and you'll find, among others, this topic: System.currentTimeMillis vs System.nanoTime.
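
If you want to see the difference on your own machine, here's a quick probe (just a sketch; the printed step sizes vary per OS and JVM) that spins until each clock changes value and prints the observed step:

public class ClockGranularity
{
    public static void main(String[] args)
    {
        // Spin until currentTimeMillis() ticks over; the jump is its real granularity.
        long m0 = System.currentTimeMillis(), m1;
        do { m1 = System.currentTimeMillis(); } while (m1 == m0);
        System.out.println("currentTimeMillis() step: " + (m1 - m0) + " ms");

        // The same probe for nanoTime(); the step is typically far smaller.
        long n0 = System.nanoTime(), n1;
        do { n1 = System.nanoTime(); } while (n1 == n0);
        System.out.println("nanoTime() step: " + (n1 - n0) + " ns");
    }
}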

BalusC
nanoTime() has better precision, but it's accuracy is still the same as currentTimeMillis() (it depends on the underlying operating system). See http://en.wikipedia.org/wiki/Accuracy_and_precision
Steve Kuo
Thanks for the heads up (English is not my native language). I've fixed it.
BalusC
However... your English is also not that good. It should be "its" (of it) and not "it's" (it is) :o)
BalusC
Muphry's law strikes again.
Skip Head
+2  A: 

Per the docs,

 * Returns the current time in milliseconds.  Note that
 * while the unit of time of the return value is a millisecond,
 * the granularity of the value depends on the underlying
 * operating system and may be larger.  For example, many
 * operating systems measure time in units of tens of
 * milliseconds.
Jonathan Feinberg
+1  A: 

What you are seeing is the underlying clock resolving to a 15 ms resolution. This is a feature of the OS and its interrupt rate. There is a patch for the Linux kernel that increases this resolution to 1 ms; I'm not sure about Windows. There have been a number of posts about this already.
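
A quick way to observe this (a sketch; the numbers depend on the OS timer) is to request a sleep shorter than the interrupt period and measure what you actually get:

public class SleepResolution
{
    public static void main(String[] args) throws InterruptedException
    {
        for (int i = 0; i < 5; i++)
        {
            long start = System.nanoTime();
            Thread.sleep(1); // ask for 1 ms
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            // On a 15 ms timer this prints values near 15, not near 1.
            System.out.println("requested 1 ms, slept ~" + elapsedMs + " ms");
        }
    }
}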

Joel