Java gives access to two methods for getting the current time: System.nanoTime() and System.currentTimeMillis(). The first one gives a result in nanoseconds, but its actual accuracy is much worse than that (many microseconds).

Is the JVM already providing the best possible value for each particular machine? If not, is there some Java library that can give a finer measurement, possibly by being tied to a particular system?
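For illustration, here is a small probe one could run to see the smallest step each clock actually reports on a given machine. It is a sketch only; the class name is made up for the example, and the numbers vary widely by OS and hardware.

```java
// Illustrative probe only: prints the smallest step each clock reports on this machine.
public class ClockGranularity {
    public static void main(String[] args) {
        long n0 = System.nanoTime();
        long n1;
        do { n1 = System.nanoTime(); } while (n1 == n0);   // spin until the value changes
        System.out.println("nanoTime() smallest observed step: " + (n1 - n0) + " ns");

        long m0 = System.currentTimeMillis();
        long m1;
        do { m1 = System.currentTimeMillis(); } while (m1 == m0);
        System.out.println("currentTimeMillis() smallest observed step: " + (m1 - m0) + " ms");
    }
}
```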

+3  A: 

The problem with getting super precise time measurements is that some processors can't/don't provide such tiny increments.

As far as I know, System.currentTimeMillis() and System.nanoTime() are the best measurements you will be able to find.

jjnguy
Modern processors (>1 GHz) have cycle times shorter than one nanosecond, so they are technically quite capable.
James Jones
They could keep track of the time, but it doesn't mean that they are reporting time that accurately.
jjnguy
Indeed, I just tried on my home machine, and it looks like nanoTime() does take more than one microsecond per call (the mean is 1.2 µs, measured by calling it 100,000 times).
penpen
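For reference, a rough sketch of the kind of measurement penpen describes. It is an illustration only, not a proper benchmark harness (no warm-up, no JMH); the class name and loop count are just for the example.

```java
// Illustration: average cost of a System.nanoTime() call over many iterations.
public class NanoTimeCost {
    public static void main(String[] args) {
        final int calls = 100_000;
        long sink = 0;                        // keep the calls from being optimized away
        long start = System.nanoTime();
        for (int i = 0; i < calls; i++) {
            sink += System.nanoTime();
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("mean cost per call: %.2f ns (ignore: %d)%n",
                (double) elapsed / calls, sink);
    }
}
```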
Linux time tick precision is 10 ms by default, so asking for nanoseconds is not useful unless you tune the kernel to support it (the URL for how to tune it is in my answer).
Oscar Chan
A: 

Unfortunately, I don't think Java RTS is mature enough at this point.

Java's time methods do try to provide the best value (they actually delegate to native code that reads the kernel time). However, the JVM spec makes this coarse-time-measurement disclaimer mainly because of things like GC activity and what the underlying system supports:

  • Certain GC activities will block all threads even if you are running concurrent GC.
  • The default Linux clock tick precision is only 10 ms. Java cannot make it any better if the Linux kernel does not support it.

I haven't figured out how to address #1, unless your app does not need to do GC at all. A decent, mid-sized application will probably spend tens of milliseconds on occasional GC pauses. You are probably out of luck if your precision requirement is below 10 ms.

As for #2, you can tune the Linux kernel to give more precision. However, you also get less out of your box, because the kernel now context-switches more often.

Perhaps we should look at it from a different angle. Is there a reason the OP needs precision of 10 ms or lower? Is it okay to say that the precision is 10 ms, and also to look at the GC log around that time, so they know the measurement is accurate to ±10 ms when there was no GC activity around it?
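As an illustration of that last suggestion, here is a sketch that flags whether any GC ran during a measured interval, using the standard GarbageCollectorMXBean counters. The class name and doWork() placeholder are my own; replace doWork() with whatever is actually being timed.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sketch: mark a timed interval as "GC-free" or not by comparing collection counts
// before and after the measurement.
public class GcAwareTiming {
    public static void main(String[] args) {
        long gcBefore = totalGcCount();
        long start = System.nanoTime();

        doWork();

        long elapsedNanos = System.nanoTime() - start;
        boolean gcDuringInterval = totalGcCount() > gcBefore;
        System.out.printf("elapsed: %d ns, GC during interval: %b%n",
                elapsedNanos, gcDuringInterval);
    }

    private static long totalGcCount() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long count = gc.getCollectionCount();   // cumulative count; -1 if unsupported
            if (count > 0) {
                total += count;
            }
        }
        return total;
    }

    private static void doWork() {
        // placeholder workload; replace with the real operation being measured
        java.util.Arrays.sort(new java.util.Random(42).ints(1_000_000).toArray());
    }
}
```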

Oscar Chan
"Certain GC activities will block all threads even if you are running concurrent GC."You are right, but on the other hand, with some tuning of the JVM parameters, this can be partially alleviated. And as proposed, yes, the time passed in GC can be taken into account, and removed.
penpen
My point is not that we can't tune it. My point is that you can't get GC down to the nanosecond level you seem to want, even if you tune it. That was my definition of "decent" applications, which should already be tuned :)
Oscar Chan
+3  A: 

It's a bit pointless in Java to measure time down to the nanosecond scale; an occasional GC hit will easily wipe out any accuracy this may have given. In any case, the documentation states that while it gives nanosecond precision, that's not the same thing as nanosecond accuracy; and there are operating systems which don't report nanoseconds in any case (which is why you'll find answers quantized to 1000 when accessing them; it's not luck, it's limitation).

Not only that, but depending on how the feature is actually implemented by the OS, you might find quantized results coming through anyway (e.g. answers that always end in 64 or 128 instead of intermediate values).

It's also worth noting that the purpose of the method is to find the time difference between some (nearby) start time and now; if you take System.nanoTime() at the start of a long-running application and then take System.nanoTime() a long time later, it may have drifted quite far from real time. So you should only really use it for periods of less than 1 s; if you need a longer running time than that, milliseconds should be enough. (And if it's not, then make up the last few numbers; you'll probably impress clients and the result will be just as valid.)
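To make the intended usage concrete, a minimal sketch (the sleep is just a stand-in for the short operation being timed): nanoTime() only as a difference between two nearby readings, and currentTimeMillis() for absolute wall-clock timestamps.

```java
// nanoTime() has an arbitrary origin; only differences between nearby readings are meaningful.
public class ElapsedTimeExample {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(50);                     // stand-in for the short operation being measured
        long elapsedNanos = System.nanoTime() - start;
        System.out.println("elapsed: " + elapsedNanos + " ns");

        long wallClock = System.currentTimeMillis();   // meaningful as an absolute time
        System.out.println("wall clock: " + wallClock + " ms since the epoch");
    }
}
```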

AlBlue
"So you should only really use it for periods of less than 1s". It is for small repeated phenomenon. "And if it's not, then make up the last few numbers". Nah, they may want to try and reproduce this :)
penpen
A: 

If you are looking to record some type of phenomenon on the order of nanoseconds, what you really need is a real-time operating system. The accuracy of the timer will depend greatly on the operating system's implementation of its high-resolution timer and on the underlying hardware.

However, you can still stay with Java, since there are real-time versions of it available.

James Jones