tags:
views: 7331
answers: 8

I saw on the Internet that I was supposed to use System.nanoTime(), but that doesn't work for me - it only gives me millisecond precision. I just need the microseconds before and after my function executes so that I know how long it takes. I'm using Windows XP.

Basically, I have this code that does, for example, from 1 million up to 10 million insertions into a Java linked list. The problem is that I can't get reliable measurements; sometimes a run with more insertions appears to take less time than one with fewer.

Here's an example:

import java.util.LinkedList;

class test
{
    public static void main(String args[])
    {
        for (int k = 1000000; k <= 10000000; k += 1000000)
        {
            System.out.println(k);
            LinkedList<Integer> aux = new LinkedList<Integer>();
            // need something here to see the start time
            for (int i = 0; i < k; i++)
                aux.addFirst(10000);
            // need something here to see the end time
            // print here the difference between both times
        }
    }
}

I did this many times - there was an outer loop running it 20 times for each k - but the results aren't good. Sometimes it takes less time to make 10 million insertions than 1 million, because I'm not getting correct measurements with what I'm using now (System.nanoTime()).
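
For reference, one way the timing placeholders could be filled in with System.nanoTime() (a minimal sketch; the division by 1000 just converts nanoseconds to microseconds):

long start = System.nanoTime();
for (int i = 0; i < k; i++)
    aux.addFirst(10000);
long elapsed = System.nanoTime() - start;
System.out.println(k + " insertions took " + (elapsed / 1000) + " microseconds");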

Edit 2: Yes, I'm using the Sun JVM.

Edit 3: I may have done something wrong in the code; I'll see if changing it does what I want.

Edit 4: My mistake, it seems System.nanoTime() works. Phew.

+9  A: 

It's not clear to me exactly what you're benchmarking, but in general any test that runs so quickly that accuracy below 50 ms matters is going to be very prone to other disturbances.

I generally try to make benchmarks run for at least 10 seconds. The framework I'm writing at the moment will guess how many iterations to run so that it will take 30 seconds. That means you won't get radically different results just because some other process stole the CPU for a few milliseconds.

Running for longer is almost always a better approach than trying to measure with finer-grained accuracy.
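
One way to apply that advice (a sketch only, not Jon's actual framework; the batch size and ten-second target are arbitrary choices):

import java.util.LinkedList;

class BenchmarkSketch
{
    public static void main(String[] args)
    {
        int batchSize = 1000000;         // insertions per timed batch
        long targetNanos = 10000000000L; // run for about ten seconds in total

        long totalElapsed = 0;
        long totalInsertions = 0;
        // Repeat fixed-size batches until enough wall-clock time has
        // accumulated for millisecond-level noise to be negligible.
        while (totalElapsed < targetNanos)
        {
            totalElapsed += timeBatch(batchSize);
            totalInsertions += batchSize;
        }
        System.out.println(totalInsertions + " insertions in " + totalElapsed
                + " ns (" + (double) totalElapsed / totalInsertions
                + " ns per insertion)");
    }

    // Times one batch: build a fresh list and do batchSize insertions at the head.
    static long timeBatch(int batchSize)
    {
        LinkedList<Integer> list = new LinkedList<Integer>();
        long start = System.nanoTime();
        for (int i = 0; i < batchSize; i++)
            list.addFirst(10000);
        return System.nanoTime() - start;
    }
}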

Jon Skeet
+2  A: 

That's weird. System.nanoTime() is supposed to work. Are you using the Sun JVM?

Can you just repeat your operation 1000 times and divide the time by 1000 to find out what you need to know?

sjbotha
+3  A: 

You have to repeat the tests thousands of times. There are lots of things happening that will influence your measurements, like garbage collection, I/O, swapping, the number of threads in the ready queue, etc.

Tiago
+6  A: 

My guess is that since System.nanoTime() uses the "most precise available system timer", which apparently has only millisecond precision on your system, you can't get anything better.
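
You can probe the effective granularity empirically (a quick sketch: it watches how big the jump is between consecutive distinct readings):

class TimerGranularity
{
    public static void main(String[] args)
    {
        // Poll nanoTime until the reported value changes; the size of
        // the jump is a rough lower bound on the timer's granularity.
        for (int sample = 0; sample < 5; sample++)
        {
            long t0 = System.nanoTime();
            long t1 = t0;
            while (t1 == t0)
                t1 = System.nanoTime();
            System.out.println("smallest observed step: " + (t1 - t0) + " ns");
        }
    }
}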

Zach Scrivena
+1  A: 

It may be the case that the underlying OS doesn't provide timers with nanosecond precision.

There is also an older post.

starblue
A: 

A benchmark that relies on such short time intervals gives you unreliable results. You will always get different results because of external factors like I/O, swapping, process switches, caches, garbage collection, etc. Additionally, the JVM optimizes your code as it runs, so the first measured calls are likely to be slower than later ones: the JVM progressively optimizes the commands you execute.

Additionally, a method like System.nanoTime() depends on the timers of the underlying system, which may (and most likely will) not have the granularity to measure with that accuracy. To cite the API:

This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change.

To really measure with high precision you would need access to external timing hardware with guaranteed precision.

To make your benchmark more stable, execute it more than once and measure time intervals larger than just a few milliseconds.
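
For example, a few untimed warm-up passes before measuring give the JVM a chance to optimize the hot path first (a sketch; the pass counts are arbitrary):

import java.util.LinkedList;

class WarmupSketch
{
    public static void main(String[] args)
    {
        // Untimed warm-up passes so the JIT has compiled the hot path
        // before we start measuring.
        for (int pass = 0; pass < 10; pass++)
            insert(1000000);

        // Now take several timed passes and report each one.
        for (int pass = 0; pass < 5; pass++)
        {
            long start = System.nanoTime();
            insert(1000000);
            long elapsed = System.nanoTime() - start;
            System.out.println("pass " + pass + ": " + (elapsed / 1000000) + " ms");
        }
    }

    static void insert(int n)
    {
        LinkedList<Integer> list = new LinkedList<Integer>();
        for (int i = 0; i < n; i++)
            list.addFirst(10000);
    }
}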

Mnementh
+2  A: 

System.nanoTime() uses a counter in the CPU and is usually accurate to about 1 microsecond on Windows XP and Linux.

Note: Windows XP is often less accurate on multi-CPU machines, as it doesn't compensate for different CPUs having different counters; Linux does. Note 2: it will drift relative to System.currentTimeMillis(), as it is based on your CPU's clock (which doesn't need to be accurate over long periods) rather than the wall clock (which drifts less per day but has coarser granularity).
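
The drift is easy to observe by comparing the elapsed time reported by both clocks over the same interval (a sketch):

class ClockDrift
{
    public static void main(String[] args) throws InterruptedException
    {
        long nanoStart = System.nanoTime();
        long milliStart = System.currentTimeMillis();
        Thread.sleep(10000); // wait ten seconds

        long nanoElapsedMs = (System.nanoTime() - nanoStart) / 1000000;
        long milliElapsedMs = System.currentTimeMillis() - milliStart;
        // The two elapsed values should be close but rarely identical.
        System.out.println("nanoTime elapsed:          " + nanoElapsedMs + " ms");
        System.out.println("currentTimeMillis elapsed: " + milliElapsedMs + " ms");
    }
}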

In your benchmark you are basically testing the speed at which you can create new objects. Not surprisingly your results will vary dramatically based on your GC settings and how recently a GC has been performed.

Try running your tests with the following options and you should see very different results.

-verbosegc -XX:NewSize=128m -mx256m
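
For example, assuming the question's class is compiled as test:

java -verbosegc -XX:NewSize=128m -mx256m test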

Peter Lawrey
A: 

If you want a reliable result, use a profiler. I suggest VisualVM. It is easy to install and use.

marcospereira