How can a Java program accurately measure the time it takes to write or read a number of bytes to/from a file?
It is really important that the time is measured accurately. (The time should be computed by the program itself.)
The standard idiom is:
long startTime = System.nanoTime();
doSomething();
long elapsedTime = System.nanoTime() - startTime;
Not tested, but something like:
long delta = System.nanoTime();
try {
    // do your stuff
} finally {
    delta = System.nanoTime() - delta;
}
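Applied to the question, a minimal sketch of that idiom for file I/O might look like the following (the file name "test.dat" and the 1 MB payload are just placeholders):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class FileTiming {
    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1024 * 1024];   // 1 MB of zeros; arbitrary test size
        File file = new File("test.dat");      // hypothetical file name

        // Time the write
        long start = System.nanoTime();
        try (FileOutputStream out = new FileOutputStream(file)) {
            out.write(data);
        }
        long writeNanos = System.nanoTime() - start;

        // Time the read
        start = System.nanoTime();
        try (FileInputStream in = new FileInputStream(file)) {
            byte[] buffer = new byte[8192];
            while (in.read(buffer) != -1) {
                // discard the bytes; only the elapsed time matters here
            }
        }
        long readNanos = System.nanoTime() - start;

        System.out.println("write: " + (writeNanos / 1_000_000.0) + " ms");
        System.out.println("read:  " + (readNanos / 1_000_000.0) + " ms");
    }
}

Keep in mind that the write may return before the data actually reaches the disk (OS buffering), and the read may be served from the page cache, so this measures the Java-visible I/O time rather than raw disk speed.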
There is a code sample here:
http://www.goldb.org/stopwatchjava.html
/*
  Copyright (c) 2005, Corey Goldberg

  StopWatch.java is free software; you can redistribute it and/or modify
  it under the terms of the GNU General Public License as published by
  the Free Software Foundation; either version 2 of the License, or
  (at your option) any later version.
*/

public class StopWatch {

    private long startTime = 0;
    private long stopTime = 0;
    private boolean running = false;

    public void start() {
        this.startTime = System.currentTimeMillis();
        this.running = true;
    }

    public void stop() {
        this.stopTime = System.currentTimeMillis();
        this.running = false;
    }

    // elapsed time in milliseconds
    public long getElapsedTime() {
        long elapsed;
        if (running) {
            elapsed = (System.currentTimeMillis() - startTime);
        } else {
            elapsed = (stopTime - startTime);
        }
        return elapsed;
    }

    // elapsed time in seconds
    public long getElapsedTimeSecs() {
        long elapsed;
        if (running) {
            elapsed = ((System.currentTimeMillis() - startTime) / 1000);
        } else {
            elapsed = ((stopTime - startTime) / 1000);
        }
        return elapsed;
    }

    // sample usage
    public static void main(String[] args) {
        StopWatch s = new StopWatch();
        s.start();
        // code you want to time goes here
        s.stop();
        System.out.println("elapsed time in milliseconds: " + s.getElapsedTime());
    }
}
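If you go with this StopWatch class, a usage sketch for the file case could look like this (the file name and payload size are hypothetical):

import java.io.FileOutputStream;
import java.io.IOException;

public class StopWatchFileDemo {
    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1024 * 1024];   // 1 MB test payload; arbitrary
        StopWatch watch = new StopWatch();

        watch.start();
        try (FileOutputStream out = new FileOutputStream("test.dat")) {  // hypothetical file name
            out.write(data);
        }
        watch.stop();

        System.out.println("write took " + watch.getElapsedTime() + " ms");
    }
}

Note that because StopWatch is based on System.currentTimeMillis(), a small write may well report 0 ms; using System.nanoTime() or repeating the operation many times gives more useful numbers.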
Well, this will probably get downvoted for not being hi-tech enough, but the way I would do it is simply to run the operation in a loop some number of times and time the whole loop.
For example, if you time the loop in seconds, running the operation 1,000 times and dividing gives you the per-run time in milliseconds; running it 1,000,000 times gives you microseconds.
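A minimal sketch of that approach, with a placeholder doWork() method standing in for the file read or write you actually want to measure:

public class LoopTiming {
    public static void main(String[] args) {
        final int iterations = 1000;  // arbitrary; raise it until the total is comfortably measurable

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            doWork();  // the operation you actually want to time
        }
        long totalNanos = System.nanoTime() - start;

        System.out.println("average per iteration: " + (totalNanos / iterations) + " ns");
    }

    private static void doWork() {
        // placeholder for the file read/write being measured
    }
}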
If you also want to find out why it takes as long as it does, just pause it in the debugger a number of times (say 10) while it's running and look at the stack traces; that will tell you what it's doing and why.
The problem with the System.xxx timing methods is that the call itself has some overhead and limited resolution. The usually "accepted" way of doing it is to run the test a few tens of thousands of times and compute an average.
Also, depending on your OS there is something called timer granularity (Windows is a well-known example). This is the smallest time step the OS clock can report. On some operating systems it's on the order of a millisecond; on others it's much finer, down toward nanoseconds. It may or may not be relevant in your case.
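If you want a rough feel for that granularity on your own machine, one way (a sketch, not a rigorous benchmark) is to poll each clock and record the smallest nonzero step it reports:

public class ClockGranularity {
    public static void main(String[] args) {
        // Smallest observed step of System.currentTimeMillis()
        long smallestMillisStep = Long.MAX_VALUE;
        long lastMillis = System.currentTimeMillis();
        for (int i = 0; i < 1_000_000; i++) {
            long now = System.currentTimeMillis();
            if (now > lastMillis) {
                smallestMillisStep = Math.min(smallestMillisStep, now - lastMillis);
                lastMillis = now;
            }
        }
        System.out.println("currentTimeMillis() smallest step: " + smallestMillisStep + " ms");

        // Smallest observed step of System.nanoTime()
        long smallestNanoStep = Long.MAX_VALUE;
        long lastNanos = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            long now = System.nanoTime();
            if (now > lastNanos) {
                smallestNanoStep = Math.min(smallestNanoStep, now - lastNanos);
                lastNanos = now;
            }
        }
        System.out.println("nanoTime() smallest step: " + smallestNanoStep + " ns");
    }
}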