Does anyone ever use stopwatch benchmarking, or should a performance tool always be used? Are there any good free tools available for Java? What tools do you use?

EDIT: Thanks for all the answers so far.

To clarify my concerns, stopwatch benchmarking is subject to error due to operating system scheduling. On a given run of your program the OS might schedule another process (or several) in the middle of the function you're timing. In Java things are a little worse if you're trying to time a threaded application, as the JVM scheduler throws even more randomness into the mix.

How do you address operating system scheduling when benchmarking?

A: 

I always use stopwatch benchmarking as it is so much easier. The results don't need to be very accurate for me though. If you need accurate results then you shouldn't use stopwatch benchmarking.

Hintswen
A: 

I don't think stopwatch benchmarking is too horrible, but if you can get onto a Solaris or OS X machine you should check out DTrace. I've used it to get some great information about timing in my applications.

commondream
+8  A: 

Stopwatch benchmarking is fine, provided you measure enough iterations to be meaningful. Typically, I require a total elapsed time of a few seconds (single digits). Otherwise your results can be significantly skewed by scheduling and other OS interruptions to your process.

For this I use a little set of static methods I built a long time ago, which are based on System.currentTimeMillis().
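
For illustration, a minimal sketch of what such currentTimeMillis()-based helpers might look like (BenchUtil and its methods are hypothetical, not the actual code referred to above):

    // Hypothetical sketch of currentTimeMillis()-based timing helpers.
    public final class BenchUtil {
        private BenchUtil() {}

        // Run a task repeatedly and return the total elapsed wall-clock time in ms.
        public static long timeMillis(Runnable task, int iterations) {
            long start = System.currentTimeMillis();
            for (int i = 0; i < iterations; i++) {
                task.run();
            }
            return System.currentTimeMillis() - start;
        }

        // Average wall-clock time per iteration, in milliseconds.
        public static double millisPerIteration(Runnable task, int iterations) {
            return (double) timeMillis(task, iterations) / iterations;
        }
    }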

Edit1: For profiling work I have used jProfiler for a number of years and have found it very good. I have recently looked over YourKit, which looks great from its website, but I've not used it personally.

Edit2: To answer the edited question on scheduling interruptions: I find that doing repeated runs until consistency is achieved/observed works in practice to weed out anomalous results from process scheduling. I also find that thread scheduling has no practical impact for runs of between 5 and 30 seconds. Lastly, once you pass the few-seconds threshold, scheduling has, in my experience, a negligible impact on the results - I find that a 5 second run consistently averages out the same as a 5 minute run in time per iteration.

Edit3: You may also want to consider pre-running the tested code about 10,000 times to "warm up" the JIT, depending on the number of times you expect the tested code to run over time in real life.
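
As a rough sketch of that warm-up idea (doWork() here is just a hypothetical stand-in for the code under test):

    // Sketch only: discard an initial batch of runs so the JIT has compiled the hot code.
    for (int i = 0; i < 10000; i++) {
        doWork();                              // warm-up iterations, not timed
    }
    long start = System.currentTimeMillis();
    for (int i = 0; i < 1000000; i++) {
        doWork();                              // measured iterations
    }
    long elapsed = System.currentTimeMillis() - start;
    System.out.println("ms per iteration: " + ((double) elapsed / 1000000));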

Software Monkey
Thanks. I had a feeling that this was going to be pretty common; I just had a hard time justifying it to myself without outside confirmation. :)
Bill the Lizard
@Bill: You're most welcome.
Software Monkey
+3  A: 

A profiler gives you more detailed information, which can help to diagnose and fix perf problems.

In terms of actual measurement, stopwatch time is what users notice so if you want to validate that things are within acceptable limits, stopwatch time is fine.

When you want to actually fix problems, however, a profiler can be really helpful.

Scott Wisniewski
+2  A: 

I ran a program today that searched through and collected information from a bunch of dBase files, it took just over an hour to run. I took a look at the code, made an educated guess at what the bottleneck was, made a minor improvement to the algorithm, and reran the program, this time it completed in 2.5 minutes. I didn't need any fancy profiling tools or benchmark suites to tell me the new version was a significant improvement. If I needed to further optimize the running time I probably would have done some more sophisticated analysis but this wasn't necessary. I find that this sort of "stopwatch benchmarking" is an acceptable solution in quite a number of cases and resorting to more advanced tools would actually be more time-consuming in these cases.

Robert Gamble
I don't mind a downvote for a legitimate reason, but at least have the decency to explain what is wrong or unhelpful with the answer when you do.
Robert Gamble
A: 

I do it all the time. I'd much rather use a profiler, but the vendor of the domain-specific language I'm working with doesn't provide one.

Andrew Medico
+2  A: 

It's totally valid as long as you measure large enough intervals of time. I would execute 20-30 runs of what you intend to test so that the total elapsed time is over 1 second. I've noticed that time calculations based on System.currentTimeMillis() tend to come out as either 0 ms or ~30 ms; I don't think you can get anything more precise than that. You may want to try out System.nanoTime() if you really need to measure a small time interval:

http://java.sun.com/javase/6/docs/api/java/lang/System.html#nanoTime()
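
For example, a rough sketch of timing enough repetitions with System.nanoTime() that clock resolution stops mattering (codeUnderTest() is a placeholder, and 30 runs is arbitrary):

    // Sketch: time many runs with System.nanoTime() so timer resolution is negligible.
    int runs = 30;
    long start = System.nanoTime();
    for (int i = 0; i < runs; i++) {
        codeUnderTest();                       // placeholder for the code being measured
    }
    long elapsedNanos = System.nanoTime() - start;
    System.out.printf("average: %.3f ms per run%n", elapsedNanos / 1e6 / runs);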

cliff.meyers
A: 

After all, it's probably the second most popular form of benchmarking, right after "no-watch benchmarking" - where we say "this activity seems slow, that one seems fast."

Usually what's most important to optimize is whatever interferes with the user experience - which is most often a function of how frequently you perform the action, and whatever else is going on at the same time. Other forms of benchmarking often just help zero in on these.

le dorfier
+1  A: 

Profilers can get in the way of timings, so I would use a combination of stopwatch timing to identify overall performance problems, then use the profiler to work out where the time is being spent. Repeat as required.

Daniel Paull
A: 

I think a key question is the complexity and length of time of the operation.

I sometimes even use physical stopwatch measurements to see if something takes minutes, hours, days, or even weeks to compute (I am working with an application where run times on the order of several days are not unheard of, even if seconds and minutes are the most common time spans).

However, the automation afforded by calls to any kind of clock system on the computer, like the Java millis call referred to in the linked article, is clearly superior to manually seeing how long something runs.

Profilers are nice, when they work, but I have had problems applying them to our application, which usually involves dynamic code generation, dynamic loading of DLLs, and work performed in the two built-in just-in-time-compiled scripting languages of my application. Profilers quite often assume a single source language, along with other unrealistic expectations about complex software.

jakobengblom2
+1  A: 

Stopwatch is actually the best benchmark!

The real end-to-end user response time is the time that actually matters.

It is not always possible to obtain this time using the available tools. For instance, most testing tools do not include the time it takes for a browser to render a page, so an overcomplex page with badly written CSS will show sub-second response times to the testing tools but a response time of 5+ seconds to the user.

The tools are great for automated testing and for problem determination, but don't lose sight of what you really want to measure.

James Anderson
+1  A: 

You need to test a realistic number of iterations, as you will get different answers depending on how you do the timing. If an operation will only ever be performed once, taking the average of many iterations could be misleading. If you want to know the time it takes after the JVM has warmed up, you might first run many (e.g. 10,000) iterations which are not included in the timings.

I also suggest you use System.nanoTime() as it's much more accurate. If your test time is around 10 microseconds or less, you don't want to call it too often or it can change your result. (For example, if I am testing for, say, 5 seconds and want to know when that time is up, I only read nanoTime() every 1000 iterations, provided I know an iteration is very quick.)
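
A sketch of that pattern, checking the clock only every 1000 iterations as described (codeUnderTest() is just a placeholder):

    // Sketch: sample System.nanoTime() only every 1000 iterations so the timer
    // calls themselves don't distort a very short benchmark.
    final long budgetNanos = 5_000_000_000L;   // roughly 5 seconds
    long start = System.nanoTime();
    long iterations = 0;
    do {
        for (int i = 0; i < 1000; i++) {
            codeUnderTest();                   // placeholder for the code being measured
        }
        iterations += 1000;
    } while (System.nanoTime() - start < budgetNanos);
    long elapsed = System.nanoTime() - start;
    System.out.printf("%.1f ns per iteration%n", (double) elapsed / iterations);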

Peter Lawrey
A: 

How do you address operating system scheduling when benchmarking?

Benchmark for long enough on a system which is representative of the machine you will be using. If your OS slows down your application, then that should be part of the result.

There is no point in saying "my program would be faster if only I didn't have an OS".

If you are using Linux, you can use tools such as numactl, chrt and taskset to control which CPUs are used and how your process is scheduled.

Peter Lawrey