I've been developing a minimalistic Java rich client CRUD application framework for the past few years, mostly as a hobby but also actively using it to write applications for my current employer.

The framework provides database access to clients either via a local JDBC-based connection or via a lightweight RMI server. Last night I started a load-testing application that ran 100 headless clients bombarding the server with requests, each client waiting only 1 - 2 seconds between simple use cases, which consist of selecting records along with their associated detail records from a simple e-store database (Chinook).
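For reference, each headless client boils down to a loop roughly like the following sketch (simplified, with the actual framework call hidden behind a placeholder runUseCase() method):

    import java.util.Random;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Simplified load-test driver: 100 headless clients, each waiting a
    // random 1 - 2 seconds between use cases. The real framework/JDBC/RMI
    // work is hidden behind the runUseCase() placeholder.
    public final class LoadTest {

        private static final int CLIENT_COUNT = 100;

        public static void main(String[] args) {
            ExecutorService clients = Executors.newFixedThreadPool(CLIENT_COUNT);
            for (int i = 0; i < CLIENT_COUNT; i++) {
                clients.execute(new Runnable() {
                    public void run() {
                        Random random = new Random();
                        try {
                            while (!Thread.currentThread().isInterrupted()) {
                                runUseCase();
                                Thread.sleep(1000 + random.nextInt(1000)); // 1 - 2 seconds
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
            }
            // The pool is intentionally never shut down; the test runs until killed.
        }

        private static void runUseCase() {
            // Placeholder: select a record plus its detail records from the
            // Chinook database via the framework (local JDBC or RMI).
        }
    }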

This morning, when I looked at the telemetry results from the server profiling session, I noticed something that seemed strange to me (and made me keep the setup running for the remainder of the day); I don't really know what conclusions to draw from it.

Here are the results: profiler screenshots of memory usage, GC activity, thread counts, and CPU load.

Interesting, right?

So the question is: is this normal or erratic? Is this simply the JRE (1.6.0_03 on Windows XP) doing its thing (perhaps related to the JRE configuration), or is my framework design somehow causing this?

Running the server against MySQL as opposed to an embedded H2 database does not affect the pattern.
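If it helps, I can also turn on GC logging on the server JVM for the next run, to see whether the sawtooth in the memory graph lines up with collections; as far as I know the relevant HotSpot flags on Java 6 are along these lines (server.jar being a stand-in for my actual server):

    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log -jar server.jar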

I am leaving out the details of my server design, but I'll be happy to elaborate if this behaviour is deemed erratic.

+1  A: 

Short answer: no, I don't think this looks scary.

I don't think you have enough information here to determine exactly where the spiky behavior comes from. There's nothing that indicates that there's a memory leak, thread leak, resource leak or obvious contention. Perhaps some of your tasks managed to get in step with each other? In any case, you seem to be observing correct behavior and nothing in the profile indicates dangerous symptoms.
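If you want to rule out clients drifting into lock-step, one simple thing to try (just a sketch, since I haven't seen your client code, and the helper is made up) is giving each client its own independently seeded think-time generator with a little extra jitter:

    import java.util.Random;

    // Hypothetical per-client think-time helper: each client owns its own
    // Random, so the sleep sequences can't accidentally synchronize.
    final class ThinkTime {

        private final Random random = new Random();

        void pause() throws InterruptedException {
            int base = 1000 + random.nextInt(1000); // the original 1 - 2 seconds
            int jitter = random.nextInt(250);       // extra de-synchronization
            Thread.sleep(base + jitter);
        }
    }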

That said, I would strongly recommend that you upgrade to the latest version of Java. Update 3 is too old: we're not even allowed to use it at work because of its security issues. There has also been plenty of work done on the garbage collector since then.

As a first step, I would recommend upgrading to the latest Java (update 20 as of this writing) and re-running your test. It's entirely possible that this puzzling behavior will vanish on the second try.

EDIT: With respect to the instant deadlock mentioned in the comments, I wouldn't take that finding as an indication of anything other than that multithreaded programming is hard (doable, but hard). I've recommended Java Concurrency in Practice before, and I strongly recommend that you keep it nearby while coding.

That said, I would also recommend avoiding RMI if at all possible. The hard-wired coupling it creates between client and server (the client hangs until the server fulfills the request) adds another layer of distributed-computing complexity that really isn't worth it for a simple request-and-fulfill pairing.

Congrats on finding the deadlock, though. There are plenty of times when I wish my own (RMI-caused) issues were as straightforward....

Bob Cross
Thanks for the Java update tip; I'll do that and run the test again tonight. Tasks getting into step with each other is something I haven't really considered; the wait time is randomized, but maybe I should look into that. And yes, I do share your view that this doesn't look inherently scary, since, like you said, resources don't seem to be leaking.
darri
I'll be damned: after a couple of hours of frustration I found out that the profiler I'm using causes an almost instant deadlock in my connection pool when running with Java 6 update 20, but not with update 3, or at least that's what it seems like. This experiment will have to wait until the weekend.
darri
My stubbornness kept me going last night: I upgraded to JProfiler 6 and, voilà, things are running without deadlocks again. But the escalating spike pattern persists, no change there. I'll run this on Linux over the weekend, for curiosity's sake. I'll also check out Java Concurrency in Practice; I've always been on somewhat of a "need to know" basis when it comes to Java concurrency, understanding things just enough to make them work but really having to work for it when it comes to debugging said code :).
darri
@darri, if you exposed a deadlock in the profile, I would assume that you really do have a deadlock case. Can you look at a thread dump? The stack trace will show which objects are locked between the threads. jvisualvm is bundled with the JDK as of update 20; I'd recommend running under the profiler that can force the deadlock and checking out the thread dump. It may find the problem for you.
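If you'd rather check programmatically than eyeball a dump, the JVM can also report deadlocks itself via ThreadMXBean (a minimal sketch, assuming you run it inside the server process or reach it over JMX):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Minimal sketch: ask the JVM whether any threads are currently
    // deadlocked (available on Java 6+, independent of the profiler).
    public final class DeadlockCheck {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            long[] deadlocked = threads.findDeadlockedThreads();
            if (deadlocked == null) {
                System.out.println("No deadlocked threads found.");
                return;
            }
            // Print the name, state and stack of each deadlocked thread.
            for (ThreadInfo info : threads.getThreadInfo(deadlocked, Integer.MAX_VALUE)) {
                System.out.println(info);
            }
        }
    }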
Bob Cross
@Bob Cross, I'm not so sure about the deadlock case; I think it might be some instrumentation difference between Java 6 u20 and JProfiler v5 (somewhat old) that caused the deadlock, since I haven't been able to reproduce it after updating both the JRE and JProfiler. I saw the thread dump and know the lock happened on the connection pool monitor object, but I've never seen that before, neither before nor after the update (the pool codebase is two years old), so I'm inclined to give my connection pool the benefit of the doubt for now. I'll definitely check out 'jvisualvm' over the weekend.
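For context, the checkout in my pool is conceptually the usual monitor-based pattern, something like this (not the actual code, just the shape of it):

    import java.util.LinkedList;

    // Conceptual sketch of a monitor-based connection checkout (not the
    // real pool code): callers block on the pool monitor until a
    // connection is returned.
    final class PoolSketch<T> {

        private final LinkedList<T> available = new LinkedList<T>();

        synchronized T checkOut() throws InterruptedException {
            while (available.isEmpty()) {
                wait(); // releases the pool monitor until checkIn() notifies
            }
            return available.removeFirst();
        }

        synchronized void checkIn(T connection) {
            available.addLast(connection);
            notifyAll();
        }
    }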
darri