I have a Solaris SPARC (64-bit) server with 16 GB of memory. There are a lot of small Java processes running on it, but today I got a "Could not reserve enough space for object heap" error when trying to launch a new one. I was surprised, since there was still more than 4 GB free on the server. The new process launched successfully after some of the other processes were shut down, so the system had definitely hit a ceiling of some kind.

After searching the web for an explanation, I began to wonder if it was somehow related to the fact that I'm using the 32-bit JVM (none of the Java processes on this server require very much memory).

I believe the default max memory pool is 64MB, and I was running close to 64 of these processes. So that would be 4GB all told ... right at the 32-bit limit. But I don't understand why or how any of these processes would be affected by the others. If I'm right, then in order to run more of these processes I'll either have to tune the max heap to be lower than the default, or else switch to using the 64-bit JVM (which may mean raising the max heap to be higher than the default for these processes). I'm not opposed to either of these, but I don't want to waste time and it's still a shot in the dark right now.
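
(For reference, tuning the heap down would just mean launching each process with an explicit -Xmx. Here's a minimal sketch of doing that from a launcher class; the Worker class name is hypothetical, standing in for one of my small processes.)

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class LaunchCapped {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Cap the child JVM's heap well below the platform default.
            ProcessBuilder pb = new ProcessBuilder("java", "-Xms16m", "-Xmx32m", "Worker");
            pb.redirectErrorStream(true);   // merge the child's stderr into its stdout
            Process p = pb.start();

            // Echo the child's output so nothing is silently swallowed.
            BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println(line);
            }
            System.exit(p.waitFor());       // propagate the child's exit code
        }
    }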

Can anyone explain why it might work this way? Or am I completely mistaken?

If I am right about the explanation, then there is probably documentation on this: I'd very much like to find it. (I'm running Sun's JDK 6 update 17 if that matters.)

Edit: I was completely mistaken. The answers below confirmed my gut instinct that there's no reason I shouldn't be able to run as many JVMs as the machine can hold. A little while later I got an error on the same server trying to run a non-Java process: "fork: not enough space". So there's some other limit I'm hitting that is not Java-specific. I'll have to figure out what it is (no, it's not swap space). Over to serverfault I go, most likely.

+1  A: 

I suspect the memory is fragmented. See also http://stackoverflow.com/questions/103622/tools-to-view-solve-windows-xp-memory-fragmentation for confirmation that memory fragmentation can cause such errors.

cristis
Sorry, but I'm not seeing anything in that question that would confirm that memory fragmentation would cause this problem. In fact, http://stackoverflow.com/questions/171205/java-maximum-memory-on-windows-xp seems to indicate that the memory only needs to be contiguous within the address space of the JVM itself ... If I'm wrong and you're right, any idea how I would confirm this?
Zac Thompson
+3  A: 

I believe the default max memory pool is 64MB, and I was running close to 64 of these processes. So that would be 4GB all told ... right at the 32-bit limit.

No. The 32-bit limit is per process (at least on a 64-bit OS). But the default maximum heap is not fixed at 64MB:

initial heap size: Larger of 1/64th of the machine's physical memory or some reasonable minimum.

maximum heap size: Smaller of 1/4th of the physical memory or 1GB.

Note: The boundaries and fractions given for the heap size are correct for J2SE 5.0. They are likely to be different in subsequent releases as computers get more powerful.
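
Rather than guessing, you can ask the JVM itself what its effective cap is; a quick sketch:

    public class HeapDefaults {
        public static void main(String[] args) {
            // maxMemory() reports the heap ceiling this JVM will actually honor:
            // either the -Xmx value, or the platform's ergonomic default.
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
        }
    }

Run it plain to see the default on your machine, then with something like java -Xmx32m HeapDefaults to confirm the flag takes effect.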

Michael Borgwardt
Ah, yes, thank you; I was recalling the old 64MB default from 1.4. And yes, I knew that the limit was per-process, but I was grasping at straws.
Zac Thompson