views: 364
answers: 1

I've got this webapp that needs some memory tuning. While I'm already profiling the application itself and trimming things down, the JVM itself seems overly bloated to me on our busiest instance. (The lower volume instances do not have this problem.) The details:

  • Platform:
    • RHEL4 64-bit (Linux 2.6.9-78.0.5.ELsmp #1 SMP x86_64)
    • Sun Java 6 (Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode))
    • Tomcat 6 with -d64 in startup.sh
  • My webapp currently has some code that in production requires the benefits of running 64-bit.
  • I've observed that after some time (about a week) the JVM's resident memory size (as shown by top) is three times the size of my -Xmx setting.
  • The non-heap memory sizes (permanent generation, code cache, etc.) are all relatively trivial: a mere single-digit percentage of the heap size.
  • There is only one section of code that requires a 64-bit address space.
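To quantify the gap described above, the heap ceiling the JVM sees can be compared against the resident size the OS reports. This is a hypothetical sketch (the class name `RssCheck` is mine, not from the question); it reads `/proc/self/status`, which only exists on Linux, and falls back to -1 elsewhere.

```java
import java.io.BufferedReader;
import java.io.FileReader;

public class RssCheck {
    // Resident set size in KB from /proc/self/status (Linux only);
    // returns -1 where /proc is unavailable.
    static long rssKb() {
        try {
            BufferedReader r = new BufferedReader(new FileReader("/proc/self/status"));
            try {
                String line;
                while ((line = r.readLine()) != null) {
                    if (line.startsWith("VmRSS:")) {
                        // Line looks like "VmRSS:     12345 kB"; keep the digits.
                        return Long.parseLong(line.replaceAll("[^0-9]", ""));
                    }
                }
            } finally {
                r.close();
            }
        } catch (Exception ignored) {
        }
        return -1;
    }

    public static void main(String[] args) {
        long heapMaxKb = Runtime.getRuntime().maxMemory() / 1024; // the -Xmx ceiling
        System.out.println("max heap (KB): " + heapMaxKb);
        System.out.println("resident (KB): " + rssKb());
    }
}
```

On the instance described in the question, the second number was roughly three times the first.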

If I could refactor out the need for a 64-bit JVM, and drop the -d64 switch, would that make the JVM's resident memory footprint smaller? In other words...

What impact, if any, does the -d64 switch have on the Sun JVM resident memory usage?

+6  A: 

The -d64 switch puts the JVM into 64-bit mode. Technically, on Solaris/Linux and most Unixes, the JVM process will execute in the LP64 model.

The LP64 model differs from the 32-bit model (ILP32) in that pointers are 64 bits wide rather than 32. For the JVM, this allows for greater memory addressability, but it also means that the size occupied by object references alone has doubled. So for the same number of live objects at a given time, a 64-bit JVM carries more bloat than a 32-bit one.
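The doubling of reference slots is easy to see with back-of-the-envelope arithmetic. This is a sketch with hypothetical numbers (real layouts also include object headers and alignment padding, which this ignores):

```java
public class RefFootprint {
    // Bytes consumed by just the reference slots of an array of n references.
    static long slotBytes(long n, int refWidthBytes) {
        return n * refWidthBytes;
    }

    public static void main(String[] args) {
        long n = 10000000L; // ten million references
        System.out.println("32-bit refs: " + slotBytes(n, 4) + " bytes"); // 40 MB
        System.out.println("64-bit refs: " + slotBytes(n, 8) + " bytes"); // 80 MB
    }
}
```

The same doubling applies to every reference field inside every object, which is where much of the extra resident memory goes.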

Another thing that is often forgotten is the size of the generated code itself. On a 64-bit JVM, operands are sized to the native machine register width, so the compiled code tends to be larger as well.

If, however, you use compressed object pointers in a 64-bit environment, the JVM will encode and decode pointers whenever possible for heap sizes greater than 4 GB. Briefly stated, with compressed pointers the JVM represents object references as 32-bit values wherever it can.

Hint: Switch on the UseCompressedOops flag, using -XX:+UseCompressedOops, to get rid of some of the bloat. YMMV, but people have reported up to a 50% drop in memory bloat by using compressed oops.
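Whether the flag actually took effect can be checked from inside the running VM. This sketch uses the HotSpot-specific `com.sun.management.HotSpotDiagnosticMXBean` (not part of the standard Java API, so it only works on HotSpot-derived JVMs; the class name `OopsCheck` is mine):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class OopsCheck {
    // Returns the current value ("true"/"false") of UseCompressedOops.
    static String compressedOops() throws Exception {
        HotSpotDiagnosticMXBean bean =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        return bean.getVMOption("UseCompressedOops").getValue();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("UseCompressedOops = " + compressedOops());
    }
}
```

Alternatively, `java -XX:+PrintFlagsFinal -version | grep UseCompressedOops` shows the same thing from the command line.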

EDIT

The UseCompressedOops flag is supported in version 14.0 of the Java HotSpot VM, available from Java 6 Update 14 onwards.

Vineet Reynolds
Fantastic Answer. You've convinced me to refactor my code, and drop the -d64. I'll come back and comment on how it goes. I'll also work in a JVM update so I can try out -XX:+UseCompressedOops. Thanks. You've won *Stu's World Famous "Awesome Coder of the Month"* award for September 2009!
Stu Thompson
Wow, I never knew some google-fu would elucidate such a reaction. Thanks, I'm amazed by your reply! Btw, if you manage to keep your heap less than 4GB, your 64-bit JVM will behave like a 32-bit one; not sure about any performance impact though.
Vineet Reynolds
More than Google-fu!!! I had looked around myself, and was *thinking* it would be worth the effort...but there are *soooo* many JVM switches, and values, and if-then-but cases, and gotchas. It can be overwhelming, and my need was for a *"do it"* / *"don't bother"* answer, accompanied by relevant spoon-feeding. Thanks again.
Stu Thompson
Side note: it's not my heap that goes beyond 4GB, but my use of MappedByteBuffers that uses an address space beyond 4GB. It seemed like a great idea at the time...
Stu Thompson
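For context on the side note above: a `MappedByteBuffer` maps file contents directly into the process address space, outside the Java heap, which is why heavy use of mappings needs a large address space even when -Xmx is small. A minimal sketch (the class name `MapDemo` and the temp-file setup are mine, for illustration):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapDemo {
    // Maps a small file into the process address space. The mapping counts
    // toward the process's virtual/resident memory, not the Java heap, so
    // -Xmx places no limit on it.
    static int demo() {
        try {
            File f = File.createTempFile("mapdemo", ".bin");
            f.deleteOnExit();
            RandomAccessFile raf = new RandomAccessFile(f, "rw");
            try {
                MappedByteBuffer buf =
                    raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 1024);
                buf.put(0, (byte) 42); // touch the mapping
                return buf.get(0);
            } finally {
                raf.close();
            }
        } catch (Exception e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 42 on success
    }
}
```

With many or large mappings, the sum of these regions, not the heap, is what pushes the address space past 4GB.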
Wouldn't blame you. If you need to address that large an address space, 64-bit is the way to go; the rule of thumb is that 32-bit systems almost always give better performance, while 64-bit systems give you that much larger (practically unlimited) memory. Some people try to strike a balance by running several 32-bit JVMs on a 64-bit OS with lots of RAM; of course, they lose some performance when the JVMs have to communicate with each other.
Vineet Reynolds