I've got this webapp that needs some memory tuning. While I'm already profiling the application itself and trimming things down, the JVM itself seems overly bloated to me on our busiest instance. (The lower volume instances do not have this problem.) The details:
Linux 2.6.9-78.0.5.ELsmp #1 SMP x86_64
Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode)
-d64 in startup.sh
If I could refactor out the need for a 64-bit JVM, and drop the -d64 switch, would that make the JVM's resident memory footprint smaller? In other words...
What impact, if any, does the -d64 switch have on the Sun JVM's resident memory usage?
The -d64 switch puts the JVM into 64-bit mode. Technically, on Solaris/Linux and most Unixes, the JVM process then runs in the LP64 data model.
The LP64 model differs from the 32-bit ILP32 model in that pointers are 64 bits wide instead of 32. For the JVM this allows greater memory addressability, but it also means the space occupied by object references alone has doubled, so a 64-bit JVM shows more bloat than a 32-bit one for the same number of live objects.
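If you want to sanity-check which mode a given instance is actually running in, you can ask the JVM itself. Below is a minimal sketch (the class name is just illustrative); it relies on the Sun-specific sun.arch.data.model system property, which reports 32 or 64 on Sun JVMs and may be absent on other vendors' VMs, hence the fallback.

    // Minimal sketch: report the data model of the running JVM.
    // "sun.arch.data.model" is Sun-specific and may be missing on other
    // vendors' JVMs, so fall back to "unknown".
    public class DataModelCheck {
        public static void main(String[] args) {
            String model = System.getProperty("sun.arch.data.model", "unknown");
            String arch  = System.getProperty("os.arch");
            System.out.println("Data model: " + model + "-bit, os.arch: " + arch);
        }
    }

Run it with and without -d64 on the busy instance and you can see whether that instance is actually paying the doubled-pointer cost described above.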
Another thing that is often forgotten is the size of the instructions themselves: on a 64-bit JVM they operate on the native 64-bit machine registers, so the generated code grows as well.
If, however, you use compressed object pointers in a 64-bit environment, the JVM will encode and decode pointers whenever possible for heap sizes greater than 4 GB. Briefly stated, with compressed pointers the JVM uses 32-bit-wide values wherever it can.
Hint: switch on the UseCompressedOops flag with -XX:+UseCompressedOops to get rid of some of the bloat. YMMV, but people have reported up to a 50% drop in memory bloat by using compressed oops.
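As a sketch of how you might verify that the flag actually took effect on a running instance, the snippet below asks the Sun-specific HotSpotDiagnosticMXBean for the UseCompressedOops VM option. The class name is illustrative, and on HotSpot builds that predate the flag getVMOption will throw an IllegalArgumentException.

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    // Minimal sketch: query HotSpot's diagnostic MBean for the current
    // value of UseCompressedOops. Sun/HotSpot-specific; other vendors'
    // JVMs do not expose this bean.
    public class CompressedOopsCheck {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean hotspot = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            System.out.println("UseCompressedOops = "
                    + hotspot.getVMOption("UseCompressedOops").getValue());
        }
    }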