Java process memory usage (jcmd vs pmap)

Konrad · Sep 25, 2016

I have a Java application running on Java 8 inside a Docker container. The process starts a Jetty 9 server and deploys a web application. The following JVM options are passed: -Xms768m -Xmx768m.
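(Side note for anyone reproducing the measurements: the jcmd VM.native_memory output further down is only available when Native Memory Tracking is enabled at JVM startup, i.e. the process was also started with:

-XX:NativeMemoryTracking=summary

NMT's own bookkeeping overhead then shows up as a separate line in the summary.)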

Recently I noticed that the process consumes a lot of memory:

$ ps aux 1
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
app          1  0.1 48.9 5268992 2989492 ?     Ssl  Sep23   4:47 java -server ...

$ pmap -x 1
Address           Kbytes     RSS   Dirty Mode  Mapping
...
total kB         5280504 2994384 2980776

$ jcmd 1 VM.native_memory summary
1:

Native Memory Tracking:

Total: reserved=1378791KB, committed=1049931KB
-                 Java Heap (reserved=786432KB, committed=786432KB)
                            (mmap: reserved=786432KB, committed=786432KB) 

-                     Class (reserved=220113KB, committed=101073KB)
                            (classes #17246)
                            (malloc=7121KB #25927) 
                            (mmap: reserved=212992KB, committed=93952KB) 

-                    Thread (reserved=47684KB, committed=47684KB)
                            (thread #47)
                            (stack: reserved=47288KB, committed=47288KB)
                            (malloc=150KB #236) 
                            (arena=246KB #92)

-                      Code (reserved=257980KB, committed=48160KB)
                            (malloc=8380KB #11150) 
                            (mmap: reserved=249600KB, committed=39780KB) 

-                        GC (reserved=34513KB, committed=34513KB)
                            (malloc=5777KB #280) 
                            (mmap: reserved=28736KB, committed=28736KB) 

-                  Compiler (reserved=276KB, committed=276KB)
                            (malloc=146KB #398) 
                            (arena=131KB #3)

-                  Internal (reserved=8247KB, committed=8247KB)
                            (malloc=8215KB #20172) 
                            (mmap: reserved=32KB, committed=32KB) 

-                    Symbol (reserved=19338KB, committed=19338KB)
                            (malloc=16805KB #184025) 
                            (arena=2533KB #1)

-    Native Memory Tracking (reserved=4019KB, committed=4019KB)
                            (malloc=186KB #2933) 
                            (tracking overhead=3833KB)

-               Arena Chunk (reserved=187KB, committed=187KB)
                            (malloc=187KB) 
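(For completeness: NMT can also diff against a baseline, which is handy for watching these numbers drift over time:

$ jcmd 1 VM.native_memory baseline
$ jcmd 1 VM.native_memory summary.diff

)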

As you can see, there is a huge difference between the RSS (2.8 GB) and what the VM native memory statistics actually report (1.0 GB committed, 1.3 GB reserved).

Why is there such a huge difference? I understand that RSS also accounts for shared libraries, but after analyzing the verbose pmap output I realized that shared libraries are not the issue; rather, the memory is consumed by mappings that pmap labels [ anon ]. Why has the JVM allocated so many anonymous memory blocks? (A rough way to total them up is sketched below.)
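For reference, a one-liner along these lines sums the resident size of the anonymous mappings; it assumes the pmap -x column layout shown above, with RSS in the third column:

$ pmap -x 1 | awk '/anon/ { sum += $3 } END { print sum " KB resident in [ anon ] mappings" }'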

While searching, I found the following question: Why does a JVM report more committed memory than the linux process resident set size? However, the case described there is different, because there the RSS is lower than what the JVM stats show. My situation is the opposite, and I can't figure out the reason.

Answer

Benak Raj · Dec 10, 2016

I was facing a similar issue with one of our Apache Spark jobs, where we submitted our application as a fat jar. After analyzing thread dumps, we figured out that Hibernate was the culprit: we loaded the Hibernate classes on application startup, and that code path used java.util.zip.Inflater.inflateBytes to read the Hibernate class files. This overshot our native resident memory usage by almost 1.5 GB. Here is the bug raised against Hibernate for this issue: https://hibernate.atlassian.net/browse/HHH-10938?attachmentOrder=desc. The patch suggested in the comments worked for us. Hope this helps.
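To illustrate the mechanism (a minimal sketch, not Hibernate's actual code): each java.util.zip.Inflater owns a native zlib stream allocated outside the Java heap, and those allocations are invisible to both the heap statistics and NMT, which only tracks the JVM's own allocations. If end() is never called, the native memory is held until finalization eventually runs, so RSS climbs while the Java-side numbers stay flat:

import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Hypothetical demo class illustrating the leak pattern described above.
public class InflaterNativeMemoryDemo {

    public static void main(String[] args) throws Exception {
        byte[] compressed = compress(new byte[64 * 1024]);
        byte[] out = new byte[64 * 1024];

        for (int i = 0; i < 100_000; i++) {
            Inflater inflater = new Inflater();
            inflater.setInput(compressed);
            inflater.inflate(out);
            // Without this call the native zlib buffers linger until the
            // finalizer runs; comment it out and watch RSS grow while the
            // Java heap stays flat:
            inflater.end();
        }
    }

    // Helper that produces some compressed input for the loop above.
    private static byte[] compress(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        byte[] buf = new byte[data.length + 64];
        int n = deflater.deflate(buf);
        deflater.end();
        byte[] result = new byte[n];
        System.arraycopy(buf, 0, result, 0, n);
        return result;
    }
}

Large native allocations like these are typically served by glibc from anonymous mmap regions, which is exactly what the unexplained [ anon ] entries in the question's pmap output look like.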