Over the past year I've made huge improvements in my application's Java heap usage: a solid 66% reduction. In pursuit of that, I've been monitoring various metrics, such as Java heap size, CPU, Java non-heap, etc. via SNMP.
Recently, I've been monitoring how much real memory (RSS, resident set) is used by the JVM, and I am somewhat surprised. The real memory consumed by the JVM seems totally independent of my application's heap size, non-heap, eden space, thread count, etc.
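For reference, the heap/non-heap counters graphed over SNMP are essentially the same ones the java.lang.management API exposes in-process. Here is a minimal sketch of reading them, assuming Java 5+ (the class name is just for illustration):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapStats {
        public static void main(String[] args) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();

            // Used/committed sizes of the Java heap (the region bounded by -Xms/-Xmx)
            MemoryUsage heap = mem.getHeapMemoryUsage();
            // Non-heap: permgen, code cache, etc.
            MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();

            System.out.printf("heap used=%d committed=%d max=%d%n",
                    heap.getUsed(), heap.getCommitted(), heap.getMax());
            System.out.printf("non-heap used=%d committed=%d%n",
                    nonHeap.getUsed(), nonHeap.getCommitted());
        }
    }

Note that none of these numbers include memory the process uses outside the JVM's managed regions, which is exactly the gap between these graphs and RSS.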
Heap size as measured by Java SNMP (Java heap used graph): http://lanai.dietpizza.ch/images/jvm-heap-used.png
Real memory in KB, so 1 million KB = 1 GB (Java RSS graph): http://lanai.dietpizza.ch/images/jvm-rss.png
(The three dips in the heap graph correspond to application updates/restarts.)
This is a problem for me because all that extra memory the JVM is consuming is 'stealing' memory that could be used by the OS for file caching. In fact, once the RSS value reaches ~2.5-3 GB, I start to see slower response times and higher CPU utilization from my application, mostly due to IO wait. At some point, paging to the swap partition kicks in. This is all very undesirable.
So, my questions:
The gory details:
Relevant JVM parameters:
-Xms128m
-Xmx640m
-XX:+UseConcMarkSweepGC
-XX:+AlwaysActAsServerClassMachine
-XX:+CMSIncrementalMode
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+PrintGCApplicationStoppedTime
-XX:+CMSLoopWarn
-XX:+HeapDumpOnOutOfMemoryError
How I measure RSS:
ps x -o command,rss | grep java | grep latest | cut -b 17-
This goes into a text file and is read into an RRD database by the monitoring system at regular intervals. Note that ps outputs kilobytes.
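A possible alternative on Linux (a sketch only, not what the monitoring currently does): have the JVM read its own RSS from /proc/self/status instead of shelling out to ps. VmRSS is reported in kB, matching the ps output above.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class Rss {
        /** Returns this process's resident set size in kB, or -1 if unavailable. */
        static long rssKb() throws IOException {
            BufferedReader in = new BufferedReader(new FileReader("/proc/self/status"));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    // The line looks like: "VmRSS:     123456 kB"
                    if (line.startsWith("VmRSS:")) {
                        String[] parts = line.trim().split("\\s+");
                        return Long.parseLong(parts[1]);
                    }
                }
            } finally {
                in.close();
            }
            return -1;
        }

        public static void main(String[] args) throws IOException {
            System.out.println("RSS (kB): " + rssKb());
        }
    }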
While in the end it was ATorras's answer that proved ultimately correct, it was kdgregory who guided me down the correct diagnostic path with the use of pmap. (Go vote up both their answers!) Here is what was happening:
Things I know for sure:

- My application stores and updates its data in JRobin database files.
- In its default configuration, JRobin uses a java.nio-based file access back-end. This back-end maps MappedByteBuffers to the files themselves (sketched below).
- A JRobin daemon thread periodically calls MappedByteBuffer.force() on every JRobin underlying database MBB.
- pmap listed a very large number of mappings, the bulk of them the memory-mapped JRobin database files.

That last point was my "Eureka!" moment.
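To make the mechanism concrete, here is a minimal sketch (not JRobin's actual code; file names and counts are made up) of what an NIO-mapped file back-end does: each file is mapped with FileChannel.map(), the mapping lives outside the Java heap, and touching or force()-ing the pages grows RSS without moving any of the heap numbers above.

    import java.io.File;
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.util.ArrayList;
    import java.util.List;

    public class MappedBackendSketch {
        public static void main(String[] args) throws Exception {
            List<MappedByteBuffer> buffers = new ArrayList<MappedByteBuffer>();

            // Map a pile of ~1 MB files, the way an NIO file back-end would.
            for (int i = 0; i < 100; i++) {
                File f = new File("db-" + i + ".tmp");          // hypothetical file names
                RandomAccessFile raf = new RandomAccessFile(f, "rw");
                raf.setLength(1024 * 1024);
                MappedByteBuffer mbb = raf.getChannel()
                        .map(FileChannel.MapMode.READ_WRITE, 0, raf.length());
                buffers.add(mbb);
                raf.close(); // the mapping stays valid after the channel is closed
            }

            // Touch every page: the pages become resident (RSS grows),
            // but none of this shows up in the Java heap graphs.
            for (MappedByteBuffer mbb : buffers) {
                for (int pos = 0; pos < mbb.capacity(); pos += 4096) {
                    mbb.put(pos, (byte) 1);
                }
            }

            // What a periodic flush does: write dirty pages back to disk.
            for (MappedByteBuffer mbb : buffers) {
                mbb.force();
            }

            Thread.sleep(60000); // leave time to inspect the mappings with pmap <pid>
        }
    }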
My corrective actions:

- Tie MappedByteBuffer.force() to database update events, and not a periodic timer (see the sketch after this list). Will the problem magically go away?

Java RSS memory used graph: http://lanai.dietpizza.ch/images/stackoverflow-rss-problem-fixed.png
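The "force on update" idea would look roughly like this. This is a hypothetical wrapper, not JRobin's API; the point is only that force() runs when a write has actually happened, instead of a daemon thread sweeping every mapped file on a timer.

    import java.nio.MappedByteBuffer;

    /**
     * Hypothetical wrapper: flush a mapped database file only when it has
     * actually been written, rather than from a periodic timer thread.
     */
    public class FlushOnUpdate {
        private final MappedByteBuffer mbb;
        private boolean dirty = false;

        public FlushOnUpdate(MappedByteBuffer mbb) {
            this.mbb = mbb;
        }

        /** Called by the database update path. */
        public synchronized void write(int offset, byte value) {
            mbb.put(offset, value);
            dirty = true;
        }

        /** Called at the end of an update event; a no-op if nothing changed. */
        public synchronized void flushIfDirty() {
            if (dirty) {
                mbb.force();
                dirty = false;
            }
        }
    }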
Questions that I may or may not have time to figure out:

- What happens inside the JVM on a call to MappedByteBuffer.force()? If nothing has changed, does it still write the entire file? Part of the file? Does it load it first?
- If I tie MappedByteBuffer.force() to database update events, and not a periodic timer, will the problem magically go away?

ATorras's answer:

Just an idea: NIO buffers are placed outside the JVM.
EDIT: As of 2016, it is worth considering @Lari Hotari's comment [ Why does the Sun JVM continue to consume ever more RSS memory even when the heap, etc sizes are stable? ], because back in 2009, RHEL4 had glibc < 2.10 (~2.3).
Regards.