Java process memory usage keeps increasing infinitely

Preconditions:

  • PC with 16 GB of RAM
  • JDK 1.8.x installed on Ubuntu 16.10 x64.
  • a standard Spring-based web application deployed on Tomcat 8.5.x. Tomcat is configured with the following parameters: CATALINA_OPTS="$CATALINA_OPTS -Xms128m -Xmx512m -XX:NewSize=64m -XX:MaxNewSize=128m -Xss512k -XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -XX:MaxMetaspaceSize=512m -XX:-TieredCompilation -XX:ReservedCodeCacheSize=512m"
  • JMeter 2.13 for running load tests
  • JProfiler 9.x for tracking Java heap memory usage
  • the top utility for tracking java process memory usage
  • When I run the load tests sequentially 3 times, I observe (using top ) that the java process keeps increasing the amount of memory it uses:

  • after Tomcat starts, it uses ~1 GB
  • after the first test run it uses 4.5 GB
  • when all tests are finished, Tomcat is using 7 GB of RAM
  • All this time the heap size is limited, and JProfiler confirms that: the heap size does not exceed 512 MB.

    This is a screenshot of JProfiler. The red numbers at the bottom show the memory used by the java process (according to top ). [JProfiler screenshot]

    The question is: why does the java process keep increasing its memory usage the whole time it is running?

    Thanks!

    UPD#1: About the possible duplicate: there it was confirmed that the issue only happens on Solaris, but I use Ubuntu 16.10. Also, that question does not have an answer that would explain the cause of the problem.

    UPD#2: I had to return to this issue after a pause. Now I use the pmap utility to make a dump of the memory used by the java process. I have three dumps: before running the tests, after the first test execution, and after some N test executions. The tests produce a lot of traffic to the application. All dumps are here: https://gist.github.com/proshin-roman/752cea2dc25cde64b30514ed9ed9bbd0. They are quite large, but the most interesting part is the 8th line with the size of [heap]: it takes 282,272 KB before the tests and 3,036,400 KB at the end - more than a 10x difference! And it grows each time I run the tests. At the same time, the heap size is constant (according to JProfiler/VisualVM). What options do I have to find the cause of this problem? Debug the JVM? I've tried to find a way to "look" at this segment of memory but failed. So:

  • can I somehow identify the content of the [heap] segment of memory?
  • does such behavior of the java process look expected?
  • I would appreciate any tips about this problem. Thanks all!

    UPD #3: using jemalloc (thanks @ivan for the idea) I got the following picture: [jemalloc profiling graph]

    And it looks like I have almost the same problem as described here: http://www.evanjones.ca/java-native-leak-bug.html

    UPD #4: for now I have found that the issue is related to java.util.zip.Inflater/Deflater, and these classes are used in many places in my application. But the largest impact on memory consumption comes from interaction with a remote SOAP service. My application uses the reference implementation of the JAX-WS standard, and under load it gave the following memory consumption (the measurement has low precision after 10 GB): [graph: memory consumption with the reference implementation]. Then I ran the same load tests with the Apache CXF implementation, and it gave the following result: [graph: memory consumption with Apache CXF]. So you can see that CXF uses less memory and is more stable (it does not keep growing the whole time as the reference implementation does). Finally, I found an issue in the JDK issue tracker - https://bugs.openjdk.java.net/browse/JDK-8074108 - it is again about memory leaks in the zip library, and the issue is not closed yet. So it looks like I cannot really fix the memory leak in my app; I can only apply some workaround.
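
    As a workaround on the application side, the native zlib memory can be released eagerly instead of waiting for finalize(). This is a minimal sketch, assuming the Inflater instances are created in code you control (with the JAX-WS stack they are created internally, so this only applies where you own the lifecycle); the method name and buffer handling are illustrative:

    import java.util.Arrays;
    import java.util.zip.DataFormatException;
    import java.util.zip.Inflater;

    public class EagerInflate {

        // Decompresses a buffer and releases the native zlib state immediately
        // via end(), instead of leaving the cleanup to finalize() after some
        // future GC - which is exactly what lets RSS grow under load.
        static byte[] inflate(byte[] compressed, int expectedSize) throws DataFormatException {
            Inflater inflater = new Inflater();
            try {
                inflater.setInput(compressed);
                byte[] out = new byte[expectedSize];
                int n = inflater.inflate(out);
                return Arrays.copyOf(out, n);
            } finally {
                inflater.end(); // frees the native buffers right away
            }
        }
    }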

    Thanks all for your help!


    My hypothesis is that you collect allocation info / call stacks / etc in JProfiler and the RSS growth you observe is related to JProfiler keeping that data in memory.

    You can verify whether that's true by collecting less info (there should be a screen at the start of profiling that lets you, e.g., not collect object allocations) and checking whether you observe smaller RSS growth as a result. Running your load test without JProfiler attached is also an option.

    I had a similar case in the past.


    Can you rerun your tests with the option -XX:MaxDirectMemorySize=1024m? The exact value of this limit does not matter, but it helps expose possible "leaks".

    Can you also provide GC details (-XX:+PrintGC)?

    java.nio.ByteBuffer is a possible cause of them due to its specific finalization.
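
    To make the direct-buffer point concrete, here is a minimal sketch (the allocation count and size are arbitrary) of how direct ByteBuffers grow RSS while the heap stays small: the backing memory lives outside the Java heap and is only returned once the small wrapper objects are collected, and -XX:MaxDirectMemorySize at least turns unbounded growth into a fast, visible failure:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class DirectBufferGrowth {
        public static void main(String[] args) {
            List<ByteBuffer> buffers = new ArrayList<>();
            // Each call reserves 1 MB of native (off-heap) memory; the heap
            // only holds tiny ByteBuffer wrapper objects, so heap-oriented
            // tools (JProfiler/VisualVM) show almost nothing.
            for (int i = 0; i < 1024; i++) {
                buffers.add(ByteBuffer.allocateDirect(1024 * 1024));
            }
            // RSS grows by roughly 1 GB even though -Xmx is never approached.
            // The native memory is reclaimed only after the wrappers become
            // unreachable and are processed by the GC; with -XX:MaxDirectMemorySize
            // set, exceeding the limit fails fast with
            // "OutOfMemoryError: Direct buffer memory" instead of silently growing RSS.
            buffers.clear();
        }
    }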

    UPDATE #1

    I have seen similar behavior for two other reasons: sun.misc.Unsafe (unlikely) and heavily loaded JNI calls.

    It is hard to tell without a profile of the test.

    UPDATE #2

    Both heavily loaded JNI calls and the finalize() method can cause the described problem, since objects do not have enough time to be finalized.

    This fragment of java.util.zip.Inflater illustrates it:

    /**
     * Closes the decompressor when garbage is collected.
     */
    protected void finalize() {
        end();
    }
    
    /**
     * Closes the decompressor and discards any unprocessed input.
     * This method should be called when the decompressor is no longer
     * being used, but will also be called automatically by the finalize()
     * method. Once this method is called, the behavior of the Inflater
     * object is undefined.
     */
    public void end() {
        synchronized (zsRef) {
            long addr = zsRef.address();
            zsRef.clear();
            if (addr != 0) {
                end(addr);
                buf = null;
            }
        }
    }
    
    private native static void end(long addr);
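
    For completeness, the stream wrappers around Inflater release this native memory on close(): GZIPInputStream (via InflaterInputStream) calls end() on its internal inflater when the stream is closed. Below is a minimal sketch of reading gzip data with try-with-resources, so the native zlib buffers never have to wait for finalize():

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.zip.GZIPInputStream;

    public class CloseStreamsPromptly {

        // Counts the uncompressed bytes of a gzip payload. Closing the stream
        // (here via try-with-resources) makes GZIPInputStream end() its internal
        // Inflater, so the native zlib buffers are freed immediately rather than
        // during some later GC/finalization cycle.
        static long countBytes(byte[] gzipped) throws IOException {
            try (InputStream in = new GZIPInputStream(new ByteArrayInputStream(gzipped))) {
                byte[] buf = new byte[8192];
                long total = 0;
                int n;
                while ((n = in.read(buf)) != -1) {
                    total += n;
                }
                return total;
            }
        }
    }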
    

    Going by Occam's razor: could it not be that you have a memory leak somewhere (i.e., "unintentional object retention" à la Effective Java, Item 6)?
