Java and Docker, the limitations

TLDR;

Java and Docker aren’t friends out of the box. Docker can set memory and CPU limits that Java can’t automatically detect. Using either Java -Xmx flags (cumbersome/duplicated) or the new experimental JVM flags, we can solve this issue.

Docker love for Java is on its way in newer versions of both OpenJ9 and OpenJDK 10!

Mismatch in virtualization

The combination of Java and Docker isn’t a match made in heaven; initially it was far from it. For starters, the whole premise of the JVM, the Java Virtual Machine, is that a virtual machine makes the underlying hardware irrelevant from the program’s point of view.

So what do we gain by packaging our Java application inside a JVM (a virtual machine) inside a Docker container? Not a lot: for the most part you are duplicating JVMs and Linux containers, which hurts your memory usage. This just sounds silly.

It does make it easy to bundle together your program, the settings, a specific JDK, Linux settings and (if needed) an application server and other tools as one ‘thing’. This complete container has a better level of encapsulation from a devops/cloud point of view.

Problem 1: Memory

Most applications in production today are still using Java 8 (or older), and this might give you problems. Java 8 (before update 131) doesn’t play nice with Docker. The problem is that the amount of memory and the number of CPUs available to the JVM isn’t the total amount of memory and CPU of your machine; it is whatever Docker allows you to use (duh).

For example, if you limit your Docker container to only 100MB of memory, this isn’t something ‘old’ Java is aware of. Java doesn’t see this limit. The JVM will claim more and more memory and go over the limit. Docker will then take matters into its own hands and kill the process inside the container if too much memory is used! The Java process is ‘Killed’. This is not what we want…

To fix this you will also need to tell Java that there is a maximum memory limit. In older Java versions (before 8u131) you needed to do this inside your container by setting -Xmx flags to limit the heap size. This feels wrong: you’d rather not define these limits twice, nor define them ‘inside’ your container.

Luckily there are better ways to fix this now. From Java 9 onwards (and backported from 8u131 onwards), there are flags added to the JVM:

-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap

These flags force the JVM to look at the Linux cgroup configuration, which is where Docker containers specify their maximum memory settings. Now, if your application reaches the limit set by Docker (for example 500MB), the JVM sees this limit. It’ll try to free memory by garbage collecting. If it still runs out of memory, the JVM will do what it is supposed to do and throw an OutOfMemoryError. Basically this allows the JVM to ‘see’ the limit that has been set by Docker.
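
If you want to verify what maximum heap size the JVM actually picked up, a minimal check like the one below can help (the class name HeapCheck is just my illustrative choice, not part of the tests later in this post); run it inside the container with and without the flags and compare the output.

public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the maximum amount of memory the JVM will attempt to use for the heap
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}

Without the flags this value is derived from the host’s physical memory (by default roughly a quarter of it); with the flags it should be derived from the cgroup limit instead.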

From Java 10 onwards (see the test below) this container awareness is the new default, controlled by the -XX:+UseContainerSupport flag (you can disable the behaviour by providing -XX:-UseContainerSupport).

Problem 2: CPU

The second problem is similar, but it has to do with the CPU. In short, the JVM looks at the hardware and detects the number of CPUs. It optimizes your runtime to use those CPUs. But again, Docker might not allow you to use all of those CPUs; there is another mismatch here. Sadly this isn’t fixed in Java 8 or Java 9, but it was tackled in Java 10.

From Java 10 onwards the available CPUs are calculated in a different way (by default), fixing this problem (this is also part of UseContainerSupport).
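
A quick way to see what the JVM detects is a sketch like this (CpuCheck is, again, just an illustrative name of mine):

public class CpuCheck {
    public static void main(String[] args) {
        // availableProcessors() is the count the JVM uses to size things like GC threads and common thread pools
        System.out.println("available processors: " + Runtime.getRuntime().availableProcessors());
    }
}

Run it in a container started with a CPU limit (for example docker run --cpus=1): on Java 8/9 it will typically still report all of the host’s CPUs, while Java 10 with UseContainerSupport reports the limited count.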

Testing Java and Docker memory handling

As a fun exercise, let’s verify and test how Docker handles out-of-memory situations, using a couple of different JVM versions/flags and even a different JVM.

First we create a test application, one that simply ‘eats’ memory and doesn’t free it.

import java.util.ArrayList;
import java.util.List;

public class MemEat {
    public static void main(String[] args) {
        List<byte[]> list = new ArrayList<>(); // hold on to every allocation so nothing can be garbage collected
        while (true) {
            byte[] b = new byte[1048576]; // allocate 1MB
            list.add(b);
            Runtime rt = Runtime.getRuntime();
            System.out.println("free memory: " + rt.freeMemory());
        }
    }
}

We can start Docker containers and run this application to see what will happen.

Test 1: Java 8u111

First we’ll start with a container that has an older version of Java 8 (update 111).

docker run -m 100m -it java:openjdk-8u111 /bin/bash

We compile and run the MemEat.java file:

javac MemEat.java

java MemEat
...
free memory: 67194416
free memory: 66145824
free memory: 65097232
Killed

As expected, Docker has killed our Java process. Not what we want (!). Also, as you can see in the output, Java thinks it still has a lot of memory left to allocate.

We can fix this by providing Java with a maximum memory using the -Xmx flag:

javac MemEat.java

java -Xmx100m MemEat
...
free memory: 1155664
free memory: 1679936
free memory: 2204208
free memory: 1315752
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
	at MemEat.main(MemEat.java:8)

After providing our own memory limit, the process is halted correctly; the JVM understands the limits it is operating under. The problem, however, is that you are now setting these memory limits twice: once for Docker AND once for the JVM.

Test 2: Java 8u144

As mentioned, the new flags fix this: the JVM will now follow the settings provided by Docker. We can test this using a newer JVM.

docker run -m 100m -it adoptopenjdk/openjdk8 /bin/bash

(at the time of writing, this OpenJDK image contains Java 8u144)

Next we compile and run the MemEat.java file again without any flags:

javac MemEat.java

java MemEat
...
free memory: 67194416
free memory: 66145824
free memory: 65097232
Killed

The same problem exists. But we can now supply the experimental flags mentioned above:

javac MemEat.java
java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap MemEat
...
free memory: 1679936
free memory: 2204208
free memory: 1155616
free memory: 1155600
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
	at MemEat.main(MemEat.java:8)

This time we didn’t set any limits ourselves by telling the JVM what they are; we just told the JVM to look at the correct settings! Much better.

Test 3: Java 10+23

Some people in the comments and on Reddit mentioned that Java 10 solves all of this by making the experimental flags the new default. This behaviour can be turned off by disabling the flag: -XX:-UseContainerSupport.

When I tested this it initially didn’t work. At the time of writing the AdoptOpenJDK OpenJDK 10 image is packaged with jdk-10+23. This JVM apparently doesn’t understand the ‘UseContainerSupport’ flag (yet) and the process was still killed by Docker.

docker run -m 100m -it adoptopenjdk/openjdk10 /bin/bash

Testing the code (and even providing the flag manually):

javac MemEat.java

java MemEat
...
free memory: 96262112
free memory: 94164960
free memory: 92067808
free memory: 89970656
Killed

java -XX:+UseContainerSupport MemEat

Unrecognized VM option 'UseContainerSupport'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

Test 4: Java 10+46 (Nightly)

I decided to try the latest ‘nightly’ build of AdoptOpenJDK OpenJDK 10. Instead of jdk-10+23 it includes jdk-10+46.

docker run -m 100m -it adoptopenjdk/openjdk10:nightly /bin/bash

There is a problem in this nightly build though: the exported PATH points to the old jdk-10+23 directory, not to jdk-10+46, so we need to fix this.

export PATH=$PATH:/opt/java/openjdk/jdk-10+46/bin/

javac MemEat.java

java MemEat
...
free memory: 3566824
free memory: 2796008
free memory: 1480320
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
  at MemEat.main(MemEat.java:8)

Success! Without providing any flags, Java 10 correctly detected Docker’s memory limit.

Test 5: OpenJ9

I’ve also been experimenting with OpenJ9 recently. This free alternative JVM has been open-sourced from IBM’s J9 and is now maintained by Eclipse.

Read more about OpenJ9 in my next blogpost.

It is fast and very good at memory management, mind-blowingly good, often using up to 30-50% less memory for our microservices. This almost makes it possible to classify Spring Boot apps as ‘micro’, with a 100-200MB runtime instead of 300MB+. I’m planning to do a write-up about this very soon.

To my surprise, however, OpenJ9 doesn’t yet have an option similar to the cgroup memory flags currently in (and backported to) Java 8/9/10+. For example, if we apply the previous test case to the latest AdoptOpenJDK OpenJDK 9 + OpenJ9 build:

docker run -m 100m -it adoptopenjdk/openjdk9-openj9 /bin/bash

If we add the OpenJDK flags (which are ignored by OpenJ9), we get:

java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap MemEat
...
free memory: 83988984
free memory: 82940400
free memory: 81891816
Killed

Oops, the JVM is killed by Docker again.

I really hope a similar option will be added to OpenJ9 soon, because I’d love to run this in production without having to specify the maximum memory twice. Eclipse/IBM is working on a fix; there are already issues and even pull requests for this.

UPDATE: (not recommended hack)

A slightly ugly/hacky way to fix this is to use the following composed flag:

java -Xmx`cat /sys/fs/cgroup/memory/memory.limit_in_bytes` MemEat
...
free memory: 3171536
free memory: 2127048
free memory: 2397632
free memory: 1344952
JVMDUMP039I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" at 2018/05/15 14:04:26 - please wait.
JVMDUMP032I JVM requested System dump using '//core.20180515.140426.125.0001.dmp' in response to an event
JVMDUMP010I System dump written to //core.20180515.140426.125.0001.dmp
JVMDUMP032I JVM requested Heap dump using '//heapdump.20180515.140426.125.0002.phd' in response to an event
JVMDUMP010I Heap dump written to //heapdump.20180515.140426.125.0002.phd
JVMDUMP032I JVM requested Java dump using '//javacore.20180515.140426.125.0003.txt' in response to an event
JVMDUMP010I Java dump written to //javacore.20180515.140426.125.0003.txt
JVMDUMP032I JVM requested Snap dump using '//Snap.20180515.140426.125.0004.trc' in response to an event
JVMDUMP010I Snap dump written to //Snap.20180515.140426.125.0004.trc
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
  at MemEat.main(MemEat.java:8)

In this case the heap size is limited to the total memory allocated to the Docker container; this works for older JVMs and OpenJ9. It is of course wrong, because the container itself and the off-heap parts of the JVM also use memory. But it seems to work; apparently Docker is lenient in this case. Maybe some bash guru will make a better version that subtracts a portion of the bytes for those other needs.

Anyway, don’t do this; it might not work.

Test 6: OpenJ9 (Nightly)

Someone suggested using the latest ‘nightly’ build for OpenJ9.

docker run -m 100m -it adoptopenjdk/openjdk9-openj9:nightly /bin/bash

This will get the latest nightly build of OpenJ9. There are two things to note about it:

  1. Another broken PATH parameter; fix that.
  2. The JVM has support for the new UseContainerSupport flag (as Java 10 will).

export PATH=$PATH:/opt/java/openjdk/jdk-9.0.4+12/bin/

javac MemEat.java

java -XX:+UseContainerSupport MemEat
...
free memory: 5864464
free memory: 4815880
free memory: 3443712
free memory: 2391032
JVMDUMP039I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" at 2018/05/15 21:32:07 - please wait.
JVMDUMP032I JVM requested System dump using '//core.20180515.213207.62.0001.dmp' in response to an event
JVMDUMP010I System dump written to //core.20180515.213207.62.0001.dmp
JVMDUMP032I JVM requested Heap dump using '//heapdump.20180515.213207.62.0002.phd' in response to an event
JVMDUMP010I Heap dump written to //heapdump.20180515.213207.62.0002.phd
JVMDUMP032I JVM requested Java dump using '//javacore.20180515.213207.62.0003.txt' in response to an event
JVMDUMP010I Java dump written to //javacore.20180515.213207.62.0003.txt
JVMDUMP032I JVM requested Snap dump using '//Snap.20180515.213207.62.0004.trc' in response to an event
JVMDUMP010I Snap dump written to //Snap.20180515.213207.62.0004.trc
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

TADAAA, a fix is on the way!

Oddly, it seems this flag isn’t enabled by default in OpenJ9 as it is in Java 10. Again: make sure you test this if you want to run Java inside a Docker container.

Conclusion

IN SHORT: Be aware of the mismatch and the limitations. Test your memory settings and JVM flags; don’t assume anything.

If you are running Java inside a Docker container, make sure that you have Docker memory limits AND either limits in the JVM or a JVM that understands these limits.

If you’re not able to upgrade your Java version, set your own limits using -Xmx.

For Java 8 and Java 9, update to the latest version and use:

-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap

For Java 10, make sure the JVM understands the ‘UseContainerSupport’ flag (update to the latest version) and just run it.

For OpenJ9 (which I highly recommend for bringing down your memory footprint in production), set the limits using -Xmx for now, but soon there will be a version that understands the ‘UseContainerSupport’ flag.