Kubernetes (minikube) pod OOMKilled with apparently plenty of memory left in node

DMB3 · Jul 23, 2017 · Viewed 25.2k times

I'm using minikube, starting it with

minikube start --memory 8192

which gives the node 8 GB of RAM. I'm allocating pods with the following resource constraints:

    resources:
      limits:
        memory: 256Mi
      requests:
        memory: 256Mi

That's 256 MiB of RAM for each pod, which I would assume allows 32 pods before the 8 GB memory limit is reached. The problem is that once the 8th pod is deployed, the 9th never runs because it is constantly OOMKilled.

For context, each pod runs a Java application in a frolvlad/alpine-oraclejdk8:slim Docker container started with -Xmx512m -Xms128m (even if the JVM were actually using the full 512 MiB instead of 256 MiB, I would still be well under the 16-pod count implied by the 8 GB cap).
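For reference, this is roughly how the failure shows up when I describe one of the failing pods (the pod name here is just illustrative and the output is trimmed):

    kubectl describe pod myapp-9
    ...
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137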

What am I missing here? Why are pods being OOMKilled with apparently so much free allocatable memory left?

Thanks in advance

Answer

Radek 'Goblin' Pieczonka · Jul 24, 2017

You must understand the way requests and limits work.

Requests specify how much allocatable capacity a node must have free for a pod to be scheduled onto it. They will not cause OOM kills; they will only prevent the pod from being scheduled.
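To see how much of the node's allocatable memory is already spoken for by requests, you can describe the node (on minikube the single node is named minikube by default) and look at the Allocatable and Allocated resources sections:

    kubectl describe node minikube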

Limits, on the other hand, are hard caps for a given pod: its memory usage cannot exceed that level. So even if the node has 16 GB of RAM free, a pod with a 256 MiB limit will experience an OOM kill as soon as it reaches that limit.
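In your case the JVM is allowed to grow its heap to 512 MiB (-Xmx512m), which is already above the 256 MiB limit, so the kill is expected once the container actually uses that much. A minimal sketch of a resource block that leaves room for the heap plus some non-heap JVM overhead (the 768Mi figure is an assumption, tune it to your workload):

    resources:
      requests:
        memory: 256Mi
      limits:
        memory: 768Mi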

If you want, you can define only requests. Your pods will then be able to grow to the full node capacity without being capped.
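A requests-only block might look like this (a sketch; with no limit set, the container can use whatever memory the node has free):

    resources:
      requests:
        memory: 256Mi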

https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/