Kubernetes: understanding memory usage for "kubectl top node"

Kirill Kireyev · Jul 11, 2017 · Viewed 21.6k times

How do I interpret the memory usage returned by "kubectl top node"? For example, if it returns:

    NAME                   CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
    ip-XXX.ec2.internal    222m         11%       3237Mi          41%
    ip-YYY.ec2.internal    91m          9%        2217Mi          60%

By comparison, if I look in the Kubernetes dashboard for the same node, I get: Memory Requests: 410M / 7.799 Gi


[Screenshot: Kubernetes dashboard view for the node]


How do I reconcile the difference?

Answer

Ken Chen · Jul 20, 2017

kubectl top node reflects the actual resource usage of the VMs (nodes), while the Kubernetes dashboard shows the requests/limits you have configured as a percentage of the node's capacity.

For example, your EC2 instance has 8 GB of memory and the node is actually using 3237 MiB, which is about 41%. In Kubernetes you have only requested 410 MB (5.13%) and set a memory limit of 470 MB. That 5.13% is not the amount of memory you actually consume; it is only the amount you have configured.
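As a rough sketch of where each number comes from (the deployment name below is hypothetical and the node name is a placeholder, neither is taken from the question): requests/limits are configured on the workload, actual usage is what the metrics pipeline reports, and a per-pod breakdown like the table below is part of the node description.

    # Configure the request/limit on a workload (hypothetical deployment name):
    kubectl set resources deployment/web --requests=memory=410M --limits=memory=470M

    # Actual usage per node, which is what "kubectl top node" reports:
    kubectl top node

    # Per-pod requests/limits scheduled onto a node, which is what the dashboard
    # reflects (node name is a placeholder):
    kubectl describe node ip-XXX.ec2.internal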

    Namespace     Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits
    ---------     ----                              ------------  ----------  ---------------  -------------
    default       kube-lego                         20m (2%)      0 (0%)      0 (0%)           0 (0%)
    default       mongo-0                           100m (10%)    0 (0%)      0 (0%)           0 (0%)
    default       web                               100m (10%)    0 (0%)      0 (0%)           0 (0%)
    kube-system   event-exporter-                   0 (0%)        0 (0%)      0 (0%)           0 (0%)
    kube-system   fluentd-gcp-v2.0-z6xh9            100m (10%)    0 (0%)      200Mi (11%)      300Mi (17%)
    kube-system   heapster-v1.4.0-3405140848-k6cm9  138m (13%)    138m (13%)  301456Ki (17%)   301456Ki (17%)
    kube-system   kube-dns-3809445927-hn5xk         260m (26%)    0 (0%)      110Mi (6%)       170Mi (9%)
    kube-system   kube-dns-autoscaler-38801         20m (2%)      0 (0%)      10Mi (0%)        0 (0%)
    kube-system   kube-proxy-gke-staging-default-   100m (10%)    0 (0%)      0 (0%)           0 (0%)
    kube-system   kubernetes-dashboard-1962351      100m (10%)    100m (10%)  100Mi (5%)       300Mi (17%)
    kube-system   l7-default-backend-295440977      10m (1%)      10m (1%)    20Mi (1%)        20Mi (1%)

Here you can see that many pods have 0 for their requests/limits, which means unlimited. These pods are not counted in the Kubernetes dashboard figures, but they definitely consume memory.
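To see what those unconstrained pods are actually using, you can check the live per-pod numbers (this assumes the metrics pipeline, e.g. Heapster or metrics-server, is running in the cluster):

    # Actual memory usage per pod, including pods with no requests/limits set:
    kubectl top pod --all-namespaces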

Sum up the memory requests/limits and you will find that they match the numbers shown in the Kubernetes dashboard.
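You don't have to add them up by hand: the node description also prints the summed totals in its "Allocated resources" section (node name is a placeholder):

    # Summed CPU/memory requests and limits for the node:
    kubectl describe node ip-XXX.ec2.internal | grep -A 6 "Allocated resources"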