kubectl get nodes shows NotReady

Sandeep Nag · Oct 29, 2018 · Viewed 24.1k times

I have installed a two-node Kubernetes 1.12.1 cluster on cloud VMs, both behind an internet proxy. Each VM has a floating IP attached for SSH access; kube-01 is the master and kube-02 is a worker node. I exported:

no_proxy=127.0.0.1,localhost,10.157.255.185,192.168.0.153,kube-02,192.168.0.25,kube-01

before running kubeadm init, but kubectl get nodes reports the following status:

NAME      STATUS     ROLES    AGE   VERSION
kube-01   NotReady   master   89m   v1.12.1
kube-02   NotReady   <none>   29s   v1.12.2

Am I missing any configuration? Do I need to add 192.168.0.153 and 192.168.0.25 to the respective VMs' /etc/hosts files?
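If DNS does not already resolve the node hostnames, entries along these lines on both VMs would be a typical sketch. Note the IP-to-hostname pairing below is only assumed from the ordering in the no_proxy list and should be verified against the actual VMs:

```
# /etc/hosts sketch -- pairing assumed from the no_proxy ordering, verify on your VMs
192.168.0.153   kube-02
192.168.0.25    kube-01
```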

Answer

Shashank Pai · Oct 29, 2018

It looks like a pod network add-on has not been installed on your cluster yet; kubelet keeps nodes NotReady until a CNI network is configured. You can install Weave Net, for example, with the command below:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
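The command substitution in that URL may look odd: it base64-encodes the local kubectl version report so the Weave endpoint can serve a manifest matched to that Kubernetes version. A minimal sketch of just the encoding step, using a stand-in version string:

```shell
# Encode a stand-in version string the same way the command substitution does:
# base64 the text, then strip newlines so it is safe as a URL query value.
printf 'Client Version: v1.12.1' | base64 | tr -d '\n'
# → Q2xpZW50IFZlcnNpb246IHYxLjEyLjE=
```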

After a few seconds, a Weave Net pod should be running on each Node and any further pods you create will be automatically attached to the Weave network.

You can install a pod network of your choice; the Kubernetes documentation lists the available add-ons.

After that, check the nodes:

$ kubectl describe nodes

and verify that the conditions look like the following:

Conditions:
  Type              Status
  ----              ------
  OutOfDisk         False
  MemoryPressure    False
  DiskPressure      False
  Ready             True
Capacity:
 cpu:       2
 memory:    2052588Ki
 pods:      110
Allocatable:
 cpu:       2
 memory:    1950188Ki
 pods:      110
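As a quicker check than reading the full describe output, the Ready condition of every node can be pulled out with a jsonpath query. This is a sketch and assumes a working kubeconfig on the master:

```
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
```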

Next, SSH to the node that is NotReady and inspect the kubelet logs. The most likely errors relate to certificates and authentication.

You can also use journalctl on systemd-based systems to check for kubelet errors.

$ journalctl -u kubelet
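When the kubelet log is long, filtering it for common failure keywords narrows things down. A sketch, run on the affected node; the keyword list is only a suggestion:

```
journalctl -u kubelet --no-pager --since "1 hour ago" | grep -iE 'error|fail|x509|certificate|unauthorized'
```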