Kubernetes CoreDNS in CrashLoopBackOff

Hakon89 · Nov 30, 2018 · Viewed 9.1k times

I understand that this question has been asked dozens of times, but nothing I have found through internet searching has helped me.

My set up:

CentOS Linux release 7.5.1804 (Core)
Docker Version: 18.06.1-ce
Kubernetes: v1.12.3

Installed following the official guide and this one: https://www.techrepublic.com/article/how-to-install-a-kubernetes-cluster-on-centos-7/

CoreDNS pods are in Error/CrashLoopBackOff state.

kube-system   coredns-576cbf47c7-8phwt                 0/1     CrashLoopBackOff   8          31m
kube-system   coredns-576cbf47c7-rn2qc                 0/1     CrashLoopBackOff   8          31m

My /etc/resolv.conf:

nameserver 8.8.8.8

I also tried my local DNS resolver (router):

nameserver 10.10.10.1

Setup and init:

kubeadm init --apiserver-advertise-address=10.10.10.3 --pod-network-cidr=192.168.1.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

I tried to solve this by editing the CoreDNS ConfigMap:

kubectl edit cm coredns -n kube-system

and changing

proxy . /etc/resolv.conf

directly to

proxy . 10.10.10.1

or

proxy . 8.8.8.8
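
For context, that edit targets the upstream line in the Corefile stored in the ConfigMap. As a rough sketch, the stock kubeadm Corefile for this CoreDNS version looks something like the following (the surrounding plugins may differ slightly on your cluster), with the proxy line being the part changed above:

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       upstream
       fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . 8.8.8.8
    cache 30
    loop
    reload
    loadbalance
}

If the reload plugin is listed, CoreDNS picks up the ConfigMap change on its own after a short delay; otherwise delete the coredns pods so the Deployment recreates them with the new config.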

I also tried:

kubectl -n kube-system get deployment coredns -o yaml \
  | sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' \
  | kubectl apply -f -

And still nothing helped.

Error from the logs:

plugin/loop: Seen "HINFO IN 7847735572277573283.2952120668710018229." more than twice, loop detected
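
That message comes from CoreDNS's loop plugin: it sends a probe query upstream and, if the same query arrives back at CoreDNS, it aborts because the upstream is (directly or indirectly) pointing back at the cluster DNS. A quick way to check this, sketched under the assumption of default kubelet paths (the resolv-conf flag/field and file locations may differ on your node):

# Pull the full CoreDNS logs (pod name from the listing above)
kubectl -n kube-system logs coredns-576cbf47c7-8phwt

# See which resolv.conf kubelet actually hands to pods
ps aux | grep kubelet | grep -o 'resolv-conf=[^ ]*'
grep resolvConf /var/lib/kubelet/config.yaml

# That file must list a real upstream resolver, not the node itself (e.g. 127.0.0.1)
cat /etc/resolv.conf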

The other thread, "coredns pods have CrashLoopBackOff or Error state", didn't help at all, because none of the solutions described there applied to my case. Nothing helped.

Answer

Narendranath Reddy · Mar 26, 2019

I got the same error and managed to fix it with the steps below.

However, you missed 8.8.4.4; add it as a second nameserver:

sudo nano /etc/resolv.conf

nameserver 8.8.8.8
nameserver 8.8.4.4

Then run the following commands to reload the systemd daemon and restart the Docker service:

sudo systemctl daemon-reload

sudo systemctl restart docker

If you are using kubeadm, make sure you tear down the entire cluster from the master and provision it again:

kubectl drain <node_name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node_name>
kubeadm reset
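
For completeness, a minimal re-provisioning sketch reusing the commands from the question; note that the flannel manifest defaults to a 10.244.0.0/16 pod network, so either pass that CIDR to kubeadm (as below) or edit the manifest to match your own:

kubeadm init --apiserver-advertise-address=10.10.10.3 --pod-network-cidr=10.244.0.0/16

# Point kubectl at the new cluster
mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Reinstall the pod network add-on
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml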

Once you have provisioned the new cluster, run:

kubectl get pods --all-namespaces

It should give the expected result below:

NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   calico-node-gldlr          2/2     Running   0          24s
kube-system   coredns-86c58d9df4-lpnj6   1/1     Running   0          40s
kube-system   coredns-86c58d9df4-xnb5r   1/1     Running   0          40s
kube-system   kube-proxy-kkb7b           1/1     Running   0          40s
kube-system   kube-scheduler-osboxes     1/1     Running   0          10s
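
Once the coredns pods are Running, a quick sanity check that cluster DNS actually resolves (busybox:1.28 is deliberate; nslookup in newer busybox images is known to misbehave):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default

A successful lookup returns the cluster IP of the kubernetes service, typically 10.96.0.1 with kubeadm defaults.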