CoreDNS does not resolve service names correctly

user1208081 · Nov 16, 2018 · Viewed 8.2k times

I use Kubernetes v1.11.3, which uses CoreDNS to resolve host and service names, but I find that name resolution does not work correctly inside pods.

# kubectl get services --all-namespaces -o wide
NAMESPACE     NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE       SELECTOR
default       kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP          50d       <none>
kube-system   calico-etcd   ClusterIP   10.96.232.136   <none>        6666/TCP         50d       k8s-app=calico-etcd
kube-system   kube-dns      ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP    50d       k8s-app=kube-dns
kube-system   kubelet       ClusterIP   None            <none>        10250/TCP        32d       <none>
testalex      grafana       NodePort    10.96.51.173    <none>        3000:30002/TCP   2d        app=grafana
testalex      k8s-alert     NodePort    10.108.150.47   <none>        9093:30093/TCP   13m       app=alertmanager
testalex      prometheus    NodePort    10.96.182.108   <none>        9090:30090/TCP   16m       app=prometheus
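For reference, each Service gets the DNS name <service>.<namespace>.svc.cluster.local, so k8s-alert in testalex should be resolvable as k8s-alert.testalex.svc.cluster.local from any pod. A quick sanity check (assuming nslookup is available in the image, as it appears to be below):

kubectl exec -it k8s-monitor-7ddcb74b87-n6jsd -n testalex -- nslookup k8s-alert.testalex.svc.cluster.local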

The following command gets no response:

# kubectl exec -it k8s-monitor-7ddcb74b87-n6jsd -n testalex /bin/bash
[root@k8s-monitor-7ddcb74b87-n6jsd /]# ping k8s-alert
PING k8s-alert.testalex.svc.cluster.local (10.108.150.47) 56(84) bytes of data.

and the CoreDNS pod produces no log output:

# kubectl logs coredns-78fcdf6894-h78sd -n kube-system

I think something is wrong, but I cannot locate the problem. Another question: why are both CoreDNS pods on the master node? I would expect one on each node.

UPDATE

It seems CoreDNS works fine, but I do not understand why the ping command gets no reply.

[root@k8s-monitor-7ddcb74b87-n6jsd yum.repos.d]# nslookup kubernetes.default
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1

[root@k8s-monitor-7ddcb74b87-n6jsd yum.repos.d]# cat /etc/resolv.conf
nameserver 10.96.0.10
search testalex.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
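(Note: with options ndots:5, a name containing fewer than five dots, such as kubernetes.default, is tried against each search suffix first, which is why the lookup above came back as kubernetes.default.svc.cluster.local. The fully qualified form can also be queried directly; the trailing dot bypasses the search list:

nslookup kubernetes.default.svc.cluster.local.
)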

# kubectl get ep kube-dns --namespace=kube-system

NAME       ENDPOINTS                                                        AGE
kube-dns   192.168.121.3:53,192.168.121.4:53,192.168.121.3:53 + 1 more...   50d

Also, the DNS server's IP cannot be reached with ping:

# kubectl exec -it k8s-monitor-7ddcb74b87-n6jsd -n testalex /bin/bash
[root@k8s-monitor-7ddcb74b87-n6jsd /]# cat /etc/resolv.conf
nameserver 10.96.0.10
search testalex.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
[root@k8s-monitor-7ddcb74b87-n6jsd /]# ping 10.96.0.10
PING 10.96.0.10 (10.96.0.10) 56(84) bytes of data.
^C
--- 10.96.0.10 ping statistics ---
9 packets transmitted, 0 received, 100% packet loss, time 8000ms
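A failed ping does not by itself prove the DNS server is unreachable: 10.96.0.10 is a ClusterIP, a virtual IP implemented by kube-proxy rules, and such IPs generally do not answer ICMP echo requests. A probe against port 53 is the meaningful test; the successful nslookup above already shows UDP/53 works. For example (assuming dig from bind-utils is installed in the image):

dig @10.96.0.10 kubernetes.default.svc.cluster.local +short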

I think I may have misconfigured the network. This is my cluster init command:

 kubeadm init --kubernetes-version=v1.11.3  --apiserver-advertise-address=10.100.1.20 --pod-network-cidr=172.16.0.0/16 

and this is the Calico IP pool:

# kubectl exec -it calico-node-77m9l -n kube-system /bin/sh
Defaulting container name to calico-node.
Use 'kubectl describe pod/calico-node-77m9l -n kube-system' to see all of the containers in this pod.
/ # cd /tmp
/tmp # ls
calicoctl  tunl-ip
/tmp # ./calicoctl get ipPool
CIDR
192.168.0.0/16
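Note the mismatch visible here: kubeadm was initialized with --pod-network-cidr=172.16.0.0/16, but the Calico pool is 192.168.0.0/16 (Calico's default), and with Calico the two are expected to match. A sketch of the two ways to align them (exact manifest keys depend on the Calico version):

# Option 1: initialize the cluster with a pod CIDR matching Calico's default pool
kubeadm init --kubernetes-version=v1.11.3 \
  --apiserver-advertise-address=10.100.1.20 \
  --pod-network-cidr=192.168.0.0/16

# Option 2: set Calico's pool to the CIDR already given to kubeadm by changing
# the CALICO_IPV4POOL_CIDR environment variable in calico.yaml before applying it:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "172.16.0.0/16"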

Answer

Prafull Ladha · Nov 16, 2018

You can start by checking whether DNS is working.

Run nslookup on kubernetes.default from inside the pod k8s-monitor-7ddcb74b87-n6jsd and check whether it works:

[root@k8s-monitor-7ddcb74b87-n6jsd /]# nslookup kubernetes.default
Server:     10.96.0.10
Address:    10.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1

If this returns the expected output, resolution through CoreDNS is working. If it does not, look at /etc/resolv.conf inside the pod k8s-monitor-7ddcb74b87-n6jsd; it should contain something like this:

[root@metrics-master-2 /]# cat /etc/resolv.conf 
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
options ndots:5

Finally, check that the kube-dns endpoints are exposed:

kubectl get ep kube-dns --namespace=kube-system
NAME       ENDPOINTS                       AGE
kube-dns   10.180.3.17:53,10.180.3.17:53    1h

You can verify whether queries are being received by CoreDNS by adding the log plugin to the CoreDNS configuration (the Corefile). The Corefile is held in a ConfigMap named coredns in the kube-system namespace.
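A sketch of the change (the stock Corefile varies by Kubernetes version, so treat this as an example; the log line is the only addition):

kubectl -n kube-system edit configmap coredns

.:53 {
    errors
    log            # logs every query CoreDNS receives
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
}

After the coredns pods pick up the new ConfigMap (deleting the pods forces this), kubectl logs -f on them should show each incoming query.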

Hope this helps.

EDIT:

You might be hitting this issue; please have a look:

https://github.com/kubernetes/kubeadm/issues/1056