Kubernetes: No Route to Host

Jaskaranbir Singh · Oct 3, 2018

I have a custom bare-metal Kubernetes setup (a cluster set up manually following Kubernetes the Hard Way). Everything seems to work, but I cannot access services externally.

I can get the list of services when I curl:

https://<ip-addr>/api/v1/namespaces/kube-system/services
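
For context, that working call is just an authenticated request straight against the API server, roughly like this (sketch; the certificate file names are placeholders from a typical Hard Way setup):

curl --cacert ca.pem --cert admin.pem --key admin-key.pem \
  https://<ip-addr>/api/v1/namespaces/kube-system/services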

However, when I try to go through the API-server proxy (both via kubectl proxy and directly via <master-ip-address>:<port>):

https://<ip-addr>/api/v1/namespaces/kube-system/services/toned-gecko-grafana:80/proxy/

I get:

Error: 'dial tcp 10.44.0.16:3000: connect: no route to host'
Trying to reach: 'http://10.44.0.16:3000/'
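
The proxy attempt itself looks roughly like this (sketch; 8001 is kubectl proxy's default local port):

# Run the proxy locally, then hit the service-proxy path through it
kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/toned-gecko-grafana:80/proxy/
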
  • Even if I curl http://10.44.0.16:3000/ directly, I get the same error. This happens even when I curl from inside the VM where Kubernetes is installed. (I was able to resolve this part; see the edit below.)

  • I can access my services externally using NodePort.

  • I can access my services if I expose them through Nginx-Ingress.

  • I am using Weave as the CNI. Its logs were normal apart from a couple of lines at the beginning about not being able to access Namespaces (an RBAC error); after that the logs were fine.

  • I am using CoreDNS, and its logs look normal. The API server and kubelet logs look normal, and so do the Kubernetes events.

  • Additional note: the DNS service IP I assigned is 10.3.0.10, the service IP range is 10.3.0.0/24, and the pod network is 10.2.0.0/16. I am not sure what 10.44.x.x is or where it is coming from (a quick way to check what the pods actually get is sketched right after this list).
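
A minimal way to cross-check where the pod IPs actually come from (plain kubectl, nothing setup-specific assumed):

# Pod CIDR each node was assigned by the control plane
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
# IPs actually handed out to pods (this is where the 10.44.x.x addresses show up)
kubectl get pods --all-namespaces -o wide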

Here is output from one of the services:

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubernetes-dashboard",
    "namespace": "kube-system",
    "selfLink": "/api/v1/namespaces/kube-system/services/kubernetes-dashboard",
    "uid": "5c8bb34f-c6a2-11e8-84a7-00163cb4ceeb",
    "resourceVersion": "7054",
    "creationTimestamp": "2018-10-03T00:22:07Z",
    "labels": {
      "addonmanager.kubernetes.io/mode": "Reconcile",
      "k8s-app": "kubernetes-dashboard",
      "kubernetes.io/cluster-service": "true"
    },
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"k8s-app\":\"kubernetes-dashboard\",\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"kubernetes-dashboard\",\"namespace\":\"kube-system\"},\"spec\":{\"ports\":[{\"port\":443,\"targetPort\":8443}],\"selector\":{\"k8s-app\":\"kubernetes-dashboard\"}}}\n"
    }
  },
  "spec": {
    "ports": [
      {
        "protocol": "TCP",
        "port": 443,
        "targetPort": 8443,
        "nodePort": 30033
      }
    ],
    "selector": {
      "k8s-app": "kubernetes-dashboard"
    },
    "clusterIP": "10.3.0.30",
    "type": "NodePort",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {

    }
  }
}
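
For comparison, the NodePort path that does work looks roughly like this for the dashboard service above (sketch; the node IP is a placeholder, and -k is needed if the dashboard certificate is self-signed):

curl -k https://<node-ip>:30033/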

I am not sure how to debug this; even some pointers in the right direction would help. If anything else is required, please let me know.


Output from kubectl get svc:

NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
coredns-primary        ClusterIP   10.3.0.10    <none>        53/UDP,53/TCP,9153/TCP   4h51m
kubernetes-dashboard   NodePort    10.3.0.30    <none>        443:30033/TCP            4h51m
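
The pod IP and port each service actually forwards to (i.e. what the API-server proxy ends up dialing, like 10.44.0.16:3000 above) can be listed like this (sketch):

kubectl get endpoints -n kube-system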

EDIT:

It turns out I didn't have the kube-dns service running for some reason, despite having CoreDNS running. It was as described here: https://github.com/kubernetes/kubeadm/issues/1056#issuecomment-413235119

Now I can curl from inside the VM successfully, but proxy access still gives me the same error: no route to host. I am not sure why or how this would fix the issue, since I don't see DNS being in play here, but it fixed it regardless. I would appreciate any possible explanation of this, too.
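
For reference, one way to verify that the DNS Service that exists matches what kubelet advertises to pods (sketch; whether the flag or the config file applies, and the config file path, depend on the setup):

# ClusterIP of the DNS service that actually exists
kubectl get svc -n kube-system
# What kubelet tells pods to use, depending on how it is configured
systemctl cat kubelet | grep -i cluster-dns
grep -i clusterDNS /var/lib/kubelet/kubelet-config.yaml   # path is an assumption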

Answer

KevinLiu · Sep 5, 2019

I encountered the same issue and resolved it by running the commands below:

# Flush all rules in the filter table
iptables --flush
# Flush the NAT table as well
iptables -t nat --flush
# Stop and disable firewalld so it cannot reinstate reject rules
systemctl stop firewalld
systemctl disable firewalld
# Restart Docker so its chains and rules are recreated
systemctl restart docker
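
One caveat: flushing iptables also removes the Service rules that kube-proxy maintains. Assuming kube-proxy runs as a systemd unit named kube-proxy (as in a Hard Way style setup; unit name is an assumption), restarting it forces an immediate resync of those rules:

# Force kube-proxy to rebuild the Service NAT rules removed by the flush
systemctl restart kube-proxy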