How to access the service deployed on one pod via another pod in Kubernetes?

Aditya Datta · Nov 21, 2018 · Viewed 12.2k times

Can anybody let me know how we can access a service deployed on one pod from another pod in a Kubernetes cluster?

Example:

There is an nginx service deployed on Node1 (with pod name nginx-12345) and another service deployed on Node2 (with pod name service-23456). Now if 'service' wants to communicate with 'nginx' for some reason, how can we reach 'nginx' from inside the 'service-23456' pod?

Answer

Prafull Ladha · Nov 21, 2018

There are various ways to access a service in Kubernetes. You can expose your service through a NodePort or LoadBalancer and access it from outside the cluster.
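For instance, assuming you already have a Deployment named nginx, a quick sketch of the NodePort approach with kubectl expose looks like this (the deployment name and port are assumptions about your setup):

    # Expose the nginx deployment outside the cluster via a NodePort
    kubectl expose deployment nginx --type=NodePort --port=80

    # Find the node port that was allocated (30000-32767 by default)
    kubectl get service nginx

The service is then reachable at http://<any-node-ip>:<node-port> from outside the cluster.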

See the official documentation on how to access services.

The official Kubernetes documentation states:

Some clusters may allow you to ssh to a node in the cluster. From there you may be able to access cluster services. This is a non-standard method, and will work on some clusters but not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.

So whether you can access a service directly from another node depends on which type of Kubernetes cluster you're using.

EDIT:

Once the service is deployed in your cluster, you should be able to contact it by name, and kube-dns will answer with the correct ClusterIP to reach your final pods. ClusterIPs are implemented by iptables rules that kube-proxy creates on the worker nodes, which NAT your request to the final container's IP.
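If you are curious, you can inspect those rules on a worker node; this is just a sketch and assumes kube-proxy runs in its default iptables mode:

    # List the NAT rules kube-proxy maintains for Services
    sudo iptables -t nat -L KUBE-SERVICES -n | grep nginx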

The kube-dns naming convention is service.namespace.svc.cluster-domain.tld, and the default cluster domain is cluster.local.

For example, if you want to contact a service called mysql in the db namespace from any namespace, you can simply speak to mysql.db.svc.cluster.local.
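Applied to the question's example, and assuming the nginx service is named nginx and lives in the default namespace, you could test it from the other pod like this:

    # Call nginx by its cluster DNS name (assumes curl is present
    # in the pod's image)
    kubectl exec -it service-23456 -- curl http://nginx.default.svc.cluster.local

    # Within the same namespace the short name resolves too
    kubectl exec -it service-23456 -- curl http://nginx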

If this is not working, there is probably an issue with kube-dns in your cluster. Hope this helps.
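A quick way to check whether kube-dns is resolving names at all (assuming nslookup exists in the pod's image):

    # A failure here points at kube-dns rather than at your service
    kubectl exec -it service-23456 -- nslookup nginx.default.svc.cluster.local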

EDIT2: There are some known issues with DNS resolution on Ubuntu. The official Kubernetes documentation states:

Some Linux distributions (e.g. Ubuntu), use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet’s --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm 1.11 automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.
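On a kubeadm-provisioned Debian/Ubuntu node, the fix described above boils down to something like the following sketch (the drop-in file path is an assumption and differs between distributions):

    # Point kubelet at the real resolv.conf used by systemd-resolved.
    # Note: tee overwrites /etc/default/kubelet; merge by hand if you
    # already set KUBELET_EXTRA_ARGS there.
    echo 'KUBELET_EXTRA_ARGS=--resolv-conf=/run/systemd/resolve/resolv.conf' | sudo tee /etc/default/kubelet

    # Restart kubelet so the flag takes effect
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet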