Cannot connect to a MongoDB pod in Kubernetes (Connection refused)

MikiTesi · Oct 24, 2018

I have a few remote virtual machines on which I want to deploy some MongoDB instances and make them accessible remotely, but for some reason I can't get this to work.

These are the steps I took:

  • I started a Kubernetes pod running MongoDB on a remote virtual machine.
  • Then I exposed it through a Kubernetes NodePort service (a sketch of roughly what that Service looks like follows this list).
  • Then I tried to connect to the MongoDB instance from my laptop, but it didn't work.
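
The Service manifest isn't reproduced here; a minimal sketch of the kind of NodePort Service I mean, assuming the pod label name=mongo-remote and the container port 5000 from the deployment shown further down (the name mongo-remote-svc is just a placeholder):

apiVersion: v1
kind: Service
metadata:
  name: mongo-remote-svc        # placeholder name
spec:
  type: NodePort
  selector:
    name: mongo-remote          # matches the pod label of the deployment
  ports:
  - port: 5000                  # port exposed inside the cluster
    targetPort: 5000            # port the pod is expected to listen on
    # nodePort is assigned automatically unless set explicitly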

Here is the command I used to try to connect:

$ mongo host:NodePort   

(by "host" I mean the Kubernetes master).

And here is the output of that mongo command:

MongoDB shell version v4.0.3
connecting to: mongodb://host:NodePort/test
2018-10-24T21:43:41.462+0200 E QUERY    [js] Error: couldn't connect to server host:NodePort, connection attempt failed: SocketException:
Error connecting to host:NodePort :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:257:13
@(connect):1:6
exception: connect failed
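
For what it's worth, "Connection refused" means the TCP connection reached the node but nothing was accepting connections on that port (a firewall problem would typically show up as a timeout instead), so the node itself seems reachable.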

From the Kubernetes master, I made sure that the MongoDB pod was running. Then I ran a shell in the container and checked that the MongoDB server was working properly. Moreover, I had previously enabled remote access to the MongoDB server by passing the "--bind_ip 0.0.0.0" option in its yaml description. To make sure that this option had been applied, I ran this command inside the MongoDB instance, from the same shell:

db._adminCommand({getCmdLineOpts: 1})

And here is the output:

{
    "argv" : [
        "mongod",
        "--bind_ip",
        "0.0.0.0"
    ],
    "parsed" : {
        "net" : {
            "bindIp" : "0.0.0.0"
        }
    },
    "ok" : 1
}

So the MongoDB server should actually be accessible remotely.

I can't figure out whether the problem is caused by Kubernetes or by MongoDB.

As a test, I followed exactly the same steps with MySQL instead, and that worked: I ran a MySQL pod, exposed it with a Kubernetes service to make it accessible remotely, and then successfully connected to it from my laptop. This would lead me to think that MongoDB is the culprit here, but I'm not sure. Maybe I'm just making a silly mistake somewhere.

Could someone help me shed some light on this? Or tell me how to debug this problem?
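
In case it helps, these are the kinds of checks I can run from the master (pod and service names are placeholders):

$ kubectl get endpoints mongo-remote-svc                                                             # does the service actually select the pod?
$ kubectl exec -it <mongo-pod-name> -- mongo --eval "printjson(db.serverCmdLineOpts().parsed.net)"   # what is mongod bound to?
$ kubectl exec -it <mongo-pod-name> -- mongo --eval "db.runCommand({ping: 1}).ok"                    # is mongod reachable inside the pod?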

EDIT:

Here is the output of the kubectl describe deployment <mongo-deployment> command, as per your request:

Name:                   mongo-remote
Namespace:              default
CreationTimestamp:      Thu, 25 Oct 2018 06:31:24 +0000
Labels:                 name=mongo-remote
Annotations:            deployment.kubernetes.io/revision=1
Selector:               name=mongo-remote
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  name=mongo-remote
  Containers:
   mongocontainer:
    Image:      mongo:4.0.3
    Port:       5000/TCP
    Host Port:  0/TCP
    Command:
      mongod
      --bind_ip
      0.0.0.0
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   mongo-remote-655478448b (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  15m   deployment-controller  Scaled up replica set mongo-remote-655478448b to 1

For the sake of completeness, here is the yaml description of the deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-remote
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo-remote
    spec:
      containers:
        - name: mongocontainer
          image: mongo:4.0.3
          imagePullPolicy: Always
          command:
          - "mongod"
          - "--bind_ip"
          - "0.0.0.0"
          ports:
          - containerPort: 5000
            name: mongocontainer
      nodeSelector:
        kubernetes.io/hostname: xxx

Answer

MikiTesi · Oct 26, 2018

I found the mistake (and as I suspected, it was a silly one).
The problem was in the yaml description of the deployment. Since no port was specified in the mongod command, mongod was listening on its default port (27017), while the containerPort declared for the container (and therefore the port the Service was forwarding to) was a different one (5000).
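
One way to see the mismatch directly (pod and service names are placeholders) is to compare the port mongod is actually bound to with the port the Service forwards to:

$ kubectl exec -it <mongo-pod-name> -- mongo --eval "printjson(db.serverCmdLineOpts().parsed.net)"
# prints only bindIp and no port, so mongod is on its default 27017
$ kubectl describe svc <mongo-service-name> | grep -i targetport
# the port shown here must match the one mongod actually listens on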

So the solution is to either set the containerPort to MongoDB's default port, like so:

      command:
      - "mongod"
      - "--bind_ip"
      - "0.0.0.0"
      ports:
      - containerPort: 27017
        name: mongocontainer
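
With this option, the Service's targetPort must also point at 27017; if the Service refers to the container port by its name (mongocontainer) rather than by number, it picks up the change automatically.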

Or to make mongod listen on the port declared as the containerPort, by passing --port explicitly, like so:

      command:
      - "mongod"
      - "--bind_ip"
      - "0.0.0.0"
      - "--port"
      - "5000"
      ports:
      - containerPort: 5000
        name: mongocontainer
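
With either fix applied (and the Service's targetPort matching), the original connection attempt from the laptop should now succeed:

$ mongo host:NodePort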