I am having a problem sharing a folder between Docker containers running on different nodes of a Docker Swarm. My swarm consists of one manager and two workers.
I am using this compose file to deploy applications:
version: '3'
services:
  redis:
    image: redis:latest
    networks:
      - default
    ports:
      - 6379:6379
    volumes:
      - test-volume:/test
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == manager]
  logstash:
    image: docker.elastic.co/logstash/logstash:5.2.2
    networks:
      - default
    volumes:
      - test-volume:/test
    deploy:
      placement:
        constraints: [node.role == worker]
networks:
  default:
    external: false
volumes:
  test-volume:
I can confirm that the folder is successfully mounted in both containers with docker exec _id_ ls /test. But when I add a file to this folder with docker exec _id_ touch /test/file, the second container does not see the created file.
How do I configure the swarm so that the files are visible in both containers?
Volumes created in Docker Swarm via the default (local) driver are local to the node they are created on. So if you place both containers on the same host, they will share one volume. But when you place your containers on different nodes, a separate volume is created on each node.
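If running both services on one node is acceptable, the simplest workaround is to give them the same placement constraint so their tasks land on the same host and therefore see the same local volume. A minimal sketch of that idea, assuming it is fine to keep everything on the manager:

version: '3'
services:
  redis:
    image: redis:latest
    volumes:
      - test-volume:/test
    deploy:
      placement:
        constraints: [node.role == manager]
  logstash:
    image: docker.elastic.co/logstash/logstash:5.2.2
    volumes:
      - test-volume:/test
    deploy:
      placement:
        constraints: [node.role == manager]   # moved off the worker so both tasks share one node
volumes:
  test-volume: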
Now, in order to achieve shared bind mounts/volumes across multiple nodes, you have these options:
Use a cluster filesystem such as GlusterFS or Ceph across the swarm nodes, then use bind mounts in your service definition pointing to the shared filesystem (first sketch below).
Use one of the many storage drivers/volume plugins available to Docker that provide shared storage, such as Flocker (second sketch below).
Switch to Kubernetes and take advantage of automated volume provisioning using multiple backends via storage classes and claims (third sketch below).
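For the first option, here is a minimal sketch, assuming each swarm node already has the cluster filesystem (e.g. GlusterFS) mounted at the same path; /mnt/shared-test below is a hypothetical mount point:

version: '3'
services:
  redis:
    image: redis:latest
    volumes:
      - /mnt/shared-test:/test   # bind mount; this path must exist on every node that can run the task
  logstash:
    image: docker.elastic.co/logstash/logstash:5.2.2
    volumes:
      - /mnt/shared-test:/test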
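For the second option, the compose file can name a volume driver explicitly in the top-level volumes section; my-shared-driver below is only a placeholder for whichever volume plugin you install on every node:

version: '3'
services:
  redis:
    image: redis:latest
    volumes:
      - test-volume:/test
volumes:
  test-volume:
    driver: my-shared-driver   # hypothetical plugin name; install the same plugin on all nodes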
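For the third option, the Kubernetes equivalent is a PersistentVolumeClaim that a provisioner backs automatically; shared-storage below is an assumed StorageClass name provided by your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-volume
spec:
  accessModes:
    - ReadWriteMany          # required so pods on different nodes can mount it simultaneously
  storageClassName: shared-storage   # assumed class; depends on your cluster's provisioner
  resources:
    requests:
      storage: 1Gi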
UPDATE: As @Krishna noted in the comments, Flocker has been shut down and there isn't much activity on its GitHub repo.