Mount multiple s3fs buckets automatically with /etc/fstab

Ryan Cheng · Apr 9, 2013 · Viewed 27.4k times

In the s3fs instruction wiki, we were told that we could auto-mount s3fs buckets by adding the following line to /etc/fstab:

s3fs#mybucket /mnt/mybucket fuse    allow_other,use_cache=/tmp,url=https://s3.amazonaws.com 0 0

This works fine for one bucket, but when I try to mount multiple buckets onto one EC2 instance by adding two lines:

s3fs#mybucket /mnt/mybucket fuse    allow_other,use_cache=/tmp 0 0
s3fs#mybucket2 /mnt/mybucket2 fuse    allow_other,use_cache=/tmp 0 0

only the second line works. I tried duplicating s3fs to s3fs2, changing the lines to:

s3fs#mybucket /mnt/mybucket fuse    allow_other,use_cache=/tmp 0 0
s3fs2#mybucket2 /mnt/mybucket2 fuse    allow_other,use_cache=/tmp 0 0

but this still does not work; only the second one gets mounted.

How do I automatically mount multiple S3 buckets via s3fs in /etc/fstab without manually running:

s3fs mybucket2 /mnt/mybucket2 -o use_cache=/tmp

Answer

B. Shea · May 1, 2017

Perhaps your network wasn't up?

Minimal entry, with only one option (_netdev = mount after the network is up):

<bucket name> <mount point> fuse.s3fs _netdev 0 0

I am running Ubuntu 16.04, and multiple mounts work fine in /etc/fstab.

Example similar to what I use for FTP image uploads (tested with an extra bucket mount point):

mybucket1.mydomain.org /mnt/mybucket1 fuse.s3fs _netdev,allow_other,passwd_file=/home/ftpuser/.passwd-aws-s3fs,default_acl=public-read,uid=1001,gid=65534   0 0

mybucket2.mydomain.org /mnt/mybucket2 fuse.s3fs _netdev,allow_other,passwd_file=/home/ftpuser/.passwd-aws-s3fs,default_acl=public-read,uid=1001,gid=65534   0 0
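
The passwd_file referenced in these entries is the s3fs credentials file (the path and filename above are just what I use). A minimal sketch, assuming one set of AWS keys shared by both buckets; s3fs also accepts per-bucket lines of the form bucketname:ACCESS_KEY_ID:SECRET_ACCESS_KEY:

YOUR_ACCESS_KEY_ID:YOUR_SECRET_ACCESS_KEY

s3fs complains if the file is readable by other users, so restrict it with chmod 600 /home/ftpuser/.passwd-aws-s3fs.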

Run sudo mount -a to test the new entries and mount them (then do a reboot test).
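
A quick way to confirm that both buckets actually mounted (the paths are the ones from the example entries above):

mount | grep s3fs
df -h /mnt/mybucket1 /mnt/mybucket2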

If you wish to mount as non-root, look into the uid and gid options shown above. This isn't strictly necessary when using the fuse option allow_other, as the permissions are '0777' on mounting.
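
If you need the numeric values for uid= and gid=, the id command prints them for any user (ftpuser here is just the example user from the entries above):

id ftpuser

Plug the reported uid and gid numbers into the mount options so the mounted files appear to be owned by that user.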

WARNING: updatedb (which the locate command uses) indexes your system. You should check that either PRUNEFS or PRUNEPATHS in /etc/updatedb.conf covers either your s3fs filesystem type or your s3fs mount points. The default is to 'prune' any s3fs filesystems, but it's worth checking. Otherwise, not only will your system slow down if you have many files in the bucket, but your AWS bill will also increase. See the FAQ link below for more.
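
To check, grep the config (mlocate reads /etc/updatedb.conf; the exact default lists vary by distribution, so treat the values below as illustrative rather than the stock file contents):

grep -E '^PRUNEFS|^PRUNEPATHS' /etc/updatedb.conf

If the s3fs filesystem type is not already excluded, append fuse.s3fs to the existing PRUNEFS line, or add the mount points to PRUNEPATHS, for example PRUNEPATHS="/mnt/mybucket1 /mnt/mybucket2" appended to the existing list.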

References:
https://github.com/s3fs-fuse/s3fs-fuse/wiki/Fuse-Over-Amazon
https://github.com/s3fs-fuse/s3fs-fuse/wiki/FAQ