Mount S3 bucket as filesystem on AWS ECS container

Pratik Mungekar · Aug 27, 2018

I am trying to mount an S3 bucket as a volume on an AWS ECS Docker container using the rexray/s3fs driver.

I am able to do this on my local machine, where I installed the plugin

$ docker plugin install rexray/s3fs

and then mounted the S3 bucket in a Docker container:

$ docker plugin ls
ID                  NAME                 DESCRIPTION                                     ENABLED
3a0e14cadc17        rexray/s3fs:latest   REX-Ray FUSE Driver for Amazon Simple Storage   true

$ docker run -ti --volume-driver=rexray/s3fs -v s3-bucket:/data img

I am trying to replicate this on AWS ECS.

I tried to follow this document: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-volumes.html

If I set a Driver value, the task is not able to run and fails with the error "was unable to place a task because no container instance met all of its requirements."

I am using a t2.medium instance, and the task's resource requirements are well within its capacity, so this should not be a hardware requirement issue.

If I remove the Driver config from the task definition, the task gets executed.

It seems I am misconfiguring something.

Has anyone tried the same thing? Please share your knowledge.

Thanks!!

Answer

wimnat · May 29, 2019

Your approach of using the rexray/s3fs driver is correct.

These are the steps I followed to get things working on Amazon Linux 1.

First, you will need to build and install s3fs from source:

yum install -y gcc libstdc++-devel gcc-c++ fuse fuse-devel curl-devel libxml2-devel mailcap automake openssl-devel git
git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
./autogen.sh
./configure --prefix=/usr --with-openssl
make
make install
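
Before moving on, it is worth confirming the build actually installed. A quick sanity check like the following should print the installed path and version (the --prefix=/usr flag above puts the binary under /usr/bin):

# confirm s3fs is on the PATH and report its version
which s3fs
s3fs --version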

Now install the driver. There are a few options here you might want to modify, such as the AWS region and whether to use an IAM role (as below) or an access key.

docker plugin install rexray/s3fs:latest \
  S3FS_REGION=ap-southeast-2 \
  S3FS_OPTIONS="allow_other,iam_role=auto,umask=000" \
  LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_MOUNT_ROOTPATH=/ \
  --grant-all-permissions
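
At this point, docker plugin ls on the instance should show the plugin with ENABLED set to true, just like the local output in the question:

# the rexray/s3fs plugin should be listed with ENABLED = true
docker plugin ls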

Now for the very important step of restarting the ECS agent. I also update it for good measure.

yum update -y ecs-init
service docker restart && start ecs
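
To verify the agent came back up and re-registered after the restart, the agent's local introspection endpoint (port 51678 by default) should return cluster and container instance metadata:

# the ECS agent introspection API; should show the cluster and container instance ARN
curl -s http://localhost:51678/v1/metadata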

You should now be ready to create your task definition. The important part is the volume configuration, which is shown below.

"volumes": [
  {
    "name": "name-of-your-s3-bucket",
    "host": null,
    "dockerVolumeConfiguration": {
      "autoprovision": false,
      "labels": null,
      "scope": "shared",
      "driver": "rexray/s3fs",
      "driverOpts": null
    }
  }
]

Now you just need to specify the mount point in the container definition:

"mountPoints": [
  {
    "readOnly": null,
    "containerPath": "/where/ever/you/want",
    "sourceVolume": "name-of-your-s3-bucket"
  }
]
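
If you register the task definition with the AWS CLI rather than the console, something along these lines works; the file name is just a placeholder for the full task definition JSON containing the volumes and mountPoints blocks above:

# register the task definition; the file name is a placeholder
aws ecs register-task-definition --cli-input-json file://s3fs-task-def.json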

Now, as long as you have the appropriate IAM permissions for accessing the S3 bucket, your container should start and you can get on with using S3 as a volume.
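
For reference, the permissions involved look roughly like the sketch below, attached to the instance profile (or to whatever credentials the plugin uses). The bucket name is the same placeholder as in the task definition, and the exact action list depends on whether the container only reads or also writes:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::name-of-your-s3-bucket",
        "arn:aws:s3:::name-of-your-s3-bucket/*"
      ]
    }
  ]
}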

If you get an error running the task that mentions "ATTRIBUTE", double-check that the plugin has been successfully installed on the EC2 instance and that the ECS agent has been restarted. Also double-check that your driver name is "rexray/s3fs".
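
One way to check this from the AWS CLI is to confirm that ECS advertises the plugin as a container instance attribute. A sketch (the cluster name is a placeholder, and the attribute name is assumed to follow the ecs.capability.docker-plugin.&lt;plugin-name&gt; pattern):

# cluster name is a placeholder; the attribute name pattern is an assumption
aws ecs list-attributes --cluster your-cluster --target-type container-instance \
  --attribute-name ecs.capability.docker-plugin.rexray/s3fs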