In moving to AWS EC2, I want to restrict the permissions of the IAM user my instances use, for good reason. One thing the instances need to do is read files from S3 and write files there. However, I cannot find any way to achieve this without giving that user full permissions.
s3cmd lets me run "ls" and "du" on the S3 buckets the policy grants access to, but it always fails with a 403 error when trying to PUT/sync to one of those folders. If I use my root credentials instead, the transfer goes right through.
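For reference, these are roughly the commands I'm running; the bucket and path names are placeholders for my real ones:

s3cmd ls s3://mybucket/backups/                    # works
s3cmd du s3://mybucket/backups/                    # works
s3cmd put backup.tar.gz s3://mybucket/backups/     # 403 Forbidden
s3cmd sync ./backups/ s3://mybucket/backups/       # 403 Forbidden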
So I don't get it: if I grant the user all permissions on those buckets, it cannot PUT, but if I grant them on arn:aws:s3:::* (all buckets), it can. That makes no sense to me.
Anyone else ever dealt with this before?
Try something like this. I think the problem is that you need s3:ListAllMyBuckets and s3:ListBucket for s3cmd to work. I'm not sure why, but it won't work unless it can get a list of the buckets. Note that s3:ListAllMyBuckets only applies to arn:aws:s3:::* and s3:ListBucket only applies to the bucket itself, not to a path inside it, which is why limiting everything to bucket/path fails while granting everything on arn:aws:s3:::* works. I had the same problem the first time I tried to use permissions with s3cmd, and this was the solution.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket"
    },
    {
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::bucket/path",
        "arn:aws:s3:::bucket/path/*"
      ]
    }
  ]
}
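If you manage the user from the command line, attaching the policy is just a matter of saving the JSON to a file and running something along these lines (the user name, policy name, and file name are placeholders; you can do the same thing through the IAM console):

aws iam put-user-policy --user-name backup-user --policy-name s3-backup --policy-document file://policy.json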
Edit: I've added the s3:PutObjectAcl action, which is required for newer versions of s3cmd, as Will Jessop points out below.
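Once the policy is in place, you can sanity-check it with the restricted user's keys, for example (bucket, path, and file names below are placeholders):

s3cmd ls s3://bucket
s3cmd put somefile s3://bucket/path/
s3cmd sync ./localdir/ s3://bucket/path/

If these still return 403 on a newer s3cmd, double-check that s3:PutObjectAcl actually made it into the policy.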