Terraform - Upload file to S3 on every apply

Muthaiah PL · May 13, 2019 · Viewed 12k times

I need to upload a folder to an S3 bucket. When I apply for the first time, it uploads fine, but I have two problems:

  1. The uploaded version outputs as null. I would expect some version_id like 1, 2, 3.
  2. When I run terraform apply again, it says Apply complete! Resources: 0 added, 0 changed, 0 destroyed. I would expect it to upload every time I run terraform apply and to create a new version.

What am I doing wrong? Here is my Terraform config:

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my_bucket_name"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_object" "file_upload" {
  bucket = "my_bucket"
  key    = "my_bucket_key"
  source = "my_files.zip"
}

output "my_bucket_file_version" {
  value = "${aws_s3_bucket_object.file_upload.version_id}"
}

Answer

Martin Atkins · May 14, 2019

Terraform only makes changes to remote objects when it detects a difference between the configuration and the remote object's attributes. In the configuration as you've written it so far, Terraform sees only the filename. It knows nothing about the content of the file, so it can't react to the file changing.

To make subsequent changes, there are a few options:

  • You could use a different local filename for each new version.
  • You could use a different remote object path for each new version.
  • You can use the object etag to let Terraform recognize when the content has changed, regardless of the local filename or object path.
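As an illustration of the second option, you could vary the object key per release. This is only a sketch; the `app_version` variable here is hypothetical, not part of the original configuration:

```hcl
# Hypothetical example: changing var.app_version on each release gives
# each version its own object key, so Terraform plans a new upload.
variable "app_version" {
  default = "1.0.0"
}

resource "aws_s3_bucket_object" "file_upload" {
  bucket = "my_bucket"
  key    = "releases/${var.app_version}/my_files.zip"
  source = "${path.module}/my_files.zip"
}
```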

The last of these seems closest to what you want in this case. To do that, add the etag argument and set it to an MD5 hash of the file:

resource "aws_s3_bucket_object" "file_upload" {
  bucket = "my_bucket"
  key    = "my_bucket_key"
  source = "${path.module}/my_files.zip"
  etag   = "${filemd5("${path.module}/my_files.zip")}"
}
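Note that the `filemd5` function is available in Terraform 0.12 and later. On Terraform 0.11, the equivalent (assuming the file's content is readable by `file`) is to compose the older `md5` and `file` functions:

```hcl
# Terraform 0.11 variant: md5(file(...)) in place of filemd5(...)
resource "aws_s3_bucket_object" "file_upload" {
  bucket = "my_bucket"
  key    = "my_bucket_key"
  source = "${path.module}/my_files.zip"
  etag   = "${md5(file("${path.module}/my_files.zip"))}"
}
```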

With that extra argument in place, Terraform will detect when the MD5 hash of the file on disk differs from the one stored in S3 and will plan to update the object accordingly.


(I'm not sure what's going on with version_id. It should work as long as versioning is enabled on the bucket.)