I am trying to move the images for my site from my host to Amazon S3 cloud storage. These images are of client work sites and cannot be publicly available. I would like them to be displayed on my site, preferably by using the PHP SDK available from Amazon.
So far I have been able to script the conversion so that I look up records in my database, grab the file path, name the file appropriately, and send it to Amazon.
//upload to S3
$s3->create_object($bucket, $folder . $file_name_new, array(
    'fileUpload' => $file_temp,
    'acl' => AmazonS3::ACL_PRIVATE, //access denied; grantee: only owner
    //'acl' => AmazonS3::ACL_PUBLIC, //image displayed
    //'acl' => AmazonS3::ACL_OPEN, //image displayed; grantee: everyone has open permission
    //'acl' => AmazonS3::ACL_AUTH_READ, //image not displayed; grantee: authenticated users have open permissions
    //'acl' => AmazonS3::ACL_OWNER_READ, //image not displayed; grantee: only ryan
    //'acl' => AmazonS3::ACL_OWNER_FULL_CONTROL, //image not displayed; grantee: only ryan
    'storage' => AmazonS3::STORAGE_REDUCED
));
Before I copy everything over, I have created a simple form to test uploading and displaying an image. If I upload an image using ACL_PRIVATE, I can either grab the public URL, to which I will not have access, or grab the URL with a temporary key and display the image.
<?php
//display the image link
$temp_link = $s3->get_object_url($bucket, $folder.$file_name_new, '1 minute');
?>
<a href='<?php echo $temp_link; ?>'><?php echo $temp_link; ?></a><br />
<img src='<?php echo $temp_link; ?>' alt='finding image' /><br />
Using this method, how will caching work? I'm guessing that every time I refresh the page, or modify one of my records, I will be pulling that image again, increasing my GET requests.
I have also considered using bucket policies to allow image retrieval only from certain referrers. Do I understand correctly that Amazon is supposed to serve requests only from the pages or domains I specify?
I referenced https://forums.aws.amazon.com/thread.jspa?messageID=188183 to set that up, but am then confused as to which security I need on my objects. It seemed like if I made them private, they still would not display unless I used the temporary link mentioned previously. If I made them public, I could navigate to them directly, regardless of referrer.
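For reference, the bucket policy I was testing looked roughly like the following (the bucket name and referrer domain here are placeholders for my real values), pasted into the bucket policy editor:

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowGetFromMyReferrersOnly",
            "Effect": "Allow",
            "Principal": { "AWS": "*" },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
            "Condition": {
                "StringLike": { "aws:Referer": [ "http://www.mysite.com/*" ] }
            }
        }
    ]
}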
Am I way off base with what I'm trying to do here? Is this not really supported by S3, or am I missing something simple? I have gone through the SDK documentation and done a lot of searching, and I feel this should be a little more clearly documented, so hopefully any input here can help others in this situation. I've read of others naming the file with a unique ID, creating security through obscurity, but that won't cut it in my situation, and it's probably not best practice for anyone trying to be secure.
The best way to serve your images is to generate a pre-signed URL using the PHP SDK. That way the downloads go directly from S3 to your users.
You don't need to proxy the downloads through your own servers as @mfonda suggested (you can set any caching headers you like on S3 objects), and if you did, you would lose some of the major benefits of using S3.
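For example, if you do want caching headers on the objects, something like this at upload time should do it with the 1.x PHP SDK (the 'headers' option key is from memory, so double-check it against your SDK version):

$s3->create_object($bucket, $folder . $file_name_new, array(
    'fileUpload' => $file_temp,
    'acl' => AmazonS3::ACL_PRIVATE,
    'storage' => AmazonS3::STORAGE_REDUCED,
    //ask browsers/proxies to cache the image once they have a valid signed URL
    'headers' => array(
        'Cache-Control' => 'private, max-age=31536000' //roughly one year
    )
));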
However, as you pointed out in your question, the URL will always be changing (actually the query string), so browsers won't cache the file. The easy workaround is simply to always use the same expiry date so that the same query string is always generated. Or better still, 'cache' the URL yourself (e.g. in the database) and reuse it every time.
You'll obviously have to set the expiry time somewhere far into the future, but you can regenerate these URLs every so often if you prefer. E.g. in your database you would store the generated URL and the expiry date (you could parse that from the URL too). Then you either use the existing URL or, if the expiry date has passed, generate a new one, and so on.
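A rough sketch of that idea (the column names and the helper function are made up for illustration; get_object_url() is the same 1.x SDK call you're already using):

//return a usable pre-signed URL for a record, reusing the stored one
//while it is still comfortably within its lifetime
function signed_url_for($s3, $bucket, $key, array &$row)
{
    $now = time();

    //reuse the cached URL if it won't expire within the next hour
    if (!empty($row['s3_url']) && $row['s3_url_expires'] > $now + 3600) {
        return $row['s3_url'];
    }

    //otherwise generate a new URL valid for 30 days and remember it
    //(persist $row back to your database in your own code)
    $row['s3_url'] = $s3->get_object_url($bucket, $key, '30 days');
    $row['s3_url_expires'] = strtotime('+30 days', $now);

    return $row['s3_url'];
}

Because the same URL (and therefore the same query string) is reused until it's regenerated, browsers can cache the image normally in the meantime.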