How do I copy files from S3 to Amazon EMR HDFS?

Tomer · Sep 20, 2011 · Viewed 38.9k times

I'm running Hive on EMR, and need to copy some files to all of the EMR instances.

One way, as I understand it, is to copy the files to the local file system on each node; the other is to copy them to HDFS. However, I haven't found a simple way to copy straight from S3 to HDFS.

What is the best way to go about this?

Answer

Patrick Salami · Sep 22, 2011

The best way to do this is to use Hadoop's distcp command. Example (run on one of the cluster nodes):

% ${HADOOP_HOME}/bin/hadoop distcp s3n://mybucket/myfile /root/myfile

This would copy a file called myfile from an S3 bucket named mybucket to /root/myfile in HDFS. Note that this example assumes you are using the S3 file system in "native" mode; this means that Hadoop sees each object in S3 as a file. If you use S3 in block mode instead, you would replace s3n with s3 in the example above. For more info about the differences between native S3 and block mode, as well as an elaboration on the example above, see http://wiki.apache.org/hadoop/AmazonS3.
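
For example, the two modes differ only in the URI scheme; the bucket and paths below are just placeholders, not anything from your setup:

# native mode: Hadoop treats each S3 object as a file
% ${HADOOP_HOME}/bin/hadoop distcp s3n://mybucket/mydir /data/mydir
# block mode: same command, s3 scheme instead of s3n
% ${HADOOP_HOME}/bin/hadoop distcp s3://mybucket/mydir /data/mydir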

I found distcp to be a very powerful tool. In addition to using it to copy a large number of files in and out of S3, you can also perform fast cluster-to-cluster copies of large data sets. Instead of pushing all the data through a single node, distcp uses multiple nodes in parallel to perform the transfer. This makes it considerably faster when moving large amounts of data than the alternative of copying everything to the local file system as an intermediary.
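
As a rough sketch (the namenode hostnames and paths here are made up), a cluster-to-cluster copy uses the same command with two HDFS URIs; the optional -m flag caps the number of map tasks doing the copy in parallel:

# copy /data/logs from one cluster to another, using at most 20 parallel map tasks
% ${HADOOP_HOME}/bin/hadoop distcp -m 20 hdfs://source-namenode:8020/data/logs hdfs://dest-namenode:8020/data/logs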