Redshift COPY command for Parquet format with Snappy compression

cloudninja · Mar 10, 2016 · Viewed 13.6k times

I have datasets in HDFS that are in Parquet format with Snappy as the compression codec. As far as my research goes, Redshift currently accepts only plain text, JSON, and Avro formats, with gzip and lzo compression codecs.

As a workaround, I am converting the Parquet files to plain text and changing the Snappy codec to gzip using a Pig script.

Is there currently a way to load data directly from Parquet files into Redshift?

Answer

Joe Harris · Mar 14, 2016

No, there is currently no way to load Parquet format data directly into Redshift.

EDIT: As of April 19, 2017 you can use Redshift Spectrum to query Parquet data on S3 directly. You can therefore now "load" from Parquet with INSERT INTO x SELECT * FROM parquet_data. See http://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html
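
For illustration, a minimal sketch of the Spectrum route. The schema, database, table, bucket path, and IAM role names below are placeholders, and the column list assumes a toy schema; adapt them to your own setup:

    -- Register an external schema backed by the data catalog
    -- (database name and role ARN are hypothetical)
    CREATE EXTERNAL SCHEMA spectrum_schema
    FROM DATA CATALOG
    DATABASE 'spectrum_db'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;

    -- Point an external table at the Parquet files on S3
    CREATE EXTERNAL TABLE spectrum_schema.parquet_data (
        id         BIGINT,
        event_time TIMESTAMP,
        payload    VARCHAR(256)
    )
    STORED AS PARQUET
    LOCATION 's3://my-bucket/parquet/';

    -- "Load" into a regular Redshift table by selecting
    -- from the external table
    INSERT INTO my_local_table
    SELECT * FROM spectrum_schema.parquet_data;

Snappy-compressed Parquet is fine here, since the compression is handled inside the Parquet format itself.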

EDIT 2: As of May 17, 2018 (for clusters on version 1.0.2294 or later) you can COPY Parquet and ORC files directly into Redshift. See https://docs.aws.amazon.com/redshift/latest/dg/copy-usage_notes-copy-from-columnar.html
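
For illustration, a sketch of the direct COPY route; the target table, S3 prefix, and role ARN are placeholders:

    -- Load Parquet files straight into an existing Redshift table
    -- (table name, bucket path, and IAM role are hypothetical)
    COPY my_local_table
    FROM 's3://my-bucket/parquet/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
    FORMAT AS PARQUET;

Note that COPY from columnar formats maps columns by position, so the target table's columns should match the number, order, and types of the columns in the Parquet files.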