I'm having trouble finding a library that allows Parquet files to be written using Python. Bonus points if I can use Snappy or a similar compression mechanism in conjunction with it.
Thus far the only method I have found is using Spark via pyspark.sql.DataFrame's Parquet support.
I have some scripts that need to write Parquet files that are not Spark jobs. Is there any approach to writing Parquet files in Python that doesn't involve pyspark.sql?
Update (March 2017): There are currently two libraries capable of writing Parquet files:

- fastparquet
- pyarrow
Both of them are still under heavy development, it seems, and they come with a number of disclaimers (e.g. no support for nested data), so you will have to check whether they support everything you need.
OLD ANSWER:
As of February 2016 there seems to be NO Python-only library capable of writing Parquet files.
If you only need to read Parquet files there is python-parquet.
As a workaround you will have to rely on some other process, e.g. pyspark.sql (which uses Py4J and runs on the JVM, and thus cannot be used directly from your average CPython program).