After some searching, I failed to find a thorough comparison of fastparquet and pyarrow.

I found this blog post (a basic comparison of speeds) and a GitHub discussion claiming that files created with fastparquet do not support AWS Athena (by the way, is that still the case?).

When/why would I use one over the other? What are the major advantages and disadvantages?

My specific use case is processing data with dask, writing it to S3, and then reading/analyzing it with AWS Athena.
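To make the use case concrete, here is roughly what I am doing (the bucket name and paths are placeholders, and it assumes s3fs is installed so dask can talk to S3); I can switch the engine argument between 'fastparquet' and 'pyarrow':

    import dask.dataframe as dd

    # placeholder input; in reality the data comes from an upstream processing step
    df = dd.read_csv('s3://my-bucket/raw/*.csv')

    # ... transformations with dask ...

    # write the partitioned parquet dataset directly to S3
    df.to_parquet(
        's3://my-bucket/processed/',
        engine='pyarrow',   # or 'fastparquet'
        write_index=False,
    )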
I used both fastparquet and pyarrow for converting protobuf data to parquet and for querying it in S3 using Athena. Both worked; however, my use case is a Lambda function, where the package zip file has to be lightweight, so I went with fastparquet (the fastparquet library was only about 1.1 MB, while the pyarrow library was 176 MB, and the Lambda package limit is 250 MB).

I used the following to store a dataframe as a parquet file:
    from os import path
    from fastparquet import write

    # df_data is an existing pandas DataFrame
    parquet_file = path.join(filename + '.parq')
    write(parquet_file, df_data)
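For comparison, the equivalent write with pyarrow looks roughly like this (a sketch using the same df_data and parquet_file as above):

    import pyarrow as pa
    import pyarrow.parquet as pq

    # convert the pandas DataFrame to an Arrow table and write it out as parquet
    table = pa.Table.from_pandas(df_data)
    pq.write_table(table, parquet_file)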