How do you control the size of the output file?

user447359 · Aug 28, 2016 · Viewed 32.4k times

In Spark, what is the best way to control the size of the output file? For example, in log4j we can specify a maximum file size, after which the file rotates.

I am looking for a similar solution for Parquet files. Is there a maximum file size option available when writing a file?

I have a few workarounds, but none of them is good. If I want to limit files to 64 MB, one option is to repartition the data and write it to a temporary location, and then merge the files together based on their sizes in the temporary location. But getting the file sizes right is difficult.
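
Roughly, that workaround looks like the sketch below (assuming a DataFrame `df` and a SparkContext `sc`; the partition count, the staging path, and the 64 MB target are just placeholders, and the final merge step is omitted):

    import org.apache.hadoop.fs.{FileSystem, Path}

    // Guess a partition count, write to a staging directory, then inspect
    // the resulting file sizes to decide how to merge them.
    val tempPath = "/tmp/parquet-staging"   // placeholder temp location
    df.repartition(100).write.parquet(tempPath)

    val fs = FileSystem.get(sc.hadoopConfiguration)
    fs.listStatus(new Path(tempPath))
      .foreach(s => println(s"${s.getPath}: ${s.getLen} bytes"))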

Answer

soulmachine · Sep 2, 2016

It's impossible for Spark to control the size of Parquet files directly, because the DataFrame in memory has to be encoded and compressed before it is written to disk. Until that process finishes, there is no way to estimate the actual file size on disk.

So my solution is:

  • Write the DataFrame to HDFS, df.write.parquet(path)
  • Get the directory size and calculate the number of files

    import org.apache.hadoop.fs.{FileSystem, Path}
    val fs = FileSystem.get(sc.hadoopConfiguration)
    // getContentSummary takes a Path, not a String
    val dirSize = fs.getContentSummary(new Path(path)).getLength
    val fileNum = math.max(1L, dirSize / (512L * 1024 * 1024)).toInt  // say, 512 MB per file; at least 1
    
  • Read the directory and re-write to HDFS

    val df = sqlContext.read.parquet(path)
    df.coalesce(fileNum).write.parquet(newPath)  // newPath is a new directory, e.g. path + "_resized"
    

    Do NOT reuse the original df, otherwise it will trigger your job twice.

  • Delete the old directory and rename the new directory back

    fs.delete(new Path(path), true)
    fs.rename(new Path(newPath), new Path(path))
    

The drawback of this solution is that it writes the data twice, which doubles the disk I/O, but for now this is the only way to do it.
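
For reference, the steps above could be wired together roughly as in the sketch below. The helper name resizeParquet and the 512 MB target are only illustrative, not a built-in API:

    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.spark.sql.SQLContext

    // Illustrative helper combining the steps above; resizeParquet is just
    // a name for this sketch, not a standard API.
    def resizeParquet(sqlContext: SQLContext, path: String,
                      targetBytes: Long = 512L * 1024 * 1024): Unit = {
      val fs = FileSystem.get(sqlContext.sparkContext.hadoopConfiguration)
      val newPath = path + "_resized"

      // 1. Measure the directory and derive the target number of files.
      val dirSize = fs.getContentSummary(new Path(path)).getLength
      val fileNum = math.max(1L, dirSize / targetBytes).toInt

      // 2. Re-read and re-write with that many files.
      sqlContext.read.parquet(path).coalesce(fileNum).write.parquet(newPath)

      // 3. Swap the new directory in place of the old one.
      fs.delete(new Path(path), true)
      fs.rename(new Path(newPath), new Path(path))
    }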