How to convert a 500 GB SQL table into Apache Parquet?

ShanZhengYang · Jan 6, 2017 · Viewed 11.2k times

Perhaps this is well documented, but I am getting confused about how to do this (there are many Apache tools).

When I create an SQL table, I create the table using the following commands:

CREATE TABLE table_name(
   column1 datatype,
   column2 datatype,
   column3 datatype,
   .....
   columnN datatype,
   PRIMARY KEY( one or more columns )
);

How does one convert this existing table into Parquet? Is the result written to disk as a file? If the original data is several hundred GB, how long should one expect the conversion to take?

Could I format the original raw data into Parquet format instead?

Answer

liprais · Apr 27, 2017

Apache Spark can be used to do this:

1. Load your table from MySQL via JDBC.
2. Save it as a Parquet file.

Example:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# The connection string is a JDBC URL, e.g. "jdbc:mysql://host:3306/dbname"
df = spark.read.jdbc("YOUR_MYSQL_JDBC_CONN_STRING", "YOUR_TABLE",
                     properties={"user": "YOUR_USER", "password": "YOUR_PASSWORD"})

# Writes a directory of Parquet files to HDFS (or a local path)
df.write.parquet("YOUR_HDFS_FILE")
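For a table in the 500 GB range, a single JDBC connection will be the bottleneck. Spark's `jdbc` reader can split the read into parallel chunks if you give it a numeric partitioning column and its bounds. The following is a minimal sketch of that approach; the function name, the `id` column, and the bound values are placeholders you would replace with values from your own table:

```python
def mysql_table_to_parquet(jdbc_url, table, user, password, out_path,
                           partition_column="id", lower=0, upper=1_000_000,
                           num_partitions=64):
    """Read a MySQL table in parallel chunks via JDBC and write it as Parquet.

    partition_column must be a numeric column; Spark issues one query per
    partition, each covering a slice of [lower, upper].
    """
    # Imported inside the function so the sketch stays self-contained
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.jdbc(
        jdbc_url, table,
        column=partition_column,       # numeric column Spark splits the read on
        lowerBound=lower,              # minimum value of that column
        upperBound=upper,              # maximum value of that column
        numPartitions=num_partitions,  # number of parallel JDBC connections
        properties={"user": user, "password": password},
    )
    # Each partition becomes its own Parquet file under out_path
    df.write.mode("overwrite").parquet(out_path)
```

Runtime then scales with how many partitions your cluster and the MySQL server can serve concurrently, rather than with a single sequential scan. You would call it with something like `mysql_table_to_parquet("jdbc:mysql://host:3306/mydb", "table_name", "user", "pw", "hdfs:///out/table_parquet")`.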