Create Spark Dataset from a CSV file

Powers · Sep 16, 2016 · Viewed 12k times

I would like to create a Spark Dataset from a simple CSV file. Here are the contents of the CSV file:

name,state,number_of_people,coolness_index
trenton,nj,"10","4.5"
bedford,ny,"20","3.3"
patterson,nj,"30","2.2"
camden,nj,"40","8.8"

Here is the code to make the Dataset:

val location = "s3a://path_to_csv"

case class City(name: String, state: String, number_of_people: Long)

val cities = spark.read
  .option("header", "true")
  .option("charset", "UTF8")
  .option("delimiter",",")
  .csv(location)
  .as[City]

Here is the error message: "Cannot up cast number_of_people from string to bigint as it may truncate"

Databricks discusses creating Datasets, and this particular error message, in a blog post:

Encoders eagerly check that your data matches the expected schema, providing helpful error messages before you attempt to incorrectly process TBs of data. For example, if we try to use a datatype that is too small, such that conversion to an object would result in truncation (i.e. numStudents is larger than a byte, which holds a maximum value of 255) the Analyzer will emit an AnalysisException.

My case class already declares number_of_people as a Long, so I didn't expect to see this error message.

Answer

user6022341 · Sep 16, 2016

Without inferSchema or an explicit schema, csv() reads every column as a string, and the encoder refuses the string-to-bigint conversion because it could lose data. Use schema inference:

val cities = spark.read
  .option("inferSchema", "true")
  ...
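For completeness, here is a minimal sketch of the full read with inference enabled, reusing the location and City from the question. Note that inference may pick Int for number_of_people, which still widens safely to Long:

val cities = spark.read
  .option("header", "true")       // first row is the header
  .option("inferSchema", "true")  // extra pass over the data to guess column types
  .csv(location)
  .as[City]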

or provide a schema:

val cities = spark.read
  .schema(StructType(Array(StructField("name", StringType), ...)))
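Spelled out for this file, the explicit schema might look like the sketch below (citySchema is just an illustrative name; the fields mirror the CSV header above):

import org.apache.spark.sql.types._

// One StructField per CSV column; nullable defaults to true
val citySchema = StructType(Array(
  StructField("name", StringType),
  StructField("state", StringType),
  StructField("number_of_people", LongType),
  StructField("coolness_index", DoubleType)))

val cities = spark.read
  .option("header", "true")
  .schema(citySchema)
  .csv(location)
  .as[City]

Unlike inference, an explicit schema avoids the extra pass over the data and fails fast if the file doesn't match it.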

or cast:

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.LongType

val cities = spark.read
  .option("header", "true")
  .csv(location)
  .withColumn("number_of_people", col("number_of_people").cast(LongType))
  .as[City]
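All three approaches work for this file. Inference is convenient but costs an extra read and may guess types differently than you expect; an explicit schema is the most predictable choice for production jobs; the cast is handy when you only need to fix one or two columns and are happy to read the rest as strings.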