How to build a SparkSession in Spark 2.0 using PySpark?

haileyeve · Sep 30, 2016

I just got access to Spark 2.0; I have been using Spark 1.6.1 up until this point. Can someone please help me set up a SparkSession using PySpark (Python)? I know the Scala examples available online are similar (here), but I was hoping for a direct walkthrough in Python.

My specific case: I am loading Avro files from S3 in a Zeppelin Spark notebook, then building DataFrames and running various PySpark and SQL queries off of them. All of my old queries use sqlContext. I know this is poor practice, but I started my notebook with

sqlContext = SparkSession.builder.enableHiveSupport().getOrCreate()

I can read in the Avro files with

mydata = sqlContext.read.format("com.databricks.spark.avro").load("s3:...

and build DataFrames with no issues. But once I start querying the DataFrames/temp tables, I keep getting a "java.lang.NullPointerException" error. I think that points to a translation problem (e.g. queries that worked in 1.6.1 need to be tweaked for 2.0). The error occurs regardless of query type. So I am assuming

1.) the sqlContext alias is a bad idea

and

2.) I need to properly set up a SparkSession (my best guess is sketched below).
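My best guess at the proper 2.0-style setup, adapted from the Scala examples, is below, though I'm not sure it's right (the app name is just a placeholder):

from pyspark.sql import SparkSession

# my guess at the Spark 2.0 entry point; 'myapp' is a placeholder
spark = SparkSession.builder \
    .appName('myapp') \
    .enableHiveSupport() \
    .getOrCreate()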

So if someone could show me how this is done (or correct my guess above), or perhaps explain the discrepancies they know of between the different versions of Spark, I would greatly appreciate it. Please let me know if I need to elaborate on this question. I apologize if it is convoluted.

Answer

Csaxena · May 18, 2018
from pyspark.sql import SparkSession

# getOrCreate() reuses an existing session if one is already running
spark = SparkSession.builder.appName('abc').getOrCreate()
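Note that in Spark 2.x the SparkSession subsumes the old SQLContext and HiveContext, so enableHiveSupport() on the builder (as in the question) replaces creating a separate HiveContext. A small sketch of what the session exposes:

spark.sql("SELECT 1").show()   # SQL runs off the session directly
sc = spark.sparkContext        # the underlying SparkContext is still reachable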

Now, to read in a CSV file you can use:

df = spark.read.csv('filename.csv', header=True)
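To tie this back to the question's Avro-on-S3 case, the same session can load Avro and run SQL against a temp view. A sketch, assuming the com.databricks:spark-avro package is on the classpath and with a made-up S3 path:

# In Spark 2.0, Avro support comes from the external spark-avro package;
# the path below is a placeholder.
mydata = spark.read.format("com.databricks.spark.avro") \
    .load("s3://some-bucket/some-path/")

# registerTempTable from 1.6 is deprecated in favor of
# createOrReplaceTempView, and SQL now runs off the session itself.
mydata.createOrReplaceTempView("mydata")
spark.sql("SELECT COUNT(*) FROM mydata").show()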