Apache Spark SQL is a tool for "SQL and structured data processing" on Spark, a fast and general-purpose cluster computing system.
import numpy as np

df = spark.createDataFrame(
    [(1, 1, None), (1, 2, float(5)), (1, 3, np.nan), (1, 4, None),
     (1, 5, float(10)), (1, 6, float('nan')), (1, 6, float('nan'))],
    ('session', 'timestamp1', 'id2'))
…
Tags: apache-spark, pyspark, apache-spark-sql

I have this code:

l = [('Alice', 1), ('Jim', 2), ('Sandra', 3)]
df = sqlContext.createDataFrame(l, ['name', 'age'])
df.withColumn('age2', df.age + 2).…
Tags: python, apache-spark-sql, spark-dataframe

What's the difference between spark.sql.shuffle.partitions and spark.default.parallelism? I have tried to set both of them …
Tags: performance, apache-spark, hadoop, apache-spark-sql

I have a large pyspark.sql.dataframe.DataFrame and I want to keep (so filter) all rows where the URL …
Tags: python, apache-spark, pyspark, apache-spark-sql

Right now, I have to use df.count() > 0 to check if the DataFrame is empty or not. But it …
Tags: apache-spark, apache-spark-sql

I am using Spark 1.3 and would like to join on multiple columns using the Python interface (SparkSQL). The following works: I …
Tags: python, apache-spark, join, pyspark, apache-spark-sql

Consider I have a defined schema for loading 10 CSV files in a folder. Is there a way to automatically load …
Tags: apache-spark, apache-spark-sql, spark-dataframe

I'm using the following code to aggregate students per year. The purpose is to know the total number of students …
Tags: python, pyspark, apache-spark-sql

The goal of this question is to document: steps required to read and write data using JDBC connections in PySpark …
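A sketch of the JDBC read and write pattern in PySpark. The URL, table name, and credentials below are placeholders, not values from the question, and actually running either function requires the matching JDBC driver jar on the Spark classpath (e.g. via `spark.jars`):

```python
def read_jdbc(spark, url, table, user, password):
    # Build a DataFrame backed by a JDBC table, e.g.
    # url='jdbc:postgresql://host:5432/db', table='schema.table' (placeholders).
    return (spark.read.format('jdbc')
            .option('url', url)
            .option('dbtable', table)
            .option('user', user)
            .option('password', password)
            .load())


def write_jdbc(df, url, table, user, password, mode='append'):
    # Write a DataFrame out over JDBC; mode can be 'append', 'overwrite', etc.
    (df.write.format('jdbc')
     .option('url', url)
     .option('dbtable', table)
     .option('user', user)
     .option('password', password)
     .mode(mode)
     .save())
```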
Tags: python, scala, apache-spark, apache-spark-sql, pyspark

I am trying to efficiently join two DataFrames, one of which is large and the second is a bit smaller. …
Tags: apache-spark, dataframe, apache-spark-sql, apache-spark-1.4