Apache Spark is an open-source distributed data-processing engine written in Scala that provides a unified API and distributed datasets for both batch and streaming processing.
What's the difference between spark.sql.shuffle.partitions and spark.default.parallelism? I have tried to set both of them …
performance apache-spark hadoop apache-spark-sql

I have a large pyspark.sql.dataframe.DataFrame and I want to keep (that is, filter) all rows where the URL …
python apache-spark pyspark apache-spark-sql

I have a Spark app which runs with no problem in local mode, but has some problems when submitting to …
scala apache-spark

Right now, I have to use df.count > 0 to check if the DataFrame is empty or not. But it …
apache-spark apache-spark-sql

When using Scala in Spark, whenever I dump the results out using saveAsTextFile, it seems to split the output into …
scala apache-spark

Can anyone explain the difference between reduceByKey, groupByKey, aggregateByKey and combineByKey? I have read the documents regarding this, but couldn't …
apache-spark

I'm trying to get the path to spark.worker.dir for the current SparkContext. If I explicitly set it as …
apache-spark config pyspark

I am writing a Spark application and want to combine a set of Key-Value pairs (K, V1), (K, V2), ..., (K, …
python apache-spark mapreduce pyspark rdd

I am using Spark 1.3 and would like to join on multiple columns using the Python interface (SparkSQL). The following works: I …
python apache-spark join pyspark apache-spark-sql

Consider I have a defined schema for loading 10 CSV files in a folder. Is there a way to automatically load …
apache-spark apache-spark-sql spark-dataframe