Incorrect memory allocation for Yarn/Spark after automatic setup of Dataproc Cluster

habitats · Nov 8, 2015

I'm trying to run Spark jobs on a Dataproc cluster, but Spark will not start due to Yarn being misconfigured.

I get the following error when running spark-shell locally on the master, as well as when submitting a job through the web GUI or the gcloud command-line utility from my local machine:

15/11/08 21:27:16 ERROR org.apache.spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (38281+2679 MB) is above the max threshold (20480 MB) of this cluster! Please increase the value of 'yarn.scheduler.maximum-allocation-mb'.

I tried modifying the value in /etc/hadoop/conf/yarn-site.xml, but it didn't change anything; it doesn't seem to pull the configuration from that file.
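
For what it's worth, here is a quick sanity check of what that file contains for the property named in the error (the path is the one from above; this only shows the on-disk value, it doesn't prove YARN is actually reading it):

    # Show the on-disk value of the property the error complains about
    # (path from the question; YARN may still be using a different value).
    grep -A 1 'yarn.scheduler.maximum-allocation-mb' /etc/hadoop/conf/yarn-site.xml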

I've tried multiple cluster configurations in multiple regions (mainly Europe), and I only got this to work with the low-memory machine type (4 cores, 15 GB memory).

That is, this is only a problem on machine types configured with more memory than the YARN default allows.

Answer

Dennis Huo · Nov 8, 2015

Sorry about these issues you're running into! It looks like this is part of a known issue where certain memory settings end up computed based on the master machine's size rather than the worker machines' size, and we're hoping to fix this in an upcoming release soon.

There are two current workarounds:

  1. Use a master machine type with memory either equal to or smaller than worker machine types.
  2. Explicitly set spark.executor.memory and spark.executor.cores (a way to persist these settings is sketched just after this list), either with the --conf flag if you're running spark-shell over an SSH connection, like:

    spark-shell --conf spark.executor.memory=4g --conf spark.executor.cores=2
    

    or if running gcloud beta dataproc, use --properties:

    gcloud beta dataproc jobs submit spark --properties spark.executor.memory=4g,spark.executor.cores=2
    

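If you'd rather not pass these flags on every invocation, one option is to persist them in spark-defaults.conf on the master, which spark-shell and spark-submit read by default (a sketch only; the /etc/spark/conf path is assumed here, not something prescribed by this answer):

    # Append default executor settings (path assumes the standard Spark conf dir on the master).
    echo "spark.executor.memory 4g" | sudo tee -a /etc/spark/conf/spark-defaults.conf
    echo "spark.executor.cores 2"   | sudo tee -a /etc/spark/conf/spark-defaults.conf
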
You can adjust the number of cores and the amount of memory per executor as necessary. It's fine to err on the side of smaller executors and let YARN pack lots of executors onto each worker, though you can save some per-executor overhead by setting spark.executor.memory to the full size available in each YARN container and spark.executor.cores to all the cores on each worker.
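
As a rough illustration of that sizing math (the 20480 MB container limit comes from the error above; the 8 cores and the default Spark-on-YARN memory overhead of max(384 MB, ~10% of executor memory) are assumptions for the sake of the example):

    # Hypothetical worker: YARN offers 20480 MB per container and 8 cores.
    # Spark on YARN reserves an off-heap overhead of max(384 MB, ~10% of spark.executor.memory),
    # so 18g of heap (~18432 MB) plus ~1843 MB of overhead still fits under the 20480 MB limit.
    spark-shell \
      --conf spark.executor.memory=18g \
      --conf spark.executor.cores=8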

EDIT: As of January 27th, new Dataproc clusters will now be configured correctly for any combination of master/worker machine types, as mentioned in the release notes.