Spark: check your cluster UI to ensure that workers are registered

vineet sinha · Feb 26, 2016 · Viewed 29k times

I have a simple program in Spark:

/* SimpleApp.scala */
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("spark://10.250.7.117:7077")
      .setAppName("Simple Application")
      .set("spark.cores.max", "2")
    val sc = new SparkContext(conf)
    val ratingsFile = sc.textFile("hdfs://hostname:8020/user/hdfs/mydata/movieLens/ds_small/ratings.csv")

    // Print the first 10 records (take(10) on its own discards its result)
    println("Getting the first 10 records: ")
    ratingsFile.take(10).foreach(println)

    // Print the number of records in the movie ratings file
    println("The number of records in the movie list is: " + ratingsFile.count())

    sc.stop()
  }
}

When I run the equivalent commands from the spark-shell, i.e. I log into the name node (a Cloudera installation) and run the following sequentially in the spark-shell:

val ratingsFile = sc.textFile("hdfs://hostname:8020/user/hdfs/mydata/movieLens/ds_small/ratings.csv")
println("Getting the first 10 records: ")
ratingsFile.take(10)
println("The number of records in the movie list is: ")
ratingsFile.count()

I get the correct results. But when I try to run the program from Eclipse, no resources are assigned to the application, and all I see in the console log is:

WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Also, in the Spark UI, I see this:

[Screenshot: the job keeps showing as Running in the Spark UI]

Also, it should be noted that this version of Spark was installed with Cloudera (hence no worker nodes show up).

What should I do to make this work?

EDIT:

I checked the History Server, and these jobs don't show up there (not even under incomplete applications).

Answer

StephenBoesch · Feb 27, 2016

I have done configuration and performance tuning for many Spark clusters, and this is a very common/normal message to see when you are first prepping/configuring a cluster to handle your workloads.

This is unequivocally due to insufficient resources for the job to launch. The job is requesting one of the following (see the configuration sketch after this list):

  • more memory per worker than is allocated to it (1 GB)
  • more CPU cores than are available on the cluster
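
In practice, the quickest check is to compare what the job requests against what the master UI reports each worker as offering, and then shrink the request until it fits. Below is a minimal configuration sketch reusing the standalone master URL from the question; the 512m executor memory value is an illustrative assumption, not something from the original post:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setMaster("spark://10.250.7.117:7077")
  .setAppName("Simple Application")
  // Cap the total cores the job may claim across the cluster;
  // this must not exceed the cores the cluster UI shows as free
  .set("spark.cores.max", "2")
  // Ask for less executor memory than each worker advertises
  // (512m is an assumed value that fits under a 1 GB worker)
  .set("spark.executor.memory", "512m")

The same limits can also be supplied at submit time through spark-submit's --executor-memory and --total-executor-cores flags, which avoids hard-coding them in the application.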