Spark nodes keep printing GC (Allocation Failure) and no tasks run

Eric Meadows · Mar 29, 2019 · Viewed 9.5k times

I am running a Spark job written in Scala, but it gets stuck and no tasks are executed by my worker nodes.

Currently I am submitting the job to Livy, which submits it to our Spark cluster (8 cores and 12 GB of RAM) with the following configuration:

data={
    'file': bar_jar.format(bucket_name),
    'className': 'com.bar.me',
    'jars': [
        common_jar.format(bucket_name),
    ],
    'args': [
        bucket_name,
        spark_master,
        data_folder
    ],
    'name': 'Foo',
    'driverMemory': '2g',
    'executorMemory': '9g',
    'driverCores': 1,
    'executorCores': 1,
    'conf': {
        'spark.driver.memoryOverhead': '200',
        'spark.executor.memoryOverhead': '200',
        'spark.submit.deployMode': 'cluster'
    }
}
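
For context, this dict is the JSON body of a Livy batch request. A minimal sketch of how such a payload is typically submitted (the Livy host and port below are placeholders, not part of the original setup):

import requests

# Placeholder Livy endpoint; substitute the actual Livy host and port.
livy_url = 'http://livy-host:8998'

# POST the batch definition above to Livy's /batches endpoint.
resp = requests.post(f'{livy_url}/batches', json=data,
                     headers={'Content-Type': 'application/json'})
batch = resp.json()
print(batch['id'], batch['state'])   # Livy returns the batch id and its current state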

The node logs are then endlessly filled with:

2019-03-29T22:24:32.119+0000: [GC (Allocation Failure) 2019-03-29T22:24:32.119+0000:
[ParNew: 68873K->20K(77440K), 0.0012329 secs] 257311K->188458K(349944K), 
0.0012892 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]

The issue is that the next stages and tasks are not executing, so the behavior is quite unexpected: the tasks simply won't run.

Answer

Joy Yeh · Jul 20, 2020

It is apparently a normal GC event:

This ‘Allocation Failure’ log is not an error but a totally normal case in the JVM. It is a typical GC event that causes the Java garbage collection process to be triggered. Garbage collection removes dead objects, compacts reclaimed memory, and thus helps free up memory for new object allocations.

Source: https://medium.com/@technospace/gc-allocation-failures-42c68e8e5e04
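
If you want to confirm that these minor collections are not actually eating into executor time, one option is Spark's monitoring REST API, which reports per-executor GC time. A sketch, assuming the Spark UI is reachable; the driver host and application id below are placeholders:

import requests

# Placeholders: the Spark UI host/port and the application id of the stuck job.
spark_ui = 'http://spark-driver-host:4040'
app_id = 'app-20190329222400-0000'

executors = requests.get(f'{spark_ui}/api/v1/applications/{app_id}/executors').json()
for ex in executors:
    # totalGCTime and totalDuration are in milliseconds; a healthy job spends
    # only a small fraction of its task time in GC.
    print(ex['id'], ex['totalGCTime'], ex['totalDuration'])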

Edit: If the next stages are not executing, maybe you should check stderr instead of stdout.
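
Assuming the job was submitted as a Livy batch (as in the question), a minimal sketch of pulling that output back through Livy's REST API follows; the Livy URL and batch id are placeholders. Note that on YARN in cluster mode the driver's stderr ultimately lives in the YARN container logs (yarn logs -applicationId <appId>), which the batch log usually points to:

import requests

livy_url = 'http://livy-host:8998'   # placeholder Livy endpoint
batch_id = 0                         # id returned by Livy when the batch was submitted

# Fetch the most recent log lines for the batch; spark-submit errors and the
# YARN application/tracking id show up here rather than in the GC output.
log = requests.get(f'{livy_url}/batches/{batch_id}/log', params={'size': 200}).json()
for line in log['log']:
    print(line)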