Why are AWS Batch Jobs stuck in RUNNABLE?

arm · Jan 8, 2018 · Viewed 8.1k times

I use a compute environment of 0-256 m3.medium on-demand instances. My job definition requires 1 vCPU and 3 GB of RAM, which an m3.medium provides.

What are possible reasons why AWS Batch Jobs are stuck in state RUNNABLE?
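For reference, this is roughly how I'm listing the stuck jobs with boto3 (a minimal sketch; "my-job-queue" is a placeholder for my actual queue name):

```python
import boto3

batch = boto3.client("batch")

# List jobs sitting in RUNNABLE in the queue ("my-job-queue" is a placeholder).
resp = batch.list_jobs(jobQueue="my-job-queue", jobStatus="RUNNABLE")
for job in resp["jobSummaryList"]:
    print(job["jobId"], job["jobName"])
```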

AWS says:

A job that resides in the queue, has no outstanding dependencies, and is therefore ready to be scheduled to a host. Jobs in this state are started as soon as sufficient resources are available in one of the compute environments that are mapped to the job’s queue. However, jobs can remain in this state indefinitely when sufficient resources are unavailable.

but that does not answer my question

Answer

nachoab · Feb 9, 2018

There are other reasons why a Job can get stuck in RUNNABLE:

  • Insufficient permissions for the role associated with the Compute Environment
  • No internet access from the Compute Environment's instances. You will need to associate a NAT gateway or an Internet Gateway with the Compute Environment's subnet.
    • Make sure to check the "Enable auto-assign public IPv4 address" setting on your Compute Environment's subnet. (Pointed out by @thisisbrians in the comments)
  • Problems with your image. You need to use an ECS-optimized AMI or make sure the ECS container agent is running. More info in the AWS docs. (A quick way to check whether instances ever register with the underlying ECS cluster is sketched after this list.)
  • You're trying to launch an instance type for which your account is limited to 0 instances (EC2 console > Limits, in the left menu). (Read more in gergely-danyi's comment.)
  • And, as mentioned in the question you quoted, insufficient resources in the Compute Environment
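
To narrow down which of these is the culprit, one quick check is whether any instances ever register with the ECS cluster that backs the managed Compute Environment. A minimal boto3 sketch ("my-compute-env" is a placeholder for your Compute Environment name):

```python
import boto3

batch = boto3.client("batch")
ecs = boto3.client("ecs")

# "my-compute-env" is a placeholder for your Compute Environment name.
ce = batch.describe_compute_environments(
    computeEnvironments=["my-compute-env"]
)["computeEnvironments"][0]
print("Compute Environment state/status:", ce["state"], ce["status"])

# Every managed Compute Environment is backed by an ECS cluster.
cluster_arn = ce["ecsClusterArn"]
instances = ecs.list_container_instances(cluster=cluster_arn)["containerInstanceArns"]
print("Registered container instances:", len(instances))
```

If EC2 instances are launching but the cluster shows zero registered container instances, that usually points at networking (no NAT/Internet Gateway, no public IP), the AMI, or the ECS container agent / instance role rather than at insufficient resources.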

Also, make sure to read the AWS Batch troubleshooting guide.