Pods stuck in PodInitializing state indefinitely

Anderson · Nov 15, 2018 · Viewed 24.6k times

I've got a k8s cronjob that consists of an init container and a single main container. If the init container fails, the main container never starts and the Pod stays in "PodInitializing" indefinitely.

My intent is for the job to fail if the init container fails.

---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: job-name
  namespace: default
  labels:
    run: job-name
spec:
  schedule: "15 23 * * *"
  startingDeadlineSeconds: 60
  concurrencyPolicy: "Forbid"
  successfulJobsHistoryLimit: 30
  failedJobsHistoryLimit: 10
  jobTemplate:
    spec:
      # only try twice
      backoffLimit: 2
      activeDeadlineSeconds: 60
      template:
        spec:
          initContainers:
          - name: init-name
            image: init-image:1.0
          restartPolicy: Never
          containers:
          - name: some-name
            image: someimage:1.0
          restartPolicy: Never

Running kubectl describe pod on the pod that's stuck results in:

Name:               job-name-1542237120-rgvzl
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               my-node-98afffbf-0psc/10.0.0.0
Start Time:         Wed, 14 Nov 2018 23:12:16 +0000
Labels:             controller-uid=ID
                    job-name=job-name-1542237120
Annotations:        kubernetes.io/limit-ranger:
                      LimitRanger plugin set: cpu request for container elasticsearch-metrics; cpu request for init container elasticsearch-repo-setup; cpu requ...
Status:             Failed
IP:                 10.0.0.0
Controlled By:      Job/job-name-1542237120
Init Containers:
  init-container-name:
    Container ID:  docker://ID
    Image:         init-image:1.0
    Image ID:      init-imageID
    Port:          <none>
    Host Port:     <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 14 Nov 2018 23:12:21 +0000
      Finished:     Wed, 14 Nov 2018 23:12:32 +0000
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wwl5n (ro)
Containers:
  some-name:
    Container ID:  
    Image:         someimage:1.0
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wwl5n (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True

Answer

ajtrichards · Nov 15, 2018

To figure this out, I would first run:

kubectl get pods - add the -n <namespace> flag if the job isn't running in the default namespace.

Then copy the pod name and run:

kubectl describe pod {POD_NAME}

That should tell you why the pod is stuck in the initializing state.
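
For example, a sketch using the pod and init-container names from the describe output above (assuming everything is in the default namespace):

# List the pods created by the CronJob's most recent Job
kubectl get pods -n default

# Describe the stuck pod - the Init Containers section shows which init container failed
kubectl describe pod job-name-1542237120-rgvzl -n default

# The failed init container's logs usually explain the non-zero exit code
kubectl logs job-name-1542237120-rgvzl -c init-container-name -n default

Here the describe output already shows the init container terminated with exit code 1, so its logs are the next place to look.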