Kubernetes Job Cleanup

Lior Regev · Apr 3, 2016 · Viewed 32.1k times

From what I understand, the Job object is supposed to reap pods after a certain amount of time, but on my GKE cluster (Kubernetes 1.1.8) it seems that `kubectl get pods -a` can still list pods from days ago.

All were created using the Jobs API.

I did notice that after deleting the job with `kubectl delete jobs`, the pods were deleted too.

My main concern here is that I am going to run thousands, even tens of thousands, of pods on the cluster in batch jobs, and I don't want to overload the internal backlog system.

Answer

JJC · Mar 30, 2017

It looks like starting with Kubernetes 1.6 (and the v2alpha1 API version), if you're using CronJobs to create the Jobs (which, in turn, create your pods), you can limit how many old Jobs are kept. Just add the following to your CronJob spec:

successfulJobsHistoryLimit: X
failedJobsHistoryLimit: Y

where X and Y are the number of previously run Jobs the system should keep around (by default it keeps Jobs around indefinitely, at least as of version 1.5).
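As a minimal sketch of where these fields sit (the name `history-demo` and the busybox container are placeholders; the two limit fields live at the top level of the CronJob spec, alongside `schedule` and `jobTemplate`):

```yaml
apiVersion: batch/v2alpha1        # batch/v1beta1 or batch/v1 on newer clusters
kind: CronJob
metadata:
  name: history-demo              # hypothetical name
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 3   # keep the 3 most recent successful Jobs
  failedJobsHistoryLimit: 1       # keep only the most recent failed Job
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: demo
            image: busybox
            command: ["echo", "hello"]
```

Once a finished Job falls outside the limit, the controller deletes it, and the Job's pods are garbage-collected along with it.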

Edit 2018-09-29:

For newer K8S versions, updated links with documentation for this are here: