I'm attempting to create a Kubernetes CronJob to run an application every minute.
A prerequisite is that I need to get my application code onto the container that runs within the CronJob. I figure the best way to do so is to use a PersistentVolume and a PersistentVolumeClaim, then define the volume and mount it into the container. I've done this successfully with containers running within a Pod, but it appears to be impossible within a CronJob. Here's my attempted configuration:
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: update_db
spec:
  volumes:
    - name: application-code
      persistentVolumeClaim:
        claimName: application-code-pv-claim
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: update-fingerprints
              image: python:3.6.2-slim
              command: ["/bin/bash"]
              args: ["-c", "python /client/test.py"]
          restartPolicy: OnFailure
The corresponding error:
error: error validating "cron-applications.yaml": error validating data: found invalid field volumes for v2alpha1.CronJobSpec; if you choose to ignore these errors, turn validation off with --validate=false
I can't find any resources showing that this is possible. If it isn't, how does one solve the problem of getting application code into a running CronJob?
A CronJob uses a PodTemplate, just like everything else that is based on Pods, and can therefore use volumes. You placed your volumes specification directly in the CronJobSpec instead of in the PodSpec; it belongs under jobTemplate.spec.template.spec. Note also that update_db is not a valid resource name, since object names may not contain underscores; use update-db instead. Put together, it looks like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: update-db
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: update-fingerprints
              image: python:3.6.2-slim
              command: ["/bin/bash"]
              args: ["-c", "python /client/test.py"]
              volumeMounts:
                - name: application-code
                  mountPath: /where/ever
          restartPolicy: OnFailure
          volumes:
            - name: application-code
              persistentVolumeClaim:
                claimName: application-code-pv-claim
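For the command above to actually find the script, the /where/ever placeholder in mountPath would need to be the directory the args expect, i.e. /client.

The claim referenced by claimName must already exist in the same namespace. As a minimal sketch, a PersistentVolumeClaim for it might look like this (the access mode and storage size are placeholders to adapt to your cluster and storage backend):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: application-code-pv-claim
spec:
  accessModes:
    - ReadWriteOnce  # placeholder; pick a mode your storage backend supports
  resources:
    requests:
      storage: 1Gi   # placeholder size

You can then apply the manifest and watch the Jobs the CronJob spawns:

kubectl apply -f cron-applications.yaml
kubectl get cronjob update-db
kubectl get jobs --watch

If you're ever unsure where a field belongs in the schema, kubectl explain will show you, for example:

kubectl explain cronjob.spec.jobTemplate.spec.template.spec.volumes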