I am trying to create a Kubernetes pod with a single container which has two external volumes mounted on it. My .yml pod file is:
apiVersion: v1
kind: Pod
metadata:
  name: my-project
  labels:
    name: my-project
spec:
  containers:
  - image: my-username/my-project
    name: my-project
    ports:
    - containerPort: 80
      name: nginx-http
    - containerPort: 443
      name: nginx-ssl-https
    imagePullPolicy: Always
    volumeMounts:
    - mountPath: /home/projects/my-project/media/upload
      name: pd-data
    - mountPath: /home/projects/my-project/backups
      name: pd2-data
  imagePullSecrets:
  - name: vpregistrykey
  volumes:
  - name: pd-data
    persistentVolumeClaim:
      claimName: pd-claim
  - name: pd2-data
    persistentVolumeClaim:
      claimName: pd2-claim
I am using Persistent Volumes and Persistent Volume Claims, as follows:
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd-disk
  labels:
    name: pd-disk
spec:
  capacity:
    storage: 250Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "pd-disk"
    fsType: "ext4"
PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Gi
I initially created the disks using the command:
$ gcloud compute disks create --size 250GB pd-disk
Same goes for the second disk and its PV and PVC. Everything seems to work fine when I create the pod; no errors are thrown. Now comes the weird part: one of the paths is mounted correctly (and is therefore persistent), but the other one is erased every time I restart the pod...
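For completeness, the bindings and the underlying disks can be sanity-checked with something like this (both claims should show up as Bound, and both disks should be listed as READY):
$ kubectl get pv,pvc
$ gcloud compute disks list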
I have tried re-creating everything from scratch, but nothing changes. Also, from the pod description, both volumes seem to be correctly mounted:
$ kubectl describe pod my-project
Name:           my-project
...
Volumes:
  pd-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pd-claim
    ReadOnly:   false
  pd2-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pd2-claim
    ReadOnly:   false
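The mounts can also be checked from inside the container. Assuming the image has df available, each of the two paths from the pod spec should be reported on its own persistent-disk device rather than on the container's root filesystem:
$ kubectl exec my-project -- df -h /home/projects/my-project/media/upload
$ kubectl exec my-project -- df -h /home/projects/my-project/backups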
Any help is appreciated. Thanks.
I don't see a direct problem that would explain the behavior above. What I would ask you to try instead is a Deployment rather than a bare Pod, as others here have suggested, especially when using PVs and PVCs. A Deployment takes care of maintaining the desired state for you. I have attached my working configuration below for reference; both volumes remain persistent even after the pods are deleted, terminated, or restarted, because the Deployment keeps everything at its desired state.
Two differences you will find between my code and yours:
Deployment yml.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: platform
  labels:
    component: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        component: nginx
    spec:
      nodeSelector:
        role: app-1
      containers:
      - name: nginx
        image: vip-intOAM:5001/nginx:1.15.3
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/etc/nginx/conf.d/"
          name: nginx-confd
        - mountPath: "/var/www/"
          name: nginx-web-content
      volumes:
      - name: nginx-confd
        persistentVolumeClaim:
          claimName: glusterfsvol-nginx-confd-pvc
      - name: nginx-web-content
        persistentVolumeClaim:
          claimName: glusterfsvol-nginx-web-content-pvc
One of my PVs
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-nginx-confd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  glusterfs:
    endpoints: gluster-cluster
    path: nginx-confd
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-nginx-confd-pvc
    namespace: platform
PVC for the above
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfsvol-nginx-confd-pvc
  namespace: platform
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
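For completeness, a rough way to apply these (file names are just examples) is PV and PVC first, so the claimRef binding is in place before the pods start, then the Deployment:
$ kubectl apply -f glusterfsvol-nginx-confd-pv.yaml
$ kubectl apply -f glusterfsvol-nginx-confd-pvc.yaml
$ kubectl apply -f nginx-deployment.yaml
$ kubectl rollout status deployment/nginx -n platform
After that, restarting or deleting the pods only causes the Deployment to recreate them against the same claims.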