K8s Persistent Volume
March 8, 2021 | Cluster
Persistent Volume (Local)
There are a lot of different types of volume mounting, mainly because Kubernetes (K8s) is often used for high availability. The whole concept of storage on a single node is somewhat at odds with that philosophy. But to keep things simple here, we will use the local storage type. Another solution that is often used in the development/Pi area is the NFS type (Network File System).
When working with local storage, K8s needs a storage class set up to control access to the volume. So, first we need to make a storageClass.yml (or whatever you want to call it):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: pi3-storage-class
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
kubectl create -f storageClass.yml
and then we can set up our persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pi3-pv
spec:
  capacity:
    storage: 28Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: pi3-storage-class
  local:
    path: /mnt/storage
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - pi3
kubectl create -f persistentVolume.yml
If everything went well thus far, you should be able to see your persistent volume:
kubectl get pv
The next thing is to make a claim on some of the storage (pi3PVC.yml):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pi3-test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: pi3-storage-class
  resources:
    requests:
      storage: 5Mi
and create it:
kubectl create -f pi3PVC.yml
Worth noting here is that the claim is only connected to the storage class, not directly to the volume we created.
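Also, since the storage class uses volumeBindingMode: WaitForFirstConsumer, the claim will typically sit in Pending until a pod actually consumes it. You can check its status like this (using the claim name from the example above):

```shell
# List the claim; STATUS usually stays Pending until a consuming pod is scheduled
kubectl get pvc pi3-test-claim
```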
Let's create a simple Nginx pod (http-pod.yml):
apiVersion: v1
kind: Pod
metadata:
  name: www
  labels:
    name: www
spec:
  containers:
    - name: www
      image: nginx:alpine
      ports:
        - containerPort: 80
          name: www
      volumeMounts:
        - name: www-persistent-storage
          mountPath: /usr/share/nginx/html
  volumes:
    - name: www-persistent-storage
      persistentVolumeClaim:
        claimName: pi3-test-claim
kubectl create -f http-pod.yml
Create an index.html file in the storage (pi3:/mnt/storage):
pi3# echo "Hello World!" > /mnt/storage/index.html
Now the PV will be bound and serving content to the pod.
To find the pod IP use the following line:
kubectl get pod -o wide
Try calling the pod to get a response:
curl [pod IP]
If everything went fine, you should get a "Hello World!" response back.
Let's delete the pod and make a deployment:
kubectl delete pod www
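The deployment itself isn't spelled out here, but a minimal sketch reusing the pod spec and claim from above could look like this (the names are assumptions carried over from the pod example):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: www
spec:
  replicas: 1
  selector:
    matchLabels:
      name: www
  template:
    metadata:
      labels:
        name: www
    spec:
      containers:
        - name: www
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: www-persistent-storage
              mountPath: /usr/share/nginx/html
      volumes:
        - name: www-persistent-storage
          persistentVolumeClaim:
            claimName: pi3-test-claim
```

Note that with a ReadWriteOnce local volume pinned to node pi3, every replica has to land on that same node, so keeping replicas at 1 is the safe choice.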
If you plan to use an SD card for the NFS share, follow the instructions for preparing and setting up the NFS share first.
Download the following three YAML files:
wget https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/raw/master/deploy/rbac.yaml
wget https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/raw/master/deploy/class.yaml
wget https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/raw/master/deploy/deployment.yaml
Then edit deployment.yaml: set the values of NFS_SERVER and NFS_PATH, as well as "server:" and "path:" under "nfs:", all at the bottom of the file. Also, "mountPath:" under "containers:" has to point to a valid directory on the client.
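The relevant part at the bottom of deployment.yaml looks roughly like this (the server IP and export path here are placeholders; substitute your own NFS server's values):

```yaml
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.1.10      # placeholder: your NFS server IP
            - name: NFS_PATH
              value: /mnt/nfs-share    # placeholder: your exported path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.10      # must match NFS_SERVER above
            path: /mnt/nfs-share      # must match NFS_PATH above
```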
and create the resources from the files:
sudo kubectl create -f rbac.yaml
sudo kubectl create -f deployment.yaml
sudo kubectl create -f class.yaml
The storage class, role-based access, and provisioner pod have all been deployed now. There may have been some Googling and debugging along the way, but hopefully everything went well.
Now the volume claim(s) just have to be made. Something like the following, and you should be ready to go:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pv-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
  labels:
    app: blog
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
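Create the claim and check that the provisioner binds it (the filename here is an assumption; use whatever you saved the claim as):

```shell
kubectl create -f my-pv-claim.yml   # assuming you saved the claim above as my-pv-claim.yml
kubectl get pvc my-pv-claim         # STATUS should go to Bound once provisioned
```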