Using NFS persistent volumes is a relatively easy on-ramp to the Kubernetes storage infrastructure.
Before following this guide, you should have a Kubernetes cluster installed. If you don’t, check out the guide on how to Install K3s.
Setting up the NFS share
We will share a directory on the primary cluster node for all the other nodes to access. I will assume that you want to share a filesystem mounted at /mnt/storage-disk.
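If that directory does not already exist on the primary node, a minimal preparatory step (assuming your disk is already mounted, or that you simply want a plain directory to share) looks like this:

```sh
# Create the directory that will be exported over NFS, if it is not already there.
sudo mkdir -p /mnt/storage-disk
```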
- Install the NFS service:

  ```sh
  sudo apt update && sudo apt install -y nfs-kernel-server
  ```

- Edit the file `/etc/exports`:

  ```sh
  sudo nano -w /etc/exports
  ```

- Add a new line at the bottom with the following content:

  ```
  /mnt/storage-disk *(rw,sync,no_subtree_check,no_root_squash,anonuid=65534,anongid=65534)
  ```

- Save and exit the editor with ctrl+x, followed by y to indicate that we want to save the file, and finally enter to confirm.

- Load the configuration we just wrote into the NFS server:

  ```sh
  sudo exportfs -a
  ```
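As an optional sanity check, you can ask the NFS server to list its active exports; the exact output format varies, but the new share should appear:

```sh
# List the directories currently exported; /mnt/storage-disk should be shown.
sudo exportfs -v
```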
Installing the NFS client onto each of the other nodes
On each of the other nodes, we need the NFS client to be installed or pods will fail to schedule and start. On k8s-2 and k8s-3 run:
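Assuming the other nodes run a Debian/Ubuntu-based distribution like the primary node, the client install command would look like this:

```sh
# Install the NFS client utilities (package name assumes Debian/Ubuntu).
sudo apt update && sudo apt install -y nfs-common
```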
Configuring the NFS provisioner
We could create a Persistent Volume and a Persistent Volume Claim manually, but there’s an automated method using a Provisioner. This is a special component that takes our NFS share and automatically slices it up into Persistent Volumes whenever a new Persistent Volume Claim is created, until the provisioned storage space is exhausted.
To install the provisioner we’ll first get helm, because it simplifies the installation considerably!
- Install `helm` on the primary cluster node:

  ```sh
  curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
  ```

- Configure `helm` to access the provisioner repository:

  ```sh
  sudo helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
  ```

- Using `helm`, install the provisioner:

  ```sh
  sudo env KUBECONFIG=/etc/rancher/k3s/k3s.yaml \
    helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=k8s-1 \
    --set nfs.path=/mnt/storage-disk
  ```

  There are many options that may be set with the `--set` flag. Each option must be supplied as its own `--set option.name=value` parameter pair. For the full list of parameters, see the configuration section of the helm chart README for the NFS provisioner.
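To confirm the provisioner came up, a rough verification sketch follows; the provisioner pod name will include the helm release name, and `nfs-client` is the chart's default storage class name (the same one we rely on later):

```sh
# The nfs-client storage class should now be listed.
sudo kubectl get storageclass
# The provisioner pod should reach the Running state.
sudo kubectl get pods
```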
Creating a test deployment to verify the provisioner
Now we will test that the provisioner is working by creating a test claim and a test deployment that uses the claim.
- Create and edit a file called `test-claim.yaml`:

  ```sh
  nano -w test-claim.yaml
  ```

- Paste the following content into the editor:

  ```yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-claim
  spec:
    storageClassName: nfs-client
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 1Mi
  ```

  - The `apiVersion` must be set to `v1` so that K8s knows which fields are acceptable.
  - The `kind` must be `PersistentVolumeClaim` to inform K8s that we are creating a PVC.
  - The `metadata.name` entry is an arbitrary name for the PVC, but we will use it to tie the PVC to the pod in the following steps.
  - The `spec.storageClassName` must be `nfs-client` because this is what the default installation using `helm` sets up.
  - The `spec.accessModes` should be a list with a single item of value `ReadWriteMany`. The available access modes are:
    - `ReadWriteOnce` – the volume can be mounted as read-write by a single node
    - `ReadOnlyMany` – the volume can be mounted read-only by many nodes
    - `ReadWriteMany` – the volume can be mounted as read-write by many nodes
  - The `spec.resources.requests.storage` should be a number suffixed with `Mi`, `Gi`, or `Ti`, indicating Mebibytes (`1024*1024` bytes), Gibibytes (`1024*1024*1024` bytes), or Tebibytes (`1024*1024*1024*1024` bytes). E.g. `5Gi` would equal five Gibibytes, or `5*1024*1024*1024` bytes. If there is insufficient space in the NFS share then this PVC will stay in a pending state.

- Save and exit the editor with ctrl+x, followed by y to indicate that we want to save the file, and finally enter to confirm.
- Create and edit a file called `test-pod.yaml`:

  ```sh
  nano -w test-pod.yaml
  ```

- Paste the following content into the editor:

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pod
  spec:
    containers:
      - name: test-pod
        image: gcr.io/google_containers/busybox:1.24
        command:
          - "/bin/sh"
        args:
          - "-c"
          - "touch /mnt/SUCCESS && exit 0 || exit 1"
        volumeMounts:
          - name: nfs-pvc
            mountPath: "/mnt"
    restartPolicy: "Never"
    volumes:
      - name: nfs-pvc
        persistentVolumeClaim:
          claimName: test-claim
  ```

  - The `apiVersion` must be set to `v1` so that K8s knows which fields are acceptable.
  - The `kind` must be `Pod` to inform K8s that we are creating a pod.
  - The `metadata.name` entry is an arbitrary name for the pod.
  - The `spec.containers` field takes a list of container definitions:
    - The `name` field is an arbitrary name for the container.
    - The `image` field is the docker-compatible container image name. Here we are using a container hosted by Google called `busybox` with version `1.24`.
    - The `command` field is the command to run inside the container when it is launched.
    - The `args` field is a list of arguments to pass to the command specified above.
    - The `volumeMounts` field is a list of mount definitions for inside the container.
      - The `name` field in the mount definition is the name of the volume as configured in `.spec.volumes`.
      - The `mountPath` field tells K8s where to mount the filesystem from the volume into the container.
  - The `volumes` field is a list of volumes to make available to containers within this pod.
    - The volume `name` field is an arbitrary name that we reference in the `.spec.containers.volumeMounts.name` field.
    - The `persistentVolumeClaim.claimName` field is the name of a PVC. We configured this in our `test-claim.yaml` file’s `.metadata.name` field, i.e. `test-claim`.

- Save and exit the editor with ctrl+x, followed by y to indicate that we want to save the file, and finally enter to confirm.
- Apply the claim and the pod definitions to the cluster:

  ```sh
  sudo kubectl apply -f test-claim.yaml -f test-pod.yaml
  ```
The container should now download its image, run, and exit. Left behind will be a new folder called `/mnt/storage-disk/default-test-claim-pvc-*`, where the `*` is a UUID. Inside this folder should be a single file called `SUCCESS`, indicating that the PVC was successfully provisioned, mounted into the container, and that the container was able to write a file.
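A few commands you can run on the primary node to check the result; the generated portion of the directory name will differ on your cluster:

```sh
# The claim should be Bound to an automatically provisioned volume.
sudo kubectl get pvc test-claim
# The test pod should report a Completed status once it has run.
sudo kubectl get pod test-pod
# The provisioner creates a per-claim directory inside the NFS share.
ls /mnt/storage-disk/default-test-claim-pvc-*
```

Once you are satisfied that everything works, the test resources can be removed with `sudo kubectl delete -f test-pod.yaml -f test-claim.yaml`.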