Kubernetes with NFS Persistent Volumes

Using NFS persistent volumes is a relatively easy on-ramp to the kubernetes storage infrastructure.

Before following this guide, you should have an installed kubernetes cluster. If you don’t, check out the Install K3s guide.

We will share a directory on the primary cluster node for all the other nodes to access. I will assume that you want to share a filesystem mounted at /mnt/storage-disk.

  1. Install the NFS service:

    sudo apt update && sudo apt install -y nfs-kernel-server

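    Optionally, confirm the server is running (on Ubuntu the nfs-kernel-server package installs a unit named nfs-server):

    sudo systemctl status nfs-server
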
  2. Edit the file /etc/exports:

    sudo nano -w /etc/exports

  3. Add a new line at the bottom with the following content:

    /mnt/storage-disk *(rw,sync,no_subtree_check,no_root_squash,anonuid=65534,anongid=65534)

    These options make the share read-write (rw), commit writes to disk before replying (sync), disable subtree checking (no_subtree_check), allow root on the clients to act as root on the share (no_root_squash), and map anonymous access to the nobody user and group (anonuid=65534, anongid=65534).

  4. Save and exit the editor with ctrl+x, then y to confirm saving, and finally enter to accept the filename.

  5. Load the configuration we just wrote into the NFS server:

    sudo exportfs -a
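
    To verify, list the active exports with their options:

    sudo exportfs -v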

On each of the other nodes, the NFS client needs to be installed, or pods that mount NFS volumes will fail to start. On k8s-2 and k8s-3 run:

sudo apt update && sudo apt install -y nfs-common

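Optionally, you can verify the share is reachable from a node by mounting it by hand (this assumes the primary node resolves as k8s-1, the same name we use for the helm install below):

mkdir -p /tmp/nfs-test
sudo mount -t nfs k8s-1:/mnt/storage-disk /tmp/nfs-test
ls /tmp/nfs-test
sudo umount /tmp/nfs-test
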
We could create a Persistent Volume and a Persistent Volume Claim manually, but there’s an automated method using a Provisioner. This is a special component that takes our NFS share and automatically slices it up into Persistent Volumes whenever a new Persistent Volume Claim is created, until the provisioned storage space is exhausted.
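
To see what the provisioner saves us from, here is a minimal sketch of a manually created Persistent Volume (the name manual-nfs-pv is arbitrary, and we won’t apply this file in this guide):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: k8s-1
    path: /mnt/storage-disk

Every volume would need one of these plus a matching Persistent Volume Claim; the provisioner creates them on demand instead.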

To install the provisioner we’ll first get helm, because it simplifies the installation enormously!

  1. Install helm on the primary cluster node:

    curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

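    Once the script completes, you can confirm helm is installed and on your PATH:

    helm version
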
  2. Configure helm to access the provisioner repository:

    sudo helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

  3. Using helm, install the provisioner:

    sudo env KUBECONFIG=/etc/rancher/k3s/k3s.yaml \
    helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
      --set nfs.server=k8s-1 \
      --set nfs.path=/mnt/storage-disk

    There are many options that may be set with the --set flag. Each option must be supplied with its own --set option.name=value parameter pair. For the full list of parameters, see the configuration section of the helm chart readme for the NFS provisioner.
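
    For example, the storage class can be renamed and released volumes archived rather than deleted (a hypothetical variation; the parameter names come from the chart readme, so check them against your chart version):

    sudo env KUBECONFIG=/etc/rancher/k3s/k3s.yaml \
    helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
      --set nfs.server=k8s-1 \
      --set nfs.path=/mnt/storage-disk \
      --set storageClass.name=nfs-storage \
      --set storageClass.archiveOnDelete=true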

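Before creating any claims, you can check that the provisioner pod is running and that the nfs-client storage class now exists:

sudo kubectl get pods
sudo kubectl get storageclass
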
Now we will test that the provisioner is working by creating a test claim and a test deployment that uses the claim.

  1. Create and edit a file called test-claim.yaml:

    nano -w test-claim.yaml

  2. Paste the following content into the editor:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-claim
    spec:
      storageClassName: nfs-client
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Mi
    
    • The apiVersion must be set to v1 so that K8s knows which fields are acceptable.

    • The kind must be PersistentVolumeClaim to inform K8s that we are creating a PVC.

    • The metadata.name entry is an arbitrary name for the PVC, but we will use it to tie the PVC to the pod in the following steps.

    • The spec.storageClassName must be nfs-client because this is what the default installation using helm sets up.

    • The spec.accessModes should be a list with a single item of value ReadWriteMany.

      The available access modes are:

      • ReadWriteOnce – the volume can be mounted as read-write by a single node
      • ReadOnlyMany – the volume can be mounted read-only by many nodes
      • ReadWriteMany – the volume can be mounted as read-write by many nodes
    • The spec.resources.requests.storage should be a number suffixed with Mi, Gi, or Ti, indicating mebibytes (1024*1024 bytes), gibibytes (1024*1024*1024 bytes), or tebibytes (1024*1024*1024*1024 bytes). E.g. 5Gi equals five gibibytes, or 5*1024*1024*1024 bytes. If there is insufficient space in the NFS share, this PVC will stay in a pending state.

  3. Save and exit the editor with ctrl+x, then y to confirm saving, and finally enter to accept the filename.

  4. Create and edit a file called test-pod.yaml:

    nano -w test-pod.yaml

  5. Paste the following content into the editor:

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      containers:
      - name: test-pod
        image: gcr.io/google_containers/busybox:1.24
        command:
          - "/bin/sh"
        args:
          - "-c"
          - "touch /mnt/SUCCESS && exit 0 || exit 1"
        volumeMounts:
          - name: nfs-pvc
            mountPath: "/mnt"
      restartPolicy: "Never"
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: test-claim
    
    • The apiVersion must be set to v1 so that K8s knows which fields are acceptable.

    • The kind must be Pod to inform K8s that we are creating a pod.

    • The metadata.name entry is an arbitrary name for the pod.

    • The spec.containers field takes a list of container definitions:

      • The name field is an arbitrary name for the container.
      • The image field is the docker-compatible container image name. Here we are using an image hosted by Google called busybox, at version 1.24.
      • The command field is the command to run inside the container when it is launched.
      • The args field is a list of arguments to pass to the command specified above.
      • The volumeMounts field is a list of mount definitions for inside the container.
        • The name field in the mount definition is the name of the volume as configured in .spec.volumes.
        • The mountPath field tells K8s where to mount the filesystem from the volume into the container.
    • The volumes field is a list of volumes to make available for containers within this pod.

      • The volume name field is an arbitrary name that we use to reference in the .spec.containers.volumeMounts.name field.
      • The persistentVolumeClaim.claimName field is the name of a PVC. We configured this in our test-claim.yaml file’s .metadata.name field, i.e. test-claim.
  6. Save and exit the editor with ctrl+x, then y to confirm saving, and finally enter to accept the filename.

  7. Apply the claim and the pod definitions to the cluster:

    sudo kubectl apply -f test-claim.yaml -f test-pod.yaml

The container should now download its image, run, and exit. Left behind will be a new folder called /mnt/storage-disk/default-test-claim-pvc-* where the * is a UUID. Inside this folder should be a single file called SUCCESS, indicating that the PVC was successfully provisioned, mounted into the container, and written to by the container.
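
You can confirm each part from the primary node; the claim should report Bound and the pod Completed:

sudo kubectl get pvc test-claim
sudo kubectl get pod test-pod
ls /mnt/storage-disk/default-test-claim-pvc-*/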