
Kubernetes 1.27 and GlusterFS - NFS Export

  • dHENRY
  • 15/10/2023
  • (Reading time: 5 min)


As announced in the previous post, support for GlusterFS volumes has been removed in Kubernetes 1.27.

To continue using GlusterFS with Kubernetes, there is a solution that will suit both systems: NFS (Network File System).

Kubernetes supports this kind of volume, and GlusterFS volumes can be exported to NFS via the “nfs-ganesha” service.

To prepare for migration to 1.27 with this major change, make sure that NFS volume support is already working on your current cluster and for all applications using GlusterFS volumes.

NFS and the GlusterFS Server

Warning: if your Gluster server is already using a standard NFS service, this will have to be stopped. Once installed, the NFS-GANESHA service will take over.

To list volumes exported via NFS by your server: showmount -e localhost.
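
To check whether such a service is already active and what it currently exports, here is a minimal sketch (the “nfs-kernel-server” unit name is an assumption for a Debian kernel NFS server; adapt it to your setup):

# Is a legacy NFS server unit present and running? (assumed Debian unit name)
systemctl status nfs-kernel-server
# List what it currently exports
showmount -e localhost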

Preparing the server (Debian 11)

(source):

  • Connected as the “root” user to the Gluster server:
    • Stop the existing NFS service (if any): systemctl stop nfs; systemctl disable nfs.
    • For each Gluster volume: disable the built-in NFS export and enable features.cache-invalidation (a verification sketch follows this list)
volumes=$(gluster v list)
for vol in $volumes
do
  echo "Setting volume : $vol"
  gluster vol set $vol nfs.disable ON
  gluster vol set $vol features.cache-invalidation ON
done
  • Installing the Debian packages: apt install nfs-ganesha nfs-ganesha-gluster
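
Before moving on, you can verify that the two options were applied. A minimal sketch, assuming a volume named “musicdata” (replace with one of your own volume names):

# Both options should now report the values set in the loop above
gluster volume get musicdata nfs.disable
gluster volume get musicdata features.cache-invalidation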

At this stage, check that the nfs-ganesha server is running: systemctl status nfs-ganesha. The status must be “active (running)”.
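
If the service is not active after the package installation, a minimal sketch to enable it at boot, start it, and check again:

# Enable at boot, start now, then re-check the status
systemctl enable --now nfs-ganesha
systemctl status nfs-ganesha --no-pager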

NFS export of a GlusterFS volume

Select a GlusterFS volume to export via NFS. To list your volumes: gluster v list.

Edit the “/etc/ganesha/ganesha.conf” file, adding the first export at the end of the file:

For this example, I’ve chosen to export the GlusterFS volume “musicdata” (to be adapted to your context).

WARNING: when you add volumes to be exported to this file, be sure to change the value of “Export_Id”, which must be unique for each exported volume (an example of a second export follows the block below).

[...]
EXPORT
{
        Export_Id = 1; # Export ID unique to each export
        Path = "musicdata";  # Path of the volume to be exported. Eg: "/test_volume"

        FSAL {
                name = GLUSTER;
                hostname = "localhost";  # IP of one of the nodes in the trusted pool
                volume = "musicdata";         # Volume name. Eg: "test_volume"
        }

        Access_type = RW;        # Access permissions
        Squash = No_root_squash; # To enable/disable root squashing
        Disable_ACL = TRUE;      # To enable/disable ACL
        # Mandatory if using nfsV4
        Pseudo = "/musicdata";        # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
        Protocols = 3,4 ;        # NFS protocols supported
        #Sectype = sys,krb5,krb5i,krb5p;
        SecType = "sys";         # Security flavors supported
}
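
To illustrate the warning above, a second export for a hypothetical Gluster volume “photodata” (name assumed for the example) repeats the same block with a different Export_Id:

EXPORT
{
        Export_Id = 2;           # Must be different from the first export
        Path = "photodata";

        FSAL {
                name = GLUSTER;
                hostname = "localhost";
                volume = "photodata";
        }

        Access_type = RW;
        Squash = No_root_squash;
        Disable_ACL = TRUE;
        Pseudo = "/photodata";
        Protocols = 3,4;
        SecType = "sys";
}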

Save the changes, then reload the NFS configuration: systemctl reload nfs-ganesha

Check that the volume is exported by the NFS service: showmount -e localhost:

root@mytestserver1:~# showmount -e localhost
Export list for localhost:
musicdata (everyone)

If you encounter a problem, check the nfs-ganesha service logs: cat /var/log/ganesha/ganesha.log or tail -f /var/log/ganesha/ganesha.log

Test mounting an NFS volume on one of the Kubernetes cluster workers

  • Connected as “root” to one of your cluster’s “Workers”, create a mount point: mkdir /test-nfs

  • Install the Debian NFS client package (otherwise Pods won’t be able to mount this type of volume): apt install -y nfs-common

  • Connect the NFS export: mount -t nfs [IP address or hostname of the Gluster NFS server]:/[exported volume name] /test-nfs

    • Example with my GlusterFS-NFS server:

      • IP address: 192.168.1.1

      • Hostname: mynfsserver.mylocaldomain

      • Exported Gluster NFS volume name: musicdata

        • By IP address: mount -t nfs 192.168.1.1:/musicdata /test-nfs
        • By hostname: mount -t nfs mynfsserver.mylocaldomain:/musicdata /test-nfs
  • Check mount operation: ls /test-nfs/

Everything’s OK, let’s clean up:

umount /test-nfs
rmdir /test-nfs
  • Install the Debian package “nfs-common” on all your Kubernetes workers.
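
A minimal sketch for doing this from an admin machine, assuming root SSH access and hypothetical worker names (worker1, worker2, worker3):

# Hypothetical worker host names; replace with your own
for worker in worker1 worker2 worker3
do
  ssh root@"$worker" "apt install -y nfs-common"
done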

Kubernetes applications: Replacing “GlusterFS Persistent Volumes” with “NFS Persistent Volumes”

Resource usage in a Kubernetes cluster is driven by your deployment manifests, which specify the type of volumes to be used.
With Gluster, a dedicated endpoint Service was needed; it is no longer necessary. The Gluster-to-NFS conversion takes place in the “kind: PersistentVolume” definition: replace “glusterfs” with “nfs” and adapt its attributes: “endpoints” becomes “server”, and the “path” value must always be given as an absolute path.

With GlusterFS

## Gluster service
apiVersion: "v1"
kind: "Service"
metadata:
  name: "glusterfs-cluster"
  namespace: "mynamespace"
spec:
  ports:
    - port: 1
---
apiVersion: "v1"
kind: "Endpoints"
metadata:
  name: "glusterfs-cluster"
  namespace: "mynamespace"
subsets:
  - addresses:
      - ip: "[Gluster Server Ip address]"
    ports:
      - port: 1
---
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "gluster-pv-mynamespace-data"
spec:
  ## volume can only be used by the namespace mynamespace
  claimRef:
    name: "claimref-gluster-pv-mynamespace-data"
    namespace: "mynamespace"
  capacity:
    storage: "8Gi"
  accessModes:
    - "ReadWriteOnce"
  storageClassName: ""
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "mynamespace-data"
    readOnly: false

With NFS

## With Nfs no endpoint service needed
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "nfs-pv-mynamespace-data"
spec:
  ## volume can only be used by the namespace mynamespace
  claimRef:
    name: "claimref-nfs-pv-mynamespace-data"
    namespace: "mynamespace"
  capacity:
    storage: "8Gi"
  accessModes:
    - "ReadWriteOnce"
  storageClassName: ""
  # replace glusterfs with nfs, put the Gluster NFS server Ip address or Hostname, prefix path value with "/"
  nfs:
    server: "[NFS-Ganesha Server Ip address | hostname]"
    # Warn path must be absolute
    path: "/mstream-data"
    readOnly: false

For maximum compatibility between manifests, all volumes exported via NFS are given the same name as the Gluster volumes specified in the original manifests.
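
For completeness, the PersistentVolumeClaim referenced by “claimRef” above could look like this; a minimal sketch using the names and size assumed in the example manifests:

apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "claimref-nfs-pv-mynamespace-data"
  namespace: "mynamespace"
spec:
  accessModes:
    - "ReadWriteOnce"
  storageClassName: ""
  resources:
    requests:
      storage: "8Gi"
  ## bind explicitly to the NFS PersistentVolume defined above
  volumeName: "nfs-pv-mynamespace-data"

Pods keep mounting the claim by name, so nothing changes on the application side.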

Summary of steps for Kubernetes 1.27 migration

  • Install nfs-ganesha on the Gluster server
  • Export volumes used by applications installed in the Kubernetes cluster
  • Migrate each application installed in the Kubernetes cluster, testing first on a non-critical application…
    • To avoid any surprises, never change application parameters directly in Kubernetes without reflecting these changes in your “manifests”. To completely reinstall an application, simply delete the namespace and apply the new manifest (see the command sketch after this list):
      • Delete the application (kubectl delete -f [my manifest])
      • Modify the manifest (changes linked to use of the NFS service)
      • Apply manifest (kubectl apply -f [my manifest])
      • Save manifest in a git repository…
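
Put together as a minimal shell sketch (the manifest file name is only a placeholder):

# Remove the application, then redeploy it with the NFS-based volumes
kubectl delete -f my-app-manifest.yaml

# ... edit my-app-manifest.yaml: replace the "glusterfs" volume definition with "nfs" as shown above ...

kubectl apply -f my-app-manifest.yaml

# Keep the updated manifest under version control
git add my-app-manifest.yaml
git commit -m "Switch volumes from GlusterFS to NFS"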

Auto-provisioning NFS-Ganesha

If you wish to use an “auto-provisioning” mechanism via a StorageClass, see the nfs-ganesha documentation here.
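
As a rough sketch only: a StorageClass delegates volume creation to an external provisioner. The provisioner name below is a placeholder; replace it with the one given in the nfs-ganesha provisioner documentation:

apiVersion: "storage.k8s.io/v1"
kind: "StorageClass"
metadata:
  name: "gluster-nfs"
# placeholder provisioner name, to be replaced per the provisioner's documentation
provisioner: "example.com/nfs"
reclaimPolicy: "Delete"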

+++

(**) Translated with www.DeepL.com/Translator
