Nodes

Prepare and operate Kubernetes nodes that run the Klustre CSI daemonset.

Klustre CSI schedules only on nodes that can mount Lustre filesystems. Use the topics below to prepare those nodes, understand what the daemonset mounts from the host, and keep kubelet integration healthy.

1 - Node preparation

Install the Lustre client, label nodes, and grant the privileges required by Klustre CSI.

Install the Lustre client stack

Every node that runs Lustre-backed pods must have:

  • mount.lustre and umount.lustre binaries (via lustre-client RPM/DEB).
  • Kernel modules compatible with your Lustre servers.
  • Network reachability to the Lustre MGS/MDS/OSS endpoints.

Verify installation:

mount.lustre --version
lsmod | grep lustre
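
To confirm network reachability to the Lustre servers, you can ping the MGS over LNet; the NID below is a placeholder and LNet must already be configured on the node:

# Replace 10.0.0.10@tcp with your MGS NID.
sudo lctl ping 10.0.0.10@tcp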

Label nodes

The default storage class and daemonset use the label lustre.csi.klustrefs.io/lustre-client=true.

kubectl label nodes <node-name> lustre.csi.klustrefs.io/lustre-client=true

Remove the label when a node no longer has Lustre access:

kubectl label nodes <node-name> lustre.csi.klustrefs.io/lustre-client-
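
To confirm which nodes currently carry the label:

kubectl get nodes -l lustre.csi.klustrefs.io/lustre-client=true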

Allow privileged workloads

Klustre CSI pods require:

  • privileged: true, allowPrivilegeEscalation: true
  • hostPID: true, hostNetwork: true
  • HostPath mounts for /var/lib/kubelet, /dev, /sbin, /usr/sbin, /lib, and /lib64

Label the namespace with Pod Security Admission overrides:

kubectl create namespace klustre-system
kubectl label namespace klustre-system \
  pod-security.kubernetes.io/enforce=privileged \
  pod-security.kubernetes.io/audit=privileged \
  pod-security.kubernetes.io/warn=privileged
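
To confirm the Pod Security Admission labels were applied:

kubectl get namespace klustre-system --show-labels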

Maintain consistency

  • Keep AMIs or OS images in sync so every node has the same Lustre client version.
  • If you use autoscaling groups, bake the client packages into your node image or run a bootstrap script before kubelet starts (a sketch follows this list).
  • Automate label management with infrastructure-as-code (e.g., Cluster API, Ansible) so nodes gain or lose the lustre-client=true label as they join and leave the cluster.
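
As a minimal sketch of the bootstrap approach, the script below installs the client and asks kubelet to apply the scheduling label at registration time. The package names, module handling, and KUBELET_EXTRA_ARGS wiring are assumptions; adapt them to your distribution and provisioning tooling.

#!/usr/bin/env bash
# Hypothetical bootstrap sketch: package names and kubelet argument wiring
# vary by distribution and provisioning tool.
set -euo pipefail

# Install the Lustre client stack (RPM-based first, DEB-based as a fallback).
yum install -y lustre-client || apt-get install -y lustre-client-utils

# Load the client modules so mount.lustre works immediately.
modprobe lustre

# Ask kubelet to apply the Klustre CSI scheduling label when the node registers.
# /etc/sysconfig/kubelet is a kubeadm convention on RPM systems; adjust as needed.
echo 'KUBELET_EXTRA_ARGS="--node-labels=lustre.csi.klustrefs.io/lustre-client=true"' \
  >> /etc/sysconfig/kubelet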

2 - Node integration flow

Understand how Klustre CSI interacts with kubelet and the host filesystem.

Daemonset host mounts

DaemonSet/klustre-csi-node mounts the following host paths:

  • /var/lib/kubelet/plugins and /var/lib/kubelet/pods – required for CSI socket registration and mount propagation.
  • /dev – ensures device files (if any) are accessible when mounting Lustre.
  • /sbin, /usr/sbin, /lib, /lib64 – expose the host’s Lustre client binaries and libraries to the container.
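
To see the host paths the daemonset actually declares on your cluster:

kubectl get daemonset klustre-csi-node -n klustre-system -o yaml | grep -A 2 hostPath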

If your kubelet uses custom directories, update pluginDir and registrationDir in the settings ConfigMap.
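
One way to check whether kubelet on a node runs with a non-default root directory (no output means the default /var/lib/kubelet is in use):

ps -o args= -C kubelet | tr ' ' '\n' | grep root-dir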

CSI socket lifecycle

  1. The node plugin listens on csiEndpoint (defaults to /var/lib/kubelet/plugins/lustre.csi.klustrefs.io/csi.sock).
  2. The node-driver-registrar sidecar registers that socket with kubelet via registrationDir.
  3. Kubelet uses the UNIX socket to call NodePublishVolume and NodeUnpublishVolume when pods mount or unmount PVCs.

If the daemonset does not come up or kubelet cannot reach the socket, run:

kubectl describe daemonset klustre-csi-node -n klustre-system
kubectl logs -n klustre-system daemonset/klustre-csi-node -c klustre-csi
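
On the node itself, you can also confirm that the socket and kubelet's plugin registration directory exist; the paths below are the defaults:

ls -l /var/lib/kubelet/plugins/lustre.csi.klustrefs.io/csi.sock
ls /var/lib/kubelet/plugins_registry/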

PATH and library overrides

The containers inherit PATH and LD_LIBRARY_PATH values that point at the host bind mounts. If your Lustre client lives elsewhere, override the following via Helm values or by editing the daemonset manifest:

  • nodePlugin.pathEnv
  • nodePlugin.ldLibraryPath
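
For example, with Helm (the klustre-csi release name and <chart> reference are placeholders for whatever you installed, and the paths are examples; point them at wherever your client binaries and libraries live):

# Release name and <chart> are placeholders for your actual installation.
helm upgrade klustre-csi <chart> -n klustre-system --reuse-values \
  --set nodePlugin.pathEnv="/usr/sbin:/usr/bin:/sbin:/bin" \
  --set nodePlugin.ldLibraryPath="/usr/lib64:/usr/lib"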

Health signals

  • Kubernetes events referencing lustre.csi.klustrefs.io indicate mount/unmount activity.
  • kubectl get pods -n klustre-system -o wide should show one pod per labeled node.
  • A missing pod usually means the node label is absent or taints/tolerations are mismatched.
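
To spot recent mount/unmount activity and to compare labeled nodes against running node-plugin pods:

kubectl get events -A | grep lustre.csi.klustrefs.io
kubectl get nodes -l lustre.csi.klustrefs.io/lustre-client=true --no-headers | wc -l
# The selector below is an assumption; use your daemonset's actual pod labels.
kubectl get pods -n klustre-system -l app=klustre-csi-node --no-headers | wc -l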