Klustre CSI only schedules on nodes that can mount Lustre exports. Use the topics below to prepare those nodes, understand what the daemonset mounts from the host, and keep kubelet integration healthy.
Nodes
1 - Node preparation
Install the Lustre client stack
Every node that runs Lustre-backed pods must have:
- mount.lustre and umount.lustre binaries (via the lustre-client RPM/DEB).
- Kernel modules compatible with your Lustre servers.
- Network reachability to the Lustre MGS/MDS/OSS endpoints.
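For example, on an RHEL-family node with a Lustre client package repository already configured, installation might look like the following (package names vary by distribution and kernel):
# Package names below are typical for Whamcloud builds; adjust for your distro
sudo dnf install -y lustre-client kmod-lustre-client
sudo modprobe lustre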
Verify installation:
mount.lustre --version
lsmod | grep lustre
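You can also verify LNet-level reachability to your Lustre servers from the node; the NID below is a placeholder for your environment:
# Replace with your MGS NID (address@network)
sudo lctl ping 10.0.0.5@tcp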
Label nodes
The default storage class and daemonset use the label lustre.csi.klustrefs.io/lustre-client=true.
kubectl label nodes <node-name> lustre.csi.klustrefs.io/lustre-client=true
Remove the label when a node no longer has Lustre access:
kubectl label nodes <node-name> lustre.csi.klustrefs.io/lustre-client-
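To list the nodes that currently carry the label:
kubectl get nodes -l lustre.csi.klustrefs.io/lustre-client=true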
Allow privileged workloads
Klustre CSI pods require:
- privileged: true and allowPrivilegeEscalation: true
- hostPID: true and hostNetwork: true
- HostPath mounts for /var/lib/kubelet, /dev, /sbin, /usr/sbin, /lib, and /lib64
Label the namespace with Pod Security Admission overrides:
kubectl create namespace klustre-system
kubectl label namespace klustre-system \
pod-security.kubernetes.io/enforce=privileged \
pod-security.kubernetes.io/audit=privileged \
pod-security.kubernetes.io/warn=privileged
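Confirm the labels took effect:
kubectl get namespace klustre-system --show-labels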
Maintain consistency
- Keep AMIs or OS images in sync so every node has the same Lustre client version.
- If you use autoscaling groups, bake the client packages into your node image or run a bootstrap script before kubelet starts (see the sketch after this list).
- Automate label management with infrastructure-as-code (e.g., Cluster API, Ansible) so the right nodes receive the lustre-client=true label on join/leave events.
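A minimal sketch of such a bootstrap step, assuming a systemd-based image with the client packages already installed (the unit name and paths are illustrative, not part of Klustre CSI):
cat <<'EOF' | sudo tee /etc/systemd/system/lustre-client-modules.service
[Unit]
Description=Load Lustre client modules before kubelet
Before=kubelet.service

[Service]
Type=oneshot
# modprobe lustre pulls in lnet and the rest of the client stack
ExecStart=/usr/sbin/modprobe lustre
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable lustre-client-modules.service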
2 - Node integration flow
Daemonset host mounts
DaemonSet/klustre-csi-node mounts the following host paths:
- /var/lib/kubelet/plugins and /var/lib/kubelet/pods – required for CSI socket registration and mount propagation.
- /dev – ensures device files (if any) are accessible when mounting Lustre.
- /sbin, /usr/sbin, /lib, /lib64 – expose the host’s Lustre client binaries and libraries to the container.
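To inspect which host paths your installed daemonset actually mounts:
kubectl -n klustre-system get daemonset klustre-csi-node -o yaml | grep -B1 -A1 hostPath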
If your kubelet uses custom directories, update pluginDir and registrationDir in the settings ConfigMap.
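For example (the ConfigMap name below is an assumption; list the ConfigMaps in the namespace to find the one your release installed):
kubectl -n klustre-system get configmaps
# Edit pluginDir/registrationDir to match your kubelet root directory
kubectl -n klustre-system edit configmap klustre-csi-settings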
CSI socket lifecycle
- The node plugin listens on csiEndpoint (defaults to /var/lib/kubelet/plugins/lustre.csi.klustrefs.io/csi.sock).
- The node-driver-registrar sidecar registers that socket with kubelet via registrationDir.
- Kubelet uses the UNIX socket to call NodePublishVolume and NodeUnpublishVolume when pods mount or unmount PVCs.
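To confirm registration succeeded on a given node, check its CSINode object; the driver name should appear in the list:
kubectl get csinode <node-name> -o jsonpath='{.spec.drivers[*].name}'
# expect lustre.csi.klustrefs.io in the output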
If the daemonset does not come up or kubelet cannot reach the socket, run:
kubectl describe daemonset klustre-csi-node -n klustre-system
kubectl logs -n klustre-system daemonset/klustre-csi-node -c klustre-csi
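The registrar sidecar's logs are often more telling for registration failures; the container name below assumes the standard node-driver-registrar sidecar:
kubectl logs -n klustre-system daemonset/klustre-csi-node -c node-driver-registrar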
PATH and library overrides
The containers inherit PATH and LD_LIBRARY_PATH values that point at the host bind mounts. If your Lustre client lives elsewhere, override:
- nodePlugin.pathEnv
- nodePlugin.ldLibraryPath
via Helm values or by editing the daemonset manifest.
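A sketch of the Helm route, assuming a release named klustre-csi (the release and chart names are placeholders, and the paths are examples for a relocated client):
helm upgrade klustre-csi klustre/klustre-csi --reuse-values \
  --set nodePlugin.pathEnv=/opt/lustre/sbin:/usr/sbin:/sbin \
  --set nodePlugin.ldLibraryPath=/opt/lustre/lib64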
Health signals
- Kubernetes events referencing lustre.csi.klustrefs.io indicate mount/unmount activity.
- kubectl get pods -n klustre-system -o wide should show one pod per labeled node.
- A missing pod usually means the node label is absent or taints/tolerations are mismatched.
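A quick triage sequence for a missing pod (the node name is a placeholder):
# Labeled nodes and node-plugin pods should match one-to-one
kubectl get nodes -l lustre.csi.klustrefs.io/lustre-client=true -o name
kubectl get pods -n klustre-system -o wide
# If a labeled node has no pod, check its taints against the daemonset tolerations
kubectl describe node <node-name> | grep -i taints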