---
title: Concepts
description: Understand the Klustre CSI architecture, components, and data flow.
weight: 2
---
This page explains how the Klustre CSI Plugin is structured so you can reason about what happens when volumes are created, mounted, or removed.
## High-level architecture

The Klustre CSI Plugin implements the CSI node service for Lustre file systems. It focuses on node-side operations only:
```
Kubernetes API              Kubelet on each node
--------------              --------------------
PersistentVolume       ->   NodePublishVolume   -> mount.lustre  -> Pod mount
PersistentVolumeClaim       NodeUnpublishVolume -> umount.lustre
```
There is no controller deployment or dynamic provisioning. Instead, administrators define static PersistentVolume objects that reference existing Lustre exports. The plugin mounts those exports inside pods when a PersistentVolumeClaim is bound to a workload.
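For orientation, here is a minimal sketch of such a static PV. The server address (`10.0.0.2@tcp:/fs`), the capacity, and the object names are placeholders for illustration, not values shipped with the plugin:

```yaml
# Minimal sketch of a static PV for the Klustre driver.
# Server address, capacity, and names are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: lustre-pv
spec:
  capacity:
    storage: 10Ti            # informational; Lustre capacity is managed externally
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: klustre-csi-static
  csi:
    driver: lustre.csi.klustrefs.io
    volumeHandle: lustre-pv  # must be unique per PV
    volumeAttributes:
      source: 10.0.0.2@tcp:/fs
```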
## Components
- DaemonSet (`klustre-csi-node`) – Runs on every node that has the Lustre client installed. It includes:
  - `klustre-csi` container: the Rust-based CSI node driver.
  - `node-driver-registrar` sidecar: registers the driver socket with kubelet and handles node-level CSIDriver metadata.
- Settings ConfigMap – Injects runtime configuration (socket paths, image tags, log level).
- StorageClass (`klustre-csi-static`) – Encapsulates mount options, retain policy, and topology constraints.
- CSIDriver object – Advertises the driver name `lustre.csi.klustrefs.io`, declares that controller operations aren’t required, and enables `podInfoOnMount`.
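In manifest form, that CSIDriver object could look roughly like the sketch below. The field values reflect the description above; the exact manifest the plugin ships may differ:

```yaml
# Illustrative CSIDriver sketch: advertises the driver name, skips the
# controller attach step, and passes pod info to NodePublishVolume.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: lustre.csi.klustrefs.io
spec:
  attachRequired: false   # no controller service, so no attach step
  podInfoOnMount: true    # kubelet passes pod metadata on NodePublishVolume
  volumeLifecycleModes:
    - Persistent          # static PVs only; no ephemeral inline volumes
```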
## Data flow
1. A cluster admin creates a PV that sets `csi.driver=lustre.csi.klustrefs.io` and `volumeAttributes.source=<HOST>@tcp:/fs`.
2. A PVC binds to the PV, usually via the `klustre-csi-static` storage class.
3. The pod is scheduled on a labeled node (`lustre.csi.klustrefs.io/lustre-client=true`).
4. kubelet invokes `NodePublishVolume`:
   - The CSI driver reads the `source` attribute.
   - It spawns `mount.lustre` inside the container with the host’s `/sbin` and `/usr/sbin` bind-mounted.
   - The Lustre client mounts into the pod’s `volumeMount`.
5. The pod uses the mounted path (multiple pods can read and write simultaneously).
6. When the pod terminates and the PVC is detached, kubelet calls `NodeUnpublishVolume`, which triggers `umount.lustre`.
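To make the flow concrete, the sketch below pairs a PVC with a pod that consumes it. All names are hypothetical; `volumeName` pins the claim to the static PV so the binding in step 2 is deterministic:

```yaml
# Hypothetical PVC bound to the static PV, plus a pod that mounts it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lustre-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: klustre-csi-static
  volumeName: lustre-pv        # bind directly to the static PV
  resources:
    requests:
      storage: 10Ti
---
apiVersion: v1
kind: Pod
metadata:
  name: lustre-consumer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "ls /data && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data     # NodePublishVolume mounts the export here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: lustre-pvc
```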
## Labels and topology
The driver relies on a node label to ensure only valid hosts run Lustre workloads:
```
lustre.csi.klustrefs.io/lustre-client=true
```

`allowedTopologies` in the storage class references this label so the scheduler only considers labeled nodes. Make sure to label or unlabel nodes as you add or remove Lustre support (see Label Lustre-capable nodes).
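As a sketch, a storage class with such a topology constraint might look like this. The `reclaimPolicy` and `volumeBindingMode` values are assumptions, not confirmed defaults of the shipped `klustre-csi-static` class:

```yaml
# Sketch of a storage class that restricts scheduling to labeled nodes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: klustre-csi-static
provisioner: lustre.csi.klustrefs.io
reclaimPolicy: Retain                    # assumption; matches the retain policy above
volumeBindingMode: WaitForFirstConsumer  # assumption; defers binding until scheduling
allowedTopologies:
  - matchLabelExpressions:
      - key: lustre.csi.klustrefs.io/lustre-client
        values:
          - "true"
```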
## Security and privileges
- Pods run `privileged` with the `SYS_ADMIN` capability to execute `mount.lustre`.
- Host namespaces: `hostNetwork: true` and `hostPID: true` so the driver can interact with kubelet paths and kernel modules.
- HostPath mounts expose `/var/lib/kubelet`, `/dev`, `/sbin`, `/usr/sbin`, `/lib`, and `/lib64`. Verify your cluster policy (PSA/PSP) allows this in the `klustre-system` namespace.
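Taken together, the pod template of the node DaemonSet might include an excerpt like the following. This is abbreviated and illustrative, showing only a subset of the host paths listed above:

```yaml
# Abbreviated pod template spec (DaemonSet .spec.template.spec) sketch.
spec:
  hostNetwork: true
  hostPID: true
  containers:
    - name: klustre-csi
      securityContext:
        privileged: true
        capabilities:
          add: ["SYS_ADMIN"]
      volumeMounts:
        - name: kubelet-dir
          mountPath: /var/lib/kubelet
          mountPropagation: Bidirectional   # pod mounts must propagate back to kubelet
        - name: host-sbin
          mountPath: /sbin
          readOnly: true
  volumes:
    - name: kubelet-dir
      hostPath:
        path: /var/lib/kubelet
    - name: host-sbin
      hostPath:
        path: /sbin
```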
## Limitations (by design)
- ❌ No dynamic provisioning (no `CreateVolume`/`DeleteVolume`): you must create static PVs.
- ❌ No CSI snapshots or expansion.
- ❌ No node metrics endpoints.
- ✅ Designed for `ReadWriteMany` workloads with preexisting Lustre infrastructure.
Understanding these guardrails helps you decide whether Klustre CSI fits your cluster and what automation you may need to layer on top (for example, scripts that generate PVs from a Lustre inventory). Now that you know the architecture, head back to the Introduction or browse the Reference for tunable parameters.