CSI provisioner and driver
Attention
This feature is experimental and will not support upgrades to future versions.
For this section, we will refer to Rook's deployment examples in the deploy/examples directory.
Enabling the CSI drivers
The Ceph CSI NFS provisioner and driver require additional RBAC to operate. Apply the deploy/examples/csi/nfs/rbac.yaml
manifest to deploy the additional resources.
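For example, with the Rook repository checked out, the RBAC can be applied with:

```console
kubectl apply -f deploy/examples/csi/nfs/rbac.yaml
```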
Rook will only deploy the Ceph CSI NFS provisioner and driver components when the ROOK_CSI_ENABLE_NFS
config is set to "true"
in the rook-ceph-operator-config
configmap. Change the value in your manifest, or patch the resource as below.
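For example, a merge patch along these lines enables the setting (assuming the operator runs in the rook-ceph namespace):

```console
kubectl --namespace rook-ceph patch configmap rook-ceph-operator-config \
  --type merge --patch '{"data":{"ROOK_CSI_ENABLE_NFS": "true"}}'
```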
Note
The rook-ceph operator Helm chart will deploy the required RBAC and enable the driver components if csi.nfs.enabled is set to true.
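For Helm-based installs, that corresponds to a values override along these lines:

```yaml
# values.yaml override for the rook-ceph operator chart
csi:
  nfs:
    enabled: true
```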
Creating NFS exports via PVC
Prerequisites
In order to create NFS exports via the CSI driver, you must first create a CephFilesystem to serve as the underlying storage for the exports, and you must create a CephNFS to run an NFS server that will expose the exports. RGWs cannot be used for the CSI driver.
From the examples, filesystem.yaml creates a CephFilesystem called myfs, and nfs.yaml creates an NFS server called my-nfs.
You may need to enable or disable the Ceph orchestrator.
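If the orchestrator does need to be toggled, this is typically done from the Rook toolbox; the commands below are a sketch and should be verified against your Ceph release:

```console
# Enable the Rook orchestrator backend
ceph mgr module enable rook
ceph orch set backend rook
ceph orch status

# Disable it again if needed
ceph orch set backend ""
ceph mgr module disable rook
```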
You must also create a storage class. Ceph CSI is designed to support any arbitrary Ceph cluster, but we are focused here only on Ceph clusters deployed by Rook. Let's take a look at a portion of the example storage class found at deploy/examples/csi/nfs/storageclass.yaml and break down how the values are determined.
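The relevant portion looks roughly like the following sketch; the class name rook-nfs and the secret names mirror the Rook examples, so refer to the file in the repository for the authoritative version:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-nfs
provisioner: rook-ceph.nfs.csi.ceph.com
parameters:
  nfsCluster: my-nfs
  server: rook-ceph-nfs-my-nfs-a
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-replicated
  # Secrets shared with the Ceph CSI CephFS provisioner
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true
```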
- provisioner: rook-ceph.nfs.csi.ceph.com because rook-ceph is the namespace where the CephCluster is installed
- nfsCluster: my-nfs because this is the name of the CephNFS
- server: rook-ceph-nfs-my-nfs-a because Rook creates this Kubernetes Service for the CephNFS named my-nfs
- clusterID: rook-ceph because this is the namespace where the CephCluster is installed
- fsName: myfs because this is the name of the CephFilesystem used to back the NFS exports
- pool: myfs-replicated because myfs is the name of the CephFilesystem defined in fsName and because replicated is the name of a data pool defined in the CephFilesystem
- csi.storage.k8s.io/*: note that these values are shared with the Ceph CSI CephFS provisioner
Creating a PVC
See deploy/examples/csi/nfs/pvc.yaml
for an example of how to create a PVC that will create an NFS export. The export will be created and a PV created for the PVC immediately, even without a Pod to mount the PVC.
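As a sketch, a minimal PVC referencing the storage class above could look like this (the claim name nfs-pvc is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-nfs
```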
Attaching an export to a pod
See deploy/examples/csi/nfs/pod.yaml
for an example of how a PVC can be connected to an application pod.
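A minimal sketch of such a pod, assuming the illustrative PVC name nfs-pvc from above (pod name and image are also illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: csi-nfs-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: nfs-vol
          mountPath: /var/lib/www/html
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: nfs-pvc
        readOnly: false
```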
Connecting to an export directly
After a PVC is created successfully, the share parameter set on the resulting PV contains the share path, which can be used as the export path when mounting the export manually. For example, /0001-0009-rook-ceph-0000000000000001-55c910f9-a1af-11ed-9772-1a471870b2f5 is such an export path.
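The value can be read from the PV's CSI volume attributes, for example (the PV name is illustrative and can be looked up from the PVC):

```console
# Find the PV bound to the PVC, then read its share attribute
kubectl get pvc nfs-pvc -o jsonpath='{.spec.volumeName}'
kubectl get pv <pv-name> -o jsonpath='{.spec.csi.volumeAttributes.share}'
```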
Taking snapshots of NFS exports
NFS export PVCs can be snapshotted and later restored to new PVCs.
Creating snapshots
First, create a VolumeSnapshotClass as in the example provided with the NFS CSI examples. The csi.storage.k8s.io/snapshotter-secret-name parameter should reference the name of the secret created for the cephfsplugin.
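A sketch of such a VolumeSnapshotClass, assuming the secret names from the Rook CephFS examples (the class name is illustrative):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-nfsplugin-snapclass
driver: rook-ceph.nfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
deletionPolicy: Delete
```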
In the VolumeSnapshot, volumeSnapshotClassName should be the name of the VolumeSnapshotClass previously created. The persistentVolumeClaimName should be the name of the PVC that was already created by the NFS CSI driver.
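For example, reusing the illustrative names from above:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: nfs-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-nfsplugin-snapclass
  source:
    persistentVolumeClaimName: nfs-pvc
```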
Verifying snapshots
The snapshot will be ready to restore to a new PVC when the READYTOUSE field of the VolumeSnapshot is set to true.
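For example:

```console
# READYTOUSE should report "true" once the snapshot is usable
kubectl get volumesnapshot nfs-pvc-snapshot
```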
Restoring snapshot to a new PVC
In pvc-restore, the dataSource name should be the name of the VolumeSnapshot previously created. The dataSource kind should be "VolumeSnapshot".
Create a new PVC from the snapshot.
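A sketch of such a restore PVC, assuming the illustrative names used above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-restore
spec:
  storageClassName: rook-nfs
  dataSource:
    name: nfs-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```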
Verifying restored PVC creation
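The restored PVC should reach the Bound status, for example:

```console
kubectl get pvc nfs-pvc-restore
```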
Cleaning up snapshot resources
To clean your cluster of the resources created by this example, run the following:
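(The file names below assume the manifests from deploy/examples/csi/nfs were used; adjust them to match how the resources were actually created.)

```console
kubectl delete -f deploy/examples/csi/nfs/pvc-restore.yaml
kubectl delete -f deploy/examples/csi/nfs/snapshot.yaml
kubectl delete -f deploy/examples/csi/nfs/snapshotclass.yaml
```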
Cloning NFS exports
Creating clones
In pvc-clone, the dataSource name should be the name of the PVC that was already created by the NFS CSI driver. The dataSource kind should be "PersistentVolumeClaim", and the storage class should be the same as that of the source PVC.
Create a new PVC clone from the source PVC, as in the example sketched below.
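A sketch of such a clone PVC, assuming the illustrative names used above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-clone
spec:
  storageClassName: rook-nfs
  dataSource:
    name: nfs-pvc
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```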
Verifying a cloned PVC
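The cloned PVC should likewise reach the Bound status:

```console
kubectl get pvc nfs-pvc-clone
```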
Cleaning up clone resources
To clean your cluster of the resources created by this example, run the following:
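(The file name below assumes the clone was created from the example in deploy/examples/csi/nfs; adjust it to match your setup.)

```console
kubectl delete -f deploy/examples/csi/nfs/pvc-clone.yaml
```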
Consuming NFS from an external source
For consuming NFS services and exports external to the Kubernetes cluster (including those backed by an external standalone Ceph cluster), Rook recommends using the regular Kubernetes NFS consumption model. This requires the Ceph admin to create the needed export, while reducing the privileges needed in the client cluster for the NFS volume.
Create the export for a particular CephFS filesystem and retrieve the details an NFS client needs to mount it.
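The exact ceph nfs export syntax varies across Ceph releases; the following is a sketch to be verified with ceph nfs export create cephfs --help in your environment (all placeholders are illustrative):

```console
# Create an export of a CephFS filesystem on an existing NFS cluster
ceph nfs export create cephfs --cluster-id <cluster-id> --pseudo-path <pseudo-path> --fsname <fsname> [--readonly] [--path=<path-within-cephfs>]

# Show the export details, including the pseudo path that clients mount
ceph nfs export info <cluster-id> <pseudo-path>
```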
Create the PV and PVC using the NFS server address (nfs-client-server-ip). Kubernetes mounts the NFS export through the PersistentVolume, and the PVC is then mounted in the application pod to consume the NFS storage.
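A minimal sketch of such a PV/PVC pair, with a placeholder server address and export path:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-external-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <nfs-client-server-ip>
    path: <export-path>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-external-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""   # bind statically to the PV above
  volumeName: nfs-external-pv
```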