# Ceph Operator Helm Chart

Installs Rook to create, configure, and manage Ceph clusters on Kubernetes.
## Introduction
This chart bootstraps a rook-ceph-operator deployment on a Kubernetes cluster using the Helm package manager.
## Prerequisites
- Kubernetes 1.19+
- Helm 3.x
See the Helm support matrix for more details.
## Installing
The Ceph Operator helm chart will install the basic components necessary to create a storage platform for your Kubernetes cluster.
- Install the Helm chart
- Create a Rook cluster

The `helm install` command deploys Rook on the Kubernetes cluster in its default configuration. The Configuration section lists the parameters that can be configured during installation. It is recommended that the Rook operator be installed into the `rook-ceph` namespace (you will install your clusters into separate namespaces).
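As a sketch, installing from the release channel typically looks like the following (the chart repository URL and release name are assumed from the Rook project's published charts; adjust for your environment):

```shell
# Add the Rook release chart repository (URL assumed) and install the
# operator into its own namespace, creating the namespace if needed.
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph
```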
Rook currently publishes builds of the Ceph operator to the `release` and `master` channels.
### Release
The release channel is the most recent release of Rook that is considered stable for the community.
For example settings, see the next section or `values.yaml`.
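For instance, a minimal overrides file might look like this. This is a sketch only: the parameter names are taken from the table below, and the values shown are illustrative, not recommendations.

```yaml
# values-override.yaml -- illustrative overrides; the defaults from the
# Configuration table apply for anything omitted.
logLevel: DEBUG
csi:
  enableGrpcMetrics: true
  provisionerReplicas: 1
resources:
  requests:
    cpu: 200m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```

Such a file can be passed to the install with `helm install -f values-override.yaml ...`.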
## Configuration
The following table lists the configurable parameters of the rook-operator chart and their default values.
| Parameter | Description | Default |
|---|---|---|
admissionController | Set tolerations and nodeAffinity 1 for admission controller pod. The admission controller would be best to start on the same nodes as other ceph daemons. | nil |
allowLoopDevices | If true, loop devices are allowed to be used for osds in test clusters | false |
annotations | Pod annotations | {} |
cephCommandsTimeoutSeconds | The timeout for ceph commands in seconds | "15" |
crds.enabled | Whether the helm chart should create and update the CRDs. If false, the CRDs must be managed independently with deploy/examples/crds.yaml. WARNING Only set during first deployment. If later disabled the cluster may be DESTROYED. If the CRDs are deleted in this case, see the disaster recovery guide to restore them. | true |
csi.allowUnsupportedVersion | Allow starting an unsupported ceph-csi image | false |
csi.attacher.image | Kubernetes CSI Attacher image | registry.k8s.io/sig-storage/csi-attacher:v4.1.0 |
csi.cephFSAttachRequired | Whether to skip any attach operation altogether for CephFS PVCs. See more details here. If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation of pods using the CephFS PVC fast. WARNING It's highly discouraged to use this for CephFS RWO volumes. Refer to this issue for more details. | true |
csi.cephFSFSGroupPolicy | Policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted. supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html | "File" |
csi.cephFSKernelMountOptions | Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options. Set to "ms_mode=secure" when connections.encrypted is enabled in CephCluster CR | nil |
csi.cephFSPluginUpdateStrategy | CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate | RollingUpdate |
csi.cephcsi.image | Ceph CSI image | quay.io/cephcsi/cephcsi:v3.8.0 |
csi.cephfsGrpcMetricsPort | CSI CephFS driver GRPC metrics port | 9091 |
csi.cephfsLivenessMetricsPort | CSI CephFS driver metrics port | 9081 |
csi.cephfsPodLabels | Labels to add to the CSI CephFS Deployments and DaemonSets Pods | nil |
csi.clusterName | Cluster name identifier to set as metadata on the CephFS subvolume and RBD images. This will be useful in cases like for example, when two container orchestrator clusters (Kubernetes/OCP) are using a single ceph cluster | nil |
csi.csiAddons.enabled | Enable CSIAddons | false |
csi.csiAddons.image | CSIAddons Sidecar image | "quay.io/csiaddons/k8s-sidecar:v0.5.0" |
csi.csiAddonsPort | CSI Addons server port | 9070 |
csi.csiCephFSPluginResource | CEPH CSI CephFS plugin resource requirement list | see values.yaml |
csi.csiCephFSPluginVolume | The volume of the CephCSI CephFS plugin DaemonSet | nil |
csi.csiCephFSPluginVolumeMount | The volume mounts of the CephCSI CephFS plugin DaemonSet | nil |
csi.csiCephFSProvisionerResource | CEPH CSI CephFS provisioner resource requirement list | see values.yaml |
csi.csiNFSPluginResource | CEPH CSI NFS plugin resource requirement list | see values.yaml |
csi.csiNFSProvisionerResource | CEPH CSI NFS provisioner resource requirement list | see values.yaml |
csi.csiRBDPluginResource | CEPH CSI RBD plugin resource requirement list | see values.yaml |
csi.csiRBDPluginVolume | The volume of the CephCSI RBD plugin DaemonSet | nil |
csi.csiRBDPluginVolumeMount | The volume mounts of the CephCSI RBD plugin DaemonSet | nil |
csi.csiRBDProvisionerResource | CEPH CSI RBD provisioner resource requirement list csi-omap-generator resources will be applied only if enableOMAPGenerator is set to true | see values.yaml |
csi.enableCSIEncryption | Enable Ceph CSI PVC encryption support | false |
csi.enableCSIHostNetwork | Enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary in some network configurations where the SDN does not provide access to an external cluster or there is significant drop in read/write performance | true |
csi.enableCephfsDriver | Enable Ceph CSI CephFS driver | true |
csi.enableCephfsSnapshotter | Enable Snapshotter in CephFS provisioner pod | true |
csi.enableGrpcMetrics | Enable Ceph CSI GRPC Metrics | false |
csi.enableLiveness | Enable Ceph CSI Liveness sidecar deployment | false |
csi.enableMetadata | Enable adding volume metadata on the CephFS subvolumes and RBD images. Not all users need volume/snapshot details as metadata on CephFS subvolumes and RBD images, so metadata is disabled by default | false |
csi.enableNFSSnapshotter | Enable Snapshotter in NFS provisioner pod | true |
csi.enableOMAPGenerator | The OMAP generator creates the omap mapping between the PV name and the RBD image, which helps CSI identify the RBD images for CSI operations. CSI_ENABLE_OMAP_GENERATOR needs to be enabled when using the RBD mirroring feature. The OMAP generator is disabled by default; when enabled, it is deployed as a sidecar with the CSI provisioner pod. To enable, set it to true. | false |
csi.enablePluginSelinuxHostMount | Enable Host mount for /etc/selinux directory for Ceph CSI nodeplugins | false |
csi.enableRBDSnapshotter | Enable Snapshotter in RBD provisioner pod | true |
csi.enableRbdDriver | Enable Ceph CSI RBD driver | true |
csi.forceCephFSKernelClient | Enable Ceph Kernel clients on kernel < 4.17. If your kernel does not support quotas for CephFS you may want to disable this setting. However, this will cause an issue during upgrades with the FUSE client. See the upgrade guide | true |
csi.grpcTimeoutInSeconds | Set GRPC timeout for csi containers (in seconds). It should be >= 120. If this value is not set or is invalid, it defaults to 150 | 150 |
csi.imagePullPolicy | Image pull policy | "IfNotPresent" |
csi.kubeletDirPath | Kubelet root directory path (if the Kubelet uses a different path for the --root-dir flag) | /var/lib/kubelet |
csi.logLevel | Set logging level for cephCSI containers maintained by the cephCSI. Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity. | 0 |
csi.nfs.enabled | Enable the nfs csi driver | false |
csi.nfsAttachRequired | Whether to skip any attach operation altogether for NFS PVCs. See more details here. If nfsAttachRequired is set to false it skips the volume attachments and makes the creation of pods using the NFS PVC fast. WARNING It's highly discouraged to use this for NFS RWO volumes. Refer to this issue for more details. | true |
csi.nfsFSGroupPolicy | Policy for modifying a volume's ownership or permissions when the NFS PVC is being mounted. supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html | "File" |
csi.nfsPluginUpdateStrategy | CSI NFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate | RollingUpdate |
csi.nfsPodLabels | Labels to add to the CSI NFS Deployments and DaemonSets Pods | nil |
csi.pluginNodeAffinity | The node labels for affinity of the CephCSI RBD plugin DaemonSet 1 | nil |
csi.pluginPriorityClassName | PriorityClassName to be set on csi driver plugin pods | "system-node-critical" |
csi.pluginTolerations | Array of tolerations in YAML format which will be added to CephCSI plugin DaemonSet | nil |
csi.provisioner.image | Kubernetes CSI provisioner image | registry.k8s.io/sig-storage/csi-provisioner:v3.4.0 |
csi.provisionerNodeAffinity | The node labels for affinity of the CSI provisioner deployment 1 | nil |
csi.provisionerPriorityClassName | PriorityClassName to be set on csi driver provisioner pods | "system-cluster-critical" |
csi.provisionerReplicas | Set replicas for csi provisioner deployment | 2 |
csi.provisionerTolerations | Array of tolerations in YAML format which will be added to CSI provisioner deployment | nil |
csi.rbdAttachRequired | Whether to skip any attach operation altogether for RBD PVCs. See more details here. If set to false it skips the volume attachments and makes the creation of pods using the RBD PVC fast. WARNING It's highly discouraged to use this for RWO volumes as it can cause data corruption. csi-addons operations like Reclaimspace and PVC Keyrotation will also not be supported if set to false since we'll have no VolumeAttachments to determine which node the PVC is mounted on. Refer to this issue for more details. | true |
csi.rbdFSGroupPolicy | Policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted. supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html | "File" |
csi.rbdGrpcMetricsPort | Ceph CSI RBD driver GRPC metrics port | 9090 |
csi.rbdLivenessMetricsPort | Ceph CSI RBD driver metrics port | 8080 |
csi.rbdPluginUpdateStrategy | CSI RBD plugin daemonset update strategy, supported values are OnDelete and RollingUpdate | RollingUpdate |
csi.rbdPluginUpdateStrategyMaxUnavailable | A maxUnavailable parameter of CSI RBD plugin daemonset update strategy. | 1 |
csi.rbdPodLabels | Labels to add to the CSI RBD Deployments and DaemonSets Pods | nil |
csi.readAffinity.crushLocationLabels | Define which node labels to use as CRUSH location. This should correspond to the values set in the CRUSH map. | labels listed here |
csi.readAffinity.enabled | Enable read affinity for RBD volumes. Recommended to set to true if running kernel 5.8 or newer. | false |
csi.registrar.image | Kubernetes CSI registrar image | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0 |
csi.resizer.image | Kubernetes CSI resizer image | registry.k8s.io/sig-storage/csi-resizer:v1.7.0 |
csi.serviceMonitor.enabled | Enable ServiceMonitor for Ceph CSI drivers | false |
csi.serviceMonitor.interval | Service monitor scrape interval | "5s" |
csi.sidecarLogLevel | Set logging level for Kubernetes-csi sidecar containers. Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity. | 0 |
csi.snapshotter.image | Kubernetes CSI snapshotter image | registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1 |
csi.topology.domainLabels | domainLabels define which node labels to use as domains for CSI nodeplugins to advertise their domains | nil |
csi.topology.enabled | Enable topology based provisioning | false |
currentNamespaceOnly | Whether the operator should watch cluster CRD in its own namespace or not | false |
disableAdmissionController | Whether to disable the admission controller | true |
disableDeviceHotplug | Disable automatic orchestration when new devices are discovered. | false |
discover.nodeAffinity | The node labels for affinity of discover-agent 1 | nil |
discover.podLabels | Labels to add to the discover pods | nil |
discover.resources | Add resources to discover daemon pods | nil |
discover.toleration | Toleration for the discover pods. Options: NoSchedule , PreferNoSchedule or NoExecute | nil |
discover.tolerationKey | The specific key of the taint to tolerate | nil |
discover.tolerations | Array of tolerations in YAML format which will be added to discover deployment | nil |
discoverDaemonUdev | Blacklist certain disks according to the regex provided. | nil |
enableDiscoveryDaemon | Enable discovery daemon | false |
enableOBCWatchOperatorNamespace | Whether the OBC provisioner should watch the operator namespace; if not, the namespace of the cluster will be used | true |
hostpathRequiresPrivileged | Runs Ceph Pods as privileged to be able to write to hostPaths in OpenShift with SELinux restrictions. | false |
image.pullPolicy | Image pull policy | "IfNotPresent" |
image.repository | Image | "rook/ceph" |
image.tag | Image tag | master |
imagePullSecrets | imagePullSecrets option allow to pull docker images from private docker registry. Option will be passed to all service accounts. | nil |
logLevel | Global log level for the operator. Options: ERROR , WARNING , INFO , DEBUG | "INFO" |
monitoring.enabled | Enable monitoring. Requires Prometheus to be pre-installed. Enabling will also create RBAC rules to allow Operator to create ServiceMonitors | false |
nodeSelector | Kubernetes nodeSelector to add to the Deployment. | {} |
priorityClassName | Set the priority class for the rook operator deployment if desired | nil |
pspEnable | If true, create & use PSP resources | false |
rbacEnable | If true, create & use RBAC resources | true |
resources | Pod resource requests & limits | {"limits":{"cpu":"500m","memory":"512Mi"},"requests":{"cpu":"100m","memory":"128Mi"}} |
scaleDownOperator | If true, scale down the rook operator. This is useful for administrative actions where the rook operator must be scaled down, while using gitops style tooling to deploy your helm charts. | false |
tolerations | List of Kubernetes tolerations to add to the Deployment. | [] |
unreachableNodeTolerationSeconds | Delay to use for the node.kubernetes.io/unreachable pod failure toleration to override the Kubernetes default of 5 minutes | 5 |
useOperatorHostNetwork | If true, run rook operator on the host network | nil |
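Individual parameters from the table above can also be overridden on the command line with `--set`. The flag names below come from the table; the repository and release names are assumed from a typical release-channel install:

```shell
# Illustrative only: override two parameters at install time.
helm install --namespace rook-ceph rook-ceph rook-release/rook-ceph \
  --set logLevel=DEBUG \
  --set csi.provisionerReplicas=1
```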
### Development Build
To deploy from a local build from your development environment:

1. Build the Rook docker image: `make`
2. Copy the image to your K8s cluster, such as with the `docker save` then the `docker load` commands
3. Install the helm chart
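Assuming the chart sources live at `deploy/charts/rook-ceph` in the Rook repository (true of recent Rook versions; the image tag and node name are placeholders), the steps above can be sketched as:

```shell
# 1. Build the Rook image locally
make

# 2. Copy the image to the cluster node(s); "node1" is a placeholder
docker save rook/ceph:master | ssh node1 docker load

# 3. Install the chart from the local source tree
helm install --create-namespace --namespace rook-ceph rook-ceph deploy/charts/rook-ceph
```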
## Uninstalling the Chart
To see the currently installed Rook chart:
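A minimal sketch, assuming the chart was installed into the `rook-ceph` namespace as shown earlier:

```shell
helm ls --namespace rook-ceph
```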
To uninstall/delete the `rook-ceph` deployment:
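For example, assuming the release name and namespace used in the install above:

```shell
helm delete --namespace rook-ceph rook-ceph
```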
The command removes all the Kubernetes components associated with the chart and deletes the release.
After uninstalling you may want to clean up the CRDs as described in the teardown documentation.