# Ceph Cluster Helm Chart
Creates Rook resources to configure a Ceph cluster using the Helm package manager. This chart is a simple packaging of templates that will optionally create Rook resources such as:
- CephCluster, CephFilesystem, and CephObjectStore CRs
- Storage classes to expose Ceph RBD volumes, CephFS volumes, and RGW buckets
- Ingress for external access to the dashboard
- Toolbox
## Prerequisites
- Kubernetes 1.19+
- Helm 3.x
- Install the Rook Operator chart
## Installing
The `helm install` command deploys Rook on the Kubernetes cluster in the default configuration. The Configuration section lists the parameters that can be configured during installation. It is recommended that the Rook operator be installed into the `rook-ceph` namespace. The clusters can be installed into the same namespace as the operator or into a separate namespace.
Rook currently publishes builds of this chart to the `release` and `master` channels.
Before installing, review the `values.yaml` to confirm whether the default settings need to be updated (a minimal example is sketched after this list).

- If the operator was installed in a namespace other than `rook-ceph`, the namespace must be set in the `operatorNamespace` variable.
- Set the desired settings in the `cephClusterSpec`. The defaults are only an example and are not likely to apply to your cluster.
- The `monitoring` section should be removed from the `cephClusterSpec`, as it is specified separately in the Helm settings.
- The default values for `cephBlockPools`, `cephFileSystems`, and `cephObjectStores` will create one of each, along with their corresponding storage classes.
- All Ceph components now have default values for the pod resources. The resources may need to be adjusted in production clusters depending on the load. The resources can also be disabled if Ceph should not be limited (e.g. test clusters).
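For illustration, a minimal `values-override.yaml` might look like the sketch below. Every value shown is only an example of the settings discussed above, not a required configuration:

```yaml
# Namespace where the rook-ceph operator chart was installed,
# if different from the namespace of this cluster chart.
operatorNamespace: rook-ceph

# Optional: deploy the Ceph debugging toolbox pod.
toolbox:
  enabled: true

# Optional: enable Prometheus integration (Prometheus must be pre-installed).
monitoring:
  enabled: false

# Customize the cluster spec; the chart defaults are only an example.
cephClusterSpec:
  dataDirHostPath: /var/lib/rook
```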
## Release
The release channel is the most recent release of Rook that is considered stable for the community.
The example install assumes you have first installed the Rook Operator Helm Chart and created your customized `values-override.yaml`.
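A typical install then looks like the following, assuming the chart is published as `rook-ceph-cluster` in the `rook-release` repository (https://charts.rook.io/release):

```console
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \
   --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values-override.yaml
```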
## Configuration
The following table lists the configurable parameters of the rook-ceph-cluster chart and their default values.
Parameter | Description | Default |
---|---|---|
cephBlockPools | A list of CephBlockPool configurations to deploy | See below |
cephBlockPoolsVolumeSnapshotClass | Settings for the block pool snapshot class | See RBD Snapshots |
cephClusterSpec | Cluster configuration. | See below |
cephFileSystemVolumeSnapshotClass | Settings for the filesystem snapshot class | See CephFS Snapshots |
cephFileSystems | A list of CephFileSystem configurations to deploy | See below |
cephObjectStores | A list of CephObjectStore configurations to deploy | See below |
clusterName | The metadata.name of the CephCluster CR | The same as the namespace |
configOverride | Cluster ceph.conf override | nil |
ingress.dashboard | Enable an ingress for the ceph-dashboard | {} |
kubeVersion | Optional override of the target kubernetes version | nil |
monitoring.createPrometheusRules | Whether to create the Prometheus rules for Ceph alerts | false |
monitoring.enabled | Enable Prometheus integration; this will also create the RBAC rules needed to allow the operator to create ServiceMonitors. Monitoring requires Prometheus to be pre-installed | false |
monitoring.prometheusRule.annotations | Annotations applied to PrometheusRule | {} |
monitoring.prometheusRule.labels | Labels applied to PrometheusRule | {} |
monitoring.rulesNamespaceOverride | The namespace in which to create the prometheus rules, if different from the rook cluster namespace. If you have multiple rook-ceph clusters in the same k8s cluster, choose the same namespace (ideally, namespace with prometheus deployed) to set rulesNamespaceOverride for all the clusters. Otherwise, you will get duplicate alerts with multiple alert definitions. | nil |
operatorNamespace | Namespace of the main rook operator | "rook-ceph" |
pspEnable | Create & use PSP resources. Set this to the same value as the rook-ceph chart. | false |
toolbox.affinity | Toolbox affinity | {} |
toolbox.enabled | Enable Ceph debugging pod deployment. See toolbox | false |
toolbox.image | Toolbox image, defaults to the image used by the Ceph cluster | nil |
toolbox.priorityClassName | Set the priority class for the toolbox if desired | nil |
toolbox.resources | Toolbox resources | {"limits":{"cpu":"500m","memory":"1Gi"},"requests":{"cpu":"100m","memory":"128Mi"}} |
toolbox.tolerations | Toolbox tolerations | [] |
## Ceph Cluster Spec
The `CephCluster` CRD takes its spec from `cephClusterSpec.*`. This is not an exhaustive list of parameters. For the full list, see the Cluster CRD topic.

The cluster spec example is for a converged cluster where all the Ceph daemons are running locally, as in the host-based example (cluster.yaml). For a different configuration such as a PVC-based cluster (cluster-on-pvc.yaml), an external cluster (cluster-external.yaml), or a stretch cluster (cluster-stretched.yaml), replace this entire `cephClusterSpec` with the specs from those examples.
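As a sketch, a converged host-based `cephClusterSpec` follows the Cluster CRD structure. The image tag and counts below are placeholders, so take the authoritative defaults from `values.yaml` or the Cluster CRD topic:

```yaml
cephClusterSpec:
  cephVersion:
    image: quay.io/ceph/ceph:v17.2.6   # placeholder tag; use a supported Ceph release
  dataDirHostPath: /var/lib/rook       # host path for mon and configuration data
  mon:
    count: 3                           # three mons for quorum
    allowMultiplePerNode: false
  dashboard:
    enabled: true
  storage:
    useAllNodes: true                  # consume storage on all nodes
    useAllDevices: true                # consume all available raw devices
```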
## Ceph Block Pools
The `cephBlockPools` array in the values file will define a list of CephBlockPool as described in the table below.
Parameter | Description | Default |
---|---|---|
name | The name of the CephBlockPool | ceph-blockpool |
spec | The CephBlockPool spec, see the CephBlockPool documentation. | {} |
storageClass.enabled | Whether a storage class is deployed alongside the CephBlockPool | true |
storageClass.isDefault | Whether the storage class will be the default storage class for PVCs. See PersistentVolumeClaim documentation for details. | true |
storageClass.name | The name of the storage class | ceph-block |
storageClass.parameters | See Block Storage documentation or the helm values.yaml for suitable values | see values.yaml |
storageClass.reclaimPolicy | The default Reclaim Policy to apply to PVCs created with this storage class. | Delete |
storageClass.allowVolumeExpansion | Whether volume expansion is allowed by default. | true |
storageClass.mountOptions | Specifies the mount options for storageClass | [] |
storageClass.allowedTopologies | Specifies the allowedTopologies for storageClass | [] |
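For example, a single replicated pool and its storage class could be declared as follows; the pool name, replica count, and flags are illustrative, not required values:

```yaml
cephBlockPools:
  - name: ceph-blockpool
    spec:
      failureDomain: host        # spread replicas across hosts
      replicated:
        size: 3                  # three data replicas
    storageClass:
      enabled: true
      name: ceph-block
      isDefault: true            # make this the default StorageClass
      reclaimPolicy: Delete
      allowVolumeExpansion: true
```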
## Ceph File Systems
The `cephFileSystems` array in the values file will define a list of CephFileSystem as described in the table below.
Parameter | Description | Default |
---|---|---|
name | The name of the CephFileSystem | ceph-filesystem |
spec | The CephFileSystem spec, see the CephFilesystem CRD documentation. | see values.yaml |
storageClass.enabled | Whether a storage class is deployed alongside the CephFileSystem | true |
storageClass.name | The name of the storage class | ceph-filesystem |
storageClass.pool | The name of the data pool, without the filesystem name prefix | data0 |
storageClass.parameters | See Shared Filesystem documentation or the helm values.yaml for suitable values | see values.yaml |
storageClass.reclaimPolicy | The default Reclaim Policy to apply to PVCs created with this storage class. | Delete |
storageClass.mountOptions | Specifies the mount options for storageClass | [] |
## Ceph Object Stores
The `cephObjectStores` array in the values file will define a list of CephObjectStore as described in the table below.
Parameter | Description | Default |
---|---|---|
name | The name of the CephObjectStore | ceph-objectstore |
spec | The CephObjectStore spec, see the CephObjectStore CRD documentation. | see values.yaml |
storageClass.enabled | Whether a storage class is deployed alongside the CephObjectStore | true |
storageClass.name | The name of the storage class | ceph-bucket |
storageClass.parameters | See Object Store storage class documentation or the helm values.yaml for suitable values | see values.yaml |
storageClass.reclaimPolicy | The default Reclaim Policy to apply to PVCs created with this storage class. | Delete |
ingress.enabled | Enable an ingress for the object store | false |
ingress.annotations | Ingress annotations | {} |
ingress.host.name | Ingress hostname | "" |
ingress.host.path | Ingress path prefix | / |
ingress.tls | Ingress TLS configuration | [] |
ingress.ingressClassName | Ingress ingressClassName | "" |
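For example, an object store exposed through an ingress could be declared as below; the hostname and pool settings are placeholders:

```yaml
cephObjectStores:
  - name: ceph-objectstore
    spec:
      metadataPool:
        failureDomain: host
        replicated:
          size: 3
      dataPool:
        failureDomain: host
        erasureCoded:
          dataChunks: 2
          codingChunks: 1
      gateway:
        port: 80
        instances: 1
    storageClass:
      enabled: true
      name: ceph-bucket
      reclaimPolicy: Delete
    ingress:
      enabled: true
      host:
        name: objects.example.com   # placeholder hostname
        path: /
```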
## Existing Clusters
If you have an existing CephCluster CR that was created without the helm chart and you want the helm chart to start managing the cluster:
1. Extract the `spec` section of your existing CephCluster CR and copy it to the `cephClusterSpec` section in `values-override.yaml`.
2. Add the following annotations and label to your existing CephCluster CR (see the metadata sketch after this list).
3. Run the `helm install` command in the Installing section to create the chart.
4. In the future when updates to the cluster are needed, ensure the `values-override.yaml` always contains the desired CephCluster spec.
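The annotations and label below are the markers Helm 3 checks when adopting an existing resource. This sketch assumes the release is named `rook-ceph-cluster` and installed into the `rook-ceph` namespace, so adjust the values to match your install:

```yaml
metadata:
  annotations:
    meta.helm.sh/release-name: rook-ceph-cluster
    meta.helm.sh/release-namespace: rook-ceph
  labels:
    app.kubernetes.io/managed-by: Helm
```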
## Development Build
To deploy from a local build from your development environment:
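For example, from a checkout of the Rook repository, assuming the chart sources live under `deploy/charts/rook-ceph-cluster`:

```console
cd deploy/charts/rook-ceph-cluster
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \
   --set operatorNamespace=rook-ceph -f values-override.yaml .
```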
## Uninstalling the Chart
To see the currently installed Rook chart:
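Assuming the chart was installed into the `rook-ceph` namespace as in the examples above:

```console
helm ls --namespace rook-ceph
```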
To uninstall/delete the `rook-ceph-cluster` chart:
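Again assuming the release name and namespace from the install example:

```console
helm delete --namespace rook-ceph rook-ceph-cluster
```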
The command removes all the Kubernetes components associated with the chart and deletes the release. Removing the cluster chart does not remove the Rook operator. In addition, all data on hosts in the Rook data directory (`/var/lib/rook` by default) and on OSD raw devices is kept. To reuse disks, you will have to wipe them before recreating the cluster.
See the teardown documentation for more information.