External Storage Cluster
An external cluster is a Ceph configuration that is managed outside of the local K8s cluster. The external cluster could be managed by cephadm, or it could be another Rook cluster that is configured to allow the access (usually configured with host networking).
In external mode, Rook will provide the configuration for the CSI driver and other basic resources that allow your applications to connect to Ceph in the external cluster.
External configuration¶
- **Source cluster**: The cluster providing the data, usually configured by cephadm
- **Consumer cluster**: The K8s cluster that will be consuming the external source cluster
Prerequisites¶
Create the desired types of storage in the source Ceph cluster:
Commands on the source Ceph cluster¶
To configure an external Ceph cluster with Rook, we need to extract some information from that cluster so the consumer cluster can connect to it.
1. Create all users and keys¶
Run the python script create-external-cluster-resources.py to create all users and keys:
- `--namespace`: Namespace where the CephCluster will run, for example `rook-ceph-external`
- `--format bash`: The format of the output
- `--rbd-data-pool-name`: The name of the RBD data pool
- `--alias-rbd-data-pool-name`: Provides an alias for the RBD data pool name, necessary if a special character such as a period or underscore is present in the pool name
- `--rgw-endpoint`: (optional) The RADOS Gateway endpoint in the format `<IP>:<PORT>` or `<FQDN>:<PORT>`
- `--rgw-pool-prefix`: (optional) The prefix of the RGW pools. If not specified, the default prefix is `default`
- `--rgw-tls-cert-path`: (optional) RADOS Gateway endpoint TLS certificate file path
- `--rgw-skip-tls`: (optional) Ignore TLS certificate validation when a self-signed certificate is provided (NOT RECOMMENDED)
- `--rbd-metadata-ec-pool-name`: (optional) Provides the name of the erasure coded RBD metadata pool, used for creating an ECRBDStorageClass
- `--monitoring-endpoint`: (optional) Ceph Manager prometheus exporter endpoints (comma-separated list of entries of active and standby mgrs)
- `--monitoring-endpoint-port`: (optional) Ceph Manager prometheus exporter port
- `--skip-monitoring-endpoint`: (optional) Skip prometheus exporter endpoints, even if they are available. Useful if the prometheus module is not enabled
- `--ceph-conf`: (optional) Provide a Ceph conf file
- `--keyring`: (optional) Path to the Ceph keyring file, to be used with `--ceph-conf`
- `--cluster-name`: (optional) Ceph cluster name
- `--output`: (optional) Output will be stored into the provided file
- `--dry-run`: (optional) Prints the executed commands without running them
- `--run-as-user`: (optional) Provides a user name to check the cluster's health status, must be prefixed by `client.`
- `--cephfs-metadata-pool-name`: (optional) Provides the name of the CephFS metadata pool
- `--cephfs-filesystem-name`: (optional) The name of the filesystem, used for creating the CephFS StorageClass
- `--cephfs-data-pool-name`: (optional) Provides the name of the CephFS data pool, used for creating the CephFS StorageClass
- `--rados-namespace`: (optional) Divides a pool into separate logical namespaces, used for creating an RBD PVC in a RADOS namespace
- `--subvolume-group`: (optional) Provides the name of the subvolume group, used for creating a CephFS PVC in a subvolumeGroup
- `--rgw-realm-name`: (optional) Provides the name of the RGW realm
- `--rgw-zone-name`: (optional) Provides the name of the RGW zone
- `--rgw-zonegroup-name`: (optional) Provides the name of the RGW zonegroup
- `--upgrade`: (optional) Upgrades the Ceph CSI keyrings (for example `client.csi-cephfs-provisioner`) with the new permissions needed for the new cluster version; older permissions will still be applied
- `--restricted-auth-permission`: (optional) Restrict cephCSIKeyrings auth permissions to specific pools and cluster. Mandatory flags that need to be set are `--rbd-data-pool-name` and `--cluster-name`. The `--cephfs-filesystem-name` flag can also be passed in the case of CephFS user restriction, so it can restrict users to a particular CephFS filesystem
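As a sketch, a minimal invocation for a plain RBD setup might look like the following (the pool name `replicapool` is an assumption; substitute your own pool):

```shell
# Run from the source cluster (e.g. inside its toolbox); "replicapool" is a placeholder
python3 create-external-cluster-resources.py \
  --rbd-data-pool-name replicapool \
  --namespace rook-ceph-external \
  --format bash
```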
Multi-tenancy¶
To enable multi-tenancy, run the script with the `--restricted-auth-permission` flag and pass the mandatory flags with it. It will generate secrets that you can use to create a new `Consumer cluster` deployment using the same `Source cluster` (Ceph cluster), so you can run multiple isolated consumer clusters on top of a single `Source cluster`.
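For example, a sketch of generating restricted credentials for one consumer cluster (the pool and cluster names below are placeholders):

```shell
# "replicapool" and "rookstorage" are placeholder names for this sketch
python3 create-external-cluster-resources.py \
  --rbd-data-pool-name replicapool \
  --cluster-name rookstorage \
  --restricted-auth-permission true \
  --format bash
```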
Note
Restricting the csi-users per pool and per cluster requires creating new csi-users and new secrets for those csi-users. Apply these secrets only to a new `Consumer cluster` deployment while using the same `Source cluster`.
RGW Multisite¶
Pass the `--rgw-realm-name`, `--rgw-zonegroup-name` and `--rgw-zone-name` flags to create the admin ops user in a master zone, zonegroup and realm. See the Multisite doc for creating a zone, zonegroup and realm.
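A sketch of such an invocation (the realm, zonegroup and zone names, and the pool name, are placeholders; adjust to your multisite configuration):

```shell
# Placeholder realm/zonegroup/zone names for this sketch
python3 create-external-cluster-resources.py \
  --rbd-data-pool-name replicapool \
  --format bash \
  --rgw-realm-name my-realm \
  --rgw-zonegroup-name my-zonegroup \
  --rgw-zone-name my-zone
```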
Upgrade Example¶
1) If the consumer cluster doesn't have restricted caps, this will upgrade all the default csi-users (non-restricted):
2) If the consumer cluster has restricted caps: restricted users created with the `--restricted-auth-permission` flag must pass the mandatory flags `--rbd-data-pool-name` (if it is an RBD user), `--cluster-name` and `--run-as-user` while upgrading. For CephFS users, if the `--cephfs-filesystem-name` flag was passed while creating the csi-users, it is mandatory while upgrading too. In this example the user would be `client.csi-rbd-node-rookstorage-replicapool` (following the pattern `csi-user-clusterName-poolName`).
Note
An existing non-restricted user cannot be converted to a restricted user by upgrading. The upgrade flag should only be used to append new permissions to users. It shouldn't be used to change a csi user's already-applied permissions. For example, you shouldn't change the pool(s) a user has access to.
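The two upgrade cases above can be sketched as follows (the pool, cluster and user names are the placeholders from the example):

```shell
# 1) Upgrade all default (non-restricted) csi-users
python3 create-external-cluster-resources.py --upgrade

# 2) Upgrade a restricted user; the name follows the csi-user-clusterName-poolName pattern
python3 create-external-cluster-resources.py --upgrade \
  --rbd-data-pool-name replicapool \
  --cluster-name rookstorage \
  --run-as-user client.csi-rbd-node-rookstorage-replicapool
```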
2. Copy the bash output¶
Example Output:
Commands on the K8s consumer cluster¶
Import the Source Data¶
- Paste the above output from `create-external-cluster-resources.py` into your current shell to allow importing the source data.
- Run the import script.
Note

If your Rook cluster nodes are running a kernel version 5.4 or earlier, remove `fast-diff,object-map,deep-flatten,exclusive-lock` from the `imageFeatures` line.
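Assuming the exported variables from the previous step are present in the current shell, running the import script looks roughly like this (the script ships in the Rook repository's examples directory):

```shell
# Source the import script so it sees the exported ROOK_EXTERNAL_* variables
. import-external-cluster.sh
```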
Helm Installation¶
To install with Helm, the rook cluster helm chart will configure the necessary resources for the external cluster with the example `values-external.yaml`.
Skip the manifest installation section and continue with Cluster Verification.
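A sketch of the Helm install, assuming the Rook operator is already installed and the `rook-release` chart repository is reachable:

```shell
# Install the cluster chart with the external-cluster example values
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph-external \
  rook-ceph-cluster rook-release/rook-ceph-cluster \
  -f values-external.yaml
```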
Manifest Installation¶
If not installing with Helm, here are the steps to install with manifests.
- Deploy Rook by creating the common.yaml, crds.yaml and operator.yaml manifests.
- Create common-external.yaml and cluster-external.yaml.
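These steps can be sketched as follows (run from the examples directory of the Rook repository, where these manifests live):

```shell
# Deploy the Rook operator and its prerequisites
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

# Create the external cluster resources
kubectl create -f common-external.yaml
kubectl create -f cluster-external.yaml
```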
Cluster Verification¶
- Verify the consumer cluster is connected to the source Ceph cluster:
- Verify the creation of the storage classes, depending on the RBD pools and filesystem provided. `ceph-rbd` and `cephfs` would be the respective names for the RBD and CephFS storage classes.
- You can then create persistent volumes based on these StorageClasses.
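A sketch of the verification and a hypothetical test PVC (the namespace and PVC name are assumptions):

```shell
# The CephCluster should report that it is connected to the external cluster
kubectl -n rook-ceph-external get cephcluster

# Storage classes created from the provided RBD pool and filesystem
kubectl get storageclass ceph-rbd cephfs

# Hypothetical 1Gi PVC using the imported RBD storage class
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd
EOF
```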
Connect to an External Object Store¶
Create the object store resources:
- Create the external object store CR to configure connection to external gateways.
- Create an Object store user for credentials to access the S3 endpoint.
- Create a bucket storage class where a client can request creating buckets.
- Create the Object Bucket Claim, which will create an individual bucket for reading and writing objects.
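If you are working from the Rook example manifests, creating these resources might look like the following (the file names are the upstream examples; adjust to your own manifests):

```shell
# Object store connection, user, bucket storage class and bucket claim
kubectl create -f object-external.yaml
kubectl create -f object-user.yaml
kubectl create -f storageclass-bucket-delete.yaml
kubectl create -f object-bucket-claim-delete.yaml
```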
Hint
For more details see the Object Store topic
Connect to v2 mon port¶
If encryption or compression on the wire is needed, specify the v2 port. Check whether the v2 port is available with `ceph quorum_status`, then update the exported `ROOK_EXTERNAL_CEPH_MON_DATA` to use the v2 port `3300`.
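For example, an exported mon endpoint using the default v1 port `6789` can be rewritten to the v2 port with plain shell substitution (the mon name and IP below are placeholders):

```shell
# Placeholder mon data using the v1 port
ROOK_EXTERNAL_CEPH_MON_DATA="a=192.168.1.10:6789"

# Rewrite every endpoint to the v2 port 3300 and re-export
ROOK_EXTERNAL_CEPH_MON_DATA=$(printf '%s' "$ROOK_EXTERNAL_CEPH_MON_DATA" | sed 's/:6789/:3300/g')
export ROOK_EXTERNAL_CEPH_MON_DATA
echo "$ROOK_EXTERNAL_CEPH_MON_DATA"
```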
Exporting Rook to another cluster¶
If you have multiple K8s clusters running and want to use the local `rook-ceph` cluster as the central storage, you can export the settings from this cluster with the following steps.
1) Copy create-external-cluster-resources.py into the directory `/etc/ceph/` of the toolbox.
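One way to do this is with `kubectl cp` (the label selector below matches the standard Rook toolbox deployment; the local script path is an assumption):

```shell
# Look up the toolbox pod and copy the script into /etc/ceph/
toolbox=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph cp create-external-cluster-resources.py "$toolbox":/etc/ceph/
```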
Important
For other clusters to connect to storage in this cluster, Rook must be configured with a networking configuration that is accessible from other clusters. Most commonly this is done by enabling host networking in the CephCluster CR so the Ceph daemons will be addressable by their host IPs.