# Disaster Recovery
Under exceptional circumstances, steps may be necessary to recover cluster health. Several types of recovery are addressed in this document.
## Restoring Mon Quorum
Under exceptional circumstances, the mons may lose quorum. If the mons cannot form quorum again, there is a manual procedure to restore it. The only requirement is that at least one mon is still healthy. The following steps remove the unhealthy mons from quorum, let you form a quorum again with a single mon, and then grow the quorum back to the original size.
The Rook Krew Plugin has a command `restore-quorum` that will walk you through the automated mon quorum restoration process. If the name of the healthy mon is `c`, you would run the command:
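A minimal invocation, assuming the plugin is installed and exposed as `kubectl rook-ceph`:

```console
# Restore quorum using the healthy mon "c" as the single surviving mon
kubectl rook-ceph mons restore-quorum c
```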
See the restore-quorum documentation for more details.
## Restoring CRDs After Deletion
When the Rook CRDs are deleted, the Rook operator will respond to the deletion event and attempt to clean up the cluster resources. If any data appears to be present in the cluster, Rook will not allow the resources to be deleted: the operator will not remove the finalizers on the CRs until the underlying data is deleted. For more details, see the dependency design doc.
While it is good that the CRs will not be deleted and the underlying Ceph data and daemons remain available, the CRs will be stuck indefinitely in a `Deleting` state in which the operator will not continue to ensure cluster health. Upgrades will be blocked, further updates to the CRs are prevented, and so on. Since Kubernetes does not allow undeleting resources, the following procedure will restore the CRs to their prior state, typically without incurring any cluster downtime.
Note

In the following commands, the affected `CephCluster` resource is called `rook-ceph`. If yours is named differently, the commands will need to be adjusted.
- Scale down the operator.
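  For example, assuming the default operator deployment name and namespace:

  ```console
  kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas 0
  ```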
- Backup all Rook CRs and critical metadata.
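  A sketch of such a backup, assuming the cluster CR is named `rook-ceph` (the file names are arbitrary):

  ```console
  kubectl -n rook-ceph get cephcluster rook-ceph -o yaml > cluster.yaml
  kubectl -n rook-ceph get secret -o yaml > secrets.yaml
  kubectl -n rook-ceph get configmap -o yaml > configmaps.yaml
  ```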
- (Optional, if webhook is enabled) Delete the `ValidatingWebhookConfiguration`. This is the resource which connects Rook custom resources to the operator pod's validating webhook. Because the operator is unavailable, we must temporarily disable the validating webhook in order to make changes.

  ```console
  kubectl delete ValidatingWebhookConfiguration rook-ceph-webhook
  ```
- Remove the owner references from all critical Rook resources that were referencing the `CephCluster` CR.
  - Programmatically determine all such resources; one way is sketched below.
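    A sketch that prints, per resource type, the resources carrying a `CephCluster` owner reference (the types match the critical resources listed in the next step):

    ```console
    for kind in secret configmap service deployment pvc; do
      echo "--- ${kind} owned by the CephCluster CR ---"
      kubectl -n rook-ceph get "${kind}" \
        -o custom-columns='NAME:.metadata.name,OWNER:.metadata.ownerReferences[*].kind' \
        | grep CephCluster
    done
    ```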
  - Verify that all critical resources are shown in the output. The critical resources are these:
    - Secrets: `rook-ceph-admin-keyring`, `rook-ceph-config`, `rook-ceph-mon`, `rook-ceph-mons-keyring`
    - ConfigMap: `rook-ceph-mon-endpoints`
    - Services: `rook-ceph-mon-*`, `rook-ceph-mgr-*`
    - Deployments: `rook-ceph-mon-*`, `rook-ceph-osd-*`, `rook-ceph-mgr-*`
    - PVCs (if applicable): `rook-ceph-mon-*` and the OSD PVCs (named `<deviceset>-*`, for example `set1-data-*`)
  - For each listed resource, remove the `ownerReferences` metadata field in order to unlink it from the deleting `CephCluster` CR. To do so programmatically, use a patch like the sketch below. For a manual alternative, issue `kubectl edit` on each resource and remove the `metadata.ownerReferences` block that references the `CephCluster`.
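    A sketch that clears `ownerReferences` on the resource types listed above. Note that it clears the field on every such resource in the namespace, so review the listing from the previous step first:

    ```console
    for resource in $(kubectl -n rook-ceph get secret,configmap,service,deployment,pvc -o name); do
      kubectl -n rook-ceph patch "${resource}" -p '{"metadata":{"ownerReferences":null}}'
    done
    ```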
- Before completing this step, validate these things. Failing to do so could result in data loss.
  - Confirm that `cluster.yaml` contains the `CephCluster` CR.
  - Confirm all critical resources listed above have had the `ownerReference` to the `CephCluster` CR removed.

  Then remove the finalizer from the `CephCluster` resource (see the sketch below). After the finalizer is removed, the `CephCluster` will be immediately deleted by Kubernetes. If all owner references were properly removed, all Ceph daemons will continue running and there will be no downtime.
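  A minimal sketch of the finalizer removal, assuming the CR is named `rook-ceph`:

  ```console
  kubectl -n rook-ceph patch cephcluster/rook-ceph --type json \
    --patch='[{ "op": "remove", "path": "/metadata/finalizers" }]'
  ```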
- Create the `CephCluster` CR with the same settings as previously.
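  If the CR was saved to `cluster.yaml` in the backup step, this can be as simple as:

  ```console
  kubectl create -f cluster.yaml
  ```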
- If there are other CRs in a terminating state, such as CephBlockPools, CephObjectStores, or CephFilesystems, follow the above steps for those CRs as well:
  - Backup the CR
  - Remove the finalizer and confirm the CR is deleted (the underlying Ceph resources will be preserved)
  - Create the CR again
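  For example, for a hypothetical `CephBlockPool` named `replicapool` (substitute your own CR kind and name):

  ```console
  kubectl -n rook-ceph get cephblockpool replicapool -o yaml > blockpool.yaml
  kubectl -n rook-ceph patch cephblockpool/replicapool --type json \
    --patch='[{ "op": "remove", "path": "/metadata/finalizers" }]'
  kubectl create -f blockpool.yaml
  ```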
- Scale up the operator.
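  For example, mirroring the scale-down command from the first step:

  ```console
  kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas 1
  ```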
- Watch the operator log to confirm that the reconcile completes successfully.
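  One way to follow the log, assuming the default deployment name:

  ```console
  kubectl -n rook-ceph logs -f deploy/rook-ceph-operator
  ```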
## Adopt an existing Rook Ceph cluster into a new Kubernetes cluster
Situations this section can help resolve:
- The Kubernetes environment underlying a running Rook Ceph cluster failed catastrophically, requiring a new Kubernetes environment in which the user wishes to recover the previous Rook Ceph cluster.
- The user wishes to migrate their existing Rook Ceph cluster to a new Kubernetes environment, and downtime can be tolerated.
### Prerequisites
- A working Kubernetes cluster to which we will migrate the previous Rook Ceph cluster.
- At least one Ceph mon db from the old cluster is available, and a sufficient number of Ceph OSDs were `up` and `in` before the disaster.
- The previous Rook Ceph cluster is not running.
### Overview for Steps below
- Start a new and clean Rook Ceph cluster, with the old `CephCluster`, `CephBlockPool`, `CephFilesystem`, `CephNFS`, and `CephObjectStore` descriptors.
- Shut the new cluster down when it has been created successfully.
- Replace ceph-mon data with that of the old cluster.
- Replace the `fsid` in `secrets/rook-ceph-mon` with that of the old one.
- Fix monmap in ceph-mon db.
- Fix ceph mon auth key.
- Disable auth.
- Start the new cluster, watch it resurrect.
- Fix admin auth key, and enable auth.
- Restart the cluster for the final time.
### Steps
Assuming `dataDirHostPath` is `/var/lib/rook` and the `CephCluster` to adopt is named `rook-ceph`:

- Make sure the old Kubernetes cluster is completely torn down and the new Kubernetes cluster is up and running without Rook Ceph.
- Backup `/var/lib/rook` on all the Rook Ceph nodes to a different directory. The backups will be used later.
- Pick a `/var/lib/rook/rook-ceph/rook-ceph.config` from any previous Rook Ceph node and save the old cluster `fsid` from its content.
- Remove `/var/lib/rook` from all the Rook Ceph nodes.
- Add an identical `CephCluster` descriptor to the new Kubernetes cluster, especially identical `spec.storage.config` and `spec.storage.nodes`, except `mon.count`, which should be set to `1`.
- Add identical `CephFilesystem`, `CephBlockPool`, `CephNFS`, and `CephObjectStore` descriptors (if any) to the new Kubernetes cluster.
- Install Rook Ceph in the new Kubernetes cluster.
- Watch the operator logs with `kubectl -n rook-ceph logs -f rook-ceph-operator-xxxxxxx`, and wait until the orchestration has settled.
- STATE: The cluster will now have `rook-ceph-mon-a`, `rook-ceph-mgr-a`, and all the auxiliary pods up and running, and zero (hopefully) `rook-ceph-osd-ID-xxxxxx` running. `ceph -s` output should report 1 mon and 1 mgr running, all of the OSDs down, and all PGs in `unknown` state. Rook should not start any OSD daemon since all devices belong to the old cluster (which has a different `fsid`).
- Run `kubectl -n rook-ceph exec -it rook-ceph-mon-a-xxxxxxxx bash` to enter the `rook-ceph-mon-a` pod and save the content of the mon keyring for later use (in Rook mon pods it is typically mounted at `/etc/ceph/keyring-store/keyring`; adjust if yours differs).
- Stop the Rook operator by running `kubectl -n rook-ceph edit deploy/rook-ceph-operator` and setting `replicas` to `0`.
- Stop the cluster daemons by running `kubectl -n rook-ceph delete deploy/X`, where X is every deployment in namespace `rook-ceph` except `rook-ceph-operator` and `rook-ceph-tools`.
- Save the `rook-ceph-mon-a` address with `kubectl -n rook-ceph get cm/rook-ceph-mon-endpoints -o yaml` in the new Kubernetes cluster for later use.
- SSH to the host where `rook-ceph-mon-a` in the new Kubernetes cluster resides.
  - Remove `/var/lib/rook/mon-a`.
  - Pick a healthy `rook-ceph-mon-ID` directory (`/var/lib/rook/mon-ID`) from the previous backup and copy it to `/var/lib/rook/mon-a`. `ID` is any healthy mon node ID of the old cluster.
  - Replace `/var/lib/rook/mon-a/keyring` with the saved keyring, preserving only the `[mon.]` section; remove the `[client.admin]` section.
  - Run `docker run -it --rm -v /var/lib/rook:/var/lib/rook ceph/ceph:v14.2.1-20190430 bash`. The Docker image tag should match the Ceph version used in the Rook cluster. The `/etc/ceph/ceph.conf` file needs to exist for `ceph-mon` to work. Inside this container, fix the monmap in the copied mon db so that it contains only `mon-a` at the saved address (see the monmap sketch after this list).
- Tell Rook to run as the old cluster by running `kubectl -n rook-ceph edit secret/rook-ceph-mon` and changing `fsid` to the original `fsid`. Note that the `fsid` is base64 encoded and must not contain a trailing carriage return. For example, the encoded value can be produced with `echo -n <old-fsid> | base64` (the `-n` avoids a trailing newline).
- Disable authentication by running `kubectl -n rook-ceph edit cm/rook-config-override` and adding a `[global]` section to the override config that disables cephx (for example, `auth cluster required = none`, `auth service required = none`, `auth client required = none`, and `auth supported = none`).
- Bring the Rook Ceph operator back online by running `kubectl -n rook-ceph edit deploy/rook-ceph-operator` and setting `replicas` to `1`.
- Watch the operator logs with `kubectl -n rook-ceph logs -f rook-ceph-operator-xxxxxxx`, and wait until the orchestration has settled.
- STATE: The new cluster should now be up and running with authentication disabled. `ceph -s` should report 1 mon, 1 mgr, and all of the OSDs up and running, and all PGs in either `active` or `degraded` state.
- Run `kubectl -n rook-ceph exec -it rook-ceph-tools-XXXXXXX bash` to enter the tools pod and restore the old admin auth key (see the keyring import sketch after this list).
- Re-enable authentication by running `kubectl -n rook-ceph edit cm/rook-config-override` and removing the auth configuration added in the previous steps.
- Stop the Rook operator by running `kubectl -n rook-ceph edit deploy/rook-ceph-operator` and setting `replicas` to `0`.
- Shut down the entire new cluster by running `kubectl -n rook-ceph delete deploy/X` again, where X is every deployment in namespace `rook-ceph` except `rook-ceph-operator` and `rook-ceph-tools`. This time the OSD daemons are present and should be removed too.
- Bring the Rook Ceph operator back online by running `kubectl -n rook-ceph edit deploy/rook-ceph-operator` and setting `replicas` to `1`.
- Watch the operator logs with `kubectl -n rook-ceph logs -f rook-ceph-operator-xxxxxxx`, and wait until the orchestration has settled.
- STATE: The new cluster should now be up and running with authentication enabled. The `ceph -s` output should not change much compared to the previous steps.
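The monmap fix referenced in the `docker run` step above, executed inside the `ceph/ceph` container on the mon host. This is a sketch: the mon IDs being removed and the IP address being added are examples; use the old cluster's mon IDs and the `rook-ceph-mon-a` address saved from `rook-ceph-mon-endpoints`.

```console
# ceph-mon refuses to run without a config file, even an empty one
touch /etc/ceph/ceph.conf
cd /var/lib/rook
# Extract the monmap from the old mon db that was copied into mon-a
ceph-mon --extract-monmap monmap --mon-data ./mon-a/data
monmaptool --print monmap      # inspect the old cluster's mon configuration
monmaptool --rm a monmap       # remove every old mon entry (repeat per mon ID)
monmaptool --rm b monmap
monmaptool --rm c monmap
# Re-add mon "a" with the address saved from rook-ceph-mon-endpoints (example IP shown)
monmaptool --addv a '[v2:10.77.2.216:3300,v1:10.77.2.216:6789]' monmap
# Inject the fixed monmap back into the mon db
ceph-mon --inject-monmap monmap --mon-data ./mon-a/data
```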
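The keyring import referenced in the tools pod step above, executed inside the tools pod. This is a sketch; paste only the `[client.admin]` section of the keyring saved from the old cluster.

```console
# Create a temporary file with the old [client.admin] section, then import it
vi key                     # paste the [client.admin] section of the saved keyring
ceph auth import -i key
rm key
```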
## Backing up and restoring a cluster based on PVCs into a new Kubernetes cluster
It is possible to migrate/restore a Rook/Ceph cluster from an existing Kubernetes cluster to a new one without resorting to SSH access or Ceph tooling. This allows doing the migration using standard Kubernetes resources only. This guide assumes the following:
- You have a CephCluster that uses PVCs to persist mon and OSD data. Let's call it the "old cluster".
- You can restore the PVCs as-is in the new cluster. Usually this is done by taking regular snapshots of the PVC volumes and using a tool that can re-create PVCs from these snapshots in the underlying cloud provider. Velero is one such tool.
- You have regular backups of the secrets and configmaps in the rook-ceph namespace. Velero provides this functionality too.
Do the following in the new cluster:
- Stop the rook operator by scaling the deployment `rook-ceph-operator` down to zero (`kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas 0`) and deleting the other deployments. An example command to do this is `kubectl -n rook-ceph delete deployment -l operator!=rook`.
- Restore the rook PVCs to the new cluster.
- Copy the keyring and fsid secrets from the old cluster: `rook-ceph-mgr-a-keyring`, `rook-ceph-mon`, `rook-ceph-mons-keyring`, `rook-ceph-osd-0-keyring`, ...
- Delete mon services and copy them from the old cluster: `rook-ceph-mon-a`, `rook-ceph-mon-b`, ... (see the sketch after this list). Note that simply re-applying won't work because the goal here is to restore the `clusterIP` in each service, and this field is immutable in `Service` resources.
- Copy the endpoints configmap from the old cluster: `rook-ceph-mon-endpoints`
- Scale the rook operator up again: `kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas 1`
- Wait until the reconciliation is over.
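A sketch of the mon `Service` restore referenced above, assuming each old mon service was backed up to a YAML manifest (for example via Velero or `kubectl get -o yaml`). The delete-then-create is needed because `clusterIP` cannot be changed on an existing `Service`:

```console
# Repeat for each mon service; the manifest file name is an example
kubectl -n rook-ceph delete service rook-ceph-mon-a
kubectl -n rook-ceph create -f rook-ceph-mon-a-service.yaml
```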