
Rook Upgrades

This guide will walk through the steps to upgrade the software in a Rook cluster from one version to the next. This guide focuses on updating the Rook version for the management layer, while the Ceph upgrade guide focuses on updating the data layer.

Upgrades for both the operator and for Ceph are entirely automated except where Rook's permissions need to be explicitly updated by an admin or when incompatibilities need to be addressed manually due to customizations.

We welcome feedback and opening issues!

Supported Versions

This guide is for upgrading from Rook v1.12.x to Rook v1.13.x.

Please refer to the upgrade guides from previous releases for supported upgrade paths. Rook upgrades are only supported between official releases.

For a guide to upgrade previous versions of Rook, please refer to the version of documentation for those releases.

Important

Rook releases from master are expressly unsupported. It is strongly recommended to use official releases of Rook. Unreleased versions from the master branch are subject to changes and incompatibilities that will not be supported in the official releases. Builds from the master branch can have functionality changed or removed at any time without compatibility support and without prior notice.

Breaking changes in v1.13

  • The minimum supported version of Kubernetes is v1.23. Upgrade to Kubernetes v1.23 or higher before upgrading Rook.
  • The minimum supported version of Ceph is v17.2.0. If a lower version is currently deployed, upgrade Ceph before upgrading Rook (a quick version check is sketched after this list).
  • The CephCSI CephFS driver introduced a breaking change in v3.9.0. If any existing CephFS storageclass in the cluster has the MountOptions parameter set, follow the steps mentioned in the CephCSI upgrade guide to ensure a smooth upgrade. CephCSI v3.9.0 became the default CSI version in Rook v1.12.1, so this may have already been addressed.
  • Support for the admission controller has been removed. CRD validation is now enabled with Validating Admission Policies. Validating Admission Policy rules are ignored in Kubernetes v1.24 and lower. If the admission controller is enabled, it is advised to upgrade to Kubernetes v1.25 or higher before upgrading Rook. For more info, see https://github.com/rook/rook/pull/11532.
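
As a quick pre-flight check before the upgrade, the currently deployed Kubernetes and Ceph versions can be inspected. This is a sketch that assumes the default rook-ceph namespace and cluster name:

# Kubernetes server version must be v1.23 or higher
kubectl version
# Ceph daemon versions must be v17.2.0 or higher (assumes the default rook-ceph namespace and cluster name)
kubectl -n rook-ceph get deployments -l rook_cluster=rook-ceph -o jsonpath='{range .items[*]}{"ceph-version="}{.metadata.labels.ceph-version}{"\n"}{end}' | sort | uniq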

Considerations

With this upgrade guide, there are a few notes to consider:

  • WARNING: Upgrading a Rook cluster is not without risk. There may be unexpected issues or obstacles that damage the integrity and health of the storage cluster, including data loss.
  • The Rook cluster's storage may be unavailable for short periods during the upgrade process for both Rook operator updates and for Ceph version updates.
  • Read this document in full before undertaking a Rook cluster upgrade.

Patch Release Upgrades

Unless otherwise noted due to extenuating requirements, upgrades from one patch release of Rook to another are as simple as updating the common resources and the image of the Rook operator. For example, when Rook v1.13.1 is released, the process of updating from v1.13.0 is as simple as running the following:

git clone --single-branch --depth=1 --branch v1.13.1 https://github.com/rook/rook.git
cd rook/deploy/examples

If the Rook Operator or CephCluster is deployed into a namespace other than rook-ceph, see the Update common resources and CRDs section for instructions on how to change the default namespaces in common.yaml.

Then, apply the latest changes from v1.13, and update the Rook Operator image.

kubectl apply -f common.yaml -f crds.yaml
kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.13.1

As exemplified above, it is a good practice to update Rook common resources from the example manifests before any update. The common resources and CRDs might not be updated with every release, but Kubernetes will only apply updates to the ones that changed.

Also update optional resources like Prometheus monitoring, as noted more fully in the upgrade section below.

Helm

If Rook is installed via the Helm chart, Helm will handle some details of the upgrade itself. The upgrade steps in this guide will clarify what Helm handles automatically.

The rook-ceph helm chart upgrade performs the Rook upgrade. The rook-ceph-cluster helm chart upgrade performs a Ceph upgrade if the Ceph image is updated. The rook-ceph chart should be upgraded before rook-ceph-cluster, so the latest operator has the opportunity to update custom resources as necessary.
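
For example, if the charts were installed from the rook-release Helm repository into the rook-ceph namespace, the upgrade might look like the following sketch. The operator-values.yaml and cluster-values.yaml files are placeholders for whatever values were used at install time:

# Upgrade the operator chart first, then the cluster chart (values files are placeholders for your own)
helm repo update
helm upgrade -n rook-ceph rook-ceph rook-release/rook-ceph --version v1.13.0 -f operator-values.yaml
helm upgrade -n rook-ceph rook-ceph-cluster rook-release/rook-ceph-cluster --version v1.13.0 -f cluster-values.yaml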

Note

Be sure to update to a supported Helm version.

Cluster Health

In order to successfully upgrade a Rook cluster, the following prerequisites must be met:

  • The cluster should be in a healthy state with full functionality. Review the health verification guide in order to verify a CephCluster is in a good starting state (a quick spot check is also sketched after this list).
  • All pods consuming Rook storage should be created, running, and in a steady state.
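
As a quick spot check of cluster health before starting, Ceph's own status can be queried, assuming the toolbox pod is deployed:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status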

Rook Operator Upgrade

The examples given in this guide upgrade a live Rook cluster running v1.12.9 to version v1.13.0. This upgrade should work from any official patch release of Rook v1.12 to any official patch release of v1.13.

Let's get started!

Environment

These instructions will work as long as the environment is parameterized correctly. Set the following environment variables, which will be used throughout this document.

# Parameterize the environment
export ROOK_OPERATOR_NAMESPACE=rook-ceph
export ROOK_CLUSTER_NAMESPACE=rook-ceph

1. Update common resources and CRDs

Hint

Common resources and CRDs are automatically updated when using Helm charts.

First, apply updates to Rook common resources. This includes modified privileges (RBAC) needed by the Operator. Also update the Custom Resource Definitions (CRDs).

Get the latest common resources manifests that contain the latest changes.

git clone --single-branch --depth=1 --branch v1.13.0 https://github.com/rook/rook.git
cd rook/deploy/examples

If the Rook Operator or CephCluster is deployed into a namespace other than rook-ceph, update the common resource manifests to use your ROOK_OPERATOR_NAMESPACE and ROOK_CLUSTER_NAMESPACE using sed.

sed -i.bak \
    -e "s/\(.*\):.*# namespace:operator/\1: $ROOK_OPERATOR_NAMESPACE # namespace:operator/g" \
    -e "s/\(.*\):.*# namespace:cluster/\1: $ROOK_CLUSTER_NAMESPACE # namespace:cluster/g" \
  common.yaml

Apply the resources.

kubectl apply -f common.yaml -f crds.yaml

Prometheus Updates

If Prometheus monitoring is enabled, follow this step to upgrade the Prometheus RBAC resources as well.

kubectl apply -f deploy/examples/monitoring/rbac.yaml

2. Update the Rook Operator

Hint

The operator is automatically updated when using Helm charts.

The largest portion of the upgrade is triggered when the operator's image is updated to v1.13.x. When the operator is updated, it will proceed to update all of the Ceph daemons.

kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.13.0

3. Update Ceph CSI

Hint

This is automatically updated if custom CSI image versions are not set.

Important

The minimum supported version of Ceph-CSI is v3.8.0.

Update to the latest Ceph-CSI drivers if custom CSI images are specified. See the CSI Custom Images documentation.
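To see which CSI driver images are currently running, the images of the CSI pods can be listed. This is a sketch that assumes the CSI pods run in the default rook-ceph namespace:

kubectl -n rook-ceph get pods -l 'app in (csi-rbdplugin,csi-rbdplugin-provisioner,csi-cephfsplugin,csi-cephfsplugin-provisioner)' -o jsonpath='{range .items[*]}{range .spec.containers[*]}{.image}{"\n"}{end}{end}' | sort | uniq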

Note

If using snapshots, refer to the Upgrade Snapshot API guide.

4. Wait for the upgrade to complete

Watch now in amazement as the Ceph mons, mgrs, OSDs, rbd-mirrors, MDSes and RGWs are terminated and replaced with updated versions in sequence. The cluster may be unresponsive very briefly as mons update, and the Ceph Filesystem may fall offline a few times while the MDSes are upgrading. This is normal.

The versions of the components can be viewed as they are updated:

watch --exec kubectl -n $ROOK_CLUSTER_NAMESPACE get deployments -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{.metadata.name}{"  \treq/upd/avl: "}{.spec.replicas}{"/"}{.status.updatedReplicas}{"/"}{.status.readyReplicas}{"  \trook-version="}{.metadata.labels.rook-version}{"\n"}{end}'

As an example, this cluster is midway through updating the OSDs. When all deployments report 1/1/1 availability and rook-version=v1.13.0, the Ceph cluster's core components are fully updated.

Every 2.0s: kubectl -n rook-ceph get deployment -o j...

rook-ceph-mgr-a         req/upd/avl: 1/1/1      rook-version=v1.13.0
rook-ceph-mon-a         req/upd/avl: 1/1/1      rook-version=v1.13.0
rook-ceph-mon-b         req/upd/avl: 1/1/1      rook-version=v1.13.0
rook-ceph-mon-c         req/upd/avl: 1/1/1      rook-version=v1.13.0
rook-ceph-osd-0         req/upd/avl: 1//        rook-version=v1.13.0
rook-ceph-osd-1         req/upd/avl: 1/1/1      rook-version=v1.12.9
rook-ceph-osd-2         req/upd/avl: 1/1/1      rook-version=v1.12.9

An easy check to see if the upgrade is totally finished is to check that there is only one rook-version reported across the cluster.

# kubectl -n $ROOK_CLUSTER_NAMESPACE get deployment -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{"rook-version="}{.metadata.labels.rook-version}{"\n"}{end}' | sort | uniq
This cluster is not yet finished:
  rook-version=v1.12.9
  rook-version=v1.13.0
This cluster is finished:
  rook-version=v1.13.0

5. Verify the updated cluster

At this point, the Rook operator should be running version rook/ceph:v1.13.0.
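One way to confirm the operator image, using the environment variable set earlier:

kubectl -n $ROOK_OPERATOR_NAMESPACE get deploy rook-ceph-operator -o jsonpath='{.spec.template.spec.containers[0].image}'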

Verify the CephCluster health using the health verification doc.

6. Disable CSI holder pods

CSI "holder" pods are frequently reported objects of confusion and struggle in Rook. Because of this, they are being deprecated and will be removed in Rook v1.16.

If any CephCluster uses the non-default network setting network.provider: "multus", or if the operator config CSI_ENABLE_HOST_NETWORK is set to "false", perform the migration steps to remove holder pods: first follow the Modifying CSI Networking migration guide, then set CSI_REMOVE_HOLDER_PODS: "true".
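
Only after completing that migration guide, the setting can be applied to the operator configuration. A minimal sketch, assuming the setting is managed through the rook-ceph-operator-config ConfigMap:

kubectl -n $ROOK_OPERATOR_NAMESPACE patch configmap rook-ceph-operator-config --type merge -p '{"data":{"CSI_REMOVE_HOLDER_PODS":"true"}}'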