# Quickstart
Welcome to Rook! We hope you have a great experience installing Rook, the cloud-native storage orchestrator, to enable highly available, durable Ceph storage in your Kubernetes cluster.
If you have any questions along the way, please don't hesitate to ask us in our Slack channel. You can sign up for our Slack here.
This guide will walk you through the basic setup of a Ceph cluster and enable you to consume block, object, and file storage from other pods running in your cluster.
Warning
Always use a virtual machine when testing Rook. Never use your host system where local devices may mistakenly be consumed.
## Minimum Version
Rook supports Kubernetes v1.17 or higher.
CPU Architecture¶
Architectures released are amd64 / x86_64
and arm64
.
## Prerequisites
To make sure you have a Kubernetes cluster that is ready for Rook, you can follow these instructions.
To configure the Ceph storage cluster, at least one of these local storage options is required (a quick way to check whether a device is empty follows the list):
- Raw devices (no partitions or formatted filesystems)
- Raw partitions (no formatted filesystem)
- LVM Logical Volumes (no formatted filesystem)
- Persistent Volumes available from a storage class in `block` mode
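For example, `lsblk -f` is a common way to confirm whether a device is empty (device names and output here are illustrative):

```console
$ lsblk -f
NAME   FSTYPE LABEL UUID                     MOUNTPOINT
vda
└─vda1 ext4         ...                      /
vdb
```

A device with an empty `FSTYPE` column (here `vdb`) has no filesystem and can be consumed by a Ceph OSD, while `vda1` is in use by the host.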
## TL;DR
A simple Rook cluster can be created with the following kubectl commands and example manifests.
After the cluster is running, you can create block, object, or file storage to be consumed by other applications in your cluster.
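For illustration, a minimal sequence looks like the following sketch. The branch and manifest paths are assumptions based on recent Rook releases; substitute the ones matching your Rook version (in practice, pin to a release branch or tag rather than `master`):

```console
$ git clone --single-branch --branch master https://github.com/rook/rook.git
$ cd rook/deploy/examples
$ kubectl create -f crds.yaml -f common.yaml -f operator.yaml
$ kubectl create -f cluster.yaml
```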
## Deploy the Rook Operator
The first step is to deploy the Rook operator. Check that you are using the example yaml files that correspond to your release of Rook. For more options, see the example configurations documentation.
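A sketch of the operator deployment, assuming the example manifests from a checkout of the Rook repository as in the TL;DR above:

```console
$ kubectl create -f crds.yaml -f common.yaml -f operator.yaml

# Verify the rook-ceph-operator pod is Running before proceeding
$ kubectl -n rook-ceph get pod
```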
You can also deploy the operator with the Rook Helm Chart.
Before you start the operator in production, there are some settings that you may want to consider:
- Consider if you want to enable certain Rook features that are disabled by default. See the `operator.yaml` for these and other advanced settings.
- Device discovery: Rook will watch for new devices to configure if the `ROOK_ENABLE_DISCOVERY_DAEMON` setting is enabled, commonly used in bare metal clusters.
- Node affinity and tolerations: The CSI driver by default will run on any node in the cluster. To configure the CSI driver affinity, several settings are available.
If you wish to deploy into a namespace other than the default `rook-ceph`, see the Ceph advanced configuration section on the topic.
## Cluster Environments
The Rook documentation is focused on starting Rook in a production environment. Examples are also provided to relax some settings for test environments. When creating the cluster later in this guide, consider these example cluster manifests:
- `cluster.yaml`: Cluster settings for a production cluster running on bare metal. Requires at least three worker nodes.
- `cluster-on-pvc.yaml`: Cluster settings for a production cluster running in a dynamic cloud environment.
- `cluster-test.yaml`: Cluster settings for a test environment such as minikube.
See the Ceph example configurations for more details.
## Create a Ceph Cluster
Now that the Rook operator is running, we can create the Ceph cluster. For the cluster to survive reboots, make sure you set the `dataDirHostPath` property to a path that is valid for your hosts. For more settings, see the documentation on configuring the cluster.
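For example, in the CephCluster manifest (`/var/lib/rook` is the default used by the example manifests; pick a path that persists across reboots on your hosts):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # Host path where Rook persists mon and cluster configuration
  dataDirHostPath: /var/lib/rook
```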
Create the cluster:
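Assuming the example `cluster.yaml` manifest from the repository checkout above:

```console
$ kubectl create -f cluster.yaml
```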
Use `kubectl` to list pods in the `rook-ceph` namespace. You should be able to see the following pods once they are all running. The number of OSD pods will depend on the number of nodes in the cluster and the number of devices configured. If you did not modify the `cluster.yaml` above, it is expected that one OSD will be created per node.
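For example (the output below is illustrative; pod name suffixes are elided, and counts will vary with your cluster):

```console
$ kubectl -n rook-ceph get pod
NAME                        READY   STATUS    RESTARTS   AGE
rook-ceph-mgr-a-...         1/1     Running   0          4m
rook-ceph-mon-a-...         1/1     Running   0          5m
rook-ceph-mon-b-...         1/1     Running   0          5m
rook-ceph-mon-c-...         1/1     Running   0          4m
rook-ceph-operator-...      1/1     Running   0          6m
rook-ceph-osd-0-...         1/1     Running   0          3m
```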
Hint
If the `rook-ceph-mon`, `rook-ceph-mgr`, or `rook-ceph-osd` pods are not created, please refer to the Ceph common issues for more details and potential solutions.
To verify that the cluster is in a healthy state, connect to the Rook toolbox and run the `ceph status` command (a sketch of these steps follows the checklist below).
- All mons should be in quorum
- A mgr should be active
- At least one OSD should be active
- If the health is not `HEALTH_OK`, the warnings or errors should be investigated
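A sketch of those steps, assuming the `toolbox.yaml` example manifest and the `rook-ceph-tools` deployment it creates:

```console
$ kubectl create -f toolbox.yaml
$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
```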
If the cluster is not healthy, please refer to the Ceph common issues for more details and potential solutions.
## Storage
For a walkthrough of the three types of storage exposed by Rook, see the guides for the following (a minimal block example appears after the list):
- Block: Create block storage to be consumed by a pod (RWO)
- Shared Filesystem: Create a filesystem to be shared across multiple pods (RWX)
- Object: Create an object store that is accessible inside or outside the Kubernetes cluster
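As a minimal sketch of the block path, based on the example manifests (the pool and StorageClass names below are the examples' defaults, not requirements):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
```

A PVC that requests `storageClassName: rook-ceph-block` in `ReadWriteOnce` mode will then be provisioned as an RBD image in the pool.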
## Ceph Dashboard
Ceph has a dashboard in which you can view the status of your cluster. Please see the dashboard guide for more details.
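For example, to reach the dashboard from your workstation and to retrieve the generated `admin` password (the service and secret names are the defaults Rook creates):

```console
$ kubectl -n rook-ceph port-forward svc/rook-ceph-mgr-dashboard 8443:8443
$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
    -o jsonpath="{['data']['password']}" | base64 --decode
```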
## Tools
Create a toolbox pod for full access to a Ceph admin client for debugging and troubleshooting your Rook cluster. Please see the toolbox documentation for setup and usage information. Also see our advanced configuration document for helpful maintenance and tuning examples.
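Once inside the toolbox pod, the standard Ceph CLI is available; a few commonly useful commands:

```console
$ ceph status
$ ceph osd status
$ ceph df
$ rados df
```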
## Monitoring
Each Rook cluster has some built-in metrics collectors/exporters for monitoring with Prometheus. To learn how to set up monitoring for your Rook cluster, you can follow the steps in the monitoring guide.
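As a sketch, metrics collection is toggled on the CephCluster CR; this assumes the Prometheus Operator and its CRDs are already installed in the cluster, since Rook then creates a `ServiceMonitor` for the Ceph mgr:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  monitoring:
    # Requires the Prometheus Operator CRDs to already be present
    enabled: true
```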
## Telemetry
To allow us to understand usage, the maintainers for Rook and Ceph would like to receive telemetry reports for Rook clusters. The data is anonymous and does not include any identifying information. We invite you to enable the telemetry reporting feature with the following command in the toolbox:
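```console
$ ceph telemetry on
```

Depending on your Ceph version, you may be asked to review the data-sharing license and re-run the command with `--license sharing-1-0` to confirm.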
Telemetry is disabled by default. For more details on what is reported and how your privacy is protected, see the Ceph Telemetry Documentation.
## Teardown
When you are done with the test cluster, see these instructions to clean up the cluster.
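At a high level, for a test cluster created from the example manifests, teardown looks like the following sketch (the cleanup guide covers important details such as wiping disks; `/var/lib/rook` assumes the default `dataDirHostPath`):

```console
$ kubectl delete -f cluster.yaml
$ kubectl delete -f operator.yaml -f common.yaml -f crds.yaml

# Then, on each host, remove Rook's persisted state:
$ rm -rf /var/lib/rook
```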