A host storage cluster is one where Rook configures Ceph to store data directly on the host. The Ceph mons will store the metadata on the host (at a path defined by `dataDirHostPath`), and the OSDs will consume raw devices or partitions.
The Ceph persistent data is stored directly on a host path (Ceph Mons) and on raw devices (Ceph OSDs).
To get you started, here are several examples of the Cluster CR to configure the host.
For the simplest possible configuration, this example specifies that all devices or partitions should be consumed by Ceph. The mons will store the metadata on the host node under `/var/lib/rook`.
```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # see the "Cluster Settings" section below for more details on which image of ceph to run
    image: quay.io/ceph/ceph:v17.2.1
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  storage:
    useAllNodes: true
    useAllDevices: true
```
More commonly, you will want to be more specific about the nodes and devices where Rook should configure the storage. The placement settings are very flexible and can add node affinity, anti-affinity, or tolerations. For more options, see the placement documentation.
In this example, Rook will only configure Ceph daemons to run on nodes that are labeled with `role=rook-node`, and more specifically the OSDs will only be created on nodes labeled with `role=rook-osd-node`.
```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v17.2.1
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
  # cluster level storage configuration and selection
  storage:
    useAllNodes: true
    useAllDevices: true
    # Only create OSDs on devices that match the regular expression filter, "sdb" in this example
    deviceFilter: sdb
  # To control where various services will be scheduled by kubernetes, use the placement configuration sections below.
  # The example under 'all' would have all services scheduled on kubernetes nodes labeled with 'role=rook-node' and
  # the OSDs would specifically only be created on nodes labeled with 'role=rook-osd-node'.
  placement:
    all:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: role
              operator: In
              values:
              - rook-node
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: role
              operator: In
              values:
              - rook-osd-node
```
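Since `deviceFilter` is a regular expression matched against device names, it can be worth sanity-checking a pattern against the device names you expect before applying the CR. A minimal local sketch (no cluster required; the device names below are illustrative):

```shell
# deviceFilter is matched as a regular expression against device names.
# Note that "sdb" is unanchored, so it also matches partitions such as sdb1;
# anchor the pattern (e.g. "^sdb$") if you want to match only the whole device.
filter='sdb'
matched=""
for dev in sda sdb sdb1 nvme0n1; do
  if echo "$dev" | grep -q -E "$filter"; then
    matched="$matched $dev"
  fi
done
echo "devices matching '$filter':$matched"
```

With the unanchored filter above, both `sdb` and `sdb1` match, which may or may not be what you intend.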
If you need fine-grained control over every node and every device that is being configured, individual nodes and their config can be specified. In this example, specific node names and devices are listed explicitly.
Hint
Each node's `name` field should match its `kubernetes.io/hostname` label.
```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v17.2.1
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
  # cluster level storage configuration and selection
  storage:
    useAllNodes: false
    useAllDevices: false
    deviceFilter:
    config:
      metadataDevice:
      databaseSizeMB: "1024" # this value can be removed for environments with normal sized disks (100 GB or larger)
    nodes:
    - name: "172.17.4.201"
      devices: # specific devices to use for storage can be specified for each node
      - name: "sdb" # Whole storage device
      - name: "sdc1" # One specific partition. Should not have a file system on it.
      - name: "/dev/disk/by-id/ata-ST4000DM004-XXXX" # both device name and explicit udev links are supported
      config: # configuration can be specified at the node level which overrides the cluster level config
    - name: "172.17.4.301"
      deviceFilter: "^sd."
```
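As the example notes, a device can be named either by its kernel name (`sdb`) or by a stable udev link under `/dev/disk/by-id/`, which is simply a symlink to the kernel device. A minimal sketch of how such a link resolves, using a temporary directory rather than real hardware (the by-id name is the illustrative one from the example above, not a real disk):

```shell
# Stable by-id names survive reboots and device reordering, while kernel
# names like "sdb" may not; both ultimately refer to the same device.
tmp=$(mktemp -d)
mkdir -p "$tmp/disk/by-id"
# Simulate the udev symlink: by-id name -> kernel device name
ln -s ../../sdb "$tmp/disk/by-id/ata-ST4000DM004-XXXX"
resolved=$(basename "$(readlink -f "$tmp/disk/by-id/ata-ST4000DM004-XXXX")")
echo "ata-ST4000DM004-XXXX resolves to: $resolved"
rm -rf "$tmp"
```

Preferring by-id links in the `devices` list makes the node config robust against kernel device names changing across reboots.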