Block Device Quick Start
To use this guide, you must have executed the procedures in the Storage
Cluster Quick Start guide first. Ensure your Ceph Storage Cluster is
in an active + clean state before working with the Ceph Block
Device.
You may use a virtual machine for your ceph-client node, but do not
execute the following procedures on the same physical node as your Ceph
Storage Cluster nodes (unless you use a VM). See FAQ for details.
Install Ceph
Verify that you have an appropriate version of the Linux kernel. See OS Recommendations for details.
    lsb_release -a
    uname -r
On the admin node, use ceph-deploy to install Ceph on your ceph-client node:

    ceph-deploy install ceph-client
On the admin node, use ceph-deploy to copy the Ceph configuration file and the ceph.client.admin.keyring to the ceph-client node:

    ceph-deploy admin ceph-client
The ceph-deploy utility copies the keyring to the /etc/ceph directory. Ensure that the keyring file has appropriate read permissions (e.g., sudo chmod +r /etc/ceph/ceph.client.admin.keyring).
Create a Block Device Pool
On the admin node, use the ceph tool to create a pool (we recommend the name 'rbd'). Then, also on the admin node, use the rbd tool to initialize the pool for use by RBD:

    rbd pool init <pool-name>
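The pool-creation command itself is not shown above; as a minimal sketch of both steps, assuming a pool named rbd and a deliberately small placement-group count suitable only for a test cluster:

```shell
# Sketch: create a pool named "rbd" with 8 placement groups (a small
# value appropriate only for a test cluster), then initialize it for RBD.
ceph osd pool create rbd 8
rbd pool init rbd
```

Both commands must be run on a node that can reach the cluster (e.g., the admin node); choose a placement-group count appropriate for your cluster size in production.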
Configure a Block Device
On the ceph-client node, create a block device image:

    rbd create foo --size 4096 --image-feature layering [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
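If you want to confirm the image exists before mapping it, the standard rbd listing commands can be used (a sketch; the exact output depends on your cluster):

```shell
rbd ls        # lists images in the default "rbd" pool; should include "foo"
rbd info foo  # shows the image's size and enabled features
```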
On the ceph-client node, map the image to a block device:

    sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
Use the block device by creating a file system on the ceph-client node:

    sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo

This may take a few moments.
Mount the file system on the ceph-client node:

    sudo mkdir /mnt/ceph-block-device
    sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
    cd /mnt/ceph-block-device
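To verify the mount succeeded, a quick check such as the following can help (a sketch; the device name and sizes reported depend on your system):

```shell
df -h /mnt/ceph-block-device             # should show an rbd device (e.g., /dev/rbd0) mounted here
sudo touch /mnt/ceph-block-device/hello  # confirms the file system is writable
```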
Optionally, configure the block device to be automatically mapped and mounted at boot (and unmounted and unmapped at shutdown). See the rbdmap manpage for details.
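As a sketch of what that setup might look like, here is an /etc/ceph/rbdmap entry for the image above together with a matching /etc/fstab line (the mount options shown are an assumption for this example; consult the rbdmap manpage for your distribution):

```
# /etc/ceph/rbdmap -- images to map at boot
# PoolName/ImageName   Parameters
rbd/foo   id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab -- mount the mapped device; "_netdev" (assumed here)
# delays mounting until the network is available
/dev/rbd/rbd/foo   /mnt/ceph-block-device   ext4   defaults,noatime,_netdev   0   0
```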
See block devices for additional details.