
Deploy Ceph
Ceph is a highly scalable and fault-tolerant distributed storage system designed to store vast amounts of data across a cluster of commodity hardware. It offers object storage, block storage, and file storage capabilities, making it a versatile solution for various storage needs. Ceph's architecture is based on a distributed object store, where data is divided into objects, each with its own unique identifier, and distributed across multiple storage nodes. It uses the CRUSH algorithm to ensure data resilience and efficient data placement, even as the cluster scales. Ceph is widely used in cloud computing environments and provides a cost-effective and flexible storage solution for organizations managing large volumes of data.
Kubernetes introduced the Container Storage Interface (CSI) standard to allow storage providers like Ceph to implement their drivers as plugins. Kubernetes can use the CSI driver for Ceph to provision and manage volumes directly. By means of CSI, stateful applications deployed on top of Kubernetes can use Ceph to store their data.
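As a minimal sketch of what this looks like from the application side, the following creates a PersistentVolumeClaim against a Ceph-backed StorageClass. The claim name test-rbd-pvc and the class name general are assumptions for illustration; substitute whatever StorageClass the CSI driver in your cluster exposes.

# request a 1 GiB Ceph-backed volume via the CSI provisioner
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: general
EOF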
At the same time, Ceph provides the RBD API, which applications can utilize to create and mount block devices distributed across the Ceph cluster. The OpenStack Cinder service utilizes this Ceph capability to offer persistent block devices to virtual machines managed by OpenStack Nova.
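For illustration, this is roughly how a client with valid Ceph credentials could exercise the RBD API by hand (the pool name rbd is an assumption; Cinder performs the equivalent through librbd rather than the CLI):

# create a 1 GiB image in the 'rbd' pool (pool name is an assumption)
rbd create --size 1024 rbd/test-volume
# map it as a local block device, e.g. /dev/rbd0
sudo rbd device map rbd/test-volume
# unmap and clean up
sudo rbd device unmap rbd/test-volume
rbd remove rbd/test-volume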
The recommended way to deploy Ceph on top of Kubernetes is by means of the Rook operator. Rook provides Helm charts to deploy the operator itself, which extends the Kubernetes API by adding CRDs that make it possible to manage Ceph clusters via Kubernetes custom objects. For details, please refer to the Rook documentation.
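Deploying the operator chart manually looks roughly like this (the chart repository URL and namespace follow the upstream Rook documentation; this is a sketch, not the exact configuration applied by the scripts below):

helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph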
To deploy the Rook Ceph operator and a Ceph cluster you can use the 020-ceph.sh script. Then, to generate the client secrets needed to interface with the Ceph RBD API, use the 025-ceph-ns-activate.sh script:
cd ~/osh/openstack-helm-infra
./tools/deployment/openstack-support-rook/020-ceph.sh
./tools/deployment/openstack-support-rook/025-ceph-ns-activate.sh
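Once the scripts finish, you can check the cluster health from the Rook toolbox pod. This assumes the toolbox deployment is enabled and uses its default name rook-ceph-tools; adjust if your Rook configuration differs.

# query cluster health via the Rook toolbox pod
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
# list the pools available for RBD
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool ls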
Note
Please keep in mind that these are the deployment scripts we use for testing. For example, we place Ceph OSD data objects on loop devices, which are slow and not recommended for production use.