OpenStack Kolla-ansible: deploy Ceph with ceph-ansible
OpenStack Train is the latest release that supports deploying Ceph through kolla-ansible. Since Ussuri (Victoria is now the latest stable release of OpenStack, released on October 14th, 2020), it is no longer possible to deploy Ceph automatically: if you want to integrate Ceph with your OpenStack deployment, you have to use an externally deployed cluster.
In this example, I'll deploy a simple containerized Ceph cluster (3 controller nodes, 3 storage nodes) with collocated OSDs (block.wal and block.db on the same device) using the ceph-ansible project.
Everything that follows is my configuration; you will probably have to make changes to adapt it to yours.
Note: this works with the Nautilus version, but it hasn't been tested with Octopus. You can try by changing this line in group_vars/all.yml:
ceph_docker_image_tag: latest-nautilus
to this:
ceph_docker_image_tag: latest-octopus
My lab is currently stopped, but I'll update this post as soon as I get a chance to test it.
Prepare your environment
Create your virtualenv:
[openstack@centos-kolla ~] $ mkdir ~/ceph && cd ~/ceph
[openstack@centos-kolla ceph] $ virtualenv ~/ceph
[openstack@centos-kolla ceph] $ source ~/ceph/bin/activate
(ceph) [openstack@centos-kolla ceph] $ pip install ansible
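Note: the stable-4.0 branch targets a specific Ansible series (2.8/2.9 at the time of writing), so if a plain pip install pulls a newer release you may want to pin it. The exact bounds below are my assumption, check the branch's requirements before using them:
(ceph) [openstack@centos-kolla ceph] $ pip install 'ansible>=2.8,<2.10'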
Clone the repository, then check out the stable Nautilus branch:
(ceph) [openstack@centos-kolla ceph] $ git clone https://github.com/ceph/ceph-ansible.git
(ceph) [openstack@centos-kolla ceph] $ cd ceph-ansible
(ceph) [openstack@centos-kolla ceph-ansible] $ git checkout stable-4.0
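ceph-ansible ships a requirements.txt listing the Python modules its playbooks rely on (netaddr among them); installing it in the virtualenv avoids missing-module failures during the run:
(ceph) [openstack@centos-kolla ceph-ansible] $ pip install -r requirements.txt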
Prepare your configuration
I'll reuse the configuration from my previous kolla deployment (more information here):
- my kolla storage_interface will be my monitor_interface
- my kolla cluster_interface will be my cluster_network
- my kolla kolla_external_vip_interface will be my radosgw_interface
And the dedicated OSD devices:
/dev/sdb
/dev/sdc
/dev/sdd
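Before deploying, check that these devices are empty: ceph-volume rejects disks that still carry partition tables or filesystem signatures. Something like this on each storage node does the job (the wipefs call is destructive, run it only if the disks are disposable):
[openstack@compute01 ~] $ lsblk -f /dev/sdb /dev/sdc /dev/sdd
[openstack@compute01 ~] $ sudo wipefs --all /dev/sdb /dev/sdc /dev/sdd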
Global configuration
Edit the all.yml configuration file (if it doesn't exist yet, copy group_vars/all.yml.sample to group_vars/all.yml first):
(ceph) [openstack@centos-kolla ceph-ansible] $ vi group_vars/all.yml
generate_fsid: true
monitor_interface: ens38
journal_size: 5120
public_network: 172.16.12.0/24
cluster_network: 172.16.13.0/24
cluster_interface: ens39
ceph_docker_image: "ceph/daemon"
ceph_docker_image_tag: latest-nautilus
containerized_deployment: true
ceph_docker_registry: docker.io
radosgw_interface: ens33
dashboard_admin_password: admin
grafana_admin_password: password
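If you need extra ceph.conf settings (pool defaults, for instance), the same file accepts the ceph_conf_overrides variable. A minimal sketch, with example values rather than my lab's settings:
ceph_conf_overrides:
  global:
    osd_pool_default_size: 3
    osd_pool_default_min_size: 2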
Disk configuration
Add your physical disks:
(ceph) [openstack@centos-kolla ceph-ansible] $ vi group_vars/osds.yml
osd_scenario: collocated
osd_objectstore: bluestore
dmcrypt: false
devices:
- /dev/sdb
- /dev/sdc
- /dev/sdd
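The collocated scenario keeps data, block.wal and block.db on the same device. If you later want the DB/WAL on faster media, ceph-ansible also supports a non-collocated layout with dedicated_devices; a sketch I haven't tested in this lab (the NVMe device name is an assumption):
osd_scenario: non-collocated
osd_objectstore: bluestore
devices:
  - /dev/sdb
  - /dev/sdc
dedicated_devices:
  - /dev/nvme0n1
  - /dev/nvme0n1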
Host configuration
Create your hosts file:
(ceph) [openstack@centos-kolla ceph-ansible] $ vi hosts
[mons]
control01 ansible_host=172.16.10.11
control02 ansible_host=172.16.10.12
control03 ansible_host=172.16.10.13
[osds]
compute01 ansible_host=172.16.10.21
compute02 ansible_host=172.16.10.22
compute03 ansible_host=172.16.10.23
[grafana-server]
control01 ansible_host=172.16.10.11
[mgrs]
control01 ansible_host=172.16.10.11
[rgws]
control01 ansible_host=172.16.10.11
control02 ansible_host=172.16.10.12
control03 ansible_host=172.16.10.13
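Before launching the playbook, a quick sanity check that Ansible can reach every host in the inventory:
(ceph) [openstack@centos-kolla ceph-ansible] $ ansible -i hosts -m ping all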
Deploy
(ceph) [openstack@centos-kolla ceph-ansible] $ ansible-playbook site-docker.yml -i hosts -e container_package_name=docker-ce
If you don't want to deploy your Ceph cluster with Docker:
(ceph) [openstack@centos-kolla ceph-ansible] $ ansible-playbook site.yml -i hosts
and set containerized_deployment to false in the all.yml configuration file.
Verify your deployment
If everything went well, you should see output like this:
PLAY RECAP **********************************************************************************************************************************************************************************************************************************
compute01 : ok=182 changed=15 unreachable=0 failed=0 skipped=293 rescued=0 ignored=0
compute02 : ok=172 changed=12 unreachable=0 failed=0 skipped=284 rescued=0 ignored=0
compute03 : ok=174 changed=12 unreachable=0 failed=0 skipped=282 rescued=0 ignored=0
control01 : ok=464 changed=53 unreachable=0 failed=0 skipped=509 rescued=0 ignored=0
control02 : ok=219 changed=16 unreachable=0 failed=0 skipped=333 rescued=0 ignored=0
control03 : ok=219 changed=16 unreachable=0 failed=0 skipped=333 rescued=0 ignored=0
INSTALLER STATUS ****************************************************************************************************************************************************************************************************************************
Install Ceph Monitor : Complete (0:00:42)
Install Ceph Manager : Complete (0:00:24)
Install Ceph OSD : Complete (0:00:51)
Install Ceph RGW : Complete (0:00:19)
Install Ceph Dashboard : Complete (0:00:32)
Install Ceph Grafana : Complete (0:00:33)
Install Ceph Node Exporter : Complete (0:00:29)
Saturday 16 May 2020 18:57:08 -0400 (0:00:00.037) 0:07:48.438 **********
===============================================================================
ceph-container-common : pulling docker.io/ceph/daemon:latest-nautilus image -------------------------------------------------------------------------------------------------------------------------------------------------------- 150.79s
ceph-mon : waiting for the monitor(s) to form the quorum... ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 14.90s
ceph-grafana : wait for grafana to start -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 13.24s
ceph-osd : wait for all osd to be up ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 11.33s
ceph-osd : use ceph-volume lvm batch to create bluestore osds ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- 10.59s
ceph-infra : install firewalld python binding ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 6.78s
ceph-mgr : wait for all mgr to be up ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 6.47s
ceph-dashboard : set or update dashboard admin username and password ----------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.69s
ceph-container-common : get ceph version --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.30s
ceph-mon : fetch ceph initial keys --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.25s
gather and delegate facts ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 3.62s
ceph-mgr : disable ceph mgr enabled modules ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2.36s
ceph-osd : systemd start osd --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.31s
ceph-mgr : add modules to ceph-mgr --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.01s
ceph-config : create ceph initial directories ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.74s
ceph-config : create ceph initial directories ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.73s
ceph-config : create ceph initial directories ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.70s
ceph-dashboard : disable mgr dashboard module (restart) ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 1.63s
ceph-config : look up for ceph-volume rejected devices ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.54s
ceph-osd : unset noup flag ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.52s
Check that all the OSDs are up and that Ceph is in HEALTH_OK status:
[openstack@openstack-controller-1 ~] $ docker exec -it ceph-mon-openstack-controller-1 ceph -s
cluster:
id: afd59d00-55c5-4524-b77b-b1c3e4a4ccc3
health: HEALTH_OK
services:
mon: 3 daemons, quorum openstack-controller-1,openstack-controller-2,openstack-controller-3 (age 6m)
mgr: openstack-controller-1(active, since 3m)
osd: 9 osds: 9 up (since 5m), 9 in (since 5m)
rgw: 3 daemons active (openstack-controller-1.rgw0, openstack-controller-2.rgw0, openstack-controller-3.rgw0)
data:
pools: 4 pools, 128 pgs
objects: 190 objects, 1.9 KiB
usage: 9.0 GiB used, 171 GiB / 180 GiB avail
pgs: 128 active+clean
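You can also check how the OSDs are spread across the nodes and how much space each pool consumes, from the same monitor container:
[openstack@openstack-controller-1 ~] $ docker exec -it ceph-mon-openstack-controller-1 ceph osd tree
[openstack@openstack-controller-1 ~] $ docker exec -it ceph-mon-openstack-controller-1 ceph df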
That's it, you now have a shiny new Ceph cluster deployed with ceph-ansible!
Next steps: integrate it with a kolla deployment (ephemeral, block, images, object storage). I'm also preparing some posts explaining how to integrate an external Ceph deployment into a hyperconverged kolla deployment.
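As a teaser, kolla-ansible consumes an external cluster through a few switches in /etc/kolla/globals.yml plus the cluster's ceph.conf and keyrings dropped under /etc/kolla/config/. A minimal sketch for Glance only, following the kolla-ansible external Ceph guide (the full procedure will be in those posts):
# /etc/kolla/globals.yml
glance_backend_ceph: "yes"
# files to provide:
#   /etc/kolla/config/glance/ceph.conf
#   /etc/kolla/config/glance/ceph.client.glance.keyring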
Resources:
https://docs.ceph.com/ceph-ansible/master/
https://programmer.group/deploying-docker-based-ceph-cluster-using-ceph-ansible.html