How to enable Ceph multitenancy for object storage in OpenStack?
5 min read

How do you manage object storage multitenancy when a Ceph cluster is integrated into an OpenStack deployment?


If this change is performed after a deployment, it will only apply to new users and new tenants.


Integrate Ceph RGW multitenancy and S3 authentication with kolla-ansible

Standard behaviour

Regarding bucket operations, Ceph has some constraints (see the Ceph documentation).

The part we're interested in is this: Bucket names must be unique.

Why? Because OpenStack has the concept of multitenancy. In a vanilla Swift deployment, tenants can use the same bucket names without conflicting with each other.
The "classic" object-store service API endpoints are created like this:

openstack endpoint create --region RegionOne \
  object-store public http://endpoint:8080/v1/AUTH_%\(project_id\)s
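
Keystone expands %\(project_id\)s when it builds the service catalog for a token, so each project sees its own URL. A small sketch of that substitution, with a hypothetical project ID:

```shell
# Hypothetical project ID, for illustration only.
PROJECT_ID=2ae80d2a85ce446bae46595611033a94

# What the catalog URL template resolves to for this project:
SWIFT_URL="http://endpoint:8080/v1/AUTH_${PROJECT_ID}"
echo "$SWIFT_URL"
```

Because the prefix differs per project, container names only need to be unique within a project.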

The AUTH_%\(project_id\)s part is very important: each tenant has its own namespace and can do anything without impacting other tenants. If you create a container ($CONTAINER), the URL will be:

/v1/AUTH_$PROJECT_ID/$CONTAINER

On the other side, the default endpoint with an integrated Ceph is:

http://endpoint:8080/swift/v1

It means that when you create a container $CONTAINER, the URL will be /swift/v1/$CONTAINER, and that's a big problem if you want multitenancy.

You'll hit this regardless of whether you use the Horizon/Swift API or the S3 endpoint.

How it looks in Ceph

The key is to understand how Ceph works in this situation.

For example, if I create (from the Swift CLI) two containers named demo in two different tenants via the Swift endpoint, this is how it goes:

On the first tenant I can create the container demo:

(openstack) [[email protected] ~] $ swift post demo --debug
INFO:swiftclient:RESP STATUS: 404 Not Found
INFO:swiftclient:RESP BODY: NoSuchKey
DEBUG:urllib3.connectionpool: "PUT /swift/v1/demo HTTP/1.1" 201 0
DEBUG:swiftclient:RESP STATUS: 201 Created

But on the second tenant:

(openstack) [[email protected] ~] $ swift post demo
DEBUG:urllib3.connectionpool: "POST /swift/v1/demo HTTP/1.1" 403 12
INFO:swiftclient:RESP STATUS: 403 Forbidden
INFO:swiftclient:RESP BODY: AccessDenied
ClientException: Container POST failed: 403 Forbidden   AccessDenied
Container POST failed: 403 Forbidden   AccessDenied
Failed Transaction ID: tx000000000000000000070-005e962c65-86cc-default

I'm not allowed to create this container because the name already exists.

If we look at the Ceph side, we can see clearly why two containers cannot share the same name:

[[email protected] ~] $ radosgw-admin bucket list

This behaviour might be fine for a variety of use cases, but if you want to take advantage of multitenancy, it's really problematic.

Enable multitenancy capabilities

We need to tell Ceph to namespace bucket names per tenant, attaching a tenant identifier to each bucket.

You have to add 2 options in your ceph.conf:

rgw_keystone_implicit_tenants = true
rgw_swift_account_in_url = true

The first option tells Ceph to place buckets under the tenant ID of each new user. The second one allows you to use an endpoint URL that includes the tenant ID.
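
With kolla-ansible, a common way to get these options into the generated ceph.conf is a configuration override; a minimal sketch, assuming the default /etc/kolla/config override directory and that your RGW options live in the [global] section:

```ini
# /etc/kolla/config/ceph.conf
# Merged into the generated ceph.conf on the next reconfigure run.
[global]
rgw_keystone_implicit_tenants = true
rgw_swift_account_in_url = true
```

Then run a reconfigure targeting Ceph so the RGW daemons pick up the new configuration.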

You also have to change your Swift endpoints by adding %\(tenant_id\)s at the end of each endpoint URL. Get the IDs:

(openstack) [[email protected] ~] $ openstack endpoint list | grep swift
| 1be42ae9c8ee43edbb7e4bf73eb9f958 | RegionOne | swift        | object-store   | True    | public    |         |
| 3cfcfe9eed924d7a9acf4d86761be6cb | RegionOne | swift        | object-store   | True    | internal  |         |
| c279c8bda5694eb19d4e6c0924a0e6fd | RegionOne | swift        | object-store   | True    | admin     |         |

Then edit them (replace the ID at the end of each command with your own endpoint IDs):

openstack endpoint set --region RegionOne --service swift --interface public --url http://endpoint:8080/swift/v1/AUTH_%\(tenant_id\)s 1be42ae9c8ee43edbb7e4bf73eb9f958
openstack endpoint set --region RegionOne --service swift --interface internal --url http://endpoint:8080/swift/v1/AUTH_%\(tenant_id\)s 3cfcfe9eed924d7a9acf4d86761be6cb
openstack endpoint set --region RegionOne --service swift --interface admin --url http://endpoint:8080/swift/v1/AUTH_%\(tenant_id\)s c279c8bda5694eb19d4e6c0924a0e6fd
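
The three commands above can also be generated from openstack endpoint list instead of copying IDs by hand; a sketch, assuming the same RGW base URL as earlier:

```shell
# Base RGW URL from the deployment above; adjust to your environment.
RGW_URL="http://endpoint:8080/swift/v1"
NEW_URL="${RGW_URL}/AUTH_%(tenant_id)s"

# Update every swift object-store endpoint (public, internal, admin).
# Requires the openstack CLI and admin credentials loaded.
if command -v openstack >/dev/null 2>&1; then
  for id in $(openstack endpoint list --service swift -f value -c ID); do
    openstack endpoint set --url "$NEW_URL" "$id"
  done
fi
```

Note that %(tenant_id)s must reach Keystone literally; it is only substituted when the catalog is rendered for a token.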

It will work both with and without the AUTH_ prefix; more details are in the Ceph documentation.

How it looks now in Ceph

I've created 2 tenants:

(openstack) [[email protected] ~] $ openstack project list | grep 'demo2\|demo3'
| 2ae80d2a85ce446bae46595611033a94 | demo2   |
| cdc03141fb1f4090a7de4cbaee55a667 | demo3   |

Then I created a demo container with each of these tenants.

Now if we look at Ceph:

[[email protected] ~] $ radosgw-admin bucket list

I can see that each container lives in its tenant's namespace, and the same container name can exist in multiple tenants without any conflict.

In a simpler way, here's what used to happen before:

Creation of a container named "demo"
Tenant A id: 2ae80d2a85ce446bae46595611033a94
Tenant B id: cdc03141fb1f4090a7de4cbaee55a667

    +-------------------+   OK
    | user A > tenant A +--------->  /demo
    +-------------------+

    +-------------------+   KO
    | user B > tenant B +--------->  /demo already exists
    +-------------------+

and now:

Creation of a container named "demo"
Tenant A id: 2ae80d2a85ce446bae46595611033a94
Tenant B id: cdc03141fb1f4090a7de4cbaee55a667

    +-------------------+   OK
    | user A > tenant A +--------->  2ae80d2a85ce446bae46595611033a94/demo
    +-------------------+

    +-------------------+   OK
    | user B > tenant B +--------->  cdc03141fb1f4090a7de4cbaee55a667/demo
    +-------------------+

Swift public access

The endpoint will now be:

http://endpoint:8080/swift/v1/AUTH_$TENANT_ID

Example with the demo container of the demo3 tenant:

(openstack) [[email protected] ~] $ swift stat demo -v
                   Auth Token: gAAAAABeljlU0SW_cEqFomGOq0L33N45uDFAHsKBWzPg4w2WxguR4aD1WlexGD6mB3FQK4HDnpms8y-z3_O-KP5rc-DMyzkj6XoHqZagRyTiUFc9twyyg17CUyk5uTl8PENrtLUZN_K3vZRRqiubT__imIGMEqmrl0fivO6Q_CUwxi8obmmeTlQ
                      Account: cdc03141fb1f4090a7de4cbaee55a667
                    Container: demo
                      Objects: 0
                        Bytes: 0
                     Read ACL:
                    Write ACL:
                      Sync To:
                     Sync Key:
              X-Storage-Class: STANDARD
                Accept-Ranges: bytes
             X-Storage-Policy: default-placement
X-Container-Bytes-Used-Actual: 0
                Last-Modified: Tue, 14 Apr 2020 22:29:29 GMT
                  X-Timestamp: 1586903369.52510
                   X-Trans-Id: tx00000000000000000005f-005e963954-121c1-default
                 Content-Type: text/plain; charset=utf-8
       X-Openstack-Request-Id: tx00000000000000000005f-005e963954-121c1-default
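
To actually serve content from that container anonymously over the public Swift endpoint, the container also needs a public-read ACL. A sketch, reusing the demo3 tenant ID from above; the object name myobject is hypothetical:

```shell
# Tenant ID of demo3 (from "openstack project list" above).
TENANT_ID=cdc03141fb1f4090a7de4cbaee55a667
CONTAINER=demo

# Allow anonymous reads on the container (swift CLI required).
command -v swift >/dev/null 2>&1 && swift post -r '.r:*' "$CONTAINER"

# Anonymous URL of a hypothetical object in that container:
PUBLIC_URL="http://endpoint:8080/swift/v1/AUTH_${TENANT_ID}/${CONTAINER}/myobject"
echo "$PUBLIC_URL"
```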

S3 public access

The S3 endpoint will now look like this:

http://endpoint:8080/$TENANT_ID:$CONTAINER

Example with the demo container of the demo3 tenant:

(openstack) [[email protected] ~] $ curl -s | tidy -xml -i -
No warnings or errors were found.

<?xml version="1.0" encoding="utf-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
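
With tenants in play, RGW addresses S3 buckets outside your own tenant as tenant:bucket, and the colon must be percent-encoded as %3A when it appears in a URL path. A sketch of the resulting request URL, using the demo3 IDs from above:

```shell
TENANT_ID=cdc03141fb1f4090a7de4cbaee55a667
CONTAINER=demo

# "tenant:bucket" form used by RGW for tenant-qualified access;
# the colon is URL-encoded as %3A in the request path.
S3_PATH="http://endpoint:8080/${TENANT_ID}%3A${CONTAINER}"
echo "$S3_PATH"
```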

That's all good 😀


Picture : Fredy Jacob