Fix cephfs access after upgrading

After the system upgrade, the Ceph MDS data was deleted, and we could
no longer find the files stored on CephFS.

This procedure resets and recreates the MDS so that the files become
visible again once the volume is mounted.
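
In short, the recovery reuses the existing metadata/data pools and rebuilds
the CephFS journal before the MDS is started again. A rough outline of the
commands the recovery script now runs (variable names as in recover_cephfs.sh):

  ceph fs new ${FS_NAME} ${METADATA_POOL_NAME} ${DATA_POOL_NAME} --force
  ceph fs reset ${FS_NAME} --yes-i-really-mean-it
  cephfs-journal-tool --rank=${FS_NAME}:0 event recover_dentries summary
  cephfs-journal-tool --rank=${FS_NAME}:0 journal reset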

Test Plan:
 - Tested the upgrade procedure from 21.05 on AIO-SX
 - Tested B&R procedure on AIO-SX

Closes-bug: 1955823

Signed-off-by: Thiago Miranda <ThiagoOliveira.Miranda@windriver.com>
Change-Id: Ic146dda35c3c71cf11c73a4f972b274ec3404d07
Author:    Thiago Miranda <ThiagoOliveira.Miranda@windriver.com>
Date:      2021-12-27 07:51:01 -05:00
Committer: Felipe Sanches Zanoni
Commit:    f72175fe29
Parent:    f33b81f918

2 changed files with 9 additions and 9 deletions

@@ -31,15 +31,12 @@ set -x
 # Check if the filesystem for the system RWX provisioner is present
 ceph fs ls | grep ${FS_NAME}
 if [ $? -ne 0 ]; then
-    # Attempt to create the pool if not present, this should be present
-    ceph fs new ${FS_NAME} ${METADATA_POOL_NAME} ${DATA_POOL_NAME}
-    if [ $? -eq 22 ]; then
-        # We need to rebuild the fs since we have hit:
-        # Error EINVAL: pool 'kube-cephfs-metadata' already contains some
-        # objects. Use an empty pool instead.
-        ceph fs new ${FS_NAME} ${METADATA_POOL_NAME} ${DATA_POOL_NAME} --force
-        ceph fs reset ${FS_NAME} --yes-i-really-mean-it
-    fi
+    # If we have existing metadata/data pools, use them
+    ceph fs new ${FS_NAME} ${METADATA_POOL_NAME} ${DATA_POOL_NAME} --force
+    # Reset the filesystem and journal
+    ceph fs reset ${FS_NAME} --yes-i-really-mean-it
+    cephfs-journal-tool --rank=${FS_NAME}:0 event recover_dentries summary
+    cephfs-journal-tool --rank=${FS_NAME}:0 journal reset
 fi
 
 # Start the Ceph MDS
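
Once the script has run, the recovered filesystem and an active MDS can be
checked manually; a minimal sanity check, using the same variable names as
the script (actual names depend on the deployment):

  ceph fs ls | grep ${FS_NAME}
  ceph fs status ${FS_NAME}
  ceph mds stat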

@@ -281,6 +281,9 @@
       script: recover_cephfs.sh
       register: cephfs_recovery_out
 
+    - name: Create ceph.client.guest.keyring to allow ceph mount again
+      command: touch /etc/ceph/ceph.client.guest.keyring
+
     - debug: var=cephfs_recovery_out.stdout_lines
 
     - name: Restart ceph one more time to pick latest changes
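
Per the task name, the empty /etc/ceph/ceph.client.guest.keyring only needs
to exist for the guest mount to work again. A quick manual check after the
playbook has run (illustrative only, not part of the change):

  test -f /etc/ceph/ceph.client.guest.keyring && echo "guest keyring present"
  findmnt -t ceph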