
This change configures Ceph daemon pods so that /var/lib/ceph/crash maps to a hostPath location that persists across pod restarts. This allows post-mortem examination of crash dumps to help understand why daemons have crashed.

Change-Id: I53277848f79a405b0809e0e3f19d90bbb80f3df8
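For context, persisting the crash directory amounts to adding a hostPath volume to the daemon pod specs and mounting it at /var/lib/ceph/crash, roughly as in the sketch below. The volume name and host directory shown here are illustrative assumptions, not values taken from the chart templates.

# Sketch only: "ceph-crash" and the host path are hypothetical examples.
spec:
  volumes:
    - name: ceph-crash
      hostPath:
        # Survives pod restarts because it lives on the node's filesystem.
        path: /var/lib/openstack-helm/ceph/crash
        type: DirectoryOrCreate
  containers:
    - name: ceph-osd
      volumeMounts:
        - name: ceph-crash
          # Ceph daemons write crash dumps here by default.
          mountPath: /var/lib/ceph/crash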
---
ceph-osd:
  - 0.1.0 Initial Chart
  - 0.1.1 Change helm-toolkit dependency to >= 0.1.0
  - 0.1.2 Wait for only osd pods from post apply job
  - 0.1.3 Search for complete logical volume name for OSD data volumes
  - 0.1.4 Don't try to prepare OSD disks that are already deployed
  - 0.1.5 Fix the sync issue between osds when using shared disk for metadata
  - 0.1.6 Logic improvement for used osd disk detection
  - 0.1.7 Synchronization audit for the ceph-volume osd-init script
  - 0.1.8 Update post apply job
  - 0.1.9 Check inactive PGs multiple times
  - 0.1.10 Fix typo in check inactive PGs logic
  - 0.1.11 Fix post-apply job failure related to fault tolerance
  - 0.1.12 Add a check for misplaced objects to the post-apply job
  - 0.1.13 Remove default OSD configuration
  - 0.1.14 Alias synchronized commands and fix descriptor leak
  - 0.1.15 Correct naming convention for logical volumes in disk_zap()
  - 0.1.16 dmsetup remove logical devices using correct device names
  - 0.1.17 Fix a bug with DB orphan volume removal
  - 0.1.18 Uplift from Nautilus to Octopus release
  - 0.1.19 Update rbac api version
  - 0.1.20 Update directory-based OSD deployment for image changes
  - 0.1.21 Refactor Ceph OSD Init Scripts - First PS
  - 0.1.22 Refactor Ceph OSD Init Scripts - Second PS
  - 0.1.23 Use full image ref for docker official images
  - 0.1.24 Ceph OSD Init Improvements
  - 0.1.25 Export crash dumps when Ceph daemons crash
...