Sean Eagan b1a247e7f5 Helm 3 - Fix Job labels
If labels are not specified on a Job, Kubernetes defaults them
to include the labels of its underlying Pod template. Helm 3
injects metadata into all resources [0], including an
`app.kubernetes.io/managed-by: Helm` label. As a result, when
Kubernetes sees a Job's labels they are no longer empty, so they
do not get defaulted to the underlying Pod template's labels.
This is a problem because the following depend on Job labels:
- Armada pre-upgrade delete hooks
- Armada wait logic configurations
- kubernetes-entrypoint dependencies

Therefore, for each Job template, this change adds labels
matching those of the underlying Pod template, retaining the
labels that were present with Helm 2 (illustrated below).

[0]: https://github.com/helm/helm/pull/7649
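
For illustration, a minimal sketch of the resulting pattern on a
single Job template. The Job name, image, and label keys below are
hypothetical placeholders, not values taken from this change:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: ceph-osd-post-apply       # hypothetical name
      labels:                         # added: Helm 3 no longer leaves
        application: ceph             # these empty, so they must
        component: osd-post-apply     # mirror the Pod template
    spec:
      template:
        metadata:
          labels:                     # pre-existing Pod template labels
            application: ceph
            component: osd-post-apply
        spec:
          restartPolicy: OnFailure
          containers:
            - name: post-apply        # hypothetical container/image
              image: docker.io/ceph/daemon:latest
              command: ["/bin/true"]

With explicit Job labels in place, Armada hooks, wait logic, and
kubernetes-entrypoint dependencies that select on these labels see
the same labels under Helm 3 as they did under Helm 2.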

Change-Id: I3b6b25fcc6a1af4d56f3e2b335615074e2f04b6d
2021-09-30 16:01:31 -05:00

---
ceph-osd:
- 0.1.0 Initial Chart
- 0.1.1 Change helm-toolkit dependency to >= 0.1.0
- 0.1.2 Wait for only OSD pods from the post-apply job
- 0.1.3 Search for complete logical volume name for OSD data volumes
- 0.1.4 Don't try to prepare OSD disks that are already deployed
- 0.1.5 Fix the sync issue between osds when using shared disk for metadata
- 0.1.6 Logic improvement for used osd disk detection
- 0.1.7 Synchronization audit for the ceph-volume osd-init script
- 0.1.8 Update post-apply job
- 0.1.9 Check inactive PGs multiple times
- 0.1.10 Fix typo in check inactive PGs logic
- 0.1.11 Fix post-apply job failure related to fault tolerance
- 0.1.12 Add a check for misplaced objects to the post-apply job
- 0.1.13 Remove default OSD configuration
- 0.1.14 Alias synchronized commands and fix descriptor leak
- 0.1.15 Correct naming convention for logical volumes in disk_zap()
- 0.1.16 dmsetup remove logical devices using correct device names
- 0.1.17 Fix a bug with DB orphan volume removal
- 0.1.18 Uplift from Nautilus to Octopus release
- 0.1.19 Update RBAC API version
- 0.1.20 Update directory-based OSD deployment for image changes
- 0.1.21 Refactor Ceph OSD Init Scripts - First PS
- 0.1.22 Refactor Ceph OSD Init Scripts - Second PS
- 0.1.23 Use full image ref for docker official images
- 0.1.24 Ceph OSD Init Improvements
- 0.1.25 Export crash dumps when Ceph daemons crash
- 0.1.26 Mount /var/crash inside ceph-osd pods
- 0.1.27 Limit Ceph OSD Container Security Contexts
- 0.1.28 Change var crash mount propagation to HostToContainer
- 0.1.29 Fix Ceph checkDNS script
- 0.1.30 Ceph OSD log-runner container should run as ceph user
- 0.1.31 Helm 3 - Fix Job labels
...