
Helm 3 - Fix Job labels

If labels are not specified on a Job, Kubernetes defaults them to
include the labels of the underlying Pod template. Helm 3 injects
metadata into all resources [0], including an
`app.kubernetes.io/managed-by: Helm` label. As a result, a Job's
labels are no longer empty when Kubernetes sees them, so they do
not get defaulted to the underlying Pod template's labels. This is
a problem because Job labels are depended on by:

- Armada pre-upgrade delete hooks
- Armada wait logic configurations
- kubernetes-entrypoint dependencies

This change therefore adds labels matching the underlying Pod
template to each Job template, retaining the same labels that were
present with Helm 2.

[0]: https://github.com/helm/helm/pull/7649

Change-Id: I3b6b25fcc6a1af4d56f3e2b335615074e2f04b6d
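The fix amounts to mirroring the Pod template's labels onto the Job object itself, so that Helm 3's injected `app.kubernetes.io/managed-by` label no longer suppresses the defaulting behavior the hooks rely on. A minimal sketch in plain Kubernetes YAML; the Job name, image, and label keys/values here are illustrative, not taken from the charts:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-keys-generator
  labels:
    # Explicitly duplicate the Pod template's labels. Without these,
    # Helm 3's injected managed-by label makes the Job's label set
    # non-empty, so Kubernetes never copies the Pod labels up to the
    # Job, and label-based selectors (Armada hooks, wait logic,
    # kubernetes-entrypoint) stop matching.
    application: example
    component: keys-generator
spec:
  template:
    metadata:
      labels:
        application: example
        component: keys-generator
    spec:
      restartPolicy: OnFailure
      containers:
        - name: keys-generator
          image: docker.io/library/busybox:1.36
          command: ["/bin/true"]
```

With Helm 2, omitting `metadata.labels` on the Job produced the same result implicitly; under Helm 3 the duplication must be explicit in each Job template.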
---
ceph-mon:
  - 0.1.0 Initial Chart
  - 0.1.1 Change helm-toolkit dependency to >= 0.1.0
  - 0.1.2 Enable shareProcessNamespace in mon daemonset
  - 0.1.3 Run mon container as ceph user
  - 0.1.4 Uplift from Nautilus to Octopus release
  - 0.1.5 Add Ceph CSI plugin
  - 0.1.6 Fix python3 issue for util scripts
  - 0.1.7 remove deprecated svc annotation tolerate-unready-endpoints
  - 0.1.8 Use full image ref for docker official images
  - 0.1.9 Remove unnecessary parameters for ceph-mon
  - 0.1.10 Export crash dumps when Ceph daemons crash
  - 0.1.11 Correct mon-check executing binary and logic
  - 0.1.12 Fix Ceph checkDNS script
  - 0.1.13 Helm 3 - Fix Job labels
...