Sean Eagan b1a247e7f5 Helm 3 - Fix Job labels
If labels are not specified on a Job, Kubernetes defaults them
to the labels of the underlying Pod template. Helm 3 injects
metadata into all resources [0], including an
`app.kubernetes.io/managed-by: Helm` label. A Job's labels are
therefore no longer empty, so Kubernetes does not default them
to the underlying Pod template's labels. This is a problem
because Job labels are depended on by:
- Armada pre-upgrade delete hooks
- Armada wait logic configurations
- kubernetes-entrypoint dependencies

To retain the labels that were present with Helm 2, this change
adds labels to each Job template matching those of its
underlying Pod template.

[0]: https://github.com/helm/helm/pull/7649
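As a minimal sketch of the fix (the resource name and label keys here
are illustrative, not taken from the actual charts), each Job template
now declares its labels explicitly, mirroring the Pod template's
labels instead of relying on Kubernetes defaulting:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-init-job        # illustrative name
  labels:                       # set explicitly: Helm 3's injected
    application: example        # managed-by label would otherwise
    component: init             # suppress defaulting from the Pod
spec:                           # template below
  template:
    metadata:
      labels:                   # must match the Job-level labels
        application: example    # so hooks, wait logic, and
        component: init         # kubernetes-entrypoint still resolve
    spec:
      containers:
        - name: init
          image: example:latest
      restartPolicy: OnFailure
```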

Change-Id: I3b6b25fcc6a1af4d56f3e2b335615074e2f04b6d
2021-09-30 16:01:31 -05:00


---
ceph-rgw:
- 0.1.0 Initial Chart
- 0.1.1 Change helm-toolkit dependency version to ">= 0.1.0"
- 0.1.2 Uplift from Nautilus to Octopus release
- 0.1.3 update rbac api version
- 0.1.4 Rgw placement target support
- 0.1.5 Add tls support
- 0.1.6 Update tls override options
- 0.1.7 Use ca cert for helm tests
- 0.1.8 Add placement target delete support to RGW
- 0.1.9 Use full image ref for docker official images
- 0.1.10 Fix a bug in placement target deletion for new targets
- 0.1.11 Change s3 auth order to use local before external
- 0.1.12 Export crash dumps when Ceph daemons crash
- 0.1.13 Add configmap hash for keystone rgw
- 0.1.14 Disable crash dumps for rgw
- 0.1.15 Correct rgw placement target functions
- 0.1.16 Helm 3 - Fix Job labels
...