Currently the nova-compute-ironic pod is configured to use the full
nova.conf, which is not subject to `nova_compute_redactions`. As a
result, when nova-compute-ironic starts, the following traceback is
printed:
```
❯ kubectl --context uc_iad3_dev-NEW logs nova-compute-ironic-0
Defaulted container "nova-compute-ironic" out of: nova-compute-ironic, init (init)
+ exec nova-compute --config-file /etc/nova/nova.conf --config-file /etc/nova/nova-ironic.conf
2024-10-29 15:37:28.841 1179391 INFO nova.virt.driver [None req-99e9d536-2b34-4dfc-ac0f-f9680e213913 - - - - - -] Loading compute driver 'ironic.IronicDriver'
2024-10-29 15:37:29.498 1179391 ERROR nova.db.main.api [None req-48b845ff-01ca-4126-8e43-aeaa2675d0e1 - - - - - -] No DB access allowed in nova-compute: File "/var/lib/openstack/lib/python3.10/site-packages/eventlet/greenthread.py", line 265, in main
result = function(*args, **kwargs)
File "/var/lib/openstack/lib/python3.10/site-packages/nova/utils.py", line 664, in context_wrapper
return func(*args, **kwargs)
File "/var/lib/openstack/lib/python3.10/site-packages/nova/context.py", line 422, in gather_result
result = fn(*args, **kwargs)
File "/var/lib/openstack/lib/python3.10/site-packages/nova/db/main/api.py", line 179, in wrapper
return f(*args, **kwargs)
File "/var/lib/openstack/lib/python3.10/site-packages/nova/objects/service.py", line 554, in _db_service_get_minimum_version
return db.service_get_minimum_version(context, binaries)
File "/var/lib/openstack/lib/python3.10/site-packages/nova/db/main/api.py", line 238, in wrapper
_check_db_access()
File "/var/lib/openstack/lib/python3.10/site-packages/nova/db/main/api.py", line 188, in _check_db_access
stacktrace = ''.join(traceback.format_stack())
```
According to https://docs.openstack.org/nova/latest/configuration/config.html#api-database,
the `[api_database]` config group should not be configured for this service.
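For illustration, here is a minimal sketch of what such a redaction could
look like (the helper and the group list are assumptions, not the chart's
actual redaction logic): strip the DB-related groups from the rendered
nova.conf before handing it to the compute service.
```
# Hypothetical sketch: drop config groups that nova-compute must not see,
# e.g. [api_database]; the group names and helper are assumptions.
import configparser

REDACTED_GROUPS = ("api_database", "database")

def redact_nova_conf(src_path, dst_path, groups=REDACTED_GROUPS):
    parser = configparser.ConfigParser(interpolation=None)
    parser.read(src_path)
    for group in groups:
        parser.remove_section(group)  # no-op if the section is absent
    with open(dst_path, "w") as dst:
        parser.write(dst)

# Example: redact_nova_conf("/etc/nova/nova.conf", "/etc/nova/nova-redacted.conf")
```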
Change-Id: Ie53eb250be756d96315c0be623d7aa716565661a
For all test jobs we explicitly deploy Nova with virt_type=qemu
to make tests less dependent on the infrastructure hardware.
By default Nova sets virt_type=kvm, but when the DPDK feature is
used it is better to set it explicitly.
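As a rough illustration of why the test jobs force qemu (a hedged sketch;
this helper is not part of the change), software emulation is the safe
fallback whenever the node does not expose /dev/kvm:
```
# Hypothetical helper: prefer hardware virtualization when /dev/kvm exists,
# otherwise fall back to plain qemu emulation (what the test jobs force).
import os

def pick_virt_type() -> str:
    return "kvm" if os.path.exists("/dev/kvm") else "qemu"
```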
Change-Id: I88c8d2f8f1cc9d155486773c7052347e916255d8
Use quay.io/airshipit/kubernetes-entrypoint:latest-ubuntu_focal
by default instead of 1.0.0, which uses the v1 image format that
is no longer supported by Docker.
Change-Id: Idf43d229d1c81c506653980b5e8cd6463550bc5f
- Some charts use third-party images. These need inspection to
determine which of them can be updated.
- For some charts we don't build images. In this case, let's build
images for the active projects and probably retire the charts for
retired or inactive projects.
Change-Id: Ic9e634806d40595992d68c1fc3cd54b655ca5d02
Currently the Nova API server still uses eventlet-based HTTP servers.
It is generally considered more performant and flexible to run it
under a generic HTTP server that supports WSGI.
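As a minimal sketch of the idea (the application factory path below is the
one commonly used for uwsgi deployments and is an assumption here, not
something introduced by this change), any WSGI-capable server can host the API:
```
# Hedged sketch: serve nova-api through a generic WSGI server instead of the
# eventlet-based one; wsgiref stands in for uwsgi here and the nova config
# files are assumed to already be in place.
from wsgiref.simple_server import make_server

from nova.api.openstack.compute.wsgi import init_application  # assumed entry point

if __name__ == "__main__":
    make_server("0.0.0.0", 8774, init_application()).serve_forever()
```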
Change-Id: I489557181bb8becbaf5cf7d9812a671d5cb3cc4a
The metadata_port option was used in the Queens release and was
renamed to metadata_listen_port in Rocky.
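A tiny, purely illustrative sketch of the rename (the helper is hypothetical):
```
# Queens used metadata_port; Rocky and later use metadata_listen_port.
def metadata_port_option(release: str) -> str:
    return "metadata_port" if release.lower() == "queens" else "metadata_listen_port"
```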
story: 2011052
task: 49616
Change-Id: I106f50f620c2594b1f8ea7dc516d2e254c6af479
This change updates all Ceph images for Jammy-based deployments in
openstack-helm to latest-ubuntu_jammy.
Change-Id: Id80f0fc074da01548006fc37c2629b27fbddbd25
A Kubernetes subPath mount does not reflect changes to the volume
source (ConfigMap, Secret, etc.).
This patch uses a directory mount instead of subPath so that renewed
certs are reflected inside the pod automatically.
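For illustration, a minimal sketch of the consumer side (paths are
assumptions): with a whole-directory mount the kubelet's atomic symlink swap
is visible in the pod, so re-reading the file picks up a renewed cert, while
a subPath mount stays frozen at the version present at mount time.
```
# Hedged sketch: re-read the cert from the mounted directory on each use so a
# renewed Secret is picked up without restarting the pod.
from pathlib import Path

CERT_DIR = Path("/etc/nova/certs")  # assumed mount point

def read_current_cert() -> bytes:
    return (CERT_DIR / "tls.crt").read_bytes()
```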
Change-Id: I740737d23db1fe3621b4490523730375e6c36313
In environments where there is a large number of ports (100+) on a
hypervisor, startup can take a long time, and eventually the
liveness probe will fail because the process is still busy plugging
ports in.
No initial delay is needed for the liveness/readiness probes; a
startup probe is enough.
Change-Id: I54544a45a716fa4ff840019c0526343063ed1ac5
Since nova.DEFAULT.log_config_append is an optional nova
configuration option, we should add a conditional statement here.
Change-Id: Ib9c50c9ccc0c93226fffccc997c232b0259dff0c
The current values specified in values.yaml along with the configmap-etc
template can make it very difficult for the end user to properly configure
a cinder authentication method other than password. These changes give the end
user the needed flexibility.
Change-Id: I99e75e1aa9ddd8378518b1291123a34d2881715f
Add options to nova to enable/disable the use of:
1. The vnc or spice server proxyclient address found by the console
compute init container
2. The my_ip hypervisor address found by the compute init container
3. The libvirt live_migration_inbound_addr used by nova compute to
live-migrate instances
These options can be used to prevent cases where the discovered
addresses overwrite what has already been defined in nova.conf by
per-host nova compute DaemonSet overrides.
It is important to allow the flexibility of using (or not) the
default ConfigMap/DaemonSet cluster-level configuration, so that
custom per-host override definitions are possible that will not be
overwritten by nova-compute-init.sh.
One use case (live-migration) for this flexibility is the following:
Originally the nova-compute-init.sh script received the capability of
selecting a target interface (by name, at the ConfigMap level) through
which the live-migration traffic should be handled [1], allowing the
possibility of selecting a separate network to handle live-migration
traffic. It did not assume any interface/network IP if users did not
set .Values.conf.libvirt.live_migration_interface.
Later [2], the same script was updated to fall back to default gateway
IP resolution when live_migration_interface is not defined.
So, currently it is mandatory to define a "cluster level config" for
the interface name (i.e., through the ConfigMap) or to rely on default
gateway IP resolution for live-migration addresses.
This can be problematic for use cases where:
* There are many networks defined for the cluster and a host default
gateway might not resolve to the desired network IP;
* There is a need for a per-host definition of nova.conf, since
nova-compute-init.sh will create a new .conf that overwrites it
(see the sketch below).
[1] commit 31be86079d711c698b2560b4bed654e23373a596
[2] commit 8f0a15413839c92d6d527bf7cbc441380de6c2af
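The sketch below (Python standing in for the shell logic of
nova-compute-init.sh; the helper names are assumptions) illustrates the
discovery and the opt-out described above:
```
# Hedged sketch of the address discovery: when discovery is disabled, leave
# per-host nova.conf overrides untouched; otherwise resolve the configured
# interface, falling back to the default-gateway interface.
import subprocess

def interface_ipv4(ifname: str) -> str:
    # First global IPv4 address on the interface, via iproute2.
    out = subprocess.run(
        ["ip", "-o", "-4", "addr", "show", "dev", ifname, "scope", "global"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.split()[3].split("/")[0]

def default_gateway_interface() -> str:
    out = subprocess.run(
        ["ip", "-4", "route", "show", "default"],
        capture_output=True, text=True, check=True,
    ).stdout
    fields = out.split()
    return fields[fields.index("dev") + 1]

def live_migration_addr(interface: str | None, use_discovered: bool) -> str | None:
    if not use_discovered:
        # Keep whatever a per-host override already put into nova.conf.
        return None
    return interface_ipv4(interface or default_gateway_interface())
```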
Change-Id: Iaf86e0a215802001f58d607a1a3a18acf83f5e81
Signed-off-by: Thales Elero Cervi <thaleselero.cervi@windriver.com>
Signed-off-by: Robert Church <robert.church@windriver.com>
Once manifests.certificates is set to true, TLS is enabled for all
components. There is no way to enable TLS for individual components.
This patch supports the use case of enabling just the vencrypt auth
scheme.
Change-Id: I1e33071a16e0eb764c51442f99c3795ceb9efb19
If ovsdb_connection is defined in the os_vif_ovs config group, the
health probe fails for nova-compute because of a wrong condition used
to detect the DB connection string in the configuration file.
This patch detects the DB connection string using str.startswith()
in a stricter way.
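A minimal sketch of the stricter match (the line-based parsing here is
illustrative, not the probe's exact code): a substring check such as
`"connection" in line` also matches `ovsdb_connection`, while anchoring the
match with startswith() does not.
```
# Hedged sketch: only lines that *start* with the database "connection"
# option count; "ovsdb_connection = ..." no longer matches.
def find_db_connection(conf_path: str) -> str | None:
    with open(conf_path) as conf:
        for raw in conf:
            line = raw.strip()
            if line.startswith("connection =") or line.startswith("connection="):
                return line.split("=", 1)[1].strip()
    return None
```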
Change-Id: I12a3ea4061d5c13879b878b85eb206726b5db27c
This patchset allows enabling vencrypt for VNC, based on a
downstream patchset. [1]
Primary differences:
- uses HTK to render the cert instead of its own template
- leaves the creation of a separate (sub)issuer for vencrypt
  outside the scope of this (and the libvirt) chart.
1. https://github.com/vexxhost/atmosphere/pull/483
Co-Authored-By: Oleksandr Kozachenko <okozachenko1203@gmail.com>
Change-Id: If377faebc4c65f37b08a3c8aab2fed844a07c26f
- Also run the last two test scripts in the compute-kit job
sequentially. This is handy since it allows us to see
what is happening during the test run. Both of these
test scripts usually take just a few minutes, but if
we run them using the Ansible async feature and one of
the scripts fails, then we are forced to wait for
a long timeout.
Change-Id: I75b8fde3ec4e3355319b1c3f257e2d76c36f6aa4
Also, a new nodeset was temporarily added.
The AIO compute-kit jobs for recent releases require
a huge node to work reliably. We'll remove the temporary nodeset
once the following is merged:
https://review.opendev.org/c/openstack/openstack-helm-infra/+/884989
Change-Id: I7572fc39a8f6248ff7dac44f20076ba74a3499fc
oslo_messaging.RPCClient is currently deprecated.
Configure the health probe to use get_rpc_client when it is
available.
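A hedged sketch of the compatibility check (the wrapper itself is
hypothetical):
```
# Prefer oslo.messaging's get_rpc_client() when the installed release provides
# it, otherwise fall back to the deprecated RPCClient constructor.
import oslo_messaging

def build_rpc_client(transport, target, **kwargs):
    if hasattr(oslo_messaging, "get_rpc_client"):
        return oslo_messaging.get_rpc_client(transport, target, **kwargs)
    return oslo_messaging.RPCClient(transport, target, **kwargs)
```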
story: 2010766
task: 48076
Change-Id: I0795e6e099b935ead8d6d3d22722999b852749d0
If the transport_url for nova's oslo.messaging notifications differs
from the default transport_url, a timeout occurs when
oslo_messaging.RPCClient.call is executed, because the transport is
built by oslo_messaging.get_notification_transport.
This change moves the health probe to use get_rpc_transport instead.
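A minimal sketch of the probe-side change (get_rpc_transport is the public
oslo.messaging API; the wrapper is illustrative):
```
# Build the probe's transport from the RPC settings ([DEFAULT] transport_url)
# rather than the notification settings, which may point at a different broker.
import oslo_messaging
from oslo_config import cfg

def probe_transport(conf=cfg.CONF):
    return oslo_messaging.get_rpc_transport(conf)
```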
story: 2010766
task: 48074
Change-Id: Ia6a2b9ce500e8806f76882b28f4d9cca440b6e1a
If application credentials with access rules are required, an
OpenStack service using keystonemiddleware to authenticate with
keystone needs to define service_type in its configuration file.
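For illustration, a hedged sketch of making sure the option is present (the
section, option, and the "compute" value are assumptions for a nova-like
service):
```
# Hypothetical helper: declare service_type in the keystonemiddleware section
# so access-rule validation can match the service.
import configparser

def ensure_service_type(conf_path: str, service_type: str = "compute") -> None:
    cfg = configparser.ConfigParser(interpolation=None)
    cfg.read(conf_path)
    if not cfg.has_section("keystone_authtoken"):
        cfg.add_section("keystone_authtoken")
    cfg.set("keystone_authtoken", "service_type", service_type)
    with open(conf_path, "w") as f:
        cfg.write(f)
```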
Change-Id: I7034e82837d724f12d57969857f79d67c962cebe
nova-compute-ssh failed to start with the error "Missing privilege
separation directory: /run/sshd" when nova ssh is enabled.
Change-Id: I4fa25a56f191aae6b4fa9efce508723d7c256c8c
At the moment, if live_migration_inbound_addr is not defined, it
defaults to the hostname of the hypervisor, which requires DNS in
order to work properly.
DNS can be complicated and an environment might not have it, so it
makes sense to default to the default-route interface for live
migration traffic, allowing live migrations when DNS is not set up.
Change-Id: I10eb63fc64d7cd34ef89df529637b1e81951e38c
This change updates all Ceph image references to use Focal images
for all charts in openstack-helm.
Change-Id: I67cd294e2aabf3c3af404da42204f9b6157b06f7
We have a few deprecated config options that are no longer being
used, as well as some that have been moved to other groups for
quite some time.
Change-Id: Ibd447897f6399bab47b031ccab228188ebed8266
This PS adds backoffLimit to the nova-bootstrap job in the nova
chart. By default, this job was created from a template in
helm-toolkit. In commit 58291db1a6 the job was redesigned without
control over the backoffLimit value.
Change-Id: Icb28363be8063d849fd22e9c2542edf1eb203d60