Running the nova playbook with a tag limit may lead to an error:

  The conditional check 'nova_virt_type != 'ironic'' failed. The error
  was: error while evaluating conditional (nova_virt_type != 'ironic'):
  'nova_virt_type' is undefined

  The error appears to be in
  '/etc/ansible/roles/os_nova/tasks/main.yml': line 289, column 3, but
  may be elsewhere in the file depending on the exact syntax problem.

This can be easily fixed by applying the `always` tag to the tasks from
nova_virt_detect.yml
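A minimal sketch of the fix, assuming the detection tasks are pulled into
main.yml via import (the task name is illustrative; tags on import_tasks
propagate to every imported task):

```yaml
# tasks/main.yml (illustrative): tag virt-type detection so it runs
# even when the play is limited with --tags
- name: Detect nova virt type
  import_tasks: nova_virt_detect.yml
  tags:
    - always
```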
Change-Id: I56aee80180804b8a3e3316cffc6fa8115513b8f1
CentOS has upgraded their libvirt to version 9.3, where libvirt-daemon
is no longer installed as a dependency. So we need to explicitly
install this package to restore functionality.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=2209936
Change-Id: Ic6f2606b5a478c7a891c25bd131ad351a19699bc
Having auth credentials in service_user is required to interact with
other services. Otherwise nova won't be properly authenticated,
for example during volume detach requests.
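A sketch of the resulting nova.conf section, assuming the password auth
plugin; endpoint and credential values are placeholders:

```ini
[service_user]
send_service_user_token = True
auth_type = password
auth_url = http://keystone:5000/v3
project_name = service
project_domain_name = Default
username = nova
user_domain_name = Default
password = SECRET
```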
Change-Id: Ifd607d3acfb18ee4d1de0b8dc39350419cae9c22
In order to cover OSSA-2023-003, nova has added a requirement to define
the service_user section for all nova services.
Change-Id: I81cd6431fec94f56b0ebd66c94e90c9623ba0e38
We're adding two services that are responsible for executing db purge and
archive_deleted_rows. The services will be deployed by default, but left
stopped/disabled. This way we allow deployers to enable/disable the
feature by changing the value of nova_archive/purge_deleted.
Otherwise, once the variables are set to true, setting them back to false
won't stop the DB trimming, and that would need to be done manually.
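Assuming the shorthand above expands to nova_archive_deleted and
nova_purge_deleted (hypothetical expansion), enabling the feature would
be an override along these lines:

```yaml
# user_variables.yml (variable names expanded from the
# nova_archive/purge_deleted shorthand; verify against the role defaults)
nova_archive_deleted: true
nova_purge_deleted: true
```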
Change-Id: I9f110f663fae71f5f3c01c6d09e6d1302d517466
This is required by qemu-system-x86 but only recommended by
qemu-system-arm. Without the file /usr/lib/ipxe/efi-virtio.rom
from ipxe-qemu it is not possible to boot a VM on arm
hosts.
This patch ensures that ipxe-qemu is always installed.
Change-Id: I27fd98a1568bda8bea3d88c3f18b44a080982d0e
By overriding the variable `nova_backend_ssl: True`, HTTPS will
be enabled, disabling HTTP support on the nova backend API.
The ansible-role-pki is used to generate the required TLS
certificates if this functionality is enabled.
`nova_pki_console_certificates` are used to encrypt:
- traffic between console proxy and compute hosts
`nova_pki_certificates` are used to encrypt:
- traffic between haproxy and its backends (including console proxy)
It would be complex to use nova_pki_console_certificates to encrypt
traffic between haproxy and the console proxy because they don't have a
valid key_usage for that, and changing key_usage would require manually
setting `pki_regen_cert` for existing environments.
Certs securing traffic between haproxy and the console proxy are provided
in the ExecStart commands because otherwise they would have to be defined
in nova.conf, which may be shared with nova-api (which sits behind uwsgi
and should not use TLS).
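Enabling the feature is a one-line override; a sketch, assuming the usual
user_variables.yml workflow:

```yaml
# user_variables.yml: serve the nova backend API over HTTPS, with the
# required TLS certificates generated by ansible-role-pki
nova_backend_ssl: True
```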
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/879085
Change-Id: Ibff3bf0b5eedc87c221bbb1b5976b12972fda608
When import is used, ansible loads the imported role or tasks, which
results in plenty of skipped tasks that also consume time. With
includes, ansible does not try to load the play, so time is not wasted
on skipping things.
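The difference in a sketch (the task file and variable names are
illustrative):

```yaml
# Static: parsed at load time; when the condition is false every
# imported task is still loaded and skipped one by one
- import_tasks: nova_console.yml
  when: nova_console_type != 'disabled'

# Dynamic: evaluated at runtime; when the condition is false the file
# is never loaded, so nothing is skipped task-by-task
- include_tasks: nova_console.yml
  when: nova_console_type != 'disabled'
```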
Depends-On: https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/880344
Change-Id: I47c6623e166254802ed0b479b2353c5f2ceb5cfa
At the moment, we deploy an openrc file on conductors and delegate
tasks to them. There is no good reason to do so, since we're actively
utilizing service_setup_host for all interactions with the API. With
that we also replace `openstack` commands with the native
compute_service_info module, which provides all the information we need.
Change-Id: I016ba4c5dd211c5165a74a6011da7bb384c7a82a
According to the nova rolling upgrade process [1], online_data_migrations
should run once all the services are running the latest version of the
code and have been restarted. With that, we should move online migrations
to after the handlers are flushed, when all services have been restarted.
At the same time, nova-status upgrade check must run before services
are restarted to the new version, as a service restart might lead to
service breakage if the upgrade check fails [2]. It makes no sense to
run the upgrade check once the upgrade is fully finished.
[1] https://docs.openstack.org/nova/latest/admin/upgrades.html#rolling-upgrade-process
[2] https://docs.openstack.org/nova/latest/cli/nova-status.html#upgrade
Change-Id: Ic681f73a09bb0ac280c227f85c6e79b31fd3429a
At the moment we don't restart services if the systemd unit file is
changed. We knowingly prevent the systemd_service role handlers from
executing by providing `state: started`, as otherwise the service would
be restarted twice. With that, we now ensure that the role handlers also
listen for systemd unit changes.
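A sketch of the pattern with illustrative names (the real handler and
notification topics live in the role and the systemd_service role):

```yaml
# handlers/main.yml (illustrative): one restart handler that fires for
# config-change notifications and for systemd unit file changes alike
- name: Restart nova services
  ansible.builtin.systemd:
    name: "{{ item }}"
    state: restarted
    daemon_reload: true
  loop: "{{ nova_services_list | default([]) }}"
  listen:
    - Restart nova services
    - systemd service changed
```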
Change-Id: I4273d2fbcbff3028e693e3274093c1afebdcfca2
At the moment we don't really utilize the neutron_provider_networks
mapping except for two quite specific drivers, NSX and Nuage.
For these two use cases we suggest using the overrides functionality
instead.
Change-Id: I7d905a1dbda1ec722b161b96742247c806bed162
The use_forwarded_for option for the API has been deprecated since
26.0.0, as this feature duplicates HTTPProxyToWSGI, which is now
enabled by default.
Change-Id: I45e70e42605455df944ced63f106a76f351052e8
Calico driver support has been removed from OpenStack-Ansible
starting in the Antelope release [1]. We clean up the nova role to drop
calico support from it as well.
[1] https://review.opendev.org/c/openstack/openstack-ansible/+/866119
Change-Id: Ie9c118b8bab265e5bf06b6ec05731cd673ee4d95
qemu-system on debian-derivative OSes is a meta-package which installs
qemu-system-* for all architectures understood by qemu.
This is different from redhat-type OSes, where the qemu-kvm package
installed with dnf only installs the qemu-system-* binary matching
the host architecture.
This causes two problems. First, there is inconsistency in
openstack-ansible deployments between redhat and debian OSes. Second,
there is potentially unexpected emulation of architectures when
launching a VM on a cloud with a mix of compute architectures, when a
full set of qemu-system-* binaries is available on a compute node. The
compute node becomes a candidate for scheduling any of the supported
architectures, and very specific configuration is needed from both the
operator and the end user to ensure that VMs run on a native
architecture or are emulated as required.
This patch changes the installation so that redhat and debian compute nodes
only have the native qemu-system binary installed.
A new feature should be introduced to openstack-ansible in the future
to explicitly control installation of non-native qemu-system-* binaries
and write the config options for controlling emulation.
Change-Id: I1c876c7968efb7f24880f1a6e96ba6b7264ddc94
RDO packages for nova depend on python3-openvswitch,
which makes it required to install OVS on computes regardless
of everything else.
We also clean out the pre-rhel9 variable files as they're no longer
needed.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/872896
Change-Id: I3e31254b7dd1c0ff3cb46153cefce6f6cadd52aa
When Galera SSL is enabled, use SSL encrypted database connections with
nova-manage commands where a connection string is provided.
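For example, nova-manage would then read an encrypted connection string
along these lines (host, credentials and CA path are placeholders):

```ini
# nova.conf fragment (placeholder values): the ssl_ca query argument
# makes the client verify the Galera server certificate
[database]
connection = mysql+pymysql://nova:SECRET@10.0.3.10/nova?charset=utf8&ssl_ca=/etc/ssl/certs/galera-ca.pem
```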
Change-Id: I7019b966b475c09a4e3218461941c1112ae28028
Nova complains about an inability to access the endpoint list for block
storage. This patch updates nova.conf with the respective configuration.
Example errors in nova-compute log:
1. The [cinder] section of your nova configuration file must be configured
for authentication with the block-storage service endpoint.
2. Delete attachment failed for attachment <UUID>. Error: Unknown auth type:
None (HTTP 401) Code: 401: cinderclient.exceptions.Unauthorized:
Unknown auth type: None (HTTP 401)
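A sketch of the [cinder] section being added, assuming the keystone
password auth plugin; endpoint, region and credential values are
placeholders:

```ini
[cinder]
auth_type = password
auth_url = http://keystone:5000/v3
project_name = service
project_domain_name = Default
username = nova
user_domain_name = Default
password = SECRET
os_region_name = RegionOne
```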
Change-Id: I4c1ae32ed078a4412ff44b7ac3f921b223d0cba3
Resource providers can be configured using the API or CLI, or on a
per-compute-node basis using config files stored in
/etc/nova/provider_config.
This patch adds support for a user defined list of provider config
files to be created on the compute nodes. This can be specified in
user_variables or perhaps more usefully in group_vars/host_vars.
A typical use case would be describing the resources made available
as a result of GPU or other hardware installed in a compute node.
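A sketch of such a provider config file, following the documented
provider_config schema (the file name, custom resource class and
inventory numbers are illustrative):

```yaml
# /etc/nova/provider_config/gpu.yaml (illustrative): advertise a custom
# resource class on the compute node's own provider; $COMPUTE_NODE is
# the documented placeholder for the node's provider
meta:
  schema_version: '1.0'
providers:
  - identification:
      uuid: $COMPUTE_NODE
    inventories:
      additional:
        - CUSTOM_GPU:
            total: 4
            reserved: 0
            min_unit: 1
            max_unit: 1
            step_size: 1
            allocation_ratio: 1.0
```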
Change-Id: I13d70a1030b1173b1bc051f00323e6fb0781872b
With the original patch [1] I somehow missed defining enable_rbd_download
along with adding rbd_user/pool/conf. However, none of these
options are taken into account if enable_rbd_download is set to false,
which is the default value.
[1] https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/828897
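For reference, the underlying nova.conf options (user, pool and path
values are placeholders); the rbd_* options only take effect once the
toggle is on:

```ini
[glance]
enable_rbd_download = True
rbd_user = glance
rbd_pool = images
rbd_ceph_conf = /etc/ceph/ceph.conf
```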
Change-Id: I3220de5863c9c3af418e71774c103c4712b16086
Add file to the reno documentation build to show release notes for
stable/zed.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/zed.
Sem-Ver: feature
Change-Id: I877a352de30bdf9b461603e236d8ec0973640c45
The section

  [os_vif_ovs]
  isolate_vif = True

was placed in the middle of the [libvirt] section, causing
all migration settings to be placed in [os_vif_ovs] instead of [libvirt].
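The fix is one of ordering: the [os_vif_ovs] block must come after the
[libvirt] options rather than between them (the migration option values
shown are illustrative):

```ini
[libvirt]
live_migration_permit_auto_converge = True
live_migration_permit_post_copy = True

[os_vif_ovs]
isolate_vif = True
```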
Change-Id: Ief7eb74343f69912fa8a41a200edf22596adfea3
This variable determines if one of the nova console proxies is
deployed alongside the nova-compute service for ironic. Currently
the only supported values are "disabled" and "serialconsole".
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/860947
Change-Id: I8eae97f9c60956049072de8b04e557671a8cdcfa
This should be nova_management_address, which by default is
equivalent to ansible_host, but the use of ansible_host is confusing
when the whole of the rest of os_nova uses nova_management_address
for the address to bind services to.
Change-Id: Ie34acf0115d8e89e2888952e1c2d3dc03a284aff
At the moment we don't provide any option other than using the memcached
backend. With that we also hardcode the list of packages that should be
installed inside the virtualenv for the selected backend.
Adding the bmemcached requirement to oslo_cache.memcache_pool [1] gives
us the opportunity to refactor this bit of the deployment, allowing more
flexibility in backend selection and in installing its requirements.
[1] https://review.opendev.org/c/openstack/oslo.cache/+/854628
Change-Id: I48e193ef29e56aa8639511c5b5dcddc70f5e1198
Currently the Jinja trim_blocks option removes the newline from the end
of proxyclient_address, which makes the port_range option appear on the
same line.
Change-Id: If33021bd0453be3ca18753777e82da12f470b278
Closes-Bug: #1988337
This line was introduced by I3046953f3e27157914dbe1fefd78c7eb2ddddcf6
to bring it in line with other OSA roles, but it should already be
covered by the distribution_major_version line above.
Change-Id: I21b3972553acf38af205e17aa2d48ed19332bcb0
Without this patch, all deployers using OVS had to remember to
apply an override for their deployments.
Now OSA will enable vif isolation by default when OVS is used.
Change-Id: I4195153658c867f259226e80cefac0fcac4caac5
Related-Bug: #1734320
The 'AvailabilityZoneFilter' has been deprecated since the 24.0.0 (Xena)
release. The feature is covered by the query_placement_for_availability_zone
config option, which is now enabled by default.
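In nova.conf terms, the replacement for the deprecated filter looks like
this (the option lives in the [scheduler] group):

```ini
[scheduler]
# Pre-filter hosts by availability zone in placement instead of
# relying on the deprecated AvailabilityZoneFilter
query_placement_for_availability_zone = True
```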
Change-Id: I6be16f7621899a45271a70e7c39d76b837d8c5c9
Implement support for service_tokens. For that we convert
role_name to a list and rename the corresponding variable.
Additionally, service_type is now defined for keystone_authtoken, which
enables validating tokens with restricted access rules.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/845690
Change-Id: I04b22722b32b6dc8b1dc95e18c3fe96ad17e51ac
The keystone role was never migrated to use the haproxy-endpoints role,
and the included task was used instead the whole time.
With that, to reduce complexity and have a unified approach, all mentions
of the role and handler are removed from the code.
Change-Id: I3693ee3a9a756161324e3a79464f9650fb7a9f1a
With the sphinx 5.0.0 release, the default for the language variable
changed from None to 'en'. With that, the current None value is no
longer valid and should not be used.
Change-Id: I6f3bdb6e63986bb25371f09c6c468dc055fd3050
Centos-9 no longer ships this file, so skip adjusting it [1]. The
file should not exist on Centos-9 systems where OSA is used.
If this file is created by a deployer, it will potentially
interfere with the operation of libvirt and other configuration
made by openstack-ansible.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=2042529
Change-Id: Ieeba7fb803e151a9e6d0adac3d1512aef3785e9a