In order to reduce divergence from ansible-lint rules, we apply
auto-fixing of violations.
In this patch we replace all kinds of truthy variables with
`true` or `false` values to align with recommendations, along with
aligning the quoting style.
Change-Id: Ie1737a7f88d783e39492c704bb6805c89a199553
This change matches an earlier modification to os_neutron.
Currently we symlink /etc/<service> to an empty directory at the pre-stage
and fill it with config only during post_install. This means
that policies and rootwrap filters do not work properly until the
playbook execution finishes. Additionally, we replace the sudoers file
with a new path in it, which makes current operations impossible for
the service, since rootwrap cannot gain sudo privileges.
With this change we move the symlinking and rootwrap steps to handlers,
which means that we replace configs while the service is stopped.
During post_install we place all of the configs inside the venv,
which is versioned at the moment.
This way we minimise downtime of the service while performing upgrades.
Closes-Bug: #2056180
Change-Id: I9c8212408c21e09895ee5805011aecb40b689a13
In case compute nodes use a non-standard SSH port or some other
unusual connection between each other, deployers might need to
supply extra configuration inside the generated SSH config.
The community.general.ssh_config module was not used, as it requires the
extra `paramiko` module to be installed on each destination host.
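A sketch, assuming a hypothetical `nova_ssh_extra_config` variable that
carries verbatim lines for the generated SSH config (the exact variable
name shipped with this change may differ):
    # user_variables.yml -- hypothetical variable name, for illustration
    nova_ssh_extra_config: |
      Port 2222
      StrictHostKeyChecking no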
Change-Id: Ic79aa391e729adf61f5653dd3cf72fee1708e2f5
The default value for heartbeat_in_pthread has been reverted in
oslo.messaging to False [1] and the revert backported to Yoga.
At the moment this setting brings intermittent issues during live
migrations of instances and some other operations, so it makes sense
to align it with the default value.
[1] https://review.opendev.org/c/openstack/oslo.messaging/+/852251
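Deployers who still depend on the previous behaviour can pin it back
with an override, e.g. in user_variables.yml:
    # restore the pre-revert behaviour explicitly
    nova_nova_conf_overrides:
      oslo_messaging_rabbit:
        heartbeat_in_pthread: true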
Change-Id: I5601726095ff19620de2d87220efad191cf7cb6d
During the last release cycle oslo.messaging landed a series [1] of extremely
useful changes that are designed to implement modern messaging
techniques for RabbitMQ quorum queues.
Since these changes are breaking and require queues to be re-created,
it makes total sense to align them with the migration to quorum queues by default.
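A sketch of opting out, assuming the global toggle introduced alongside
these changes is named `oslomsg_rabbit_quorum_queues`:
    # user_variables.yml -- keep classic HA queues instead of quorum queues
    oslomsg_rabbit_quorum_queues: false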
Change-Id: Ia5069c9976d07ee3949e637d8eb76a06b380cdec
In order to be able to globally enable notification reporting for all services,
without the need to have Ceilometer deployed or a bunch of overrides for each
service, we add an `oslomsg_notify_enabled` variable that aims to control
the behaviour of enabled notifications.
The presence of Ceilometer is still respected and referenced by default.
A potential use case is various billing panels that rely on notifications
but do not require the presence of Ceilometer.
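For example, to force notifications on for every service regardless of
Ceilometer presence:
    # user_variables.yml
    oslomsg_notify_enabled: true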
Change-Id: Ib5d4f174be922f9b6f5ece35128a604fddb58e59
In order to allow definition of policies per service, we need to add variables
to service roles that will be passed to openstack.osa.mq_setup.
Currently this can be handled by leveraging group_vars and overriding
`oslomsg_rpc_policies` as a whole, but that is not obvious and
can be non-trivial for some groups which co-locate multiple services,
or in case of metal deployments.
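A sketch of the current group_vars workaround, overriding
`oslomsg_rpc_policies` as a whole (the policy body is illustrative):
    # group_vars/nova_all.yml -- illustrative policy definition
    oslomsg_rpc_policies:
      - name: "TTL"
        pattern: "^(?!(amq\\.)|(reply_)).*"
        priority: 0
        tags:
          message-ttl: 1200000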
Change-Id: I6a4989df2cd53cc50faae120e96aa4480268f42d
This patch proposes to move the condition on when to install certificates from
the role include statement to a combined "view" for API and consoles.
While adding computes to the same logic might be beneficial for CI and
AIO metal deployments, it could have a negative effect on
real deployments, as it would create a bunch of skipped tasks for computes,
so we leave them separated.
With that said, API and console services are usually placed on the same hosts,
so it makes sense to distribute certs to them at once while keeping the
possibility of different hosts in mind.
Change-Id: I8e28a79a6e3a5be1fe54004ea1d2c3a3ccdc20bc
Add a variable nova_cell_force_update to enable deployers to ensure that
role execution will also update cell mappings whenever that is needed,
for instance during password rotation or when the MySQL
address changes.
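For example, forcing the mapping refresh:
    # user_variables.yml (or pass with -e for a one-off run)
    nova_cell_force_update: true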
Change-Id: I5b99d58a5c4d27a363306361544c5d80759483fd
Due to a clash in the resulting certificate names, certificates were
re-generated on each playbook run.
In order to resolve that we need to rename the certificates. As `nova_backend_ssl`
was implemented most recently and is not that widely adopted, we change the name
for it.
This will cause all backend certificates for API to be re-generated.
Change-Id: I4bca3bb2733fe25dad71345f84d9030c535c901b
Instead of evaluating the same my_ip condition in multiple places across
the role, this patch does the evaluation once in vars and uses the
resulting variable afterwards.
This not only reduces the number of evaluations made throughout the role runtime,
but also avoids possible corner cases where one of the duplicated
expressions could go out of sync.
Closes-Bug: #2052884
Change-Id: I454b53713ecacf844ac14f77b6d1e1adc1322c0e
It appears there was a change to remove the list option when
moving from pci_passthrough_whitelist to device_spec. Instead, device_spec
can be specified multiple times in the file.
This patch aims to resolve this whilst maintaining backwards
compatibility.
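The rendered nova.conf is then expected to repeat the option, roughly
(vendor/product IDs illustrative):
    [pci]
    device_spec = {"vendor_id": "8086", "product_id": "10fb"}
    device_spec = {"vendor_id": "8086", "product_id": "10ed"}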
Change-Id: I12b38e45d7b41fbf4786d3320e511eb9127fe216
As of today we do not have any means of Blazar integration with Nova,
while we have provided roles for Blazar installation for a while now. This
patch aims to bring in more native integration and remove the necessity
of overrides for such deployments.
Related-Bug: #2048048
Co-Authored-By: Alexey Rusetsky <fenuks@fenuks.ru>
Change-Id: Ica50a5504de1b1604f72123751cbb3f45c85ab46
For quite some time, we have tied usage of the --by-service flag for the
nova-manage cell_v2 discover_hosts command to the used nova_virt_type.
However, we run db_post_setup tasks only once, delegating to the
conductor host. With the latest changes to the logic, where this task is
included from the playbook level, it makes even less sense, since the
definition of nova_virt_type for the conductor is weird and wrong.
Instead, we attempt to detect if ironic is in use by checking hostvars
of all compute nodes. This covers host_vars, group_vars,
all sorts of extra variables, etc.
Thus, ironic hosts should now be discovered properly by the nova-manage
command.
Related-Bug: #2034583
Change-Id: I3deea859a4017ff96919290ba50cb375c0f960ea
Some deployments might want to perform live migrations over dedicated
networks, like a fast storage network, while keeping management over the
default mgmt network.
The current default behaviour prevents such a use case, since
nova_libvirt_live_migration_inbound_addr is not added to the certificate
generated for libvirtd, and thus live migration will fail.
Also, to let users override the default behaviour more nicely and to reduce
code duplication, a new variable ``nova_pki_compute_san`` was introduced
that handles the SAN definition for compute nodes.
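A sketch of extending the SAN so the certificate also covers a dedicated
migration address (the exact SAN syntax follows the PKI role conventions;
the shown value is illustrative):
    nova_pki_compute_san: "IP:{{ management_address }},IP:{{ nova_libvirt_live_migration_inbound_addr }}"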
Change-Id: I22cc1a20190f0573b0350369a6cea5310ab0f0a7
This change implements and enables by default quorum support
for rabbitmq, as well as providing default variables to globally tune
its behaviour.
In order to ensure an upgrade path and the ability to switch back to HA queues,
we change vhost names by removing the leading `/`, as enabling quorum
requires removing the exchange, which is a tricky thing to do with running
services.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/875399
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/873618
Change-Id: I792595dac8b651debcd364cd245145721575a516
When Nova is deployed with a mix of x86 and arm systems
(for example), it may be necessary to deploy both 'novnc' and
'serialconsole' proxy services on the same host in order to
service the mixed compute estate.
This patch introduces a list which defines the required proxy
console types.
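A sketch, assuming the list is named `nova_console_proxy_types` (the
shipped name may differ):
    # host_vars for a mixed x86/arm compute host
    nova_console_proxy_types:
      - novnc
      - serialconsole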
Change-Id: I93cece8babf35854e5a30938eeb9b25538fb37f6
With the update of ansible-lint to version >=6.0.0 a lot of new
linters were added that are enabled by default. In order to comply
with the linter rules we're applying changes to the role.
With that we also update metadata to reflect the current state.
Depends-On: https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/888223
Change-Id: I730ae569f199fc8542a5a61beb149f459465d7e2
A long time ago a variable `nova_ram_weight_multiplier` was implemented
and its default value was set to 5.0.
There are 2 issues with this:
1. The default value in nova is 1.0 [1], so our value is much bigger than
nova's default without having a strong reason for that.
2. OSA does not provide similar variables for other multipliers like
`cpu_weight_multiplier`.
Because there are a couple of different multipliers and more of them
can be implemented in the future (for example,
`hypervisor_version_weight_multiplier` was implemented in 2023.2), it
would be hard for the OSA project to maintain variables for all of them.
It is better to deprecate `nova_ram_weight_multiplier` and let users
define multipliers with `nova_nova_conf_overrides` if necessary.
[1] https://docs.openstack.org/nova/2023.1/configuration/config.html#filter_scheduler.ram_weight_multiplier
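Deployers relying on the old default can keep it via overrides:
    # user_variables.yml -- restore the previous OSA default of 5.0
    nova_nova_conf_overrides:
      filter_scheduler:
        ram_weight_multiplier: 5.0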
Change-Id: I4f82840e94312d38696e3ddd05ef494821233f4d
We're adding 2 services that are responsible for executing db purge and
archive_deleted_rows. The services will be deployed by default, but left
stopped/disabled. This way we allow deployers to enable/disable the
feature by changing the value of nova_archive/purge_deleted.
Otherwise, once the variables were set to true, setting them to false would
not stop the DB trimming, and that would need to be done manually.
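A sketch of enabling both services, assuming the toggles expand to the
names below:
    # user_variables.yml
    nova_archive_deleted: true
    nova_purge_deleted: true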
Change-Id: I9f110f663fae71f5f3c01c6d09e6d1302d517466
By overriding the variable `nova_backend_ssl: True`, HTTPS will
be enabled, disabling HTTP support on the nova backend API.
The ansible-role-pki is used to generate the required TLS
certificates if this functionality is enabled.
`nova_pki_console_certificates` is used to encrypt:
- traffic between console proxy and compute hosts
`nova_pki_certificates` is used to encrypt:
- traffic between haproxy and its backends (including console proxy)
It would be complex to use nova_pki_console_certificates to encrypt
traffic between haproxy and the console proxy, because they don't have a
valid key_usage for that, and changing key_usage would require manually
setting `pki_regen_cert` for existing environments.
Certs securing traffic between haproxy and the console proxy are provided in
execstarts, because otherwise they would have to be defined in nova.conf,
which may be shared with nova-api (which sits behind uwsgi and should
not use TLS).
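For example, enabling the feature in user_variables.yml:
    nova_backend_ssl: true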
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/879085
Change-Id: Ibff3bf0b5eedc87c221bbb1b5976b12972fda608
At the moment we don't really utilize the neutron_provider_networks
mapping except for 2 quite specific drivers, namely NSX and Nuage.
For these 2 use cases we suggest using the overrides functionality instead.
Change-Id: I7d905a1dbda1ec722b161b96742247c806bed162
The use_forwarded_for option for the API has been deprecated since 26.0.0,
as this feature is a duplicate of the HTTPProxyToWSGI middleware that
is now enabled by default.
Change-Id: I45e70e42605455df944ced63f106a76f351052e8
Calico driver support has been removed from OpenStack-Ansible
starting with the Antelope release [1]. We clean up the nova role to drop
calico support from it as well.
[1] https://review.opendev.org/c/openstack/openstack-ansible/+/866119
Change-Id: Ie9c118b8bab265e5bf06b6ec05731cd673ee4d95
Resource providers can be configured using the API or CLI, or they
can also be configured on a per-compute node basis using config
files stored in /etc/nova/provider_config.
This patch adds support for a user defined list of provider config
files to be created on the compute nodes. This can be specified in
user_variables or perhaps more usefully in group_vars/host_vars.
A typical use case would be describing the resources made available
as a result of GPU or other hardware installed in a compute node.
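A sketch of one such file in nova's provider config schema, describing a
hypothetical custom GPU resource (the role variable carrying this content
is not shown here):
    # /etc/nova/provider_config/gpu.yaml -- illustrative content
    meta:
      schema_version: '1.0'
    providers:
      - identification:
          name: '$COMPUTE_NODE'
        inventories:
          additional:
            - CUSTOM_GPU:
                total: 4
                reserved: 0
                min_unit: 1
                max_unit: 4
                step_size: 1
                allocation_ratio: 1.0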
Change-Id: I13d70a1030b1173b1bc051f00323e6fb0781872b
This variable determines if one of the nova console proxies is
deployed alongside the nova-compute service for ironic. Currently
the only supported values are "disabled" and "serialconsole".
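A sketch, assuming the variable is named `nova_ironic_console_type` (the
shipped name may differ):
    nova_ironic_console_type: serialconsole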
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/860947
Change-Id: I8eae97f9c60956049072de8b04e557671a8cdcfa
This should be nova_management_address, which by default is
equivalent to ansible_host; but the use of ansible_host is confusing
when the whole of the rest of os_nova uses nova_management_address
for the address to bind services to.
Change-Id: Ie34acf0115d8e89e2888952e1c2d3dc03a284aff
At the moment we don't provide any option other than using the memcached
backend. With that, we also hardcode the list of packages that should be
installed inside the virtualenv for the selected backend.
Adding the bmemcached requirement to oslo_cache.memcache_pool [1] gives us
the opportunity to refactor this bit of the deployment and allows more
flexibility in backend selection and requirements installation for it.
[1] https://review.opendev.org/c/openstack/oslo.cache/+/854628
Change-Id: I48e193ef29e56aa8639511c5b5dcddc70f5e1198
The 'AvailabilityZoneFilter' is deprecated since the 24.0.0 (Xena)
release. The feature is enabled by the query_placement_for_availability_zone
config option, which is now enabled by default.
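The behaviour can still be steered explicitly through overrides, e.g.:
    # user_variables.yml -- pin the option explicitly
    nova_nova_conf_overrides:
      scheduler:
        query_placement_for_availability_zone: true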
Change-Id: I6be16f7621899a45271a70e7c39d76b837d8c5c9
Implement support for service_tokens. For that we convert
role_name to be a list, along with renaming the corresponding variable.
Additionally, service_type is now defined for keystone_authtoken, which
enables validating tokens with restricted access rules.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/845690
Change-Id: I04b22722b32b6dc8b1dc95e18c3fe96ad17e51ac
This uses ssh signed certificates so there is no longer the need
to distribute the nova public key from each compute host to all
other compute hosts.
The legacy scripts and authorized key files are removed as a
migration step.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/825292
Change-Id: I3456bdf7bed66a2675b8a410d4cf6b2174598a22
When nova doesn't use rbd images (i.e. local storage), it still might be a
good idea to use a direct connection to rbd to fetch images rather than
connecting through HTTP.
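A sketch via overrides, assuming nova's [glance] enable_rbd_download
option is the knob this change steers:
    nova_nova_conf_overrides:
      glance:
        enable_rbd_download: true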
Change-Id: I4f2d7cf54e07376c7a25d45093f5d83be5422234
This configuration option has been observed to result in file
descriptor leaks in certain circumstances. A variable is added
here so that it can be easily overridden.
Change-Id: I7de034307da9352e6f5d1f5f175a330fb8c86463
Related-Bug: #1961603