The actual pin was removed in 13925d1965aa5658304c9ca88fb0307d6cff2eff.
This comment is a leftover that is quite misleading.
Change-Id: I65c1196a7c9dfdf5624280251d7f7ecd9df0c283
Install twine into a venv and set the appropriate environment
variables. Also add tests.
Based on the commit adding the `ensure-nox` role (77b1b24).
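As a rough sketch (paths here are illustrative, not necessarily what
the role uses), installing twine into a venv with Ansible looks like:

  - name: Install twine into a virtualenv
    ansible.builtin.pip:
      name: twine
      virtualenv: "{{ ansible_user_dir }}/twine-venv"  # illustrative path

Jobs can then supply credentials via the standard twine environment
variables (TWINE_USERNAME, TWINE_PASSWORD, TWINE_REPOSITORY_URL).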
Related-bug: #2095514
Change-Id: Ibb4e89f79879b4d0ae0294440c9c0b79fc57a7fa
The PyPA "build" project is the canonical pyproject (PEP 517) build
frontend, and is necessary in cases where SetupTools-based projects
want to do modern Python packaging standards-compliant builds. The
SetupTools maintainers have long since deprecated direct calls to
setup.py scripts, with this as the preferred solution.
Note that pyproject-build is designed to be backwards-compatible
with old-style SetupTools projects that don't have a pyproject.toml
file, so this should be a safe and transparent change. That said, we
include a failsafe switch to bring back the old behavior just in
case it's needed by some projects for unexpected reasons.
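For reference, invoking the frontend from a playbook amounts to
something like the following (the chdir value is illustrative):

  - name: Build sdist and wheel with pyproject-build
    ansible.builtin.command: pyproject-build --sdist --wheel .
    args:
      chdir: "{{ zuul.project.src_dir }}"

which roughly replaces the deprecated `python setup.py sdist
bdist_wheel` invocation for legacy projects.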
Change-Id: I9b28c97092c32870bf730f5ca6cac966435370bc
Uv (https://docs.astral.sh/uv/) is not declared as a dependency of a
Python project; it must already be available on the system. This role
installs it if it is missing.
- The latest version is installed, unless `ensure_uv_version` is
set.
- The installed executable path is set as the `uv_executable` fact.
- The `/usr/local/bin/uv` symlink can also be created if
`ensure_uv_global_symlink: true`.
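For example, a job could consume the role along these lines (the
version value is illustrative):

  - name: Install uv
    ansible.builtin.include_role:
      name: ensure-uv
    vars:
      ensure_uv_version: "0.5.0"       # omit to install the latest
      ensure_uv_global_symlink: true   # also create /usr/local/bin/uv

  - name: Show the installed executable
    ansible.builtin.command: "{{ uv_executable }} --version"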
This new role is a verbatim copy of the `ensure-poetry` role, just doing
a `s/poetry/uv/g`. Even this commit is a replay of the commit adding
that role: 524b7e7b95dcd6adc311e74dd7f0e6da8a3cce58.
Change-Id: I55bc5e1d273045d0978b09f719bf79a875336e30
pyproject-build (https://build.pypa.io/) is used as a pyproject (PEP
517) build frontend. This role installs it if missing.
This new role is basically a copy of the `ensure-poetry` role, in
turn copied from other roles in this repository.
Change-Id: If6e3970d995256beea170cad039d7dba9538d191
Some jobs recently showed an unexpected processor count. Add some
data to help debug this.
Change-Id: I587a492d1aa94b0886c7e9a2260a3e2eb384e788
Ansible-core 2.16.4 appears to have a behavior change where it
will include the implicit localhost in hostvars, which means that
any place where we iterate over hostvars and assume each entry is a
real host could throw an exception. To avoid that, add checks that the
variables we are about to access on the host exist.
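The guards are along these lines (the variable checked here is just
an example):

  - name: Iterate over real hosts only
    ansible.builtin.debug:
      msg: "{{ hostvars[item]['nodepool']['label'] }}"
    loop: "{{ hostvars | list }}"
    when: "'nodepool' in hostvars[item]"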
Change-Id: Iff89da761e5f6748b454610a64c2fdd4f5e56a77
This adds a role (and job) to mirror container images from one
registry to another.
Also, disable the name[template] ansible-lint check because it
greatly reduces the utility of including templates in task names.
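The skip is expressed in the usual .ansible-lint configuration,
roughly:

  # .ansible-lint
  skip_list:
    - name[template]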
Change-Id: Id01295c51b67ffb7e98637c6cdcc4e7a14c92b22
We were checking if dockerhub is a valid key in the
zuul_site_mirror_info/mirror_info dictionaries but did so without
quoting dockerhub as a string. This meant Ansible tried to look up
dockerhub as a variable, producing this error:
The conditional check 'dockerhub is in zuul_site_mirror_info' failed.
The error was: error while evaluating conditional (dockerhub is in
zuul_site_mirror_info): 'dockerhub' is undefined. 'dockerhub' is undefined
Fix this by quoting dockerhub so that we look up the string as a key
instead of as a variable.
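In other words, the conditional changes roughly like this:

  # before: dockerhub is treated as an (undefined) variable
  when: dockerhub is in zuul_site_mirror_info

  # after: the string 'dockerhub' is looked up as a key
  when: "'dockerhub' is in zuul_site_mirror_info"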
Change-Id: Ie869b9b52fd0a5b70fc07548ce449937ed2c9589
This adds new style mirror_info handling to use-docker-mirror to give us
greater control over whether or not Docker Hub should be mirrored. We
ignore the old style configuration if the new style is present, which
gives us this control. Otherwise we fall back to the old behavior.
We also update the ensure-docker test jobs to be triggered by updates to
the use-docker-mirror role, as ensure-docker includes this role. We
should get decent functional test coverage this way.
Change-Id: Ia1b216a6dd68bcafbe599777037c5d7b1b3e8201
The openvswitch.openvswitch collection is removed from Ansible packages
starting with Ansible 11. This causes ansible-lint to correctly fail to
find the openvswitch_bridge module when ansible-lint runs with Ansible 11.
Work around this by capping the Ansible used by ansible-lint to <10 and
leave a note about the module going away where we use it.
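The cap amounts to installing something like the following for the
lint environment (where exactly the pin lives depends on how the
linters are installed):

  - name: Install ansible-lint with a capped Ansible
    ansible.builtin.pip:
      name:
        - ansible-lint
        - "ansible<10"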
Change-Id: Id2d4e4f59c7d7e595c5458bc8717146c2326c573
There are cases where a downstream user of run-buildset-registry needs
to use a different image, e.g. when the image has to be cached in a
local registry. To facilitate this use case, add the
buildset_registry_image variable that lets the user specify a
different image.
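A downstream job can then point the role at its own copy, e.g.:

  vars:
    buildset_registry_image: myregistry.example.com/library/registry:2  # illustrative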
Change-Id: I0cd3bd2f6bcd0ac73609bf37ce99557472e9f3d1
For reasons unknown to us, triggering the webhook with HTTP basic
auth using Ansible's uri module recently started failing on some
operating systems, e.g. Ubuntu Noble.
Switch to using the curl command directly to trigger that webhook
instead.
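The replacement is roughly the following (URL and credential
variables are illustrative):

  - name: Trigger the webhook with curl
    ansible.builtin.command: >-
      curl --fail -u "{{ webhook_user }}:{{ webhook_password }}"
      -X POST "{{ webhook_url }}"
    no_log: true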
Change-Id: Idbf643ea27220504ac9e37eaf9f18930d2fc08ab
If you need to run native arm64 builds, you can take advantage of
this change, which relies on remote builders to build things
natively, giving a significant speed-up in container build time.
Change-Id: I962bb2357a2c458d5e72b334b4fe36b55b034864
This counts the open file handles and inodes. This may be useful
(after establishing a baseline) for evaluating ulimit errors.
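On Linux one minimal way to capture these numbers (the role's exact
implementation may differ) is to read the proc counters:

  - name: Collect open file handle and inode counts
    ansible.builtin.command: cat /proc/sys/fs/file-nr /proc/sys/fs/inode-nr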
Change-Id: I6d5c67d7c5c03d4aa7cd88b2238163cc729d9782
We removed the default value because having a default value makes no
sense at all. To help with any transitions, add a runtime check that
the variable is set.
Also, while we're at it, update the docs to indicate that the parameter
is required.
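The check is in the spirit of the following (the variable name below
is only a placeholder, not the actual parameter):

  - name: Fail early if the required variable is unset
    ansible.builtin.assert:
      that:
        - some_required_var is defined   # placeholder name
      fail_msg: some_required_var must be set by the calling job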
Change-Id: I1e18ea51d9d56561608ff241d71b63965c4f78bd
The ensure-nodejs role defaults to installing nodejs 6, which
currently produces this error:
Failed to update apt cache: W:The repository
'https://deb.nodesource.com/node_6.x noble Release' does not have a
Release file., W:Data from such a repository can't be authenticated
and is therefore potentially dangerous to use.
We need to make a few changes to bring this ensure-nodejs role up to
modern expectations for nodesource usage. First we drop the default
nodejs version from ensure-nodejs. Everyone is already setting this
value to make this role work or they are broken and will need to change
something anyway. This gets us off of the nodejs update treadmill in
this role.
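Callers therefore need to pick a version explicitly, along these
lines (assuming node_version is the role's version knob):

  - name: Install nodejs
    ansible.builtin.include_role:
      name: ensure-nodejs
    vars:
      node_version: 20   # illustrative value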
Then, with nodejs 16 and newer, there is a new gpg key and no deb-src
packages, so we need to change the apt configuration for those
versions. We make these changes to match the corresponding setup_16.x
etc. scripts from nodesource.
Change-Id: I0d5c93e4fbcee0be2cc477bf9f625e419a2b9bd1
Previously we pinned to 1.28/stable due to a bug that prevented
1.29/stable from working. Now we've hit a new issue with 1.28/stable on
bookworm. The fix for that appears to simply be to upgrade to
1.31/stable so we do so here. More details can be found in this GitHub
issue:
https://github.com/canonical/microk8s/issues/4361
The new version appears to return from the snap installation before the
k8s installation is fully ready to deal with add-on installation. This
occasionally produces errors like:
subprocess.CalledProcessError:
Command '('/snap/microk8s/7178/microk8s-kubectl.wrapper', 'get',
'all,ingress', '--all-namespaces')'
returned non-zero exit status 1.
Work around that with `microk8s status --wait-ready` to ensure that k8s
is up before adding addons.
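The wait is a straightforward extra task, roughly:

  - name: Wait for microk8s to be ready before enabling addons
    ansible.builtin.command: microk8s status --wait-ready
    become: true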
While we are at it we also update the collect-kubernetes-logs role to
collect microk8s inspect output as that would've enabled us to debug the
above issue without holding nodes. We also update test jobs to trigger
when the collect-kubernetes-logs and collect-container-logs roles are
updated to ensure we get coverage from those jobs when updating these
roles.
Change-Id: I60022ec6468c2cadd723a71bbc583f20096b27dc
It's highly likely that folks may want to use YAML anchors to
build up lists of DIB elements. To aid in that, allow the value
to be a list of lists and automatically flatten it.
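That allows definitions like the following (variable names are
illustrative); the role can then apply Ansible's flatten filter
before using the value:

  common_elements: &common
    - ubuntu-minimal
    - simple-init
  diskimage_elements:
    - *common
    - [vm, growroot]

  # inside the role: "{{ diskimage_elements | flatten }}"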
Change-Id: I55b9cb16951b51da32f99ca5858b75217951b279
This would be especially useful when the EC2 fleet API is configured
and the instance type is not known in advance.
Change-Id: Ibcdade5cfffd13fddd95e797c60c5327bb34fdb6
F-strings are not supported in Python 3.5, which is in use on Xenial.
We don't claim to support Xenial, but this is an easy regression
to avoid.
Also, add test jobs for this role so that we get feedback before
copying it to the prod roles.
Also, add a Xenial test job to exercise it since we still have
Xenial nodes available.
Change-Id: Ifc773aa688adb1a01cfe691b3bdca0b3086658cd
This adds a role convert-diskimage which uses the qemu-img tool to
convert diskimages from one format to another. Currently supported image
formats are raw and qcow2.
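Under the hood this is essentially a qemu-img invocation such as
(filenames are illustrative):

  - name: Convert a raw image to qcow2
    ansible.builtin.command: qemu-img convert -f raw -O qcow2 image.raw image.qcow2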
Change-Id: I4770af04c37f39e0cce23d5dd59ead744bed7d74
This adds a role variable to configure the diskimage-builder
environment. This gives users a choice between using the Ansible
"environment" keyword and using a variable. The variable may be
particularly useful since it allows full configuration of the role
from a Zuul job definition.
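In a Zuul job this could look roughly like the following (the
variable name is a placeholder for whatever the role actually
exposes):

  vars:
    diskimage_environment:       # placeholder name
      DIB_RELEASE: noble
      DIB_DEBUG_TRACE: "1"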
Change-Id: I68542f13454b4f2e2e9bb8d356feefddba23d8f2
* This adds some extra options to the ensure-kubernetes role:
  * podman + cri-o can now be used for testing
    * This mode seems to be slightly more supported than the
      current profiles.
  * The location for minikube install can be moved.
* The use-buildset-registry role needed slight updates in order
  to populate the kubernetes registry config early.
Change-Id: Ia578f1e00432eec5d81304f70db649e420786a02
* It looks like zuul-jobs-test-registry-buildset-registry-k8s-crio
is busted with Ubuntu Jammy + cri-o installed from kubic, with
errors like https://github.com/cri-o/ocicni/issues/77
(also, kubic has been wound down and cri-o has been spun off)
* cri-o in Noble uninstalls docker-ce, in a follow-up we should
clean that up and switch to a pure podman profile
* This minikube configuration is not supported, but it seems that
upstream cri-o might have made some fixes that make it work
* Update the job to use Ubuntu Noble instead of Jammy
* Update ensure-podman for Ubuntu Noble
(podman is now part of the Ubuntu distro)
* Update the cri-o install in ensure-minikube for Ubuntu Noble and later
(cri-o is now part of k8s)
Other miscellaneous fixes and workarounds:
* k8s.gcr.io is being sunsetted, so update the test image:
https://kubernetes.io/blog/2023/03/10/image-registry-redirect/
* Relaxed the security to run minikube from /tmp (in future,
we should set the default to /usr/local/bin)
* Updated the microk8s check-distro task for Noble
Change-Id: I3b0cbac5c72c31577797ba294de8b8c025f8c2c3