Tox v4 was just released and has a number of breaking changes. One of
these (yet to be properly identified) has completely broken Zuul's tox
siblings processing and produces errors like:
  Traceback (most recent call last):
    File "<stdin>", line 107, in <module>
    File "<stdin>", line 99, in _ansiballz_main
    File "<stdin>", line 47, in invoke_module
    File "/usr/lib/python3.10/runpy.py", line 224, in run_module
      return _run_module_code(code, init_globals, run_name, mod_spec)
    File "/usr/lib/python3.10/runpy.py", line 96, in _run_module_code
      _run_code(code, mod_globals, init_globals,
    File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
      exec(code, run_globals)
    File "/tmp/ansible_tox_install_sibling_packages_payload_0af4x4lc/ansible_tox_install_sibling_packages_payload.zip/ansible/modules/tox_install_sibling_packages.py", line 397, in <module>
    File "/tmp/ansible_tox_install_sibling_packages_payload_0af4x4lc/ansible_tox_install_sibling_packages_payload.zip/ansible/modules/tox_install_sibling_packages.py", line 328, in main
    File "/usr/lib/python3.10/configparser.py", line 724, in read_string
      self.read_file(sfile, source)
    File "/usr/lib/python3.10/configparser.py", line 719, in read_file
      self._read(f, source)
    File "/usr/lib/python3.10/configparser.py", line 1097, in _read
      raise DuplicateOptionError(sectname, optname,
  configparser.DuplicateOptionError: While reading from '<string>' [line 45]: option 'root' in section 'testenv:docs' already exists
Let's pin tox for now; a follow-up change that removes the pin can also
fix tox siblings (thankfully, the change to remove the pin should be
self-testing).
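As a rough sketch of what the pin looks like (illustrative only; the
variable name here may not exactly match the knob ensure-tox exposes):

  - job:
      name: my-tox-job
      parent: tox
      vars:
        # Constrain tox below v4 until siblings handling is fixed.
        ensure_tox_version: '<4'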
Also, we make the zuul-tox-docs job temporarily non-voting, as it runs
ensure-tox non-speculatively out of the opendev/base-jobs repo. We have
plenty of tox jobs running against this change to show it should work
fine.
Change-Id: Idcebd397f47cfef718721d2576cb43dbdb70801d
As shown in [0], the markdownlint job is currently broken, as
the ensure-nodejs role still defaults to node_version 6, which is
severely outdated and no longer installable on our current default
Ubuntu 22.04 nodes. Pin to the latest LTS node version, 18.
[0] https://review.opendev.org/c/opendev/sandbox/+/618075
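For illustration, the pin is roughly of this shape (a sketch; the job
stanza is illustrative, node_version is the ensure-nodejs variable
mentioned above):

  - job:
      name: markdownlint
      vars:
        # Latest LTS; node 6 is no longer installable on Ubuntu 22.04.
        node_version: 18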
Change-Id: I864587f1fd6d32dc5e744fb3cf00e13485cba326
This updates the nodejs-test-dependencies role to install packages on
Debian as well as Ubuntu. The main driver for this is that Ubuntu Jammy
has updated firefox to run out of a snap, and getting selenium to
operate in that environment has been difficult. It is theoretically
possible, but rather than force users to sort it out, offer Debian as
an alternative.
Change-Id: I51cc95046520597a02c307c5d368f492933ed263
Ruamel is a little too sensitive to subtle changes in YAML when
preserving comments. The compact (flow-style) form of the list in
[test-chart] trips it up. Reorder the job definition to avoid that and add
the standard AUTOGENERATED comment.
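Roughly, the difference is between the flow-style and block-style
spellings of the job list (an illustration with simplified keys, not
the exact file contents):

  # compact (flow-style) form that trips up ruamel's comment handling:
  #   jobs: [test-chart]
  # block-style form that round-trips cleanly:
  jobs:
    - test-chart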
Change-Id: I738679aa5ab2575872f67e13ab4cafa8b34a20ed
Some log upload tasks were missing no_log instructions and might
write out credentials to the job-output.json file. Update these
tasks to include no_log.
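The pattern is simply to mark any task that handles upload credentials
with no_log (a minimal sketch; the task and command shown are
illustrative, not the exact tasks touched here):

  - name: Upload logs to object storage
    # Keep the credentials out of job-output.json.
    no_log: true
    command: upload-tool --secret {{ upload_credentials }}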
Change-Id: I1f18cec117d9205945644ce19d5584f5d676e8d8
This updates the ensure-kubernetes testing to check the pod is
actually running. This was hiding some issues on Jammy where the
installation succeeded but the pod was not ready.
The essence of the problem seems to be that the
containernetworking-plugins tools are coming from upstream packages on
Ubuntu Jammy. This native package places the networking tools in a
different location to those from the Opensuse kubic repo.
We need to update the cri-o path and the docker path for our jobs.
For cri-o this is just an update to the config file, which is
separated out into the crio-Ubuntu-22.04 include file.
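As a hedged sketch of the path difference (the variable name is
illustrative; the paths reflect the packaging situation described
above):

  # Where the CNI plugins land depends on the packaging source:
  crio_cni_plugin_dirs:
    - /usr/lib/cni      # Ubuntu Jammy's containernetworking-plugins package
    # - /opt/cni/bin    # location used by the OpenSUSE kubic packages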
For docker things are a bit harder, because you need the cri-docker shim
now to use a docker runtime with kubernetes. Per the note inline,
this shim has some hard-coded assumptions which mean we need to
override the way it overrides (!). This works but does all feel a bit
fragile; we should probably consider our overall support for the
docker backend.
With ensure-kubernetes working now, we can revert the non-voting jobs
from the earlier change Id6ee7ed38fec254493a2abbfa076b9671c907c83.
Change-Id: I5f02f4e056a0e731d74d00ebafa96390c06175cf
The ensure-pip and ensure-pip-localhost playbooks were identical
except for the hosts line. Refactor them into a test role and
invoke that role from the test playbooks.
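The shape of the refactor is roughly (a sketch; the test role name is
illustrative):

  # test-playbooks/ensure-pip.yaml
  - hosts: all
    roles:
      - ensure-pip-test

  # test-playbooks/ensure-pip-localhost.yaml
  - hosts: localhost
    roles:
      - ensure-pip-test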
Change-Id: I037a5d0cb56f96f6ebbbbffbc49cd68a26c71f24
Warehouse (the software implementing the PyPI service) expressly
disallows reuploading any file which is already present, and returns
an error if you attempt to do so. Under normal circumstances this is
desirable feedback, but if you're using this role to upload a batch
of files and encounter a network or service error partway through,
you can be left in a position where it's impossible to rerun the job
for those same artifacts later in order to correct the issue.
Add a pypi_twine_skip_existing boolean toggle to the pypi-upload
role, which will allow callers to indicate to twine that errors from
PyPI indicating a file is already present are to be treated as a
non-fatal condition so that it can proceed with uploading any which
are not already present. Set its default to false, preserving the
existing behavior
for the sake of backward compatibility.
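Usage then looks roughly like this (a sketch; the job name is
illustrative, the variable is the one added here):

  - job:
      name: my-release-python
      vars:
        pypi_twine_skip_existing: true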
In addition to the previously stated use case, this also makes it
possible to build different architecture-specific wheels in separate
jobs without needing to worry about deciding which one will include
the sdist, since they can all try to upload it safely.
Change-Id: I66a8ffce47eb5e856c3b481c20841b92e060b532
Currently we start a test pod for cri-o, but not for docker. Move
this into post so both get coverage.
Change-Id: I768130982e22cb50e360646043ac095d77cca963
We have missed testing various things on Jammy and other platforms.
Use tags to make it clearer what platform each job wants to test
itself on.
Change-Id: Ib656ef4a8bc01de838e3aba14a80d196b8dbfd08
This updates to ansible-lint 6.8.2.
The prior changes have updated various things to address new issues the
linter picked up.
As noted inline, the jinja2 parsing has not been working well with
this repository. I've been keeping an eye on it over several
releases, but I think at this point it's not going to work well for
us. I've left discussion in .ansible-lint.
The var-spacing in stage-output is the old name for the jinja2
filtering; since that is now ignored in .ansible-lint it can be
removed.
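The resulting .ansible-lint entry is roughly of this shape (a hedged
sketch; the exact rule id and comment wording in the repo may differ):

  skip_list:
    # The jinja2 spacing checks (formerly var-spacing) have not worked
    # well for this repository; see the discussion above.
    - jinja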
Change-Id: Ia2c9392eeb232b9b2b1d4febce8493d71de64482
Newer ansible-lint finds "when" or "become" statements that are at the
end of blocks. Ordering these before the block seems like a very
logical thing to do: as we read from top to bottom, it's good to see up
front whether the block will execute or not.
This is a no-op; it just moves these statements in the places the newer
linter found.
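For illustration (the task contents are made up), the reordering looks
like:

  - name: Configure apt preferences
    when: ansible_os_family == 'Debian'
    become: true
    block:
      - name: Install the package
        package:
          name: some-package
          state: present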
Change-Id: If4d1dc4343ea2575c64510e1829c3fe02d6c273f
Latest ansible-lint is finding this. It seems reasonable enough to
ensure the task is named; it's always nice to have context about what
is happening as you read the file.
Change-Id: Ia7e490aaba99da9694a6f3fdb1bca9838221b30a
ansible-lint's name[template] check looks for templates and says they
should only be at the end of the string. This is because in many
circumstances, including errors, the name can't be templated in -- so
the message has a chance of not making sense. Honestly, I can never
remember when it's safe to use templates in names and when it's not;
this seems a reasonable enough compromise.
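For example (an illustrative task, not one from this repo), the
template sits only at the end of the name:

  - name: Install extra package {{ extra_package }}
    package:
      name: "{{ extra_package }}"
      state: present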
Change-Id: I3a415c7706494f393b126b36d7eec7193638a3f1
This is pretty trivial, but consistency is probably better in this
regard, and it does guide you toward writing a sentence that is
human-parsable, which is the point of it.
Change-Id: Iaab9bb6aec0ad0f1d3cae10364c1f1b37d02801e
This currently uses zuul-client to test installation. Unfortunately
for this testing, the recent zuul-client release dropped support for
older Pythons we still need to test with.
To avoid this, make a dummy package that we include in zuul-jobs that
does nothing, but will also install anywhere.
Change-Id: Ia1beed2b21f39db4e2ab75258425d7897238ecf6
These packages are included in Jammy, so install directly.
We can unpin the buildset registry job so it runs on Jammy now.
Change-Id: I00b269f5924474ac00b56c7695a5c526a9a56046
These tests are unhappy after we switched the default nodeset to
Ubuntu Jammy; it is not obvious to me why. Rather than just pinning
them back to the old distro and leaving them to bitrot further, make
these non-voting while we investigate.
zuul-jobs-test-registry-buildset-registry is also failing, but has to
run for Zuul to +1 this. Pin it to focal, but as mentioned inline we
have a follow-on update to the skopeo install that seems to fix this.
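The pin is roughly (only the relevant attribute is shown):

  - job:
      name: zuul-jobs-test-registry-buildset-registry
      nodeset: ubuntu-focal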
Change-Id: Id6ee7ed38fec254493a2abbfa076b9671c907c83
Ubuntu Jammy installs the named-checkzone tool to
/usr/bin/named-checkzone, but old ubuntu installed to
/usr/sbin/named-checkzone. Rather than try and keep track of the
different locations we update the task to run under the shell module so
that we can rely on $PATH to do the heavy lifting for us.
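A minimal sketch of the approach (the zone name and file path are
illustrative):

  - name: Check the zone file
    # shell rather than command so $PATH resolves named-checkzone on
    # both old and new Ubuntu.
    shell: named-checkzone example.com /var/cache/bind/example.com.zone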
To help ensure this doesn't break the old path and to catch problems
earlier I have also added testing across the debuntu set of platforms.
The role doesn't currently support other platforms as it relies on the
bind9utils package.
Change-Id: I1650b605cb6f25fa7585524b427d65d2fc291338
Python 3.11 is out now. Add a tox-py3.11 base job to make it easier to
run tests on this new release of Python.
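Projects can then consume it roughly like this (a sketch; the job name
follows the repo's existing tox-pyXY naming convention):

  - project:
      check:
        jobs:
          - tox-py311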
Change-Id: I19c98c0e683e36f727ae869e7b60f7c16d7eb78d
The default base job nodeset is moving from focal to jammy. Jammy
doesn't have python3.8 to run these jobs. Address that by explicitly
forcing these jobs to run on focal.
Change-Id: I57433092ea2afbec4546659ea20f31161cc41a6e
Using "For Example::" marks the next block as code, so we end up
emiting ".. code-block:: yaml"[1] as a code block which we don't really
want. Remove the additional ':' so we emit the expected HTML.
[1] Check https://zuul-ci.org/docs/zuul-jobs/general-roles.html#role-stage-output
before merge
Change-Id: Ic2e1fb9acb6a6b4ec77bf1ee0ec9ac5d809dfb7c
The new 5.3.0 release of Sphinx has started giving circular reference
errors on some of the included files. Pin this while we figure it
out.
Change-Id: I7674eb0e08207e1ec3b3941361d1fae75f124ddd
The docker image that we build the zuul executor from is a Debian
image, but it does not follow the same python3 policies as Debian
itself. While we would not necessarily expect all roles to work
on the executor, it is reasonable to want to use the ensure-pip
role (which logically should be a no-op on the executor) for the
side effect of finding and returning the appropriate pip command.
Currently, the role fails on the executor because it mistakenly
concludes that it must install python3-venv to get a working
venv module. By increasing the precision of the check for what
is missing (the actual error is a missing "ensurepip" python module,
oh the irony!), we can avoid attempting an installation of
python3-venv on python docker images (including the Zuul executor
images).
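A hedged sketch of the more precise check (the actual tasks in the
role may be structured differently):

  - name: Check whether the ensurepip module is available
    command: "{{ ansible_python.executable }} -m ensurepip --version"
    failed_when: false
    changed_when: false
    register: _ensurepip

  - name: Install python3-venv only if ensurepip is missing
    become: true
    package:
      name: python3-venv
      state: present
    when: _ensurepip.rc != 0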
This also adds the ensure-pip-localhost job, which tests that the
ensure-pip role works on the Zuul executor. The executor is a Debian
host with a working python environment, so it should be a no-op (and
no packages should need to be installed).
Change-Id: Id7f13f2f73d45e680f79c00a83751b185212a63d
Checksum retrieval from GitHub doesn't work when Artifactory is used as
a GitHub mirror and the installer is not already cached.
Allow setting the Bazel installer checksum as a variable to make the
role work in such cases.
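Usage would look roughly like this (a hedged sketch; the variable name
is illustrative of the toggle described above and the value is a
placeholder):

  - job:
      name: my-bazel-build
      vars:
        bazel_installer_checksum: "<sha256 of the pinned installer>"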
See also https://www.jfrog.com/jira/browse/RTFACT-22923
Change-Id: Icc3480420895b9052a4f1c133659a31fff0723be
The stage-output role's README indicates that it stages to the
zuul_output_dir on the remote node, but this isn't true. It actually
stages to {{ stage_dir }} which has a different default than
{{ zuul_output_dir }} in fetch-output.
Update the README to make this clearer and reduce confusion.
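Callers who want the two locations to line up can set stage_dir
explicitly (a hedged sketch; verify the actual defaults against the
role READMEs):

  - role: stage-output
    vars:
      stage_dir: "{{ ansible_user_dir }}/zuul-output"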
Change-Id: I7a7ef801db8194a7101a8dc0a6d89e1292b3fa86
roles/upload-logs-azure/tasks/main.yaml calls
"zuul_azure_storage_upload:" but the library file is currently called
"zuul_azure_upload.py". Since it says in the comments of the file to
call it as "zuul_azure_storage_upload.py" (and that matches the google
one) rename it.
I found this when working backwards with an ansible-lint that runs
against Ansible 2.8. I think this is an ansible-lint bug; see [1].
[1] https://github.com/ansible/ansible-lint/issues/2283
Change-Id: Ic30d82771e6c591cf17bcd15ca9dc92fb0f89e04