
Upload logs to Google Cloud Storage
Before using this role, create at least one bucket and set up appropriate access controls or lifecycle events. This role will not automatically create buckets (though it will configure CORS policies).
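As an illustration, a bucket could be pre-created with the google.cloud.gcp_storage_bucket Ansible module. This is a minimal sketch only; the bucket name, project, and service-account path below are placeholders, not values defined by this role::

   # Hypothetical pre-setup play; all names and paths are placeholders.
   - hosts: localhost
     tasks:
       - name: Ensure the log bucket exists (this role will not create it)
         google.cloud.gcp_storage_bucket:
           name: example-zuul-logs          # placeholder bucket name
           project: example-project         # placeholder GCP project
           auth_kind: serviceaccount
           service_account_file: /path/to/credentials.json  # placeholder
           state: present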
This role requires the google-cloud-storage Python package to be installed in the Ansible environment on the Zuul executor.
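For example, the package could be installed with Ansible's pip module; this is a sketch, and your environment may manage executor dependencies differently::

   - name: Install the google-cloud-storage client library
     ansible.builtin.pip:
       name: google-cloud-storage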
It uses Google Cloud Application Default Credentials.
Role Variables
This role will not create buckets which do not already exist. If partitioning is not enabled, this is the name of the bucket which will be used. If partitioning is enabled, then this will be used as the prefix for the bucket name which will be separated from the partition name by an underscore. For example, "logs_42" would be the bucket name for partition 42.
Note that you will want to set this to a value that uniquely identifies your Zuul installation.
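As a sketch, partitioned naming might be configured as below. The variable names zuul_log_container and zuul_log_partition are assumptions for illustration; check the role's defaults for the actual names::

   - name: Upload logs with partitioned bucket names
     ansible.builtin.include_role:
       name: upload-logs-gcs
     vars:
       zuul_log_container: logs    # assumed bucket-name/prefix variable
       zuul_log_partition: true    # assumed partitioning toggle
       # With partitioning enabled, partition 42 would use bucket "logs_42".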
This log upload role normally uses Google Cloud Application Default Credentials; however, it can also operate in a mode where it uses a credentials file written by gcp-authdaemon: https://opendev.org/zuul/gcp-authdaemon
To use this mode of operation, supply the path to the credentials file previously written by gcp-authdaemon.
Also supply :zuul
upload-logs-gcs.zuul_log_project
.
When using :zuul
upload-logs-gcs.zuul_log_credentials_file
, the name of the Google Cloud project of the log container must also be supplied.
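A minimal sketch of this mode, assuming the role is invoked via include_role; the file path and project name below are placeholders::

   - name: Upload logs using a gcp-authdaemon credentials file
     ansible.builtin.include_role:
       name: upload-logs-gcs
     vars:
       zuul_log_credentials_file: /var/lib/zuul/gcp-credentials.json  # placeholder path
       zuul_log_project: example-project  # placeholder GCP project name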