This adds missing oslo modules to the config generator config, so that
we can create a good sample config file.
Change-Id: I35f19d02aa7316d7a814f29a60d5edacc9c26283
Add the new CORS middleware for Zaqar
It is only supported for WSGI.
The websocket transport doesn't need this feature.
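For illustration, the middleware can then be driven by oslo.middleware's standard [cors] options in zaqar.conf (the origin below is only an example):

```ini
[cors]
# Allow a trusted dashboard origin to call the WSGI API.
allowed_origin = https://dashboard.example.com
allow_methods = GET,HEAD,POST,PUT,DELETE,OPTIONS,PATCH
```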
Change-Id: Ifc6d2d1c5dde5152cab6e3aa2f3cf9f207481267
Implements: blueprint support-cors
Guru is a mechanism whereby developers
and system administrators can generate
a report about the state of a running
Zaqar executable.
This report is called a *Guru Meditation Report*.
This mechanism will help developers and operators
fix issues in (production) deployments without
stopping the Zaqar service.
Implements: blueprint introduce-guru-to-zaqar
Change-Id: I72885be396be7eb0a9dd8fd564d706a8351b02c6
A new endpoint /v2/queues/myqueue/purge is added to support purging
a queue. It accepts a POST body like:
{"resource_types": ["messages", "subscriptions"]} to allow users to
purge particular resources of the queue. Test cases are added as
well.
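As a sketch, a client could compose the purge request like this (build_purge_request and the base URL are illustrative helpers, not part of this patch):

```python
import json


def build_purge_request(base_url, queue_name, resource_types=None):
    """Compose the purge URL and POST body for a queue.

    An empty or omitted "resource_types" purges everything
    in the queue.
    """
    url = "{0}/v2/queues/{1}/purge".format(base_url, queue_name)
    body = {}
    if resource_types:
        body["resource_types"] = resource_types
    return url, json.dumps(body)


# Example: purge only the messages of "myqueue".
url, body = build_purge_request("http://localhost:8888", "myqueue",
                                ["messages"])
```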
APIImpact
DocImpact
Partially Implements: blueprint purge-queue
Change-Id: Ie82713fce7cb0db6612693cee81be8c3170d292a
*) Add osprofiler wsgi middleware
This middleware is used for 2 things:
1) It checks that the person who wants to trace is trusted and knows
the secret HMAC key.
2) It starts tracing when the proper trace headers are present
and adds the first wsgi trace point, with info about the HTTP request.
*) Add initialization of osprofiler at start of server
Initialize and set an oslo.messaging based notifier instance
for osprofiler, which will be used to send notifications to Ceilometer.
*) Enable profiling on existing useful storage backends
Change controller creation logic of data and control panel for
mongodb, redis and sqlalchemy storage backends, as well as
an aggregative pooling driver.
*) Add options to allow operators to control profiling separately
NOTE to test this:
1) You have to enable the necessary profiler option(s) based on your needs.
2) You need to enable the following services in localrc for devstack:
CEILOMETER_NOTIFICATION_TOPICS=notifications,profiler
ENABLED_SERVICES+=,ceilometer-acompute,ceilometer-acentral
ENABLED_SERVICES+=,ceilometer-anotification,ceilometer-collector
ENABLED_SERVICES+=,ceilometer-alarm-evaluator,ceilometer-alarm-notifier
ENABLED_SERVICES+=,ceilometer-api
3) You should use python-zaqarclient with this change:
I880c003511e9e4ef99806ba5b19d0ef6996be80b
Run any command with --os-profile <SECRET_KEY>
$ openstack --os-profile <SECRET_KEY> queue list
# it will print <Trace ID>
Get pretty HTML with traces:
$ osprofiler trace show --html <Trace ID>
Note that osprofiler should be run as the admin user and tenant.
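As a reference, the relevant zaqar.conf fragment could look like this (the HMAC key is a placeholder, and the trace_* option names follow this patch's per-backend switches; treat them as illustrative):

```ini
[profiler]
# Enable the profiler middleware and set the shared HMAC key(s).
enabled = True
hmac_keys = SECRET_KEY
# Optionally trace the storage backends as well.
trace_message_store = True
trace_management_store = True
```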
DocImpact
Partially-implements BP: osprofiler
Change-Id: I32565de6c447cd5e95a0ef54a9fbd4e571c2d820
Co-Authored-By: wangxiyuan <wangxiyuan@huawei.com>
Zaqar is missing the policy checks for the queue stats
and share APIs. This patch adds them and updates
the sample policy file.
Closes-Bug: #1640313
Change-Id: I79e84fb02588148c5df88e4115e17e4ecd9369a4
The subscription confirmation feature will contain four patches:
1. webhook with mongoDB
2. email with mongoDB
3. webhook with redis
4. email with redis
This patch is the first part of the subscription confirmation feature,
covering webhook with MongoDB. The others will be delivered in
follow-up patches.
This patch does the following:
1. Add the v2/queues/<queue_name>/subscriptions/<subscription_id>/confirm
endpoint.
2. Add a new config option: "require_confirmation".
3. Add a new property "confirmed" to subscription resource for
MongoDB driver.
4. Add a new policy "subscription: confirm".
5. Add a new property "message type" for notification.
6. Use the pre-signed url in confirm request.
7. Re-use POST subscription to allow re-confirming.
8. Update notification for webhook subscriptions with MongoDB.
9. Support unsubscribing a subscription.
10. Add tests for the feature.
11. Add docs and a sample.
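A client-side sketch of composing the confirm request (the helper and base URL are illustrative; real clients would follow the pre-signed URL delivered in the confirmation notification):

```python
import json


def build_confirm_request(base_url, queue_name, subscription_id,
                          confirmed=True):
    """Compose the URL and body to confirm (or cancel) a subscription.

    Illustrative helper only: it assumes a JSON body of the form
    {"confirmed": <bool>} against the new confirm endpoint.
    """
    url = "{0}/v2/queues/{1}/subscriptions/{2}/confirm".format(
        base_url, queue_name, subscription_id)
    return url, json.dumps({"confirmed": confirmed})
```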
DocImpact
APIImpact
Change-Id: Id38d4a5b4f9303b12e22e2b5c248facda4c00143
Implements: blueprint subscription-confirmation-support
This adds the ability to send keystone-authenticated notifications using
trusts. To do so, you specify the posted URL with the "trust+" prefix,
and Zaqar will create and store a trust when subscribing to a queue, if
a trust is not provided in the subscription options.
It also adds a capability to the webhook task to send more
structured data in the notification, allowing the Zaqar
message to be included in the data.
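A sketch of building a subscription body with the "trust+" prefix (the helper name and the trust_id option key are assumptions for illustration):

```python
import json


def build_trust_subscription(subscriber_url, ttl=3600, trust_id=None):
    """Compose a subscription body that asks Zaqar to notify through
    a Keystone trust.

    Prefixing the subscriber with "trust+" makes Zaqar create and
    store a trust at subscription time, unless one is already given
    in the options. Illustrative helper, not part of this patch.
    """
    options = {}
    if trust_id is not None:
        options["trust_id"] = trust_id
    body = {
        "subscriber": "trust+" + subscriber_url,
        "ttl": ttl,
        "options": options,
    }
    return json.dumps(body)
```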
blueprint mistral-notifications
DocImpact
Change-Id: I12b9c1b34cdd220fcf1bdc2720043d4a8f75dc85
The currently generated zaqar.conf.sample is missing the oslo.cache
library options, but they are used by Zaqar and are important in
production installs.
This patch makes the command 'tox -e genconfig' also generate the
oslo.cache options.
Change-Id: Ia8f78fd5a106888882f882aed8d8355e7e1e459e
Closes-Bug: 1560707
The current configuration is not generating the correct configuration
file. The net result is that even when auth_strategy option is set to
keystone, the 'keystone_authtoken' section is still missing. This patch
fixes the mistake.
Change-Id: I2a37cde436736d39be93cd082a8ab13b58e21133
The zaqar.conf.sample file can be generated by the 'tox -e genconfig' command.
The generated zaqar.conf.sample file is supposed to have a [drivers]
section as per zaqar's README.rst.
However, the [drivers] section is not present in the zaqar.conf.sample file.
This patch fixes it.
Closes-bug: #1501130
Change-Id: Ic1d98680fe1040f68944b529b1c4c4ec2c835cea
This commit adds support for RBAC using oslo.policy. This allows Zaqar
to have fine-grained access control over the resources it exposes.
As of this patch, the implementation provides access control on
a per-operation basis rather than for specific resources.
Co-Authored-by: Thomas Herve <therve@redhat.com>
Co-Authored-by: Flavio Percoco <flaper87@gmail.com>
blueprint: fine-grained-permissions
Change-Id: I90374a11815ac2bd9d31768588719d2d4c4e7f5d
Add a sample configuration for running the wsgi transport using uwsgi,
and make devstack use it, while running zaqar-server with the websocket
transport.
This allows running both websockets and wsgi transports on devstack.
Change-Id: Ifac7461ec6b0501b1b9021030d9c173cf368a59b
Given that we are going to implement notifications, the 'queues'
package is no longer suitable for the current scope of Zaqar. This
patch removes the 'queues' package.
Partially implements: blueprint notifications
Change-Id: I6984f31f4bd1e646b585c45c088ed239b58587c4
Oslo's config generator has been moved under oslo.config, which doesn't
require using a bash script anymore.
The patch removes the old scripts and updates the generation task in
tox.ini
Closes-bug: #1373800
Change-Id: Ia757b0d141f8557144108d386496d1e9bfc7333f
The max_message_size option name is misleading. The option
determines the maximum size of a message post body, so rename
the option to max_messages_post_size.
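A minimal sketch of the renamed option in zaqar.conf (the section and default shown are my assumption of the typical layout):

```ini
[transport]
# Maximum size, in bytes, of a single message POST body
# (formerly max_message_size).
max_messages_post_size = 262144
```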
Change-Id: Ie01cee026e7ebf530cdb2709e2c17d030ad95480
Closes-Bug: #1357397
This patch implements the standard controllers for the redis
storage driver. It has been tested against a localhost Redis
server with ZAQAR_TEST_REDIS=1.
Change-Id: Ib7c100afd11a0410c3f241c1925d5aaf172ce6a8
Partially-Implements: blueprint redis-storage-driver
This commit adds several enhancements to benchmarking tool: server_url
and path to messages now can be configured in config file. Default
output of program has been changed: now it prints values in json so they
can be parsed more easily. Previous human readable representation is
accessible via --verbose flag.
The `total_requests` metric now shows all performed requests (either
failed or successful) and new metric - `successful_requests` - was
introduced to store count of successful requests.
Change-Id: Id6fe4b2046394a348ba07eb5b2b003c6024b78b0
Partially-implements: blueprint gen-bench-reports
auth_token middleware in python-keystoneclient is deprecated and has
been moved to the keystonemiddleware repo.
Change-Id: I174b62d035b84aff1cf0d60efb84f7650445f42c
Closes-Bug: #1342274
This patch renames every package, file, match of Marconi in the codebase
to Zaqar *except* for the .gitreview file, which will have to be updated
*after* I8e587af588d9be0b5ebbab4b0f729b106a2ae537 lands.
Implements blueprint: project-rename
Change-Id: I63cf2c680cead4641f3e430af379452058bce5b3
Expose the 'ssl_keyfile', 'ssl_certfile', 'ssl_cert_reqs' and
'ssl_ca_certs' options for maximum security. By default, SSL
is not enabled unless the ssl parameter is included in the
mongodb URI directly. ssl_cert_reqs defaults to CERT_REQUIRED, which
means the user must provide 'ssl_ca_certs' if SSL is enabled
by adding the ssl parameter to the mongodb URI.
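A hedged example of what the resulting MongoDB driver configuration might look like (section name, host, and file paths are illustrative):

```ini
[drivers:storage:mongodb]
uri = mongodb://mongo.example.com:27017/?ssl=true
ssl_cert_reqs = CERT_REQUIRED
ssl_ca_certs = /etc/ssl/mongo/ca.pem
# Optional client certificate and key:
ssl_certfile = /etc/ssl/mongo/client.pem
ssl_keyfile = /etc/ssl/mongo/client.key
```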
Change-Id: I67cb5a9b2d76625de2932c854d0a696e9118ca6b
Closes-Bug: #1328720
This patch adds oslo's config generator to the source tree and uses it
to generate marconi's sample configs. It also adds a check to pep8 that
verifies the config file is up-to-date.
Change-Id: Iec7defa244dc8649a5c832bb81b9ec6f30f0ee37
Now we're supporting sqlalchemy and sqlite is deprecated,
so the marconi.conf-sample should be updated to reflect the
change and avoid confusion.
Closes-Bug: #1288619
Change-Id: Ief5dad6345dc24e70af18e9e47d0f8dd384cee47
This patch adds two features to our current sqlalchemy driver:
- option to configure connection URI for driver
- Skeleton of ControlDriver written
With the ControlDriver, the expected methods were stubbed out.
Small fix: all controller methods not yet implemented now raise
NotImplementedError.
Change-Id: I1cd4a4d75cbbee7f0ff574c5be4d11660359ab7e
Partially-Implements: blueprint sql-storage-driver
This patch contains several misc. changes to queue, message, and
claim limits to reduce confusion and bring the implementation in
line with the v1 spec.
1. Removed a couple of WSGI driver config options that are
no longer needed now that we have redefined (and simplified) how
we constrain message and metadata size.
metadata_max_length = 65536
content_max_length = 262144
2. Renamed options to be more readable and consistent
3. Moved options to [transport] section
4. Made max messages that can be claimed its own setting, to reduce confusion
5. Removed enforcing an upper limit on the number of messages that can be
posted; this was never in the spec, and appears to be gold-plating. Now, the
only upper limit is max_message_size.
6. Removed the check on the size of a create claim request since (1) it is
not part of the API spec, and (2) sanity-checks like that are best done by
the web server, before a request even touches the app.
7. Migrated limits for storage driver interface params to static values,
since those defaults define the static contract between transport and
storage drivers.
8. Wrapped validation error messages in gettextutils._, and converted them
to use .format instead of %.
Change-Id: I1372e5002f030f5c8c47774ab00ca8ee7e12232d
Closes-Bug: #1270260
This patch removes some configuration files that were leftover from
the days of the proxy. Also, it removes mention of the proxy from
common.transport.version.
Change-Id: I88f7d6490f5b0d0bdbdc827c69a72180ab6c3a12
Changes [proxy:config] and [queues:config] into
just [drivers] since both these projects are
separate and so there's less repetition.
Change-Id: I982b5a08ed45426df17d9008854853c68c207608
Closes-Bug:#1231669
This change is made in preparation for the upcoming sharded storage
features. Shard registration is a feature that only operators should
be able to use, and since the sharding is done within the queues
application, it was necessary to break this out into a separate API.
This patch adds a new configuration variable: admin_mode. It is used
to multiplex which version of the API is loaded. Furthermore, the
admin API is an enhanced version of the public API in that it allows
every route that the public API allows, in addition to admin-only
endpoints. This should ease unit testing in future patches.
A few small refactorings were made, including:
- health resource moved to common transport location
- version module moved to common/transport
- pass config from bootstrap to transport driver
- pass cache in base transport driver
- convert base transport driver to use six.add_metaclass
- pass public bootstrap to bootstrap unit test
Change-Id: I0d6ff381afb25adb8a4b784a60b6d6eb71431245
Partially-implements: blueprint storage-sharding
This patch provides the plumbing for implementing storage
sharding across multiple backends. Sharding is agnostic to
storage driver type and transport type. The new feature is
optional, and disabled by default.
The design eschews placing any kind of sharding reverse proxy
in the network, allowing the storage drivers to continue
communicating directly with their respective backends.
Sharding can be enabled by setting the global "sharding"
option to True. Future patches will add a sharding section to
the config that can be used to tweak the way sharding works when
it is enabled.
Storage drivers are managed by a Catalog class. The Catalog is
responsible for registering and deregistering queues in the
catalog backend, and for looking up an appropriate driver,
according to which shard a particular queue has been assigned.
In the future, this design will make it straightforward to map
individual queues to different storage backends, according to user
preference.
FWIW, I considered enabling sharding by inserting the routing driver
as the last stage in the storage pipeline. However, it felt like
a hack for the following reasons:
* Doing so orphaned the regular, solitary driver that was
still always loaded at the end of the pipeline.
* Since the bootstrap was not aware of the sharding driver,
it could not be used to provide setup, so the catalog
object had to be turned into a singleton and options
had to always be loaded from the global config.
* The driver would have to be added to each controller
pipeline, and would have to always be the last stage in
the pipeline. Introducing a simple "sharded" boolean option
seemed to be a more straightforward, less error-prone way
for operators to enable sharding.
Partially-Implements: blueprint storage-sharding
Change-Id: I5190211e81fe4acd311b2cfdd0bae806cc3fec81
This patch moves pipeline setup into the bootstrap and out of
the storage driver base class, so that the base class can be
inherited by meta-drivers, such as the planned sharding manager,
without introducing a loop in the bootstrapping logic.
Now, a meta-driver is exposed to the transport object that
takes care of wiring up the pipeline for each resource
controller behind the scenes.
As part of this work, the pipeline config was modified to
support configuring different stages depending on the
resource. We create three instances of Pipeline anyway,
so it seemed to make sense to allow the operator to
configure the pipelines independently.
Partially-Implements: blueprint storage-pipeline
Change-Id: Ibdb7d0e9537b1eec38a13f4881df7462039bbf98
This patchset separates the configuration of the proxy from that of
the queues server. This was done in order to simplify the
configuration file for each, and because it is not expected that the
proxy and the queues servers would be launched on the same
host. Furthermore, many of the proxy options are not relevant to the
queues server.
To allow this, common.config had to be modified to take a
prog parameter. This enabled the ability to save multiple
configuration files to one directory. See below for details.
The new files are:
- etc/marconi-proxy.conf
- etc/marconi-queues.conf
They are expected to be saved to one of:
- ~/.marconi
- /etc/marconi
Regarding namespaces, queues-specific options are associated with the
'queues:*' group and proxy-specific options are associated with the
'proxy:*' group.
The appropriate changes are also applied to the test suite and
helpers.
Change-Id: I7cf25e47ecff47934b50c21000b31308e1a4c8a9
Implements: blueprint placement-service
This patch adds smarter configuration to the proxy in two steps:
1. mirror the transport implementation used in marconi.queues in
marconi.proxy
2. add a bootstrap file to take care of start up
Rationale: make configuration work, make deploying easy, make
alternate transport implementations feasible.
Another change: the unit tests are fixed with a few changes:
1. add drop functionality to the proxy storage interface
2. use drop/flush in test suite tearDown
3. rm tests.unit.test_config
4. delete queues at the end of the catalogue test (not yet robust)
The rationale for (3) was that test_config did not play nice with
other tests when they were registering their options, and failed as a
result. Furthermore, we should not need to test oslo.config.
Configuration changes: new fields in etc/marconi.conf
- drivers:proxy
- drivers:proxy:storage:{memory.mongodb}
- drivers:proxy:transport:wsgi
- oslo_cache
Also, fix: InternalServerError -> HTTPInternalServerError
Finally, redis was removed from requirements.txt.
Change-Id: If2365a1a738a3975fe6bde7bd07dfdee3460cecd
Implements: blueprint placement-service
This patch causes data to be partitioned across multiple databases in
order to reduce writer lock contention. The "queues" collection is
isolated in its own database, while the messages collection is partitioned
across several other databases. The number of partitions is configurable.
For example, if the number of partitions is set to 4, these databases
will be created in MongoDB:
marconi_queues
marconi_messages_p0
marconi_messages_p1
marconi_messages_p2
marconi_messages_p3
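The mapping from a queue to its partitioned message database can be sketched like this (the hashing scheme is illustrative; the real driver may distribute queues differently):

```python
import zlib


def message_database_for(queue_name, partitions=4,
                         prefix="marconi_messages_p"):
    """Map a queue to one of the partitioned message databases.

    Uses a deterministic CRC32 hash so the same queue always lands
    in the same partition. Illustrative sketch only.
    """
    index = zlib.crc32(queue_name.encode("utf-8")) % partitions
    return "{0}{1}".format(prefix, index)
```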
Implements: blueprint mongodb-multidb
Change-Id: I399f4a39e5377a381aef489b046bc14155ccb75b
This patch changes markers so that they are generated using a per-queue
side counter. A heuristic is used to mitigate a race condition. Under the
new semantics, collisions can no longer cause partial inserts, which
ended up simplifying the retry logic for posting messages.
As a consequence of this patch, the last message posted no longer needs
to remain in the queue indefinitely, rendering marconi-gc unnecessary,
and so it has been removed.
Also, since the mongod GC worker runs once a minute, the queries no longer
filter out expired-but-not-yet-gc'd messages; on average, a message may
live more than 30 seconds past its expected lifetime, but I do not
think that this will harm or complicate any application building on top of
Marconi, practically speaking. That being said, it is worth calling out
in documentation.
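The per-queue side counter can be sketched with an in-memory stand-in (my assumption of the scheme: the real driver would increment a counter document atomically in MongoDB, e.g. find_and_modify with $inc, so each posted batch gets a unique, monotonically increasing marker per queue):

```python
import itertools


class MarkerCounters(object):
    """In-memory stand-in for the per-queue marker side counter.

    Each queue (scoped by project) gets its own monotonically
    increasing counter, so message markers never collide within
    a queue. Illustrative sketch only.
    """

    def __init__(self):
        self._counters = {}

    def next_marker(self, queue, project=None):
        key = (project, queue)
        counter = self._counters.setdefault(key, itertools.count(1))
        return next(counter)
```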
Closes-Bug: #1218602
Change-Id: I34e24e7dd7c4e017c84eb5929ce37ad4c9e5266a
This patch brings together oslo.cache, oslo.config, and stevedore to
provide pluggable, hierarchical catalogue caching for marconi proxy.
Here's the list of changes:
- add configuration/driver loading at the app level
- remove select from proxy storage driver - unnecessary intelligence
at storage layer
- node.weighted_select -> partition.weighted_select (clearer name)
- forwarding logic further refactored, placed in own module
- caching logic placed in lookup module
- selector passed down at app level to handle round-robin state
globally
* open to becoming configurable
- adds several TODOs for a better proxy
Change-Id: I3bc568315e685486d63cdce3ec278c89e3f2b2bc
Implements: blueprint placement-service
This change adds the following options to the config file:
[limits:storage]
default_queue_paging = 10
default_message_paging = 10
So that the default value of the "limit" URI param is now configurable.
This patch also removes the "actions" cruft.
Implements: blueprint configurable-default-paging
Change-Id: Id38295f1e607226a4259be7744e6ce2d7b6de12e
"message_paging_uplimit" also limits the maximum number of
IDs that can be supplied in a URI (in bulk deletion), which is not quite
a "page", but we don't need the configuration to be too precise.
Change-Id: I0737146f1212c82db18de35e35206d3932a46628
This patch adds the configuration variables for transport
driver-specific limits and input validation to the sample config file,
so that users don't need to read the source code to figure
out how to change the limits :)
Change-Id: I811b7dc4ca44d25a3cdb5402e11d599aa532ab39