Under monasca_common/policy an enforcement engine is added for using
oslo.policy in the other monasca projects.
It provides the same methods as the enforcement engines currently
used in the nova and keystone projects.
Unit tests for the implemented methods are also added under
tests/policy.
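A minimal, runnable sketch of the authorize()/enforce() pattern such an
engine exposes; the real implementation delegates to oslo.policy's
Enforcer, and the rule name and RULES table below are hypothetical
stand-ins.

```python
# Illustrative stand-in for an oslo.policy-style enforcement engine.
# The action name and RULES mapping are hypothetical examples.
RULES = {
    "api:metrics:get": lambda creds: "admin" in creds.get("roles", []),
}

def authorize(action, target, creds, do_raise=True):
    # Mirrors the authorize() methods used in nova/keystone: look up
    # the rule for the action and evaluate it against the credentials.
    check = RULES.get(action)
    allowed = bool(check and check(creds))
    if not allowed and do_raise:
        raise PermissionError("Policy does not allow %s" % action)
    return allowed
```

Callers that prefer a boolean result pass do_raise=False instead of
catching the exception.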
Task: 6105
Story: 2001233
Change-Id: Ic5402ba0986416c9386c1dc3fc1559f148ea9625
Signed-off-by: Amir Mofakhar <amofakhar@op5.com>
On failure to publish, clear the topic metadata then retry, in
case the IP Addresses have changed. This can occur when
Monasca is run in Kubernetes and the Kafka pod is restarted.
Restarting the Kafka pod can happen often enough that the
API should be able to handle it without losing a message.
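The retry flow can be sketched as below; FlakyClient is a hypothetical
stand-in for the real Kafka client, and the metadata-reset method name
may differ from the actual client API.

```python
# Sketch of the retry-on-publish-failure flow: on failure, drop the
# cached topic metadata so the next attempt re-resolves the broker
# IPs (which change when the Kafka pod restarts), then retry.
class FlakyClient(object):
    """Hypothetical client whose first send fails with stale IPs."""

    def __init__(self):
        self.resets = 0
        self.fail_next = True

    def send(self, topic, message):
        if self.fail_next:              # first attempt: broker moved
            self.fail_next = False
            raise ConnectionError("broker unreachable")
        return (topic, message)

    def reset_topic_metadata(self, topic):
        self.resets += 1                # forces broker re-resolution


def publish_with_retry(client, topic, message, retries=3):
    for _ in range(retries):
        try:
            return client.send(topic, message)
        except ConnectionError:
            client.reset_topic_metadata(topic)
    raise ConnectionError("publish failed after %d attempts" % retries)
```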
Change-Id: If48971c40883b5be10adec947562cdda7e82d77c
Story: 2001386
Task: 5963
PROBLEM: Consumer offset was resetting to the latest index rather than the earliest
SOLUTION: Modified consumer creation to include `auto_offset_reset="smallest"` which
allows the offset to reset to the earliest known index.
NOTE: This does exactly what the whence parameter in SimpleConsumer.seek()
is expected to do; however, to achieve this behaviour the
`auto_offset_reset` parameter MUST be set to either "largest"
or "smallest".
Change-Id: I887892d80f2da9619c7f11737b3ab2e1d1dacf1e
The new alarm rules will each have an expression in their
definition which will need to be parsed by both the Monasca-
API and the Monasca-Notification-Engine. Documentation for
this will be included in the API along with descriptions of the
new rules.
Story: 2000939
Task: 4692
Change-Id: I1a98fafae8dfdfa6fdb2eb66f4a4a4f40e518e46
This replaces the unittest.TestCase method assertRaisesRegexp(),
deprecated since Python 3.2, with assertRaisesRegex().
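A minimal example of the non-deprecated spelling:

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_raises(self):
        # assertRaisesRegex is the non-deprecated name; the removed
        # assertRaisesRegexp() spelling behaved identically.
        with self.assertRaisesRegex(ValueError, "invalid literal"):
            int("not a number")
```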
Change-Id: I0bed1f2a0bb8ef57a48e3b778795e8ac75f3a2eb
This commit does several things:
* migrates the CI of monasca-common to ostestr
* enables PY35 compatibility
Also:
* marked one test as excluded under PY35, because changing
it would require touching the embedded kafka library, which
will eventually be removed
Change-Id: I432a466e2620bc8d305ef2630307b636461c8e81
monasca_common.logging seems to be unused
in other monasca projects and therefore should
be removed.
Also removed one dependency that was used only
by that module.
Change-Id: Ib875d9bae86c9b2b715edbe0226347b3fc9ec8ed
The long type is also a valid format for a
timestamp.
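The type check implied here can be sketched as follows; the function
name is hypothetical, and numbers.Integral is used so the snippet is
self-contained (it matches int on Python 3 and both int and long on
Python 2, much as six.integer_types does).

```python
import numbers

def is_valid_timestamp(value):
    # Accept any integral type (int, and long on Python 2) but
    # reject bool, floats and strings.
    return (isinstance(value, numbers.Integral)
            and not isinstance(value, bool))
```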
Needed-By: I2f9d22a2c5e18826c8f9bb1e817ad963731b390f
Change-Id: I186abe4cdafd58d998f8aaf36d866795771a9e0a
To let other OpenStack projects move forward with new versions of kafka-python
we're forking kafka-python and embedding it in monasca-common. This allows us
to migrate to the new async interfaces provided by more recent kafka clients
over time and not block other projects.
pykafka is required to give us ~4x more throughput once we write to
its async interfaces.
Change-Id: Ifb6ab67ce1335a5ec4ed7dd8b0027dc9d46a6dda
Depends-On: I26f9c588f2818059ab6ba24f9fad8e213798a39c
In some parts of the code we import objects. The OpenStack style
guidelines recommend importing only modules.
http://docs.openstack.org/developer/hacking/#imports
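A small illustration of the guideline, using a stdlib module:

```python
# Preferred under the OpenStack hacking rules: import the module and
# reference its members through it.
import os.path

joined = os.path.join("a", "b")

# Discouraged: importing the object directly, e.g.
#   from os.path import join
#   joined = join("a", "b")
```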
Change-Id: Icc97b8d76901b8807bf04737bc1f72b5393e2879
The API spec says value_meta is optional, so
allow None as a value. This change also
matches the Java API.
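The validation change can be sketched like this; the validator itself
is hypothetical, with only the value_meta field name taken from the
metric payload described above.

```python
# Sketch: value_meta is optional per the API spec, so None must pass
# validation; otherwise it must be a dict of string keys.
def validate_value_meta(value_meta):
    if value_meta is None:          # absent/None is now allowed
        return True
    return (isinstance(value_meta, dict)
            and all(isinstance(k, str) for k in value_meta))
```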
Change-Id: Ibabff76b3f1592334c281f57a1a5b939bb11e1f8
The function xrange() was renamed to range() in Python 3.
Use "from six.moves import range" to get xrange() on Python 2 and range()
on Python 3 as the name "range", and replace "xrange()" with "range()".
The import is omitted for small ranges (1024 items or less).
This patch was generated by the following tool (revision 0c1d096b3903)
with the "xrange" operation:
https://bitbucket.org/haypo/misc/src/tip/python/sixer.py
Manual change:
* Replace range(n) with list(range(n)) in a loop of
nova/virt/libvirt/driver.py which uses list.pop()
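The manual change above in miniature: on Python 3, range() returns a
lazy object with no pop(), so the sequence must be materialized when
the loop mutates it.

```python
# was: items = range(n), which returned a list only on Python 2
n = 3
items = list(range(n))   # materialize so items.pop() works on Python 3
while items:
    items.pop()
```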
Blueprint nova-python3
Change-Id: Ifc264fe262982b62d9791cedef6040eecc8af04e
Cleaned up test-requirements.txt in order to use the latest hacking
package.
Removed the ignored pep8 checks and made the code pass all of them.
Also removed the OpenStack Foundation copyright notice that was put
there accidentally before.
Change-Id: I3d287eb71fc2bf0e4d52856c11cbc8a347cac2ed
Add tests for the kafka producer and consumer modules. The test
coverage of monasca_common was 9%. This is improved in this commit.
Show test coverage when running tox. Change the nosetests command to
show test coverage for the whole module.
Change-Id: I771a539aee5fa92c065ee16b5bb94c9ae7e7a09b
"json.dumps" function returns encoded value when specifying
"ensure_ascii=False", so it's not necessary to encode the returned
value.
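A short demonstration of the behaviour being relied on:

```python
import json

# With ensure_ascii=False, json.dumps emits the non-ASCII characters
# directly instead of \uXXXX escapes, so no further .encode() call
# is needed before handing the string on.
payload = json.dumps({"name": "café"}, ensure_ascii=False)
```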
Change-Id: Ic4834a27d36993cd9f4d2b6945cf108e7149d95b
Closes-Bug: 1569112
Our partition rebalance mechanism broke on the upgrade from kafka-python
0.9.2 to 0.9.5. Rather than fiddling with the internals of the kafka
consumer object, we're now reconstructing the consumer object after each
rebalance and handing it the specific partitions it needs to worry about.
Closes-bug: #1560178
Change-Id: I469ceb28538db1f36918f211eaea4fcfdaa17649
This commit provides a healthcheck package for
monasca-common, where so-called checks can be defined
and used throughout monasca-* projects.
Summary:
- KafkaHealthCheck
- HealthCheckResult
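A minimal sketch of the package's shape: a result object plus a check
class. Only the two names come from the summary above; the constructor
argument and the stubbed-out broker probe are hypothetical.

```python
import collections

# Simple result carrier, as listed in the summary.
HealthCheckResult = collections.namedtuple("HealthCheckResult",
                                           ["healthy", "message"])

class KafkaHealthCheck(object):
    """Reports whether Kafka is reachable via an injected probe."""

    def __init__(self, probe):
        self._probe = probe          # callable that pings the broker

    def health_check(self):
        try:
            self._probe()
            return HealthCheckResult(healthy=True, message="OK")
        except Exception as exc:
            return HealthCheckResult(healthy=False, message=str(exc))
```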
Change-Id: Ib404ae128c8a3c93b24e4e237a3d77130fb18b53
The kafka client library receives batches of a certain size from kafka
regardless of the batch size I ask for. Asking for a batch size that's
larger than the response size results in multiple requests to kafka
until the batch size I asked for is received.
Asking for a single message still causes the kafka client to receive a
batch of data from kafka, but doing it this way, which is the same way
that the kafka library __iter__ function works, results in a ~5x
improvement in throughput over asking for larger batch sizes.
Increasing the requested batch size results in a dramatic increase in
performance. For the Java consumer the default read size is 1MB compared to
the 4KB default read size in the Python consumer. Increasing the Python
consumer to match the Java version results in a ~10x improvement.
Change-Id: I3380df56749a577ae7116e5da841dcb91c85312a
If we don't specify the current set of partitions to the commit call the Kafka
consumer object seems to issue a commit for all the partitions it has
information on and not just the ones it is actively reading. This fix will
allow it to only commit to the partitions that it is consuming from.
Change-Id: Ifd5aa9c8fe4d83f804629f1a301a40556721d018
Not sure how they got there, but the characters in columns 18 and 19
on line 43 are not ASCII. Remove them.
Change-Id: Ibbf27c89b3ba6f110b47d042ea24cc1443ad055d
MySQL-python has a GPLv2 license, which is not compatible with the
Apache license. We propose to replace it with PyMySQL, which has an
MIT license.
Change-Id: I8d758f5e4908c1047dc4167ebd28cad24fff3a28
New consumer object intended to be used by the pieces of Monasca that want to
consume data from kafka.
New producer object that will write to kafka in a performant manner.
Added the kazoo requirement.
Removed the PyYAML requirement.
Change-Id: I2eb0c5cd1ed64b83a67912109c4c6de7a1d73722
This is the start of a python monasca-common package.
Initially it has a common python logging config,
common oslo opts, and mysql common code.
Change-Id: I15c32b72fc42a8c5ce9eeedf20ca3a11907bf29f