Compare commits

...

26 Commits

Author SHA1 Message Date
rbalasun
cf0219961b Added counter for volume creation
Change-Id: I56b3a5866968891785d4dc85d6276a998c81ff2a
2020-08-10 17:01:42 -07:00
ahothan
ee744e7ca7 Add unit testing to kb_config
Change-Id: I64b4a6baaa481eeccb34b228c520eb81a1ae0d46
2020-07-28 18:42:55 -07:00
ahothan
1316bd443d Migrate code to python3/Ubuntu 20.04
Change-Id: I18a21e04d009afdee3afc2723afdbade24bfdf71
2020-07-24 23:22:25 -07:00
Yichen Wang
de38fad996 Fix compatibility issue for adding floating ip to VM
Change-Id: I4dd7603f044499185c5d870f4dc7771c8fdf6769
2019-05-09 15:41:56 -07:00
ahothan
1d7e405274 Remove publish-to-pypi as it stays in project-config
Change-Id: I9609815058a8751f225599008e5590f59a405ca2
2019-04-30 23:48:06 -07:00
ahothan
0ec39ec1d0 Fix git URL to use opendev repo
Change-Id: I462ff90be8368e24f5f633e8306607402e8bbb60
2019-04-30 15:42:50 -07:00
ahothan
0c81e7c385 Fix rtd, add zuul.yaml, update REST doc
Change-Id: I0c5f2964e813c830c20da55630b081913a861f8d
2019-04-30 11:44:39 -07:00
Yichen Wang
1da3c08594 Enhance networking support for storage testing
1. Support to run storage testing off provider network;
2. Support to run storage testing with IPv6 subnets;
3. Remove the NOVA client version restriction;
4. Update to use ubuntu 18.04 as KloudBuster base image;
5. Use config_drive to pass configs;

Change-Id: Ie0753b0c6616edb13c5426c26a9e04983d330d0d
2019-04-29 21:14:04 -07:00
OpenDev Sysadmins
3466504541 OpenDev Migration Patch
This commit was bulk generated and pushed by the OpenDev sysadmins
as a part of the Git hosting and code review systems migration
detailed in these mailing list posts:

http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html

Attempts have been made to correct repository namespaces and
hostnames based on simple pattern matching, but it's possible some
were updated incorrectly or missed entirely. Please reach out to us
via the contact information listed at https://opendev.org/ with any
questions you may have.
2019-04-19 19:51:41 +00:00
melissaml
e31d4b5513 Change openstack-dev to openstack-discuss
Mailing lists have been updated. openstack-discuss replaces openstack-dev.

Change-Id: Iadd9bb1e54ad7b54c8a68b482f8c5f4e0ada57bd
2018-12-07 21:44:48 +08:00
zhouxinyong
ed43284048 Optimize the safety of the http links in README.rst.
Change-Id: I7f969ed539020030b62135df1572be9443aaab88
2018-11-14 03:55:38 +08:00
melissaml
2ce398100a Update the outdated URL
Change-Id: I7f75590d662c8831d4c3e175d1db6c78e812201e
2018-09-23 16:35:47 +08:00
REDDY, CHINASUBBA
367e44bddd make proxy flavor configurable like other flavors
https://bugs.launchpad.net/kloudbuster/+bug/1766372

Change-Id: Ib38bbfaf6c6f58b9cd44961967859e76ee441cdf
2018-06-19 14:45:15 -05:00
ahothan
1f0b0963b9 Fix dib issue with missing block-device element
This is new with dib 2.15

Change-Id: I31a090e1212bd96b054814f4e186972a4cd2b937
Signed-off-by: ahothan <ahothan@cisco.com>
2018-06-13 17:54:49 -07:00
ahothan
cae5f726df Add support for configurable redis server ready timeout
https://bugs.launchpad.net/kloudbuster/+bug/1766373

Change-Id: Icc367ec52e7d61e5729220eee3a8753b228f4a09
Signed-off-by: ahothan <ahothan@cisco.com>
2018-06-13 13:09:28 -07:00
ahothan
8c52aa0e06 Fix web UI crash https://bugs.launchpad.net/kloudbuster/+bug/1775689
Change-Id: I74deaa7923e10da5694403753e6743cce1dc3501
Signed-off-by: ahothan <ahothan@cisco.com>
2018-06-13 08:08:32 -07:00
ahothan
fc42153774 Fix failure to add static route in http client VM
Add hostname in etc/hosts to prevent sudo to fail
Add additional syslog in agent code
Add devuser login to troubleshoot VM

Change-Id: I59a28fe3eb0c354721989a3f3e1102e67949d545
Signed-off-by: ahothan <ahothan@cisco.com>
2018-06-13 06:45:44 -07:00
Ajay Kalambur
ed33269ba4 Support for cinder volume_type
Used for testing cinder multibackend support and QOS specs

Change-Id: I5df4c2e47ef2253c296898f4e49b29e6d3927642
2018-06-12 13:07:48 -07:00
ahothan
e9da263ae9 Fix keystone v3 issue with Queens
https://bugs.launchpad.net/kloudbuster/+bug/1774616

Change-Id: I6f229285cd24bc410d7fb921bc82a0b9f7fc5b38
Signed-off-by: ahothan <ahothan@cisco.com>
2018-06-04 10:50:45 +02:00
ahothan
4a0f595d02 Fix build issue caused by pip error (10.0.1)
Change-Id: Ie36d8d5068c4a6dd3d2efc1c4adcb6ab723c4f06
2018-05-23 00:25:55 -07:00
XiaojueGuan
295edde7ce Trivial: update url to new url
Change-Id: I29401f4b3429796d1b740c61ed95444c7cd1029a
2018-05-13 22:19:08 +08:00
mortenhillbom
41bbf14620 Add fio cpu data to result json file
Change-Id: I2abeb7007c138d384c9c9393aef06dee820ccb59
2018-03-16 15:18:41 -07:00
Kerim Gokarslan
51962aa0f1 Add TSDB plugin including Prometheus
Change-Id: Id832932b6b84f6d296bfd01d8fe91ad0edb0f7d3
2018-03-15 18:42:21 -07:00
ahothan
753eb84b2b Add --interactive
Change-Id: I335d969b05a4144ee8dab0d5e272981fb5877cb2
2018-03-05 15:38:43 -08:00
Nguyen Hung Phuong
668dd098a4 Replaces yaml.load() with yaml.safe_load()
Yaml.load() return Python object may be dangerous if you receive a YAML
document from an untrusted source such as the Internet. The function
yaml.safe_load() limits this ability to simple Python objects like integers or
lists.

Reference:
https://security.openstack.org/guidelines/dg_avoid-dangerous-input-parsing-libraries.html

Change-Id: I8cff003dad2d0b4ca19b12d45cb5538f683192cd
2018-02-13 09:28:01 +00:00
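The commit message above explains why yaml.safe_load() is preferred; a minimal illustration of the difference, using a hypothetical YAML document (requires PyYAML):

```python
import yaml

# A hypothetical configuration document, e.g. received from a user
doc = "vm_count: 5\nflavors:\n  - m1.small\n  - m1.large\n"

# safe_load only constructs plain Python objects (dicts, lists, scalars),
# so an untrusted document cannot trigger arbitrary object construction
cfg = yaml.safe_load(doc)
assert cfg["vm_count"] == 5
assert cfg["flavors"] == ["m1.small", "m1.large"]

# By contrast, yaml.load() without an explicit safe Loader can instantiate
# arbitrary Python objects via !!python/object tags, which is the risk the
# commit removes.
```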
ahothan
f984115a53 Do not use python-novaclient 10.0.0 or higher (floating ip api removed)
Change-Id: I2ff61f9c4421fdc1192a755a2f8e24307c25e8bf
2018-01-29 17:04:11 -08:00
65 changed files with 1442 additions and 596 deletions

2
.gitignore vendored

@ -68,3 +68,5 @@ scale/dib/kloudbuster.d/
# kb_web
!kb_server/public/ui/components/*/*.css
!kb_server/public/ui/components/*/*.js
.pytest_cache/

@ -1,4 +1,4 @@
[gerrit]
host=review.openstack.org
host=review.opendev.org
port=29418
project=openstack/kloudbuster.git
project=x/kloudbuster.git

14
.zuul.yaml Normal file

@ -0,0 +1,14 @@
- project:
templates:
- docs-on-readthedocs
vars:
rtd_webhook_id: '83817'
check:
jobs:
- tox-pep8
- tox-py27
gate:
jobs:
- tox-pep8
- tox-py27

@ -1,29 +1,33 @@
# docker file for creating a container that has kloudbuster installed and ready to use
# this will build from upstream master latest
FROM ubuntu:16.04
MAINTAINER kloudbuster-core <kloudbuster-core@lists.launchpad.net>
FROM ubuntu:20.04
# Simpler would be to clone directly from upstream (latest)
# but the content might differ from the current repo
# So we'd rather copy the current kloudbuster directory
# along with the pre-built qcow2 image
COPY ./ /kloudbuster/
# The name of the kloudbuster wheel package
# must be placed under ./dist directory before calling docker build
# example: ./dist/kloudbuster-8.0.0-py3-none-any.whl
ARG WHEEL_PKG
# The name of the kloudbuster VM qcow2 image
# must be placed in the current directory
# example: ./kloudbuster-8.0.0.qcow2
ARG VM_IMAGE
# copy the wheel package so it can be installed inside the container
COPY ./dist/$WHEEL_PKG /
# copy the VM image under /
COPY $VM_IMAGE /
# Install KloudBuster script and dependencies
# Note the dot_git directory must be renamed to .git
# in order for pip install -e . to work properly
RUN apt-get update && apt-get install -y \
git \
libyaml-dev \
python \
python-dev \
python-pip \
&& pip install -U -q pip \
&& pip install -U -q setuptools \
&& cd /kloudbuster \
&& pip install -q -e . \
&& rm -rf .git \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get autoremove -y && apt-get clean && rm -rf /var/lib/apt/lists/*
RUN apt-get update \
&& apt-get install -y python3 python3-pip python-is-python3 \
&& pip3 install /$WHEEL_PKG \
&& rm -f /$WHEEL_PKG

@ -1,4 +1,4 @@
kloudbuster Style Commandments
===============================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/
Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/

@ -1,5 +1,5 @@
=====================
KloudBuster version 7
KloudBuster version 8
=====================
How good is your OpenStack **data plane** or **storage service** under real

@ -31,6 +31,12 @@ command line.
.. note:: Admin access is required to use this feature.
Interactive mode
----------------
When using the CLI, the "--interactive" option allows re-running the workloads any number of times
from the prompt after the resources are staged.
This is useful, for example, to avoid restaging after each run.
Running KloudBuster without admin access
----------------------------------------

@ -211,6 +211,13 @@ This section defines the storage specific configs in the staging phase::
# The size of the test file for running IO tests in GB. Must be less than
# or equal to disk_size.
io_file_size: 1
# Optional volume_type for cinder volumes
# Do not specify unless using QOS specs or testing a specific volume type
# Used to test multibackend support and QOS specs
# Must be a valid cinder volume type as listed by openstack volume type list
# Make sure volume type is public
# If an invalid volume type is specified, the tool will error out on volume create
# volume_type: cephtype
* **client:storage_tool_configs**

@ -110,13 +110,13 @@ KloudBuster follows the same workflow as any other OpenStack project.
If you would like to contribute to the development of OpenStack, you must
follow the steps in this page:
`<http://docs.openstack.org/infra/manual/developers.html>`_
`<https://docs.openstack.org/infra/manual/developers.html>`_
If you already have a good understanding of how the system works and your
OpenStack accounts are set up, you can skip to the development workflow
section of this documentation to learn how changes to OpenStack should be
submitted for review via the Gerrit tool:
`<http://docs.openstack.org/infra/manual/developers.html#development-workflow>`_
`<https://docs.openstack.org/infra/manual/developers.html#development-workflow>`_
Pull requests submitted through GitHub will be ignored.

@ -43,7 +43,7 @@ Build the image with below commands:
.. code-block:: bash
# Clone the kloudbuster repository if you have not done so
git clone https://github.com/openstack/kloudbuster.git
git clone https://opendev.org/x/kloudbuster.git
cd kloudbuster
# Install kloudbuster
pip install -e .

@ -41,7 +41,7 @@ This also means that this cloud had 20,000 active TCP connections at all times
during the scale test.
.. image:: images/kb-http-thumbnail.png
:target: https://htmlpreview.github.io/?https://github.com/openstack/kloudbuster/blob/master/doc/source/gallery/http.html
:target: https://htmlpreview.github.io/?https://opendev.org/x/kloudbuster/src/branch/master/doc/source/gallery/http.html
Sample HTTP Monitoring Report
@ -89,7 +89,7 @@ for a total of 60,000 random read operations. The latency line tells us that
99.9% of these 60,000 read operations are completed within 1.576 msec.
.. image:: images/kb-storage-thumbnail.png
:target: https://htmlpreview.github.io/?https://github.com/openstack/kloudbuster/blob/master/doc/source/gallery/storage.html
:target: https://htmlpreview.github.io/?https://opendev.org/x/kloudbuster/src/branch/master/doc/source/gallery/storage.html
The sequential write results are more challenging as they show that the VMs
cannot achieve their requested write bandwidth (60MB/s) and can only get 49MB/s

@ -20,7 +20,7 @@ Quick installation on Ubuntu/Debian
$ # create a virtual environment
$ virtualenv ./vkb
$ source ./vkb/bin/activate
$ git clone https://github.com/openstack/kloudbuster.git
$ git clone https://opendev.org/x/kloudbuster.git
$ cd kloudbuster
$ pip install -e .
$ pip install -r requirements-dev.txt
@ -34,7 +34,7 @@ Quick installation on RHEL/Fedora/CentOS
$ # create a virtual environment
$ virtualenv ./vkb
$ source ./vkb/bin/activate
$ git clone https://github.com/openstack/kloudbuster.git
$ git clone https://opendev.org/x/kloudbuster.git
$ cd kloudbuster
$ pip install -e .
$ pip install -r requirements-dev.txt
@ -57,7 +57,7 @@ First, download XCode from App Store, then execute below commands:
$ # create a virtual environment
$ virtualenv ./vkb
$ source ./vkb/bin/activate
$ git clone https://github.com/openstack/kloudbuster.git
$ git clone https://opendev.org/x/kloudbuster.git
$ cd kloudbuster
$ pip install -e .
$ pip install -r requirements-dev.txt

@ -3,7 +3,7 @@ KloudBuster Pip Install Quick Start Guide
=========================================
KloudBuster is available in the Python Package Index (PyPI)
`KloudBuster PyPI <https://pypi.python.org/pypi/KloudBuster>`_
`KloudBuster PyPI <https://pypi.org/project/KloudBuster>`_
and can be installed on any system that has python 2.7.
1. Install pip and the python virtualenv (if not installed already)

@ -1,5 +1,5 @@
=====================
KloudBuster version 7
KloudBuster version 8
=====================
How good is your OpenStack **data plane** or **storage service** under real
@ -69,6 +69,8 @@ Feature List
* User configurable workload sequence
* Support for creating cinder volumes with custom volume types and associated QOS specs
* Supports automated scale progressions (VM count series in any multiple
increment) to dramatically reduce scale testing time
@ -87,8 +89,6 @@ Feature List
* Aggregated results provide an easy to understand way to assess the scale of
the cloud under test
* KloudBuster VM image pre-built and available from the OpenStack Community App
Catalog (https://apps.openstack.org/)
**Diagrams** describing how the scale test resources are staged and how the
traffic flows are available in :ref:`arch`.
@ -98,6 +98,15 @@ graphical charts generated straight off the tool.
**Examples of results** are available in :ref:`gallery`.
New in Release 8
----------------
* Kloudbuster is now fully python 3 compatible; python 2.7 is no longer supported.
* Validated against the OpenStack Train release
New in Release 7
----------------
@ -137,7 +146,7 @@ If you are interested in OpenStack Performance and Scale, contributions and
feedback are welcome!
If you have any feedback or would like to contribute,
send an email to openstack-dev@lists.openstack.org with a '[kloudbuster]'
send an email to openstack-discuss@lists.openstack.org with a '[kloudbuster]'
tag in the subject.
@ -172,9 +181,9 @@ maintainer of KloudBuster.
Links
-----
* Complete documentation: `<http://kloudbuster.readthedocs.org>`_
* `KloudBuster REST API documentation Preview <https://htmlpreview.github.io/?https://github.com/openstack/kloudbuster/blob/master/doc/source/_static/kloudbuster-swagger.html>`_
* Source: `<https://github.com/openstack/kloudbuster>`_
* Complete documentation: `<https://kloudbuster.readthedocs.io/>`_
* `KloudBuster REST API documentation Preview <https://htmlpreview.github.io/?https://opendev.org/x/kloudbuster/src/branch/master/doc/source/_static/kloudbuster-swagger.html>`_
* Source: `<https://opendev.org/x/kloudbuster>`_
* Supports/Bugs: `<http://launchpad.net/kloudbuster>`_
* Mailing List: kloudbuster-core@lists.launchpad.net

@ -82,11 +82,98 @@ Once the server is started, you can use different HTTP methods
(GET/PUT/POST/DELETE) to interact with the KloudBuster REST interface using the
provided URL at port 8080.
* `KloudBuster REST API Documentation Preview <https://htmlpreview.github.io/?https://github.com/openstack/kloudbuster/blob/master/doc/source/_static/kloudbuster-swagger.html>`_
* `REST API Documentation (Swagger yaml) <https://github.com/openstack/kloudbuster/blob/master/kb_server/kloudbuster-swagger.yaml>`_
* `KloudBuster REST API Documentation Preview <https://htmlpreview.github.io/?https://opendev.org/x/kloudbuster/src/branch/master/doc/source/_static/kloudbuster-swagger.html>`_
* `REST API Documentation (Swagger yaml) <https://opendev.org/x/kloudbuster/src/branch/master/kb_server/kloudbuster-swagger.yaml>`_
The following curl examples assume the server runs on localhost.
Display version and retrieve default configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To get the current version, or to retrieve the default configuration and save it to a file:
.. code-block:: bash
> curl http://localhost:8080/api/kloudbuster/version
7.1.1
> curl http://localhost:8080/api/config/default_config >default_config
...
Create a new Kloudbuster session
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Before running any benchmark, the first step is to create a new session:
The body of the REST request must have the following fields:
.. code-block:: bash
{
'credentials': {'tested-rc': '<STRING>',
'testing-rc': '<STRING>'},
'kb_cfg': {<USER_OVERRIDDEN_CONFIGS>},
'topo_cfg': {<TOPOLOGY_CONFIGS>},
'tenants_cfg': {<TENANT_AND_USER_LISTS_FOR_REUSING>},
'storage_mode': true|false
}
List of fields and content:
- credentials (mandatory)
- tested-rc (mandatory) contains the openrc variables (string containing the list of variables separated by \n)
- testing-rc (optional) only needed in case of dual cloud testing (HTTP only)
- kb_cfg (mandatory) a string containing the Kloudbuster configuration to use (json)
- topo_cfg (optional) a string containing the topology configuration (json)
- tenants_cfg (optional) a string containing the list of tenants and users to use (json)
- storage_mode (mandatory) true for storage benchmark, false for HTTP benchmark
Example of configuration:
.. code-block:: bash
# Content of a standard openrc file that we store in a variable
OPENRC="export OS_CACERT=/root/openstack-configs/haproxy-ca.crt
export OS_AUTH_URL=https://10.0.0.1:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=55DgmREFWenMqkxK
export OS_REGION_NAME=RegionOne
export OS_PROJECT_NAME=admin
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_IDENTITY_API_VERSION=3"
# Example of simple Kloudbuster configuration
KBCFG="{client:{storage_stage_configs:{ vm_count: 1, disk_size: 50, io_file_size: 1},
storage_tool_configs:[{description: 'Random Read', mode: 'randread', runtime: 30,
block_size: '4k', iodepth: 4}]}}"
# REST request body in json format
BODY="{'credentials': {'tested-rc': $OPENRC},
'kb_cfg': $KBCFG,
'storage_mode': true}"
Create the Kloudbuster session with above configuration:
.. code-block:: bash
> curl -H "Content-Type: application/json" -X POST -d "$BODY" http://localhost:8080/api/config/running_config
Note that this request only updates the running configuration and does not start any benchmark.
It will return a session ID that needs to be passed to subsequent REST requests.
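Equivalently, the session-creation request above can be issued from Python; a minimal sketch using only the standard library (the server URL and the openrc/config values are illustrative, and actually sending the request requires a running kb_server):

```python
import json
import urllib.request

# Content of a standard openrc file, as one string (illustrative values)
openrc = "export OS_AUTH_URL=https://10.0.0.1:5000/v3\nexport OS_USERNAME=admin"

# Minimal KloudBuster configuration override, serialized to JSON
kb_cfg = json.dumps({
    "client": {
        "storage_stage_configs": {"vm_count": 1, "disk_size": 50, "io_file_size": 1}
    }
})

# REST request body with the same fields as the curl example above
body = json.dumps({
    "credentials": {"tested-rc": openrc},
    "kb_cfg": kb_cfg,
    "storage_mode": True,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8080/api/config/running_config",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With a server running, this returns the session ID:
# session_id = urllib.request.urlopen(req).read().decode()
```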
Start a storage benchmark using the running configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SESSION_ID is the id returned from the /api/config/running_config POST request.
.. code-block:: bash
> curl -H "Content-Type: application/json" -X POST http://localhost:8080/api/kloudbuster/run_test/$SESSION_ID
Examples of REST requests
^^^^^^^^^^^^^^^^^^^^^^^^^
.. _upload_kb_image:
@ -143,5 +230,3 @@ To upload the image from a local copy of that image using the Glance CLI:
.. code-block:: bash
glance image-create --file kloudbuster-7.0.0.qcow2 --disk-format qcow2 --container-format bare --visibility public --name kloudbuster-7.0.0

104
kb_build.sh Normal file → Executable file

@ -2,32 +2,49 @@
# This script will build the kloudbuster VM image and the container image under the ./build directory
# Check we are in a virtual environment
function check_in_venv {
IN_VENV=$(python -c 'import sys; print hasattr(sys, "real_prefix")')
echo $IN_VENV
# canned user/password for direct login
export DIB_DEV_USER_USERNAME=kb
export DIB_DEV_USER_PASSWORD=kb
export DIB_DEV_USER_PWDLESS_SUDO=Y
# Set the data sources to have ConfigDrive only
export DIB_CLOUD_INIT_DATASOURCES="ConfigDrive"
function cleanup_qcow2 {
echo
echo "Error: found unrelated qcow2 files that would make the container image too large."
echo "Cleanup qcow2 files before re-running:"
ls -l *.qcow2
exit 3
}
# build the VM image first
function build_vm {
kb_image_name=kloudbuster-$KB_TAG
qcow_count=$(find . -name '*qcow2' | wc -l)
if [ ! -f $kb_image_name.qcow2 ]; then
if [ $qcow_count -gt 0 ]; then
cleanup_qcow2
fi
echo "Building $kb_image_name.qcow2..."
pip install diskimage-builder
pip3 install "diskimage-builder>=2.15"
cd ./kb_dib
# Add the kloudbuster elements directory to the DIB elements path
export ELEMENTS_PATH=./elements
# Install Ubuntu 16.04
export DIB_RELEASE=xenial
# Install Ubuntu 18.04
export DIB_RELEASE=bionic
time disk-image-create -o $kb_image_name ubuntu kloudbuster
time disk-image-create -o $kb_image_name block-device-mbr ubuntu kloudbuster
rm -rf venv $kb_image_name.d
mv $kb_image_name.qcow2 ..
cd ..
else
if [ $qcow_count -gt 1 ]; then
cleanup_qcow2
fi
echo "Reusing $kb_image_name.qcow2"
fi
@ -36,7 +53,21 @@ function build_vm {
# Build container
function build_container {
sudo docker build --tag=berrypatch/kloudbuster:$KB_TAG .
# Create a wheel package
# ./dist/kloudbuster-$KB_TAG-py3-none-any.whl
python setup.py build bdist_wheel || { echo "Error building package"; exit 5; }
wheel_pkg="kloudbuster-$KB_TAG-py3-none-any.whl"
if [ -f ./dist/$wheel_pkg ]; then
echo "Created package: ./dist/$wheel_pkg"
else
echo "Error: Cannot find created package: ./dist/$wheel_pkg"
exit 4
fi
build_args="--build-arg WHEEL_PKG=$wheel_pkg --build-arg VM_IMAGE=$kb_image_name.qcow2"
echo "docker build $build_args --tag=berrypatch/kloudbuster:$KB_TAG ."
sudo docker build $build_args --tag=berrypatch/kloudbuster:$KB_TAG .
echo "sudo docker build $build_args --tag=berrypatch/kloudbuster:latest ."
sudo docker build $build_args --tag=berrypatch/kloudbuster:latest .
}
function help {
@ -47,7 +78,7 @@ function help {
echo "Builds the KloudBuster VM and Docker container images"
echo "The Docker container image will include the VM image for easier upload"
echo
echo "Must run in a virtual environment and must be called from the root of the repository"
echo "Kloudbuster must be installed for this script to run (typically from a virtual environment)"
exit 1
}
@ -65,27 +96,50 @@ while [[ $# -gt 0 ]]; do
# Shift after checking all the cases to get the next option
shift
done
in_venv=$(check_in_venv)
if [ $in_venv != "True" ]; then
echo "Error: Must be in a virtual environment to run!"
exit 2
# check that we have python3/pip3 enabled
python -c 'print 0' >/dev/null 2>/dev/null
if [ $? -eq 0 ]; then
echo "Error: python 3 is required as default python version"
exit 3
fi
# check that we are in a virtual environment
INVENV=$(python -c 'import sys;print(hasattr(sys, "real_prefix") or (hasattr(sys, "base_prefix") and sys.base_prefix != sys.prefix))')
if [ $INVENV != "True" ]; then
echo "Error: must run inside a venv as many packages will be installed"
exit 4
fi
# check that kloudbuster binary is installed
# Get the kloudbuster version (must be retrieved from stderr)
KB_TAG=$(kloudbuster --version 2>&1)
if [ $? != 0 ]; then
echo "Installing kloudbuster..."
# Install kloudbuster in the virtual env in editable mode
pip3 install -q -e .
KB_TAG=$(kloudbuster --version 2>&1)
if [ $? != 0 ]; then
echo "Error: cannot retrieve version from kloudbuster..."
echo
kloudbuster --version
exit 2
fi
fi
# check that docker is installed
if [ $build_vm_only = 0 ]; then
docker --version >/dev/null 2>/dev/null
if [ $? -ne 0 ]; then
echo "Error: docker is not installed"
exit 4
fi
fi
# check we're at the root of the kloudbuster repo
if [ ! -d kloudbuster -o ! -f Dockerfile ]; then
echo "Error: Must be called from the root of the kloudbuster repository to run!"
exit 2
fi
# Install kloudbuster in the virtual env
pip install -q -U setuptools
pip install -q -e .
# Get the kloudbuster version (must be retrieved from stderr)
KB_TAG=$(kloudbuster --version 2>&1)
if [ $? != 0 ]; then
echo "Error retrieving kloudbuster version:"
echo
kloudbuster --version
exit 2
fi
echo
echo "Building KloudBuster with tag $KB_TAG"

2
kb_dib/Vagrantfile vendored

@ -15,7 +15,7 @@ apt-get -y install qemu-utils
git clone git://github.com/openstack/diskimage-builder.git
git clone git://github.com/openstack/dib-utils.git
# install kloudbuster
git clone git://github.com/openstack/kloudbuster.git
git clone https://opendev.org/x/kloudbuster.git
kb_root=kloudbuster
# Extract image version number '__version__ = 2.0' becomes '__version__=2_0'

@ -5,6 +5,12 @@ KloudBuster
KloudBuster Image
Contains all the packages and files needed to run a universal KloudBuster VM
The same image can run using one of the following roles (Assigned from the user-data python program):
- Server VM for a given traffic type (e.g. http server or tcp/udp server)
- Client VM for a given traffic type (e.g. http client or tcp/udp client)
- Redis server (only 1 instance in the client cloud)
VMs are launched using cloud config and can be accessed with ssh:
- username: cloud-user
- no password, use key pairs to create the VM

@ -1,3 +1,4 @@
vm
install-static
package-installs
devuser

@ -6,8 +6,8 @@ libssl-dev:
libyaml-dev:
nginx:
ntpdate:
python-pip:
python-dev:
python3-pip:
python3-dev:
redis-server:
xfsprogs:
zlib1g-dev:

@ -1,4 +1,3 @@
#!/bin/sh
pip install --upgrade pip
pip install setuptools wheel
pip3 install setuptools wheel

@ -43,40 +43,46 @@ mkdir -p /data/www
chmod -R 777 /data
# redis server should listen on all interfaces
sed -i "s/127.0.0.1/0.0.0.0/g" /etc/redis/redis.conf
sed -i "s/^bind 127.0.0.1 ::1/bind 0.0.0.0 ::0/g" /etc/redis/redis.conf
# if started nginx should be allowed to open more file descriptors
sed -i 's/start-stop-daemon\ --start/ulimit\ \-n\ 102400\n\t\0/g' /etc/init.d/nginx
# Auto start the KloudBuster Agent, with user-data
sed -i "s/^exit\s0/cd \/kb_test\n\0/g" /etc/rc.local
sed -i "s/^exit\s0/if wget http\:\/\/169.254.169.254\/latest\/user-data; then \:; fi\n\0/g" /etc/rc.local
sed -i "s/^exit\s0/python kb_vm_agent.py \&\n\0/g" /etc/rc.local
echo '#!/bin/bash' > /etc/rc.local
echo 'echo -e "127.0.0.1\\t`hostname`" >> /etc/hosts' >> /etc/rc.local
echo 'echo `hostname -I` `hostname` >> /etc/hosts' >> /etc/rc.local
echo 'mkdir -p /mnt/config' >> /etc/rc.local
echo 'mount /dev/disk/by-label/config-2 /mnt/config' >> /etc/rc.local
echo 'cp /mnt/config/openstack/latest/user_data /kb_test/' >> /etc/rc.local
echo 'cd /kb_test' >> /etc/rc.local
echo 'python3 kb_vm_agent.py &' >> /etc/rc.local
chmod +x /etc/rc.local
# =================
# KloudBuster Proxy
# =================
cd /kb_test
git clone git://github.com/openstack/kloudbuster.git
git clone https://opendev.org/x/kloudbuster.git
cd kloudbuster
pip install -r requirements.txt
pip3 install -r requirements.txt
# ======
# Client
# ======
# python redis client, HdrHistogram_py
pip install redis hdrhistogram
pip3 install redis hdrhistogram
# Install HdrHistogram_c
cd /tmp
git clone git://github.com/HdrHistogram/HdrHistogram_c.git
git clone https://github.com/HdrHistogram/HdrHistogram_c.git
cd HdrHistogram_c
cmake .
make install
# Install the http traffic generator
cd /tmp
git clone git://github.com/yicwang/wrk2.git
git clone https://github.com/yicwang/wrk2.git
cd wrk2
make
mv wrk /usr/local/bin/wrk2
@ -107,13 +113,7 @@ rm -rf /tmp/wrk2
rm -rf /tmp/fio
# Uninstall unneeded packages
apt-get -y --purge remove libyaml-dev
apt-get -y --purge remove libssl-dev
apt-get -y --purge remove zlib1g-dev
apt-get -y --purge remove libaio-dev
apt-get -y --purge remove python-pip
apt-get -y --purge remove python-dev
apt-get -y --purge remove build-essential
apt-get -y --purge remove cmake
apt-get -y --purge remove libyaml-dev libssl-dev zlib1g-dev libaio-dev python3-pip python3-dev build-essential cmake
apt-get -y --purge autoremove
## apt-get -y install python
apt-get -y autoclean

@ -1,17 +1,18 @@
#!/usr/bin/env python
#!/usr/bin/env python3
import yaml
cloudcfg = "/etc/cloud/cloud.cfg"
user = "cloud-user"
with open(cloudcfg) as f:
cfg = yaml.load(f)
cfg = yaml.safe_load(f)
synver = "1"
try:
if cfg['system_info']['default_user']['name']:
synver = "2"
except KeyError:
synver = "1"
pass
if synver == "1":
if cfg['user'] == user:
@ -27,7 +28,7 @@ elif synver == "2":
# Change the user to cloud-user
cfg['system_info']['default_user']['name'] = user
cfg['system_info']['default_user']['gecos'] = "Cloud User"
print cfg['system_info']['default_user']['name']
print(cfg['system_info']['default_user']['name'])
with open(cloudcfg, "w") as f:
yaml.dump(cfg, f, default_flow_style=False)

@ -13,19 +13,21 @@
# under the License.
#
from hdrh.histogram import HdrHistogram
import json
import multiprocessing
import redis
import socket
import struct
import subprocess
from subprocess import Popen
import sys
import syslog
import threading
import time
import traceback
from hdrh.histogram import HdrHistogram
import redis
# Define the version of the KloudBuster agent and VM image
#
# When VM is up running, the agent will send the READY message to the
@ -35,12 +37,15 @@ import traceback
# and can be left constant moving forward.
__version__ = '7'
# TODO(Logging on Agent)
# add logging on the agent later
def exec_command(cmd, cwd=None):
p = subprocess.Popen(cmd, cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout, stderr) = p.communicate()
(_, stderr) = p.communicate()
if p.returncode:
syslog.syslog("Command failed: " + ' '.join(cmd))
if stderr:
syslog.syslog(stderr)
return p.returncode
def refresh_clock(clocks, force_sync=False):
@ -50,7 +55,7 @@ def refresh_clock(clocks, force_sync=False):
command = "sudo ntpdate" + step + clocks
exec_command(command.split(" "))
class KB_Instance(object):
class KB_Instance():
# Check whether the HTTP Service is up running
@staticmethod
@ -69,7 +74,7 @@ class KB_Instance(object):
if if_name:
debug_msg += " and %s" % if_name
cmd += " dev %s" % if_name
print debug_msg
print(debug_msg)
return cmd
@staticmethod
@ -101,7 +106,7 @@ class KB_Instance(object):
else:
debug_msg = "with next hop %s" % if_name
cmd += " dev %s" % if_name
print debug_msg
print(debug_msg)
return cmd
# Run the HTTP benchmarking tool
@ -163,13 +168,13 @@ class KB_Instance(object):
cmd = '%s %s %s %s' % (dest_path, fixed_opt, required_opt, optional_opt)
return cmd
class KBA_Client(object):
class KBA_Client():
def __init__(self, user_data):
host = user_data['redis_server']
port = user_data['redis_server_port']
self.user_data = user_data
self.redis_obj = redis.StrictRedis(host=host, port=port)
self.redis_obj = redis.Redis(host=host, port=port)
self.pubsub = self.redis_obj.pubsub(ignore_subscribe_messages=True)
self.hello_thread = None
self.stop_hello = threading.Event()
@ -181,10 +186,10 @@ class KBA_Client(object):
def setup_channels(self):
# Check for connections to redis server
while (True):
while True:
try:
self.redis_obj.get("test")
except (redis.exceptions.ConnectionError):
except redis.exceptions.ConnectionError:
time.sleep(1)
continue
break
@ -195,7 +200,7 @@ class KBA_Client(object):
def report(self, cmd, client_type, data):
message = {'cmd': cmd, 'sender-id': self.vm_name,
'client-type': client_type, 'data': data}
self.redis_obj.publish(self.report_chan_name, message)
self.redis_obj.publish(self.report_chan_name, str(message))
def send_hello(self):
# Sending "hello" message to master node every 2 seconds
@ -226,6 +231,8 @@ class KBA_Client(object):
self.last_process = p
lines_iterator = iter(p.stdout.readline, b"")
for line in lines_iterator:
# line is bytes, so need to make it a str
line = line.decode('utf-8')
# One exception, if this is the very last report, we will send it
# through "DONE" command, not "REPORT". So what's happening here
# is to determine whether this is the last report.
@ -263,23 +270,25 @@ class KBA_Client(object):
# When 'ACK' is received, means the master node
# acknowledged the current VM. So stopped sending more
# "hello" packet to the master node.
# Unfortunately, there is no thread.stop() in Python 2.x
self.stop_hello.set()
elif message['cmd'] == 'EXEC':
self.last_cmd = ""
arange = message['data']['active_range']
my_id = int(self.vm_name[self.vm_name.rindex('I') + 1:])
if (not arange) or (my_id >= arange[0] and my_id <= arange[1]):
if (not arange) or (arange[0] <= my_id <= arange[1]):
try:
par = message['data'].get('parameter', '')
str_par = 'par' if par else ''
cmd_res_tuple = eval('self.exec_%s(%s)' % (message['data']['cmd'], str_par))
cmd = message['data']['cmd']
if isinstance(cmd, bytes):
cmd = cmd.decode('utf-8')
cmd_res_tuple = eval('self.exec_%s(%s)' % (cmd, str_par))
cmd_res_dict = dict(zip(("status", "stdout", "stderr"), cmd_res_tuple))
except Exception:
cmd_res_dict = {
"status": 1,
"stdout": self.last_cmd,
"stderr": str(exc)
"stderr": traceback.format_exc() + '\nmessage: ' + str(message['data'])
}
if self.__class__.__name__ == "KBA_Multicast_Client":
self.report('DONE_MC', message['client-type'], cmd_res_dict)
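The EXEC handler builds `self.exec_<cmd>(...)` strings and runs them through eval(). A hedged sketch of the same dispatch done with getattr, which avoids evaluating message-controlled strings as code (class and method names here are hypothetical):

```python
# Hypothetical sketch: the "exec_<cmd>" dispatch pattern without eval().
class Agent:
    def exec_run_http_test(self, par=None):
        # stand-in for a real workload method: (status, stdout, stderr)
        return (0, 'ok par=%s' % par, '')

def dispatch(agent, cmd, par=None):
    handler = getattr(agent, 'exec_' + cmd, None)
    if handler is None:
        return (1, '', 'unknown command: %s' % cmd)
    return handler(par) if par is not None else handler()

status, out, err = dispatch(Agent(), 'run_http_test', par='10s')
assert status == 0 and 'par=10s' in out
```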
@ -287,14 +296,14 @@ class KBA_Client(object):
self.report('DONE', message['client-type'], cmd_res_dict)
else:
# Unexpected
print('ERROR: Unexpected command received!')
class KBA_HTTP_Client(KBA_Client):
def exec_setup_static_route(self):
self.last_cmd = KB_Instance.get_static_route(self.user_data['target_subnet_ip'])
result = self.exec_command(self.last_cmd)
if self.user_data['target_subnet_ip'] not in result[1]:
self.last_cmd = KB_Instance.add_static_route(
self.user_data['target_subnet_ip'],
self.user_data['target_shared_interface_ip'])
@ -319,7 +328,7 @@ class KBA_Multicast_Client(KBA_Client):
self.last_cmd = KB_Instance.get_static_route(self.user_data['target_subnet_ip'])
result = self.exec_command(self.last_cmd)
if self.user_data['target_subnet_ip'] not in result[1]:
self.last_cmd = KB_Instance.add_static_route(
self.user_data['target_subnet_ip'],
self.user_data['target_shared_interface_ip'])
@ -336,10 +345,10 @@ class KBA_Multicast_Client(KBA_Client):
'megabytes': 'megabytes', 'rate_Mbps': 'mbps', 'msmaxjitter': 'jitter',
'msavgOWD': 'latency'} # Format/Include Keys
try:
return {
kmap[k]: abs(float(v)) for (k, v) in [c.split("=")
for c in p_out.split(" ")] if k in kmap
}
except Exception:
return {'error': '0'}
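The comprehension above flattens "key=value" tokens into a result dict keyed by kmap. A standalone version of the same parser (the kmap keys are taken from the snippet; the sample output line is made up):

```python
# Parse a space-separated "k=v k=v ..." tool output line, keeping only
# the keys named in kmap and taking the absolute float of each value.
def parse_kv(p_out, kmap):
    try:
        return {kmap[k]: abs(float(v))
                for (k, v) in (c.split('=') for c in p_out.split(' '))
                if k in kmap}
    except Exception:
        return {'error': '0'}

kmap = {'rate_Mbps': 'mbps', 'msmaxjitter': 'jitter'}
sample = 'rate_Mbps=940.2 msmaxjitter=0.013 ignored=1'
assert parse_kv(sample, kmap) == {'mbps': 940.2, 'jitter': 0.013}
```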
@ -361,12 +370,12 @@ class KBA_Multicast_Client(KBA_Client):
queue.put([cmds[cmd][0], out])
# End Function #
for _ in cmds:
multiprocessing.Process(target=spawn, args=(cmd_index, queue)).start()
cmd_index += 1
p_err = ""
try:
while j < len(cmds):
out = queue.get(True, timeout)
key = out[0]
j += 1
@ -496,7 +505,7 @@ class KBA_Storage_Client(KBA_Client):
grp_msb_bits = clat['FIO_IO_U_PLAT_BITS']
buckets_per_grp = clat['FIO_IO_U_PLAT_VAL']
for bucket in range(total_buckets):
if clat[str(bucket)]:
grp = bucket // buckets_per_grp
subbucket = bucket % buckets_per_grp
@ -507,7 +516,8 @@ class KBA_Storage_Client(KBA_Client):
val = int(base + (base / buckets_per_grp) * (subbucket - 0.5))
histogram.record_value(val, clat[str(bucket)])
# histogram.encode() returns a base64 compressed histogram as bytes
p_output['jobs'][0][test]['clat']['hist'] = histogram.encode().decode('utf-8')
p_output['jobs'][0][test]['clat'].pop('bins')
p_output['jobs'][0][test]['clat'].pop('percentile')
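The decode added above is needed because HdrHistogram's encode() yields bytes (a base64 of the compressed histogram) in Python 3, and bytes cannot go into a JSON payload. A stdlib analogy of the same pitfall (not HdrHistogram itself):

```python
import base64
import json
import zlib

# Compress + base64 yields bytes in Python 3, which json.dumps rejects
# until the payload is decoded to str.
raw = b'histogram-payload'
encoded = base64.b64encode(zlib.compress(raw))      # bytes
assert isinstance(encoded, bytes)
as_str = encoded.decode('utf-8')                    # JSON-safe str
json.dumps({'hist': as_str})                        # would raise on bytes
assert zlib.decompress(base64.b64decode(as_str)) == raw
```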
@ -530,7 +540,7 @@ class KBA_Storage_Client(KBA_Client):
return self.encode_bins(p_out)
class KBA_Server():
def __init__(self, user_data):
self.user_data = user_data
@ -540,14 +550,14 @@ class KBA_Server(object):
html_size = self.user_data['http_server_configs']['html_size']
cmd_str = 'dd if=/dev/zero of=/data/www/index.html bs=%s count=1' % html_size
cmd = cmd_str.split()
return not bool(exec_command(cmd))
def start_nginx_server(self):
cmd = ['sudo', 'service', 'nginx', 'start']
return exec_command(cmd)
def start_nuttcp_server(self):
cmd = ['/usr/local/bin/nuttcp', '-S', '-P5000']
return exec_command(cmd)
def start_multicast_listener(self, mc_addrs, multicast_ports, start_address="231.0.0.128"):
@ -570,7 +580,7 @@ class KBA_Server(object):
s.bind((m_addr, port))
s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
while True:
s.recvfrom(10240)
# End Function #
@ -583,7 +593,7 @@ class KBA_Server(object):
while True:
continue
class KBA_Proxy():
def start_redis_server(self):
cmd = ['sudo', 'service', 'redis-server', 'start']
return exec_command(cmd)
@ -591,21 +601,24 @@ class KBA_Proxy(object):
if __name__ == "__main__":
try:
with open('user_data', 'r') as f:
user_data = dict(eval(f.read()))
except Exception as e:
# KloudBuster starts without user-data
cwd = 'kloudbuster/kb_server'
cmd = ['python3', 'setup.py', 'develop']
rc = exec_command(cmd, cwd=cwd)
if not rc:
syslog.syslog("Starting kloudbuster HTTP server")
cmd = ['/usr/local/bin/pecan', 'serve', 'config.py']
sys.exit(exec_command(cmd, cwd=cwd))
role = user_data.get('role')
if role == 'KB-PROXY':
agent = KBA_Proxy()
syslog.syslog("Starting kloudbuster proxy server")
sys.exit(agent.start_redis_server())
if role.endswith('Server'):
agent = KBA_Server(user_data)
if user_data['role'].startswith('Multicast'):
KB_Instance.add_multicast_route()
@ -621,17 +634,20 @@ if __name__ == "__main__":
user_data.get('multicast_ports'),
user_data.get('multicast_listener_address_start'))
if agent.config_nginx_server():
syslog.syslog("Starting kloudbuster nginx server")
sys.exit(agent.start_nginx_server())
else:
sys.exit(1)
elif role.endswith('Client'):
if role.startswith('HTTP'):
syslog.syslog("Starting kloudbuster HTTP client")
agent = KBA_HTTP_Client(user_data)
elif role.startswith('Multicast'):
KB_Instance.add_multicast_route()
refresh_clock(user_data.get('ntp_clocks'), force_sync=True)
agent = KBA_Multicast_Client(user_data)
else:
syslog.syslog("Starting kloudbuster storage client")
agent = KBA_Storage_Client(user_data)
agent.setup_channels()
agent.hello_thread = threading.Thread(target=agent.send_hello)

@ -29,7 +29,7 @@ from pecan import response
LOG = logging.getLogger("kloudbuster")
class ConfigController():
# Decorator to check for missing or invalid session ID
def check_session_id(func):
@ -198,7 +198,7 @@ class ConfigController(object):
allowed_status = ['READY']
except Exception as e:
response.status = 400
response.text = u"Invalid JSON: \n%s" % (e.message)
response.text = u"Invalid JSON: \n%s" % str(e)
return response.text
# http_tool_configs and storage_tool_config for client VMs is allowed to be

@ -26,7 +26,7 @@ from pecan import response
LOG = logging.getLogger("kloudbuster")
class KBController():
def __init__(self):
self.kb_thread = None

@ -17,7 +17,7 @@ import threading
KB_SESSIONS = {}
KB_SESSIONS_LOCK = threading.Lock()
class KBSessionManager():
@staticmethod
def has(session_id):
@ -46,7 +46,7 @@ class KBSessionManager(object):
KB_SESSIONS_LOCK.release()
class KBSession():
def __init__(self):
self.kb_status = 'READY'
self.first_run = True

@ -19,7 +19,7 @@ from pecan import expose
from pecan import response
class APIController():
@expose()
def _lookup(self, primary_key, *remainder):
if primary_key == "config":
@ -30,7 +30,7 @@ class APIController(object):
abort(404)
class RootController():
@expose()
def index(self):
response.status = 301

@ -15,7 +15,7 @@
import os
import time
import kloudbuster.log as logging
from novaclient.exceptions import BadRequest
LOG = logging.getLogger(__name__)
@ -24,7 +24,7 @@ class KBVolAttachException(Exception):
pass
class BaseCompute():
"""
The Base class for nova compute resources
1. Creates virtual machines with specific configs
@ -46,13 +46,12 @@ class BaseCompute(object):
self.shared_interface_ip = None
self.vol = None
def create_server(self, image_name, flavor_type, keyname,
nic, sec_group, avail_zone=None, user_data=None,
config_drive=True, retry_count=100):
"""
Create a server instance with associated security group, keypair with a provided public key.
Create a VM instance given the following parameters
1. VM Name
2. Image Name
@ -93,6 +92,7 @@ class BaseCompute(object):
LOG.error('Instance creation error:' + instance.fault['message'])
return None
time.sleep(2)
return None
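The explicit `return None` added above makes the timeout path of create_server's poll loop unambiguous. The same poll-until-settled pattern, sketched generically (names are illustrative, not KloudBuster API):

```python
import time

# Generic sketch of the create_server poll loop: re-check a resource's
# status until it reaches a terminal state or the retry budget runs out.
def wait_for(fetch, done_states, error_states, retries=100, interval=0):
    for _ in range(retries):
        status = fetch()
        if status in done_states:
            return status
        if status in error_states:
            return None
        time.sleep(interval)
    return None  # mirrors the explicit timeout return added above

states = iter(['BUILD', 'BUILD', 'ACTIVE'])
assert wait_for(lambda: next(states), {'ACTIVE'}, {'ERROR'}) == 'ACTIVE'
```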
def attach_vol(self):
if self.vol.status != 'available':
@ -117,7 +117,7 @@ class BaseCompute(object):
def detach_vol(self):
if self.instance and self.vol:
attached_vols = self.novaclient.volumes.get_server_volumes(self.instance.id)
if attached_vols:
try:
self.novaclient.volumes.delete_server_volume(self.instance.id, self.vol.id)
except BadRequest:
@ -133,7 +133,7 @@ class BaseCompute(object):
return flavor
class SecGroup():
def __init__(self, novaclient, neutronclient):
self.secgroup = None
@ -141,11 +141,11 @@ class SecGroup(object):
self.novaclient = novaclient
self.neutronclient = neutronclient
def create_secgroup_with_rules(self, group_name, is_ipv6=False):
body = {
'security_group': {
'name': group_name,
'description': 'KloudBuster security group'
}
}
group = self.neutronclient.create_security_group(body)['security_group']
@ -158,6 +158,9 @@ class SecGroup(object):
}
}
if is_ipv6:
body['security_group_rule']['ethertype'] = 'IPv6'
# Allow ping traffic
body['security_group_rule']['protocol'] = 'icmp'
body['security_group_rule']['port_range_min'] = None
@ -235,7 +238,7 @@ class SecGroup(object):
LOG.error('Failed while deleting security group %s.' % self.secgroup['id'])
return False
class KeyPair():
def __init__(self, novaclient):
self.keypair = None
@ -265,7 +268,7 @@ class KeyPair(object):
if self.keypair:
self.novaclient.keypairs.delete(self.keypair)
class Flavor():
def __init__(self, novaclient):
self.novaclient = novaclient
@ -301,7 +304,7 @@ class Flavor(object):
except Exception:
pass
class NovaQuota():
def __init__(self, novaclient, tenant_id):
self.novaclient = novaclient

@ -14,11 +14,11 @@
import time
from kloudbuster.perf_instance import PerfInstance
import kloudbuster.base_compute as base_compute
import kloudbuster.base_storage as base_storage
import kloudbuster.log as logging
import netaddr
from neutronclient.common.exceptions import NetworkInUseClient
@ -35,17 +35,18 @@ class KBGetExtNetException(Exception):
class KBGetProvNetException(Exception):
pass
def create_floating_ip(neutron_client, ext_net, port_id):
"""
Function that creates a floating ip and returns it
Accepts the neutron client, ext_net, and port_id
Module level function since this is not associated with a
specific network instance
"""
body = {
"floatingip": {
"floating_network_id": ext_net['id']
"floating_network_id": ext_net['id'],
"port_id": port_id
}
}
fip = neutron_client.create_floatingip(body)
@ -100,7 +101,7 @@ def find_provider_network(neutron_client, name):
networks = neutron_client.list_networks()['networks']
for network in networks:
if network['provider:physical_network']:
if name == "" or name == network['name']:
if name in ("", network['name']):
return network
if name != "":
LOG.error("The provider network: " + name + " was not found.")
@ -115,11 +116,11 @@ def find_first_network(neutron_client):
If no external network is found return None
"""
networks = neutron_client.list_networks()['networks']
if networks:
return networks[0]
return None
class BaseNetwork():
"""
The Base class for neutron network operations
1. Creates networks with 1 subnet inside each network
@ -155,7 +156,8 @@ class BaseNetwork(object):
secgroup_instance = base_compute.SecGroup(self.nova_client, self.neutron_client)
self.secgroup_list.append(secgroup_instance)
secgroup_name = network_prefix + "-SG" + str(secgroup_count)
secgroup_instance.create_secgroup_with_rules(
secgroup_name, is_ipv6=self.network['is_ipv6'])
self.res_logger.log('sec_groups', secgroup_instance.secgroup['name'],
secgroup_instance.secgroup['id'])
@ -165,15 +167,23 @@ class BaseNetwork(object):
if config_scale['use_floatingip']:
external_network = find_external_network(self.neutron_client)
volume_type = None
storage_mode = self.router.user.tenant.kloud.storage_mode
if storage_mode and config_scale['storage_stage_configs']['target'] == 'volume':
bs_obj = base_storage.BaseStorage(self.cinder_client)
vol_size = config_scale['storage_stage_configs']['disk_size']
volume_type = config_scale['storage_stage_configs'].get('volume_type', None)
DEFAULT_VOL_CREATE_TIMEOUT_SEC = 150
try:
vol_timeout = int(config_scale['storage_stage_configs'].get('vol_create_timeout_sec', DEFAULT_VOL_CREATE_TIMEOUT_SEC))
except ValueError:
vol_timeout = DEFAULT_VOL_CREATE_TIMEOUT_SEC
LOG.info("Incorrect input for vol_create_timeout_sec in cfg file ,proceeding with default timeout %d" % vol_timeout)
else:
vol_size = 0
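The vol_create_timeout_sec handling above degrades bad config input to the default instead of crashing. The same fallback, isolated as a standalone helper (the helper name is illustrative):

```python
# Standalone version of the vol_create_timeout_sec fallback: a missing
# or unparsable value falls back to the default instead of raising.
DEFAULT_VOL_CREATE_TIMEOUT_SEC = 150

def get_vol_timeout(storage_cfg):
    try:
        return int(storage_cfg.get('vol_create_timeout_sec',
                                   DEFAULT_VOL_CREATE_TIMEOUT_SEC))
    except ValueError:
        return DEFAULT_VOL_CREATE_TIMEOUT_SEC

assert get_vol_timeout({'vol_create_timeout_sec': '60'}) == 60
assert get_vol_timeout({}) == 150
assert get_vol_timeout({'vol_create_timeout_sec': 'oops'}) == 150
```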
# Schedule to create the required number of VMs
for instance_count in range(vm_total):
vm_name = network_prefix + "-I" + str(instance_count)
perf_instance = PerfInstance(vm_name, self, config_scale)
self.instance_list.append(perf_instance)
@ -182,14 +192,16 @@ class BaseNetwork(object):
# Don't create a volume for the KB-Proxy
if vol_size and instance_count < vm_total - 1:
vol_name = network_prefix + "-V" + str(instance_count)
perf_instance.vol = bs_obj.create_vol(vol_size, vol_timeout, name=vol_name,
type=volume_type)
self.res_logger.log('volumes', vol_name, perf_instance.vol.id)
perf_instance.subnet_ip = self.network['subnet_ip']
if config_scale['use_floatingip']:
# Create the floating ip for the instance
# store it and the ip address in perf_instance object
port_id = perf_instance.instance.interface_list()[0].id
perf_instance.fip = create_floating_ip(self.neutron_client, external_network, port_id)
perf_instance.fip_ip = perf_instance.fip['floatingip']['floating_ip_address']
self.res_logger.log('floating_ips',
perf_instance.fip['floatingip']['floating_ip_address'],
@ -255,11 +267,14 @@ class BaseNetwork(object):
# add subnet id to the network dict since it has just been added
self.network['subnets'] = [subnet['id']]
self.network['subnet_ip'] = cidr
self.network['is_ipv6'] = False
def add_provider_network(self, name):
self.network = find_provider_network(self.neutron_client, name)
if len(self.network['subnets']) > 0:
subnet = self.neutron_client.show_subnet(self.network['subnets'][0])['subnet']
self.network['subnet_ip'] = subnet['cidr']
self.network['is_ipv6'] = bool(subnet['ipv6_address_mode'])
def get_cidr_from_subnet_id(self, subnetID):
sub = self.neutron_client.show_subnet(subnetID)
@ -270,6 +285,7 @@ class BaseNetwork(object):
"""Generate next CIDR for network or subnet, without IP overlapping.
"""
global cidr
# pylint: disable=not-callable
cidr = str(netaddr.IPNetwork(cidr).next())
return cidr
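The helper above relies on netaddr's IPNetwork.next() to produce the next non-overlapping CIDR. A stdlib-only equivalent using `ipaddress` (assumption: "next" means the adjacent network of the same prefix length):

```python
import ipaddress

# Compute the next adjacent network of the same prefix length, e.g.
# 10.0.0.0/16 -> 10.1.0.0/16, without external dependencies.
def next_cidr(cidr):
    net = ipaddress.ip_network(cidr)
    return str(ipaddress.ip_network(
        (int(net.network_address) + net.num_addresses, net.prefixlen)))

assert next_cidr('10.0.0.0/16') == '10.1.0.0/16'
assert next_cidr('192.168.1.0/24') == '192.168.2.0/24'
```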
@ -293,14 +309,14 @@ class BaseNetwork(object):
def get_all_instances(self):
return self.instance_list
class Router():
"""
Router class to create new routers
Supports addition and deletion
of network interfaces to router
"""
def __init__(self, user, provider_network=None):
self.neutron_client = user.neutron_client
self.nova_client = user.nova_client
self.router = None
@ -313,7 +329,7 @@ class Router(object):
self.shared_port_id = None
# Store the interface ip of shared network attached to router
self.shared_interface_ip = None
self.provider_network = provider_network
def create_network_resources(self, config_scale):
"""
@ -322,10 +338,11 @@ class Router(object):
network
"""
if self.provider_network:
# This is a dummy router; use the provider network directly
network_instance = BaseNetwork(self)
self.network_list.append(network_instance)
network_instance.add_provider_network(self.provider_network)
network_instance.create_compute_resources(network_instance.network['name'],
config_scale)
return
@ -365,12 +382,12 @@ class Router(object):
# Now delete the compute resources and the network resources
flag = flag & network.delete_compute_resources()
if network.network:
if self.provider_network:
continue
flag = flag & self.remove_router_interface(network)
flag = flag & network.delete_network()
# Also delete the shared port and remove it from router interface
if self.shared_network and not self.provider_network:
flag = flag & self.remove_router_interface(self.shared_network, use_port=True)
self.shared_network = None
@ -408,10 +425,10 @@ class Router(object):
Also delete the networks attached to this router
"""
# Delete the network resources first and then delete the router itself
if not self.router and not self.provider_network:
return True
network_flag = self.delete_network_resources()
if self.provider_network:
return network_flag
router_flag = False
for _ in range(10):
@ -484,7 +501,7 @@ class Router(object):
class NeutronQuota():
def __init__(self, neutronclient, tenant_id):
self.neutronclient = neutronclient

@ -14,14 +14,14 @@
import time
import kloudbuster.log as logging
LOG = logging.getLogger(__name__)
class KBVolCreationException(Exception):
pass
class BaseStorage():
"""
The Base class for cinder storage resources
"""
@ -29,15 +29,25 @@ class BaseStorage(object):
def __init__(self, cinderclient):
self.cinderclient = cinderclient
def create_vol(self, size, vol_timeout, name=None, type=None):
if type:
vol = self.cinderclient.volumes.create(size, name=name,
volume_type=type)
else:
vol = self.cinderclient.volumes.create(size, name=name)
start_t = time.time()
while True:
if vol.status == 'creating':
time.sleep(5)
elif vol.status == 'available':
break
elif vol.status == 'error':
raise KBVolCreationException('Not enough disk space in the host?')
if (time.time() - start_t) > vol_timeout:
raise KBVolCreationException('Volume creation timed out')
break
vol = self.cinderclient.volumes.get(vol.id)
return vol
@ -65,7 +75,7 @@ class BaseStorage(object):
# self.cinderclient.volumes.detach(volume)
class CinderQuota():
def __init__(self, cinderclient, tenant_id):
self.cinderclient = cinderclient

@ -15,7 +15,7 @@ openrc_file:
# Name of the image to use for all test VMs (client, server and proxy)
# without the qcow2 extension
#
# Leave empty to use the default test VM image (recommended).
# If non-empty, use quotes if there are space characters in the name (e.g. 'my image')
# The default test VM image is named "kloudbuster-<version>" where
@ -28,12 +28,13 @@ image_name:
#
# To upload the image, download it first to a local file system before running kloudbuster
# Fill with the full pathname of the image with qcow2 extension
# e.g.
# vm_image_file: /kloudbuster/kloudbuster-7.0.0.qcow2
# If empty, KloudBuster will attempt to locate that file (with the default name)
# under the following directories:
# - root of the kloudbuster package
# - current directory
# - home directory
# - top directory ("/")
vm_image_file:
# Keystone admin role name (default should work in most deployments)
@ -61,9 +62,32 @@ vm_creation_concurrency: 5
# example to debug)
public_key_file:
# Name of Provider network used for multicast tests or storage tests.
provider_network:
# TSDB connectors are optional and can be used to retrieve CPU usage information and attach them
# to the results.
tsdb:
# The default TSDB class will return nothing (i.e. feature disabled by default).
# Must be replaced with a functional class module and name to retrieve CPU usage information from
# the actual TSDB.
module: 'kloudbuster.tsdb'
# TSDB class name. This class has to be defined in the module given by the tsdb module setting above.
class: 'TSDB'
# The interval in seconds between 2 consecutive CPU measurements to be returned by the TSDB.
step_size: 30
# Duration in seconds of the warmup and cooldown period
# If run duration is 110 seconds, CPU metric will be retrieved for the window starting 10 sec from
# the start (skip the warmup period) and 10 sec before the end of the run
# (skip the cooldown period), and there will be 90/30 = 3 samples retrieved per CPU core.
wait_time: 10
# TSDB server address
server_ip: localhost
# TSDB server port
server_port: 9090
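The tsdb section names a module and a class to be resolved at runtime. A hedged sketch of how such a pair can be loaded dynamically (the actual KloudBuster wiring may differ; a stdlib pair is used for the demo instead of kloudbuster.tsdb/TSDB):

```python
import importlib

# Resolve a "module"/"class" config pair into a class object at runtime.
def load_class(module_name, class_name):
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Demo with a stdlib pair standing in for the configured TSDB class:
cls = load_class('collections', 'Counter')
assert cls('aab') == {'a': 2, 'b': 1}
```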
# Maximum time allowed for the Proxy Redis server to get ready in seconds
proxy_timeout_sec: 120
# ==================================================
# SERVER SIDE CONFIG OPTIONS (HTTP SERVER SIDE ONLY)
@ -129,8 +153,18 @@ client:
disk: 0
# metadata are supported and can be added if needed
# example:
#extra_specs:
#"hw:cpu_policy": dedicated
# Flavor to use for proxy instance
proxy_flavor:
# Number of vCPUs for the flavor
vcpus: 1
# Memory for the flavor in MB
ram: 2048
# Size of local disk in GB
disk: 0
# Size of ephemeral disk in GB
ephemeral: 0
# Assign floating IP for every client side test VM
# Default: no floating IP (only assign internal fixed IP)
@ -282,6 +316,16 @@ client:
# The size of the test file for running IO tests in GB. Must be less than or
# equal to disk_size. Defaults to 1 GB
io_file_size: 1
# Optional volume_type for cinder volumes
# Do not specify unless using QOS specs or testing a specific volume type
# Used to test multibackend support and QOS specs
# Must be a valid cinder volume type as listed by openstack volume type list
# Make sure volume type is public
# If an invalid volume type is specified, the tool will error out on volume create
# volume_type: cephtype
# Volume creation timeout value specified in sec
vol_create_timeout_sec: 150
# Storage tool specific configs (per VM)
# Multiple factors can impact the storage performance numbers, and KloudBuster is defining

@ -21,11 +21,11 @@ from keystoneauth1 import session
import os
import re
import kloudbuster.log as logging
LOG = logging.getLogger(__name__)
class Credentials():
def get_session(self):
dct = {

@ -15,7 +15,7 @@
import json
from kloudbuster.perf_tool import PerfTool
from hdrh.histogram import HdrHistogram
@ -55,6 +55,8 @@ class FioTool(PerfTool):
assign_dict(parsed_output, 'write_KB', job['write']['io_bytes'])
assign_dict(parsed_output, 'write_hist', job['write']['clat']['hist'], 'write_bw')
assign_dict(parsed_output, 'cpu', {'usr': job['usr_cpu'], 'sys': job['sys_cpu']})
except Exception:
return self.parse_error('Could not parse: "%s"' % (stdout))
return parsed_output
@ -76,6 +78,12 @@ class FioTool(PerfTool):
all_res[key] = int(total)
all_res['tool'] = results[0]['results']['tool']
all_cpus = []
for item in results:
all_cpus.append(item['results'].get('cpu', None))
if all_cpus:
all_res['cpu'] = all_cpus
clat_list = []
# perc_list = [1, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99, 99.5, 99.9, 99.95, 99.99]
perc_list = [50, 75, 90, 99, 99.9, 99.99, 99.999]
@ -91,7 +99,7 @@ class FioTool(PerfTool):
histogram.decode_and_add(item['results'][clat])
latency_dict = histogram.get_percentile_to_value_dict(perc_list)
for key, value in latency_dict.items():
all_res[clat].append([key, value])
all_res[clat].sort()
@ -100,10 +108,10 @@ class FioTool(PerfTool):
@staticmethod
def consolidate_samples(results, vm_count):
all_res = FioTool.consolidate_results(results)
total_count = len(results) // vm_count
if not total_count:
return all_res
all_res['read_iops'] = all_res['read_iops'] // total_count
all_res['write_iops'] = all_res['write_iops'] // total_count
return all_res

@ -1,4 +1,4 @@
#!/usr/bin/env python3
# Copyright 2016 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@ -26,7 +26,7 @@
# #
# It is safe to use the script with the resource list generated by #
# KloudBuster, usage: #
# $ python3 force_cleanup.py --file kb_20150807_183001_svr.log #
# #
# Note: If running under single-tenant or tenant/user reusing mode, you have #
# to cleanup the server resources first, then client resources. #
@ -57,20 +57,25 @@ import traceback
# openstack python clients
import cinderclient
from cinderclient.client import Client as CinderClient
import keystoneclient
from keystoneclient.client import Client as KeystoneClient
import neutronclient
from neutronclient.neutron.client import Client as NeutronClient
from novaclient.client import Client as NovaClient
from novaclient.exceptions import NotFound
from tabulate import tabulate
# kloudbuster base code
import kloudbuster.credentials as credentials
resource_name_re = None
def prompt_to_run():
print "Warning: You didn't specify a resource list file as the input. "\
"The script will delete all resources shown above."
answer = raw_input("Are you sure? (y/n) ")
print("Warning: You didn't specify a resource list file as the input. "
"The script will delete all resources shown above.")
answer = input("Are you sure? (y/n) ")
if answer.lower() != 'y':
sys.exit(0)
@ -83,7 +88,7 @@ def fetch_resources(fetcher, options=None):
except Exception as e:
res_list = []
traceback.print_exc()
print "Warning exception while listing resources:" + str(e)
print('Warning exception while listing resources:', str(e))
resources = {}
for res in res_list:
# some objects provide direct access some
@ -98,16 +103,15 @@ def fetch_resources(fetcher, options=None):
resources[resid] = resname
return resources
class AbstractCleaner(metaclass=ABCMeta):
def __init__(self, res_category, res_desc, resources, dryrun):
self.dryrun = dryrun
self.category = res_category
self.resources = {}
if not resources:
print('Discovering %s resources...' % (res_category))
for rtype, fetch_args in res_desc.items():
if resources:
if rtype in resources:
self.resources[rtype] = resources[rtype]
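The `metaclass=ABCMeta` keyword used above is the Python 3 replacement for the py2 `__metaclass__` class attribute; combined with abstractmethod, it blocks direct instantiation of the base cleaner. A minimal self-contained sketch (class names are illustrative):

```python
from abc import ABCMeta, abstractmethod

# Py3 abstract base class: the metaclass= keyword replaces py2's
# __metaclass__ attribute, and abstract methods must be overridden.
class Cleaner(metaclass=ABCMeta):
    @abstractmethod
    def clean(self):
        ...

class NoopCleaner(Cleaner):
    def clean(self):
        return 'cleaned'

try:
    Cleaner()          # abstract: refuses to instantiate
    raised = False
except TypeError:
    raised = True
assert raised and NoopCleaner().clean() == 'cleaned'
```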
@ -116,20 +120,20 @@ class AbstractCleaner(object):
def report_deletion(self, rtype, name):
if self.dryrun:
print(' + ' + rtype + ' ' + name + ' should be deleted (but is not deleted: dry run)')
else:
print(' + ' + rtype + ' ' + name + ' is successfully deleted')
def report_not_found(self, rtype, name):
print(' ? ' + rtype + ' ' + name + ' not found (already deleted?)')
def report_error(self, rtype, name, reason):
print(' - ' + rtype + ' ' + name + ' ERROR:' + reason)
def get_resource_list(self):
result = []
for rtype, rdict in self.resources.items():
for resid, resname in rdict.items():
result.append([rtype, resname, resid])
return result
@ -139,21 +143,20 @@ class AbstractCleaner(object):
class StorageCleaner(AbstractCleaner):
def __init__(self, sess, resources, dryrun):
self.nova = NovaClient('2', endpoint_type='publicURL', session=sess)
self.cinder = CinderClient('2', endpoint_type='publicURL', session=sess)
res_desc = {'volumes': [self.cinder.volumes.list, {"all_tenants": 1}]}
super(StorageCleaner, self).__init__('Storage', res_desc, resources, dryrun)
def clean(self):
print('*** STORAGE cleanup')
try:
kb_volumes = []
kb_detaching_volumes = []
for id, name in self.resources['volumes'].items():
try:
vol = self.cinder.volumes.get(id)
if vol.attachments:
@ -162,15 +165,15 @@ class StorageCleaner(AbstractCleaner):
if not self.dryrun:
ins_id = vol.attachments[0]['server_id']
self.nova.volumes.delete_server_volume(ins_id, id)
print(' . VOLUME ' + vol.name + ' detaching...')
else:
print(' . VOLUME ' + vol.name + ' to be detached...')
kb_detaching_volumes.append(vol)
except NotFound:
print('WARNING: Volume %s attached to an instance that no longer '
'exists (will require manual cleanup of the database)' % id)
except Exception as e:
print(str(e))
else:
# no attachments
kb_volumes.append(vol)
@ -180,8 +183,8 @@ class StorageCleaner(AbstractCleaner):
# check that the volumes are no longer attached
if kb_detaching_volumes:
if not self.dryrun:
print(' . Waiting for %d volumes to be fully detached...' %
(len(kb_detaching_volumes)))
retry_count = 5 + len(kb_detaching_volumes)
while True:
retry_count -= 1
@ -190,19 +193,19 @@ class StorageCleaner(AbstractCleaner):
latest_vol = self.cinder.volumes.get(kb_detaching_volumes[0].id)
if self.dryrun or not latest_vol.attachments:
if not self.dryrun:
print(' + VOLUME ' + vol.name + ' detach complete')
kb_detaching_volumes.remove(vol)
kb_volumes.append(vol)
if kb_detaching_volumes and not self.dryrun:
if retry_count:
print(' . VOLUME %d left to be detached, retries left=%d...' %
(len(kb_detaching_volumes), retry_count))
time.sleep(2)
else:
print(' - VOLUME detach timeout, %d volumes left:' %
len(kb_detaching_volumes))
for vol in kb_detaching_volumes:
print(' ', vol.name, vol.status, vol.id, vol.attachments)
break
else:
break
@ -213,17 +216,15 @@ class StorageCleaner(AbstractCleaner):
try:
vol.force_delete()
except cinderclient.exceptions.BadRequest as exc:
print(str(exc))
self.report_deletion('VOLUME', vol.name)
except KeyError:
pass
class ComputeCleaner(AbstractCleaner):
def __init__(self, sess, resources, dryrun):
from neutronclient.neutron import client as nclient
from novaclient import client as novaclient
self.neutron_client = NeutronClient('2.0', endpoint_type='publicURL', session=sess)
self.nova_client = NovaClient('2', endpoint_type='publicURL', session=sess)
res_desc = {
'instances': [self.nova_client.servers.list, {"all_tenants": 1}],
'flavors': [self.nova_client.flavors.list],
@ -232,15 +233,16 @@ class ComputeCleaner(AbstractCleaner):
super(ComputeCleaner, self).__init__('Compute', res_desc, resources, dryrun)
def clean(self):
print('*** COMPUTE cleanup')
try:
# Get a list of floating IPs
fip_lst = self.neutron_client.list_floatingips()['floatingips']
deleting_instances = self.resources['instances']
for id, name in self.resources['instances'].items():
try:
addrs = list(self.nova_client.servers.get(id).addresses.values())
if addrs:
ins_addr = addrs[0]
fips = [x['addr'] for x in ins_addr if x['OS-EXT-IPS:type'] == 'floating']
else:
fips = []
@ -259,33 +261,36 @@ class ComputeCleaner(AbstractCleaner):
deleting_instances.remove(id)
self.report_not_found('INSTANCE', name)
if not self.dryrun and deleting_instances:
print(' . Waiting for %d instances to be fully deleted...' %
len(deleting_instances))
retry_count = 5 + len(deleting_instances)
while True:
retry_count -= 1
# get a copy of the initial list content
instances_list = list(deleting_instances)
for ins_id in instances_list:
try:
self.nova_client.servers.get(ins_id)
except NotFound:
self.report_deletion('INSTANCE', deleting_instances[ins_id])
deleting_instances.pop(ins_id)
if not deleting_instances:
# all deleted
break
if retry_count:
print(' . INSTANCE %d left to be deleted, retries left=%d...' %
(len(deleting_instances), retry_count))
time.sleep(2)
else:
print(' - INSTANCE deletion timeout, %d instances left:' %
len(deleting_instances))
for ins_id in deleting_instances.keys():
try:
ins = self.nova_client.servers.get(ins_id)
print(' ', ins.name, ins.status, ins.id)
except NotFound:
print(' ', deleting_instances[ins_id],
'(just deleted)', ins_id)
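The `list(deleting_instances)` copy above matters under Python 3: popping from a dict while iterating its live key view raises `RuntimeError: dictionary changed size during iteration`. A small illustrative helper (names are hypothetical):

```python
def drop_done(pending, done_ids):
    """Remove finished ids from pending.

    Iterate over a snapshot of the keys (list() copies them) so the
    dict itself can be mutated safely inside the loop."""
    for ins_id in list(pending):
        if ins_id in done_ids:
            pending.pop(ins_id)
    return pending
```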
@ -294,7 +299,7 @@ class ComputeCleaner(AbstractCleaner):
pass
try:
for id, name in self.resources['flavors'].items():
try:
flavor = self.nova_client.flavors.find(name=name)
if not self.dryrun:
@ -306,7 +311,7 @@ class ComputeCleaner(AbstractCleaner):
pass
try:
for id, name in self.resources['keypairs'].items():
try:
if self.dryrun:
self.nova_client.keypairs.get(name)
@ -321,8 +326,7 @@ class ComputeCleaner(AbstractCleaner):
class NetworkCleaner(AbstractCleaner):
def __init__(self, sess, resources, dryrun):
from neutronclient.neutron import client as nclient
self.neutron = NeutronClient('2.0', endpoint_type='publicURL', session=sess)
# because the response has an extra level of indirection
# we need to extract it to present the list of network or router objects
@ -357,10 +361,10 @@ class NetworkCleaner(AbstractCleaner):
pass
def clean(self):
print('*** NETWORK cleanup')
try:
for id, name in self.resources['sec_groups'].items():
try:
if self.dryrun:
self.neutron.show_security_group(id)
@ -373,7 +377,7 @@ class NetworkCleaner(AbstractCleaner):
pass
try:
for id, name in self.resources['floating_ips'].items():
try:
if self.dryrun:
self.neutron.show_floatingip(id)
@ -386,7 +390,7 @@ class NetworkCleaner(AbstractCleaner):
pass
try:
for id, name in self.resources['routers'].items():
try:
if self.dryrun:
self.neutron.show_router(id)
@ -412,7 +416,7 @@ class NetworkCleaner(AbstractCleaner):
except KeyError:
pass
try:
for id, name in self.resources['networks'].items():
try:
if self.dryrun:
self.neutron.show_network(id)
@ -429,7 +433,7 @@ class NetworkCleaner(AbstractCleaner):
class KeystoneCleaner(AbstractCleaner):
def __init__(self, sess, resources, dryrun):
self.keystone = KeystoneClient(endpoint_type='publicURL', session=sess)
self.tenant_api = self.keystone.tenants \
if self.keystone.version == 'v2.0' else self.keystone.projects
res_desc = {
@ -439,9 +443,9 @@ class KeystoneCleaner(AbstractCleaner):
super(KeystoneCleaner, self).__init__('Keystone', res_desc, resources, dryrun)
def clean(self):
print('*** KEYSTONE cleanup')
try:
for id, name in self.resources['users'].items():
try:
if self.dryrun:
self.keystone.users.get(id)
@ -454,7 +458,7 @@ class KeystoneCleaner(AbstractCleaner):
pass
try:
for id, name in self.resources['tenants'].items():
try:
if self.dryrun:
self.tenant_api.get(id)
@ -466,7 +470,7 @@ class KeystoneCleaner(AbstractCleaner):
except KeyError:
pass
class KbCleaners():
def __init__(self, creds_obj, resources, dryrun):
self.cleaners = []
@ -479,13 +483,13 @@ class KbCleaners(object):
for cleaner in self.cleaners:
table.extend(cleaner.get_resource_list())
count = len(table) - 1
print()
if count:
print('SELECTED RESOURCES:')
print(tabulate(table, headers="firstrow", tablefmt="psql"))
else:
print('There are no resources to delete.')
print()
return count
def clean(self):
@ -511,7 +515,7 @@ def get_resources_from_cleanup_log(logfile):
if not resid:
# normally only the keypairs have no ID
if restype != "keypairs":
print('Error: resource type %s has no ID - ignored!!!' % (restype))
else:
resid = '0'
if restype not in resources:
@ -556,9 +560,9 @@ def main():
try:
resource_name_re = re.compile(opts.filter)
except Exception as exc:
print('Provided filter is not a valid python regular expression: ' + opts.filter)
print(str(exc))
return 1
else:
resource_name_re = re.compile('KB')
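The validation above compiles the user-supplied pattern, falls back to the 'KB' prefix when none is given, and rejects invalid regexes. The same logic as a standalone sketch (function name is illustrative, not part of KloudBuster):

```python
import re

def compile_filter(pattern=None, default='KB'):
    """Compile a resource-name filter.

    Falls back to the default prefix when no pattern is given;
    returns None when the pattern is not a valid regular expression."""
    try:
        return re.compile(pattern if pattern else default)
    except re.error:
        return None
```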
@ -566,19 +570,21 @@ def main():
cleaners = KbCleaners(cred, resources, opts.dryrun)
if opts.dryrun:
print()
print('!!! DRY RUN - RESOURCES WILL BE CHECKED BUT WILL NOT BE DELETED !!!')
print()
# Display resources to be deleted
count = cleaners.show_resources()
if not count:
return 0
if not opts.file and not opts.dryrun:
prompt_to_run()
cleaners.clean()
return 0
if __name__ == '__main__':
sys.exit(main())

@ -15,21 +15,24 @@
import os
import sys
import yaml
from pathlib import Path
from attrdict import AttrDict
from oslo_config import cfg
from pkg_resources import resource_string

import kloudbuster.credentials as credentials
from kloudbuster.__init__ import __version__
import kloudbuster.log as logging
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
class KBConfigParseException(Exception):
pass
# Some hardcoded client side options we do not want users to change
hardcoded_client_cfg = {
# Number of tenants to be created on the cloud
@ -49,6 +52,7 @@ hardcoded_client_cfg = {
'secgroups_per_network': 1
}
def get_absolute_path_for_file(file_name):
'''
Return the filename in absolute path for any file
@ -64,8 +68,8 @@ def get_absolute_path_for_file(file_name):
return abs_file_path
class KBConfig():
def __init__(self):
# The default configuration file for KloudBuster
default_cfg = resource_string(__name__, "cfg.scale.yaml")
@ -80,6 +84,9 @@ class KBConfig(object):
self.tenants_list = None
self.storage_mode = False
self.multicast_mode = False
self.tsdb = None
self.tsdb_module = False
self.tsdb_class = False
def update_configs(self):
# Initialize the key pair name
@ -121,17 +128,13 @@ class KBConfig(object):
# Check if the default image is located at the default locations
# if vm_image_file is empty
if not self.config_scale['vm_image_file']:
img_path_list = [os.getcwd(), str(Path.home()), '/']
for img_path in img_path_list:
default_image_file = os.path.join(img_path, default_image_name + '.qcow2')
if os.path.isfile(default_image_file):
self.config_scale['vm_image_file'] = default_image_file
LOG.info('Found VM image: %s', default_image_file)
break
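The new lookup walks a short list of candidate directories (current directory, home, filesystem root) and takes the first default qcow2 image it finds. The same idea as a self-contained helper (a sketch; the search list is the assumption made above):

```python
import os
from pathlib import Path

def find_vm_image(name, search_dirs=None):
    """Return the first '<name>.qcow2' found in search_dirs, else None."""
    if search_dirs is None:
        # same candidates as the config code above
        search_dirs = [os.getcwd(), str(Path.home()), '/']
    for d in search_dirs:
        candidate = os.path.join(d, name + '.qcow2')
        if os.path.isfile(candidate):
            return candidate
    return None
```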
# A bit of config dict surgery, extract out the client and server side
# and transplant the remaining (common part) into the client and server dict
@ -155,7 +158,7 @@ class KBConfig(object):
# If multicast mode, the number of receivers is specified in the multicast config instead.
if self.multicast_mode:
self.server_cfg['vms_per_network'] = \
self.client_cfg['multicast_tool_configs']['receivers'][-1]
self.config_scale['server'] = self.server_cfg
@ -169,6 +172,12 @@ class KBConfig(object):
tc['rate'] = '0'
if 'rate_iops' not in tc:
tc['rate_iops'] = 0
if not self.tsdb:
self.tsdb = self.config_scale['tsdb']
if not self.tsdb_module:
self.tsdb_module = self.config_scale['tsdb']['module']
if not self.tsdb_class:
self.tsdb_class = self.config_scale['tsdb']['class']
def init_with_cli(self):
self.storage_mode = CONF.storage
@ -231,4 +240,4 @@ class KBConfig(object):
self.config_scale['number_tenants'] = 1
except Exception as e:
LOG.error('Cannot parse the count of tenant/user from the config file.')
raise KBConfigParseException(str(e))
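`Exception.message` was removed in Python 3, which is why the raise now wraps `str(e)`. A tiny check of the portable spelling:

```python
def exc_text(exc):
    """Python 3 exceptions have no .message attribute; str() is portable."""
    return str(exc)

try:
    raise ValueError("Cannot parse the count of tenant/user")
except ValueError as err:
    detail = exc_text(err)
```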

@ -12,7 +12,7 @@
# License for the specific language governing permissions and limitations
# under the License.
import kloudbuster.log as logging
from time import gmtime
from time import strftime
@ -21,7 +21,7 @@ LOG = logging.getLogger(__name__)
class KBResTypeInvalid(Exception):
pass
class KBResLogger():
def __init__(self):
self.resource_list = {}

@ -12,21 +12,31 @@
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import division
import abc
from collections import deque
import json
import redis
import sys
import threading
import time

import kloudbuster.log as logging
# A set of warned VM version mismatches
vm_version_mismatches = set()
LOG = logging.getLogger(__name__)
def cmp(x, y):
"""
Replacement for built-in function cmp that was removed in Python 3
Compare the two objects x and y and return an integer according to
the outcome. The return value is negative if x < y, zero if x == y
and strictly positive if x > y.
"""
return (x > y) - (x < y)
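Python 3 also removed the `cmp=` argument of `list.sort`/`sorted`, so a comparator like the replacement above has to go through `functools.cmp_to_key`. A sketch using the same VM-name suffix ordering the runners rely on (helper names are illustrative):

```python
from functools import cmp_to_key

def cmp(x, y):
    # same replacement as above: negative, zero, or positive
    return (x > y) - (x < y)

def sort_by_instance_index(names):
    """Order VM names by the integer after the last 'I' (e.g. ...-I12)."""
    return sorted(names, key=cmp_to_key(
        lambda a, b: cmp(int(a[a.rfind('I') + 1:]), int(b[b.rfind('I') + 1:]))))
```

In practice a plain `key=lambda n: int(n[n.rfind('I') + 1:])` achieves the same ordering without the comparator.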
class KBException(Exception):
pass
@ -36,7 +46,7 @@ class KBVMUpException(KBException):
class KBProxyConnectionException(KBException):
pass
class KBRunner():
"""
Control the testing VMs on the testing cloud
"""
@ -51,6 +61,7 @@ class KBRunner(object):
self.tool_result = {}
self.agent_version = None
self.report = None
self.msg_thread = None
# Redis
self.redis_obj = None
@ -61,13 +72,13 @@ class KBRunner(object):
def msg_handler(self):
for message in self.pubsub.listen():
if message['data'] == b"STOP":
break
LOG.kbdebug(message)
self.message_queue.append(message)
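Comparing against `b"STOP"` is needed because under Python 3 the redis client yields message payloads as bytes unless `decode_responses=True` is set on the connection. A small helper showing the bytes/str distinction (hypothetical, not KloudBuster's code):

```python
def is_stop(payload):
    """Accept the STOP sentinel whether it arrives as bytes or str."""
    if isinstance(payload, bytes):
        payload = payload.decode('utf-8', errors='replace')
    return payload == "STOP"
```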
def setup_redis(self, redis_server, redis_server_port=6379, timeout=120):
LOG.info("Connecting to redis server in proxy VM %s:%d...", redis_server, redis_server_port)
connection_pool = redis.ConnectionPool(
host=redis_server, port=redis_server_port, db=0)
@ -77,14 +88,11 @@ class KBRunner(object):
success = False
retry_count = max(timeout // self.config.polling_interval, 1)
# Check for connections to redis server
for retry in range(retry_count):
try:
self.redis_obj.get("test")
success = True
except redis.exceptions.ConnectionError:
LOG.info("Connecting to redis server... Retry #%d/%d", retry, retry_count)
time.sleep(self.config.polling_interval)
continue
@ -111,8 +119,8 @@ class KBRunner(object):
def send_cmd(self, cmd, client_type, data):
message = {'cmd': cmd, 'sender-id': 'kb-master',
'client-type': client_type, 'data': data}
LOG.kbdebug(str(message))
self.redis_obj.publish(self.orches_chan_name, str(message))
def polling_vms(self, timeout, polling_interval=None):
'''
@ -125,9 +133,9 @@ class KBRunner(object):
retry = cnt_succ = cnt_failed = 0
clist = self.client_dict.copy()
samples = []
perf_tool = list(self.client_dict.values())[0].perf_tool
while (retry < retry_count and clist):
time.sleep(polling_interval)
sample_count = 0
while True:
@ -135,10 +143,9 @@ class KBRunner(object):
msg = self.message_queue.popleft()
except IndexError:
# No new message, commands are in executing
break
# pylint: disable=eval-used
payload = eval(msg['data'])
vm_name = payload['sender-id']
cmd = payload['cmd']
@ -149,11 +156,10 @@ class KBRunner(object):
instance = self.full_client_dict[vm_name]
if instance.up_flag:
continue
clist[vm_name].up_flag = True
clist.pop(vm_name)
cnt_succ = cnt_succ + 1
self.agent_version = payload['data']
elif cmd == 'REPORT':
sample_count = sample_count + 1
# Parse the results from HTTP Tools
@ -183,8 +189,8 @@ class KBRunner(object):
else:
LOG.error('[%s] received invalid command: %s' % (vm_name, cmd))
log_msg = "VMs: %d Ready, %d Failed, %d Pending... Retry #%d/%d" %\
(cnt_succ, cnt_failed, len(clist), retry, retry_count)
if sample_count != 0:
log_msg += " (%d sample(s) received)" % sample_count
LOG.info(log_msg)
@ -202,6 +208,7 @@ class KBRunner(object):
LOG.info("Waiting for agents on VMs to come up...")
cnt_succ = self.polling_vms(timeout)[0]
if cnt_succ != len(self.client_dict):
print('Exception %d != %d' % (cnt_succ, len(self.client_dict)))
raise KBVMUpException("Some VMs failed to start.")
self.send_cmd('ACK', None, None)
@ -213,14 +220,19 @@ class KBRunner(object):
self.host_stats[phy_host] = []
self.host_stats[phy_host].append(self.result[vm])
perf_tool = list(self.client_dict.values())[0].perf_tool
for phy_host in self.host_stats:
self.host_stats[phy_host] = perf_tool.consolidate_results(self.host_stats[phy_host])
@abc.abstractmethod
def run(self, test_only=False, run_label=None):
# must be implemented by sub classes
return None
def stop(self):
self.send_cmd('ABORT', None, None)
def get_sorted_vm_list(self):
vm_list = list(self.full_client_dict.keys())
vm_list.sort(key=lambda x: int(x[x.rfind('I') + 1:]))
return vm_list

@ -12,11 +12,9 @@
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import division
from kloudbuster.kb_runner_base import KBException
from kloudbuster.kb_runner_base import KBRunner
import kloudbuster.log as logging
LOG = logging.getLogger(__name__)
@ -84,8 +82,7 @@ class KBRunner_HTTP(KBRunner):
self.check_http_service(active_range)
if self.config.prompt_before_run:
_ = input("Press enter to start running benchmarking tools...")
LOG.info("Running HTTP Benchmarking...")
self.report = {'seq': 0, 'report': None}
@ -93,9 +90,10 @@ class KBRunner_HTTP(KBRunner):
self.run_http_test(active_range)
# Call the method in corresponding tools to consolidate results
perf_tool = list(self.client_dict.values())[0].perf_tool
results = list(self.result.values())
LOG.kbdebug(results)
self.tool_result = perf_tool.consolidate_results(results)
self.tool_result['http_rate_limit'] =\
len(self.client_dict) * self.config.http_tool_configs.rate_limit
self.tool_result['total_connections'] =\
@ -112,10 +110,7 @@ class KBRunner_HTTP(KBRunner):
except KBHTTPBenchException:
raise KBException("Error while running HTTP benchmarking tool.")
def run(self, test_only=False, run_label=None):
if self.config.progression.enabled:
self.tool_result = {}
@ -123,8 +118,7 @@ class KBRunner_HTTP(KBRunner):
multiple = self.config.progression.vm_multiple
limit = self.config.progression.http_stop_limit
timeout = self.config.http_tool_configs.timeout
vm_list = self.get_sorted_vm_list()
self.client_dict = {}
cur_stage = 1
@ -140,7 +134,7 @@ class KBRunner_HTTP(KBRunner):
if self.tool_result and 'latency_stats' in self.tool_result:
err = self.tool_result['http_sock_err'] + self.tool_result['http_sock_timeout']
pert_dict = dict(self.tool_result['latency_stats'])
if limit[1] in pert_dict:
timeout_at_percentile = pert_dict[limit[1]] // 1000000
elif limit[1] != 0:
LOG.warning('Percentile %s%% is not a standard statistic point.' % limit[1])
@ -149,7 +143,7 @@ class KBRunner_HTTP(KBRunner):
'reaches the stop limit.')
break
for idx in range(cur_vm_count, target_vm_count):
self.client_dict[vm_list[idx]] = self.full_client_dict[vm_list[idx]]
description = "-- %s --" % self.header_formatter(cur_stage, len(self.client_dict))
LOG.info(description)

@ -12,11 +12,9 @@
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import division
from kloudbuster.kb_runner_base import KBException
from kloudbuster.kb_runner_base import KBRunner
import kloudbuster.log as logging
LOG = logging.getLogger(__name__)
class KBMulticastServerUpException(KBException):
@ -41,12 +39,12 @@ class KBRunner_Multicast(KBRunner):
def setup_static_route(self, active_range, timeout=30):
func = {'cmd': 'setup_static_route', 'active_range': active_range}
self.send_cmd('EXEC', 'multicast', func)
_ = self.polling_vms(timeout)[0]
def check_multicast_service(self, active_range, timeout=30):
func = {'cmd': 'check_multicast_service', 'active_range': active_range}
self.send_cmd('EXEC', 'multicast', func)
_ = self.polling_vms(timeout)[0]
def run_multicast_test(self, active_range, opts, timeout):
func = {'cmd': 'run_multicast_test', 'active_range': active_range,
@ -61,10 +59,10 @@ class KBRunner_Multicast(KBRunner):
@staticmethod
def json_to_csv(jsn):
csv = "Test,receivers,addresses,ports,bitrate,pkt_size,"
firstKey = list(jsn)[0]
keys = jsn[firstKey].keys()
csv += ",".join(keys) + "\r\n"
for obj_k in jsn:
obj = jsn[obj_k]
obj_vals = map(str, obj.values())
csv += '"' + obj_k + '"' + "," + obj_k + "," + ",".join(obj_vals) + "\r\n"
@ -81,8 +79,7 @@ class KBRunner_Multicast(KBRunner):
self.check_multicast_service(active_range)
if self.config.prompt_before_run:
_ = input("Press enter to start running benchmarking tools...")
LOG.info("Running Multicast Benchmarking...")
self.report = {'seq': 0, 'report': None}
@ -98,14 +95,10 @@ class KBRunner_Multicast(KBRunner):
except KBMulticastBenchException:
raise KBException("Error while running multicast benchmarking tool.")
def run(self, test_only=False, run_label=None):
self.tool_result = {}
vm_list = self.get_sorted_vm_list()
self.client_dict = {}
cur_stage = 1
@ -128,7 +121,7 @@ class KBRunner_Multicast(KBRunner):
server_port = 5000
for nReceiver in receivers:
for _ in range(0, nReceiver):
self.client_dict[vm_list[0]] = self.full_client_dict[vm_list[0]]
if nReceiver > 1:

@ -12,11 +12,9 @@
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import division
from kloudbuster.kb_runner_base import KBException
from kloudbuster.kb_runner_base import KBRunner
import kloudbuster.log as logging
LOG = logging.getLogger(__name__)
@ -74,8 +72,7 @@ class KBRunner_Storage(KBRunner):
# timeout is calculated as 30s/GB/client VM
timeout = 60 * self.config.storage_stage_configs.io_file_size * len(self.client_dict)
parameter = {'size': str(self.config.storage_stage_configs.io_file_size) + 'GiB'}
parameter['mkfs'] = bool(self.config.storage_stage_configs.target == 'volume')
func = {'cmd': 'init_volume', 'active_range': active_range,
'parameter': parameter}
@ -104,7 +101,7 @@ class KBRunner_Storage(KBRunner):
self.result[key] = instance.perf_client_parser(**self.result[key])
return cnt_pending
def single_run(self, active_range=None, test_only=False, run_label=None):
try:
if not test_only:
if self.config.storage_stage_configs.target == 'volume':
@ -114,11 +111,10 @@ class KBRunner_Storage(KBRunner):
self.init_volume(active_range)
if self.config.prompt_before_run:
_ = input("Press enter to start running benchmarking tools...")
test_count = len(self.config.storage_tool_configs)
perf_tool = list(self.client_dict.values())[0].perf_tool
self.tool_result = []
vm_count = active_range[1] - active_range[0] + 1\
if active_range else len(self.full_client_dict)
@ -130,9 +126,10 @@ class KBRunner_Storage(KBRunner):
timeout_vms = self.run_storage_test(active_range, dict(cur_config))
# Call the method in corresponding tools to consolidate results
results = list(self.result.values())
LOG.kbdebug(results)
tc_result = perf_tool.consolidate_results(results)
tc_result['description'] = cur_config['description']
tc_result['mode'] = cur_config['mode']
tc_result['block_size'] = cur_config['block_size']
@ -150,6 +147,8 @@ class KBRunner_Storage(KBRunner):
tc_result['rate'] = req_rate
tc_result['total_client_vms'] = vm_count
tc_result['timeout_vms'] = timeout_vms
if run_label:
tc_result['run_label'] = run_label
self.tool_result.append(tc_result)
if timeout_vms:
return timeout_vms
@ -158,10 +157,7 @@ class KBRunner_Storage(KBRunner):
except KBInitVolumeException:
raise KBException("Could not initilize the volume.")
def run(self, test_only=False, run_label=None):
if self.config.progression.enabled:
self.tool_result = {}
@ -169,8 +165,8 @@ class KBRunner_Storage(KBRunner):
start = self.config.progression.vm_start
multiple = self.config.progression.vm_multiple
limit = self.config.progression.storage_stop_limit
vm_list = self.get_sorted_vm_list()
self.client_dict = {}
cur_stage = 1
@ -184,13 +180,14 @@ class KBRunner_Storage(KBRunner):
if target_vm_count > len(self.full_client_dict):
break
for idx in range(cur_vm_count, target_vm_count):
self.client_dict[vm_list[idx]] = self.full_client_dict[vm_list[idx]]
description = "-- %s --" % self.header_formatter(cur_stage, len(self.client_dict))
LOG.info(description)
timeout_vms = self.single_run(active_range=[0, target_vm_count - 1],
test_only=test_only,
run_label=run_label)
LOG.info('-- Stage %s: %s --' % (cur_stage, str(self.tool_result)))
cur_stage += 1
@ -210,9 +207,8 @@ class KBRunner_Storage(KBRunner):
if req_iops or req_rate:
degrade_iops = (req_iops - cur_iops) * 100 / req_iops if req_iops else 0
degrade_rate = (req_rate - cur_rate) * 100 / req_rate if req_rate else 0
if (cur_tc['mode'] in ['randread', 'randwrite'] and degrade_iops > limit) or \
(cur_tc['mode'] in ['read', 'write'] and degrade_rate > limit):
LOG.warning('KloudBuster is stopping the iteration '
'because the result reaches the stop limit.')
tc_flag = False
@ -224,5 +220,5 @@ class KBRunner_Storage(KBRunner):
break
yield self.tool_result
else:
self.single_run(test_only=test_only, run_label=run_label)
yield self.tool_result

@ -12,7 +12,7 @@
# License for the specific language governing permissions and limitations
# under the License.
import kloudbuster.log as logging
LOG = logging.getLogger(__name__)
@ -22,7 +22,7 @@ class KBVMMappingAlgoNotSup(Exception):
class KBVMPlacementAlgoNotSup(Exception):
pass
class KBScheduler():
"""
1. VM Placements
2. Mapping client VMs to target servers

@ -1 +0,0 @@
../kb_dib/elements/kloudbuster/static/kb_test/kb_vm_agent.py

@ -1,4 +1,4 @@
#!/usr/bin/env python3
# Copyright 2016 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@ -13,10 +13,11 @@
# License for the specific language governing permissions and limitations
# under the License.
from kloudbuster.__init__ import __version__
from concurrent.futures import ThreadPoolExecutor
import datetime
import importlib
import json
import os
import sys
@ -25,47 +26,52 @@ import time
import traceback
import webbrowser
from cinderclient.client import Client as CinderClient
from glanceclient import exc as glance_exception
from glanceclient.v2.client import Client as GlanceClient
import keystoneauth1
from keystoneclient import client as keystoneclient
from neutronclient.neutron.client import Client as NeutronClient
from novaclient.client import Client as NovaClient
from oslo_config import cfg
from pkg_resources import resource_filename
from pkg_resources import resource_string
from tabulate import tabulate

import kloudbuster.base_compute as base_compute
import kloudbuster.base_network as base_network
from kloudbuster.kb_config import KBConfig
from kloudbuster.kb_res_logger import KBResLogger
from kloudbuster.kb_runner_base import KBException
from kloudbuster.kb_runner_http import KBRunner_HTTP
from kloudbuster.kb_runner_multicast import KBRunner_Multicast
from kloudbuster.kb_runner_storage import KBRunner_Storage
from kloudbuster.kb_scheduler import KBScheduler
import kloudbuster.log as logging
import kloudbuster.tenant as tenant
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
class KBVMCreationException(Exception):
pass
class KBFlavorCheckException(Exception):
pass
# flavor names to use
FLAVOR_KB_PROXY = 'KB.proxy'
FLAVOR_KB_CLIENT = 'KB.client'
FLAVOR_KB_SERVER = 'KB.server'
class Kloud():
def __init__(self, scale_cfg, cred, reusing_tenants, vm_img,
testing_side=False, storage_mode=False, multicast_mode=False):
self.tenant_list = []
@ -92,12 +98,12 @@ class Kloud(object):
# these client handles use the kloudbuster credentials (usually admin)
# to do tenant creation, tenant nova+cinder quota allocation and the like
self.keystone = keystoneclient.Client(session=self.osclient_session)
self.neutron_client = NeutronClient('2.0', endpoint_type='publicURL',
session=self.osclient_session)
self.nova_client = NovaClient('2', endpoint_type='publicURL',
session=self.osclient_session)
self.cinder_client = CinderClient('2', endpoint_type='publicURL',
session=self.osclient_session)
LOG.info("Creating kloud: " + self.prefix)
if self.placement_az:
LOG.info('%s Availability Zone: %s' % (self.name, self.placement_az))
@ -109,6 +115,7 @@ class Kloud(object):
flavor_manager = base_compute.Flavor(self.nova_client)
fcand = {'vcpus': sys.maxsize, 'ram': sys.maxsize, 'disk': sys.maxsize}
# find the smallest flavor that is at least 1vcpu, 1024MB ram and 10MB disk
find_flag = False
for flavor in flavor_manager.list():
flavor = vars(flavor)
if flavor['vcpus'] < 1 or flavor['ram'] < 1024 or flavor['disk'] < 10:
@ -146,7 +153,7 @@ class Kloud(object):
reusing_users=user_list)
self.tenant_list.append(tenant_instance)
else:
for tenant_count in range(self.scale_cfg['number_tenants']):
tenant_name = self.prefix + "-T" + str(tenant_count)
tenant_instance = tenant.Tenant(tenant_name, self, tenant_quota)
self.res_logger.log('tenants', tenant_instance.tenant_name,
@@ -179,22 +186,16 @@ class Kloud(object):
else:
flavor_dict['ephemeral'] = 0
if self.testing_side:
proxy_flavor = {
"vcpus": 1,
"ram": 2048,
"disk": 0,
"ephemeral": 0
}
create_flavor(flavor_manager, FLAVOR_KB_PROXY, proxy_flavor, extra_specs)
proxy_flavor_dict = self.scale_cfg.proxy_flavor
create_flavor(flavor_manager, FLAVOR_KB_PROXY, proxy_flavor_dict, extra_specs)
create_flavor(flavor_manager, FLAVOR_KB_CLIENT, flavor_dict, extra_specs)
else:
create_flavor(flavor_manager, FLAVOR_KB_SERVER, flavor_dict, extra_specs)
def delete_resources(self):
if not self.reusing_tenants:
for fn, flavor in self.flavors.iteritems():
for fn, flavor in self.flavors.items():
LOG.info('Deleting flavor %s', fn)
try:
flavor.delete()
@@ -251,40 +252,40 @@ class Kloud(object):
if instance.vol:
instance.attach_vol()
instance.fixed_ip = instance.instance.networks.values()[0][0]
# example:
# instance.instance.networks = OrderedDict([('KBc-T0-U-R0-N0', ['10.1.0.194'])])
# there should be only 1 item in the ordered dict
instance.fixed_ip = list(instance.instance.networks.values())[0][0]
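The comment above points at a Python 2-to-3 gotcha worth spelling out: `dict.values()` now returns a non-indexable view. A minimal sketch, reusing the example values from the comment:

```python
from collections import OrderedDict

# Mirrors the example in the comment above: one network name mapped to one fixed IP
networks = OrderedDict([('KBc-T0-U-R0-N0', ['10.1.0.194'])])

# Python 2 allowed networks.values()[0]; in Python 3, values() is a view
# that does not support indexing, so it must be wrapped in list() first
fixed_ip = list(networks.values())[0][0]
```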
u_fip = instance.config['use_floatingip']
if instance.vm_name == "KB-PROXY" and not u_fip and not self.multicast_mode:
if self.scale_cfg['provider_network']:
instance.fip = None
elif instance.vm_name == "KB-PROXY" and not u_fip and not self.multicast_mode:
neutron_client = instance.network.router.user.neutron_client
external_network = base_network.find_external_network(neutron_client)
instance.fip = base_network.create_floating_ip(neutron_client, external_network)
ext_net = base_network.find_external_network(neutron_client)
port_id = instance.instance.interface_list()[0].id
# Associate the floating ip with this instance
instance.fip = base_network.create_floating_ip(neutron_client, ext_net, port_id)
instance.fip_ip = instance.fip['floatingip']['floating_ip_address']
self.res_logger.log('floating_ips',
instance.fip['floatingip']['floating_ip_address'],
instance.fip['floatingip']['id'])
if instance.fip:
# Associate the floating ip with this instance
instance.instance.add_floating_ip(instance.fip_ip)
instance.ssh_ip = instance.fip_ip
else:
# Store the fixed ip as ssh ip since there is no floating ip
instance.ssh_ip = instance.fixed_ip
instance.ssh_ip = instance.fip_ip if instance.fip else instance.fixed_ip
if not instance.vm_name == "KB-PROXY" and self.multicast_mode:
nc = instance.network.router.user.neutron_client
base_network.disable_port_security(nc, instance.fixed_ip)
def create_vms(self, vm_creation_concurrency):
try:
with ThreadPoolExecutor(max_workers=vm_creation_concurrency) as executor:
for feature in executor.map(self.create_vm, self.get_all_instances()):
for _ in executor.map(self.create_vm, self.get_all_instances()):
self.vm_up_count += 1
except Exception:
self.exc_info = sys.exc_info()
except Exception as exc:
# store a (type, value, traceback) tuple so the parent thread can re-raise it
self.exc_info = (type(exc), exc, exc.__traceback__)
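The creation loop above maps `create_vm` over all instances with a bounded worker pool and counts completions. A minimal self-contained sketch of that pattern (the worker body and names are placeholders, not the real provisioning code):

```python
from concurrent.futures import ThreadPoolExecutor

def create_vm(name):
    # Placeholder for the real per-VM provisioning work
    return name

instances = ['vm-%d' % i for i in range(5)]
vm_up_count = 0
# executor.map yields one result per input item as workers complete,
# so counting iterations tracks how many VMs have come up
with ThreadPoolExecutor(max_workers=2) as executor:
    for _ in executor.map(create_vm, instances):
        vm_up_count += 1
```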
class KloudBuster(object):
class KloudBuster():
"""
Creates resources on the cloud for loading up the cloud
1. Tenants
@@ -295,7 +296,8 @@ class KloudBuster(object):
"""
def __init__(self, server_cred, client_cred, server_cfg, client_cfg,
topology, tenants_list, storage_mode=False, multicast_mode=False):
topology, tenants_list, storage_mode=False, multicast_mode=False,
interactive=False, tsdb_connector=None):
# List of tenant objects to keep track of all tenants
self.server_cred = server_cred
self.client_cred = client_cred
@@ -303,6 +305,8 @@ class KloudBuster(object):
self.client_cfg = client_cfg
self.storage_mode = storage_mode
self.multicast_mode = multicast_mode
self.interactive = interactive
self.tsdb_connector = tsdb_connector
if topology and tenants_list:
self.topology = None
@@ -319,8 +323,8 @@ class KloudBuster(object):
LOG.warning("REUSING MODE: The flavor configs will be ignored.")
else:
self.tenants_list = {'server': None, 'client': None}
# TODO(check on same auth_url instead)
self.single_cloud = False if client_cred else True
# !TODO(check on same auth_url instead)
self.single_cloud = bool(not client_cred)
if not client_cred:
self.client_cred = server_cred
# Automatically enable the floating IP for server cloud under dual-cloud mode
@@ -342,7 +346,7 @@ class KloudBuster(object):
def get_hypervisor_list(self, cred):
ret_list = []
sess = cred.get_session()
nova_client = novaclient('2', endpoint_type='publicURL',
nova_client = NovaClient('2', endpoint_type='publicURL',
http_log_debug=True, session=sess)
for hypervisor in nova_client.hypervisors.list():
if vars(hypervisor)['status'] == 'enabled':
@@ -353,7 +357,7 @@ class KloudBuster(object):
def get_az_list(self, cred):
ret_list = []
sess = cred.get_session()
nova_client = novaclient('2', endpoint_type='publicURL',
nova_client = NovaClient('2', endpoint_type='publicURL',
http_log_debug=True, session=sess)
for az in nova_client.availability_zones.list():
zoneName = vars(az)['zoneName']
@@ -366,14 +370,11 @@ class KloudBuster(object):
def check_and_upload_image(self, kloud_name, image_name, image_url, sess, retry_count):
'''Check a VM image and upload it if not found
'''
glance_client = glanceclient.Client('2', session=sess)
try:
# Search for the image
img = glance_client.images.list(filters={'name': image_name}).next()
# image found
return img
except StopIteration:
sys.exc_clear()
glance_client = GlanceClient('2', session=sess)
# Search for the image
images = list(glance_client.images.list(filters={'name': image_name}))
if images:
return images[0]
# Trying to upload image
LOG.info("KloudBuster VM Image is not found in %s, trying to upload it..." % kloud_name)
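The rewritten lookup above replaces the Python 2 `iterator.next()` / `StopIteration` idiom with a materialized list and an emptiness check. A small sketch of the same pattern against a stand-in image list (the names here are illustrative, not the Glance API):

```python
def list_images(filters):
    # Stand-in for glance_client.images.list(filters=...): returns a generator
    all_images = [{'name': 'kloudbuster-image'}, {'name': 'cirros'}]
    return (img for img in all_images if img['name'] == filters['name'])

# Python 3 removed generator.next(); rather than calling next() and catching
# StopIteration, the migrated code materializes the result and tests for emptiness
images = list(list_images(filters={'name': 'cirros'}))
img = images[0] if images else None
```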
@@ -383,7 +384,7 @@ class KloudBuster(object):
retry = 0
try:
LOG.info("Uploading VM Image from %s..." % image_url)
with open(image_url) as f_image:
with open(image_url, "rb") as f_image:
img = glance_client.images.create(name=image_name,
disk_format="qcow2",
container_format="bare",
@@ -413,7 +414,7 @@ class KloudBuster(object):
return None
except Exception:
LOG.error(traceback.format_exc())
LOG.error("Failed while uploading the image: %s", str(exc))
LOG.exception("Failed while uploading the image")
return None
return img
@@ -450,7 +451,7 @@ class KloudBuster(object):
row = [instance.vm_name, instance.host, instance.fixed_ip,
instance.fip_ip, instance.subnet_ip, instance.shared_interface_ip]
table.append(row)
LOG.info('Provision Details (Tested Kloud)\n' +
LOG.info('Provision Details (Tested Kloud)\n%s',
tabulate(table, headers="firstrow", tablefmt="psql"))
table = [["VM Name", "Host", "Internal IP", "Floating IP", "Subnet"]]
@@ -459,7 +460,7 @@ class KloudBuster(object):
row = [instance.vm_name, instance.host, instance.fixed_ip,
instance.fip_ip, instance.subnet_ip]
table.append(row)
LOG.info('Provision Details (Testing Kloud)\n' +
LOG.info('Provision Details (Testing Kloud)\n%s',
tabulate(table, headers="firstrow", tablefmt="psql"))
def gen_server_user_data(self, test_mode):
@@ -546,7 +547,7 @@ class KloudBuster(object):
self.stage()
self.run_test()
except KBException as e:
LOG.error(e.message)
LOG.error(str(e))
except base_network.KBGetProvNetException:
pass
except Exception:
@@ -610,7 +611,8 @@ class KloudBuster(object):
self.kb_runner = KBRunner_HTTP(client_list, self.client_cfg,
self.single_cloud)
self.kb_runner.setup_redis(self.kb_proxy.fip_ip or self.kb_proxy.fixed_ip)
self.kb_runner.setup_redis(self.kb_proxy.fip_ip or self.kb_proxy.fixed_ip,
timeout=self.client_cfg.proxy_timeout_sec)
if self.client_cfg.progression['enabled'] and not self.multicast_mode:
log_info = "Progression run is enabled, KloudBuster will schedule " \
"multiple runs as listed:"
@@ -620,7 +622,7 @@ class KloudBuster(object):
cur_vm_count = 1 if start else multiple
# Minus 1 for KB-Proxy
total_vm = self.get_tenant_vm_count(self.client_cfg) - 1
while (cur_vm_count <= total_vm):
while cur_vm_count <= total_vm:
log_info += "\n" + self.kb_runner.header_formatter(stage, cur_vm_count)
cur_vm_count = (stage + 1 - start) * multiple
stage += 1
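The schedule arithmetic above can be checked with a worked example under assumed values (start=1, multiple=2, 7 client VMs after subtracting the KB-Proxy):

```python
# Assumed progression settings (not from the source)
start, multiple, total_vm = 1, 2, 7
stage = start
counts = []
cur_vm_count = 1 if start else multiple
while cur_vm_count <= total_vm:
    counts.append(cur_vm_count)
    # next run size is a multiple of the stage index, offset by the start flag
    cur_vm_count = (stage + 1 - start) * multiple
    stage += 1
# Runs are scheduled with 1, 2, 4, then 6 VMs
```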
@@ -654,22 +656,40 @@ class KloudBuster(object):
self.client_vm_create_thread.join()
if self.testing_kloud and self.testing_kloud.exc_info:
raise self.testing_kloud.exc_info[1], None, self.testing_kloud.exc_info[2]
raise self.testing_kloud.exc_info[1].with_traceback(self.testing_kloud.exc_info[2])
if self.kloud and self.kloud.exc_info:
raise self.kloud.exc_info[1], None, self.kloud.exc_info[2]
raise self.kloud.exc_info[1].with_traceback(self.kloud.exc_info[2])
# Print all the provisioning info
self.print_provision_info()
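The two `raise` statements above show the Python 3 replacement for the removed `raise type, value, traceback` syntax. A minimal sketch of re-raising a stored `sys.exc_info()` tuple:

```python
import sys

def worker():
    # Stand-in for a thread that fails and stores sys.exc_info() for later
    try:
        raise ValueError("boom in worker")
    except Exception:
        return sys.exc_info()  # (type, value, traceback) tuple

exc_info = worker()
try:
    # Python 3 replacement for the Python 2 syntax `raise t, v, tb`:
    # re-raise the stored exception with its original traceback attached
    raise exc_info[1].with_traceback(exc_info[2])
except ValueError as err:
    caught = str(err)
```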
def run_test(self, test_only=False):
start_time = time.time()
runlabel = None
self.gen_metadata()
self.kb_runner.config = self.client_cfg
if not test_only:
# Resources are already staged, just re-run the storage benchmarking tool
self.kb_runner.wait_for_vm_up()
# Run the runner to perform benchmarkings
for run_result in self.kb_runner.run(test_only):
if not self.multicast_mode or len(self.final_result['kb_result']) == 0:
self.final_result['kb_result'].append(self.kb_runner.tool_result)
LOG.info('SUMMARY: %s' % self.final_result)
while 1:
if self.interactive:
print()
runlabel = input('>> KB ready, enter label for next run or "q" to quit: ')
if runlabel.lower() == "q":
break
for _ in self.kb_runner.run(test_only, runlabel):
if not self.multicast_mode or len(self.final_result['kb_result']) == 0:
self.final_result['kb_result'].append(self.kb_runner.tool_result)
if self.tsdb_connector:
tsdb_result = self.tsdb_connector.get_results(start_time=start_time)
if tsdb_result:
self.final_result['tsdb'] = tsdb_result
LOG.info('SUMMARY: %s' % self.final_result)
if not self.interactive:
break
def stop_test(self):
self.kb_runner.stop()
@@ -708,7 +728,6 @@ class KloudBuster(object):
self.kloud = None
self.testing_kloud = None
def dump_logs(self, offset=0):
if not self.fp_logfile:
return ""
@@ -724,8 +743,7 @@ class KloudBuster(object):
def get_tenant_vm_count(self, config):
# this does not apply for storage mode!
return (config['routers_per_tenant'] * config['networks_per_router'] *
config['vms_per_network'])
return config['routers_per_tenant'] * config['networks_per_router'] * config['vms_per_network']
def calc_neutron_quota(self):
total_vm = self.get_tenant_vm_count(self.server_cfg)
@@ -735,7 +753,7 @@ class KloudBuster(object):
self.server_cfg['networks_per_router']
server_quota['subnet'] = server_quota['network']
server_quota['router'] = self.server_cfg['routers_per_tenant']
if (self.server_cfg['use_floatingip']):
if self.server_cfg['use_floatingip']:
# (1) Each VM has one floating IP
# (2) Each Router has one external IP
server_quota['floatingip'] = total_vm + server_quota['router']
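A worked example of the floating-IP arithmetic above, using assumed scale values:

```python
# Assumed per-tenant scale values (not from the source)
cfg = {'routers_per_tenant': 2, 'networks_per_router': 3, 'vms_per_network': 4}

# get_tenant_vm_count: total VMs per tenant
total_vm = (cfg['routers_per_tenant'] * cfg['networks_per_router'] *
            cfg['vms_per_network'])
# (1) one floating IP per VM, plus (2) one external IP per router
floatingip = total_vm + cfg['routers_per_tenant']
```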
@@ -759,7 +777,7 @@ class KloudBuster(object):
client_quota['network'] = 1
client_quota['subnet'] = 1
client_quota['router'] = 1
if (self.client_cfg['use_floatingip']):
if self.client_cfg['use_floatingip']:
# (1) Each VM has one floating IP
# (2) Each Router has one external IP, total of 1 router
# (3) KB-Proxy node has one floating IP
@@ -842,6 +860,7 @@ class KloudBuster(object):
return quota_dict
def create_html(hfp, template, task_re, is_config):
for line in template:
line = line.replace('[[result]]', task_re)
@@ -860,26 +879,30 @@ def create_html(hfp, template, task_re, is_config):
url = 'file://' + os.path.abspath(CONF.html)
webbrowser.open(url, new=2)
def generate_charts(json_results, html_file_name, is_config):
'''Save results in HTML format file.'''
LOG.info('Saving results to HTML file: ' + html_file_name + '...')
try:
if json_results['test_mode'] == "storage":
test_mode = json_results['test_mode']
if test_mode == "storage":
template_path = resource_filename(__name__, 'template_storage.html')
elif json_results['test_mode'] == "http":
elif test_mode == "http":
template_path = resource_filename(__name__, 'template_http.html')
else:
raise
LOG.error('Invalid test mode: %s', test_mode)
return 1
except Exception:
LOG.error('Invalid json file.')
sys.exit(1)
return 1
with open(html_file_name, 'w') as hfp, open(template_path, 'r') as template:
create_html(hfp,
template,
json.dumps(json_results, sort_keys=True),
is_config)
return 0
def main():
def process_cli_args(args):
cli_opts = [
cfg.StrOpt("config",
short="c",
@@ -929,6 +952,9 @@ def main():
secret=True,
help="Testing cloud password",
metavar="<password>"),
cfg.BoolOpt("interactive",
default=False,
help="Running KloudBuster in interactive mode"),
cfg.StrOpt("html",
default=None,
help='store results in HTML file',
@@ -960,7 +986,10 @@ def main():
metavar="<source json file>"),
]
CONF.register_cli_opts(cli_opts)
CONF(sys.argv[1:], project="kloudbuster", version=__version__)
CONF(args, project="kloudbuster", version=__version__)
def main():
process_cli_args(sys.argv[1:])
logging.setup("kloudbuster")
if CONF.rc and not CONF.tested_rc:
@@ -972,45 +1001,49 @@ def main():
if CONF.charts_from_json:
if not CONF.html:
LOG.error('Destination html filename must be specified using --html.')
sys.exit(1)
return 1
with open(CONF.charts_from_json, 'r') as jfp:
json_results = json.load(jfp)
generate_charts(json_results, CONF.html, None)
sys.exit(0)
return 0
if CONF.show_config:
print resource_string(__name__, "cfg.scale.yaml")
sys.exit(0)
print(resource_string(__name__, "cfg.scale.yaml").decode('utf-8'))
return 0
if CONF.multicast and CONF.storage:
LOG.error('--multicast and --storage can not both be chosen.')
sys.exit(1)
return 1
try:
kb_config = KBConfig()
kb_config.init_with_cli()
except TypeError:
LOG.exception('Error parsing the configuration file')
sys.exit(1)
return 1
# The KloudBuster class is just a wrapper class that
# leverages the tenant and user classes for resource creation and deletion
tsdb_module = importlib.import_module(kb_config.tsdb_module)
tsdb_connector = getattr(tsdb_module, kb_config.tsdb_class)(
config=kb_config.tsdb)
kloudbuster = KloudBuster(
kb_config.cred_tested, kb_config.cred_testing,
kb_config.server_cfg, kb_config.client_cfg,
kb_config.topo_cfg, kb_config.tenants_list,
storage_mode=CONF.storage, multicast_mode=CONF.multicast)
storage_mode=CONF.storage, multicast_mode=CONF.multicast,
interactive=CONF.interactive, tsdb_connector=tsdb_connector)
if kloudbuster.check_and_upload_images():
kloudbuster.run()
if CONF.json:
'''Save results in JSON format file.'''
# Save results in JSON format file
LOG.info('Saving results in json file: ' + CONF.json + "...")
with open(CONF.json, 'w') as jfp:
json.dump(kloudbuster.final_result, jfp, indent=4, sort_keys=True)
if CONF.multicast and CONF.csv and 'kb_result' in kloudbuster.final_result:
'''Save results in JSON format file.'''
# Save results in JSON format file
if len(kloudbuster.final_result['kb_result']) > 0:
LOG.info('Saving results in csv file: ' + CONF.csv + "...")
with open(CONF.csv, 'w') as jfp:
@@ -1018,6 +1051,8 @@ def main():
if CONF.html:
generate_charts(kloudbuster.final_result, CONF.html, kb_config.config_scale)
return 0
if __name__ == '__main__':
main()
sys.exit(main())

@@ -43,6 +43,8 @@ WARN = logging.WARN
WARNING = logging.WARNING
def setup(product_name, logfile=None):
# pylint: disable=protected-access
dbg_color = handlers.ColorHandler.LEVEL_COLORS[logging.DEBUG]
handlers.ColorHandler.LEVEL_COLORS[logging.KBDEBUG] = dbg_color
CONF.logging_default_format_string = '%(asctime)s %(levelname)s %(message)s'
@@ -62,6 +64,7 @@ def setup(product_name, logfile=None):
project=product_name).logger.setLevel(logging.KBDEBUG)
def getLogger(name="unknown", version="unknown"):
# pylint: disable=protected-access
if name not in oslogging._loggers:
oslogging._loggers[name] = KloudBusterContextAdapter(
logging.getLogger(name), {"project": "kloudbuster",

@@ -15,7 +15,7 @@
import json
from perf_tool import PerfTool
from kloudbuster.perf_tool import PerfTool
class NuttcpTool(PerfTool):

@@ -13,10 +13,10 @@
# under the License.
#
from base_compute import BaseCompute
from fio_tool import FioTool
from nuttcp_tool import NuttcpTool
from wrk_tool import WrkTool
from kloudbuster.base_compute import BaseCompute
from kloudbuster.fio_tool import FioTool
from kloudbuster.nuttcp_tool import NuttcpTool
from kloudbuster.wrk_tool import WrkTool
# An openstack instance (can be a VM or a LXC)

@@ -15,14 +15,13 @@
import abc
import log as logging
import kloudbuster.log as logging
LOG = logging.getLogger(__name__)
# A base class for all tools that can be associated to an instance
class PerfTool(object):
__metaclass__ = abc.ABCMeta
class PerfTool(metaclass=abc.ABCMeta):
def __init__(self, instance, tool_name):
self.instance = instance

kloudbuster/prometheus.py Normal file

@@ -0,0 +1,42 @@
#!/usr/bin/env python
# Copyright 2018 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import requests
import time
class Prometheus():
def __init__(self, config):
self.server_address = "http://{}:{}/api/v1/".format(config['server_ip'],
config['server_port'])
self.step_size = config['step_size']
self.wait_time = config['wait_time']
def get_results(self, start_time, end_time=None):
if not end_time:
end_time = time.time()
if end_time - start_time <= self.wait_time * 2:
return None
try:
return requests.get(
url="{}query_range?query=cpu_usage_system{{"
"tag=%22ceph%22}}&start={}&end={}&step={}".format(self.server_address,
start_time + self.wait_time,
end_time - self.wait_time,
self.step_size)).json()
except requests.exceptions.RequestException as e:
print(e)
return None
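With assumed config values, the URL that `get_results` assembles looks like this (the doubled braces in the format string escape the PromQL label selector):

```python
# Assumed config values (not from the source)
config = {'server_ip': '10.0.0.5', 'server_port': 9090,
          'step_size': 15, 'wait_time': 30}
server_address = "http://{}:{}/api/v1/".format(config['server_ip'],
                                               config['server_port'])
# wait_time is trimmed from both ends of the window so warm-up and
# cool-down samples are excluded from the query range
start_time, end_time = 1000, 2000
url = ("{}query_range?query=cpu_usage_system{{"
       "tag=%22ceph%22}}&start={}&end={}&step={}".format(
           server_address,
           start_time + config['wait_time'],
           end_time - config['wait_time'],
           config['step_size']))
```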

@@ -0,0 +1,28 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pytz>=2016.4
pbr>=3.0.1
Babel>=2.3.4
python-cinderclient>=2.0.1
python-glanceclient>=2.6.0
python-openstackclient>=3.11.0
python-neutronclient>=6.2.0
# starting from 10.0.0, floating ip apis are removed from novaclient
python-novaclient>=9.0.0,<10.0.0
python-keystoneclient>=3.10.0
attrdict>=2.0.0
hdrhistogram>=0.5.2
# ipaddress is required to get TLS working
# otherwise certificates with numeric IP addresses in the ServerAltName field will fail
ipaddress>=1.0.16
oslo.config>=4.1.1
oslo.log>=3.26.1
pecan>=1.2.1
redis>=2.10.5
tabulate>=0.7.7
pyyaml>=3.12
requests

@@ -1,4 +1,4 @@
#!/usr/bin/env python
#!/usr/bin/env python3
# Copyright 2016 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -22,7 +22,7 @@ def exec_command(cmd, cwd=None, show_console=False):
p = subprocess.Popen(cmd, cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if show_console:
for line in iter(p.stdout.readline, b""):
print line,
print(line.decode(), end='')
p.communicate()
return p.returncode
@@ -40,10 +40,10 @@ def launch_kb(cwd):
except OSError:
continue
if os.uname()[0] == "Darwin":
print
print "To run the KloudBuster web server you need to install the coreutils package:"
print " brew install coreutils"
print
print()
print("To run the KloudBuster web server you need to install the coreutils package:")
print(" brew install coreutils")
print()
raise OSError('Cannot find stdbuf or gstdbuf command')
def main():
@@ -52,8 +52,9 @@ def main():
try:
return launch_kb(cwd)
except KeyboardInterrupt:
print 'Terminating server...'
print('Terminating server...')
return 1
if __name__ == '__main__':
sys.exit(main())

@@ -12,20 +12,19 @@
# License for the specific language governing permissions and limitations
# under the License.
import base_compute
import base_network
import base_storage
from keystoneclient import exceptions as keystone_exception
import log as logging
import users
import kloudbuster.base_compute as base_compute
import kloudbuster.base_network as base_network
import kloudbuster.base_storage as base_storage
import kloudbuster.log as logging
import kloudbuster.users as users
LOG = logging.getLogger(__name__)
class KBQuotaCheckException(Exception):
pass
class Tenant(object):
class Tenant():
"""
Holds the tenant resources
1. Provides ability to create users in a tenant
@@ -68,7 +67,7 @@ class Tenant(object):
LOG.info("Creating tenant: " + self.tenant_name)
tenant_object = \
self.tenant_api.create(self.tenant_name,
# domain="default",
domain="default",
description="KloudBuster tenant",
enabled=True)
return tenant_object
@@ -109,7 +108,7 @@ class Tenant(object):
meet_quota = True
quota = quota_manager.get()
for key, value in self.tenant_quota[quota_type].iteritems():
for key, value in self.tenant_quota[quota_type].items():
if quota[key] < value:
meet_quota = False
break

@@ -1,20 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
test_kloudbuster
----------------------------------
Tests for `kloudbuster` module.
"""

kloudbuster/tsdb.py Normal file

@@ -0,0 +1,24 @@
#!/usr/bin/env python3
# Copyright 2018 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
class TSDB():
def __init__(self, config):
pass
def get_results(self, start_time, end_time=None):
pass

@@ -12,17 +12,18 @@
# License for the specific language governing permissions and limitations
# under the License.
import base_compute
import base_network
import kloudbuster.base_compute as base_compute
import kloudbuster.base_network as base_network
import kloudbuster.log as logging
from cinderclient import client as cinderclient
from keystoneclient import exceptions as keystone_exception
import log as logging
from neutronclient.neutron import client as neutronclient
from novaclient import client as novaclient
LOG = logging.getLogger(__name__)
class User(object):
class User():
"""
User class that stores router list
Creates and deletes N routers based on num of routers
@@ -143,11 +144,12 @@ class User(object):
self.key_pair.add_public_key(self.key_name, config_scale.public_key_file)
# Find the external network that routers need to attach to
if self.tenant.kloud.multicast_mode:
router_instance = base_network.Router(self, is_dumb=True)
if self.tenant.kloud.multicast_mode or (self.tenant.kloud.storage_mode and
config_scale.provider_network):
router_instance = base_network.Router(
self, provider_network=config_scale.provider_network)
self.router_list.append(router_instance)
router_instance.create_network_resources(config_scale)
else:
external_network = base_network.find_external_network(self.neutron_client)
# Create the required number of routers and append them to router list

@@ -15,10 +15,9 @@
import json
from perf_tool import PerfTool
from hdrh.histogram import HdrHistogram
import log as logging
from kloudbuster.perf_tool import PerfTool
import kloudbuster.log as logging
LOG = logging.getLogger(__name__)
@@ -119,7 +118,7 @@ class WrkTool(PerfTool):
err_flag = True
perc_list = [50, 75, 90, 99, 99.9, 99.99, 99.999]
latency_dict = histogram.get_percentile_to_value_dict(perc_list)
for key, value in latency_dict.iteritems():
for key, value in latency_dict.items():
all_res['latency_stats'].append([key, value])
all_res['latency_stats'].sort()

pylintrc Normal file

@@ -0,0 +1,287 @@
[MASTER]
extension-pkg-whitelist=netifaces,lxml
ignore=CVS
ignore-patterns=
jobs=1
limit-inference-results=100
load-plugins=
persistent=yes
suggestion-mode=yes
unsafe-load-any-extension=no
init-hook=import sys; sys.path.append('installer/')
[MESSAGES CONTROL]
confidence=
disable=missing-docstring,
invalid-name,
global-statement,
broad-except,
useless-object-inheritance,
useless-else-on-loop,
no-member,
arguments-differ,
redundant-keyword-arg,
cell-var-from-loop,
no-self-use,
consider-using-set-comprehension,
wrong-import-position,
wrong-import-order,
redefined-outer-name,
no-else-return,
assignment-from-no-return,
dangerous-default-value,
no-name-in-module,
function-redefined,
redefined-builtin,
unused-argument,
too-many-instance-attributes,
too-many-locals,
too-many-function-args,
too-many-branches,
too-many-arguments
enable=c-extension-no-member
[REPORTS]
evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
output-format=text
reports=no
score=yes
[REFACTORING]
max-nested-blocks=10
never-returning-functions=sys.exit
[LOGGING]
logging-format-style=old
logging-modules=logging
[SPELLING]
max-spelling-suggestions=4
spelling-dict=
spelling-ignore-words=
spelling-private-dict-file=
spelling-store-unknown-words=no
[MISCELLANEOUS]
notes=XXX,
TODO
[TYPECHECK]
contextmanager-decorators=contextlib.contextmanager
generated-members=
ignore-mixin-members=yes
ignore-none=yes
ignore-on-opaque-inference=yes
missing-member-hint=yes
missing-member-hint-distance=1
missing-member-max-choices=1
[VARIABLES]
additional-builtins=mibBuilder,OPENSTACK_NEUTRON_NETWORK
allow-global-unused-variables=yes
callbacks=cb_,
_cb
dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_
ignored-argument-names=_.*|^ignored_|^unused_
init-import=no
redefining-builtins-modules=builtins,io
[FORMAT]
expected-line-ending-format=
ignore-long-lines=^\s*(# )?<?https?://\S+>?$
indent-after-paren=4
indent-string=' '
max-line-length=150
max-module-lines=2500
no-space-check=trailing-comma,
dict-separator
single-line-class-stmt=no
single-line-if-stmt=no
[SIMILARITIES]
ignore-comments=yes
ignore-docstrings=yes
ignore-imports=no
min-similarity-lines=10
[BASIC]
argument-naming-style=snake_case
attr-naming-style=snake_case
bad-names=foo,
bar,
baz,
toto,
tutu,
tata
class-attribute-naming-style=any
class-naming-style=PascalCase
const-naming-style=UPPER_CASE
docstring-min-length=-1
function-naming-style=snake_case
good-names=i,
j,
k,
ex,
Run,
_
include-naming-hint=yes
inlinevar-naming-style=any
method-naming-style=snake_case
module-naming-style=snake_case
name-group=
no-docstring-rgx=^_
property-classes=abc.abstractproperty
variable-naming-style=snake_case
[STRING]
check-str-concat-over-line-jumps=no
[IMPORTS]
allow-wildcard-with-all=no
analyse-fallback-blocks=no
deprecated-modules=optparse,tkinter.tix
ext-import-graph=
import-graph=
int-import-graph=
known-standard-library=
known-third-party=enchant
[CLASSES]
defining-attr-methods=__init__,
__new__,
setUp
exclude-protected=_asdict,
_fields,
_replace,
_source,
_make
valid-classmethod-first-arg=cls
valid-metaclass-classmethod-first-arg=cls
[DESIGN]
max-args=15
max-attributes=32
max-bool-expr=10
max-branches=80
max-locals=40
max-parents=12
additional-builtins=OPENSTACK_NEUTRON_NETWORK
max-public-methods=100
max-returns=50
max-statements=300
min-public-methods=0
[EXCEPTIONS]
overgeneral-exceptions=BaseException,
Exception

@@ -6,16 +6,14 @@ pytz>=2016.4
pbr>=3.0.1
Babel>=2.3.4
futures>=3.1.1
python-cinderclient>=2.0.1
python-glanceclient>=2.6.0
python-openstackclient>=3.11.0
python-neutronclient>=6.2.0
# migrate security group API to neutron client before moving to nova client 8.0.0
python-novaclient>=8.0.0
python-novaclient>=9.0.0
python-keystoneclient>=3.10.0
attrdict>=2.0.0
hdrhistogram>=0.5.2
hdrhistogram>=0.8.0
# ipaddress is required to get TLS working
# otherwise certificates with numeric IP addresses in the ServerAltName field will fail
ipaddress>=1.0.16
@@ -25,6 +23,3 @@ pecan>=1.2.1
redis>=2.10.5
tabulate>=0.7.7
pyyaml>=3.12
# Workaround for pip install failed on RHEL/CentOS
functools32>=3.2.3

@@ -1,11 +1,12 @@
[metadata]
name = kloudbuster
summary = KloudBuster is an open source tool that allows anybody to load any Neutron OpenStack cloud at massive data plane scale swiftly and effortlessly.
description-file =
README.rst
long_description_content_type = text/x-rst
long_description = README.rst
author = KloudBuster team at OpenStack
author-email = kloudbuster-core@lists.launchpad.net
home-page = https://github.com/openstack/kloudbuster
home-page = https://opendev.org/x/kloudbuster
classifier =
Environment :: OpenStack
Intended Audience :: Developers
@@ -15,8 +16,8 @@ classifier =
Operating System :: POSIX :: Linux
Operating System :: MacOS
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.6
[files]
packages =

@@ -25,6 +25,7 @@ except ImportError:
pass
setuptools.setup(
setup_requires=['pbr'],
setup_requires=['pbr', 'wheel'],
scripts=['kloudbuster/kb_extract_img_from_docker.sh'],
pbr=True)
pbr=True,
python_requires='>=3.6')

@@ -2,15 +2,6 @@
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking<0.11,>=0.10.0
coverage>=3.6
discover
python-subunit>=0.0.18
sphinx>=1.4.0
sphinx>=1.4.0,<2.0
sphinx_rtd_theme>=0.1.9
oslosphinx>=2.5.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
testrepository>=0.0.18
testscenarios>=0.4
testtools>=1.4.0
oslosphinx>=2.5.0

tests/conftest.py Normal file

@@ -0,0 +1,93 @@
"""
This module is used to define shared pytest fixtures.
Because this module is placed under tests, all fixtures defined here can be used
by all test cases below tests
"""
import os
import shutil
import pytest
def stage_file(dirname, filename, content=None):
# we cannot use a os.path.join because we want to support
# multi-parts in the filename
if dirname.endswith('/') and filename.startswith('/'):
pathname = dirname + filename[1:]
elif filename.startswith('/') or dirname.endswith('/'):
pathname = dirname + filename
else:
pathname = dirname + '/' + filename
print('Staging file: ', pathname)
os.makedirs(os.path.dirname(pathname), exist_ok=True)
if content is None:
content = pathname
with open(pathname, 'w') as ff:
ff.write(content)
@pytest.fixture
def stage_fs():
"""
This fixture can be used to stage a complete file system below a given root.
This is a fixture factory and each test function can call stage_fs with the root of the
file system to stage and with a configuration.
The entire fs_root will be deleted when the fixture terminates unless skip_clean=True
Example of files_config:
{
'file1.yaml': 'any content',
'folder1': {
'file_with_arbitrary_content.yaml': None,
'empty_file.txt': '',
'nested_empty_folder': {}
},
'folder1/file2.txt': None
}
if '/tmp/pico' is passed as fs_root, this fixture will stage the following files:
/tmp/pico/file1.yaml (containing 'any content')
/tmp/pico/folder1/file_with_arbitrary_content.yaml (containing arbitrary text)
/tmp/pico/folder1/empty_file.txt (empty)
/tmp/pico/folder1/nested_empty_folder/ (empty directory)
/tmp/pico/folder1/file2.txt (any content)
To use this fixture, simply add "stage_fs" as argument to your test function, then
call stage_fs() with arguments described in below _stage_fs function.
Also see the unit test code (test_fixtures.py)
"""
saved_fs_root = []
def _stage_fs(fs_root, files_config, skip_clean=False):
"""
fs_root: pathname of the root under which all the files defined in files_config must be staged
files_config: a dict of file content reflecting the desired staged file system
skip_clean: set to True if you do not want the staged directory to be cleaned on exit (for troubleshooting only)
"""
if not saved_fs_root:
if skip_clean:
# for troubleshooting, it is possible to preserve
# the stage_fs directory after the test finishes
saved_fs_root.append(None)
else:
saved_fs_root.append(fs_root)
# remove the stage_fs root directory at start
# so we're sure we start with a clean directory
shutil.rmtree(fs_root, ignore_errors=True)
os.makedirs(fs_root, exist_ok=True)
for file, content in files_config.items():
if isinstance(content, dict):
# remove any "/" at start
if file[0] == '/':
file = file[1:]
new_fs_root = os.path.join(fs_root, file)
os.makedirs(new_fs_root, exist_ok=True)
_stage_fs(new_fs_root, content)
else:
stage_file(fs_root, file, content)
yield _stage_fs
if saved_fs_root:
if saved_fs_root[0]:
# remove the stage_fs directory when the fixture terminates
shutil.rmtree(saved_fs_root[0], ignore_errors=True)
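One detail worth noting in the fixture above: saved_fs_root is a one-element list rather than a plain variable, so the nested _stage_fs can mutate it without a `nonlocal` declaration (a plain assignment would rebind a new local instead). A minimal standalone sketch of the pattern, with hypothetical names:

```python
# Hypothetical sketch of the saved_fs_root pattern: a captured list lets a
# nested function record state, and only the first recorded value survives.
def make_recorder():
    saved = []

    def record(value):
        if not saved:       # only the first call is remembered
            saved.append(value)
        return saved[0]

    return record

rec = make_recorder()
print(rec('/tmp/first'))    # /tmp/first
print(rec('/tmp/second'))   # still /tmp/first
```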

tests/test_kloudbuster.py

@@ -0,0 +1,61 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
test_kloudbuster
----------------------------------
Tests for `kloudbuster` module.
"""
from kloudbuster.kb_config import KBConfig
from kloudbuster.kloudbuster import process_cli_args


def test_kbconfig_default():
    # verify that we load the default config properly
    kbcfg = KBConfig()
    kbcfg.update_configs()
    cfg = kbcfg.config_scale
    assert cfg.openrc_file is None
    assert cfg.vm_creation_concurrency == 5
    assert cfg.client.flavor.vcpus == 1

config_yaml = """
client:
    flavor:
        vcpus: 100
        ram: 2048
        disk: 0
        extra_specs:
            "hw:cpu_policy": dedicated
    storage_stage_configs:
        vm_count: 1
        target: 'volume'
        disk_size: 10
        io_file_size: 55
"""


def test_kbconfig_override(stage_fs):
    config_fs = {
        'config.yaml': config_yaml
    }
    stage_fs('/tmp/kbtest', config_fs)
    # verify that config override is working
    args = ['-c', '/tmp/kbtest/config.yaml']
    process_cli_args(args)
    kbcfg = KBConfig()
    kbcfg.init_with_cli()
    kbcfg.update_configs()
    cfg = kbcfg.config_scale
    print(cfg.client.storage_stage_configs)
    assert cfg.client.flavor.vcpus == 100
    assert cfg.client.storage_stage_configs.io_file_size == 55
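The override test above checks that nested keys from the user's config file replace the defaults while untouched defaults survive (vcpus becomes 100, but the default ram is still there). A plain-dict analogy of that merge semantic — not the actual KBConfig implementation — looks like this:

```python
# Hypothetical plain-dict sketch of deep config merging: override values win,
# nested dicts are merged recursively, and unspecified defaults are preserved.
def deep_merge(base, override):
    merged = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(base.get(key), dict):
            merged[key] = deep_merge(base[key], val)
        else:
            merged[key] = val
    return merged

defaults = {'client': {'flavor': {'vcpus': 1, 'ram': 2048}}}
override = {'client': {'flavor': {'vcpus': 100}}}
print(deep_merge(defaults, override))
# {'client': {'flavor': {'vcpus': 100, 'ram': 2048}}}
```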

tox.ini

@@ -1,41 +1,51 @@
[tox]
minversion = 1.6
envlist = py27,pep8
envlist = py3,pylint,pep8
skipsdist = True
basepython = python3
[testenv]
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv =
    VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
# commands = python setup.py test --slowest --testr-args='{posargs}'
[testenv:py3]
deps =
    pytest>=5.4
    pytest-cov>=2.8
    mock>=4.0
    -r{toxinidir}/requirements.txt
commands =
    {posargs:pytest --cov=kloudbuster --cov-report=term-missing -vv tests}
[testenv:pep8]
commands = flake8
deps =
    pep8>=1.5.7
    flake8>=3.8.3
    -r{toxinidir}/requirements.txt
whitelist_externals = flake8
commands = flake8 kloudbuster
[testenv:venv]
commands = {posargs}
[testenv:cover]
commands = python setup.py test --coverage --testr-args='{posargs}'
[testenv:pylint]
deps =
    pylint>=2.4
    pytest>=5.4
    pytest-cov>=2.8
    mock>=4.0
    -r{toxinidir}/requirements.txt
commands = pylint --rcfile=pylintrc kloudbuster
[testenv:docs]
commands = python setup.py build_sphinx
[testenv:debug]
commands = oslo_debug_helper {posargs}
deps =
    sphinx>=1.4.0
    sphinx_rtd_theme>=0.1.9
    oslosphinx>=2.5.0
commands = python3 setup.py build_sphinx
[flake8]
max-line-length = 100
max-line-length = 150
show-source = True
# H233: Python 3.x incompatible use of print operator
# H236: Python 3.x incompatible __metaclass__, use six.add_metaclass()
# E302: expected 2 blank linee
# E302: expected 2 blank lines
# E303: too many blank lines (2)
# H306: imports not in alphabetical order
# H404: multi line docstring should start without a leading new line
# H405: multi line docstring summary not separated with an empty line
ignore = H233,H236,E302,E303,H404,H405
# W504 line break after binary operator
ignore = E302,E303,H306,H404,H405,W504
builtins = _
exclude=.venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build