
Issue: At AT&T we have large, complex test stacks that make putting everything into a single Heat template and environment file very cumbersome. Large monolithic templates are harder to debug, maintain, extend, and organize. To solve this, this commit enhances Shaker with the ability to define support_templates, each with an optional env_file, in test definitions.

Support templates spin up "support type" resources before the actual test stack is spun up. These can range from networks to volumes to anything Heat can create. The support resources have no reliance on resources created in the test stack; they set up a "foundation" for the test stack, which can then reference them by name (i.e., assume they exist by the time the test stack is spun up). While the example provided with this commit is simple, and the support networks that get created are not directly used in the test, it demonstrates the basic principles of how support templates work.

As a real-world example, and to give an idea of the complexity this enhancement is meant to tame, we have a test definition that looks like this:

    support_templates:
      - Base:
          template: templates/module_1_base.yaml
          env_file: env/module_1_base.env
      - SI_L2:
          template: templates/module_2_si_l2.yaml
          env_file: env/module_2_si_l2.env
      - SI_L3:
          template: templates/module_3_si_l3.yaml
          env_file: env/module_3_si_l3.env
    template: templates/module_4_master_servant.yaml
    env_file: env/module_4_master_servant.env

The first support stack (module_1) sets up "base" network resources that are used by the SI_L2 and SI_L3 support stacks. SI_L2 is a support stack with 2 VMs that do Contrail service chaining on an L2 network; SI_L3 is a support stack with 2 VMs that do Contrail service chaining on an L3 network. The test stack (module_4) is then spun up on N computes and runs traffic across the SI_L2 and SI_L3 service-chained networks. After the test run, all stacks are cleaned up.

Using the concept of support stacks lets us better organize and maintain our complex tests and speeds up debugging thanks to the "layered" nature of the setup. Support templates also allow us to run more Shaker test threads that use the same support stacks simultaneously, to better simulate real-world network traffic, and they reduce the setup time of certain tests since the support stacks already exist.

This enhancement does not alter existing Shaker functionality and is fully backwards compatible.

Change-Id: Ife51bc55874c6ec4faac221bab8f9f0eea175fdc
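To make the "reference by name" idea concrete, here is a minimal hypothetical sketch (the resource names, CIDR, and file paths are invented for illustration and are not part of the commit). A support template creates a network under a well-known name, and the test template attaches a port to it by name, relying on the fact that Neutron-backed Heat properties such as OS::Neutron::Port's network accept a name as well as an ID:

    # support template (hypothetical file, e.g. templates/support_net.yaml)
    heat_template_version: 2015-04-30
    resources:
      support_net:
        type: OS::Neutron::Net
        properties:
          name: support_net            # well-known name for the test stack to use
      support_subnet:
        type: OS::Neutron::Subnet
        properties:
          network: { get_resource: support_net }
          cidr: 10.0.10.0/24

    # test template (hypothetical): assumes the support stack already exists
    heat_template_version: 2015-04-30
    resources:
      test_port:
        type: OS::Neutron::Port
        properties:
          network: support_net         # resolved by name, not by get_resource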
Shaker
The distributed data-plane testing tool built for OpenStack.
Shaker wraps around popular system network testing tools such as iperf, iperf3 and netperf (with the help of flent). Shaker is able to deploy OpenStack instances and networks in different topologies. A Shaker scenario specifies the deployment and the list of tests to execute. Additionally, tests may be tuned dynamically from the command line.
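For illustration, a scenario is a YAML file along these lines (a minimal sketch modeled on the bundled openstack scenarios; treat the exact values as illustrative rather than authoritative):

    title: Sample L2 performance scenario
    deployment:
      template: l2.hot                     # Heat template describing the topology
      accommodation: [pair, single_room]   # how agent instances are placed
    execution:
      tests:
        - title: TCP throughput
          class: iperf3                    # which wrapped tool runs the test
          sla:
            - "[type == 'agent'] >> (stats.bandwidth.mean > 5000)"  # built-in SLA check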
Features
- User-defined topology via Heat templates
- Simultaneous test execution on multiple instances
- Interactive report with stats and charts
- Built-in SLA verification
Deployment Requirements
- Shaker server routable from OpenStack cloud
- Admin-user access to OpenStack API is preferable
Run in Python Environment
$ pip install pyshaker
$ . openrc
$ shaker-image-builder
$ shaker --server-endpoint <host:port> --scenario <scenario> --report <report.html>
- where:
  - <host> and <port> - host and port of the machine where Shaker is deployed
  - <scenario> - the scenario to execute, e.g. openstack/perf_l2 (see the scenario catalog)
  - <report.html> - file to store the final report
The full list of parameters is available in the documentation.
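For example, a typical invocation looks like the following (the endpoint address here is a hypothetical placeholder):

$ shaker --server-endpoint 192.0.2.10:5999 --scenario openstack/perf_l2 --report report.html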
Shaker in Container
Shaker is available as a container image on Docker Hub: shakhat/shaker
$ docker run -p <port>:<port> -v <artifacts-dir>:/artifacts shakhat/shaker --scenario <scenario> --server-endpoint <host:port> \
    --os-auth-url <os-auth-url> --os-username <os-username> --os-password <os-password> --os-project-name <os-project-name>
- where:
  - <host> and <port> - host and port on the machine where Shaker is deployed
  - <artifacts-dir> - directory to store the report and raw results
  - <scenario> - the scenario to execute, e.g. openstack/perf_l2 (see the scenario catalog)
  - os-XXX - OpenStack cloud credentials
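For example (all addresses and credentials below are hypothetical placeholders):

$ docker run -p 5999:5999 -v /tmp/shaker-artifacts:/artifacts shakhat/shaker --scenario openstack/perf_l2 --server-endpoint 10.0.0.5:5999 \
    --os-auth-url http://10.0.0.1:5000/v3 --os-username admin --os-password secret --os-project-name admin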
Links
- PyPi - https://pypi.org/project/pyshaker/
- Docker - https://hub.docker.com/r/shakhat/shaker/
- Docs - http://pyshaker.readthedocs.io/
- Bugtracker - https://launchpad.net/shaker/