Installation
Installation in Python environment
Shaker is distributed as a Python package and is available through PyPI (https://pypi.org/project/pyshaker/).
$ pip install --user pyshaker
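Alternatively, Shaker can be installed into a dedicated virtualenv instead of the user site-packages. A minimal sketch (the ~/shaker-venv path is just an example location):

```shell
# Create an isolated environment so Shaker's dependencies do not
# interfere with system packages. The path is an arbitrary example.
python3 -m venv ~/shaker-venv
. ~/shaker-venv/bin/activate
pip install pyshaker
shaker --help   # verify the console scripts are available
```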
OpenStack Deployment
Requirements:
- The machine where Shaker is executed must be routable from OpenStack instances and must have an open port to accept connections from agents running on those instances
For full feature support it is advised to run Shaker as an admin user. However, it also works for a non-admin user with some limitations - see non_admin_mode for details.
Base image
Automatic build in OpenStack
The base image can be built using the shaker-image-builder tool.
$ shaker-image-builder
There are 2 modes available:
- heat - using a Heat template (requires Glance v1 for base image upload);
- dib - using diskimage-builder elements (requires qemu-utils and debootstrap to build an Ubuntu-based image).
By default the mode is selected automatically, preferring heat if Glance API v1 is available. The created image is uploaded into Glance and made available for further executions of Shaker. For the full list of parameters refer to shaker_image_builder.
Manual build with disk-image-builder
The Shaker image can also be built using the diskimage-builder tool.
- Install diskimage-builder. Refer to diskimage-builder installation
- Clone Shaker repo:
git clone https://opendev.org/performa/shaker
- Add search path for diskimage-builder elements:
export ELEMENTS_PATH=shaker/shaker/resources/image_elements
- Build the image based on Ubuntu Xenial:
disk-image-create -o shaker-image.qcow2 ubuntu vm shaker
- Upload image into Glance:
openstack image create --public --file shaker-image.qcow2 --disk-format qcow2 shaker-image
- Create flavor:
openstack flavor create --ram 512 --disk 3 --vcpus 1 shaker-flavor
Running Shaker by non-admin user
While the full feature set is available when Shaker is run by an admin user, it also works, with some limitations, for a non-admin user.
Image builder limitations
The image builder requires the flavor name to be specified via the command line parameter --flavor-name. Create a flavor prior to running Shaker, or choose an existing one that satisfies the instance template requirements. For the Ubuntu-based image the requirements are 512 MB RAM, 3 GB disk and 1 CPU.
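Putting this together, a sketch of a non-admin image build (the flavor name shaker-flavor is an arbitrary example; --flavor-name is the parameter described above):

```shell
# Create a flavor that satisfies the Ubuntu image requirements
# (512 MB RAM, 3 GB disk, 1 vCPU), then point the image builder at it.
openstack flavor create --ram 512 --disk 3 --vcpus 1 shaker-flavor
shaker-image-builder --flavor-name shaker-flavor
```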
Execution limitations
A non-admin user has no permission to list compute nodes or to deploy instances onto particular compute nodes.
When instances need to be deployed on a small number of compute nodes, it is possible to use server groups and specify the anti-affinity policy within them. Note however that the server group size is limited by the quota_server_group_members parameter in nova.conf. The following part of a Heat template adds server groups.
Add to resources section:
server_group:
  type: OS::Nova::ServerGroup
  properties:
    name: {{ unique }}_server_group
    policies: [ 'anti-affinity' ]
Add attribute to server definition:
scheduler_hints:
  group: { get_resource: server_group }
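Combined, a minimal sketch of how the two fragments above fit into one Heat template (the server name and its other properties are elided examples, not part of the actual Shaker templates):

```
heat_template_version: 2013-05-23

resources:
  server_group:
    type: OS::Nova::ServerGroup
    properties:
      name: {{ unique }}_server_group
      policies: [ 'anti-affinity' ]

  master_0:
    type: OS::Nova::Server
    properties:
      # image, flavor and networks omitted for brevity
      scheduler_hints:
        group: { get_resource: server_group }
```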
A similar patch is needed to implement dense scenarios; the difference is the server group policy, which should be 'affinity'.
An alternative approach is to specify the number of compute nodes explicitly. Note that the number must always be specified. If Nova distributes instances evenly (or with a normal random distribution), the chances that instances are placed on unique nodes are quite high (there will be collisions due to the https://en.wikipedia.org/wiki/Birthday_problem, so expect the number of unique pairs to be lower than the specified number of compute nodes).
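To get a feel for the collision rate: with uniform random placement of k instances across n compute nodes, the expected number of distinct nodes used is n * (1 - (1 - 1/n)^k). A quick sketch of the arithmetic:

```shell
# Expected number of distinct compute nodes hit when k instances are
# placed uniformly at random across n nodes.
n=10; k=10
awk -v n="$n" -v k="$k" 'BEGIN { printf "%.2f\n", n * (1 - (1 - 1/n)^k) }'
# prints 6.51 - on average only ~6.5 of the 10 nodes receive an instance
```

This is why server groups with the anti-affinity policy are preferable whenever placement on unique nodes actually matters.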
Non-OpenStack Deployment (aka Spot mode)
To run scenarios against remote nodes (the shaker-spot command), install Shaker on the local host. Make sure all the necessary tools are installed too. Refer to spot_scenarios for more details.
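For illustration, a spot run might look like the following sketch (spot/ping is assumed here to be one of the bundled spot scenarios; adjust the scenario and report names to your setup):

```shell
# Run a spot scenario against remote nodes and write an HTML report.
shaker-spot --scenario spot/ping --report report.html
```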
Run Shaker against OpenStack deployed by Fuel-CCP on Kubernetes
Shaker can be run in a Kubernetes environment and can execute scenarios against OpenStack deployed by the Fuel-CCP tool.
The Shaker app consists of a service (k8s/shaker-svc.yaml) and a pod (k8s/shaker-pod.yaml).
You may need to change the values of variables defined in the config files:
- SHAKER_SERVER_ENDPOINT should point to an external address of the Kubernetes cluster, and OpenStack instances must have access to it
- OS_*** parameters describe the connection to the Keystone endpoint
- SHAKER_SCENARIO needs to be altered to run the needed scenario
- the pod is configured to write logs into /tmp on the node that hosts the pod
- port, nodePort and targetPort must be equal and must not conflict with other exposed services
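For illustration, the port alignment in the service definition might look like this sketch (30999 is an arbitrary example value from the default NodePort range, not a value mandated by Shaker):

```
# Fragment of k8s/shaker-svc.yaml: all three port values must match
# and must not collide with other exposed services.
spec:
  type: NodePort
  ports:
    - port: 30999
      nodePort: 30999
      targetPort: 30999
```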