Fix the traceback when doing progression runs

1. Fix the traceback when doing progression runs;
2. Fix the agent code to dynamically generate the HTML payload;
3. Count both socket errors and timeouts in the stop limit of progression runs;
4. Enhance the documentation and add the reference data;

Change-Id: I242e7bbb6ab02f6ec7f27bc334f991d153386c9b
Yichen Wang 2015-11-17 09:46:30 -08:00
parent 85f6630853
commit 97874b246f
4 changed files with 22 additions and 13 deletions


@@ -376,10 +376,15 @@ configurations among all combinations as a standard run.
 In the standard run, the number of connections per VM will be set to 1000,
 the number of requests per second per VM is set to 1000, and the HTTP request
 timeout is set to 5 seconds. The stop limit for progression runs will be error
-packets greater than 50. Above configurations are all set by default.
+packets greater than 50. The size of the HTML page in the server VMs will be
+32768 Bytes. Above configurations are all set by default.
 In order to perform the standard run, set the max VM count for the tests,
 and enable the progression runs. KloudBuster will start the iteration until
-reaching the stop limit or the max scale. Eventually, once the KloudBuster run
-is finished, the cloud performance can be told by looking at how many VMs
+reaching the stop limit or the max scale. Eventually, once the KloudBuster
+run is finished, the cloud performance can be told by looking at how many VMs
 KloudBuster can run to.
+As a reference, for a Kilo OpenStack deployment (LinuxBridge + VLAN) with
+Packstack, using a 10GE NIC card for data plane traffic, KloudBuster can run
+up to 18 VMs and achieve approximately 5 Gbps throughput.
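The standard-run defaults above (1000 connections per VM, 1000 requests/s per VM, a 32768-byte page) fix the aggregate load at any given scale. The following is an editorial sketch, not KloudBuster code; `standard_run_load` is a hypothetical helper that just multiplies out those defaults:

```python
# Standard-run defaults from the documentation above.
CONNECTIONS_PER_VM = 1000
REQUESTS_PER_SEC_PER_VM = 1000
HTML_PAGE_BYTES = 32768

def standard_run_load(vm_count):
    """Return (total connections, total req/s, approx. payload throughput in Gbps)."""
    total_conns = vm_count * CONNECTIONS_PER_VM
    total_rps = vm_count * REQUESTS_PER_SEC_PER_VM
    # Payload bits per second, ignoring HTTP/TCP/IP header overhead.
    gbps = total_rps * HTML_PAGE_BYTES * 8 / 1e9
    return total_conns, total_rps, gbps

print(standard_run_load(18))  # the reference scale from the Kilo/Packstack run
```

At the 18-VM reference scale this works out to roughly 4.7 Gbps of payload, consistent with the approximately 5 Gbps reference figure once protocol overhead is included.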


@@ -305,7 +305,10 @@ if __name__ == "__main__":
        sys.exit(agent.start_redis_server())
    if user_data.get('role') == 'Server':
        agent = KBA_Server(user_data)
-       sys.exit(agent.start_nginx_server())
+       if agent.config_nginx_server():
+           sys.exit(agent.start_nginx_server())
+       else:
+           sys.exit(1)
    elif user_data.get('role') == 'Client':
        agent = KBA_Client(user_data)
        agent.setup_channels()
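The change above splits payload generation out of server startup: the agent now configures nginx (generating the HTML payload dynamically) and only starts serving if that step succeeds, exiting with status 1 otherwise. A minimal editorial sketch of that configure-then-serve flow; this `KBA_Server` is a stand-in stub, and the `html_size` key is an assumption, not the real KloudBuster agent class or its user-data schema:

```python
# Stub illustrating the configure-then-serve pattern; not the real agent class.
class KBA_Server(object):
    def __init__(self, user_data):
        self.user_data = user_data

    def config_nginx_server(self):
        # The real agent writes the nginx config and generates an HTML payload
        # of the requested size; this stub only checks a size was provided.
        return bool(self.user_data.get('html_size'))

    def start_nginx_server(self):
        return 0  # 0 is a clean exit status for sys.exit()

agent = KBA_Server({'role': 'Server', 'html_size': 32768})
# The real agent calls sys.exit(rc); computing rc here keeps the sketch testable.
rc = agent.start_nginx_server() if agent.config_nginx_server() else 1
print(rc)
```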


@@ -109,18 +109,20 @@ client:
  vm_step: 1
  # The stop condition; it is used for KloudBuster to determine when to
  # stop the progression, and do the cleanup if needed. It is defined as:
- # [number_of_err_packets, percentile_of_packet_not_timeout(%%)]
+ # [number_of_socket_errs, percentile_of_requests_not_timeout(%%)]
  #
  # e.g. [50, 99.99] means KloudBuster will continue the progression run
  # only if *ALL* below conditions are satisfied:
- # (1) The error count of packets is less than or equal to 50;
- # (2) 99.99%% of the packets are within the timeout range;
+ # (1) The socket error count (including errors and timeouts) is less
+ #     than or equal to 50;
+ # (2) 99.99%% of the requests are within the timeout range;
  #
  # Note:
- # (1) The timeout value is defined in the client:http_tool_config section;
- # (2) The percentile of packets must be in the below list:
+ # (1) The HTTP request timeout value is defined in the
+ #     client:http_tool_config section;
+ # (2) The percentile of requests must be in the below list:
  #     [50, 75, 90, 99, 99.9, 99.99, 99.999]
- # (3) Set percentile to 0 to disable timeout checks;
+ # (3) Set percentile to 0 to disable HTTP request timeout checks;
  stop_limit: [50, 0]
  # Assign floating IP for every client side test VM
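The two-part stop condition documented above can be sketched as a small predicate. This is an editorial illustration, not KloudBuster's API: `should_continue`, its field names, and the millisecond latency units are assumptions (the real runner converts its raw latency readings before comparing against the HTTP timeout):

```python
# Hedged sketch of evaluating a [max_socket_errors, percentile] stop limit.
def should_continue(tool_result, stop_limit, timeout_ms):
    max_errs, percentile = stop_limit
    # Both socket errors and timeouts count toward the error budget.
    errs = tool_result['http_sock_err'] + tool_result['http_sock_timeout']
    if errs > max_errs:
        return False
    if percentile == 0:
        return True  # percentile 0 disables the HTTP request timeout check
    latency_at_pct = dict(tool_result['latency_stats'])[percentile]
    return latency_at_pct <= timeout_ms

result = {'http_sock_err': 30, 'http_sock_timeout': 10,
          'latency_stats': [(99.99, 4000)]}
print(should_continue(result, [50, 99.99], 5000))  # True: 40 errors <= 50, 4000 ms <= 5000 ms
```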


@@ -311,8 +311,7 @@ class KBRunner(object):
        limit = self.config.progression.stop_limit
        timeout = self.config.http_tool_configs.timeout
        vm_list = self.full_client_dict.keys()
-       vm_list.sort()
+       vm_list.sort(cmp=lambda x, y: cmp(int(x[x.rfind('I') + 1:]), int(y[y.rfind('I') + 1:])))
        self.client_dict = {}
        cur_stage = 1
@@ -323,7 +322,7 @@ class KBRunner(object):
            if target_vm_count > len(self.full_client_dict):
                break
            if self.tool_result and 'latency_stats' in self.tool_result:
-               err = self.tool_result['http_sock_err']
+               err = self.tool_result['http_sock_err'] + self.tool_result['http_sock_timeout']
                pert_dict = dict(self.tool_result['latency_stats'])
                if limit[1] in pert_dict.keys():
                    timeout_at_percentile = pert_dict[limit[1]] // 1000000
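The new sort orders VM names by the integer after the last `'I'` rather than lexicographically, so e.g. a 10th instance no longer sorts before the 2nd. Note `cmp=` is Python 2 only; a Python 3 equivalent uses a `key` function (the VM names below are illustrative, not real instance names):

```python
vm_list = ['KB-I10', 'KB-I2', 'KB-I1']   # illustrative names
vm_list.sort()                           # lexicographic: ['KB-I1', 'KB-I10', 'KB-I2']
vm_list.sort(key=lambda x: int(x[x.rfind('I') + 1:]))  # numeric suffix order
print(vm_list)  # ['KB-I1', 'KB-I2', 'KB-I10']
```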