Add a nova power control module
Helion RC6 does not appear to automatically resume virtual machine states. This untested, proof-of-concept module lets us stop/start and suspend/resume instances, and return them to a previously recorded state.

Change-Id: If822de1c57bf6f6101ba8816d69c250c77488203
parent 8637773eb5
commit 640202df2e

README.rst (+30)
@@ -190,3 +190,33 @@ file.
before starting any jobs.

* post_hook_command - Similar to the pre_hook_command variable, when
  defined, will execute upon the completion of the upgrade job.

* previous_upgrade_failed_restart_mysql - This option enables logic to restart
  MySQL in the event it is not running on the controllerMgmt node, likely from
  a failed upgrade.

Nova Powercontrol
-----------------

A module named nova_powercontrol has been included which uses nova for all
instance power control operations. The module records the previous state of
each instance it acts on; after the upgrade completes, a special state,
previous, lets the user resume or restart every virtual machine that the
module powered off or suspended.

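As a minimal sketch (mirroring the playbooks/restart_vms.yml file added in
this change, with credentials taken from the OC_OS_* variables described
below), a single task can return every instance to its recorded state::

    - name: Return instances to their previous power state
      nova_powercontrol:
        login_username: "{{ lookup('env', 'OC_OS_USERNAME') }}"
        login_password: "{{ lookup('env', 'OC_OS_PASSWORD') }}"
        login_tenant_name: "{{ lookup('env', 'OC_OS_TENANT_NAME') }}"
        auth_url: "{{ lookup('env', 'OC_OS_AUTH_URL') }}"
        all_instances: yes
        state: previous
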
To Use:

From the tripleo-ansible folder, execute the command:

bash scripts/retrieve_oc_vars

The script will then tell you which file to source into your current user
environment. That file contains the overcloud API credentials under modified
variable names (OS_USERNAME becomes OC_OS_USERNAME, and so on) that the
playbooks know how to use.

source /root/oc-stackrc-tripleo-ansible

Now that the environment variables are present, add the following to the
ansible-playbook command line so that the playbooks use the nova_powercontrol
module:

-e use_nova_powercontrol=True

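The playbooks only switch to nova-based power control when this variable is
defined. As a rough sketch, each nova_powercontrol task added in this change
carries the condition::

    when: use_nova_powercontrol is defined

while the existing libvirt-based shutdown tasks carry the opposite
condition::

    when: use_nova_powercontrol is not defined
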
playbooks/library/nova_powercontrol (new file, +350)

@@ -0,0 +1,350 @@
#!/usr/bin/env python

# This code is part of Ansible, but is an independent component. This
# particular file snippet, and this file snippet only, is BSD
# licensed. Modules you write using this snippet, which is embedded
# dynamically by Ansible still belong to the author of the module, and
# may assign their own license to the complete work.
#
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
#   notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
#   copyright notice, this list of conditions and the following
#   disclaimer in the documentation and/or other materials provided
#   with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

import os

try:
    from novaclient.v1_1 import client as nova_client
    from novaclient import exceptions
    import time
except ImportError:
    print("failed=True msg='novaclient is required for this module'")

DOCUMENTATION = '''
---
module: nova_powercontrol
short_description: Stops/Starts virtual machines via nova.
description:
  - Controls the power state of virtual machines via nova.
options:
  login_username:
    description:
      - login username to authenticate to keystone
    required: true
    default: admin
  login_password:
    description:
      - Password of login user
    required: true
    default: 'yes'
  login_tenant_name:
    description:
      - The tenant name of the login user
    required: true
    default: 'yes'
  auth_url:
    description:
      - The keystone url for authentication
    required: false
    default: 'http://127.0.0.1:35357/v2.0/'
  region_name:
    description:
      - Name of the region
    required: false
    default: None
  state:
    description:
      - The desired state for the instance (running/start/on, stopped/stop/off,
        suspend, resume, or previous).
    required: true
    default: None
  instance_id:
    description:
      - Instance ID of a single instance to control.
    default: None
  hypervisor:
    description:
      - Hypervisor ID of the instances to control.
    default: None
  zone:
    description:
      - zone name of the instances to control.
    default: None
  all_instances:
    description:
      - Any value to affirm decision to act upon all instances.
    default: None
requirements: ["novaclient"]
'''

EXAMPLES = '''
# Changes the power state via nova
- nova_powercontrol:
    login_username: admin
    login_password: admin
    login_tenant_name: admin
    instance_id: 4f905f38-e52a-43d2-b6ec-754a13ffb529
    state: stopped
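
# Additional illustrative examples: instances can also be selected per
# hypervisor, per availability zone, or cloud-wide (the hypervisor name
# below is a placeholder).
- nova_powercontrol:
    login_username: admin
    login_password: admin
    login_tenant_name: admin
    hypervisor: example-compute-0
    state: stopped

# Return everything this module previously stopped or suspended back to
# its recorded state.
- nova_powercontrol:
    login_username: admin
    login_password: admin
    login_tenant_name: admin
    all_instances: yes
    state: previous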
'''


# The following openstack_argument_spec function is copy pasted from an
# upcoming core module "lib/ansible/module_utils/openstack.py". Once that's
# landed, it should be replaced with a line at the bottom of the file:
# from ansible.module_utils.openstack import *
def openstack_argument_spec():
    # Consume standard OpenStack environment variables.
    # This is mainly only useful for ad-hoc command line operation as
    # in playbooks one would assume variables would be used appropriately
    OS_AUTH_URL = os.environ.get('OS_AUTH_URL', 'http://127.0.0.1:35357/v2.0/')
    OS_PASSWORD = os.environ.get('OS_PASSWORD', None)
    OS_REGION_NAME = os.environ.get('OS_REGION_NAME', None)
    OS_USERNAME = os.environ.get('OS_USERNAME', 'admin')
    OS_TENANT_NAME = os.environ.get('OS_TENANT_NAME', OS_USERNAME)

    spec = dict(
        login_username=dict(default=OS_USERNAME),
        auth_url=dict(default=OS_AUTH_URL),
        region_name=dict(default=OS_REGION_NAME),
        availability_zone=dict(default=None),
    )
    if OS_PASSWORD:
        spec['login_password'] = dict(default=OS_PASSWORD)
    else:
        spec['login_password'] = dict(required=True)
    if OS_TENANT_NAME:
        spec['login_tenant_name'] = dict(default=OS_TENANT_NAME)
    else:
        spec['login_tenant_name'] = dict(required=True)
    return spec


def _get_server(nova, instance_id):
    try:
        return nova.servers.get(instance_id)
    except Exception, e:
        # "module" is not available in this scope, so raise instead of calling
        # module.fail_json(); callers convert failures into (False, message).
        raise Exception(
            "Error accessing instance %s: %s" % (instance_id, e.message)
        )


def _write_metadata(nova, server):
    # Record the instance's current status so that state=previous can later
    # return it to that state.
    nova.servers.set_meta(
        server.id,
        {'ansible_previous_state': server.status}
    )


def _suspend_instance(nova, instance_id):
    try:
        server = _get_server(nova, instance_id)
        # Only ACTIVE or PAUSED instances can be suspended.
        if server.status != "ACTIVE":
            if server.status != "PAUSED":
                return (False, "instance is %s" % server.status)
        _write_metadata(nova, server)
        return (True, nova.servers.suspend(server.id))
    except Exception, e:
        return (False, e.message)


def _resume_instance(nova, instance_id):
    try:
        server = _get_server(nova, instance_id)
        if server.status != "SUSPENDED":
            return (False, "instance is %s" % server.status)
        _write_metadata(nova, server)
        return (True, nova.servers.resume(server.id))
    except Exception, e:
        return (False, e.message)


def _stop_instance(nova, instance_id):
    try:
        server = _get_server(nova, instance_id)
        if server.status != "ACTIVE":
            if server.status != "PAUSED":
                return (False, "instance is %s" % server.status)
        _write_metadata(nova, server)
        return (True, nova.servers.stop(server.id))
    except Exception, e:
        return (False, e.message)


def _start_instance(nova, instance_id):
    try:
        server = _get_server(nova, instance_id)
        if server.status != "STOPPED":
            if server.status != "SUSPENDED":
                if server.status != "PAUSED":
                    return (False, "instance is %s" % server.status)
        _write_metadata(nova, server)
        return (True, nova.servers.start(server.id))
    except Exception, e:
        return (False, e.message)


def _previous_state(nova, instance_id):
    """
    Attempts to return an instance to its previous state if it was
    stopped or suspended.
    """
    try:
        server = _get_server(nova, instance_id)
        if server.metadata["ansible_previous_state"] == "ACTIVE":
            if server.status != "ACTIVE":
                if server.status == "STOPPED":
                    return _start_instance(nova, instance_id)
                if server.status == "SUSPENDED":
                    return _resume_instance(nova, instance_id)
        return (False, None)
    except Exception, e:
        return (False, e.message)


def _determine_task(nova, instance_id, action):
    choice = {
        'running': _start_instance,
        'start': _start_instance,
        'on': _start_instance,
        'stopped': _stop_instance,
        'stop': _stop_instance,
        'off': _stop_instance,
        'suspend': _suspend_instance,
        'resume': _resume_instance,
        'previous': _previous_state
    }
    return choice[action](nova, instance_id)


def _many_servers(module, nova, servers):
    """
    Enumerates through the list of servers and calls
    _determine_task for each one.
    """
    results = []
    for server in servers:
        result = _determine_task(nova, server.id, module.params['state'])
        results.append(result)
    module.exit_json(changed=True, output=results)


def _all_instances(module, nova):
    """
    Retrieves the entire list of servers and applies the
    desired state to each one.
    """
    results = []
    for server in nova.servers.list():
        result = _determine_task(nova, server.id, module.params['state'])
        results.append(result)
    module.exit_json(changed=True, output=results)


def _nova_powercontrol(module, nova):
    state = module.params['state']
    if module.params['instance_id'] is not None:
        (status, message) = _determine_task(
            nova,
            module.params['instance_id'],
            state
        )
        if status:
            module.exit_json(changed=True, output=message)
        else:
            module.fail_json(
                msg="Instance %s failed to update with error %s" % (
                    module.params['instance_id'],
                    message
                )
            )
    elif module.params['hypervisor'] is not None:
        try:
            # hypervisors.search() returns the list of matching hypervisors;
            # with servers=True each match should carry the instances hosted
            # on it as name/uuid pairs.
            matches = nova.hypervisors.search(
                module.params['hypervisor'],
                servers=True
            )
            servers = []
            for hypervisor in matches:
                for server in getattr(hypervisor, 'servers', []):
                    servers.append(_get_server(nova, server['uuid']))
            _many_servers(module, nova, servers)
        except Exception:
            module.fail_json(
                msg="Unable to find hypervisor %s" % module.params['hypervisor']
            )
    elif module.params['zone'] is not None:
        try:
            servers = nova.servers.list(
                True,
                {'OS-EXT-AZ:availability_zone': module.params['zone']}
            )
            _many_servers(module, nova, servers)
        except Exception:
            module.fail_json(
                msg="Unable to find zone %s" % module.params['zone']
            )
    elif module.params['all_instances'] is not None:
        _all_instances(module, nova)
    else:
        module.fail_json(
            msg="Unable to proceed: Requires instance_id, hypervisor, "
                "zone, or special flag all_instances."
        )


def main():
    argument_spec = openstack_argument_spec()
    argument_spec.update(dict(
        state=dict(required=True),
        instance_id=dict(),
        hypervisor=dict(),
        zone=dict(),
        all_instances=dict()
    ))
    module = AnsibleModule(argument_spec=argument_spec)

    nova = nova_client.Client(module.params['login_username'],
                              module.params['login_password'],
                              module.params['login_tenant_name'],
                              module.params['auth_url'],
                              region_name=module.params['region_name'],
                              service_type='compute')
    try:
        nova.authenticate()
    except exceptions.Unauthorized, e:
        module.fail_json(
            msg="Invalid OpenStack Nova credentials: %s" % e.message
        )
    except exceptions.AuthorizationFailure, e:
        module.fail_json(
            msg="Unable to authorize user: %s" % e.message
        )

    _nova_powercontrol(module, nova)


# this is magic, see lib/ansible/module_common.py
from ansible.module_utils.basic import *
main()
playbooks/restart_vms.yml (new file, +23)

@@ -0,0 +1,23 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Return instances to their previous power state via nova_powercontrol.
  nova_powercontrol:
    login_username: "{{ lookup('env', 'OC_OS_USERNAME') }}"
    login_password: "{{ lookup('env', 'OC_OS_PASSWORD') }}"
    login_tenant_name: "{{ lookup('env', 'OC_OS_TENANT_NAME') }}"
    auth_url: "{{ lookup('env', 'OC_OS_AUTH_URL') }}"
    all_instances: yes
    state: previous
  when: use_nova_powercontrol is defined
@@ -15,13 +15,25 @@
- name: Ensure libvirt-bin is running
  sudo: yes
  service: name=libvirt-bin state=started enabled=yes
  when: use_nova_powercontrol is not defined
- name: Collect list of VMs
  sudo: yes
  virt: command=list_vms
  register: virtual_machines
  when: use_nova_powercontrol is not defined
- name: Issue graceful shutdowns
  sudo: yes
  virt: state=shutdown name={{item}}
  when: use_nova_powercontrol is not defined
  with_items: virtual_machines.list_vms
- name: Tell the cloud to shutdown via nova_powercontrol.
  nova_powercontrol:
    login_username: "{{ lookup('env', 'OC_OS_USERNAME') }}"
    login_password: "{{ lookup('env', 'OC_OS_PASSWORD') }}"
    login_tenant_name: "{{ lookup('env', 'OC_OS_TENANT_NAME') }}"
    auth_url: "{{ lookup('env', 'OC_OS_AUTH_URL') }}"
    all_instances: yes
    state: stopped
  when: use_nova_powercontrol is defined
- name: Pausing for 60 seconds to give VMs time to stop.
  pause: seconds=60

scripts/retrieve_oc_vars (new file, +7)

@@ -0,0 +1,7 @@
#!/bin/bash
set -eux
# Execute from the tripleo-ansible folder
ansible $(heat output-show overcloud controller0IP|sed s/\"//g) -s -u heat-admin -m fetch -a "dest=/root/oc-stackrc src=/root/stackrc flat=yes" -i plugins/inventory/heat.py
cat /root/oc-stackrc | sed s/OS_/OC_OS_/ > /root/oc-stackrc-tripleo-ansible

echo " **** Before Proceeding, Execute: source /root/oc-stackrc-tripleo-ansible"