From 1198e1c7f40909b28f105213bfd9363f112ed113 Mon Sep 17 00:00:00 2001 From: Peter Matulis Date: Fri, 18 Oct 2019 12:29:33 -0400 Subject: [PATCH] Review and refactor app-upgrade-openstack.rst Change-Id: Ibeb901cc7f6048742f99be93b05a5f35fb6adffb --- deploy-guide/source/app-upgrade-openstack.rst | 737 ++++++++++-------- 1 file changed, 430 insertions(+), 307 deletions(-) diff --git a/deploy-guide/source/app-upgrade-openstack.rst b/deploy-guide/source/app-upgrade-openstack.rst index 3d553fd..7fab328 100644 --- a/deploy-guide/source/app-upgrade-openstack.rst +++ b/deploy-guide/source/app-upgrade-openstack.rst @@ -1,161 +1,232 @@ +============================== Appendix B: OpenStack Upgrades ============================== Overview -------- -This document outlines approaches to upgrading OpenStack using the charms. +This document outlines how to upgrade a Juju-deployed OpenStack cloud. + +.. warning:: + + Upgrading an OpenStack cloud is not risk-free. The procedures outlined in + this guide should first be tested in a pre-production environment. + +Please read the following before continuing: + +- the OpenStack charms `Release Notes`_ for the corresponding current and + target versions of OpenStack +- the `Known OpenStack upgrade issues`_ section in this document + +Definitions +----------- + +Charm upgrade + An upgrade of the charm software which is used to deploy and manage + OpenStack. This includes charms that manage applications which are not + technically part of the OpenStack project such as Rabbitmq and MySQL. + +OpenStack upgrade + An upgrade of the OpenStack software (packages) that are installed and + managed by the charms. Each OpenStack service is upgraded (by the operator) + via its corresponding (and upgraded) charm. This constitutes an upgrade from + one major release to the next (e.g. Stein to Train). + +Ubuntu Server package upgrade + An upgrade of the software packages on a Juju machine that are not part of + the OpenStack project (e.g. 
kernel modules, QEMU binaries, KVM kernel + module). + +Series upgrade + An upgrade of the operating system (Ubuntu) on a Juju machine (e.g. Xenial to + Bionic). See appendix `Series upgrade`_ for more information. + +Charm upgrades +-------------- + +All charms should be upgraded to their latest stable revision prior to +performing the OpenStack upgrade. The Juju command to use is +:command:`upgrade-charm`. For extra guidance see `Upgrading applications`_ +in the Juju documentation. + +Although it may be possible to upgrade some charms in parallel it is +recommended that the upgrades be performed in series (i.e. one at a time). +Verify a charm upgrade before moving on to the next. + +In terms of the upgrade order, begin with 'keystone'. After that, the rest of +the charms can be upgraded in any order. + +Do check the `Release Notes`_ for any special instructions regarding charm +upgrades. + +.. caution:: + + Any software changes that may have (exceptionally) been made to a charm + currently running on a unit will be overwritten by the target charm during + the upgrade. + +Before upgrading, a (partial) output to :command:`juju status` may look like: + +.. code:: + + App Version Status Scale Charm Store Rev OS Notes + keystone 15.0.0 active 1 keystone jujucharms 306 ubuntu + + Unit Workload Agent Machine Public address Ports Message + keystone/0* active idle 3/lxd/1 10.248.64.69 5000/tcp Unit is ready + +Here, as deduced from the Keystone **service** version of '15.0.0', the cloud +is running Stein. The 'keystone' **charm** however shows a revision number of +'306'. Upon charm upgrade, the service version will remain unchanged but the +charm revision is expected to increase in number. + +So to upgrade this 'keystone' charm (to the most recent promulgated version in +the Charm Store): + +.. code:: bash + + juju upgrade-charm keystone + +The upgrade progress can be monitored via :command:`juju status`. Any +encountered problem will surface as a message in its output. 
This sample +(partial) output reflects a successful upgrade: + +.. code:: + + App Version Status Scale Charm Store Rev OS Notes + keystone 15.0.0 active 1 keystone jujucharms 309 ubuntu + + Unit Workload Agent Machine Public address Ports Message + keystone/0* active idle 3/lxd/1 10.248.64.69 5000/tcp Unit is ready + +This shows that the charm now has a revision number of '309' but Keystone +itself remains at '15.0.0'. + +OpenStack upgrades +------------------ + +Go through each of the following sections to ensure a trouble-free OpenStack +upgrade. .. note:: - Upgrading an OpenStack cloud is not without risk; upgrades should be tested - in pre-production testing environments prior to production deployment - upgrades. + The charms only support single-step OpenStack upgrades (N+1). That is, to + upgrade two releases forward you need to upgrade twice. You cannot skip + releases when upgrading OpenStack with charms. -Definitions and Terms ---------------------- +It may be worthwhile to read the upstream OpenStack `Upgrades`_ guide. -Charm Upgrade +Release Notes ~~~~~~~~~~~~~ -This is an upgrade of the charm software which is used to deploy and manage -OpenStack. This will include charms that manage applications which are not -part of the OpenStack project such as Rabbitmq and MySQL. +The OpenStack charms `Release Notes`_ for the corresponding current and target +versions of OpenStack **must** be consulted for any special instructions. In +particular, pay attention to services and/or configuration options that may be +retired, deprecated, or changed. -OpenStack Upgrade -~~~~~~~~~~~~~~~~~ +Manual intervention +~~~~~~~~~~~~~~~~~~~ -This is an upgrade of the OpenStack software (packages) that are installed -and managed by the charms. +It is intended that the now upgraded charms are able to accommodate all +software changes associated with the corresponding OpenStack services to be +upgraded. 
A new charm will also strive to produce a service as similarly +configured to the pre-upgraded service as possible. Nevertheless, there are +still times when intervention on the part of the operator may be needed, such +as when: -Ubuntu Server Package Upgrade +- a service is removed, added, or replaced +- a software bug affecting the OpenStack upgrade is present in the new charm + +All known issues requiring manual intervention are documented in section `Known +OpenStack upgrade issues`_. You **must** look these over. + +Verify the current deployment ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -This is an upgrade of the Ubuntu packages on the server that are not part of -the OpenStack project such as kernel modules, QEMU binaries, KVM kernel module -etc. +Confirm that the output for the :command:`juju status` command of the current +deployment is error-free. In addition, if monitoring is in use (e.g. Nagios), +ensure that all alerts have been resolved. This is to make certain that any +issues that may appear after the upgrade are not for pre-existing problems. -Ubuntu Release Upgrade -~~~~~~~~~~~~~~~~~~~~~~ +Perform a database backup +~~~~~~~~~~~~~~~~~~~~~~~~~ -This is an upgrade from one Ubuntu release to the next. - -Testing -------- - -All procedures outlined below should be tested in a non-production environment -first. - -Skipping Releases or Fast Forward Upgrade ------------------------------------------ - -The charms support stepped OpenStack version upgrades (N+1). For example: -Ocata to Pike, then Pike to Queens, Queens to Rocky and so on. - -This stepped N+1 approach in charms is mature, well-tested, and can be used -back-to-back to achieve N+N upgrade results. - -Skipping releases is not supported by many upstream OpenStack projects, and -it is not supported by the charms. - -"Fast-forward-upgrade" is also not supported by the charms. 
FFU/FFWD is an -upgrade approach where the control plane services are stepped through N+1+1+1 -upgrades, typically to achieve an N+3 upgrade result. - -1. Charm Upgrades ------------------ - -All charms should be upgraded to the latest stable charm revision before -performing an OpenStack upgrade. It is recommended to upgrade the Keystone -charm first. The order of upgrading subsequent charms is usually not important -but check the release notes for each release to ensure there are no -special requirements. - -To upgrade a charm that was deployed from the charm store: +Before making any changes to cloud services perform a backup of the cloud +database by running the ``backup`` action on any single percona-cluster unit: .. code:: bash - juju upgrade-charm + juju run-action --wait percona-cluster/0 backup - -The progress of the upgrade can be monitored by checking the workload status -of the charm which can been with **juju status**. Once the upgrade is complete -the charm status should contain the message 'Unit is ready'. The version of -the deployed software can also been seen from **juju status**: +Now transfer the backup directory to the Juju client with the intention of +subsequently storing it somewhere safe. This command will grab **all** existing +backups: .. code:: bash - juju status - ... - App Version Status Scale Charm Store Rev OS Notes - keystone 11.0.3 active 1 keystone local 0 ubuntu - ... + juju scp -- -r percona-cluster/0:/opt/backups/mysql /path/to/local/directory -This shows that the deployed version of keystone is 11.0.3 (Ocata) +Permissions may first need to be altered on the remote machine. -If the Juju controller is resource constrained it may be beneficial to do the -charm upgrades in series rather than in parallel. After each charm upgrade -check for any unforeseen errors reported in **juju status** before proceeding. +Archive old database data +~~~~~~~~~~~~~~~~~~~~~~~~~ -2. 
Pre-Upgrade Tasks --------------------- - -2.1 Release Notes -~~~~~~~~~~~~~~~~~ - -Check the release notes for the charm releases for any special instructions. - -2.2 Check current deployment -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Check for any charm errors in **juju status**. If a monitor is in use like -Nagios then make sure any alerts have been cleared before proceeding. This is -to ensure that alerts after the upgrade are not pre-existing problems. - -Also ensure that the current charms must not do not contain any customisations -since that is not supported and they will be overwritten by the upgrade. - -2.3 Database row archiving -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -During the upgrade, database migrations will be run. These can be significantly -sped up by archiving any stale data (such as deleted instances). To perform the -archive of nova data run the nova-cloud-controller action: +During the upgrade, database migrations will be run. This operation can be +optimised by first archiving any stale data (e.g. deleted instances). Do this +by running the ``archive-data`` action on any single nova-cloud-controller +unit: .. code:: bash - juju run-action nova-cloud-controller/0 archive-data + juju run-action --wait nova-cloud-controller/0 archive-data This action may need to be run multiple times until the action output reports -'Nothing was archived' +'Nothing was archived'. -2.4 Purge old compute service entries -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Purge old compute service entries +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Old service entries for compute services on units which are no longer part of -the model should be purged before upgrade. - -Any old service entries will show as 'down' and on machines no longer in the -model when looking at the current list of compute services: +Old compute service entries for units which are no longer part of the model +should be purged before the upgrade. 
These entries will show as 'down' (and be
+hosted on machines no longer in the model) in the current list of compute
+services:

 .. code:: bash

-   openstack compute service list
+   openstack compute service list

-Services can be removed using the 'compute service delete' command:
+To remove a compute service:

 .. code:: bash

-   openstack compute service delete
+   openstack compute service delete

+Disable unattended-upgrades
+~~~~~~~~~~~~~~~~~~~~~~~~~~~

-3. Upgrade Order
-----------------
+When performing a service upgrade on a unit that hosts multiple principal
+charms (e.g. ``nova-compute`` and ``ceph-osd``), ensure that
+``unattended-upgrades`` is disabled on the underlying machine for the duration
+of the upgrade process. This is to prevent the other services from being
+upgraded outside of Juju's control. On a unit run:

-The charms are grouped together below. The ordering of upgrade within a group
-does not matter but all the charms in each group should be upgraded before
-moving on to the next group. Any release note guidance overrides the order
-listed here.
+.. code:: bash
+
+   sudo dpkg-reconfigure -plow unattended-upgrades
+
+Upgrade order
+~~~~~~~~~~~~~
+
+The charms are put into groups to indicate the order in which their
+corresponding OpenStack services should be upgraded. The order within a group
+is unimportant. What matters is that all the charms within the same group are
+acted on before those in the next group (i.e. upgrade all charms in group 2
+before moving on to group 3). Any `Release Notes`_ guidance overrides the
+information listed here. You may also consult the upstream documentation on the
+subject: `Update services`_.
+
+Each service represented by a charm in the below table will need to be upgraded
+individually.

 +-------+-----------------------+---------------+
 | Group | Charm Name            | Charm Type    |
 +-------+-----------------------+---------------+
@@ -209,244 +280,282 @@ listed here.
 |   3   | nova-compute          | Compute       |
 +-------+-----------------------+---------------+

-4. 
Performing The Upgrade -------------------------- +.. important:: -If the service to be upgraded is in a highly-available cluster then the best -way to minimise service interruption is to follow the "HA with pause/resume" -instructions below. If there are multiple units of the service but they are -not clustered then follow the "Action managed" instructions. Finally, if there -is a single unit then follow "Application one-shot". + OpenStack services whose software is not a part of the Ubuntu Cloud Archive + are not represented in the above list. This type of software can only have + their major versions changed during a series (Ubuntu) upgrade on the + associated unit. Common charms where this applies are ``ntp``, + ``memcached``, ``percona-cluster``, and ``rabbitmq-server``. -Some parts of the upgrade, like database migrations, only need to run once per -application and these tasks are handled by the lead unit. It is advisable that -these tasks are run first (this is not applicable for one-shot deployments). To -achieve this run the upgrade on the lead unit first. To check which unit is the -lead unit either check which unit has a '*' next to it in **juju status** or -run: +Perform the upgrade +~~~~~~~~~~~~~~~~~~~ + +The essence of a charmed OpenStack service upgrade is a change of the +corresponding machine software sources so that a more recent combination of +Ubuntu release and OpenStack release is used. This combination is based on the +`Ubuntu Cloud Archive`_ and translates to a configuration known as the "cloud +archive pocket". It takes on the following syntax: + +``cloud:-`` + +For example: + +``cloud:bionic-train`` + +There are three methods available for performing an OpenStack service upgrade. +The appropriate method is chosen based on the actions supported by the charm. +Actions for a charm can be listed in this way: .. 
code:: bash - juju run --application application-name is-leader + juju actions +All-in-one +^^^^^^^^^^ + +The "all-in-one" method upgrades an application immediately. Although it is the +quickest route, it can be harsh when applied in the context of multi-unit +applications. This is because all the units are upgraded simultaneously, and is +likely to cause a transient service outage. This method must be used if the +application has a sole unit. + +.. attention:: + + The "all-in-one" method should only be used when the charm does not + support the ``openstack-upgrade`` action. + +The syntax is: + +.. code:: bash + + juju config openstack-origin=cloud: + +Charms whose services are not technically part of the OpenStack project will +use a different charm option. The Ceph charms are a classic example: + +.. code:: bash + + juju config source=cloud: + +So to upgrade Cinder across all units (currently running Bionic) from Stein to +Train: + +.. code:: bash + + juju config cinder openstack-origin=cloud:bionic-train + +Single-unit +^^^^^^^^^^^ + +The "single-unit" method builds upon the "all-in-one" method by allowing for +the upgrade of individual units in a controlled manner. It requires the +enablement of charm option ``action-managed-upgrade`` and the charm action +``openstack-upgrade``. + +.. attention:: + + The "single-unit" method should only be used when the charm does not + support the ``pause`` and ``resume`` actions. + +As a general rule, whenever there is the possibility of upgrading units +individually, **always upgrade the application leader first.** The leader is +the unit with a ***** next to it in the :command:`juju status` output. It can +also be discovered via the CLI: + +.. code:: bash + + juju run --application is-leader + +For example, to upgrade a three-unit glance application from Stein to Train +where ``glance/1`` is the leader: + +.. 
code:: bash + + juju config glance action-managed-upgrade=True + juju config glance openstack-origin=cloud:bionic-train + + juju run-action --wait glance/1 openstack-upgrade + juju run-action --wait glance/0 openstack-upgrade + juju run-action --wait glance/2 openstack-upgrade + +.. note:: + + The ``openstack-upgrade`` action is only available for charms whose services + are part of the OpenStack project. For instance, you will need to use the + "all-in-one" method for the Ceph charms. + +Paused-single-unit +^^^^^^^^^^^^^^^^^^ + +The "paused-single-unit" method extends the "single-unit" method by allowing +for the upgrade of individual units *while paused*. Additional charm +requirements are the ``pause`` and ``resume`` actions. This method provides +more versatility by allowing a unit to be removed from service, upgraded, and +returned to service. Each of these are distinct events whose timing is chosen +by the operator. + +.. attention:: + + The "paused-single-unit" method is the recommended OpenStack service upgrade + method. + +For example, to upgrade a three-unit nova-compute application from Stein to +Train where ``nova-compute/0`` is the leader: + +.. code:: bash + + juju config nova-compute action-managed-upgrade=True + juju config nova-compute openstack-origin=cloud:bionic-train + + juju run-action nova-compute/0 --wait pause + juju run-action nova-compute/0 --wait openstack-upgrade + juju run-action nova-compute/0 --wait resume + + juju run-action nova-compute/1 --wait pause + juju run-action nova-compute/1 --wait openstack-upgrade + juju run-action nova-compute/1 --wait resume + + juju run-action nova-compute/2 --wait pause + juju run-action nova-compute/2 --wait openstack-upgrade + juju run-action nova-compute/2 --wait resume + +In addition, this method also permits a possible hacluster subordinate unit, +which typically manages a VIP, to be paused so that client traffic will not +flow to the associated parent unit while its upgrade is underway. + +.. 
attention::
+
+   When there is an hacluster subordinate unit then it is recommended to always
+   take advantage of the "paused-single-unit" method's ability to pause it
+   before upgrading the parent unit.
+
+For example, to upgrade a three-unit keystone application from Stein to Train
+where ``keystone/2`` is the leader:
+
+.. code:: bash
+
+   juju config keystone action-managed-upgrade=True
+   juju config keystone openstack-origin=cloud:bionic-train
+
+   juju run-action keystone-hacluster/1 --wait pause
+   juju run-action keystone/2 --wait pause
+   juju run-action keystone/2 --wait openstack-upgrade
+   juju run-action keystone/2 --wait resume
+   juju run-action keystone-hacluster/1 --wait resume
+
+   juju run-action keystone-hacluster/2 --wait pause
+   juju run-action keystone/1 --wait pause
+   juju run-action keystone/1 --wait openstack-upgrade
+   juju run-action keystone/1 --wait resume
+   juju run-action keystone-hacluster/2 --wait resume
+
+   juju run-action keystone-hacluster/0 --wait pause
+   juju run-action keystone/0 --wait pause
+   juju run-action keystone/0 --wait openstack-upgrade
+   juju run-action keystone/0 --wait resume
+   juju run-action keystone-hacluster/0 --wait resume

 .. warning::

-   Extra care must be taken when performing OpenStack upgrades in an
-   environment with a converged architecture. If two principle charms have
-   been placed on the same unit (e.g. nova-compute and ceph-osd), then
-   upgrading one of the charms will cause the underlying system to be updated
-   to point at packages from the next Openstack release. If the machine has
-   unattended-upgrades enabled, which is the default in xenial and bionic, the
-   second charm may have its packages updated outside of juju's control. We
-   recommend disabling unattended upgrades for the duration of the upgrade
-   process, and to renable unattended-upgrades once complete.
+   The hacluster subordinate unit number may not necessarily match its parent
+   unit number. 
As in the above example, only for keystone/0 do the unit + numbers correspond (i.e. keystone-hacluster/0 is the subordinate unit). +Verify the new deployment +~~~~~~~~~~~~~~~~~~~~~~~~~ -HA with pause/resume -~~~~~~~~~~~~~~~~~~~~ - -The majority of charms support pause and resume actions. These actions can be -used to place units of a charm into a state where maintenance operations can -be carried out. Using these actions along with action managed upgrades allows -a charm to be removed from service, upgraded and returned to service. - -For example, to upgrade a three-unit nova-cloud-controller application -from Ocata to Pike where nova-cloud-controller/2 is the leader: - -.. code:: bash - - juju config nova-cloud-controller action-managed-upgrade=True - juju config nova-cloud-controller openstack-origin='cloud:xenial-pike' - - juju run-action nova-cloud-controller-hacluster/2 --wait pause - juju run-action nova-cloud-controller/2 --wait pause - juju run-action nova-cloud-controller/2 --wait openstack-upgrade - juju run-action nova-cloud-controller/2 --wait resume - juju run-action nova-cloud-controller-hacluster/2 --wait resume - juju run-action nova-cloud-controller-hacluster/1 --wait pause - juju run-action nova-cloud-controller/1 --wait pause - juju run-action nova-cloud-controller/1 --wait openstack-upgrade - juju run-action nova-cloud-controller/1 --wait resume - juju run-action nova-cloud-controller-hacluster/1 --wait resume - juju run-action nova-cloud-controller-hacluster/0 --wait pause - juju run-action nova-cloud-controller/0 --wait pause - juju run-action nova-cloud-controller/0 --wait openstack-upgrade - juju run-action nova-cloud-controller/0 --wait resume - juju run-action nova-cloud-controller-hacluster/0 --wait resume - - -.. warning:: - - The hacluster unit numbers may not match the parent - unit number. In the example above nova-cloud-controller-hacluster/2 might - not be the hacluster subordinate of nova-cloud-controller/2. 
You should - always pause the hacluster subordinate unit respective to the parent unit - you wish to upgrade, starting from the leader. - - -Action managed -~~~~~~~~~~~~~~ - -If there are multiple units of an application then each unit can be upgraded -one at a time using Juju actions. This allows for rolling upgrades. To use -this feature the charm configuration option action-managed-upgrade must be set -to True. - -For example to upgrade a three node keystone service from Ocata to Pike where -keystone/1 is the leader: - -.. code:: bash - - juju config keystone action-managed-upgrade=True - juju config keystone openstack-origin='cloud:xenial-pike' - juju run-action keystone/1 --wait openstack-upgrade - juju run-action keystone/0 --wait openstack-upgrade - juju run-action keystone/2 --wait openstack-upgrade - - - -Application one-shot -~~~~~~~~~~~~~~~~~~~~ - -This is the simplest and quickest way to perform the upgrade. Using this method -will cause all the units in the application to be upgraded at the same time. -This is likely to cause a service outage while the upgrade completes. If there -is only one unit in the application then this is the only option. - -.. code:: bash - - juju config keystone openstack-origin='cloud:xenial-pike' - - -5. Post-Upgrade Tasks ---------------------- - -Check **juju status** and any monitoring solution for errors. - -Application-specific Upgrade notes ----------------------------------- - -Ceph -~~~~ - -Ensure that Ceph services are upgraded before services that consume Ceph -resources, such as cinder, glance and nova-compute: - -.. code:: - - juju config ceph-mon source=cloud:bionic-train - juju config ceph-osd source=cloud:bionic-train - -Known Issues to be aware of during Upgrades -------------------------------------------- - -Before doing an *OpenStack* upgrade (rather than a charm upgrade), the release -notes for the original and target versions of OpenStack should be read. 
In -particular pay attention to services or configuration parameters that have -retired, deprecated or changed. Wherever possible, the latest version of a -charm has code to handle almost all changes such that the resultant system -should be configured in the same way. However, removed, added or replaced -services **will** require manual intervention. - -When charms *can't* perform a change, either due to a bug in the charm (i.e. a -system configuration that the charms haven't been programmed to handle) or -because *at the individual charm level* the charm can't change the service -(i.e. when a service is replaced with another service, a *different* charm -would be needed). - -However, the following list is known issues that an operator may encounter that -the charm does not automatically take care of, along with mitigation strategies -to resolve the situation. +Check for errors in :command:`juju status` output and any monitoring service. +Known OpenStack upgrade issues +------------------------------ Nova RPC version mismatches ~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Reference Bug `#1825999: [upgrade] versions N and N+1 are not compatible -`_ - -If it is not possible to upgrade neutron and nova within the same maintenance +If it is not possible to upgrade Neutron and Nova within the same maintenance window, be mindful that the RPC communication between nova-cloud-controller, -nova-compute and nova-api-metadata is very likely to present several errors +nova-compute, and nova-api-metadata is very likely to cause several errors while those services are not running the same version. This is due to the fact that currently those charms do not support RPC version pinning or auto-negotiation. +See bug `LP #1825999`_. 
neutron-gateway charm: upgrading from Mitaka to Newton ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Reference Bug `#1809190: switching from external-network-id and external-port -to data-port and bridge-mappings does not remove incorrect nics from bridges -`_ +Between the Mitaka and Newton OpenStack releases, the ``neutron-gateway`` charm +added two options, ``bridge-mappings`` and ``data-port``, which replaced the +(now) deprecated ``ext-port`` option. This was to provide for more control over +how ``neutron-gateway`` can configure external networking. Unfortunately, the +charm was only designed to work with either ``ext-port`` (no longer +recommended) *or* ``bridge-mappings`` and ``data-port``. -Between the mitaka and newton OpenStack releases, the ``neutron-gateway`` charm -add two options, ``bridge-mappings`` and ``data-port``, which replaced the -(now) deprecated ``ext-port`` option. This was to provide more control over -how ``neutron-gateway`` can configure external networking. - -The charm was designed so that it would work with either ``data-port`` (no -longer recommended) *or* ``bridge-mappings`` and ``data-port``. Unfortunately, -when upgrading from OpenStack Mitaka to Newton the referenced bug above was -been encountered, and therefore may require manual intervention to resolve the -issue. +See bug `LP #1809190`_. cinder/ceph topology change: upgrading from Newton to Ocata ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +If ``cinder`` is directly related to ``ceph-mon`` rather than via +``cinder-ceph`` then upgrading from Newton to Ocata will result in the loss of +some block storage functionality, specifically live migration and snapshotting. +To remedy this situation the deployment should migrate to using the cinder-ceph +charm. This can be done after the upgrade to Ocata. + .. warning:: - Do not attempt to migrate a deployment with existing volumes to use the - cinder-ceph charm prior to Ocata. 
+ Do not attempt to migrate a deployment with existing volumes to use the + ``cinder-ceph`` charm prior to Ocata. -If cinder is directly related to ceph-mon rather than via the cinder-ceph -charm then upgrading from Newton to Ocata will result in the loss of some -block storage functionality, specifically live migration and snapshotting. -To remedy this situation the deployment should migrate to using the -cinder-ceph charm, this can be done after the upgrade to Ocata. +The intervention is detailed in the below three steps. Step 0: Check existing configuration -++++++++++++++++++++++++++++++++++++ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Confirm existing volumes are in rbd pool called 'cinder' +Confirm existing volumes are in an RBD pool called 'cinder': .. code:: bash - $ juju run --unit cinder/0 "rbd --name client.cinder -p cinder ls" - volume-b45066d3-931d-406e-a43e-ad4eca12cf34 - volume-dd733b26-2c56-4355-a8fc-347a964d5d55 + juju run --unit cinder/0 "rbd --name client.cinder -p cinder ls" + +Sample output: + +.. code:: + + volume-b45066d3-931d-406e-a43e-ad4eca12cf34 + volume-dd733b26-2c56-4355-a8fc-347a964d5d55 Step 1: Deploy new topology -+++++++++++++++++++++++++++ +^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Deploy cinder-ceph charm and set the rbd-pool-name to match the -pool that any existing volumes are in (see above): +Deploy the ``cinder-ceph`` charm and set the 'rbd-pool-name' to match the pool +that any existing volumes are in (see above): .. 
code:: bash - juju deploy --config rbd-pool-name=cinder cs:~openstack-charmers-next/cinder-ceph - juju add-relation cinder cinder-ceph - juju add-relation cinder-ceph ceph-mon - juju remove-relation cinder ceph-mon - juju add-relation cinder-ceph nova-compute + juju deploy --config rbd-pool-name=cinder cs:~openstack-charmers-next/cinder-ceph + juju add-relation cinder cinder-ceph + juju add-relation cinder-ceph ceph-mon + juju remove-relation cinder ceph-mon + juju add-relation cinder-ceph nova-compute Step 2: Update volume configuration -+++++++++++++++++++++++++++++++++++ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -The existing volumes now need to be updated to associate them -with the newly defined cinder-ceph backend: +The existing volumes now need to be updated to associate them with the newly +defined cinder-ceph backend: .. code:: bash - juju run-action cinder/0 rename-volume-host currenthost='cinder' \ - newhost='cinder@cinder-ceph#cinder.volume.drivers.rbd.RBDDriver' + juju run-action cinder/0 rename-volume-host currenthost='cinder' \ + newhost='cinder@cinder-ceph#cinder.volume.drivers.rbd.RBDDriver' Placement charm and nova-cloud-controller: upgrading from Stein to Train ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -As of Train, the placement API is managed by the new placement charm and is no -longer managed by the nova-cloud-controller charm. The upgrade to Train +As of Train, the placement API is managed by the new ``placement`` charm and is +no longer managed by the ``nova-cloud-controller`` charm. The upgrade to Train therefore requires some coordination to transition to the new API endpoints. Prior to upgrading nova-cloud-controller to Train, the placement charm must be @@ -457,22 +566,36 @@ the placement charm will migrate existing placement tables from the nova_api database to a new placement database. Once the new placement endpoints are registered, nova-cloud-controller can be resumed. 
-Here's an example of the steps just described:
+Here's an example of the steps just described where ``nova-cloud-controller/0``
+is the leader:

-.. code::
+.. code:: bash

-   juju deploy --series bionic --config openstack-origin=cloud:bionic-train cs:placement
-   juju run-action nova-cloud-controller/leader pause
-   juju add-relation placement mysql
-   juju add-relation placement keystone
-   juju add-relation placement nova-cloud-controller
-   openstack endpoint list # ensure placement endpoints are listening on new placment IP address
-   juju run-action nova-cloud-controller/leader resume
+   juju deploy --series bionic --config openstack-origin=cloud:bionic-train cs:placement
+   juju run-action nova-cloud-controller/0 pause
+   juju add-relation placement mysql
+   juju add-relation placement keystone
+   juju add-relation placement nova-cloud-controller
+   openstack endpoint list # ensure placement endpoints are listening on new placement IP address
+   juju run-action nova-cloud-controller/0 resume

 Only after these steps have been completed can nova-cloud-controller be
-upgraded. Here we upgrade all units simultaneously but see `HA with
-pause/resume`_ for a more controlled approach:
+upgraded. Here we upgrade all units simultaneously but see the
+`Paused-single-unit`_ service upgrade method for a more controlled approach:

-.. code::
+.. code:: bash

-   juju config nova-cloud-controller openstack-origin=cloud:bionic-train
+   juju config nova-cloud-controller openstack-origin=cloud:bionic-train
+
+.. LINKS
+
+.. _Series upgrade: app-series-upgrade
+.. _Release Notes: https://docs.openstack.org/charm-guide/latest/release-notes.html
+.. _Upgrading applications: https://jaas.ai/docs/upgrading-applications
+.. _Ubuntu Cloud Archive: https://wiki.ubuntu.com/OpenStack/CloudArchive
+.. _Upgrades: https://docs.openstack.org/operations-guide/ops-upgrades.html
+.. _Update services: https://docs.openstack.org/operations-guide/ops-upgrades.html#update-services
+
+.. BUGS
+.. 
_LP #1825999: https://bugs.launchpad.net/charm-nova-compute/+bug/1825999 +.. _LP #1809190: https://bugs.launchpad.net/charm-neutron-gateway/+bug/1809190