Miscellaneous doc improvements

Change-Id: Ie4425ac60bfc649c48a08308ede478b1fd6e7c9c
Peter Matulis 2020-08-05 13:59:11 -04:00
parent b00b122f2a
commit 67151f74b3
6 changed files with 26 additions and 21 deletions


@@ -208,9 +208,9 @@ built-in capabilities, and can be called *natively HA*.
 .. important::
-   The nova-compute application cannot be made highly available. :doc:`Charmed
-   Masakari <app-masakari>` implements cloud instance HA but is not
-   production-ready at this time.
+   The nova-compute application cannot be made highly available. However, see
+   :doc:`Charmed Masakari <app-masakari>` for an implementation of cloud
+   instance HA.
 
 Native HA
 ~~~~~~~~~
@@ -265,8 +265,7 @@ Generic deployment commands for a three-unit cluster are provided below.
 .. code-block:: none
 
-   juju deploy --config vip=<ip-address> <charm-name>
-   juju add-unit -n 2 <charm-name>
+   juju deploy -n 3 --config vip=<ip-address> <charm-name>
    juju deploy --config cluster_count=3 hacluster <charm-name>-hacluster
    juju add-relation <charm-name>-hacluster:ha <charm-name>:ha
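As an illustration, the generic commands above might be filled in as follows (a sketch only; the 'keystone' charm and the VIP ``10.0.0.100`` are hypothetical values, not taken from this guide):

```shell
# Hypothetical three-unit HA deployment of keystone behind a VIP.
juju deploy -n 3 --config vip=10.0.0.100 keystone
juju deploy --config cluster_count=3 hacluster keystone-hacluster
juju add-relation keystone-hacluster:ha keystone:ha
```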


@@ -336,7 +336,7 @@ cluster
 2. Shut down all components/clients consuming Ceph before shutting down Ceph
    components to avoid application-level data loss.
-3. Set the ``noout`` option on the cluster a single MON unit, to prevent data
+3. Set the cluster-wide ``noout`` option, on any MON unit, to prevent data
    rebalancing from occurring when OSDs start disappearing from the network::
 
       juju run-action --wait ceph-mon/1 set-noout
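When the cluster is later powered back on, the flag needs to be cleared again so that rebalancing can resume (a sketch, assuming the ceph-mon charm provides a corresponding 'unset-noout' action):

```shell
# Clear the cluster-wide noout flag once all OSDs are back online.
juju run-action --wait ceph-mon/1 unset-noout
```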


@@ -57,10 +57,10 @@ shared storage must be considered.
 The mechanics of instance evacuation are now described:
 
-Masakari Monitors, on a peer hypervisor, detects that its peer is unavailable
-and notifies the Masakari API server. This in turn triggers the Masakari engine
-to initiate a failover of the instance via Nova. Assuming that Nova concurs
-that the hypervisor is absent, it will attempt to start the instance on another
+Masakari Monitors, on a hypervisor, detects that its peer is unavailable and
+notifies the Masakari API server. This in turn triggers the Masakari engine to
+initiate a failover of the instance via Nova. Assuming that Nova concurs that
+the hypervisor is absent, it will attempt to start the instance on another
 hypervisor. At this point there are two instances competing for the same disk
 image, which can lead to data corruption.


@@ -319,7 +319,7 @@ with scripting).
 By default, an LTS release will not have an upgrade candidate until the "point
 release" of the next LTS is published. You can override this policy by using
-the ``-d`` (development) option with the ``do-release-upgrade`` command.
+the ``-d`` (development) option with the :command:`do-release-upgrade` command.
 
 .. caution::
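Concretely, the override described above is a single command run on the node being upgraded (a sketch; assumes the standard Ubuntu release-upgrader tooling is installed):

```shell
# Force an upgrade to the next LTS before its first point release,
# overriding the default upgrade-candidate policy.
sudo do-release-upgrade -d
```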


@@ -49,8 +49,13 @@ performing the OpenStack upgrade. The Juju command to use is
 :command:`upgrade-charm`. For extra guidance see `Upgrading applications`_
 in the Juju documentation.
 
+.. note::
+
+   A charm upgrade affects all corresponding units; per-unit upgrades are not
+   currently supported.
+
 Although it may be possible to upgrade some charms in parallel it is
-recommended that the upgrades be performed in series (i.e. one at a time).
+recommended that the upgrades be performed sequentially (i.e. one at a time).
 Verify a charm upgrade before moving on to the next.
 
 In terms of the upgrade order, begin with 'keystone'. After that, the rest of
@@ -334,12 +339,13 @@ Perform the upgrade
 The essence of a charmed OpenStack service upgrade is a change of the
 corresponding machine software sources so that a more recent combination of
 Ubuntu release and OpenStack release is used. This combination is based on the
-`Ubuntu Cloud Archive`_ and translates to a configuration known as the "cloud
-archive pocket". It takes on the following syntax:
+`Ubuntu Cloud Archive`_ and translates to a "cloud archive OpenStack release".
+It takes on the following syntax:
 
-``cloud:<ubuntu series>-<openstack-release>``
+``<ubuntu series>-<openstack-release>``
 
-For example, for the 'bionic-train' pocket:
+For example, the 'bionic-train' UCA release is expressed during configuration
+as:
 
 ``cloud:bionic-train``
@@ -369,7 +375,7 @@ The syntax is:
 .. code:: bash
 
-   juju config <openstack-charm> openstack-origin=cloud:<cloud-archive-pocket>
+   juju config <openstack-charm> openstack-origin=cloud:<cloud-archive-release>
 
 Charms whose services are not technically part of the OpenStack project will
 use the ``source`` charm option instead. The Ceph charms are a classic example:
@@ -381,7 +387,7 @@ use the ``source`` charm option instead. The Ceph charms are a classic example:
 .. note::
 
    The ceph-osd and ceph-mon charms are able to maintain service availability
-   during their upgrade.
+   during the upgrade.
 
 So to upgrade Cinder across all units (currently running Bionic) from Stein to
 Train:
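Given the ``openstack-origin`` syntax shown earlier, that Cinder upgrade presumably comes down to a single configuration change (a sketch derived from the surrounding text, not a command quoted from this commit):

```shell
# Move all cinder units from Stein to Train on Bionic via the UCA.
juju config cinder openstack-origin=cloud:bionic-train
```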


@@ -52,7 +52,7 @@ OpenStack release
 As the guide's :doc:`Overview <index>` section states, OpenStack Train will be
 deployed atop Ubuntu 18.04 LTS (Bionic) cloud nodes. In order to achieve this a
-"cloud archive pocket" of 'cloud:bionic-train' will be used during the install
+cloud archive release of 'cloud:bionic-train' will be used during the install
 of each OpenStack application. Note that some applications are not part of the
 OpenStack project per se and therefore do not apply (exceptionally, Ceph
 applications do use this method). Not using a more recent OpenStack release in
@@ -60,8 +60,8 @@ this way will result in a Queens deployment (i.e. Queens is in the Ubuntu
 package archive for Bionic).
 
 See :ref:`Perform the upgrade <perform_the_upgrade>` in the :doc:`OpenStack
-Upgrades <app-upgrade-openstack>` appendix for more details on the cloud
-archive pocket and how it is used when upgrading OpenStack.
+Upgrades <app-upgrade-openstack>` appendix for more details on cloud
+archive releases and how they are used when upgrading OpenStack.
 
 .. important::