Retire repository
Fuel (from the openstack namespace) and fuel-ccp (from the x namespace)
repositories are unused and ready to retire.

This change removes all content from the repository and adds the usual
README file to point out that the repository is retired, following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011647.html

Depends-On: https://review.opendev.org/699362
Change-Id: I8c94ef1c1c36a74eb57a559fac7cd528086266f2
Commit: 54c87e6f7d
Parent: caa487b9ea
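The change itself is mechanical: every tracked file is removed and a single retirement README is committed in its place. As a rough illustration only (the authoritative steps are in the infra manual linked above), a commit like this could be produced with something like::

    git rm -r '*'          # drop all tracked content
    $EDITOR README.rst     # write the standard retirement notice
    git add README.rst
    git commit             # include the Depends-On / Change-Id trailers shown above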
@@ -1,7 +0,0 @@
[run]
branch = True
source = fuel_ccp_entrypoint
omit = fuel_ccp_entrypoint/openstack/*

[report]
ignore_errors = True
.gitignore (vendored, 55 lines changed)
@@ -1,55 +0,0 @@
*.py[cod]

# C extensions
*.so

# Packages
*.egg*
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64

# Installer logs
pip-log.txt

# Unit test / coverage reports
cover/
.coverage*
!.coveragerc
.tox
nosetests.xml
.testrepository
.venv

# Translations
*.mo

# Mr Developer
.mr.developer.cfg
.project
.pydevproject

# Complexity
output/*.html
output/*/index.html

# Sphinx
doc/build

# pbr generates these
AUTHORS
ChangeLog

# Editors
*~
.*.swp
.*sw?
.mailmap (3 lines changed)
@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>
@@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
             OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
             OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
             ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
@@ -1,17 +0,0 @@
If you would like to contribute to the development of OpenStack, you must
follow the steps in this page:

   http://docs.openstack.org/infra/manual/developers.html

If you already have a good understanding of how the system works and your
OpenStack accounts are set up, you can skip to the development workflow
section of this documentation to learn how changes to OpenStack should be
submitted for review via the Gerrit tool:

   http://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

   https://bugs.launchpad.net/ms-ext-config
@@ -1,4 +0,0 @@
ms-ext-config Style Commandments
===============================================

Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/
LICENSE (176 lines changed)
@@ -1,176 +0,0 @@
|
||||
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview

global-exclude *.pyc
README.rst (25 lines changed)
@@ -1,19 +1,10 @@
===============================
ms-ext-config
===============================
This project is no longer maintained.

OpenStack Boilerplate contains all the boilerplate you need to create an OpenStack package.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

Please fill here a long description which must be at least 3 lines wrapped on
80 cols, so that distribution package maintainers can use it in their packages.
Note that this is a hard requirement.

* Free software: Apache license
* Documentation: http://docs.openstack.org/developer/ms-ext-config
* Source: http://git.openstack.org/cgit/nextgen/ms-ext-config
* Bugs: http://bugs.launchpad.net/fuel

Features
--------

* TODO
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.

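For anyone who lands on the retired repository later, the hint added to the README amounts to checking out the parent commit; a minimal sketch (the clone URL below is a placeholder, not taken from this change)::

    git clone https://opendev.org/x/fuel-ccp-entrypoint   # placeholder URL
    cd fuel-ccp-entrypoint
    git checkout HEAD^1   # the tree as it was before retirement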
@@ -1 +0,0 @@
# Pin versions of packages that behave badly only for us
@@ -1,75 +0,0 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import os
|
||||
import sys
|
||||
|
||||
sys.path.insert(0, os.path.abspath('../..'))
|
||||
# -- General configuration ----------------------------------------------------
|
||||
|
||||
# Add any Sphinx extension module names here, as strings. They can be
|
||||
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
|
||||
extensions = [
|
||||
'sphinx.ext.autodoc',
|
||||
#'sphinx.ext.intersphinx',
|
||||
'oslosphinx'
|
||||
]
|
||||
|
||||
# autodoc generation is a bit aggressive and a nuisance when doing heavy
|
||||
# text edit cycles.
|
||||
# execute "export SPHINX_DEBUG=1" in your terminal to disable
|
||||
|
||||
# The suffix of source filenames.
|
||||
source_suffix = '.rst'
|
||||
|
||||
# The master toctree document.
|
||||
master_doc = 'index'
|
||||
|
||||
# General information about the project.
|
||||
project = u'ms-ext-config'
|
||||
copyright = u'2013, OpenStack Foundation'
|
||||
|
||||
# If true, '()' will be appended to :func: etc. cross-reference text.
|
||||
add_function_parentheses = True
|
||||
|
||||
# If true, the current module name will be prepended to all description
|
||||
# unit titles (such as .. function::).
|
||||
add_module_names = True
|
||||
|
||||
# The name of the Pygments (syntax highlighting) style to use.
|
||||
pygments_style = 'sphinx'
|
||||
|
||||
# -- Options for HTML output --------------------------------------------------
|
||||
|
||||
# The theme to use for HTML and HTML Help pages. Major themes that come with
|
||||
# Sphinx are currently 'default' and 'sphinxdoc'.
|
||||
# html_theme_path = ["."]
|
||||
# html_theme = '_theme'
|
||||
# html_static_path = ['static']
|
||||
|
||||
# Output file base name for HTML help builder.
|
||||
htmlhelp_basename = '%sdoc' % project
|
||||
|
||||
# Grouping the document tree into LaTeX files. List of tuples
|
||||
# (source start file, target name, title, author, documentclass
|
||||
# [howto/manual]).
|
||||
latex_documents = [
|
||||
('index',
|
||||
'%s.tex' % project,
|
||||
u'%s Documentation' % project,
|
||||
u'OpenStack Foundation', 'manual'),
|
||||
]
|
||||
|
||||
# Example configuration for intersphinx: refer to the Python standard library.
|
||||
#intersphinx_mapping = {'http://docs.python.org/': None}
|
@@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst
@@ -1,25 +0,0 @@
.. ms-ext-config documentation master file, created by
   sphinx-quickstart on Tue Jul 9 22:26:36 2013.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to ms-ext-config's documentation!
========================================================

Contents:

.. toctree::
   :maxdepth: 2

   readme
   installation
   usage
   contributing

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@@ -1,12 +0,0 @@
============
Installation
============

At the command line::

    $ pip install ms-ext-config

Or, if you have virtualenvwrapper installed::

    $ mkvirtualenv ms-ext-config
    $ pip install ms-ext-config
@@ -1 +0,0 @@
.. include:: ../../README.rst
@@ -1,7 +0,0 @@
========
Usage
========

To use ms-ext-config in a project::

    import fuel_ccp_entrypoint
@@ -1,19 +0,0 @@
# -*- coding: utf-8 -*-

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import pbr.version


__version__ = pbr.version.VersionInfo(
    'fuel_ccp_entrypoint').version_string()
@@ -1,713 +0,0 @@
|
||||
#!/usr/bin/env python
|
||||
|
||||
|
||||
import argparse
|
||||
import functools
|
||||
import logging
|
||||
import os
|
||||
import pwd
|
||||
import re
|
||||
import signal
|
||||
import socket
|
||||
import subprocess
|
||||
import sys
|
||||
import time
|
||||
|
||||
import etcd
|
||||
import jinja2
|
||||
import json
|
||||
import netifaces
|
||||
import pykube
|
||||
import requests
|
||||
import six
|
||||
|
||||
|
||||
VARIABLES = {}
|
||||
GLOBALS_PATH = '/etc/ccp/globals/globals.json'
|
||||
GLOBALS_SECRETS_PATH = '/etc/ccp/global-secrets/global-secrets.json'
|
||||
NODES_CONFIG_PATH = '/etc/ccp/nodes-config/nodes-config.json'
|
||||
SERVICE_CONFIG_PATH = '/etc/ccp/service-config/service-config.json'
|
||||
META_FILE = "/etc/ccp/meta/meta.json"
|
||||
CACERT = "/opt/ccp/etc/tls/ca.pem"
|
||||
WORKFLOW_PATH_TEMPLATE = '/etc/ccp/role/%s.json'
|
||||
FILES_DIR = '/etc/ccp/files'
|
||||
EXPORTS_DIR = '/etc/ccp/exports'
|
||||
|
||||
LOG_DATEFMT = "%Y-%m-%d %H:%M:%S"
|
||||
LOG_FORMAT = "%(asctime)s.%(msecs)03d - %(name)s - %(levelname)s - %(message)s"
|
||||
|
||||
logging.basicConfig(format=LOG_FORMAT, datefmt=LOG_DATEFMT)
|
||||
LOG = logging.getLogger(__name__)
|
||||
LOG.setLevel(logging.DEBUG)
|
||||
|
||||
|
||||
class ProcessException(Exception):
|
||||
def __init__(self, exit_code):
|
||||
self.exit_code = exit_code
|
||||
self.msg = "Command exited with code %d" % self.exit_code
|
||||
super(ProcessException, self).__init__(self.msg)
|
||||
|
||||
|
||||
def retry(f):
|
||||
@functools.wraps(f)
|
||||
def wrap(*args, **kwargs):
|
||||
attempts = VARIABLES['etcd']['connection_attempts']
|
||||
delay = VARIABLES['etcd']['connection_delay']
|
||||
while attempts > 1:
|
||||
try:
|
||||
return f(*args, **kwargs)
|
||||
except etcd.EtcdException as e:
|
||||
LOG.warning('Etcd is not ready: %s', str(e))
|
||||
LOG.warning('Retrying in %d seconds...', delay)
|
||||
time.sleep(delay)
|
||||
attempts -= 1
|
||||
return f(*args, **kwargs)
|
||||
return wrap
|
||||
|
||||
|
||||
def get_ip_address(iface):
|
||||
"""Get IP address of the interface connected to the network.
|
||||
|
||||
If there is no such an interface, then localhost is returned.
|
||||
"""
|
||||
|
||||
if iface not in netifaces.interfaces():
|
||||
LOG.warning("Can't find interface '%s' in the host list of interfaces",
|
||||
iface)
|
||||
return '127.0.0.1'
|
||||
|
||||
address_family = netifaces.AF_INET
|
||||
|
||||
if address_family not in netifaces.ifaddresses(iface):
|
||||
LOG.warning("Interface '%s' doesnt configured with ipv4 address",
|
||||
iface)
|
||||
return '127.0.0.1'
|
||||
|
||||
for ifaddress in netifaces.ifaddresses(iface)[address_family]:
|
||||
if 'addr' in ifaddress:
|
||||
return ifaddress['addr']
|
||||
else:
|
||||
LOG.warning("Can't find ip addr for interface '%s'", iface)
|
||||
return '127.0.0.1'
|
||||
|
||||
|
||||
def create_network_topology(meta_info, variables):
|
||||
"""Create a network topology config.
|
||||
|
||||
These config could be used in jinja2 templates to fetch needed variables
|
||||
Example:
|
||||
{{ network_topology["private"]["address"] }}
|
||||
{{ network_topology["public"]["iface"] }}
|
||||
"""
|
||||
|
||||
if meta_info.get("host-net"):
|
||||
LOG.debug("Found 'host-net' flag, trying to fetch host network")
|
||||
priv_iface = variables["private_interface"]
|
||||
pub_iface = variables["public_interface"]
|
||||
network_info = {"private": {"iface": priv_iface,
|
||||
"address": get_ip_address(priv_iface)},
|
||||
"public": {"iface": pub_iface,
|
||||
"address": get_ip_address(pub_iface)}}
|
||||
else:
|
||||
LOG.debug("Can't find 'host-net' flag, fetching ip only from eth0")
|
||||
network_info = {"private": {"iface": "eth0",
|
||||
"address": get_ip_address("eth0")},
|
||||
"public": {"iface": "eth0",
|
||||
"address": get_ip_address("eth0")}}
|
||||
LOG.debug("Network information\n%s", network_info)
|
||||
return network_info
|
||||
|
||||
|
||||
def etcd_path(*path):
|
||||
namespace = VARIABLES.get('namespace', '')
|
||||
return os.path.join('/ccp', namespace, 'status', *path)
|
||||
|
||||
|
||||
def set_status_done(service_name):
|
||||
return _set_status(service_name, "done")
|
||||
|
||||
|
||||
def set_status_ready(service_name, ttl=None):
|
||||
return _set_status(service_name, "ready", ttl=ttl)
|
||||
|
||||
|
||||
@retry
|
||||
def _set_status(service_name, status, ttl=None):
|
||||
etcd_client = get_etcd_client()
|
||||
for dep_type in ['global', VARIABLES['node_name']]:
|
||||
key = etcd_path(dep_type, service_name, status)
|
||||
etcd_client.set(key, "1", ttl=ttl)
|
||||
LOG.info('Status for "%s" was set to "%s"',
|
||||
os.path.join(dep_type, service_name), status)
|
||||
|
||||
|
||||
def check_is_done(dep):
|
||||
return _check_status(dep, "done")
|
||||
|
||||
|
||||
def check_is_ready(dep, etcd_client=None):
|
||||
return _check_status(dep, "ready", etcd_client)
|
||||
|
||||
|
||||
@retry
|
||||
def _check_status(dep, status, etcd_client=None):
|
||||
if not etcd_client:
|
||||
etcd_client = get_etcd_client()
|
||||
dep_name, _, dep_type = dep.partition(":")
|
||||
dep_type = VARIABLES['node_name'] if dep_type == 'local' else 'global'
|
||||
key = etcd_path(dep_type, dep_name, status)
|
||||
return key in etcd_client
|
||||
|
||||
|
||||
def cmd_str(cmd):
|
||||
if isinstance(cmd, six.string_types):
|
||||
return cmd
|
||||
return " ".join(cmd)
|
||||
|
||||
|
||||
def preexec_fn(user_uid, user_gid, user_home):
|
||||
def result():
|
||||
os.setgid(user_gid)
|
||||
os.setuid(user_uid)
|
||||
os.environ["HOME"] = user_home
|
||||
return result
|
||||
|
||||
|
||||
def openstackclient_preexec_fn():
|
||||
def result():
|
||||
os.environ["OS_IDENTITY_API_VERSION"] = "3"
|
||||
os.environ["OS_INTERFACE"] = "internal"
|
||||
os.environ["OS_PROJECT_DOMAIN_NAME"] = 'default'
|
||||
os.environ["OS_USER_DOMAIN_NAME"] = "default"
|
||||
os.environ["OS_PASSWORD"] = VARIABLES['openstack']['user_password']
|
||||
os.environ["OS_USERNAME"] = VARIABLES['openstack']['user_name']
|
||||
os.environ["OS_PROJECT_NAME"] = VARIABLES['openstack']['project_name']
|
||||
if VARIABLES['security']['tls']['create_certificates']:
|
||||
os.environ["OS_CACERT"] = CACERT
|
||||
os.environ["OS_AUTH_URL"] = '%s/v3' % address(
|
||||
'keystone', VARIABLES['keystone']['admin_port'], with_scheme=True)
|
||||
return result
|
||||
|
||||
|
||||
def execute_cmd(cmd, user=None):
|
||||
LOG.debug("Executing cmd:\n%s", cmd_str(cmd))
|
||||
kwargs = {
|
||||
"shell": True,
|
||||
"stdin": sys.stdin,
|
||||
"stdout": sys.stdout,
|
||||
"stderr": sys.stderr}
|
||||
# If openstackclient command is being executed, appropriate environment
|
||||
# variables will be set
|
||||
for prefix in ["openstack ", "neutron ", "murano "]:
|
||||
if cmd.startswith(prefix):
|
||||
kwargs['preexec_fn'] = openstackclient_preexec_fn()
|
||||
break
|
||||
# Execute as user if `user` param is provided, execute as current user
|
||||
# otherwise
|
||||
else:
|
||||
if user:
|
||||
LOG.debug('Executing as user %s', user)
|
||||
pw_record = pwd.getpwnam(user)
|
||||
user_uid = pw_record.pw_uid
|
||||
user_gid = pw_record.pw_gid
|
||||
user_home = pw_record.pw_dir
|
||||
kwargs['preexec_fn'] = preexec_fn(user_uid, user_gid, user_home)
|
||||
return subprocess.Popen(cmd_str(cmd), **kwargs)
|
||||
|
||||
|
||||
def get_ingress_host(ingress_name):
|
||||
return '.'.join((ingress_name, VARIABLES['ingress']['domain']))
|
||||
|
||||
|
||||
def address(service, port=None, external=False, with_scheme=False):
|
||||
addr = None
|
||||
service_name = service.split('-')[0]
|
||||
enable_tls = VARIABLES.get(service_name, {}).get('tls', {}).get('enabled')
|
||||
|
||||
if enable_tls:
|
||||
scheme = 'https'
|
||||
else:
|
||||
scheme = 'http'
|
||||
if external:
|
||||
if not port:
|
||||
raise RuntimeError('Port config is required for external address')
|
||||
if VARIABLES['ingress']['enabled'] and port.get('ingress'):
|
||||
scheme = 'https'
|
||||
addr = "%s:%s" % (get_ingress_host(port['ingress']),
|
||||
VARIABLES['ingress']['port'])
|
||||
elif port.get('node'):
|
||||
addr = '%s:%s' % (VARIABLES['k8s_external_ip'], port['node'])
|
||||
|
||||
current_service = VARIABLES['service_name']
|
||||
if current_service:
|
||||
current_service_def = VARIABLES['services'].get(
|
||||
current_service, {}).get('service_def')
|
||||
if current_service_def == service:
|
||||
service = current_service
|
||||
else:
|
||||
service = VARIABLES['services'].get(current_service, {}).get(
|
||||
'mapping', {}).get(service) or service
|
||||
if addr is None:
|
||||
addr = '.'.join((service, VARIABLES['namespace'], 'svc',
|
||||
VARIABLES['cluster_domain']))
|
||||
if port:
|
||||
addr = '%s:%s' % (addr, port['cont'])
|
||||
|
||||
if with_scheme:
|
||||
addr = "%s://%s" % (scheme, addr)
|
||||
|
||||
return addr
|
||||
|
||||
|
||||
def j2raise(msg):
|
||||
raise AssertionError(msg)
|
||||
|
||||
|
||||
def jinja_render_file(path, lookup_paths=None):
|
||||
file_loaders = [jinja2.FileSystemLoader(os.path.dirname(path))]
|
||||
for p in lookup_paths:
|
||||
file_loaders.append(jinja2.FileSystemLoader(p))
|
||||
env = jinja2.Environment(loader=jinja2.ChoiceLoader(loaders=file_loaders))
|
||||
env.globals['address'] = address
|
||||
env.globals['raise_exception'] = j2raise
|
||||
env.filters['gethostbyname'] = socket.gethostbyname
|
||||
content = env.get_template(os.path.basename(path)).render(VARIABLES)
|
||||
|
||||
return content
|
||||
|
||||
|
||||
def jinja_render_cmd(cmd):
|
||||
env = jinja2.Environment()
|
||||
env.globals['address'] = address
|
||||
env.filters['gethostbyname'] = socket.gethostbyname
|
||||
return env.from_string(cmd).render(VARIABLES)
|
||||
|
||||
|
||||
def create_files(files):
|
||||
LOG.info("Creating files")
|
||||
for config in files:
|
||||
file_template = os.path.join(FILES_DIR, config['name'])
|
||||
file_path = config['path']
|
||||
|
||||
LOG.debug("Creating %s file from %s template" %
|
||||
(file_path, file_template))
|
||||
if not os.path.exists(os.path.dirname(file_path)):
|
||||
os.makedirs(os.path.dirname(file_path))
|
||||
with open(file_path, 'w') as f:
|
||||
rendered_config = jinja_render_file(file_template, [EXPORTS_DIR])
|
||||
f.write(rendered_config)
|
||||
|
||||
user = config.get('user')
|
||||
if user:
|
||||
pw_record = pwd.getpwnam(user)
|
||||
user_uid = pw_record.pw_uid
|
||||
user_gid = pw_record.pw_gid
|
||||
os.chown(file_path, user_uid, user_gid)
|
||||
|
||||
perm = config.get('perm')
|
||||
if perm:
|
||||
os.chmod(file_path, int(perm, 8))
|
||||
|
||||
LOG.info("File %s has been created", file_path)
|
||||
|
||||
|
||||
@retry
|
||||
def get_etcd_client():
|
||||
if VARIABLES["etcd"]["tls"]["enabled"]:
|
||||
LOG.debug("TLS is enabled for etcd, using encrypted connectivity")
|
||||
scheme = "https"
|
||||
ca_cert = CACERT
|
||||
else:
|
||||
scheme = "http"
|
||||
ca_cert = None
|
||||
|
||||
etcd_machines = []
|
||||
# if it's etcd container use local address because container is not
|
||||
# accessible via service due failed readiness check
|
||||
if VARIABLES["role_name"] in ["etcd", "etcd-leader-elector",
|
||||
"etcd-watcher"]:
|
||||
if VARIABLES["etcd"]["tls"]["enabled"]:
|
||||
# If it's etcd container, connectivity goes over IP address, thus
|
||||
# TLS connection will fail. Need to reuse non-TLS
|
||||
# https://github.com/coreos/etcd/issues/4311
|
||||
scheme = "http"
|
||||
ca_cert = None
|
||||
etcd_address = '127.0.0.1'
|
||||
else:
|
||||
etcd_address = VARIABLES["network_topology"]["private"]["address"]
|
||||
etcd_machines.append(
|
||||
(etcd_address, VARIABLES["etcd"]["client_port"]['cont']))
|
||||
else:
|
||||
etcd_machines.append(
|
||||
(address('etcd'), VARIABLES["etcd"]["client_port"]['cont'])
|
||||
)
|
||||
etcd_machines_str = " ".join(["%s:%d" % (h, p) for h, p in etcd_machines])
|
||||
LOG.debug("Using the following etcd urls: \"%s\"", etcd_machines_str)
|
||||
|
||||
return etcd.Client(host=tuple(etcd_machines), allow_reconnect=True,
|
||||
read_timeout=2, protocol=scheme, ca_cert=ca_cert)
|
||||
|
||||
|
||||
def check_dependence(dep, etcd_client):
|
||||
LOG.debug("Waiting for \"%s\" dependency", dep)
|
||||
while True:
|
||||
if check_is_ready(dep, etcd_client):
|
||||
LOG.debug("Dependency \"%s\" is in \"ready\" state", dep)
|
||||
break
|
||||
LOG.debug("Dependency \"%s\" is not ready yet, retrying", dep)
|
||||
time.sleep(5)
|
||||
|
||||
|
||||
def wait_for_dependencies(dependencies, etcd_client):
|
||||
LOG.info('Waiting for dependencies')
|
||||
for dep in dependencies:
|
||||
check_dependence(dep, etcd_client)
|
||||
|
||||
|
||||
def run_cmd(cmd, user=None):
|
||||
rendered_cmd = jinja_render_cmd(cmd)
|
||||
proc = execute_cmd(rendered_cmd, user)
|
||||
proc.communicate()
|
||||
if proc.returncode != 0:
|
||||
raise ProcessException(proc.returncode)
|
||||
|
||||
|
||||
def run_daemon(cmd, user=None):
|
||||
LOG.info("Starting daemon")
|
||||
rendered_cmd = jinja_render_cmd(cmd)
|
||||
proc = execute_cmd(rendered_cmd, user)
|
||||
|
||||
# add signal handler
|
||||
def sig_handler(signum, frame):
|
||||
LOG.info("Caught a signal: %d", signum)
|
||||
proc.send_signal(signum)
|
||||
if signum == signal.SIGHUP:
|
||||
time.sleep(5)
|
||||
if proc.poll() is None:
|
||||
LOG.info("Service restarted")
|
||||
|
||||
signal.signal(signal.SIGHUP, sig_handler)
|
||||
signal.signal(signal.SIGINT, sig_handler)
|
||||
signal.signal(signal.SIGTERM, sig_handler)
|
||||
|
||||
# wait for 5 sec and check that process is running
|
||||
time.sleep(5)
|
||||
if proc.poll() is None:
|
||||
LOG.info("Daemon started")
|
||||
return proc
|
||||
proc.communicate()
|
||||
raise RuntimeError("Process exited with code: %d" % proc.returncode)
|
||||
|
||||
|
||||
def get_pykube_client():
|
||||
os.environ['KUBERNETES_SERVICE_HOST'] = 'kubernetes.default'
|
||||
config = pykube.KubeConfig.from_service_account()
|
||||
return pykube.HTTPClient(config)
|
||||
|
||||
|
||||
def _reload_obj(obj, updated_dict):
|
||||
obj.reload()
|
||||
obj.obj = updated_dict
|
||||
|
||||
|
||||
def get_pykube_object(object_dict, namespace, client):
|
||||
obj_class = getattr(pykube, object_dict["kind"], None)
|
||||
if obj_class is None:
|
||||
raise RuntimeError('"%s" object is not supported, skipping.'
|
||||
% object_dict['kind'])
|
||||
|
||||
if not object_dict['kind'] == 'Namespace':
|
||||
object_dict['metadata']['namespace'] = namespace
|
||||
|
||||
return obj_class(client, object_dict)
|
||||
|
||||
UPDATABLE_OBJECTS = ('ConfigMap', 'Deployment', 'Service', 'Ingress')
|
||||
|
||||
|
||||
def process_pykube_object(object_dict, namespace, client):
|
||||
LOG.debug("Deploying %s: \"%s\"",
|
||||
object_dict["kind"], object_dict["metadata"]["name"])
|
||||
|
||||
obj = get_pykube_object(object_dict, namespace, client)
|
||||
|
||||
if obj.exists():
|
||||
LOG.debug('%s "%s" already exists', object_dict['kind'],
|
||||
object_dict['metadata']['name'])
|
||||
if object_dict['kind'] in UPDATABLE_OBJECTS:
|
||||
if object_dict['kind'] == 'Service':
|
||||
# Reload object and merge new and old fields
|
||||
_reload_obj(obj, object_dict)
|
||||
obj.update()
|
||||
LOG.debug('%s "%s" has been updated', object_dict['kind'],
|
||||
object_dict['metadata']['name'])
|
||||
else:
|
||||
obj.create()
|
||||
LOG.debug('%s "%s" has been created', object_dict['kind'],
|
||||
object_dict['metadata']['name'])
|
||||
return obj
|
||||
|
||||
|
||||
def wait_for_deployment(obj):
|
||||
while True:
|
||||
generation = obj.obj['metadata']['generation']
|
||||
observed_generation = obj.obj['status']['observedGeneration']
|
||||
if observed_generation >= generation:
|
||||
break
|
||||
LOG.info("Waiting for deployment %s to move to new generation")
|
||||
time.sleep(4.2)
|
||||
obj.reload()
|
||||
|
||||
while True:
|
||||
desired = obj.obj['spec']['replicas']
|
||||
status = obj.obj['status']
|
||||
updated = status.get('updatedReplicas', 0)
|
||||
available = status.get('availableReplicas', 0)
|
||||
current = status.get('replicas', 0)
|
||||
if desired == updated == available == current:
|
||||
break
|
||||
LOG.info("Waiting for deployment %s: desired=%s, updated=%s,"
|
||||
" available=%s, current=%s",
|
||||
obj.obj['metadata']['name'],
|
||||
desired, updated, available, current)
|
||||
time.sleep(4.2)
|
||||
obj.reload()
|
||||
|
||||
|
||||
def get_workflow(role_name):
|
||||
workflow_path = WORKFLOW_PATH_TEMPLATE % role_name
|
||||
LOG.info("Getting workflow from %s", workflow_path)
|
||||
with open(workflow_path) as f:
|
||||
workflow = json.load(f).get('workflow')
|
||||
LOG.debug('Workflow template:\n%s', workflow)
|
||||
return workflow
|
||||
|
||||
|
||||
def find_node_config_keys(nodes_config):
|
||||
current_node = os.environ['CCP_NODE_NAME']
|
||||
config_keys = []
|
||||
for node in sorted(nodes_config):
|
||||
if re.match(node, current_node):
|
||||
config_keys.append(node)
|
||||
return config_keys
|
||||
|
||||
|
||||
def merge_configs(variables, node_config):
|
||||
for k, v in node_config.items():
|
||||
if k not in variables:
|
||||
variables[k] = v
|
||||
continue
|
||||
if isinstance(v, dict) and isinstance(variables[k], dict):
|
||||
merge_configs(variables[k], v)
|
||||
else:
|
||||
variables[k] = v
|
||||
|
||||
|
||||
def get_variables(role_name):
|
||||
LOG.info("Getting global variables from %s", GLOBALS_PATH)
|
||||
with open(GLOBALS_PATH) as f:
|
||||
variables = json.load(f)
|
||||
LOG.info("Getting secret variables from %s", GLOBALS_SECRETS_PATH)
|
||||
with open(GLOBALS_SECRETS_PATH) as f:
|
||||
secrets = json.load(f)
|
||||
merge_configs(variables, secrets)
|
||||
if os.path.isfile(SERVICE_CONFIG_PATH):
|
||||
LOG.info("Getting service variables from %s", SERVICE_CONFIG_PATH)
|
||||
with open(SERVICE_CONFIG_PATH) as f:
|
||||
service_config = json.load(f)
|
||||
merge_configs(variables, service_config)
|
||||
LOG.info("Getting nodes variables from %s", NODES_CONFIG_PATH)
|
||||
with open(NODES_CONFIG_PATH) as f:
|
||||
nodes_config = json.load(f)
|
||||
config_keys = find_node_config_keys(nodes_config)
|
||||
if config_keys:
|
||||
# merge configs for all keys and get final node_configs for this node.
|
||||
# Note that if there several override configs, variables will be
|
||||
# override with order of list this configs.
|
||||
node_config = nodes_config[config_keys.pop(0)]
|
||||
for key in config_keys:
|
||||
merge_configs(node_config, nodes_config[key])
|
||||
# and then merge variables with final node_config.
|
||||
merge_configs(variables, node_config)
|
||||
if os.path.exists(META_FILE):
|
||||
LOG.info("Getting meta information from %s", META_FILE)
|
||||
with open(META_FILE) as f:
|
||||
meta_info = json.load(f)
|
||||
else:
|
||||
meta_info = {}
|
||||
variables['role_name'] = role_name
|
||||
LOG.info("Get CCP environment variables")
|
||||
variables['node_name'] = os.environ['CCP_NODE_NAME']
|
||||
variables['pod_name'] = os.environ['CCP_POD_NAME']
|
||||
variables['memory_limit'] = os.environ['MEMORY_LIMIT']
|
||||
variables['cpu_limit'] = os.environ['CPU_LIMIT']
|
||||
LOG.debug("Creating network topology ")
|
||||
variables["network_topology"] = create_network_topology(meta_info,
|
||||
variables)
|
||||
variables["service_name"] = meta_info.get('service-name')
|
||||
return variables
|
||||
|
||||
|
||||
def _get_ca_certificate():
|
||||
name = CACERT
|
||||
if not os.path.isfile(name):
|
||||
with open(CACERT, 'w') as f:
|
||||
f.write(VARIABLES['security']['tls']['ca_cert'])
|
||||
LOG.info("CA certificated saved to %s", CACERT)
|
||||
else:
|
||||
LOG.info("CA file exists, not overwriting it")
|
||||
|
||||
|
||||
def main():
|
||||
action_parser = argparse.ArgumentParser(add_help=False)
|
||||
action_parser.add_argument("action")
|
||||
parser = argparse.ArgumentParser(parents=[action_parser])
|
||||
parser.add_argument("role")
|
||||
args = parser.parse_args(sys.argv[1:])
|
||||
|
||||
global VARIABLES
|
||||
VARIABLES = get_variables(args.role)
|
||||
LOG.debug('Global variables:\n%s', VARIABLES)
|
||||
|
||||
if VARIABLES["security"]["tls"]["create_certificates"]:
|
||||
_get_ca_certificate()
|
||||
if args.action == "provision":
|
||||
do_provision(args.role)
|
||||
elif args.action == "status":
|
||||
do_status(args.role)
|
||||
else:
|
||||
LOG.error("Action %s is not supported", args.action)
|
||||
|
||||
|
||||
def run_probe(probe):
|
||||
if probe["type"] == "exec":
|
||||
run_cmd(probe["command"])
|
||||
elif probe["type"] == "httpGet":
|
||||
scheme = probe.get("scheme", "http")
|
||||
kwargs = {
|
||||
"url": "{}://{}:{}{}".format(
|
||||
scheme,
|
||||
VARIABLES["network_topology"]["private"]["address"],
|
||||
probe["port"],
|
||||
probe.get("path", "/"))
|
||||
}
|
||||
if scheme == "https":
|
||||
kwargs['verify'] = False
|
||||
resp = requests.get(**kwargs)
|
||||
resp.raise_for_status()
|
||||
|
||||
|
||||
def do_status(role_name):
|
||||
workflow = get_workflow(role_name)
|
||||
service_name = workflow["name"]
|
||||
# check local status in etcd
|
||||
local_dep = "%s:local" % service_name
|
||||
if not check_is_done(local_dep):
|
||||
LOG.info("Service is not done")
|
||||
sys.exit(1)
|
||||
LOG.info("Service in done state")
|
||||
# launch readiness probe
|
||||
readiness_probe = workflow.get("readiness")
|
||||
if readiness_probe:
|
||||
if not isinstance(readiness_probe, dict):
|
||||
readiness_probe = {"type": "exec", "command": readiness_probe}
|
||||
run_probe(readiness_probe)
|
||||
# set ready in etcd
|
||||
# ttl 20 because readiness check runs each 10 sec
|
||||
set_status_ready(service_name, ttl=20)
|
||||
|
||||
|
||||
def do_provision(role_name):
|
||||
workflow = get_workflow(role_name)
|
||||
files = workflow.get('files', [])
|
||||
create_files(files)
|
||||
|
||||
dependencies = workflow.get('dependencies')
|
||||
if dependencies:
|
||||
etcd_client = get_etcd_client()
|
||||
wait_for_dependencies(dependencies, etcd_client)
|
||||
|
||||
job = workflow.get("job")
|
||||
daemon = workflow.get("daemon")
|
||||
roll = workflow.get("roll")
|
||||
kill = workflow.get("kill")
|
||||
if job:
|
||||
execute_job(workflow, job)
|
||||
elif daemon:
|
||||
execute_daemon(workflow, daemon)
|
||||
elif roll is not None:
|
||||
execute_roll(workflow, roll)
|
||||
elif kill is not None:
|
||||
execute_kill(workflow, kill)
|
||||
else:
|
||||
LOG.error("Job or daemon is not specified in workflow")
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
def execute_daemon(workflow, daemon):
|
||||
pre_commands = workflow.get('pre', [])
|
||||
LOG.info('Running pre commands')
|
||||
for cmd in pre_commands:
|
||||
run_cmd(cmd.get('command'), cmd.get('user'))
|
||||
|
||||
proc = run_daemon(daemon.get('command'), daemon.get('user'))
|
||||
|
||||
LOG.info('Running post commands')
|
||||
post_commands = workflow.get('post', [])
|
||||
for cmd in post_commands:
|
||||
run_cmd(cmd.get('command'), cmd.get('user'))
|
||||
|
||||
set_status_done(workflow["name"])
|
||||
|
||||
code = proc.wait()
|
||||
LOG.info("Process exited with code %d", code)
|
||||
sys.exit(code)
|
||||
|
||||
|
||||
def execute_job(workflow, job):
|
||||
LOG.info('Running single command')
|
||||
try:
|
||||
run_cmd(job.get('command'), job.get('user'))
|
||||
except ProcessException as ex:
|
||||
LOG.error("Job execution failed")
|
||||
sys.exit(ex.exit_code)
|
||||
set_status_ready(workflow["name"])
|
||||
sys.exit(0)
|
||||
|
||||
|
||||
def execute_roll(workflow, roll):
|
||||
LOG.info("Running rolling upgrade of service %s", workflow["name"])
|
||||
namespace = VARIABLES["namespace"]
|
||||
client = get_pykube_client()
|
||||
deployments = []
|
||||
for object_dict in roll:
|
||||
obj = process_pykube_object(object_dict, namespace, client)
|
||||
if object_dict['kind'] == 'Deployment':
|
||||
deployments.append(obj)
|
||||
for obj in deployments:
|
||||
wait_for_deployment(obj)
|
||||
set_status_ready(workflow["name"])
|
||||
sys.exit(0)
|
||||
|
||||
|
||||
def execute_kill(workflow, kill):
|
||||
LOG.info("Killing deployments for service %s", workflow["name"])
|
||||
namespace = VARIABLES["namespace"]
|
||||
client = get_pykube_client()
|
||||
objs = []
|
||||
for object_dict in kill:
|
||||
if object_dict['kind'] != 'Deployment':
|
||||
LOG.warn("Don't know how to handle %s, skipping",
|
||||
object_dict['kind'])
|
||||
continue
|
||||
obj = get_pykube_object(object_dict, namespace, client)
|
||||
obj.reload()
|
||||
obj.obj['spec']['replicas'] = 0
|
||||
obj.update()
|
||||
objs.append(obj)
|
||||
for obj in objs:
|
||||
wait_for_deployment(obj)
|
||||
set_status_ready(workflow["name"])
|
||||
sys.exit(0)
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
@@ -1,23 +0,0 @@
# -*- coding: utf-8 -*-

# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslotest import base


class TestCase(base.BaseTestCase):

    """Test case base class for all unit tests."""
@@ -1,300 +0,0 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
import os
|
||||
|
||||
import etcd
|
||||
import mock
|
||||
|
||||
from fuel_ccp_entrypoint import start_script
|
||||
from fuel_ccp_entrypoint.tests import base
|
||||
|
||||
|
||||
class TestGetIpAddress(base.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
super(TestGetIpAddress, self).setUp()
|
||||
self.private_iface = 'eth0'
|
||||
self.public_iface = 'eth1'
|
||||
|
||||
@mock.patch('netifaces.interfaces')
|
||||
@mock.patch('netifaces.ifaddresses')
|
||||
def test_get_ip_address_iface_wrong(self, m_ifaddresses, m_interfaces):
|
||||
m_interfaces.return_value = ['eth10', 'eth99']
|
||||
r_value = start_script.get_ip_address(self.private_iface)
|
||||
self.assertEqual('127.0.0.1', r_value)
|
||||
self.assertEqual(1, len(m_interfaces.mock_calls))
|
||||
self.assertEqual(0, len(m_ifaddresses.mock_calls))
|
||||
|
||||
@mock.patch('netifaces.interfaces')
|
||||
@mock.patch('netifaces.ifaddresses')
|
||||
def test_get_ip_address_address_family_wrong(self, m_ifaddresses,
|
||||
m_interfaces):
|
||||
m_interfaces.return_value = ['eth0', 'eth99']
|
||||
m_ifaddresses.return_value = {3: [{"addr": "8.8.8.8"}]}
|
||||
r_value = start_script.get_ip_address(self.private_iface)
|
||||
self.assertEqual('127.0.0.1', r_value)
|
||||
self.assertEqual(1, len(m_interfaces.mock_calls))
|
||||
self.assertEqual(1, len(m_ifaddresses.mock_calls))
|
||||
|
||||
@mock.patch('netifaces.interfaces')
|
||||
@mock.patch('netifaces.ifaddresses')
|
||||
def test_get_ip_address_address_wrong(self, m_ifaddresses, m_interfaces):
|
||||
m_interfaces.return_value = ['eth0', 'eth99']
|
||||
m_ifaddresses.return_value = {2: [{"notaddr": "8.8.8.8"}]}
|
||||
r_value = start_script.get_ip_address(self.private_iface)
|
||||
self.assertEqual('127.0.0.1', r_value)
|
||||
self.assertEqual(1, len(m_interfaces.mock_calls))
|
||||
self.assertEqual(2, len(m_ifaddresses.mock_calls))
|
||||
|
||||
@mock.patch('netifaces.interfaces')
|
||||
@mock.patch('netifaces.ifaddresses')
|
||||
def test_get_ip_address_address_good(self, m_ifaddresses, m_interfaces):
|
||||
m_interfaces.return_value = ['eth0', 'eth99']
|
||||
m_ifaddresses.return_value = {2: [{"addr": "8.8.8.8"}]}
|
||||
r_value = start_script.get_ip_address(self.private_iface)
|
||||
self.assertEqual('8.8.8.8', r_value)
|
||||
self.assertEqual(1, len(m_interfaces.mock_calls))
|
||||
self.assertEqual(2, len(m_ifaddresses.mock_calls))
|
||||
|
||||
|
||||
class TestGetVariables(base.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
super(TestGetVariables, self).setUp()
|
||||
os.environ['CCP_NODE_NAME'] = 'node1'
|
||||
os.environ['CCP_POD_NAME'] = 'pod1'
|
||||
os.environ['MEMORY_LIMIT'] = '7859277824'
|
||||
os.environ['CPU_LIMIT'] = '4'
|
||||
|
||||
def tearDown(self):
|
||||
super(TestGetVariables, self).tearDown()
|
||||
del os.environ['CCP_NODE_NAME']
|
||||
del os.environ['CCP_POD_NAME']
|
||||
del os.environ['MEMORY_LIMIT']
|
||||
del os.environ['CPU_LIMIT']
|
||||
|
||||
@mock.patch('six.moves.builtins.open', mock.mock_open())
|
||||
@mock.patch('json.load')
|
||||
@mock.patch('fuel_ccp_entrypoint.start_script.create_network_topology')
|
||||
def test_get_variables(self, m_create_network_topology, m_json_load):
|
||||
m_json_load.side_effect = [{'glob': 'glob_val'}, {}, {}]
|
||||
m_create_network_topology.return_value = 'network_topology'
|
||||
r_value = start_script.get_variables('role')
|
||||
e_value = {
|
||||
'glob': 'glob_val',
|
||||
'role_name': 'role',
|
||||
'network_topology': 'network_topology',
|
||||
'node_name': 'node1',
|
||||
'pod_name': 'pod1',
|
||||
'cpu_limit': '4',
|
||||
'memory_limit': '7859277824',
|
||||
'service_name': None
|
||||
}
|
||||
self.assertEqual(r_value, e_value)
|
||||
|
||||
@mock.patch('six.moves.builtins.open', mock.mock_open())
|
||||
@mock.patch('json.load')
|
||||
@mock.patch('fuel_ccp_entrypoint.start_script.create_network_topology')
|
||||
def test_get_variables_with_node_config(self, m_create_network_topology,
|
||||
m_json_load):
|
||||
m_json_load.side_effect = [
|
||||
# globals
|
||||
{
|
||||
'a': {
|
||||
'b': {
|
||||
'c': ['d', 'e', 'f'],
|
||||
'g': 'h',
|
||||
},
|
||||
'i': ['j', 'k'],
|
||||
'l': 'm'
|
||||
},
|
||||
'n': ['o', 'p', 'q'],
|
||||
'r': 's'
|
||||
},
|
||||
{},
|
||||
# nodes configs
|
||||
{
|
||||
'node[1-3]': {
|
||||
'a': {
|
||||
'b': {
|
||||
'c': ['e', 'f', 't'],
|
||||
'u': 'v'
|
||||
},
|
||||
'w': {
|
||||
'x': 'y'
|
||||
}
|
||||
},
|
||||
'n': ['o', 'p'],
|
||||
'z': 'NaN'
|
||||
},
|
||||
'node[1-2]': {
|
||||
'aa': {'ab': 'ac'},
|
||||
'r': {'ad': ['ae', 'af', 'ag']}
|
||||
|
||||
}
|
||||
}
|
||||
]
|
||||
m_create_network_topology.return_value = 'network_topology'
|
||||
actual = start_script.get_variables('fake_role')
|
||||
expected = {
|
||||
'role_name': 'fake_role',
|
||||
'network_topology': 'network_topology',
|
||||
'node_name': 'node1',
|
||||
'cpu_limit': '4',
|
||||
'memory_limit': '7859277824',
|
||||
'pod_name': 'pod1',
|
||||
'service_name': None,
|
||||
'a': {
|
||||
'b': {
|
||||
'c': ['e', 'f', 't'],
|
||||
'g': 'h',
|
||||
'u': 'v'
|
||||
},
|
||||
'w': {'x': 'y'},
|
||||
'i': ['j', 'k'],
|
||||
'l': 'm'
|
||||
},
|
||||
'n': ['o', 'p'],
|
||||
'r': {'ad': ['ae', 'af', 'ag']},
|
||||
'z': 'NaN',
|
||||
'aa': {'ab': 'ac'},
|
||||
}
|
||||
self.assertEqual(expected, actual)
|
||||
|
||||
|
||||
class TestRetry(base.TestCase):
|
||||
def setUp(self):
|
||||
super(TestRetry, self).setUp()
|
||||
start_script.VARIABLES = {'etcd': {
|
||||
'connection_attempts': 3,
|
||||
'connection_delay': 0
|
||||
}}
|
||||
|
||||
@start_script.retry
|
||||
def func_test(self):
|
||||
return self.func_ret()
|
||||
|
||||
def test_retry_succeeded(self):
|
||||
self.func_ret = mock.Mock(side_effect=[
|
||||
etcd.EtcdException('test_error'), 'test_result'])
|
||||
self.assertEqual('test_result', self.func_test())
|
||||
self.assertEqual(2, self.func_ret.call_count)
|
||||
|
||||
def test_retry_failed(self):
|
||||
self.func_ret = mock.Mock(side_effect=[
|
||||
etcd.EtcdException('test_error') for _ in range(3)])
|
||||
|
||||
self.assertRaisesRegexp(
|
||||
etcd.EtcdException, 'test_error', self.func_test)
|
||||
self.assertEqual(3, self.func_ret.call_count)
|
||||
|
||||
|
||||
class TestGetETCDClient(base.TestCase):
|
||||
def test_get_etcd_local_client(self):
|
||||
start_script.VARIABLES = {
|
||||
"role_name": "etcd",
|
||||
"etcd": {
|
||||
"tls": {
|
||||
"enabled": False
|
||||
},
|
||||
"client_port": {
|
||||
"cont": 10042
|
||||
},
|
||||
"connection_attempts": 3,
|
||||
"connection_delay": 0,
|
||||
},
|
||||
"network_topology": {
|
||||
"private": {
|
||||
"address": "192.0.2.1"
|
||||
}
|
||||
}
|
||||
}
|
||||
with mock.patch("etcd.Client") as m_etcd:
|
||||
expected_value = object()
|
||||
m_etcd.return_value = expected_value
|
||||
etcd_client = start_script.get_etcd_client()
|
||||
self.assertIs(expected_value, etcd_client)
|
||||
m_etcd.assert_called_once_with(
|
||||
host=(("192.0.2.1", 10042),),
|
||||
allow_reconnect=True,
|
||||
read_timeout=2,
|
||||
protocol='http',
|
||||
ca_cert=None)
|
||||
|
||||
def test_get_etcd_client(self):
|
||||
start_script.VARIABLES = {
|
||||
"role_name": "banana",
|
||||
"namespace": "ccp",
|
||||
"cluster_domain": 'cluster.local',
|
||||
"services": {},
|
||||
"service_name": "test",
|
||||
"etcd": {
|
||||
"tls": {
|
||||
"enabled": False
|
||||
},
|
||||
"client_port": {
|
||||
"cont": 1234
|
||||
},
|
||||
"connection_attempts": 3,
|
||||
"connection_delay": 0,
|
||||
}
|
||||
}
|
||||
with mock.patch("etcd.Client") as m_etcd:
|
||||
expected_value = object()
|
||||
m_etcd.return_value = expected_value
|
||||
etcd_client = start_script.get_etcd_client()
|
||||
self.assertIs(expected_value, etcd_client)
|
||||
m_etcd.assert_called_once_with(
|
||||
host=(('etcd.ccp.svc.cluster.local', 1234),),
|
||||
allow_reconnect=True,
|
||||
read_timeout=2,
|
||||
protocol='http',
|
||||
ca_cert=None)
|
||||
|
||||
def test_get_secured_etcd_client(self):
|
||||
start_script.VARIABLES = {
|
||||
"role_name": "banana",
|
||||
"namespace": "ccp",
|
||||
"cluster_domain": 'cluster.local',
|
||||
"services": {},
|
||||
"service_name": "test",
|
||||
"etcd": {
|
||||
"tls": {
|
||||
"enabled": True
|
||||
},
|
||||
"client_port": {
|
||||
"cont": 1234
|
||||
},
|
||||
"connection_attempts": 3,
|
||||
"connection_delay": 0,
|
||||
}
|
||||
}
|
||||
with mock.patch("etcd.Client") as m_etcd:
|
||||
expected_value = object()
|
||||
m_etcd.return_value = expected_value
|
||||
etcd_client = start_script.get_etcd_client()
|
||||
self.assertIs(expected_value, etcd_client)
|
||||
m_etcd.assert_called_once_with(
|
||||
host=(('etcd.ccp.svc.cluster.local', 1234),),
|
||||
allow_reconnect=True,
|
||||
read_timeout=2,
|
||||
protocol='https',
|
||||
ca_cert='/opt/ccp/etc/tls/ca.pem')
|
||||
|
||||
def test_get_etcd_client_wrong(self):
|
||||
start_script.VARIABLES = {
|
||||
"role_nmae": "banana"
|
||||
}
|
||||
self.assertRaises(KeyError, start_script.get_etcd_client)
|
@@ -1,11 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

Jinja2!=2.9.0,!=2.9.1,!=2.9.2,!=2.9.3,!=2.9.4,>=2.8 # BSD License (3 clause)
PyYAML>=3.10.0 # MIT
netifaces>=0.10.4 # MIT
pbr>=2.0.0 # Apache-2.0
pykube
python-etcd>=0.4.3 # MIT License
six>=1.9.0 # MIT
setup.cfg (46 lines changed)
@@ -1,46 +0,0 @@
[metadata]
name = fuel-ccp-entrypoint
summary = Entrypoint script for CCP containers
description-file =
    README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.3
    Programming Language :: Python :: 3.4

[files]
packages =
    fuel_ccp_entrypoint

[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1

[upload_sphinx]
upload-dir = doc/build/html

[compile_catalog]
directory = fuel_ccp_entrypoint/locale
domain = fuel_ccp_entrypoint

[update_catalog]
domain = fuel_ccp_entrypoint
output_dir = fuel_ccp_entrypoint/locale
input_file = fuel_ccp_entrypoint/locale/fuel_ccp_entrypoint.pot

[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = fuel_ccp_entrypoint/locale/fuel_ccp_entrypoint.pot
setup.py (29 lines changed)
@@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr>=1.8'],
    pbr=True)
@@ -1,15 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

hacking<0.11,>=0.10.0

coverage>=4.0 # Apache-2.0
flake8>=2.5.4,<2.6.0 # MIT
oslosphinx>=4.7.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
python-subunit>=0.0.18 # Apache-2.0/BSD
sphinx>=1.5.1 # BSD
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
tox.ini (37 lines changed)
@@ -1,37 +0,0 @@
[tox]
minversion = 1.7
envlist = py34,py27,pep8
skipsdist = True

[testenv]
usedevelop = True
install_command = pip install -c {toxinidir}/constraints.txt -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} -U {opts} {packages}
setenv =
    VIRTUAL_ENV={envdir}
deps =
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt
commands = python setup.py test --slowest --testr-args='{posargs}'

[testenv:pep8]
commands = flake8 {posargs}

[testenv:venv]
commands = {posargs}

[testenv:cover]
commands = python setup.py test --coverage --testr-args='{posargs}'

[testenv:docs]
commands = python setup.py build_sphinx

[testenv:debug]
commands = oslo_debug_helper {posargs}

[flake8]
# E123, E125 skipped as they are invalid PEP-8.

show-source = True
ignore = E123,E125,H102
builtins = _
exclude=.venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build