Retire the project

Change-Id: I0a9a6b292d9f0b6064eb770d27721b44f4bc6cf0

parent ab510184c9
commit ede787ba2f
.gitignore (vendored, 51 deletions)
@@ -1,51 +0,0 @@
*.py[cod]

# C extensions
*.so

# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg

# Installer logs
pip-log.txt

# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.testrepository

# Translations
*.mo

# Mr Developer
.mr.developer.cfg
.project
.pydevproject

# Complexity
output/*.html
output/*/index.html

# Sphinx
doc/build

# pbr generates these
AUTHORS
ChangeLog

# Editors
*~
.*.swp
.*sw?
.idea
.gitreview (deleted, 4 lines)
@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/sticks.git
CONTRIBUTING.rst (deleted, 16 lines)
@@ -1,16 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps in this page:

   http://docs.openstack.org/infra/manual/developers.html

Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:

   http://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

   https://bugs.launchpad.net/sticks
HACKING.rst (deleted, 4 lines)
@@ -1,4 +0,0 @@
sticks Style Commandments
===============================================

Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/
LICENSE (176 deletions)
@@ -1,176 +0,0 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.
MANIFEST.in (deleted, 6 lines)
@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview

global-exclude *.pyc
README.rst (21 changes)
@@ -1,15 +1,10 @@
-===============================
-sticks
-===============================
-
-System of tickets management
-
-* Free software: Apache license
-* Documentation: http://docs.openstack.org/developer/sticks
-* Source: http://git.openstack.org/cgit/openstack/sticks
-* Bugs: http://bugs.launchpad.net/replace with the name of the project on launchpad
-
-Features
---------
-
-* TODO
+This project is no longer maintained.
+
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
+
+For any further questions, please email
+openstack-dev@lists.openstack.org or join #openstack-dev on
+Freenode.
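The recovery step the new README describes can be demonstrated end to end. This is a minimal sketch (it assumes `git` is installed; the throwaway repository, file contents, and demo identity below are illustrative, not part of the sticks project):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "System of tickets management" > README.rst
echo "Apache License"               > LICENSE
git add README.rst LICENSE
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Last commit before retirement"
# Retirement: delete the files and leave only a notice in README.rst.
git rm -q LICENSE
echo "This project is no longer maintained." > README.rst
git add README.rst
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Retire the project"
# Step back one commit to see the pre-retirement tree, as the README advises:
git checkout -q 'HEAD^1'
```

After the checkout, LICENSE is back in the working tree and README.rst again holds its pre-retirement text; `HEAD^1` simply names the first parent of the retirement commit.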
sticks.sh (DevStack extras script, deleted, 39 lines)
@@ -1,39 +0,0 @@
# sticks.sh - Devstack extras script to install Sticks

if is_service_enabled sticks-api sticks-agent; then
    if [[ "$1" == "source" ]]; then
        # Initial source
        source $TOP_DIR/lib/sticks
    elif [[ "$1" == "stack" && "$2" == "install" ]]; then
        echo_summary "Installing Sticks"
        install_sticks
        install_sticksclient

        if is_service_enabled sticks-dashboard; then
            install_sticksdashboard
        fi
        cleanup_sticks
    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
        echo_summary "Configuring Sticks"
        configure_sticks
        if is_service_enabled sticks-dashboard; then
            configure_sticksdashboard
        fi
        if is_service_enabled key; then
            create_sticks_accounts
        fi

    elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
        # Initialize sticks
        echo_summary "Initializing Sticks"
        init_sticks

        # Start the Sticks API and Sticks agent components
        echo_summary "Starting Sticks"
        start_sticks
    fi

    if [[ "$1" == "unstack" ]]; then
        stop_sticks
    fi
fi
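DevStack drives extras scripts like the one above by sourcing them repeatedly with a phase argument (`$1`) and a step argument (`$2`), so the same file reacts differently at install, post-config, and unstack time. The dispatch shape can be sketched in plain shell; the `is_service_enabled` stub and the `*_sticks` functions below are simplified stand-ins for DevStack's real helpers, not its actual implementation:

```shell
# Stand-in for DevStack's service-enablement check (simplified: succeeds
# if any of the named services appears in the ENABLED_SERVICES list).
ENABLED_SERVICES="sticks-api,sticks-agent"

is_service_enabled() {
    for svc in "$@"; do
        case ",$ENABLED_SERVICES," in
            *,"$svc",*) return 0 ;;
        esac
    done
    return 1
}

# Hypothetical stubs standing in for the real install/configure hooks.
install_sticks()   { echo "install_sticks ran"; }
configure_sticks() { echo "configure_sticks ran"; }

# Same phase-dispatch shape as sticks.sh: $1 is the phase, $2 the step.
run_extras() {
    if is_service_enabled sticks-api sticks-agent; then
        if [ "$1" = "stack" ] && [ "$2" = "install" ]; then
            install_sticks
        elif [ "$1" = "stack" ] && [ "$2" = "post-config" ]; then
            configure_sticks
        fi
    fi
}
```

Calling `run_extras stack install` triggers only the install hook, and a phase with no matching branch (such as `unstack` here) is a no-op, which is exactly why the real script can be sourced once per DevStack phase.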
lib/sticks (deleted, 225 lines)
@@ -1,225 +0,0 @@
# lib/sticks
# Install and start **Sticks** service

# To enable a minimal set of Sticks services:
# - add the following to localrc:
#
#   enable_service sticks-api sticks-agent
#
# Dependencies:
# - functions
# - OS_AUTH_URL for auth in api
# - DEST, HORIZON_DIR, DATA_DIR set to the destination directory
# - SERVICE_PASSWORD, SERVICE_TENANT_NAME for auth in api
# - IDENTITY_API_VERSION for the version of Keystone
# - STACK_USER service user

# stack.sh
# ---------
# install_sticks
# install_sticksclient
# configure_sticks
# init_sticks
# start_sticks
# stop_sticks
# cleanup_sticks

# Save trace setting
XTRACE=$(set +o | grep xtrace)
set +o xtrace


# Defaults
# --------

# Set up default directories
STICKS_DIR=$DEST/sticks
STICKS_CONF_DIR=/etc/sticks
STICKS_CONF=$STICKS_CONF_DIR/sticks.conf
STICKS_POLICY=$STICKS_CONF_DIR/policy.json
STICKS_API_LOG_DIR=/var/log/sticks
STICKS_AUTH_CACHE_DIR=${STICKS_AUTH_CACHE_DIR:-/var/cache/sticks}
STICKS_REPORTS_DIR=${DATA_DIR}/sticks/reports
STICKS_CLIENT_DIR=$DEST/python-sticksclient
STICKS_DASHBOARD_DIR=$DEST/sticks-dashboard

# Support potential entry-points console scripts
if [[ -d $STICKS_DIR/bin ]]; then
    STICKS_BIN_DIR=$STICKS_DIR/bin
else
    STICKS_BIN_DIR=$(get_python_exec_prefix)
fi

# Set up database backend
STICKS_BACKEND=${STICKS_BACKEND:-sqlite}

# Set sticks repository
STICKS_REPO=${STICKS_REPO:-https://github.com/stackforge/sticks.git}
STICKS_BRANCH=${STICKS_BRANCH:-master}
STICKS_CLIENT_REPO=${STICKS_CLIENT_REPO:-https://github.com/stackforge/python-sticksclient.git}
STICKS_CLIENT_BRANCH=${STICKS_CLIENT_BRANCH:-master}
STICKS_DASHBOARD_REPO=${STICKS_DASHBOARD_REPO:-https://github.com/stackforge/sticks-dashboard.git}
STICKS_DASHBOARD_BRANCH=${STICKS_DASHBOARD_BRANCH:-master}

# Set Sticks connection info
STICKS_SERVICE_HOST=${STICKS_SERVICE_HOST:-$SERVICE_HOST}
STICKS_SERVICE_PORT=${STICKS_SERVICE_PORT:-8888}
STICKS_SERVICE_HOSTPORT="$STICKS_SERVICE_HOST:$STICKS_SERVICE_PORT"
STICKS_SERVICE_PROTOCOL=${STICKS_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}

# Set Sticks auth info
STICKS_ADMIN_USER=${STICKS_ADMIN_USER:-"admin"}
STICKS_ADMIN_PASSWORD=${STICKS_ADMIN_PASSWORD:-$ADMIN_PASSWORD}
STICKS_ADMIN_TENANT=${STICKS_ADMIN_TENANT:-"admin"}

# Tell Tempest this project is present
TEMPEST_SERVICES+=,sticks


# Functions
# ---------

# create_sticks_accounts() - Set up common required sticks accounts

# Tenant      User     Roles
# ------------------------------------------------------------------
# service     sticks   admin        # if enabled
function create_sticks_accounts {

    SERVICE_TENANT=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
    ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")

    # Sticks
    if [[ "$ENABLED_SERVICES" =~ "sticks-api" ]]; then
        STICKS_USER=$(openstack user create \
            sticks \
            --password "$SERVICE_PASSWORD" \
            --project $SERVICE_TENANT \
            --email sticks@example.com \
            | grep " id " | get_field 2)
        openstack role add \
            $ADMIN_ROLE \
            --project $SERVICE_TENANT \
            --user $STICKS_USER
        if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
            STICKS_SERVICE=$(openstack service create \
                sticks \
                --type=helpdesk \
                --description="Helpdesk service" \
                | grep " id " | get_field 2)
            openstack endpoint create \
                $STICKS_SERVICE \
                --region RegionOne \
                --publicurl "$STICKS_SERVICE_PROTOCOL://$STICKS_SERVICE_HOSTPORT" \
                --adminurl "$STICKS_SERVICE_PROTOCOL://$STICKS_SERVICE_HOSTPORT" \
                --internalurl "$STICKS_SERVICE_PROTOCOL://$STICKS_SERVICE_HOSTPORT"
        fi
    fi
}


# Test if any Sticks services are enabled
# is_sticks_enabled
function is_sticks_enabled {
    [[ ,${ENABLED_SERVICES} =~ ,"sticks-" ]] && return 0
    return 1
}

# cleanup_sticks() - Remove residual data files, anything left over from previous
# runs that a clean run would need to clean up
function cleanup_sticks {
    # Clean up dirs
    rm -rf $STICKS_AUTH_CACHE_DIR/*
    rm -rf $STICKS_CONF_DIR/*
    if [[ "$ENABLED_SERVICES" =~ "sticks-dashboard" ]]; then
        rm -f $HORIZON_DIR/openstack_dashboard/local/enabled/_60_sticks.py
    fi
}

# configure_sticks() - Set config files, create data dirs, etc
function configure_sticks {
    setup_develop $STICKS_DIR

    sudo mkdir -m 755 -p $STICKS_CONF_DIR
    sudo chown $STACK_USER $STICKS_CONF_DIR

    sudo mkdir -m 755 -p $STICKS_API_LOG_DIR
    sudo chown $STACK_USER $STICKS_API_LOG_DIR

    cp $STICKS_DIR$STICKS_CONF.sample $STICKS_CONF
    cp $STICKS_DIR$STICKS_POLICY $STICKS_POLICY

    # Default
    iniset $STICKS_CONF DEFAULT verbose True
    iniset $STICKS_CONF DEFAULT debug "$ENABLE_DEBUG_LOG_LEVEL"
    iniset $STICKS_CONF DEFAULT sql_connection `database_connection_url sticks`

    # auth
    iniset $STICKS_CONF keystone_authtoken auth_uri "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:5000/v2.0/"
    iniset $STICKS_CONF keystone_authtoken admin_user sticks
    iniset $STICKS_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
    iniset $STICKS_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
    iniset $STICKS_CONF keystone_authtoken region $REGION_NAME
    iniset $STICKS_CONF keystone_authtoken auth_host $KEYSTONE_AUTH_HOST
    iniset $STICKS_CONF keystone_authtoken auth_protocol $KEYSTONE_AUTH_PROTOCOL
    iniset $STICKS_CONF keystone_authtoken auth_port $KEYSTONE_AUTH_PORT
    iniset $STICKS_CONF keystone_authtoken signing_dir $STICKS_AUTH_CACHE_DIR
}

# configure_sticksdashboard()
function configure_sticksdashboard {
    ln -s $STICKS_DASHBOARD_DIR/_sticks.py.example $HORIZON_DIR/openstack_dashboard/local/enabled/_60_sticks.py
}

# init_sticks() - Initialize Sticks database
function init_sticks {
    # Delete existing cache
    sudo rm -rf $STICKS_AUTH_CACHE_DIR
    sudo mkdir -p $STICKS_AUTH_CACHE_DIR
    sudo chown $STACK_USER $STICKS_AUTH_CACHE_DIR
}

# install_sticks() - Collect source and prepare
function install_sticks {
    git_clone $STICKS_REPO $STICKS_DIR $STICKS_BRANCH
    setup_develop $STICKS_DIR
}

# install_sticksclient() - Collect source and prepare
function install_sticksclient {
    git_clone $STICKS_CLIENT_REPO $STICKS_CLIENT_DIR $STICKS_CLIENT_BRANCH
    setup_develop $STICKS_CLIENT_DIR
}

# install_sticksdashboard() - Collect source and prepare
function install_sticksdashboard {
    git_clone $STICKS_DASHBOARD_REPO $STICKS_DASHBOARD_DIR $STICKS_DASHBOARD_BRANCH
    setup_develop $STICKS_DASHBOARD_DIR
}


# start_sticks() - Start running processes, including screen
function start_sticks {
    screen_it sticks-agent "cd $STICKS_DIR; $STICKS_BIN_DIR/sticks-agent --config-file=$STICKS_CONF"
    screen_it sticks-api "cd $STICKS_DIR; $STICKS_BIN_DIR/sticks-api --config-file=$STICKS_CONF"
    echo "Waiting for sticks-api ($STICKS_SERVICE_HOST:$STICKS_SERVICE_PORT) to start..."
    if ! timeout $SERVICE_TIMEOUT sh -c "while ! curl --noproxy '*' -s http://$STICKS_SERVICE_HOST:$STICKS_SERVICE_PORT/v1/ >/dev/null; do sleep 1; done"; then
        die $LINENO "sticks-api did not start"
    fi
}

# stop_sticks() - Stop running processes
function stop_sticks {
    # Kill the sticks screen windows
    for serv in sticks-api sticks-agent; do
        screen_stop $serv
    done
}


# Restore xtrace
$XTRACE

# Local variables:
# mode: shell-script
# End:
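lib/sticks brackets itself with the xtrace save/restore idiom common to DevStack library files: capture the caller's current tracing setting, silence tracing while the file is sourced, then replay the saved setting at the end. In isolation the idiom looks like this:

```shell
# Save the caller's xtrace setting as a "set +o xtrace" / "set -o xtrace"
# command string, silence tracing, then restore it by re-running the string.
XTRACE=$(set +o | grep xtrace)   # e.g. "set +o xtrace" when tracing is off
set +o xtrace                    # disable tracing while this section runs

: "quiet work happens here"      # these commands produce no trace output

$XTRACE                          # replay the saved command, restoring state
```

Because `set +o` prints each option as the command that would reproduce it, replaying `$XTRACE` restores whichever tracing state the caller had, without the library needing to know what that state was.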
conf.py (Sphinx configuration, deleted, 267 lines)
@@ -1,267 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys


sys.path.insert(0, os.path.abspath('../..'))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.graphviz',
    'sphinx.ext.intersphinx',
    'sphinx.ext.viewcode',
    'wsmeext.sphinxext',
    'sphinxcontrib.pecanwsme.rest',
    'sphinxcontrib.httpdomain',
    'oslosphinx',
]

wsme_protocols = ['restjson', 'restxml']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'sticks'
copyright = u'2015, Eurogiciel'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.1'
# The full version, including alpha/beta/rc tags.
release = '0.1'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []

# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['sticks.']

# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_domain_indices = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'sticksdoc'


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
|
|
||||||
latex_documents = [
|
|
||||||
('index', 'sticks.tex', u'sticks Documentation',
|
|
||||||
u'Eurogiciel', 'manual'),
|
|
||||||
]
|
|
||||||
|
|
||||||
# The name of an image file (relative to this directory) to place at the top of
|
|
||||||
# the title page.
|
|
||||||
#latex_logo = None
|
|
||||||
|
|
||||||
# For "manual" documents, if this is true, then toplevel headings are parts,
|
|
||||||
# not chapters.
|
|
||||||
#latex_use_parts = False
|
|
||||||
|
|
||||||
# If true, show page references after internal links.
|
|
||||||
#latex_show_pagerefs = False
|
|
||||||
|
|
||||||
# If true, show URL addresses after external links.
|
|
||||||
#latex_show_urls = False
|
|
||||||
|
|
||||||
# Documents to append as an appendix to all manuals.
|
|
||||||
#latex_appendices = []
|
|
||||||
|
|
||||||
# If false, no module index is generated.
|
|
||||||
#latex_domain_indices = True
|
|
||||||
|
|
||||||
|
|
||||||
# -- Options for manual page output ---------------------------------------
|
|
||||||
|
|
||||||
# One entry per manual page. List of tuples
|
|
||||||
# (source start file, name, description, authors, manual section).
|
|
||||||
man_pages = [
|
|
||||||
('index', 'sticks', u'sticks Documentation',
|
|
||||||
[u'Eurogiciel'], 1)
|
|
||||||
]
|
|
||||||
|
|
||||||
# If true, show URL addresses after external links.
|
|
||||||
#man_show_urls = False
|
|
||||||
|
|
||||||
|
|
||||||
# -- Options for Texinfo output -------------------------------------------
|
|
||||||
|
|
||||||
# Grouping the document tree into Texinfo files. List of tuples
|
|
||||||
# (source start file, target name, title, author,
|
|
||||||
# dir menu entry, description, category)
|
|
||||||
texinfo_documents = [
|
|
||||||
('index', 'sticks', u'sticks Documentation',
|
|
||||||
u'Eurogiciel', 'sticks', 'One line description of project.',
|
|
||||||
'Miscellaneous'),
|
|
||||||
]
|
|
||||||
|
|
||||||
# Documents to append as an appendix to all manuals.
|
|
||||||
#texinfo_appendices = []
|
|
||||||
|
|
||||||
# If false, no module index is generated.
|
|
||||||
#texinfo_domain_indices = True
|
|
||||||
|
|
||||||
# How to display URL addresses: 'footnote', 'no', or 'inline'.
|
|
||||||
#texinfo_show_urls = 'footnote'
|
|
||||||
|
|
||||||
# If true, do not generate a @detailmenu in the "Top" node's menu.
|
|
||||||
#texinfo_no_detailmenu = False
|
|
@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst
@ -1,5 +0,0 @@
digraph "Sticks's Architecture" {


}
@ -1,48 +0,0 @@
.. sticks documentation master file, created by
   sphinx-quickstart on Wed May 14 23:05:42 2014.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

==============================================
Welcome to Sticks's developer documentation!
==============================================

Introduction
============

Sticks is a Security-as-a-Service project aimed at integrating security tools
inside OpenStack.

Installation
============

.. toctree::
   :maxdepth: 1

   installation


Architecture
============

.. toctree::
   :maxdepth: 1

   arch


API References
==============

.. toctree::
   :maxdepth: 1

   webapi/root
   webapi/v1


Indices and tables
==================

* :ref:`genindex`
* :ref:`search`
@ -1,130 +0,0 @@
#######################################
Sticks installation and configuration
#######################################


Install from source
===================

There is no release of Sticks as of now; the installation must be done from
the git repository.

Retrieve and install Sticks:

::

    git clone git://git.openstack.org/stackforge/sticks
    cd sticks
    python setup.py install

This procedure installs the ``sticks`` Python library and a few
executables:

* ``sticks-api``: API service

Install a sample configuration file:

::

    mkdir /etc/sticks
    cp etc/sticks/sticks.conf.sample /etc/sticks/sticks.conf

Configure Sticks
================

Edit :file:`/etc/sticks/sticks.conf` to configure Sticks.

The following shows the basic configuration items:

.. code-block:: ini

    [DEFAULT]
    verbose = True
    log_dir = /var/log/sticks

    rabbit_host = RABBIT_HOST
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASSWORD

    # Class of tracking plugin, i.e. redmine, trac, etc.
    #tracking_plugin=

    # Name of sticks role (default: sticks)
    #sticks_role_name=sticks

    [auth]
    username = sticks
    password = STICKS_PASSWORD
    tenant = service
    region = RegionOne
    url = http://localhost:5000/v2.0

    [keystone_authtoken]
    username = sticks
    password = STICKS_PASSWORD
    project_name = service
    region = RegionOne
    auth_url = http://localhost:5000/v2.0
    auth_plugin = password

    [database]
    connection = mysql://sticks:STICKS_DBPASS@localhost/sticks
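The layout above is plain INI syntax as consumed by oslo.config. As a quick illustration (a sketch for this guide, not part of Sticks itself, using the placeholder values from the sample), the same structure can be parsed with Python's standard ``configparser``:

```python
from configparser import ConfigParser

# A trimmed copy of the configuration shown above; the uppercase
# values are the sample's placeholders, not real credentials.
sample = """
[DEFAULT]
verbose = True
log_dir = /var/log/sticks

[auth]
username = sticks
tenant = service

[database]
connection = mysql://sticks:STICKS_DBPASS@localhost/sticks
"""

cfg = ConfigParser()
cfg.read_string(sample)

# Options live under their section; [DEFAULT] values are inherited.
assert cfg.getboolean("DEFAULT", "verbose") is True
assert cfg.get("auth", "username") == "sticks"
assert cfg.get("database", "connection").startswith("mysql://")
```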
Setup the database and storage backend
======================================

MySQL/MariaDB is the recommended database engine. To set up the database, use
the ``mysql`` client:

::

    mysql -uroot -p << EOF
    CREATE DATABASE sticks;
    GRANT ALL PRIVILEGES ON sticks.* TO 'sticks'@'localhost' IDENTIFIED BY 'STICKS_DBPASS';
    EOF

Run the database synchronisation scripts:

::

    sticks-dbsync upgrade

Initialize the storage backend:

::

    sticks-storage-init
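The ``connection`` option configured earlier is a standard SQLAlchemy-style database URL, and its user, password, host, and database name must match the ``GRANT`` statement above. A minimal sanity check of the URL's parts (illustrative only, using the placeholder password from the sample):

```python
from urllib.parse import urlsplit

# The URL format used by the [database] connection option above.
url = "mysql://sticks:STICKS_DBPASS@localhost/sticks"
parts = urlsplit(url)

assert parts.scheme == "mysql"
assert parts.username == "sticks"          # DB user from the GRANT statement
assert parts.password == "STICKS_DBPASS"   # placeholder password
assert parts.hostname == "localhost"
assert parts.path.lstrip("/") == "sticks"  # database name
```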
Setup Keystone
==============

Sticks uses Keystone for authentication.

To integrate Sticks with Keystone, run the following commands (as OpenStack
administrator):

::

    keystone user-create --name sticks --pass STICKS_PASS
    keystone user-role-add --user sticks --role admin --tenant service

Create the ``Helpdesk`` service and its endpoints:

::

    keystone service-create --name Sticks --type helpdesk
    keystone endpoint-create --service-id SECURITY_SERVICE_ID \
        --publicurl http://localhost:8888 \
        --adminurl http://localhost:8888 \
        --internalurl http://localhost:8888

Start Sticks
============

Start the API service:

::

    sticks-api --config-file /etc/sticks/sticks.conf
@ -1 +0,0 @@
.. include:: ../../README.rst
@ -1,7 +0,0 @@
========
Usage
========

To use sticks in a project::

    import sticks
@ -1,16 +0,0 @@
========================
Sticks REST API (root)
========================

.. rest-controller:: sticks.api.root:RootController
   :webprefix: / /

.. Dirty hack till the bug is fixed so we can specify root path

.. autotype:: sticks.api.root.APILink
   :members:

.. autotype:: sticks.api.root.APIMediaType
   :members:

.. autotype:: sticks.api.root.APIVersion
   :members:
@ -1,16 +0,0 @@
======================
Sticks REST API (v1)
======================


Tickets
=======

.. rest-controller:: sticks.api.v1.controllers.ticket:TicketsController
   :webprefix: /v1/tickets

.. autotype:: sticks.api.v1.datamodels.ticket.TicketResource
   :members:

.. autotype:: sticks.api.v1.datamodels.ticket.TicketResourceCollection
   :members:
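The ``/v1/tickets`` webprefix, combined with the endpoint registered in Keystone during installation (port 8888), gives the full resource URL. A small illustrative helper for composing it (hypothetical, not part of the Sticks client; addressing a single ticket by appending its id is an assumption):

```python
def ticket_url(base, ticket_id=None):
    # /v1/tickets is the webprefix declared for TicketsController;
    # a single ticket is addressed by appending its id (assumed).
    url = base.rstrip("/") + "/v1/tickets"
    return url if ticket_id is None else "%s/%s" % (url, ticket_id)

assert ticket_url("http://localhost:8888") == "http://localhost:8888/v1/tickets"
assert ticket_url("http://localhost:8888/", 42) == "http://localhost:8888/v1/tickets/42"
```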
@ -1,4 +0,0 @@
{
    "context_is_admin": "role:admin",
    "default": ""
}
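This file maps policy names to rules: ``context_is_admin`` requires the ``admin`` role, and the empty ``default`` rule allows everything not otherwise matched. A minimal sketch of how a ``role:<name>`` rule is evaluated (the real oslo policy engine supports a much richer rule grammar than this):

```python
import json

policy = json.loads('{"context_is_admin": "role:admin", "default": ""}')

def check(rule, roles):
    # Empty rule: always allowed (the "default" entry above).
    if rule == "":
        return True
    # Only the role:<name> form is handled in this sketch.
    kind, _, value = rule.partition(":")
    return kind == "role" and value in roles

assert check(policy["context_is_admin"], ["admin"])
assert not check(policy["context_is_admin"], ["member"])
assert check(policy["default"], [])
```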
@ -1,648 +0,0 @@
[DEFAULT]

#
# Options defined in oslo.messaging
#

# Use durable queues in amqp. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues=false

# Auto-delete queues in amqp. (boolean value)
#amqp_auto_delete=false

# Size of RPC connection pool. (integer value)
#rpc_conn_pool_size=30

# Qpid broker hostname. (string value)
#qpid_hostname=sticks

# Qpid broker port. (integer value)
#qpid_port=5672

# Qpid HA cluster host:port pairs. (list value)
#qpid_hosts=$qpid_hostname:$qpid_port

# Username for Qpid connection. (string value)
#qpid_username=

# Password for Qpid connection. (string value)
#qpid_password=

# Space separated list of SASL mechanisms to use for auth.
# (string value)
#qpid_sasl_mechanisms=

# Seconds between connection keepalive heartbeats. (integer
# value)
#qpid_heartbeat=60

# Transport to use, either 'tcp' or 'ssl'. (string value)
#qpid_protocol=tcp

# Whether to disable the Nagle algorithm. (boolean value)
#qpid_tcp_nodelay=true

# The number of prefetched messages held by receiver. (integer
# value)
#qpid_receiver_capacity=1

# The qpid topology version to use. Version 1 is what was
# originally used by impl_qpid. Version 2 includes some
# backwards-incompatible changes that allow broker federation
# to work. Users should update to version 2 when they are
# able to take everything down, as it requires a clean break.
# (integer value)
#qpid_topology_version=1

# SSL version to use (valid only if SSL enabled). Valid values
# are TLSv1, SSLv23 and SSLv3. SSLv2 may be available on some
# distributions. (string value)
#kombu_ssl_version=

# SSL key file (valid only if SSL enabled). (string value)
#kombu_ssl_keyfile=

# SSL cert file (valid only if SSL enabled). (string value)
#kombu_ssl_certfile=

# SSL certification authority file (valid only if SSL
# enabled). (string value)
#kombu_ssl_ca_certs=

# How long to wait before reconnecting in response to an AMQP
# consumer cancel notification. (floating point value)
#kombu_reconnect_delay=1.0

# The RabbitMQ broker address where a single node is used.
# (string value)
#rabbit_host=sticks

# The RabbitMQ broker port where a single node is used.
# (integer value)
#rabbit_port=5672

# RabbitMQ HA cluster host:port pairs. (list value)
#rabbit_hosts=$rabbit_host:$rabbit_port

# Connect over SSL for RabbitMQ. (boolean value)
#rabbit_use_ssl=false

# The RabbitMQ userid. (string value)
#rabbit_userid=guest

# The RabbitMQ password. (string value)
#rabbit_password=guest

# The RabbitMQ login method. (string value)
#rabbit_login_method=AMQPLAIN

# The RabbitMQ virtual host. (string value)
#rabbit_virtual_host=/

# How frequently to retry connecting with RabbitMQ. (integer
# value)
#rabbit_retry_interval=1

# How long to backoff for between retries when connecting to
# RabbitMQ. (integer value)
#rabbit_retry_backoff=2

# Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
#rabbit_max_retries=0

# Use HA queues in RabbitMQ (x-ha-policy: all). If you change
# this option, you must wipe the RabbitMQ database. (boolean
# value)
#rabbit_ha_queues=false

# If passed, use a fake RabbitMQ provider. (boolean value)
#fake_rabbit=false

# ZeroMQ bind address. Should be a wildcard (*), an ethernet
# interface, or IP. The "host" option should point or resolve
# to this address. (string value)
#rpc_zmq_bind_address=*

# MatchMaker driver. (string value)
#rpc_zmq_matchmaker=oslo.messaging._drivers.matchmaker.MatchMakerLocalhost

# ZeroMQ receiver listening port. (integer value)
#rpc_zmq_port=9501

# Number of ZeroMQ contexts, defaults to 1. (integer value)
#rpc_zmq_contexts=1

# Maximum number of ingress messages to locally buffer per
# topic. Default is unlimited. (integer value)
#rpc_zmq_topic_backlog=<None>

# Directory for holding IPC sockets. (string value)
#rpc_zmq_ipc_dir=/var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP
# address. Must match "host" option, if running Nova. (string
# value)
#rpc_zmq_host=sticks

# Seconds to wait before a cast expires (TTL). Only supported
# by impl_zmq. (integer value)
#rpc_cast_timeout=30

# Heartbeat frequency. (integer value)
#matchmaker_heartbeat_freq=300

# Heartbeat time-to-live. (integer value)
#matchmaker_heartbeat_ttl=600

# Size of RPC greenthread pool. (integer value)
#rpc_thread_pool_size=64

# Driver or drivers to handle sending notifications. (multi
# valued)
#notification_driver=

# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
#notification_topics=notifications

# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout=60

# A URL representing the messaging driver to use and its full
# configuration. If not set, we fall back to the rpc_backend
# option and driver specific configuration. (string value)
#transport_url=<None>

# The messaging driver to use, defaults to rabbit. Other
# drivers include qpid and zmq. (string value)
#rpc_backend=rabbit

# The default exchange under which topics are scoped. May be
# overridden by an exchange name specified in the
# transport_url option. (string value)
#control_exchange=openstack


#
# Options defined in sticks.manager
#

# (string value)
#tracking_plugin=redmine

# AMQP topic used for OpenStack notifications. (list value)
#notification_topics=notifications

# Messaging URLs to listen for notifications. Example:
# transport://user:pass@host1:port[,hostN:portN]/virtual_host
# (DEFAULT/transport_url is used if empty) (multi valued)
#messaging_urls=


#
# Options defined in sticks.service
#

# Name of this node. This can be an opaque identifier. It is
# not necessarily a hostname, FQDN, or IP address. However,
# the node name must be valid within an AMQP key, and if using
# ZeroMQ, a valid hostname, FQDN, or IP address. (string
# value)
#host=sticks

# Dispatcher to process data. (multi valued)
#dispatcher=database

# Number of workers for collector service. A single collector
# is enabled by default. (integer value)
#collector_workers=1

# Number of workers for notification service. A single
# notification agent is enabled by default. (integer value)
#notification_workers=1


#
# Options defined in sticks.api
#

# The strategy to use for authentication. (string value)
#auth_strategy=keystone


#
# Options defined in sticks.api.app
#

# Configuration file for WSGI definition of API. (string
# value)
#api_paste_config=api_paste.ini


#
# Options defined in sticks.common.exception
#

# Make exception message format errors fatal. (boolean value)
#fatal_exception_format_errors=false


#
# Options defined in sticks.openstack.common.eventlet_backdoor
#

# Enable eventlet backdoor. Acceptable values are 0, <port>,
# and <start>:<end>, where 0 results in listening on a random
# tcp port number; <port> results in listening on the
# specified port number (and not enabling backdoor if that
# port is in use); and <start>:<end> results in listening on
# the smallest unused port number within the specified range
# of port numbers. The chosen port is displayed in the
# service's log file. (string value)
#backdoor_port=<None>


#
# Options defined in sticks.openstack.common.lockutils
#

# Whether to disable inter-process locks. (boolean value)
#disable_process_locking=false

# Directory to use for lock files. (string value)
#lock_path=<None>


#
# Options defined in sticks.openstack.common.log
#

# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
#debug=false

# Print more verbose output (set logging level to INFO instead
# of default WARNING level). (boolean value)
#verbose=false

# Log output to standard error. (boolean value)
#use_stderr=true

# Format string to use for log messages with context. (string
# value)
#logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages without context.
# (string value)
#logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Data to append to log format when level is DEBUG. (string
# value)
#logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format.
# (string value)
#logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s

# List of logger=LEVEL pairs. (list value)
#default_log_levels=amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN

# Publish error events. (boolean value)
#publish_errors=false

# Make deprecations fatal. (boolean value)
#fatal_deprecations=false

# If an instance is passed with the log message, format it
# like this. (string value)
#instance_format="[instance: %(uuid)s] "

# If an instance UUID is passed with the log message, format
# it like this. (string value)
#instance_uuid_format="[instance: %(uuid)s] "

# The name of logging configuration file. It does not disable
# existing loggers, but just appends specified logging
# configuration to any other existing logging options. Please
# see the Python logging module documentation for details on
# logging configuration files. (string value)
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append=<None>

# DEPRECATED. A logging.Formatter log message format string
# which may use any of the available logging.LogRecord
# attributes. This option is deprecated. Please use
# logging_context_format_string and
# logging_default_format_string instead. (string value)
#log_format=<None>

# Format string for %%(asctime)s in log records. Default:
# %(default)s (string value)
#log_date_format=%Y-%m-%d %H:%M:%S

# (Optional) Name of log file to output to. If no default is
# set, logging will go to stdout. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file=<None>

# (Optional) The base directory used for relative --log-file
# paths. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir=<None>

# Use syslog for logging. Existing syslog format is DEPRECATED
# during I, and then will be changed in J to honor RFC5424.
# (boolean value)
#use_syslog=false

# (Optional) Use syslog rfc5424 format for logging. If
# enabled, will add APP-NAME (RFC5424) before the MSG part of
# the syslog message. The old format without APP-NAME is
# deprecated in I, and will be removed in J. (boolean value)
#use_syslog_rfc_format=false

# Syslog facility to receive log lines. (string value)
#syslog_log_facility=LOG_USER


#
# Options defined in sticks.openstack.common.policy
#

# JSON file containing policy. (string value)
#policy_file=policy.json

# Rule enforced when requested rule is not found. (string
# value)
#policy_default_rule=default


#
# Options defined in sticks.tracking
#

# Required role to issue tickets. (string value)
#sticks_role_name=sticks


#
# Options defined in sticks.tracking.redmine_tracking
#

# (string value)
#tracking_plugin=redmine


[api]

#
# Options defined in sticks.api.app
#

# Host serving the API. (string value)
#host_ip=0.0.0.0

# Host port serving the API. (integer value)
#port=8303


[keystone_authtoken]

#
# Options defined in keystoneclient.middleware.auth_token
#

# Prefix to prepend at the beginning of the path. Deprecated,
# use identity_uri. (string value)
#auth_admin_prefix=

# Host providing the admin Identity API endpoint. Deprecated,
# use identity_uri. (string value)
#auth_host=127.0.0.1

# Port of the admin Identity API endpoint. Deprecated, use
# identity_uri. (integer value)
#auth_port=35357

# Protocol of the admin Identity API endpoint (http or https).
# Deprecated, use identity_uri. (string value)
#auth_protocol=https

# Complete public Identity API endpoint. (string value)
#auth_uri=<None>

# Complete admin Identity API endpoint. This should specify
# the unversioned root endpoint e.g. https://localhost:35357/
# (string value)
#identity_uri=<None>

# API version of the admin Identity API endpoint. (string
# value)
#auth_version=<None>

# Do not handle authorization requests within the middleware,
# but delegate the authorization decision to downstream WSGI
# components. (boolean value)
#delay_auth_decision=false

# Request timeout value for communicating with Identity API
# server. (integer value)
#http_connect_timeout=<None>

# Number of times to reconnect when communicating with the
# Identity API server. (integer value)
#http_request_max_retries=3

# This option is deprecated and may be removed in a future
# release. Single shared secret with the Keystone
# configuration used for bootstrapping a Keystone
# installation, or otherwise bypassing the normal
# authentication process. This option should not be used; use
# `admin_user` and `admin_password` instead. (string value)
#admin_token=<None>

# Keystone account username. (string value)
#admin_user=<None>

# Keystone account password. (string value)
#admin_password=<None>

# Keystone service account tenant name to validate user tokens.
# (string value)
#admin_tenant_name=admin

# Env key for the swift cache. (string value)
#cache=<None>

# Required if Keystone server requires client certificate.
# (string value)
#certfile=<None>

# Required if Keystone server requires client certificate.
# (string value)
#keyfile=<None>

# A PEM encoded Certificate Authority to use when verifying
# HTTPs connections. Defaults to system CAs. (string value)
#cafile=<None>

# Verify HTTPS connections. (boolean value)
#insecure=false

# Directory used to cache files related to PKI tokens. (string
# value)
#signing_dir=<None>

# Optionally specify a list of memcached server(s) to use for
# caching. If left undefined, tokens will instead be cached
# in-process. (list value)
# Deprecated group/name - [DEFAULT]/memcache_servers
#memcached_servers=<None>

# In order to prevent excessive effort spent validating
# tokens, the middleware caches previously-seen tokens for a
# configurable duration (in seconds). Set to -1 to disable
# caching completely. (integer value)
#token_cache_time=300

# Determines the frequency at which the list of revoked tokens
# is retrieved from the Identity service (in seconds). A high
# number of revocation events combined with a low cache
# duration may significantly reduce performance. (integer
# value)
#revocation_cache_time=10

# (optional) if defined, indicate whether token data should be
# authenticated or authenticated and encrypted. Acceptable
# values are MAC or ENCRYPT. If MAC, token data is
# authenticated (with HMAC) in the cache. If ENCRYPT, token
# data is encrypted and authenticated in the cache. If the
# value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
#memcache_security_strategy=<None>

# (optional, mandatory if memcache_security_strategy is
# defined) this string is used for key derivation. (string
# value)
#memcache_secret_key=<None>

# (optional) indicate whether to set the X-Service-Catalog
# header. If False, middleware will not ask for service
# catalog on token validation and will not set the X-Service-
# Catalog header. (boolean value)
#include_service_catalog=true

# Used to control the use and type of token binding. Can be
# set to: "disabled" to not check token binding. "permissive"
# (default) to validate binding information if the bind type
# is of a form known to the server and ignore it if not.
# "strict" like "permissive" but if the bind type is unknown
# the token will be rejected. "required" any form of token
# binding is needed to be allowed. Finally the name of a
# binding method that must be present in tokens. (string
# value)
#enforce_token_bind=permissive

# If true, the revocation list will be checked for cached
# tokens. This requires that PKI tokens are configured on the
# Keystone server. (boolean value)
#check_revocations_for_cached=false

# Hash algorithms to use for hashing PKI tokens. This may be a
# single algorithm or multiple. The algorithms are those
# supported by Python standard hashlib.new(). The hashes will
# be tried in the order given, so put the preferred one first
# for performance. The result of the first hash will be stored
# in the cache. This will typically be set to multiple values
# only while migrating from a less secure algorithm to a more
# secure one. Once all the old tokens are expired this option
# should be set to a single value for better performance.
# (list value)
#hash_algorithms=md5


[matchmaker_redis]
|
|
||||||
|
|
||||||
#
|
|
||||||
# Options defined in oslo.messaging
|
|
||||||
#
|
|
||||||
|
|
||||||
# Host to locate redis. (string value)
|
|
||||||
#host=127.0.0.1
|
|
||||||
|
|
||||||
# Use this port to connect to redis host. (integer value)
|
|
||||||
#port=6379
|
|
||||||
|
|
||||||
# Password for Redis server (optional). (string value)
|
|
||||||
#password=<None>
|
|
||||||
|
|
||||||
|
|
||||||
[matchmaker_ring]
|
|
||||||
|
|
||||||
#
|
|
||||||
# Options defined in oslo.messaging
|
|
||||||
#
|
|
||||||
|
|
||||||
# Matchmaker ring file (JSON). (string value)
|
|
||||||
# Deprecated group/name - [DEFAULT]/matchmaker_ringfile
|
|
||||||
#ringfile=/etc/oslo/matchmaker_ring.json
|
|
||||||
|
|
||||||
|
|
||||||
[redmine]
|
|
||||||
|
|
||||||
#
|
|
||||||
# Options defined in sticks.tracking.redmine_tracking
|
|
||||||
#
|
|
||||||
|
|
||||||
# Redmine server URL (string value)
|
|
||||||
#redmine_url=http://
|
|
||||||
|
|
||||||
# Redmine API user (string value)
|
|
||||||
#redmine_login=
|
|
||||||
|
|
||||||
# Redmine API password (string value)
|
|
||||||
#redmine_password=
|
|
||||||
|
|
||||||
|
|
||||||
[service_credentials]
|
|
||||||
|
|
||||||
#
|
|
||||||
# Options defined in sticks.service
|
|
||||||
#
|
|
||||||
|
|
||||||
# User name to use for OpenStack service access. (string
|
|
||||||
# value)
|
|
||||||
#os_username=sticks
|
|
||||||
|
|
||||||
# Password to use for OpenStack service access. (string value)
|
|
||||||
#os_password=admin
|
|
||||||
|
|
||||||
# Tenant ID to use for OpenStack service access. (string
|
|
||||||
# value)
|
|
||||||
#os_tenant_id=
|
|
||||||
|
|
||||||
# Tenant name to use for OpenStack service access. (string
|
|
||||||
# value)
|
|
||||||
#os_tenant_name=admin
|
|
||||||
|
|
||||||
# Certificate chain for SSL validation. (string value)
|
|
||||||
#os_cacert=<None>
|
|
||||||
|
|
||||||
# Auth URL to use for OpenStack service access. (string value)
|
|
||||||
#os_auth_url=http://localhost:5000/v2.0
|
|
||||||
|
|
||||||
# Region name to use for OpenStack service endpoints. (string
|
|
||||||
# value)
|
|
||||||
#os_region_name=<None>
|
|
||||||
|
|
||||||
# Type of endpoint in Identity service catalog to use for
|
|
||||||
# communication with OpenStack services. (string value)
|
|
||||||
#os_endpoint_type=publicURL
|
|
||||||
|
|
||||||
# Disables X.509 certificate validation when an SSL connection
|
|
||||||
# to Identity Service is established. (boolean value)
|
|
||||||
#insecure=false
|
|
||||||
|
|
||||||
|
|
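The sections above use oslo.config's generated INI format, where each commented line shows an option's default. A minimal sketch, using only the standard-library `configparser` and the default values listed above as placeholder data, of what reading an uncommented `[service_credentials]` section looks like:

```python
# Illustrative only: parses a [service_credentials] fragment built from the
# documented defaults above (os_username=sticks, os_password=admin, ...).
# The real service uses oslo.config, not configparser.
import configparser

SAMPLE = """
[service_credentials]
os_username = sticks
os_password = admin
os_auth_url = http://localhost:5000/v2.0
os_endpoint_type = publicURL
insecure = false
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)

creds = parser["service_credentials"]
print(creds["os_username"])          # service user name
print(creds.getboolean("insecure"))  # "false" is parsed as a boolean
```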
@ -1,17 +0,0 @@
[DEFAULT]

# The list of modules to copy from oslo-incubator
module=config
module=config.generator
module=context
module=gettextutils
module=importutils
module=jsonutils
module=lockutils
module=log
module=policy
module=service

# The base module to hold the copy of openstack.common
base=sticks
@ -1,15 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

pbr<2.0,>=1.3
Babel>=1.3
python-redmine
paste
pecan>=0.8.0
oslo.messaging>=1.3.0,<1.5
oslo.config>=1.11.0,<=1.15.0 # Apache-2.0
oslo.utils<2.0.0
oslo.serialization<1.7.0
python-keystoneclient>=1.6.0
WSME>=0.7
55
setup.cfg
@ -1,55 +0,0 @@
[metadata]
name = sticks
summary = System of tickets management
description-file =
    README.rst
author = Eurogiciel
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 2.6
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.3
    Programming Language :: Python :: 3.4

[files]
packages =
    sticks

[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1

[upload_sphinx]
upload-dir = doc/build/html

[compile_catalog]
directory = sticks/locale
domain = sticks

[update_catalog]
domain = sticks
output_dir = sticks/locale
input_file = sticks/locale/sticks.pot

[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = sticks/locale/sticks.pot

[entry_points]
console_scripts =
    sticks-api = sticks.cli.api:main
    sticks-agent = sticks.cli.agent:main

sticks.tracking =
    redmine = sticks.tracking.redmine_tracking:RedmineTracking
29
setup.py
@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr>=1.3'],
    pbr=True)
@ -1,24 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import eventlet

eventlet.monkey_patch()

import pbr.version


__version__ = pbr.version.VersionInfo(
    'sticks').version_string()
@ -1,27 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from oslo.config import cfg

from sticks.openstack.common.gettextutils import _  # noqa

keystone_opts = [
    cfg.StrOpt('auth_strategy', default='keystone',
               help=_('The strategy to use for authentication.'))
]

CONF = cfg.CONF
CONF.register_opts(keystone_opts)
@ -1,109 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from wsgiref import simple_server

from oslo.config import cfg
import pecan

from sticks.api import auth
from sticks.api import config as api_config
from sticks.api import hooks
from sticks.openstack.common import log as logging


LOG = logging.getLogger(__name__)

auth_opts = [
    cfg.StrOpt('api_paste_config',
               default="api_paste.ini",
               help="Configuration file for WSGI definition of API."
               ),
]

api_opts = [
    cfg.StrOpt('host_ip',
               default="0.0.0.0",
               help="Host serving the API."
               ),
    cfg.IntOpt('port',
               default=8303,
               help="Host port serving the API."
               ),
]

CONF = cfg.CONF
CONF.register_opts(auth_opts)
CONF.register_opts(api_opts, group='api')


def get_pecan_config():
    # Set up the pecan configuration
    filename = api_config.__file__.replace('.pyc', '.py')
    return pecan.configuration.conf_from_file(filename)


def setup_app(pecan_config=None, extra_hooks=None):

    if not pecan_config:
        pecan_config = get_pecan_config()

    app_hooks = [hooks.ConfigHook(),
                 hooks.ContextHook(pecan_config.app.acl_public_routes),
                 ]

    if pecan_config.app.enable_acl:
        app_hooks.append(hooks.AdminAuthHook())

    pecan.configuration.set_config(dict(pecan_config), overwrite=True)

    app = pecan.make_app(
        pecan_config.app.root,
        static_root=pecan_config.app.static_root,
        template_path=pecan_config.app.template_path,
        debug=CONF.debug,
        force_canonical=getattr(pecan_config.app, 'force_canonical',
                                True),
        hooks=app_hooks,
        guess_content_type_from_ext=False
    )

    if pecan_config.app.enable_acl:
        strategy = auth.strategy(CONF.auth_strategy)
        return strategy.install(app,
                                cfg.CONF,
                                pecan_config.app.acl_public_routes)

    return app


def build_server():
    # Create the WSGI server and start it
    host = CONF.api.host_ip
    port = CONF.api.port

    server_cls = simple_server.WSGIServer
    handler_cls = simple_server.WSGIRequestHandler

    app = setup_app()

    srv = simple_server.make_server(
        host,
        port,
        app,
        server_cls,
        handler_cls)

    return srv
@ -1,61 +0,0 @@
#
# Copyright (c) 2015 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from sticks.api.middleware import auth_token

from sticks.openstack.common import log

STRATEGIES = {}

LOG = log.getLogger(__name__)


class KeystoneAuth(object):

    OPT_GROUP_NAME = 'keystone_authtoken'

    @classmethod
    def _register_opts(cls, conf):
        """Register keystoneclient middleware options."""

        if cls.OPT_GROUP_NAME not in conf:
            conf.register_opts(auth_token.opts, group=cls.OPT_GROUP_NAME)
            auth_token.CONF = conf

    @classmethod
    def install(cls, app, conf, public_routes):
        """Install Auth check on application."""
        LOG.debug(u'Installing Keystone\'s auth protocol')
        cls._register_opts(conf)
        conf = dict(conf.get(cls.OPT_GROUP_NAME))
        return auth_token.AuthTokenMiddleware(app,
                                              conf=conf,
                                              public_api_routes=public_routes)


STRATEGIES['keystone'] = KeystoneAuth


def strategy(strategy):
    """Returns the Auth Strategy.

    :param strategy: String representing
                     the strategy to use
    """
    try:
        return STRATEGIES[strategy]
    except KeyError:
        raise RuntimeError
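The `STRATEGIES` dict in the file above is a plain name-to-class registry consulted once at application setup. A self-contained sketch of the same lookup pattern — the class here is a stand-in, not the real Keystone middleware wrapper:

```python
# Stand-alone illustration of the strategy registry used above: classes are
# registered under a name and resolved at setup time; unknown names raise
# RuntimeError, mirroring the original code.
STRATEGIES = {}


class KeystoneAuth(object):
    """Placeholder for the real auth strategy class."""
    name = 'keystone'


STRATEGIES['keystone'] = KeystoneAuth


def strategy(name):
    """Return the auth strategy class registered under ``name``."""
    try:
        return STRATEGIES[name]
    except KeyError:
        raise RuntimeError('Unknown auth strategy: %s' % name)


print(strategy('keystone').name)
```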
@ -1,28 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


# Pecan Application Configurations
app = {
    'root': 'sticks.api.root.RootController',
    'modules': ['sticks.api'],
    'static_root': '%(confdir)s/public',
    'template_path': '%(confdir)s/templates',
    'debug': True,
    'enable_acl': False,
    'acl_public_routes': ['/', '/v1'],
    'member_routes': ['/v1/ticket', ]
}
@ -1,103 +0,0 @@
#
# Copyright (c) 2015 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from oslo.config import cfg
from pecan import hooks
from webob import exc

from sticks.common import context
from sticks.common import policy


class ConfigHook(hooks.PecanHook):
    """Attach the config object to the request so controllers can get to it."""

    def before(self, state):
        state.request.cfg = cfg.CONF


class ContextHook(hooks.PecanHook):
    """Configures a request context and attaches it to the request.

    The following HTTP request headers are used:

    X-User-Id or X-User:
        Used for context.user_id.

    X-Tenant-Id or X-Tenant:
        Used for context.tenant.

    X-Auth-Token:
        Used for context.auth_token.

    X-Roles:
        Used for setting context.is_admin flag to either True or False.
        The flag is set to True, if X-Roles contains either an administrator
        or admin substring. Otherwise it is set to False.

    """
    def __init__(self, public_api_routes):
        self.public_api_routes = public_api_routes
        super(ContextHook, self).__init__()

    def before(self, state):
        user_id = state.request.headers.get('X-User-Id')
        user_id = state.request.headers.get('X-User', user_id)
        tenant_id = state.request.headers.get('X-Tenant-Id')
        tenant = state.request.headers.get('X-Tenant', tenant_id)
        domain_id = state.request.headers.get('X-User-Domain-Id')
        domain_name = state.request.headers.get('X-User-Domain-Name')
        auth_token = state.request.headers.get('X-Auth-Token')
        roles = state.request.headers.get('X-Roles', '').split(',')
        creds = {'roles': roles}

        is_public_api = state.request.environ.get('is_public_api', False)
        is_admin = policy.enforce('context_is_admin',
                                  state.request.headers,
                                  creds)

        state.request.context = context.RequestContext(
            auth_token=auth_token,
            user=user_id,
            tenant_id=tenant_id,
            tenant=tenant,
            domain_id=domain_id,
            domain_name=domain_name,
            is_admin=is_admin,
            is_public_api=is_public_api,
            roles=roles)


class AdminAuthHook(hooks.PecanHook):
    """Verify that the user has admin rights.

    Checks whether the request context is an admin context and
    rejects the request if the api is not public.

    """

    def is_path_in_routes(self, path):
        for p in self.member_routes:
            if path.startswith(p):
                return True
        return False

    def before(self, state):
        ctx = state.request.context

        if not ctx.is_admin and not ctx.is_public_api and \
                not self.is_path_in_routes(state.request.path):
            raise exc.HTTPForbidden()
@ -1,20 +0,0 @@
#
# Copyright (c) 2015 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


from sticks.api.middleware import auth_token

AuthTokenMiddleware = auth_token.AuthTokenMiddleware
@ -1,61 +0,0 @@
#
# Copyright (c) 2015 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import re

from keystoneclient.middleware import auth_token

from sticks.common import exception
from sticks.common import safe_utils
from sticks.openstack.common import log

LOG = log.getLogger(__name__)


class AuthTokenMiddleware(auth_token.AuthProtocol):
    """A wrapper on Keystone auth_token middleware.

    Does not perform verification of authentication tokens
    for public routes in the API.

    """
    def __init__(self, app, conf, public_api_routes=[]):
        route_pattern_tpl = '%s(\.json|\.xml)?$'

        try:
            self.public_api_routes = [re.compile(route_pattern_tpl % route_tpl)
                                      for route_tpl in public_api_routes]
        except re.error as e:
            msg = _('Cannot compile public API routes: %s') % e

            LOG.error(msg)
            raise exception.ConfigInvalid(error_msg=msg)

        super(AuthTokenMiddleware, self).__init__(app, conf)

    def __call__(self, env, start_response):
        path = safe_utils.safe_rstrip(env.get('PATH_INFO'), '/')

        # The information whether the API call is being performed against the
        # public API is required for some other components. Saving it to the
        # WSGI environment is reasonable thereby.
        env['is_public_api'] = any(map(lambda pattern: re.match(pattern, path),
                                       self.public_api_routes))

        if env['is_public_api']:
            return self.app(env, start_response)

        return super(AuthTokenMiddleware, self).__call__(env, start_response)
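The route template `'%s(\.json|\.xml)?$'` in the middleware above turns each public route into a regex that also accepts a `.json` or `.xml` suffix, and a request path is public when any compiled pattern matches. A quick stdlib check of that matching behaviour, using the `['/', '/v1']` public routes from the Pecan config above:

```python
# Demonstrates the public-route matching used by AuthTokenMiddleware above:
# each route template gains an optional .json/.xml suffix, and re.match
# anchors the pattern at the start of the request path.
import re

route_pattern_tpl = r'%s(\.json|\.xml)?$'
public_api_routes = ['/', '/v1']

patterns = [re.compile(route_pattern_tpl % route) for route in public_api_routes]


def is_public(path):
    """Return True when any public-route pattern matches ``path``."""
    return any(pattern.match(path) for pattern in patterns)


print(is_public('/v1'))         # bare route matches
print(is_public('/v1.json'))    # optional suffix also matches
print(is_public('/v1/ticket'))  # deeper paths are not public
```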
@ -1,142 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import pecan
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan

from sticks.api.v1 import controllers as v1_api
from sticks.openstack.common import log as logging


LOG = logging.getLogger(__name__)

VERSION_STATUS = wtypes.Enum(wtypes.text, 'EXPERIMENTAL', 'STABLE')


class APILink(wtypes.Base):
    """API link description."""

    type = wtypes.text
    """Type of link."""

    rel = wtypes.text
    """Relationship with this link."""

    href = wtypes.text
    """URL of the link."""

    @classmethod
    def sample(cls):
        version = 'v1'
        sample = cls(
            rel='self',
            type='text/html',
            href='http://127.0.0.1:8888/{id}'.format(
                id=version))
        return sample


class APIMediaType(wtypes.Base):
    """Media type description."""

    base = wtypes.text
    """Base type of this media type."""

    type = wtypes.text
    """Type of this media type."""

    @classmethod
    def sample(cls):
        sample = cls(
            base='application/json',
            type='application/vnd.openstack.sticks-v1+json')
        return sample


class APIVersion(wtypes.Base):
    """API Version description."""

    id = wtypes.text
    """ID of the version."""

    status = VERSION_STATUS
    """Status of the version."""

    updated = wtypes.text
    """Last update in iso8601 format."""

    links = [APILink]
    """List of links to API resources."""

    media_types = [APIMediaType]
    """Types accepted by this API."""

    @classmethod
    def sample(cls):
        version = 'v1'
        updated = '2014-08-11T16:00:00Z'
        links = [APILink.sample()]
        media_types = [APIMediaType.sample()]
        sample = cls(id=version,
                     status='STABLE',
                     updated=updated,
                     links=links,
                     media_types=media_types)
        return sample


class RootController(rest.RestController):
    """Root REST Controller exposing versions of the API."""

    v1 = v1_api.V1Controller()

    @wsme_pecan.wsexpose([APIVersion])
    def get(self):
        """Return the version list."""
        # TODO(sheeprine): Maybe we should store all the API version
        # information in every API module
        ver1 = APIVersion(
            id='v1',
            status='EXPERIMENTAL',
            updated='2015-03-09T16:00:00Z',
            links=[
                APILink(
                    rel='self',
                    href='{scheme}://{host}/v1'.format(
                        scheme=pecan.request.scheme,
                        host=pecan.request.host,
                    )
                )
            ],
            media_types=[]
        )

        versions = []
        versions.append(ver1)

        return versions
@ -1,26 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from pecan import rest

from sticks.api.v1.controllers import ticket as ticket_api


class V1Controller(rest.RestController):
    """API version 1 controller."""

    tickets = ticket_api.TicketsController()
@ -1,64 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import pecan
from pecan import core
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan

from sticks.api.v1.datamodels import ticket as ticket_models
from sticks import manager
from sticks.openstack.common import log as logging


LOG = logging.getLogger(__name__)


class TicketsController(rest.RestController):
    """REST Controller ticket management."""

    def __init__(self):
        self.sticks_manager = manager.SticksManager()

    @wsme_pecan.wsexpose(ticket_models.TicketResourceCollection,
                         wtypes.text)
    def get_all(self, project):
        """Return all tickets"""
        return self.sticks_manager.dm.driver.get_tickets(project)

    @wsme_pecan.wsexpose(ticket_models.TicketResource,
                         wtypes.text)
    def get(self, ticket_id):
        """Return ticket"""
        return self.sticks_manager.dm.driver.get_ticket(ticket_id)

    @wsme_pecan.wsexpose(ticket_models.TicketResource,
                         body=ticket_models.TicketResource)
    def post(self, data):
        """Create a ticket"""
        return self.sticks_manager.dm.driver.create_ticket(data)

    @pecan.expose()
    def put(self):
        """Modify a ticket"""
        core.response.status = 204
        return

    @pecan.expose()
    def delete(self):
        """Delete a ticket"""
        core.response.status = 200
        return
@ -1,26 +0,0 @@
#
# Copyright (c) 2015 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import wsme
from wsme import types as wtypes


class Base(wtypes.Base):

    def as_dict_from_keys(self, keys):
        return dict((k, getattr(self, k))
                    for k in keys
                    if hasattr(self, k) and
                    getattr(self, k) != wsme.Unset)
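The filtering done by `as_dict_from_keys` can be sketched without wsme by substituting a plain sentinel for `wsme.Unset`; `UNSET` and the `Ticket` class below are illustrative stand-ins, not part of the project:

```python
UNSET = object()  # stand-in sentinel for wsme.Unset


class Base(object):
    def as_dict_from_keys(self, keys):
        # Keep only attributes that exist on the object and are actually set.
        return dict((k, getattr(self, k))
                    for k in keys
                    if getattr(self, k, UNSET) is not UNSET)


class Ticket(Base):
    def __init__(self):
        self.title = 'incident'


print(Ticket().as_dict_from_keys(['title', 'id']))
```

Unset attributes (here `id`) simply drop out of the resulting dict instead of appearing as `None`.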
@ -1,63 +0,0 @@
#
# Copyright (c) 2015 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import datetime

from sticks.api.v1.datamodels import base
from wsme import types as wtypes


class TicketResource(base.Base):
    """Type describing a ticket."""

    title = wtypes.text
    """Title of the ticket."""

    id = wtypes.text
    """Id of the ticket."""

    description = wtypes.text
    """Description of the ticket."""

    project = wtypes.text
    """Associated project of the ticket."""

    start_date = datetime.date
    """Start date."""

    status = wtypes.text
    """Status."""

    category = wtypes.text
    """Category."""

    def as_dict(self):
        return self.as_dict_from_keys(['title', 'id', 'project', 'start_date',
                                       'status', 'description', 'category'])

    @classmethod
    def sample(cls):
        sample = cls(project='test_project',
                     title='Ticket incident')
        return sample


class TicketResourceCollection(base.Base):
    """A list of Tickets."""

    tickets = [TicketResource]
@ -1,43 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import sys

from oslo.config import cfg

from sticks.common import config
from sticks import manager
from sticks.openstack.common import log
from sticks.openstack.common import service


LOG = log.getLogger(__name__)


def main():
    log.set_defaults(cfg.CONF.default_log_levels)
    argv = sys.argv
    config.parse_args(argv)
    log.setup(cfg.CONF, 'sticks')
    launcher = service.ProcessLauncher()
    c_manager = manager.SticksManager()
    launcher.launch_service(c_manager)
    launcher.wait()


if __name__ == '__main__':
    main()
@ -1,30 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from sticks.api import app
from sticks import service


def main():
    service.prepare_service()
    server = app.build_server()
    try:
        server.serve_forever()
    except KeyboardInterrupt:
        pass


if __name__ == '__main__':
    main()
@ -1,15 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
@ -1,70 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import functools

from keystoneclient.v2_0 import client as keystone_client_v2_0
from oslo.config import cfg

from sticks.openstack.common import log


cfg.CONF.import_group('service_credentials', 'sticks.service')

LOG = log.getLogger(__name__)


def logged(func):

    @functools.wraps(func)
    def with_logging(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            LOG.exception(e)
            raise

    return with_logging


class Client(object):
    """A client which gets information via python-keystoneclient."""

    def __init__(self):
        """Initialize a keystone client object."""
        conf = cfg.CONF.service_credentials
        self.keystone_client_v2_0 = keystone_client_v2_0.Client(
            username=conf.os_username,
            password=conf.os_password,
            tenant_name=conf.os_tenant_name,
            auth_url=conf.os_auth_url,
            region_name=conf.os_region_name,
        )

    @logged
    def user_detail_get(self, user):
        """Return details for a user."""
        return self.keystone_client_v2_0.users.get(user)

    @logged
    def roles_for_user(self, user, tenant=None):
        """Return the roles of a user, optionally scoped to a tenant."""
        return self.keystone_client_v2_0.roles.roles_for_user(user, tenant)

    @logged
    def project_get(self, project_id):
        """Return details for a project."""
        return self.keystone_client_v2_0.tenants.get(project_id)
@ -1,15 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
@ -1,25 +0,0 @@
#
# Copyright (c) 2015 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from oslo.config import cfg

from sticks import version


def parse_args(argv, default_config_files=None):
    cfg.CONF(argv[1:],
             project='sticks',
             version=version.version_info.release_string(),
             default_config_files=default_config_files)
@ -1,64 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from sticks.openstack.common import context


class RequestContext(context.RequestContext):
    """Extends security contexts from the OpenStack common library."""

    def __init__(self, auth_token=None, domain_id=None, domain_name=None,
                 user=None, tenant_id=None, tenant=None, is_admin=False,
                 is_public_api=False, read_only=False, show_deleted=False,
                 request_id=None, roles=None):
        """Stores several additional request parameters:

        :param domain_id: The ID of the domain.
        :param domain_name: The name of the domain.
        :param is_public_api: Specifies whether the request should be processed
                              without authentication.

        """
        self.tenant_id = tenant_id
        self.is_public_api = is_public_api
        self.domain_id = domain_id
        self.domain_name = domain_name
        self.roles = roles or []

        super(RequestContext, self).__init__(auth_token=auth_token,
                                             user=user, tenant=tenant,
                                             is_admin=is_admin,
                                             read_only=read_only,
                                             show_deleted=show_deleted,
                                             request_id=request_id)

    def to_dict(self):
        return {'auth_token': self.auth_token,
                'user': self.user,
                'tenant_id': self.tenant_id,
                'tenant': self.tenant,
                'is_admin': self.is_admin,
                'read_only': self.read_only,
                'show_deleted': self.show_deleted,
                'request_id': self.request_id,
                'domain_id': self.domain_id,
                'roles': self.roles,
                'domain_name': self.domain_name,
                'is_public_api': self.is_public_api}

    @classmethod
    def from_dict(cls, values):
        values.pop('user', None)
        values.pop('tenant', None)
        return cls(**values)
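The `to_dict`/`from_dict` pair implements a serialization round trip for passing the context over RPC. A toy stand-in (not the project's class, which also carries the oslo base-context fields) showing the contract:

```python
class MiniContext(object):
    """Toy stand-in for RequestContext's to_dict/from_dict round trip."""

    def __init__(self, auth_token=None, tenant_id=None, roles=None):
        self.auth_token = auth_token
        self.tenant_id = tenant_id
        self.roles = roles or []

    def to_dict(self):
        # Everything needed to rebuild the context on the receiving side.
        return {'auth_token': self.auth_token,
                'tenant_id': self.tenant_id,
                'roles': self.roles}

    @classmethod
    def from_dict(cls, values):
        return cls(**values)


ctx = MiniContext(auth_token='tok', tenant_id='t1')
rebuilt = MiniContext.from_dict(ctx.to_dict())
```

The real `from_dict` pops `user` and `tenant` first because those keys are stored by the parent class rather than accepted by the subclass constructor.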
@ -1,28 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


class InvalidOperation(Exception):

    def __init__(self, description):
        super(InvalidOperation, self).__init__(description)


class TicketNotFound(InvalidOperation):

    def __init__(self, id):
        super(TicketNotFound, self).__init__("Ticket %s does not exist"
                                             % str(id))
@ -1,142 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

"""Sticks base exception handling.

Includes decorator for re-raising Nova-type exceptions.

SHOULD include dedicated exception logging.

"""

import functools
import gettext as t
import logging
import sys

import webob.exc

from oslo.config import cfg

from sticks.common import safe_utils
from sticks.openstack.common import excutils


_ = t.gettext

LOG = logging.getLogger(__name__)

exc_log_opts = [
    cfg.BoolOpt('fatal_exception_format_errors',
                default=False,
                help='Make exception message format errors fatal'),
]

CONF = cfg.CONF
CONF.register_opts(exc_log_opts)


class ConvertedException(webob.exc.WSGIHTTPException):
    def __init__(self, code=0, title="", explanation=""):
        self.code = code
        self.title = title
        self.explanation = explanation
        super(ConvertedException, self).__init__()


def _cleanse_dict(original):
    """Strip all admin_password, new_pass, rescue_pass keys from a dict."""
    return dict((k, v) for k, v in original.iteritems() if "_pass" not in k)


def wrap_exception(notifier=None, get_notifier=None):
    """This decorator wraps a method to catch any exceptions that may
    get thrown. It logs the exception as well as optionally sending
    it to the notification system.
    """
    def inner(f):
        def wrapped(self, context, *args, **kw):
            # Don't store self or context in the payload, it now seems to
            # contain confidential information.
            try:
                return f(self, context, *args, **kw)
            except Exception as e:
                with excutils.save_and_reraise_exception():
                    if notifier or get_notifier:
                        payload = dict(exception=e)
                        call_dict = safe_utils.getcallargs(f, context,
                                                           *args, **kw)
                        cleansed = _cleanse_dict(call_dict)
                        payload.update({'args': cleansed})

                        # If f has multiple decorators, they must use
                        # functools.wraps to ensure the name is
                        # propagated.
                        event_type = f.__name__

                        (notifier or get_notifier()).error(context,
                                                           event_type,
                                                           payload)

        return functools.wraps(f)(wrapped)
    return inner


class SticksException(Exception):
    """Base Sticks Exception

    To correctly use this class, inherit from it and define
    a 'msg_fmt' property. That msg_fmt will get printf'd
    with the keyword arguments provided to the constructor.

    """
    msg_fmt = _("An unknown exception occurred.")
    code = 500
    headers = {}
    safe = False

    def __init__(self, message=None, **kwargs):
        self.kwargs = kwargs

        if 'code' not in self.kwargs:
            try:
                self.kwargs['code'] = self.code
            except AttributeError:
                pass

        if not message:
            try:
                message = self.msg_fmt % kwargs
            except Exception:
                exc_info = sys.exc_info()
                # kwargs doesn't match a variable in the message;
                # log the issue and the kwargs
                LOG.exception(_('Exception in string format operation'))
                for name, value in kwargs.iteritems():
                    LOG.error("%s: %s" % (name, value))  # noqa

                if CONF.fatal_exception_format_errors:
                    raise exc_info[0], exc_info[1], exc_info[2]
                else:
                    # at least get the core message out if something happened
                    message = self.msg_fmt

        super(SticksException, self).__init__(message)

    def format_message(self):
        # NOTE(mrodden): use the first argument to the python Exception object
        # which should be our full SticksException message (see __init__)
        return self.args[0]
@ -1,26 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import datetime
import json


class DateTimeEncoder(json.JSONEncoder):
    def default(self, obj):
        """JSON serializer for objects not serializable by default json code"""
        if isinstance(obj, datetime.datetime):
            serial = obj.isoformat()
            return serial
        # Fall back to the base class, which raises TypeError for
        # unserializable objects instead of silently emitting null.
        return super(DateTimeEncoder, self).default(obj)
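This pattern, serializing datetimes by overriding `JSONEncoder.default`, can be exercised with only the standard library; a minimal sketch:

```python
import datetime
import json


class DateTimeEncoder(json.JSONEncoder):
    """Serialize datetime objects as ISO 8601 strings."""

    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()
        # Delegate everything else; the base class raises TypeError.
        return super().default(obj)


payload = {"created": datetime.datetime(2015, 1, 2, 3, 4, 5)}
wire = json.dumps(payload, cls=DateTimeEncoder)
print(wire)
```

`default` is only consulted for objects the standard encoder cannot handle, so ordinary strings and numbers are unaffected.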
@ -1,69 +0,0 @@
# Copyright (c) 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Policy Engine For Sticks (adapted from Ironic)."""

# from oslo.concurrency import lockutils
from oslo.config import cfg

from sticks.openstack.common import policy

_ENFORCER = None
CONF = cfg.CONF


# @lockutils.synchronized('policy_enforcer', 'ironic-')
def init_enforcer(policy_file=None, rules=None,
                  default_rule=None, use_conf=True):
    """Synchronously initializes the policy enforcer.

    :param policy_file: Custom policy file to use; if none is specified,
                        `CONF.policy_file` will be used.
    :param rules: Default dictionary / Rules to use. It will be
                  considered just in the first instantiation.
    :param default_rule: Default rule to use; CONF.default_rule will
                         be used if none is specified.
    :param use_conf: Whether to load rules from config file.

    """
    global _ENFORCER

    if _ENFORCER:
        return

    _ENFORCER = policy.Enforcer(policy_file=policy_file,
                                rules=rules,
                                default_rule=default_rule,
                                use_conf=use_conf)


def get_enforcer():
    """Provides access to the single instance of Policy enforcer."""

    if not _ENFORCER:
        init_enforcer()

    return _ENFORCER


def enforce(rule, target, creds, do_raise=False, exc=None, *args, **kwargs):
    """A shortcut for policy.Enforcer.enforce()

    Checks authorization of a rule against the target and credentials.

    """
    enforcer = get_enforcer()
    return enforcer.enforce(rule, target, creds, do_raise=do_raise,
                            exc=exc, *args, **kwargs)
@ -1,70 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

"""Utilities and helper functions that won't produce circular imports."""

import inspect

import six

from sticks.openstack.common import log

LOG = log.getLogger(__name__)


def getcallargs(function, *args, **kwargs):
    """This is a simplified inspect.getcallargs (2.7+).

    It should be replaced when python >= 2.7 is standard.
    """
    keyed_args = {}
    argnames, varargs, keywords, defaults = inspect.getargspec(function)

    keyed_args.update(kwargs)

    # NOTE(alaski) the implicit 'self' or 'cls' argument shows up in
    # argnames but not in args or kwargs. Uses 'in' rather than '==' because
    # some tests use 'self2'.
    if 'self' in argnames[0] or 'cls' == argnames[0]:
        # The function may not actually be a method or have im_self.
        # Typically seen when it's stubbed with mox.
        if inspect.ismethod(function) and hasattr(function, 'im_self'):
            keyed_args[argnames[0]] = function.im_self
        else:
            keyed_args[argnames[0]] = None

    remaining_argnames = filter(lambda x: x not in keyed_args, argnames)
    keyed_args.update(dict(zip(remaining_argnames, args)))

    if defaults:
        num_defaults = len(defaults)
        for argname, value in zip(argnames[-num_defaults:], defaults):
            if argname not in keyed_args:
                keyed_args[argname] = value

    return keyed_args


def safe_rstrip(value, chars=None):
    """Removes trailing characters from a string if that does not make it empty

    :param value: A string value that will be stripped.
    :param chars: Characters to remove.
    :return: Stripped value.

    """
    if not isinstance(value, six.string_types):
        LOG.warn(("Failed to remove trailing character. Returning original "
                  "object. Supplied object is not a string: %s,") % value)
        return value
    return value.rstrip(chars) or value
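The `safe_rstrip` contract, strip trailing characters unless that would empty the string, and pass non-strings through untouched, is small enough to sketch standalone (this version drops the `six` shim and logging for brevity):

```python
def safe_rstrip(value, chars=None):
    """Strip trailing chars, but never return an empty string
    and never touch non-string input."""
    if not isinstance(value, str):
        # The original logs a warning here and returns the object as-is.
        return value
    # `or value` keeps the original when rstrip would strip everything.
    return value.rstrip(chars) or value


print(safe_rstrip("node-1///", "/"))
```

The `or value` clause is the subtle part: `"///".rstrip("/")` yields `""`, which is falsy, so the original string is returned instead.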
@ -1,110 +0,0 @@
|
|||||||
#
|
|
||||||
# Copyright (c) 2014 EUROGICIEL
|
|
||||||
#
|
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
|
||||||
# not use this file except in compliance with the License. You may obtain
|
|
||||||
# a copy of the License at
|
|
||||||
#
|
|
||||||
# http://www.apache.org/licenses/LICENSE-2.0
|
|
||||||
#
|
|
||||||
# Unless required by applicable law or agreed to in writing, software
|
|
||||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
|
||||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
|
||||||
# See the License for the specific language governing permissions and
|
|
||||||
# limitations under the License.
|
|
||||||
#
|
|
||||||
|
|
||||||
import parser
|
|
||||||
|
|
||||||
|
|
||||||
class JsonSerializer(object):
|
|
||||||
"""A serializer that provides methods to serialize and deserialize JSON
|
|
||||||
dictionaries.
|
|
||||||
|
|
||||||
Note, one of the assumptions this serializer makes is that all objects that
|
|
||||||
it is used to deserialize have a constructor that can take all of the
|
|
||||||
attribute arguments. I.e. If you have an object with 3 attributes, the
|
|
||||||
constructor needs to take those three attributes as keyword arguments.
|
|
||||||
"""
|
|
||||||
|
|
||||||
__attributes__ = None
|
|
||||||
"""The attributes to be serialized by the seralizer.
|
|
||||||
The implementor needs to provide these."""
|
|
||||||
|
|
||||||
    __required__ = None
    """The attributes that are required when deserializing.

    The implementor needs to provide these."""

    __attribute_serializer__ = None
    """The serializer to use for a specified attribute. If an attribute is
    not included here, no special serializer will be used.

    The implementor needs to provide these."""

    __object_class__ = None
    """The class that the deserializer should generate.

    The implementor needs to provide these."""

    serializers = dict(
        date=dict(
            serialize=lambda x: x.isoformat(),
            deserialize=lambda x: parser.parse(x)
        )
    )

    def deserialize(self, json, **kwargs):
        """Deserialize a JSON dictionary and return a populated object.

        This takes the JSON data, and deserializes it appropriately and then
        calls the constructor of the object to be created with all of the
        attributes.

        Args:
            json: The JSON dict with all of the data
            **kwargs: Optional values that can be used as defaults if they are
                      not present in the JSON data
        Returns:
            The deserialized object.
        Raises:
            ValueError: If any of the required attributes are not present
        """
        d = dict()
        for attr in self.__attributes__:
            if attr in json:
                val = json[attr]
            elif attr in self.__required__:
                try:
                    val = kwargs[attr]
                except KeyError:
                    raise ValueError("{} must be set".format(attr))
            else:
                # Optional attribute missing from both the JSON data and the
                # defaults: skip it rather than reusing a stale ``val``.
                continue

            serializer = self.__attribute_serializer__.get(attr)
            if serializer:
                d[attr] = self.serializers[serializer]['deserialize'](val)
            else:
                d[attr] = val

        return self.__object_class__(**d)

    def serialize(self, obj):
        """Serialize an object to a dictionary.

        Take all of the attributes defined in self.__attributes__ and create
        a dictionary containing those values.

        Args:
            obj: The object to serialize
        Returns:
            A dictionary containing all of the serialized data from the object.
        """
        d = dict()
        for attr in self.__attributes__:
            val = getattr(obj, attr)
            if val is None:
                continue
            serializer = self.__attribute_serializer__.get(attr)
            if serializer:
                d[attr] = self.serializers[serializer]['serialize'](val)
            else:
                d[attr] = val

        return d
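The removed serializer mixin above can be exercised with a small standalone sketch. Everything here is hypothetical stand-in code — a `Note` class plus module-level tables playing the role of `__attributes__`, `__required__`, and `__attribute_serializer__` — not the project's actual API:

```python
# Hypothetical standalone sketch of the table-driven serializer pattern
# above: one attribute list drives both directions, with per-attribute
# codecs for values (here, ISO-8601 timestamps) needing special handling.
import datetime


class Note(object):
    def __init__(self, title, created=None):
        self.title = title
        self.created = created


SERIALIZERS = {
    'date': {
        'serialize': lambda d: d.isoformat(),
        'deserialize': lambda s: datetime.datetime.strptime(
            s, '%Y-%m-%dT%H:%M:%S'),
    },
}
ATTRIBUTES = ('title', 'created')
ATTRIBUTE_SERIALIZER = {'created': 'date'}


def serialize(obj):
    out = {}
    for attr in ATTRIBUTES:
        val = getattr(obj, attr)
        if val is None:
            continue  # None values are simply omitted, as in the mixin
        codec = ATTRIBUTE_SERIALIZER.get(attr)
        out[attr] = SERIALIZERS[codec]['serialize'](val) if codec else val
    return out


def deserialize(data):
    kwargs = {}
    for attr in ATTRIBUTES:
        if attr not in data:
            continue
        codec = ATTRIBUTE_SERIALIZER.get(attr)
        val = data[attr]
        kwargs[attr] = SERIALIZERS[codec]['deserialize'](val) if codec else val
    return Note(**kwargs)
```

Round-tripping an object through `serialize`/`deserialize` reproduces the original attribute values, which is the property the mixin's tests relied on.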
@@ -1,75 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from oslo.config import cfg
from oslo import messaging

from stevedore import driver

from sticks.openstack.common import log
from sticks.openstack.common import service


LOG = log.getLogger(__name__)

OPTS = [
    cfg.StrOpt('tracking_plugin', default='redmine'),
    cfg.ListOpt('notification_topics', default=['notifications', ],
                help='AMQP topic used for OpenStack notifications'),
    cfg.MultiStrOpt('messaging_urls',
                    default=[],
                    help="Messaging URLs to listen for notifications. "
                         "Example: transport://user:pass@host1:port"
                         "[,hostN:portN]/virtual_host "
                         "(DEFAULT/transport_url is used if empty)"),
]

cfg.CONF.register_opts(OPTS)


class SticksManager(service.Service):

    def __init__(self):
        super(SticksManager, self).__init__()
        self.dm = driver.DriverManager(
            namespace='sticks.tracking',
            name=cfg.CONF.tracking_plugin,
            invoke_on_load=True,
        )

    def start(self):
        self.notification_server = None
        super(SticksManager, self).start()

        targets = []
        plugins = []

        # Named ``plugin`` to avoid shadowing the stevedore ``driver`` import.
        plugin = self.dm.driver
        LOG.debug('Event types from %(name)s: %(type)s'
                  % {'name': plugin._name,
                     'type': ', '.join(plugin._subscribedEvents)})

        plugin.register_manager(self)
        targets.extend(plugin.get_targets(cfg.CONF))
        plugins.append(plugin)

        transport = messaging.get_transport(cfg.CONF)

        if transport:
            self.notification_server = messaging.get_notification_listener(
                transport, targets, plugins, executor='eventlet')

            self.notification_server.start()
@@ -1,17 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import six


six.add_move(six.MovedModule('mox', 'mox', 'mox3.mox'))
@@ -1,40 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""oslo.i18n integration module.

See http://docs.openstack.org/developer/oslo.i18n/usage.html

"""

import oslo.i18n


# NOTE(dhellmann): This reference to o-s-l-o will be replaced by the
# application name when this module is synced into the separate
# repository. It is OK to have more than one translation function
# using the same domain, since there will still only be one message
# catalog.
_translators = oslo.i18n.TranslatorFactory(domain='oslo')

# The primary translation function using the well-known name "_"
_ = _translators.primary

# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical
@@ -1,307 +0,0 @@
# Copyright 2012 SINA Corporation
# Copyright 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

"""Extracts OpenStack config option info from module(s)."""

from __future__ import print_function

import argparse
import imp
import os
import re
import socket
import sys
import textwrap

from oslo.config import cfg
import six
import stevedore.named

from sticks.openstack.common import gettextutils
from sticks.openstack.common import importutils

gettextutils.install('sticks')

STROPT = "StrOpt"
BOOLOPT = "BoolOpt"
INTOPT = "IntOpt"
FLOATOPT = "FloatOpt"
LISTOPT = "ListOpt"
DICTOPT = "DictOpt"
MULTISTROPT = "MultiStrOpt"

OPT_TYPES = {
    STROPT: 'string value',
    BOOLOPT: 'boolean value',
    INTOPT: 'integer value',
    FLOATOPT: 'floating point value',
    LISTOPT: 'list value',
    DICTOPT: 'dict value',
    MULTISTROPT: 'multi valued',
}

OPTION_REGEX = re.compile(r"(%s)" % "|".join([STROPT, BOOLOPT, INTOPT,
                                              FLOATOPT, LISTOPT, DICTOPT,
                                              MULTISTROPT]))

PY_EXT = ".py"
BASEDIR = os.path.abspath(os.path.join(os.path.dirname(__file__),
                                       "../../../../"))
WORDWRAP_WIDTH = 60


def raise_extension_exception(extmanager, ep, err):
    raise


def generate(argv):
    parser = argparse.ArgumentParser(
        description='generate sample configuration file',
    )
    parser.add_argument('-m', dest='modules', action='append')
    parser.add_argument('-l', dest='libraries', action='append')
    parser.add_argument('srcfiles', nargs='*')
    parsed_args = parser.parse_args(argv)

    mods_by_pkg = dict()
    for filepath in parsed_args.srcfiles:
        pkg_name = filepath.split(os.sep)[1]
        mod_str = '.'.join(['.'.join(filepath.split(os.sep)[:-1]),
                            os.path.basename(filepath).split('.')[0]])
        mods_by_pkg.setdefault(pkg_name, list()).append(mod_str)
    # NOTE(lzyeval): place top level modules before packages
    pkg_names = sorted(pkg for pkg in mods_by_pkg if pkg.endswith(PY_EXT))
    ext_names = sorted(pkg for pkg in mods_by_pkg if pkg not in pkg_names)
    pkg_names.extend(ext_names)

    # opts_by_group is a mapping of group name to an options list
    # The options list is a list of (module, options) tuples
    opts_by_group = {'DEFAULT': []}

    if parsed_args.modules:
        for module_name in parsed_args.modules:
            module = _import_module(module_name)
            if module:
                for group, opts in _list_opts(module):
                    opts_by_group.setdefault(group, []).append((module_name,
                                                                opts))

    # Look for entry points defined in libraries (or applications) for
    # option discovery, and include their return values in the output.
    #
    # Each entry point should be a function returning an iterable
    # of pairs with the group name (or None for the default group)
    # and the list of Opt instances for that group.
    if parsed_args.libraries:
        loader = stevedore.named.NamedExtensionManager(
            'oslo.config.opts',
            names=list(set(parsed_args.libraries)),
            invoke_on_load=False,
            on_load_failure_callback=raise_extension_exception
        )
        for ext in loader:
            for group, opts in ext.plugin():
                opt_list = opts_by_group.setdefault(group or 'DEFAULT', [])
                opt_list.append((ext.name, opts))

    for pkg_name in pkg_names:
        mods = mods_by_pkg.get(pkg_name)
        mods.sort()
        for mod_str in mods:
            if mod_str.endswith('.__init__'):
                mod_str = mod_str[:mod_str.rfind(".")]

            mod_obj = _import_module(mod_str)
            if not mod_obj:
                raise RuntimeError("Unable to import module %s" % mod_str)

            for group, opts in _list_opts(mod_obj):
                opts_by_group.setdefault(group, []).append((mod_str, opts))

    print_group_opts('DEFAULT', opts_by_group.pop('DEFAULT', []))
    for group in sorted(opts_by_group.keys()):
        print_group_opts(group, opts_by_group[group])


def _import_module(mod_str):
    try:
        if mod_str.startswith('bin.'):
            imp.load_source(mod_str[4:], os.path.join('bin', mod_str[4:]))
            return sys.modules[mod_str[4:]]
        else:
            return importutils.import_module(mod_str)
    except Exception as e:
        sys.stderr.write("Error importing module %s: %s\n" % (mod_str, str(e)))
        return None


def _is_in_group(opt, group):
    """Check if opt is in group."""
    for value in group._opts.values():
        # NOTE(llu): Temporary workaround for bug #1262148, wait until
        # newly released oslo.config support '==' operator.
        if not(value['opt'] != opt):
            return True
    return False


def _guess_groups(opt, mod_obj):
    # is it in the DEFAULT group?
    if _is_in_group(opt, cfg.CONF):
        return 'DEFAULT'

    # what other groups is it in?
    for value in cfg.CONF.values():
        if isinstance(value, cfg.CONF.GroupAttr):
            if _is_in_group(opt, value._group):
                return value._group.name

    raise RuntimeError(
        "Unable to find group for option %s, "
        "maybe it's defined twice in the same group?"
        % opt.name
    )


def _list_opts(obj):
    def is_opt(o):
        return (isinstance(o, cfg.Opt) and
                not isinstance(o, cfg.SubCommandOpt))

    opts = list()
    for attr_str in dir(obj):
        attr_obj = getattr(obj, attr_str)
        if is_opt(attr_obj):
            opts.append(attr_obj)
        elif (isinstance(attr_obj, list) and
              all(map(lambda x: is_opt(x), attr_obj))):
            opts.extend(attr_obj)

    ret = {}
    for opt in opts:
        ret.setdefault(_guess_groups(opt, obj), []).append(opt)
    return ret.items()


def print_group_opts(group, opts_by_module):
    print("[%s]" % group)
    print('')
    for mod, opts in opts_by_module:
        print('#')
        print('# Options defined in %s' % mod)
        print('#')
        print('')
        for opt in opts:
            _print_opt(opt)
        print('')


def _get_my_ip():
    try:
        csock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        csock.connect(('8.8.8.8', 80))
        (addr, port) = csock.getsockname()
        csock.close()
        return addr
    except socket.error:
        return None


def _sanitize_default(name, value):
    """Set up a reasonably sensible default for pybasedir, my_ip and host."""
    if value.startswith(sys.prefix):
        # NOTE(jd) Don't use os.path.join, because it is likely to think the
        # second part is an absolute pathname and therefore drop the first
        # part.
        value = os.path.normpath("/usr/" + value[len(sys.prefix):])
    elif value.startswith(BASEDIR):
        return value.replace(BASEDIR, '/usr/lib/python/site-packages')
    elif BASEDIR in value:
        return value.replace(BASEDIR, '')
    elif value == _get_my_ip():
        return '10.0.0.1'
    elif value in (socket.gethostname(), socket.getfqdn()) and 'host' in name:
        return 'sticks'
    elif value.strip() != value:
        return '"%s"' % value
    return value


def _print_opt(opt):
    opt_name, opt_default, opt_help = opt.dest, opt.default, opt.help
    if not opt_help:
        sys.stderr.write('WARNING: "%s" is missing help string.\n' % opt_name)
        opt_help = ""
    opt_type = None
    try:
        opt_type = OPTION_REGEX.search(str(type(opt))).group(0)
    except (ValueError, AttributeError) as err:
        sys.stderr.write("%s\n" % str(err))
        sys.exit(1)
    opt_help = u'%s (%s)' % (opt_help,
                             OPT_TYPES[opt_type])
    print('#', "\n# ".join(textwrap.wrap(opt_help, WORDWRAP_WIDTH)))
    if opt.deprecated_opts:
        for deprecated_opt in opt.deprecated_opts:
            if deprecated_opt.name:
                deprecated_group = (deprecated_opt.group if
                                    deprecated_opt.group else "DEFAULT")
                print('# Deprecated group/name - [%s]/%s' %
                      (deprecated_group,
                       deprecated_opt.name))
    try:
        if opt_default is None:
            print('#%s=<None>' % opt_name)
        elif opt_type == STROPT:
            assert(isinstance(opt_default, six.string_types))
            print('#%s=%s' % (opt_name, _sanitize_default(opt_name,
                                                          opt_default)))
        elif opt_type == BOOLOPT:
            assert(isinstance(opt_default, bool))
            print('#%s=%s' % (opt_name, str(opt_default).lower()))
        elif opt_type == INTOPT:
            assert(isinstance(opt_default, int) and
                   not isinstance(opt_default, bool))
            print('#%s=%s' % (opt_name, opt_default))
        elif opt_type == FLOATOPT:
            assert(isinstance(opt_default, float))
            print('#%s=%s' % (opt_name, opt_default))
        elif opt_type == LISTOPT:
            assert(isinstance(opt_default, list))
            print('#%s=%s' % (opt_name, ','.join(opt_default)))
        elif opt_type == DICTOPT:
            assert(isinstance(opt_default, dict))
            opt_default_strlist = [str(key) + ':' + str(value)
                                   for (key, value) in opt_default.items()]
            print('#%s=%s' % (opt_name, ','.join(opt_default_strlist)))
        elif opt_type == MULTISTROPT:
            assert(isinstance(opt_default, list))
            if not opt_default:
                opt_default = ['']
            for default in opt_default:
                print('#%s=%s' % (opt_name, default))
        print('')
    except Exception:
        sys.stderr.write('Error in option "%s"\n' % opt_name)
        sys.exit(1)


def main():
    generate(sys.argv[1:])


if __name__ == '__main__':
    main()
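The generator above detects an option's type by matching the `Opt` subclass name against a regex and mapping it to a human-readable label. A standalone sketch of that step (with a hypothetical `StrOpt` stand-in class instead of oslo.config's real one):

```python
# Standalone sketch of the option-type detection used by the config
# generator above: match the class name (e.g. "StrOpt") inside
# str(type(opt)) and map it to the label printed in the sample file.
import re

OPT_TYPES = {
    'StrOpt': 'string value',
    'BoolOpt': 'boolean value',
    'IntOpt': 'integer value',
}
OPTION_REGEX = re.compile(r"(%s)" % "|".join(OPT_TYPES))


class StrOpt(object):
    """Hypothetical stand-in for oslo.config's cfg.StrOpt."""


def describe_opt_type(opt):
    # str(type(opt)) looks like "<class 'module.StrOpt'>", so the class
    # name is searchable as a substring.
    match = OPTION_REGEX.search(str(type(opt)))
    return OPT_TYPES[match.group(0)] if match else None
```

This string-based dispatch is fragile (a user-defined class named `MyStrOptHelper` would also match), which is part of why later oslo.config releases exposed option types directly.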
@@ -1,111 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Simple class that stores security context information in the web request.

Projects should subclass this class if they wish to enhance the request
context or provide additional information in their specific WSGI pipeline.
"""

import itertools
import uuid


def generate_request_id():
    return 'req-%s' % str(uuid.uuid4())


class RequestContext(object):

    """Helper class to represent useful information about a request context.

    Stores information about the security context under which the user
    accesses the system, as well as additional request information.
    """

    user_idt_format = '{user} {tenant} {domain} {user_domain} {p_domain}'

    def __init__(self, auth_token=None, user=None, tenant=None, domain=None,
                 user_domain=None, project_domain=None, is_admin=False,
                 read_only=False, show_deleted=False, request_id=None,
                 instance_uuid=None):
        self.auth_token = auth_token
        self.user = user
        self.tenant = tenant
        self.domain = domain
        self.user_domain = user_domain
        self.project_domain = project_domain
        self.is_admin = is_admin
        self.read_only = read_only
        self.show_deleted = show_deleted
        self.instance_uuid = instance_uuid
        if not request_id:
            request_id = generate_request_id()
        self.request_id = request_id

    def to_dict(self):
        user_idt = (
            self.user_idt_format.format(user=self.user or '-',
                                        tenant=self.tenant or '-',
                                        domain=self.domain or '-',
                                        user_domain=self.user_domain or '-',
                                        p_domain=self.project_domain or '-'))

        return {'user': self.user,
                'tenant': self.tenant,
                'domain': self.domain,
                'user_domain': self.user_domain,
                'project_domain': self.project_domain,
                'is_admin': self.is_admin,
                'read_only': self.read_only,
                'show_deleted': self.show_deleted,
                'auth_token': self.auth_token,
                'request_id': self.request_id,
                'instance_uuid': self.instance_uuid,
                'user_identity': user_idt}


def get_admin_context(show_deleted=False):
    context = RequestContext(None,
                             tenant=None,
                             is_admin=True,
                             show_deleted=show_deleted)
    return context


def get_context_from_function_and_args(function, args, kwargs):
    """Find an arg of type RequestContext and return it.

    This is useful in a couple of decorators where we don't
    know much about the function we're wrapping.
    """

    for arg in itertools.chain(kwargs.values(), args):
        if isinstance(arg, RequestContext):
            return arg

    return None


def is_user_context(context):
    """Indicates if the request context is a normal user."""
    if not context:
        return False
    if context.is_admin:
        return False
    if not context.user_id or not context.project_id:
        return False
    return True
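The `user_identity` field that `RequestContext.to_dict()` emits above collapses every absent component to `'-'`, giving log parsers a fixed five-token layout. A standalone sketch of just that formatting step (the function name is mine, not the module's):

```python
# Standalone sketch of the ``user_identity`` formatting used by
# RequestContext.to_dict() above: five space-separated tokens, with '-'
# filling in for any component that is unset.
IDT_FORMAT = '{user} {tenant} {domain} {user_domain} {p_domain}'


def user_identity(user=None, tenant=None, domain=None,
                  user_domain=None, project_domain=None):
    return IDT_FORMAT.format(user=user or '-',
                             tenant=tenant or '-',
                             domain=domain or '-',
                             user_domain=user_domain or '-',
                             p_domain=project_domain or '-')
```

Because the token count is constant, a log line carrying this field can be split on whitespace without knowing which components were present.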
@@ -1,146 +0,0 @@
# Copyright (c) 2012 OpenStack Foundation.
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from __future__ import print_function

import errno
import gc
import os
import pprint
import socket
import sys
import traceback

import eventlet
import eventlet.backdoor
import greenlet
from oslo.config import cfg

from sticks.openstack.common.gettextutils import _LI
from sticks.openstack.common import log as logging

help_for_backdoor_port = (
    "Acceptable values are 0, <port>, and <start>:<end>, where 0 results "
    "in listening on a random tcp port number; <port> results in listening "
    "on the specified port number (and not enabling backdoor if that port "
    "is in use); and <start>:<end> results in listening on the smallest "
    "unused port number within the specified range of port numbers. The "
    "chosen port is displayed in the service's log file.")
eventlet_backdoor_opts = [
    cfg.StrOpt('backdoor_port',
               default=None,
               help="Enable eventlet backdoor. %s" % help_for_backdoor_port)
]

CONF = cfg.CONF
CONF.register_opts(eventlet_backdoor_opts)
LOG = logging.getLogger(__name__)


class EventletBackdoorConfigValueError(Exception):
    def __init__(self, port_range, help_msg, ex):
        msg = ('Invalid backdoor_port configuration %(range)s: %(ex)s. '
               '%(help)s' %
               {'range': port_range, 'ex': ex, 'help': help_msg})
        super(EventletBackdoorConfigValueError, self).__init__(msg)
        self.port_range = port_range


def _dont_use_this():
    print("Don't use this, just disconnect instead")


def _find_objects(t):
    return [o for o in gc.get_objects() if isinstance(o, t)]


def _print_greenthreads():
    for i, gt in enumerate(_find_objects(greenlet.greenlet)):
        print(i, gt)
        traceback.print_stack(gt.gr_frame)
        print()


def _print_nativethreads():
    for threadId, stack in sys._current_frames().items():
        print(threadId)
        traceback.print_stack(stack)
        print()


def _parse_port_range(port_range):
    if ':' not in port_range:
        start, end = port_range, port_range
    else:
        start, end = port_range.split(':', 1)
    try:
        start, end = int(start), int(end)
        if end < start:
            raise ValueError
        return start, end
    except ValueError as ex:
        # Arguments match the (port_range, help_msg, ex) constructor order.
        raise EventletBackdoorConfigValueError(port_range,
                                               help_for_backdoor_port, ex)


def _listen(host, start_port, end_port, listen_func):
    try_port = start_port
    while True:
        try:
            return listen_func((host, try_port))
        except socket.error as exc:
            if (exc.errno != errno.EADDRINUSE or
                    try_port >= end_port):
                raise
            try_port += 1


def initialize_if_enabled():
    backdoor_locals = {
        'exit': _dont_use_this,      # So we don't exit the entire process
        'quit': _dont_use_this,      # So we don't exit the entire process
        'fo': _find_objects,
        'pgt': _print_greenthreads,
        'pnt': _print_nativethreads,
    }

    if CONF.backdoor_port is None:
        return None

    start_port, end_port = _parse_port_range(str(CONF.backdoor_port))

    # NOTE(johannes): The standard sys.displayhook will print the value of
    # the last expression and set it to __builtin__._, which overwrites
    # the __builtin__._ that gettext sets. Let's switch to using pprint
    # since it won't interact poorly with gettext, and it's easier to
    # read the output too.
    def displayhook(val):
        if val is not None:
            pprint.pprint(val)
    sys.displayhook = displayhook

    sock = _listen('localhost', start_port, end_port, eventlet.listen)

    # In the case of backdoor port being zero, a port number is assigned by
    # listen(). In any case, pull the port number out here.
    port = sock.getsockname()[1]
    LOG.info(
        _LI('Eventlet backdoor listening on %(port)s for process %(pid)d') %
        {'port': port, 'pid': os.getpid()}
    )
    eventlet.spawn_n(eventlet.backdoor.backdoor_server, sock,
                     locals=backdoor_locals)
    return port
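The `backdoor_port` option above accepts `0`, a single port, or a `start:end` range, all normalized to an inclusive `(start, end)` tuple. A standalone sketch of that parsing (raising a plain `ValueError` instead of the module's custom exception):

```python
# Standalone sketch of the backdoor_port range parsing above: "0",
# "4000", and "4000:4100" all normalize to an inclusive (start, end)
# tuple; a reversed range is rejected.
def parse_port_range(port_range):
    if ':' not in port_range:
        # A single port is treated as a one-element range.
        start = end = port_range
    else:
        start, end = port_range.split(':', 1)
    start, end = int(start), int(end)
    if end < start:
        raise ValueError('end < start in %r' % port_range)
    return start, end
```

`_listen` then walks the range upward, retrying on `EADDRINUSE` until a free port is found or the range is exhausted.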
@ -1,113 +0,0 @@
|
|||||||
# Copyright 2011 OpenStack Foundation.
|
|
||||||
# Copyright 2012, Red Hat, Inc.
|
|
||||||
#
|
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
|
||||||
# not use this file except in compliance with the License. You may obtain
|
|
||||||
# a copy of the License at
|
|
||||||
#
|
|
||||||
# http://www.apache.org/licenses/LICENSE-2.0
|
|
||||||
#
|
|
||||||
# Unless required by applicable law or agreed to in writing, software
|
|
||||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
|
||||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
|
||||||
# License for the specific language governing permissions and limitations
|
|
||||||
# under the License.
|
|
||||||
|
|
||||||
"""
|
|
||||||
Exception related utilities.
|
|
||||||
"""
|
|
||||||
|
|
||||||
import logging
|
|
||||||
import sys
|
|
||||||
import time
|
|
||||||
import traceback
|
|
||||||
|
|
||||||
import six
|
|
||||||
|
|
||||||
from sticks.openstack.common.gettextutils import _LE
|
|
||||||
|
|
||||||
|
|
||||||
class save_and_reraise_exception(object):
|
|
||||||
"""Save current exception, run some code and then re-raise.
|
|
||||||
|
|
||||||
In some cases the exception context can be cleared, resulting in None
|
|
||||||
being attempted to be re-raised after an exception handler is run. This
|
|
||||||
can happen when eventlet switches greenthreads or when running an
|
|
||||||
exception handler, code raises and catches an exception. In both
|
|
||||||
cases the exception context will be cleared.
|
|
||||||
|
|
||||||
To work around this, we save the exception state, run handler code, and
|
|
||||||
    then re-raise the original exception. If another exception occurs, the
    saved exception is logged and the new exception is re-raised.

    In some cases the caller may not want to re-raise the exception, and
    for those circumstances this context provides a reraise flag that
    can be used to suppress the exception. For example::

        except Exception:
            with save_and_reraise_exception() as ctxt:
                decide_if_need_reraise()
                if not should_be_reraised:
                    ctxt.reraise = False

    If another exception occurs and the reraise flag is False,
    the saved exception will not be logged.

    If the caller wants to raise a new exception during exception handling,
    they set reraise to False initially, with the ability to set it back to
    True if needed::

        except Exception:
            with save_and_reraise_exception(reraise=False) as ctxt:
                [if statements to determine whether to raise a new exception]
                # Not raising a new exception, so reraise
                ctxt.reraise = True
    """
    def __init__(self, reraise=True):
        self.reraise = reraise

    def __enter__(self):
        self.type_, self.value, self.tb, = sys.exc_info()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is not None:
            if self.reraise:
                logging.error(_LE('Original exception being dropped: %s'),
                              traceback.format_exception(self.type_,
                                                         self.value,
                                                         self.tb))
            return False
        if self.reraise:
            six.reraise(self.type_, self.value, self.tb)


def forever_retry_uncaught_exceptions(infunc):
    def inner_func(*args, **kwargs):
        last_log_time = 0
        last_exc_message = None
        exc_count = 0
        while True:
            try:
                return infunc(*args, **kwargs)
            except Exception as exc:
                this_exc_message = six.u(str(exc))
                if this_exc_message == last_exc_message:
                    exc_count += 1
                else:
                    exc_count = 1
                # Do not log any more frequently than once a minute unless
                # the exception message changes
                cur_time = int(time.time())
                if (cur_time - last_log_time > 60 or
                        this_exc_message != last_exc_message):
                    logging.exception(
                        _LE('Unexpected exception occurred %d time(s)... '
                            'retrying.') % exc_count)
                    last_log_time = cur_time
                    last_exc_message = this_exc_message
                    exc_count = 0
                # This should be a very rare event. In case it isn't, do
                # a sleep.
                time.sleep(1)
    return inner_func
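The save/re-raise behaviour described in the docstring above can be sketched without the oslo and `six` dependencies. This is a minimal stand-in (logging elided), not the removed implementation itself:

```python
import sys


class save_and_reraise_exception(object):
    """Dependency-free sketch of the pattern above: capture the active
    exception on entry; on a clean exit, re-raise it unless suppressed."""

    def __init__(self, reraise=True):
        self.reraise = reraise

    def __enter__(self):
        # Save the exception that is being handled right now.
        self.exc_info = sys.exc_info()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is None and self.reraise:
            # No new exception occurred in the body: restore the saved one.
            raise self.exc_info[1]
        # A new exception (if any) propagates; the saved one is dropped.
        return False


def cleanup_and_reraise():
    try:
        raise ValueError('boom')
    except Exception:
        with save_and_reraise_exception():
            pass  # cleanup work would go here


def cleanup_or_suppress():
    try:
        raise ValueError('boom')
    except Exception:
        with save_and_reraise_exception() as ctxt:
            ctxt.reraise = False  # decide not to propagate after all
    return 'suppressed'
```

`cleanup_and_reraise()` raises the original `ValueError` after the cleanup body runs, while `cleanup_or_suppress()` swallows it, mirroring the two docstring examples.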
@ -1,135 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import contextlib
import errno
import os
import tempfile

from sticks.openstack.common import excutils
from sticks.openstack.common import log as logging

LOG = logging.getLogger(__name__)

_FILE_CACHE = {}


def ensure_tree(path):
    """Create a directory (and any ancestor directories required)

    :param path: Directory to create
    """
    try:
        os.makedirs(path)
    except OSError as exc:
        if exc.errno == errno.EEXIST:
            if not os.path.isdir(path):
                raise
        else:
            raise


def read_cached_file(filename, force_reload=False):
    """Read from a file if it has been modified.

    :param force_reload: Whether to reload the file.
    :returns: A tuple with a boolean specifying if the data is fresh
              or not.
    """
    global _FILE_CACHE

    if force_reload and filename in _FILE_CACHE:
        del _FILE_CACHE[filename]

    reloaded = False
    mtime = os.path.getmtime(filename)
    cache_info = _FILE_CACHE.setdefault(filename, {})

    if not cache_info or mtime > cache_info.get('mtime', 0):
        LOG.debug("Reloading cached file %s" % filename)
        with open(filename) as fap:
            cache_info['data'] = fap.read()
        cache_info['mtime'] = mtime
        reloaded = True
    return (reloaded, cache_info['data'])


def delete_if_exists(path, remove=os.unlink):
    """Delete a file, but ignore file not found error.

    :param path: File to delete
    :param remove: Optional function to remove passed path
    """

    try:
        remove(path)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise


@contextlib.contextmanager
def remove_path_on_error(path, remove=delete_if_exists):
    """Protect code that wants to operate on PATH atomically.
    Any exception will cause PATH to be removed.

    :param path: File to work with
    :param remove: Optional function to remove passed path
    """

    try:
        yield
    except Exception:
        with excutils.save_and_reraise_exception():
            remove(path)


def file_open(*args, **kwargs):
    """Open file

    see built-in file() documentation for more details

    Note: The reason this is kept in a separate module is to easily
    be able to provide a stub module that doesn't alter system
    state at all (for unit tests)
    """
    return file(*args, **kwargs)


def write_to_tempfile(content, path=None, suffix='', prefix='tmp'):
    """Create temporary file or use existing file.

    This util is needed for creating temporary file with
    specified content, suffix and prefix. If path is not None,
    it will be used for writing content. If the path doesn't
    exist it'll be created.

    :param content: content for temporary file.
    :param path: same as parameter 'dir' for mkstemp
    :param suffix: same as parameter 'suffix' for mkstemp
    :param prefix: same as parameter 'prefix' for mkstemp

    For example: it can be used in database tests for creating
    configuration files.
    """
    if path:
        ensure_tree(path)

    (fd, path) = tempfile.mkstemp(suffix=suffix, dir=path, prefix=prefix)
    try:
        os.write(fd, content)
    finally:
        os.close(fd)
    return path
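The `write_to_tempfile()` helper above is a thin wrapper over `tempfile.mkstemp()`. A standalone, stdlib-only version of the same idea (with `os.makedirs(..., exist_ok=True)` standing in for `ensure_tree()`; note that `os.write()` requires bytes):

```python
import os
import tempfile


def write_to_tempfile(content, path=None, suffix='', prefix='tmp'):
    # Standalone sketch of the helper above, using only the stdlib.
    if path:
        os.makedirs(path, exist_ok=True)  # stands in for ensure_tree()
    fd, path = tempfile.mkstemp(suffix=suffix, dir=path, prefix=prefix)
    try:
        os.write(fd, content)  # content must be bytes
    finally:
        os.close(fd)
    return path


# Example: write a throwaway config file, as the docstring suggests.
result = write_to_tempfile(b'[DEFAULT]\nverbose = True\n', suffix='.conf')
with open(result, 'rb') as f:
    data = f.read()
os.unlink(result)
```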
@ -1,448 +0,0 @@
# Copyright 2012 Red Hat, Inc.
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
gettext for openstack-common modules.

Usual usage in an openstack.common module:

    from sticks.openstack.common.gettextutils import _
"""

import copy
import functools
import gettext
import locale
from logging import handlers
import os

from babel import localedata
import six

_localedir = os.environ.get('sticks'.upper() + '_LOCALEDIR')
_t = gettext.translation('sticks', localedir=_localedir, fallback=True)

# We use separate translation catalogs for each log level, so set up a
# mapping between the log level name and the translator. The domain
# for the log level is project_name + "-log-" + log_level so messages
# for each level end up in their own catalog.
_t_log_levels = dict(
    (level, gettext.translation('sticks' + '-log-' + level,
                                localedir=_localedir,
                                fallback=True))
    for level in ['info', 'warning', 'error', 'critical']
)

_AVAILABLE_LANGUAGES = {}
USE_LAZY = False


def enable_lazy():
    """Convenience function for configuring _() to use lazy gettext

    Call this at the start of execution to enable the gettextutils._
    function to use lazy gettext functionality. This is useful if
    your project is importing _ directly instead of using the
    gettextutils.install() way of importing the _ function.
    """
    global USE_LAZY
    USE_LAZY = True


def _(msg):
    if USE_LAZY:
        return Message(msg, domain='sticks')
    else:
        if six.PY3:
            return _t.gettext(msg)
        return _t.ugettext(msg)


def _log_translation(msg, level):
    """Build a single translation of a log message
    """
    if USE_LAZY:
        return Message(msg, domain='sticks' + '-log-' + level)
    else:
        translator = _t_log_levels[level]
        if six.PY3:
            return translator.gettext(msg)
        return translator.ugettext(msg)

# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = functools.partial(_log_translation, level='info')
_LW = functools.partial(_log_translation, level='warning')
_LE = functools.partial(_log_translation, level='error')
_LC = functools.partial(_log_translation, level='critical')


def install(domain, lazy=False):
    """Install a _() function using the given translation domain.

    Given a translation domain, install a _() function using gettext's
    install() function.

    The main difference from gettext.install() is that we allow
    overriding the default localedir (e.g. /usr/share/locale) using
    a translation-domain-specific environment variable (e.g.
    NOVA_LOCALEDIR).

    :param domain: the translation domain
    :param lazy: indicates whether or not to install the lazy _() function.
                 The lazy _() introduces a way to do deferred translation
                 of messages by installing a _ that builds Message objects,
                 instead of strings, which can then be lazily translated into
                 any available locale.
    """
    if lazy:
        # NOTE(mrodden): Lazy gettext functionality.
        #
        # The following introduces a deferred way to do translations on
        # messages in OpenStack. We override the standard _() function
        # and % (format string) operation to build Message objects that can
        # later be translated when we have more information.
        def _lazy_gettext(msg):
            """Create and return a Message object.

            Lazy gettext function for a given domain, it is a factory method
            for a project/module to get a lazy gettext function for its own
            translation domain (i.e. nova, glance, cinder, etc.)

            Message encapsulates a string so that we can translate
            it later when needed.
            """
            return Message(msg, domain=domain)

        from six import moves
        moves.builtins.__dict__['_'] = _lazy_gettext
    else:
        localedir = '%s_LOCALEDIR' % domain.upper()
        if six.PY3:
            gettext.install(domain,
                            localedir=os.environ.get(localedir))
        else:
            gettext.install(domain,
                            localedir=os.environ.get(localedir),
                            unicode=True)
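The deferred-translation idea behind the Message class that follows — a unicode string that remembers its message ID and `%` parameters so it can be re-rendered in another locale later — can be sketched with a plain `str` subclass. The catalog dict here is a toy stand-in for a real gettext lookup:

```python
class LazyMessage(str):
    """Toy sketch of the deferred-translation pattern: a string that
    remembers its msgid and % parameters for later re-rendering."""

    def __new__(cls, msgid, msgtext=None, params=None):
        msg = super().__new__(cls, msgtext if msgtext is not None else msgid)
        msg.msgid = msgid
        msg.params = params
        return msg

    def __mod__(self, params):
        # Perform the real % formatting, but keep msgid and params around.
        rendered = str.__mod__(self, params)
        return LazyMessage(self.msgid, msgtext=rendered, params=params)

    def translate(self, catalog):
        # Look the msgid up in a (hypothetical) catalog, re-apply params.
        template = catalog.get(self.msgid, self.msgid)
        return template % self.params if self.params is not None else template


msg = LazyMessage('Instance %s not found') % 'vm-1'
spanish = msg.translate({'Instance %s not found': 'Instancia %s no encontrada'})
```

For everyday use `msg` behaves like the English string, but because the msgid survives formatting, `translate()` can still render it in any locale afterwards.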
class Message(six.text_type):
    """A Message object is a unicode object that can be translated.

    Translation of Message is done explicitly using the translate() method.
    For all non-translation intents and purposes, a Message is simply unicode,
    and can be treated as such.
    """

    def __new__(cls, msgid, msgtext=None, params=None,
                domain='sticks', *args):
        """Create a new Message object.

        In order for translation to work gettext requires a message ID, this
        msgid will be used as the base unicode text. It is also possible
        for the msgid and the base unicode text to be different by passing
        the msgtext parameter.
        """
        # If the base msgtext is not given, we use the default translation
        # of the msgid (which is in English) just in case the system locale is
        # not English, so that the base text will be in that locale by default.
        if not msgtext:
            msgtext = Message._translate_msgid(msgid, domain)
        # We want to initialize the parent unicode with the actual object that
        # would have been plain unicode if 'Message' was not enabled.
        msg = super(Message, cls).__new__(cls, msgtext)
        msg.msgid = msgid
        msg.domain = domain
        msg.params = params
        return msg

    def translate(self, desired_locale=None):
        """Translate this message to the desired locale.

        :param desired_locale: The desired locale to translate the message to,
                               if no locale is provided the message will be
                               translated to the system's default locale.

        :returns: the translated message in unicode
        """

        translated_message = Message._translate_msgid(self.msgid,
                                                      self.domain,
                                                      desired_locale)
        if self.params is None:
            # No need for more translation
            return translated_message

        # This Message object may have been formatted with one or more
        # Message objects as substitution arguments, given either as a single
        # argument, part of a tuple, or as one or more values in a dictionary.
        # When translating this Message we need to translate those Messages too
        translated_params = _translate_args(self.params, desired_locale)

        translated_message = translated_message % translated_params

        return translated_message

    @staticmethod
    def _translate_msgid(msgid, domain, desired_locale=None):
        if not desired_locale:
            system_locale = locale.getdefaultlocale()
            # If the system locale is not available to the runtime use English
            if not system_locale[0]:
                desired_locale = 'en_US'
            else:
                desired_locale = system_locale[0]

        locale_dir = os.environ.get(domain.upper() + '_LOCALEDIR')
        lang = gettext.translation(domain,
                                   localedir=locale_dir,
                                   languages=[desired_locale],
                                   fallback=True)
        if six.PY3:
            translator = lang.gettext
        else:
            translator = lang.ugettext

        translated_message = translator(msgid)
        return translated_message

    def __mod__(self, other):
        # When we mod a Message we want the actual operation to be performed
        # by the parent class (i.e. unicode()); the only thing we do here is
        # save the original msgid and the parameters in case of a translation
        params = self._sanitize_mod_params(other)
        unicode_mod = super(Message, self).__mod__(params)
        modded = Message(self.msgid,
                         msgtext=unicode_mod,
                         params=params,
                         domain=self.domain)
        return modded

    def _sanitize_mod_params(self, other):
        """Sanitize the object being modded with this Message.

        - Add support for modding 'None' so translation supports it
        - Trim the modded object, which can be a large dictionary, to only
          those keys that would actually be used in a translation
        - Snapshot the object being modded, so that if the message is
          translated, it will be used as it was when the Message was created
        """
        if other is None:
            params = (other,)
        elif isinstance(other, dict):
            # Merge the dictionaries
            # Copy each item in case one does not support deep copy.
            params = {}
            if isinstance(self.params, dict):
                for key, val in self.params.items():
                    params[key] = self._copy_param(val)
            for key, val in other.items():
                params[key] = self._copy_param(val)
        else:
            params = self._copy_param(other)
        return params

    def _copy_param(self, param):
        try:
            return copy.deepcopy(param)
        except Exception:
            # Fall back to casting to unicode; this will handle the
            # python code-like objects that can't be deep-copied
            return six.text_type(param)

    def __add__(self, other):
        msg = _('Message objects do not support addition.')
        raise TypeError(msg)

    def __radd__(self, other):
        return self.__add__(other)

    def __str__(self):
        # NOTE(luisg): Logging in python 2.6 tries to str() log records,
        # and it expects specifically a UnicodeError in order to proceed.
        msg = _('Message objects do not support str() because they may '
                'contain non-ascii characters. '
                'Please use unicode() or translate() instead.')
        raise UnicodeError(msg)


def get_available_languages(domain):
    """Lists the available languages for the given translation domain.

    :param domain: the domain to get languages for
    """
    if domain in _AVAILABLE_LANGUAGES:
        return copy.copy(_AVAILABLE_LANGUAGES[domain])

    localedir = '%s_LOCALEDIR' % domain.upper()
    find = lambda x: gettext.find(domain,
                                  localedir=os.environ.get(localedir),
                                  languages=[x])

    # NOTE(mrodden): en_US should always be available (and first in case
    # order matters) since our in-line message strings are en_US
    language_list = ['en_US']
    # NOTE(luisg): Babel <1.0 used a function called list(), which was
    # renamed to locale_identifiers() in >=1.0, the requirements master list
    # requires >=0.9.6, uncapped, so defensively work with both. We can remove
    # this check when the master list updates to >=1.0, and update all projects
    list_identifiers = (getattr(localedata, 'list', None) or
                        getattr(localedata, 'locale_identifiers'))
    locale_identifiers = list_identifiers()

    for i in locale_identifiers:
        if find(i) is not None:
            language_list.append(i)

    # NOTE(luisg): Babel>=1.0,<1.3 has a bug where some OpenStack supported
    # locales (e.g. 'zh_CN', and 'zh_TW') aren't supported even though they
    # are perfectly legitimate locales:
    #     https://github.com/mitsuhiko/babel/issues/37
    # In Babel 1.3 they fixed the bug and they support these locales, but
    # they are still not explicitly "listed" by locale_identifiers().
    # That is why we add the locales here explicitly if necessary so that
    # they are listed as supported.
    aliases = {'zh': 'zh_CN',
               'zh_Hant_HK': 'zh_HK',
               'zh_Hant': 'zh_TW',
               'fil': 'tl_PH'}
    for (locale, alias) in six.iteritems(aliases):
        if locale in language_list and alias not in language_list:
            language_list.append(alias)

    _AVAILABLE_LANGUAGES[domain] = language_list
    return copy.copy(language_list)


def translate(obj, desired_locale=None):
    """Gets the translated unicode representation of the given object.

    If the object is not translatable it is returned as-is.
    If the locale is None the object is translated to the system locale.

    :param obj: the object to translate
    :param desired_locale: the locale to translate the message to, if None the
                           default system locale will be used
    :returns: the translated object in unicode, or the original object if
              it could not be translated
    """
    message = obj
    if not isinstance(message, Message):
        # If the object to translate is not already translatable,
        # let's first get its unicode representation
        message = six.text_type(obj)
    if isinstance(message, Message):
        # Even after unicoding() we still need to check if we are
        # running with translatable unicode before translating
        return message.translate(desired_locale)
    return obj


def _translate_args(args, desired_locale=None):
    """Translates all the translatable elements of the given arguments object.

    This method is used for translating the translatable values in method
    arguments which include values of tuples or dictionaries.
    If the object is not a tuple or a dictionary the object itself is
    translated if it is translatable.

    If the locale is None the object is translated to the system locale.

    :param args: the args to translate
    :param desired_locale: the locale to translate the args to, if None the
                           default system locale will be used
    :returns: a new args object with the translated contents of the original
    """
    if isinstance(args, tuple):
        return tuple(translate(v, desired_locale) for v in args)
    if isinstance(args, dict):
        translated_dict = {}
        for (k, v) in six.iteritems(args):
            translated_v = translate(v, desired_locale)
            translated_dict[k] = translated_v
        return translated_dict
    return translate(args, desired_locale)


class TranslationHandler(handlers.MemoryHandler):
    """Handler that translates records before logging them.

    The TranslationHandler takes a locale and a target logging.Handler object
    to forward LogRecord objects to after translating them. This handler
    depends on Message objects being logged, instead of regular strings.

    The handler can be configured declaratively in the logging.conf as follows:

        [handlers]
        keys = translatedlog, translator

        [handler_translatedlog]
        class = handlers.WatchedFileHandler
        args = ('/var/log/api-localized.log',)
        formatter = context

        [handler_translator]
        class = openstack.common.log.TranslationHandler
        target = translatedlog
        args = ('zh_CN',)

    If the specified locale is not available in the system, the handler will
    log in the default locale.
    """

    def __init__(self, locale=None, target=None):
        """Initialize a TranslationHandler

        :param locale: locale to use for translating messages
        :param target: logging.Handler object to forward
                       LogRecord objects to after translation
        """
        # NOTE(luisg): In order to allow this handler to be a wrapper for
        # other handlers, such as a FileHandler, and still be able to
        # configure it using logging.conf, this handler has to extend
        # MemoryHandler because only the MemoryHandlers' logging.conf
        # parsing is implemented such that it accepts a target handler.
        handlers.MemoryHandler.__init__(self, capacity=0, target=target)
        self.locale = locale

    def setFormatter(self, fmt):
        self.target.setFormatter(fmt)

    def emit(self, record):
        # We save the message from the original record to restore it
        # after translation, so other handlers are not affected by this
        original_msg = record.msg
        original_args = record.args

        try:
            self._translate_and_log_record(record)
        finally:
            record.msg = original_msg
            record.args = original_args

    def _translate_and_log_record(self, record):
        record.msg = translate(record.msg, self.locale)

        # In addition to translating the message, we also need to translate
        # arguments that were passed to the log method that were not part
        # of the main message e.g., log.info(_('Some message %s'), this_one))
        record.args = _translate_args(record.args, self.locale)

        self.target.emit(record)
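The `translate()` / `_translate_args()` pair above recurses over tuples and dictionaries so that `%`-style log arguments get translated alongside the main message. A self-contained sketch of that dispatch, with a plain dict standing in for the gettext catalog lookup:

```python
def translate(obj, catalog):
    # Stand-in for the module's translate(): only "translatable" objects
    # (here: plain strings present in the catalog) are converted;
    # everything else is returned as-is.
    return catalog.get(obj, obj) if isinstance(obj, str) else obj


def translate_args(args, catalog):
    # Mirrors _translate_args(): handle tuples and dicts element-wise,
    # otherwise treat the input as a single value.
    if isinstance(args, tuple):
        return tuple(translate(v, catalog) for v in args)
    if isinstance(args, dict):
        return {k: translate(v, catalog) for k, v in args.items()}
    return translate(args, catalog)


catalog = {'disk': 'disque', 'memory': 'memoire'}
result = translate_args(('disk', 'memory', 42), catalog)
```

Untranslatable values (the `42` above) pass through untouched, which is what lets the real handler translate `record.args` without caring what the caller logged.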
@ -1,73 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Import related utilities and helper functions.
"""

import sys
import traceback


def import_class(import_str):
    """Returns a class from a string including module and class."""
    mod_str, _sep, class_str = import_str.rpartition('.')
    try:
        __import__(mod_str)
        return getattr(sys.modules[mod_str], class_str)
    except (ValueError, AttributeError):
        raise ImportError('Class %s cannot be found (%s)' %
                          (class_str,
                           traceback.format_exception(*sys.exc_info())))


def import_object(import_str, *args, **kwargs):
    """Import a class and return an instance of it."""
    return import_class(import_str)(*args, **kwargs)


def import_object_ns(name_space, import_str, *args, **kwargs):
    """Tries to import object from default namespace.

    Imports a class and return an instance of it, first by trying
    to find the class in a default namespace, then falling back to
    a full path if not found in the default namespace.
    """
    import_value = "%s.%s" % (name_space, import_str)
    try:
        return import_class(import_value)(*args, **kwargs)
    except ImportError:
        return import_class(import_str)(*args, **kwargs)


def import_module(import_str):
    """Import a module."""
    __import__(import_str)
    return sys.modules[import_str]


def import_versioned_module(version, submodule=None):
    module = 'sticks.v%s' % version
    if submodule:
        module = '.'.join((module, submodule))
    return import_module(module)


def try_import(import_str, default=None):
    """Try to import a module and if it fails return default."""
    try:
        return import_module(import_str)
    except ImportError:
        return default
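`import_class()` resolves a dotted "module.Class" path at runtime via `__import__` and `sys.modules`, which is how drivers and plugins are typically loaded from config strings. A trimmed, runnable version of the core lookup (error handling elided):

```python
import sys


def import_class(import_str):
    # Split "package.module.ClassName" into module path and attribute name,
    # import the module, then fetch the attribute from sys.modules.
    mod_str, _sep, class_str = import_str.rpartition('.')
    __import__(mod_str)
    return getattr(sys.modules[mod_str], class_str)


# Resolve a stdlib class from its dotted path, then instantiate it
# (the equivalent of import_object()).
OrderedDict = import_class('collections.OrderedDict')
d = OrderedDict([('a', 1)])
```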
@ -1,186 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

'''
JSON related utilities.

This module provides a few things:

    1) A handy function for getting an object down to something that can be
    JSON serialized. See to_primitive().

    2) Wrappers around loads() and dumps(). The dumps() wrapper will
    automatically use to_primitive() for you if needed.

    3) This sets up anyjson to use the loads() and dumps() wrappers if anyjson
    is available.
'''


import codecs
import datetime
import functools
import inspect
import itertools
import sys

if sys.version_info < (2, 7):
    # On Python <= 2.6, json module is not C boosted, so try to use
    # simplejson module if available
    try:
        import simplejson as json
    except ImportError:
        import json
else:
    import json

import six
import six.moves.xmlrpc_client as xmlrpclib

from sticks.openstack.common import gettextutils
from sticks.openstack.common import importutils
from sticks.openstack.common import strutils
from sticks.openstack.common import timeutils

netaddr = importutils.try_import("netaddr")

_nasty_type_tests = [inspect.ismodule, inspect.isclass, inspect.ismethod,
                     inspect.isfunction, inspect.isgeneratorfunction,
                     inspect.isgenerator, inspect.istraceback, inspect.isframe,
                     inspect.iscode, inspect.isbuiltin, inspect.isroutine,
                     inspect.isabstract]

_simple_types = (six.string_types + six.integer_types
                 + (type(None), bool, float))


def to_primitive(value, convert_instances=False, convert_datetime=True,
                 level=0, max_depth=3):
    """Convert a complex object into primitives.

    Handy for JSON serialization. We can optionally handle instances,
    but since this is a recursive function, we could have cyclical
    data structures.

    To handle cyclical data structures we could track the actual objects
    visited in a set, but not all objects are hashable. Instead we just
    track the depth of the object inspections and don't go too deep.

    Therefore, convert_instances=True is lossy ... be aware.

    """
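The depth-guard strategy in the docstring above (limit recursion depth rather than track visited objects) can be shown with a simplified sketch. Note one assumption: this sketch increments the depth on every dict/list recursion, whereas the original only bumps `level` when converting instances:

```python
import datetime


def to_primitive(value, level=0, max_depth=3):
    """Simplified sketch of depth-limited conversion to JSON primitives."""
    if isinstance(value, (str, int, float, bool, type(None))):
        return value
    if isinstance(value, datetime.datetime):
        return value.isoformat()
    if level > max_depth:
        return '?'  # depth guard instead of tracking visited objects
    if isinstance(value, dict):
        return {k: to_primitive(v, level + 1, max_depth)
                for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [to_primitive(v, level + 1, max_depth) for v in value]
    return str(value)  # last-resort fallback for anything else


# A self-referential list cannot be JSON serialized directly; the depth
# guard turns the unreachable tail into '?' instead of recursing forever.
cycle = []
cycle.append(cycle)
flattened = to_primitive(cycle)
```

This is why `convert_instances=True` is described as lossy: anything past `max_depth` collapses to `'?'`.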
    # handle obvious types first - order of basic types determined by running
    # full tests on nova project, resulting in the following counts:
    # 572754 <type 'NoneType'>
    # 460353 <type 'int'>
    # 379632 <type 'unicode'>
    # 274610 <type 'str'>
    # 199918 <type 'dict'>
    # 114200 <type 'datetime.datetime'>
    #  51817 <type 'bool'>
    #  26164 <type 'list'>
    #   6491 <type 'float'>
    #    283 <type 'tuple'>
    #     19 <type 'long'>
    if isinstance(value, _simple_types):
        return value

    if isinstance(value, datetime.datetime):
        if convert_datetime:
            return timeutils.strtime(value)
        else:
            return value

    # value of itertools.count doesn't get caught by nasty_type_tests
    # and results in infinite loop when list(value) is called.
    if type(value) == itertools.count:
        return six.text_type(value)

    # FIXME(vish): Workaround for LP bug 852095. Without this workaround,
    #              tests that raise an exception in a mocked method that
    #              has a @wrap_exception with a notifier will fail. If
    #              we up the dependency to 0.5.4 (when it is released) we
    #              can remove this workaround.
    if getattr(value, '__module__', None) == 'mox':
        return 'mock'

    if level > max_depth:
        return '?'

    # The try block may not be necessary after the class check above,
    # but just in case ...
    try:
        recursive = functools.partial(to_primitive,
                                      convert_instances=convert_instances,
                                      convert_datetime=convert_datetime,
                                      level=level,
                                      max_depth=max_depth)
        if isinstance(value, dict):
            return dict((k, recursive(v)) for k, v in six.iteritems(value))
        elif isinstance(value, (list, tuple)):
            return [recursive(lv) for lv in value]

        # It's not clear why xmlrpclib created their own DateTime type, but
        # for our purposes, make it a datetime type which is explicitly
|
|
||||||
# handled
|
|
||||||
if isinstance(value, xmlrpclib.DateTime):
|
|
||||||
value = datetime.datetime(*tuple(value.timetuple())[:6])
|
|
||||||
|
|
||||||
if convert_datetime and isinstance(value, datetime.datetime):
|
|
||||||
return timeutils.strtime(value)
|
|
||||||
elif isinstance(value, gettextutils.Message):
|
|
||||||
return value.data
|
|
||||||
elif hasattr(value, 'iteritems'):
|
|
||||||
return recursive(dict(value.iteritems()), level=level + 1)
|
|
||||||
elif hasattr(value, '__iter__'):
|
|
||||||
return recursive(list(value))
|
|
||||||
elif convert_instances and hasattr(value, '__dict__'):
|
|
||||||
# Likely an instance of something. Watch for cycles.
|
|
||||||
# Ignore class member vars.
|
|
||||||
return recursive(value.__dict__, level=level + 1)
|
|
||||||
elif netaddr and isinstance(value, netaddr.IPAddress):
|
|
||||||
return six.text_type(value)
|
|
||||||
else:
|
|
||||||
if any(test(value) for test in _nasty_type_tests):
|
|
||||||
return six.text_type(value)
|
|
||||||
return value
|
|
||||||
except TypeError:
|
|
||||||
# Class objects are tricky since they may define something like
|
|
||||||
# __iter__ defined but it isn't callable as list().
|
|
||||||
return six.text_type(value)
|
|
||||||
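The depth guard in `to_primitive` above can be sketched standalone with the stdlib only (no `sticks.*` imports; `simplify` is a hypothetical name, and unlike the original it increments `level` on every container hop for clarity):

```python
import datetime
import functools


def simplify(value, level=0, max_depth=3):
    """Reduce a value to JSON-friendly primitives, bounding recursion depth
    instead of tracking visited objects (which would require hashability)."""
    if isinstance(value, (str, int, float, bool, type(None))):
        return value
    if isinstance(value, datetime.datetime):
        return value.isoformat()
    if level > max_depth:
        return '?'  # same give-up sentinel the module uses
    recurse = functools.partial(simplify, max_depth=max_depth)
    if isinstance(value, dict):
        return dict((k, recurse(v, level=level + 1)) for k, v in value.items())
    if isinstance(value, (list, tuple)):
        return [recurse(v, level=level + 1) for v in value]
    return str(value)
```

A dict nested deeper than `max_depth` degrades to `'?'` rather than recursing forever on a cyclical structure.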


def dumps(value, default=to_primitive, **kwargs):
    return json.dumps(value, default=default, **kwargs)


def loads(s, encoding='utf-8'):
    return json.loads(strutils.safe_decode(s, encoding))


def load(fp, encoding='utf-8'):
    return json.load(codecs.getreader(encoding)(fp))


try:
    import anyjson
except ImportError:
    pass
else:
    anyjson._modules.append((__name__, 'dumps', TypeError,
                             'loads', ValueError, 'load'))
    anyjson.force_implementation(__name__)
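The `dumps` wrapper above works by handing `to_primitive` to `json.dumps` as the `default` fallback, which the stdlib invokes only for values it cannot serialize natively. A minimal standalone sketch of that hook (the `fallback` converter is illustrative, not the module's full logic):

```python
import datetime
import json


def fallback(value):
    """Called by json.dumps only for values it cannot serialize natively."""
    if isinstance(value, datetime.datetime):
        return value.isoformat()
    return str(value)


def dumps(value, default=fallback, **kwargs):
    # Mirrors the module's dumps(): delegate, supplying a conversion fallback.
    return json.dumps(value, default=default, **kwargs)
```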
@ -1,45 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Local storage of variables using weak references"""

import threading
import weakref


class WeakLocal(threading.local):
    def __getattribute__(self, attr):
        rval = super(WeakLocal, self).__getattribute__(attr)
        if rval:
            # NOTE(mikal): this bit is confusing. What is stored is a weak
            # reference, not the value itself. We therefore need to lookup
            # the weak reference and return the inner value here.
            rval = rval()
        return rval

    def __setattr__(self, attr, value):
        value = weakref.ref(value)
        return super(WeakLocal, self).__setattr__(attr, value)


# NOTE(mikal): the name "store" should be deprecated in the future
store = WeakLocal()

# A "weak" store uses weak references and allows an object to fall out of scope
# when it falls out of scope in the code that uses the thread local storage. A
# "strong" store will hold a reference to the object so that it never falls out
# of scope.
weak_store = WeakLocal()
strong_store = threading.local()
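The weak-store behavior described above can be exercised with a standalone copy of the class (runnable without the `sticks` package): once the last strong reference to a stored object dies, the attribute silently reads back as `None`.

```python
import gc
import threading
import weakref


class WeakLocal(threading.local):
    """Thread-local storage that keeps only weak references (as above)."""

    def __getattribute__(self, attr):
        rval = super(WeakLocal, self).__getattribute__(attr)
        if rval:
            rval = rval()  # dereference the weakref to reach the real value
        return rval

    def __setattr__(self, attr, value):
        # Store a weak reference, never the value itself.
        return super(WeakLocal, self).__setattr__(attr, weakref.ref(value))


class Context(object):
    """Stand-in for a request context object."""
```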
@ -1,377 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import contextlib
import errno
import fcntl
import functools
import os
import shutil
import subprocess
import sys
import tempfile
import threading
import time
import weakref

from oslo.config import cfg

from sticks.openstack.common import fileutils
from sticks.openstack.common.gettextutils import _, _LE, _LI
from sticks.openstack.common import log as logging


LOG = logging.getLogger(__name__)


util_opts = [
    cfg.BoolOpt('disable_process_locking', default=False,
                help='Whether to disable inter-process locks'),
    cfg.StrOpt('lock_path',
               default=os.environ.get("STICKS_LOCK_PATH"),
               help=('Directory to use for lock files.'))
]


CONF = cfg.CONF
CONF.register_opts(util_opts)


def set_defaults(lock_path):
    cfg.set_defaults(util_opts, lock_path=lock_path)


class _FileLock(object):
    """Lock implementation which allows multiple locks, working around
    issues like bugs.debian.org/cgi-bin/bugreport.cgi?bug=632857 and does
    not require any cleanup. Since the lock is always held on a file
    descriptor rather than outside of the process, the lock gets dropped
    automatically if the process crashes, even if __exit__ is not executed.

    There are no guarantees regarding usage by multiple green threads in a
    single process here. This lock works only between processes. Exclusive
    access between local threads should be achieved using the semaphores
    in the @synchronized decorator.

    Note these locks are released when the descriptor is closed, so it's not
    safe to close the file descriptor while another green thread holds the
    lock. Just opening and closing the lock file can break synchronisation,
    so lock files must be accessed only using this abstraction.
    """

    def __init__(self, name):
        self.lockfile = None
        self.fname = name

    def acquire(self):
        basedir = os.path.dirname(self.fname)

        if not os.path.exists(basedir):
            fileutils.ensure_tree(basedir)
            LOG.info(_LI('Created lock path: %s'), basedir)

        self.lockfile = open(self.fname, 'w')

        while True:
            try:
                # Using non-blocking locks since green threads are not
                # patched to deal with blocking locking calls.
                # Also upon reading the MSDN docs for locking(), it seems
                # to have a laughable 10 attempts "blocking" mechanism.
                self.trylock()
                LOG.debug('Got file lock "%s"', self.fname)
                return True
            except IOError as e:
                if e.errno in (errno.EACCES, errno.EAGAIN):
                    # external locks synchronise things like iptables
                    # updates - give it some time to prevent busy spinning
                    time.sleep(0.01)
                else:
                    raise threading.ThreadError(_("Unable to acquire lock on"
                                                  " `%(filename)s` due to"
                                                  " %(exception)s") %
                                                {
                                                    'filename': self.fname,
                                                    'exception': e,
                                                })

    def __enter__(self):
        self.acquire()
        return self

    def release(self):
        try:
            self.unlock()
            self.lockfile.close()
            LOG.debug('Released file lock "%s"', self.fname)
        except IOError:
            LOG.exception(_LE("Could not release the acquired lock `%s`"),
                          self.fname)

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.release()

    def exists(self):
        return os.path.exists(self.fname)

    def trylock(self):
        raise NotImplementedError()

    def unlock(self):
        raise NotImplementedError()


class _WindowsLock(_FileLock):
    def trylock(self):
        msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_NBLCK, 1)

    def unlock(self):
        msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_UNLCK, 1)


class _FcntlLock(_FileLock):
    def trylock(self):
        fcntl.lockf(self.lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)

    def unlock(self):
        fcntl.lockf(self.lockfile, fcntl.LOCK_UN)
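The `_FcntlLock` subclass above boils down to a non-blocking `fcntl.lockf` acquire and an explicit unlock. A POSIX-only standalone sketch (`FcntlLock` is a hypothetical stand-in; note that POSIX record locks are per-process, which is exactly why the module layers thread semaphores on top for intra-process exclusion):

```python
import fcntl
import os
import tempfile


class FcntlLock(object):
    """Minimal stand-in for _FcntlLock: non-blocking fcntl file lock."""

    def __init__(self, path):
        self.path = path
        self.lockfile = None

    def acquire(self):
        self.lockfile = open(self.path, 'w')
        # LOCK_NB makes this raise IOError/OSError instead of blocking
        # when another process already holds the lock.
        fcntl.lockf(self.lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)

    def release(self):
        fcntl.lockf(self.lockfile, fcntl.LOCK_UN)
        self.lockfile.close()
```

As in the original, crashing while holding the lock is safe: the kernel drops the lock when the descriptor is closed.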


class _PosixLock(object):
    def __init__(self, name):
        # Hash the name because it's not valid to have POSIX semaphore
        # names with things like / in them. Then use base64 to encode
        # the digest() instead of taking the hexdigest() because the
        # result is shorter and most systems can't have shm semaphore
        # names longer than 31 characters.
        h = hashlib.sha1()
        h.update(name.encode('ascii'))
        self.name = str((b'/' + base64.urlsafe_b64encode(
            h.digest())).decode('ascii'))

    def acquire(self, timeout=None):
        self.semaphore = posix_ipc.Semaphore(self.name,
                                             flags=posix_ipc.O_CREAT,
                                             initial_value=1)
        self.semaphore.acquire(timeout)
        return self

    def __enter__(self):
        self.acquire()
        return self

    def release(self):
        self.semaphore.release()
        self.semaphore.close()

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.release()

    def exists(self):
        try:
            semaphore = posix_ipc.Semaphore(self.name)
        except posix_ipc.ExistentialError:
            return False
        else:
            semaphore.close()
            return True
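The name-mangling in `_PosixLock.__init__` is worth checking in isolation: a raw SHA-1 digest is 20 bytes, so its urlsafe base64 encoding plus the leading `/` stays within the 31-character limit the comment mentions. A standalone copy of just that computation (`posix_sem_name` is a hypothetical name):

```python
import base64
import hashlib


def posix_sem_name(name):
    """Mirror _PosixLock's naming: sha1-hash the lock name, base64-encode the
    raw digest (shorter than hexdigest), and prefix '/' as POSIX requires."""
    h = hashlib.sha1()
    h.update(name.encode('ascii'))
    return (b'/' + base64.urlsafe_b64encode(h.digest())).decode('ascii')
```

Since urlsafe base64 uses `-` and `_` instead of `+` and `/`, the result contains no path separators regardless of the input name.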


if os.name == 'nt':
    import msvcrt
    InterProcessLock = _WindowsLock
    FileLock = _WindowsLock
else:
    import base64
    import hashlib
    import posix_ipc
    InterProcessLock = _PosixLock
    FileLock = _FcntlLock

_semaphores = weakref.WeakValueDictionary()
_semaphores_lock = threading.Lock()


def _get_lock_path(name, lock_file_prefix, lock_path=None):
    # NOTE(mikal): the lock name cannot contain directory
    # separators
    name = name.replace(os.sep, '_')
    if lock_file_prefix:
        sep = '' if lock_file_prefix.endswith('-') else '-'
        name = '%s%s%s' % (lock_file_prefix, sep, name)

    local_lock_path = lock_path or CONF.lock_path

    if not local_lock_path:
        # NOTE(bnemec): Create a fake lock path for posix locks so we don't
        # unnecessarily raise the RequiredOptError below.
        if InterProcessLock is not _PosixLock:
            raise cfg.RequiredOptError('lock_path')
        local_lock_path = 'posixlock:/'

    return os.path.join(local_lock_path, name)


def external_lock(name, lock_file_prefix=None, lock_path=None):
    LOG.debug('Attempting to grab external lock "%(lock)s"',
              {'lock': name})

    lock_file_path = _get_lock_path(name, lock_file_prefix, lock_path)

    # NOTE(bnemec): If an explicit lock_path was passed to us then it
    # means the caller is relying on file-based locking behavior, so
    # we can't use posix locks for those calls.
    if lock_path:
        return FileLock(lock_file_path)
    return InterProcessLock(lock_file_path)


def remove_external_lock_file(name, lock_file_prefix=None):
    """Remove an external lock file when it's not used anymore.

    This will be helpful when we have a lot of lock files.
    """
    with internal_lock(name):
        lock_file_path = _get_lock_path(name, lock_file_prefix)
        try:
            os.remove(lock_file_path)
        except OSError:
            LOG.info(_LI('Failed to remove file %(file)s'),
                     {'file': lock_file_path})


def internal_lock(name):
    with _semaphores_lock:
        try:
            sem = _semaphores[name]
        except KeyError:
            sem = threading.Semaphore()
            _semaphores[name] = sem

    LOG.debug('Got semaphore "%(lock)s"', {'lock': name})
    return sem


@contextlib.contextmanager
def lock(name, lock_file_prefix=None, external=False, lock_path=None):
    """Context based lock

    This function yields a `threading.Semaphore` instance (if we don't use
    eventlet.monkey_patch(), else `semaphore.Semaphore`) unless external is
    True, in which case, it'll yield an InterProcessLock instance.

    :param lock_file_prefix: The lock_file_prefix argument is used to provide
    lock files on disk with a meaningful prefix.

    :param external: The external keyword argument denotes whether this lock
    should work across multiple processes. This means that if two different
    workers both run a method decorated with @synchronized('mylock',
    external=True), only one of them will execute at a time.
    """
    int_lock = internal_lock(name)
    with int_lock:
        if external and not CONF.disable_process_locking:
            ext_lock = external_lock(name, lock_file_prefix, lock_path)
            with ext_lock:
                yield ext_lock
        else:
            yield int_lock


def synchronized(name, lock_file_prefix=None, external=False, lock_path=None):
    """Synchronization decorator.

    Decorating a method like so::

        @synchronized('mylock')
        def foo(self, *args):
           ...

    ensures that only one thread will execute the foo method at a time.

    Different methods can share the same lock::

        @synchronized('mylock')
        def foo(self, *args):
           ...

        @synchronized('mylock')
        def bar(self, *args):
           ...

    This way only one of either foo or bar can be executing at a time.
    """

    def wrap(f):
        @functools.wraps(f)
        def inner(*args, **kwargs):
            try:
                with lock(name, lock_file_prefix, external, lock_path):
                    LOG.debug('Got semaphore / lock "%(function)s"',
                              {'function': f.__name__})
                    return f(*args, **kwargs)
            finally:
                LOG.debug('Semaphore / lock released "%(function)s"',
                          {'function': f.__name__})
        return inner
    return wrap
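The thread-only path of the `@synchronized` decorator above (the `internal_lock` semaphore registry plus the decorator wrapper) can be sketched standalone, omitting the external file-lock branch and logging:

```python
import functools
import threading
import weakref

# One shared Semaphore per lock name; WeakValueDictionary lets unused
# semaphores be garbage-collected, exactly as in the module above.
_semaphores = weakref.WeakValueDictionary()
_semaphores_lock = threading.Lock()


def internal_lock(name):
    with _semaphores_lock:
        try:
            sem = _semaphores[name]
        except KeyError:
            sem = threading.Semaphore()
            _semaphores[name] = sem
    return sem


def synchronized(name):
    """Thread-only sketch of the decorator (no inter-process locking)."""
    def wrap(f):
        @functools.wraps(f)
        def inner(*args, **kwargs):
            with internal_lock(name):
                return f(*args, **kwargs)
        return inner
    return wrap


counter = {'n': 0}


@synchronized('demo')
def bump():
    # Read-modify-write that would be racy without the lock.
    v = counter['n']
    counter['n'] = v + 1
```

Any two functions decorated with the same name share one semaphore, so only one of them runs at a time.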


def synchronized_with_prefix(lock_file_prefix):
    """Partial object generator for the synchronization decorator.

    Redefine @synchronized in each project like so::

        (in nova/utils.py)
        from nova.openstack.common import lockutils

        synchronized = lockutils.synchronized_with_prefix('nova-')


        (in nova/foo.py)
        from nova import utils

        @utils.synchronized('mylock')
        def bar(self, *args):
           ...

    The lock_file_prefix argument is used to provide lock files on disk with a
    meaningful prefix.
    """

    return functools.partial(synchronized, lock_file_prefix=lock_file_prefix)


def main(argv):
    """Create a dir for locks and pass it to command from arguments

    If you run this:
        python -m openstack.common.lockutils python setup.py testr <etc>

    a temporary directory will be created for all your locks and passed to all
    your tests in an environment variable. The temporary dir will be deleted
    afterwards and the return value will be preserved.
    """

    lock_dir = tempfile.mkdtemp()
    os.environ["STICKS_LOCK_PATH"] = lock_dir
    try:
        ret_val = subprocess.call(argv[1:])
    finally:
        shutil.rmtree(lock_dir, ignore_errors=True)
    return ret_val


if __name__ == '__main__':
    sys.exit(main(sys.argv))
@ -1,713 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""OpenStack logging handler.

This module adds to logging functionality by adding the option to specify
a context object when calling the various log methods. If the context object
is not specified, default formatting is used. Additionally, an instance uuid
may be passed as part of the log message, which is intended to make it easier
for admins to find messages related to a specific instance.

It also allows setting of formatting information through conf.

"""

import inspect
import itertools
import logging
import logging.config
import logging.handlers
import os
import re
import sys
import traceback

from oslo.config import cfg
import six
from six import moves

from sticks.openstack.common.gettextutils import _
from sticks.openstack.common import importutils
from sticks.openstack.common import jsonutils
from sticks.openstack.common import local


_DEFAULT_LOG_DATE_FORMAT = "%Y-%m-%d %H:%M:%S"

_SANITIZE_KEYS = ['adminPass', 'admin_pass', 'password', 'admin_password']

# NOTE(ldbragst): Let's build a list of regex objects using the list of
# _SANITIZE_KEYS we already have. This way, we only have to add the new key
# to the list of _SANITIZE_KEYS and we can generate regular expressions
# for XML and JSON automatically.
_SANITIZE_PATTERNS = []
_FORMAT_PATTERNS = [r'(%(key)s\s*[=]\s*[\"\']).*?([\"\'])',
                    r'(<%(key)s>).*?(</%(key)s>)',
                    r'([\"\']%(key)s[\"\']\s*:\s*[\"\']).*?([\"\'])',
                    r'([\'"].*?%(key)s[\'"]\s*:\s*u?[\'"]).*?([\'"])']

for key in _SANITIZE_KEYS:
    for pattern in _FORMAT_PATTERNS:
        reg_ex = re.compile(pattern % {'key': key}, re.DOTALL)
        _SANITIZE_PATTERNS.append(reg_ex)


common_cli_opts = [
    cfg.BoolOpt('debug',
                short='d',
                default=False,
                help='Print debugging output (set logging level to '
                     'DEBUG instead of default WARNING level).'),
    cfg.BoolOpt('verbose',
                short='v',
                default=False,
                help='Print more verbose output (set logging level to '
                     'INFO instead of default WARNING level).'),
]

logging_cli_opts = [
    cfg.StrOpt('log-config-append',
               metavar='PATH',
               deprecated_name='log-config',
               help='The name of logging configuration file. It does not '
                    'disable existing loggers, but just appends specified '
                    'logging configuration to any other existing logging '
                    'options. Please see the Python logging module '
                    'documentation for details on logging configuration '
                    'files.'),
    cfg.StrOpt('log-format',
               default=None,
               metavar='FORMAT',
               help='DEPRECATED. '
                    'A logging.Formatter log message format string which may '
                    'use any of the available logging.LogRecord attributes. '
                    'This option is deprecated. Please use '
                    'logging_context_format_string and '
                    'logging_default_format_string instead.'),
    cfg.StrOpt('log-date-format',
               default=_DEFAULT_LOG_DATE_FORMAT,
               metavar='DATE_FORMAT',
               help='Format string for %%(asctime)s in log records. '
                    'Default: %(default)s'),
    cfg.StrOpt('log-file',
               metavar='PATH',
               deprecated_name='logfile',
               help='(Optional) Name of log file to output to. '
                    'If no default is set, logging will go to stdout.'),
    cfg.StrOpt('log-dir',
               deprecated_name='logdir',
               help='(Optional) The base directory used for relative '
                    '--log-file paths'),
    cfg.BoolOpt('use-syslog',
                default=False,
                help='Use syslog for logging. '
                     'Existing syslog format is DEPRECATED during I, '
                     'and then will be changed in J to honor RFC5424'),
    cfg.BoolOpt('use-syslog-rfc-format',
                # TODO(bogdando) remove or use True after existing
                # syslog format deprecation in J
                default=False,
                help='(Optional) Use syslog rfc5424 format for logging. '
                     'If enabled, will add APP-NAME (RFC5424) before the '
                     'MSG part of the syslog message. The old format '
                     'without APP-NAME is deprecated in I, '
                     'and will be removed in J.'),
    cfg.StrOpt('syslog-log-facility',
               default='LOG_USER',
               help='Syslog facility to receive log lines')
]

generic_log_opts = [
    cfg.BoolOpt('use_stderr',
                default=True,
                help='Log output to standard error')
]

log_opts = [
    cfg.StrOpt('logging_context_format_string',
               default='%(asctime)s.%(msecs)03d %(process)d %(levelname)s '
                       '%(name)s [%(request_id)s %(user_identity)s] '
                       '%(instance)s%(message)s',
               help='Format string to use for log messages with context'),
    cfg.StrOpt('logging_default_format_string',
               default='%(asctime)s.%(msecs)03d %(process)d %(levelname)s '
                       '%(name)s [-] %(instance)s%(message)s',
               help='Format string to use for log messages without context'),
    cfg.StrOpt('logging_debug_format_suffix',
               default='%(funcName)s %(pathname)s:%(lineno)d',
               help='Data to append to log format when level is DEBUG'),
    cfg.StrOpt('logging_exception_prefix',
               default='%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s '
                       '%(instance)s',
               help='Prefix each line of exception output with this format'),
    cfg.ListOpt('default_log_levels',
                default=[
                    'amqp=WARN',
                    'amqplib=WARN',
                    'boto=WARN',
                    'qpid=WARN',
                    'sqlalchemy=WARN',
                    'suds=INFO',
                    'oslo.messaging=INFO',
                    'iso8601=WARN',
                    'requests.packages.urllib3.connectionpool=WARN'
                ],
                help='List of logger=LEVEL pairs'),
    cfg.BoolOpt('publish_errors',
                default=False,
                help='Publish error events'),
    cfg.BoolOpt('fatal_deprecations',
                default=False,
                help='Make deprecations fatal'),

    # NOTE(mikal): there are two options here because sometimes we are handed
    # a full instance (and could include more information), and other times we
    # are just handed a UUID for the instance.
    cfg.StrOpt('instance_format',
               default='[instance: %(uuid)s] ',
               help='If an instance is passed with the log message, format '
                    'it like this'),
    cfg.StrOpt('instance_uuid_format',
               default='[instance: %(uuid)s] ',
               help='If an instance UUID is passed with the log message, '
                    'format it like this'),
]

CONF = cfg.CONF
CONF.register_cli_opts(common_cli_opts)
CONF.register_cli_opts(logging_cli_opts)
CONF.register_opts(generic_log_opts)
CONF.register_opts(log_opts)

# our new audit level
# NOTE(jkoelker) Since we synthesized an audit level, make the logging
# module aware of it so it acts like other levels.
logging.AUDIT = logging.INFO + 1
logging.addLevelName(logging.AUDIT, 'AUDIT')


try:
    NullHandler = logging.NullHandler
except AttributeError:  # NOTE(jkoelker) NullHandler added in Python 2.7
    class NullHandler(logging.Handler):
        def handle(self, record):
            pass

        def emit(self, record):
            pass

        def createLock(self):
            self.lock = None


def _dictify_context(context):
    if context is None:
        return None
    if not isinstance(context, dict) and getattr(context, 'to_dict', None):
        context = context.to_dict()
    return context


def _get_binary_name():
    return os.path.basename(inspect.stack()[-1][1])


def _get_log_file_path(binary=None):
    logfile = CONF.log_file
    logdir = CONF.log_dir

    if logfile and not logdir:
        return logfile

    if logfile and logdir:
        return os.path.join(logdir, logfile)

    if logdir:
        binary = binary or _get_binary_name()
        return '%s.log' % (os.path.join(logdir, binary),)

    return None


def mask_password(message, secret="***"):
    """Replace password with 'secret' in message.

    :param message: The string which includes security information.
    :param secret: value with which to replace passwords.
    :returns: The unicode value of message with the password fields masked.

    For example:

    >>> mask_password("'adminPass' : 'aaaaa'")
    "'adminPass' : '***'"
    >>> mask_password("'admin_pass' : 'aaaaa'")
    "'admin_pass' : '***'"
    >>> mask_password('"password" : "aaaaa"')
    '"password" : "***"'
    >>> mask_password("'original_password' : 'aaaaa'")
    "'original_password' : '***'"
    >>> mask_password("u'original_password' : u'aaaaa'")
    "u'original_password' : u'***'"
    """
    message = six.text_type(message)

    # NOTE(ldbragst): Check to see if anything in message contains any key
    # specified in _SANITIZE_KEYS, if not then just return the message since
    # we don't have to mask any passwords.
    if not any(key in message for key in _SANITIZE_KEYS):
        return message

    secret = r'\g<1>' + secret + r'\g<2>'
    for pattern in _SANITIZE_PATTERNS:
        message = re.sub(pattern, secret, message)
    return message
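The masking logic above is self-contained apart from `six`, so a standalone copy is runnable as-is; the key/pattern tables are taken verbatim from the module:

```python
import re

_SANITIZE_KEYS = ['adminPass', 'admin_pass', 'password', 'admin_password']
_FORMAT_PATTERNS = [r'(%(key)s\s*[=]\s*[\"\']).*?([\"\'])',
                    r'(<%(key)s>).*?(</%(key)s>)',
                    r'([\"\']%(key)s[\"\']\s*:\s*[\"\']).*?([\"\'])',
                    r'([\'"].*?%(key)s[\'"]\s*:\s*u?[\'"]).*?([\'"])']

_SANITIZE_PATTERNS = [re.compile(p % {'key': k}, re.DOTALL)
                      for k in _SANITIZE_KEYS for p in _FORMAT_PATTERNS]


def mask_password(message, secret='***'):
    """Standalone copy of the masking logic above (no six dependency)."""
    if not any(key in message for key in _SANITIZE_KEYS):
        return message  # fast path: no sensitive key present at all
    repl = r'\g<1>' + secret + r'\g<2>'
    for pattern in _SANITIZE_PATTERNS:
        message = re.sub(pattern, repl, message)
    return message
```

The first pattern handles `key = "value"` assignments, the second XML elements, and the last two JSON-style `"key": "value"` pairs.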


class BaseLoggerAdapter(logging.LoggerAdapter):

    def audit(self, msg, *args, **kwargs):
        self.log(logging.AUDIT, msg, *args, **kwargs)


class LazyAdapter(BaseLoggerAdapter):
    def __init__(self, name='unknown', version='unknown'):
        self._logger = None
        self.extra = {}
        self.name = name
        self.version = version

    @property
    def logger(self):
        if not self._logger:
            self._logger = getLogger(self.name, self.version)
        return self._logger


class ContextAdapter(BaseLoggerAdapter):
    warn = logging.LoggerAdapter.warning

    def __init__(self, logger, project_name, version_string):
        self.logger = logger
        self.project = project_name
        self.version = version_string
        self._deprecated_messages_sent = dict()

    @property
    def handlers(self):
        return self.logger.handlers

    def deprecated(self, msg, *args, **kwargs):
        """Call this method when a deprecated feature is used.

        If the system is configured for fatal deprecations then the message
        is logged at the 'critical' level and :class:`DeprecatedConfig` will
        be raised.

        Otherwise, the message will be logged (once) at the 'warn' level.

        :raises: :class:`DeprecatedConfig` if the system is configured for
                 fatal deprecations.

        """
        stdmsg = _("Deprecated: %s") % msg
        if CONF.fatal_deprecations:
            self.critical(stdmsg, *args, **kwargs)
            raise DeprecatedConfig(msg=stdmsg)

        # Using a list because a tuple with dict can't be stored in a set.
        sent_args = self._deprecated_messages_sent.setdefault(msg, list())

        if args in sent_args:
            # Already logged this message, so don't log it again.
            return

        sent_args.append(args)
        self.warn(stdmsg, *args, **kwargs)

    def process(self, msg, kwargs):
        # NOTE(mrodden): catch any Message/other object and
        #                coerce to unicode before they can get
        #                to the python logging and possibly
        #                cause string encoding trouble
|
|
||||||
if not isinstance(msg, six.string_types):
|
|
||||||
msg = six.text_type(msg)
|
|
||||||
|
|
||||||
if 'extra' not in kwargs:
|
|
||||||
kwargs['extra'] = {}
|
|
||||||
extra = kwargs['extra']
|
|
||||||
|
|
||||||
context = kwargs.pop('context', None)
|
|
||||||
if not context:
|
|
||||||
context = getattr(local.store, 'context', None)
|
|
||||||
if context:
|
|
||||||
extra.update(_dictify_context(context))
|
|
||||||
|
|
||||||
instance = kwargs.pop('instance', None)
|
|
||||||
instance_uuid = (extra.get('instance_uuid') or
|
|
||||||
kwargs.pop('instance_uuid', None))
|
|
||||||
instance_extra = ''
|
|
||||||
if instance:
|
|
||||||
instance_extra = CONF.instance_format % instance
|
|
||||||
elif instance_uuid:
|
|
||||||
instance_extra = (CONF.instance_uuid_format
|
|
||||||
% {'uuid': instance_uuid})
|
|
||||||
extra['instance'] = instance_extra
|
|
||||||
|
|
||||||
extra.setdefault('user_identity', kwargs.pop('user_identity', None))
|
|
||||||
|
|
||||||
extra['project'] = self.project
|
|
||||||
extra['version'] = self.version
|
|
||||||
extra['extra'] = extra.copy()
|
|
||||||
return msg, kwargs
|
|
||||||
|
|
||||||
|
|
||||||
class JSONFormatter(logging.Formatter):
|
|
||||||
def __init__(self, fmt=None, datefmt=None):
|
|
||||||
# NOTE(jkoelker) we ignore the fmt argument, but its still there
|
|
||||||
# since logging.config.fileConfig passes it.
|
|
||||||
self.datefmt = datefmt
|
|
||||||
|
|
||||||
def formatException(self, ei, strip_newlines=True):
|
|
||||||
lines = traceback.format_exception(*ei)
|
|
||||||
if strip_newlines:
|
|
||||||
lines = [moves.filter(
|
|
||||||
lambda x: x,
|
|
||||||
line.rstrip().splitlines()) for line in lines]
|
|
||||||
lines = list(itertools.chain(*lines))
|
|
||||||
return lines
|
|
||||||
|
|
||||||
def format(self, record):
|
|
||||||
message = {'message': record.getMessage(),
|
|
||||||
'asctime': self.formatTime(record, self.datefmt),
|
|
||||||
'name': record.name,
|
|
||||||
'msg': record.msg,
|
|
||||||
'args': record.args,
|
|
||||||
'levelname': record.levelname,
|
|
||||||
'levelno': record.levelno,
|
|
||||||
'pathname': record.pathname,
|
|
||||||
'filename': record.filename,
|
|
||||||
'module': record.module,
|
|
||||||
'lineno': record.lineno,
|
|
||||||
'funcname': record.funcName,
|
|
||||||
'created': record.created,
|
|
||||||
'msecs': record.msecs,
|
|
||||||
'relative_created': record.relativeCreated,
|
|
||||||
'thread': record.thread,
|
|
||||||
'thread_name': record.threadName,
|
|
||||||
'process_name': record.processName,
|
|
||||||
'process': record.process,
|
|
||||||
'traceback': None}
|
|
||||||
|
|
||||||
if hasattr(record, 'extra'):
|
|
||||||
message['extra'] = record.extra
|
|
||||||
|
|
||||||
if record.exc_info:
|
|
||||||
message['traceback'] = self.formatException(record.exc_info)
|
|
||||||
|
|
||||||
return jsonutils.dumps(message)
|
|
||||||
|
|
||||||
|
|
||||||
def _create_logging_excepthook(product_name):
|
|
||||||
def logging_excepthook(exc_type, value, tb):
|
|
||||||
extra = {}
|
|
||||||
if CONF.verbose or CONF.debug:
|
|
||||||
extra['exc_info'] = (exc_type, value, tb)
|
|
||||||
getLogger(product_name).critical(
|
|
||||||
"".join(traceback.format_exception_only(exc_type, value)),
|
|
||||||
**extra)
|
|
||||||
return logging_excepthook
|
|
||||||
|
|
||||||
|
|
||||||
class LogConfigError(Exception):
|
|
||||||
|
|
||||||
message = _('Error loading logging config %(log_config)s: %(err_msg)s')
|
|
||||||
|
|
||||||
def __init__(self, log_config, err_msg):
|
|
||||||
self.log_config = log_config
|
|
||||||
self.err_msg = err_msg
|
|
||||||
|
|
||||||
def __str__(self):
|
|
||||||
return self.message % dict(log_config=self.log_config,
|
|
||||||
err_msg=self.err_msg)
|
|
||||||
|
|
||||||
|
|
||||||
def _load_log_config(log_config_append):
|
|
||||||
try:
|
|
||||||
logging.config.fileConfig(log_config_append,
|
|
||||||
disable_existing_loggers=False)
|
|
||||||
except moves.configparser.Error as exc:
|
|
||||||
raise LogConfigError(log_config_append, str(exc))
|
|
||||||
|
|
||||||
|
|
||||||
def setup(product_name, version='unknown'):
|
|
||||||
"""Setup logging."""
|
|
||||||
if CONF.log_config_append:
|
|
||||||
_load_log_config(CONF.log_config_append)
|
|
||||||
else:
|
|
||||||
_setup_logging_from_conf(product_name, version)
|
|
||||||
sys.excepthook = _create_logging_excepthook(product_name)
|
|
||||||
|
|
||||||
|
|
||||||
def set_defaults(logging_context_format_string):
|
|
||||||
cfg.set_defaults(log_opts,
|
|
||||||
logging_context_format_string=
|
|
||||||
logging_context_format_string)
|
|
||||||
|
|
||||||
|
|
||||||
def _find_facility_from_conf():
|
|
||||||
facility_names = logging.handlers.SysLogHandler.facility_names
|
|
||||||
facility = getattr(logging.handlers.SysLogHandler,
|
|
||||||
CONF.syslog_log_facility,
|
|
||||||
None)
|
|
||||||
|
|
||||||
if facility is None and CONF.syslog_log_facility in facility_names:
|
|
||||||
facility = facility_names.get(CONF.syslog_log_facility)
|
|
||||||
|
|
||||||
if facility is None:
|
|
||||||
valid_facilities = facility_names.keys()
|
|
||||||
consts = ['LOG_AUTH', 'LOG_AUTHPRIV', 'LOG_CRON', 'LOG_DAEMON',
|
|
||||||
'LOG_FTP', 'LOG_KERN', 'LOG_LPR', 'LOG_MAIL', 'LOG_NEWS',
|
|
||||||
'LOG_AUTH', 'LOG_SYSLOG', 'LOG_USER', 'LOG_UUCP',
|
|
||||||
'LOG_LOCAL0', 'LOG_LOCAL1', 'LOG_LOCAL2', 'LOG_LOCAL3',
|
|
||||||
'LOG_LOCAL4', 'LOG_LOCAL5', 'LOG_LOCAL6', 'LOG_LOCAL7']
|
|
||||||
valid_facilities.extend(consts)
|
|
||||||
raise TypeError(_('syslog facility must be one of: %s') %
|
|
||||||
', '.join("'%s'" % fac
|
|
||||||
for fac in valid_facilities))
|
|
||||||
|
|
||||||
return facility
|
|
||||||
|
|
||||||
|
|
||||||
class RFCSysLogHandler(logging.handlers.SysLogHandler):
|
|
||||||
def __init__(self, *args, **kwargs):
|
|
||||||
self.binary_name = _get_binary_name()
|
|
||||||
super(RFCSysLogHandler, self).__init__(*args, **kwargs)
|
|
||||||
|
|
||||||
def format(self, record):
|
|
||||||
msg = super(RFCSysLogHandler, self).format(record)
|
|
||||||
msg = self.binary_name + ' ' + msg
|
|
||||||
return msg
|
|
||||||
|
|
||||||
|
|
||||||
def _setup_logging_from_conf(project, version):
|
|
||||||
log_root = getLogger(None).logger
|
|
||||||
for handler in log_root.handlers:
|
|
||||||
log_root.removeHandler(handler)
|
|
||||||
|
|
||||||
if CONF.use_syslog:
|
|
||||||
facility = _find_facility_from_conf()
|
|
||||||
# TODO(bogdando) use the format provided by RFCSysLogHandler
|
|
||||||
# after existing syslog format deprecation in J
|
|
||||||
if CONF.use_syslog_rfc_format:
|
|
||||||
syslog = RFCSysLogHandler(address='/dev/log',
|
|
||||||
facility=facility)
|
|
||||||
else:
|
|
||||||
syslog = logging.handlers.SysLogHandler(address='/dev/log',
|
|
||||||
facility=facility)
|
|
||||||
log_root.addHandler(syslog)
|
|
||||||
|
|
||||||
logpath = _get_log_file_path()
|
|
||||||
if logpath:
|
|
||||||
filelog = logging.handlers.WatchedFileHandler(logpath)
|
|
||||||
log_root.addHandler(filelog)
|
|
||||||
|
|
||||||
if CONF.use_stderr:
|
|
||||||
streamlog = ColorHandler()
|
|
||||||
log_root.addHandler(streamlog)
|
|
||||||
|
|
||||||
elif not logpath:
|
|
||||||
# pass sys.stdout as a positional argument
|
|
||||||
# python2.6 calls the argument strm, in 2.7 it's stream
|
|
||||||
streamlog = logging.StreamHandler(sys.stdout)
|
|
||||||
log_root.addHandler(streamlog)
|
|
||||||
|
|
||||||
if CONF.publish_errors:
|
|
||||||
handler = importutils.import_object(
|
|
||||||
"sticks.openstack.common.log_handler.PublishErrorsHandler",
|
|
||||||
logging.ERROR)
|
|
||||||
log_root.addHandler(handler)
|
|
||||||
|
|
||||||
datefmt = CONF.log_date_format
|
|
||||||
for handler in log_root.handlers:
|
|
||||||
# NOTE(alaski): CONF.log_format overrides everything currently. This
|
|
||||||
# should be deprecated in favor of context aware formatting.
|
|
||||||
if CONF.log_format:
|
|
||||||
handler.setFormatter(logging.Formatter(fmt=CONF.log_format,
|
|
||||||
datefmt=datefmt))
|
|
||||||
log_root.info('Deprecated: log_format is now deprecated and will '
|
|
||||||
'be removed in the next release')
|
|
||||||
else:
|
|
||||||
handler.setFormatter(ContextFormatter(project=project,
|
|
||||||
version=version,
|
|
||||||
datefmt=datefmt))
|
|
||||||
|
|
||||||
if CONF.debug:
|
|
||||||
log_root.setLevel(logging.DEBUG)
|
|
||||||
elif CONF.verbose:
|
|
||||||
log_root.setLevel(logging.INFO)
|
|
||||||
else:
|
|
||||||
log_root.setLevel(logging.WARNING)
|
|
||||||
|
|
||||||
for pair in CONF.default_log_levels:
|
|
||||||
mod, _sep, level_name = pair.partition('=')
|
|
||||||
level = logging.getLevelName(level_name)
|
|
||||||
logger = logging.getLogger(mod)
|
|
||||||
logger.setLevel(level)
|
|
||||||
|
|
||||||
_loggers = {}
|
|
||||||
|
|
||||||
|
|
||||||
def getLogger(name='unknown', version='unknown'):
|
|
||||||
if name not in _loggers:
|
|
||||||
_loggers[name] = ContextAdapter(logging.getLogger(name),
|
|
||||||
name,
|
|
||||||
version)
|
|
||||||
return _loggers[name]
|
|
||||||
|
|
||||||
|
|
||||||
def getLazyLogger(name='unknown', version='unknown'):
|
|
||||||
"""Returns lazy logger.
|
|
||||||
|
|
||||||
Creates a pass-through logger that does not create the real logger
|
|
||||||
until it is really needed and delegates all calls to the real logger
|
|
||||||
once it is created.
|
|
||||||
"""
|
|
||||||
return LazyAdapter(name, version)
|
|
||||||
|
|
||||||
|
|
||||||
class WritableLogger(object):
|
|
||||||
"""A thin wrapper that responds to `write` and logs."""
|
|
||||||
|
|
||||||
def __init__(self, logger, level=logging.INFO):
|
|
||||||
self.logger = logger
|
|
||||||
self.level = level
|
|
||||||
|
|
||||||
def write(self, msg):
|
|
||||||
self.logger.log(self.level, msg.rstrip())
|
|
||||||
|
|
||||||
|
|
||||||
class ContextFormatter(logging.Formatter):
|
|
||||||
"""A context.RequestContext aware formatter configured through flags.
|
|
||||||
|
|
||||||
The flags used to set format strings are: logging_context_format_string
|
|
||||||
and logging_default_format_string. You can also specify
|
|
||||||
logging_debug_format_suffix to append extra formatting if the log level is
|
|
||||||
debug.
|
|
||||||
|
|
||||||
For information about what variables are available for the formatter see:
|
|
||||||
http://docs.python.org/library/logging.html#formatter
|
|
||||||
|
|
||||||
If available, uses the context value stored in TLS - local.store.context
|
|
||||||
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, *args, **kwargs):
|
|
||||||
"""Initialize ContextFormatter instance
|
|
||||||
|
|
||||||
Takes additional keyword arguments which can be used in the message
|
|
||||||
format string.
|
|
||||||
|
|
||||||
:keyword project: project name
|
|
||||||
:type project: string
|
|
||||||
:keyword version: project version
|
|
||||||
:type version: string
|
|
||||||
|
|
||||||
"""
|
|
||||||
|
|
||||||
self.project = kwargs.pop('project', 'unknown')
|
|
||||||
self.version = kwargs.pop('version', 'unknown')
|
|
||||||
|
|
||||||
logging.Formatter.__init__(self, *args, **kwargs)
|
|
||||||
|
|
||||||
def format(self, record):
|
|
||||||
"""Uses contextstring if request_id is set, otherwise default."""
|
|
||||||
|
|
||||||
# store project info
|
|
||||||
record.project = self.project
|
|
||||||
record.version = self.version
|
|
||||||
|
|
||||||
# store request info
|
|
||||||
context = getattr(local.store, 'context', None)
|
|
||||||
if context:
|
|
||||||
d = _dictify_context(context)
|
|
||||||
for k, v in d.items():
|
|
||||||
setattr(record, k, v)
|
|
||||||
|
|
||||||
# NOTE(sdague): default the fancier formatting params
|
|
||||||
# to an empty string so we don't throw an exception if
|
|
||||||
# they get used
|
|
||||||
for key in ('instance', 'color', 'user_identity'):
|
|
||||||
if key not in record.__dict__:
|
|
||||||
record.__dict__[key] = ''
|
|
||||||
|
|
||||||
if record.__dict__.get('request_id'):
|
|
||||||
self._fmt = CONF.logging_context_format_string
|
|
||||||
else:
|
|
||||||
self._fmt = CONF.logging_default_format_string
|
|
||||||
|
|
||||||
if (record.levelno == logging.DEBUG and
|
|
||||||
CONF.logging_debug_format_suffix):
|
|
||||||
self._fmt += " " + CONF.logging_debug_format_suffix
|
|
||||||
|
|
||||||
# Cache this on the record, Logger will respect our formatted copy
|
|
||||||
if record.exc_info:
|
|
||||||
record.exc_text = self.formatException(record.exc_info, record)
|
|
||||||
return logging.Formatter.format(self, record)
|
|
||||||
|
|
||||||
def formatException(self, exc_info, record=None):
|
|
||||||
"""Format exception output with CONF.logging_exception_prefix."""
|
|
||||||
if not record:
|
|
||||||
return logging.Formatter.formatException(self, exc_info)
|
|
||||||
|
|
||||||
stringbuffer = moves.StringIO()
|
|
||||||
traceback.print_exception(exc_info[0], exc_info[1], exc_info[2],
|
|
||||||
None, stringbuffer)
|
|
||||||
lines = stringbuffer.getvalue().split('\n')
|
|
||||||
stringbuffer.close()
|
|
||||||
|
|
||||||
if CONF.logging_exception_prefix.find('%(asctime)') != -1:
|
|
||||||
record.asctime = self.formatTime(record, self.datefmt)
|
|
||||||
|
|
||||||
formatted_lines = []
|
|
||||||
for line in lines:
|
|
||||||
pl = CONF.logging_exception_prefix % record.__dict__
|
|
||||||
fl = '%s%s' % (pl, line)
|
|
||||||
formatted_lines.append(fl)
|
|
||||||
return '\n'.join(formatted_lines)
|
|
||||||
|
|
||||||
|
|
||||||
class ColorHandler(logging.StreamHandler):
|
|
||||||
LEVEL_COLORS = {
|
|
||||||
logging.DEBUG: '\033[00;32m', # GREEN
|
|
||||||
logging.INFO: '\033[00;36m', # CYAN
|
|
||||||
logging.AUDIT: '\033[01;36m', # BOLD CYAN
|
|
||||||
logging.WARN: '\033[01;33m', # BOLD YELLOW
|
|
||||||
logging.ERROR: '\033[01;31m', # BOLD RED
|
|
||||||
logging.CRITICAL: '\033[01;31m', # BOLD RED
|
|
||||||
}
|
|
||||||
|
|
||||||
def format(self, record):
|
|
||||||
record.color = self.LEVEL_COLORS[record.levelno]
|
|
||||||
return logging.StreamHandler.format(self, record)
|
|
||||||
|
|
||||||
|
|
||||||
class DeprecatedConfig(Exception):
|
|
||||||
message = _("Fatal call to deprecated config: %(msg)s")
|
|
||||||
|
|
||||||
def __init__(self, msg):
|
|
||||||
super(Exception, self).__init__(self.message % dict(msg=msg))
|
|
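The deleted `mask_password` above depends on the module-level `_SANITIZE_KEYS` and `_SANITIZE_PATTERNS` tables defined earlier in `log.py`. As a self-contained sketch of the same regex-substitution approach, the key list and patterns below are illustrative assumptions, not the module's actual (more thorough) tables:

```python
import re

# Hypothetical key list and patterns, modeled on _SANITIZE_KEYS /
# _SANITIZE_PATTERNS from the removed log.py.
_KEYS = ['adminPass', 'admin_pass', 'password']
_PATTERNS = [re.compile(r"(%(key)s['\"]?\s*[=:]\s*['\"])[^'\"]*(['\"])"
                        % {'key': key}) for key in _KEYS]


def mask_password(message, secret='***'):
    # Cheap containment check first: skip the regex passes entirely
    # when no sensitive key appears in the message.
    if not any(key in message for key in _KEYS):
        return message
    # Keep the surrounding quotes/delimiters, replace only the value.
    replacement = r'\g<1>' + secret + r'\g<2>'
    for pattern in _PATTERNS:
        message = pattern.sub(replacement, message)
    return message
```

The group-reference trick (`\g<1>` + secret + `\g<2>`) is the same one the original uses, so quoting style is preserved in the masked output.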
@ -1,145 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import sys

from eventlet import event
from eventlet import greenthread

from sticks.openstack.common.gettextutils import _LE, _LW
from sticks.openstack.common import log as logging
from sticks.openstack.common import timeutils

LOG = logging.getLogger(__name__)


class LoopingCallDone(Exception):
    """Exception to break out and stop a LoopingCall.

    The poll-function passed to LoopingCall can raise this exception to
    break out of the loop normally. This is somewhat analogous to
    StopIteration.

    An optional return-value can be included as the argument to the exception;
    this return-value will be returned by LoopingCall.wait()

    """

    def __init__(self, retvalue=True):
        """:param retvalue: Value that LoopingCall.wait() should return."""
        self.retvalue = retvalue


class LoopingCallBase(object):
    def __init__(self, f=None, *args, **kw):
        self.args = args
        self.kw = kw
        self.f = f
        self._running = False
        self.done = None

    def stop(self):
        self._running = False

    def wait(self):
        return self.done.wait()


class FixedIntervalLoopingCall(LoopingCallBase):
    """A fixed interval looping call."""

    def start(self, interval, initial_delay=None):
        self._running = True
        done = event.Event()

        def _inner():
            if initial_delay:
                greenthread.sleep(initial_delay)

            try:
                while self._running:
                    start = timeutils.utcnow()
                    self.f(*self.args, **self.kw)
                    end = timeutils.utcnow()
                    if not self._running:
                        break
                    delay = interval - timeutils.delta_seconds(start, end)
                    if delay <= 0:
                        LOG.warn(_LW('task run outlasted interval by %s sec') %
                                 -delay)
                    greenthread.sleep(delay if delay > 0 else 0)
            except LoopingCallDone as e:
                self.stop()
                done.send(e.retvalue)
            except Exception:
                LOG.exception(_LE('in fixed duration looping call'))
                done.send_exception(*sys.exc_info())
                return
            else:
                done.send(True)

        self.done = done

        greenthread.spawn_n(_inner)
        return self.done


# TODO(mikal): this class name is deprecated in Havana and should be removed
# in the I release
LoopingCall = FixedIntervalLoopingCall


class DynamicLoopingCall(LoopingCallBase):
    """A looping call which sleeps until the next known event.

    The function called should return how long to sleep for before being
    called again.
    """

    def start(self, initial_delay=None, periodic_interval_max=None):
        self._running = True
        done = event.Event()

        def _inner():
            if initial_delay:
                greenthread.sleep(initial_delay)

            try:
                while self._running:
                    idle = self.f(*self.args, **self.kw)
                    if not self._running:
                        break

                    if periodic_interval_max is not None:
                        idle = min(idle, periodic_interval_max)
                    LOG.debug('Dynamic looping call sleeping for %.02f '
                              'seconds', idle)
                    greenthread.sleep(idle)
            except LoopingCallDone as e:
                self.stop()
                done.send(e.retvalue)
            except Exception:
                LOG.exception(_LE('in dynamic looping call'))
                done.send_exception(*sys.exc_info())
                return
            else:
                done.send(True)

        self.done = done

        greenthread.spawn(_inner)
        return self.done
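The scheduling logic of `FixedIntervalLoopingCall` (run the function, subtract its runtime from the interval, sleep the remainder, stop on `LoopingCallDone`) can be sketched without eventlet. This is a plain synchronous sketch of that logic only; the real class runs `_inner` in a green thread and hands the result back through an `event.Event` so `wait()` can retrieve it:

```python
import time


class LoopingCallDone(Exception):
    """Raised by the looped function to stop the loop normally
    (mirrors the exception defined in the removed module)."""

    def __init__(self, retvalue=True):
        self.retvalue = retvalue


def fixed_interval_loop(f, interval):
    """Call f() every `interval` seconds until it raises LoopingCallDone,
    then return the exception's retvalue."""
    while True:
        start = time.time()
        try:
            f()
        except LoopingCallDone as e:
            return e.retvalue
        # Sleep only for whatever part of the interval is left after
        # f() ran; never sleep a negative amount.
        delay = interval - (time.time() - start)
        time.sleep(delay if delay > 0 else 0)
```

This also shows why a task that outlasts its interval triggers the `task run outlasted interval` warning in the original: `delay` goes non-positive and the loop re-runs immediately.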
@ -1,897 +0,0 @@
# Copyright (c) 2012 OpenStack Foundation.
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

"""
Common Policy Engine Implementation

Policies can be expressed in one of two forms: A list of lists, or a
string written in the new policy language.

In the list-of-lists representation, each check inside the innermost
list is combined as with an "and" conjunction--for that check to pass,
all the specified checks must pass.  These innermost lists are then
combined as with an "or" conjunction.  This is the original way of
expressing policies, but there now exists a new way: the policy
language.

In the policy language, each check is specified the same way as in the
list-of-lists representation: a simple "a:b" pair that is matched to
the correct code to perform that check.  However, conjunction
operators are available, allowing for more expressiveness in crafting
policies.

As an example, take the following rule, expressed in the list-of-lists
representation::

    [["role:admin"], ["project_id:%(project_id)s", "role:projectadmin"]]

In the policy language, this becomes::

    role:admin or (project_id:%(project_id)s and role:projectadmin)

The policy language also has the "not" operator, allowing a richer
policy rule::

    project_id:%(project_id)s and not role:dunce

It is possible to perform policy checks on the following user
attributes (obtained through the token): user_id, domain_id or
project_id::

    domain_id:<some_value>

Attributes sent along with API calls can be used by the policy engine
(on the right side of the expression), by using the following syntax::

    <some_value>:user.id

Contextual attributes of objects identified by their IDs are loaded
from the database. They are also available to the policy engine and
can be checked through the `target` keyword::

    <some_value>:target.role.name

All these attributes (related to users, API calls, and context) can be
checked against each other or against constants, be it literals (True,
<a_number>) or strings.

Finally, two special policy checks should be mentioned; the policy
check "@" will always accept an access, and the policy check "!" will
always reject an access.  (Note that if a rule is either the empty
list ("[]") or the empty string, this is equivalent to the "@" policy
check.)  Of these, the "!" policy check is probably the most useful,
as it allows particular rules to be explicitly disabled.
"""
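The docstring's list-of-lists semantics (checks inside an inner list are AND-ed; the inner lists are OR-ed) can be shown with a toy evaluator. This sketch only handles `role:<name>` checks against a plain creds dict, a deliberate simplification of the module's real check classes:

```python
def check_list_of_lists(rule, creds):
    """Evaluate a rule in list-of-lists form against creds.

    Inner lists are AND conjunctions; the outer list is an OR.
    Only "role:<name>" checks are understood in this toy version.
    """
    def one_check(check):
        kind, _, value = check.partition(':')
        if kind == 'role':
            return value in creds.get('roles', [])
        return False  # unknown check kinds fail closed

    # An empty rule ([]) is equivalent to the "@" check: always allow.
    if not rule:
        return True
    return any(all(one_check(c) for c in inner) for inner in rule)
```

For the docstring's example rule, a user with only the `projectadmin` role passes via the second inner list even though the first (`role:admin`) fails.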
import abc
import ast
import re

from oslo.config import cfg
import six
import six.moves.urllib.parse as urlparse
import six.moves.urllib.request as urlrequest

from sticks.openstack.common import fileutils
from sticks.openstack.common.gettextutils import _, _LE
from sticks.openstack.common import jsonutils
from sticks.openstack.common import log as logging


policy_opts = [
    cfg.StrOpt('policy_file',
               default='policy.json',
               help=_('JSON file containing policy')),
    cfg.StrOpt('policy_default_rule',
               default='default',
               help=_('Rule enforced when requested rule is not found')),
]

CONF = cfg.CONF
CONF.register_opts(policy_opts)

LOG = logging.getLogger(__name__)

_checks = {}


class PolicyNotAuthorized(Exception):

    def __init__(self, rule):
        msg = _("Policy doesn't allow %s to be performed.") % rule
        super(PolicyNotAuthorized, self).__init__(msg)


class Rules(dict):
    """A store for rules. Handles the default_rule setting directly."""

    @classmethod
    def load_json(cls, data, default_rule=None):
        """Allow loading of JSON rule data."""

        # Suck in the JSON data and parse the rules
        rules = dict((k, parse_rule(v)) for k, v in
                     jsonutils.loads(data).items())

        return cls(rules, default_rule)

    def __init__(self, rules=None, default_rule=None):
        """Initialize the Rules store."""

        super(Rules, self).__init__(rules or {})
        self.default_rule = default_rule

    def __missing__(self, key):
        """Implements the default rule handling."""

        if isinstance(self.default_rule, dict):
            raise KeyError(key)

        # If the default rule isn't actually defined, do something
        # reasonably intelligent
        if not self.default_rule:
            raise KeyError(key)

        if isinstance(self.default_rule, BaseCheck):
            return self.default_rule

        # We need to check this or we can get infinite recursion
        if self.default_rule not in self:
            raise KeyError(key)

        elif isinstance(self.default_rule, six.string_types):
            return self[self.default_rule]

    def __str__(self):
        """Dumps a string representation of the rules."""

        # Start by building the canonical strings for the rules
        out_rules = {}
        for key, value in self.items():
            # Use empty string for singleton TrueCheck instances
            if isinstance(value, TrueCheck):
                out_rules[key] = ''
            else:
                out_rules[key] = str(value)

        # Dump a pretty-printed JSON representation
        return jsonutils.dumps(out_rules, indent=4)


class Enforcer(object):
    """Responsible for loading and enforcing rules.

    :param policy_file: Custom policy file to use, if none is
                        specified, `CONF.policy_file` will be
                        used.
    :param rules: Default dictionary / Rules to use. It will be
                  considered just in the first instantiation. If
                  `load_rules(True)`, `clear()` or `set_rules(True)`
                  is called this will be overwritten.
    :param default_rule: Default rule to use, CONF.default_rule will
                         be used if none is specified.
    :param use_conf: Whether to load rules from cache or config file.
    """

    def __init__(self, policy_file=None, rules=None,
                 default_rule=None, use_conf=True):
        self.rules = Rules(rules, default_rule)
        self.default_rule = default_rule or CONF.policy_default_rule

        self.policy_path = None
        self.policy_file = policy_file or CONF.policy_file
        self.use_conf = use_conf

    def set_rules(self, rules, overwrite=True, use_conf=False):
        """Create a new Rules object based on the provided dict of rules.

        :param rules: New rules to use. It should be an instance of dict.
        :param overwrite: Whether to overwrite current rules or update them
                          with the new rules.
        :param use_conf: Whether to reload rules from cache or config file.
        """

        if not isinstance(rules, dict):
            raise TypeError(_("Rules must be an instance of dict or Rules, "
                              "got %s instead") % type(rules))
        self.use_conf = use_conf
        if overwrite:
            self.rules = Rules(rules, self.default_rule)
        else:
            self.rules.update(rules)

    def clear(self):
        """Clears Enforcer rules, policy's cache and policy's path."""
        self.set_rules({})
        self.default_rule = None
        self.policy_path = None

    def load_rules(self, force_reload=False):
        """Loads policy_path's rules.

        Policy file is cached and will be reloaded if modified.

        :param force_reload: Whether to overwrite current rules.
        """

        if force_reload:
            self.use_conf = force_reload

        if self.use_conf:
            if not self.policy_path:
                self.policy_path = self._get_policy_path()

            reloaded, data = fileutils.read_cached_file(
                self.policy_path, force_reload=force_reload)
            if reloaded or not self.rules:
                rules = Rules.load_json(data, self.default_rule)
                self.set_rules(rules)
                LOG.debug("Rules successfully reloaded")

    def _get_policy_path(self):
        """Locate the policy json data file.

        :param policy_file: Custom policy file to locate.

        :returns: The policy path

        :raises: ConfigFilesNotFoundError if the file couldn't
                 be located.
        """
        policy_file = CONF.find_file(self.policy_file)

        if policy_file:
            return policy_file

        raise cfg.ConfigFilesNotFoundError((self.policy_file,))

    def enforce(self, rule, target, creds, do_raise=False,
                exc=None, *args, **kwargs):
        """Checks authorization of a rule against the target and credentials.

        :param rule: A string or BaseCheck instance specifying the rule
                     to evaluate.
        :param target: As much information about the object being operated
                       on as possible, as a dictionary.
        :param creds: As much information about the user performing the
                      action as possible, as a dictionary.
        :param do_raise: Whether to raise an exception or not if check
                         fails.
        :param exc: Class of the exception to raise if the check fails.
                    Any remaining arguments passed to check() (both
                    positional and keyword arguments) will be passed to
                    the exception class. If not specified, PolicyNotAuthorized
                    will be used.

        :return: Returns False if the policy does not allow the action and
                 exc is not provided; otherwise, returns a value that
                 evaluates to True.  Note: for rules using the "case"
                 expression, this True value will be the specified string
                 from the expression.
        """

        # NOTE(flaper87): Not logging target or creds to avoid
        # potential security issues.
        LOG.debug("Rule %s will be now enforced" % rule)

        self.load_rules()

        # Allow the rule to be a Check tree
        if isinstance(rule, BaseCheck):
            result = rule(target, creds, self)
        elif not self.rules:
            # No rules to reference means we're going to fail closed
            result = False
        else:
            try:
                # Evaluate the rule
                result = self.rules[rule](target, creds, self)
            except KeyError:
                LOG.debug("Rule [%s] doesn't exist" % rule)
                # If the rule doesn't exist, fail closed
                result = False

        # If it is False, raise the exception if requested
        if do_raise and not result:
            if exc:
                raise exc(*args, **kwargs)
|
|
||||||
|
|
||||||
raise PolicyNotAuthorized(rule)
|
|
||||||
|
|
||||||
return result
|
|
||||||
|
|
||||||
|
|
||||||
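The `enforce()` method above fails closed at every branch: no rules loaded, or an unknown rule name, both evaluate to False. A minimal standalone sketch of that behaviour (the `enforce` helper and the lambda-based rule table here are hypothetical stand-ins, not the real Enforcer/Rules classes):

```python
def enforce(rules, rule, target, creds):
    # No rules to reference: fail closed, as in Enforcer.enforce()
    if not rules:
        return False
    try:
        # Evaluate the named rule against target/creds
        return rules[rule](target, creds)
    except KeyError:
        # Unknown rule name: fail closed
        return False

# Hypothetical rule table; real rules are Check trees, not lambdas
rules = {'admin_only': lambda target, creds: 'admin' in creds.get('roles', [])}

print(enforce(rules, 'admin_only', {}, {'roles': ['admin']}))    # True
print(enforce(rules, 'no_such_rule', {}, {'roles': ['admin']}))  # False
print(enforce({}, 'admin_only', {}, {'roles': ['admin']}))       # False
```

Only a matching, defined rule that itself accepts produces a true value; every other path denies.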
@six.add_metaclass(abc.ABCMeta)
class BaseCheck(object):
    """Abstract base class for Check classes."""

    @abc.abstractmethod
    def __str__(self):
        """String representation of the Check tree rooted at this node."""

        pass

    @abc.abstractmethod
    def __call__(self, target, cred, enforcer):
        """Triggers if instance of the class is called.

        Performs the check. Returns False to reject the access or a
        true value (not necessarily True) to accept the access.
        """

        pass


class FalseCheck(BaseCheck):
    """A policy check that always returns False (disallow)."""

    def __str__(self):
        """Return a string representation of this check."""

        return "!"

    def __call__(self, target, cred, enforcer):
        """Check the policy."""

        return False


class TrueCheck(BaseCheck):
    """A policy check that always returns True (allow)."""

    def __str__(self):
        """Return a string representation of this check."""

        return "@"

    def __call__(self, target, cred, enforcer):
        """Check the policy."""

        return True

class Check(BaseCheck):
    """A base class to allow for user-defined policy checks."""

    def __init__(self, kind, match):
        """Initializes a Check instance.

        :param kind: The kind of the check, i.e., the field before the
                     ':'.
        :param match: The match of the check, i.e., the field after
                      the ':'.
        """

        self.kind = kind
        self.match = match

    def __str__(self):
        """Return a string representation of this check."""

        return "%s:%s" % (self.kind, self.match)

class NotCheck(BaseCheck):
    """Implements the "not" logical operator.

    A policy check that inverts the result of another policy check.
    """

    def __init__(self, rule):
        """Initialize the 'not' check.

        :param rule: The rule to negate.  Must be a Check.
        """

        self.rule = rule

    def __str__(self):
        """Return a string representation of this check."""

        return "not %s" % self.rule

    def __call__(self, target, cred, enforcer):
        """Check the policy.

        Returns the logical inverse of the wrapped check.
        """

        return not self.rule(target, cred, enforcer)

class AndCheck(BaseCheck):
    """Implements the "and" logical operator.

    A policy check that requires that a list of other checks all return True.
    """

    def __init__(self, rules):
        """Initialize the 'and' check.

        :param rules: A list of rules that will be tested.
        """

        self.rules = rules

    def __str__(self):
        """Return a string representation of this check."""

        return "(%s)" % ' and '.join(str(r) for r in self.rules)

    def __call__(self, target, cred, enforcer):
        """Check the policy.

        Requires that all rules accept in order to return True.
        """

        for rule in self.rules:
            if not rule(target, cred, enforcer):
                return False

        return True

    def add_check(self, rule):
        """Adds rule to be tested.

        Allows addition of another rule to the list of rules that will
        be tested.  Returns the AndCheck object for convenience.
        """

        self.rules.append(rule)
        return self

class OrCheck(BaseCheck):
    """Implements the "or" operator.

    A policy check that requires that at least one of a list of other
    checks returns True.
    """

    def __init__(self, rules):
        """Initialize the 'or' check.

        :param rules: A list of rules that will be tested.
        """

        self.rules = rules

    def __str__(self):
        """Return a string representation of this check."""

        return "(%s)" % ' or '.join(str(r) for r in self.rules)

    def __call__(self, target, cred, enforcer):
        """Check the policy.

        Requires that at least one rule accept in order to return True.
        """

        for rule in self.rules:
            if rule(target, cred, enforcer):
                return True
        return False

    def add_check(self, rule):
        """Adds rule to be tested.

        Allows addition of another rule to the list of rules that will
        be tested.  Returns the OrCheck object for convenience.
        """

        self.rules.append(rule)
        return self

def _parse_check(rule):
    """Parse a single base check rule into an appropriate Check object."""

    # Handle the special checks
    if rule == '!':
        return FalseCheck()
    elif rule == '@':
        return TrueCheck()

    try:
        kind, match = rule.split(':', 1)
    except Exception:
        LOG.exception(_LE("Failed to understand rule %s") % rule)
        # If the rule is invalid, we'll fail closed
        return FalseCheck()

    # Find what implements the check
    if kind in _checks:
        return _checks[kind](kind, match)
    elif None in _checks:
        return _checks[None](kind, match)
    else:
        LOG.error(_LE("No handler for matches of kind %s") % kind)
        return FalseCheck()

def _parse_list_rule(rule):
    """Translates the old list-of-lists syntax into a tree of Check objects.

    Provided for backwards compatibility.
    """

    # Empty rule defaults to True
    if not rule:
        return TrueCheck()

    # Outer list is joined by "or"; inner list by "and"
    or_list = []
    for inner_rule in rule:
        # Elide empty inner lists
        if not inner_rule:
            continue

        # Handle bare strings
        if isinstance(inner_rule, six.string_types):
            inner_rule = [inner_rule]

        # Parse the inner rules into Check objects
        and_list = [_parse_check(r) for r in inner_rule]

        # Append the appropriate check to the or_list
        if len(and_list) == 1:
            or_list.append(and_list[0])
        else:
            or_list.append(AndCheck(and_list))

    # If we have only one check, omit the "or"
    if not or_list:
        return FalseCheck()
    elif len(or_list) == 1:
        return or_list[0]

    return OrCheck(or_list)


# Used for tokenizing the policy language
_tokenize_re = re.compile(r'\s+')

def _parse_tokenize(rule):
    """Tokenizer for the policy language.

    Most of the single-character tokens are specified in the
    _tokenize_re; however, parentheses need to be handled specially,
    because they can appear inside a check string.  Thankfully, those
    parentheses that appear inside a check string can never occur at
    the very beginning or end ("%(variable)s" is the correct syntax).
    """

    for tok in _tokenize_re.split(rule):
        # Skip empty tokens
        if not tok or tok.isspace():
            continue

        # Handle leading parens on the token
        clean = tok.lstrip('(')
        for i in range(len(tok) - len(clean)):
            yield '(', '('

        # If it was only parentheses, continue
        if not clean:
            continue
        else:
            tok = clean

        # Handle trailing parens on the token
        clean = tok.rstrip(')')
        trail = len(tok) - len(clean)

        # Yield the cleaned token
        lowered = clean.lower()
        if lowered in ('and', 'or', 'not'):
            # Special tokens
            yield lowered, clean
        elif clean:
            # Not a special token, but not composed solely of ')'
            if len(tok) >= 2 and ((tok[0], tok[-1]) in
                                  [('"', '"'), ("'", "'")]):
                # It's a quoted string
                yield 'string', tok[1:-1]
            else:
                yield 'check', _parse_check(clean)

        # Yield the trailing parens
        for i in range(trail):
            yield ')', ')'

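The paren-peeling in `_parse_tokenize` can be exercised end to end with a standalone copy of the loop (this sketch yields the raw check string instead of calling `_parse_check`, so it has no dependency on the rest of the module):

```python
import re

_tokenize_re = re.compile(r'\s+')

def tokenize(rule):
    # Standalone copy of the _parse_tokenize loop, yielding the raw
    # check string instead of a Check object.
    for tok in _tokenize_re.split(rule):
        if not tok or tok.isspace():
            continue
        # Peel leading parens off the token, one '(' token each
        clean = tok.lstrip('(')
        for _ in range(len(tok) - len(clean)):
            yield ('(', '(')
        if not clean:
            continue
        tok = clean
        # Peel trailing parens, but yield them after the token itself
        clean = tok.rstrip(')')
        trail = len(tok) - len(clean)
        lowered = clean.lower()
        if lowered in ('and', 'or', 'not'):
            yield (lowered, clean)
        elif clean:
            if len(tok) >= 2 and (tok[0], tok[-1]) in [('"', '"'), ("'", "'")]:
                yield ('string', tok[1:-1])
            else:
                yield ('check', clean)
        for _ in range(trail):
            yield (')', ')')

tokens = list(tokenize('(role:admin or rule:owner) and not role:banned'))
# [('(', '('), ('check', 'role:admin'), ('or', 'or'), ('check', 'rule:owner'),
#  (')', ')'), ('and', 'and'), ('not', 'not'), ('check', 'role:banned')]
```

Note how `(role:admin` becomes two tokens: the paren is split off even though whitespace never separated it from the check.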
class ParseStateMeta(type):
    """Metaclass for the ParseState class.

    Facilitates identifying reduction methods.
    """

    def __new__(mcs, name, bases, cls_dict):
        """Create the class.

        Injects the 'reducers' list, a list of tuples matching token sequences
        to the names of the corresponding reduction methods.
        """

        reducers = []

        for key, value in cls_dict.items():
            if not hasattr(value, 'reducers'):
                continue
            for reduction in value.reducers:
                reducers.append((reduction, key))

        cls_dict['reducers'] = reducers

        return super(ParseStateMeta, mcs).__new__(mcs, name, bases, cls_dict)

def reducer(*tokens):
    """Decorator for reduction methods.

    Arguments are a sequence of tokens, in order, which should trigger running
    this reduction method.
    """

    def decorator(func):
        # Make sure we have a list of reducer sequences
        if not hasattr(func, 'reducers'):
            func.reducers = []

        # Add the tokens to the list of reducer sequences
        func.reducers.append(list(tokens))

        return func

    return decorator

@six.add_metaclass(ParseStateMeta)
class ParseState(object):
    """Implement the core of parsing the policy language.

    Uses a greedy reduction algorithm to reduce a sequence of tokens into
    a single terminal, the value of which will be the root of the Check tree.

    Note: error reporting is rather lacking.  The best we can get with
    this parser formulation is an overall "parse failed" error.
    Fortunately, the policy language is simple enough that this
    shouldn't be that big a problem.
    """

    def __init__(self):
        """Initialize the ParseState."""

        self.tokens = []
        self.values = []

    def reduce(self):
        """Perform a greedy reduction of the token stream.

        If a reducer method matches, it will be executed, then the
        reduce() method will be called recursively to search for any more
        possible reductions.
        """

        for reduction, methname in self.reducers:
            if (len(self.tokens) >= len(reduction) and
                    self.tokens[-len(reduction):] == reduction):
                # Get the reduction method
                meth = getattr(self, methname)

                # Reduce the token stream
                results = meth(*self.values[-len(reduction):])

                # Update the tokens and values
                self.tokens[-len(reduction):] = [r[0] for r in results]
                self.values[-len(reduction):] = [r[1] for r in results]

                # Check for any more reductions
                return self.reduce()

    def shift(self, tok, value):
        """Adds one more token to the state.  Calls reduce()."""

        self.tokens.append(tok)
        self.values.append(value)

        # Do a greedy reduce...
        self.reduce()

    @property
    def result(self):
        """Obtain the final result of the parse.

        Raises ValueError if the parse failed to reduce to a single result.
        """

        if len(self.values) != 1:
            raise ValueError("Could not parse rule")
        return self.values[0]

    @reducer('(', 'check', ')')
    @reducer('(', 'and_expr', ')')
    @reducer('(', 'or_expr', ')')
    def _wrap_check(self, _p1, check, _p2):
        """Turn parenthesized expressions into a 'check' token."""

        return [('check', check)]

    @reducer('check', 'and', 'check')
    def _make_and_expr(self, check1, _and, check2):
        """Create an 'and_expr'.

        Join two checks by the 'and' operator.
        """

        return [('and_expr', AndCheck([check1, check2]))]

    @reducer('and_expr', 'and', 'check')
    def _extend_and_expr(self, and_expr, _and, check):
        """Extend an 'and_expr' by adding one more check."""

        return [('and_expr', and_expr.add_check(check))]

    @reducer('check', 'or', 'check')
    def _make_or_expr(self, check1, _or, check2):
        """Create an 'or_expr'.

        Join two checks by the 'or' operator.
        """

        return [('or_expr', OrCheck([check1, check2]))]

    @reducer('or_expr', 'or', 'check')
    def _extend_or_expr(self, or_expr, _or, check):
        """Extend an 'or_expr' by adding one more check."""

        return [('or_expr', or_expr.add_check(check))]

    @reducer('not', 'check')
    def _make_not_expr(self, _not, check):
        """Invert the result of another check."""

        return [('check', NotCheck(check))]

def _parse_text_rule(rule):
    """Parses a policy into a tree.

    Translates a policy written in the policy language into a tree of
    Check objects.
    """

    # Empty rule means always accept
    if not rule:
        return TrueCheck()

    # Parse the token stream
    state = ParseState()
    for tok, value in _parse_tokenize(rule):
        state.shift(tok, value)

    try:
        return state.result
    except ValueError:
        # Couldn't parse the rule
        LOG.exception(_LE("Failed to understand rule %r") % rule)

        # Fail closed
        return FalseCheck()

def parse_rule(rule):
    """Parses a policy rule into a tree of Check objects."""

    # If the rule is a string, it's in the policy language
    if isinstance(rule, six.string_types):
        return _parse_text_rule(rule)
    return _parse_list_rule(rule)

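How the two syntaxes accepted by `parse_rule()` relate can be illustrated with a small hypothetical translator that renders the legacy list-of-lists form in the text policy language, following `_parse_list_rule`'s OR-of-ANDs semantics (`list_rule_to_text` is an illustration only, not part of the module):

```python
def list_rule_to_text(rule):
    # Mirrors _parse_list_rule's semantics: the outer list is OR'd
    # together, each inner list is AND'd, and bare strings count as
    # one-element inner lists.
    if not rule:
        return '@'      # empty rule: always allow (TrueCheck)
    or_parts = []
    for inner in rule:
        if not inner:
            continue    # elide empty inner lists
        if isinstance(inner, str):
            inner = [inner]
        if len(inner) == 1:
            or_parts.append(inner[0])
        else:
            or_parts.append('(' + ' and '.join(inner) + ')')
    if not or_parts:
        return '!'      # nothing survived: deny (FalseCheck)
    return ' or '.join(or_parts)

print(list_rule_to_text([['role:admin'],
                         ['project_id:%(project_id)s', 'role:member']]))
# role:admin or (project_id:%(project_id)s and role:member)
```

The `'@'`/`'!'` degenerate cases match the TrueCheck/FalseCheck returns in `_parse_list_rule`.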
def register(name, func=None):
    """Register a function or Check class as a policy check.

    :param name: Gives the name of the check type, e.g., 'rule',
                 'role', etc.  If name is None, a default check type
                 will be registered.
    :param func: If given, provides the function or class to register.
                 If not given, returns a function taking one argument
                 to specify the function or class to register,
                 allowing use as a decorator.
    """

    # Perform the actual decoration by registering the function or
    # class.  Returns the function or class for compliance with the
    # decorator interface.
    def decorator(func):
        _checks[name] = func
        return func

    # If the function or class is given, do the registration
    if func:
        return decorator(func)

    return decorator

@register("rule")
class RuleCheck(Check):
    def __call__(self, target, creds, enforcer):
        """Recursively checks credentials based on the defined rules."""

        try:
            return enforcer.rules[self.match](target, creds, enforcer)
        except KeyError:
            # We don't have any matching rule; fail closed
            return False

@register("role")
class RoleCheck(Check):
    def __call__(self, target, creds, enforcer):
        """Check that there is a matching role in the cred dict."""

        return self.match.lower() in [x.lower() for x in creds['roles']]

@register('http')
class HttpCheck(Check):
    def __call__(self, target, creds, enforcer):
        """Check http: rules by calling to a remote server.

        This example implementation simply verifies that the response
        is exactly 'True'.
        """

        url = ('http:' + self.match) % target
        data = {'target': jsonutils.dumps(target),
                'credentials': jsonutils.dumps(creds)}
        post_data = urlparse.urlencode(data)
        f = urlrequest.urlopen(url, post_data)
        return f.read() == "True"

@register(None)
class GenericCheck(Check):
    def __call__(self, target, creds, enforcer):
        """Check an individual match.

        Matches look like:

            tenant:%(tenant_id)s
            role:compute:admin
            True:%(user.enabled)s
            'Member':%(role.name)s
        """

        # TODO(termie): do dict inspection via dot syntax
        try:
            match = self.match % target
        except KeyError:
            # If the key is not present in the target, fail closed
            return False

        try:
            # Try to interpret self.kind as a literal
            leftval = ast.literal_eval(self.kind)
        except ValueError:
            try:
                leftval = creds[self.kind]
            except KeyError:
                return False
        return match == six.text_type(leftval)
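GenericCheck's matching can be exercised with a standalone sketch of its `__call__` (with `str()` standing in for `six.text_type`; `generic_check` is a hypothetical helper, not part of the module):

```python
import ast

def generic_check(kind, match, target, creds):
    # Standalone sketch of GenericCheck.__call__.
    try:
        # Substitute the target into the match, e.g. '%(tenant_id)s'
        match = match % target
    except KeyError:
        # Key missing from the target: fail closed
        return False
    try:
        # The left side may be a literal, as in 'True:%(user.enabled)s'
        leftval = ast.literal_eval(kind)
    except ValueError:
        try:
            # Otherwise it names a credentials key
            leftval = creds[kind]
        except KeyError:
            return False
    return match == str(leftval)

print(generic_check('tenant_id', '%(tenant_id)s',
                    {'tenant_id': 't1'}, {'tenant_id': 't1'}))  # True
print(generic_check('True', '%(enabled)s', {'enabled': True}, {}))  # True
print(generic_check('tenant_id', '%(tenant_id)s',
                    {}, {'tenant_id': 't1'}))  # False
```

The comparison is always string-to-string, which is why `True:%(user.enabled)s` works: both sides render as `'True'`.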
@@ -1,504 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Generic Node base class for all workers that run on hosts."""

import errno
import logging as std_logging
import os
import random
import signal
import sys
import time

try:
    # Importing just the symbol here because the io module does not
    # exist in Python 2.6.
    from io import UnsupportedOperation  # noqa
except ImportError:
    # Python 2.6
    UnsupportedOperation = None

import eventlet
from eventlet import event
from oslo.config import cfg

from sticks.openstack.common import eventlet_backdoor
from sticks.openstack.common.gettextutils import _LE, _LI, _LW
from sticks.openstack.common import importutils
from sticks.openstack.common import log as logging
from sticks.openstack.common import systemd
from sticks.openstack.common import threadgroup


rpc = importutils.try_import('sticks.openstack.common.rpc')
CONF = cfg.CONF
LOG = logging.getLogger(__name__)

def _sighup_supported():
    return hasattr(signal, 'SIGHUP')


def _is_daemon():
    # The process group for a foreground process will match the
    # process group of the controlling terminal. If those values do
    # not match, or ioctl() fails on the stdout file handle, we assume
    # the process is running in the background as a daemon.
    # http://www.gnu.org/software/bash/manual/bashref.html#Job-Control-Basics
    try:
        is_daemon = os.getpgrp() != os.tcgetpgrp(sys.stdout.fileno())
    except OSError as err:
        if err.errno == errno.ENOTTY:
            # Assume we are a daemon because there is no terminal.
            is_daemon = True
        else:
            raise
    except UnsupportedOperation:
        # Could not get the fileno for stdout, so we must be a daemon.
        is_daemon = True
    return is_daemon


def _is_sighup_and_daemon(signo):
    if not (_sighup_supported() and signo == signal.SIGHUP):
        # Avoid checking if we are a daemon, because the signal isn't
        # SIGHUP.
        return False
    return _is_daemon()


def _signo_to_signame(signo):
    signals = {signal.SIGTERM: 'SIGTERM',
               signal.SIGINT: 'SIGINT'}
    if _sighup_supported():
        signals[signal.SIGHUP] = 'SIGHUP'
    return signals[signo]


def _set_signals_handler(handler):
    signal.signal(signal.SIGTERM, handler)
    signal.signal(signal.SIGINT, handler)
    if _sighup_supported():
        signal.signal(signal.SIGHUP, handler)

class Launcher(object):
    """Launch one or more services and wait for them to complete."""

    def __init__(self):
        """Initialize the service launcher.

        :returns: None

        """
        self.services = Services()
        self.backdoor_port = eventlet_backdoor.initialize_if_enabled()

    def launch_service(self, service):
        """Load and start the given service.

        :param service: The service you would like to start.
        :returns: None

        """
        service.backdoor_port = self.backdoor_port
        self.services.add(service)

    def stop(self):
        """Stop all services which are currently running.

        :returns: None

        """
        self.services.stop()

    def wait(self):
        """Waits until all services have been stopped, and then returns.

        :returns: None

        """
        self.services.wait()

    def restart(self):
        """Reload config files and restart service.

        :returns: None

        """
        cfg.CONF.reload_config_files()
        self.services.restart()

class SignalExit(SystemExit):
    def __init__(self, signo, exccode=1):
        super(SignalExit, self).__init__(exccode)
        self.signo = signo

class ServiceLauncher(Launcher):
    def _handle_signal(self, signo, frame):
        # Allow the process to be killed again and die from natural causes
        _set_signals_handler(signal.SIG_DFL)
        raise SignalExit(signo)

    def handle_signal(self):
        _set_signals_handler(self._handle_signal)

    def _wait_for_exit_or_signal(self, ready_callback=None):
        status = None
        signo = 0

        LOG.debug('Full set of CONF:')
        CONF.log_opt_values(LOG, std_logging.DEBUG)

        try:
            if ready_callback:
                ready_callback()
            super(ServiceLauncher, self).wait()
        except SignalExit as exc:
            signame = _signo_to_signame(exc.signo)
            LOG.info(_LI('Caught %s, exiting'), signame)
            status = exc.code
            signo = exc.signo
        except SystemExit as exc:
            status = exc.code
        finally:
            self.stop()
            if rpc:
                try:
                    rpc.cleanup()
                except Exception:
                    # We're shutting down, so it doesn't matter at this point.
                    LOG.exception(_LE('Exception during rpc cleanup.'))

        return status, signo

    def wait(self, ready_callback=None):
        systemd.notify_once()
        while True:
            self.handle_signal()
            status, signo = self._wait_for_exit_or_signal(ready_callback)
            if not _is_sighup_and_daemon(signo):
                return status
            self.restart()

class ServiceWrapper(object):
    def __init__(self, service, workers):
        self.service = service
        self.workers = workers
        self.children = set()
        self.forktimes = []

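The ProcessLauncher that follows relies on a pipe to notice parent death: the parent holds the write end, each child blocks reading the read end, and when the parent exits the read returns EOF, at which point `_pipe_watcher` exits the child. The trick in miniature, using plain `os.pipe`/`os.read` and simulating the close rather than forking (the real code wraps the read end in an eventlet GreenPipe):

```python
import os

# The parent would keep the write end; children watch the read end.
r, w = os.pipe()

# Simulate the parent dying: the only copy of the write end closes.
os.close(w)

# With no writers left, the blocking read returns EOF immediately,
# which is _pipe_watcher's cue to sys.exit(1).
data = os.read(r, 1)
os.close(r)
print(repr(data))  # b''
```

Because the write end is duplicated into every forked child only in the parent, `os.close(self.writepipe)` in `_child_process` below is what guarantees the parent holds the last writer.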
class ProcessLauncher(object):
    def __init__(self, wait_interval=0.01):
        """Constructor.

        :param wait_interval: The interval to sleep for between checks
                              of child process exit.
        """
        self.children = {}
        self.sigcaught = None
        self.running = True
        self.wait_interval = wait_interval
        rfd, self.writepipe = os.pipe()
        self.readpipe = eventlet.greenio.GreenPipe(rfd, 'r')
        self.handle_signal()

    def handle_signal(self):
        _set_signals_handler(self._handle_signal)

    def _handle_signal(self, signo, frame):
        self.sigcaught = signo
        self.running = False

        # Allow the process to be killed again and die from natural causes
        _set_signals_handler(signal.SIG_DFL)

    def _pipe_watcher(self):
        # This will block until the write end is closed when the parent
        # dies unexpectedly
        self.readpipe.read()

        LOG.info(_LI('Parent process has died unexpectedly, exiting'))

        sys.exit(1)

    def _child_process_handle_signal(self):
        # Setup child signal handlers differently
        def _sigterm(*args):
            signal.signal(signal.SIGTERM, signal.SIG_DFL)
            raise SignalExit(signal.SIGTERM)

        def _sighup(*args):
            signal.signal(signal.SIGHUP, signal.SIG_DFL)
            raise SignalExit(signal.SIGHUP)

        signal.signal(signal.SIGTERM, _sigterm)
        if _sighup_supported():
            signal.signal(signal.SIGHUP, _sighup)
        # Block SIGINT and let the parent send us a SIGTERM
        signal.signal(signal.SIGINT, signal.SIG_IGN)

    def _child_wait_for_exit_or_signal(self, launcher):
        status = 0
        signo = 0

        # NOTE(johannes): All exceptions are caught to ensure this
        # doesn't fallback into the loop spawning children. It would
        # be bad for a child to spawn more children.
        try:
            launcher.wait()
        except SignalExit as exc:
            signame = _signo_to_signame(exc.signo)
            LOG.info(_LI('Caught %s, exiting'), signame)
            status = exc.code
            signo = exc.signo
        except SystemExit as exc:
            status = exc.code
        except BaseException:
            LOG.exception(_LE('Unhandled exception'))
            status = 2
        finally:
            launcher.stop()

        return status, signo

    def _child_process(self, service):
        self._child_process_handle_signal()

        # Reopen the eventlet hub to make sure we don't share an epoll
        # fd with parent and/or siblings, which would be bad
        eventlet.hubs.use_hub()

        # Close write to ensure only parent has it open
        os.close(self.writepipe)
        # Create greenthread to watch for parent to close pipe
        eventlet.spawn_n(self._pipe_watcher)

        # Reseed random number generator
        random.seed()

        launcher = Launcher()
        launcher.launch_service(service)
        return launcher

    def _start_child(self, wrap):
        if len(wrap.forktimes) > wrap.workers:
            # Limit ourselves to one process a second (over the period of
            # number of workers * 1 second). This will allow workers to
            # start up quickly but ensure we don't fork off children that
            # die instantly too quickly.
            if time.time() - wrap.forktimes[0] < wrap.workers:
                LOG.info(_LI('Forking too fast, sleeping'))
                time.sleep(1)

            wrap.forktimes.pop(0)

        wrap.forktimes.append(time.time())

        pid = os.fork()
        if pid == 0:
            launcher = self._child_process(wrap.service)
            while True:
                self._child_process_handle_signal()
                status, signo = self._child_wait_for_exit_or_signal(launcher)
                if not _is_sighup_and_daemon(signo):
                    break
                launcher.restart()

            os._exit(status)

        LOG.info(_LI('Started child %d'), pid)

        wrap.children.add(pid)
        self.children[pid] = wrap

        return pid

    def launch_service(self, service, workers=1):
        wrap = ServiceWrapper(service, workers)

        LOG.info(_LI('Starting %d workers'), wrap.workers)
        while self.running and len(wrap.children) < wrap.workers:
            self._start_child(wrap)

    def _wait_child(self):
        try:
            # Don't block if no child processes have exited
            pid, status = os.waitpid(0, os.WNOHANG)
            if not pid:
                return None
except OSError as exc:
|
|
||||||
if exc.errno not in (errno.EINTR, errno.ECHILD):
|
|
||||||
raise
|
|
||||||
return None
|
|
||||||
|
|
||||||
if os.WIFSIGNALED(status):
|
|
||||||
sig = os.WTERMSIG(status)
|
|
||||||
LOG.info(_LI('Child %(pid)d killed by signal %(sig)d'),
|
|
||||||
dict(pid=pid, sig=sig))
|
|
||||||
else:
|
|
||||||
code = os.WEXITSTATUS(status)
|
|
||||||
LOG.info(_LI('Child %(pid)s exited with status %(code)d'),
|
|
||||||
dict(pid=pid, code=code))
|
|
||||||
|
|
||||||
if pid not in self.children:
|
|
||||||
LOG.warning(_LW('pid %d not in child list'), pid)
|
|
||||||
return None
|
|
||||||
|
|
||||||
wrap = self.children.pop(pid)
|
|
||||||
wrap.children.remove(pid)
|
|
||||||
return wrap
|
|
||||||
|
|
||||||
def _respawn_children(self):
|
|
||||||
while self.running:
|
|
||||||
wrap = self._wait_child()
|
|
||||||
if not wrap:
|
|
||||||
# Yield to other threads if no children have exited
|
|
||||||
# Sleep for a short time to avoid excessive CPU usage
|
|
||||||
# (see bug #1095346)
|
|
||||||
eventlet.greenthread.sleep(self.wait_interval)
|
|
||||||
continue
|
|
||||||
while self.running and len(wrap.children) < wrap.workers:
|
|
||||||
self._start_child(wrap)
|
|
||||||
|
|
||||||
def wait(self):
|
|
||||||
"""Loop waiting on children to die and respawning as necessary."""
|
|
||||||
|
|
||||||
systemd.notify_once()
|
|
||||||
LOG.debug('Full set of CONF:')
|
|
||||||
CONF.log_opt_values(LOG, std_logging.DEBUG)
|
|
||||||
|
|
||||||
try:
|
|
||||||
while True:
|
|
||||||
self.handle_signal()
|
|
||||||
self._respawn_children()
|
|
||||||
if self.sigcaught:
|
|
||||||
signame = _signo_to_signame(self.sigcaught)
|
|
||||||
LOG.info(_LI('Caught %s, stopping children'), signame)
|
|
||||||
if not _is_sighup_and_daemon(self.sigcaught):
|
|
||||||
break
|
|
||||||
|
|
||||||
for pid in self.children:
|
|
||||||
os.kill(pid, signal.SIGHUP)
|
|
||||||
self.running = True
|
|
||||||
self.sigcaught = None
|
|
||||||
except eventlet.greenlet.GreenletExit:
|
|
||||||
LOG.info(_LI("Wait called after thread killed. Cleaning up."))
|
|
||||||
|
|
||||||
for pid in self.children:
|
|
||||||
try:
|
|
||||||
os.kill(pid, signal.SIGTERM)
|
|
||||||
except OSError as exc:
|
|
||||||
if exc.errno != errno.ESRCH:
|
|
||||||
raise
|
|
||||||
|
|
||||||
# Wait for children to die
|
|
||||||
if self.children:
|
|
||||||
LOG.info(_LI('Waiting on %d children to exit'), len(self.children))
|
|
||||||
while self.children:
|
|
||||||
self._wait_child()
|
|
||||||
|
|
||||||
|
|
||||||
class Service(object):
|
|
||||||
"""Service object for binaries running on hosts."""
|
|
||||||
|
|
||||||
def __init__(self, threads=1000):
|
|
||||||
self.tg = threadgroup.ThreadGroup(threads)
|
|
||||||
|
|
||||||
# signal that the service is done shutting itself down:
|
|
||||||
self._done = event.Event()
|
|
||||||
|
|
||||||
def reset(self):
|
|
||||||
# NOTE(Fengqian): docs for Event.reset() recommend against using it
|
|
||||||
self._done = event.Event()
|
|
||||||
|
|
||||||
def start(self):
|
|
||||||
pass
|
|
||||||
|
|
||||||
def stop(self):
|
|
||||||
self.tg.stop()
|
|
||||||
self.tg.wait()
|
|
||||||
# Signal that service cleanup is done:
|
|
||||||
if not self._done.ready():
|
|
||||||
self._done.send()
|
|
||||||
|
|
||||||
def wait(self):
|
|
||||||
self._done.wait()
|
|
||||||
|
|
||||||
|
|
||||||
class Services(object):
|
|
||||||
|
|
||||||
def __init__(self):
|
|
||||||
self.services = []
|
|
||||||
self.tg = threadgroup.ThreadGroup()
|
|
||||||
self.done = event.Event()
|
|
||||||
|
|
||||||
def add(self, service):
|
|
||||||
self.services.append(service)
|
|
||||||
self.tg.add_thread(self.run_service, service, self.done)
|
|
||||||
|
|
||||||
def stop(self):
|
|
||||||
# wait for graceful shutdown of services:
|
|
||||||
for service in self.services:
|
|
||||||
service.stop()
|
|
||||||
service.wait()
|
|
||||||
|
|
||||||
# Each service has performed cleanup, now signal that the run_service
|
|
||||||
# wrapper threads can now die:
|
|
||||||
if not self.done.ready():
|
|
||||||
self.done.send()
|
|
||||||
|
|
||||||
# reap threads:
|
|
||||||
self.tg.stop()
|
|
||||||
|
|
||||||
def wait(self):
|
|
||||||
self.tg.wait()
|
|
||||||
|
|
||||||
def restart(self):
|
|
||||||
self.stop()
|
|
||||||
self.done = event.Event()
|
|
||||||
for restart_service in self.services:
|
|
||||||
restart_service.reset()
|
|
||||||
self.tg.add_thread(self.run_service, restart_service, self.done)
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def run_service(service, done):
|
|
||||||
"""Service start wrapper.
|
|
||||||
|
|
||||||
:param service: service to run
|
|
||||||
:param done: event to wait on until a shutdown is triggered
|
|
||||||
:returns: None
|
|
||||||
|
|
||||||
"""
|
|
||||||
service.start()
|
|
||||||
done.wait()
|
|
||||||
|
|
||||||
|
|
||||||
def launch(service, workers=1):
|
|
||||||
if workers is None or workers == 1:
|
|
||||||
launcher = ServiceLauncher()
|
|
||||||
launcher.launch_service(service)
|
|
||||||
else:
|
|
||||||
launcher = ProcessLauncher()
|
|
||||||
launcher.launch_service(service, workers=workers)
|
|
||||||
|
|
||||||
return launcher
|
|
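The fork-rate limiting in `_start_child` keeps a sliding window of fork timestamps and sleeps when more than `workers` forks happened within `workers` seconds. A minimal, self-contained sketch of just that window logic, with a hypothetical `Wrap` stand-in for `ServiceWrapper` and an injectable clock/sleep so it can be exercised without actually forking:

```python
import time


class Wrap(object):
    """Hypothetical stand-in for ServiceWrapper: tracks fork timestamps."""
    def __init__(self, workers):
        self.workers = workers
        self.forktimes = []


def throttle_fork(wrap, now=None, sleep=time.sleep):
    """Mirror the sliding-window check in _start_child: if the window is
    full and the oldest fork is less than `workers` seconds old, sleep."""
    now = time.time() if now is None else now
    if len(wrap.forktimes) > wrap.workers:
        if now - wrap.forktimes[0] < wrap.workers:
            sleep(1)
        wrap.forktimes.pop(0)
    wrap.forktimes.append(now)


wrap = Wrap(workers=2)
slept = []  # record throttle sleeps instead of actually sleeping
for t in [0.0, 0.1, 0.2, 0.3]:
    throttle_fork(wrap, now=t, sleep=slept.append)
print(len(slept))  # the fourth rapid fork trips the throttle once
```

Four forks in 0.3 seconds against a 2-worker budget trigger exactly one throttle sleep, after which the oldest timestamp is evicted.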
@ -1,322 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
System-level utilities and helper functions.
"""

import math
import re
import sys
import unicodedata

import six

from sticks.openstack.common.gettextutils import _


UNIT_PREFIX_EXPONENT = {
    'k': 1,
    'K': 1,
    'Ki': 1,
    'M': 2,
    'Mi': 2,
    'G': 3,
    'Gi': 3,
    'T': 4,
    'Ti': 4,
}
UNIT_SYSTEM_INFO = {
    'IEC': (1024, re.compile(r'(^[-+]?\d*\.?\d+)([KMGT]i?)?(b|bit|B)$')),
    'SI': (1000, re.compile(r'(^[-+]?\d*\.?\d+)([kMGT])?(b|bit|B)$')),
}

TRUE_STRINGS = ('1', 't', 'true', 'on', 'y', 'yes')
FALSE_STRINGS = ('0', 'f', 'false', 'off', 'n', 'no')

SLUGIFY_STRIP_RE = re.compile(r"[^\w\s-]")
SLUGIFY_HYPHENATE_RE = re.compile(r"[-\s]+")


# NOTE(flaper87): The following globals are used by `mask_password`
_SANITIZE_KEYS = ['adminPass', 'admin_pass', 'password', 'admin_password']

# NOTE(ldbragst): Let's build a list of regex objects using the list of
# _SANITIZE_KEYS we already have. This way, we only have to add the new key
# to the list of _SANITIZE_KEYS and we can generate regular expressions
# for XML and JSON automatically.
_SANITIZE_PATTERNS_2 = []
_SANITIZE_PATTERNS_1 = []

# NOTE(amrith): Some regular expressions have only one parameter, some
# have two parameters. Use different lists of patterns here.
_FORMAT_PATTERNS_1 = [r'(%(key)s\s*[=]\s*)[^\s^\'^\"]+']
_FORMAT_PATTERNS_2 = [r'(%(key)s\s*[=]\s*[\"\']).*?([\"\'])',
                      r'(%(key)s\s+[\"\']).*?([\"\'])',
                      r'([-]{2}%(key)s\s+)[^\'^\"^=^\s]+([\s]*)',
                      r'(<%(key)s>).*?(</%(key)s>)',
                      r'([\"\']%(key)s[\"\']\s*:\s*[\"\']).*?([\"\'])',
                      r'([\'"].*?%(key)s[\'"]\s*:\s*u?[\'"]).*?([\'"])',
                      r'([\'"].*?%(key)s[\'"]\s*,\s*\'--?[A-z]+\'\s*,\s*u?'
                      '[\'"]).*?([\'"])',
                      r'(%(key)s\s*--?[A-z]+\s*)\S+(\s*)']

for key in _SANITIZE_KEYS:
    for pattern in _FORMAT_PATTERNS_2:
        reg_ex = re.compile(pattern % {'key': key}, re.DOTALL)
        _SANITIZE_PATTERNS_2.append(reg_ex)

    for pattern in _FORMAT_PATTERNS_1:
        reg_ex = re.compile(pattern % {'key': key}, re.DOTALL)
        _SANITIZE_PATTERNS_1.append(reg_ex)


def int_from_bool_as_string(subject):
    """Interpret a string as a boolean and return either 1 or 0.

    Any string value in:

        ('True', 'true', 'On', 'on', '1')

    is interpreted as a boolean True.

    Useful for JSON-decoded stuff and config file parsing
    """
    return bool_from_string(subject) and 1 or 0


def bool_from_string(subject, strict=False, default=False):
    """Interpret a string as a boolean.

    A case-insensitive match is performed such that strings matching 't',
    'true', 'on', 'y', 'yes', or '1' are considered True and, when
    `strict=False`, anything else returns the value specified by 'default'.

    Useful for JSON-decoded stuff and config file parsing.

    If `strict=True`, unrecognized values, including None, will raise a
    ValueError which is useful when parsing values passed in from an API call.
    Strings yielding False are 'f', 'false', 'off', 'n', 'no', or '0'.
    """
    if not isinstance(subject, six.string_types):
        subject = str(subject)

    lowered = subject.strip().lower()

    if lowered in TRUE_STRINGS:
        return True
    elif lowered in FALSE_STRINGS:
        return False
    elif strict:
        acceptable = ', '.join(
            "'%s'" % s for s in sorted(TRUE_STRINGS + FALSE_STRINGS))
        msg = _("Unrecognized value '%(val)s', acceptable values are:"
                " %(acceptable)s") % {'val': subject,
                                      'acceptable': acceptable}
        raise ValueError(msg)
    else:
        return default


def safe_decode(text, incoming=None, errors='strict'):
    """Decodes incoming text/bytes string using `incoming` if they're not
    already unicode.

    :param incoming: Text's current encoding
    :param errors: Errors handling policy. See here for valid
        values http://docs.python.org/2/library/codecs.html
    :returns: text or a unicode `incoming` encoded
        representation of it.
    :raises TypeError: If text is not an instance of str
    """
    if not isinstance(text, (six.string_types, six.binary_type)):
        raise TypeError("%s can't be decoded" % type(text))

    if isinstance(text, six.text_type):
        return text

    if not incoming:
        incoming = (sys.stdin.encoding or
                    sys.getdefaultencoding())

    try:
        return text.decode(incoming, errors)
    except UnicodeDecodeError:
        # Note(flaper87) If we get here, it means that
        # sys.stdin.encoding / sys.getdefaultencoding
        # didn't return a suitable encoding to decode
        # text. This happens mostly when global LANG
        # var is not set correctly and there's no
        # default encoding. In this case, most likely
        # python will use ASCII or ANSI encoders as
        # default encodings but they won't be capable
        # of decoding non-ASCII characters.
        #
        # Also, UTF-8 is being used since it's an ASCII
        # extension.
        return text.decode('utf-8', errors)


def safe_encode(text, incoming=None,
                encoding='utf-8', errors='strict'):
    """Encodes incoming text/bytes string using `encoding`.

    If incoming is not specified, text is expected to be encoded with
    current python's default encoding. (`sys.getdefaultencoding`)

    :param incoming: Text's current encoding
    :param encoding: Expected encoding for text (Default UTF-8)
    :param errors: Errors handling policy. See here for valid
        values http://docs.python.org/2/library/codecs.html
    :returns: text or a bytestring `encoding` encoded
        representation of it.
    :raises TypeError: If text is not an instance of str
    """
    if not isinstance(text, (six.string_types, six.binary_type)):
        raise TypeError("%s can't be encoded" % type(text))

    if not incoming:
        incoming = (sys.stdin.encoding or
                    sys.getdefaultencoding())

    if isinstance(text, six.text_type):
        if six.PY3:
            return text.encode(encoding, errors).decode(incoming)
        else:
            return text.encode(encoding, errors)
    elif text and encoding != incoming:
        # Decode text before encoding it with `encoding`
        text = safe_decode(text, incoming, errors)
        if six.PY3:
            return text.encode(encoding, errors).decode(incoming)
        else:
            return text.encode(encoding, errors)

    return text


def string_to_bytes(text, unit_system='IEC', return_int=False):
    """Converts a string into a float representation of bytes.

    The units supported for IEC ::

        Kb(it), Kib(it), Mb(it), Mib(it), Gb(it), Gib(it), Tb(it), Tib(it)
        KB, KiB, MB, MiB, GB, GiB, TB, TiB

    The units supported for SI ::

        kb(it), Mb(it), Gb(it), Tb(it)
        kB, MB, GB, TB

    Note that the SI unit system does not support capital letter 'K'

    :param text: String input for bytes size conversion.
    :param unit_system: Unit system for byte size conversion.
    :param return_int: If True, returns integer representation of text
                       in bytes. (default: decimal)
    :returns: Numerical representation of text in bytes.
    :raises ValueError: If text has an invalid value.

    """
    try:
        base, reg_ex = UNIT_SYSTEM_INFO[unit_system]
    except KeyError:
        msg = _('Invalid unit system: "%s"') % unit_system
        raise ValueError(msg)
    match = reg_ex.match(text)
    if match:
        magnitude = float(match.group(1))
        unit_prefix = match.group(2)
        if match.group(3) in ['b', 'bit']:
            magnitude /= 8
    else:
        msg = _('Invalid string format: %s') % text
        raise ValueError(msg)
    if not unit_prefix:
        res = magnitude
    else:
        res = magnitude * pow(base, UNIT_PREFIX_EXPONENT[unit_prefix])
    if return_int:
        return int(math.ceil(res))
    return res


def to_slug(value, incoming=None, errors="strict"):
    """Normalize string.

    Convert to lowercase, remove non-word characters, and convert spaces
    to hyphens.

    Inspired by Django's `slugify` filter.

    :param value: Text to slugify
    :param incoming: Text's current encoding
    :param errors: Errors handling policy. See here for valid
        values http://docs.python.org/2/library/codecs.html
    :returns: slugified unicode representation of `value`
    :raises TypeError: If text is not an instance of str
    """
    value = safe_decode(value, incoming, errors)
    # NOTE(aababilov): no need to use safe_(encode|decode) here:
    # encodings are always "ascii", error handling is always "ignore"
    # and types are always known (first: unicode; second: str)
    value = unicodedata.normalize("NFKD", value).encode(
        "ascii", "ignore").decode("ascii")
    value = SLUGIFY_STRIP_RE.sub("", value).strip().lower()
    return SLUGIFY_HYPHENATE_RE.sub("-", value)


def mask_password(message, secret="***"):
    """Replace password with 'secret' in message.

    :param message: The string which includes security information.
    :param secret: value with which to replace passwords.
    :returns: The unicode value of message with the password fields masked.

    For example:

    >>> mask_password("'adminPass' : 'aaaaa'")
    "'adminPass' : '***'"
    >>> mask_password("'admin_pass' : 'aaaaa'")
    "'admin_pass' : '***'"
    >>> mask_password('"password" : "aaaaa"')
    '"password" : "***"'
    >>> mask_password("'original_password' : 'aaaaa'")
    "'original_password' : '***'"
    >>> mask_password("u'original_password' : u'aaaaa'")
    "u'original_password' : u'***'"
    """
    try:
        message = six.text_type(message)
    except UnicodeDecodeError:
        # NOTE(jecarey): Temporary fix to handle cases where message is a
        # byte string. A better solution will be provided in Kilo.
        pass

    # NOTE(ldbragst): Check to see if anything in message contains any key
    # specified in _SANITIZE_KEYS, if not then just return the message since
    # we don't have to mask any passwords.
    if not any(key in message for key in _SANITIZE_KEYS):
        return message

    substitute = r'\g<1>' + secret + r'\g<2>'
    for pattern in _SANITIZE_PATTERNS_2:
        message = re.sub(pattern, substitute, message)

    substitute = r'\g<1>' + secret
    for pattern in _SANITIZE_PATTERNS_1:
        message = re.sub(pattern, substitute, message)

    return message
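The byte-size parsing above leans entirely on the `UNIT_SYSTEM_INFO` regexes and the `UNIT_PREFIX_EXPONENT` table. A self-contained sketch that copies those tables (dropping the oslo i18n wrapper) so the behavior can be checked in isolation:

```python
import math
import re

# Copied from the module above: exponent per unit prefix and per-system regex.
UNIT_PREFIX_EXPONENT = {'k': 1, 'K': 1, 'Ki': 1, 'M': 2, 'Mi': 2,
                        'G': 3, 'Gi': 3, 'T': 4, 'Ti': 4}
UNIT_SYSTEM_INFO = {
    'IEC': (1024, re.compile(r'(^[-+]?\d*\.?\d+)([KMGT]i?)?(b|bit|B)$')),
    'SI': (1000, re.compile(r'(^[-+]?\d*\.?\d+)([kMGT])?(b|bit|B)$')),
}


def string_to_bytes(text, unit_system='IEC', return_int=False):
    base, reg_ex = UNIT_SYSTEM_INFO[unit_system]
    match = reg_ex.match(text)
    if not match:
        raise ValueError('Invalid string format: %s' % text)
    magnitude = float(match.group(1))
    unit_prefix = match.group(2)
    if match.group(3) in ('b', 'bit'):
        magnitude /= 8  # lowercase 'b'/'bit' means bits, convert to bytes
    if unit_prefix:
        magnitude *= pow(base, UNIT_PREFIX_EXPONENT[unit_prefix])
    return int(math.ceil(magnitude)) if return_int else magnitude


print(string_to_bytes('1KiB'))       # 1024.0
print(string_to_bytes('1MB', 'SI'))  # 1000000.0
print(string_to_bytes('8Kb'))        # 1024.0 (8 kilobits -> 1 kibibyte)
```

Note how the IEC regex accepts both `K` and `Ki` while the SI regex only accepts lowercase `k`, which is why the docstring warns that SI has no capital 'K'.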
@ -1,104 +0,0 @@
# Copyright 2012-2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Helper module for systemd service readiness notification.
"""

import os
import socket
import sys

from sticks.openstack.common import log as logging


LOG = logging.getLogger(__name__)


def _abstractify(socket_name):
    if socket_name.startswith('@'):
        # abstract namespace socket
        socket_name = '\0%s' % socket_name[1:]
    return socket_name


def _sd_notify(unset_env, msg):
    notify_socket = os.getenv('NOTIFY_SOCKET')
    if notify_socket:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
        try:
            sock.connect(_abstractify(notify_socket))
            sock.sendall(msg)
            if unset_env:
                del os.environ['NOTIFY_SOCKET']
        except EnvironmentError:
            LOG.debug("Systemd notification failed", exc_info=True)
        finally:
            sock.close()


def notify():
    """Send notification to Systemd that service is ready.

    For details see
    http://www.freedesktop.org/software/systemd/man/sd_notify.html
    """
    _sd_notify(False, 'READY=1')


def notify_once():
    """Send notification once to Systemd that service is ready.

    Systemd sets NOTIFY_SOCKET environment variable with the name of the
    socket listening for notifications from services.
    This method removes the NOTIFY_SOCKET environment variable to ensure
    notification is sent only once.
    """
    _sd_notify(True, 'READY=1')


def onready(notify_socket, timeout):
    """Wait for systemd style notification on the socket.

    :param notify_socket: local socket address
    :type notify_socket: string
    :param timeout: socket timeout
    :type timeout: float
    :returns: 0 service ready
              1 service not ready
              2 timeout occurred
    """
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind(_abstractify(notify_socket))
    try:
        msg = sock.recv(512)
    except socket.timeout:
        return 2
    finally:
        sock.close()
    if 'READY=1' in msg:
        return 0
    else:
        return 1


if __name__ == '__main__':
    # simple CLI for testing
    if len(sys.argv) == 1:
        notify()
    elif len(sys.argv) >= 2:
        timeout = float(sys.argv[1])
        notify_socket = os.getenv('NOTIFY_SOCKET')
        if notify_socket:
            retval = onready(notify_socket, timeout)
            sys.exit(retval)
@ -1,147 +0,0 @@
# Copyright 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import threading

import eventlet
from eventlet import greenpool

from sticks.openstack.common import log as logging
from sticks.openstack.common import loopingcall


LOG = logging.getLogger(__name__)


def _thread_done(gt, *args, **kwargs):
    """Callback function to be passed to GreenThread.link() when we spawn()

    Calls the :class:`ThreadGroup` to notify it.

    """
    kwargs['group'].thread_done(kwargs['thread'])


class Thread(object):
    """Wrapper around a greenthread, that holds a reference to the
    :class:`ThreadGroup`. The Thread will notify the :class:`ThreadGroup` when
    it has finished so it can be removed from the threads list.
    """
    def __init__(self, thread, group):
        self.thread = thread
        self.thread.link(_thread_done, group=group, thread=self)

    def stop(self):
        self.thread.kill()

    def wait(self):
        return self.thread.wait()

    def link(self, func, *args, **kwargs):
        self.thread.link(func, *args, **kwargs)


class ThreadGroup(object):
    """The point of the ThreadGroup class is to:

    * keep track of timers and greenthreads (making it easier to stop them
      when need be).
    * provide an easy API to add timers.
    """
    def __init__(self, thread_pool_size=10):
        self.pool = greenpool.GreenPool(thread_pool_size)
        self.threads = []
        self.timers = []

    def add_dynamic_timer(self, callback, initial_delay=None,
                          periodic_interval_max=None, *args, **kwargs):
        timer = loopingcall.DynamicLoopingCall(callback, *args, **kwargs)
        timer.start(initial_delay=initial_delay,
                    periodic_interval_max=periodic_interval_max)
        self.timers.append(timer)

    def add_timer(self, interval, callback, initial_delay=None,
                  *args, **kwargs):
        pulse = loopingcall.FixedIntervalLoopingCall(callback, *args, **kwargs)
        pulse.start(interval=interval,
                    initial_delay=initial_delay)
        self.timers.append(pulse)

    def add_thread(self, callback, *args, **kwargs):
        gt = self.pool.spawn(callback, *args, **kwargs)
        th = Thread(gt, self)
        self.threads.append(th)
        return th

    def thread_done(self, thread):
        self.threads.remove(thread)

    def _stop_threads(self):
        current = threading.current_thread()

        # Iterate over a copy of self.threads so thread_done doesn't
        # modify the list while we're iterating
        for x in self.threads[:]:
            if x is current:
                # don't kill the current thread.
                continue
            try:
                x.stop()
            except Exception as ex:
                LOG.exception(ex)

    def stop_timers(self):
        for x in self.timers:
            try:
                x.stop()
            except Exception as ex:
                LOG.exception(ex)
        self.timers = []

    def stop(self, graceful=False):
        """stop function has the option of graceful=True/False.

        * In case of graceful=True, wait for all threads to be finished.
          Never kill threads.
        * In case of graceful=False, kill threads immediately.
        """
        self.stop_timers()
        if graceful:
            # In case of graceful=True, wait for all threads to be
            # finished, never kill threads
            self.wait()
        else:
            # In case of graceful=False(Default), kill threads
            # immediately
            self._stop_threads()

    def wait(self):
        for x in self.timers:
            try:
                x.wait()
            except eventlet.greenlet.GreenletExit:
                pass
            except Exception as ex:
                LOG.exception(ex)
        current = threading.current_thread()

        # Iterate over a copy of self.threads so thread_done doesn't
        # modify the list while we're iterating
        for x in self.threads[:]:
            if x is current:
                continue
            try:
                x.wait()
            except eventlet.greenlet.GreenletExit:
                pass
            except Exception as ex:
                LOG.exception(ex)
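The core lifecycle here (spawn, link a done-callback, remove the thread from the group when it finishes) can be mimicked with plain `threading` from the standard library. The following is a minimal sketch of that idea only, with a hypothetical `TinyThreadGroup` name; it deliberately omits timers and `kill()`, which OS threads cannot support the way greenthreads do:

```python
import threading


class TinyThreadGroup(object):
    """Minimal stand-in for ThreadGroup: track threads and remove each
    one from the list when it finishes (the _thread_done idea)."""
    def __init__(self):
        self.threads = []
        self._lock = threading.Lock()

    def add_thread(self, callback, *args, **kwargs):
        def _run():
            try:
                callback(*args, **kwargs)
            finally:
                # mirror thread_done(): drop ourselves from the group
                with self._lock:
                    self.threads.remove(t)
        t = threading.Thread(target=_run)
        self.threads.append(t)  # register before starting, like add_thread
        t.start()
        return t

    def wait(self):
        # Iterate over a copy, since finished threads remove themselves
        for t in list(self.threads):
            t.join()


tg = TinyThreadGroup()
results = []
for i in range(3):
    tg.add_thread(results.append, i)
tg.wait()
print(sorted(results), len(tg.threads))
```

As in the eventlet version, iterating over a copy of `self.threads` in `wait()` matters, because each thread mutates the list as it completes.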
@ -1,210 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Time related utilities and helper functions.
"""

import calendar
import datetime
import time

import iso8601
import six


# ISO 8601 extended time format with microseconds
_ISO8601_TIME_FORMAT_SUBSECOND = '%Y-%m-%dT%H:%M:%S.%f'
_ISO8601_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S'
PERFECT_TIME_FORMAT = _ISO8601_TIME_FORMAT_SUBSECOND


def isotime(at=None, subsecond=False):
    """Stringify time in ISO 8601 format."""
    if not at:
        at = utcnow()
    st = at.strftime(_ISO8601_TIME_FORMAT
                     if not subsecond
                     else _ISO8601_TIME_FORMAT_SUBSECOND)
    tz = at.tzinfo.tzname(None) if at.tzinfo else 'UTC'
    st += ('Z' if tz == 'UTC' else tz)
    return st


def parse_isotime(timestr):
    """Parse time from ISO 8601 format."""
    try:
        return iso8601.parse_date(timestr)
    except iso8601.ParseError as e:
        raise ValueError(six.text_type(e))
    except TypeError as e:
        raise ValueError(six.text_type(e))


def strtime(at=None, fmt=PERFECT_TIME_FORMAT):
    """Returns formatted utcnow."""
    if not at:
        at = utcnow()
    return at.strftime(fmt)


def parse_strtime(timestr, fmt=PERFECT_TIME_FORMAT):
    """Turn a formatted time back into a datetime."""
    return datetime.datetime.strptime(timestr, fmt)


def normalize_time(timestamp):
    """Normalize time in arbitrary timezone to UTC naive object."""
    offset = timestamp.utcoffset()
    if offset is None:
        return timestamp
    return timestamp.replace(tzinfo=None) - offset


def is_older_than(before, seconds):
    """Return True if before is older than seconds."""
    if isinstance(before, six.string_types):
        before = parse_strtime(before).replace(tzinfo=None)
    else:
        before = before.replace(tzinfo=None)

    return utcnow() - before > datetime.timedelta(seconds=seconds)


def is_newer_than(after, seconds):
    """Return True if after is newer than seconds."""
    if isinstance(after, six.string_types):
        after = parse_strtime(after).replace(tzinfo=None)
    else:
        after = after.replace(tzinfo=None)

    return after - utcnow() > datetime.timedelta(seconds=seconds)


def utcnow_ts():
    """Timestamp version of our utcnow function."""
    if utcnow.override_time is None:
        # NOTE(kgriffs): This is several times faster
        # than going through calendar.timegm(...)
        return int(time.time())

    return calendar.timegm(utcnow().timetuple())


def utcnow():
    """Overridable version of utils.utcnow."""
    if utcnow.override_time:
        try:
            return utcnow.override_time.pop(0)
        except AttributeError:
            return utcnow.override_time
    return datetime.datetime.utcnow()


def iso8601_from_timestamp(timestamp):
    """Returns an ISO 8601 formatted date from timestamp."""
|
|
||||||
return isotime(datetime.datetime.utcfromtimestamp(timestamp))
|
|
||||||
|
|
||||||
|
|
||||||
utcnow.override_time = None
|
|
||||||
|
|
||||||
|
|
||||||
def set_time_override(override_time=None):
|
|
||||||
"""Overrides utils.utcnow.
|
|
||||||
|
|
||||||
Make it return a constant time or a list thereof, one at a time.
|
|
||||||
|
|
||||||
:param override_time: datetime instance or list thereof. If not
|
|
||||||
given, defaults to the current UTC time.
|
|
||||||
"""
|
|
||||||
utcnow.override_time = override_time or datetime.datetime.utcnow()
|
|
||||||
|
|
||||||
|
|
||||||
def advance_time_delta(timedelta):
|
|
||||||
"""Advance overridden time using a datetime.timedelta."""
|
|
||||||
assert(not utcnow.override_time is None)
|
|
||||||
try:
|
|
||||||
for dt in utcnow.override_time:
|
|
||||||
dt += timedelta
|
|
||||||
except TypeError:
|
|
||||||
utcnow.override_time += timedelta
|
|
||||||
|
|
||||||
|
|
||||||
def advance_time_seconds(seconds):
|
|
||||||
"""Advance overridden time by seconds."""
|
|
||||||
advance_time_delta(datetime.timedelta(0, seconds))
|
|
||||||
|
|
||||||
|
|
||||||
def clear_time_override():
|
|
||||||
"""Remove the overridden time."""
|
|
||||||
utcnow.override_time = None
|
|
||||||
|
|
||||||
|
|
||||||
def marshall_now(now=None):
|
|
||||||
"""Make an rpc-safe datetime with microseconds.
|
|
||||||
|
|
||||||
Note: tzinfo is stripped, but not required for relative times.
|
|
||||||
"""
|
|
||||||
if not now:
|
|
||||||
now = utcnow()
|
|
||||||
return dict(day=now.day, month=now.month, year=now.year, hour=now.hour,
|
|
||||||
minute=now.minute, second=now.second,
|
|
||||||
microsecond=now.microsecond)
|
|
||||||
|
|
||||||
|
|
||||||
def unmarshall_time(tyme):
|
|
||||||
"""Unmarshall a datetime dict."""
|
|
||||||
return datetime.datetime(day=tyme['day'],
|
|
||||||
month=tyme['month'],
|
|
||||||
year=tyme['year'],
|
|
||||||
hour=tyme['hour'],
|
|
||||||
minute=tyme['minute'],
|
|
||||||
second=tyme['second'],
|
|
||||||
microsecond=tyme['microsecond'])
|
|
||||||
|
|
||||||
|
|
||||||
def delta_seconds(before, after):
|
|
||||||
"""Return the difference between two timing objects.
|
|
||||||
|
|
||||||
Compute the difference in seconds between two date, time, or
|
|
||||||
datetime objects (as a float, to microsecond resolution).
|
|
||||||
"""
|
|
||||||
delta = after - before
|
|
||||||
return total_seconds(delta)
|
|
||||||
|
|
||||||
|
|
||||||
def total_seconds(delta):
|
|
||||||
"""Return the total seconds of datetime.timedelta object.
|
|
||||||
|
|
||||||
Compute total seconds of datetime.timedelta, datetime.timedelta
|
|
||||||
doesn't have method total_seconds in Python2.6, calculate it manually.
|
|
||||||
"""
|
|
||||||
try:
|
|
||||||
return delta.total_seconds()
|
|
||||||
except AttributeError:
|
|
||||||
return ((delta.days * 24 * 3600) + delta.seconds +
|
|
||||||
float(delta.microseconds) / (10 ** 6))
|
|
||||||
|
|
||||||
|
|
||||||
def is_soon(dt, window):
|
|
||||||
"""Determines if time is going to happen in the next window seconds.
|
|
||||||
|
|
||||||
:param dt: the time
|
|
||||||
:param window: minimum seconds to remain to consider the time not soon
|
|
||||||
|
|
||||||
:return: True if expiration is within the given duration
|
|
||||||
"""
|
|
||||||
soon = (utcnow() + datetime.timedelta(seconds=window))
|
|
||||||
return normalize_time(dt) <= soon
|
|
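For reference, the clock-override pattern that `set_time_override`, `advance_time_delta`, and `clear_time_override` rely on above — a module-level function carrying its own `override_time` attribute — can be exercised standalone. This is a minimal stdlib-only sketch of the pattern, not the deleted module itself (`iso8601`/`six` handling and the list-of-times branch are omitted):

```python
import datetime


def utcnow():
    """Overridable clock: tests pin it via the attribute below."""
    if utcnow.override_time:
        return utcnow.override_time
    return datetime.datetime.utcnow()

utcnow.override_time = None

# Pin the clock, advance it, then clear the override.
utcnow.override_time = datetime.datetime(2015, 1, 1, 12, 0, 0)
assert utcnow() == datetime.datetime(2015, 1, 1, 12, 0, 0)

utcnow.override_time += datetime.timedelta(seconds=30)
assert utcnow().second == 30

utcnow.override_time = None  # back to the real clock
```

Storing the override on the function object keeps the module free of globals-by-another-name while still letting every caller of `utcnow()` see the pinned time.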
@@ -1,144 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import os
import socket
import sys

from oslo.config import cfg
from stevedore import named

from sticks.openstack.common.gettextutils import _  # noqa
from sticks.openstack.common import log as logging
from sticks import utils


LOG = logging.getLogger(__name__)

service_opts = [
    cfg.StrOpt('host',
               default=socket.getfqdn(),
               help='Name of this node. This can be an opaque identifier. '
                    'It is not necessarily a hostname, FQDN, or IP address. '
                    'However, the node name must be valid within '
                    'an AMQP key, and if using ZeroMQ, a valid '
                    'hostname, FQDN, or IP address.'),
    cfg.MultiStrOpt('dispatcher',
                    deprecated_group="collector",
                    default=['database'],
                    help='Dispatcher to process data.'),
    cfg.IntOpt('collector_workers',
               default=1,
               help='Number of workers for collector service. A single '
                    'collector is enabled by default.'),
    cfg.IntOpt('notification_workers',
               default=1,
               help='Number of workers for notification service. A single '
                    'notification agent is enabled by default.'),
]

cfg.CONF.register_opts(service_opts)

CLI_OPTIONS = [
    cfg.StrOpt('os-username',
               deprecated_group="DEFAULT",
               default=os.environ.get('OS_USERNAME', 'sticks'),
               help='User name to use for OpenStack service access.'),
    cfg.StrOpt('os-password',
               deprecated_group="DEFAULT",
               secret=True,
               default=os.environ.get('OS_PASSWORD', 'admin'),
               help='Password to use for OpenStack service access.'),
    cfg.StrOpt('os-tenant-id',
               deprecated_group="DEFAULT",
               default=os.environ.get('OS_TENANT_ID', ''),
               help='Tenant ID to use for OpenStack service access.'),
    cfg.StrOpt('os-tenant-name',
               deprecated_group="DEFAULT",
               default=os.environ.get('OS_TENANT_NAME', 'admin'),
               help='Tenant name to use for OpenStack service access.'),
    cfg.StrOpt('os-cacert',
               default=os.environ.get('OS_CACERT'),
               help='Certificate chain for SSL validation.'),
    cfg.StrOpt('os-auth-url',
               deprecated_group="DEFAULT",
               default=os.environ.get('OS_AUTH_URL',
                                      'http://localhost:5000/v2.0'),
               help='Auth URL to use for OpenStack service access.'),
    cfg.StrOpt('os-region-name',
               deprecated_group="DEFAULT",
               default=os.environ.get('OS_REGION_NAME'),
               help='Region name to use for OpenStack service endpoints.'),
    cfg.StrOpt('os-endpoint-type',
               default=os.environ.get('OS_ENDPOINT_TYPE', 'publicURL'),
               help='Type of endpoint in Identity service catalog to use for '
                    'communication with OpenStack services.'),
    cfg.BoolOpt('insecure',
                default=False,
                help='Disables X.509 certificate validation when an '
                     'SSL connection to Identity Service is established.'),
]
cfg.CONF.register_opts(CLI_OPTIONS, group="service_credentials")


class WorkerException(Exception):
    """Exception for errors relating to service workers."""


class DispatchedService(object):

    DISPATCHER_NAMESPACE = 'sticks.dispatcher'

    def start(self):
        super(DispatchedService, self).start()
        LOG.debug(_('loading dispatchers from %s'),
                  self.DISPATCHER_NAMESPACE)
        self.dispatcher_manager = named.NamedExtensionManager(
            namespace=self.DISPATCHER_NAMESPACE,
            names=cfg.CONF.dispatcher,
            invoke_on_load=True,
            invoke_args=[cfg.CONF])
        if not list(self.dispatcher_manager):
            LOG.warning(_('Failed to load any dispatchers for %s'),
                        self.DISPATCHER_NAMESPACE)


def get_workers(name):
    workers = (cfg.CONF.get('%s_workers' % name) or
               utils.cpu_count())
    if workers and workers < 1:
        msg = (_("%(worker_name)s value of %(workers)s is invalid, "
                 "must be greater than 0") %
               {'worker_name': '%s_workers' % name, 'workers': str(workers)})
        raise WorkerException(msg)
    return workers


def prepare_service(argv=None):
    cfg.set_defaults(logging.log_opts,
                     default_log_levels=['amqplib=WARN',
                                         'qpid.messaging=INFO',
                                         'sqlalchemy=WARN',
                                         'keystoneclient=INFO',
                                         'stevedore=INFO',
                                         'eventlet.wsgi.server=WARN',
                                         'iso8601=WARN'
                                         ])
    if argv is None:
        argv = sys.argv
    cfg.CONF(argv[1:], project='sticks')
    logging.setup('sticks')
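The `get_workers` helper above falls back to the machine's CPU count when the option is unset and rejects non-positive values. That validation logic can be sketched without `oslo.config` — here a plain `conf` dict stands in for `cfg.CONF` (an assumption for illustration only):

```python
import multiprocessing


def get_workers(conf, name):
    # Fall back to the machine's CPU count when the option is unset/None.
    workers = conf.get('%s_workers' % name) or multiprocessing.cpu_count()
    if workers < 1:
        raise ValueError('%s_workers value of %s is invalid, '
                         'must be greater than 0' % (name, workers))
    return workers


assert get_workers({'collector_workers': 4}, 'collector') == 4
# None falls through `or` to the CPU count, which is always >= 1.
assert get_workers({'collector_workers': None}, 'collector') >= 1
try:
    get_workers({'collector_workers': -1}, 'collector')
    raise AssertionError('negative worker count should be rejected')
except ValueError:
    pass
```

Note the subtlety the original preserves: `-1` is truthy, so it survives the `or` fallback and must be caught by the explicit `< 1` check.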
@@ -1,205 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from oslo.config import cfg
import pecan.testing

from sticks.api import auth
from sticks.tests import base

PATH_PREFIX = '/v1'


class TestApiBase(base.TestBase):

    def setUp(self):
        super(TestApiBase, self).setUp()
        self.app = self._make_app()
        cfg.CONF.set_override("auth_version",
                              "v2.0",
                              group=auth.KeystoneAuth.OPT_GROUP_NAME)

    def _make_app(self, enable_acl=False):
        root_dir = self.path_get()

        self.config = {
            'app': {
                'root': 'sticks.api.root.RootController',
                'modules': ['sticks.api'],
                'static_root': '%s/public' % root_dir,
                'template_path': '%s/api/templates' % root_dir,
                'enable_acl': enable_acl,
                'acl_public_routes': ['/', '/v1']
            },
        }
        return pecan.testing.load_test_app(self.config)

    def _request_json(self, path, params, expect_errors=False, headers=None,
                      method="post", extra_environ=None, status=None,
                      path_prefix=PATH_PREFIX):
        """Sends a simulated HTTP request to the Pecan test app.

        :param path: url path of target service
        :param params: content for wsgi.input of request
        :param expect_errors: Boolean value; whether an error is expected
                              based on request
        :param headers: a dictionary of headers to send along with the request
        :param method: Request method type. The appropriate method function
                       call should be used rather than passing the attribute
                       in.
        :param extra_environ: a dictionary of environ variables to send along
                              with the request
        :param status: expected status code of response
        :param path_prefix: prefix of the url path
        """
        full_path = path_prefix + path
        print('%s: %s %s' % (method.upper(), full_path, params))
        response = getattr(self.app, "%s_json" % method)(
            str(full_path),
            params=params,
            headers=headers,
            status=status,
            extra_environ=extra_environ,
            expect_errors=expect_errors
        )
        print('GOT:%s' % response)
        return response

    def put_json(self, path, params, expect_errors=False, headers=None,
                 extra_environ=None, status=None):
        """Sends a simulated HTTP PUT request to the Pecan test app.

        :param path: url path of target service
        :param params: content for wsgi.input of request
        :param expect_errors: Boolean value; whether an error is expected
                              based on request
        :param headers: a dictionary of headers to send along with the request
        :param extra_environ: a dictionary of environ variables to send along
                              with the request
        :param status: expected status code of response
        """
        return self._request_json(path=path, params=params,
                                  expect_errors=expect_errors,
                                  headers=headers, extra_environ=extra_environ,
                                  status=status, method="put")

    def post_json(self, path, params, expect_errors=False, headers=None,
                  extra_environ=None, status=None):
        """Sends a simulated HTTP POST request to the Pecan test app.

        :param path: url path of target service
        :param params: content for wsgi.input of request
        :param expect_errors: Boolean value; whether an error is expected
                              based on request
        :param headers: a dictionary of headers to send along with the request
        :param extra_environ: a dictionary of environ variables to send along
                              with the request
        :param status: expected status code of response
        """
        return self._request_json(path=path, params=params,
                                  expect_errors=expect_errors,
                                  headers=headers, extra_environ=extra_environ,
                                  status=status, method="post")

    def patch_json(self, path, params, expect_errors=False, headers=None,
                   extra_environ=None, status=None):
        """Sends a simulated HTTP PATCH request to the Pecan test app.

        :param path: url path of target service
        :param params: content for wsgi.input of request
        :param expect_errors: Boolean value; whether an error is expected
                              based on request
        :param headers: a dictionary of headers to send along with the request
        :param extra_environ: a dictionary of environ variables to send along
                              with the request
        :param status: expected status code of response
        """
        return self._request_json(path=path, params=params,
                                  expect_errors=expect_errors,
                                  headers=headers, extra_environ=extra_environ,
                                  status=status, method="patch")

    def delete(self, path, expect_errors=False, headers=None,
               extra_environ=None, status=None, path_prefix=PATH_PREFIX):
        """Sends a simulated HTTP DELETE request to the Pecan test app.

        :param path: url path of target service
        :param expect_errors: Boolean value; whether an error is expected
                              based on request
        :param headers: a dictionary of headers to send along with the request
        :param extra_environ: a dictionary of environ variables to send along
                              with the request
        :param status: expected status code of response
        :param path_prefix: prefix of the url path
        """
        full_path = path_prefix + path
        print('DELETE: %s' % (full_path))
        response = self.app.delete(str(full_path),
                                   headers=headers,
                                   status=status,
                                   extra_environ=extra_environ,
                                   expect_errors=expect_errors)
        print('GOT:%s' % response)
        return response

    def get_json(self, path, expect_errors=False, headers=None,
                 extra_environ=None, q=[], path_prefix=PATH_PREFIX, **params):
        """Sends a simulated HTTP GET request to the Pecan test app.

        :param path: url path of target service
        :param expect_errors: Boolean value; whether an error is expected
                              based on request
        :param headers: a dictionary of headers to send along with the request
        :param extra_environ: a dictionary of environ variables to send along
                              with the request
        :param q: list of queries consisting of: field, value, op, and type
                  keys
        :param path_prefix: prefix of the url path
        :param params: content for wsgi.input of request
        """
        full_path = path_prefix + path
        query_params = {'q.field': [],
                        'q.value': [],
                        'q.op': [],
                        }
        for query in q:
            for name in ['field', 'op', 'value']:
                query_params['q.%s' % name].append(query.get(name, ''))
        all_params = {}
        all_params.update(params)
        if q:
            all_params.update(query_params)
        print('GET: %s %r' % (full_path, all_params))
        response = self.app.get(full_path,
                                params=all_params,
                                headers=headers,
                                extra_environ=extra_environ,
                                expect_errors=expect_errors)
        if not expect_errors:
            response = response.json
        print('GOT:%s' % response)
        return response

    def validate_link(self, link):
        """Checks if the given link can get correct data."""
        # removes the 'http://localhost' part
        full_path = link.split('localhost', 1)[1]
        try:
            self.get_json(full_path, path_prefix='')
            return True
        except Exception:
            return False
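`get_json` above flattens its `q` list of query dicts into parallel `q.field`/`q.op`/`q.value` request parameters. That transformation in isolation (a self-contained sketch of the same loop):

```python
def flatten_queries(q):
    # Parallel lists, one entry per query dict; missing keys become ''.
    query_params = {'q.field': [], 'q.value': [], 'q.op': []}
    for query in q:
        for name in ['field', 'op', 'value']:
            query_params['q.%s' % name].append(query.get(name, ''))
    return query_params


params = flatten_queries([{'field': 'project', 'op': 'eq', 'value': 'foo'},
                          {'field': 'status'}])
assert params['q.field'] == ['project', 'status']
assert params['q.op'] == ['eq', '']
assert params['q.value'] == ['foo', '']
```

Keeping the three lists index-aligned is what lets the server side reassemble each query triple from a flat query string.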
@@ -1,135 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import datetime
import json

import mock

from sticks.tests.api import base


class MockResource(object):
    name = None
    id = None

    def __init__(self, name, id):
        self.name = name
        self.id = id


class MockStatus(MockResource):
    pass


class MockProject(MockResource):
    pass


class MockIssue(object):
    subject = None
    id = None
    status = None
    project = None
    start_date = None

    def __init__(self, id, project, subject, status, start_date):
        self.id = id
        self.subject = subject
        self.start_date = datetime.datetime.strptime(start_date, '%Y-%m-%d')
        self.status = MockStatus(status, "123")
        self.project = MockProject(project, "123")


class TestTicket(base.TestApiBase):

    def setUp(self):
        super(TestTicket, self).setUp()
        application = self.app.app.application
        self.tracking = application.root.v1.tickets.sticks_manager.dm.driver
        self.tracking.redmine = mock.MagicMock()

    def test_get_all(self):
        response_body = {'tickets': [{'project': 'test_project_1',
                                      'status': 'test_status_1',
                                      'start_date': '2015-01-01T00:00:00',
                                      'id': 'test_id_1',
                                      'title': 'test_subject_1',
                                      },
                                     {'project': 'test_project_2',
                                      'status': 'test_status_2',
                                      'start_date': '2015-01-01T00:00:00',
                                      'id': 'test_id_2',
                                      'title': 'test_subject_2',
                                      }]}
        mock_issues = [MockIssue("test_id_1",
                                 "test_project_1",
                                 "test_subject_1",
                                 "test_status_1",
                                 "2015-01-01"),
                       MockIssue("test_id_2",
                                 "test_project_2",
                                 "test_subject_2",
                                 "test_status_2",
                                 "2015-01-01"),
                       ]

        self.tracking._get_issues = mock.MagicMock(return_value=mock_issues)

        resp = self.app.get('/v1/tickets', {'project': 'foo'})

        self.assertEqual(json.loads(resp.body), response_body)

    def test_get_ticket(self):
        response_body = {'project': 'test_project_1',
                         'status': 'test_status_1',
                         'start_date': '2015-01-01T00:00:00',
                         'id': 'test_id_1',
                         'title': 'test_subject_1',
                         }
        mock_issue = MockIssue("test_id_1",
                               "test_project_1",
                               "test_subject_1",
                               "test_status_1",
                               "2015-01-01")

        self.tracking.redmine.issue.get = mock.MagicMock(
            return_value=mock_issue)

        resp = self.app.get('/v1/tickets/1')

        self.assertEqual(json.loads(resp.body), response_body)

    def test_post_ticket(self):
        response_body = {'project': 'test_project_1',
                         'status': 'test_status_1',
                         'start_date': '2015-01-01T00:00:00',
                         'id': 'test_id_1',
                         'title': 'test_subject_1',
                         }
        mock_issue = MockIssue("test_id_1",
                               "test_project_1",
                               "test_subject_1",
                               "test_status_1",
                               "2015-01-01")

        self.tracking._redmine_create = mock.MagicMock(return_value=mock_issue)

        resp = self.post_json('/tickets', {'project': 'foo', 'title': 'bar'})

        # Check message
        self.assertEqual(json.loads(resp.body), response_body)
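The tests above stub the tracking driver's backend calls with `mock.MagicMock(return_value=...)` so no request ever reaches Redmine. The same stubbing pattern, shown with the stdlib `unittest.mock` (the `mock` package above is its pre-Python-3.3 backport) and a hypothetical `Driver` class standing in for the sticks driver:

```python
from unittest import mock


class Driver(object):
    def _get_issues(self, project):
        raise RuntimeError('would hit the real Redmine backend')


driver = Driver()
# Replace the backend call so the test never touches the network.
driver._get_issues = mock.MagicMock(return_value=['issue-1', 'issue-2'])

assert driver._get_issues('foo') == ['issue-1', 'issue-2']
driver._get_issues.assert_called_once_with('foo')
```

Beyond canned return values, the `MagicMock` also records its calls, which is what `assert_called_once_with` inspects.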
@@ -1,53 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os

from oslo.config import cfg
from oslotest import base

from sticks.tests import config_fixture
from sticks.tests import policy_fixture


CONF = cfg.CONF


class TestBase(base.BaseTestCase):
    """Test case base class for all unit tests."""

    def setUp(self):
        super(TestBase, self).setUp()
        self.useFixture(config_fixture.ConfigFixture(CONF))
        self.policy = self.useFixture(policy_fixture.PolicyFixture())

    def path_get(self, project_file=None):
        """Get the absolute path to a file. Used for testing the API.

        :param project_file: File whose path to return. Default: None.
        :returns: path to the specified file, or path to project root.
        """
        root = os.path.abspath(os.path.join(os.path.dirname(__file__),
                                            '..',
                                            '..',
                                            )
                               )
        if project_file:
            return os.path.join(root, project_file)
        else:
            return root


class TestBaseFaulty(TestBase):
    """This test ensures we aren't letting any exceptions go unhandled."""
@@ -1,35 +0,0 @@
#
# Copyright (c) 2015 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import fixtures
from oslo.config import cfg

from sticks.common import config

CONF = cfg.CONF


class ConfigFixture(fixtures.Fixture):
    """Fixture to manage global conf settings."""

    def __init__(self, conf):
        self.conf = conf

    def setUp(self):
        super(ConfigFixture, self).setUp()
        self.conf.set_default('verbose', True)
        config.parse_args([], default_config_files=[])
        self.addCleanup(self.conf.reset)
@@ -1,22 +0,0 @@
#
# Copyright (c) 2015 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

policy_data = """
{
    "context_is_admin": "role:admin",
    "default": ""
}
"""
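The `policy_data` string above is plain JSON, which is why the policy fixture in the next file can write it straight to a `policy.json` file for the enforcer. A quick self-contained check of its shape:

```python
import json

# Same literal as the fixture's policy_data string.
policy_data = """
{
    "context_is_admin": "role:admin",
    "default": ""
}
"""

rules = json.loads(policy_data)  # surrounding whitespace is tolerated
assert rules['context_is_admin'] == 'role:admin'
assert rules['default'] == ''
```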
@@ -1,45 +0,0 @@
#
# Copyright (c) 2015 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import os

import fixtures
from oslo.config import cfg

from sticks.common import policy as sticks_policy
from sticks.openstack.common import policy as common_policy
from sticks.tests import fake_policy

CONF = cfg.CONF


class PolicyFixture(fixtures.Fixture):

    def setUp(self):
        super(PolicyFixture, self).setUp()
        self.policy_dir = self.useFixture(fixtures.TempDir())
        self.policy_file_name = os.path.join(self.policy_dir.path,
                                             'policy.json')
        with open(self.policy_file_name, 'w') as policy_file:
            policy_file.write(fake_policy.policy_data)
        CONF.set_override('policy_file', self.policy_file_name)
        sticks_policy._ENFORCER = None
        self.addCleanup(sticks_policy.get_enforcer().clear)

    def set_rules(self, rules):
        common_policy.set_rules(common_policy.Rules(
            dict((k, common_policy.parse_rule(v))
                 for k, v in rules.items())))
@@ -1,35 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
test_sticks
----------------------------------

Tests for `sticks` module.
"""
from stevedore import driver

from sticks.tests import base
from sticks.tracking import redmine_tracking


class TestSticks(base.TestBase):

    def test_call(self):
        dm = driver.DriverManager('sticks.tracking', 'redmine',
                                  invoke_on_load=True)
        self.assertIsInstance(dm.driver, redmine_tracking.RedmineTracking)
@@ -1,143 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


import abc
import fnmatch
import six

from oslo.config import cfg
from oslo import messaging

from sticks.client import keystone_client
from sticks.openstack.common.gettextutils import _  # noqa
from sticks.openstack.common import log


LOG = log.getLogger(__name__)

sticks_role_opt = [
    cfg.StrOpt('sticks_role_name', default='sticks',
               help=_('Required role to issue tickets.'))
]

CONF = cfg.CONF
CONF.register_opts(sticks_role_opt)


@six.add_metaclass(abc.ABCMeta)
class TrackingBase(object):
    """Base class for tracking plugins."""

    _name = None

    _ROLE_ASSIGNMENT_CREATED = 'identity.created.role_assignment'

    def __init__(self, description=None, provider=None, type=None,
                 tool_name=None):
        self.default_events = [self._ROLE_ASSIGNMENT_CREATED]
        self._subscribedEvents = self.default_events
        self._name = "{0}.{1}".format(self.__class__.__module__,
                                      self.__class__.__name__)
        self.kc = None
        self.conf = CONF

    def subscribe_event(self, event):
        if event not in self._subscribedEvents:
            self._subscribedEvents.append(event)

    def register_manager(self, manager):
        """Enable the plugin to add tasks to the manager.

        :param manager: the task manager to add tasks to
        """
        self.manager = manager

    def _has_sticks_role(self, user_id, role_id, project_id):
        """Evaluate whether this user has the sticks role.

        Returns ``True`` or ``False``.
        """
        if self.kc is None:
            self.kc = keystone_client.Client()

        roles = [role.name for role in
                 self.kc.roles_for_user(user_id, project_id)
                 if role.id == role_id]
        return self.conf.sticks_role_name in [role.lower() for role in roles]

    @abc.abstractmethod
    def create_ticket(self, data):
        """Create a ticket with data.

        :param data: A dictionary with string keys and simple types as
                     values.
        :type data: dict(str:?)
        :returns: The created ticket.
        """

    @abc.abstractmethod
    def create_project(self, data):
        """Create a tracking project.

        :param data: A dictionary with string keys and simple types as
                     values.
        :type data: dict(str:?)
        :returns: The created project.
        """

    @staticmethod
    def _handle_event_type(subscribed_events, event_type):
        """Check whether event_type should be handled.

        The event type is matched against the subscribed event patterns.
        """
        return any(map(lambda e: fnmatch.fnmatch(event_type, e),
                       subscribed_events))

    @staticmethod
    def get_targets(conf):
        """Return a sequence of oslo.messaging.Target.

        The sequence defines the exchange and topics to be connected for
        this plugin.
        """
        return [messaging.Target(topic=topic)
                for topic in cfg.CONF.notification_topics]

    def get_project(self, project_id):

        if self.kc is None:
            self.kc = keystone_client.Client()

        return self.kc.project_get(project_id)

    def process_notification(self, ctxt, publisher_id, event_type, payload,
                             metadata):
        """Process events."""
        # Default action: create project
        if event_type == self._ROLE_ASSIGNMENT_CREATED:
            project = self.get_project(payload['project'])
            if self._has_sticks_role(payload['user'],
                                     payload['role'],
                                     payload['project']):
                self.create_project(payload['project'], project.name)

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # Check if the event is registered for this plugin
        if self._handle_event_type(self._subscribedEvents, event_type):
            self.process_notification(ctxt, publisher_id, event_type, payload,
                                      metadata)
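The deleted `_handle_event_type` helper above filters incoming notifications with shell-style glob patterns via `fnmatch`. A minimal standalone sketch of that matching logic (the pattern list here is a hypothetical example, not taken from the project's config):

```python
import fnmatch


def handle_event_type(subscribed_events, event_type):
    # Same idea as TrackingBase._handle_event_type: the event type is
    # matched against each subscribed pattern with shell-style globbing.
    return any(fnmatch.fnmatch(event_type, pattern)
               for pattern in subscribed_events)


patterns = ['identity.created.role_assignment', 'identity.deleted.*']
print(handle_event_type(patterns, 'identity.created.role_assignment'))  # True
print(handle_event_type(patterns, 'identity.deleted.user'))             # True
print(handle_event_type(patterns, 'compute.instance.create.end'))       # False
```

Because `*` matches any suffix, subscribing to `identity.deleted.*` covers every deletion event without enumerating them.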
@@ -1,142 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from oslo.config import cfg
import redmine

from sticks.api.v1.datamodels import ticket as ticket_models
from sticks.openstack.common import log
from sticks import tracking

LOG = log.getLogger(__name__)

redmine_group = cfg.OptGroup(name='redmine', title='Redmine plugin options')

redmine_group_opts = [
    cfg.StrOpt('redmine_url', help='Redmine server URL', default='http://'),
    cfg.StrOpt('redmine_login', help='Redmine API user', default=''),
    cfg.StrOpt('redmine_password', help='Redmine API password', default=''),
]
cfg.CONF.register_group(redmine_group)
cfg.CONF.register_opts(redmine_group_opts, group=redmine_group)


OPTS = [
    cfg.StrOpt('tracking_plugin', default='redmine'),
]


class RedmineTracking(tracking.TrackingBase):
    """Redmine tracking driver."""

    mapping_ticket = dict(
        title='subject',
        project='project_id',
    )

    def __init__(self):
        super(RedmineTracking, self).__init__()
        super(RedmineTracking, self).subscribe_event(
            self._ROLE_ASSIGNMENT_CREATED)
        self.redmine = redmine.Redmine(
            cfg.CONF.redmine.redmine_url,
            username=cfg.CONF.redmine.redmine_login,
            password=cfg.CONF.redmine.redmine_password)

    def _from_issue(self, issue):
        """Create a TicketResource from a redmine Issue.

        :param issue: Redmine issue object
        :return: TicketResource
        """
        return ticket_models.TicketResource(id=str(issue.id),
                                            title=issue.subject,
                                            status=issue.status.name,
                                            start_date=issue.start_date,
                                            project=issue.project.name)

    def _to_resource(self, dic, mapping):
        """Map a dict representation of a resource type (datamodels)
        to the redmine resource representation.

        :param dic: dict representation of Resource (datamodels)
        :param mapping: field-name mapping

        :return: dict keyed by redmine field names
        """
        return {mapping[k]: v for k, v in dic.items()}

    def _redmine_create(self, data):
        return self.redmine.issue.create(**data)

    def _get_issues(self, project_id):
        project = self.redmine.project.get(project_id)
        return project.issues

    def create_ticket(self, data):
        """Create an issue.

        :param data: TicketResource
        :return: TicketResource of the created ticket
        """
        resp = self._redmine_create(self._to_resource(
            data.as_dict(),
            self.mapping_ticket))

        return self._from_issue(resp)

    def get_tickets(self, project_id):
        """Return all issues filtered by project_id.

        :param project_id:
        :return: TicketResourceCollection
        """
        issues = self._get_issues(project_id)

        return ticket_models.TicketResourceCollection(
            tickets=[self._from_issue(issue) for issue in issues])

    def get_ticket(self, ticket_id):
        """Return the issue with the given id.

        :param ticket_id:
        :return: TicketResource
        """
        issue = self.redmine.issue.get(ticket_id)

        return self._from_issue(issue)

    def process_notification(self, ctxt, publisher_id, event_type, payload,
                             metadata):
        """Specific notification processing."""
        super(RedmineTracking, self).process_notification(ctxt,
                                                          publisher_id,
                                                          event_type, payload,
                                                          metadata)

    def create_project(self, identifier, project_name):
        """Create a tracking project.

        :param identifier: project identifier
        :param project_name: project display name
        :return: the created redmine project
        """
        resp = self.redmine.project.create(project=identifier,
                                           name=project_name,
                                           identifier=identifier)

        return resp
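The `_to_resource` translation above is a plain key-rename over a field mapping. A self-contained sketch of that step, reusing the `mapping_ticket` field names from the class (the ticket dict is a made-up example):

```python
# Field mapping copied from RedmineTracking.mapping_ticket:
# datamodel field name -> redmine field name.
mapping_ticket = {'title': 'subject', 'project': 'project_id'}


def to_resource(dic, mapping):
    # Rename each key according to the mapping; values pass through.
    return {mapping[k]: v for k, v in dic.items()}


ticket = {'title': 'Broken VM', 'project': 'demo'}
print(to_resource(ticket, mapping_ticket))
```

Note that every key in the input dict must appear in the mapping, or the comprehension raises `KeyError`; the driver only ever feeds it fields listed in `mapping_ticket`.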
sticks/utils.py
@@ -1,163 +0,0 @@
#
# Copyright (c) 2014 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Utilities and helper functions."""

import calendar
import copy
import datetime
import decimal
import multiprocessing

from oslo.utils import timeutils
from oslo.utils import units


def restore_nesting(d, separator=':'):
    """Unwind a flattened dict to restore nesting."""
    d = copy.copy(d) if any([separator in k for k in d.keys()]) else d
    for k, v in d.items():
        if separator in k:
            top, rem = k.split(separator, 1)
            nest = d[top] if isinstance(d.get(top), dict) else {}
            nest[rem] = v
            d[top] = restore_nesting(nest, separator)
            del d[k]
    return d


def dt_to_decimal(utc):
    """Datetime to Decimal.

    Some databases don't store microseconds in datetime,
    so we always store as Decimal unixtime.
    """
    if utc is None:
        return None

    decimal.getcontext().prec = 30
    return decimal.Decimal(str(calendar.timegm(utc.utctimetuple()))) + \
        (decimal.Decimal(str(utc.microsecond)) /
         decimal.Decimal("1000000.0"))


def decimal_to_dt(dec):
    """Return a datetime from Decimal unixtime format."""
    if dec is None:
        return None

    integer = int(dec)
    micro = (dec - decimal.Decimal(integer)) * decimal.Decimal(units.M)
    daittyme = datetime.datetime.utcfromtimestamp(integer)
    return daittyme.replace(microsecond=int(round(micro)))


def sanitize_timestamp(timestamp):
    """Return a naive utc datetime object."""
    if not timestamp:
        return timestamp
    if not isinstance(timestamp, datetime.datetime):
        timestamp = timeutils.parse_isotime(timestamp)
    return timeutils.normalize_time(timestamp)


def stringify_timestamps(data):
    """Stringify any datetimes in the given dict."""
    isa_timestamp = lambda v: isinstance(v, datetime.datetime)
    return dict((k, v.isoformat() if isa_timestamp(v) else v)
                for (k, v) in data.iteritems())


def dict_to_keyval(value, key_base=None):
    """Expand a given dict to its corresponding key-value pairs.

    Generated keys are fully qualified, delimited using dot notation,
    i.e. key = 'key.child_key.grandchild_key[0]'.
    """
    val_iter, key_func = None, None
    if isinstance(value, dict):
        val_iter = value.iteritems()
        key_func = lambda k: key_base + '.' + k if key_base else k
    elif isinstance(value, (tuple, list)):
        val_iter = enumerate(value)
        key_func = lambda k: key_base + '[%d]' % k

    if val_iter:
        for k, v in val_iter:
            key_gen = key_func(k)
            if isinstance(v, (dict, tuple, list)):
                for key_gen, v in dict_to_keyval(v, key_gen):
                    yield key_gen, v
            else:
                yield key_gen, v


def lowercase_keys(mapping):
    """Convert the keys in the mapping to lowercase."""
    items = mapping.items()
    for key, value in items:
        del mapping[key]
        mapping[key.lower()] = value


def lowercase_values(mapping):
    """Convert the values in the mapping dict to lowercase."""
    items = mapping.items()
    for key, value in items:
        mapping[key] = value.lower()


def update_nested(original_dict, updates):
    """Update the leaf nodes in a nested dict, without replacing
    entire sub-dicts.
    """
    dict_to_update = copy.deepcopy(original_dict)
    for key, value in updates.iteritems():
        if isinstance(value, dict):
            sub_dict = update_nested(dict_to_update.get(key, {}), value)
            dict_to_update[key] = sub_dict
        else:
            dict_to_update[key] = updates[key]
    return dict_to_update


def cpu_count():
    try:
        return multiprocessing.cpu_count() or 1
    except NotImplementedError:
        return 1


def uniq(dupes, attrs):
    """Exclude elements of dupes with a duplicated set of attribute values."""
    key = lambda d: '/'.join([getattr(d, a) or '' for a in attrs])
    keys = []
    deduped = []
    for d in dupes:
        if key(d) not in keys:
            deduped.append(d)
            keys.append(key(d))
    return deduped


def create_datetime_obj(date):
    """Build a datetime object from a date string.

    :param date: The date to build a datetime object from.
                 Format: 20150109T10:53:50
    :return: a datetime object
    """
    return datetime.datetime.strptime(date, '%Y%m%dT%H:%M:%S')
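`restore_nesting` above rebuilds a nested dict from separator-delimited flat keys. A Python 3 sketch of the same logic (the original iterates `d.items()` directly while deleting keys, which was fine on Python 2's list-returning `items()`; here the items are snapshotted before mutation):

```python
import copy


def restore_nesting(d, separator=':'):
    # Same logic as sticks.utils.restore_nesting: split flattened keys
    # on the separator and rebuild the nested dict recursively.
    d = copy.copy(d) if any(separator in k for k in d.keys()) else d
    for k, v in list(d.items()):
        if separator in k:
            top, rem = k.split(separator, 1)
            nest = d[top] if isinstance(d.get(top), dict) else {}
            nest[rem] = v
            d[top] = restore_nesting(nest, separator)
            del d[k]
    return d


print(restore_nesting({'a:b': 1, 'a:c': 2, 'x': 3}))
```

Keys sharing a prefix collapse into one sub-dict, so the example yields `{'a': {'b': 1, 'c': 2}, 'x': 3}`; inputs without the separator are returned unchanged and uncopied.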
@@ -1,19 +0,0 @@
#
# Copyright (c) 2015 EUROGICIEL
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import pbr.version

version_info = pbr.version.VersionInfo('sticks')
@@ -1,21 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

hacking<0.10,>=0.9.2
# mock object framework
mock>=1.2
coverage>=3.6
discover
# fixture stubbing
fixtures>=1.3.1
oslotest>=1.10.0 # Apache-2.0
python-subunit>=0.0.18
nose
nose-exclude
nosexcover
# Doc requirements
sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
oslosphinx>=2.5.0 # Apache-2.0
sphinxcontrib-httpdomain
sphinxcontrib-pecanwsme>=0.8
@@ -1,25 +0,0 @@
#!/usr/bin/env bash

PROJECT_NAME=${PROJECT_NAME:-sticks}
CFGFILE_NAME=${PROJECT_NAME}.conf.sample

if [ -e etc/${PROJECT_NAME}/${CFGFILE_NAME} ]; then
    CFGFILE=etc/${PROJECT_NAME}/${CFGFILE_NAME}
elif [ -e etc/${CFGFILE_NAME} ]; then
    CFGFILE=etc/${CFGFILE_NAME}
else
    echo "${0##*/}: can not find config file"
    exit 1
fi

TEMPDIR=`mktemp -d /tmp/${PROJECT_NAME}.XXXXXX`
trap "rm -rf $TEMPDIR" EXIT

tools/config/generate_sample.sh -b ./ -p ${PROJECT_NAME} -o ${TEMPDIR}

if ! diff -u ${TEMPDIR}/${CFGFILE_NAME} ${CFGFILE}
then
    echo "${0##*/}: ${PROJECT_NAME}.conf.sample is not up to date."
    echo "${0##*/}: Please run ${0%%${0##*/}}generate_sample.sh."
    exit 1
fi
@@ -1,119 +0,0 @@
#!/usr/bin/env bash

print_hint() {
    echo "Try \`${0##*/} --help' for more information." >&2
}

PARSED_OPTIONS=$(getopt -n "${0##*/}" -o hb:p:m:l:o: \
                 --long help,base-dir:,package-name:,output-dir:,module:,library: -- "$@")

if [ $? != 0 ] ; then print_hint ; exit 1 ; fi

eval set -- "$PARSED_OPTIONS"

while true; do
    case "$1" in
        -h|--help)
            echo "${0##*/} [options]"
            echo ""
            echo "options:"
            echo "-h, --help                show brief help"
            echo "-b, --base-dir=DIR        project base directory"
            echo "-p, --package-name=NAME   project package name"
            echo "-o, --output-dir=DIR      file output directory"
            echo "-m, --module=MOD          extra python module to interrogate for options"
            echo "-l, --library=LIB         extra library that registers options for discovery"
            exit 0
            ;;
        -b|--base-dir)
            shift
            BASEDIR=`echo $1 | sed -e 's/\/*$//g'`
            shift
            ;;
        -p|--package-name)
            shift
            PACKAGENAME=`echo $1`
            shift
            ;;
        -o|--output-dir)
            shift
            OUTPUTDIR=`echo $1 | sed -e 's/\/*$//g'`
            shift
            ;;
        -m|--module)
            shift
            MODULES="$MODULES -m $1"
            shift
            ;;
        -l|--library)
            shift
            LIBRARIES="$LIBRARIES -l $1"
            shift
            ;;
        --)
            break
            ;;
    esac
done

BASEDIR=${BASEDIR:-`pwd`}
if ! [ -d $BASEDIR ]
then
    echo "${0##*/}: missing project base directory" >&2 ; print_hint ; exit 1
elif [[ $BASEDIR != /* ]]
then
    BASEDIR=$(cd "$BASEDIR" && pwd)
fi

PACKAGENAME=${PACKAGENAME:-$(python setup.py --name)}
TARGETDIR=$BASEDIR/$PACKAGENAME
if ! [ -d $TARGETDIR ]
then
    echo "${0##*/}: invalid project package name" >&2 ; print_hint ; exit 1
fi

OUTPUTDIR=${OUTPUTDIR:-$BASEDIR/etc}
# NOTE(bnemec): Some projects put their sample config in etc/,
# some in etc/$PACKAGENAME/
if [ -d $OUTPUTDIR/$PACKAGENAME ]
then
    OUTPUTDIR=$OUTPUTDIR/$PACKAGENAME
elif ! [ -d $OUTPUTDIR ]
then
    echo "${0##*/}: cannot access \`$OUTPUTDIR': No such file or directory" >&2
    exit 1
fi

BASEDIRESC=`echo $BASEDIR | sed -e 's/\//\\\\\//g'`
find $TARGETDIR -type f -name "*.pyc" -delete
FILES=$(find $TARGETDIR -type f -name "*.py" ! -path "*/tests/*" \
        -exec grep -l "Opt(" {} + | sed -e "s/^$BASEDIRESC\///g" | sort -u)

RC_FILE="`dirname $0`/oslo.config.generator.rc"
if test -r "$RC_FILE"
then
    source "$RC_FILE"
fi

for mod in ${STICKS_CONFIG_GENERATOR_EXTRA_MODULES}; do
    MODULES="$MODULES -m $mod"
done

for lib in ${STICKS_CONFIG_GENERATOR_EXTRA_LIBRARIES}; do
    LIBRARIES="$LIBRARIES -l $lib"
done

export EVENTLET_NO_GREENDNS=yes

OS_VARS=$(set | sed -n '/^OS_/s/=[^=]*$//gp' | xargs)
[ "$OS_VARS" ] && eval "unset \$OS_VARS"
DEFAULT_MODULEPATH=sticks.openstack.common.config.generator
MODULEPATH=${MODULEPATH:-$DEFAULT_MODULEPATH}
OUTPUTFILE=$OUTPUTDIR/$PACKAGENAME.conf.sample
python -m $MODULEPATH $MODULES $LIBRARIES $FILES > $OUTPUTFILE

# Hook to allow projects to append custom config file snippets
CONCAT_FILES=$(ls $BASEDIR/tools/config/*.conf.sample 2>/dev/null)
for CONCAT_FILE in $CONCAT_FILES; do
    cat $CONCAT_FILE >> $OUTPUTFILE
done
@@ -1,2 +0,0 @@
export STICKS_CONFIG_GENERATOR_EXTRA_LIBRARIES='oslo.messaging'
export STICKS_CONFIG_GENERATOR_EXTRA_MODULES=keystoneclient.middleware.auth_token
tox.ini
@@ -1,49 +0,0 @@
[tox]
minversion = 1.6
envlist = py33,py34,py26,py27,pypy,pep8
skipsdist = True

[testenv]
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv =
   VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt

[testenv:pep8]
commands = flake8

[testenv:venv]
commands = {posargs}

[testenv:cover]
commands =
    nosetests --with-xunit --with-xcoverage --cover-package=sticks --nocapture --cover-tests --cover-branches --cover-min-percentage=50

[testenv:docs]
commands = python setup.py build_sphinx

[flake8]
# E125 continuation line does not distinguish itself from next logical line
# E126 continuation line over-indented for hanging indent
# E128 continuation line under-indented for visual indent
# E129 visually indented line with same indent as next logical line
# E265 block comment should start with '# '
# E713 test for membership should be 'not in'
# F402 import module shadowed by loop variable
# F811 redefinition of unused variable
# F812 list comprehension redefines name from line
# H104 file contains nothing but comments
# H237 module is removed in Python 3
# H305 imports not grouped correctly
# H307 like imports should be grouped together
# H401 docstring should not start with a space
# H402 one line docstring needs punctuation
# H405 multi line docstring summary not separated with an empty line
# H904 Wrap long lines in parentheses instead of a backslash
# TODO(marun) H404 multi line docstring should start with a summary
ignore = E125,E126,E128,E129,E265,E713,F402,F811,F812,H104,H237,H305,H307,H401,H402,H404,H405,H904
show-source = true
builtins = _
exclude = .venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build,tools