Retire repo

This repo was created by accident; use deb-python-saharaclient instead.

Depends-On: I2c2e6218cce314225ba956db4a21f1494212c8d3
Needed-By: I1ac1a06931c8b6dd7c2e73620a0302c29e605f03
Change-Id: I81894aea69b9d09b0977039623c26781093a397a
Andreas Jaeger 2017-04-17 21:17:47 +02:00
parent 9a440b9f06
commit f6e436ea5f
138 changed files with 13 additions and 16408 deletions

@@ -1,13 +0,0 @@
[run]
branch = True
source = saharaclient
omit =
*/openstack/common/*
.tox/*
saharaclient/tests/*
[paths]
source = saharaclient
[report]
ignore_errors = True

.gitignore
@@ -1,40 +0,0 @@
*.py[co]
*.egg
*.egg-info
dist
build
eggs
parts
var
sdist
develop-eggs
.installed.cfg
pip-log.txt
.tox
*.mo
.mr.developer.cfg
.DS_Store
Thumbs.db
.venv
.idea
out
target
*.iml
*.ipr
*.iws
*.db
.coverage
nosetests.xml
pylint-report.txt
ChangeLog
cscope.out
.testrepository
AUTHORS
cover
doc/html
doc/source/apidoc
doc/source/api
doc/build
*.log
# Files created by releasenotes build
releasenotes/build

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/python-saharaclient.git

@@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover $DISCOVER_DIRECTORY $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

@@ -1,21 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps in the "If you're a developer"
section of this page:
http://wiki.openstack.org/HowToContribute
You can find more Sahara-specific info in our How To Participate guide:
http://docs.openstack.org/developer/python-saharaclient/devref/how_to_participate.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://wiki.openstack.org/GerritWorkflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/python-saharaclient

@@ -1,12 +0,0 @@
Sahara Style Commandments
=========================
- Step 1: Read the OpenStack Style Commandments
http://docs.openstack.org/developer/hacking/
- Step 2: Read on
Sahara Specific Commandments
----------------------------
None so far

LICENSE
@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,9 +0,0 @@
include AUTHORS
include README.rst
include ChangeLog
include LICENSE
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

@@ -1,41 +0,0 @@
Python bindings to the OpenStack Sahara API
===========================================
.. image:: https://img.shields.io/pypi/v/python-saharaclient.svg
:target: https://pypi.python.org/pypi/python-saharaclient/
:alt: Latest Version
.. image:: https://img.shields.io/pypi/dm/python-saharaclient.svg
:target: https://pypi.python.org/pypi/python-saharaclient/
:alt: Downloads
This is a client for the OpenStack Sahara API. There's a Python API (the
``saharaclient`` module), and a command-line script (``sahara``). Each
implements the OpenStack Sahara API. You can find documentation for both
Python bindings and CLI in `Docs`_.
Development takes place via the usual OpenStack processes as outlined
in the `developer guide
<http://docs.openstack.org/infra/manual/developers.html>`_.
.. _Docs: http://docs.openstack.org/developer/python-saharaclient/
* License: Apache License, Version 2.0
* `PyPi`_ - package installation
* `Online Documentation`_
* `Launchpad project`_ - release management
* `Blueprints`_ - feature specifications
* `Bugs`_ - issue tracking
* `Source`_
* `Specs`_
* `How to Contribute`_
.. _PyPi: https://pypi.python.org/pypi/python-saharaclient
.. _Online Documentation: http://docs.openstack.org/developer/python-saharaclient
.. _Launchpad project: https://launchpad.net/python-saharaclient
.. _Blueprints: https://blueprints.launchpad.net/python-saharaclient
.. _Bugs: https://bugs.launchpad.net/python-saharaclient
.. _Source: https://git.openstack.org/cgit/openstack/python-saharaclient
.. _How to Contribute: http://docs.openstack.org/infra/manual/developers.html
.. _Specs: http://specs.openstack.org/openstack/sahara-specs/

README.txt
@@ -0,0 +1,13 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
Please use the deb-python-saharaclient project at
http://git.openstack.org/cgit/openstack/deb-python-saharaclient instead.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.
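The "git checkout HEAD^1" step described above can be demonstrated on a
throwaway local repository (a sketch only; the file names and contents below
are invented for illustration, not taken from this repository):

```shell
# Demonstrate that after a retirement-style commit, HEAD^1 still holds
# the pre-retirement contents of the repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo

# Simulate the original repository contents
echo "library code" > lib.txt
git add lib.txt
git commit -qm "initial content"

# Simulate the retirement commit: remove the code, add a notice
git rm -q lib.txt
echo "This project is no longer maintained." > README.txt
git add README.txt
git commit -qm "Retire repo"

# Step back one commit to see the pre-retirement contents
git checkout -q HEAD^1
cat lib.txt
```

The same `git checkout HEAD^1` invocation works against a clone of the
retired repository itself.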

@@ -1,90 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
import os
import sys
from docutils import nodes
from . import ext
def _get_command(classes):
"""Associate each command class with its command name, as defined in setup.cfg.
"""
commands = {}
setup_file = os.path.join(
os.path.abspath(os.path.join(os.path.dirname(__file__), '../..')),
'setup.cfg')
for line in open(setup_file, 'r'):
for cl in classes:
if cl in line:
commands[cl] = line.split(' = ')[0].strip().replace('_', ' ')
return commands
class ArgParseDirectiveOSC(ext.ArgParseDirective):
"""Sphinx extension that automatically documents the commands and options
of a module containing OpenStackClient/cliff command objects.
Usage example:
.. cli::
:module: saharaclient.osc.v1.clusters
"""
def run(self):
module_name = self.options['module']
mod = __import__(module_name, globals(), locals())
classes = inspect.getmembers(sys.modules[module_name], inspect.isclass)
classes_names = [cl[0] for cl in classes]
commands = _get_command(classes_names)
items = []
for cl in classes:
parser = cl[1](None, None).get_parser(None)
parser.prog = commands[cl[0]]
items.append(nodes.subtitle(text=commands[cl[0]]))
result = ext.parse_parser(
parser, skip_default_values='nodefault' in self.options)
result = ext.parser_navigate(result, '')
nested_content = ext.nodes.paragraph()
self.state.nested_parse(
self.content, self.content_offset, nested_content)
nested_content = nested_content.children
for item in nested_content:
if not isinstance(item, ext.nodes.definition_list):
items.append(item)
if 'description' in result:
items.append(self._nested_parse_paragraph(result['description']))
items.append(ext.nodes.literal_block(text=result['usage']))
items.append(ext.print_command_args_and_opts(
ext.print_arg_list(result, nested_content),
ext.print_opt_list(result, nested_content),
ext.print_subcommand_list(result, nested_content)
))
if 'epilog' in result:
items.append(self._nested_parse_paragraph(result['epilog']))
return items
def setup(app):
app.add_directive('cli', ArgParseDirectiveOSC)

@@ -1,386 +0,0 @@
# Copyright (c) 2013 Alex Rudakov
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from argparse import ArgumentParser
import os
from docutils import nodes
from docutils.statemachine import StringList
from docutils.parsers.rst.directives import flag, unchanged
from sphinx.util.compat import Directive
from sphinx.util.nodes import nested_parse_with_titles
from .parser import parse_parser, parser_navigate
def map_nested_definitions(nested_content):
if nested_content is None:
raise Exception('Nested content should be iterable, not null')
# build definition dictionary
definitions = {}
for item in nested_content:
if not isinstance(item, nodes.definition_list):
continue
for subitem in item:
if not isinstance(subitem, nodes.definition_list_item):
continue
if not len(subitem.children) > 0:
continue
classifier = '@after'
idx = subitem.first_child_matching_class(nodes.classifier)
if idx is not None:
ci = subitem[idx]
if len(ci.children) > 0:
classifier = ci.children[0].astext()
if classifier is not None and classifier not in (
'@replace', '@before', '@after'):
raise Exception('Unknown classifier: %s' % classifier)
idx = subitem.first_child_matching_class(nodes.term)
if idx is not None:
ch = subitem[idx]
if len(ch.children) > 0:
term = ch.children[0].astext()
idx = subitem.first_child_matching_class(nodes.definition)
if idx is not None:
def_node = subitem[idx]
def_node.attributes['classifier'] = classifier
definitions[term] = def_node
return definitions
def print_arg_list(data, nested_content):
definitions = map_nested_definitions(nested_content)
items = []
if 'args' in data:
for arg in data['args']:
my_def = [nodes.paragraph(text=arg['help'])] if arg['help'] else []
name = arg['name']
my_def = apply_definition(definitions, my_def, name)
if len(my_def) == 0:
my_def.append(nodes.paragraph(text='Undocumented'))
if 'choices' in arg:
my_def.append(nodes.paragraph(
text=('Possible choices: %s' % ', '.join([str(c) for c in arg['choices']]))))
items.append(
nodes.option_list_item(
'', nodes.option_group('', nodes.option_string(text=name)),
nodes.description('', *my_def)))
return nodes.option_list('', *items) if items else None
def print_opt_list(data, nested_content):
definitions = map_nested_definitions(nested_content)
items = []
if 'options' in data:
for opt in data['options']:
names = []
my_def = [nodes.paragraph(text=opt['help'])] if opt['help'] else []
for name in opt['name']:
option_declaration = [nodes.option_string(text=name)]
if opt['default'] is not None \
and opt['default'] != '==SUPPRESS==':
option_declaration += nodes.option_argument(
'', text='=' + str(opt['default']))
names.append(nodes.option('', *option_declaration))
my_def = apply_definition(definitions, my_def, name)
if len(my_def) == 0:
my_def.append(nodes.paragraph(text='Undocumented'))
if 'choices' in opt:
my_def.append(nodes.paragraph(
text=('Possible choices: %s' % ', '.join([str(c) for c in opt['choices']]))))
items.append(
nodes.option_list_item(
'', nodes.option_group('', *names),
nodes.description('', *my_def)))
return nodes.option_list('', *items) if items else None
def print_command_args_and_opts(arg_list, opt_list, sub_list=None):
items = []
if arg_list:
items.append(nodes.definition_list_item(
'', nodes.term(text='Positional arguments:'),
nodes.definition('', arg_list)))
if opt_list:
items.append(nodes.definition_list_item(
'', nodes.term(text='Options:'),
nodes.definition('', opt_list)))
if sub_list and len(sub_list):
items.append(nodes.definition_list_item(
'', nodes.term(text='Sub-commands:'),
nodes.definition('', sub_list)))
return nodes.definition_list('', *items)
def apply_definition(definitions, my_def, name):
if name in definitions:
definition = definitions[name]
classifier = definition['classifier']
if classifier == '@replace':
return definition.children
if classifier == '@after':
return my_def + definition.children
if classifier == '@before':
return definition.children + my_def
raise Exception('Unknown classifier: %s' % classifier)
return my_def
def print_subcommand_list(data, nested_content):
definitions = map_nested_definitions(nested_content)
items = []
if 'children' in data:
for child in data['children']:
my_def = [nodes.paragraph(
text=child['help'])] if child['help'] else []
name = child['name']
my_def = apply_definition(definitions, my_def, name)
if len(my_def) == 0:
my_def.append(nodes.paragraph(text='Undocumented'))
if 'description' in child:
my_def.append(nodes.paragraph(text=child['description']))
my_def.append(nodes.literal_block(text=child['usage']))
my_def.append(print_command_args_and_opts(
print_arg_list(child, nested_content),
print_opt_list(child, nested_content),
print_subcommand_list(child, nested_content)
))
items.append(
nodes.definition_list_item(
'',
nodes.term('', '', nodes.strong(text=name)),
nodes.definition('', *my_def)
)
)
return nodes.definition_list('', *items)
class ArgParseDirective(Directive):
has_content = True
option_spec = dict(module=unchanged, func=unchanged, ref=unchanged,
prog=unchanged, path=unchanged, nodefault=flag,
manpage=unchanged, nosubcommands=unchanged, passparser=flag)
def _construct_manpage_specific_structure(self, parser_info):
"""
Construct a typical man page consisting of the following elements:
NAME (automatically generated, out of our control)
SYNOPSIS
DESCRIPTION
OPTIONS
FILES
SEE ALSO
BUGS
"""
# SYNOPSIS section
synopsis_section = nodes.section(
'',
nodes.title(text='Synopsis'),
nodes.literal_block(text=parser_info["bare_usage"]),
ids=['synopsis-section'])
# DESCRIPTION section
description_section = nodes.section(
'',
nodes.title(text='Description'),
nodes.paragraph(text=parser_info.get(
'description', parser_info.get(
'help', "undocumented").capitalize())),
ids=['description-section'])
nested_parse_with_titles(
self.state, self.content, description_section)
if parser_info.get('epilog'):
# TODO: do whatever sphinx does to understand ReST inside
# docstrings magically imported from other places. The nested
# parse method invoked above seem to be able to do this but
# I haven't found a way to do it for arbitrary text
description_section += nodes.paragraph(
text=parser_info['epilog'])
# OPTIONS section
options_section = nodes.section(
'',
nodes.title(text='Options'),
ids=['options-section'])
if 'args' in parser_info:
options_section += nodes.paragraph()
options_section += nodes.subtitle(text='Positional arguments:')
options_section += self._format_positional_arguments(parser_info)
if 'options' in parser_info:
options_section += nodes.paragraph()
options_section += nodes.subtitle(text='Optional arguments:')
options_section += self._format_optional_arguments(parser_info)
items = [
# NOTE: we cannot generate NAME ourselves. It is generated by
# docutils.writers.manpage
synopsis_section,
description_section,
# TODO: files
# TODO: see also
# TODO: bugs
]
if len(options_section.children) > 1:
items.append(options_section)
if 'nosubcommands' not in self.options:
# SUBCOMMANDS section (non-standard)
subcommands_section = nodes.section(
'',
nodes.title(text='Sub-Commands'),
ids=['subcommands-section'])
if 'children' in parser_info:
subcommands_section += self._format_subcommands(parser_info)
if len(subcommands_section) > 1:
items.append(subcommands_section)
if os.getenv("INCLUDE_DEBUG_SECTION"):
import json
# DEBUG section (non-standard)
debug_section = nodes.section(
'',
nodes.title(text="Argparse + Sphinx Debugging"),
nodes.literal_block(text=json.dumps(parser_info, indent=' ')),
ids=['debug-section'])
items.append(debug_section)
return items
def _format_positional_arguments(self, parser_info):
assert 'args' in parser_info
items = []
for arg in parser_info['args']:
arg_items = []
if arg['help']:
arg_items.append(nodes.paragraph(text=arg['help']))
else:
arg_items.append(nodes.paragraph(text='Undocumented'))
if 'choices' in arg:
arg_items.append(
nodes.paragraph(
text='Possible choices: ' + ', '.join(arg['choices'])))
items.append(
nodes.option_list_item(
'',
nodes.option_group(
'', nodes.option(
'', nodes.option_string(text=arg['metavar'])
)
),
nodes.description('', *arg_items)))
return nodes.option_list('', *items)
def _format_optional_arguments(self, parser_info):
assert 'options' in parser_info
items = []
for opt in parser_info['options']:
names = []
opt_items = []
for name in opt['name']:
option_declaration = [nodes.option_string(text=name)]
if opt['default'] is not None \
and opt['default'] != '==SUPPRESS==':
option_declaration += nodes.option_argument(
'', text='=' + str(opt['default']))
names.append(nodes.option('', *option_declaration))
if opt['help']:
opt_items.append(nodes.paragraph(text=opt['help']))
else:
opt_items.append(nodes.paragraph(text='Undocumented'))
if 'choices' in opt:
opt_items.append(
nodes.paragraph(
text='Possible choices: ' + ', '.join(opt['choices'])))
items.append(
nodes.option_list_item(
'', nodes.option_group('', *names),
nodes.description('', *opt_items)))
return nodes.option_list('', *items)
def _format_subcommands(self, parser_info):
assert 'children' in parser_info
items = []
for subcmd in parser_info['children']:
subcmd_items = []
if subcmd['help']:
subcmd_items.append(nodes.paragraph(text=subcmd['help']))
else:
subcmd_items.append(nodes.paragraph(text='Undocumented'))
items.append(
nodes.definition_list_item(
'',
nodes.term('', '', nodes.strong(
text=subcmd['bare_usage'])),
nodes.definition('', *subcmd_items)))
return nodes.definition_list('', *items)
def _nested_parse_paragraph(self, text):
content = nodes.paragraph()
self.state.nested_parse(StringList(text.split("\n")), 0, content)
return content
def run(self):
if 'module' in self.options and 'func' in self.options:
module_name = self.options['module']
attr_name = self.options['func']
elif 'ref' in self.options:
_parts = self.options['ref'].split('.')
module_name = '.'.join(_parts[0:-1])
attr_name = _parts[-1]
else:
raise self.error(
':module: and :func: should be specified, or :ref:')
mod = __import__(module_name, globals(), locals(), [attr_name])
if not hasattr(mod, attr_name):
raise self.error((
'Module "%s" has no attribute "%s"\n'
'Incorrect argparse :module: or :func: values?'
) % (module_name, attr_name))
func = getattr(mod, attr_name)
if isinstance(func, ArgumentParser):
parser = func
elif 'passparser' in self.options:
parser = ArgumentParser()
func(parser)
else:
parser = func()
if 'path' not in self.options:
self.options['path'] = ''
path = str(self.options['path'])
if 'prog' in self.options:
parser.prog = self.options['prog']
result = parse_parser(
parser, skip_default_values='nodefault' in self.options)
result = parser_navigate(result, path)
if 'manpage' in self.options:
return self._construct_manpage_specific_structure(result)
nested_content = nodes.paragraph()
self.state.nested_parse(
self.content, self.content_offset, nested_content)
nested_content = nested_content.children
items = []
# add common content between
for item in nested_content:
if not isinstance(item, nodes.definition_list):
items.append(item)
if 'description' in result:
items.append(self._nested_parse_paragraph(result['description']))
items.append(nodes.literal_block(text=result['usage']))
items.append(print_command_args_and_opts(
print_arg_list(result, nested_content),
print_opt_list(result, nested_content),
print_subcommand_list(result, nested_content)
))
if 'epilog' in result:
items.append(self._nested_parse_paragraph(result['epilog']))
return items
def setup(app):
app.add_directive('argparse', ArgParseDirective)

@@ -1,138 +0,0 @@
# Copyright (c) 2013 Alex Rudakov
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from argparse import _HelpAction, _SubParsersAction
import re
class NavigationException(Exception):
pass
def parser_navigate(parser_result, path, current_path=None):
if isinstance(path, str):
if path == '':
return parser_result
path = re.split(r'\s+', path)
current_path = current_path or []
if len(path) == 0:
return parser_result
if 'children' not in parser_result:
raise NavigationException(
'Current parser has no child elements. (path: %s)' %
' '.join(current_path))
next_hop = path.pop(0)
for child in parser_result['children']:
if child['name'] == next_hop:
current_path.append(next_hop)
return parser_navigate(child, path, current_path)
raise NavigationException(
'Current parser has no child element with name: %s (path: %s)' % (
next_hop, ' '.join(current_path)))
def _try_add_parser_attribute(data, parser, attribname):
attribval = getattr(parser, attribname, None)
if attribval is None:
return
if not isinstance(attribval, str):
return
if len(attribval) > 0:
data[attribname] = attribval
def _format_usage_without_prefix(parser):
"""
Use private argparse APIs to get the usage string without
the 'usage: ' prefix.
"""
fmt = parser._get_formatter()
fmt.add_usage(parser.usage, parser._actions,
parser._mutually_exclusive_groups, prefix='')
return fmt.format_help().strip()
def parse_parser(parser, data=None, **kwargs):
if data is None:
data = {
'name': '',
'usage': parser.format_usage().strip(),
'bare_usage': _format_usage_without_prefix(parser),
'prog': parser.prog,
}
_try_add_parser_attribute(data, parser, 'description')
_try_add_parser_attribute(data, parser, 'epilog')
for action in parser._get_positional_actions():
if isinstance(action, _HelpAction):
continue
if isinstance(action, _SubParsersAction):
helps = {}
for item in action._choices_actions:
helps[item.dest] = item.help
# commands which share an existing parser are an alias,
# don't duplicate docs
subsection_alias = {}
subsection_alias_names = set()
for name, subaction in action._name_parser_map.items():
if subaction not in subsection_alias:
subsection_alias[subaction] = []
else:
subsection_alias[subaction].append(name)
subsection_alias_names.add(name)
for name, subaction in action._name_parser_map.items():
if name in subsection_alias_names:
continue
subalias = subsection_alias[subaction]
subaction.prog = '%s %s' % (parser.prog, name)
subdata = {
'name': name if not subalias else
'%s (%s)' % (name, ', '.join(subalias)),
'help': helps.get(name, ''),
'usage': subaction.format_usage().strip(),
'bare_usage': _format_usage_without_prefix(subaction),
}
parse_parser(subaction, subdata, **kwargs)
data.setdefault('children', []).append(subdata)
continue
if 'args' not in data:
data['args'] = []
arg = {
'name': action.dest,
'help': action.help or '',
'metavar': action.metavar
}
if action.choices:
arg['choices'] = action.choices
data['args'].append(arg)
show_defaults = (
('skip_default_values' not in kwargs)
or (kwargs['skip_default_values'] is False))
for action in parser._get_optional_actions():
if isinstance(action, _HelpAction):
continue
if 'options' not in data:
data['options'] = []
option = {
'name': action.option_strings,
'default': action.default if show_defaults else '==SUPPRESS==',
'help': action.help or ''
}
if action.choices:
option['choices'] = action.choices
if "==SUPPRESS==" not in option['help']:
data['options'].append(option)
return data
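The usage-stripping helper above relies on private argparse formatter APIs; a minimal standalone sketch shows the effect (the `openstack dataprocessing` prog name and the `--plugin-version` option are illustrative, not taken from this module):

```python
import argparse

def format_usage_without_prefix(parser):
    """Render the usage string without the leading 'usage: ' prefix."""
    # Same private-API trick as above: passing prefix='' suppresses
    # the 'usage: ' text that format_usage() would normally prepend.
    fmt = parser._get_formatter()
    fmt.add_usage(parser.usage, parser._actions,
                  parser._mutually_exclusive_groups, prefix='')
    return fmt.format_help().strip()

parser = argparse.ArgumentParser(prog='openstack dataprocessing')
parser.add_argument('--plugin-version', help='plugin version to use')

print(parser.format_usage().strip())        # carries the 'usage: ' prefix
print(format_usage_without_prefix(parser))  # bare usage line
```

The bare form is what the extension feeds into generated documentation, where a literal "usage: " prefix would be redundant.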


@ -1,11 +0,0 @@
<h3>Useful Links</h3>
<ul>
<li><a href="https://wiki.openstack.org/wiki/Sahara">Sahara @ OpenStack Wiki</a></li>
<li><a href="https://launchpad.net/sahara">Sahara @ Launchpad</a></li>
</ul>
{% if READTHEDOCS %}
<script type='text/javascript'>
$('div.body').css('margin', 0)
</script>
{% endif %}


@ -1,4 +0,0 @@
{% extends "basic/layout.html" %}
{% set css_files = css_files + ['_static/tweaks.css'] %}
{% block relbar1 %}{% endblock relbar1 %}


@ -1,4 +0,0 @@
[theme]
inherit = nature
stylesheet = nature.css
pygments_style = tango


@ -1,167 +0,0 @@
Sahara Client
=============
Overview
--------
Sahara Client provides a set of Python interfaces to communicate with the
Sahara REST API. Sahara Client enables users to perform most of the existing
operations, such as retrieving template lists, creating clusters, and
submitting EDP jobs.
Instantiating a Client
----------------------
To start using the Sahara Client, users have to create an instance of the
`Client` class. The client constructor accepts a set of parameters used to
authenticate and to locate the Sahara endpoint.
.. autoclass:: saharaclient.api.client.Client
:members:
**Important!**
It is not mandatory to provide all of the parameters above. The minimum
set should be enough to determine the Sahara endpoint, verify user
authentication, and select the tenant to operate in.
Authentication check
~~~~~~~~~~~~~~~~~~~~
Passing authentication parameters directly to Sahara Client is deprecated; a
Keystone Session object should be used for this purpose instead. For example:
.. sourcecode:: python
from keystoneauth1.identity import v2
from keystoneauth1 import session
from saharaclient import client
auth = v2.Password(auth_url=AUTH_URL,
username=USERNAME,
password=PASSWORD,
tenant_name=PROJECT_ID)
ses = session.Session(auth=auth)
sahara = client.Client('1.1', session=ses)
..
For more information about Keystone Sessions, see `Using Sessions`_.
.. _Using Sessions: http://docs.openstack.org/developer/python-keystoneclient/using-sessions.html
Sahara endpoint discovery
~~~~~~~~~~~~~~~~~~~~~~~~~
If the user has a direct URL pointing to the Sahara REST API, it may be
specified as `sahara_url`. If this parameter is missing, the Sahara client
will use the Keystone Service Catalog to find the endpoint. Two parameters,
`service_type` and `endpoint_type`, configure the endpoint search; both have
default values.
.. sourcecode:: python
from keystoneauth1.identity import v2
from keystoneauth1 import session
from saharaclient import client
auth = v2.Password(auth_url=AUTH_URL,
username=USERNAME,
password=PASSWORD,
tenant_name=PROJECT_ID)
ses = session.Session(auth=auth)
sahara = client.Client('1.1', session=ses,
service_type="non-default-service-type",
endpoint_type="internalURL")
..
Object managers
---------------
Sahara Client provides the following fields to operate with:
* plugins
* clusters
* cluster_templates
* node_group_templates
* images
* data_sources
* job_binaries
* job_binary_internals
* job_executions
* job_types
Each of these fields is a reference to a Manager for the corresponding group
of REST calls.
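The client/manager relationship can be pictured with a toy sketch; `FakeManager`, `FakeClient`, and the canned return values are purely illustrative stand-ins for the real saharaclient managers, which issue REST calls instead:

```python
# Toy sketch of the manager layout described above: each client attribute
# groups the calls for one resource type. Names here are illustrative.
class FakeManager:
    def __init__(self, resource):
        self.resource = resource

    def list(self):
        # A real manager would perform GET /<resource> against the Sahara API.
        return ['%s-example' % self.resource]

class FakeClient:
    def __init__(self):
        self.plugins = FakeManager('plugins')
        self.clusters = FakeManager('clusters')

client = FakeClient()
print(client.plugins.list())
```

With the real client, the same attribute-access pattern applies: the constructor wires up one manager per resource group, and all REST calls go through those managers.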
Supported operations
--------------------
Plugin ops
~~~~~~~~~~
.. autoclass:: saharaclient.api.plugins.PluginManager
:members:
Image Registry ops
~~~~~~~~~~~~~~~~~~
.. autoclass:: saharaclient.api.images.ImageManager
:members:
Node Group Template ops
~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: saharaclient.api.node_group_templates.NodeGroupTemplateManager
:members:
Cluster Template ops
~~~~~~~~~~~~~~~~~~~~
.. autoclass:: saharaclient.api.cluster_templates.ClusterTemplateManager
:members:
Cluster ops
~~~~~~~~~~~
.. autoclass:: saharaclient.api.clusters.ClusterManager
:members:
Data Source ops
~~~~~~~~~~~~~~~
.. autoclass:: saharaclient.api.data_sources.DataSourceManager
:members:
Job Binary Internal ops
~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: saharaclient.api.job_binary_internals.JobBinaryInternalsManager
:members: create, update
Job Binary ops
~~~~~~~~~~~~~~
.. autoclass:: saharaclient.api.job_binaries.JobBinariesManager
:members:
Job ops
~~~~~~~
.. autoclass:: saharaclient.api.jobs.JobsManager
:members:
Job Execution ops
~~~~~~~~~~~~~~~~~
.. autoclass:: saharaclient.api.job_executions.JobExecutionsManager
:members:
Job Types ops
~~~~~~~~~~~~~
.. autoclass:: saharaclient.api.job_types.JobTypesManager
:members:


@ -1,64 +0,0 @@
Sahara CLI Commands
===================
The following commands are currently supported by the Sahara CLI:
Plugins
-------
.. cli::
:module: saharaclient.osc.v1.plugins
Images
------
.. cli::
:module: saharaclient.osc.v1.images
Node Group Templates
--------------------
.. cli::
:module: saharaclient.osc.v1.node_group_templates
Cluster Templates
-----------------
.. cli::
:module: saharaclient.osc.v1.cluster_templates
Clusters
--------
.. cli::
:module: saharaclient.osc.v1.clusters
Data Sources
------------
.. cli::
:module: saharaclient.osc.v1.data_sources
Job Binaries
------------
.. cli::
:module: saharaclient.osc.v1.job_binaries
Job Types
---------
.. cli::
:module: saharaclient.osc.v1.job_types
Job Templates
-------------
.. cli::
:module: saharaclient.osc.v1.job_templates
Jobs
----
.. cli::
:module: saharaclient.osc.v1.jobs


@ -1,271 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import subprocess
import sys
import os
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('../../saharaclient'))
sys.path.append(os.path.abspath('..'))
sys.path.append(os.path.abspath('../bin'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.viewcode', 'ext.cli']
if not on_rtd:
extensions.append('oslosphinx')
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Sahara Client'
copyright = u'2013, OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# Version info
from saharaclient.version import version_info as saharaclient_version
release = saharaclient_version.release_string()
# The short X.Y version.
version = saharaclient_version.version_string()
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
if on_rtd:
html_theme_path = ['.']
html_theme = '_theme_rtd'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = 'Sahara Client'
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local",
"-n1"]
# Note: communicate() returns bytes on Python 3, so decode explicitly.
html_last_updated_fmt = subprocess.Popen(
    git_cmd, stdout=subprocess.PIPE).communicate()[0].decode('utf-8')
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
'index': ['sidebarlinks.html', 'localtoc.html', 'searchbox.html', 'sourcelink.html'],
'**': ['localtoc.html', 'relations.html',
'searchbox.html', 'sourcelink.html']
}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'SaharaClientDoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'saharaclientdoc.tex', u'Sahara Client',
u'OpenStack Foundation', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'saharaclient', u'Sahara Client',
[u'OpenStack Foundation'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'Sahara Client', u'Sahara Client',
u'OpenStack Foundation', 'Sahara Client', 'Sahara Client',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'


@ -1,68 +0,0 @@
How to Participate
==================
Getting started
---------------
* Create an account on `GitHub <https://github.com/openstack/sahara>`_
  (if you don't have one)
* Make sure that your local git is properly configured by executing
  ``git config --list``. If not, configure ``user.name`` and ``user.email``
* Create an account on `Launchpad <https://launchpad.net/sahara>`_
  (if you don't have one)
* Subscribe to `OpenStack general mail-list <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack>`_
* Subscribe to `OpenStack development mail-list <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>`_
* Create `OpenStack profile <https://www.openstack.org/profile/>`_
* Login to `OpenStack Gerrit <https://review.openstack.org/>`_ with your
Launchpad id
* Sign `OpenStack Individual Contributor License Agreement <https://review.openstack.org/#/settings/agreements>`_
* Make sure that your email is listed in `identities <https://review.openstack.org/#/settings/web-identities>`_
* Subscribe to code-reviews. Go to your settings on http://review.openstack.org
* Go to ``watched projects``
* Add ``openstack/sahara``, ``openstack/sahara-dashboard``,
``openstack/sahara-extra``, ``openstack/python-saharaclient``,
``openstack/sahara-image-elements``, ``openstack/horizon``
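The git configuration check from the list above can be performed as follows (the name and e-mail values are examples only):

```shell
# Inspect the current configuration; user.name and user.email must be set.
git config --list
# Set the identity fields if they are missing (values are examples only).
git config --global user.name "Jane Developer"
git config --global user.email "jane@example.com"
```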
How to stay in touch with the community?
----------------------------------------
* If you have something to discuss use
`OpenStack development mail-list <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>`_.
Prefix mail subject with ``[Sahara]``
* Join ``#openstack-sahara`` IRC channel on `freenode <http://freenode.net/>`_
* Join public weekly meetings on *Thursdays at 18:00 UTC* on
``#openstack-meeting-alt`` IRC channel
* Join public weekly meetings on *Thursdays at 14:00 UTC* on
``#openstack-meeting-3`` IRC channel
How to send your first patch for review?
----------------------------------------
* Check out the Sahara code from `GitHub <https://github.com/openstack/sahara>`_
* Carefully read https://wiki.openstack.org/wiki/Gerrit_Workflow
* Pay special attention to https://wiki.openstack.org/wiki/Gerrit_Workflow#Committing_Changes
* Apply and commit your changes
* Make sure that your code passes ``PEP8`` checks and unit tests
* Submit your patch for review
* Monitor the status of your patch review on https://review.openstack.org/#/


@ -1,42 +0,0 @@
Python bindings to the OpenStack Sahara API
===========================================
This is a client for OpenStack Sahara API. There's :doc:`a Python API
<api>` (the :mod:`saharaclient` module), and a :doc:`command-line utility
<shell>` (installed as an OpenStackClient plugin). Each implements the entire
OpenStack Sahara API.
You'll need credentials for an OpenStack cloud that implements the
Data Processing API in order to use the sahara client.
You may want to read the `OpenStack Sahara Docs`__ (the overview, at
least) to get an idea of the concepts; understanding them will make this
library easier to use.
__ http://docs.openstack.org/developer/sahara/
Contents:
.. toctree::
:maxdepth: 2
api
shell
cli
how_to_participate
Contributing
============
Code is hosted in `review.o.o`_ and mirrored to `github`_ and `git.o.o`_.
Submit bugs to the Sahara project on `launchpad`_ and to the Sahara client on
`launchpad_client`_. Submit code to the openstack/python-saharaclient project
using `gerrit`_.
.. _review.o.o: https://review.openstack.org
.. _github: https://github.com/openstack/python-saharaclient
.. _git.o.o: http://git.openstack.org/cgit/openstack/python-saharaclient
.. _launchpad: https://launchpad.net/sahara
.. _launchpad_client: https://launchpad.net/python-saharaclient
.. _gerrit: http://docs.openstack.org/infra/manual/developers.html#development-workflow


@ -1,64 +0,0 @@
Sahara CLI
==========
The Sahara shell utility is now part of the OpenStackClient, so all
shell commands take the following form:
.. code-block:: bash
$ openstack dataprocessing <command> [arguments...]
To get a list of all possible commands you can run:
.. code-block:: bash
$ openstack help dataprocessing
To get detailed help for a command you can run:
.. code-block:: bash
$ openstack help dataprocessing <command>
For more information about commands and their parameters, refer to
:doc:`the Sahara CLI commands <cli>`.
For more information about the capabilities and features of the
OpenStackClient CLI, refer to the `OpenStackClient documentation
<http://docs.openstack.org/developer/python-openstackclient/>`_.
Configuration
-------------
The CLI is configured via environment variables and command-line options, which
are described in http://docs.openstack.org/developer/python-openstackclient/authentication.html.
Authentication using username/password is most commonly used and can be
provided with environment variables:
.. code-block:: bash
export OS_AUTH_URL=<url-to-openstack-identity>
export OS_PROJECT_NAME=<project-name>
export OS_USERNAME=<username>
export OS_PASSWORD=<password> # (optional)
or command-line options:
.. code-block:: bash
--os-auth-url <url>
--os-project-name <project-name>
--os-username <username>
[--os-password <password>]
Additionally, the :program:`sahara` API URL can be configured with the parameter:
.. code-block:: bash
--os-data-processing-url
or with the environment variable:
.. code-block:: bash
export OS_DATA_PROCESSING_URL=<url-to-sahara-API>
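Putting the configuration together, a typical session might look like the following; the endpoint URLs and credential values are illustrative, not defaults:

```shell
# Authentication settings (example values):
export OS_AUTH_URL=http://controller:5000/v3
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=secret
# Optional: point directly at the Sahara API instead of using the
# Keystone service catalog.
export OS_DATA_PROCESSING_URL=http://controller:8386/v1.1
# Any dataprocessing command can now be run, for example:
openstack dataprocessing plugin list
```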


@ -1,7 +0,0 @@
[DEFAULT]
base=saharaclient
module=apiclient.auth
module=apiclient.exceptions
module=cliutils
module=_i18n


@ -1,4 +0,0 @@
---
features:
- >
Automatically generated documentation for saharaclient API was added.


@ -1,4 +0,0 @@
---
features:
- >
Automatically generated documentation for saharaclient CLI was added.


@ -1,4 +0,0 @@
---
deprecations:
- >
The old CLI is deprecated and will no longer be maintained.


@ -1,4 +0,0 @@
---
features:
- Added integration with Designate for hostname resolution through DNS
servers.


@ -1,4 +0,0 @@
---
features:
- Added the ability to dump event logs for clusters. A shortened
version of the event logs can also be displayed via an option.


@ -1,5 +0,0 @@
---
fixes:
- >
[`bug 1534050 <https://bugs.launchpad.net/python-saharaclient/+bug/1534050>`_]
Now an object's fields can be unset with ``update`` calls.


@ -1,4 +0,0 @@
---
features:
- >
Pagination for list operations is implemented.


@ -1,6 +0,0 @@
---
fixes:
- >
[`bug 1508406 <https://bugs.launchpad.net/python-saharaclient/+bug/1508406>`_]
Now the ``description`` and ``extra`` parameters of the jobs ``create``
method are optional.


@ -1,6 +0,0 @@
---
fixes:
- >
[`bug 1506448 <https://bugs.launchpad.net/python-saharaclient/+bug/1506448>`_]
Now the ``mains``, ``libs`` and ``description`` parameters of the jobs
``create`` method are optional.


@ -1,6 +0,0 @@
---
fixes:
- >
[`bug 1507966 <https://bugs.launchpad.net/python-saharaclient/+bug/1507966>`_]
Now the ``input_id``, ``output_id`` and ``configs`` parameters of the job
executions ``create`` method are optional.


@ -1,4 +0,0 @@
---
features:
- >
A new CLI was implemented as part of the openstackclient.


@ -1,4 +0,0 @@
---
features:
- Plugin updates are now supported in saharaclient. Information
about plugin labels is also available to users.


@ -1,4 +0,0 @@
---
prelude: >
Functional tests were moved to the sahara-tests repository. Please refer to
the README of sahara-tests for how to run these tests now.


@ -1,5 +0,0 @@
---
deprecations:
- >
[`bug 1519510 <https://bugs.launchpad.net/python-saharaclient/+bug/1519510>`_]
Support for Python 2.6 was dropped.


@ -1,5 +0,0 @@
---
deprecations:
- >
[`bug 1526170 <https://bugs.launchpad.net/python-saharaclient/+bug/1526170>`_]
Support for Python 3.3 was dropped.


@ -1,10 +0,0 @@
---
upgrade:
- The 'version' option is replaced by the 'plugin-version' option.
fixes:
- The 'version' option is a global option used for getting the client
version. This caused problems with the OpenStack client: when the
plugin 'version' was specified, OSC treated it as a request for the
current client version. To fix this, 'version' is replaced by
'plugin-version'. Related bug 1565775.


@ -1,4 +0,0 @@
---
features:
- >
Now shares can be edited on an existing cluster.


@ -1,4 +0,0 @@
---
other:
- >
Start using reno to manage release notes.


@ -1,5 +0,0 @@
---
fixes:
- >
[`bug 1500790 <https://bugs.launchpad.net/python-saharaclient/+bug/1500790>`_]
Now tags can be added and removed simultaneously in one call.


@ -1,5 +0,0 @@
---
fixes:
- >
[`bug 1510470 <https://bugs.launchpad.net/python-saharaclient/+bug/1510470>`_]
Now ``desc`` parameter of ``update_image`` is optional.


@ -1,6 +0,0 @@
---
fixes:
- >
[`bug 1499697 <https://bugs.launchpad.net/python-saharaclient/+bug/1499697>`_]
Now node group templates can be created and updated with
``volume_mount_prefix`` parameter.


@ -1,219 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Sahara Client Release Notes documentation build configuration file
extensions = [
'oslosphinx',
'reno.sphinxext',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Saharaclient Release Notes'
copyright = u'2015, Sahara Developers'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
from saharaclient.version import version_info as saharaclient_version
# The full version, including alpha/beta/rc tags.
release = saharaclient_version.version_string_with_vcs()
# The short X.Y version.
version = saharaclient_version.canonical_version_string()
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'SaharaClientReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'SaharaClientReleaseNotes.tex',
u'Sahara Client Release Notes Documentation',
u'Sahara Client Developers', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'saharaclientreleasenotes',
u'Sahara Client Release Notes Documentation',
[u'Sahara Developers'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'SaharaClientReleaseNotes',
u'Sahara Client Release Notes Documentation',
u'Sahara Developers', 'SaharaClientReleaseNotes',
'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False


@ -1,9 +0,0 @@
===========================
Saharaclient Release Notes
===========================
.. toctree::
:maxdepth: 1
unreleased
mitaka


@ -1,6 +0,0 @@
===================================
Mitaka Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/mitaka


@ -1,5 +0,0 @@
==============================
Current Series Release Notes
==============================
.. release-notes::


@ -1,18 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.6 # Apache-2.0
Babel>=2.3.4 # BSD
keystoneauth1>=2.10.0 # Apache-2.0
osc-lib>=0.4.0 # Apache-2.0
oslo.log>=1.14.0 # Apache-2.0
oslo.serialization>=1.10.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
oslo.utils>=3.16.0 # Apache-2.0
python-keystoneclient!=1.8.0,!=2.1.0,>=1.7.0 # Apache-2.0
python-openstackclient>=2.1.0 # Apache-2.0
requests>=2.10.0 # Apache-2.0
six>=1.9.0 # MIT
PrettyTable<0.8,>=0.7 # BSD


@ -1,164 +0,0 @@
#!/bin/bash
set -eu
function usage {
echo "Usage: $0 [OPTION]..."
echo "Run python-saharaclient test suite"
echo ""
echo " -V, --virtual-env Always use virtualenv. Install automatically if not present"
echo " -N, --no-virtual-env Don't use virtualenv. Run tests in local environment"
echo " -s, --no-site-packages Isolate the virtualenv from the global Python environment"
echo " -x, --stop Stop running tests after the first error or failure."
echo " -f, --force Force a clean re-build of the virtual environment. Useful when dependencies have been added."
echo " -p, --pep8 Just run pep8"
echo " -P, --no-pep8 Don't run pep8"
echo " -c, --coverage Generate coverage report"
echo " -h, --help Print this usage message"
echo " --hide-elapsed Don't print the elapsed time for each test along with slow test list"
echo ""
echo "Note: with no options specified, the script will try to run the tests in a virtual environment."
echo " If no virtualenv is found, the script will ask if you would like to create one. If you "
echo " prefer to run tests NOT in a virtual environment, simply pass the -N option."
exit
}
function process_option {
case "$1" in
-h|--help) usage;;
-V|--virtual-env) always_venv=1; never_venv=0;;
-N|--no-virtual-env) always_venv=0; never_venv=1;;
-s|--no-site-packages) no_site_packages=1;;
-f|--force) force=1;;
-p|--pep8) just_pep8=1;;
-P|--no-pep8) no_pep8=1;;
-c|--coverage) coverage=1;;
-*) testropts="$testropts $1";;
*) testrargs="$testrargs $1"
esac
}
venv=.venv
with_venv=tools/with_venv.sh
always_venv=0
never_venv=0
force=0
no_site_packages=0
installvenvopts=
testrargs=
testropts=
wrapper=""
just_pep8=0
no_pep8=0
coverage=0
LANG=en_US.UTF-8
LANGUAGE=en_US:en
LC_ALL=C
for arg in "$@"; do
process_option $arg
done
if [ $no_site_packages -eq 1 ]; then
installvenvopts="--no-site-packages"
fi
function init_testr {
if [ ! -d .testrepository ]; then
${wrapper} testr init
fi
}
function run_tests {
# Cleanup *pyc
${wrapper} find . -type f -name "*.pyc" -delete
if [ $coverage -eq 1 ]; then
# Do not test test_coverage_ext when gathering coverage.
if [ "x$testrargs" = "x" ]; then
testrargs="^(?!.*test_coverage_ext).*$"
fi
export PYTHON="${wrapper} coverage run --source saharaclient --parallel-mode"
fi
# Just run the test suites in current environment
set +e
TESTRTESTS="$TESTRTESTS $testrargs"
echo "Running \`${wrapper} $TESTRTESTS\`"
${wrapper} $TESTRTESTS
RESULT=$?
set -e
copy_subunit_log
return $RESULT
}
function copy_subunit_log {
LOGNAME=`cat .testrepository/next-stream`
LOGNAME=$(($LOGNAME - 1))
LOGNAME=".testrepository/${LOGNAME}"
cp $LOGNAME subunit.log
}
function run_pep8 {
echo "Running flake8 ..."
${wrapper} flake8
}
TESTRTESTS="testr run --parallel $testropts"
if [ $never_venv -eq 0 ]
then
# Remove the virtual environment if --force used
if [ $force -eq 1 ]; then
echo "Cleaning virtualenv..."
rm -rf ${venv}
fi
if [ -e ${venv} ]; then
wrapper="${with_venv}"
else
if [ $always_venv -eq 1 ]; then
# Automatically install the virtualenv
python tools/install_venv.py $installvenvopts
wrapper="${with_venv}"
else
echo -e "No virtual environment found...create one? (Y/n) \c"
read use_ve
if [ "x$use_ve" = "xY" -o "x$use_ve" = "x" -o "x$use_ve" = "xy" ]; then
# Install the virtualenv and run the test suite in it
python tools/install_venv.py $installvenvopts
wrapper=${with_venv}
fi
fi
fi
fi
# Delete old coverage data from previous runs
if [ $coverage -eq 1 ]; then
${wrapper} coverage erase
fi
if [ $just_pep8 -eq 1 ]; then
run_pep8
exit
fi
init_testr
run_tests
# NOTE(sirp): we only want to run pep8 when we're running the full-test suite,
# not when we're running tests individually. To handle this, we need to
# distinguish between options (noseopts), which begin with a '-', and
# arguments (testrargs).
if [ -z "$testrargs" ]; then
if [ $no_pep8 -eq 0 ]; then
run_pep8
fi
fi
if [ $coverage -eq 1 ]; then
echo "Generating coverage report in covhtml/"
${wrapper} coverage combine
${wrapper} coverage html --include='saharaclient/*' --omit='saharaclient/openstack/common/*' -d covhtml -i
fi


@ -1,281 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import json
import logging
import six
from six.moves.urllib import parse
from saharaclient.openstack.common._i18n import _
LOG = logging.getLogger(__name__)
class Resource(object):
resource_name = 'Something'
defaults = {}
def __init__(self, manager, info):
self.manager = manager
info = info.copy()
self._info = info
self._set_defaults(info)
self._add_details(info)
def _set_defaults(self, info):
for name, value in six.iteritems(self.defaults):
if name not in info:
info[name] = value
def _add_details(self, info):
for (k, v) in six.iteritems(info):
try:
setattr(self, k, v)
self._info[k] = v
except AttributeError:
# In this case we already defined the attribute on the class
pass
def to_dict(self):
return copy.deepcopy(self._info)
def __str__(self):
return '%s %s' % (self.resource_name, str(self._info))
def _check_items(obj, searches):
try:
return all(getattr(obj, attr) == value for (attr, value) in searches)
except AttributeError:
return False
class NotUpdated(object):
"""A sentinel class to signal that parameter should not be updated."""
def __repr__(self):
return 'NotUpdated'
class ResourceManager(object):
resource_class = None
def __init__(self, api):
self.api = api
def find(self, **kwargs):
return [i for i in self.list() if _check_items(i, kwargs.items())]
def find_unique(self, **kwargs):
found = self.find(**kwargs)
if not found:
raise APIException(error_code=404,
error_message=_("No matches found."))
if len(found) > 1:
raise APIException(error_code=409,
error_message=_("Multiple matches found."))
return found[0]
def _copy_if_defined(self, data, **kwargs):
for var_name, var_value in six.iteritems(kwargs):
if var_value is not None:
data[var_name] = var_value
def _copy_if_updated(self, data, **kwargs):
for var_name, var_value in six.iteritems(kwargs):
if not isinstance(var_value, NotUpdated):
data[var_name] = var_value
def _create(self, url, data, response_key=None, dump_json=True):
if dump_json:
kwargs = {'json': data}
else:
kwargs = {'data': data}
resp = self.api.post(url, **kwargs)
if resp.status_code != 202:
self._raise_api_exception(resp)
if response_key is not None:
data = get_json(resp)[response_key]
else:
data = get_json(resp)
return self.resource_class(self, data)
def _update(self, url, data, response_key=None, dump_json=True):
if dump_json:
kwargs = {'json': data}
else:
kwargs = {'data': data}
resp = self.api.put(url, **kwargs)
if resp.status_code != 202:
self._raise_api_exception(resp)
if response_key is not None:
data = get_json(resp)[response_key]
else:
data = get_json(resp)
return self.resource_class(self, data)
def _patch(self, url, data, response_key=None, dump_json=True):
if dump_json:
kwargs = {'json': data}
else:
kwargs = {'data': data}
resp = self.api.patch(url, **kwargs)
if resp.status_code != 202:
self._raise_api_exception(resp)
if response_key is not None:
data = get_json(resp)[response_key]
else:
data = get_json(resp)
return self.resource_class(self, data)
def _post(self, url, data, response_key=None, dump_json=True):
if dump_json:
kwargs = {'json': data}
else:
kwargs = {'data': data}
resp = self.api.post(url, **kwargs)
if resp.status_code != 202:
self._raise_api_exception(resp)
if response_key is not None:
data = get_json(resp)[response_key]
else:
data = get_json(resp)
return self.resource_class(self, data)
def _list(self, url, response_key):
resp = self.api.get(url)
if resp.status_code == 200:
data = get_json(resp)[response_key]
return [self.resource_class(self, res)
for res in data]
else:
self._raise_api_exception(resp)
def _page(self, url, response_key, limit=None):
resp = self.api.get(url)
if resp.status_code == 200:
result = get_json(resp)
data = result[response_key]
meta = result.get('markers')
next, prev = None, None
if meta:
prev = meta.get('prev')
next = meta.get('next')
resources = [self.resource_class(self, res)
for res in data]
return Page(resources, prev, next, limit)
else:
self._raise_api_exception(resp)
def _get(self, url, response_key=None):
resp = self.api.get(url)
if resp.status_code == 200:
if response_key is not None:
data = get_json(resp)[response_key]
else:
data = get_json(resp)
return self.resource_class(self, data)
else:
self._raise_api_exception(resp)
def _delete(self, url):
resp = self.api.delete(url)
if resp.status_code != 204:
self._raise_api_exception(resp)
def _plurify_resource_name(self):
return self.resource_class.resource_name + 's'
def _raise_api_exception(self, resp):
try:
error_data = get_json(resp)
except Exception:
msg = _("Failed to parse response from Sahara: %s") % resp.reason
raise APIException(
error_code=resp.status_code,
error_message=msg)
raise APIException(error_code=error_data.get("error_code"),
error_name=error_data.get("error_name"),
error_message=error_data.get("error_message"))
def get_json(response):
"""Provide backward compatibility with old versions of the requests library."""
json_field_or_function = getattr(response, 'json', None)
if callable(json_field_or_function):
return response.json()
else:
return json.loads(response.content)
class APIException(Exception):
def __init__(self, error_code=None, error_name=None, error_message=None):
super(APIException, self).__init__(error_message)
self.error_code = error_code
self.error_name = error_name
self.error_message = error_message
def get_query_string(search_opts, limit=None, marker=None, sort_by=None,
reverse=None):
opts = {}
if marker is not None:
opts['marker'] = marker
if limit is not None:
opts['limit'] = limit
if sort_by is not None:
if reverse:
opts['sort_by'] = "-%s" % sort_by
else:
opts['sort_by'] = sort_by
if search_opts is not None:
opts.update(search_opts)
if opts:
qparams = sorted(opts.items(), key=lambda x: x[0])
query_string = "?%s" % parse.urlencode(qparams, doseq=True)
else:
query_string = ""
return query_string
class Page(list):
def __init__(self, resources, prev, next, limit):
super(Page, self).__init__(resources)
self.prev = prev
self.next = next
self.limit = limit
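
The deleted `get_query_string` helper above builds the pagination/filter query suffix used by every manager's `list` call. A standalone sketch of the same logic (reimplemented here for illustration only, since the retired package can no longer be imported) behaves like this:

```python
from urllib.parse import urlencode


def get_query_string(search_opts, limit=None, marker=None,
                     sort_by=None, reverse=None):
    # Mirror of the deleted saharaclient.api.base helper: collect
    # pagination/sorting options, merge in search filters, and render
    # a deterministic (alphabetically sorted) query string.
    opts = {}
    if marker is not None:
        opts['marker'] = marker
    if limit is not None:
        opts['limit'] = limit
    if sort_by is not None:
        opts['sort_by'] = "-%s" % sort_by if reverse else sort_by
    if search_opts is not None:
        opts.update(search_opts)
    if not opts:
        return ""
    qparams = sorted(opts.items(), key=lambda x: x[0])
    return "?%s" % urlencode(qparams, doseq=True)


print(get_query_string({'plugin_name': 'vanilla'}, limit=5,
                       sort_by='name', reverse=True))
# → ?limit=5&plugin_name=vanilla&sort_by=-name
```

Note the `reverse` flag is encoded as a leading `-` on the sort key, matching the `sort_by=-name` convention the Sahara API expected.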


@ -1,188 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import warnings
from keystoneauth1 import adapter
from keystoneauth1 import exceptions
from keystoneauth1.identity import v2
from keystoneauth1.identity import v3
from keystoneauth1 import session as keystone_session
from keystoneauth1 import token_endpoint
from saharaclient.api import cluster_templates
from saharaclient.api import clusters
from saharaclient.api import data_sources
from saharaclient.api import images
from saharaclient.api import job_binaries
from saharaclient.api import job_binary_internals
from saharaclient.api import job_executions
from saharaclient.api import job_types
from saharaclient.api import jobs
from saharaclient.api import node_group_templates
from saharaclient.api import plugins
USER_AGENT = 'python-saharaclient'
class HTTPClient(adapter.Adapter):
def request(self, *args, **kwargs):
kwargs.setdefault('raise_exc', False)
return super(HTTPClient, self).request(*args, **kwargs)
class Client(object):
"""Client for the OpenStack Data Processing v1 API.
:param str username: Username for Keystone authentication.
:param str api_key: Password for Keystone authentication.
:param str project_id: Keystone Tenant id.
:param str project_name: Keystone Tenant name.
:param str auth_url: Keystone URL that will be used for authentication.
:param str sahara_url: Sahara REST API URL to communicate with.
:param str endpoint_type: Desired Sahara endpoint type.
:param str service_type: Sahara service name in Keystone catalog.
:param str input_auth_token: Keystone authorization token.
:param session: Keystone Session object.
:param auth: Keystone Authentication Plugin object.
:param boolean insecure: Allow insecure.
:param string cacert: Path to the Privacy Enhanced Mail (PEM) file
which contains certificates needed to establish
SSL connection with the identity service.
:param string region_name: Name of a region to select when choosing an
endpoint from the service catalog.
"""
def __init__(self, username=None, api_key=None, project_id=None,
project_name=None, auth_url=None, sahara_url=None,
endpoint_type='publicURL', service_type='data-processing',
input_auth_token=None, session=None, auth=None,
insecure=False, cacert=None, region_name=None, **kwargs):
if not session:
warnings.simplefilter('once', category=DeprecationWarning)
warnings.warn('Passing authentication parameters to saharaclient '
'is deprecated. Please construct and pass an '
'authenticated session object directly.',
DeprecationWarning)
warnings.resetwarnings()
if input_auth_token:
auth = token_endpoint.Token(sahara_url, input_auth_token)
else:
auth = self._get_keystone_auth(auth_url=auth_url,
username=username,
api_key=api_key,
project_id=project_id,
project_name=project_name)
verify = True
if insecure:
verify = False
elif cacert:
verify = cacert
session = keystone_session.Session(verify=verify)
if not auth:
auth = session.auth
# NOTE(Toan): bug #1512801. If sahara_url is provided, it does not
# matter if service_type is orthographically correct or not.
# Only find Sahara service_type and endpoint in Keystone catalog
# if sahara_url is not provided.
if not sahara_url:
service_type = self._determine_service_type(session,
auth,
service_type,
endpoint_type)
kwargs['user_agent'] = USER_AGENT
kwargs.setdefault('interface', endpoint_type)
kwargs.setdefault('endpoint_override', sahara_url)
client = HTTPClient(session=session,
auth=auth,
service_type=service_type,
region_name=region_name,
**kwargs)
self.clusters = clusters.ClusterManager(client)
self.cluster_templates = (
cluster_templates.ClusterTemplateManager(client)
)
self.node_group_templates = (
node_group_templates.NodeGroupTemplateManager(client)
)
self.plugins = plugins.PluginManager(client)
self.images = images.ImageManager(client)
self.data_sources = data_sources.DataSourceManager(client)
self.jobs = jobs.JobsManager(client)
self.job_executions = job_executions.JobExecutionsManager(client)
self.job_binaries = job_binaries.JobBinariesManager(client)
self.job_binary_internals = (
job_binary_internals.JobBinaryInternalsManager(client)
)
self.job_types = job_types.JobTypesManager(client)
def _get_keystone_auth(self, username=None, api_key=None, auth_url=None,
project_id=None, project_name=None):
if not auth_url:
raise RuntimeError("No auth url specified")
if 'v2.0' in auth_url:
return v2.Password(auth_url=auth_url,
username=username,
password=api_key,
tenant_id=project_id,
tenant_name=project_name)
else:
# NOTE(jamielennox): Setting these to default is what
# keystoneclient does in the event they are not passed.
return v3.Password(auth_url=auth_url,
username=username,
password=api_key,
user_domain_id='default',
project_id=project_id,
project_name=project_name,
project_domain_id='default')
@staticmethod
def _determine_service_type(session, auth, service_type, interface):
"""Check a catalog for data-processing or data_processing"""
# NOTE(jamielennox): calling get_endpoint forces an auth on
# initialization which is required for backwards compatibility. It
# also allows us to reset the service type if not in the catalog.
for st in (service_type, service_type.replace('-', '_')):
try:
url = auth.get_endpoint(session,
service_type=st,
interface=interface)
except exceptions.Unauthorized:
raise RuntimeError("Not Authorized")
except exceptions.EndpointNotFound:
# NOTE(jamielennox): bug #1428447. This should not be
# raised, instead None should be returned. Handle in case
# it changes in the future
url = None
if url:
return st
raise RuntimeError("Could not find Sahara endpoint in catalog")
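
`_determine_service_type` above probes the Keystone catalog under both spellings of the Sahara service type. The candidate order it tries can be sketched in isolation (`service_type_candidates` is a hypothetical helper name, shown only to illustrate the hyphen/underscore fallback):

```python
def service_type_candidates(service_type):
    # The deleted client first tries the configured service type as-is,
    # then a variant with hyphens swapped for underscores, to cope with
    # catalogs registered as either 'data-processing' or 'data_processing'
    # (see bug #1512801 noted in the client code).
    variant = service_type.replace('-', '_')
    if variant == service_type:
        return [service_type]
    return [service_type, variant]


print(service_type_candidates('data-processing'))
# → ['data-processing', 'data_processing']
```

Unlike the original loop, this sketch deduplicates when the two spellings coincide, so a service type without hyphens is only probed once.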


@ -1,99 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from saharaclient.api import base
class ClusterTemplate(base.Resource):
resource_name = 'Cluster Template'
class ClusterTemplateManager(base.ResourceManager):
resource_class = ClusterTemplate
NotUpdated = base.NotUpdated()
def create(self, name, plugin_name, hadoop_version, description=None,
cluster_configs=None, node_groups=None, anti_affinity=None,
net_id=None, default_image_id=None, use_autoconfig=None,
shares=None, is_public=None, is_protected=None,
domain_name=None):
"""Create a Cluster Template."""
data = {
'name': name,
'plugin_name': plugin_name,
'hadoop_version': hadoop_version,
}
self._copy_if_defined(data,
description=description,
cluster_configs=cluster_configs,
node_groups=node_groups,
anti_affinity=anti_affinity,
neutron_management_network=net_id,
default_image_id=default_image_id,
use_autoconfig=use_autoconfig,
shares=shares,
is_public=is_public,
is_protected=is_protected,
domain_name=domain_name)
return self._create('/cluster-templates', data, 'cluster_template')
def update(self, cluster_template_id, name=NotUpdated,
plugin_name=NotUpdated, hadoop_version=NotUpdated,
description=NotUpdated, cluster_configs=NotUpdated,
node_groups=NotUpdated, anti_affinity=NotUpdated,
net_id=NotUpdated, default_image_id=NotUpdated,
use_autoconfig=NotUpdated, shares=NotUpdated,
is_public=NotUpdated, is_protected=NotUpdated,
domain_name=NotUpdated):
"""Update a Cluster Template."""
data = {}
self._copy_if_updated(data, name=name,
plugin_name=plugin_name,
hadoop_version=hadoop_version,
description=description,
cluster_configs=cluster_configs,
node_groups=node_groups,
anti_affinity=anti_affinity,
neutron_management_network=net_id,
default_image_id=default_image_id,
use_autoconfig=use_autoconfig,
shares=shares,
is_public=is_public,
is_protected=is_protected,
domain_name=domain_name)
return self._update('/cluster-templates/%s' % cluster_template_id,
data, 'cluster_template')
def list(self, search_opts=None, marker=None,
limit=None, sort_by=None, reverse=None):
"""Get list of Cluster Templates."""
query = base.get_query_string(search_opts, marker=marker, limit=limit,
sort_by=sort_by, reverse=reverse)
url = "/cluster-templates%s" % query
return self._page(url, 'cluster_templates', limit)
def get(self, cluster_template_id):
"""Get information about a Cluster Template."""
return self._get('/cluster-templates/%s' % cluster_template_id,
'cluster_template')
def delete(self, cluster_template_id):
"""Delete a Cluster Template."""
self._delete('/cluster-templates/%s' % cluster_template_id)
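
The `NotUpdated` sentinel used throughout the managers above distinguishes "leave this field alone" from "explicitly set this field to None". A minimal standalone sketch of the pattern (not the retired package's actual classes) looks like this:

```python
class NotUpdated(object):
    """Sentinel: the parameter was not supplied, so do not send it."""
    def __repr__(self):
        return 'NotUpdated'


NOT_UPDATED = NotUpdated()


def copy_if_updated(data, **kwargs):
    # Copy only the fields the caller actually passed. None is a real
    # value (it clears the field server-side); the sentinel means "skip".
    for name, value in kwargs.items():
        if not isinstance(value, NotUpdated):
            data[name] = value
    return data


print(copy_if_updated({}, name='renamed', description=None,
                      is_public=NOT_UPDATED))
# → {'name': 'renamed', 'description': None}
```

This is why `update` methods default every parameter to the sentinel instead of to `None`: a PATCH body only contains keys the caller deliberately changed.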


@ -1,138 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from six.moves.urllib import parse
from saharaclient.api import base
class Cluster(base.Resource):
resource_name = 'Cluster'
class ClusterManager(base.ResourceManager):
resource_class = Cluster
NotUpdated = base.NotUpdated()
def create(self, name, plugin_name, hadoop_version,
cluster_template_id=None, default_image_id=None,
is_transient=None, description=None, cluster_configs=None,
node_groups=None, user_keypair_id=None,
anti_affinity=None, net_id=None, count=None,
use_autoconfig=None, shares=None,
is_public=None, is_protected=None):
"""Launch a Cluster."""
data = {
'name': name,
'plugin_name': plugin_name,
'hadoop_version': hadoop_version,
}
# If count is 1 or less, reset it to None so that the dict built by
# the _copy_if_defined method does not contain the count parameter.
if count and count <= 1:
count = None
self._copy_if_defined(data,
cluster_template_id=cluster_template_id,
is_transient=is_transient,
default_image_id=default_image_id,
description=description,
cluster_configs=cluster_configs,
node_groups=node_groups,
user_keypair_id=user_keypair_id,
anti_affinity=anti_affinity,
neutron_management_network=net_id,
count=count,
use_autoconfig=use_autoconfig,
shares=shares,
is_public=is_public,
is_protected=is_protected)
if count:
return self._create('/clusters/multiple', data)
return self._create('/clusters', data, 'cluster')
def scale(self, cluster_id, scale_object):
"""Scale an existing Cluster.
:param scale_object: dict that describes scaling operation
:Example:
The following `scale_object` can be used to change the number of
instances in a node group and to add instances of a new node group to
an existing cluster:
.. sourcecode:: json
{
"add_node_groups": [
{
"count": 3,
"name": "new_ng",
"node_group_template_id": "ngt_id"
}
],
"resize_node_groups": [
{
"count": 2,
"name": "old_ng"
}
]
}
"""
return self._update('/clusters/%s' % cluster_id, scale_object)
def list(self, search_opts=None, limit=None, marker=None,
sort_by=None, reverse=None):
"""Get a list of Clusters."""
query = base.get_query_string(search_opts, limit=limit, marker=marker,
sort_by=sort_by, reverse=reverse)
url = "/clusters%s" % query
return self._page(url, 'clusters', limit)
def get(self, cluster_id, show_progress=False):
"""Get information about a Cluster."""
url = ('/clusters/%(cluster_id)s?%(params)s' %
{"cluster_id": cluster_id,
"params": parse.urlencode({"show_progress": show_progress})})
return self._get(url, 'cluster')
def delete(self, cluster_id):
"""Delete a Cluster."""
self._delete('/clusters/%s' % cluster_id)
def update(self, cluster_id, name=NotUpdated, description=NotUpdated,
is_public=NotUpdated, is_protected=NotUpdated,
shares=NotUpdated):
"""Update a Cluster."""
data = {}
self._copy_if_updated(data, name=name, description=description,
is_public=is_public, is_protected=is_protected,
shares=shares)
return self._patch('/clusters/%s' % cluster_id, data)
def verification_update(self, cluster_id, status):
"""Start a verification for a Cluster."""
data = {'verification': {'status': status}}
return self._patch("/clusters/%s" % cluster_id, data)


@ -1,80 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from saharaclient.api import base
class DataSources(base.Resource):
resource_name = 'Data Source'
class DataSourceManager(base.ResourceManager):
resource_class = DataSources
def create(self, name, description, data_source_type,
url, credential_user=None, credential_pass=None,
is_public=None, is_protected=None):
"""Create a Data Source."""
data = {
'name': name,
'description': description,
'type': data_source_type,
'url': url,
'credentials': {}
}
self._copy_if_defined(data['credentials'],
user=credential_user,
password=credential_pass)
self._copy_if_defined(data, is_public=is_public,
is_protected=is_protected)
return self._create('/data-sources', data, 'data_source')
def list(self, search_opts=None, limit=None, marker=None,
sort_by=None, reverse=None):
"""Get a list of Data Sources."""
query = base.get_query_string(search_opts, limit=limit, marker=marker,
sort_by=sort_by, reverse=reverse)
url = "/data-sources%s" % query
return self._page(url, 'data_sources', limit)
def get(self, data_source_id):
"""Get information about a Data Source."""
return self._get('/data-sources/%s' % data_source_id, 'data_source')
def delete(self, data_source_id):
"""Delete a Data Source."""
self._delete('/data-sources/%s' % data_source_id)
def update(self, data_source_id, update_data):
"""Update a Data Source.
:param dict update_data: dict that contains fields that should be
updated with new values.
Fields that can be updated:
* name
* description
* type
* url
* is_public
* is_protected
* credentials - dict with `user` and `password` keyword arguments
"""
return self._update('/data-sources/%s' % data_source_id,
update_data)


@ -1,76 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from saharaclient.api import parameters as params
class Helpers(object):
def __init__(self, sahara_client):
self.sahara = sahara_client
self.plugins = self.sahara.plugins
def _get_node_processes(self, plugin):
processes = []
for proc_lst in plugin.node_processes.values():
processes += proc_lst
return [(proc_name, proc_name) for proc_name in processes]
def get_node_processes(self, plugin_name, hadoop_version):
plugin = self.plugins.get_version_details(plugin_name, hadoop_version)
return self._get_node_processes(plugin)
def _extract_parameters(self, configs, scope, applicable_target):
parameters = []
for config in configs:
if (config['scope'] == scope and
config['applicable_target'] == applicable_target):
parameters.append(params.Parameter(config))
return parameters
def get_cluster_general_configs(self, plugin_name, hadoop_version):
plugin = self.plugins.get_version_details(plugin_name, hadoop_version)
return self._extract_parameters(plugin.configs, 'cluster', "general")
def get_general_node_group_configs(self, plugin_name, hadoop_version):
plugin = self.plugins.get_version_details(plugin_name, hadoop_version)
return self._extract_parameters(plugin.configs, 'node', 'general')
def get_targeted_node_group_configs(self, plugin_name, hadoop_version):
plugin = self.plugins.get_version_details(plugin_name, hadoop_version)
parameters = dict()
for service in plugin.node_processes.keys():
parameters[service] = self._extract_parameters(plugin.configs,
'node', service)
return parameters
def get_targeted_cluster_configs(self, plugin_name, hadoop_version):
plugin = self.plugins.get_version_details(plugin_name, hadoop_version)
parameters = dict()
for service in plugin.node_processes.keys():
parameters[service] = self._extract_parameters(plugin.configs,
'cluster', service)
return parameters


@ -1,72 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from saharaclient.api import base
class Image(base.Resource):
resource_name = 'Image'
defaults = {'description': ''}
class ImageManager(base.ResourceManager):
resource_class = Image
def list(self, search_opts=None):
"""Get a list of registered images."""
query = base.get_query_string(search_opts)
return self._list('/images%s' % query, 'images')
def get(self, id):
"""Get information about an image"""
return self._get('/images/%s' % id, 'image')
def unregister_image(self, image_id):
"""Remove an Image from Sahara Image Registry."""
self._delete('/images/%s' % image_id)
def update_image(self, image_id, user_name, desc=None):
"""Create or update an Image in Image Registry."""
desc = desc if desc else ''
data = {"username": user_name,
"description": desc}
return self._post('/images/%s' % image_id, data)
def update_tags(self, image_id, new_tags):
"""Update an Image's tags.
:param list new_tags: list of tags that will replace currently
assigned tags
"""
old_image = self.get(image_id)
old_tags = frozenset(old_image.tags)
new_tags = frozenset(new_tags)
to_add = list(new_tags - old_tags)
to_remove = list(old_tags - new_tags)
add_response, remove_response = None, None
if to_add:
add_response = self._post('/images/%s/tag' % image_id,
{'tags': to_add}, 'image')
if to_remove:
remove_response = self._post('/images/%s/untag' % image_id,
{'tags': to_remove}, 'image')
return remove_response or add_response or self.get(image_id)
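
`update_tags` above computes a minimal add/remove diff between the currently assigned and desired tag sets before issuing any requests. The set arithmetic can be shown on its own (standalone sketch, not importable from the retired package):

```python
def tag_diff(old_tags, new_tags):
    # Same frozenset arithmetic as the deleted ImageManager.update_tags:
    # tags only in the new set are added, tags only in the old set are
    # removed, and tags present in both generate no API call at all.
    old, new = frozenset(old_tags), frozenset(new_tags)
    return sorted(new - old), sorted(old - new)


to_add, to_remove = tag_diff(['vanilla', '2.7.1'], ['vanilla', '2.8.2'])
print(to_add, to_remove)
# → ['2.8.2'] ['2.7.1']
```

Computing the diff client-side keeps the tag/untag requests idempotent: re-running the same desired tag list produces empty diffs and no extra POSTs.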


@ -1,79 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from saharaclient.api import base
class JobBinaries(base.Resource):
resource_name = 'Job Binary'
class JobBinariesManager(base.ResourceManager):
resource_class = JobBinaries
def create(self, name, url, description=None, extra=None, is_public=None,
is_protected=None):
"""Create a Job Binary."""
data = {
"name": name,
"url": url
}
self._copy_if_defined(data, description=description, extra=extra,
is_public=is_public, is_protected=is_protected)
return self._create('/job-binaries', data, 'job_binary')
def list(self, search_opts=None, limit=None, marker=None,
sort_by=None, reverse=None):
"""Get a list of Job Binaries."""
query = base.get_query_string(search_opts, limit=limit, marker=marker,
sort_by=sort_by, reverse=reverse)
url = "/job-binaries%s" % query
return self._page(url, 'binaries', limit)
def get(self, job_binary_id):
"""Get information about a Job Binary."""
return self._get('/job-binaries/%s' % job_binary_id, 'job_binary')
def delete(self, job_binary_id):
"""Delete a Job Binary."""
self._delete('/job-binaries/%s' % job_binary_id)
def get_file(self, job_binary_id):
"""Download a Job Binary."""
resp = self.api.get('/job-binaries/%s/data' % job_binary_id)
if resp.status_code != 200:
self._raise_api_exception(resp)
return resp.content
def update(self, job_binary_id, data):
"""Update a Job Binary.
:param dict data: dict that contains fields that should be updated
with new values.
Fields that can be updated:
* name
* description
* url
* is_public
* is_protected
* extra - dict with `user` and `password` keyword arguments
"""
return self._update(
'/job-binaries/%s' % job_binary_id, data, 'job_binary')


@ -1,63 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from six.moves.urllib import parse as urlparse
from saharaclient.api import base
class JobBinaryInternal(base.Resource):
resource_name = 'JobBinaryInternal'
class JobBinaryInternalsManager(base.ResourceManager):
resource_class = JobBinaryInternal
NotUpdated = base.NotUpdated()
def create(self, name, data):
"""Create a Job Binary Internal.
        :param str data: raw data or script text
"""
return self._update('/job-binary-internals/%s' %
urlparse.quote(name.encode('utf-8')), data,
'job_binary_internal', dump_json=False)
def list(self, search_opts=None, limit=None, marker=None,
sort_by=None, reverse=None):
"""Get a list of Job Binary Internals."""
query = base.get_query_string(search_opts, limit=limit, marker=marker,
sort_by=sort_by, reverse=reverse)
url = "/job-binary-internals%s" % query
return self._page(url, 'binaries', limit)
def get(self, job_binary_id):
"""Get information about a Job Binary Internal."""
return self._get('/job-binary-internals/%s' % job_binary_id,
'job_binary_internal')
def delete(self, job_binary_id):
"""Delete a Job Binary Internal."""
self._delete('/job-binary-internals/%s' % job_binary_id)
def update(self, job_binary_id, name=NotUpdated, is_public=NotUpdated,
is_protected=NotUpdated):
"""Update a Job Binary Internal."""
data = {}
self._copy_if_updated(data, name=name, is_public=is_public,
is_protected=is_protected)
return self._patch('/job-binary-internals/%s' % job_binary_id, data)
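`update()` uses the `base.NotUpdated()` sentinel to distinguish "argument not passed" from "explicitly set to None", so `None` remains a valid new value for a field. A self-contained sketch of the pattern (illustrative names, not the exact `base` implementation):

```python
class NotUpdated(object):
    """Sentinel meaning 'leave this field unchanged'."""
    def __repr__(self):
        return "NotUpdated"

NOT_UPDATED = NotUpdated()

def copy_if_updated(data, **kwargs):
    # Checking against the sentinel (instead of None) lets callers
    # explicitly reset a field to None.
    for key, value in kwargs.items():
        if not isinstance(value, NotUpdated):
            data[key] = value

patch = {}
copy_if_updated(patch, name=NOT_UPDATED, is_public=None)
# name stays out of the PATCH body; is_public=None is an explicit update
```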


@ -1,65 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from saharaclient.api import base
class JobExecution(base.Resource):
resource_name = 'JobExecution'
class JobExecutionsManager(base.ResourceManager):
resource_class = JobExecution
NotUpdated = base.NotUpdated()
def list(self, search_opts=None, marker=None, limit=None,
sort_by=None, reverse=None):
"""Get a list of Job Executions."""
query = base.get_query_string(search_opts, limit=limit, marker=marker,
sort_by=sort_by, reverse=reverse)
url = "/job-executions%s" % query
return self._page(url, 'job_executions', limit)
def get(self, obj_id):
"""Get information about a Job Execution."""
return self._get('/job-executions/%s' % obj_id, 'job_execution')
def delete(self, obj_id):
"""Delete a Job Execution."""
self._delete('/job-executions/%s' % obj_id)
def create(self, job_id, cluster_id, input_id=None,
output_id=None, configs=None, interface=None, is_public=None,
is_protected=None):
"""Launch a Job."""
url = "/jobs/%s/execute" % job_id
data = {
"cluster_id": cluster_id,
}
self._copy_if_defined(data, input_id=input_id, output_id=output_id,
job_configs=configs, interface=interface,
is_public=is_public, is_protected=is_protected)
return self._create(url, data, 'job_execution')
def update(self, obj_id, is_public=NotUpdated, is_protected=NotUpdated):
"""Update a Job Execution."""
data = {}
self._copy_if_updated(data, is_public=is_public,
is_protected=is_protected)
return self._patch('/job-executions/%s' % obj_id, data)


@ -1,29 +0,0 @@
# Copyright (c) 2015 Red Hat Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from saharaclient.api import base
class JobType(base.Resource):
resource_name = 'JobType'
class JobTypesManager(base.ResourceManager):
resource_class = JobType
def list(self, search_opts=None):
"""Get a list of job types supported by plugins."""
query = base.get_query_string(search_opts)
return self._list('/job-types%s' % query, 'job_types')


@ -1,69 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from saharaclient.api import base
class Job(base.Resource):
resource_name = 'Job'
class JobsManager(base.ResourceManager):
resource_class = Job
NotUpdated = base.NotUpdated()
def create(self, name, type, mains=None, libs=None, description=None,
interface=None, is_public=None, is_protected=None):
"""Create a Job."""
data = {
'name': name,
'type': type
}
self._copy_if_defined(data, description=description, mains=mains,
libs=libs, interface=interface,
is_public=is_public, is_protected=is_protected)
return self._create('/jobs', data, 'job')
def list(self, search_opts=None, limit=None,
marker=None, sort_by=None, reverse=None):
"""Get a list of Jobs."""
query = base.get_query_string(search_opts, limit=limit, marker=marker,
sort_by=sort_by, reverse=reverse)
url = "/jobs%s" % query
return self._page(url, 'jobs', limit)
def get(self, job_id):
"""Get information about a Job"""
return self._get('/jobs/%s' % job_id, 'job')
def get_configs(self, job_type):
"""Get config hints for a specified Job type."""
return self._get('/jobs/config-hints/%s' % job_type)
def delete(self, job_id):
"""Delete a Job"""
self._delete('/jobs/%s' % job_id)
def update(self, job_id, name=NotUpdated, description=NotUpdated,
is_public=NotUpdated, is_protected=NotUpdated):
"""Update a Job."""
data = {}
self._copy_if_updated(data, name=name, description=description,
is_public=is_public, is_protected=is_protected)
return self._patch('/jobs/%s' % job_id, data)


@ -1,129 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from saharaclient.api import base
class NodeGroupTemplate(base.Resource):
resource_name = 'Node Group Template'
class NodeGroupTemplateManager(base.ResourceManager):
resource_class = NodeGroupTemplate
NotUpdated = base.NotUpdated()
def create(self, name, plugin_name, hadoop_version, flavor_id,
description=None, volumes_per_node=None, volumes_size=None,
node_processes=None, node_configs=None, floating_ip_pool=None,
security_groups=None, auto_security_group=None,
availability_zone=None, volumes_availability_zone=None,
volume_type=None, image_id=None, is_proxy_gateway=None,
volume_local_to_instance=None, use_autoconfig=None,
shares=None, is_public=None, is_protected=None,
volume_mount_prefix=None):
"""Create a Node Group Template."""
data = {
'name': name,
'plugin_name': plugin_name,
'hadoop_version': hadoop_version,
'flavor_id': flavor_id,
'node_processes': node_processes
}
self._copy_if_defined(data,
description=description,
node_configs=node_configs,
floating_ip_pool=floating_ip_pool,
security_groups=security_groups,
auto_security_group=auto_security_group,
availability_zone=availability_zone,
image_id=image_id,
is_proxy_gateway=is_proxy_gateway,
use_autoconfig=use_autoconfig,
shares=shares,
is_public=is_public,
is_protected=is_protected
)
if volumes_per_node:
data.update({"volumes_per_node": volumes_per_node,
"volumes_size": volumes_size})
if volumes_availability_zone:
data.update({"volumes_availability_zone":
volumes_availability_zone})
if volume_type:
data.update({"volume_type": volume_type})
if volume_local_to_instance:
data.update(
{"volume_local_to_instance": volume_local_to_instance})
if volume_mount_prefix:
data.update({"volume_mount_prefix": volume_mount_prefix})
return self._create('/node-group-templates', data,
'node_group_template')
def update(self, ng_template_id, name=NotUpdated, plugin_name=NotUpdated,
hadoop_version=NotUpdated, flavor_id=NotUpdated,
description=NotUpdated, volumes_per_node=NotUpdated,
volumes_size=NotUpdated, node_processes=NotUpdated,
node_configs=NotUpdated, floating_ip_pool=NotUpdated,
security_groups=NotUpdated, auto_security_group=NotUpdated,
availability_zone=NotUpdated,
volumes_availability_zone=NotUpdated, volume_type=NotUpdated,
image_id=NotUpdated, is_proxy_gateway=NotUpdated,
volume_local_to_instance=NotUpdated, use_autoconfig=NotUpdated,
shares=NotUpdated, is_public=NotUpdated,
is_protected=NotUpdated, volume_mount_prefix=NotUpdated):
"""Update a Node Group Template."""
data = {}
self._copy_if_updated(
data, name=name, plugin_name=plugin_name,
hadoop_version=hadoop_version, flavor_id=flavor_id,
description=description, volumes_per_node=volumes_per_node,
volumes_size=volumes_size, node_processes=node_processes,
node_configs=node_configs, floating_ip_pool=floating_ip_pool,
security_groups=security_groups,
auto_security_group=auto_security_group,
availability_zone=availability_zone,
volumes_availability_zone=volumes_availability_zone,
volume_type=volume_type, image_id=image_id,
is_proxy_gateway=is_proxy_gateway,
volume_local_to_instance=volume_local_to_instance,
use_autoconfig=use_autoconfig, shares=shares,
is_public=is_public, is_protected=is_protected,
volume_mount_prefix=volume_mount_prefix
)
return self._update('/node-group-templates/%s' % ng_template_id, data,
'node_group_template')
def list(self, search_opts=None, marker=None,
limit=None, sort_by=None, reverse=None):
"""Get a list of Node Group Templates."""
query = base.get_query_string(search_opts, limit=limit, marker=marker,
sort_by=sort_by, reverse=reverse)
url = "/node-group-templates%s" % query
return self._page(url, 'node_group_templates', limit)
def get(self, ng_template_id):
"""Get information about a Node Group Template."""
return self._get('/node-group-templates/%s' % ng_template_id,
'node_group_template')
def delete(self, ng_template_id):
"""Delete a Node Group Template."""
self._delete('/node-group-templates/%s' % ng_template_id)


@ -1,26 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
class Parameter(object):
"""This bean is used for building config entries."""
def __init__(self, config):
self.name = config['name']
self.description = config.get('description', "No description")
self.required = not config['is_optional']
self.default_value = config.get('default_value', None)
self.initial_value = self.default_value
self.param_type = config['config_type']
self.priority = int(config.get('priority', 2))
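Assuming a typical plugin config entry, the mapping performed by `Parameter` looks like this; the class is repeated verbatim so the example runs on its own:

```python
class Parameter(object):
    """This bean is used for building config entries."""
    def __init__(self, config):
        self.name = config['name']
        self.description = config.get('description', "No description")
        self.required = not config['is_optional']
        self.default_value = config.get('default_value', None)
        self.initial_value = self.default_value
        self.param_type = config['config_type']
        self.priority = int(config.get('priority', 2))

# An optional config entry with a default value but no description/priority.
p = Parameter({'name': 'dfs.replication', 'is_optional': True,
               'default_value': 3, 'config_type': 'int'})
# p.required is False; p.priority falls back to 2
```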


@ -1,75 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from six.moves.urllib import parse as urlparse
from saharaclient.api import base
class Plugin(base.Resource):
resource_name = 'Plugin'
def __init__(self, manager, info):
base.Resource.__init__(self, manager, info)
# Horizon requires each object in table to have an id
self.id = self.name
class PluginManager(base.ResourceManager):
resource_class = Plugin
def list(self, search_opts=None):
"""Get a list of Plugins."""
query = base.get_query_string(search_opts)
return self._list('/plugins%s' % query, 'plugins')
def get(self, plugin_name):
"""Get information about a Plugin."""
return self._get('/plugins/%s' % plugin_name, 'plugin')
def get_version_details(self, plugin_name, hadoop_version):
"""Get version details
Get the list of Services and Service Parameters for a specified
Plugin and Plugin Version.
"""
return self._get('/plugins/%s/%s' % (plugin_name, hadoop_version),
'plugin')
def update(self, plugin_name, values):
"""Update plugin and then return updated result to user
"""
return self._patch("/plugins/%s" % plugin_name, values, 'plugin')
def convert_to_cluster_template(self, plugin_name, hadoop_version,
template_name, filecontent):
"""Convert to cluster template
Create Cluster Template directly, avoiding Cluster Template
mechanism.
"""
resp = self.api.post('/plugins/%s/%s/convert-config/%s' %
(plugin_name,
hadoop_version,
urlparse.quote(template_name)),
data=filecontent)
if resp.status_code != 202:
raise RuntimeError('Failed to upload template file for plugin "%s"'
' and version "%s"' %
(plugin_name, hadoop_version))
else:
return base.get_json(resp)['cluster_template']

File diff suppressed because it is too large.


@ -1,47 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_utils import importutils
class UnsupportedVersion(Exception):
"""Indication for using an unsupported version of the API.
Indicates that the user is trying to use an unsupported
version of the API.
"""
pass
def get_client_class(version):
version_map = {
'1.0': 'saharaclient.api.client.Client',
'1.1': 'saharaclient.api.client.Client',
}
try:
client_path = version_map[str(version)]
except (KeyError, ValueError):
supported_versions = ', '.join(version_map.keys())
msg = ("Invalid client version '%(version)s'; must be one of: "
"%(versions)s") % {'version': version,
'versions': supported_versions}
raise UnsupportedVersion(msg)
return importutils.import_class(client_path)
def Client(version, *args, **kwargs):
client_class = get_client_class(version)
return client_class(*args, **kwargs)
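The factory normalizes the version to a string before the lookup, which is why a float like `1.1` also works. The lookup logic in isolation, with the same map but a local exception class (importing the real client class needs a full install):

```python
class UnsupportedVersion(Exception):
    """The requested API version is not supported."""

def get_client_path(version, version_map):
    try:
        return version_map[str(version)]
    except (KeyError, ValueError):
        supported = ', '.join(sorted(version_map))
        raise UnsupportedVersion(
            "Invalid client version '%s'; must be one of: %s"
            % (version, supported))

version_map = {'1.0': 'saharaclient.api.client.Client',
               '1.1': 'saharaclient.api.client.Client'}
path = get_client_path(1.1, version_map)  # float 1.1 -> str '1.1'
```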


@ -1,45 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""oslo.i18n integration module.
See http://docs.openstack.org/developer/oslo.i18n/usage.html
"""
try:
import oslo_i18n
# NOTE(dhellmann): This reference to o-s-l-o will be replaced by the
# application name when this module is synced into the separate
# repository. It is OK to have more than one translation function
# using the same domain, since there will still only be one message
# catalog.
_translators = oslo_i18n.TranslatorFactory(domain='saharaclient')
# The primary translation function using the well-known name "_"
_ = _translators.primary
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical
except ImportError:
# NOTE(dims): Support for cases where a project wants to use
# code from oslo-incubator, but is not ready to be internationalized
# (like tempest)
_ = _LI = _LW = _LE = _LC = lambda x: x


@ -1,234 +0,0 @@
# Copyright 2013 OpenStack Foundation
# Copyright 2013 Spanish National Research Council.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# E0202: An attribute inherited from %s hide this method
# pylint: disable=E0202
########################################################################
#
# THIS MODULE IS DEPRECATED
#
# Please refer to
# https://etherpad.openstack.org/p/kilo-saharaclient-library-proposals for
# the discussion leading to this deprecation.
#
# We recommend checking out the python-openstacksdk project
# (https://launchpad.net/python-openstacksdk) instead.
#
########################################################################
import abc
import argparse
import os
import six
from stevedore import extension
from saharaclient.openstack.common.apiclient import exceptions
_discovered_plugins = {}
def discover_auth_systems():
"""Discover the available auth-systems.
This won't take into account the old style auth-systems.
"""
global _discovered_plugins
_discovered_plugins = {}
def add_plugin(ext):
_discovered_plugins[ext.name] = ext.plugin
ep_namespace = "saharaclient.openstack.common.apiclient.auth"
mgr = extension.ExtensionManager(ep_namespace)
mgr.map(add_plugin)
def load_auth_system_opts(parser):
"""Load options needed by the available auth-systems into a parser.
This function will try to populate the parser with options from the
available plugins.
"""
group = parser.add_argument_group("Common auth options")
BaseAuthPlugin.add_common_opts(group)
for name, auth_plugin in six.iteritems(_discovered_plugins):
group = parser.add_argument_group(
"Auth-system '%s' options" % name,
conflict_handler="resolve")
auth_plugin.add_opts(group)
def load_plugin(auth_system):
try:
plugin_class = _discovered_plugins[auth_system]
except KeyError:
raise exceptions.AuthSystemNotFound(auth_system)
return plugin_class(auth_system=auth_system)
def load_plugin_from_args(args):
"""Load required plugin and populate it with options.
Try to guess auth system if it is not specified. Systems are tried in
alphabetical order.
:type args: argparse.Namespace
:raises: AuthPluginOptionsMissing
"""
auth_system = args.os_auth_system
if auth_system:
plugin = load_plugin(auth_system)
plugin.parse_opts(args)
plugin.sufficient_options()
return plugin
for plugin_auth_system in sorted(six.iterkeys(_discovered_plugins)):
plugin_class = _discovered_plugins[plugin_auth_system]
plugin = plugin_class()
plugin.parse_opts(args)
try:
plugin.sufficient_options()
except exceptions.AuthPluginOptionsMissing:
continue
return plugin
raise exceptions.AuthPluginOptionsMissing(["auth_system"])
@six.add_metaclass(abc.ABCMeta)
class BaseAuthPlugin(object):
"""Base class for authentication plugins.
An authentication plugin needs to override at least the authenticate
method to be a valid plugin.
"""
auth_system = None
opt_names = []
common_opt_names = [
"auth_system",
"username",
"password",
"tenant_name",
"token",
"auth_url",
]
def __init__(self, auth_system=None, **kwargs):
self.auth_system = auth_system or self.auth_system
self.opts = dict((name, kwargs.get(name))
for name in self.opt_names)
@staticmethod
def _parser_add_opt(parser, opt):
"""Add an option to parser in two variants.
:param opt: option name (with underscores)
"""
dashed_opt = opt.replace("_", "-")
env_var = "OS_%s" % opt.upper()
arg_default = os.environ.get(env_var, "")
arg_help = "Defaults to env[%s]." % env_var
parser.add_argument(
"--os-%s" % dashed_opt,
metavar="<%s>" % dashed_opt,
default=arg_default,
help=arg_help)
parser.add_argument(
"--os_%s" % opt,
metavar="<%s>" % dashed_opt,
help=argparse.SUPPRESS)
@classmethod
def add_opts(cls, parser):
"""Populate the parser with the options for this plugin.
"""
for opt in cls.opt_names:
# use `BaseAuthPlugin.common_opt_names` since it is never
# changed in child classes
if opt not in BaseAuthPlugin.common_opt_names:
cls._parser_add_opt(parser, opt)
@classmethod
def add_common_opts(cls, parser):
"""Add options that are common for several plugins.
"""
for opt in cls.common_opt_names:
cls._parser_add_opt(parser, opt)
@staticmethod
def get_opt(opt_name, args):
"""Return option name and value.
:param opt_name: name of the option, e.g., "username"
:param args: parsed arguments
"""
return (opt_name, getattr(args, "os_%s" % opt_name, None))
def parse_opts(self, args):
"""Parse the actual auth-system options if any.
This method is expected to populate the attribute `self.opts` with a
dict containing the options and values needed to make authentication.
"""
self.opts.update(dict(self.get_opt(opt_name, args)
for opt_name in self.opt_names))
def authenticate(self, http_client):
"""Authenticate using plugin defined method.
The method usually analyses `self.opts` and performs
a request to authentication server.
:param http_client: client object that needs authentication
:type http_client: HTTPClient
:raises: AuthorizationFailure
"""
self.sufficient_options()
self._do_authenticate(http_client)
@abc.abstractmethod
def _do_authenticate(self, http_client):
"""Protected method for authentication.
"""
def sufficient_options(self):
"""Check if all required options are present.
:raises: AuthPluginOptionsMissing
"""
missing = [opt
for opt in self.opt_names
if not self.opts.get(opt)]
if missing:
raise exceptions.AuthPluginOptionsMissing(missing)
@abc.abstractmethod
def token_and_endpoint(self, endpoint_type, service_type):
"""Return token and endpoint.
:param service_type: Service type of the endpoint
:type service_type: string
:param endpoint_type: Type of endpoint.
Possible values: public or publicURL,
internal or internalURL,
admin or adminURL
:type endpoint_type: string
:returns: tuple of token and endpoint strings
:raises: EndpointException
"""


@ -1,479 +0,0 @@
# Copyright 2010 Jacob Kaplan-Moss
# Copyright 2011 Nebula, Inc.
# Copyright 2013 Alessio Ababilov
# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Exception definitions.
"""
########################################################################
#
# THIS MODULE IS DEPRECATED
#
# Please refer to
# https://etherpad.openstack.org/p/kilo-saharaclient-library-proposals for
# the discussion leading to this deprecation.
#
# We recommend checking out the python-openstacksdk project
# (https://launchpad.net/python-openstacksdk) instead.
#
########################################################################
import inspect
import sys
import six
from saharaclient.openstack.common._i18n import _
class ClientException(Exception):
"""The base exception class for all exceptions this library raises.
"""
pass
class ValidationError(ClientException):
"""Error in validation on API client side."""
pass
class UnsupportedVersion(ClientException):
"""User is trying to use an unsupported version of the API."""
pass
class CommandError(ClientException):
"""Error in CLI tool."""
pass
class AuthorizationFailure(ClientException):
"""Cannot authorize API client."""
pass
class ConnectionError(ClientException):
"""Cannot connect to API service."""
pass
class ConnectionRefused(ConnectionError):
"""Connection refused while trying to connect to API service."""
pass
class AuthPluginOptionsMissing(AuthorizationFailure):
"""Auth plugin misses some options."""
def __init__(self, opt_names):
super(AuthPluginOptionsMissing, self).__init__(
_("Authentication failed. Missing options: %s") %
", ".join(opt_names))
self.opt_names = opt_names
class AuthSystemNotFound(AuthorizationFailure):
"""User has specified an AuthSystem that is not installed."""
def __init__(self, auth_system):
super(AuthSystemNotFound, self).__init__(
_("AuthSystemNotFound: %r") % auth_system)
self.auth_system = auth_system
class NoUniqueMatch(ClientException):
"""Multiple entities found instead of one."""
pass
class EndpointException(ClientException):
"""Something is rotten in Service Catalog."""
pass
class EndpointNotFound(EndpointException):
"""Could not find requested endpoint in Service Catalog."""
pass
class AmbiguousEndpoints(EndpointException):
"""Found more than one matching endpoint in Service Catalog."""
def __init__(self, endpoints=None):
super(AmbiguousEndpoints, self).__init__(
_("AmbiguousEndpoints: %r") % endpoints)
self.endpoints = endpoints
class HttpError(ClientException):
"""The base exception class for all HTTP exceptions.
"""
http_status = 0
message = _("HTTP Error")
def __init__(self, message=None, details=None,
response=None, request_id=None,
url=None, method=None, http_status=None):
self.http_status = http_status or self.http_status
self.message = message or self.message
self.details = details
self.request_id = request_id
self.response = response
self.url = url
self.method = method
formatted_string = "%s (HTTP %s)" % (self.message, self.http_status)
if request_id:
formatted_string += " (Request-ID: %s)" % request_id
super(HttpError, self).__init__(formatted_string)
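`HttpError.__init__` folds the status code and request id into the exception text. A trimmed copy showing the resulting message (the details/response/url/method fields are dropped for brevity):

```python
class HttpError(Exception):
    """Base for HTTP errors; subclasses override http_status/message."""
    http_status = 0
    message = "HTTP Error"

    def __init__(self, message=None, request_id=None, http_status=None):
        self.http_status = http_status or self.http_status
        self.message = message or self.message
        formatted = "%s (HTTP %s)" % (self.message, self.http_status)
        if request_id:
            formatted += " (Request-ID: %s)" % request_id
        super(HttpError, self).__init__(formatted)

err = HttpError(message="Not Found", http_status=404, request_id="req-42")
# str(err) == "Not Found (HTTP 404) (Request-ID: req-42)"
```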
class HTTPRedirection(HttpError):
"""HTTP Redirection."""
message = _("HTTP Redirection")
class HTTPClientError(HttpError):
"""Client-side HTTP error.
Exception for cases in which the client seems to have erred.
"""
message = _("HTTP Client Error")
class HttpServerError(HttpError):
"""Server-side HTTP error.
Exception for cases in which the server is aware that it has
erred or is incapable of performing the request.
"""
message = _("HTTP Server Error")
class MultipleChoices(HTTPRedirection):
"""HTTP 300 - Multiple Choices.
Indicates multiple options for the resource that the client may follow.
"""
http_status = 300
message = _("Multiple Choices")
class BadRequest(HTTPClientError):
"""HTTP 400 - Bad Request.
The request cannot be fulfilled due to bad syntax.
"""
http_status = 400
message = _("Bad Request")
class Unauthorized(HTTPClientError):
"""HTTP 401 - Unauthorized.
Similar to 403 Forbidden, but specifically for use when authentication
is required and has failed or has not yet been provided.
"""
http_status = 401
message = _("Unauthorized")
class PaymentRequired(HTTPClientError):
"""HTTP 402 - Payment Required.
Reserved for future use.
"""
http_status = 402
message = _("Payment Required")
class Forbidden(HTTPClientError):
"""HTTP 403 - Forbidden.
The request was a valid request, but the server is refusing to respond
to it.
"""
http_status = 403
message = _("Forbidden")
class NotFound(HTTPClientError):
"""HTTP 404 - Not Found.
The requested resource could not be found but may be available again
in the future.
"""
http_status = 404
message = _("Not Found")
class MethodNotAllowed(HTTPClientError):
"""HTTP 405 - Method Not Allowed.
A request was made of a resource using a request method not supported
by that resource.
"""
http_status = 405
message = _("Method Not Allowed")
class NotAcceptable(HTTPClientError):
"""HTTP 406 - Not Acceptable.
The requested resource is only capable of generating content not
acceptable according to the Accept headers sent in the request.
"""
http_status = 406
message = _("Not Acceptable")
class ProxyAuthenticationRequired(HTTPClientError):
"""HTTP 407 - Proxy Authentication Required.
The client must first authenticate itself with the proxy.
"""
http_status = 407
message = _("Proxy Authentication Required")
class RequestTimeout(HTTPClientError):
"""HTTP 408 - Request Timeout.
The server timed out waiting for the request.
"""
http_status = 408
message = _("Request Timeout")
class Conflict(HTTPClientError):
"""HTTP 409 - Conflict.
Indicates that the request could not be processed because of conflict
in the request, such as an edit conflict.
"""
http_status = 409
message = _("Conflict")
class Gone(HTTPClientError):
"""HTTP 410 - Gone.
Indicates that the resource requested is no longer available and will
not be available again.
"""
http_status = 410
message = _("Gone")
class LengthRequired(HTTPClientError):
"""HTTP 411 - Length Required.
The request did not specify the length of its content, which is
required by the requested resource.
"""
http_status = 411
message = _("Length Required")
class PreconditionFailed(HTTPClientError):
"""HTTP 412 - Precondition Failed.
The server does not meet one of the preconditions that the requester
put on the request.
"""
http_status = 412
message = _("Precondition Failed")
class RequestEntityTooLarge(HTTPClientError):
"""HTTP 413 - Request Entity Too Large.
The request is larger than the server is willing or able to process.
"""
http_status = 413
message = _("Request Entity Too Large")
def __init__(self, *args, **kwargs):
try:
self.retry_after = int(kwargs.pop('retry_after'))
except (KeyError, ValueError):
self.retry_after = 0
super(RequestEntityTooLarge, self).__init__(*args, **kwargs)
class RequestUriTooLong(HTTPClientError):
"""HTTP 414 - Request-URI Too Long.
The URI provided was too long for the server to process.
"""
http_status = 414
message = _("Request-URI Too Long")
class UnsupportedMediaType(HTTPClientError):
"""HTTP 415 - Unsupported Media Type.
The request entity has a media type which the server or resource does
not support.
"""
http_status = 415
message = _("Unsupported Media Type")
class RequestedRangeNotSatisfiable(HTTPClientError):
"""HTTP 416 - Requested Range Not Satisfiable.
The client has asked for a portion of the file, but the server cannot
supply that portion.
"""
http_status = 416
message = _("Requested Range Not Satisfiable")
class ExpectationFailed(HTTPClientError):
"""HTTP 417 - Expectation Failed.
The server cannot meet the requirements of the Expect request-header field.
"""
http_status = 417
message = _("Expectation Failed")
class UnprocessableEntity(HTTPClientError):
"""HTTP 422 - Unprocessable Entity.
The request was well-formed but was unable to be followed due to semantic
errors.
"""
http_status = 422
message = _("Unprocessable Entity")
class InternalServerError(HttpServerError):
"""HTTP 500 - Internal Server Error.
A generic error message, given when no more specific message is suitable.
"""
http_status = 500
message = _("Internal Server Error")
# NotImplemented is a Python builtin constant, hence the HttpNotImplemented name.
class HttpNotImplemented(HttpServerError):
"""HTTP 501 - Not Implemented.
The server either does not recognize the request method, or it lacks
the ability to fulfill the request.
"""
http_status = 501
message = _("Not Implemented")
class BadGateway(HttpServerError):
"""HTTP 502 - Bad Gateway.
The server was acting as a gateway or proxy and received an invalid
response from the upstream server.
"""
http_status = 502
message = _("Bad Gateway")
class ServiceUnavailable(HttpServerError):
"""HTTP 503 - Service Unavailable.
The server is currently unavailable.
"""
http_status = 503
message = _("Service Unavailable")
class GatewayTimeout(HttpServerError):
"""HTTP 504 - Gateway Timeout.
The server was acting as a gateway or proxy and did not receive a timely
response from the upstream server.
"""
http_status = 504
message = _("Gateway Timeout")
class HttpVersionNotSupported(HttpServerError):
"""HTTP 505 - HttpVersion Not Supported.
The server does not support the HTTP protocol version used in the request.
"""
http_status = 505
message = _("HTTP Version Not Supported")
# _code_map contains all the classes that have http_status attribute.
_code_map = dict(
(getattr(obj, 'http_status', None), obj)
for name, obj in six.iteritems(vars(sys.modules[__name__]))
if inspect.isclass(obj) and getattr(obj, 'http_status', False)
)
def from_response(response, method, url):
"""Returns an instance of :class:`HttpError` or subclass based on response.
:param response: instance of `requests.Response` class
:param method: HTTP method used for request
:param url: URL used for request
"""
req_id = response.headers.get("x-openstack-request-id")
# NOTE(hdd) true for older versions of nova and cinder
if not req_id:
req_id = response.headers.get("x-compute-request-id")
kwargs = {
"http_status": response.status_code,
"response": response,
"method": method,
"url": url,
"request_id": req_id,
}
if "retry-after" in response.headers:
kwargs["retry_after"] = response.headers["retry-after"]
content_type = response.headers.get("Content-Type", "")
if content_type.startswith("application/json"):
try:
body = response.json()
except ValueError:
pass
else:
if isinstance(body, dict):
error = body.get(list(body)[0])
if isinstance(error, dict):
kwargs["message"] = (error.get("message") or
error.get("faultstring"))
kwargs["details"] = (error.get("details") or
six.text_type(body))
elif content_type.startswith("text/"):
kwargs["details"] = response.text
try:
cls = _code_map[response.status_code]
except KeyError:
if 500 <= response.status_code < 600:
cls = HttpServerError
elif 400 <= response.status_code < 500:
cls = HTTPClientError
else:
cls = HttpError
return cls(**kwargs)
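The class lookup at the end of `from_response` can be summarized in isolation: an exact status-code match in the map wins, otherwise the 4xx/5xx family class is used, otherwise the base class. A sketch with demo names (the map entries here are stand-ins, not the real classes):

```python
# Demo of from_response's class resolution: exact status match first,
# then the 5xx/4xx family fallback, then the generic base.
_demo_code_map = {408: 'RequestTimeout', 409: 'Conflict', 503: 'ServiceUnavailable'}

def resolve_error_class(status_code):
    try:
        return _demo_code_map[status_code]
    except KeyError:
        if 500 <= status_code < 600:
            return 'HttpServerError'
        elif 400 <= status_code < 500:
            return 'HTTPClientError'
        return 'HttpError'
```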

# Copyright 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# W0603: Using the global statement
# W0621: Redefining name %s from outer scope
# pylint: disable=W0603,W0621
from __future__ import print_function
import getpass
import inspect
import os
import sys
import textwrap
from oslo_utils import encodeutils
from oslo_utils import strutils
import prettytable
import six
from six import moves
from saharaclient.openstack.common._i18n import _
class MissingArgs(Exception):
"""Supplied arguments are not sufficient for calling a function."""
def __init__(self, missing):
self.missing = missing
msg = _("Missing arguments: %s") % ", ".join(missing)
super(MissingArgs, self).__init__(msg)
def validate_args(fn, *args, **kwargs):
"""Check that the supplied args are sufficient for calling a function.
>>> validate_args(lambda a: None)
Traceback (most recent call last):
...
MissingArgs: Missing arguments: a
>>> validate_args(lambda a, b, c, d: None, 0, c=1)
Traceback (most recent call last):
...
MissingArgs: Missing arguments: b, d
:param fn: the function to check
:param args: the positional arguments supplied
:param kwargs: the keyword arguments supplied
"""
argspec = inspect.getargspec(fn)
num_defaults = len(argspec.defaults or [])
required_args = argspec.args[:len(argspec.args) - num_defaults]
def isbound(method):
return getattr(method, '__self__', None) is not None
if isbound(fn):
required_args.pop(0)
missing = [arg for arg in required_args if arg not in kwargs]
missing = missing[len(args):]
if missing:
raise MissingArgs(missing)
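The same check can be written against `inspect.signature`, since `inspect.getargspec` used above is deprecated and was removed in Python 3.11. A condensed sketch (the helper name is illustrative):

```python
import inspect

# Python 3 sketch of validate_args' core: collect required positional
# parameters, drop those satisfied by kwargs, then by positional args.
def find_missing_args(fn, *args, **kwargs):
    required = [p.name for p in inspect.signature(fn).parameters.values()
                if p.default is p.empty and
                p.kind is p.POSITIONAL_OR_KEYWORD]
    missing = [name for name in required if name not in kwargs]
    return missing[len(args):]
```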
def arg(*args, **kwargs):
"""Decorator for CLI args.
Example:
>>> @arg("name", help="Name of the new entity")
... def entity_create(args):
... pass
"""
def _decorator(func):
add_arg(func, *args, **kwargs)
return func
return _decorator
def env(*args, **kwargs):
"""Returns the first environment variable set.
If all are empty, defaults to '' or keyword arg `default`.
"""
for arg in args:
value = os.environ.get(arg)
if value:
return value
return kwargs.get('default', '')
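The fallback chain in `env()` is easy to demonstrate standalone; this copy uses throwaway variable names (`DEMO_*`) rather than real OpenStack settings:

```python
import os

# Throwaway demo of the env() fallback chain: the first non-empty
# variable wins, otherwise the 'default' keyword (or '').
def first_env(*names, **kwargs):
    for name in names:
        value = os.environ.get(name)
        if value:
            return value
    return kwargs.get('default', '')

os.environ['DEMO_EMPTY'] = ''
os.environ['DEMO_SET'] = 'value'
```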
def add_arg(func, *args, **kwargs):
"""Bind CLI arguments to a shell.py `do_foo` function."""
if not hasattr(func, 'arguments'):
func.arguments = []
# NOTE(sirp): avoid dups that can occur when the module is shared across
# tests.
if (args, kwargs) not in func.arguments:
# Because of the semantics of decorator composition if we just append
# to the options list positional options will appear to be backwards.
func.arguments.insert(0, (args, kwargs))
def unauthenticated(func):
"""Adds 'unauthenticated' attribute to decorated function.
Usage:
>>> @unauthenticated
... def mymethod(f):
... pass
"""
func.unauthenticated = True
return func
def isunauthenticated(func):
"""Checks if the function does not require authentication.
Mark such functions with the `@unauthenticated` decorator.
:returns: bool
"""
return getattr(func, 'unauthenticated', False)
def print_list(objs, fields, formatters=None, sortby_index=0,
mixed_case_fields=None, field_labels=None):
"""Print a list of objects as a table, one row per object.
:param objs: iterable of :class:`Resource`
:param fields: attributes that correspond to columns, in order
:param formatters: `dict` of callables for field formatting
:param sortby_index: index of the field for sorting table rows
:param mixed_case_fields: fields corresponding to object attributes that
have mixed case names (e.g., 'serverId')
:param field_labels: Labels to use in the heading of the table, default to
fields.
"""
formatters = formatters or {}
mixed_case_fields = mixed_case_fields or []
field_labels = field_labels or fields
if len(field_labels) != len(fields):
raise ValueError(_("Field labels list %(labels)s has different number "
"of elements than fields list %(fields)s")
% {'labels': field_labels, 'fields': fields})
if sortby_index is None:
kwargs = {}
else:
kwargs = {'sortby': field_labels[sortby_index]}
pt = prettytable.PrettyTable(field_labels)
pt.align = 'l'
for o in objs:
row = []
for field in fields:
if field in formatters:
row.append(formatters[field](o))
else:
if field in mixed_case_fields:
field_name = field.replace(' ', '_')
else:
field_name = field.lower().replace(' ', '_')
data = getattr(o, field_name, '')
row.append(data)
pt.add_row(row)
if six.PY3:
print(encodeutils.safe_encode(pt.get_string(**kwargs)).decode())
else:
print(encodeutils.safe_encode(pt.get_string(**kwargs)))
def print_dict(dct, dict_property="Property", wrap=0, dict_value='Value'):
"""Print a `dict` as a table of two columns.
:param dct: `dict` to print
:param dict_property: name of the first column
:param wrap: wrapping for the second column
:param dict_value: header label for the value (second) column
"""
pt = prettytable.PrettyTable([dict_property, dict_value])
pt.align = 'l'
for k, v in sorted(dct.items()):
# convert dict to str to check length
if isinstance(v, dict):
v = six.text_type(v)
if wrap > 0:
v = textwrap.fill(six.text_type(v), wrap)
# if value has a newline, add in multiple rows
# e.g. fault with stacktrace
if v and isinstance(v, six.string_types) and r'\n' in v:
lines = v.strip().split(r'\n')
col1 = k
for line in lines:
pt.add_row([col1, line])
col1 = ''
else:
pt.add_row([k, v])
if six.PY3:
print(encodeutils.safe_encode(pt.get_string()).decode())
else:
print(encodeutils.safe_encode(pt.get_string()))
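Note that the `r'\n' in v` test in `print_dict` matches a literal backslash-n character pair (as found in serialized fault strings), not a real newline. A quick demonstration:

```python
# A serialized fault string carries the two characters '\' + 'n';
# that is what the r'\n' membership test and split operate on.
fault = 'Traceback (most recent call last):\\nsomething failed'
lines = fault.strip().split(r'\n')
```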
def get_password(max_password_prompts=3):
"""Read password from TTY."""
verify = strutils.bool_from_string(env("OS_VERIFY_PASSWORD"))
pw = None
if hasattr(sys.stdin, "isatty") and sys.stdin.isatty():
# Check for Ctrl-D
try:
for __ in moves.range(max_password_prompts):
pw1 = getpass.getpass("OS Password: ")
if verify:
pw2 = getpass.getpass("Please verify: ")
else:
pw2 = pw1
if pw1 == pw2 and pw1:
pw = pw1
break
except EOFError:
pass
return pw
def service_type(stype):
"""Adds 'service_type' attribute to decorated function.
Usage:
.. code-block:: python
@service_type('volume')
def mymethod(f):
...
"""
def inner(f):
f.service_type = stype
return f
return inner
def get_service_type(f):
"""Retrieves service type from function."""
return getattr(f, 'service_type', None)
def pretty_choice_list(l):
return ', '.join("'%s'" % i for i in l)
def exit(msg=''):
if msg:
print(msg, file=sys.stderr)
sys.exit(1)

# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from osc_lib import utils
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
DEFAULT_DATA_PROCESSING_API_VERSION = "1.1"
API_VERSION_OPTION = "os_data_processing_api_version"
API_NAME = "data_processing"
API_VERSIONS = {
"1.1": "saharaclient.api.client.Client"
}
def make_client(instance):
data_processing_client = utils.get_client_class(
API_NAME,
instance._api_version[API_NAME],
API_VERSIONS)
LOG.debug('Instantiating data-processing client: %s',
data_processing_client)
kwargs = utils.build_kwargs_dict('endpoint_type', instance._interface)
client = data_processing_client(
session=instance.session,
region_name=instance._region_name,
cacert=instance._cacert,
insecure=instance._insecure,
sahara_url=instance._cli_options.data_processing_url,
**kwargs
)
return client
def build_option_parser(parser):
"""Hook to add global options."""
parser.add_argument(
"--os-data-processing-api-version",
metavar="<data-processing-api-version>",
default=utils.env(
'OS_DATA_PROCESSING_API_VERSION',
default=DEFAULT_DATA_PROCESSING_API_VERSION),
help=("Data processing API version, default=" +
DEFAULT_DATA_PROCESSING_API_VERSION +
' (Env: OS_DATA_PROCESSING_API_VERSION)'))
parser.add_argument(
"--os-data-processing-url",
default=utils.env(
"OS_DATA_PROCESSING_URL"),
help=("Data processing API URL, "
"(Env: OS_DATA_PROCESSING_URL)"))
return parser

# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import sys
from osc_lib.command import command
from osc_lib import exceptions
from osc_lib import utils as osc_utils
from oslo_log import log as logging
from saharaclient.osc.v1 import utils
CT_FIELDS = ['id', 'name', 'plugin_name', 'plugin_version', 'description',
'node_groups', 'anti_affinity', 'use_autoconfig', 'is_default',
'is_protected', 'is_public', 'domain_name']
def _format_node_groups_list(node_groups):
return ', '.join(
['%s:%s' % (ng['name'], ng['count']) for ng in node_groups])
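The formatter above turns the API's node-group dicts into comma-separated `name:count` tokens; a standalone copy for illustration:

```python
# Standalone copy of _format_node_groups_list: one "name:count"
# token per node group, comma-joined.
def format_node_groups(node_groups):
    return ', '.join('%s:%s' % (ng['name'], ng['count']) for ng in node_groups)
```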
def _format_ct_output(data):
data['plugin_version'] = data.pop('hadoop_version')
data['node_groups'] = _format_node_groups_list(data['node_groups'])
data['anti_affinity'] = osc_utils.format_list(data['anti_affinity'])
def _configure_node_groups(node_groups, client):
node_groups_list = dict(
map(lambda x: x.split(':', 1), node_groups))
node_groups = []
plugins_versions = set()
for name, count in node_groups_list.items():
ng = utils.get_resource(client.node_group_templates, name)
node_groups.append({'name': ng.name,
'count': int(count),
'node_group_template_id': ng.id})
plugins_versions.add((ng.plugin_name, ng.hadoop_version))
if len(plugins_versions) != 1:
raise exceptions.CommandError('All node groups must use the same '
'plugin and plugin version')
plugin, plugin_version = plugins_versions.pop()
return plugin, plugin_version, node_groups
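The parsing in `_configure_node_groups` splits each CLI token on the first `:` into a template name and an instance count, then enforces that every referenced template belongs to one (plugin, version) pair. A self-contained sketch, with `plugin_of` standing in for the template lookup (both names are hypothetical):

```python
# Sketch of _configure_node_groups' token parsing and its
# single-plugin invariant. plugin_of maps template name -> (plugin,
# version), replacing the client-side template lookup.
def parse_node_group_tokens(tokens, plugin_of):
    parsed = dict(t.split(':', 1) for t in tokens)
    groups = [{'name': name, 'count': int(count)}
              for name, count in parsed.items()]
    plugins = {plugin_of[g['name']] for g in groups}
    if len(plugins) != 1:
        raise ValueError('node groups span multiple plugins')
    return plugins.pop(), groups
```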
class CreateClusterTemplate(command.ShowOne):
"""Creates cluster template"""
log = logging.getLogger(__name__ + ".CreateClusterTemplate")
def get_parser(self, prog_name):
parser = super(CreateClusterTemplate, self).get_parser(prog_name)
parser.add_argument(
'--name',
metavar="<name>",
help="Name of the cluster template [REQUIRED if JSON is not "
"provided]",
)
parser.add_argument(
'--node-groups',
metavar="<node-group:instances_count>",
nargs="+",
help="List of the node groups (names or IDs) and numbers of "
"instances for each one of them [REQUIRED if JSON is not "
"provided]"
)
parser.add_argument(
'--anti-affinity',
metavar="<anti-affinity>",
nargs="+",
help="List of processes that should be added to an anti-affinity "
"group"
)
parser.add_argument(
'--description',
metavar="<description>",
help='Description of the cluster template'
)
parser.add_argument(
'--autoconfig',
action='store_true',
default=False,
help='If enabled, instances of the cluster will be '
'automatically configured',
)
parser.add_argument(
'--public',
action='store_true',
default=False,
help='Make the cluster template public (Visible from other '
'tenants)',
)
parser.add_argument(
'--protected',
action='store_true',
default=False,
help='Make the cluster template protected',
)
parser.add_argument(
'--json',
metavar='<filename>',
help='JSON representation of the cluster template. Other '
'arguments will not be taken into account if this one is '
'provided'
)
parser.add_argument(
'--shares',
metavar='<filename>',
help='JSON representation of the manila shares'
)
parser.add_argument(
'--configs',
metavar='<filename>',
help='JSON representation of the cluster template configs'
)
parser.add_argument(
'--domain-name',
metavar='<domain-name>',
help='Domain name for instances of this cluster template. This '
'option is available if \'use_designate\' config is True'
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
if parsed_args.json:
blob = osc_utils.read_blob_file_contents(parsed_args.json)
try:
template = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'template from file %s: %s' % (parsed_args.json, e))
if 'neutron_management_network' in template:
template['net_id'] = template.pop('neutron_management_network')
data = client.cluster_templates.create(**template).to_dict()
else:
if not parsed_args.name or not parsed_args.node_groups:
raise exceptions.CommandError(
'At least the --name and --node-groups arguments must be '
'specified, or a JSON template must be provided with the '
'--json argument')
configs = None
if parsed_args.configs:
blob = osc_utils.read_blob_file_contents(parsed_args.configs)
try:
configs = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'configs from file %s: %s' % (parsed_args.configs, e))
shares = None
if parsed_args.shares:
blob = osc_utils.read_blob_file_contents(parsed_args.shares)
try:
shares = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'shares from file %s: %s' % (parsed_args.shares, e))
plugin, plugin_version, node_groups = _configure_node_groups(
parsed_args.node_groups, client)
data = client.cluster_templates.create(
name=parsed_args.name,
plugin_name=plugin,
hadoop_version=plugin_version,
description=parsed_args.description,
node_groups=node_groups,
use_autoconfig=parsed_args.autoconfig,
cluster_configs=configs,
shares=shares,
is_public=parsed_args.public,
is_protected=parsed_args.protected,
domain_name=parsed_args.domain_name
).to_dict()
_format_ct_output(data)
data = utils.prepare_data(data, CT_FIELDS)
return self.dict2columns(data)
class ListClusterTemplates(command.Lister):
"""Lists cluster templates"""
log = logging.getLogger(__name__ + ".ListClusterTemplates")
def get_parser(self, prog_name):
parser = super(ListClusterTemplates, self).get_parser(prog_name)
parser.add_argument(
'--long',
action='store_true',
default=False,
help='List additional fields in output',
)
parser.add_argument(
'--plugin',
metavar="<plugin>",
help="List cluster templates for specific plugin"
)
parser.add_argument(
'--plugin-version',
metavar="<plugin_version>",
help="List cluster templates with specific version of the "
"plugin"
)
parser.add_argument(
'--name',
metavar="<name-substring>",
help="List cluster templates with specific substring in the "
"name"
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
search_opts = {}
if parsed_args.plugin:
search_opts['plugin_name'] = parsed_args.plugin
if parsed_args.plugin_version:
search_opts['hadoop_version'] = parsed_args.plugin_version
data = client.cluster_templates.list(search_opts=search_opts)
if parsed_args.name:
data = utils.get_by_name_substring(data, parsed_args.name)
if parsed_args.long:
columns = ('name', 'id', 'plugin_name', 'hadoop_version',
'node_groups', 'description')
column_headers = utils.prepare_column_headers(
columns, {'hadoop_version': 'plugin_version'})
else:
columns = ('name', 'id', 'plugin_name', 'hadoop_version')
column_headers = utils.prepare_column_headers(
columns, {'hadoop_version': 'plugin_version'})
return (
column_headers,
(osc_utils.get_item_properties(
s,
columns,
formatters={
'node_groups': _format_node_groups_list
}
) for s in data)
)
class ShowClusterTemplate(command.ShowOne):
"""Display cluster template details"""
log = logging.getLogger(__name__ + ".ShowClusterTemplate")
def get_parser(self, prog_name):
parser = super(ShowClusterTemplate, self).get_parser(prog_name)
parser.add_argument(
"cluster_template",
metavar="<cluster-template>",
help="Name or id of the cluster template to display",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
data = utils.get_resource(
client.cluster_templates, parsed_args.cluster_template).to_dict()
_format_ct_output(data)
data = utils.prepare_data(data, CT_FIELDS)
return self.dict2columns(data)
class DeleteClusterTemplate(command.Command):
"""Deletes cluster template"""
log = logging.getLogger(__name__ + ".DeleteClusterTemplate")
def get_parser(self, prog_name):
parser = super(DeleteClusterTemplate, self).get_parser(prog_name)
parser.add_argument(
"cluster_template",
metavar="<cluster-template>",
nargs="+",
help="Name(s) or id(s) of the cluster template(s) to delete",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
for ct in parsed_args.cluster_template:
ct_id = utils.get_resource_id(client.cluster_templates, ct)
client.cluster_templates.delete(ct_id)
sys.stdout.write(
'Cluster template "{ct}" has been removed '
'successfully.\n'.format(ct=ct))
class UpdateClusterTemplate(command.ShowOne):
"""Updates cluster template"""
log = logging.getLogger(__name__ + ".UpdateClusterTemplate")
def get_parser(self, prog_name):
parser = super(UpdateClusterTemplate, self).get_parser(prog_name)
parser.add_argument(
'cluster_template',
metavar="<cluster-template>",
help="Name or ID of the cluster template [REQUIRED]",
)
parser.add_argument(
'--name',
metavar="<name>",
help="New name of the cluster template",
)
parser.add_argument(
'--node-groups',
metavar="<node-group:instances_count>",
nargs="+",
help="List of the node groups (names or IDs) and numbers of "
"instances for each one of them"
)
parser.add_argument(
'--anti-affinity',
metavar="<anti-affinity>",
nargs="+",
help="List of processes that should be added to an anti-affinity "
"group"
)
parser.add_argument(
'--description',
metavar="<description>",
help='Description of the cluster template'
)
autoconfig = parser.add_mutually_exclusive_group()
autoconfig.add_argument(
'--autoconfig-enable',
action='store_true',
help='Instances of the cluster will be '
'automatically configured',
dest='use_autoconfig'
)
autoconfig.add_argument(
'--autoconfig-disable',
action='store_false',
help='Instances of the cluster will not be '
'automatically configured',
dest='use_autoconfig'
)
public = parser.add_mutually_exclusive_group()
public.add_argument(
'--public',
action='store_true',
help='Make the cluster template public '
'(Visible from other tenants)',
dest='is_public'
)
public.add_argument(
'--private',
action='store_false',
help='Make the cluster template private '
'(Visible only from this tenant)',
dest='is_public'
)
protected = parser.add_mutually_exclusive_group()
protected.add_argument(
'--protected',
action='store_true',
help='Make the cluster template protected',
dest='is_protected'
)
protected.add_argument(
'--unprotected',
action='store_false',
help='Make the cluster template unprotected',
dest='is_protected'
)
parser.add_argument(
'--json',
metavar='<filename>',
help='JSON representation of the cluster template. Other '
'arguments will not be taken into account if this one is '
'provided'
)
parser.add_argument(
'--shares',
metavar='<filename>',
help='JSON representation of the manila shares'
)
parser.add_argument(
'--configs',
metavar='<filename>',
help='JSON representation of the cluster template configs'
)
parser.add_argument(
'--domain-name',
metavar='<domain-name>',
default=None,
help='Domain name for instances of this cluster template. This '
'option is available if \'use_designate\' config is True'
)
parser.set_defaults(is_public=None, is_protected=None,
use_autoconfig=None)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
ct_id = utils.get_resource_id(
client.cluster_templates, parsed_args.cluster_template)
if parsed_args.json:
blob = osc_utils.read_blob_file_contents(parsed_args.json)
try:
template = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'template from file %s: %s' % (parsed_args.json, e))
data = client.cluster_templates.update(
ct_id, **template).to_dict()
else:
plugin, plugin_version, node_groups = None, None, None
if parsed_args.node_groups:
plugin, plugin_version, node_groups = _configure_node_groups(
parsed_args.node_groups, client)
configs = None
if parsed_args.configs:
blob = osc_utils.read_blob_file_contents(parsed_args.configs)
try:
configs = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'configs from file %s: %s' % (parsed_args.configs, e))
shares = None
if parsed_args.shares:
blob = osc_utils.read_blob_file_contents(parsed_args.shares)
try:
shares = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'shares from file %s: %s' % (parsed_args.shares, e))
update_dict = utils.create_dict_from_kwargs(
name=parsed_args.name,
plugin_name=plugin,
hadoop_version=plugin_version,
description=parsed_args.description,
node_groups=node_groups,
use_autoconfig=parsed_args.use_autoconfig,
cluster_configs=configs,
shares=shares,
is_public=parsed_args.is_public,
is_protected=parsed_args.is_protected,
domain_name=parsed_args.domain_name
)
data = client.cluster_templates.update(
ct_id, **update_dict).to_dict()
_format_ct_output(data)
data = utils.prepare_data(data, CT_FIELDS)
return self.dict2columns(data)

# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import sys
from osc_lib.command import command
from osc_lib import exceptions
from osc_lib import utils as osc_utils
from oslo_log import log as logging
from oslo_serialization import jsonutils
from saharaclient.osc.v1 import utils
CLUSTER_FIELDS = ["cluster_template_id", "use_autoconfig", "user_keypair_id",
"status", "image", "node_groups", "id",
"anti_affinity", "plugin_version", "name", "is_transient",
"is_protected", "description", "is_public",
"neutron_management_network", "plugin_name"]
def _format_node_groups_list(node_groups):
return ', '.join(
['%s:%s' % (ng['name'], ng['count']) for ng in node_groups])
def _format_cluster_output(data):
data['plugin_version'] = data.pop('hadoop_version')
data['image'] = data.pop('default_image_id')
data['node_groups'] = _format_node_groups_list(data['node_groups'])
data['anti_affinity'] = osc_utils.format_list(data['anti_affinity'])
def _prepare_health_checks(data):
additional_data = {}
ver = data.get('verification', {})
additional_fields = ['verification_status']
additional_data['verification_status'] = ver.get('status', 'UNKNOWN')
for check in ver.get('checks', []):
row_name = "Health check (%s)" % check['name']
additional_data[row_name] = check['status']
additional_fields.append(row_name)
return additional_data, additional_fields
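The flattening above is pure dict manipulation and can be run standalone; this copy keeps the same behavior, including the `UNKNOWN` default when no verification block is present:

```python
# Standalone version of _prepare_health_checks: flattens the nested
# verification block into "Health check (<name>)" rows plus the
# overall verification_status (UNKNOWN when absent).
def prepare_health_checks(cluster):
    ver = cluster.get('verification', {})
    rows = {'verification_status': ver.get('status', 'UNKNOWN')}
    fields = ['verification_status']
    for check in ver.get('checks', []):
        key = 'Health check (%s)' % check['name']
        rows[key] = check['status']
        fields.append(key)
    return rows, fields
```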
def _get_plugin_version(cluster_template, client):
ct = utils.get_resource(client.cluster_templates, cluster_template)
return ct.plugin_name, ct.hadoop_version, ct.id
class CreateCluster(command.ShowOne):
"""Creates cluster"""
log = logging.getLogger(__name__ + ".CreateCluster")
def get_parser(self, prog_name):
parser = super(CreateCluster, self).get_parser(prog_name)
parser.add_argument(
'--name',
metavar="<name>",
help="Name of the cluster [REQUIRED if JSON is not provided]",
)
parser.add_argument(
'--cluster-template',
metavar="<cluster-template>",
help="Cluster template name or ID [REQUIRED if JSON is not "
"provided]"
)
parser.add_argument(
'--image',
metavar="<image>",
help='Image that will be used for cluster deployment (Name or ID) '
'[REQUIRED if JSON is not provided]'
)
parser.add_argument(
'--description',
metavar="<description>",
help='Description of the cluster'
)
parser.add_argument(
'--user-keypair',
metavar="<keypair>",
help='User keypair to get access to VMs after cluster creation'
)
parser.add_argument(
'--neutron-network',
metavar="<network>",
help='Instances of the cluster will get fixed IP addresses in '
'this network. (Name or ID should be provided)'
)
parser.add_argument(
'--count',
metavar="<count>",
type=int,
help='Number of clusters to be created'
)
parser.add_argument(
'--public',
action='store_true',
default=False,
help='Make the cluster public (Visible from other tenants)',
)
parser.add_argument(
'--protected',
action='store_true',
default=False,
help='Make the cluster protected',
)
parser.add_argument(
'--transient',
action='store_true',
default=False,
help='Create transient cluster',
)
parser.add_argument(
'--json',
metavar='<filename>',
help='JSON representation of the cluster. Other '
'arguments (except for --wait) will not be taken into '
'account if this one is provided'
)
parser.add_argument(
'--wait',
action='store_true',
default=False,
help='Wait for the cluster creation to complete',
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
network_client = self.app.client_manager.network
if parsed_args.json:
blob = osc_utils.read_blob_file_contents(parsed_args.json)
try:
template = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'template from file %s: %s' % (parsed_args.json, e))
if 'neutron_management_network' in template:
template['net_id'] = template.pop('neutron_management_network')
if 'count' in template:
parsed_args.count = template['count']
data = client.clusters.create(**template).to_dict()
else:
if not parsed_args.name or not parsed_args.cluster_template \
or not parsed_args.image:
raise exceptions.CommandError(
'At least the --name, --cluster-template and --image arguments '
'must be specified, or a JSON template must be provided with '
'the --json argument')
plugin, plugin_version, template_id = _get_plugin_version(
parsed_args.cluster_template, client)
image_id = utils.get_resource_id(client.images, parsed_args.image)
net_id = (network_client.find_network(
parsed_args.neutron_network, ignore_missing=False).id if
parsed_args.neutron_network else None)
data = client.clusters.create(
name=parsed_args.name,
plugin_name=plugin,
hadoop_version=plugin_version,
cluster_template_id=template_id,
default_image_id=image_id,
description=parsed_args.description,
is_transient=parsed_args.transient,
user_keypair_id=parsed_args.user_keypair,
net_id=net_id,
count=parsed_args.count,
is_public=parsed_args.public,
is_protected=parsed_args.protected
).to_dict()
if parsed_args.count and parsed_args.count > 1:
clusters = [
utils.get_resource(client.clusters, id)
for id in data['clusters']]
if parsed_args.wait:
for cluster in clusters:
if not osc_utils.wait_for_status(
client.clusters.get, cluster.id):
self.log.error(
'Error occurred during cluster creation: %s' %
cluster.id)
data = {}
for cluster in clusters:
data[cluster.name] = cluster.id
else:
if parsed_args.wait:
if not osc_utils.wait_for_status(
client.clusters.get, data['id']):
self.log.error(
'Error occurred during cluster creation: %s' %
data['id'])
data = client.clusters.get(data['id']).to_dict()
_format_cluster_output(data)
data = utils.prepare_data(data, CLUSTER_FIELDS)
return self.dict2columns(data)
class ListClusters(command.Lister):
"""Lists clusters"""
log = logging.getLogger(__name__ + ".ListClusters")
def get_parser(self, prog_name):
parser = super(ListClusters, self).get_parser(prog_name)
parser.add_argument(
'--long',
action='store_true',
default=False,
help='List additional fields in output',
)
parser.add_argument(
'--plugin',
metavar="<plugin>",
help="List clusters with specific plugin"
)
parser.add_argument(
'--plugin-version',
metavar="<plugin_version>",
help="List clusters with specific version of the "
"plugin"
)
parser.add_argument(
'--name',
metavar="<name-substring>",
help="List clusters with specific substring in the name"
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
search_opts = {}
if parsed_args.plugin:
search_opts['plugin_name'] = parsed_args.plugin
if parsed_args.plugin_version:
search_opts['hadoop_version'] = parsed_args.plugin_version
data = client.clusters.list(search_opts=search_opts)
if parsed_args.name:
data = utils.get_by_name_substring(data, parsed_args.name)
if parsed_args.long:
columns = ('name', 'id', 'plugin_name', 'hadoop_version',
'status', 'description', 'default_image_id')
column_headers = utils.prepare_column_headers(
columns, {'hadoop_version': 'plugin_version',
'default_image_id': 'image'})
else:
columns = ('name', 'id', 'plugin_name', 'hadoop_version', 'status')
column_headers = utils.prepare_column_headers(
columns, {'hadoop_version': 'plugin_version',
'default_image_id': 'image'})
return (
column_headers,
(osc_utils.get_item_properties(
s,
columns
) for s in data)
)
class ShowCluster(command.ShowOne):
"""Display cluster details"""
log = logging.getLogger(__name__ + ".ShowCluster")
def get_parser(self, prog_name):
parser = super(ShowCluster, self).get_parser(prog_name)
parser.add_argument(
"cluster",
metavar="<cluster>",
help="Name or id of the cluster to display",
)
parser.add_argument(
'--verification',
action='store_true',
default=False,
help='List additional fields for verifications',
)
parser.add_argument(
'--show-progress',
action='store_true',
default=False,
            help='Show brief details of the cluster event logs'
)
parser.add_argument(
'--full-dump-events',
action='store_true',
default=False,
            help='Make a full dump of the cluster event logs'
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
kwargs = {}
if parsed_args.show_progress or parsed_args.full_dump_events:
kwargs['show_progress'] = True
data = utils.get_resource(
client.clusters, parsed_args.cluster, **kwargs).to_dict()
provision_steps = data.get('provision_progress', [])
provision_steps = utils.created_at_sorted(provision_steps)
if parsed_args.full_dump_events:
file_name = utils.random_name('event-logs')
# making full dump
with open(file_name, 'w') as file:
jsonutils.dump(provision_steps, file, indent=4)
sys.stdout.write('Event log dump saved to file: %s\n' % file_name)
_format_cluster_output(data)
fields = []
if parsed_args.verification:
ver_data, fields = _prepare_health_checks(data)
data.update(ver_data)
fields.extend(CLUSTER_FIELDS)
data = self.dict2columns(utils.prepare_data(data, fields))
if parsed_args.show_progress:
output_steps = []
for step in provision_steps:
st_name, st_type = step['step_name'], step['step_type']
description = "%s: %s" % (st_type, st_name)
if step['successful'] is None:
progress = "Step in progress"
elif step['successful']:
progress = "Step completed successfully"
else:
progress = 'Step has failed events'
output_steps += [(description, progress)]
data = utils.extend_columns(data, output_steps)
return data
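The progress reporting above hinges on a tri-state `successful` flag on each provision step (`None` while the step is running, `True`/`False` once it finishes). A minimal standalone sketch of that mapping, using hypothetical step data shaped like what the API returns:

```python
def step_progress(successful):
    """Map a provision step's tri-state 'successful' flag to a label."""
    if successful is None:
        return "Step in progress"
    if successful:
        return "Step completed successfully"
    return "Step has failed events"

# Hypothetical provision steps for illustration only.
steps = [
    {"step_name": "Wait for instances", "step_type": "Engine: create cluster",
     "successful": True},
    {"step_name": "Configure instances", "step_type": "Engine: create cluster",
     "successful": None},
]
# Same (description, progress) tuples the command appends to its output.
rows = [("%s: %s" % (s["step_type"], s["step_name"]),
         step_progress(s["successful"])) for s in steps]
```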
class DeleteCluster(command.Command):
"""Deletes cluster"""
log = logging.getLogger(__name__ + ".DeleteCluster")
def get_parser(self, prog_name):
parser = super(DeleteCluster, self).get_parser(prog_name)
parser.add_argument(
"cluster",
metavar="<cluster>",
nargs="+",
help="Name(s) or id(s) of the cluster(s) to delete",
)
parser.add_argument(
'--wait',
action='store_true',
default=False,
help='Wait for the cluster(s) delete to complete',
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
clusters = []
for cluster in parsed_args.cluster:
cluster_id = utils.get_resource_id(
client.clusters, cluster)
client.clusters.delete(cluster_id)
clusters.append((cluster_id, cluster))
sys.stdout.write(
'Cluster "{cluster}" deletion has been started.\n'.format(
cluster=cluster))
if parsed_args.wait:
for cluster_id, cluster_arg in clusters:
if not utils.wait_for_delete(client.clusters, cluster_id):
self.log.error(
'Error occurred during cluster deleting: %s' %
cluster_id)
else:
sys.stdout.write(
'Cluster "{cluster}" has been removed '
'successfully.\n'.format(cluster=cluster_arg))
class UpdateCluster(command.ShowOne):
"""Updates cluster"""
log = logging.getLogger(__name__ + ".UpdateCluster")
def get_parser(self, prog_name):
parser = super(UpdateCluster, self).get_parser(prog_name)
parser.add_argument(
'cluster',
metavar="<cluster>",
help="Name or ID of the cluster",
)
parser.add_argument(
'--name',
metavar="<name>",
help="New name of the cluster",
)
parser.add_argument(
'--description',
metavar="<description>",
help='Description of the cluster'
)
parser.add_argument(
'--shares',
metavar="<filename>",
help='JSON representation of the manila shares'
)
public = parser.add_mutually_exclusive_group()
public.add_argument(
'--public',
action='store_true',
help='Make the cluster public '
'(Visible from other tenants)',
dest='is_public'
)
public.add_argument(
'--private',
action='store_false',
help='Make the cluster private '
'(Visible only from this tenant)',
dest='is_public'
)
protected = parser.add_mutually_exclusive_group()
protected.add_argument(
'--protected',
action='store_true',
help='Make the cluster protected',
dest='is_protected'
)
protected.add_argument(
'--unprotected',
action='store_false',
help='Make the cluster unprotected',
dest='is_protected'
)
parser.set_defaults(is_public=None, is_protected=None)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
cluster_id = utils.get_resource_id(
client.clusters, parsed_args.cluster)
shares = None
if parsed_args.shares:
blob = osc_utils.read_blob_file_contents(parsed_args.shares)
try:
shares = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'shares from file %s: %s' % (parsed_args.shares, e))
update_dict = utils.create_dict_from_kwargs(
name=parsed_args.name,
description=parsed_args.description,
is_public=parsed_args.is_public,
is_protected=parsed_args.is_protected,
shares=shares
)
data = client.clusters.update(cluster_id, **update_dict).cluster
_format_cluster_output(data)
data = utils.prepare_data(data, CLUSTER_FIELDS)
return self.dict2columns(data)
class ScaleCluster(command.ShowOne):
"""Scales cluster"""
log = logging.getLogger(__name__ + ".ScaleCluster")
def get_parser(self, prog_name):
parser = super(ScaleCluster, self).get_parser(prog_name)
parser.add_argument(
'cluster',
metavar="<cluster>",
help="Name or ID of the cluster",
)
parser.add_argument(
'--instances',
nargs='+',
metavar='<node-group-template:instances_count>',
            help='Node group templates and the number of instances each '
                 'should be scaled to [REQUIRED if JSON is not provided]'
)
parser.add_argument(
'--json',
metavar='<filename>',
help='JSON representation of the cluster scale object. Other '
'arguments (except for --wait) will not be taken into '
'account if this one is provided'
)
parser.add_argument(
'--wait',
action='store_true',
default=False,
help='Wait for the cluster scale to complete',
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
cluster = utils.get_resource(
client.clusters, parsed_args.cluster)
if parsed_args.json:
blob = osc_utils.read_blob_file_contents(parsed_args.json)
try:
template = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'template from file %s: %s' % (parsed_args.json, e))
data = client.clusters.scale(cluster.id, template).to_dict()
        else:
            if not parsed_args.instances:
                raise exceptions.CommandError(
                    '--instances argument should be specified or json '
                    'template should be provided with --json argument')
            scale_object = {
                "add_node_groups": [],
                "resize_node_groups": []
            }
            scale_node_groups = dict(
                map(lambda x: x.split(':', 1), parsed_args.instances))
cluster_ng_map = {
ng['node_group_template_id']: ng['name'] for ng
in cluster.node_groups}
for name, count in scale_node_groups.items():
ngt = utils.get_resource(client.node_group_templates, name)
if ngt.id in cluster_ng_map:
scale_object["resize_node_groups"].append({
"name": cluster_ng_map[ngt.id],
"count": int(count)
})
else:
scale_object["add_node_groups"].append({
"node_group_template_id": ngt.id,
"name": ngt.name,
"count": int(count)
})
if not scale_object['add_node_groups']:
del scale_object['add_node_groups']
if not scale_object['resize_node_groups']:
del scale_object['resize_node_groups']
data = client.clusters.scale(cluster.id, scale_object).cluster
sys.stdout.write(
'Cluster "{cluster}" scaling has been started.\n'.format(
cluster=parsed_args.cluster))
if parsed_args.wait:
if not osc_utils.wait_for_status(
client.clusters.get, data['id']):
self.log.error(
'Error occurred during cluster scaling: %s' %
cluster.id)
data = client.clusters.get(cluster.id).to_dict()
_format_cluster_output(data)
data = utils.prepare_data(data, CLUSTER_FIELDS)
return self.dict2columns(data)
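The request-body construction above can be isolated: each `--instances` entry is split on its first colon, and a node group template either resizes an existing group in the cluster or adds a new one. A sketch of that logic with hypothetical template names and IDs (the real command resolves names via the API):

```python
def build_scale_object(instances, cluster_ng_map, ngt_ids):
    """instances: ['<ngt-name>:<count>', ...];
    cluster_ng_map: {template_id: node_group_name} for groups already in
    the cluster; ngt_ids: {template_name: template_id} lookup (stands in
    for the API resolution done by utils.get_resource)."""
    scale = {"add_node_groups": [], "resize_node_groups": []}
    for name, count in (i.split(':', 1) for i in instances):
        ngt_id = ngt_ids[name]
        if ngt_id in cluster_ng_map:
            # Template already used by the cluster: resize that group.
            scale["resize_node_groups"].append(
                {"name": cluster_ng_map[ngt_id], "count": int(count)})
        else:
            # New template: add a node group.
            scale["add_node_groups"].append(
                {"node_group_template_id": ngt_id, "name": name,
                 "count": int(count)})
    # Drop empty sections, as the command does before calling scale().
    return {k: v for k, v in scale.items() if v}

body = build_scale_object(
    ['workers:4', 'new-workers:2'],
    cluster_ng_map={'id-1': 'workers'},
    ngt_ids={'workers': 'id-1', 'new-workers': 'id-2'})
```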
class VerificationUpdateCluster(command.ShowOne):
"""Updates cluster verifications"""
log = logging.getLogger(__name__ + ".VerificationUpdateCluster")
def get_parser(self, prog_name):
parser = super(VerificationUpdateCluster, self).get_parser(prog_name)
parser.add_argument(
'cluster',
metavar="<cluster>",
help="Name or ID of the cluster",
)
status = parser.add_mutually_exclusive_group(required=True)
status.add_argument(
'--start',
action='store_const',
const='START',
help='Start health verification for the cluster',
dest='status'
)
status.add_argument(
'--show',
help='Show health of the cluster',
action='store_true'
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
if parsed_args.show:
data = utils.get_resource(
client.clusters, parsed_args.cluster).to_dict()
ver_data, ver_fields = _prepare_health_checks(data)
data = utils.prepare_data(ver_data, ver_fields)
return self.dict2columns(data)
else:
cluster_id = utils.get_resource_id(
client.clusters, parsed_args.cluster)
client.clusters.verification_update(
cluster_id, parsed_args.status)
if parsed_args.status == 'START':
print_status = 'started'
            sys.stdout.write(
                'Cluster "{cluster}" health verification has been '
                '{status}.\n'.format(cluster=parsed_args.cluster,
                                     status=print_status))
return {}, {}

View File

@@ -1,303 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from osc_lib.command import command
from osc_lib import utils as osc_utils
from oslo_log import log as logging
from saharaclient.osc.v1 import utils
DATA_SOURCE_FIELDS = ['name', 'id', 'type', 'url', 'description', 'is_public',
'is_protected']
DATA_SOURCE_CHOICES = ["swift", "hdfs", "maprfs", "manila"]
class CreateDataSource(command.ShowOne):
"""Creates data source"""
log = logging.getLogger(__name__ + ".CreateDataSource")
def get_parser(self, prog_name):
parser = super(CreateDataSource, self).get_parser(prog_name)
parser.add_argument(
'name',
metavar="<name>",
help="Name of the data source",
)
parser.add_argument(
'--type',
metavar="<type>",
choices=DATA_SOURCE_CHOICES,
help="Type of the data source (%s) "
"[REQUIRED]" % ', '.join(DATA_SOURCE_CHOICES),
required=True
)
parser.add_argument(
'--url',
metavar="<url>",
            help="URL for the data source [REQUIRED]",
required=True
)
parser.add_argument(
'--username',
metavar="<username>",
help="Username for accessing the data source url"
)
parser.add_argument(
'--password',
metavar="<password>",
help="Password for accessing the data source url"
)
parser.add_argument(
'--description',
metavar="<description>",
help="Description of the data source"
)
parser.add_argument(
'--public',
action='store_true',
default=False,
help='Make the data source public',
)
parser.add_argument(
'--protected',
action='store_true',
default=False,
help='Make the data source protected',
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
description = parsed_args.description or ''
data = client.data_sources.create(
name=parsed_args.name, description=description,
data_source_type=parsed_args.type, url=parsed_args.url,
credential_user=parsed_args.username,
credential_pass=parsed_args.password,
is_public=parsed_args.public,
is_protected=parsed_args.protected).to_dict()
data = utils.prepare_data(data, DATA_SOURCE_FIELDS)
return self.dict2columns(data)
class ListDataSources(command.Lister):
"""Lists data sources"""
log = logging.getLogger(__name__ + ".ListDataSources")
def get_parser(self, prog_name):
parser = super(ListDataSources, self).get_parser(prog_name)
parser.add_argument(
'--long',
action='store_true',
default=False,
help='List additional fields in output',
)
parser.add_argument(
'--type',
metavar="<type>",
choices=DATA_SOURCE_CHOICES,
help="List data sources of specific type "
"(%s)" % ', '.join(DATA_SOURCE_CHOICES)
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
search_opts = {'type': parsed_args.type} if parsed_args.type else {}
data = client.data_sources.list(search_opts=search_opts)
if parsed_args.long:
columns = DATA_SOURCE_FIELDS
column_headers = utils.prepare_column_headers(columns)
else:
columns = ('name', 'id', 'type')
column_headers = utils.prepare_column_headers(columns)
return (
column_headers,
(osc_utils.get_item_properties(
s,
columns
) for s in data)
)
class ShowDataSource(command.ShowOne):
"""Display data source details"""
log = logging.getLogger(__name__ + ".ShowDataSource")
def get_parser(self, prog_name):
parser = super(ShowDataSource, self).get_parser(prog_name)
parser.add_argument(
"data_source",
metavar="<data-source>",
help="Name or id of the data source to display",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
data = utils.get_resource(
client.data_sources, parsed_args.data_source).to_dict()
data = utils.prepare_data(data, DATA_SOURCE_FIELDS)
return self.dict2columns(data)
class DeleteDataSource(command.Command):
"""Delete data source"""
log = logging.getLogger(__name__ + ".DeleteDataSource")
def get_parser(self, prog_name):
parser = super(DeleteDataSource, self).get_parser(prog_name)
parser.add_argument(
"data_source",
metavar="<data-source>",
nargs="+",
help="Name(s) or id(s) of the data source(s) to delete",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
for ds in parsed_args.data_source:
data_source_id = utils.get_resource_id(
client.data_sources, ds)
client.data_sources.delete(data_source_id)
sys.stdout.write(
'Data Source "{ds}" has been removed '
'successfully.\n'.format(ds=ds))
class UpdateDataSource(command.ShowOne):
"""Update data source"""
log = logging.getLogger(__name__ + ".UpdateDataSource")
def get_parser(self, prog_name):
parser = super(UpdateDataSource, self).get_parser(prog_name)
parser.add_argument(
'data_source',
metavar="<data-source>",
help="Name or id of the data source",
)
parser.add_argument(
'--name',
metavar="<name>",
help="New name of the data source",
)
parser.add_argument(
'--type',
metavar="<type>",
choices=DATA_SOURCE_CHOICES,
help="Type of the data source "
"(%s)" % ', '.join(DATA_SOURCE_CHOICES)
)
parser.add_argument(
'--url',
metavar="<url>",
            help="URL for the data source"
)
parser.add_argument(
'--username',
metavar="<username>",
help="Username for accessing the data source url"
)
parser.add_argument(
'--password',
metavar="<password>",
help="Password for accessing the data source url"
)
parser.add_argument(
'--description',
metavar="<description>",
help="Description of the data source"
)
public = parser.add_mutually_exclusive_group()
public.add_argument(
'--public',
action='store_true',
dest='is_public',
help='Make the data source public (Visible from other tenants)',
)
public.add_argument(
'--private',
action='store_false',
dest='is_public',
help='Make the data source private (Visible only from this '
'tenant)',
)
protected = parser.add_mutually_exclusive_group()
protected.add_argument(
'--protected',
action='store_true',
dest='is_protected',
help='Make the data source protected',
)
protected.add_argument(
'--unprotected',
action='store_false',
dest='is_protected',
help='Make the data source unprotected',
)
parser.set_defaults(is_public=None, is_protected=None)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
credentials = {}
if parsed_args.username:
credentials['user'] = parsed_args.username
if parsed_args.password:
credentials['password'] = parsed_args.password
if not credentials:
credentials = None
update_fields = utils.create_dict_from_kwargs(
name=parsed_args.name,
description=parsed_args.description,
type=parsed_args.type, url=parsed_args.url,
credentials=credentials,
is_public=parsed_args.is_public,
is_protected=parsed_args.is_protected)
ds_id = utils.get_resource_id(
client.data_sources, parsed_args.data_source)
data = client.data_sources.update(ds_id, update_fields).data_source
data = utils.prepare_data(data, DATA_SOURCE_FIELDS)
return self.dict2columns(data)
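The update path above only sends fields the user actually supplied, relying on `utils.create_dict_from_kwargs` to drop unset arguments. A sketch of that helper's apparent behavior (an assumption inferred from how it is used here, since `parser.set_defaults(is_public=None, is_protected=None)` makes `None` mean "not specified"):

```python
def create_dict_from_kwargs(**kwargs):
    # Keep only arguments the user actually supplied (non-None), so the
    # API update request does not overwrite fields with nulls.
    return {k: v for k, v in kwargs.items() if v is not None}

# Hypothetical update: only --name and --public were passed.
update_fields = create_dict_from_kwargs(
    name='new-name', description=None, is_public=True, is_protected=None)
```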

View File

@ -1,309 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from osc_lib.command import command
from osc_lib import utils as osc_utils
from oslo_log import log as logging
from saharaclient.osc.v1 import utils
IMAGE_FIELDS = ['name', 'id', 'username', 'tags', 'status', 'description']
class ListImages(command.Lister):
"""Lists registered images"""
log = logging.getLogger(__name__ + ".ListImages")
def get_parser(self, prog_name):
parser = super(ListImages, self).get_parser(prog_name)
parser.add_argument(
'--long',
action='store_true',
default=False,
help='List additional fields in output',
)
parser.add_argument(
'--name',
metavar="<name-regex>",
help="Regular expression to match image name"
)
parser.add_argument(
'--tags',
metavar="<tag>",
nargs="+",
help="List images with specific tag(s)"
)
parser.add_argument(
'--username',
metavar="<username>",
help="List images with specific username"
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
search_opts = {'tags': parsed_args.tags} if parsed_args.tags else {}
data = client.images.list(search_opts=search_opts)
if parsed_args.name:
data = utils.get_by_name_substring(data, parsed_args.name)
if parsed_args.username:
data = [i for i in data if parsed_args.username in i.username]
if parsed_args.long:
columns = IMAGE_FIELDS
column_headers = [c.capitalize() for c in columns]
else:
columns = ('name', 'id', 'username', 'tags')
column_headers = [c.capitalize() for c in columns]
return (
column_headers,
(osc_utils.get_item_properties(
s,
columns,
formatters={
'tags': osc_utils.format_list
},
) for s in data)
)
class ShowImage(command.ShowOne):
"""Display image details"""
log = logging.getLogger(__name__ + ".ShowImage")
def get_parser(self, prog_name):
parser = super(ShowImage, self).get_parser(prog_name)
parser.add_argument(
"image",
metavar="<image>",
help="Name or id of the image to display",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
data = utils.get_resource(
client.images, parsed_args.image).to_dict()
data['tags'] = osc_utils.format_list(data['tags'])
data = utils.prepare_data(data, IMAGE_FIELDS)
return self.dict2columns(data)
class RegisterImage(command.ShowOne):
"""Register an image"""
log = logging.getLogger(__name__ + ".RegisterImage")
def get_parser(self, prog_name):
parser = super(RegisterImage, self).get_parser(prog_name)
parser.add_argument(
"image",
metavar="<image>",
help="Name or ID of the image to register",
)
parser.add_argument(
"--username",
metavar="<username>",
help="Username of privileged user in the image [REQUIRED]",
required=True
)
parser.add_argument(
"--description",
metavar="<description>",
help="Description of the image. If not provided, description of "
"the image will be reset to empty",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
image_client = self.app.client_manager.image
image_id = osc_utils.find_resource(
image_client.images, parsed_args.image).id
data = client.images.update_image(
image_id, user_name=parsed_args.username,
desc=parsed_args.description).image
data['tags'] = osc_utils.format_list(data['tags'])
data = utils.prepare_data(data, IMAGE_FIELDS)
return self.dict2columns(data)
class UnregisterImage(command.Command):
"""Unregister image(s)"""
    log = logging.getLogger(__name__ + ".UnregisterImage")
def get_parser(self, prog_name):
parser = super(UnregisterImage, self).get_parser(prog_name)
parser.add_argument(
"image",
metavar="<image>",
nargs="+",
help="Name(s) or id(s) of the image(s) to unregister",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
for image in parsed_args.image:
image_id = utils.get_resource_id(client.images, image)
client.images.unregister_image(image_id)
sys.stdout.write(
'Image "{image}" has been unregistered '
'successfully.\n'.format(image=image))
class SetImageTags(command.ShowOne):
"""Set image tags (Replace current image tags with provided ones)"""
    log = logging.getLogger(__name__ + ".SetImageTags")
def get_parser(self, prog_name):
parser = super(SetImageTags, self).get_parser(prog_name)
parser.add_argument(
"image",
metavar="<image>",
help="Name or id of the image",
)
parser.add_argument(
'--tags',
metavar="<tag>",
nargs="+",
required=True,
help="Tag(s) to set [REQUIRED]"
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
image_id = utils.get_resource_id(client.images, parsed_args.image)
data = client.images.update_tags(image_id, parsed_args.tags).to_dict()
data['tags'] = osc_utils.format_list(data['tags'])
data = utils.prepare_data(data, IMAGE_FIELDS)
return self.dict2columns(data)
class AddImageTags(command.ShowOne):
"""Add image tags"""
log = logging.getLogger(__name__ + ".AddImageTags")
def get_parser(self, prog_name):
parser = super(AddImageTags, self).get_parser(prog_name)
parser.add_argument(
"image",
metavar="<image>",
help="Name or id of the image",
)
parser.add_argument(
'--tags',
metavar="<tag>",
nargs="+",
required=True,
help="Tag(s) to add [REQUIRED]"
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
image = utils.get_resource(client.images, parsed_args.image)
parsed_args.tags.extend(image.tags)
data = client.images.update_tags(
image.id, list(set(parsed_args.tags))).to_dict()
data['tags'] = osc_utils.format_list(data['tags'])
data = utils.prepare_data(data, IMAGE_FIELDS)
return self.dict2columns(data)
class RemoveImageTags(command.ShowOne):
"""Remove image tags"""
log = logging.getLogger(__name__ + ".RemoveImageTags")
def get_parser(self, prog_name):
parser = super(RemoveImageTags, self).get_parser(prog_name)
parser.add_argument(
"image",
metavar="<image>",
help="Name or id of the image",
)
group = parser.add_mutually_exclusive_group()
group.add_argument(
'--tags',
metavar="<tag>",
nargs="+",
help="Tag(s) to remove"
        )
group.add_argument(
'--all',
action='store_true',
default=False,
help='Remove all tags from image',
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
image = utils.get_resource(client.images, parsed_args.image)
if parsed_args.all:
data = client.images.update_tags(image.id, []).to_dict()
else:
parsed_args.tags = parsed_args.tags or []
new_tags = list(set(image.tags) - set(parsed_args.tags))
data = client.images.update_tags(image.id, new_tags).to_dict()
data['tags'] = osc_utils.format_list(data['tags'])
data = utils.prepare_data(data, IMAGE_FIELDS)
return self.dict2columns(data)
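The three tag commands above differ only in the set operation applied before `update_tags` is called: set replaces outright, add takes a deduplicated union with the existing tags, and remove takes a difference (or clears everything with `--all`). A standalone sketch with hypothetical tag values:

```python
def add_tags(current, new):
    # Union, deduplicated; like the command, ordering is not guaranteed
    # because the merge goes through a set.
    return list(set(current) | set(new))

def remove_tags(current, to_remove=None, remove_all=False):
    # --all wins and clears every tag; otherwise drop only the given ones.
    if remove_all:
        return []
    return list(set(current) - set(to_remove or []))
```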

View File

@@ -1,429 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from os import path
import sys
from osc_lib.command import command
from osc_lib import exceptions
from osc_lib import utils as osc_utils
from oslo_log import log as logging
from oslo_serialization import jsonutils
from saharaclient.api import base
from saharaclient.osc.v1 import utils
JOB_BINARY_FIELDS = ['name', 'id', 'url', 'description', 'is_public',
'is_protected']
class CreateJobBinary(command.ShowOne):
"""Creates job binary"""
log = logging.getLogger(__name__ + ".CreateJobBinary")
def get_parser(self, prog_name):
parser = super(CreateJobBinary, self).get_parser(prog_name)
parser.add_argument(
'--name',
metavar="<name>",
help="Name of the job binary [REQUIRED if JSON is not provided]",
)
creation_type = parser.add_mutually_exclusive_group()
creation_type.add_argument(
'--data',
metavar='<file>',
help='File that will be stored in the internal DB [REQUIRED if '
'JSON and URL are not provided]'
)
creation_type.add_argument(
'--url',
metavar='<url>',
help='URL for the job binary [REQUIRED if JSON and file are '
'not provided]'
)
parser.add_argument(
'--description',
metavar="<description>",
help="Description of the job binary"
)
parser.add_argument(
'--username',
metavar='<username>',
help='Username for accessing the job binary URL',
)
password = parser.add_mutually_exclusive_group()
password.add_argument(
'--password',
metavar='<password>',
help='Password for accessing the job binary URL',
)
password.add_argument(
'--password-prompt',
dest="password_prompt",
action="store_true",
help='Prompt interactively for password',
)
parser.add_argument(
'--public',
action='store_true',
default=False,
help='Make the job binary public',
)
parser.add_argument(
'--protected',
action='store_true',
default=False,
help='Make the job binary protected',
)
parser.add_argument(
'--json',
metavar='<filename>',
help='JSON representation of the job binary. Other '
'arguments will not be taken into account if this one is '
'provided'
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
if parsed_args.json:
blob = osc_utils.read_blob_file_contents(parsed_args.json)
try:
template = jsonutils.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'template from file %s: %s' % (parsed_args.json, e))
data = client.job_binaries.create(**template).to_dict()
else:
            if parsed_args.data:
                with open(parsed_args.data) as f:
                    data = f.read()
jbi_id = client.job_binary_internals.create(
parsed_args.name, data).id
parsed_args.url = 'internal-db://' + jbi_id
if parsed_args.password_prompt:
parsed_args.password = osc_utils.get_password(
self.app.stdin, confirm=False)
if parsed_args.password and not parsed_args.username:
raise exceptions.CommandError(
'Username via --username should be provided with password')
if parsed_args.username and not parsed_args.password:
raise exceptions.CommandError(
'Password should be provided via --password or entered '
'interactively with --password-prompt')
if parsed_args.password and parsed_args.username:
extra = {
'user': parsed_args.username,
'password': parsed_args.password
}
else:
extra = None
data = client.job_binaries.create(
name=parsed_args.name, url=parsed_args.url,
description=parsed_args.description, extra=extra,
is_public=parsed_args.public,
is_protected=parsed_args.protected).to_dict()
data = utils.prepare_data(data, JOB_BINARY_FIELDS)
return self.dict2columns(data)
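The credential handling above enforces that a username and password are only ever sent as a pair. That validation can be sketched in isolation (raising `ValueError` here where the command raises `exceptions.CommandError`; the sample values are hypothetical):

```python
def build_extra(username, password):
    """Validate and package job-binary credentials as the 'extra' dict."""
    if password and not username:
        raise ValueError(
            'Username via --username should be provided with password')
    if username and not password:
        raise ValueError(
            'Password should be provided via --password or entered '
            'interactively with --password-prompt')
    if username and password:
        return {'user': username, 'password': password}
    return None  # no credentials supplied at all
```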
class ListJobBinaries(command.Lister):
"""Lists job binaries"""
log = logging.getLogger(__name__ + ".ListJobBinaries")
def get_parser(self, prog_name):
parser = super(ListJobBinaries, self).get_parser(prog_name)
parser.add_argument(
'--long',
action='store_true',
default=False,
help='List additional fields in output',
)
parser.add_argument(
'--name',
metavar="<name-substring>",
help="List job binaries with specific substring in the "
"name"
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
data = client.job_binaries.list()
if parsed_args.name:
data = utils.get_by_name_substring(data, parsed_args.name)
if parsed_args.long:
columns = ('name', 'id', 'url', 'description', 'is_public',
'is_protected')
column_headers = utils.prepare_column_headers(columns)
else:
columns = ('name', 'id', 'url')
column_headers = utils.prepare_column_headers(columns)
return (
column_headers,
(osc_utils.get_item_properties(
s,
columns
) for s in data)
)
class ShowJobBinary(command.ShowOne):
"""Display job binary details"""
log = logging.getLogger(__name__ + ".ShowJobBinary")
def get_parser(self, prog_name):
parser = super(ShowJobBinary, self).get_parser(prog_name)
parser.add_argument(
"job_binary",
metavar="<job-binary>",
help="Name or ID of the job binary to display",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
data = utils.get_resource(
client.job_binaries, parsed_args.job_binary).to_dict()
data = utils.prepare_data(data, JOB_BINARY_FIELDS)
return self.dict2columns(data)
class DeleteJobBinary(command.Command):
"""Deletes job binary"""
log = logging.getLogger(__name__ + ".DeleteJobBinary")
def get_parser(self, prog_name):
parser = super(DeleteJobBinary, self).get_parser(prog_name)
parser.add_argument(
"job_binary",
metavar="<job-binary>",
nargs="+",
help="Name(s) or id(s) of the job binary(ies) to delete",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
for jb in parsed_args.job_binary:
jb = utils.get_resource(client.job_binaries, jb)
if jb.url.startswith("internal-db"):
jbi_id = jb.url.replace('internal-db://', '')
try:
client.job_binary_internals.delete(jbi_id)
except base.APIException as ex:
# check if job binary internal was already deleted for
# some reasons
if not ex.error_code == '404':
raise
client.job_binaries.delete(jb.id)
sys.stdout.write(
'Job binary "{jb}" has been removed '
'successfully.\n'.format(jb=jb.name))
class UpdateJobBinary(command.ShowOne):
"""Updates job binary"""
log = logging.getLogger(__name__ + ".UpdateJobBinary")
def get_parser(self, prog_name):
parser = super(UpdateJobBinary, self).get_parser(prog_name)
parser.add_argument(
'job_binary',
metavar="<job-binary>",
help="Name or ID of the job binary",
)
parser.add_argument(
'--name',
metavar="<name>",
help="New name of the job binary",
)
parser.add_argument(
'--url',
metavar='<url>',
help='URL for the job binary [Internal DB URL can not be updated]'
)
parser.add_argument(
'--description',
metavar="<description>",
help='Description of the job binary'
)
parser.add_argument(
'--username',
metavar='<username>',
help='Username for accessing the job binary URL',
)
password = parser.add_mutually_exclusive_group()
password.add_argument(
'--password',
metavar='<password>',
help='Password for accessing the job binary URL',
)
password.add_argument(
'--password-prompt',
dest="password_prompt",
action="store_true",
help='Prompt interactively for password',
)
public = parser.add_mutually_exclusive_group()
public.add_argument(
'--public',
action='store_true',
help='Make the job binary public (Visible from other tenants)',
dest='is_public'
)
public.add_argument(
'--private',
action='store_false',
help='Make the job binary private (Visible only from this tenant)',
dest='is_public'
)
protected = parser.add_mutually_exclusive_group()
protected.add_argument(
'--protected',
action='store_true',
help='Make the job binary protected',
dest='is_protected'
)
protected.add_argument(
'--unprotected',
action='store_false',
help='Make the job binary unprotected',
dest='is_protected'
)
parser.add_argument(
'--json',
metavar='<filename>',
help='JSON representation of the update object. Other '
'arguments will not be taken into account if this one is '
'provided'
)
parser.set_defaults(is_public=None, is_protected=None)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
jb_id = utils.get_resource_id(
client.job_binaries, parsed_args.job_binary)
if parsed_args.json:
blob = osc_utils.read_blob_file_contents(parsed_args.json)
try:
template = jsonutils.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'template from file %s: %s' % (parsed_args.json, e))
data = client.job_binaries.update(jb_id, template).to_dict()
else:
if parsed_args.password_prompt:
parsed_args.password = osc_utils.get_password(
self.app.stdin, confirm=False)
extra = {}
if parsed_args.password:
extra['password'] = parsed_args.password
if parsed_args.username:
extra['user'] = parsed_args.username
if not extra:
extra = None
update_fields = utils.create_dict_from_kwargs(
name=parsed_args.name, url=parsed_args.url,
description=parsed_args.description,
extra=extra, is_public=parsed_args.is_public,
is_protected=parsed_args.is_protected
)
data = client.job_binaries.update(
jb_id, update_fields).to_dict()
data = utils.prepare_data(data, JOB_BINARY_FIELDS)
return self.dict2columns(data)
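The update commands above send only the fields the user actually set. A plausible standalone sketch of the `utils.create_dict_from_kwargs` helper they call (the real helper lives in `saharaclient.osc.v1.utils`; this None-filtering behaviour is an assumption inferred from how it is used):

```python
def create_dict_from_kwargs(**kwargs):
    # Keep only arguments that were actually supplied (non-None), so
    # CLI flags the user did not pass never end up overwriting
    # server-side fields in the update body.
    return {key: value for key, value in kwargs.items() if value is not None}

update_fields = create_dict_from_kwargs(name='jb2', url=None, is_public=True)
# update_fields == {'name': 'jb2', 'is_public': True}
```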
class DownloadJobBinary(command.Command):
"""Downloads job binary"""
log = logging.getLogger(__name__ + ".DownloadJobBinary")
def get_parser(self, prog_name):
parser = super(DownloadJobBinary, self).get_parser(prog_name)
parser.add_argument(
"job_binary",
metavar="<job-binary>",
help="Name or ID of the job binary to download",
)
parser.add_argument(
'--file',
metavar="<file>",
help='Destination file (defaults to job binary name)',
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
if not parsed_args.file:
parsed_args.file = parsed_args.job_binary
jb_id = utils.get_resource_id(
client.job_binaries, parsed_args.job_binary)
data = client.job_binaries.get_file(jb_id)
if path.exists(parsed_args.file):
self.log.error('File "%s" already exists. Choose another one with '
'--file argument.' % parsed_args.file)
else:
with open(parsed_args.file, 'w') as f:
f.write(data)
sys.stdout.write(
'Job binary "{jb}" has been downloaded '
'successfully.\n'.format(jb=parsed_args.job_binary))
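The internal-db handling in `DeleteJobBinary.take_action` above comes down to a scheme check and prefix strip; a minimal sketch of that logic in isolation:

```python
def extract_internal_db_id(url):
    # Mirrors the check in DeleteJobBinary.take_action: job binaries
    # stored in Sahara's internal DB carry an "internal-db://<id>" URL,
    # and that <id> names the job binary internal to delete. Any other
    # scheme (e.g. swift://) has no internal record to clean up.
    prefix = 'internal-db://'
    if url.startswith(prefix):
        return url[len(prefix):]
    return None

extract_internal_db_id('internal-db://abc-123')       # 'abc-123'
extract_internal_db_id('swift://container/object')    # None
```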


@ -1,327 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from osc_lib.command import command
from osc_lib import exceptions
from osc_lib import utils as osc_utils
from oslo_log import log as logging
from oslo_serialization import jsonutils
from saharaclient.osc.v1 import utils
JOB_TEMPLATE_FIELDS = ['name', 'id', 'type', 'mains', 'libs', 'description',
'is_public', 'is_protected']
JOB_TYPES_CHOICES = ['Hive', 'Java', 'MapReduce', 'Storm', 'Storm.Pyleus',
'Pig', 'Shell', 'MapReduce.Streaming', 'Spark']
def _format_job_template_output(data):
data['mains'] = osc_utils.format_list(
['%s:%s' % (m['name'], m['id']) for m in data['mains']])
data['libs'] = osc_utils.format_list(
['%s:%s' % (l['name'], l['id']) for l in data['libs']])
class CreateJobTemplate(command.ShowOne):
"""Creates job template"""
log = logging.getLogger(__name__ + ".CreateJobTemplate")
def get_parser(self, prog_name):
parser = super(CreateJobTemplate, self).get_parser(prog_name)
parser.add_argument(
'--name',
metavar="<name>",
help="Name of the job template [REQUIRED if JSON is not provided]",
)
parser.add_argument(
'--type',
metavar="<type>",
choices=JOB_TYPES_CHOICES,
help="Type of the job (%s) "
"[REQUIRED if JSON is not provided]" % ', '.join(
JOB_TYPES_CHOICES)
)
parser.add_argument(
'--mains',
metavar="<main>",
nargs='+',
help="Name(s) or ID(s) for job's main job binary(ies)",
)
parser.add_argument(
'--libs',
metavar="<lib>",
nargs='+',
help="Name(s) or ID(s) for job's lib job binary(ies)",
)
parser.add_argument(
'--description',
metavar="<description>",
help="Description of the job template"
)
parser.add_argument(
'--public',
action='store_true',
default=False,
help='Make the job template public',
)
parser.add_argument(
'--protected',
action='store_true',
default=False,
help='Make the job template protected',
)
parser.add_argument(
'--interface',
metavar='<filename>',
help='JSON representation of the interface'
)
parser.add_argument(
'--json',
metavar='<filename>',
help='JSON representation of the job template'
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
if parsed_args.json:
blob = osc_utils.read_blob_file_contents(parsed_args.json)
try:
template = jsonutils.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'template from file %s: %s' % (parsed_args.json, e))
data = client.jobs.create(**template).to_dict()
else:
if parsed_args.interface:
blob = osc_utils.read_blob_file_contents(parsed_args.interface)
try:
parsed_args.interface = jsonutils.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'interface from file %s: %s' % (parsed_args.interface, e))
mains_ids = [utils.get_resource_id(client.job_binaries, m) for m
in parsed_args.mains] if parsed_args.mains else None
libs_ids = [utils.get_resource_id(client.job_binaries, m) for m
in parsed_args.libs] if parsed_args.libs else None
data = client.jobs.create(
name=parsed_args.name, type=parsed_args.type, mains=mains_ids,
libs=libs_ids, description=parsed_args.description,
interface=parsed_args.interface, is_public=parsed_args.public,
is_protected=parsed_args.protected).to_dict()
_format_job_template_output(data)
data = utils.prepare_data(data, JOB_TEMPLATE_FIELDS)
return self.dict2columns(data)
class ListJobTemplates(command.Lister):
"""Lists job templates"""
log = logging.getLogger(__name__ + ".ListJobTemplates")
def get_parser(self, prog_name):
parser = super(ListJobTemplates, self).get_parser(prog_name)
parser.add_argument(
'--long',
action='store_true',
default=False,
help='List additional fields in output',
)
parser.add_argument(
'--type',
metavar="<type>",
choices=JOB_TYPES_CHOICES,
help="List job templates of specific type"
)
parser.add_argument(
'--name',
metavar="<name-substring>",
help="List job templates with specific substring in the "
"name"
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
search_opts = {'type': parsed_args.type} if parsed_args.type else {}
data = client.jobs.list(search_opts=search_opts)
if parsed_args.name:
data = utils.get_by_name_substring(data, parsed_args.name)
if parsed_args.long:
columns = ('name', 'id', 'type', 'description', 'is_public',
'is_protected')
column_headers = utils.prepare_column_headers(columns)
else:
columns = ('name', 'id', 'type')
column_headers = utils.prepare_column_headers(columns)
return (
column_headers,
(osc_utils.get_item_properties(
s,
columns
) for s in data)
)
class ShowJobTemplate(command.ShowOne):
"""Display job template details"""
log = logging.getLogger(__name__ + ".ShowJobTemplate")
def get_parser(self, prog_name):
parser = super(ShowJobTemplate, self).get_parser(prog_name)
parser.add_argument(
"job_template",
metavar="<job-template>",
help="Name or ID of the job template to display",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
data = utils.get_resource(
client.jobs, parsed_args.job_template).to_dict()
_format_job_template_output(data)
data = utils.prepare_data(data, JOB_TEMPLATE_FIELDS)
return self.dict2columns(data)
class DeleteJobTemplate(command.Command):
"""Deletes job template"""
log = logging.getLogger(__name__ + ".DeleteJobTemplate")
def get_parser(self, prog_name):
parser = super(DeleteJobTemplate, self).get_parser(prog_name)
parser.add_argument(
"job_template",
metavar="<job-template>",
nargs="+",
help="Name(s) or ID(s) of the job template(s) to delete",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
for jt in parsed_args.job_template:
jt_id = utils.get_resource_id(client.jobs, jt)
client.jobs.delete(jt_id)
sys.stdout.write(
'Job template "{jt}" has been removed '
'successfully.\n'.format(jt=jt))
class UpdateJobTemplate(command.ShowOne):
"""Updates job template"""
log = logging.getLogger(__name__ + ".UpdateJobTemplate")
def get_parser(self, prog_name):
parser = super(UpdateJobTemplate, self).get_parser(prog_name)
parser.add_argument(
'job_template',
metavar="<job-template>",
help="Name or ID of the job template",
)
parser.add_argument(
'--name',
metavar="<name>",
help="New name of the job template",
)
parser.add_argument(
'--description',
metavar="<description>",
help='Description of the job template'
)
public = parser.add_mutually_exclusive_group()
public.add_argument(
'--public',
action='store_true',
help='Make the job template public '
'(Visible from other tenants)',
dest='is_public'
)
public.add_argument(
'--private',
action='store_false',
help='Make the job_template private '
'(Visible only from this tenant)',
dest='is_public'
)
protected = parser.add_mutually_exclusive_group()
protected.add_argument(
'--protected',
action='store_true',
help='Make the job template protected',
dest='is_protected'
)
protected.add_argument(
'--unprotected',
action='store_false',
help='Make the job template unprotected',
dest='is_protected'
)
parser.set_defaults(is_public=None, is_protected=None)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
jt_id = utils.get_resource_id(
client.jobs, parsed_args.job_template)
update_data = utils.create_dict_from_kwargs(
name=parsed_args.name,
description=parsed_args.description,
is_public=parsed_args.is_public,
is_protected=parsed_args.is_protected
)
data = client.jobs.update(jt_id, **update_data).job
_format_job_template_output(data)
data = utils.prepare_data(data, JOB_TEMPLATE_FIELDS)
return self.dict2columns(data)
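The `mains`/`libs` rendering in `_format_job_template_output` above reduces each job binary to a `name:id` pair; a minimal sketch (note `osc_utils.format_list` also sorts its items — a plain join is used here for brevity):

```python
def format_binary_refs(binaries):
    # Render job binaries the way _format_job_template_output does:
    # each entry becomes "name:id", joined into one comma-separated
    # string for table output.
    return ', '.join('%s:%s' % (b['name'], b['id']) for b in binaries)

format_binary_refs([{'name': 'main.jar', 'id': '1'},
                    {'name': 'lib.jar', 'id': '2'}])
# 'main.jar:1, lib.jar:2'
```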


@ -1,133 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from os import path
import sys
from osc_lib.command import command
from osc_lib import exceptions
from osc_lib import utils as osc_utils
from oslo_log import log as logging
from oslo_serialization import jsonutils
from saharaclient.osc.v1.job_templates import JOB_TYPES_CHOICES
from saharaclient.osc.v1 import utils
class ListJobTypes(command.Lister):
"""Lists job types supported by plugins"""
log = logging.getLogger(__name__ + ".ListJobTypes")
def get_parser(self, prog_name):
parser = super(ListJobTypes, self).get_parser(prog_name)
parser.add_argument(
'--type',
metavar="<type>",
choices=JOB_TYPES_CHOICES,
help="Get information about specific job type"
)
parser.add_argument(
'--plugin',
metavar="<plugin>",
help="Get only job types supported by this plugin"
)
parser.add_argument(
'--plugin-version',
metavar="<plugin_version>",
help="Get only job types supported by specific version of the "
"plugin. This parameter will be taken into account only if "
"plugin is provided"
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
search_opts = {}
if parsed_args.type:
search_opts['type'] = parsed_args.type
if parsed_args.plugin:
search_opts['plugin'] = parsed_args.plugin
if parsed_args.plugin_version:
search_opts['plugin_version'] = parsed_args.plugin_version
elif parsed_args.plugin_version:
raise exceptions.CommandError(
'--plugin-version argument should be specified with --plugin '
'argument')
data = client.job_types.list(search_opts=search_opts)
for job in data:
plugins = []
for plugin in job.plugins:
versions = ", ".join(sorted(plugin["versions"].keys()))
if versions:
versions = "(" + versions + ")"
plugins.append(plugin["name"] + versions)
job.plugins = ', '.join(plugins)
columns = ('name', 'plugins')
column_headers = utils.prepare_column_headers(columns)
return (
column_headers,
(osc_utils.get_item_properties(
s,
columns
) for s in data)
)
class GetJobTypeConfigs(command.Command):
"""Get job type configs"""
log = logging.getLogger(__name__ + ".GetJobTypeConfigs")
def get_parser(self, prog_name):
parser = super(GetJobTypeConfigs, self).get_parser(prog_name)
parser.add_argument(
"job_type",
metavar="<job-type>",
choices=JOB_TYPES_CHOICES,
help="Type of the job to provide config information about",
)
parser.add_argument(
'--file',
metavar="<file>",
help='Destination file (defaults to job type)',
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
if not parsed_args.file:
parsed_args.file = parsed_args.job_type
data = client.jobs.get_configs(parsed_args.job_type).to_dict()
if path.exists(parsed_args.file):
self.log.error('File "%s" already exists. Choose another one with '
'--file argument.' % parsed_args.file)
else:
with open(parsed_args.file, 'w') as f:
jsonutils.dump(data, f, indent=4)
sys.stdout.write(
'"%(type)s" job configs were saved in "%(file)s" '
'file\n' % {'type': parsed_args.job_type,
'file': parsed_args.file})
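The plugin column built in `ListJobTypes.take_action` above renders each plugin as its name followed by a parenthesized, sorted version list; the same shaping in isolation:

```python
def format_plugins(plugins):
    # Same logic as the loop in ListJobTypes.take_action: a plugin with
    # versions is shown as "name(v1, v2)" with versions sorted; a
    # plugin with no versions is shown bare.
    formatted = []
    for plugin in plugins:
        versions = ', '.join(sorted(plugin['versions'].keys()))
        if versions:
            versions = '(' + versions + ')'
        formatted.append(plugin['name'] + versions)
    return ', '.join(formatted)

format_plugins([{'name': 'vanilla', 'versions': {'2.7.1': {}, '1.2.1': {}}}])
# 'vanilla(1.2.1, 2.7.1)'
```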


@ -1,379 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from osc_lib.command import command
from osc_lib import exceptions
from osc_lib import utils as osc_utils
from oslo_log import log as logging
from oslo_serialization import jsonutils
from saharaclient.osc.v1 import utils
JOB_FIELDS = ['id', 'job_template_id', 'cluster_id', 'input_id', 'output_id',
'start_time', 'end_time', 'status', 'is_public', 'is_protected',
'engine_job_id']
JOB_STATUS_CHOICES = ['done-with-error', 'failed', 'killed', 'pending',
'running', 'succeeded', 'to-be-killed']
def _format_job_output(data):
data['status'] = data['info']['status']
del data['info']
data['job_template_id'] = data.pop('job_id')
class ExecuteJob(command.ShowOne):
"""Executes job"""
log = logging.getLogger(__name__ + ".ExecuteJob")
def get_parser(self, prog_name):
parser = super(ExecuteJob, self).get_parser(prog_name)
parser.add_argument(
'--job-template',
metavar="<job-template>",
help="Name or ID of the job template "
"[REQUIRED if JSON is not provided]",
)
parser.add_argument(
'--cluster',
metavar="<cluster>",
help="Name or ID of the cluster "
"[REQUIRED if JSON is not provided]",
)
parser.add_argument(
'--input',
metavar="<input>",
help="Name or ID of the input data source",
)
parser.add_argument(
'--output',
metavar="<output>",
help="Name or ID of the output data source",
)
parser.add_argument(
'--params',
metavar="<name:value>",
nargs='+',
help="Parameters to add to the job"
)
parser.add_argument(
'--args',
metavar="<argument>",
nargs='+',
help="Arguments to add to the job"
)
parser.add_argument(
'--public',
action='store_true',
default=False,
help='Make the job public',
)
parser.add_argument(
'--protected',
action='store_true',
default=False,
help='Make the job protected',
)
configs = parser.add_mutually_exclusive_group()
configs.add_argument(
'--config-json',
metavar='<filename>',
help='JSON representation of the job configs'
)
configs.add_argument(
'--configs',
metavar="<name:value>",
nargs='+',
help="Configs to add to the job"
)
parser.add_argument(
'--interface',
metavar='<filename>',
help='JSON representation of the interface'
)
parser.add_argument(
'--json',
metavar='<filename>',
help='JSON representation of the job. Other arguments will not be '
'taken into account if this one is provided'
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
if parsed_args.json:
blob = osc_utils.read_blob_file_contents(parsed_args.json)
try:
template = jsonutils.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'template from file %s: %s' % (parsed_args.json, e))
if 'job_configs' in template:
template['configs'] = template.pop('job_configs')
data = client.job_executions.create(**template).to_dict()
else:
if not parsed_args.cluster or not parsed_args.job_template:
raise exceptions.CommandError(
'At least --cluster and --job-template arguments should be '
'specified or json template should be provided with '
'--json argument')
job_configs = {}
if parsed_args.interface:
blob = osc_utils.read_blob_file_contents(parsed_args.interface)
try:
parsed_args.interface = jsonutils.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'interface from file %s: %s' % (parsed_args.interface, e))
if parsed_args.config_json:
blob = osc_utils.read_blob_file_contents(parsed_args.config_json)
try:
job_configs['configs'] = jsonutils.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'configs from file %s: %s' % (parsed_args.config_json, e))
elif parsed_args.configs:
job_configs['configs'] = dict(
map(lambda x: x.split(':', 1), parsed_args.configs))
if parsed_args.args:
job_configs['args'] = parsed_args.args
if parsed_args.params:
job_configs['params'] = dict(
map(lambda x: x.split(':', 1), parsed_args.params))
jt_id = utils.get_resource_id(
client.jobs, parsed_args.job_template)
cluster_id = utils.get_resource_id(
client.clusters, parsed_args.cluster)
input_id = utils.get_resource_id(
client.data_sources, parsed_args.input)
output_id = utils.get_resource_id(
client.data_sources, parsed_args.output)
data = client.job_executions.create(
job_id=jt_id, cluster_id=cluster_id, input_id=input_id,
output_id=output_id, interface=parsed_args.interface,
configs=job_configs, is_public=parsed_args.public,
is_protected=parsed_args.protected).to_dict()
sys.stdout.write(
'Job "{job}" has been started successfully.\n'.format(
job=data['id']))
_format_job_output(data)
data = utils.prepare_data(data, JOB_FIELDS)
return self.dict2columns(data)
class ListJobs(command.Lister):
"""Lists jobs"""
log = logging.getLogger(__name__ + ".ListJobs")
def get_parser(self, prog_name):
parser = super(ListJobs, self).get_parser(prog_name)
parser.add_argument(
'--long',
action='store_true',
default=False,
help='List additional fields in output',
)
parser.add_argument(
'--status',
metavar="<status>",
choices=JOB_STATUS_CHOICES,
help="List jobs with specific status"
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
data = client.job_executions.list()
for job in data:
job.status = job.info['status']
if parsed_args.status:
data = [job for job in data
if job.info['status'] == parsed_args.status.replace(
'-', '').upper()]
if parsed_args.long:
columns = ('id', 'cluster id', 'job id', 'status', 'start time',
'end time')
column_headers = utils.prepare_column_headers(columns)
else:
columns = ('id', 'cluster id', 'job id', 'status')
column_headers = utils.prepare_column_headers(columns)
return (
column_headers,
(osc_utils.get_item_properties(
s,
columns
) for s in data)
)
class ShowJob(command.ShowOne):
"""Display job details"""
log = logging.getLogger(__name__ + ".ShowJob")
def get_parser(self, prog_name):
parser = super(ShowJob, self).get_parser(prog_name)
parser.add_argument(
"job",
metavar="<job>",
help="ID of the job to display",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
data = client.job_executions.get(parsed_args.job).to_dict()
_format_job_output(data)
data = utils.prepare_data(data, JOB_FIELDS)
return self.dict2columns(data)
class DeleteJob(command.Command):
"""Deletes job"""
log = logging.getLogger(__name__ + ".DeleteJob")
def get_parser(self, prog_name):
parser = super(DeleteJob, self).get_parser(prog_name)
parser.add_argument(
"job",
metavar="<job>",
nargs="+",
help="ID(s) of the job(s) to delete",
)
parser.add_argument(
'--wait',
action='store_true',
default=False,
help='Wait for the job(s) delete to complete',
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
for job_id in parsed_args.job:
client.job_executions.delete(job_id)
sys.stdout.write(
'Job "{job}" deletion has been started.\n'.format(job=job_id))
if parsed_args.wait:
for job_id in parsed_args.job:
if not utils.wait_for_delete(client.job_executions, job_id):
self.log.error(
'Error occurred during job deleting: %s' %
job_id)
else:
sys.stdout.write(
'Job "{job}" has been removed successfully.\n'.format(
job=job_id))
class UpdateJob(command.ShowOne):
"""Updates job"""
log = logging.getLogger(__name__ + ".UpdateJob")
def get_parser(self, prog_name):
parser = super(UpdateJob, self).get_parser(prog_name)
parser.add_argument(
'job',
metavar="<job>",
help="ID of the job to update",
)
public = parser.add_mutually_exclusive_group()
public.add_argument(
'--public',
action='store_true',
help='Make the job public (Visible from other tenants)',
dest='is_public'
)
public.add_argument(
'--private',
action='store_false',
help='Make the job private (Visible only from this tenant)',
dest='is_public'
)
protected = parser.add_mutually_exclusive_group()
protected.add_argument(
'--protected',
action='store_true',
help='Make the job protected',
dest='is_protected'
)
protected.add_argument(
'--unprotected',
action='store_false',
help='Make the job unprotected',
dest='is_protected'
)
parser.set_defaults(is_public=None, is_protected=None)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
update_dict = utils.create_dict_from_kwargs(
is_public=parsed_args.is_public,
is_protected=parsed_args.is_protected)
data = client.job_executions.update(
parsed_args.job, **update_dict).job_execution
_format_job_output(data)
data = utils.prepare_data(data, JOB_FIELDS)
return self.dict2columns(data)
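The `--params` and `--configs` values in `ExecuteJob.take_action` above are parsed with `dict(map(lambda x: x.split(':', 1), ...))`; the same parsing written out, showing why the split is limited to the first colon:

```python
def parse_pairs(pairs):
    # Split each "name:value" pair on the first colon only, so values
    # may themselves contain colons (e.g. URLs with port numbers).
    return dict(pair.split(':', 1) for pair in pairs)

parse_pairs(['mapper:wc.py', 'url:http://host:8080'])
# {'mapper': 'wc.py', 'url': 'http://host:8080'}
```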


@ -1,691 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import sys
from osc_lib.command import command
from osc_lib import exceptions
from osc_lib import utils as osc_utils
from oslo_log import log as logging
from saharaclient.osc.v1 import utils
NGT_FIELDS = ['id', 'name', 'plugin_name', 'plugin_version', 'node_processes',
'description', 'auto_security_group', 'security_groups',
'availability_zone', 'flavor_id', 'floating_ip_pool',
'volumes_per_node', 'volumes_size',
'volume_type', 'volume_local_to_instance', 'volume_mount_prefix',
'volumes_availability_zone', 'use_autoconfig',
'is_proxy_gateway', 'is_default', 'is_protected', 'is_public']
def _format_ngt_output(data):
data['node_processes'] = osc_utils.format_list(data['node_processes'])
data['plugin_version'] = data.pop('hadoop_version')
if data['volumes_per_node'] == 0:
del data['volume_local_to_instance']
del data['volume_mount_prefix']
del data['volume_type']
del data['volumes_availability_zone']
del data['volumes_size']
class CreateNodeGroupTemplate(command.ShowOne):
"""Creates node group template"""
log = logging.getLogger(__name__ + ".CreateNodeGroupTemplate")
def get_parser(self, prog_name):
parser = super(CreateNodeGroupTemplate, self).get_parser(prog_name)
parser.add_argument(
'--name',
metavar="<name>",
help="Name of the node group template [REQUIRED if JSON is not "
"provided]",
)
parser.add_argument(
'--plugin',
metavar="<plugin>",
help="Name of the plugin [REQUIRED if JSON is not provided]"
)
parser.add_argument(
'--plugin-version',
metavar="<plugin_version>",
help="Version of the plugin [REQUIRED if JSON is not provided]"
)
parser.add_argument(
'--processes',
metavar="<processes>",
nargs="+",
help="List of the processes that will be launched on each "
"instance [REQUIRED if JSON is not provided]"
)
parser.add_argument(
'--flavor',
metavar="<flavor>",
help="Name or ID of the flavor [REQUIRED if JSON is not provided]"
)
parser.add_argument(
'--security-groups',
metavar="<security-groups>",
nargs="+",
help="List of the security groups for the instances in this node "
"group"
)
parser.add_argument(
'--auto-security-group',
action='store_true',
default=False,
help='Indicates if an additional security group should be created '
'for the node group',
)
parser.add_argument(
'--availability-zone',
metavar="<availability-zone>",
help="Name of the availability zone where instances "
"will be created"
)
parser.add_argument(
'--floating-ip-pool',
metavar="<floating-ip-pool>",
help="ID of the floating IP pool"
)
parser.add_argument(
'--volumes-per-node',
type=int,
metavar="<volumes-per-node>",
help="Number of volumes attached to every node"
)
parser.add_argument(
'--volumes-size',
type=int,
metavar="<volumes-size>",
help='Size of volumes attached to node (GB). '
'This parameter will be taken into account only '
'if volumes-per-node is set and non-zero'
)
parser.add_argument(
'--volumes-type',
metavar="<volumes-type>",
help='Type of the volumes. '
'This parameter will be taken into account only '
'if volumes-per-node is set and non-zero'
)
parser.add_argument(
'--volumes-availability-zone',
metavar="<volumes-availability-zone>",
help='Name of the availability zone where volumes will be created.'
' This parameter will be taken into account only '
'if volumes-per-node is set and non-zero'
)
parser.add_argument(
'--volumes-mount-prefix',
metavar="<volumes-mount-prefix>",
help='Prefix for mount point directory. '
'This parameter will be taken into account only '
'if volumes-per-node is set and non-zero'
)
parser.add_argument(
'--volumes-locality',
action='store_true',
default=False,
help='If enabled, instance and attached volumes will be created on'
' the same physical host. This parameter will be taken into '
'account only if volumes-per-node is set and non-zero',
)
parser.add_argument(
'--description',
metavar="<description>",
help='Description of the node group template'
)
parser.add_argument(
'--autoconfig',
action='store_true',
default=False,
help='If enabled, instances of the node group will be '
'automatically configured',
)
parser.add_argument(
'--proxy-gateway',
action='store_true',
default=False,
help='If enabled, instances of the node group will be used to '
'access other instances in the cluster',
)
parser.add_argument(
'--public',
action='store_true',
default=False,
help='Make the node group template public (Visible from other '
'tenants)',
)
parser.add_argument(
'--protected',
action='store_true',
default=False,
help='Make the node group template protected',
)
parser.add_argument(
'--json',
metavar='<filename>',
help='JSON representation of the node group template. Other '
'arguments will not be taken into account if this one is '
'provided'
)
parser.add_argument(
'--shares',
metavar='<filename>',
help='JSON representation of the manila shares'
)
parser.add_argument(
'--configs',
metavar='<filename>',
help='JSON representation of the node group template configs'
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
if parsed_args.json:
blob = osc_utils.read_blob_file_contents(parsed_args.json)
try:
template = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'template from file %s: %s' % (parsed_args.json, e))
data = client.node_group_templates.create(**template).to_dict()
else:
if (not parsed_args.name or not parsed_args.plugin or
not parsed_args.plugin_version or not parsed_args.flavor or
not parsed_args.processes):
raise exceptions.CommandError(
'At least --name, --plugin, --plugin-version, --processes,'
' --flavor arguments should be specified or json template '
'should be provided with --json argument')
configs = None
if parsed_args.configs:
blob = osc_utils.read_blob_file_contents(parsed_args.configs)
try:
configs = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'configs from file %s: %s' % (parsed_args.configs, e))
shares = None
if parsed_args.shares:
blob = osc_utils.read_blob_file_contents(parsed_args.shares)
try:
shares = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'shares from file %s: %s' % (parsed_args.shares, e))
compute_client = self.app.client_manager.compute
flavor_id = osc_utils.find_resource(
compute_client.flavors, parsed_args.flavor).id
data = client.node_group_templates.create(
name=parsed_args.name,
plugin_name=parsed_args.plugin,
hadoop_version=parsed_args.plugin_version,
flavor_id=flavor_id,
description=parsed_args.description,
volumes_per_node=parsed_args.volumes_per_node,
volumes_size=parsed_args.volumes_size,
node_processes=parsed_args.processes,
floating_ip_pool=parsed_args.floating_ip_pool,
security_groups=parsed_args.security_groups,
auto_security_group=parsed_args.auto_security_group,
availability_zone=parsed_args.availability_zone,
volume_type=parsed_args.volumes_type,
is_proxy_gateway=parsed_args.proxy_gateway,
volume_local_to_instance=parsed_args.volumes_locality,
use_autoconfig=parsed_args.autoconfig,
is_public=parsed_args.public,
is_protected=parsed_args.protected,
node_configs=configs,
shares=shares,
volumes_availability_zone=parsed_args.volumes_availability_zone
).to_dict()
_format_ngt_output(data)
data = utils.prepare_data(data, NGT_FIELDS)
return self.dict2columns(data)
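For illustration, a file passed to `--json` might look like the following. This is a minimal sketch: the keys mirror the `create()` keyword arguments above, but every value here is hypothetical sample data.

```python
import json

# Hypothetical node group template; keys follow the create() kwargs
# above (name, plugin_name, hadoop_version, flavor_id, ...), values
# are made-up examples.
template = {
    "name": "worker-ng",
    "plugin_name": "vanilla",
    "hadoop_version": "2.7.1",
    "flavor_id": "2",
    "node_processes": ["datanode", "nodemanager"],
    "volumes_per_node": 2,
    "volumes_size": 10,
}
print(json.dumps(template, indent=4))
```

When `--json` is given, this whole dict is forwarded to `client.node_group_templates.create(**template)` and all other CLI arguments are ignored.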
class ListNodeGroupTemplates(command.Lister):
"""Lists node group templates"""
log = logging.getLogger(__name__ + ".ListNodeGroupTemplates")
def get_parser(self, prog_name):
parser = super(ListNodeGroupTemplates, self).get_parser(prog_name)
parser.add_argument(
'--long',
action='store_true',
default=False,
help='List additional fields in output',
)
parser.add_argument(
'--plugin',
metavar="<plugin>",
help="List node group templates for specific plugin"
)
parser.add_argument(
'--plugin-version',
metavar="<plugin_version>",
help="List node group templates with specific version of the "
"plugin"
)
parser.add_argument(
'--name',
metavar="<name-substring>",
help="List node group templates with specific substring in the "
"name"
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
search_opts = {}
if parsed_args.plugin:
search_opts['plugin_name'] = parsed_args.plugin
if parsed_args.plugin_version:
search_opts['hadoop_version'] = parsed_args.plugin_version
data = client.node_group_templates.list(search_opts=search_opts)
if parsed_args.name:
data = utils.get_by_name_substring(data, parsed_args.name)
if parsed_args.long:
columns = ('name', 'id', 'plugin_name', 'hadoop_version',
'node_processes', 'description')
column_headers = utils.prepare_column_headers(
columns, {'hadoop_version': 'plugin_version'})
else:
columns = ('name', 'id', 'plugin_name', 'hadoop_version')
column_headers = utils.prepare_column_headers(
columns, {'hadoop_version': 'plugin_version'})
return (
column_headers,
(osc_utils.get_item_properties(
s,
columns,
formatters={
'node_processes': osc_utils.format_list
}
) for s in data)
)
class ShowNodeGroupTemplate(command.ShowOne):
"""Display node group template details"""
log = logging.getLogger(__name__ + ".ShowNodeGroupTemplate")
def get_parser(self, prog_name):
parser = super(ShowNodeGroupTemplate, self).get_parser(prog_name)
parser.add_argument(
"node_group_template",
metavar="<node-group-template>",
help="Name or ID of the node group template to display",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
data = utils.get_resource(
client.node_group_templates,
parsed_args.node_group_template).to_dict()
_format_ngt_output(data)
data = utils.prepare_data(data, NGT_FIELDS)
return self.dict2columns(data)
class DeleteNodeGroupTemplate(command.Command):
"""Deletes node group template"""
log = logging.getLogger(__name__ + ".DeleteNodeGroupTemplate")
def get_parser(self, prog_name):
parser = super(DeleteNodeGroupTemplate, self).get_parser(prog_name)
parser.add_argument(
"node_group_template",
metavar="<node-group-template>",
nargs="+",
help="Name(s) or ID(s) of the node group template(s) to delete",
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
for ngt in parsed_args.node_group_template:
ngt_id = utils.get_resource_id(
client.node_group_templates, ngt)
client.node_group_templates.delete(ngt_id)
sys.stdout.write(
'Node group template "{ngt}" has been removed '
'successfully.\n'.format(ngt=ngt))
class UpdateNodeGroupTemplate(command.ShowOne):
"""Updates node group template"""
log = logging.getLogger(__name__ + ".UpdateNodeGroupTemplate")
def get_parser(self, prog_name):
parser = super(UpdateNodeGroupTemplate, self).get_parser(prog_name)
parser.add_argument(
'node_group_template',
metavar="<node-group-template>",
help="Name or ID of the node group template",
)
parser.add_argument(
'--name',
metavar="<name>",
help="New name of the node group template",
)
parser.add_argument(
'--plugin',
metavar="<plugin>",
help="Name of the plugin"
)
parser.add_argument(
'--plugin-version',
metavar="<plugin_version>",
help="Version of the plugin"
)
parser.add_argument(
'--processes',
metavar="<processes>",
nargs="+",
help="List of the processes that will be launched on each "
"instance"
)
parser.add_argument(
'--security-groups',
metavar="<security-groups>",
nargs="+",
help="List of the security groups for the instances in this node "
"group"
)
autosecurity = parser.add_mutually_exclusive_group()
autosecurity.add_argument(
'--auto-security-group-enable',
action='store_true',
help='Additional security group should be created '
'for the node group',
dest='use_auto_security_group'
)
autosecurity.add_argument(
'--auto-security-group-disable',
action='store_false',
help='Additional security group should not be created '
'for the node group',
dest='use_auto_security_group'
)
parser.add_argument(
'--availability-zone',
metavar="<availability-zone>",
help="Name of the availability zone where instances "
"will be created"
)
parser.add_argument(
'--flavor',
metavar="<flavor>",
help="Name or ID of the flavor"
)
parser.add_argument(
'--floating-ip-pool',
metavar="<floating-ip-pool>",
help="ID of the floating IP pool"
)
parser.add_argument(
'--volumes-per-node',
type=int,
metavar="<volumes-per-node>",
help="Number of volumes attached to every node"
)
parser.add_argument(
'--volumes-size',
type=int,
metavar="<volumes-size>",
help='Size of volumes attached to node (GB). '
'This parameter will be taken into account only '
'if volumes-per-node is set and non-zero'
)
parser.add_argument(
'--volumes-type',
metavar="<volumes-type>",
help='Type of the volumes. '
'This parameter will be taken into account only '
'if volumes-per-node is set and non-zero'
)
parser.add_argument(
'--volumes-availability-zone',
metavar="<volumes-availability-zone>",
help='Name of the availability zone where volumes will be created.'
' This parameter will be taken into account only '
'if volumes-per-node is set and non-zero'
)
parser.add_argument(
'--volumes-mount-prefix',
metavar="<volumes-mount-prefix>",
help='Prefix for mount point directory. '
'This parameter will be taken into account only '
'if volumes-per-node is set and non-zero'
)
volumelocality = parser.add_mutually_exclusive_group()
volumelocality.add_argument(
'--volumes-locality-enable',
action='store_true',
help='Instance and attached volumes will be created on '
'the same physical host. This parameter will be taken into '
'account only if volumes-per-node is set and non-zero',
dest='volume_locality'
)
volumelocality.add_argument(
'--volumes-locality-disable',
action='store_false',
help='Instance and attached volumes creation on the same physical '
'host will not be regulated. This parameter will be taken '
'into account only if volumes-per-node is set and non-zero',
dest='volume_locality'
)
parser.add_argument(
'--description',
metavar="<description>",
help='Description of the node group template'
)
autoconfig = parser.add_mutually_exclusive_group()
autoconfig.add_argument(
'--autoconfig-enable',
action='store_true',
help='Instances of the node group will be '
'automatically configured',
dest='use_autoconfig'
)
autoconfig.add_argument(
'--autoconfig-disable',
action='store_false',
help='Instances of the node group will not be '
'automatically configured',
dest='use_autoconfig'
)
proxy = parser.add_mutually_exclusive_group()
proxy.add_argument(
'--proxy-gateway-enable',
action='store_true',
help='Instances of the node group will be used to '
'access other instances in the cluster',
dest='is_proxy_gateway'
)
proxy.add_argument(
'--proxy-gateway-disable',
action='store_false',
help='Instances of the node group will not be used to '
'access other instances in the cluster',
dest='is_proxy_gateway'
)
public = parser.add_mutually_exclusive_group()
public.add_argument(
'--public',
action='store_true',
help='Make the node group template public '
'(Visible from other tenants)',
dest='is_public'
)
public.add_argument(
'--private',
action='store_false',
help='Make the node group template private '
'(Visible only from this tenant)',
dest='is_public'
)
protected = parser.add_mutually_exclusive_group()
protected.add_argument(
'--protected',
action='store_true',
help='Make the node group template protected',
dest='is_protected'
)
protected.add_argument(
'--unprotected',
action='store_false',
help='Make the node group template unprotected',
dest='is_protected'
)
parser.add_argument(
'--json',
metavar='<filename>',
help='JSON representation of the node group template update '
'fields. Other arguments will not be taken into account if '
'this one is provided'
)
parser.add_argument(
'--shares',
metavar='<filename>',
help='JSON representation of the manila shares'
)
parser.add_argument(
'--configs',
metavar='<filename>',
help='JSON representation of the node group template configs'
)
parser.set_defaults(is_public=None, is_protected=None,
is_proxy_gateway=None, volume_locality=None,
use_auto_security_group=None, use_autoconfig=None)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
ngt_id = utils.get_resource_id(
client.node_group_templates, parsed_args.node_group_template)
if parsed_args.json:
blob = osc_utils.read_blob_file_contents(parsed_args.json)
try:
template = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'template from file %s: %s' % (parsed_args.json, e))
data = client.node_group_templates.update(
ngt_id, **template).to_dict()
else:
configs = None
if parsed_args.configs:
blob = osc_utils.read_blob_file_contents(parsed_args.configs)
try:
configs = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'configs from file %s: %s' % (parsed_args.configs, e))
shares = None
if parsed_args.shares:
blob = osc_utils.read_blob_file_contents(parsed_args.shares)
try:
shares = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'shares from file %s: %s' % (parsed_args.shares, e))
flavor_id = None
if parsed_args.flavor:
compute_client = self.app.client_manager.compute
flavor_id = osc_utils.find_resource(
compute_client.flavors, parsed_args.flavor).id
update_dict = utils.create_dict_from_kwargs(
name=parsed_args.name,
plugin_name=parsed_args.plugin,
hadoop_version=parsed_args.plugin_version,
flavor_id=flavor_id,
description=parsed_args.description,
volumes_per_node=parsed_args.volumes_per_node,
volumes_size=parsed_args.volumes_size,
node_processes=parsed_args.processes,
floating_ip_pool=parsed_args.floating_ip_pool,
security_groups=parsed_args.security_groups,
auto_security_group=parsed_args.use_auto_security_group,
availability_zone=parsed_args.availability_zone,
volume_type=parsed_args.volumes_type,
is_proxy_gateway=parsed_args.is_proxy_gateway,
volume_local_to_instance=parsed_args.volume_locality,
use_autoconfig=parsed_args.use_autoconfig,
is_public=parsed_args.is_public,
is_protected=parsed_args.is_protected,
node_configs=configs,
shares=shares,
volumes_availability_zone=parsed_args.volumes_availability_zone
)
data = client.node_group_templates.update(
ngt_id, **update_dict).to_dict()
_format_ngt_output(data)
data = utils.prepare_data(data, NGT_FIELDS)
return self.dict2columns(data)


@@ -1,218 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from os import path
import sys
from osc_lib.command import command
from osc_lib import exceptions
from osc_lib import utils as osc_utils
from oslo_log import log as logging
from oslo_serialization import jsonutils
import six
from saharaclient.osc.v1 import utils
def _serialize_label_items(plugin):
labels = {}
pl_labels = plugin.get('plugin_labels', {})
for label, data in six.iteritems(pl_labels):
labels['plugin: %s' % label] = data['status']
vr_labels = plugin.get('version_labels', {})
for version, version_data in six.iteritems(vr_labels):
for label, data in six.iteritems(version_data):
labels[
'plugin version %s: %s' % (version, label)] = data['status']
labels = utils.prepare_data(labels, list(labels.keys()))
return sorted(labels.items())
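A standalone sketch of the flattening that `_serialize_label_items` performs, without the saharaclient/six dependencies; the plugin dict is hypothetical sample data.

```python
# Hypothetical plugin payload: one plugin-level label and one
# version-level label, each wrapped in a {'status': ...} dict.
plugin = {
    'plugin_labels': {'enabled': {'status': True}},
    'version_labels': {'2.7.1': {'stable': {'status': True}}},
}

labels = {}
# Plugin-wide labels become "plugin: <label>" keys.
for label, data in plugin.get('plugin_labels', {}).items():
    labels['plugin: %s' % label] = data['status']
# Per-version labels become "plugin version <ver>: <label>" keys.
for version, version_data in plugin.get('version_labels', {}).items():
    for label, data in version_data.items():
        labels['plugin version %s: %s' % (version, label)] = data['status']

print(sorted(labels.items()))
```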
class ListPlugins(command.Lister):
"""Lists plugins"""
log = logging.getLogger(__name__ + ".ListPlugins")
def get_parser(self, prog_name):
parser = super(ListPlugins, self).get_parser(prog_name)
parser.add_argument(
'--long',
action='store_true',
default=False,
help='List additional fields in output',
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
data = client.plugins.list()
if parsed_args.long:
columns = ('name', 'title', 'versions', 'description')
column_headers = utils.prepare_column_headers(columns)
else:
columns = ('name', 'versions')
column_headers = utils.prepare_column_headers(columns)
return (
column_headers,
(osc_utils.get_item_properties(
s,
columns,
formatters={
'versions': osc_utils.format_list
},
) for s in data)
)
class ShowPlugin(command.ShowOne):
"""Display plugin details"""
log = logging.getLogger(__name__ + ".ShowPlugin")
def get_parser(self, prog_name):
parser = super(ShowPlugin, self).get_parser(prog_name)
parser.add_argument(
"plugin",
metavar="<plugin>",
help="Name of the plugin to display",
)
parser.add_argument(
"--plugin-version",
metavar="<plugin_version>",
help='Version of the plugin to display'
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
if parsed_args.plugin_version:
data = client.plugins.get_version_details(
parsed_args.plugin, parsed_args.plugin_version).to_dict()
processes = data.pop('node_processes')
for k, v in processes.items():
processes[k] = osc_utils.format_list(v)
data['required_image_tags'] = osc_utils.format_list(
data['required_image_tags'])
label_items = _serialize_label_items(data)
data = utils.prepare_data(
data, ['required_image_tags', 'name', 'description', 'title'])
data = self.dict2columns(data)
data = utils.extend_columns(data, label_items)
data = utils.extend_columns(
data, [('Service:', 'Available processes:')])
data = utils.extend_columns(
data, sorted(processes.items()))
else:
data = client.plugins.get(parsed_args.plugin).to_dict()
data['versions'] = osc_utils.format_list(data['versions'])
items = _serialize_label_items(data)
data = utils.prepare_data(
data, ['versions', 'name', 'description', 'title'])
data = utils.extend_columns(self.dict2columns(data), items)
return data
class GetPluginConfigs(command.Command):
"""Get plugin configs"""
log = logging.getLogger(__name__ + ".GetPluginConfigs")
def get_parser(self, prog_name):
parser = super(GetPluginConfigs, self).get_parser(prog_name)
parser.add_argument(
"plugin",
metavar="<plugin>",
help="Name of the plugin to provide config information about",
)
parser.add_argument(
"plugin_version",
metavar="<plugin_version>",
help="Version of the plugin to provide config information about",
)
parser.add_argument(
'--file',
metavar="<file>",
help='Destination file (defaults to plugin name)',
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
if not parsed_args.file:
parsed_args.file = parsed_args.plugin
data = client.plugins.get_version_details(
parsed_args.plugin, parsed_args.plugin_version).to_dict()
if path.exists(parsed_args.file):
self.log.error('File "%s" already exists. Choose another one with '
'--file argument.' % parsed_args.file)
else:
with open(parsed_args.file, 'w') as f:
jsonutils.dump(data, f, indent=4)
sys.stdout.write(
'"%(plugin)s" plugin configs were saved in file '
'"%(file)s"\n' % {'plugin': parsed_args.plugin,
'file': parsed_args.file})
class UpdatePlugin(command.ShowOne):
log = logging.getLogger(__name__ + ".UpdatePlugin")
def get_parser(self, prog_name):
parser = super(UpdatePlugin, self).get_parser(prog_name)
parser.add_argument(
"plugin",
metavar="<plugin>",
help="Name of the plugin to provide config information about",
)
parser.add_argument(
'json',
metavar="<json>",
help='JSON representation of the plugin update dictionary',
)
return parser
def take_action(self, parsed_args):
self.log.debug("take_action(%s)" % parsed_args)
client = self.app.client_manager.data_processing
blob = osc_utils.read_blob_file_contents(parsed_args.json)
try:
update_dict = json.loads(blob)
except ValueError as e:
raise exceptions.CommandError(
'An error occurred when reading '
'update dict from file %s: %s' % (parsed_args.json, e))
plugin = client.plugins.update(parsed_args.plugin, update_dict)
data = plugin.to_dict()
data['versions'] = osc_utils.format_list(data['versions'])
items = _serialize_label_items(data)
data = utils.prepare_data(
data, ['versions', 'name', 'description', 'title'])
data = utils.extend_columns(self.dict2columns(data), items)
return data


@@ -1,102 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import six
import time
from oslo_utils import timeutils
from oslo_utils import uuidutils
from saharaclient.api import base
def get_resource(manager, name_or_id, **kwargs):
if uuidutils.is_uuid_like(name_or_id):
return manager.get(name_or_id, **kwargs)
else:
resource = manager.find_unique(name=name_or_id)
if kwargs:
# we really need additional call to apply kwargs
resource = manager.get(resource.id, **kwargs)
return resource
def created_at_sorted(objs, reverse=False):
return sorted(objs, key=created_at_key, reverse=reverse)
def random_name(prefix=None):
return "%s-%s" % (prefix, uuidutils.generate_uuid()[:8])
def created_at_key(obj):
return timeutils.parse_isotime(obj["created_at"])
def get_resource_id(manager, name_or_id):
if uuidutils.is_uuid_like(name_or_id):
return name_or_id
else:
return manager.find_unique(name=name_or_id).id
def create_dict_from_kwargs(**kwargs):
return dict((k, v) for (k, v) in six.iteritems(kwargs) if v is not None)
def prepare_data(data, fields):
new_data = {}
for f in fields:
if f in data:
new_data[f.replace('_', ' ').capitalize()] = data[f]
return new_data
def unzip(data):
return zip(*data)
def extend_columns(columns, items):
return unzip(list(unzip(columns)) + [('', '')] + items)
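A worked example of the `unzip`/`extend_columns` pair above: `columns` arrives as a `(headers, values)` pair (the shape `dict2columns` produces), and extra items are appended after a blank separator row. The sample data is hypothetical.

```python
def unzip(data):
    return zip(*data)

def extend_columns(columns, items):
    # Pair headers with values, append a blank row plus the new items,
    # then unzip back into (headers, values).
    return unzip(list(unzip(columns)) + [('', '')] + items)

columns = (('Name', 'Id'), ('test-ngt', '42'))   # sample data
headers, values = extend_columns(columns, [('Plugin', 'vanilla')])
print(headers)  # ('Name', 'Id', '', 'Plugin')
print(values)   # ('test-ngt', '42', '', 'vanilla')
```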
def prepare_column_headers(columns, remap=None):
remap = remap if remap else {}
new_columns = []
for c in columns:
for old, new in remap.items():
c = c.replace(old, new)
new_columns.append(c.replace('_', ' ').capitalize())
return new_columns
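The remap pattern used throughout the list commands above (where `hadoop_version` is displayed as "Plugin version") can be demonstrated in isolation:

```python
def prepare_column_headers(columns, remap=None):
    remap = remap if remap else {}
    new_columns = []
    for c in columns:
        # Apply any key renames first, then prettify the header.
        for old, new in remap.items():
            c = c.replace(old, new)
        new_columns.append(c.replace('_', ' ').capitalize())
    return new_columns

print(prepare_column_headers(
    ('name', 'id', 'hadoop_version'),
    remap={'hadoop_version': 'plugin_version'}))
# ['Name', 'Id', 'Plugin version']
```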
def get_by_name_substring(data, name):
return [obj for obj in data if name in obj.name]
def wait_for_delete(manager, obj_id, sleep_time=5, timeout=3000):
s_time = timeutils.utcnow()
while timeutils.delta_seconds(s_time, timeutils.utcnow()) < timeout:
try:
manager.get(obj_id)
except base.APIException as ex:
if ex.error_code == 404:
return True
raise
time.sleep(sleep_time)
return False
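A sketch of the `wait_for_delete` polling contract above, using stand-ins for the API exception and resource manager (these fakes are illustrative, not the real saharaclient classes):

```python
import time

class APIException(Exception):
    """Stand-in for saharaclient.api.base.APIException."""
    def __init__(self, error_code):
        self.error_code = error_code

class FakeManager(object):
    """Returns successfully until the resource 'disappears' (404)."""
    def __init__(self, gets_until_404):
        self.calls = 0
        self.gets_until_404 = gets_until_404

    def get(self, obj_id):
        self.calls += 1
        if self.calls > self.gets_until_404:
            raise APIException(404)

def wait_for_delete(manager, obj_id, sleep_time=0.01, timeout=1):
    start = time.time()
    while time.time() - start < timeout:
        try:
            manager.get(obj_id)
        except APIException as ex:
            if ex.error_code == 404:
                return True   # resource is gone: deletion finished
            raise             # any other API error propagates
        time.sleep(sleep_time)
    return False              # still present when the timeout expired

print(wait_for_delete(FakeManager(2), 'some-id'))
```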


@@ -1,732 +0,0 @@
# Copyright 2010 Jacob Kaplan-Moss
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
###
# This code is taken from python-novaclient. Goal is minimal modification.
###
"""
Command-line interface to the OpenStack Sahara API.
"""
from __future__ import print_function
import argparse
import getpass
import logging
import sys
import warnings
import six
HAS_KEYRING = False
all_errors = ValueError
try:
import keyring
HAS_KEYRING = True
try:
if isinstance(keyring.get_keyring(), keyring.backend.GnomeKeyring):
import gnomekeyring
all_errors = (ValueError,
gnomekeyring.IOError,
gnomekeyring.NoKeyringDaemonError)
except Exception:
pass
except ImportError:
pass
from keystoneauth1.identity.generic import password
from keystoneauth1.identity.generic import token
from keystoneauth1.loading import session
from keystoneclient.auth.identity import v3 as identity
from oslo_utils import encodeutils
from oslo_utils import strutils
from saharaclient.api import client
from saharaclient.api import shell as shell_api
from saharaclient.openstack.common.apiclient import auth
from saharaclient.openstack.common.apiclient import exceptions as exc
from saharaclient.openstack.common import cliutils
from saharaclient import version
DEFAULT_API_VERSION = 'api'
DEFAULT_ENDPOINT_TYPE = 'publicURL'
DEFAULT_SERVICE_TYPE = 'data-processing'
logger = logging.getLogger(__name__)
def positive_non_zero_float(text):
if text is None:
return None
try:
value = float(text)
except ValueError:
msg = "%s must be a float" % text
raise argparse.ArgumentTypeError(msg)
if value <= 0:
msg = "%s must be greater than 0" % text
raise argparse.ArgumentTypeError(msg)
return value
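For context, a validator like this is meant to be wired into argparse as a `type=` callback, as the commented-out `--timeout` option further down suggests; the option name here is only an example.

```python
import argparse

def positive_non_zero_float(text):
    if text is None:
        return None
    try:
        value = float(text)
    except ValueError:
        raise argparse.ArgumentTypeError("%s must be a float" % text)
    if value <= 0:
        raise argparse.ArgumentTypeError("%s must be greater than 0" % text)
    return value

parser = argparse.ArgumentParser()
# argparse calls the validator on the raw string and stores its return
# value; invalid input becomes a clean "argument --timeout: ..." error.
parser.add_argument('--timeout', type=positive_non_zero_float)
print(parser.parse_args(['--timeout', '600']).timeout)  # 600.0
```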
class SecretsHelper(object):
def __init__(self, args, client):
self.args = args
self.client = client
self.key = None
def _validate_string(self, text):
if text is None or len(text) == 0:
return False
return True
def _make_key(self):
if self.key is not None:
return self.key
keys = [
self.client.auth_url,
self.client.projectid,
self.client.user,
self.client.region_name,
self.client.endpoint_type,
self.client.service_type,
self.client.service_name,
self.client.volume_service_name,
]
for (index, key) in enumerate(keys):
if key is None:
keys[index] = '?'
else:
keys[index] = str(keys[index])
self.key = "/".join(keys)
return self.key
def _prompt_password(self, verify=True):
pw = None
if hasattr(sys.stdin, 'isatty') and sys.stdin.isatty():
# Check for Ctl-D
try:
while True:
pw1 = getpass.getpass('OS Password: ')
if verify:
pw2 = getpass.getpass('Please verify: ')
else:
pw2 = pw1
if pw1 == pw2 and self._validate_string(pw1):
pw = pw1
break
except EOFError:
pass
return pw
def save(self, auth_token, management_url, tenant_id):
if not HAS_KEYRING or not self.args.os_cache:
return
if (auth_token == self.auth_token and
management_url == self.management_url):
# Nothing changed....
return
if not all([management_url, auth_token, tenant_id]):
raise ValueError("Unable to save empty management url/auth token")
value = "|".join([str(auth_token),
str(management_url),
str(tenant_id)])
keyring.set_password("saharaclient_auth", self._make_key(), value)
@property
def password(self):
if self._validate_string(self.args.os_password):
return self.args.os_password
verify_pass = (
strutils.bool_from_string(cliutils.env("OS_VERIFY_PASSWORD"))
)
return self._prompt_password(verify_pass)
@property
def management_url(self):
if not HAS_KEYRING or not self.args.os_cache:
return None
management_url = None
try:
block = keyring.get_password('saharaclient_auth',
self._make_key())
if block:
_token, management_url, _tenant_id = block.split('|', 2)
except all_errors:
pass
return management_url
@property
def auth_token(self):
# Now is where it gets complicated since we
# want to look into the keyring module, if it
# exists and see if anything was provided in that
# file that we can use.
if not HAS_KEYRING or not self.args.os_cache:
return None
token = None
try:
block = keyring.get_password('saharaclient_auth',
self._make_key())
if block:
token, _management_url, _tenant_id = block.split('|', 2)
except all_errors:
pass
return token
@property
def tenant_id(self):
if not HAS_KEYRING or not self.args.os_cache:
return None
tenant_id = None
try:
block = keyring.get_password('saharaclient_auth',
self._make_key())
if block:
_token, _management_url, tenant_id = block.split('|', 2)
except all_errors:
pass
return tenant_id
class SaharaClientArgumentParser(argparse.ArgumentParser):
def __init__(self, *args, **kwargs):
super(SaharaClientArgumentParser, self).__init__(*args, **kwargs)
def error(self, message):
"""error(message: string)
Prints a usage message incorporating the message to stderr and
exits.
"""
self.print_usage(sys.stderr)
# FIXME(lzyeval): if changes occur in argparse.ArgParser._check_value
choose_from = ' (choose from'
progparts = self.prog.partition(' ')
self.exit(2, "error: %(errmsg)s\nTry '%(mainp)s help %(subp)s'"
" for more information.\n" %
{'errmsg': message.split(choose_from)[0],
'mainp': progparts[0],
'subp': progparts[2]})
class OpenStackSaharaShell(object):
def get_base_parser(self):
parser = SaharaClientArgumentParser(
prog='sahara',
description=__doc__.strip(),
epilog='See "sahara help COMMAND" '
'for help on a specific command.',
add_help=False,
formatter_class=OpenStackHelpFormatter,
)
# Global arguments
parser.add_argument('-h', '--help',
action='store_true',
help=argparse.SUPPRESS)
parser.add_argument('--version',
action='version',
version=version.version_info.version_string())
parser.add_argument('--debug',
default=False,
action='store_true',
help="Print debugging output.")
parser.add_argument('--os-cache',
default=strutils.bool_from_string(
cliutils.env('OS_CACHE', default=False)),
action='store_true',
help="Use the auth token cache. Defaults to False "
"if env[OS_CACHE] is not set.")
# TODO(mattf) - add get_timings support to Client
# parser.add_argument('--timings',
# default=False,
# action='store_true',
# help="Print call timing info")
# TODO(mattf) - use timeout
# parser.add_argument('--timeout',
# default=600,
# metavar='<seconds>',
# type=positive_non_zero_float,
# help="Set HTTP call timeout (in seconds)")
parser.add_argument('--region-name',
metavar='<region-name>',
default=cliutils.env('SAHARA_REGION_NAME',
'OS_REGION_NAME'),
help='Defaults to env[OS_REGION_NAME].')
parser.add_argument('--region_name',
help=argparse.SUPPRESS)
parser.add_argument('--service-type',
metavar='<service-type>',
help='Defaults to data-processing for all '
'actions.')
parser.add_argument('--service_type',
help=argparse.SUPPRESS)
# NA
# parser.add_argument('--service-name',
# metavar='<service-name>',
# default=utils.env('SAHARA_SERVICE_NAME'),
# help='Defaults to env[SAHARA_SERVICE_NAME]')
# parser.add_argument('--service_name',
# help=argparse.SUPPRESS)
# NA
# parser.add_argument('--volume-service-name',
# metavar='<volume-service-name>',
# default=utils.env('NOVA_VOLUME_SERVICE_NAME'),
# help='Defaults to env[NOVA_VOLUME_SERVICE_NAME]')
# parser.add_argument('--volume_service_name',
# help=argparse.SUPPRESS)
parser.add_argument('--endpoint-type',
metavar='<endpoint-type>',
default=cliutils.env(
'SAHARA_ENDPOINT_TYPE',
'OS_ENDPOINT_TYPE',
default=DEFAULT_ENDPOINT_TYPE),
help=('Defaults to env[SAHARA_ENDPOINT_TYPE] or'
' env[OS_ENDPOINT_TYPE] or ')
+ DEFAULT_ENDPOINT_TYPE + '.')
# NOTE(dtroyer): We can't add --endpoint_type here due to argparse
# thinking usage-list --end is ambiguous; but it
# works fine with only --endpoint-type present
# Go figure. I'm leaving this here for doc purposes.
# parser.add_argument('--endpoint_type',
# help=argparse.SUPPRESS)
parser.add_argument('--sahara-api-version',
metavar='<sahara-api-ver>',
default=cliutils.env(
'SAHARA_API_VERSION',
default=DEFAULT_API_VERSION),
help='Accepts "api", '
'defaults to env[SAHARA_API_VERSION].')
parser.add_argument('--sahara_api_version',
help=argparse.SUPPRESS)
parser.add_argument('--bypass-url',
metavar='<bypass-url>',
default=cliutils.env('BYPASS_URL', default=None),
dest='bypass_url',
help="Use this API endpoint instead of the "
"Service Catalog.")
parser.add_argument('--bypass_url',
help=argparse.SUPPRESS)
parser.add_argument('--os-tenant-name',
default=cliutils.env('OS_TENANT_NAME'),
help='Defaults to env[OS_TENANT_NAME].')
parser.add_argument('--os-tenant-id',
default=cliutils.env('OS_TENANT_ID'),
help='Defaults to env[OS_TENANT_ID].')
parser.add_argument('--os-auth-system',
default=cliutils.env('OS_AUTH_SYSTEM'),
help='Defaults to env[OS_AUTH_SYSTEM].')
parser.add_argument('--os-auth-token',
default=cliutils.env('OS_AUTH_TOKEN'),
help='Defaults to env[OS_AUTH_TOKEN].')
# Use Keystoneclient/Keystoneauth API to parse authentication arguments
session.Session().register_argparse_arguments(parser)
identity.Password.register_argparse_arguments(parser)
return parser
def get_subcommand_parser(self, version):
parser = self.get_base_parser()
self.subcommands = {}
subparsers = parser.add_subparsers(metavar='<subcommand>')
try:
actions_module = {
'api': shell_api,
}[version]
except KeyError:
actions_module = shell_api
self._find_actions(subparsers, actions_module)
self._find_actions(subparsers, self)
self._add_bash_completion_subparser(subparsers)
return parser
def _add_bash_completion_subparser(self, subparsers):
subparser = (
subparsers.add_parser('bash_completion',
add_help=False,
formatter_class=OpenStackHelpFormatter)
)
self.subcommands['bash_completion'] = subparser
subparser.set_defaults(func=self.do_bash_completion)
def _find_actions(self, subparsers, actions_module):
for attr in (a for a in dir(actions_module) if a.startswith('do_')):
# Commands are exposed hyphen-separated rather than with underscores.
command = attr[3:].replace('_', '-')
callback = getattr(actions_module, attr)
desc = callback.__doc__ or ''
action_help = desc.strip()
arguments = getattr(callback, 'arguments', [])
subparser = (
subparsers.add_parser(command,
help=action_help,
description=desc,
add_help=False,
formatter_class=OpenStackHelpFormatter)
)
subparser.add_argument('-h', '--help',
action='help',
help=argparse.SUPPRESS,)
self.subcommands[command] = subparser
for (args, kwargs) in arguments:
subparser.add_argument(*args, **kwargs)
subparser.set_defaults(func=callback)
def setup_debugging(self, debug):
if not debug:
return
streamformat = "%(levelname)s (%(module)s:%(lineno)d) %(message)s"
# Set up the root logger to debug so that the submodules can
# print debug messages
logging.basicConfig(level=logging.DEBUG,
format=streamformat)
def _get_keystone_auth(self, session, auth_url, **kwargs):
auth_token = kwargs.pop('auth_token', None)
if auth_token:
return token.Token(auth_url, auth_token, **kwargs)
else:
return password.Password(
auth_url,
username=kwargs.pop('username'),
user_id=kwargs.pop('user_id'),
password=kwargs.pop('password'),
user_domain_id=kwargs.pop('user_domain_id'),
user_domain_name=kwargs.pop('user_domain_name'),
**kwargs)
def main(self, argv):
# Parse args once to find version and debug settings
parser = self.get_base_parser()
(options, args) = parser.parse_known_args(argv)
self.setup_debugging(options.debug)
self.options = options
# NOTE(dtroyer): Hackery to handle --endpoint_type due to argparse
# thinking usage-list --end is ambiguous; but it
# works fine with only --endpoint-type present
# Go figure.
if '--endpoint_type' in argv:
spot = argv.index('--endpoint_type')
argv[spot] = '--endpoint-type'
subcommand_parser = (
self.get_subcommand_parser(options.sahara_api_version)
)
self.parser = subcommand_parser
if options.help or not argv:
subcommand_parser.print_help()
return 0
args = subcommand_parser.parse_args(argv)
# Short-circuit and deal with help right away.
if args.func == self.do_help:
self.do_help(args)
return 0
elif args.func == self.do_bash_completion:
self.do_bash_completion(args)
return 0
# (os_username, os_tenant_name, os_tenant_id, os_auth_url,
# os_region_name, os_auth_system, endpoint_type, insecure,
# service_type, service_name, volume_service_name,
# bypass_url, os_cache, cacert) = ( #, timeout) = (
# args.os_username,
# args.os_tenant_name, args.os_tenant_id,
# args.os_auth_url,
# args.os_region_name,
# args.os_auth_system,
# args.endpoint_type, args.insecure,
# args.service_type,
# args.service_name, args.volume_service_name,
# args.bypass_url, args.os_cache,
# args.os_cacert, args.timeout)
(os_username, os_tenant_name, os_tenant_id,
os_auth_url, os_auth_system, endpoint_type,
service_type, bypass_url, os_cacert, insecure, region_name) = (
(args.os_username, args.os_tenant_name, args.os_tenant_id,
args.os_auth_url, args.os_auth_system, args.endpoint_type,
args.service_type, args.bypass_url, args.os_cacert, args.insecure,
args.region_name)
)
if os_auth_system and os_auth_system != "keystone":
auth_plugin = auth.load_plugin(os_auth_system)
else:
auth_plugin = None
# Fetched and set later as needed
os_password = None
if not endpoint_type:
endpoint_type = DEFAULT_ENDPOINT_TYPE
if not service_type:
service_type = DEFAULT_SERVICE_TYPE
# NA - there is only one service this CLI accesses
# service_type = utils.get_service_type(args.func) or service_type
# FIXME(usrleon): project id should be restricted the same way as
# os_username or os_password, but it is not, for compatibility.
if not cliutils.isunauthenticated(args.func):
if auth_plugin:
auth_plugin.parse_opts(args)
if not auth_plugin or not auth_plugin.opts:
if not os_username:
raise exc.CommandError("You must provide a username "
"via either --os-username or "
"env[OS_USERNAME]")
if not os_auth_url:
if os_auth_system and os_auth_system != 'keystone':
os_auth_url = auth_plugin.get_auth_url()
if not os_auth_url:
raise exc.CommandError("You must provide an auth url "
"via either --os-auth-url or "
"env[OS_AUTH_URL] or specify an "
"auth_system which defines a "
"default url with --os-auth-system "
"or env[OS_AUTH_SYSTEM]")
# NA
# if (options.os_compute_api_version and
# options.os_compute_api_version != '1.0'):
# if not os_tenant_name and not os_tenant_id:
# raise exc.CommandError("You must provide a tenant name "
# "or tenant id via --os-tenant-name, "
# "--os-tenant-id, env[OS_TENANT_NAME] "
# "or env[OS_TENANT_ID]")
#
# if not os_auth_url:
# raise exc.CommandError("You must provide an auth url "
# "via either --os-auth-url or env[OS_AUTH_URL]")
# NOTE: The Sahara client authenticates when you create it. So instead of
# creating here and authenticating later, which is what the novaclient
# does, we just create the client later.
# Now check for the password/token of which pieces of the
# identifying keyring key can come from the underlying client
if not cliutils.isunauthenticated(args.func):
# NA - Client can't be used with SecretsHelper
# helper = SecretsHelper(args, self.cs.client)
if (auth_plugin and auth_plugin.opts and
"os_password" not in auth_plugin.opts):
use_pw = False
else:
use_pw = True
# tenant_id, auth_token, management_url = (helper.tenant_id,
# helper.auth_token,
# helper.management_url)
#
# if tenant_id and auth_token and management_url:
# self.cs.client.tenant_id = tenant_id
# self.cs.client.auth_token = auth_token
# self.cs.client.management_url = management_url
# # authenticate just sets up some values in this case, no REST
# # calls
# self.cs.authenticate()
if use_pw:
# Auth using token must have failed or not happened
# at all, so now switch to password mode and save
# the token when it's gotten... using our keyring
# saver
# os_password = helper.password
os_password = args.os_password
if not os_password:
raise exc.CommandError(
'Expecting a password provided via either '
'--os-password, env[OS_PASSWORD], or '
'prompted response')
# self.cs.client.password = os_password
# self.cs.client.keyring_saver = helper
# V3 stuff
project_info_provided = (self.options.os_tenant_name or
self.options.os_tenant_id or
(self.options.os_project_name and
(self.options.os_project_domain_name or
self.options.os_project_domain_id)) or
self.options.os_project_id)
if (not project_info_provided):
raise exc.CommandError(
("You must provide a tenant_name, tenant_id, "
"project_id or project_name (with "
"project_domain_name or project_domain_id) via "
" --os-tenant-name (env[OS_TENANT_NAME]),"
" --os-tenant-id (env[OS_TENANT_ID]),"
" --os-project-id (env[OS_PROJECT_ID])"
" --os-project-name (env[OS_PROJECT_NAME]),"
" --os-project-domain-id "
"(env[OS_PROJECT_DOMAIN_ID])"
" --os-project-domain-name "
"(env[OS_PROJECT_DOMAIN_NAME])"))
if not os_auth_url:
raise exc.CommandError(
"You must provide an auth url "
"via either --os-auth-url or env[OS_AUTH_URL]")
keystone_session = None
keystone_auth = None
if not auth_plugin:
project_id = args.os_project_id or args.os_tenant_id
project_name = args.os_project_name or args.os_tenant_name
keystone_session = (session.Session().
load_from_argparse_arguments(args))
keystone_auth = self._get_keystone_auth(
keystone_session,
args.os_auth_url,
username=args.os_username,
user_id=args.os_user_id,
user_domain_id=args.os_user_domain_id,
user_domain_name=args.os_user_domain_name,
password=args.os_password,
auth_token=args.os_auth_token,
project_id=project_id,
project_name=project_name,
project_domain_id=args.os_project_domain_id,
project_domain_name=args.os_project_domain_name)
self.cs = client.Client(username=os_username,
api_key=os_password,
project_id=os_tenant_id,
project_name=os_tenant_name,
auth_url=os_auth_url,
sahara_url=bypass_url,
endpoint_type=endpoint_type,
session=keystone_session,
auth=keystone_auth,
cacert=os_cacert,
insecure=insecure,
service_type=service_type,
region_name=region_name)
args.func(self.cs, args)
# TODO(mattf) - add get_timings support to Client
# if args.timings:
# self._dump_timings(self.cs.get_timings())
def _dump_timings(self, timings):
class Tyme(object):
def __init__(self, url, seconds):
self.url = url
self.seconds = seconds
results = [Tyme(url, end - start) for url, start, end in timings]
total = 0.0
for tyme in results:
total += tyme.seconds
results.append(Tyme("Total", total))
cliutils.print_list(results, ["url", "seconds"], sortby_index=None)
def do_bash_completion(self, _args):
"""Prints arguments for bash-completion.
Prints all of the commands and options to stdout so that the
sahara.bash_completion script doesn't have to hard code them.
"""
commands = set()
options = set()
for sc_str, sc in self.subcommands.items():
commands.add(sc_str)
for option in sc._optionals._option_string_actions.keys():
options.add(option)
commands.remove('bash-completion')
commands.remove('bash_completion')
print(' '.join(commands | options))
@cliutils.arg('command', metavar='<subcommand>', nargs='?',
help='Display help for <subcommand>.')
def do_help(self, args):
"""Display help about this program or one of its subcommands."""
if args.command:
if args.command in self.subcommands:
self.subcommands[args.command].print_help()
else:
raise exc.CommandError("'%s' is not a valid subcommand" %
args.command)
else:
self.parser.print_help()
# I'm picky about my shell help.
class OpenStackHelpFormatter(argparse.HelpFormatter):
def start_section(self, heading):
# Title-case the headings
heading = '%s%s' % (heading[0].upper(), heading[1:])
super(OpenStackHelpFormatter, self).start_section(heading)
def main():
warnings.simplefilter('once', category=DeprecationWarning)
warnings.warn('The sahara CLI is deprecated in favor of OpenStackClient '
'plugin and will not be maintained anymore. '
'For a Python library, continue using python-saharaclient.',
DeprecationWarning)
warnings.resetwarnings()
try:
argv = [encodeutils.safe_decode(a) for a in sys.argv[1:]]
OpenStackSaharaShell().main(argv)
except Exception as e:
logger.debug(e, exc_info=1)
print("ERROR: %s" % encodeutils.safe_encode(six.text_type(e)),
file=sys.stderr)
sys.exit(1)
if __name__ == "__main__":
main()


@@ -1,47 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
import testtools
from saharaclient.api import base
from saharaclient.api import client
from requests_mock.contrib import fixture
class BaseTestCase(testtools.TestCase):
URL = 'http://localhost:8386'
TOKEN = 'token'
def setUp(self):
super(BaseTestCase, self).setUp()
self.responses = self.useFixture(fixture.Fixture())
self.client = client.Client(sahara_url=self.URL,
input_auth_token=self.TOKEN)
def assertFields(self, body, obj):
for key, value in six.iteritems(body):
self.assertEqual(value, getattr(obj, key))
class TestResource(base.Resource):
resource_name = 'Test Resource'
defaults = {'description': 'Test Description',
'extra': "extra"}
class TestManager(base.ResourceManager):
resource_class = TestResource


@@ -1,366 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
import sys
import fixtures
import mock
import six
from testtools import matchers
import saharaclient.api.client
from saharaclient.openstack.common.apiclient import exceptions
import saharaclient.shell
from saharaclient.tests.unit.nova import utils
FAKE_ENV = {'OS_USERNAME': 'username',
'OS_PASSWORD': 'password',
'OS_TENANT_NAME': 'tenant_name',
'OS_AUTH_URL': 'http://no.where'}
FAKE_ENV2 = {'OS_USERNAME': 'username',
'OS_PASSWORD': 'password',
'OS_TENANT_ID': 'tenant_id',
'OS_AUTH_URL': 'http://no.where'}
class FakePlugin(object):
name = 'fake'
versions = ['1.0', ]
title = 'a fake plugin'
class FakePluginManager(object):
def list(self):
return (FakePlugin(),)
class FakeImage(object):
name = 'fake'
id = 'aaa-bb-ccc'
username = 'you'
description = None
tags = []
class FakeImageManager(object):
def list(self):
return (FakeImage(),)
class FakePluginClient(object):
def __init__(self, *args, **kwargs):
self.plugins = FakePluginManager()
self.images = FakeImageManager()
class ShellTest(utils.TestCase):
def make_env(self, exclude=None, fake_env=FAKE_ENV):
env = dict((k, v) for k, v in fake_env.items() if k != exclude)
self.useFixture(fixtures.MonkeyPatch('os.environ', env))
def setUp(self):
super(ShellTest, self).setUp()
# NA atm
# self.useFixture(fixtures.MonkeyPatch(
# 'novaclient.client.get_client_class',
# mock.MagicMock))
# self.nc_util = mock.patch('novaclient.utils.isunauthenticated').start()
# self.nc_util.return_value = False
def shell(self, argstr, exitcodes=(0,)):
orig = sys.stdout
orig_stderr = sys.stderr
try:
sys.stdout = six.StringIO()
sys.stderr = six.StringIO()
_shell = saharaclient.shell.OpenStackSaharaShell()
_shell.main(argstr.split())
except SystemExit:
exc_type, exc_value, exc_traceback = sys.exc_info()
self.assertIn(exc_value.code, exitcodes)
finally:
stdout = sys.stdout.getvalue()
sys.stdout.close()
sys.stdout = orig
stderr = sys.stderr.getvalue()
sys.stderr.close()
sys.stderr = orig_stderr
return (stdout, stderr)
def test_help_unknown_command(self):
self.assertRaises(exceptions.CommandError, self.shell, 'help foofoo')
# NA
# def test_invalid_timeout(self):
# for f in [0, -1, -10]:
# cmd_text = '--timeout %s' % (f)
# stdout, stderr = self.shell(cmd_text, exitcodes=[0, 2])
# required = [
# 'argument --timeout: %s must be greater than 0' % (f),
# ]
# for r in required:
# self.assertIn(r, stderr)
def test_help(self):
required = [
'.*?^usage: sahara',
'.*?^\s+plugin-list\s+Print a list of available plugins.',
'.*?^See "sahara help COMMAND" for help on a specific command',
]
stdout, stderr = self.shell('help')
for r in required:
self.assertThat((stdout + stderr),
matchers.MatchesRegex(r, re.DOTALL | re.MULTILINE))
def test_help_on_subcommand(self):
required = [
'.*?^usage: sahara plugin-list',
'.*?^Print a list of available plugins.',
]
stdout, stderr = self.shell('help plugin-list')
for r in required:
self.assertThat((stdout + stderr),
matchers.MatchesRegex(r, re.DOTALL | re.MULTILINE))
def test_help_no_options(self):
required = [
'.*?^usage: sahara',
'.*?^\s+plugin-list\s+Print a list of available plugins.',
'.*?^See "sahara help COMMAND" for help on a specific command',
]
stdout, stderr = self.shell('')
for r in required:
self.assertThat((stdout + stderr),
matchers.MatchesRegex(r, re.DOTALL | re.MULTILINE))
def test_bash_completion(self):
stdout, stderr = self.shell('bash-completion')
# just check we have some output
required = [
'.*help',
'.*plugin-list',
'.*plugin-show',
'.*--name']
for r in required:
self.assertThat((stdout + stderr),
matchers.MatchesRegex(r, re.DOTALL | re.MULTILINE))
def test_no_username(self):
required = ('You must provide a username'
' via either --os-username or env[OS_USERNAME]',)
self.make_env(exclude='OS_USERNAME')
try:
self.shell('plugin-list')
except exceptions.CommandError as message:
self.assertEqual(required, message.args)
else:
self.fail('CommandError not raised')
def test_no_tenant_name(self):
required = (
'You must provide a tenant_name, tenant_id, '
'project_id or project_name (with '
'project_domain_name or project_domain_id) via '
' --os-tenant-name (env[OS_TENANT_NAME]),'
' --os-tenant-id (env[OS_TENANT_ID]),'
' --os-project-id (env[OS_PROJECT_ID])'
' --os-project-name (env[OS_PROJECT_NAME]),'
' --os-project-domain-id '
'(env[OS_PROJECT_DOMAIN_ID])'
' --os-project-domain-name '
'(env[OS_PROJECT_DOMAIN_NAME])',
)
self.make_env(exclude='OS_TENANT_NAME')
try:
self.shell('plugin-list')
except exceptions.CommandError as message:
self.assertEqual(required, message.args)
else:
self.fail('CommandError not raised')
def test_no_tenant_id(self):
required = (
'You must provide a tenant_name, tenant_id, '
'project_id or project_name (with '
'project_domain_name or project_domain_id) via '
' --os-tenant-name (env[OS_TENANT_NAME]),'
' --os-tenant-id (env[OS_TENANT_ID]),'
' --os-project-id (env[OS_PROJECT_ID])'
' --os-project-name (env[OS_PROJECT_NAME]),'
' --os-project-domain-id '
'(env[OS_PROJECT_DOMAIN_ID])'
' --os-project-domain-name '
'(env[OS_PROJECT_DOMAIN_NAME])',
)
self.make_env(exclude='OS_TENANT_ID', fake_env=FAKE_ENV2)
try:
self.shell('plugin-list')
except exceptions.CommandError as message:
self.assertEqual(required, message.args)
else:
self.fail('CommandError not raised')
def test_no_auth_url(self):
required = ('You must provide an auth url'
' via either --os-auth-url or env[OS_AUTH_URL] or'
' specify an auth_system which defines a default url'
' with --os-auth-system or env[OS_AUTH_SYSTEM]',)
self.make_env(exclude='OS_AUTH_URL')
try:
self.shell('plugin-list')
except exceptions.CommandError as message:
self.assertEqual(required, message.args)
else:
self.fail('CommandError not raised')
# @mock.patch('sys.stdin', side_effect=mock.MagicMock)
# @mock.patch('getpass.getpass', return_value='password')
# def test_password(self, mock_getpass, mock_stdin):
@mock.patch('saharaclient.api.client.Client', FakePluginClient)
def test_password(self):
ex = (
'+------+----------+---------------+\n'
'| name | versions | title |\n'
'+------+----------+---------------+\n'
'| fake | 1.0 | a fake plugin |\n'
'+------+----------+---------------+\n'
)
# self.make_env(exclude='OS_PASSWORD')
self.make_env()
stdout, stderr = self.shell('plugin-list')
self.assertEqual(ex, (stdout + stderr))
# @mock.patch('sys.stdin', side_effect=mock.MagicMock)
# @mock.patch('getpass.getpass', side_effect=EOFError)
# def test_no_password(self, mock_getpass, mock_stdin):
def test_no_password(self):
required = ('Expecting a password provided'
' via either --os-password, env[OS_PASSWORD],'
' or prompted response',)
self.make_env(exclude='OS_PASSWORD')
try:
self.shell('plugin-list')
except exceptions.CommandError as message:
self.assertEqual(required, message.args)
else:
self.fail('CommandError not raised')
# TODO(mattf) Only one version of API right now
# def _test_service_type(self, version, service_type, mock_client):
# if version is None:
# cmd = 'list'
# else:
# cmd = '--os-compute-api-version %s list' % version
# self.make_env()
# self.shell(cmd)
# _, client_kwargs = mock_client.call_args
# self.assertEqual(service_type, client_kwargs['service_type'])
#
# @mock.patch('novaclient.client.Client')
# def test_default_service_type(self, mock_client):
# self._test_service_type(None, 'compute', mock_client)
#
# @mock.patch('novaclient.client.Client')
# def test_v1_1_service_type(self, mock_client):
# self._test_service_type('1.1', 'compute', mock_client)
#
# @mock.patch('novaclient.client.Client')
# def test_v2_service_type(self, mock_client):
# self._test_service_type('2', 'compute', mock_client)
#
# @mock.patch('novaclient.client.Client')
# def test_v3_service_type(self, mock_client):
# self._test_service_type('3', 'computev3', mock_client)
#
# @mock.patch('novaclient.client.Client')
# def test_v_unknown_service_type(self, mock_client):
# self._test_service_type('unknown', 'compute', mock_client)
@mock.patch('saharaclient.api.client.Client', FakePluginClient)
def test_image_list(self):
ex = (
'+------+------------+----------+------+-------------+\n'
'| name | id | username | tags | description |\n'
'+------+------------+----------+------+-------------+\n'
'| fake | aaa-bb-ccc | you | | None |\n'
'+------+------------+----------+------+-------------+\n'
)
self.make_env()
stdout, stderr = self.shell('image-list')
self.assertEqual(ex, (stdout + stderr))
class ShellTestKeystoneV3(ShellTest):
FAKE_V3_ENV = {'OS_USERNAME': 'username',
'OS_PASSWORD': 'password',
'OS_PROJECT_NAME': 'project_name',
'OS_PROJECT_DOMAIN_NAME': 'project_domain_name',
'OS_USER_DOMAIN_NAME': 'user_domain_name',
'OS_AUTH_URL': 'http://no.where/v3'}
version_id = u'v3'
links = [{u'href': u'http://no.where/v3', u'rel': u'self'}]
def make_env(self, exclude=None, fake_env=FAKE_V3_ENV):
if 'OS_AUTH_URL' in fake_env:
fake_env.update({'OS_AUTH_URL': 'http://no.where/v3'})
env = dict((k, v) for k, v in fake_env.items() if k != exclude)
self.useFixture(fixtures.MonkeyPatch('os.environ', env))
def test_no_tenant_name(self):
# In V3, tenant_name = project_name
required = (
'You must provide a tenant_name, tenant_id, '
'project_id or project_name (with '
'project_domain_name or project_domain_id) via '
' --os-tenant-name (env[OS_TENANT_NAME]),'
' --os-tenant-id (env[OS_TENANT_ID]),'
' --os-project-id (env[OS_PROJECT_ID])'
' --os-project-name (env[OS_PROJECT_NAME]),'
' --os-project-domain-id '
'(env[OS_PROJECT_DOMAIN_ID])'
' --os-project-domain-name '
'(env[OS_PROJECT_DOMAIN_NAME])',
)
self.make_env(exclude='OS_PROJECT_NAME')
try:
self.shell('plugin-list')
except exceptions.CommandError as message:
self.assertEqual(required, message.args)
else:
self.fail('CommandError not raised')
def test_job_list(self):
expected = '\n'.join([
'+----+------------+------------+--------+',
'| id | cluster_id | start_time | status |',
'+----+------------+------------+--------+',
'+----+------------+------------+--------+',
''
])
mock_get_service_type_method_name = (
'saharaclient.api.client.Client._determine_service_type')
mock_job_executions_class_name = (
'saharaclient.api.job_executions.JobExecutionsManager')
with mock.patch(mock_get_service_type_method_name) as mock_st:
with mock.patch(mock_job_executions_class_name):
mock_st.return_value = 'data-processing'
self.make_env()
stdout, stderr = self.shell('job-list')
self.assertEqual((stdout + stderr), expected)

Some files were not shown because too many files have changed in this diff