Retire stackforge/puppet_openstack_builder

Monty Taylor 2015-10-17 16:04:28 -04:00
parent 38fcbea817
commit 20915a6e1e
136 changed files with 7 additions and 7660 deletions

8
.gitignore vendored
@@ -1,8 +0,0 @@
modules
module/*
.tmp
.vagrant
*.log*
data/hiera_data/jenkins.yaml
data/global_hiera_params/jenkins.yaml
*.pyc

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=stackforge/puppet_openstack_builder

201
LICENSE
@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -1,203 +0,0 @@
# the account where the Openstack modules should come from
#
# this file also accepts a few environment variables
#
git_protocol=ENV['git_protocol'] || 'git'
openstack_version=ENV['openstack_version'] || 'icehouse'
#
# this modulefile has been configured to use two sets of repos.
# The downstream repos that Cisco has forked, or the upstream repos
# that they are derived from (and should be maintained in sync with)
#
#
# this is just targeting the upstream stackforge modules
# right now, and the logic for using downstream does not
# work yet
#
unless ['grizzly', 'havana', 'icehouse'].include?(openstack_version)
abort("Only grizzly, havana, and icehouse are currently supported")
end
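# Example usage (illustrative comment, not part of the original logic): running
#   openstack_version=icehouse repos_to_use=downstream librarian-puppet install --verbose
# resolves the modules below against the CiscoSystems icehouse branches, while
# leaving repos_to_use unset keeps the default stackforge/puppetlabs sources.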
if openstack_version == 'grizzly'
neutron_name = 'quantum'
else
neutron_name = 'neutron'
end
if ENV['repos_to_use'] == 'downstream'
# this assumes downstream which is the Cisco branches
branch_name = "#{openstack_version}"
cisco_branch_name = branch_name
openstack_module_branch = branch_name
openstack_module_account = 'CiscoSystems'
puppetlabs_module_prefix = 'CiscoSystems/puppet-'
apache_branch = branch_name
mysql_branch = branch_name
rabbitmq_branch = branch_name
else
if openstack_version == 'grizzly'
openstack_module_branch = 'stable/grizzly'
elsif openstack_version == 'havana'
openstack_module_branch = 'stable/havana'
elsif openstack_version == 'icehouse'
openstack_module_branch = 'stable/icehouse'
else
abort('only grizzly, havana, and icehouse are supported')
end
# use the upstream modules where they exist
branch_name = 'master'
cisco_branch_name = "#{openstack_version}"
openstack_module_account = 'stackforge'
puppetlabs_module_prefix = 'puppetlabs/puppetlabs-'
apache_branch = '0.x'
mysql_branch = '2.2.x'
rabbitmq_branch = '2.x'
end
base_url = "#{git_protocol}://github.com"
###### modules under development #####
# the following modules are still undergoing their initial development
# and have not yet been ported to CiscoSystems.
mod 'bodepd/scenario_node_terminus',
:git => 'https://github.com/bodepd/scenario_node_terminus'
mod 'CiscoSystems/coi',
:git => "#{base_url}/CiscoSystems/puppet-coi",
:ref => cisco_branch_name
mod 'puppetlabs/postgresql',
:git => "#{base_url}/puppetlabs/puppetlabs-postgresql",
:ref => '2.5.0'
mod 'puppetlabs/puppetdb',
:git => "#{base_url}/puppetlabs/puppetlabs-puppetdb",
:ref => '2.0.0'
mod 'puppetlabs/vcsrepo',
:git => "#{base_url}/puppetlabs/puppetlabs-vcsrepo",
:ref => '0.1.2'
mod 'ripienaar/ruby-puppetdb',
:git => "#{base_url}/ripienaar/ruby-puppetdb"
mod 'ripienaar/catalog-diff',
:git => "#{base_url}/ripienaar/puppet-catalog-diff",
:ref => 'master'
mod 'puppetlabs/firewall',
:git => "#{base_url}/puppetlabs/puppetlabs-firewall",
:ref => '0.4.0'
mod 'stephenrjohnson/puppet',
:git => "#{base_url}/stephenrjohnson/puppetlabs-puppet",
:ref => '0.0.18'
###### stackforge openstack modules #####
openstack_repo_prefix = "#{base_url}/#{openstack_module_account}/puppet-"
[
'openstack',
'cinder',
'glance',
'keystone',
'horizon',
'nova',
neutron_name,
'swift',
'tempest',
'heat',
].each do |module_name|
mod "stackforge/#{module_name}",
:git => "#{openstack_repo_prefix}#{module_name}",
:ref => openstack_module_branch
end
# stackforge modules with no grizzly release
[
'ceilometer',
'vswitch'
].each do |module_name|
mod "stackforge/#{module_name}",
:git => "#{openstack_repo_prefix}#{module_name}",
:ref => 'master'
end
##### Puppet Labs modules #####
# this module needs to be aligned with upstream
mod 'puppetlabs/apt',
:git => "#{base_url}/CiscoSystems/puppet-apt",
:ref => cisco_branch_name
[
'stdlib',
'xinetd',
'ntp',
'rsync',
'inifile'
# 'mongodb'
].each do |module_name|
mod "puppetlabs/#{module_name}",
:git => "#{base_url}/#{puppetlabs_module_prefix}#{module_name}",
:ref => branch_name
end
## PuppetLabs modules that are too unstable to use master ##
{
'mysql' => mysql_branch,
'rabbitmq' => rabbitmq_branch,
'apache' => apache_branch
}.each do |module_name, ref|
mod "puppetlabs/#{module_name}",
:git => "#{base_url}/#{puppetlabs_module_prefix}#{module_name}",
:ref => ref
end
##### modules with other upstreams #####
mod 'saz/memcached',
:git => "#{base_url}/CiscoSystems/puppet-memcached",
:ref => cisco_branch_name
mod 'saz/ssh',
:git => "#{base_url}/bodepd/puppet-ssh",
:ref => 'master'
mod 'duritong/sysctl',
:git => "#{base_url}/CiscoSystems/puppet-sysctl",
:ref => cisco_branch_name
##### Modules without upstreams #####
cisco_module_prefix = "#{base_url}/CiscoSystems/puppet-"
[
'cephdeploy',
'coe',
'cobbler',
'concat',
'apt-cacher-ng',
'collectd',
'graphite',
'pip',
'dnsmasq',
].each do |module_name|
mod "CiscoSystems/#{module_name}",
:git => "#{cisco_module_prefix}#{module_name}",
:ref => cisco_branch_name
end
#### HA Modules ###
[
'augeas',
'filemapper',
'galera',
'haproxy',
'keepalived',
'network',
'openstack-ha',
'boolean'
].each do |module_name|
mod "CiscoSystems/#{module_name}",
:git => "#{cisco_module_prefix}#{module_name}",
:ref => cisco_branch_name
end

166
README.md
@@ -1,166 +0,0 @@
Openstack Installer
================
Project for building out OpenStack COE.
## Spinning up VMs with Vagrant
This project historically supported spinning up VMs to test OpenStack with Vagrant.
This approach is recommended for development environments or for users who want
to get up and running in the simplest way possible.
### requirements
This setup requires that a few additional dependencies are installed:
* virtualbox
* vagrant
### Developer instructions
Developers should start by installing the following simple utility:
gem install librarian-puppet-simple
or, if you want to build from scratch, or keep these gems separate:
mkdir vendor
export GEM_HOME=`pwd`/vendor
gem install thor --no-ri --no-rdoc
git clone git://github.com/bodepd/librarian-puppet-simple vendor/librarian-puppet-simple
export PATH=`pwd`/vendor/librarian-puppet-simple/bin/:$PATH
Once this library is installed, you can run the following command from this project's
root directory. It uses the Puppetfile to clone the OpenStack modules and the COE manifests into the modules directory, and it can easily be configured to pull from your own repositories instead of the Cisco or Stackforge ones. The default is to use the Stackforge modules.
To use the CiscoSystems releases of the puppet modules:
export repos_to_use=downstream
To download the modules:
librarian-puppet install --verbose
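For example, one way to pin a release before downloading (a sketch; the environment variables are the ones read by this repository's Puppetfile, and the values here are only illustrative):

    export openstack_version=icehouse   # grizzly, havana, and icehouse are supported
    export repos_to_use=downstream      # optional: use the CiscoSystems forks instead of Stackforge
    librarian-puppet install --verbose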
### Configuration
There is a config.yaml file (under the data directory) that can be edited to suit the environment.
The apt-cache server can be any server running apt-cacher-ng - it doesn't have to be the cache instance mentioned below if you already have one handy. It can be set to false to disable the use of apt-cacher altogether.
The apt-mirror will be used to set sources.list on each machine, and on the build server it will be used to import the 30 MB Ubuntu netboot image used during the PXE deploy process.
Make sure the domain matches the domain specified in the site.pp in the manifests you intend to use.
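A minimal data/config.yaml might look like the following (a sketch; the keys are the ones read by the Vagrantfile, the values are placeholders to adapt to your environment):

    cat > data/config.yaml <<EOF
    # any host running apt-cacher-ng, or false to disable caching
    apt_cache: 192.168.242.99
    # replaces us.archive.ubuntu.com in each machine's sources.list
    apt_mirror: mirror.example.com
    # must match the domain used in the site.pp you intend to apply
    domain: example.com
    scenario: 2_role
    EOF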
### Spinning up virtual machines with vagrant
Now that you have set up the puppet content, the next step is to build
out your multi-node environment using vagrant.
First, deploy the apt-cacher-ng instance:
vagrant up cache
Next, bring up the build server:
vagrant up build
Now, bring up the blank boxes so that they can PXE boot against the master
vagrant up control_basevm
vagrant up compute_basevm
You have now created a fully functional OpenStack environment. Have a look at some of the services:
* service dashboard: http://192.168.242.100/
* horizon: http://192.168.242.10/ (username: admin, password: Cisco123)
Log into your controller:
vagrant ssh control_basevm
and run through the 'Deploy Your First VM' section of this document:
http://docwiki.cisco.com/wiki/OpenStack:Folsom-Multinode#Creating_a_build_server
## Spinning up virtual machines with Openstack
The data model in this repository can be consumed by the scenariobuilder tool. To install it, use pip:
pip install scenariobuilder
The 'sb' tool can then be used with Openstack credentials to instantiate the data model in VMs on an Openstack cloud. For more information see: https://github.com/CiscoSystems/scenariobuilder
# Basic install against already provisioned nodes (Ubuntu 12.04.3 LTS):
### install your All-in-one Build, Control, Network, Compute, and Cinder node:
These instructions assume you will be building against a machine that has two interfaces:
* 'eth0' for management and API access, also used for GRE/VXLAN tunnels via OVS.
* 'eth1' for 'external' network access (in single provider router mode). This interface is expected to provide an external router and IP address range, and will leverage the l3_agent functionality to provide outbound overloaded NAT to the VMs and 1:1 NAT with Floating IPs.

The current default setup also assumes a very small "generic" Cinder setup, unless you create an LVM volume group called cinder-volume with free space for persistent block volumes to be deployed against.
Log in to your all_in_one node, and bootstrap it into production:
bash <(curl -fsS https://raw.github.com/stackforge/puppet\_openstack\_builder/master/install-scripts/install.sh)
You can override the default parameters, such as the ethernet interface names, hostname, and default IP address, if you choose:
scenario : change this to a scenario defined in data/scenarios, defaults to all_in_one
build_server : Hostname for your build-server, defaults to `` `hostname` ``
domain_name : Domain name for your system, defaults to `` `hostname -d` ``
default_interface : This is the interface name for your management and API interfaces (and tunnel endpoints), defaults to eth0
external_interface : This is the interface name for your "l3_agent provider router external network", defaults to eth1
build_server_ip : This is the IP that any additional devices can reach your build server on, defaults to the default_interface IP address
ntp_server : This is needed to keep puppet in sync across multiple nodes, defaults to ntp.esl.cisco.com
puppet_run_mode : Defaults to apply, and for AIO there is not a puppetmaster yet.
To change these parameters, do something like:
scenario=2_role bash <(curl.....master.sh)
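For instance, to select a scenario and non-default interfaces in one shot (a sketch; the variable names are the ones listed above, the values are placeholders):

    export scenario=2_role
    export default_interface=em1
    export external_interface=em2
    export ntp_server=0.pool.ntp.org
    bash <(curl -fsS https://raw.github.com/stackforge/puppet_openstack_builder/master/install-scripts/install.sh)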
### add additional nodes
Adding additional nodes is fairly straightforward (for the all_in_one scenario, compute nodes can be added; other roles require a bit of additional effort by expanding the all_in_one scenario).
1) On the All-in-one node, add a role mapping for the new node:
echo "compute_node_name: compute" >> /etc/puppet/data/role_mappings.yaml
2) Build the physical or virtual compute node
3) Configure the system to point to the all_in_one node for puppet deployment and set up the right version of puppet on the node:
export build_server_ip=X.X.X.X ; export master=false ; bash <(curl -fsS https://raw.github.com/stackforge/puppet\_openstack\_builder/master/install-scripts/install.sh)
After this, you may still have to run puppet in "agent" mode to actually deploy the OpenStack elements:
``
puppet agent -td --server build-server.`hostname -d` --certname `hostname -f`
``
### If other role types are desired
At the scenario level, the choices are in:
/etc/puppet/data/scenarios
And you can extend the all_in_one scenario, or leverage a different variant all together.
Defaults for end user data should be located in one of the following files:
/etc/puppet/data/hiera_data/user.yaml
/etc/puppet/data/hiera_data/user.common.yaml
/etc/puppet/data/hiera_data/user.<scenario>.yaml
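For example (a sketch using the paths listed above; any keys you add to the user files are specific to your deployment):

    ls /etc/puppet/data/scenarios                         # scenarios you can extend or switch to
    vi /etc/puppet/data/hiera_data/user.yaml              # global end-user overrides
    vi /etc/puppet/data/hiera_data/user.all_in_one.yaml   # overrides for the all_in_one scenario only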
### Using a vendor modification
You can specify a vendor, which will change both the apt repository and the git repository the data defining the deployment is drawn from. Currently the only option is cisco, which can be set by:
export vendor=cisco
### Additional information on the data model being leveraged is available in the data directory of this repository.

7
README.rst Normal file
@@ -0,0 +1,7 @@
This project is no longer maintained.
The contents of this repository are still available in the Git source code
management system. To see the contents of this repository before it reached
its end of life, please check out the previous commit with
"git checkout HEAD^1".

259
Vagrantfile vendored
@@ -1,259 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'yaml'
require 'fileutils'
# Four networks:
# 0 - VM host NAT
# 1 - COE build/deploy
# 2 - COE openstack internal
# 3 - COE openstack external (public)
def parse_vagrant_config(
config_file=File.expand_path(File.join(File.dirname(__FILE__), 'data', 'config.yaml'))
)
config = {
'gui_mode' => false,
'operatingsystem' => 'ubuntu',
'verbose' => false,
'update_repos' => true,
'scenario' => '2_role'
}
if File.exists?(config_file)
overrides = YAML.load_file(config_file)
config.merge!(overrides)
end
config
end
#
# process the node group that is used to determine the
# nodes that should be provisioned. The group of nodes
# can be set with the node_group param from config.yaml
# and maps to its corresponding file in the nodes directory.
#
def process_nodes(config)
v_config = parse_vagrant_config
node_group = v_config['scenario']
node_group_file = File.expand_path(File.join(File.dirname(__FILE__), 'data', 'nodes', "#{node_group}.yaml"))
abort('node_group must be specified in config') unless node_group
abort('file must exist for node group') unless File.exists?(node_group_file)
(YAML.load_file(node_group_file)['nodes'] || {}).each do |name, options|
config.vm.define(options['vagrant_name'] || name) do |config|
apt_cache_proxy = ''
unless options['apt_cache'] == false || options['apt_cache'] == 'false'
if v_config['apt_cache'] != 'false'
apt_cache_proxy = 'echo "Acquire::http { Proxy \"http://%s:3142\"; };" > /etc/apt/apt.conf.d/01apt-cacher-ng-proxy;' % ( options['apt_cache'] || v_config['apt_cache'] )
end
end
configure_openstack_node(
config,
name,
options['memory'],
options['image_name'] || v_config['operatingsystem'],
options['ip_number'],
options['puppet_type'] || 'agent',
apt_cache_proxy,
v_config,
options['post_config']
)
end
end
end
# get the correct box based on the specified type
# currently, this supports the precise64 (ubuntu) and centos boxes
def get_box(config, box_type)
if box_type == 'precise64' || box_type == 'ubuntu'
config.vm.box = 'precise64'
config.vm.box_url = 'http://files.vagrantup.com/precise64.box'
elsif box_type == 'centos' || box_type == 'redhat'
config.vm.box = 'centos'
config.vm.box_url = 'http://developer.nrel.gov/downloads/vagrant-boxes/CentOS-6.4-x86_64-v20130427.box'
else
abort("Box type: #{box_type} is no good.")
end
end
#
# setup networks for openstack. Currently, this just sets up
# 4 virtual interfaces as follows:
#
# * eth1 => 192.168.242.0/24
# this is the network that the openstack services use to communicate with each other
# * eth2 => 10.2.3.0/24
# * eth3 => 10.3.3.0/24
#
# == Parameters
# config - vm config object
# number - the last octet to use in each /24 network
# options - additional options
# eth1_mac - mac address to set for eth1 (used for PXE booting)
#
def setup_networks(config, number, options = {})
config.vm.network :hostonly, "192.168.242.#{number}", :mac => options[:eth1_mac]
config.vm.network :hostonly, "10.2.3.#{number}"
config.vm.network :hostonly, "10.3.3.#{number}"
# set eth3 in promiscuous mode
config.vm.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
# set the boot priority to use eth1
config.vm.customize(['modifyvm', :id ,'--nicbootprio2','1'])
end
#
# setup the hostname of our box
#
def setup_hostname(config, hostname)
config.vm.customize ['modifyvm', :id, '--name', hostname]
config.vm.host_name = hostname
end
#
# run puppet apply on the site manifest
#
def apply_manifest(config, v_config, manifest_name='site.pp', certname=nil, puppet_type=nil)
options = []
if v_config['verbose']
options = options + ['--verbose', '--trace', '--debug', '--show_diff']
end
if certname
options.push("--certname #{certname}")
else
# I need to add a special certname here to
# ensure its hostname does not match the ENC
# which could cause the node to be configured
# from the setup manifest on the second run
options.push('--certname setup')
end
# ensure that when puppet applies the site manifest, it has hiera configured
if manifest_name == 'site.pp'
config.vm.share_folder("data", '/etc/puppet/data', './data')
end
config.vm.share_folder("ssh", '/root/.ssh', './dot-ssh')
# Explicitly mount the shared folders, so we don't break with newer versions of Vagrant
config.vm.share_folder("modules", '/etc/puppet/modules', './modules/')
config.vm.share_folder("manifests", '/etc/puppet/manifests', './manifests/')
config.vm.provision :shell do |shell|
script =
"if grep 127.0.1.1 /etc/hosts ; then \n" +
" sed -i -e \"s/127.0.1.1.*/127.0.1.1 $(hostname).#{v_config['domain']} $(hostname)/\" /etc/hosts\n" +
"else\n" +
" echo '127.0.1.1 $(hostname).#{v_config['domain']} $(hostname)' >> /etc/hosts\n" +
"fi ;"
shell.inline = script
end
config.vm.provision(:puppet, :pp_path => "/etc/puppet") do |puppet|
puppet.manifests_path = 'manifests'
puppet.manifest_file = manifest_name
puppet.module_path = 'modules'
puppet.options = options
puppet.facter = {
"build_server_ip" => "192.168.242.100",
"build_server_domain_name" => v_config['domain'],
"puppet_run_mode" => puppet_type,
}
end
# uninstall the puppet gem b/c setup.pp installs the puppet package
if manifest_name == 'setup.pp'
config.vm.provision :shell do |shell|
shell.inline = "gem uninstall -x -a puppet;echo -e '#!/bin/bash\npuppet agent $@' > /sbin/puppetd;chmod a+x /sbin/puppetd"
end
end
end
# run the puppet agent
def run_puppet_agent(
config,
node_name,
v_config = {},
master = "build-server.#{v_config['domain']}"
)
options = ["--certname #{node_name}", '-t', '--pluginsync']
if v_config['verbose']
options = options + ['--trace', '--debug', '--show_diff']
end
config.vm.provision(:puppet_server) do |puppet|
puppet.puppet_server = master
puppet.options = options
end
end
#
# configure apt repos with mirrors and proxies and what-not
# I really want to move this to puppet
#
def configure_apt_mirror(config, apt_mirror, apt_cache_proxy)
# Configure apt mirror
config.vm.provision :shell do |shell|
shell.inline = "sed -i 's/us.archive.ubuntu.com/%s/g' /etc/apt/sources.list" % apt_mirror
end
config.vm.provision :shell do |shell|
shell.inline = '%s apt-get update;apt-get install ubuntu-cloud-keyring' % apt_cache_proxy
end
end
#
# method that performs all of the openstack config
#
def configure_openstack_node(
config,
node_name,
memory,
box_name,
net_id,
puppet_type,
apt_cache_proxy,
v_config,
post_config = false
)
cert_name = node_name
get_box(config, box_name)
setup_hostname(config, node_name)
config.vm.customize ["modifyvm", :id, "--memory", memory]
setup_networks(config, net_id)
if v_config['operatingsystem'] == 'ubuntu' and apt_cache_proxy
configure_apt_mirror(config, v_config['apt_mirror'], apt_cache_proxy)
end
apply_manifest(config, v_config, 'setup.pp', nil, puppet_type)
if puppet_type == 'apply'
apply_manifest(config, v_config, 'site.pp', cert_name)
elsif puppet_type == 'agent'
run_puppet_agent(config, cert_name, v_config)
else
abort("Unexpected puppet_type #{puppet_type}")
end
if post_config
Array(post_config).each do |shell_command|
config.vm.provision :shell do |shell|
shell.inline = shell_command
end
end
end
end
Vagrant::Config.run do |config|
process_nodes(config)
end

BIN
blank.box

Binary file not shown.

@@ -1 +0,0 @@
Directory to contain scripts not related to installation.

@@ -1,98 +0,0 @@
Openstack by Aptira
===================
## Overview
This is a revision of the data model with the following goals:
- Remove dependency on the scenario_node_terminus
- Implement data model in pure hiera
- Support Centos/RHEL targets
- Support masterless deployment
- Simplify node bootstrapping
- Make HA a core feature rather than an add-on
- Move all modules to master branch
All while providing a clean migration path from the current method.
## Requirements
Currently, this distribution assumes it has been provided with already-provisioned
Centos 6 servers each with more than one network interface. For production
deployments it is recommended to have additional interfaces, as the data model can
distinguish between the following network functions and assign an interface to each:
- deployment network
- public API network
- private network
- external floating IP network
## Installation
Before installing the distribution, review the following options which are available:
Set an http proxy to use for installation (default: not set)
export proxy='http://my_proxy:8000'
Set the network interface to use for deployment (default: eth1)
export network='eth0'
set install destination for the distribution (default: $HOME)
export dest='/var/lib/stacktira'
Once you have set the appropriate customisations, install the Aptira distribution by
running the following command:
\curl -sSL https://raw.github.com/michaeltchapman/puppet_openstack_builder/stacktira/contrib/aptira/installer/bootstrap.sh | bash
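Putting the options together, a proxied install that deploys over eth0 might look like this (a sketch; the proxy value is a placeholder and the URL is the one shown above):

    export proxy='http://my_proxy:8000'
    export network='eth0'
    \curl -sSL https://raw.github.com/michaeltchapman/puppet_openstack_builder/stacktira/contrib/aptira/installer/bootstrap.sh | bash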
## Configuration
The distribution is most easily customised by editing the file
/etc/puppet/data/hiera_data/user.yaml. A sample will be placed there if
one doesn't exist during installation and this should be reviewed before
continuing. In particular, make sure all the IP addresses and interfaces
are correct for your deployment.
## Deployment
To deploy a control node, run the following command:
puppet apply /etc/puppet/manifests/site.pp --certname control-`hostname`
To deploy a compute node, run the following command:
puppet apply /etc/puppet/manifests/site.pp --certname compute-`hostname`
## Development Environment Installation
First, clone the repo and checkout the experimental stacktira branch
git clone https://github.com/michaeltchapman/puppet_openstack_builder
git checkout stacktira
The conversion from scenario_node_terminus yaml to pure hiera is done by
a script which requires PyYAML. Install this library either via your distro's
package manager or using pip.
pip install PyYaml
Run the conversion script. This will replace the Puppetfile, Vagrantfile,
manifests and data directories with the stacktira version:
python contrib/aptira/build/convert.py
Install the modules:
mkdir -p vendor
export GEM_HOME=vendor
gem install librarian-puppet
vendor/bin/librarian-puppet install
Now you can boot using the control* and compute* vms, or using rawbox to test
out the public tarball available from Aptira.
## Authors
Michael Chapman

@@ -1,238 +0,0 @@
git_protocol = ENV['git_protocol'] || 'https'
reposource = ENV['reposource'] || 'downstream'
git_protocol = 'https'
if reposource == 'downstream'
author = 'aptira'
ref = 'stacktira'
else
ref = 'master'
end
# apache
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/apache', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-apache.git", :ref => ref
# apt
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/apt', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-apt.git", :ref => ref
# ceilometer
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/ceilometer', :git => "#{git_protocol}://github.com/#{author}/puppet-ceilometer.git", :ref => ref
# cinder
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/cinder', :git => "#{git_protocol}://github.com/#{author}/puppet-cinder.git", :ref => ref
# concat
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/concat', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-concat.git", :ref => ref
# devtools
if reposource != 'downstream'
author = 'Spredzy'
end
mod 'Spredzy/devtools', :git => "#{git_protocol}://github.com/#{author}/puppet-devtools.git", :ref => ref
# dnsmasq
if reposource != 'downstream'
author = 'netmanagers'
end
mod 'netmanagers/dnsmasq', :git => "#{git_protocol}://github.com/#{author}/puppet-dnsmasq.git", :ref => ref
# edeploy
if reposource != 'downstream'
author = 'michaeltchapman'
end
mod 'michaeltchapman/edeploy', :git => "#{git_protocol}://github.com/#{author}/puppet-edeploy.git", :ref => ref
# firewall
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/firewall', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-firewall.git", :ref => ref
# galera
if reposource != 'downstream'
author = 'michaeltchapman'
end
mod 'michaeltchapman/galera', :git => "#{git_protocol}://github.com/#{author}/puppet-galera.git", :ref => ref
# glance
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/glance', :git => "#{git_protocol}://github.com/#{author}/puppet-glance.git", :ref => ref
# haproxy
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/haproxy', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-haproxy.git", :ref => ref
# heat
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/heat', :git => "#{git_protocol}://github.com/#{author}/puppet-heat.git", :ref => ref
# horizon
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/horizon', :git => "#{git_protocol}://github.com/#{author}/puppet-horizon.git", :ref => ref
# inifile
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/inifile', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-inifile.git", :ref => ref
# keepalived
if reposource != 'downstream'
author = 'arioch'
end
mod 'arioch/keepalived', :git => "#{git_protocol}://github.com/#{author}/puppet-keepalived.git", :ref => ref
# keystone
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/keystone', :git => "#{git_protocol}://github.com/#{author}/puppet-keystone.git", :ref => ref
# memcached
if reposource != 'downstream'
author = 'saz'
end
mod 'saz/memcached', :git => "#{git_protocol}://github.com/#{author}/puppet-memcached.git", :ref => ref
# mysql
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/mysql', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-mysql.git", :ref => ref
# neutron
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/neutron', :git => "#{git_protocol}://github.com/#{author}/puppet-neutron.git", :ref => ref
# nova
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/nova', :git => "#{git_protocol}://github.com/#{author}/puppet-nova.git", :ref => ref
# openstack
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/openstack', :git => "#{git_protocol}://github.com/#{author}/puppet-openstack.git", :ref => ref
# openstacklib
if reposource != 'downstream'
author = 'michaeltchapman'
end
mod 'michaeltchapman/openstacklib', :git => "#{git_protocol}://github.com/#{author}/puppet-openstacklib.git", :ref => ref
# postgresql
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/postgresql', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-postgresql.git", :ref => ref
# puppet
if reposource != 'downstream'
author = 'stephenrjohnson'
end
mod 'stephenrjohnson/puppet', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-puppet.git", :ref => ref
# puppetdb
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/puppetdb', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-puppetdb.git", :ref => ref
# rabbitmq
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/rabbitmq', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-rabbitmq.git", :ref => ref
# rsync
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/rsync', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-rsync.git", :ref => ref
# ruby-puppetdb
if reposource != 'downstream'
author = 'ripienaar'
end
mod 'ripienaar/ruby-puppetdb', :git => "#{git_protocol}://github.com/#{author}/ruby-puppetdb.git", :ref => ref
# staging
if reposource != 'downstream'
author = 'nanliu'
end
mod 'nanliu/staging', :git => "#{git_protocol}://github.com/#{author}/puppet-staging.git", :ref => ref
# stdlib
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/stdlib', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-stdlib.git", :ref => ref
# swift
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/swift', :git => "#{git_protocol}://github.com/#{author}/puppet-swift.git", :ref => ref
# sysctl
if reposource != 'downstream'
author = 'thias'
end
mod 'thias/sysctl', :git => "#{git_protocol}://github.com/#{author}/puppet-sysctl.git", :ref => ref
# tempest
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/tempest', :git => "#{git_protocol}://github.com/#{author}/puppet-tempest.git", :ref => ref
# tftp
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/tftp', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-tftp.git", :ref => ref
# vcsrepo
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/vcsrepo', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-vcsrepo.git", :ref => ref
# vswitch
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/vswitch', :git => "#{git_protocol}://github.com/#{author}/puppet-vswitch.git", :ref => ref
# xinetd
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/xinetd', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-xinetd.git", :ref => ref

@@ -1,208 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'yaml'
require 'fileutils'
# Four networks:
# 0 - VM host NAT
# 1 - COE build/deploy
# 2 - COE openstack internal
# 3 - COE openstack external (public)
def parse_vagrant_config(
config_file=File.expand_path(File.join(File.dirname(__FILE__), 'data', 'config.yaml'))
)
config = {
'gui_mode' => false,
'operatingsystem' => 'redhat',
'verbose' => false,
'update_repos' => true,
'scenario' => 'stacktira'
}
if File.exists?(config_file)
overrides = YAML.load_file(config_file)
config.merge!(overrides)
end
config
end
#
# process the node group that is used to determine the
# nodes that should be provisioned. The group of nodes
# can be set with the node_group param from config.yaml
# and maps to its corresponding file in the nodes directory.
#
def process_nodes(config)
v_config = parse_vagrant_config
node_group = v_config['scenario']
node_group_file = File.expand_path(File.join(File.dirname(__FILE__), 'data', 'nodes', "#{node_group}.yaml"))
abort('node_group must be specified in config') unless node_group
abort('file must exist for node group') unless File.exists?(node_group_file)
(YAML.load_file(node_group_file)['nodes'] || {}).each do |name, options|
config.vm.define(options['vagrant_name'] || name) do |config|
configure_openstack_node(
config,
name,
options['memory'],
options['image_name'] || v_config['operatingsystem'],
options['ip_number'],
options['puppet_type'] || 'agent',
v_config,
options['environment'],
options['role'],
options['network'],
options['post_config']
)
end
end
end
# get the correct box based on the specified type
# currently, this supports the precise64 (ubuntu) and centos64 boxes
def get_box(config, box_type)
if box_type == 'precise64' || box_type == 'ubuntu'
config.vm.box = 'precise64'
config.vm.box_url = 'http://files.vagrantup.com/precise64.box'
elsif box_type == 'centos' || box_type == 'redhat'
config.vm.box = 'centos64'
config.vm.box_url = 'http://developer.nrel.gov/downloads/vagrant-boxes/CentOS-6.4-x86_64-v20130427.box'
else
abort("Box type: #{box_type} is no good.")
end
end
#
# setup networks for openstack. Currently, this just sets up
# 4 virtual interfaces as follows:
#
# * eth1 => 192.168.242.0/24
# this is the network that the openstack services use to communicate with each other
# * eth2 => 10.2.3.0/24
# * eth3 => 10.3.3.0/24
#
# == Parameters
# config - vm config object
# number - the last octet to use in each /24 network
# options - additional options
# eth1_mac - mac address to set for eth1 (used for PXE booting)
#
def setup_networks(config, number, network)
config.vm.network "private_network", :ip => "192.168.242.#{number}"
config.vm.network "private_network", ip: "#{network}.2.3.#{number}"
config.vm.network "private_network", ip: "#{network}.3.3.#{number}"
# set eth3 in promiscuous mode
config.vm.provider "virtualbox" do |vconfig|
vconfig.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
# set the boot priority to use eth1
vconfig.customize(['modifyvm', :id ,'--nicbootprio2','1'])
end
end
#
# setup the hostname of our box
#
def setup_hostname(config, hostname)
config.vm.provider "virtualbox" do |vconfig|
vconfig.customize ['modifyvm', :id, '--name', hostname]
end
config.vm.host_name = hostname
end
#
# method that performs all of the openstack config
#
def configure_openstack_node(
config,
node_name,
memory,
box_name,
net_id,
puppet_type,
v_config,
environment = false,
role = false,
network = false,
post_config = false
)
cert_name = node_name
get_box(config, box_name)
setup_hostname(config, node_name)
config.vm.provider "virtualbox" do |vconfig|
vconfig.customize ["modifyvm", :id, "--memory", memory]
end
network ||= '10'
setup_networks(config, net_id, network)
config.vm.synced_folder "./modules", "/etc/puppet/modules"
config.vm.synced_folder "./", "/root/stacktira"
options = ''
if v_config['proxy']
options += " -p " + v_config['proxy']
end
if role
options += " -o " + role
end
if environment
options += " -e " + environment
end
config.vm.provision :shell do |shell|
shell.inline = '/root/stacktira/contrib/aptira/installer/bootstrap.sh' + options
end
config.vm.provision :shell do |shell|
shell.inline = 'puppet apply /etc/puppet/manifests/site.pp'
end
if post_config
Array(post_config).each do |shell_command|
config.vm.provision :shell do |shell|
shell.inline = shell_command
end
end
end
end
Vagrant.configure("2") do |config|
process_nodes(config)
end
Vagrant.configure("2") do |config|
# A 'blank' node that will pxeboot on the first private network
# use this to test deployment tools like cobbler
config.vm.define "target" do |target|
target.vm.box = "blank"
# This IP won't actually come up - you'll need to run a dhcp
# server on another node
target.vm.network "private_network", ip: "192.168.242.55"
target.vm.provider "virtualbox" do |vconfig|
vconfig.customize ['modifyvm', :id ,'--nicbootprio2','1']
vconfig.customize ['modifyvm', :id ,'--memory','1024']
vconfig.gui = true
end
end
# a node with no mounts, that will test a web install
# hostname is also not set to force --certname usage
config.vm.define "rawbox" do |target|
target.vm.box = "centos64"
setup_networks(target, 150, '10')
config.vm.provision :shell do |shell|
shell.inline = '\curl -sSL https://raw.github.com/michaeltchapman/puppet_openstack_builder/stacktira/contrib/aptira/installer/bootstrap.sh | bash'
end
config.vm.provision :shell do |shell|
shell.inline = 'puppet apply /etc/puppet/manifests/site.pp --certname control1'
end
end
end

@@ -1,303 +0,0 @@
import os
import shutil
import yaml
import re
dpath = './data'
def prepare_target():
print "=============================="
print "= Preparing target directory ="
print "=============================="
dirs = os.listdir('.')
if 'data.new' not in dirs:
os.mkdir('./data.new')
print 'created data.new'
dirs = os.listdir('./data.new')
if 'hiera_data' not in dirs:
shutil.copytree(dpath + '/hiera_data', './data.new/hiera_data')
print 'copied tree from ' + dpath + '/hiera_data to /data.new/hiera_data'
# Nodes used for vagrant info
shutil.copytree(dpath + '/nodes', './data.new/nodes')
print 'copied tree from ' + dpath + '/nodes to /data.new/nodes'
shutil.copyfile('./contrib/aptira/build/Vagrantfile', './Vagrantfile')
shutil.copyfile('./contrib/aptira/build/Puppetfile', './Puppetfile')
shutil.copyfile('./contrib/aptira/puppet/config.yaml', './data.new/config.yaml')
shutil.copyfile('./contrib/aptira/puppet/site.pp', './manifests/site.pp')
shutil.copyfile('./contrib/aptira/puppet/user.yaml', './data.new/hiera_data/user.yaml')
dirs = os.listdir('./data.new/hiera_data')
if 'roles' not in dirs:
os.mkdir('./data.new/hiera_data/roles')
print 'made role dir'
if 'contrib' not in dirs:
os.mkdir('./data.new/hiera_data/contrib')
print 'made contrib dir'
def hierafy_mapping(mapping):
new_mapping = []
if '{' in mapping:
for c in mapping:
if c == '}':
new_mapping.append("')}")
elif c == '{':
new_mapping.append("{hiera('")
else:
new_mapping.append(c)
return "".join(new_mapping)
else:
return "".join(['%{hiera(\'', mapping, '\')}'])
def scenarios():
print "=============================="
print "===== Handling Scenarios ====="
print "=============================="
scenarios = {}
# This will be a mapping with scenario as key, to
# a mapping of roles to a list of classes
scenarios_as_hiera = {}
for root,dirs,files in os.walk(dpath + '/scenarios'):
for name in files:
print os.path.join(root,name)
with open(os.path.join(root,name)) as yf:
scenarios[name[:-5]] = yaml.load(yf.read())
for scenario, yaml_data in scenarios.items():
if not os.path.exists('./data.new/hiera_data/scenario/' + scenario):
os.makedirs('./data.new/hiera_data/scenario/' + scenario)
for description in yaml_data.values():
for role, values in description.items():
if os.path.isfile('./data.new/hiera_data/scenario/' + scenario + '/' + role + '.yaml'):
with open('./data.new/hiera_data/scenario/' + scenario + '/' + role + '.yaml', 'a') as yf:
if 'classes' in values:
yf.write('classes:\n')
for c in values['classes']:
yf.write(' - \"' + c + '\"\n')
if 'class_groups' in values:
yf.write('class_groups:\n')
for c in values['class_groups']:
yf.write(' - \"' + c + '\"\n')
else:
with open('./data.new/hiera_data/scenario/' + scenario + '/' + role + '.yaml', 'w') as yf:
if 'classes' in values:
yf.write('classes:\n')
for c in values['classes']:
yf.write(' - \"' + c + '\"\n')
if 'class_groups' in values:
yf.write('class_groups:\n')
for c in values['class_groups']:
yf.write(' - \"' + c + '\"\n')
def class_groups():
print "=============================="
print "=== Handling Class Groups ===="
print "=============================="
# Classes and class groups can contain interpolation, which
# should be handled
with open('./data.new/hiera_data/class_groups.yaml', 'w') as class_groups:
for root,dirs,files in os.walk(dpath + '/class_groups'):
for name in files:
if 'README' not in name:
print os.path.join(root,name)
with open(os.path.join(root,name)) as yf:
cg_yaml = yaml.load(yf.read())
class_groups.write(name[:-5] + ':\n')
if 'classes' in cg_yaml:
for clss in cg_yaml['classes']:
class_groups.write(' - \"' + clss + '\"\n')
class_groups.write('\n')
with open('./data.new/hiera_data/class_groups.yaml', 'r') as class_groups:
s = class_groups.read()
os.remove('./data.new/hiera_data/class_groups.yaml')
s = s.replace('%{', "%{hiera(\'").replace('}', "\')}")
with open('./data.new/hiera_data/class_groups.yaml', 'w') as class_groups:
class_groups.write(s)
def global_hiera():
print "=============================="
print "=== Handling Global Hiera ===="
print "=============================="
scenarios = {}
globals_as_hiera = {}
for root,dirs,files in os.walk(dpath + '/global_hiera_params'):
for name in files:
print os.path.join(root,name)
with open(os.path.join(root,name)) as yf:
path = os.path.join(root,name).replace(dpath,'./data.new') \
.replace('global_hiera_params', 'hiera_data')
scenarios[path] = yaml.load(yf.read())
for key in scenarios.keys():
print key
for scenario, yaml_data in scenarios.items():
if not os.path.exists(scenario):
with open(scenario, 'w') as yf:
yf.write('# Global Hiera Params:\n')
for key, value in yaml_data.items():
if value == False or value == True:
yf.write(key + ': ' + str(value).lower() + '\n')
else:
yf.write(key + ': ' + str(value) + '\n')
else:
with open(scenario, 'a') as yf:
yf.write('# Global Hiera Params:\n')
for key, value in yaml_data.items():
if value == False or value == True:
yf.write(key + ': ' + str(value).lower() + '\n')
else:
yf.write(key + ': ' + str(value) + '\n')
def find_array_mappings():
print "=============================="
print "=== Array Data Mappings ======"
print "=============================="
print "Hiera will flatten arrays when"
print "using introspection, so arrays"
print "and hashes are handled using "
print "YAML anchors. This means they "
print "must be within a single file."
print "=============================="
array_mappings = {}
# File path : [lines to change]
lines = {}
for root,dirs,files in os.walk(dpath + '/hiera_data'):
for name in files:
path = os.path.join(root,name)
with open(path) as yf:
y = yaml.load(yf.read())
for key, value in y.items():
# Numbers and strings interpolate reasonably well, and things
# that aren't mappings will be for passing variables, and thus
# should contain the double colon for scope in most cases.
# This method is certainly fallible.
if (not isinstance(value, str) and ('::' not in key)):
print key + ' IS NON STRING MAPPING: ' + str(value)
if path.replace('/data/', '/data.new/') not in lines:
lines[path.replace('/data/', '/data.new/')] = {}
for nroot,ndirs,nfiles in os.walk(dpath + '/data_mappings'):
for nname in nfiles:
with open(os.path.join(nroot,nname)) as nyf:
ny = yaml.load(nyf.read())
if key in ny.keys():
print key + ' is found, maps to: ' + str(ny[key]) + ' in ' + path
for m in ny[key]:
if key not in lines[path.replace('/data/', '/data.new/')]:
lines[path.replace('/data/', '/data.new/')][key] = [m]
else:
lines[path.replace('/data/', '/data.new/')][key].append(m)
# Inform data_mappings it can ignore these values
array_mappings[key] = value
# modify the files that contain the problem mappings
# to contain anchor sources
for source, mappings in lines.items():
print 'handling non-string mapping in ' + str(source)
# read original file and replace mappings
# with yaml anchor sources
with open(source, 'r') as rf:
ofile = rf.read()
for map_from in mappings.keys():
if ('\n' + map_from + ':') not in ofile:
print 'WARNING: mapping ' + map_from + ' not found in file ' + source
ofile = ofile.replace('\n' + map_from + ':','\n' + map_from + ': &' + map_from + ' ')
with open(source, 'w') as wf:
wf.write(ofile)
# append anchor references to files
for source, mappings in lines.items():
with open(source, 'a') as wf:
wf.write('\n')
wf.write("#########################################\n")
wf.write('# Anchor mappings for non-string elements\n')
wf.write("#########################################\n\n")
for map_from, map_to in mappings.items():
for param in map_to:
wf.write(param + ': *' + map_from + '\n')
return array_mappings
def data_mappings():
""" Take everything from common.yaml and put
it in data_mappings.yaml in hiera_data, and everything
else try to append to its appropriate switch in the
hierarchy """
array_mappings = find_array_mappings()
print "=============================="
print "=== Handling Data Mappings ==="
print "=============================="
data_mappings = {}
mappings_as_hiera = {}
for root,dirs,files in os.walk(dpath + '/data_mappings'):
for name in files:
print os.path.join(root,name)
with open(os.path.join(root,name)) as yf:
path = os.path.join(root,name).replace(dpath,'data.new/') \
.replace('data_mappings', 'hiera_data')
data_mappings[path] = yaml.load(yf.read())
mappings_as_hiera[path] = []
# create a list of things to append for each file
for source, yaml_mapping in data_mappings.items():
for mapping, list_of_values in yaml_mapping.items():
if mapping in array_mappings.keys():
print mapping + ' found in ' + source + ', skipping non-string mapping'
else:
mappings_as_hiera[source].append('# ' + mapping)
for entry in list_of_values:
mappings_as_hiera[source].append(entry + ": \"" + hierafy_mapping(mapping) + '\"')
mappings_as_hiera[source].append('')
for key, values in mappings_as_hiera.items():
folder = os.path.dirname(key)
if not os.path.exists(folder):
os.makedirs(folder)
if os.path.isfile(key):
print "appending to path "+ key
with open(key, 'a') as map_file:
map_file.write("#################\n")
map_file.write("# Data Mappings #\n")
map_file.write("#################\n\n")
map_file.write("\n".join(values))
else:
print "writing to new path "+ key
with open(key, 'w') as map_file:
map_file.write("#################\n")
map_file.write("# Data Mappings #\n")
map_file.write("#################\n\n")
map_file.write('\n'.join(values))
def move_dirs():
shutil.move(dpath, './data.old')
shutil.move('./data.new', './data')
if __name__ == "__main__":
prepare_target()
data_mappings()
scenarios()
class_groups()
global_hiera()
move_dirs()

@@ -1,17 +0,0 @@
if [ ! -d stacktira ] ; then
mkdir stacktira
else
rm -rf stacktira/*
fi
cd stacktira
cp -r ../modules .
cp -r ../contrib .
cp -r ../data .
find . | grep .git | xargs rm -rf
cd ..
tar -cvf stacktira.tar stacktira
rm -rf stacktira

@@ -1,38 +0,0 @@
apache
apt
ceilometer
cinder
concat
devtools
dnsmasq
edeploy
firewall
galera
glance
haproxy
heat
horizon
inifile
keepalived
keystone
memcached
mysql
neutron
nova
openstack
openstacklib
postgresql
puppet
puppetdb
rabbitmq
rsync
ruby-puppetdb
staging
stdlib
swift
sysctl
tempest
tftp
vcsrepo
vswitch
xinetd

@@ -1,19 +0,0 @@
# convert data model to pure hiera
python contrib/aptira/build/convert.py
# install puppet modules
mkdir -p vendor
mkdir -p modules
export GEM_HOME=vendor
gem install librarian-puppet-simple
vendor/bin/librarian-puppet install
# get package caches
rm -rf stacktira
rm -rf stacktira.tar
wget https://bitbucket.org/michaeltchapman/puppet_openstack_builder/downloads/stacktira.tar
tar -xvf stacktira.tar
cp -r stacktira/contrib/aptira/gemcache contrib/aptira
cp -r stacktira/contrib/aptira/packages contrib/aptira
vagrant up control1

@@ -1,368 +0,0 @@
#!/usr/bin/env bash
# Parameters can be set via env vars or passed as
# arguments. Arguments take priority over
# env vars.
proxy="${proxy:-}"
desired_ruby="${desired_ruby:-2.0.0p353}"
desired_puppet="${desired_puppet:-3.4.3}"
network="${network:-eth1}"
dest="${destination:-$HOME}"
environment="${environment:-}"
role="${role:-}"
loose_facts="${loose_facts:-}"
tarball_source="${tarball_source:-https://bitbucket.org/michaeltchapman/puppet_openstack_builder/downloads/stacktira.tar}"
while getopts "h?p:r:o:t:u:n:e:d:" opt; do
case "$opt" in
h|\?)
echo "Not helpful help message"
exit 0
;;
p) proxy=$OPTARG
;;
r) desired_ruby=$OPTARG
;;
o) role=$OPTARG
;;
l) loose_facts=$OPTARG
;;
t) tarball_source=$OPTARG
;;
u) desired_puppet=$OPTARG
;;
n) network=$OPTARG
;;
e) environment=$OPTARG
;;
d) dest=$OPTARG
;;
esac
done
# Set either yum or apt to use an http proxy.
if [ $proxy ] ; then
echo 'setting proxy'
export http_proxy=$proxy
if [ -f /etc/redhat-release ] ; then
if [ ! $(cat /etc/yum.conf | grep '^proxy=') ] ; then
echo "proxy=$proxy" >> /etc/yum.conf
fi
elif [ -f /etc/debian_version ] ; then
if [ ! -f /etc/apt/apt.conf.d/01apt-cacher-ng-proxy ] ; then
echo "Acquire::http { Proxy \"$proxy\"; };" > /etc/apt/apt.conf.d/01apt-cacher-ng-proxy;
apt-get update -q
fi
else
echo "OS not detected! Weirdness inbound!"
fi
else
echo 'not setting proxy'
fi
# Install wget if necessary
hash wget 2>/dev/null || {
echo 'installing wget'
if [ -f /etc/redhat-release ] ; then
yum install -y wget -q
elif [ -f /etc/debian_version ] ; then
apt-get install wget -y
fi
}
# Set wget proxy if desired
if [ $proxy ] ; then
if ! grep -q '^http_proxy =' /etc/wgetrc ; then
echo "http_proxy = $proxy" >> /etc/wgetrc
fi
fi
cd $dest
# Download the data model tarball and reinstall
# if needed. Unless we're running on vagrant.
if [ ! -d /vagrant ] ; then
if [ ! -f $dest/stacktira.tar ] ; then
echo 'downloading data model'
wget $tarball_source
md5new=`md5sum stacktira.tar | cut -d ' ' -f 1`
if [ -f $dest/stacktira.tar.current ] ; then
md5old=`md5sum stacktira.tar.current | cut -d ' ' -f 1`
echo "Tarball old md5sum is $md5old"
echo "Tarball new md5sum is $md5new"
else
md5old=0
fi
if [ "${md5old}" != "${md5new}" ] ; then
echo "A new version of stacktira has been downloaded. Installing."
rm -rf stacktira
tar -xvf stacktira.tar &> /dev/null
else
echo "No install needed"
fi
mv stacktira.tar stacktira.tar.current -f
else
echo "data model installed in $dest/stacktira"
fi
fi
# Ensure both puppet and ruby are
# installed, the correct version, and ready to run.
#
# It will install from $dest/stacktira/contrib/aptira/packages
# if possible, otherwise it will wget from the
# internet. If this machine is unable to run yum
# or apt install, and unable to wget, this script
# will fail.
# libyaml is needed by ruby and comes from epel
if ! yum repolist | grep epel ; then
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6*
rm epel-release-6*
fi
# libyaml is needed by ruby and comes from epel
if ! yum list installed | grep libyaml ; then
yum install -y libyaml -q
fi
if hash ruby 2>/dev/null; then
ruby_version=$(ruby --version | cut -d ' ' -f 2)
else
ruby_version=0
fi
# Ruby 1.8.7 (standard on rhel 6) can give segfaults, so
# purge and install ruby 2.0.0
if [ "${ruby_version}" != "${desired_ruby}" ] ; then
echo "installing ruby version $desired_ruby"
if [ -f /etc/redhat-release ] ; then
# Purge current ruby
yum remove ruby puppet ruby-augeas ruby-shadow -y -q
# Install ruby 2.0.0
if [ -f $dest/stacktira/contrib/aptira/packages/ruby-2.0.0p353-1.el6.x86_64.rpm ] ; then
yum localinstall -y $dest/stacktira/contrib/aptira/packages/ruby-2.0.0p353-1.el6.x86_64.rpm
else
echo 'downloading ruby 2.0.0 rpm'
# wget_rpm_from_somewhere
yum localinstall ruby-2.0.0p353-1.el6.x86_64.rpm -y -q
fi
elif [ -f /etc/debian_version ] ; then
apt-get remove puppet ruby -y
apt-get install ruby -y
fi
else
echo "ruby version $desired_ruby already installed"
fi
# Ruby-augeas is needed for puppet, but is handled separately
# since it requires native extensions to be compiled
if ! gem list | grep ruby-augeas ; then
# comes from updates repo
yum install augeas-devel -y -q
yum install -y -q gcc
if [ -f $dest/stacktira/contrib/aptira/gemcache/ruby-augeas* ] ; then
gem install --no-ri --no-rdoc --force --local $dest/stacktira/contrib/aptira/gemcache/ruby-augeas*
else
gem install ruby-augeas --no-ri --no-rdoc
fi
yum remove -y -q gcc cpp
fi
# Install puppet from gem. This is not best practice, but avoids
# repackaging large numbers of rpms and debs for ruby 2.0.0
hash puppet 2>/dev/null || {
puppet_version=0
}
if [ "${puppet_version}" != '0' ] ; then
puppet_version=$(puppet --version)
fi
if [ "${puppet_version}" != "${desired_puppet}" ] ; then
echo "installing puppet version $desired_puppet"
if [ -f $dest/stacktira/contrib/aptira/gemcache/puppet-$desired_puppet.gem ] ; then
echo "installing from local gem cache"
cd $dest/stacktira/contrib/aptira/gemcache
for i in $(ls | grep -v augeas); do
gem install --no-ri --no-rdoc --force --local $i
done
cd -
else
echo "no local gem cache found, installing puppet gem from internet"
gem install puppet --no-ri --no-rdoc
fi
else
echo "puppet version $desired_puppet already installed"
fi
# Ensure puppet user and group are configured
if ! grep puppet /etc/group; then
echo 'adding puppet group'
groupadd puppet
fi
if ! grep puppet /etc/passwd; then
echo 'adding puppet user'
useradd puppet -g puppet -d /var/lib/puppet -s /sbin/nologin
fi
# Set up minimal puppet directory structure
if [ ! -d /etc/puppet ]; then
echo 'creating /etc/puppet'
mkdir /etc/puppet
fi
if [ ! -d /etc/puppet/manifests ]; then
echo 'creating /etc/puppet/manifests'
mkdir /etc/puppet/manifests
fi
if [ ! -d /etc/puppet/modules ]; then
echo 'creating /etc/puppet/modules'
mkdir /etc/puppet/modules
fi
# Don't overwrite the one vagrant places there
if [ ! -f /etc/puppet/manifests/site.pp ]; then
echo 'copying site.pp'
cp $dest/stacktira/contrib/aptira/puppet/site.pp /etc/puppet/manifests
fi
# Create links for all modules, but if a dir is already there,
# ignore it (for dev envs)
for i in $(cat $dest/stacktira/contrib/aptira/build/modules.list); do
if [ ! -L /etc/puppet/modules/$i ] && [ ! -d /etc/puppet/modules/$i ] ; then
echo "Installing module $i"
ln -s $dest/stacktira/modules/$i /etc/puppet/modules/$i
fi
done
echo 'all modules installed'
if [ ! -d /etc/puppet/data ]; then
echo 'creating /etc/puppet/data'
mkdir /etc/puppet/data
fi
if [ ! -d /etc/puppet/data/hiera_data ]; then
echo 'linking /etc/puppet/data/hiera_data'
ln -s $dest/stacktira/data/hiera_data /etc/puppet/data/hiera_data
fi
echo 'hiera data ready'
# copy hiera.yaml to etc, so that we can query without
# running puppet just yet
if [ ! -f /etc/hiera.yaml ] ; then
echo 'setting /etc/hiera.yaml'
cp $dest/stacktira/contrib/aptira/puppet/hiera.yaml /etc/hiera.yaml
fi
# copy hiera.yaml to puppet
if [ ! -f /etc/puppet/hiera.yaml ] ; then
echo 'setting /etc/puppet/hiera.yaml'
cp $dest/stacktira/contrib/aptira/puppet/hiera.yaml /etc/puppet/hiera.yaml
fi
# Copy site data if any. Otherwise install the sample
if [ -d $dest/stacktira/contrib/aptira/site ] ; then
echo "Installing user config"
cp -r $dest/stacktira/contrib/aptira/site/* /etc/puppet/data/hiera_data
else
if [ ! -f /etc/puppet/data/hiera_data/user.yaml ] ; then
echo 'No user.yaml found: installing sample'
cp $dest/stacktira/contrib/aptira/puppet/user.yaml /etc/puppet/data/hiera_data/user.yaml
fi
fi
mkdir -p /etc/facter/facts.d
# set environment external fact
# Requires facter > 1.7
if [ -n "$environment" ] ; then
if [ ! -f /etc/facter/facts.d/environment.yaml ] ; then
echo "environment: $environment" > /etc/facter/facts.d/environment.yaml
elif ! grep -q "environment" /etc/facter/facts.d/environment.yaml ; then
echo "environment: $environment" >> /etc/facter/facts.d/environment.yaml
fi
if [ ! -d $dest/stacktira/contrib/aptira/site ] ; then
if [ ! -f /etc/puppet/data/hiera_data/user.$environment.yaml ] ; then
if [ -f $dest/stacktira/contrib/aptira/puppet/user.$environment.yaml ] ; then
cp $dest/stacktira/contrib/aptira/puppet/user.$environment.yaml /etc/puppet/data/hiera_data/user.$environment.yaml
fi
fi
fi
fi
# set role external fact
# Requires facter > 1.7
if [ -n "$role" ] ; then
if [ ! -f /etc/facter/facts.d/role.yaml ] ; then
echo "role: $role" > /etc/facter/facts.d/role.yaml
elif ! grep -q "role" /etc/facter/facts.d/role.yaml ; then
echo "role: $role" >> /etc/facter/facts.d/role.yaml
fi
fi
# Ensure puppet isn't going to sign a cert with the wrong time or
# name
ipaddress=$(facter ipaddress_$network)
fqdn=$(facter hostname).$(hiera domain_name)
facter_fqdn=$(facter fqdn)
# If it doesn't match the fqdn puppet will be using, rewrite /etc/hosts
# so that this node can resolve its own fqdn and reach the build server
if [ "${facter_fqdn}" != "${fqdn}" ] ; then
if ! grep -q "$ipaddress\s$fqdn" /etc/hosts ; then
echo 'configuring /etc/hosts for fqdn'
if [ -f /etc/redhat-release ] ; then
echo "$ipaddress $fqdn $(hostname)" > /etc/hosts
echo "127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4" >> /etc/hosts
echo "::1 localhost localhost.localdomain localhost6 localhost6.localdomain6" >> /etc/hosts
echo "$(hiera build_server_ip) $(hiera build_server_name) $(hiera build_server_name).$(hiera domain_name)" >> /etc/hosts
elif [ -f /etc/debian_version ] ; then
echo "$ipaddress $fqdn $(hostname)" > /etc/hosts
echo "127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4" >> /etc/hosts
echo "::1 localhost localhost.localdomain localhost6 localhost6.localdomain6" >> /etc/hosts
echo "$(hiera build_server_ip) $(hiera build_server_name) $(hiera build_server_name).$(hiera domain_name)" >> /etc/hosts
fi
fi
fi
# install ntpdate if necessary
hash ntpdate 2>/dev/null || {
echo 'installing ntpdate'
if [ -f /etc/redhat-release ] ; then
yum install -y ntpdate -q
elif [ -f /etc/debian_version ] ; then
apt-get install ntpdate -y
fi
}
# this may be a list, so just take the first one
ntpdate $(hiera ntp_servers | cut -d '"' -f 2)
if [ ! -d $dest/stacktira/contrib/aptira/site ] ; then
if [ ! -f /etc/puppet/data/hiera_data/user.yaml ] ; then
echo 'No user.yaml found: installing sample'
cp $dest/stacktira/contrib/aptira/puppet/user.yaml /etc/puppet/data/hiera_data/user.yaml
fi
fi
if ! $loose_facts; then
if [ ! -f '/etc/facter/facts.d/ipaddress.yaml' ] ; then
facter | grep ipaddress_ > /etc/facter/facts.d/ipaddress.yaml
sed -i 's/\ =>/:/' /etc/facter/facts.d/ipaddress.yaml
fi
fi
echo 'This server has been successfully prepared to run puppet using'
echo 'the Openstack data model. Please take a moment to review your'
echo 'configuration in /etc/puppet/data/hiera_data/user.yaml'
echo
echo "When you\'re ready, run puppet apply /etc/puppet/manifests/site.pp"

View File

@ -1,48 +0,0 @@
# To deploy experimental support for Centos6, change os to
# redhat and scenario to stacktira
os: redhat
scenario: stacktira
#proxy: 'http://192.168.0.18:8000'
# Additional Config available for use by scenariobuilder during
# the bootstrap process.
# [*initial_ntp*]
# This needs to be set before puppet runs, otherwise the certs
# may have the wrong timestamps and agent won't connect to master
# [*installer_repo*]
# These determine which github account+branch to get for the
# puppet_openstack_builder repo when it is cloned onto the
# test VMs as part of the bootstrap script in cloud-init.
# installer_repo: stackforge
# [*installer_branch*]
# installer_branch: master
# [*openstack_version*]
# The release of openstack to install. Note that grizzly will require switching back to Quantum
# Options: havana, grizzly
# [*git_protocol*]
# (optional) Git protocol to use when cloning modules on testing VMs
# Defaults to https
# Options: git, https.
# [*apt_mirror_ip*]
# (optional) Sets the apt mirror IP by doing a sed on the image
# [*apt_proxy_host*]
# (optional) Sets apt-get installs and git clones to go via a proxy
# [*apt_proxy_port*]
# (optional) Sets the port for the apt_proxy_host if used
# [*custom_module*]
# (optional) The name of a module to take from a different source
# [*custom_branch*]
# (optional) The branch to use for the custom module
# [*custom_repo*]
# (optional) The github account the custom module is hosted under

View File

@ -1,30 +0,0 @@
---
:backends:
- yaml
:yaml:
:datadir: /etc/puppet/data/hiera_data
:hierarchy:
- "hostname/%{hostname}"
- "client/%{clientcert}"
- "user.%{role}"
- "user.%{environment}"
- user
- "user.%{scenario}"
- user.common
- "osfamily/%{osfamily}"
- "cinder_backend/%{cinder_backend}"
- "glance_backend/%{glance_backend}"
- "rpc_type/%{rpc_type}"
- "db_type/%{db_type}"
- "tenant_network_type/%{tenant_network_type}"
- "network_type/%{network_type}"
- "network_plugin/%{network_plugin}"
- "password_management/%{password_management}"
- "contrib/networking/%{networking}"
- "contrib/storage/%{storage}"
- "contrib/monitoring/%{monitoring}"
- "scenario/%{scenario}"
- "scenario/%{scenario}/%{role}"
- common
- class_groups

View File

@ -1,73 +0,0 @@
# Globals
# Role may be set by using external facts, or can
# fall back to using the first word in the clientcert
if ! $::role {
$role = regsubst($::clientcert, '([a-zA-Z]+)[^a-zA-Z].*', '\1')
}
$scenario = hiera('scenario', "")
$cinder_backend = hiera('cinder_backend', "")
$glance_backend = hiera('glance_backend', "")
$rpc_type = hiera('rpc_type', "")
$db_type = hiera('db_type', "")
$tenant_network_type = hiera('tenant_network_type', "")
$network_type = hiera('network_type', "")
$network_plugin = hiera('network_plugin', "")
$network_service = hiera('network_service', "")
$storage = hiera('storage', "")
$networking = hiera('networking', "")
$monitoring = hiera('monitoring', "")
$password_management = hiera('password_management', "")
$compute_type = hiera('compute_type', "")
node default {
notice("my scenario is ${scenario}")
notice("my role is ${role}")
# Should be defined in scenario/[name_of_scenario]/[name_of_role].yaml
$node_class_groups = hiera('class_groups', undef)
notice("class groups: ${node_class_groups}")
if $node_class_groups {
class_group { $node_class_groups: }
}
$node_classes = hiera('classes', undef)
if $node_classes {
include $node_classes
notify { " Including node classes : ${node_classes}": }
}
# get a list of contribs to include.
$stg = hiera("${role}_storage", [])
notice("storage includes ${stg}")
if (size($stg) > 0) {
contrib_group { $stg: }
}
# get a list of contribs to include.
$networking = hiera("${role}_networking", [])
notice("networking includes ${networking}")
if (size($networking) > 0) {
contrib_group { $networking: }
}
# get a list of contribs to include.
$monitoring = hiera("${role}_monitoring", [])
notice("monitoring includes ${monitoring}")
if (size($monitoring) > 0) {
contrib_group { $monitoring: }
}
}
define class_group {
include hiera($name)
notice($name)
$x = hiera($name)
notice( "including ${x}" )
}
define contrib_group {
include hiera("${name}_classes")
notice($name)
$x = hiera("${name}_classes")
notice( "including ${x}" )
}

View File

@ -1,132 +0,0 @@
# This is the sample user.yaml for the stacktira scenario
# For additional things that can be configured, look at
# user.stacktira.yaml, or user.common.
#
# Warning:
# When working with non-string types, remember to keep yaml
# anchors within a single file - hiera cannot look them
# up across files. For this reason, editing the lower section
# of this file is not recommended.
enabled_services: &enabled_services
- nova
- neutron
- cinder
- heat
scenario: stacktira
networking: none
storage: none
monitoring: none
# The default network config is as follows:
# eth0: vagrant network in testing
# eth1: deploy network
# eth2: public api network
# eth3: private service network + GRE
# eth4: external data network
build_server_name: build-server
build_server_ip: 192.168.242.100
# These are legacy mappings, and should have no effect
controller_public_address: 10.2.3.105
controller_internal_address: 10.3.3.105
controller_admin_address: 10.3.3.105
# Interface that will be stolen by the l3 router on
# the control node.
external_interface: eth2
# for a provider network on this interface instead of
# an l3 agent use these options
openstacklib::openstack::provider::interface: eth2
neutron::plugins::ovs::network_vlan_ranges: default
# Gre tunnel address for each node
internal_ip: "%{ipaddress_eth3}"
# This is the interface that each node will be binding
# various services on.
deploy_bind_ip: "%{ipaddress_eth1}"
public_bind_ip: "%{ipaddress_eth2}"
private_bind_ip: "%{ipaddress_eth3}"
# The public VIP, where all API services are exposed to users.
public_vip: 10.2.3.105
# The private VIP, where internal services are exposed to openstack services.
private_vip: 10.3.3.105
# List of IP addresses for controllers on the public network
control_servers_public: &control_servers_public [ '10.2.3.110', '10.2.3.111', '10.2.3.112']
# List of IP addresses for controllers on the private network
control_servers_private: &control_servers_private [ '10.3.3.110', '10.3.3.111', '10.3.3.112']
# A hash of hostnames to private network IPs. Used for rabbitmq hosts
# resolution
openstacklib::hosts::cluster_hash:
regsubr1private:
ip: '10.3.3.110'
regsubr2private:
ip: '10.3.3.111'
regsubr3private:
ip: '10.3.3.112'
# List of controller hostnames. Used for rabbitmq hosts list
cluster_names: &cluster_names [ 'regsubr1private', 'regsubr2private', 'regsubr3private' ]
# Virtual router IDs for the VIPs in this cluster. If you are
# running multiple VIPs on one network these need to be different
# for each VIP
openstacklib::loadbalance::haproxy::public_vrid: 60
openstacklib::loadbalance::haproxy::private_vrid: 61
#Libvirt type
nova::compute::libvirt::libvirt_virt_type: qemu
horizon::wsgi::apache::bind_address: "%{ipaddress_eth2}"
# Use these to set an apt proxy if running on a Debian-like
apt::proxy_host: 192.168.0.18
apt::proxy_port: 8000
# This node will be used to bootstrap the cluster on initial deployment
# or if there is a total failure of the control cluster
galera::galera_master: 'regsubr1.domain.name'
# Proxy configuration of either apt or yum
openstacklib::repo::apt_proxy_host: '192.168.0.18'
openstacklib::repo::apt_proxy_port: '8000'
openstacklib::repo::yum_http_proxy: 'http://192.168.0.18:8000'
openstacklib::repo::yum_epel_mirror: 'http://mirror.aarnet.edu.au'
openstacklib::repo::yum_base_mirror: 'http://mirror.aarnet.edu.au'
#########################################
# Anchor mappings for non-string elements
#########################################
neutron::rabbit_hosts: *cluster_names
nova::rabbit_hosts: *cluster_names
cinder::rabbit_hosts: *cluster_names
rabbitmq::cluster_nodes: *cluster_names
openstacklib::loadbalance::haproxy::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::ceilometer::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::cinder::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::heat::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::mysql::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::neutron::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::nova::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::rabbitmq::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::ceilometer::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::cinder::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::heat::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::neutron::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::nova::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::mysql::cluster_addresses: *control_servers_private
openstacklib::loadbalance::haproxy::rabbitmq::cluster_addresses: *control_servers_private
galera::galera_servers: *control_servers_private
openstacklib::openstack::databases::enabled_services: *enabled_services

View File

@ -1,131 +0,0 @@
# An example where two regions share keystone and glance
openstacklib::openstack::regions::regions_hash:
RegionOne:
public_ip: 10.2.3.105
private_ip: 10.3.3.105
services:
- heat
- nova
- neutron
- cinder
- ec2
RegionTwo:
public_ip: 10.2.3.205
private_ip: 10.3.3.205
services:
- heat
- nova
- neutron
- cinder
- ec2
shared:
public_ip: 10.2.3.5
private_ip: 10.3.3.5
services:
- keystone
- glance
# This will create the correct databases for the region controller
# normally this would also make endpoints, but that is covered
# by the above region hash in multi-region environments
enabled_services: &enabled_services
- glance
- keystone
openstacklib::openstack::regions::nova_user_pw: "%{hiera('nova_service_password')}"
openstacklib::openstack::regions::neutron_user_pw: "%{hiera('network_service_password')}"
openstacklib::openstack::regions::glance_user_pw: "%{hiera('glance_service_password')}"
openstacklib::openstack::regions::heat_user_pw: "%{hiera('heat_service_password')}"
openstacklib::openstack::regions::cinder_user_pw: "%{hiera('cinder_service_password')}"
openstacklib::openstack::regions::ceilometer_user_pw: "%{hiera('ceilometer_service_password')}"
# The default network config is as follows:
# eth0: vagrant network in testing
# eth1: deploy network
# eth2: public api network
# eth3: private service network + GRE
# eth4: external data network
build_server_name: build-server
build_server_ip: 192.168.242.100
# These are legacy mappings, and should have no effect
controller_public_address: 10.2.3.5
controller_internal_address: 10.3.3.5
controller_admin_address: 10.3.3.5
# This is the interface that each node will be binding
# various services on.
deploy_bind_ip: "%{ipaddress_eth1}"
public_bind_ip: "%{ipaddress_eth2}"
private_bind_ip: "%{ipaddress_eth3}"
# The public VIP, where all API services are exposed to users.
public_vip: 10.2.3.5
# The private VIP, where internal services are exposed to openstack services.
private_vip: 10.3.3.5
# List of IP addresses for controllers on the public network
control_servers_public: &control_servers_public [ '10.2.3.10', '10.2.3.11', '10.2.3.12']
# List of IP addresses for controllers on the private network
control_servers_private: &control_servers_private [ '10.3.3.10', '10.3.3.11', '10.3.3.12']
# A hash of hostnames to private network IPs. Used for rabbitmq hosts
# resolution
openstacklib::hosts::cluster_hash:
regcon1private:
ip: '10.3.3.10'
regcon2private:
ip: '10.3.3.11'
regcon3private:
ip: '10.3.3.12'
# List of controller hostnames. Used for rabbitmq hosts list
cluster_names: &cluster_names [ 'regcon1private', 'regcon2private', 'regcon3private' ]
horizon::wsgi::apache::bind_address: "%{ipaddress_eth2}"
# Use these to set an apt proxy if running on a Debian-like
apt::proxy_host: 192.168.0.18
apt::proxy_port: 8000
# This node will be used to bootstrap the cluster on initial deployment
# or if there is a total failure of the control cluster
galera::galera_master: 'regcon1.domain.name'
# Database allowed hosts
allowed_hosts: 10.3.3.%
# Allowed cidrs for the different interfaces. Only
# Ports used by openstack will be allowed
deploy_control_firewall_source: '192.168.242.0/24'
public_control_firewall_source: '10.2.3.0/24'
private_control_firewall_source: '10.3.3.0/24'
# Proxy configuration of either apt or yum
openstacklib::repo::apt_proxy_host: '192.168.0.18'
openstacklib::repo::apt_proxy_port: '8000'
openstacklib::repo::yum_http_proxy: 'http://192.168.0.18:8000'
openstacklib::repo::yum_epel_mirror: 'http://mirror.aarnet.edu.au'
openstacklib::repo::yum_base_mirror: 'http://mirror.aarnet.edu.au'
#########################################
# Anchor mappings for non-string elements
#########################################
openstacklib::loadbalance::haproxy::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::dashboard::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::glance::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::keystone::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::mysql::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::dashboard::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::glance::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::keystone::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::mysql::cluster_addresses: *control_servers_private
galera::galera_servers: *control_servers_private
openstacklib::openstack::databases::enabled_services: *enabled_services

View File

@ -1,147 +0,0 @@
# This is the sample user.yaml for the stacktira scenario
# For additional things that can be configured, look at
# user.stacktira.yaml, or user.common.
#
# Warning:
# When working with non-string types, remember to keep yaml
# anchors within a single file - hiera cannot look them
# up across files. For this reason, editing the lower section
# of this file is not recommended.
scenario: stacktira
networking: none
storage: none
monitoring: none
# The default network config is as follows:
# eth0: vagrant network in testing
# eth1: deploy network
# eth2: public api network
# eth3: private service network + GRE
# eth4: external data network
build_server_name: build-server
build_server_ip: 192.168.242.100
# These are legacy mappings, and should have no effect
controller_public_address: 10.2.3.5
controller_internal_address: 10.3.3.5
controller_admin_address: 10.3.3.5
# Interface that will be stolen by the l3 router on
# the control node.
external_interface: eth4
# for a provider network on this interface instead of
# an l3 agent use these options
#openstacklib::openstack::provider::interface: eth4
#neutron::plugins::ovs::network_vlan_ranges: default
# Gre tunnel address for each node
internal_ip: "%{ipaddress_eth3}"
# This is the interface that each node will be binding
# various services on.
deploy_bind_ip: "%{ipaddress_eth1}"
public_bind_ip: "%{ipaddress_eth2}"
private_bind_ip: "%{ipaddress_eth3}"
# The public VIP, where all API services are exposed to users.
public_vip: 10.2.3.5
# The private VIP, where internal services are exposed to openstack services.
private_vip: 10.3.3.5
# List of IP addresses for controllers on the public network
control_servers_public: &control_servers_public [ '10.2.3.10', '10.2.3.11', '10.2.3.12']
# List of IP addresses for controllers on the private network
control_servers_private: &control_servers_private [ '10.3.3.10', '10.3.3.11', '10.3.3.12']
# A hash of hostnames to private network IPs. Used for rabbitmq hosts
# resolution
openstacklib::hosts::cluster_hash:
control1private:
ip: '10.3.3.10'
control2private:
ip: '10.3.3.11'
control3private:
ip: '10.3.3.12'
# List of controller hostnames. Used for rabbitmq hosts list
cluster_names: &cluster_names [ 'control1private', 'control2private', 'control3private' ]
# For the case where the node hostname already resolves to something else,
# force the nodename to be the private shortname we're using above.
rabbitmq::environment_variables:
'NODENAME': "rabbit@%{hostname}private"
#Libvirt type
nova::compute::libvirt::libvirt_virt_type: qemu
horizon::wsgi::apache::bind_address: "%{ipaddress_eth2}"
# Use these to set an apt proxy if running on a Debian-like
apt::proxy_host: 192.168.0.18
apt::proxy_port: 8000
# CIDRs for the three networks.
deploy_control_firewall_source: '192.168.242.0/24'
public_control_firewall_source: '10.2.3.0/24'
private_control_firewall_source: '10.3.3.0/24'
# Proxy configuration of either apt or yum
openstacklib::repo::apt_proxy_host: '192.168.0.18'
openstacklib::repo::apt_proxy_port: '8000'
openstacklib::repo::yum_http_proxy: 'http://192.168.0.18:8000'
openstacklib::repo::yum_epel_mirror: 'http://fedora.mirror.uber.com.au/epel'
openstacklib::repo::yum_base_mirror: 'http://centos.mirror.uber.com.au'
enabled_services: &enabled_services
- keystone
- glance
- nova
- neutron
- cinder
# Openstack version to install
openstack_release: icehouse
openstacklib::repo::uca::release: icehouse
openstacklib::repo::rdo::release: icehouse
openstacklib::compat::release: icehouse
#########################################
# Anchor mappings for non-string elements
#########################################
neutron::rabbit_hosts: *cluster_names
nova::rabbit_hosts: *cluster_names
cinder::rabbit_hosts: *cluster_names
rabbitmq::cluster_nodes: *cluster_names
openstacklib::loadbalance::haproxy::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::ceilometer::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::cinder::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::dashboard::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::glance::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::heat::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::keystone::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::mysql::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::neutron::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::nova::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::rabbitmq::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::ceilometer::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::cinder::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::dashboard::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::glance::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::heat::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::keystone::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::neutron::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::nova::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::mysql::cluster_addresses: *control_servers_private
openstacklib::loadbalance::haproxy::rabbitmq::cluster_addresses: *control_servers_private
openstacklib::loadbalance::haproxy::keystone::cluster_addresses: *control_servers_private
galera::galera_servers: *control_servers_private
openstacklib::openstack::databases::enabled_services: *enabled_services
openstacklib::openstack::endpoints::enabled_services: *enabled_services

View File

@ -1,11 +0,0 @@
# Bring up the control node and then reboot it to ensure
# it has an ip netns capable kernel
vagrant up control1
vagrant halt control1
vagrant up control1
vagrant provision control1
# Bring up compute node
vagrant up compute1
vagrant ssh -c "bash /vagrant/contrib/aptira/tests/$1/test.sh"

View File

@ -1,36 +0,0 @@
#!/bin/bash
#
# assumes that openstack credentials are set in this file
source /root/openrc
# Grab an image. Cirros is a nice small Linux that's easy to deploy
wget --quiet http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
# Add it to glance so that we can use it in Openstack
glance add name='cirros' is_public=true container_format=bare disk_format=qcow2 < cirros-0.3.2-x86_64-disk.img
# Capture the Image ID so that we can call the right UUID for this image
IMAGE_ID=`glance index | grep 'cirros' | head -1 | awk -F' ' '{print $1}'`
# Flat provider network.
neutron net-create --provider:physical_network=default --shared --provider:network_type=flat public
neutron subnet-create --name publicsub --allocation-pool start=10.2.3.100,end=10.2.3.200 --router:external=True public 10.2.3.0/24
neutron_net=`neutron net-list | grep public | awk -F' ' '{print $2}'`
# For access to the instance
nova keypair-add test > /tmp/test.private
chmod 0600 /tmp/test.private
# Allow ping and ssh
neutron security-group-rule-create --protocol icmp --direction ingress default
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default
# Boot instance
nova boot --flavor 1 --image cirros --key-name test --nic net-id=$neutron_net providervm
sleep 15
address=$(nova show providervm | grep public | cut -d '|' -f '3')
ip netns exec qdhcp-$neutron_net ssh -i /tmp/test.private $address -lcirros -o StrictHostKeyChecking=no hostname

View File

@ -1 +0,0 @@
These are sample user.yaml and config.yaml files that can be used to run the heat template against the Cisco CI cluster, and serve as a template for others who wish to do similar things.

View File

@ -1,27 +0,0 @@
# This is a sample set of config options that can be used with the
# heat templates. The options are documented in data/config.yaml.
# To use these options, copy this file to the data directory
domain: 'domain.name'
verbose: 'false'
operatingsystem: ubuntu
scenario: 2_role
initial_ntp: 192.168.26.186
installer_repo: stackforge
installer_branch: master
openstack_version: icehouse
git_protocol: git
apt_mirror_ip: 192.168.26.170
apt_proxy_host: 192.168.26.170
apt_proxy_port: 8000
# These can be used to checkout a different version of a particular
# module. This example will clone puppet-keystone from the github account
# belonging to michaeltchapman and use the havana branch. This can be useful
# for testing patches to other modules.
#custom_module: keystone
#custom_branch: havana
#custom_repo: michaeltchapman

View File

@ -1,31 +0,0 @@
# These are user config options that are needed to work with the heat
# templates. Copy this file to data/hiera_data if you wish to create
# a heat template that will launch the data model. You can make other
# additions to this file to change other aspects of the install, but
# if you change any of the options below, the cluster may not
# build properly. This has only been tested with the 2_role scenario.
# Heat will use one pre-existing network with external routing
# (eth0) to deploy using puppet and create two internal networks
# to serve internal traffic (eth1) and a pretend external
# interface (eth2)
internal_ip: "%{ipaddress_eth1}"
nova::compute::vncserver_proxyclient_address: "%{ipaddress_eth0}"
swift_local_net_ip: "%{ipaddress_eth1}"
# The following is a hack to avoid cyclic dependencies in heat.
# The nodes will have /etc/hosts entries for the nodes that they
# depend on, but the first node at the top of the tree is the
# build server, which needs to contain the controller IP when
# deploying normally. To avoid this, we set all normal instances
# of the controller_ip to be the hostname, which compute and control
# nodes will have in /etc/hosts, and set the bind address for
# mysql and postgres to be open. There is also an unused host
# resource in the coi::base module which we give dummy values.
postgresql::config::listen_addresses: "%{ipaddress_eth0}"
mysql::config::bind_address: "0.0.0.0"
controller_public_address: control-server.domain.name
controller_internal_address: control-server.domain.name
controller_admin_address: control-server.domain.name
coe::base::controller_node_internal: 192.168.1.1
coe::base::controller_hostname: derp

View File

@ -1,28 +0,0 @@
# This is a short script that can be used to create RDoc documentation
# for all modules used by puppet_openstack_builder
#
# Only tested on Ubuntu 12.04
#
# Usage:
# git clone https://github.com/stackforge/puppet_openstack_builder
# cd puppet_openstack_builder
# sudo bash contrib/doc/build_doc.sh
#
apt-get install -y git rubygems ruby
mkdir vendor
export GEM_HOME=`pwd`/vendor
gem install thor --no-ri --no-rdoc
gem install puppet --no-ri --no-rdoc
git clone git://github.com/bodepd/librarian-puppet-simple vendor/librarian-puppet-simple
export PATH=`pwd`/vendor/librarian-puppet-simple/bin/:$PATH
librarian-puppet install --verbose
rm -r modules/*/tests
rm -r modules/*/examples
rm modules/augeas/spec/fixtures/manifests/site.pp
mkdir build
vendor/bin/puppet doc --mode rdoc --outputdir build/doc --modulepath modules

View File

@ -1,15 +0,0 @@
import os
import base64
import struct
import time
import uuid
# create mon secret
key = os.urandom(16)
header = struct.pack('<hiih', 1, int(time.time()), 0, len(key))
# create the cluster fsid
fsid = uuid.uuid4()
print "Your ceph_monitor_secret is: " + base64.b64encode(header + key)
print "Your ceph_monitor_fsid is: " + str(fsid)

View File

@ -1,312 +0,0 @@
# configuring openstack as data
#### Table of Contents
1. [Why Data?](#why-data)
2. [Users - getting started](#getting-started-as-a-user)
* [Scenario Selection](#selecting-a-scenario)
* [Configuring Globals](#configuring-globals)
* [Scenarios](#scenarios)
* [User Data](#user-data)
* [Role Mappings](#role-mappings)
## Why Data
This is intended to replace the stackforge/puppet-openstack class
as well as other tools that look at composing the core stackforge modules
into openstack roles.
The puppet-openstack (and other models that I looked at) suffered
from the following problems:
### The roles were not composable enough.
Multiple reference architectures for openstack (ie: all\_in\_one,
compute\_controller) should all be composed of the same building
blocks in order to reduce the total amount of code needed to express
these deployment scenarios.
For example, an all\_in\_one deployment should be expressed as a
compute + controller + network\_controller.
Reuse of components was difficult in the old model because of its use of
parameterized classes. When classes are declared with the following syntax,
they cannot be redeclared. This means that components that use class
declarations like this cannot be re-used by other components that want to
configure them in a different manner:
class { 'class_name':
parameter => value,
}
This issue is most visible if you look at the amount of duplicated code
between the openstack::all class and the openstack::controller class.
### Data forwarding was too much work
Explicit parameter-forwarding through nested class declarations
(ie: openstack::controller -> openstack::nova::controller -> nova)
has proved too difficult to maintain and too easy to mess up.
Adding a single parameter to an OpenStack role in the current model can
require that same parameter be explicitly forwarded through 2 or 3 different
class interfaces.
In fact, a large percentage of the pull requests across the modules are simply to
add parameters to the OpenStack classes.
### Puppet manifests are not introspectable enough
As we move towards the creation of user interfaces that drive the
configuration of multiple different reference architectures, we need
a way to inspect the current state of our deployment model to understand
what input needs to be provided by the end user.
For example:
The data that a user needs to provide to deploy a 3 role model:
(compute/controller/network controller) is different from the data used to
deploy a 2 role model (compute/controller)
To make matters even a bit more complicated:
Each of those models also supports a large number of configurable backends
that each require their own specific configurations. Even with a 2 role scenario,
you could select ceph, or swift, or file as the glance backend. Each of these
selections requires its own set of data that needs to be provided by the end
user.
This need to programmatically compile a model into a consumable user interface is
the requirement that led to the adoption of a data model, as opposed to something
more like [roles/profiles](http://www.craigdunn.org/2012/05/239/).
Puppet provides a great way to express interfaces for encapsulating system resources,
but that content is only designed to be consumed by Puppet's internal lexer and parser;
it is not designed to be introspectable by other tools. In order to support the selection
of both multiple reference architectures and multiple backends, we need to
be able to programmatically understand the selected classes so that we can provide the
user with the correct interface.
## Setup
Applying [setup.pp](https://github.com/stackforge/puppet_openstack_builder/blob/master/manifests/setup.pp)
will configure your nodes to use the data model. It does the following:
1. Installs a version of Puppet greater than 3.0.
2. [Sets the node\_terminus as scenario.](https://github.com/stackforge/puppet_openstack_builder/blob/master/manifests/setup.pp#L97)
3. [Configures hiera](https://github.com/stackforge/puppet_openstack_builder/blob/master/manifests/setup.pp#L63)
## Getting Started as a User
This section is intended to provide users with what they need to know in order
to use the data model to deploy a customized openstack deployment.
However, it is recommended that users understand the internals so that they
can debug things. Full documentation of the implementation can be found here:
[scenario node terminus](https://github.com/bodepd/scenario_node_terminus/blob/master/README.md).
The data model should be configured before you install any of your openstack
components. It is responsible for building a deployment model that is used
to assign both classes as well as data to each node that needs to be configured.
### Selecting a Scenario
The first step as an end user is to select a scenario. Scenarios are defined
in data/config.yaml as:
scenario: all_in_one
The scenario represents the current deployment model, and is used to
determine the roles available as part of that model.
Currently, the following scenarios are supported:
* *all\_in\_one* - installs everything on one node
* *2\_role* - splits compute/controller
* *full\_ha* - splits out an HA install that requires 13 nodes
The following command returns your current scenario:
puppet scenario get_scenario
Once you have selected a scenario, you can see how it affects your deployment model:
puppet scenario get_roles
### Configuring Globals
The global_hiera_params directory contains sets of global variables that are used to determine
which roles should be deployed as part of your deployment model, as well as how
data should be assigned to those roles.
In general, the following types of things can be configured:
* Pluggable backend selection for components (ie: what backend should cinder use)
* Selections that augment roles (ie: should tempest be installed, should a ceph
role exist)
As a user, you should specify any of these variables that you wish to override in:
global_hiera_params/user.yaml
The current supported variables are:
+ *db_type* selects the database to use (defaults to mysql)
+ *rpc_type* Selects the rpc type to use (defaults to rabbitmq)
+ *cinder_backend* Selects the backend to be used with cinder. Defaults to iscsi.
(currently supports iscsi and ceph)
+ *glance_backend* Selects the backend that should be used by glance
(currently supports swift, ceph, and file)
+ *compute_type* The backend to use for compute (defaults to libvirt)
+ *network_service* Network service to use. This hack is used to select between
quantum and neutron and will hopefully be deprecated once grizzly support is
dropped.
+ *network_plugin* Network plugin to use. Supports ovs and linuxbridge.
Defaults to ovs in common.yaml.
+ *network_type* The type of network (defaults to per-tenant-router)
+ *tenant_network_type* Type of tenant network to use. (defaults to gre).
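As a minimal sketch (the values chosen here are purely illustrative, not recommendations), overriding a couple of these globals in ``data/global_hiera_params/user.yaml`` could look like:
# data/global_hiera_params/user.yaml
# use ceph for both block and image storage, and linuxbridge instead of ovs
cinder_backend: ceph
glance_backend: ceph
network_plugin: linuxbridge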
### Scenarios
Once you have selected your globals and scenario, you can now query the system to see
what the scenario looks like for your current deployment model:
puppet scenario get_roles
The output here shows 2 things:
* what roles can be assigned to nodes
* what classes are included as a part of those roles
### User Data
Once you know your roles, you will want to customize the data used
to configure your deployment model.
You can get a list of the default data a user should consider setting with:
puppet scenario get_user_inputs
This command shows a list of data that a user may want to provide along with
its current default value.
> NOTE: The current view of user data is not perfect. It still needs some
> refinement.
Each of these values can be overridden by setting a key value pair in ``data/hiera_data/user.yaml``.
Keys can either be given static values:
controller_admin_address: 192.168.242.10
Or values that are set with facts (or hiera global params):
internal_ip: "%{ipaddress_eth3}"
Once you have supplied all of your data, you can see how that data is applied to
your roles by invoking:
puppet scenario compile_role <role_name>
Alternatively, as long as the node terminus is set in your main stanza of
puppet.conf, you can run:
puppet node find --certname controller
To see the exact data that is returned to Puppet for a specific node.
### Role Mappings
You can map roles to nodes (via puppet cert names) in the file: ``data/role_mappings.yaml``
For example, if I run the following puppet command
puppet agent --certname controller
Then I can map that certname to a role in this file:
controller: controller
> NOTE: certname defaults to hostname when not provided
## Getting started as a developer
If you intend to expand the data model, you should be familiar with
how it works.
[Data model Documentation](https://github.com/bodepd/scenario_node_terminus/blob/master/README.md)
There are many ways you may wish to extend the data model.
- adding new scenarios
- adding new backends for openstack components
- updating default data
- Adding new data mappings
### Adjusting Scenarios
New scenarios should be added here:
data/scenario/<new_scenario>.yaml
When you add a new scenario, you also need to consider what data mappings
and hiera data defaults should be supplied with that scenario.
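As a rough sketch only (the exact schema is defined by the scenario node terminus linked above, and the scenario, role and class group names here are hypothetical), a scenario file maps each role to the classes and class groups it should receive:
# data/scenario/my_scenario.yaml (hypothetical)
roles:
  controller:
    class_groups:
      - cinder_controller
    classes:
      - memcached
  compute:
    class_groups:
      - cinder_volume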
### Adding new global config
When you add new global config, you should consider the following:
* are there additional roles that should be added when this data is set?
* should classes be added to any existing roles?
* are there specific data mappings that should be added?
* are there defaults that should be set for this data?
You will also need to add this data to your hierarchy.
### Setting data defaults
The default value to be provided for a class parameter should be supplied
in the hiera\_data directory.
First, identify when the default value should be set.
1. If it should be set by default, it belongs in common.yaml
2. If this default is specific to a scenario, it should be set in scenario/<scenario\_name>.yaml
3. If it is based on a global parameter, it should be supplied in the hiera data file for that
parameter.
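For example (a sketch using keys that appear elsewhere in this data model; the exact values are illustrative), the first and third cases would look like:
# data/hiera_data/common.yaml -- a default that applies everywhere
verbose: false
# data/hiera_data/cinder_backend/iscsi.yaml -- only consulted when the
# cinder_backend global is set to iscsi
cinder_volumes_name: cinder-volumes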
### Setting user specific data
All data that a user should supply should be set as a data mapping.
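As an illustration of the shape of a data mapping (this mirrors the existing verbose mapping in the data_mappings directory), a single user-facing key fans out to every class parameter it should populate:
verbose:
  - cinder::verbose
  - glance::api::verbose
  - nova::verbose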
## CI/CD specific constructs:
### nodes
Nodes are currently used to express the nodes that can be built
in order to test deployments of various scenarios. This is currently
used for deployments for CI.
Casual users should be able to ignore this.
We are currently performing research to see if this part of the data
should be replaced by a Heat template.
## scenario command line tools
For a full list of debugging tools, run:
puppet help scenario
More in-depth documentation can be found [here](https://github.com/bodepd/scenario_node_terminus#command-line-debugging-tools).

View File

@ -1 +0,0 @@
do swift please :)

View File

@ -1,4 +0,0 @@
# what is this?
Class groups are intended to be a place where we can group lists of classes
together as sets that can be deployed as part of your roles.
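For instance, a class group file is just a YAML file listing the classes (or other class groups) it pulls in; the ceph group defined alongside this README is simply:
class_groups:
  - ceph_mon
  - ceph_osd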

View File

@ -1,7 +0,0 @@
classes:
- apache
- collectd
# - coi::profiles::cobbler_server
- coi::profiles::cache_server
- coi::profiles::puppet::master
- graphite

View File

@ -1,4 +0,0 @@
classes:
- ceilometer
- ceilometer::agent::auth
- ceilometer::agent::compute

View File

@ -1,13 +0,0 @@
classes:
# NOTE: uncomment this to include mongo
# class definition if used as backend
# - mongodb
- ceilometer
- ceilometer::keystone::auth
- ceilometer::db
- ceilometer::collector
- ceilometer::agent::auth
- ceilometer::agent::central
- ceilometer::api
- ceilometer::alarm::notifier
- ceilometer::alarm::evaluator

View File

@ -1,3 +0,0 @@
class_groups:
- ceph_mon
- ceph_osd

View File

@ -1,2 +0,0 @@
classes:
- cephdeploy::mon

View File

@ -1,2 +0,0 @@
classes:
- cephdeploy::osdwrapper

View File

@ -1,3 +0,0 @@
class_groups:
- cinder_controller
- cinder_volume

View File

@ -1,5 +0,0 @@
classes:
- cinder
- cinder::api
- cinder::config
- cinder::scheduler

View File

@ -1,5 +0,0 @@
classes:
- cinder
- cinder::volume
- cinder::config
- "cinder::volume::%{cinder_backend}"

View File

@ -1,5 +0,0 @@
classes:
- openstacklib::firewall
- openstacklib::firewall::base
- openstacklib::firewall::ssh
- openstacklib::firewall::compute

View File

@ -1,16 +0,0 @@
classes:
- openstacklib::firewall
- openstacklib::firewall::base
- openstacklib::firewall::ssh
- openstacklib::firewall::memcached
- openstacklib::firewall::nova
- openstacklib::firewall::dhcp
- openstacklib::firewall::keystone
- openstacklib::firewall::glance
- openstacklib::firewall::heat
- openstacklib::firewall::neutron
- openstacklib::firewall::cinder
- openstacklib::firewall::rabbitmq
- openstacklib::firewall::dashboard
- openstacklib::firewall::keepalived
- galera::firewall

View File

@ -1,4 +0,0 @@
classes:
- galera
- mysql::server::account_security
- coi::profiles::openstack::databases::mysql

View File

@ -1,9 +0,0 @@
classes:
- glance
- glance::api
- glance::config
- glance::registry
- "glance::backend::%{glance_backend}"
- glance::cache::pruner
- glance::cache::cleaner
- "glance::notify::%{rpc_type}"

View File

@ -1,9 +0,0 @@
classes:
- heat
- heat::api
- heat::api_cfn
- heat::api_cloudwatch
- heat::db::mysql
- heat::engine
- heat::keystone::auth
- heat::keystone::auth_cfn

View File

@ -1,4 +0,0 @@
classes:
- memcached
- horizon
- apache

View File

@ -1,8 +0,0 @@
classes:
- keystone
- keystone::roles::admin
- keystone::config
# the endpoint additions and database additions
# are a little difficult for me
# I am not sure how it fits into the ideal model
- coi::profiles::openstack::endpoints

View File

@ -1,11 +0,0 @@
classes:
- openstacklib::loadbalance::haproxy
- openstacklib::loadbalance::haproxy::mysql
- openstacklib::loadbalance::haproxy::nova
- openstacklib::loadbalance::haproxy::keystone
- openstacklib::loadbalance::haproxy::glance
- openstacklib::loadbalance::haproxy::heat
- openstacklib::loadbalance::haproxy::neutron
- openstacklib::loadbalance::haproxy::cinder
- openstacklib::loadbalance::haproxy::rabbitmq
- openstacklib::loadbalance::haproxy::dashboard

View File

@ -1,4 +0,0 @@
classes:
- mysql::server
- mysql::server::account_security
- coi::profiles::openstack::databases::mysql

View File

@ -1,14 +0,0 @@
classes:
- "%{network_service}"
- "%{network_service}::server"
- "%{network_service}::server::notifications"
- "%{network_service}::config"
- "%{network_service}::agents::metadata"
- "%{network_service}::agents::l3"
- "%{network_service}::agents::lbaas"
- "%{network_service}::agents::vpnaas"
- "%{network_service}::agents::dhcp"
- "%{network_service}::agents::%{network_plugin}"
- "%{network_service}::services::fwaas"
- "%{network_service}::plugins::%{network_plugin}"
- "%{network_service}::config"

View File

@ -1,12 +0,0 @@
classes:
- "%{network_service}"
- "%{network_service}::server"
- "%{network_service}::server::notifications"
- "%{network_service}::agents::metadata"
- "%{network_service}::agents::l3"
- "%{network_service}::agents::lbaas"
- "%{network_service}::agents::vpnaas"
- "%{network_service}::agents::dhcp"
- "%{network_service}::agents::ml2::%{network_plugin}"
- "%{network_service}::services::fwaas"
- "%{network_service}::plugins::ml2"

View File

@ -1,9 +0,0 @@
classes:
- nova
- nova::compute
- nova::config
- "nova::compute::%{compute_type}"
- "nova::network::%{network_service}"
- "nova::compute::%{network_service}"
- "%{network_service}"
- "%{network_service}::agents::%{network_plugin}"

View File

@ -1,10 +0,0 @@
classes:
- "nova"
- "nova::compute"
- "nova::config"
- "nova::compute::%{compute_type}"
- "nova::network::%{network_service}"
- "nova::compute::%{network_service}"
- "%{network_service}::agents::ml2::%{network_plugin}"
- "%{network_service}"
- "%{network_service}::plugins::ml2"

View File

@ -1,12 +0,0 @@
classes:
- nova
- nova::api
- nova::config
- nova::scheduler
- nova::objectstore
- nova::cert
- nova::consoleauth
- nova::conductor
- "nova::network::%{network_service}"
- nova::vncproxy
- nova::scheduler::filter

View File

@ -1,6 +0,0 @@
classes:
- "%{network_service}"
- "%{network_service}::agents::dhcp"
- "%{network_service}::agents::%{network_plugin}"
- "%{network_service}::server::notifications"
- "%{network_service}::config"

View File

@ -1,8 +0,0 @@
#
# these classes are required to
# deploy the simple test script
#
classes:
- openstack::client
- openstack::auth_file
- openstack::test_file

View File

@ -1,54 +0,0 @@
---
# vagrant config
apt_cache: '192.168.242.99' # comment out this line to disable apt cache
apt_mirror: 'us.archive.ubuntu.com'
domain: 'domain.name'
verbose: false
# Set this to 'Ubuntu' if using scenariobuilder, otherwise
# Facter will get very confused.
operatingsystem: ubuntu
scenario: 2_role
# Additional Config available for use by scenariobuilder during
# the bootstrap process.
# [*initial_ntp*]
# This needs to be set before puppet runs, otherwise the certs
# may have the wrong timestamps and agent won't connect to master
# [*installer_repo*]
# These determine which github account+branch to get for the
# puppet_openstack_builder repo when it is cloned onto the
# test VMs as part of the bootstrap script in cloud-init.
# installer_repo: stackforge
# [*installer_branch*]
# installer_branch: master
# [*openstack_version*]
# The release of openstack to install. Note that grizzly will require switching back to Quantum
# Options: icehouse, havana, grizzly
# [*git_protocol*]
# (optional) Git protocol to use when cloning modules on testing VMs
# Defaults to https
# Options: git, https.
# [*apt_mirror_ip*]
# (optional) Sets the apt mirror IP by doing a sed on the image
# [*apt_proxy_host*]
# (optional) Sets apt-get installs and git clones to go via a proxy
# [*apt_proxy_port*]
# (optional) Sets the port for the apt_proxy_host if used
# [*custom_module*]
# (optional) The name of a module to take from a different source
# [*custom_branch*]
# (optional) The branch to use for the custom module
# [*custom_repo*]
# (optional) The github account the custom module is hosted under

View File

@ -1,2 +0,0 @@
cinder_volumes_name:
- cinder::volume::iscsi::volume_group

View File

@ -1,44 +0,0 @@
ceph_deploy_password:
- cephdeploy::ceph_deploy_password
- cephdeploy::client::ceph_deploy_password
ceph_deploy_user:
- cephdeploy::ceph_deploy_user
- cephdeploy::client::ceph_deploy_user
- cephdeploy::mon::ceph_deploy_user
- cephdeploy::osdwrapper::ceph_deploy_user
ceph_monitor_fsid:
- cephdeploy::ceph_monitor_fsid
- cinder::volume::rbd::rbd_secret_uuid
mon_initial_members:
- cephdeploy::mon_initial_members
ceph_monitor_address:
- cephdeploy::ceph_monitor_address
ceph_primary_mon:
- cephdeploy::client::primary_mon
- cephdeploy::mon::ceph_primary_mon
- cephdeploy::osdwrapper::ceph_primary_mon
ceph_public_interface:
- cephdeploy::mon::ceph_public_interface
ceph_cluster_interface:
- cephdeploy::osdwrapper::ceph_cluster_interface
ceph_cluster_name:
- cephdeploy::mon::ceph_cluster_name
ceph_public_network:
- cephdeploy::ceph_public_network
- cephdeploy::mon::ceph_public_network
ceph_cluster_network:
- cephdeploy::ceph_cluster_network
- cephdeploy::osdwrapper::ceph_cluster_network
ceph_monitor_secret:
- cephdeploy::ceph_monitor_secret
glance_ceph_pool:
- cephdeploy::osdwrapper::glance_ceph_pool
cinder_rbd_pool:
- cephdeploy::osdwrapper::cinder_rbd_pool
- cinder::volume::rbd::rbd_pool
ceph_openstack_user:
- cinder::volume::rbd::rbd_user
ceph_configuration_file:
- cinder::volume::rbd::rbd_ceph_conf

View File

@ -1,397 +0,0 @@
debug:
- cinder::debug
- glance::api::debug
- glance::registry::debug
- horizon::django_debug
- keystone::debug
- quantum::debug
- neutron::debug
- neutron::agents::lbaas::debug
- quantum::agents::dhcp::debug
- quantum::agents::metadata::debug
- ceilometer::debug
- heat::debug
- nova::debug
verbose:
- cinder::verbose
- glance::api::verbose
- glance::registry::verbose
- keystone::verbose
- quantum::verbose
- neutron::verbose
- ceilometer::verbose
- heat::verbose
- nova::verbose
admin_tenant:
- keystone::roles::admin::admin_tenant
- openstack::controller::nova_admin_tenant_name
service_tenant:
- keystone::roles::admin::service_tenant
- neutron::server::notifications::nova_admin_tenant_name
nova_admin_username:
- nova::keystone::auth_name
- neutron::server::notifications::nova_admin_username
admin_email:
- keystone::roles::admin::email
- ceilometer::keystone::auth::email
# this needs to be supplied as a default
# b/c the default to guest is kind of annoying
# (and not entirely reasonable)
rpc_user:
- cinder::qpid_username
- cinder::rabbit_userid
- nova::qpid::user
- nova::rabbitmq::userid
- nova::rabbit_userid
- nova::qpid_username
- quantum::rabbit_user
- neutron::rabbit_user
- quantum::qpid_username
- neutron::qpid_username
- ceilometer::rabbit_userid
- ceilometer::qpid_username
- heat::rabbit_userid
- glance::notify::rabbitmq::rabbit_userid
rpc_password:
- cinder::qpid_password
- cinder::rabbit_password
- ceilometer::qpid_password
- ceilometer::rabbit_password
- glance::notify::qpid::qpid_password
- glance::notify::rabbitmq::rabbit_password
- heat::rabbit_password
- neutron::qpid_password
- neutron::rabbit_password
- nova::qpid_password
- nova::rabbit_password
- nova::qpid::qpid_password
- nova::rabbitmq::password
enabled_services:
- coi::profiles::openstack::endpoints::enabled_services
- coi::profiles::openstack::databases::mysql::enabled_services
allowed_hosts:
- ceilometer::db::mysql::allowed_hosts
- cinder::db::mysql::allowed_hosts
- glance::db::mysql::allowed_hosts
- keystone::db::mysql::allowed_hosts
- nova::db::mysql::allowed_hosts
- quantum::db::mysql::allowed_hosts
- neutron::db::mysql::allowed_hosts
- heat::db::mysql::allowed_hosts
#
# The all_in_one mapping of services to each other is assumed to be the
# default. When you move away from all_in_one, override these mappings
# with a custom scenario data mapping.
#
controller_internal_address:
- glance::api::registry_host
- glance::notify::rabbitmq::rabbit_host
- cinder::qpid_hostname
- cinder::rabbit_host
- nova::rabbit_host
- nova::qpid_hostname
- heat::qpid_hostname
- heat::rabbit_host
- quantum::rabbit_host
- quantum::qpid_hostname
- neutron::qpid_hostname
- neutron::rabbit_host
- ceilometer::db::mysql::host
- ceilometer::rabbit_host
- ceilometer::qpid_hostname
- cinder::db::mysql::host
- glance::db::mysql::host
- keystone::db::mysql::host
- nova::db::mysql::host
- quantum::db::mysql::host
- neutron::db::mysql::host
# internal endpoint addresses are the same as this
- cinder::keystone::auth::internal_address
- glance::keystone::auth::internal_address
- nova::keystone::auth::internal_address
- heat::keystone::auth::internal_address
- heat::keystone::auth_cfn::internal_address
- cinder::api::keystone_auth_host
- glance::api::auth_host
- glance::registry::auth_host
- nova::api::auth_host
- quantum::server::auth_host
- neutron::server::auth_host
- quantum::keystone::auth::internal_address
- neutron::keystone::auth::internal_address
- openstack::auth_file::controller_node
- postgresql::config::listen_addresses
# deprecated past 0.x
- mysql::config::bind_address
- quantum::agents::metadata::metadata_ip
- neutron::agents::metadata::metadata_ip
- openstack::swift::proxy::keystone_host
- swift::keystone::auth::internal_address
- ceilometer::keystone::auth::internal_address
- ceilometer::api::keystone_host
- heat::keystone_host
- coe::base::controller_node_internal
- heat::db::mysql::host
controller_public_address:
- nova::vncproxy::host
- nova::compute::vncproxy_host
- cinder::keystone::auth::public_address
- glance::keystone::auth::public_address
- nova::keystone::auth::public_address
- heat::keystone::auth::public_address
- heat::keystone::auth_cfn::public_address
- quantum::keystone::auth::public_address
- neutron::keystone::auth::public_address
- swift::keystone::auth::public_address
- ceilometer::keystone::auth::public_address
- openstack::swift::proxy::swift_proxy_net_ip
- horizon::fqdn
- horizon::servername
controller_public_protocol:
- ceilometer::keystone::auth::public_protocol
- cinder::keystone::auth::public_protocol
- glance::keystone::auth::public_protocol
- heat::keystone::auth::public_protocol
- heat::keystone::auth_cfn::public_protocol
- neutron::keystone::auth::public_protocol
- nova::keystone::auth::public_protocol
- swift::keystone::auth::public_protocol
controller_admin_address:
- cinder::keystone::auth::admin_address
- glance::keystone::auth::admin_address
- nova::keystone::auth::admin_address
- heat::keystone::auth::admin_address
- heat::keystone::auth_cfn::admin_address
- quantum::keystone::auth::admin_address
- neutron::keystone::auth::admin_address
- swift::keystone::auth::admin_address
- ceilometer::keystone::auth::admin_address
controller_public_url:
- keystone::endpoint::public_url
"%{controller_public_protocol}://%{controller_public_address}:8774/v2":
- neutron::server::notifications::nova_url
controller_admin_url:
- keystone::endpoint::admin_url
controller_internal_url:
- keystone::endpoint::internal_url
"%{controller_public_url}/v2.0/":
- horizon::keystone_url
"%{controller_internal_url}/v2.0/":
- neutron::server::notifications::nova_admin_auth_url
swift_local_net_ip:
- openstack::swift::proxy::swift_local_net_ip
- openstack::swift::storage-node::swift_local_net_ip
# right now, the sql connection creates a tight coupling between the scenario
# and the key used to retrieve its password. This is an indicator that this
# needs to be changed.
"%{db_type}://cinder:%{cinder_db_password}@%{controller_internal_address}/cinder":
- cinder::database_connection
"%{db_type}://glance:%{glance_db_password}@%{controller_internal_address}/glance":
- glance::api::sql_connection
- glance::registry::sql_connection
"%{db_type}://keystone:%{keystone_db_password}@%{controller_internal_address}/keystone":
- keystone::sql_connection
"%{db_type}://nova:%{nova_db_password}@%{controller_internal_address}/nova":
- nova::database_connection
"%{db_type}://%{network_service}:%{network_db_password}@%{controller_internal_address}/%{network_service}":
- quantum::plugins::ovs::sql_connection
- quantum::plugins::linuxbridge::sql_connection
- neutron::server::database_connection
"%{ceilometer_db_type}://ceilometer:%{ceilometer_db_password}@%{controller_internal_address}/ceilometer":
# NOTE: Workaround for connection issues with mongo
#"%{ceilometer_db_type}://127.0.0.1:27017/ceilometer":
- ceilometer::db::database_connection
"%{db_type}://heat:%{heat_db_password}@%{controller_internal_address}/heat":
- heat::sql_connection
"http://%{controller_internal_address}:9696":
- nova::network::quantum::quantum_url
- nova::network::neutron::neutron_url
"http://%{controller_internal_address}:35357/v2.0":
- nova::network::quantum::quantum_admin_auth_url
- nova::network::neutron::neutron_admin_auth_url
- quantum::agents::metadata::auth_url
- neutron::agents::metadata::auth_url
- ceilometer::agent::compute::auth_url
- ceilometer::agent::auth::auth_url
"http://%{controller_internal_address}:5000/v2.0/ec2tokens":
- heat::keystone_ec2_uri
"%{controller_internal_address}:9292":
- nova::glance_api_servers
- cinder::glance::glance_api_servers
"http://%{controller_internal_address}:8000":
- heat::engine::heat_metadata_server_url
"http://%{controller_internal_address}:8000/v1/waitcondition":
- heat::engine::heat_waitcondition_server_url
"http://%{controller_internal_address}:8003":
- heat::engine::heat_watch_server_url
"%{controller_public_protocol}://%{controller_public_address}:5000/v2.0/":
- keystone::public_endpoint
"%{controller_public_protocol}://%{controller_admin_address}:35357/v2.0/":
- keystone::admin_endpoint
# cisco specific data
build_node_name:
- coe::base::build_node_name
- collectd::graphitehost
- graphite::graphitehost
domain_name:
- coe::base::domain_name
public_interface:
- collectd::management_interface
pocket:
- coe::base::pocket
- coi::profiles::cobbler_server::pocket
openstack_release:
- coe::base::openstack_release
- coi::profiles::cobbler_server::openstack_release
openstack_repo_location:
- coe::base::openstack_repo_location
- coi::profiles::cobbler_server::openstack_repo_location
supplemental_repo:
- coe::base::supplemental_repo
- coi::profiles::cobbler_server::supplemental_repo
puppet_repo:
- coi::profiles::openstack::base::puppet_repo
- coe::base::puppet_repo
puppet_repo_location:
- coi::profiles::openstack::base::puppet_repo_location
- coe::base::puppet_repo_location
use_syslog:
- ceilometer::use_syslog
- cinder::use_syslog
- glance::registry::use_syslog
- glance::api::use_syslog
- heat::use_syslog
- keystone::use_syslog
- neutron::use_syslog
- nova::use_syslog
log_facility:
- ceilometer::log_facility
- cinder::log_facility
- glance::registry::log_facility
- glance::api::log_facility
- heat::log_facility
- keystone::log_facility
- neutron::log_facility
- nova::log_facility
enable_nova:
- nova::cert::enabled
- nova::api::enabled
- nova::compute::enabled
- nova::conductor::enabled
- nova::consoleauth::enabled
- nova::network::enabled
- nova::objectstore::enabled
- nova::qpid::enabled
- nova::scheduler::enabled
- nova::vncproxy::enabled
- nova::volume::enabled
enable_lbaas:
- neutron::agents::lbaas::enabled
enable_vpnaas:
- neutron::agents::vpnaas::enabled
enable_fwaas:
- neutron::services::fwaas::enabled
horizon_neutron_options:
- horizon::neutron_options
interface_driver:
- neutron::agents::vpnaas::interface_driver
external_network_bridge:
- neutron::agents::vpnaas::external_network_bridge
package_ensure:
- cinder::api::package_ensure
- cinder::scheduler::package_ensure
- cinder::volume::package_ensure
- glance::package_ensure
- keystone::package_ensure
- nova::api::ensure_package
- nova::cert::ensure_package
- nova::client::ensure
- nova::compute::ensure_package
- nova::conductor::ensure_package
- nova::consoleauth::ensure_package
- nova::ensure_package
- nova::network::ensure_package
- nova::objectstore::ensure_package
- nova::scheduler::ensure_package
- nova::vncproxy::ensure_package
- ceilometer::package_ensure
- heat::package_ensure
- ntp::package_ensure
- neutron::agents::vpnaas::package_ensure
region:
- cinder::keystone::auth::region
- glance::keystone::auth::region
- nova::keystone::auth::region
- quantum::keystone::auth::region
- neutron::keystone::auth::region
- neutron::server::notifications::nova_region_name
- keystone::endpoint::region
- nova::network::quantumclient::quantum_region_name
- nova::network::neutronclient::neutron_region_name
- quantum::agents::metadata::auth_region
- neutron::agents::metadata::auth_region
- ceilometer::keystone::auth::region
- ceilometer::agent::auth::auth_region
- heat::keystone::auth::region
- heat::keystone::auth_cfn::region
- nova::network::neutron::neutron_region_name
- neutron::keystone::auth::region
- openstack-ha::controller::region
- openstack::controller::region
- openstack::keystone::region
- openstack::all::region
- openstack::auth_file::region_name
- swift::keystone::auth::region
neutron_sync_db:
- neutron::server::sync_db
# SSL support
enable_ssl:
- keystone::enable_ssl
ssl_certfile:
- keystone::ssl_certfile
ssl_keyfile:
- keystone::ssl_keyfile
ssl_ca_certs:
- keystone::ssl_ca_certs
ssl_ca_key:
- keystone::ssl_ca_key
ssl_cert_subject:
- keystone::ssl_cert_subject
# MySQL module version settings
puppet_mysql_version:
- cinder::db::mysql::mysql_module
- cinder::mysql_module
- glance::api::mysql_module
- glance::db::mysql::mysql_module
- glance::registry::mysql_module
- heat::db::mysql::mysql_module
- heat::mysql_module
- neutron::db::mysql::mysql_module
- neutron::server::mysql_module
- keystone::db::mysql::mysql_module
- keystone::mysql_module
- ceilometer::db::mysql::mysql_module
- ceilometer::db::mysql_module
- nova::db::mysql::mysql_module
- nova::mysql_module
# MySQL char set and collation order
mysql_default_charset:
- ceilometer::db::mysql::charset
- cinder::db::mysql::charset
- glance::db::mysql::charset
- heat::db::mysql::charset
- keystone::db::mysql::charset
- neutron::db::mysql::charset
- nova::db::mysql::charset
mysql_default_collation:
- ceilometer::db::mysql::collate
- cinder::db::mysql::collate
- glance::db::mysql::collate
- heat::db::mysql::collate
- keystone::db::mysql::collate
- neutron::db::mysql::collate
- nova::db::mysql::collate
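
For orientation, a minimal sketch of how a data mapping file like the one above is meant to be read, using hypothetical values: each top-level key names a single global setting supplied through the hiera data, and that one value fans out to every class parameter listed beneath it, while quoted keys such as "%{db_type}://..." compose their value from other globals via hiera interpolation.

# hypothetical global values supplied in a user-level hiera_data file
verbose: true
db_type: mysql
cinder_db_password: example_pass
# excerpt of the mapping above: both parameters receive the value of 'verbose'
verbose:
- cinder::verbose
- nova::verbose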

View File

@ -1,93 +0,0 @@
sql_idle_timeout:
- keystone::idle_timeout
- glance::registry::sql_idle_timeout
- glance::api::sql_idle_timeout
- nova::database_idle_timeout
- cinder::sql_idle_timeout
- quantum::plugins::ovs::sql_idle_timeout
- neutron::server::database_idle_timeout
- heat::database_idle_timeout
rabbit_hosts:
- quantum::rabbit_hosts
- neutron::rabbit_hosts
- nova::rabbit_hosts
- cinder::rabbit_hosts
- heat::rabbit_hosts
localhost:
- ceilometer::db::mysql::host
- cinder::db::mysql::host
- glance::db::mysql::host
- keystone::db::mysql::host
- nova::db::mysql::host
- quantum::db::mysql::host
- neutron::db::mysql::host
allowed_hosts:
- ceilometer::db::mysql::allowed_hosts
- cinder::db::mysql::allowed_hosts
- glance::db::mysql::allowed_hosts
- keystone::db::mysql::allowed_hosts
- nova::db::mysql::allowed_hosts
- quantum::db::mysql::allowed_hosts
- neutron::db::mysql::allowed_hosts
bind_address:
- galera::local_ip
- galera::bind_address
- horizon::bind_address
- horizon::cache_server_ip
- ceilometer::api::host
- cinder::api::bind_host
- glance::registry::bind_host
- glance::api::bind_host
- nova::vncproxy::host
- nova::api::api_bind_address
- mysql::config::bind_address
- keystone::bind_host
- memcached::listen_ip
- quantum::bind_host
- neutron::bind_host
- heat::api::bind_host
- heat::api_cfn::bind_host
- heat::api_cloudwatch::bind_host
- openstack::swift::proxy::swift_proxy_net_ip
"mysql://cinder:%{cinder_db_password}@%{controller_internal_address}/cinder?charset=utf8":
- cinder::sql_connection
"mysql://glance:%{glance_db_password}@%{controller_internal_address}/glance":
- glance::api::sql_connection
- glance::registry::sql_connection
"mysql://keystone:%{keystone_db_password}@%{controller_internal_address}/keystone":
- keystone::sql_connection
"mysql://nova:%{nova_db_password}@%{controller_internal_address}/nova":
- nova::database_connection
"mysql://%{network_service}:%{network_db_password}@%{controller_internal_address}/%{network_service}?charset=utf8":
- quantum::plugins::ovs::sql_connection
- quantum::plugins::linuxbridge::sql_connection
- neutron::server::database_connection
"mysql://heat:%{heat_db_password}@%{controller_internal_address}/heat":
- heat::sql_connection
"http://%{controller_public_address}:5000/v2.0/":
- glance::backend::swift::swift_store_auth_address
- glance::api::auth_url
controller_internal_address:
- openstack-ha::load-balancer::controller_virtual_ip
swift_admin_address:
- swift::keystone::auth::admin_address
swift_internal_address:
- swift::keystone::auth::internal_address
- openstack-ha::load-balancer::swift_proxy_virtual_ip
swift_public_address:
- swift::keystone::auth::public_address
swift_storage_interface:
- openstack-ha::load-balancer::swift_proxy_interface
private_interface:
- openstack-ha::load-balancer::controller_interface
controller_names:
- nova::rabbitmq::cluster_disk_nodes
- openstack-ha::load-balancer::controller_names
galera_master_ipaddress:
- openstack-ha::load-balancer::galera_master_ipaddress
galera_backup_ipaddresses:
- openstack-ha::load-balancer::galera_backup_ipaddresses
galera_master_name:
- openstack-ha::load-balancer::galera_master_name
galera_backup_names:
- openstack-ha::load-balancer::galera_backup_names

View File

@ -1,6 +0,0 @@
ceph_openstack_user:
- glance::backend::rbd::rbd_store_user
ceph_configuration_file:
- glance::backend::rbd::rbd_store_ceph_conf
glance_ceph_pool:
- glance::backend::rbd::rbd_store_pool

View File

@ -1,73 +0,0 @@
# Manages all credentials using a single password.
# It allows for fewer parameters, but has some
# security implications.
#
secret_key:
- horizon::secret_key
- quantum::agents::metadata::shared_secret
- neutron::agents::metadata::shared_secret
- nova::api::quantum_metadata_proxy_shared_secret
- nova::api::neutron_metadata_proxy_shared_secret
# TODO this should place the data right into the underlying class
- openstack::swift::proxy::swift_hash_suffix
- openstack::swift::storage-node::swift_hash_suffix
password:
- cinder::rabbit_password
- cinder::qpid_password
- nova::qpid::password
- nova::rabbitmq::password
- nova::rabbit_password
- nova::qpid_password
- quantum::rabbit_password
- quantum::qpid_password
- neutron::rabbit_password
- neutron::qpid_password
- glance::notify::rabbitmq::rabbit_password
- glance::notify::qpid::qpid_password
- cinder::db::mysql::password
- cinder::db::postgresql::password
- galera::root_password
- glance::db::mysql::password
- glance::db::postgresql::password
- keystone::db::mysql::password
- keystone::db::postgresql::password
- nova::db::mysql::password
- nova::db::postgresql::password
- quantum::db::mysql::password
- quantum::db::postgresql::password
- neutron::db::mysql::password
- neutron::db::postgresql::password
- mysql::config::root_password
- postgresql::config::postgres_password
- cinder::api::keystone_password
- cinder::keystone::auth::password
- glance::keystone::auth::password
- glance::api::keystone_password
- glance::registry::keystone_password
- nova::keystone::auth::password
- swift::keystone::auth::password
# TODO this should place the data into the next layer down
- openstack::swift::proxy::swift_user_password
- nova::api::admin_password
- keystone::admin_token
- keystone::roles::admin::password
- quantum::keystone::auth::password
- neutron::keystone::auth::password
- quantum::server::auth_password
- neutron::server::auth_password
- neutron::server::notifications::nova_admin_password
- nova::network::quantum::quantum_admin_password
- nova::network::neutron::neutron_admin_password
- quantum::agents::metadata::auth_password
- neutron::agents::metadata::auth_password
- openstack::auth_file::admin_password
- ceilometer::keystone::auth::password
- ceilometer::api::keystone_password
- ceilometer::db::mysql::password
- ceilometer::agent::auth::auth_password
- ceilometer::qpid_password
- ceilometer::rabbit_password
- heat::db::mysql::password
- heat::keystone::auth::password
- heat::keystone::auth_cfn::password
- heat::rabbit_password
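
A minimal sketch of the hiera data this single-password mapping expects, assuming the single-password style is selected; the key names come from the mapping above and the values are placeholders only.

# hypothetical user-level hiera_data
password: one_shared_password
secret_key: one_shared_secret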

View File

@ -1,91 +0,0 @@
#
# sets individual passwords for each login
#
cinder_db_password:
- cinder::db::mysql::password
- cinder::db::postgresql::password
glance_db_password:
- glance::db::mysql::password
- glance::db::postgresql::password
keystone_db_password:
- keystone::db::mysql::password
- keystone::db::postgresql::password
nova_db_password:
- nova::db::mysql::password
- nova::db::postgresql::password
network_db_password:
- quantum::db::mysql::password
- quantum::db::postgresql::password
- neutron::db::mysql::password
- neutron::db::postgresql::password
database_root_password:
- mysql::config::root_password
- galera::root_password
- postgresql::config::postgres_password
cinder_service_password:
- cinder::api::keystone_password
- cinder::keystone::auth::password
glance_service_password:
- glance::keystone::auth::password
- glance::api::keystone_password
- glance::registry::keystone_password
nova_service_password:
- nova::keystone::auth::password
- nova::api::admin_password
- neutron::server::notifications::nova_admin_password
admin_token:
- keystone::admin_token
admin_password:
- keystone::roles::admin::password
- openstack::auth_file::admin_password
network_service_password:
- quantum::keystone::auth::password
- neutron::keystone::auth::password
- quantum::server::auth_password
- neutron::server::auth_password
- nova::network::quantum::quantum_admin_password
- nova::network::neutron::neutron_admin_password
- quantum::agents::metadata::auth_password
- neutron::agents::metadata::auth_password
swift_service_password:
- swift::keystone::auth::password
- openstack::swift::proxy::swift_user_password
swift_hash:
- openstack::swift::proxy::swift_hash_suffix
- openstack::swift::storage-node::swift_hash_suffix
- swift::swift_hash_suffix
rpc_password:
- cinder::rabbit_password
- cinder::qpid_password
- nova::qpid::password
- nova::rabbitmq::password
- nova::rabbit_password
- nova::qpid_password
- quantum::rabbit_password
- quantum::qpid_password
- neutron::rabbit_password
- neutron::qpid_password
- ceilometer::rabbit_password
- ceilometer::qpid_password
- heat::rabbit_password
metadata_shared_secret:
- quantum::agents::metadata::shared_secret
- neutron::agents::metadata::shared_secret
- nova::api::quantum_metadata_proxy_shared_secret
- nova::api::neutron_metadata_proxy_shared_secret
horizon_secret_key:
- horizon::secret_key
ceilometer_db_password:
- ceilometer::db::mysql::password
ceilometer_metering_secret:
- ceilometer::metering_secret
ceilometer_service_password:
- ceilometer::keystone::auth::password
- ceilometer::api::keystone_password
- ceilometer::agent::auth::auth_password
heat_db_password:
- heat::db::mysql::password
heat_service_password:
- heat::keystone::auth::password
- heat::keystone::auth_cfn::password
- heat::keystone_password
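
By contrast, a minimal sketch of the per-service credentials the individual-password mapping above expects; the key names come from the mapping and the values are placeholders only.

# hypothetical user-level hiera_data
cinder_db_password: cinder_example
glance_service_password: glance_example
admin_password: admin_example
admin_token: example_token
metadata_shared_secret: example_metadata_secret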

View File

@ -1,14 +0,0 @@
# all of the connection specific data-mappings are
# stored here b/c all connections on via the controller
# addresses in this scenario
#
swift_admin_address:
- swift::keystone::auth::admin_address
swift_internal_address:
- swift::keystone::auth::internal_address
swift_public_address:
- swift::keystone::auth::public_address
- openstack::swift::proxy::swift_proxy_net_ip
swift_local_net_ip:
- openstack::swift::proxy::swift_local_net_ip
- openstack::swift::storage-node::swift_local_net_ip

View File

@ -1,10 +0,0 @@
# mappings for swift in a full_ha environment
#
# note that much of what maps locally in other scenarios gets mapped
# to the proxy virtual address
#
swift_proxy_net_ip:
- swift::ringserver::local_net_ip
- openstack::swift::storage-node::ring_server
- openstack::swift::proxy::swift_local_net_ip

View File

@ -1,195 +0,0 @@
cluster_names:
- quantum::rabbit_hosts
- neutron::rabbit_hosts
- nova::rabbit_hosts
- cinder::rabbit_hosts
- rabbitmq::cluster_nodes
- openstacklib::loadbalance::haproxy::cluster_names
- openstacklib::loadbalance::haproxy::ceilometer::cluster_names
- openstacklib::loadbalance::haproxy::cinder::cluster_names
- openstacklib::loadbalance::haproxy::dashboard::cluster_names
- openstacklib::loadbalance::haproxy::glance::cluster_names
- openstacklib::loadbalance::haproxy::heat::cluster_names
- openstacklib::loadbalance::haproxy::keystone::cluster_names
- openstacklib::loadbalance::haproxy::mysql::cluster_names
- openstacklib::loadbalance::haproxy::neutron::cluster_names
- openstacklib::loadbalance::haproxy::nova::cluster_names
- openstacklib::loadbalance::haproxy::rabbitmq::cluster_names
mysql_module:
- ceilometer::db::mysql_module
- ceilometer::db::mysql::mysql_module
- cinder::db::mysql::mysql_module
- glance::db::mysql::mysql_module
- glance::api::mysql_module
- glance::registry::mysql_module
- heat::db::mysql::mysql_module
- heat::mysql_module
- keystone::db::mysql::mysql_module
- keystone::mysql_module
- neutron::db::mysql::mysql_module
- neutron::server::mysql_module
- nova::mysql_module
- nova::db::mysql::mysql_module
control_servers_private:
- galera::galera_servers
- openstacklib::loadbalance::haproxy::mysql::cluster_addresses
- openstacklib::loadbalance::haproxy::rabbitmq::cluster_addresses
- openstacklib::loadbalance::haproxy::keystone::cluster_addresses
control_servers_public:
- openstacklib::loadbalance::haproxy::cluster_addresses
- openstacklib::loadbalance::haproxy::ceilometer::cluster_addresses
- openstacklib::loadbalance::haproxy::cinder::cluster_addresses
- openstacklib::loadbalance::haproxy::dashboard::cluster_addresses
- openstacklib::loadbalance::haproxy::glance::cluster_addresses
- openstacklib::loadbalance::haproxy::heat::cluster_addresses
- openstacklib::loadbalance::haproxy::neutron::cluster_addresses
- openstacklib::loadbalance::haproxy::nova::cluster_addresses
domain_name:
- openstacklib::hosts::domain
deploy_control_firewall_source:
- openstacklib::firewall::edeploy::source
- openstacklib::firewall::puppet::source
- openstacklib::firewall::ssh::source
public_control_firewall_source:
- openstacklib::firewall::cinder::source
- openstacklib::firewall::ceilometer::source
- openstacklib::firewall::dashboard::source
- openstacklib::firewall::glance::source
- openstacklib::firewall::heat::source
- openstacklib::firewall::keystone::source
- openstacklib::firewall::nova::source
- openstacklib::firewall::neutron::source
private_control_firewall_source:
- openstacklib::firewall::rabbitmq::source
- galera::firewall::source
- openstacklib::firewall::cinder::internal_source
- openstacklib::firewall::ceilometer::internal_source
- openstacklib::firewall::memcached::source
- openstacklib::firewall::glance::internal_source
- openstacklib::firewall::heat::internal_source
- openstacklib::firewall::keystone::internal_source
- openstacklib::firewall::nova::internal_source
- openstacklib::firewall::neutron::internal_source
public_bind_ip:
- cinder::api::bind_host
- glance::api::bind_host
- glance::registry::bind_host
- heat::api_cfn::bind_host
- heat::api_cloudwatch::bind_host
- heat::api::bind_host
- neutron::bind_host
- nova::api::api_bind_address
- nova::api::metadata_listen
- nova::objectstore::bind_address
- nova::vncproxy::host
- horizon::wsgi::apache::bind_address
- horizon::bind_address
private_bind_ip:
- galera::bind_address
- galera::local_ip
- rabbitmq::node_ip_address
- keystone::admin_bind_host
- keystone::public_bind_host
public_vip:
- glance::api::registry_host
- openstacklib::loadbalance::haproxy::cluster_public_vip
- openstacklib::loadbalance::haproxy::ceilometer::vip
- openstacklib::loadbalance::haproxy::cinder::vip
- openstacklib::loadbalance::haproxy::dashboard::vip
- openstacklib::loadbalance::haproxy::glance::vip
- openstacklib::loadbalance::haproxy::heat::vip
- openstacklib::loadbalance::haproxy::keystone::vip
- openstacklib::loadbalance::haproxy::nova::vip
- openstacklib::loadbalance::haproxy::neutron::vip
private_vip:
- openstacklib::loadbalance::haproxy::cluster_private_vip
- openstacklib::loadbalance::haproxy::mysql::vip
- openstacklib::loadbalance::haproxy::rabbitmq::vip
- openstacklib::loadbalance::haproxy::keystone::internal_vip
- openstacklib::loadbalance::haproxy::ceilometer::internal_vip
- openstacklib::loadbalance::haproxy::cinder::internal_vip
- openstacklib::loadbalance::haproxy::dashboard::internal_vip
- openstacklib::loadbalance::haproxy::glance::internal_vip
- openstacklib::loadbalance::haproxy::heat::internal_vip
- openstacklib::loadbalance::haproxy::nova::internal_vip
- openstacklib::loadbalance::haproxy::neutron::internal_vip
- glance::notify::rabbitmq::rabbit_host
- cinder::qpid_hostname
- cinder::rabbit_host
- nova::rabbit_host
- nova::qpid_hostname
- heat::qpid_hostname
- heat::rabbit_host
- quantum::rabbit_host
- quantum::qpid_hostname
- neutron::qpid_hostname
- neutron::rabbit_host
- ceilometer::db::mysql::host
- ceilometer::rabbit_host
- ceilometer::qpid_hostname
- cinder::db::mysql::host
- glance::db::mysql::host
- keystone::db::mysql::host
- nova::db::mysql::host
- quantum::db::mysql::host
- neutron::db::mysql::host
- cinder::keystone::auth::internal_address
- glance::keystone::auth::internal_address
- nova::keystone::auth::internal_address
- heat::keystone::auth::internal_address
- heat::keystone::auth_cfn::internal_address
- cinder::api::keystone_auth_host
- keystone::endpoint::internal_address
- glance::api::auth_host
- glance::registry::auth_host
- horizon::keystone_host
- nova::api::auth_host
- quantum::server::auth_host
- neutron::server::auth_host
- quantum::keystone::auth::internal_address
- neutron::keystone::auth::internal_address
- openstack::auth_file::controller_node
- quantum::agents::metadata::metadata_ip
- neutron::agents::metadata::metadata_ip
- openstack::swift::proxy::keystone_host
- swift::keystone::auth::internal_address
- ceilometer::keystone::auth::internal_address
- ceilometer::api::keystone_host
- heat::keystone_host
- heat::db::mysql::host
- cinder::keystone::auth::admin_address
- glance::keystone::auth::admin_address
- nova::keystone::auth::admin_address
- heat::keystone::auth::admin_address
- heat::keystone::auth_cfn::admin_address
- keystone::endpoint::admin_address
- quantum::keystone::auth::admin_address
- neutron::keystone::auth::admin_address
- swift::keystone::auth::admin_address
- ceilometer::keystone::auth::admin_address
- openstacklib::openstack::auth_file::controller_node
openstack_release:
- openstacklib::compat::nova::openstack_release
- openstacklib::compat::keystone::openstack_release
"%{private_protocol}://%{private_vip}:8774/v2":
- neutron::server::notifications::nova_url
"%{private_protocol}://%{private_vip}:35357/v2.0/":
- neutron::server::notifications::nova_admin_auth_url
admin_password:
- openstacklib::openstack::auth_file::admin_password
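
A minimal sketch of the cluster-level values this full HA mapping expects, with entirely hypothetical host names and addresses.

# hypothetical user-level hiera_data for an HA control plane
cluster_names:
- control01
- control02
- control03
control_servers_public:
- 192.0.2.11
- 192.0.2.12
- 192.0.2.13
control_servers_private:
- 10.0.0.11
- 10.0.0.12
- 10.0.0.13
public_vip: 192.0.2.10
private_vip: 10.0.0.10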

View File

@ -1,15 +0,0 @@
# all of the connection-specific data mappings are
# stored here because all connections go via the controller
# addresses in this scenario
#
swift_admin_address:
- swift::keystone::auth::admin_address
swift_internal_address:
- swift::keystone::auth::internal_address
- openstack::swift::storage-node::ring_server
swift_public_address:
- swift::keystone::auth::public_address
- openstack::swift::proxy::swift_proxy_net_ip
swift_local_net_ip:
- openstack::swift::proxy::swift_local_net_ip
- openstack::swift::storage-node::swift_local_net_ip

View File

@ -1,8 +0,0 @@
"%{external_network_bridge}:%{external_interface}":
- quantum::agents::ovs::bridge_uplinks
- neutron::agents::ovs::bridge_uplinks
- neutron::agents::ml2::ovs::bridge_uplinks
internal_ip:
- quantum::agents::ovs::local_ip
- neutron::agents::ovs::local_ip
- neutron::agents::ml2::ovs::local_ip

View File

@ -1,3 +0,0 @@
"%{external_network_bridge}:%{external_interface}":
- quantum::agents::ovs::bridge_uplinks
- neutron::agents::ovs::bridge_uplinks

View File

@ -1,21 +0,0 @@
---
application:
- openstack
password_management: individual
db_type: mysql
ceilometer_db_type: mysql
rpc_type: rabbitmq
cinder_backend: iscsi
glance_backend: file
compute_type: libvirt
# networking options
network_service: neutron
# supports linuxbridge and ovs
network_plugin: ovs
# supports single-flat, provider-router, and per-tenant-router
network_type: per-tenant-router
# supports gre or vlan
tenant_network_type: gre
enable_ha: false
install_tempest: false
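
As the comments above note, network_plugin supports linuxbridge and ovs, network_type supports single-flat, provider-router, and per-tenant-router, and tenant_network_type supports gre or vlan. A minimal sketch of overriding those defaults; where the override lives is deployment-specific, so this is only an illustration.

network_plugin: linuxbridge
network_type: provider-router
tenant_network_type: vlan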

View File

@ -1,15 +0,0 @@
---
db_type: galera
rpc_type: rabbitmq
cinder_backend: iscsi
glance_backend: swift
compute_type: libvirt
# networking options
network_service: neutron
# supports linuxbridge and ovs
network_plugin: ovs
# supports single-flat, provider-router, and per-tenant-router
network_type: provider-router
# supports gre or vlan
tenant_network_type: vlan
enable_ha: true

View File

@ -1,17 +0,0 @@
---
db_type: mysql
rpc_type: rabbitmq
cinder_backend: iscsi
glance_backend: file
compute_type: libvirt
# networking options
network_service: neutron
# supports linuxbridge and ovs
network_plugin: ovs
# supports single-flat, provider-router, and per-tenant-router
network_type: provider-router
# supports gre or vlan
tenant_network_type: gre
password_management: individual
install_tempest: false

View File

@ -1,242 +0,0 @@
# The version of the puppetmaster package to be installed
# initially on the build node. Note that this may be overridden
# later by package updates (such as an apt-get upgrade). Can be
# set to 'present' to use whatever is the latest version in your
# package repository.
puppet::master::version: '3.2.3-1puppetlabs1'
# The fully qualified domain name of the Puppet Master node.
puppet_master_address: "%{fqdn}"
# Enable PuppetDB as a report processor
puppet::master::reports: 'store,puppetdb'
# Services to be enabled.
enabled_services:
- glance
- cinder
- keystone
- nova
# 'neutron', 'quantum', or "%{network_service}" to use
# the value configured in global_hiera_params/common.yaml
- "%{network_service}"
- swift
# NOTE: if using mongodb, comment this line out so mysql is not
# redundantly included
- ceilometer
# Hosts to be allowed access to the backend database used to
# persist data for OpenStack services (e.g. MySQL or similar).
allowed_hosts: "%"
### Common OpenStack Parameters
# The email address of the administrator user.
admin_email: root@localhost
# The userid to be assigned for the RPC backend service
# (e.g. RabbitMQ, Qpid, etc).
rpc_user: openstack_rabbit_user
# Most OpenStack services can be configured with 'verbose'
# and/or 'debug' in their config files in order to increase
# logging verbosity. Set these to 'true' to set the corresponding
# values in OpenStack services.
verbose: false
debug: false
# The scheduler driver to be used for Cinder.
cinder::scheduler::scheduler_driver: 'cinder.scheduler.simple.SimpleScheduler'
# The libvirt VIF driver to be used for Neutron.
nova::compute::neutron::libvirt_vif_driver: nova.virt.libvirt.vif.LibvirtGenericVIFDriver
# Whether or not Neutron should send notifications to Nova when port
# status changes.
neutron::server::notifications::notify_nova_on_port_status_changes: true
# Whether or not Neutron should send notifications to Nova when port
# data (fixed_ips/floatingips) changes so Nova can update its cache.
neutron::server::notifications::notify_nova_on_port_data_changes: true
# Number of seconds between sending events to nova if there are any events
# to send.
neutron::server::notifications::send_events_interval: 2
# The services and admin tenant names.
service_tenant: services
admin_tenant: openstack
# The nova admin username
nova_admin_username: 'nova'
# Many manifests that install OpenStack packages can either ensure
# that a package is 'present' (e.g. installed) or that it is 'latest'
# (e.g. the puppet agent will check for a newer version on each
# catalog run and install a newer version if one is available in your
# package repository). Using 'latest' could potentially be disruptive
# to your cloud, so use with caution.
package_ensure: present
# The libvirt driver to use for Nova. This is set to Qemu by
# default to allow OpenStack to be deployed in nested virt setups
# (common in CI or automated testing setups), but is frequently
# overridden by a higher-level yaml file such as user.$scenario.yaml.
nova::compute::libvirt::libvirt_virt_type: qemu
# The disk_cachemodes setting in nova.conf can be useful for
# improving the performance of instance launches and tuning
# I/O. By default it's set to an empty list and therefore
# isn't propagated to nova.conf. If you wish to set cachemodes,
# uncomment the lines below and adjust accordingly.
#nova::compute::libvirt::libvirt_disk_cachemodes:
# - 'file=writethrough'
# - 'block=none'
# The IP address on which vncserver should listen. Setting to
# 0.0.0.0 allows it to listen on all interfaces.
nova::compute::libvirt::vncserver_listen: 0.0.0.0
# enable nova to do live migration
# vncserver_listen also must be set to 0.0.0.0
nova::compute::libvirt::migration_support: true
# Set the libvirt CPU mode. Setting 'false' here will use
# 'host-model' on Qemu/KVM and 'None' on other hypervisors.
# For most users the defaults are fine, but you may wish to
# change the setting if you're using nested virtualization or
# wish to expose different CPU settings.
nova::compute::libvirt::libvirt_cpu_mode: false
# Normally Nova uses the metadata service to pass configuration data
# to instances. However, the metadata service depends on network
# connectivity and can be problematic in some scenarios. An alternative
# is to use the force_config_drive setting in nova.conf, which tells
# Nova to always create a config drive and force injection to take place
# for each instance launched.
nova::compute::force_config_drive: true
### Package repository setup
# These directives set up the package repository to be used when
# installing OpenStack software. These settings may be overridden
# in data/hiera_data/vendor/* files if using a specific vendor's
# repositories.
# The package repository to be used. Valid values include 'cloud_archive'
# (use the Ubuntu Cloud Archive) and 'cisco_repo' (use the Cisco Systems
# OpenStack repository).
coe::base::package_repo: cloud_archive
# The base version of OpenStack to be installed (e.g. 'havana').
openstack_release: icehouse
# The name of the pocket to use for both
# the supplemental and main repos. This setting may not be useful for
# all vendor repositories, so setting it to false or an empty string
# is usually safe. Setting this to an empty string
# will generally point you to the stable pocket. For the Cisco Systems
# repository you can specify the -proposed pocket ('-proposed') to
# use unreleased development code or a snapshot ('/snapshots/h.0') to
# use code from prior Cisco releases.
pocket: false
# The 'openstack_repo_location' parameter should be the complete
# URL of the repository you want to use to fetch OpenStack
# packages (e.g. http://openstack-repo.cisco.com/openstack/cisco).
# This setting is not used by all vendors.
openstack_repo_location: false
# The 'supplemental_repo' parameter should be the complete URL
# of the repository you want to use for supplemental packages
# (e.g. http://openstack-repo.cisco.com/openstack/cisco_supplemental).
# This setting is not used by all vendors.
supplemental_repo: false
# If you wish to run a basic functionality test after the cloud is set
# up, you can set the type of image to be booted. Use 'cirros' to
# use a Cirros image, or any other value to use an Ubuntu Precise image.
openstack::test_file::image_type: cirros
# Whether or not to install the ceilometer client library.
# This is often overridden in higher-layer yaml files.
openstack::client::ceilometer: false
# How to set the swift_store_user directive in swift.conf.
# This should be an account name and a username separated by
# a colon.
glance::backend::swift::swift_store_user: services:swift
# Whether or not to have swift create a container when it
# receives a PUT request.
glance::backend::swift::swift_store_create_container_on_put: true
# The type of backend storage to use for swift. This can either be
# 'loopback' (use a loopback device which is useful if you don't have
# dedicated storage disks for swift) or 'disk' (a more performant
# option if you do have dedicated storage disks). This is set to
# loopback by default since this value works on almost any setup,
# but will commonly be overridden in higher-level yaml files for
# production usage.
openstack::swift::storage-node::storage_type: loopback
# The disks or loopback devices to be used for Swift storage.
# If you are using dedicated disks, this will be a list of disk names
# (such as 'sdb'). If you are using loopback devices, this will
# be a list of filenames which will be created in /srv/loopback-device
# (you can specify an alternate location using
# openstack::storage::loopback::base_dir).
openstack::swift::storage-node::storage_devices:
- 1
- 2
- 3
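# As a hedged illustration only: with dedicated disks, the same two settings
# might instead look like the hypothetical values below, using disk names
# such as 'sdb' as described above.
#openstack::swift::storage-node::storage_type: disk
#openstack::swift::storage-node::storage_devices:
#  - sdb
#  - sdc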
# The Apache MPM module to use on the build node. Options may vary
# by platform, but frequently-used settings include 'prefork' and 'worker'.
apache::mpm_module: prefork
# Enable or disable Django debugging for Horizon.
horizon::django_debug: true
### Tempest provisioning setup
# These parameters are used by Tempest to set up a bare OpenStack
# deployment and are only useful in all_in_one deployments.
# The URL of the identity service (keystone).
openstack::provision::identity_uri: 'http://127.0.0.1:5000/v2.0/'
# The admin tenant name and password to set up.
openstack::provision::admin_tenant_name: openstack
openstack::provision::admin_password: Cisco123
# Whether or not instance resizing is available.
openstack::provision::resize_available: false
# Whether or not the compute change_password feature is enabled.
openstack::provision::change_password_available: false
# The name to give to the public network created if using Neutron.
openstack::provision::public_network_name: nova
# Syslog
use_syslog: false
log_facility: LOG_USER
# Enable nova services.
enable_nova: true
# Package status
ensure_package: installed
# Endpoint region
region: RegionOne
# The version of the puppetlabs-mysql module you're using.
# Supported versions are 0.9 and 2.2.
puppet_mysql_version: '2.2'
# Allow rabbitmq cookie to be configured
rabbitmq::server::wipe_db_on_cookie_change: true
# Default character set and collation order for MySQL
mysql_default_charset: utf8
mysql_default_collation: utf8_unicode_ci
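
As the libvirt_virt_type comment above suggests, such defaults are commonly overridden in a higher-level yaml file such as user.$scenario.yaml. A minimal sketch of one such override, with a hypothetical scenario file name:

# data/hiera_data/user.my_scenario.yaml (hypothetical)
nova::compute::libvirt::libvirt_virt_type: kvm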

View File

@ -1,54 +0,0 @@
#
# The following data is HA specific, but should be exposed to the end user.
# This likely means it should be added to the data_mappings.
keystone::token_driver: keystone.token.backends.sql.Token
keystone::token_format: UUID
enabled: true
#
# always use the cisco repos when HA is enabled
#
coe::base::openstack_repo_location: http://openstack-repo.cisco.com/openstack/cisco
coe::base::package_repo: cisco_repo
# supplemental_repo is not needed for icehouse+trusty
coe::base::supplemental_repo: false
# this should be overridden per node and can be
# master or back-up
openstack-ha::load-balancer::controller_state: MASTER
openstack-ha::load-balancer::swift_proxy_state: MASTER
sql_idle_timeout: 30
# Galera / Percona settings
#
# get packages from Ubuntu, not from Percona repos
galera::configure_repo: false
galera::configure_firewall: false
# due to bug #1315528, you must use either xtrabackup-v2 or mysqldump with
# trusty. The more common rsync method will not work.
galera::wsrep_sst_method: xtrabackup-v2
# tenant hard-codings
keystone::roles::admin::admin_tenant: admin
openstack::auth_file::admin_tenant: admin
allowed_hosts: "%"
localhost: 127.0.0.1
nova::api::enabled_apis: 'ec2,osapi_compute'
nova::compute::libvirt::libvirt_virt_type: kvm
apache::default_vhost: false
# NOTE: Uncomment the following with appropriate values if using mongo
# as backend for ceilometer
# mongo replica set name
#mongodb::replset: 'rsmain'
# mongodb bind addresses
#mongodb::bind_ip: ['127.0.0.1', "%{ipaddress}"]
#
#

View File

@ -1,7 +0,0 @@
# Settings which apply only to the build server
# This file should not need to be used for AIO installs but will need
# customization for other scenarios.
#
# Apache default hosts are needed on standalone build servers
# or any other scenario where the build server is not a load balancer
apache::default_vhost: true

View File

@ -1,14 +0,0 @@
# has_compute must be set for any server running nova compute;
# nova uses the secret from virsh
cephdeploy::has_compute: true
# these are the disks for this particular host that you wish to use as OSDs.
# specifying disks here will DESTROY any data on those disks during the first
# puppet run. The format to use is disk:journal. If you want to have the
# journal on the same disk as your OSD, use disk:disk
cephdeploy::osdwrapper::disks:
# ex. using an SSD journal (/dev/sdb)
- sdc:sdb
- sdd:sdb
# ex. placing the journal on the same disk as the OSD
- sde:sde

View File

@ -1,2 +0,0 @@
openstack-ha::load-balancer::controller_state: MASTER
openstack-ha::load-balancer::swift_proxy_state: BACKUP

View File

@ -1,2 +0,0 @@
openstack-ha::load-balancer::controller_state: BACKUP
openstack-ha::load-balancer::swift_proxy_state: MASTER

View File

@ -1,4 +0,0 @@
openstack::swift::storage-node::swift_zone: 1
coe::network::interface::interface_name: "%{swift_storage_interface}"
coe::network::interface::ipaddress: "%{swift_local_net_ip}"
coe::network::interface::netmask: "%{swift_storage_netmask}"

View File

@ -1,4 +0,0 @@
openstack::swift::storage-node::swift_zone: 2
coe::network::interface::interface_name: "%{swift_storage_interface}"
coe::network::interface::ipaddress: "%{swift_local_net_ip}"
coe::network::interface::netmask: "%{swift_storage_netmask}"

View File

@ -1,4 +0,0 @@
openstack::swift::storage-node::swift_zone: 3
coe::network::interface::interface_name: "%{swift_storage_interface}"
coe::network::interface::ipaddress: "%{swift_local_net_ip}"
coe::network::interface::netmask: "%{swift_storage_netmask}"

View File

@ -1,6 +0,0 @@
quantum::core_plugin: quantum.plugins.linuxbridge.lb_quantum_plugin.LinuxBridgePluginV2
neutron::core_plugin: neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2
quantum::agents::l3::interface_driver: quantum.agent.linux.interface.BridgeInterfaceDriver
neutron::agents::l3::interface_driver: neutron.agent.linux.interface.BridgeInterfaceDriver
quantum::agents::dhcp::interface_driver: quantum.agent.linux.interface.BridgeInterfaceDriver
neutron::agents::dhcp::interface_driver: neutron.agent.linux.interface.BridgeInterfaceDriver

View File

@ -1,7 +0,0 @@
quantum::core_plugin: quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
neutron::core_plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
quantum::agents::l3::interface_driver: quantum.agent.linux.interface.OVSInterfaceDriver
neutron::agents::l3::interface_driver: neutron.agent.linux.interface.OVSInterfaceDriver
quantum::agents::dhcp::interface_driver: quantum.agent.linux.interface.OVSInterfaceDriver
neutron::agents::dhcp::interface_driver: neutron.agent.linux.interface.OVSInterfaceDriver

View File

@ -1,6 +0,0 @@
quantum::allow_overlapping_ips: true
neutron::allow_overlapping_ips: true
quantum::agents::l3::use_namespaces: true
neutron::agents::l3::use_namespaces: true
quantum::agents::dhcp::use_namespaces: true
neutron::agents::dhcp::use_namespaces: true

View File

@ -1,6 +0,0 @@
quantum::allow_overlapping_ips: true
neutron::allow_overlapping_ips: true
quantum::agents::l3::use_namespaces: false
neutron::agents::l3::use_namespaces: false
quantum::agents::dhcp::use_namespaces: true
neutron::agents::dhcp::use_namespaces: true

View File

@ -1,6 +0,0 @@
quantum::allow_overlapping_ips: true
neutron::allow_overlapping_ips: true
quantum::agents::l3::use_namespaces: false
neutron::agents::l3::use_namespaces: false
quantum::agents::dhcp::use_namespaces: false
neutron::agents::dhcp::use_namespaces: false

View File

@ -1,2 +0,0 @@
puppet::master::version: 3.2.3-1puppetlabs1
puppet::agent::version: 3.2.3-1puppetlabs1

View File

@ -1,2 +0,0 @@
puppet::master::version: 3.2.3-1.el6
puppet::agent::version: 3.2.3-1.el6

View File

@ -1,10 +0,0 @@
cinder::rpc_type:
'cinder.openstack.common.rpc.impl_qpid'
nova::rpc_backend:
'nova.openstack.common.rpc.impl_qpid'
quantum::rpc_backend:
'neutron.openstack.common.rpc.impl_qpid'
neutron::rpc_backend:
'neutron.openstack.common.rpc.impl_qpid'
ceilometer::rpc_backend:
'ceilometer.openstack.common.rpc.impl_qpid'

View File

@ -1,10 +0,0 @@
cinder::rpc_type:
'cinder.openstack.common.rpc.impl_kombu'
nova::rpc_backend:
'nova.openstack.common.rpc.impl_kombu'
quantum::rpc_backend:
'quantum.openstack.common.rpc.impl_kombu'
neutron::rpc_backend:
'neutron.openstack.common.rpc.impl_kombu'
ceilometer::rpc_backend:
'ceilometer.openstack.common.rpc.impl_kombu'

View File

@ -1,25 +0,0 @@
quantum::agents::ovs::bridge_mappings:
- "default:br-ex"
neutron::agents::ovs::bridge_mappings:
- "default:br-ex"
quantum::agents::ovs::enable_tunneling: true
neutron::agents::ovs::enable_tunneling: true
quantum::plugins::ovs::tenant_network_type: gre
neutron::plugins::ovs::tenant_network_type: gre
# ML2 Agent
neutron::agents::ml2::ovs::bridge_mappings:
- "default:br-ex"
neutron::agents::ml2::ovs::enable_tunneling: true
neutron::agents::ml2::ovs::tunnel_types:
- gre
# ML2 Plugin
neutron::plugins::ml2::type_drivers:
- gre
neutron::plugins::ml2::tenant_network_types:
- gre
neutron::plugins::ml2::mechanism_drivers:
- openvswitch

View File

@ -1,11 +0,0 @@
# TODO - finish vlan config
quantum::plugins::ovs::network_vlan_ranges: physnet1:1000:2000
quantum::agents::ovs::bridge_mappings:
- "physnet1:br-ex"
neutron::agents::ovs::bridge_mappings:
- "physnet1:br-ex"
neutron::plugins::ovs::network_vlan_ranges: physnet1:1000:2000
quantum::plugins::ovs::tenant_network_type: vlan
neutron::plugins::ovs::tenant_network_type: vlan
quantum::agents::ovs::enable_tunneling: false
neutron::agents::ovs::enable_tunneling: false

View File

@ -1,16 +0,0 @@
#
# Parameters specified here are generally specific to the all_in_one
# scenario and will override any parameters of the same name that
# are found in user.common.yaml.
#
# Disable memcached from front-ending swift as this may cause resource
# conflicts.
openstack::swift::proxy::memcached: false
# Set the zone for our lone storage node.
openstack::swift::storage-node::swift_zone: 1
# Set the Swift ring server to localhost since this is an
# all-in-one setup.
openstack::swift::storage-node::ring_server: '127.0.0.1'

View File

@ -1,472 +0,0 @@
########### NTP Configuration ############
# Change this to the location of a time server or servers in your
# organization accessible to the build server. The build server will
# synchronize with this time server, and will in turn function as the time
# server for your OpenStack nodes.
ntp_servers:
- 1.pool.ntp.org
# The time zone that clocks should be set to. See /usr/share/zoneinfo
# for valid values, such as "UTC" and "US/Eastern".
time_zone: UTC
######### Node Addresses ##############
# Change the following to the short host name you have given your build node.
# This name should be in all lower case letters due to a Puppet limitation
# (refer to http://projects.puppetlabs.com/issues/1168).
build_node_name: build-server
# Change the following to the host name you have given to your control
# node. This name should be in all lower case letters due to a Puppet
# limitation (refer to http://projects.puppetlabs.com/issues/1168).
coe::base::controller_hostname: control-server
# This domain name will be the name your build and compute nodes use for the
# local DNS. It doesn't have to be the name of your corporate DNS - a local
# DNS server on the build node will serve addresses in this domain - but if
# it is, you can also add entries for the nodes in your corporate DNS
# environment; they will be usable *if* the above addresses are routable
# from elsewhere in your network.
domain_name: domain.name
# The IP address to be used to connect to Horizon and external
# services on the control node. In the compressed_ha or full_ha scenarios,
# this will be an address to be configured as a VIP on the HAProxy
# load balancers, not the address of the control node itself.
controller_public_address: 192.168.242.10
# The protocol used to access API services on the control node.
# Can be 'http' or 'https'.
controller_public_protocol: 'http'
# The IP address used for internal communication with the control node.
# In the compressed_ha or full_ha scenarios, this will be an address
# to be configured as a VIP on the HAProxy load balancers, not the address
# of the control node itself.
controller_internal_address: 192.168.242.10
# The IP address used for management functions (such as monitoring)
# on the control node. In the compressed_ha or full_ha scenarios, this will
# be an address to be configured as a VIP on the HAProxy
# load balancers, not the address of the control node itself.
controller_admin_address: 192.168.242.10
# Controller public url
controller_public_url: "http://192.168.242.10:5000"
# Controller admin url
controller_admin_url: "http://192.168.242.10:35357"
# Controller internal url
controller_internal_url: "http://192.168.242.10:35357"
# Control node interfaces.
# internal_ip will be used for the ovs local_ip setting for GRE tunnels.
# This sets the IP for the private (internal) interface of controller nodes
# (which is predefined already in $controller_node_internal) and the internal
# interface for compute nodes. It is generally also the IP address
# used in Cobbler node definitions.
internal_ip: "%{ipaddress_eth3}"
# The external_interface is used to provide a Layer2 path for
# the l3_agent external router interface. It is expected that
# this interface be attached to an upstream device that provides
# a L3 router interface, with the default router configuration
# assuming that the first non "network" address in the external
# network IP subnet will be used as the default forwarding path
# if no more specific host routes are added.
external_interface: eth2
# The public_interface will have an IP address reachable by
# all other nodes in the openstack cluster. This address will
# be used for API Access, for the Horizon UI, and as an endpoint
# for the default GRE tunnel mechanism used in the OVS network
# configuration.
public_interface: eth1
# The interface used for VM networking connectivity. This will usually
# be set to the same interface as public_interface.
private_interface: eth1
# iSCSI listener interface. Set this to the same as public_interface.
cinder::volume::iscsi::iscsi_ip_address: "%{ipaddress_eth1}"
### Cobbler config
# The IP address of the node on which Cobbler will be installed and
# on which it will listen.
cobbler_node_ip: 192.168.242.10
# The subnet address of the subnet on which Cobbler should serve DHCP
# addresses.
node_subnet: '192.168.242.0'
# The netmask of the subnet on which Cobbler should serve DHCP addresses.
node_netmask: '255.255.255.0'
# The default gateway that should be provided to DHCP clients that acquire
# an address from Cobbler.
node_gateway: '192.168.242.1'
# The admin username and crypted password used to authenticate to Cobbler.
admin_user: localadmin
password_crypted: $6$UfgWxrIv$k4KfzAEMqMg.fppmSOTd0usI4j6gfjs0962.JXsoJRWa5wMz8yQk4SfInn4.WZ3L/MCt5u.62tHDGB36EhiKF1
# Cobbler can instruct nodes being provisioned to start a Puppet agent
# immediately upon bootup. This is generally desirable as it allows
# the node to immediately begin configuring itself upon bootup without
# further human intervention. However, it may be useful for debugging
# purposes to prevent Puppet from starting automatically upon bootup.
# If you want Puppet to run automatically on bootup, set this to true.
# Otherwise, set it to false.
autostart_puppet: true
# If you are using Cisco UCS servers managed by UCSM, set the port on
# which Cobbler should connect to UCSM in order to power nodes off and on.
# If set to 443, the connection will use SSL, which is generally
# desirable and is usually enabled on UCS systems.
ucsm_port: 443
# The name of the hard drive on which Cobbler should install the operating
# system.
install_drive: /dev/sda
# Set to 1 to enable ipv6 route advertisement. Otherwise, comment out
# this line or set it to 0.
#ipv6_ra: 1
# Uncomment this line and set it to true if you want to use bonded
# ethernet interfaces.
#interface_bonding: true
# The IP address on which vncserver proxyclient should listen.
# This should generally be an address that is accessible via
# horizon. You can set it to an actual IP address (e.g. "192.168.1.1"),
# or use facter to get the IP address assigned to a particular interface.
nova::compute::vncserver_proxyclient_address: "%{ipaddress_eth3}"
# If you wish to customize the list of filters that the nova
# scheduler will use when scheduling instances, change the line
# below to be a comma-separated list of filters. If set to false,
# the nova default filter list will be used.
# Example: 'RetryFilter,AvailabilityZoneFilter,RamFilter,
# ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter'
nova::scheduler::filter::scheduler_default_filters: false
# The following is a set of arbitrary config entries to be
# created in nova.conf. You can add arbitrary entries here
# that are not parameterized in the puppet-nova module for special
# use cases.
nova::config::nova_config:
# Allow destination machine to match source for resize
'DEFAULT/allow_resize_to_same_host':
value: 'true'
# Automatically confirm resizes after N seconds. Set to 0 to disable.
'DEFAULT/resize_confirm_window':
value: '0'
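# As a hedged illustration only: further section/option pairs can be added in
# the same nested form; the key below is purely hypothetical.
#'DEFAULT/my_extra_option':
#  value: 'example'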
### The following are passwords and usernames used for
### individual services. You may wish to change the passwords below
### in order to better secure your installation.
cinder_db_password: cinder_pass
glance_db_password: glance_pass
keystone_db_password: key_pass
nova_db_password: nova_pass
network_db_password: quantum_pass
database_root_password: mysql_pass
cinder_service_password: cinder_pass
glance_service_password: glance_pass
nova_service_password: nova_pass
ceilometer_service_password: ceilometer_pass
admin_password: Cisco123
admin_token: keystone_admin_token
network_service_password: quantum_pass
rpc_password: openstack_rabbit_password
metadata_shared_secret: metadata_shared_secret
horizon_secret_key: horizon_secret_key
ceilometer_metering_secret: ceilometer_metering_secret
ceilometer_db_password: ceilometer
heat_db_password: heat
heat_service_password: heat_pass
heat::engine::auth_encryption_key: 'notgood but just long enough i think'
# Set this parameter to use a single secret for the Horizon secret
# key, neutron agents, Nova API metadata proxies, swift hashes, etc.
# This prevents you from needing to specify individual secrets above,
# but has some security implications in that all services are using
# the same secret (creating more vulnerable services if it should be
# compromised).
secret_key: secret
# Set this parameter to use a single password for all the services above.
# This prevents you from needing to specify individual passwords above,
# but has some security implications in that all services are using
# the same password (creating more vulnerable services if it should be
# compromised).
password: password123
# Manage the Horizon vhost priority. Apache defaults to '25'. Here we
# set it to '10' so it is the default site. This allows Horizon to load
# at both its vhost name and at the server's IP address.
horizon::wsgi::apache::priority: 10
### Swift configuration
# The password used by the Swift service user to authenticate to the cluster.
swift_service_password: swift_pass
# The hash suffix used by the Swift cluster (maps to swift_hash_suffix).
swift_hash: super_secret_swift_hash
# The IP address used by Swift on the control node to communicate with
# other members of the Swift cluster. In the compressed_ha or full_ha
# scenarios, this will be the address to be configured as a VIP on
# the HAProxy load balancers, not the address of an individual Swift node.
swift_internal_address: "%{ipaddress_eth3}"
# The IP address which external entities will use to connect to Swift,
# including clients wishing to upload or retrieve objects. In the
# compressed_ha or full_ha scenarios, this will be the address to
# be configured as a VIP on the HAProxy load balancers, not the address
# of an individual Swift node.
swift_public_address: "%{ipaddress_eth3}"
# The IP address over which administrative traffic for the Swift
# cluster will flow. In the compressed_ha or full_ha
# scenarios, this will be the address to be configured as a VIP on
# the HAProxy load balancers, not the address of an individual Swift node.
swift_admin_address: "%{ipaddress_eth3}"
# The interface on which Swift will run data storage traffic.
# This should generally be a different interface than is used for
# management traffic to avoid congestion.
swift_storage_interface: eth0.222
# The IP address to be configured on the Swift storage interface.
swift_local_net_ip: "%{ipaddress_eth0_222}"
# The netmask to be configured on the Swift storage interface.
swift_storage_netmask: 255.255.255.0
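# As an illustration of the interface fact naming used above (an
# assumption based on how facter flattens VLAN interface names), the
# '.' in an interface such as eth1.300 becomes '_' in the fact name:
# swift_storage_interface: eth1.300
# swift_local_net_ip: "%{ipaddress_eth1_300}"
# swift_storage_netmask: 255.255.255.0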
# The IP address of the Swift proxy server. This is the address which
# is used for management, and is often on a separate network from
# swift_local_net_ip.
swift_proxy_net_ip: "%{ipaddress_eth0}"
### The following three parameters are only used if you are configuring
### Swift to serve as a backend for the Glance image service.
# Enable Glance to use Ceph RBD as its backend storage by uncommenting
# the line below. It can also be set to "file" or "swift".
#glance_backend: rbd
# The key used by Glance to connect to Swift.
glance::backend::swift::swift_store_key: secret_key
# The IP address to which Glance should connect in order to talk
# to the Swift cluster.
glance::backend::swift::swift_store_auth_address: '127.0.0.1'
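# As an illustrative sketch only (the key is a placeholder and the
# address should point at your real authentication endpoint), enabling
# the Swift backend for Glance might look like:
# glance_backend: swift
# glance::backend::swift::swift_store_key: 'your_swift_key'
# glance::backend::swift::swift_store_auth_address: '192.168.220.40'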
# The volume name when using iSCSI for cinder
cinder_volumes_name: 'cinder-volumes'
### Ceph configuration
# The name of the Ceph cluster to be deployed.
ceph_cluster_name: 'ceph'
### Ceph configuration file
ceph_configuration_file: '/etc/ceph/ceph.conf'
# The FSID of the Ceph monitor node. This should take the form
# of a UUID.
ceph_monitor_fsid: 'e80afa94-a64c-486c-9e34-d55e85f26406'
# The shared secret used to connect to the Ceph monitors. This
# should be a base64-encoded cephx key.
ceph_monitor_secret: 'AQAJzNxR+PNRIRAA7yUp9hJJdWZ3PVz242Xjiw=='
# The short hostname (e.g. 'ceph-mon01', not 'ceph-mon01.domain.com') of
# the initial members of the Ceph monitor set.
mon_initial_members: 'ceph-mon01'
# The short hostname (e.g. 'ceph-mon01', not 'ceph-mon01.domain.com') of
# the primary monitor node.
ceph_primary_mon: 'ceph-mon01'
# The IP address used to connect to the primary monitor node.
ceph_monitor_address: '10.0.0.1'
# The rbd account OpenStack will use to communicate with ceph.
ceph_openstack_user: 'admin'
# Ceph will be deployed using the ceph-deploy tool. This tool requires
# a username and password to authenticate.
ceph_deploy_user: 'cephdeploy'
ceph_deploy_password: '9jfd29k9kd9'
# The name of the network interface used to connect to Ceph nodes.
# This interface will be used to pass traffic between Ceph nodes.
ceph_cluster_interface: 'eth1'
# The subnet on which Ceph intra-cluster traffic will be passed.
ceph_cluster_network: '10.0.0.0/24'
# The interface on which clients connect in order to import data into
# or extract data from the cluster.
ceph_public_interface: 'eth1'
# The subnet on which external entities will connect to the Ceph cluster.
ceph_public_network: '10.0.0.0/24'
### The following four parameters are used only if you are configuring
### Ceph to be a backend for the Cinder volume service.
# Enable Cinder to use Ceph RBD as its backend storage by uncommenting
# the line below. It can also be set to 'iscsi'.
#cinder_backend: rbd
# The name of the pool used to store Cinder volumes.
cinder_rbd_pool: 'volumes'
### The following parameter is used only if you are deploying Ceph
### as a backend for the Glance image service.
# The name of the pool used to store glance images.
glance_ceph_pool: 'images'
# Return the URL that references where the data is stored on
# the backend storage system. For example, if using the
# file system store a URL of 'file:///path/to/image' will
# be returned to the user in the 'direct_url' meta-data field.
# This gives Glance the ability to clone images copy-on-write (COW),
# but revealing the storage location may be a security risk.
glance::api::show_image_direct_url: false
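# As an illustration only (not the default here), deployments that use
# the rbd backend often enable this so images can be cloned
# copy-on-write from Ceph:
# glance_backend: rbd
# glance::api::show_image_direct_url: true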
### The following parameters relate to Neutron L4-L7 services.
# A boolean specifying whether to enable the Neutron Load Balancing
# as a Service agent.
enable_lbaas: true
# A boolean specifying whether to enable the Neutron Firewall as
# a Service feature.
enable_fwaas: true
# A boolean specifying whether to enable the Neutron VPN as a
# Service feature.
enable_vpnaas: true
# Neutron core plugin to use. This should always be
# ML2 from Juno onwards
neutron::core_plugin: ml2
# An array of Neutron service plugins to enable.
neutron::service_plugins:
- router
- lbaas
- vpnaas
- firewall
# Set the interface driver
interface_driver: 'neutron.agent.linux.interface.OVSInterfaceDriver'
# Set the external network bridge.
# NOTE: If you change this, make sure to update the
# bridge mapping in tenant_network_type/*.yaml
external_network_bridge: 'br-ex'
# A hash of Neutron services to enable GUI support for in Horizon.
# enable_lb: Enables Neutron LBaaS agent support.
# enable_firewall: Enables Neutron FWaaS support.
# enable_vpn: Enables Neutron VPNaaS support
horizon_neutron_options:
'enable_lb': true
'enable_firewall': true
'enable_vpn': true
# A boolean stating whether to run a "neutron-db-manage" on the
# nodes running neutron-server after installing packages. In most
# cases this is not necessary and may cause problems if the database
# connection information is only located in the neutron.conf file
# rather than also being present in the Neutron plugin's conf file.
neutron_sync_db: false
## The following parameters are used to enable SSL endpoint support
# in keystone.
# Enable ssl in keystone config
enable_ssl: false
### NOTE: If enable_ssl is true, Replace the following lines
### with valid SSL certs. To generate your own self-signed certs,
### refer to the instructions at the following URL:
### https://help.ubuntu.com/12.04/serverguide/certificates-and-security.html
### After generating your certs, make sure the certs are copied
### to /etc/keystone/ssl/ on your control nodes and rerun puppet agent.
# SSL client certificate
ssl_certfile: '/etc/keystone/ssl/certs/keystone.pem'
# SSL certificate key
ssl_keyfile: '/etc/keystone/ssl/private/keystonekey.pem'
# SSL CA Cert
ssl_ca_certs: '/etc/keystone/ssl/certs/ca.pem'
# SSL CA Key
ssl_ca_key: '/etc/keystone/ssl/private/cakey.pem'
# SSL cert subject
ssl_cert_subject: '/C=US/ST=Unset/L=Unset/O=Unset/CN=localhost'
# MySQL server options
#
# override_options is used to pass a hash of options to be set in
# /etc/mysql/my.cnf. The options are separated by the section of my.cnf
# to which they belong
#
# Options which need to be set in the mysqld section of my.cnf include:
# bind-address: specifies the IP address on which the MySQL daemon listens
# max-connections: specifies the number of simultaneous connections permitted
# max_connect_errors: production deployments typically set this to 2^32-1
#
# In the isamchk section, key_buffer_size determines how much memory will be
# used to buffer index blocks for MyISAM tables
#
# Note that any other valid MySQL config file parameters can be added as
# needed by using this override_options mechanism. See override_options in
# https://github.com/puppetlabs/puppetlabs-mysql/tree/2.2.x#reference for more
# details on using this parameter with custom options.
mysql::server::override_options:
mysqld:
bind-address: 192.168.242.10
max-connections: 8192
max_connect_errors: 4294967295
isamchk:
key_buffer_size: 64M
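# As a further illustration only (innodb_buffer_pool_size is not set by
# default here), any other valid my.cnf option can be added under the
# appropriate section in the same way:
# mysql::server::override_options:
#   mysqld:
#     innodb_buffer_pool_size: '1G'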
# if true, restart MySQL when config options are changed. true is appropriate
# for most installations
mysql::server::restart: true
## NFS Live migration options ##
# On Debian-based distros you must also manage the nova user's uid and
# gid so that they match across hosts; enable the settings below in
# order to use NFS or Ceph live migration. Do NOT set these on
# RHEL-based platforms, which already predefine the uid/gid for the
# nova user.
# nova::nova_user_id: '499'
# nova::nova_group_id: '499'
# migration_support: enable NFS mount handling.
# nfs_mount_path: the path where nova instances are stored.
# nfs_mount_device: the full path to your NFS resource.
# nfs_fs_type: the mount type.
# nfs_mount_options: options, as they would appear in fstab
# Example config:
# coe::compute::migration::migration_support: true
# coe::compute::migration::nfs_mount_path: '/var/lib/nova/instances'
# coe::compute::migration::nfs_mount_device: 'nfs.domain.com:/myinstances'
# coe::compute::migration::nfs_fs_type: 'nfs'
# coe::compute::migration::nfs_mount_options: 'auto'

View File

@ -1,151 +0,0 @@
#
# this emulates provided user data
#
# This file lists settings which are required for full_ha deployments of
# OpenStack.
#
# In addition to this file, also edit user.common.yaml. Most variables
# which apply to all OpenStack deployments regardless of scenario are
# found there.
#
# This is the short hostname (not FQDN) used to refer to the VIP which
# sits in front of all clustered OpenStack control services.
coe::base::controller_hostname: control
# Specify the URI to be used by horizon to access keystone. For full_ha
# this should use the controller VIP to access keystone.
horizon::keystone_url: 'http://192.168.220.40:5000/v2.0/'
# Most passwords are set in user.common.yaml. Password settings which
# need to be customized for full_ha are defined here.
#
# metadata_shared_secret needs to be undefined for full_ha deployments
metadata_shared_secret: false
#
# HA connections
#
# controller_names sets the short hostnames (not FQDN) of the
# controller nodes.
controller_names:
- control01
- control02
- control03
# controller_ipaddresses lists the real IP addresses of the controller
# nodes which are being clustered behind the controller VIP.
openstack-ha::load-balancer::controller_ipaddresses:
- 192.168.220.41
- 192.168.220.42
- 192.168.220.43
# controller_vrid sets the VRID of the VRRP router used for HA of OpenStack
# control services. Change this if the default value conflicts with
# existing VRRP groups in your environment.
openstack-ha::load-balancer::controller_vrid: '50'
# swift_proxy_names sets the short hostnames (not FQDN) of the Swift
# proxy nodes.
openstack-ha::load-balancer::swift_proxy_names:
- swift-proxy01
- swift-proxy02
# swift_proxy_ipaddresses lists the real IP addresses of the Swift
# proxy nodes which are being clustered behind the Swift VIP.
openstack-ha::load-balancer::swift_proxy_ipaddresses:
- 192.168.220.61
- 192.168.220.62
# swift_proxy_net_ip lists the VIP used in front of swift_proxy_ipaddresses
openstack::swift::proxy::swift_proxy_net_ip: 192.168.220.60
# swift_vrid sets the VRID of the VRRP router used for HA of Swift
# services. Change this if the default value conflicts with existing
# VRRP groups in your environment.
openstack-ha::load-balancer::swift_vrid: '51'
# memcached_servers lists the real IP addresses and ports of the
# memcached services on the controller nodes.
nova::memcached_servers:
- 192.168.220.41:11211
- 192.168.220.42:11211
- 192.168.220.43:11211
# swift_memcache_servers lists the real IP addresses and ports of
# the memcached services on the Swift proxy nodes.
openstack::swift::proxy::swift_memcache_servers:
- 192.168.222.61:11211
- 192.168.222.62:11211
# rabbit_hosts lists the short hostnames (not FQDN) and ports of the
# RabbitMQ services on the control nodes.
rabbit_hosts:
- control01:5672
- control02:5672
- control03:5672
# galera_servers lists the IP addresses of the nodes which comprise
# the HA MySQL database
galera::galera_servers:
- 192.168.220.41
- 192.168.220.42
- 192.168.220.43
# galera_master defines which node of the galera_servers is the 'master'
# node for bootstrapping purposes; once the cluster is functional this
# distinction has no meaning. This should be the FQDN for that node.
galera::galera_master: control01.local.domain
# Galera is currently configured with an active-passive load balancer in
# front of it to restrict writes to a single node. This is needed as a
# workaround for compatibility issues between Galera and certain SQL
# statements currently used in OpenStack code. See
# https://bugs.launchpad.net/openstack-cisco/+bug/1321734 for more details.
#
# galera_master_ipaddress is the IP address of the initial node in the
# Galera which gets writes by default
galera_master_ipaddress: 192.168.220.41
# galera_backup_ipaddresses lists the IP addresses of the other nodes in
# the Galera cluster. HAProxy will direct writes to them if
# galera_master_ipaddress is not reachable
galera_backup_ipaddresses:
- 192.168.220.42
- 192.168.220.43
# galera_master_name is the bare hostname (not FQDN) of galera_master_ipaddress
galera_master_name: control01
# galera_backup_names is the bare hostname (not FQDN) of the nodes listed in
# galera_backup_ipaddresses
galera_backup_names:
- control02
- control03
# NOTE: Uncomment the following with appropriate values if using mongo
# as backend for ceilometer
# The short hostnames and port numbers of the hosts on which
# mongodb is running
#mongodb::replset::sets:
# rsmain:
# members:
# - control01:27017
# - control02:27017
# - control03:27017
# bind_address specifies the IP address on each control node to which
# the OpenStack APIs should bind.
bind_address: "%{ipaddress_eth0}"
# If you wish to use Cinder iSCSI, then uncomment and configure this
# line. Full HA defaults to using ceph, which does not need this param.
# Ceph settings are configured in user.common.yaml
# cinder::volume::iscsi::iscsi_ip_address: "%{ipaddress_eth0}"
# cinder_backend configures cinder with specific volume driver(s).
# in full_ha, Ceph is the source for block storage
cinder_backend: rbd
# full_ha uses the OpenStack provider networking model. In this model,
# network_vlan_ranges specifies the VLAN tag range being provided to OVS.
quantum::plugins::ovs::network_vlan_ranges: physnet1:223:225
neutron::plugins::ovs::network_vlan_ranges: physnet1:223:225
# storage_type specifies what type of storage is being used on the Swift
# storage nodes. This should typically be left as 'disk', though loop-back
# files can be used instead by setting this to 'loopback'.
openstack::swift::storage-node::storage_type: disk
# storage_devices specifies the disks on each Swift storage node which
# are dedicated for Swift storage. Individual server deviations from this
# default can be specified in the hostname.yaml files.
openstack::swift::storage-node::storage_devices:
- 'sdb'
- 'sdc'
- 'sdd'
- 'sde'
- 'sdf'

View File

@ -1,216 +0,0 @@
# eth0: vagrant network in testing
# eth1: deploy network
# eth2: public api network
# eth3: private service network + GRE
# eth4: external data network
# The IP address to be used to connect to Horizon and external
# services on the control node. In the compressed_ha or full_ha scenarios,
# this will be an address to be configured as a VIP on the HAProxy
# load balancers, not the address of the control node itself.
controller_public_address: 10.2.3.5
# The IP address used for internal communication with the control node.
# In the compressed_ha or full_ha scenarios, this will be an address
# to be configured as a VIP on the HAProxy load balancers, not the address
# of the control node itself.
controller_internal_address: 10.3.3.5
# This is the address of the admin endpoints for OpenStack
# services. In most cases, the admin address is the same as
# the public one.
controller_admin_address: 10.3.3.5
# Interface that will be stolen by the L3 router on
# the control node. The IP will be unreachable, so don't
# set this to an interface you are otherwise using.
external_interface: eth4
# GRE tunnel address for each node
internal_ip: "%{ipaddress_eth3}"
# This is the interface that each node will be binding
# various services on.
deploy_bind_ip: "%{ipaddress_eth1}"
public_bind_ip: "%{ipaddress_eth2}"
private_bind_ip: "%{ipaddress_eth3}"
# The public VIP, where all API services are exposed to users.
public_vip: 10.2.3.5
# The private VIP, where APIs are exposed to other OpenStack services.
private_vip: 10.3.3.5
# The protocol to use for public API services
public_protocol: http
# The protocol to use for internal API services
private_protocol: http
# List of IP addresses for controllers on the public network
control_servers_public: [ '10.2.3.10', '10.2.3.11', '10.2.3.12']
# List of IP addresses for controllers on the private network
control_servers_private: [ '10.3.3.10', '10.3.3.11', '10.3.3.12']
# A hash of hostnames to private network IPs. Used for rabbitmq hosts
# resolution
openstacklib::hosts::cluster_hash:
control1.private: '10.3.3.10'
control2.private: '10.3.3.11'
control3.private: '10.3.3.12'
# List of controller hostnames. Used for rabbitmq hosts list
cluster_names: [ 'control1.private', 'control2.private', 'control3.private' ]
# Allowed hosts for mysql users
allowed_hosts: 10.3.3.%
# Galera status checking
galera::status::status_allow: "%{hiera('allowed_hosts')}"
galera::status::status_password: clustercheck
galera::status::status_host: "%{hiera('private_vip')}"
# Edeploy is a tool from eNovance for provisioning servers based on
# chroots created on the build node.
edeploy::serv: '%{ipaddress_eth1}'
edeploy::hserv: '%{ipaddress_eth1}'
edeploy::rserv: '%{ipaddress_eth1}'
edeploy::hserv_port: 8082
edeploy::http_install_port: 8082
edeploy::install_apache: false
edeploy::giturl: 'https://github.com/michaeltchapman/edeploy.git'
edeploy::rsync_exports:
'install':
'path': '/var/lib/debootstrap/install'
'comment': 'The Install Path'
'metadata':
'path': '/var/lib/edeploy/metadata'
'comment': 'The Metadata Path'
# Dnsmasq is used by edeploy to provide dhcp on the deploy
# network.
dnsmasq::domain_needed: false
dnsmasq::interface: 'eth1'
dnsmasq::dhcp_range: ['192.168.242.3, 192.168.242.50']
dnsmasq::dhcp_boot: ['pxelinux.0']
apache::default_vhost: false
#apache::ip: "%{ipaddress_eth2}"
horizon::wsgi::apache::bind_address: "%{ipaddress_eth2}"
# Use these to set an apt proxy if running on a Debian-like distro.
apt::proxy_host: 192.168.0.18
apt::proxy_port: 8000
# We are using the new version of puppetlabs-mysql, which
# requires this parameter for compatibility.
mysql_module: '2.2'
# Install the python mysql bindings on all hosts
# that include mysql::bindings
mysql::bindings::python_enable: true
# This node will be used to bootstrap the cluster on initial deployment
# or if there is a total failure of the control cluster
galera::galera_master: 'control1.domain.name'
# This can be either percona or mariadb, depending on preference
galera::vendor_type: 'mariadb'
# epel is included by openstack::repo::rdo, so we
# don't need it from other modules
devtools::manage_epel: false
galera::repo::epel_needed: false
# We are using the new rabbitmq module, which removes
# the rabbitmq::server class in favor of ::rabbitmq
nova::rabbitmq::rabbitmq_class: '::rabbitmq'
# We don't want to get Rabbit from the upstream, instead
# preferring the RDO/UCA version.
rabbitmq::manage_repos: false
rabbitmq::package_source: false
# Change this to apt on Debian-based distros
rabbitmq::package_provider: yum
# The rabbit module expects the upstream rabbit package, which
# includes plugins that the distro packages do not.
rabbitmq::admin_enable: false
# Rabbit clustering configuration
rabbitmq::config_cluster: true
rabbitmq::config_mirrored_queues: true
rabbitmq::cluster_node_type: 'disc'
rabbitmq::wipe_db_on_cookie_change: true
# This is the port range for rabbit clustering
rabbitmq::config_kernel_variables:
inet_dist_listen_min: 9100
inet_dist_listen_max: 9105
# OpenStack version to install
openstack_release: icehouse
openstack::repo::uca::release: 'icehouse'
openstack::repo::rdo::release: 'icehouse'
# Proxy configuration of either apt or yum
openstacklib::repo::apt_proxy_host: '192.168.0.18'
openstacklib::repo::apt_proxy_port: '8000'
openstacklib::repo::yum_http_proxy: 'http://192.168.0.18:8000'
openstacklib::repo::yum_epel_mirror: 'http://mirror.aarnet.edu.au'
openstacklib::repo::yum_base_mirror: 'http://mirror.aarnet.edu.au'
openstacklib::hosts::build_server_ip: '192.168.242.100'
openstacklib::hosts::build_server_name: 'build-server'
openstacklib::hosts::domain: 'domain.name'
openstacklib::hosts::mgmt_ip: "%{ipaddress_eth1}"
# Loadbalancer configuration
openstacklib::loadbalance::haproxy::vip_secret: 'vip_password'
openstacklib::loadbalance::haproxy::public_iface: 'eth2'
openstacklib::loadbalance::haproxy::private_iface: 'eth3'
openstacklib::loadbalance::haproxy::cluster_master: 'control1.domain.name'
# CIDRs for the three networks.
deploy_control_firewall_source: '192.168.242.0/24'
public_control_firewall_source: '10.2.3.0/24'
private_control_firewall_source: '10.3.3.0/24'
# Allow internal comms on compute node
openstacklib::firewall::compute::interface: eth3
# Store reports in puppetdb
puppet::master::reports: 'store,puppetdb'
# This purges config files to remove entries not set by puppet.
# This is essential on RDO where qpid is the default
glance::api::purge_config: true
# PKI will cause issues when using load balancing because each
# keystone will be a different CA, so use uuid.
keystone::token_provider: 'keystone.token.providers.uuid.Provider'
# Validate keystone connection via VIP before
# evaluating custom types
keystone::validate_service: true
# Haproxy is installed via puppetlabs-haproxy, so we don't need to install it
# via lbaas agent
neutron::agents::lbaas::manage_haproxy_package: false
neutron::agents::vpnaas::enabled: false
neutron::agents::lbaas::enabled: false
neutron::agents::fwaas::enabled: false
neutron::agents::metadata::shared_secret: "%{hiera('metadata_shared_secret')}"
# Multi-region mappings. See contrib/aptira/puppet/user.regcon.yaml for a sample
# on setting multiple regions
openstacklib::openstack::regions::nova_user_pw: "%{hiera('nova_service_password')}"
openstacklib::openstack::regions::neutron_user_pw: "%{hiera('network_service_password')}"
openstacklib::openstack::regions::glance_user_pw: "%{hiera('glance_service_password')}"
openstacklib::openstack::regions::heat_user_pw: "%{hiera('heat_service_password')}"
openstacklib::openstack::regions::cinder_user_pw: "%{hiera('cinder_service_password')}"
openstacklib::openstack::regions::ceilometer_user_pw: "%{hiera('ceilometer_service_password')}"

View File

@ -1,38 +0,0 @@
# Cisco COI parameters overriding common.yaml baseline
#
# Use the Cisco COI package repos. This information sets
# up apt repositories both on nodes performing catalog runs
# and in the preseed template if you're using COI's Cobbler
# setup to perform baremetal provisioning of nodes.
# * The 'coe::base::package_repo' setting tells us to use the Cisco package
# repositories rather than other vendor repositories such as
# the Ubuntu Cloud Archive.
# * The 'openstack_repo_location' parameter should be the complete
# URL of the repository you want to use to fetch OpenStack
# packages (e.g. http://openstack-repo.cisco.com/openstack/cisco).
# * The 'supplemental_repo' parameter should be the complete URL
# of the repository you want to use for supplemental packages
# (e.g. http://openstack-repo.cisco.com/openstack/cisco_supplemental).
# * The 'puppet_repo_location' parameter should be the complete
# URL of the repository you want to use for Puppet module packages.
# (e.g. http://openstack-repo.cisco.com/openstack/puppet).
# * The 'puppet_repo' parameter setting tells us to use the Cisco package
# repositories for Puppet modules rather than other vendor repositories.
# * The 'pocket' parameter should be the repo pocket to use for both
# the supplemental and main repos. Setting this to an empty string
# will point you to the stable pocket, or you can specify the
# proposed pocket ('-proposed') or a snapshot ('/snapshots/h.0').
coe::base::package_repo: cisco_repo
openstack_repo_location: 'http://openstack-repo.cisco.com/openstack/cisco'
# supplemental_repo is not needed for icehouse+trusty
supplemental_repo: false
puppet_repo_location: 'http://openstack-repo.cisco.com/openstack/puppet'
puppet_repo: 'cisco_repo'
pocket: ''
# Use the latest Puppet packages from the Cisco COI repos
puppet::master::version: latest
puppet::agent::version: latest
coi::profiles::puppet::master::puppetlabs_repo: false

Some files were not shown because too many files have changed in this diff.