Retire stackforge/puppet-openstack_dev_env

This commit is contained in:
Monty Taylor 2015-10-17 16:04:26 -04:00
parent e60314da9b
commit 204ff59c50
22 changed files with 7 additions and 1429 deletions

.gitignore vendored

@@ -1,11 +0,0 @@
modules
Puppetfile.lock
.librarian/
.vagrant
.tmp
.github_auth
.current_testing
my.log
*swp
logs
hiera_data/jenkins.yaml

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=stackforge/puppet-openstack_dev_env.git

Gemfile

@@ -1,10 +0,0 @@
source "https://rubygems.org"
gem "vagrant", "~>1.0"
gem "librarian-puppet-simple"
gem "github_api", "0.8.1"
gem 'rake'
group :unit_tests do
gem 'puppetlabs_spec_helper'
gem 'puppet', '3.1.0'
gem 'rspec-puppet'
end

@@ -1,37 +0,0 @@
forge "http://forge.puppetlabs.com"
mod 'puppetlabs/openstack', :git => 'git://github.com/stackforge/puppet-openstack'
mod 'puppetlabs/nova', :git => 'git://github.com/stackforge/puppet-nova'
mod 'puppetlabs/glance', :git => 'git://github.com/stackforge/puppet-glance'
mod 'puppetlabs/keystone', :git => 'git://github.com/stackforge/puppet-keystone'
mod 'puppetlabs/horizon', :git => 'git://github.com/stackforge/puppet-horizon'
mod 'puppetlabs/swift', :git => 'git://github.com/stackforge/puppet-swift'
mod 'puppetlabs/cinder', :git => 'git://github.com/stackforge/puppet-cinder'
mod 'puppetlabs/tempest', :git => 'git://github.com/puppetlabs/puppetlabs-tempest'
mod 'puppet/quantum', :git => 'git://github.com/stackforge/puppet-quantum/'
# openstack middleware
mod 'puppet/vswitch', :git => 'git://github.com/bodepd/puppet-vswitch'
mod 'puppetlabs/rabbitmq', :git => 'git://github.com/puppetlabs/puppetlabs-rabbitmq'
mod 'puppetlabs/mysql', :git => 'git://github.com/puppetlabs/puppetlabs-mysql'
mod 'puppetlabs/git', :git => 'git://github.com/puppetlabs/puppetlabs-git'
mod 'puppetlabs/vcsrepo', :git => 'git://github.com/puppetlabs/puppetlabs-vcsrepo'
mod 'saz/memcached', :git => 'git://github.com/saz/puppet-memcached'
mod 'puppetlabs/rsync', :git => 'git://github.com/puppetlabs/puppetlabs-rsync'
mod 'puppetlabs/apache', :git => 'git://github.com/puppetlabs/puppetlabs-apache', :ref => '94ebca3aaaf2144a7b9ce7ca6a13837ec48a7e2a'
# other deps
mod 'puppetlabs/xinetd', :git => 'git://github.com/puppetlabs/puppetlabs-xinetd'
mod 'saz/ssh', :git => 'git://github.com/saz/puppet-ssh'
mod 'saz/sudo', :git => 'git://github.com/saz/puppet-sudo'
mod 'puppetlabs/stdlib', :git => 'git://github.com/puppetlabs/puppetlabs-stdlib'
mod 'puppetlabs/apt', :git => 'git://github.com/puppetlabs/puppetlabs-apt'
mod 'puppetlabs/firewall', :git => 'git://github.com/puppetlabs/puppetlabs-firewall'
mod 'ripienaar/concat', :git => 'git://github.com/ripienaar/puppet-concat'
mod 'duritong/sysctl', :git => 'git://github.com/duritong/puppet-sysctl.git'
mod 'cprice404/inifile', :git => 'git://github.com/cprice-puppet/puppetlabs-inifile'
# puppet related modules
mod 'ripienaar/hiera_puppet', :git => 'https://github.com/ripienaar/hiera-puppet'
mod 'puppetlabs/ruby', :git => 'https://github.com/puppetlabs/puppetlabs-ruby'
mod 'puppet/puppet', :git => 'git://github.com/stephenrjohnson/puppetlabs-puppet.git', :ref => '6244079f8ce37901a167f45fadd5d9cc055f83db'
mod 'puppetlabs/puppetdb', :git => 'git://github.com/bodepd/puppetlabs-puppetdb.git'
mod 'puppetlabs/postgresql', :git => 'git://github.com/bodepd/puppet-postgresql.git'
mod 'ripienaar/ruby-puppetdb', :git => 'git://github.com/ripienaar/ruby-puppetdb'

@@ -1,93 +0,0 @@
# sharable openstack puppet dev environment
This project contains everything that you need to rebuild the
same development environment that I built initially for the
folsom implementation of the openstack puppet modules.
# prereqs
1. Ensure that you have rubygems installed.
2. Install vagrant and its dependencies:
vagrant should be installed (the latest version of vagrant is generally available as a package)
> gem install vagrant
VirtualBox should be installed
3. Install librarian-puppet-simple:
> gem install librarian-puppet-simple
4. It is strongly recommended that you set up a proxy (like squid!) to speed up the performance
of package installation. If you do not use a proxy, you need to change some settings in
your site manifest.
# project contents
This project contains the following files
Vagrantfile
specifies virtual machines that build openstack test/dev environments.
Puppetfile
used by librarian puppet to install the required modules
manifests/setup/hosts.pp
stores basic host setup (ip addresses for vagrant targets)
manifests/setup/precise64.pp
stores apt setup, configured to use a proxy, and folsom package pointer(s)
manifests/setup/centos.pp
stores yum setup, configuration for a local yum repo machine, and folsom package pointer(s)
manifests/site.pp
stores site manifests for configuring openstack
# installing module deps
# cd into the project directory
> librarian-puppet install
# getting started
Configure the precise64.pp file to point to your apt cache
(or comment out the proxy host and port in the following resource;
similarly for centos.pp):
class { 'apt':
proxy_host => '172.16.0.1',
proxy_port => '3128',
}
To see a list of the virtual machines that are managed by vagrant, run
> vagrant status
devstack not created
openstack_controller not created
compute1 not created
nova_controller not created
glance not created
keystone not created
mysql not created
The best maintained examples are for a two node install
based on a compute and controller.
Deploy a controller and a compute node:
> vagrant up openstack_controller
# wait until this finishes
> vagrant up compute1
# wait until this finishes
Once these finish successfully, log in to the controller:
> vagrant ssh openstack_controller
Run the following test script:
[controller]# bash /tmp/test_nova.sh
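The walkthrough above boils down to a fixed command sequence. As a rough sketch, the commands can be collected in order with a small Ruby helper (the helper name and arguments are illustrative, not part of this project; actually running them still requires vagrant, VirtualBox, and librarian-puppet-simple):

```ruby
# Illustrative helper only: returns, in order, the shell commands from
# the two-node walkthrough above. It does not execute anything itself.
def two_node_commands(controller = 'openstack_controller', compute = 'compute1')
  [
    'librarian-puppet install',                               # install module deps
    "vagrant up #{controller}",                               # build the controller, wait for it
    "vagrant up #{compute}",                                  # then the compute node
    "vagrant ssh #{controller} -c 'bash /tmp/test_nova.sh'",  # run the test script
  ]
end
```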

README.rst Normal file

@@ -0,0 +1,7 @@
This project is no longer maintained.
The contents of this repository are still available in the Git source code
management system. To see the contents of this repository before it reached
its end of life, please check out the previous commit with
"git checkout HEAD^1".

Vagrantfile vendored

@@ -1,227 +0,0 @@
def parse_vagrant_config(
config_file=File.expand_path(File.join(File.dirname(__FILE__), 'config.yaml'))
)
require 'yaml'
config = {
'gui_mode' => false,
'operatingsystem' => 'ubuntu',
'verbose' => false,
'update_repos' => true
}
if File.exists?(config_file)
overrides = YAML.load_file(config_file)
config.merge!(overrides)
end
config
end
Vagrant::Config.run do |config|
v_config = parse_vagrant_config
ssh_forward_port = 2244
[
{'devstack' =>
{
'memory' => 512,
'ip1' => '172.16.0.2',
}
},
{'openstack_controller' =>
{'memory' => 2000,
'ip1' => '172.16.0.3'
}
},
{'compute1' =>
{
'memory' => 2512,
'ip1' => '172.16.0.4'
}
},
# huge compute instance with tons of RAM
# intended to be used for tempest tests
{'compute2' =>
{
'memory' => 12000,
'ip1' => '172.16.0.14'
}
},
#{'nova_controller' =>
# {
# 'memory' => 512,
# 'ip1' => '172.16.0.5'
# }
#},
#{'glance' =>
# {
# 'memory' => 512,
# 'ip1' => '172.16.0.6'
# }
#},
#{'keystone' =>
# {
# 'memory' => 512,
# 'ip1' => '172.16.0.7'
# }
#},
#{'mysql' =>
# {
# 'memory' => 512,
# 'ip1' => '172.16.0.8'
# }
#},
#{'cinder' =>
# {
# 'memory' => 512,
# 'ip1' => '172.16.0.9'
# }
#},
#{ 'quantum_agent' => {
# 'memory' => 512,
# 'ip1' => '172.16.0.10'
# }
#},
{ 'swift_proxy' => {
'memory' => 512,
'ip1' => '172.16.0.21',
'run_mode' => :agent
}
},
{ 'swift_storage_1' => {
'memory' => 512,
'ip1' => '172.16.0.22',
'run_mode' => :agent
}
},
{ 'swift_storage_2' => {
'memory' => 512,
'ip1' => '172.16.0.23',
'run_mode' => :agent
}
},
{ 'swift_storage_3' => {
'memory' => 512,
'ip1' => '172.16.0.24',
'run_mode' => :agent
}
},
# keystone instance to build out for testing swift
{
'swift_keystone' => {
'memory' => 512,
'ip1' => '172.16.0.25',
'run_mode' => :agent
}
},
{ 'puppetmaster' => {
'memory' => 512,
'ip1' => '172.16.0.31',
# I don't care for the moment if this supports redhat
# eventually it should, but I care a lot more about testing
# openstack on RHEL than the puppetmaster
'operatingsystem' => 'ubuntu'
}
},
{ 'openstack_all' => { 'memory' => 2512, 'ip1' => '172.16.0.11'} }
].each do |hash|
name = hash.keys.first
props = hash.values.first
raise "Malformed vhost hash" if hash.size > 1
config.vm.define name.intern do |agent|
# let nodes override their OS
operatingsystem = (props['operatingsystem'] || v_config['operatingsystem']).downcase
# default to config file, but let hosts override it
if operatingsystem and operatingsystem != ''
if operatingsystem == 'redhat'
os_name = 'centos'
agent.vm.box = 'centos'
agent.vm.box_url = 'https://dl.dropbox.com/u/7225008/Vagrant/CentOS-6.3-x86_64-minimal.box'
elsif operatingsystem == 'ubuntu'
os_name = 'precise64'
agent.vm.box = 'precise64'
agent.vm.box_url = 'http://files.vagrantup.com/precise64.box'
else
raise(Exception, "undefined operatingsystem: #{operatingsystem}")
end
end
number = props['ip1'].gsub(/\d+\.\d+\.\d+\.(\d+)/, '\1').to_i
agent.vm.forward_port(22, ssh_forward_port + number)
# host only network
agent.vm.network :hostonly, props['ip1'], :adapter => 2
agent.vm.network :hostonly, props['ip1'].gsub(/(\d+\.\d+)\.\d+\.(\d+)/) {|x| "#{$1}.1.#{$2}" }, :adapter => 3
agent.vm.network :hostonly, props['ip1'].gsub(/(\d+\.\d+)\.\d+\.(\d+)/) {|x| "#{$1}.2.#{$2}" }, :adapter => 4
agent.vm.customize ["modifyvm", :id, "--memory", props['memory'] || 2048 ]
agent.vm.boot_mode = 'gui' if v_config['gui_mode']
agent.vm.customize ["modifyvm", :id, "--name", "#{name}.puppetlabs.lan"]
agent.vm.host_name = "#{name.gsub('_', '-')}.puppetlabs.lan"
if name == 'puppetmaster' || name =~ /^swift/
node_name = "#{name.gsub('_', '-')}.puppetlabs.lan"
else
node_name = "#{name.gsub('_', '-')}-#{Time.now.strftime('%Y%m%d%m%s')}"
end
if os_name =~ /precise/
agent.vm.provision :shell, :inline => "apt-get update"
elsif os_name =~ /centos/
agent.vm.provision :shell, :inline => "yum clean all"
end
puppet_options = ["--certname=#{node_name}"]
puppet_options.concat(['--verbose', '--show_diff']) if v_config['verbose']
# configure hosts, install hiera
# perform pre-steps that always need to occur
agent.vm.provision(:puppet, :pp_path => "/etc/puppet") do |puppet|
puppet.manifests_path = 'manifests'
puppet.manifest_file = "setup/hosts.pp"
puppet.module_path = 'modules'
puppet.options = puppet_options
end
if v_config['update_repos'] == true
agent.vm.provision(:puppet, :pp_path => "/etc/puppet") do |puppet|
puppet.manifests_path = 'manifests'
puppet.manifest_file = "setup/#{os_name}.pp"
puppet.module_path = 'modules'
puppet.options = puppet_options
end
end
# export a data directory that can be used by hiera
agent.vm.share_folder("hiera_data", '/etc/puppet/hiera_data', './hiera_data/')
run_mode = props['run_mode'] || :apply
if run_mode == :apply
agent.vm.provision(:puppet, :pp_path => "/etc/puppet") do |puppet|
puppet.manifests_path = 'manifests'
puppet.manifest_file = 'site.pp'
puppet.module_path = 'modules'
puppet.options = puppet_options
end
elsif run_mode == :agent
agent.vm.provision(:puppet_server) do |puppet|
puppet.puppet_server = 'puppetmaster.puppetlabs.lan'
puppet.options = puppet_options + ['-t', '--pluginsync']
end
else
puts "Found unexpected run_mode #{run_mode}"
end
end
end
end
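The Vagrantfile above derives each VM's forwarded SSH port and its two extra host-only addresses from `ip1` alone. A standalone sketch of that arithmetic (the helper names are mine; the base port 2244 and the regexes come from the file):

```ruby
# Base port for host-side SSH forwarding, as in the Vagrantfile above.
SSH_FORWARD_BASE = 2244

# Forwarded SSH port: base + the last octet of the primary address.
def ssh_port_for(ip1)
  SSH_FORWARD_BASE + ip1.gsub(/\d+\.\d+\.\d+\.(\d+)/, '\1').to_i
end

# Adapters 3 and 4 keep the host octet but move to the .1 and .2
# neighbouring networks (172.16.0.x -> 172.16.1.x and 172.16.2.x).
def extra_networks_for(ip1)
  [1, 2].map do |net|
    ip1.gsub(/(\d+\.\d+)\.\d+\.(\d+)/) { "#{$1}.#{net}.#{$2}" }
  end
end
```

So openstack_controller (172.16.0.3) forwards SSH on host port 2247 and also gets addresses 172.16.1.3 and 172.16.2.3.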

@@ -1,3 +0,0 @@
---
gui_mode: false
operatingsystem: ubuntu

@@ -1,44 +0,0 @@
---
# set the version to use
openstack_version: grizzly
# database password
mysql_root_password: mysql_root_password
keystone_db_password: keystone_db_password
glance_db_password: glance_db_password
nova_db_password: nova_db_password
cinder_db_password: cinder_db_password
quantum_db_password: quantum_db_password
# mysql allowed hosts
allowed_hosts:
- %
# keystone settings
admin_token: service_token
admin_email: keystone@localhost
admin_password: ChangeMe
glance_user_password: glance_user_password
nova_user_password: nova_user_password
cinder_user_password: cinder_user_password
quantum_user_password: quantum_user_password
verbose: True
# nova settings
public_interface: eth0
private_interface: eth2
rabbit_password: rabbit_password
rabbit_user: my_rabbit_user
secret_key: secret_key
libvirt_type: qemu
#libvirt_type: kvm
network_type: nova
#network_type: quantum
fixed_network_range: 10.0.0.0/24
floating_network_range: 172.16.0.128/25
auto_assign_floating_ip: false
# openstack controller ip address
openstack_controller: 172.16.0.3
# swift settings
swift_admin_email: dan@example_company.com
swift_user_password: swift_pass
swift_shared_secret: changeme
swift_local_net_ip: "%{ipaddress_eth1}"
swift_proxy_address: 172.16.0.21
swift_controller_node_public: 172.16.0.21

@@ -1,2 +0,0 @@
---
swift_zone: 1

@@ -1,2 +0,0 @@
---
swift_zone: 2

@@ -1,3 +0,0 @@
---
swift_zone: 3

@@ -1,70 +0,0 @@
#
# deploys a single all in one installation
# uses variables set in site.pp
#
#
node /openstack-all/ {
keystone_config {
'DEFAULT/log_config': ensure => absent,
}
class { 'openstack::test_file':
quantum => $use_quantum,
}
# create a test volume on a loopback device for testing
class { 'cinder::setup_test_volume': } -> Service<||>
include 'apache'
class { 'openstack::all':
public_address => $ipaddress_eth1,
internal_address => $ipaddress_eth1,
public_interface => $public_interface,
private_interface => $private_interface,
mysql_root_password => $mysql_root_password,
secret_key => $secret_key,
admin_email => $admin_email,
admin_password => $admin_password,
keystone_db_password => $keystone_db_password,
keystone_admin_token => $admin_token,
nova_db_password => $nova_db_password,
nova_user_password => $nova_user_password,
glance_db_password => $glance_db_password,
glance_user_password => $glance_user_password,
quantum_user_password => $quantum_user_password,
quantum_db_password => $quantum_db_password,
cinder_user_password => $cinder_user_password,
cinder_db_password => $cinder_db_password,
rabbit_password => $rabbit_password,
rabbit_user => $rabbit_user,
libvirt_type => $libvirt_type,
floating_range => $floating_network_range,
fixed_range => $fixed_network_range,
verbose => $verbose,
auto_assign_floating_ip => $auto_assign_floating_ip,
quantum => $use_quantum,
#vncproxy_host => $ipaddress_eth1,
}
class { 'openstack::auth_file':
admin_password => $admin_password,
keystone_admin_token => $keystone_admin_token,
controller_node => '127.0.0.1',
}
# TODO not sure why this is required
# this has a bug, and is constantly added to the file
Package['libvirt'] ->
file_line { 'qemu_hack':
line => 'cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet", "/dev/net/tun",]',
path => '/etc/libvirt/qemu.conf',
ensure => present,
} ~> Service['libvirt']
}

@@ -1,129 +0,0 @@
#
# this file contains instructions for installing
# multi-role deployments
#
#### controller/compute mode settings ####
$mysql_host = '172.16.0.8'
$keystone_host = '172.16.0.7'
$glance_host = '172.16.0.6'
$nova_host = '172.16.0.5'
node /mysql/ {
class { 'openstack::db::mysql':
mysql_root_password => $mysql_root_password,
keystone_db_password => $keystone_db_password,
glance_db_password => $glance_db_password,
nova_db_password => $nova_db_password,
cinder_db_password => $cinder_db_password,
quantum_db_password => $quantum_db_password,
allowed_hosts => $allowed_hosts,
}
}
node /^keystone/ {
# TODO keystone logging seems to be totally broken in folsom
# this can be removed once it starts working
keystone_config {
'DEFAULT/log_config': ensure => absent,
}
class { 'openstack::keystone':
db_host => $mysql_host,
db_password => $keystone_db_password,
admin_token => $admin_token,
admin_email => $admin_email,
admin_password => $admin_password,
glance_user_password => $glance_user_password,
nova_user_password => $nova_user_password,
cinder_user_password => $cinder_user_password,
quantum_user_password => $quantum_user_password,
public_address => $keystone_host,
glance_public_address => $glance_host,
nova_public_address => $nova_host,
verbose => $verbose,
}
}
node /glance/ {
class { 'openstack::glance':
db_host => $mysql_host,
glance_user_password => $glance_user_password,
glance_db_password => $glance_db_password,
keystone_host => $keystone_host,
auth_uri => "http://${keystone_host}:5000/",
verbose => $verbose,
}
class { 'openstack::auth_file':
admin_password => $admin_password,
keystone_admin_token => $admin_token,
controller_node => $keystone_host,
}
}
node /nova-controller/ {
# deploy a script that can be used to test nova
class { 'openstack::test_file': }
class { 'openstack::nova::controller':
public_address => '172.16.0.5',
public_interface => $public_interface,
private_interface => $private_interface,
db_host => '172.16.0.8',
rabbit_password => $rabbit_password,
nova_user_password => $nova_user_password,
nova_db_password => $nova_db_password,
network_manager => 'nova.network.manager.FlatDHCPManager',
verbose => $verbose,
multi_host => true,
glance_api_servers => '172.16.0.6:9292',
keystone_host => '172.16.0.7',
#floating_range => $floating_network_range,
#fixed_range => $fixed_network_range,
}
class { 'openstack::horizon':
secret_key => $secret_key,
cache_server_ip => '127.0.0.1',
cache_server_port => '11211',
swift => false,
quantum => false,
horizon_app_links => undef,
keystone_host => '172.16.0.7',
keystone_default_role => 'Member',
}
class { 'openstack::auth_file':
admin_password => $admin_password,
keystone_admin_token => $admin_token,
controller_node => '172.16.0.7',
}
}
node /nova-compute/ {
fail('nova compute node has not been defined')
}
node /cinder/ {
fail('the individual cinder role is not fully tested yet..')
class { 'cinder':
rabbit_password => $rabbit_password,
# TODO what about the rabbit user?
rabbit_host => $openstack_controller,
sql_connection => "mysql://cinder:${cinder_db_password}@${openstack_controller}/cinder?charset=utf8",
verbose => $verbose,
}
class { 'cinder::volume': }
class { 'cinder::volume::iscsi': }
}
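Puppet matches the node blocks above by regex against the agent's certname, so a host named keystone.puppetlabs.lan hits `node /^keystone/`. A toy Ruby model of that dispatch (a simplification: real Puppet has its own precedence rules for competing node definitions):

```ruby
# Toy model of regex-based node matching for the definitions above.
NODE_BLOCKS = [/mysql/, /^keystone/, /glance/, /nova-controller/, /nova-compute/, /cinder/]

# Returns the first node regex that matches the certname, or nil.
def matching_node(certname)
  NODE_BLOCKS.find { |re| certname =~ re }
end
```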

@@ -1,36 +0,0 @@
import 'hosts.pp'
file { '/etc/yum.repos.d':
ensure => directory,
}
file { '/tmp/setup_epel.sh':
content =>
'
#!/bin/bash
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6*.rpm'
}
exec { '/bin/bash /tmp/setup_epel.sh':
refreshonly => true,
subscribe => File['/tmp/setup_epel.sh']
}
ini_setting { 'enable_epel_testing':
path => '/etc/yum.repos.d/epel-testing.repo',
section => 'epel-testing',
setting => 'enabled',
value => '1',
ensure => present,
require => Exec['/bin/bash /tmp/setup_epel.sh'],
}
ini_setting { 'yum_proxy':
path => '/etc/yum.conf',
section => 'main',
setting => 'proxy',
value => 'http://172.16.0.1:3128',
ensure => present,
require => Exec['/bin/bash /tmp/setup_epel.sh'],
}

@@ -1,99 +0,0 @@
#
# this manifest performs essentially environment configuration
# that needs to be run before anything is configured
#
#
# setup basic dns in /etc/hosts
#
host {
'puppetmaster': ip => '172.16.0.31', host_aliases => ['puppetmaster.puppetlabs.lan'];
'openstackcontroller': ip => '172.16.0.3';
'compute1': ip => '172.16.0.4';
'compute2': ip => '172.16.0.14';
'novacontroller': ip => '172.16.0.5';
'glance': ip => '172.16.0.6';
'keystone': ip => '172.16.0.7';
'mysql': ip => '172.16.0.8';
'cinderclient': ip => '172.16.0.9';
'quantumagent': ip => '172.16.0.10';
'swift_proxy': ip => '172.16.0.21';
'swift_storage_1': ip => '172.16.0.22';
'swift_storage_2': ip => '172.16.0.23';
'swift_storage_3': ip => '172.16.0.24';
}
group { 'puppet':
ensure => 'present',
}
# lay down a file that you can run for testing
file { '/root/run_puppet.sh':
content =>
"#!/bin/bash
puppet apply --modulepath /etc/puppet/modules-0/ --certname ${clientcert} /etc/puppet/manifests/site.pp $*"
}
package { ['make', 'gcc']:
ensure => present,
}
if $::puppetversion !~ /(?i:enterprise)/ {
# install hiera, to support Puppet pre 3.0
# note that we don't need to do this on PE
# as hiera is installed with PE by default
package { ['hiera', 'hiera-puppet', 'ruby-debug']:
ensure => present,
provider => 'gem',
} <- Package['make', 'gcc']
}
file { "${settings::confdir}/hiera.yaml":
content =>
'
---
:backends:
- yaml
:hierarchy:
- "%{hostname}"
- jenkins
- common
:yaml:
:datadir: /etc/puppet/hiera_data'
}
package { 'wget':
ensure => present,
}
file_line { 'wgetrc_proxy':
ensure => present,
line => "https_proxy = http://172.16.0.1:3128/",
path => '/etc/wgetrc',
require => Package['wget'],
}
# not sure if this is the best place for my puppetmaster config
node /puppetmaster/ {
Ini_setting {
path => $settings::config,
section => 'main',
ensure => present,
}
ini_setting {'vardir':
setting => 'vardir',
value => '/var/lib/puppet/',
}
ini_setting {'ssldir':
setting => 'ssldir',
value => '/var/lib/puppet/ssl/',
}
ini_setting {'rundir':
setting => 'rundir',
value => '/var/run/puppet/',
}
}
node default { }
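The hiera.yaml written above makes Hiera try a per-host data file first, then jenkins, then common, returning the first hit. A toy Ruby model of that first-match resolution (the data hash is hypothetical; real Hiera reads YAML files from /etc/puppet/hiera_data):

```ruby
# First-match lookup over the hierarchy from the hiera.yaml above:
# "%{hostname}" -> jenkins -> common. The data below is made up.
def hiera_lookup(key, hostname, data)
  [hostname, 'jenkins', 'common'].each do |level|
    next unless (level_data = data[level])
    return level_data[key] if level_data.key?(key)
  end
  nil
end

DATA = {
  'swift-storage-1' => { 'swift_zone' => 1 },                      # per-host override
  'common'          => { 'swift_zone' => 0, 'verbose' => 'True' }, # shared defaults
}
```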

@@ -1,39 +0,0 @@
#import 'hosts.pp'
#
# This puppet manifest is already applied first to do some environment specific things
#
$openstack_version = hiera('openstack_version', 'folsom')
apt::source { 'openstack_cloud_archive':
location => "http://ubuntu-cloud.archive.canonical.com/ubuntu",
release => "precise-updates/${openstack_version}",
repos => "main",
required_packages => 'ubuntu-cloud-keyring',
}
#
# configure apt to use my squid proxy
# I highly recommend that anyone doing development on
# OpenStack set up a proxy to cache packages.
#
class { 'apt':
proxy_host => '172.16.0.1',
proxy_port => '3128',
}
# an apt-get update is usually required to ensure that
# we get the latest version of the openstack packages
exec { '/usr/bin/apt-get update':
require => Class['apt'],
refreshonly => true,
subscribe => [Class['apt'], Apt::Source["openstack_cloud_archive"]],
logoutput => true,
}
# run the apt get update before any packages are installed!
Exec['/usr/bin/apt-get update'] -> Package<||>
package { 'vim': ensure => present }
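The apt::source above builds the cloud-archive pocket name from the hiera'd openstack_version, falling back to folsom. The same construction expressed in Ruby (the function name is illustrative):

```ruby
# Sketch of the release-pocket string built in precise64.pp above;
# hiera('openstack_version', 'folsom') defaults to folsom when unset.
def cloud_archive_release(openstack_version = nil)
  "precise-updates/#{openstack_version || 'folsom'}"
end
```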

@@ -1,412 +0,0 @@
## This document serves as an example of how to deploy
# basic single and multi-node openstack environments.
#
####### shared variables ##################
#Exec {
# logoutput => true,
#}
# database config
$mysql_root_password = hiera('mysql_root_password', 'mysql_root_password')
$keystone_db_password = hiera('keystone_db_password', 'keystone_db_password')
$glance_db_password = hiera('glance_db_password', 'glance_db_password')
$nova_db_password = hiera('nova_db_password', 'nova_db_password')
$cinder_db_password = hiera('cinder_db_password', 'cinder_db_password')
$quantum_db_password = hiera('quantum_db_password', 'quantum_db_password')
$allowed_hosts = hiera('allowed_hosts', ['%'])
# keystone settings
$admin_token = hiera('admin_token', 'service_token')
$admin_email = hiera('admin_email', 'keystone@localhost')
$admin_password = hiera('admin_password', 'ChangeMe')
$glance_user_password = hiera('glance_user_password', 'glance_user_password')
$nova_user_password = hiera('nova_user_password', 'nova_user_password')
$cinder_user_password = hiera('cinder_user_password', 'cinder_user_password')
$quantum_user_password = hiera('quantum_user_password', 'quantum_user_password')
$verbose = hiera('verbose', 'True')
$public_interface = hiera('public_interface', 'eth0')
$private_interface = hiera('private_interface', 'eth2')
$rabbit_password = hiera('rabbit_password', 'rabbit_password')
$rabbit_user = hiera('rabbit_user', 'nova')
$secret_key = hiera('secret_key', 'secret_key')
$libvirt_type = hiera('libvirt_type', 'qemu')
#$network_type = hiera('', 'quantum')
$network_type = hiera('network_type', 'nova')
if $network_type == 'nova' {
$use_quantum = false
$multi_host = true
$nova_network = true
} else {
$nova_network = false
$use_quantum = true
}
$fixed_network_range = hiera('fixed_network_range', '10.0.0.0/24')
$floating_network_range = hiera('floating_network_range', '172.16.0.128/25')
$auto_assign_floating_ip = hiera('auto_assign_floating_ip', false)
#### end shared variables #################
#### controller/compute mode settings ####
$openstack_controller = hiera('openstack_controller', '172.16.0.3')
#### controller/compute mode settings ####
$openstack_version = hiera('openstack_version', 'grizzly')
# node declaration for all in one
import 'scenarios/all_in_one.pp'
# node declarations for a single server per role
import 'scenarios/multi_role.pp'
# import external swift definitions
import '/etc/puppet/modules-0/swift/examples/site.pp'
node /openstack-controller/ {
# deploy a script that can be used to test nova
class { 'openstack::test_file':
quantum => $use_quantum,
sleep_time => 120,
floating_ip => $nova_network,
}
if $::osfamily == 'Redhat' {
# redhat specific dashboard stuff
file_line { 'nova_sudoers':
line => 'nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf *',
path => '/etc/sudoers',
before => Package['nova-common'],
}
nova_config { 'DEFAULT/rpc_backend': value => 'nova.openstack.common.rpc.impl_kombu';}
cinder_config { 'DEFAULT/rpc_backend': value => 'cinder.openstack.common.rpc.impl_kombu';}
#selboolean{'httpd_can_network_connect':
# value => on,
# persistent => true,
#}
firewall { '001 horizon incoming':
proto => 'tcp',
dport => ['80'],
action => 'accept',
}
firewall { '001 glance incoming':
proto => 'tcp',
dport => ['9292'],
action => 'accept',
}
firewall { '001 keystone incoming':
proto => 'tcp',
dport => ['5000', '35357'],
action => 'accept',
}
firewall { '001 mysql incoming':
proto => 'tcp',
dport => ['3306'],
action => 'accept',
}
firewall { '001 novaapi incoming':
proto => 'tcp',
dport => ['8773', '8774', '8776'],
action => 'accept',
}
firewall { '001 qpid incoming':
proto => 'tcp',
dport => ['5672'],
action => 'accept',
}
firewall { '001 novncproxy incoming':
proto => 'tcp',
dport => ['6080'],
action => 'accept',
}
}
class { 'openstack::controller':
#floating_range => $floating_network_range,
# Required Network
public_address => $openstack_controller,
public_interface => $public_interface,
private_interface => $private_interface,
# Required Database
mysql_root_password => $mysql_root_password,
# Required Keystone
admin_email => $admin_email,
admin_password => $admin_password,
keystone_db_password => $keystone_db_password,
keystone_admin_token => $admin_token,
# Required Glance
glance_db_password => $glance_db_password,
glance_user_password => $glance_user_password,
# Required Nova
nova_db_password => $nova_db_password,
nova_user_password => $nova_user_password,
# cinder
cinder_db_password => $cinder_db_password,
cinder_user_password => $cinder_user_password,
cinder => true,
# quantum
quantum => $use_quantum,
quantum_db_password => $quantum_db_password,
quantum_user_password => $quantum_user_password,
# horizon
secret_key => $secret_key,
# need to sort out networking...
network_manager => 'nova.network.manager.FlatDHCPManager',
fixed_range => $fixed_network_range,
floating_range => $floating_network_range,
create_networks => true,
multi_host => $multi_host,
db_host => '127.0.0.1',
db_type => 'mysql',
mysql_account_security => true,
# TODO - this should not allow all
allowed_hosts => '%',
# Keystone
# Glance
glance_api_servers => '127.0.0.1:9292',
rabbit_password => $rabbit_password,
rabbit_user => $rabbit_user,
# Horizon
cache_server_ip => '127.0.0.1',
cache_server_port => '11211',
horizon_app_links => undef,
# General
verbose => $verbose,
purge_nova_config => false,
}
package { 'python-cliff':
ensure => present,
}
class { 'openstack::auth_file':
admin_password => $admin_password,
keystone_admin_token => $admin_token,
controller_node => '127.0.0.1',
}
keystone_config {
'DEFAULT/log_config': ensure => absent,
}
}
node /compute/ {
# TODO not sure why this is required
# this has a bug, and is constantly added to the file
if $libvirt_type == 'qemu' {
if $::osfamily == 'Debian' {
Package['libvirt'] ->
file_line { 'qemu_hack':
line => 'cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet", "/dev/net/tun",]',
path => '/etc/libvirt/qemu.conf',
ensure => present,
} ~> Service['libvirt']
} elsif $::osfamily == 'RedHat' {
cinder_config { 'DEFAULT/rpc_backend': value => 'cinder.openstack.common.rpc.impl_kombu';}
file_line { 'nova_sudoers':
line => 'nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf *',
path => '/etc/sudoers',
before => Service['nova-network'],
}
file_line { 'cinder_sudoers':
line => 'cinder ALL = (root) NOPASSWD: /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf *',
path => '/etc/sudoers',
before => Service['cinder-volume'],
}
nova_config { 'DEFAULT/rpc_backend': value => 'nova.openstack.common.rpc.impl_kombu';}
nova_config{
"DEFAULT/network_host": value => $openstack_controller;
"DEFAULT/libvirt_inject_partition": value => "-1";
}
if $libvirt_type == "qemu" {
file { "/usr/bin/qemu-system-x86_64":
ensure => link,
target => "/usr/libexec/qemu-kvm",
notify => Service["nova-compute"],
}
}
firewall { '001 vnc listen incoming':
proto => 'tcp',
dport => ['6080'],
action => 'accept',
}
firewall { '001 volume incoming':
proto => 'tcp',
dport => ['3260'],
action => 'accept',
}
}
}
class { 'cinder::setup_test_volume': } -> Service<||>
class { 'openstack::compute':
public_interface => $public_interface,
private_interface => $private_interface,
internal_address => $::ipaddress_eth1,
libvirt_type => $libvirt_type,
db_host => $openstack_controller,
cinder_db_password => $cinder_db_password,
nova_db_password => $nova_db_password,
multi_host => $multi_host,
fixed_range => $fixed_network_range,
nova_user_password => $nova_user_password,
quantum => $use_quantum,
quantum_host => $openstack_controller,
quantum_user_password => $quantum_user_password,
glance_api_servers => ["${openstack_controller}:9292"],
rabbit_user => $rabbit_user,
rabbit_password => $rabbit_password,
rabbit_host => $openstack_controller,
keystone_host => $openstack_controller,
vncproxy_host => $openstack_controller,
vnc_enabled => true,
verbose => $verbose,
}
}
node /tempest/ {
if $::openstack_version == 'folsom' {
# this assumes that tempest is being run on the same node
# as the openstack controller
if $osfamily == 'redhat' {
$nova_api_service_name = 'openstack-nova-api'
} else {
$nova_api_service_name = 'nova-api'
}
service { 'nova-api':
name => $nova_api_service_name
}
Nova_config<||> ~> Service['nova-api']
Nova_paste_api_ini<||> ~> Service['nova-api']
nova_config { 'DEFAULT/api_rate_limit': value => 'false' }
# remove rate limiting
# this may be folsom specific
nova_paste_api_ini {
'composite:openstack_compute_api_v2/noauth': value => 'faultwrap sizelimit noauth osapi_compute_app_v2';
'composite:openstack_compute_api_v2/keystone': value => 'faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2';
'composite:openstack_volume_api_v1/noauth': value => 'faultwrap sizelimit noauth osapi_volume_app_v1';
'composite:openstack_volume_api_v1/keystone': value => 'faultwrap sizelimit authtoken keystonecontext osapi_volume_app_v1';
}
}
if ($::openstack_version == 'grizzly') {
$revision = 'master'
} else {
$revision = $::openstack_version
}
class { 'tempest':
identity_host => $::openstack_controller,
identity_port => '35357',
identity_api_version => 'v2.0',
# non admin user
username => 'user1',
password => 'user1_password',
tenant_name => 'tenant1',
# another non-admin user
alt_username => 'user2',
alt_password => 'user2_password',
alt_tenant_name => 'tenant2',
# image information
image_id => 'XXXXXXX',#<%= image_id %>,
image_id_alt => 'XXXXXXX',#<%= image_id_alt %>,
flavor_ref => 1,
flavor_ref_alt => 2,
# the version of the openstack images api to use
image_api_version => '1',
image_host => $::openstack_controller,
image_port => '9292',
# this should be the username of a user with administrative privileges
admin_username => 'admin',
admin_password => $::admin_password,
admin_tenant_name => 'admin',
nova_db_uri => 'mysql://nova:nova_db_password@127.0.0.1/nova',
version_to_test => $revision,
}
class { 'openstack::auth_file':
admin_password => $::admin_password,
keystone_admin_token => $::admin_token,
controller_node => $::openstack_controller,
}
}
node /devstack/ {
class { 'devstack': }
}
node default {
notify { $clientcert: }
}
node puppetmaster {
$hostname = 'puppetmaster'
### Add the puppetlabs repo
apt::source { 'puppetlabs':
location => 'http://apt.puppetlabs.com',
repos => 'main',
key => '4BD6EC30',
key_server => 'pgp.mit.edu',
tag => ['puppet'],
}
Exec["apt_update"] -> Package <| |>
package { ['hiera', 'hiera-puppet']:
ensure => present,
provider => 'gem',
require => Package['puppetmaster'],
}
class { 'puppet::master':
autosign => true,
modulepath => '/etc/puppet/modules-0',
}
class { 'puppetdb':
require => Class['puppet::master'],
}
# Configure the puppet master to use puppetdb.
class { 'puppetdb::master::config':
restart_puppet => false,
puppetdb_startup_timeout => 240,
notify => Class['apache'],
}
}


@ -1 +0,0 @@
require 'puppetlabs_spec_helper/module_spec_helper'


@ -1,35 +0,0 @@
require 'puppetlabs/os_tester'
describe 'build out a swift cluster and test it' do
def base_dir
File.join(File.dirname(__FILE__), '..')
end
include Puppetlabs::OsTester
before :all do
cmd_system('vagrant destroy -f')
end
before :each do
destroy_swift_vms
deploy_puppetmaster
end
['ubuntu'].each do |os|
describe "testing #{os}" do
before :each do
update_vagrant_os(os)
end
it 'should be able to build out a full swift cluster' do
deploy_swift_cluster
result = test_swift
puts result.inspect
result.split("\n").last.should =~ /Dude/
end
end
end
end


@ -1,43 +0,0 @@
require File.join(
File.dirname(__FILE__),
'..',
'lib',
'puppetlabs',
'os_tester'
)
describe 'test various two node configurations' do
def base_dir
File.join(File.dirname(__FILE__), '..')
end
include Puppetlabs::OsTester
before :each do
cmd_system('vagrant destroy -f')
end
[
'redhat',
'ubuntu'
].each do |os|
describe "test #{os}" do
it 'should be able to build out a two node environment' do
update_vagrant_os(os)
deploy_two_node
# on_box runs as sudo
result = on_box('openstack_controller', 'bash /tmp/test_nova.sh;exit $?')
result.split("\n").last.should == 'cirros'
end
end
end
after :all do
end
end


@ -1,129 +0,0 @@
#!/bin/bash
#
# script to build a two node openstack environment and test.
# this script is intended to be run as a jenkins parameterized build with
# the following build parameters:
# $operatingsystem - operating system to run the install on (accepts redhat/ubuntu)
# $openstack_version - openstack version to test (accepts folsom/grizzly)
# $test_mode - type of test to run (accepts: tempest_full, tempest_smoke, puppet_openstack, unit, puppet_swift)
# $module_install_method - how to install modules (accepts librarian or pmt)
#
# it also allows the following optional build parameters
# $module_repo - path of the module under test (e.g. modules/swift)
# $checkout_branch_command - command that is run after all gems and modules have been installed. This is
# intended to be a placeholder for logic that checks out branches
#
# I am running it as follows:
# mkdir $BUILD_ID
# cd $BUILD_ID
# git clone git://github.com/puppetlabs/puppetlabs-openstack_dev_env
# cd puppetlabs-openstack_dev_env
# bash test_scripts/openstack_test.sh
# TODO figure out if I should add pull request support
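# For reference, a hypothetical set of Jenkins build-parameter values for a
# single smoke-test run (the names are the parameters documented above; the
# values are illustrative):

```shell
# Illustrative Jenkins build-parameter values; the job exports these
# as environment variables before invoking the test script.
export operatingsystem=ubuntu           # or redhat
export openstack_version=grizzly        # or folsom
export test_mode=tempest_smoke          # tempest_full|tempest_smoke|puppet_openstack|unit|puppet_swift
export module_install_method=librarian  # or pmt
echo "${operatingsystem}/${openstack_version}/${test_mode}/${module_install_method}"
```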
set -e
set -u
# install gem dependencies into a local vendor directory
mkdir -p .vendor
export GEM_HOME=$(pwd)/.vendor
bundle install
# install required modules
if [ "$module_install_method" = 'librarian' ]; then
bundle exec librarian-puppet install
elif [ "$module_install_method" = 'pmt' ]; then
puppet module install --modulepath=$(pwd)/modules puppetlabs-openstack
git clone https://github.com/ripienaar/hiera-puppet modules/hiera_puppet
git clone git://github.com/puppetlabs/puppetlabs-swift modules/swift
git clone git://github.com/puppetlabs/puppetlabs-tempest modules/tempest
git clone git://github.com/puppetlabs/puppetlabs-vcsrepo modules/vcsrepo
fi
if [ -n "${module_repo:-}" ]; then
if [ ! "${module_repo:-}" = 'openstack_dev_env' ]; then
pushd $module_repo
fi
if [ -n "${checkout_branch_command:-}" ]; then
eval $checkout_branch_command
fi
if [ ! "${module_repo:-}" = 'openstack_dev_env' ]; then
popd
fi
fi
# only build out integration test environment if we are not running unit tests
if [ "$test_mode" != 'unit' ]; then
# set operatingsystem to use for integration tests
echo "operatingsystem: ${operatingsystem}" > config.yaml
if [ "$openstack_version" = 'grizzly' ]; then
echo 'openstack_version: grizzly' > hiera_data/jenkins.yaml
else
echo 'openstack_version: folsom' > hiera_data/jenkins.yaml
fi
if [ "${module_repo:-}" = 'modules/swift' ] ; then
# build out a swift test environment (requires a puppetmaster)
# set up the environment for a swift test:
# clean up any leftover swift VMs before redeploying
for i in puppetmaster swift_storage_1 swift_storage_2 swift_storage_3 swift_proxy swift_keystone; do
# cleanup running swift instances
if VBoxManage list vms | grep ${i}.puppetlabs.lan; then
VBoxManage controlvm ${i}.puppetlabs.lan poweroff || true
VBoxManage unregistervm ${i}.puppetlabs.lan --delete
fi
done
# build out a puppetmaster
bundle exec vagrant up puppetmaster
# deploy swift
bundle exec rake openstack:deploy_swift
else
# build out an openstack environment
# install a controller and compute instance
# check that the VM is not currently running
# if it is, stop that VM
if VBoxManage list vms | grep openstack_controller.puppetlabs.lan; then
VBoxManage controlvm openstack_controller.puppetlabs.lan poweroff || true
VBoxManage unregistervm openstack_controller.puppetlabs.lan --delete
fi
bundle exec vagrant up openstack_controller
# check if the compute VM is running, if so stop the VM before launching ours
if VBoxManage list vms | grep compute2.puppetlabs.lan; then
VBoxManage controlvm compute2.puppetlabs.lan poweroff || true
VBoxManage unregistervm compute2.puppetlabs.lan --delete
fi
bundle exec vagrant up compute2
# install tempest on the controller
bundle exec vagrant status
fi
fi
# decide what kind of tests to run
if [ $test_mode = 'puppet_openstack' ]; then
# run my simple puppet integration tests
bundle exec vagrant ssh -c 'sudo bash /tmp/test_nova.sh;exit $?' openstack_controller
elif [ $test_mode = 'tempest_smoke' ]; then
# run the tempest smoke tests
bundle exec vagrant ssh -c 'sudo puppet apply --certname tempest --modulepath=/etc/puppet/modules-0/ /etc/puppet/manifests/site.pp --trace --debug' openstack_controller
# run tempest tests
bundle exec vagrant ssh -c 'cd /var/lib/tempest/;sudo ./jenkins_launch_script.sh --smoke;exit $?;' openstack_controller
elif [ $test_mode = 'tempest_full' ]; then
bundle exec vagrant ssh -c 'cd /var/lib/tempest/;sudo ./jenkins_launch_script.sh;exit $?;' openstack_controller
elif [ $test_mode = 'unit' ]; then
bundle exec rake test:unit
elif [ $test_mode = 'puppet_swift' ] ; then
# assume that if the repo was swift that we are running our special little swift tests
bundle exec vagrant ssh -c 'sudo ruby /tmp/swift_test_file.rb;exit $?' swift_proxy
else
echo "Unsupported test_mode ${test_mode}; this script only supports tempest_full, tempest_smoke, puppet_openstack, unit and puppet_swift" >&2
exit 1
fi