merge from trunk

Scott Moser 2013-03-07 16:27:47 -05:00
commit 7bb21a6384
86 changed files with 17607 additions and 659 deletions

ChangeLog

@@ -2,6 +2,53 @@
- add a debian watch file
- add 'sudo' entry to ubuntu's default user (LP: #1080717)
- fix resizefs module when 'noblock' was provided (LP: #1080985)
- make sure there are no blank lines in /etc/ca-certificates.conf
(LP: #1077020)
- fix sudoers writing when entry is a string (LP: #1079002)
- tools/write-ssh-key-fingerprints: use '-s' rather than '--stderr'
option (LP: #1083715)
- make install of puppet configurable (LP: #1090205) [Craig Tracey]
- support omnibus installer for chef [Anatoliy Dobrosynets]
- fix bug where cloud-config in user-data could not modify system_info
settings (LP: #1090482)
- fix CloudStack DataSource to use Virtual Router as described by
CloudStack documentation if it is available by searching through dhclient
lease files. If it is not available, then fall back to the default
gateway. (LP: #1089989)
- fix redaction of password field in log (LP: #1096417)
- fix to cloud-config user setup. Previously, lock_passwd was broken and
all accounts would be locked unless 'system' was given (LP: #1096423).
- Allow 'sr0' (or sr[0-9]) to be specified without /dev/ as a source for
mounts. [Vlastimil Holer]
- allow config-drive-data to come from a CD device by more correctly
filtering out partitions. (LP: #1100545)
- setup docs to be available on read-the-docs
https://cloudinit.readthedocs.org/en/latest/ (LP: #1093039)
- add HACKING file for information on contributing
- handle the legacy 'user:' configuration better, making it affect the
configured OS default user (LP: #1100920)
- Adding a resolv.conf configuration module (LP: #1100434). Currently only
working on redhat systems (no support for resolvconf)
- support grouping linux distros into "os_families". This allows a module
to operate on the family (redhat or debian) rather than the distro (ubuntu,
debian, fedora, rhel) (LP: #1100029)
- fix /etc/hosts writing when templates are used (LP: #1100036)
- add package versioning logic to package installation
functionality (LP: #1108047)
- fix documentation for write_files to correctly list 'permissions'
rather than 'perms' (LP: #1111205)
- cloud-init-container.conf: ensure /run/network before running ifquery
- DataSourceNoCloud: allow user-data and meta-data to be specified
in config (LP: #1115833).
- improve debian support in sysvinit scripts, package build scripts, and
split sources.list template to be distro specific.
- support for resizing btrfs root filesystems [Blair Zajac]
- fix issue when writing ssh keys to .ssh/authorized_keys (LP: #1136343)
- upstart: cloud-init-nonet.conf trap the TERM signal, so that dmesg or other
output does not get a 'killed by TERM signal' message.
- support resizing partitions via growpart or parted (LP: #1136936)
- allow specifying apt-get command in distro config ('apt_get_command')
0.7.1:
- sysvinit: fix missing dependency in cloud-init job for RHEL 5.6
- config-drive: map hostname to local-hostname (LP: #1061964)
@@ -38,12 +85,13 @@
- fix how string escaping was not working when the string was a unicode
string which was causing the warning message not to be written
out (LP: #1075756)
- for boto > 0.6.0 there was a lazy load of the metadata added, when
cloud-init runs the usage of this lazy loading is hidden and since that lazy
loading will be performed on future attribute access we must traverse the
lazy loaded dictionary and force it to full expand so that if cloud-init
blocks the ec2 metadata port the lazy loaded dictionary will continue
working properly instead of trying to make additional url calls which will
fail (LP: #1068801)
- use a set of helper/parsing classes to perform system configuration
for easier test. (/etc/sysconfig, /etc/hostname, resolv.conf, /etc/hosts)
- add power_state_change config module for shutting down system after
@@ -58,7 +106,8 @@
- do not 'start networking' in cloud-init-nonet, but add
cloud-init-container job that runs only if in container and emits
net-device-added (LP: #1031065)
- search only top level dns for 'instance-data' in
DataSourceEc2 (LP: #1040200)
- add support for config-drive-v2 (LP:#1037567)
- support creating users, including the default user.
[Ben Howard] (LP: #1028503)
@@ -148,8 +197,8 @@
detailed information on python 2.6 and 2.7
- forced all code loading, moving, chmod, writing files and other system
level actions to go through standard set of util functions, this greatly
helps in debugging and determining exactly which system actions cloud-init
is performing
- adjust url fetching and url trying to go through a single function that
reads urls in the new 'url helper' file, this helps in tracing, debugging
and knowing which urls are being called and/or posted to from within
@@ -185,28 +234,30 @@
very simple set of ec2 meta-data back to callers
- useful for testing or for understanding what the ec2 meta-data
service can provide in terms of data or functionality
- for ssh key and authorized key file parsing add in classes and util
functions that maintain the state of individual lines, allowing for a
clearer separation of parsing and modification (useful for testing and
tracing)
- add a set of 'base' init.d scripts that can be used on systems that do
not have full upstart or systemd support (or support that does not match
the standard fedora/ubuntu implementation)
- currently these are being tested on RHEL 6.2
- separate the datasources into their own subdirectory (instead of being
a top-level item), this matches how config 'modules' and user-data
'handlers' are also in their own subdirectory (thus helping new developers
and others understand the code layout in a quicker manner)
- add the building of rpms based off a new cli tool and template 'spec' file
that will templatize and perform the necessary commands to create a source
and binary package to be used with a cloud-init install on a 'rpm'
supporting system
- uses the new standard set of requires and converts those pypi requirements
into a local set of package requirements (that are known to exist on RHEL
systems but should also exist on fedora systems)
- adjust the bdeb builder to be a python script (instead of a shell script)
and make its 'control' file a template that takes in the standard set of
pypi dependencies and uses a local mapping (known to work on ubuntu) to
create the packages set of dependencies (that should also work on
ubuntu-like systems)
- pythonify a large set of various pieces of code
- remove wrapping return statements with () when it has no effect
- upper case all constants used
@@ -217,8 +268,8 @@
their own equality)
- use context managers on locks, tempdir, chdir, file, selinux, umask,
unmounting commands so that these actions do not have to be closed and/or
cleaned up manually in finally blocks, which is typically not done and
will eventually be a bug in the future
- use the 'abc' module for abstract classes base where possible
- applied in the datasource root class, the distro root class, and the
user-data v2 root class
@@ -248,17 +299,18 @@
config without sections better (and it also maintains comments instead of
removing them)
- use the new defaulting config parser (that will not raise errors on sections
that do not exist or return errors when values are fetched that do not
exist) in the 'puppet' module
- for config 'modules' add in the ability for the module to provide a list of
distro names which it is known to work with; if the distro being used
does not match one of those in this list, a warning will be written out
saying that this module may not work correctly on this
distribution
- for all dynamically imported modules ensure that they are fixed up before
they are used by ensuring that they have certain attributes, if they do not
have those attributes they will be set to a sensible set of defaults instead
- adjust all 'config' modules and handlers to use the adjusted util functions
and the new distro objects where applicable so that those pieces of code can
benefit from the unified and enhanced functionality being provided in that
util module
- fix a potential bug whereby when a #includeonce was encountered it would
@@ -266,8 +318,8 @@
it would continue checking against that cache, instead of refetching (which
would likely be the expected case)
- add a openstack/nova based pep8 extension utility ('hacking.py') that allows
for custom checks (along with the standard pep8 checks) to occur when
running 'make pep8' and its derivatives
0.6.4:
- support relative path in AuthorizedKeysFile (LP: #970071).
- make apt-get update run with --quiet (suitable for logging) (LP: #1012613)
@@ -455,3 +507,4 @@
- make the message on 'disable_root' more clear (LP: #672417)
- do not require public key if private is given in ssh cloud-config
(LP: #648905)
# vi: syntax=text textwidth=79

HACKING.rst (new file)

@@ -0,0 +1,48 @@
=====================
Hacking on cloud-init
=====================
To get changes into cloud-init, the process to follow is:
* If you have not already, be sure to sign the CCA:
- `Canonical Contributor Agreement`_
* Get your changes into a local bzr branch.
Initialize a repo, and check out trunk (init-repo shares bzr info across multiple checkouts; it's different from git):
- ``bzr init-repo cloud-init``
- ``bzr branch lp:cloud-init trunk.dist``
- ``bzr branch trunk.dist my-topic-branch``
* Commit your changes (note: you can make multiple commits, fixes, then more commits):
- ``bzr commit``
* Check pylint and pep8 and test, and address any issues:
- ``make test pylint pep8``
* Push to launchpad to a personal branch:
- ``bzr push lp:~<YOUR_USERNAME>/cloud-init/<BRANCH_NAME>``
* Propose that for a merge into lp:cloud-init via web browser.
- Open the branch in `Launchpad`_
- It will typically be at ``https://code.launchpad.net/~<YOUR_USERNAME>/<PROJECT>/<BRANCH_NAME>``
- e.g. https://code.launchpad.net/~smoser/cloud-init/mybranch
* Click 'Propose for merging'
* Select 'lp:cloud-init' as the target branch
Then, someone on cloud-init-dev (currently `Scott Moser`_ and `Joshua Harlow`_) will
review your changes and follow up in the merge request.
Feel free to ping and/or join #cloud-init on freenode (irc) if you have any questions.
.. _Launchpad: https://launchpad.net
.. _Canonical Contributor Agreement: http://www.canonical.com/contributors
.. _Scott Moser: https://launchpad.net/~smoser
.. _Joshua Harlow: https://launchpad.net/~harlowja


@@ -52,5 +52,7 @@ def fixup_module(mod, def_freq=PER_INSTANCE):
if freq and freq not in FREQUENCIES:
LOG.warn("Module %s has an unknown frequency %s", mod, freq)
if not hasattr(mod, 'distros'):
setattr(mod, 'distros', None)
setattr(mod, 'distros', [])
if not hasattr(mod, 'osfamilies'):
setattr(mod, 'osfamilies', [])
return mod
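
Why the distros default changed from None to an empty list: a later hunk in this commit builds set(mod.distros), and set(None) raises TypeError while set([]) is simply empty. A minimal standalone sketch of that interaction, with a stand-in module object:

class _Mod(object):
    pass

mod = _Mod()
if not hasattr(mod, 'distros'):
    setattr(mod, 'distros', [])    # new default: empty list, not None
worked_distros = set(mod.distros)  # set([]) == set(); set(None) would raise
assert worked_distros == set()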


@@ -140,10 +140,13 @@ def get_release():
def generate_sources_list(codename, mirrors, cloud, log):
template_fn = cloud.get_template_filename('sources.list')
template_fn = cloud.get_template_filename('sources.list.%s' %
(cloud.distro.name))
if not template_fn:
log.warn("No template found, not rendering /etc/apt/sources.list")
return
template_fn = cloud.get_template_filename('sources.list')
if not template_fn:
log.warn("No template found, not rendering /etc/apt/sources.list")
return
params = {'codename': codename}
for k in mirrors:


@@ -45,8 +45,15 @@ def add_ca_certs(certs):
# First ensure they are strings...
cert_file_contents = "\n".join([str(c) for c in certs])
util.write_file(CA_CERT_FULL_PATH, cert_file_contents, mode=0644)
# Append cert filename to CA_CERT_CONFIG file.
util.write_file(CA_CERT_CONFIG, "\n%s" % CA_CERT_FILENAME, omode="ab")
# We have to strip the content because blank lines in the file
# causes subsequent entries to be ignored. (LP: #1077020)
orig = util.load_file(CA_CERT_CONFIG)
cur_cont = '\n'.join([l for l in orig.splitlines()
if l != CA_CERT_FILENAME])
out = "%s\n%s\n" % (cur_cont.rstrip(), CA_CERT_FILENAME)
util.write_file(CA_CERT_CONFIG, out, omode="wb")
def remove_default_ca_certs():
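
The rewrite pattern above, restated in isolation: read the file back, drop any existing copy of the entry, and write the whole file so the entry is never preceded by a blank line. A self-contained sketch (the entry name is illustrative, and unlike the module this version also drops interior blank lines):

def append_cert_entry(conf_text, entry):
    # keep every non-blank line that is not already our entry,
    # then re-append the entry exactly once at the end
    lines = [l for l in conf_text.splitlines() if l and l != entry]
    return "\n".join(lines + [entry]) + "\n"

assert (append_cert_entry("a.crt\n\nb.crt\n", "cloud-init-ca-certs.crt")
        == "a.crt\nb.crt\ncloud-init-ca-certs.crt\n")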


@@ -22,6 +22,7 @@ import json
import os
from cloudinit import templater
from cloudinit import url_helper
from cloudinit import util
RUBY_VERSION_DEFAULT = "1.8"
@@ -35,6 +36,8 @@ CHEF_DIRS = [
'/var/run/chef',
]
OMNIBUS_URL = "https://www.opscode.com/chef/install.sh"
def handle(name, cfg, cloud, log, _args):
@@ -83,7 +86,9 @@ def handle(name, cfg, cloud, log, _args):
util.write_file('/etc/chef/firstboot.json', json.dumps(initial_json))
# If chef is not installed, we install chef based on 'install_type'
if not os.path.isfile('/usr/bin/chef-client'):
if (not os.path.isfile('/usr/bin/chef-client') or
util.get_cfg_option_bool(chef_cfg, 'force_install', default=False)):
install_type = util.get_cfg_option_str(chef_cfg, 'install_type',
'packages')
if install_type == "gems":
@@ -99,6 +104,14 @@ def handle(name, cfg, cloud, log, _args):
elif install_type == 'packages':
# this will install and run the chef-client from packages
cloud.distro.install_packages(('chef',))
elif install_type == 'omnibus':
url = util.get_cfg_option_str(chef_cfg, "omnibus_url", OMNIBUS_URL)
content = url_helper.readurl(url=url, retries=5)
with util.tempdir() as tmpd:
# use tmpd over tmpfile to avoid 'Text file busy' on execute
tmpf = "%s/chef-omnibus-install" % tmpd
util.write_file(tmpf, content, mode=0700)
util.subp([tmpf], capture=False)
else:
log.warn("Unknown chef install type %s", install_type)


@@ -0,0 +1,272 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import os.path
import re
import stat
from cloudinit import log as logging
from cloudinit.settings import PER_ALWAYS
from cloudinit import util
frequency = PER_ALWAYS
DEFAULT_CONFIG = {
'mode': 'auto',
'devices': ['/'],
}
def enum(**enums):
return type('Enum', (), enums)
RESIZE = enum(SKIPPED="SKIPPED", CHANGED="CHANGED", NOCHANGE="NOCHANGE",
FAILED="FAILED")
LOG = logging.getLogger(__name__)
def resizer_factory(mode):
resize_class = None
if mode == "auto":
for (_name, resizer) in RESIZERS:
cur = resizer()
if cur.available():
resize_class = cur
break
if not resize_class:
raise ValueError("No resizers available")
else:
mmap = {}
for (k, v) in RESIZERS:
mmap[k] = v
if mode not in mmap:
raise TypeError("unknown resize mode %s" % mode)
mclass = mmap[mode]()
if mclass.available():
resize_class = mclass
if not resize_class:
raise ValueError("mode %s not available" % mode)
return resize_class
class ResizeFailedException(Exception):
pass
class ResizeParted(object):
def available(self):
myenv = os.environ.copy()
myenv['LANG'] = 'C'
try:
(out, _err) = util.subp(["parted", "--help"], env=myenv)
if re.search(r"COMMAND.*resizepart\s+", out, re.DOTALL):
return True
except util.ProcessExecutionError:
pass
return False
def resize(self, diskdev, partnum, partdev):
before = get_size(partdev)
try:
util.subp(["parted", "resizepart", diskdev, partnum])
except util.ProcessExecutionError as e:
raise ResizeFailedException(e)
return (before, get_size(partdev))
class ResizeGrowPart(object):
def available(self):
myenv = os.environ.copy()
myenv['LANG'] = 'C'
try:
(out, _err) = util.subp(["growpart", "--help"], env=myenv)
if re.search(r"--update\s+", out, re.DOTALL):
return True
except util.ProcessExecutionError:
pass
return False
def resize(self, diskdev, partnum, partdev):
before = get_size(partdev)
try:
util.subp(["growpart", '--dry-run', diskdev, partnum])
except util.ProcessExecutionError as e:
if e.exit_code != 1:
util.logexc(LOG, ("Failed growpart --dry-run for (%s, %s)" %
(diskdev, partnum)))
raise ResizeFailedException(e)
return (before, before)
try:
util.subp(["growpart", diskdev, partnum])
except util.ProcessExecutionError as e:
util.logexc(LOG, "Failed: growpart %s %s" % (diskdev, partnum))
raise ResizeFailedException(e)
return (before, get_size(partdev))
def get_size(filename):
fd = os.open(filename, os.O_RDONLY)
try:
return os.lseek(fd, 0, os.SEEK_END)
finally:
os.close(fd)
def device_part_info(devpath):
# convert an entry in /dev/ to parent disk and partition number
# input of /dev/vdb or /dev/disk/by-label/foo
# rpath is hopefully a real-ish path in /dev (vda, sdb..)
rpath = os.path.realpath(devpath)
bname = os.path.basename(rpath)
syspath = "/sys/class/block/%s" % bname
if not os.path.exists(syspath):
raise ValueError("%s had no syspath (%s)" % (devpath, syspath))
ptpath = os.path.join(syspath, "partition")
if not os.path.exists(ptpath):
raise TypeError("%s not a partition" % devpath)
ptnum = util.load_file(ptpath).rstrip()
# for a partition, real syspath is something like:
# /sys/devices/pci0000:00/0000:00:04.0/virtio1/block/vda/vda1
rsyspath = os.path.realpath(syspath)
disksyspath = os.path.dirname(rsyspath)
diskmajmin = util.load_file(os.path.join(disksyspath, "dev")).rstrip()
diskdevpath = os.path.realpath("/dev/block/%s" % diskmajmin)
# diskdevpath has something like 253:0
# and udev has put links in /dev/block/253:0 to the device name in /dev/
return (diskdevpath, ptnum)
def devent2dev(devent):
if devent.startswith("/dev/"):
return devent
else:
result = util.get_mount_info(devent)
if not result:
raise ValueError("Could not determine device of '%s' % dev_ent")
return result[0]
def resize_devices(resizer, devices):
# returns a tuple of tuples containing (entry-in-devices, action, message)
info = []
for devent in devices:
try:
blockdev = devent2dev(devent)
except ValueError as e:
info.append((devent, RESIZE.SKIPPED,
"unable to convert to device: %s" % e,))
continue
try:
statret = os.stat(blockdev)
except OSError as e:
info.append((devent, RESIZE.SKIPPED,
"stat of '%s' failed: %s" % (blockdev, e),))
continue
if not stat.S_ISBLK(statret.st_mode):
info.append((devent, RESIZE.SKIPPED,
"device '%s' not a block device" % blockdev,))
continue
try:
(disk, ptnum) = device_part_info(blockdev)
except (TypeError, ValueError) as e:
info.append((devent, RESIZE.SKIPPED,
"device_part_info(%s) failed: %s" % (blockdev, e),))
continue
try:
(old, new) = resizer.resize(disk, ptnum, blockdev)
if old == new:
info.append((devent, RESIZE.NOCHANGE,
"no change necessary (%s, %s)" % (disk, ptnum),))
else:
info.append((devent, RESIZE.CHANGED,
"changed (%s, %s) from %s to %s" %
(disk, ptnum, old, new),))
except ResizeFailedException as e:
info.append((devent, RESIZE.FAILED,
"failed to resize: disk=%s, ptnum=%s: %s" %
(disk, ptnum, e),))
return info
def handle(_name, cfg, _cloud, log, _args):
if 'growpart' not in cfg:
log.debug("No 'growpart' entry in cfg. Using default: %s" %
DEFAULT_CONFIG)
cfg['growpart'] = DEFAULT_CONFIG
mycfg = cfg.get('growpart')
if not isinstance(mycfg, dict):
log.warn("'growpart' in config was not a dict")
return
mode = mycfg.get('mode', "auto")
if util.is_false(mode):
log.debug("growpart disabled: mode=%s" % mode)
return
devices = util.get_cfg_option_list(cfg, "devices", ["/"])
if not len(devices):
log.debug("growpart: empty device list")
return
try:
resizer = resizer_factory(mode)
except (ValueError, TypeError) as e:
log.debug("growpart unable to find resizer for '%s': %s" % (mode, e))
if mode != "auto":
raise e
return
resized = resize_devices(resizer, devices)
for (entry, action, msg) in resized:
if action == RESIZE.CHANGED:
log.info("'%s' resized: %s" % (entry, msg))
else:
log.debug("'%s' %s: %s" % (entry, action, msg))
RESIZERS = (('parted', ResizeParted), ('growpart', ResizeGrowPart))
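
The dry-run handshake ResizeGrowPart relies on, restated: judging from the code above, growpart --dry-run exiting 1 means "nothing to do", exit 0 means a resize would happen, and any other status is a real failure. A hedged standalone sketch (assumes growpart is installed):

import subprocess

def would_resize(diskdev, partnum):
    rc = subprocess.call(["growpart", "--dry-run", diskdev, str(partnum)])
    if rc == 0:
        return True     # growpart would grow the partition
    if rc == 1:
        return False    # already at maximum size (NOCHANGE)
    raise RuntimeError("growpart --dry-run failed: rc=%d" % rc)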


@@ -63,7 +63,7 @@ def handle(_name, cfg, cloud, log, _args):
if not ls_cloudcfg:
return
cloud.distro.install_packages(["landscape-client"])
cloud.distro.install_packages(('landscape-client',))
merge_data = [
LSC_BUILTIN_CFG,


@@ -25,8 +25,8 @@ import re
from cloudinit import type_utils
from cloudinit import util
# Shortname matches 'sda', 'sda1', 'xvda', 'hda', 'sdb', xvdb, vda, vdd1
SHORTNAME_FILTER = r"^[x]{0,1}[shv]d[a-z][0-9]*$"
# Shortname matches 'sda', 'sda1', 'xvda', 'hda', 'sdb', xvdb, vda, vdd1, sr0
SHORTNAME_FILTER = r"^([x]{0,1}[shv]d[a-z][0-9]*|sr[0-9]+)$"
SHORTNAME = re.compile(SHORTNAME_FILTER)
WS = re.compile("[%s]+" % (whitespace))
FSTAB_PATH = "/etc/fstab"
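
A quick runnable check of the widened filter: optical devices such as sr0 now match alongside the sd/hd/vd/xvd names, while unrelated devices still do not:

import re

SHORTNAME = re.compile(r"^([x]{0,1}[shv]d[a-z][0-9]*|sr[0-9]+)$")
for name in ("sda", "sda1", "xvdb", "vdd1", "hda", "sr0"):
    assert SHORTNAME.match(name)
assert not SHORTNAME.match("loop0")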


@@ -75,7 +75,7 @@ def load_power_state(cfg):
','.join(opt_map.keys()))
delay = pstate.get("delay", "now")
if delay != "now" and not re.match("\+[0-9]+", delay):
if delay != "now" and not re.match(r"\+[0-9]+", delay):
raise TypeError("power_state[delay] must be 'now' or '+m' (minutes).")
args = ["shutdown", opt_map[mode], delay]


@@ -57,8 +57,16 @@ def handle(name, cfg, cloud, log, _args):
puppet_cfg = cfg['puppet']
# Start by installing the puppet package ...
cloud.distro.install_packages(["puppet"])
# Start by installing the puppet package if necessary...
install = util.get_cfg_option_bool(puppet_cfg, 'install', True)
version = util.get_cfg_option_str(puppet_cfg, 'version', None)
if not install and version:
log.warn(("Puppet install set false but version supplied,"
" doing nothing."))
elif install:
log.debug(("Attempting to install puppet %s,"),
version if version else 'latest')
cloud.distro.install_packages(('puppet', version))
# ... and then update the puppet configuration
if 'conf' in puppet_cfg:


@@ -27,43 +27,30 @@ from cloudinit import util
frequency = PER_ALWAYS
def _resize_btrfs(mount_point, devpth): # pylint: disable=W0613
return ('btrfs', 'filesystem', 'resize', 'max', mount_point)
def _resize_ext(mount_point, devpth): # pylint: disable=W0613
return ('resize2fs', devpth)
def _resize_xfs(mount_point, devpth): # pylint: disable=W0613
return ('xfs_growfs', devpth)
# Do not use a dictionary as these commands should be able to be used
# for multiple filesystem types if possible, e.g. one command for
# ext2, ext3 and ext4.
RESIZE_FS_PREFIXES_CMDS = [
('ext', 'resize2fs'),
('xfs', 'xfs_growfs'),
('btrfs', _resize_btrfs),
('ext', _resize_ext),
('xfs', _resize_xfs),
]
NOBLOCK = "noblock"
def nodeify_path(devpth, where, log):
try:
st_dev = os.stat(where).st_dev
dev = os.makedev(os.major(st_dev), os.minor(st_dev))
os.mknod(devpth, 0400 | stat.S_IFBLK, dev)
return st_dev
except:
if util.is_container():
log.debug("Inside container, ignoring mknod failure in resizefs")
return
log.warn("Failed to make device node to resize %s at %s",
where, devpth)
raise
def get_fs_type(st_dev, path, log):
try:
dev_entries = util.find_devs_with(tag='TYPE', oformat='value',
no_cache=True, path=path)
if not dev_entries:
return None
return dev_entries[0].strip()
except util.ProcessExecutionError:
util.logexc(log, ("Failed to get filesystem type"
" of maj=%s, min=%s for path %s"),
os.major(st_dev), os.minor(st_dev), path)
raise
def handle(name, cfg, _cloud, log, args):
if len(args) != 0:
resize_root = args[0]
@@ -80,52 +67,47 @@ def handle(name, cfg, _cloud, log, args):
# TODO(harlowja): allow what is to be resized to be configurable??
resize_what = "/"
with util.ExtendedTemporaryFile(prefix="cloudinit.resizefs.",
dir=resize_root_d, delete=True) as tfh:
devpth = tfh.name
result = util.get_mount_info(resize_what, log)
if not result:
log.warn("Could not determine filesystem type of %s", resize_what)
return
# Delete the file so that mknod will work
# but don't change the file handle to know that its
# removed so that when a later call that recreates
# occurs this temporary file will still benefit from
# auto deletion
tfh.unlink_now()
(devpth, fs_type, mount_point) = result
st_dev = nodeify_path(devpth, resize_what, log)
fs_type = get_fs_type(st_dev, devpth, log)
if not fs_type:
log.warn("Could not determine filesystem type of %s", resize_what)
return
# Ensure the path is a block device.
if not stat.S_ISBLK(os.stat(devpth).st_mode):
log.debug("The %s device which was found for mount point %s for %s "
"is not a block device" % (devpth, mount_point, resize_what))
return
resizer = None
fstype_lc = fs_type.lower()
for (pfix, root_cmd) in RESIZE_FS_PREFIXES_CMDS:
if fstype_lc.startswith(pfix):
resizer = root_cmd
break
resizer = None
fstype_lc = fs_type.lower()
for (pfix, root_cmd) in RESIZE_FS_PREFIXES_CMDS:
if fstype_lc.startswith(pfix):
resizer = root_cmd
break
if not resizer:
log.warn("Not resizing unknown filesystem type %s for %s",
fs_type, resize_what)
return
if not resizer:
log.warn("Not resizing unknown filesystem type %s for %s",
fs_type, resize_what)
return
log.debug("Resizing %s (%s) using %s", resize_what, fs_type, resizer)
resize_cmd = [resizer, devpth]
resize_cmd = resizer(resize_what, devpth)
log.debug("Resizing %s (%s) using %s", resize_what, fs_type,
' '.join(resize_cmd))
if resize_root == NOBLOCK:
# Fork to a child that will run
# the resize command
util.fork_cb(do_resize, resize_cmd, log)
# Don't delete the file now in the parent
tfh.delete = False
else:
do_resize(resize_cmd, log)
if resize_root == NOBLOCK:
# Fork to a child that will run
# the resize command
util.fork_cb(do_resize, resize_cmd, log)
else:
do_resize(resize_cmd, log)
action = 'Resized'
if resize_root == NOBLOCK:
action = 'Resizing (via forking)'
log.debug("%s root filesystem (type=%s, maj=%i, min=%i, val=%s)",
action, fs_type, os.major(st_dev), os.minor(st_dev), resize_root)
log.debug("%s root filesystem (type=%s, val=%s)", action, fs_type,
resize_root)
def do_resize(resize_cmd, log):
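
The shape of the new dispatch, restated standalone: each entry maps a filesystem-type prefix to a callable that returns the full command line, which is what lets btrfs receive the mount point while ext/xfs receive the device path:

def _resize_btrfs(mount_point, devpth):
    return ('btrfs', 'filesystem', 'resize', 'max', mount_point)

def _resize_ext(mount_point, devpth):
    return ('resize2fs', devpth)

RESIZE_FS_PREFIXES_CMDS = [('btrfs', _resize_btrfs), ('ext', _resize_ext)]

def pick_resizer(fs_type):
    for pfix, resize_fn in RESIZE_FS_PREFIXES_CMDS:
        if fs_type.lower().startswith(pfix):
            return resize_fn
    return None

assert pick_resizer('ext4')('/', '/dev/vda1') == ('resize2fs', '/dev/vda1')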


@@ -0,0 +1,107 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2013 Craig Tracey
#
# Author: Craig Tracey <craigtracey@gmail.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# Note:
# This module is intended to manage resolv.conf in environments where
# early configuration of resolv.conf is necessary for further
# bootstrapping and/or where configuration management such as puppet or
# chef own dns configuration. As Debian/Ubuntu will, by default, utilize
# resolvconf, and similarly RedHat will use sysconfig, this module is
# likely to be of little use unless those are configured correctly.
#
# For RedHat with sysconfig, be sure to set PEERDNS=no for all DHCP
# enabled NICs. And, in Ubuntu/Debian it is recommended that DNS
# be configured via the standard /etc/network/interfaces configuration
# file.
#
#
# Usage Example:
#
# #cloud-config
# manage_resolv_conf: true
#
# resolv_conf:
# nameservers: ['8.8.4.4', '8.8.8.8']
# searchdomains:
# - foo.example.com
# - bar.example.com
# domain: example.com
# options:
# rotate: true
# timeout: 1
#
from cloudinit.settings import PER_INSTANCE
from cloudinit import templater
from cloudinit import util
frequency = PER_INSTANCE
distros = ['fedora', 'rhel']
def generate_resolv_conf(cloud, log, params):
template_fn = cloud.get_template_filename('resolv.conf')
if not template_fn:
log.warn("No template found, not rendering /etc/resolv.conf")
return
flags = []
false_flags = []
if 'options' in params:
for key, val in params['options'].iteritems():
if type(val) == bool:
if val:
flags.append(key)
else:
false_flags.append(key)
for flag in flags + false_flags:
del params['options'][flag]
params['flags'] = flags
log.debug("Writing resolv.conf from template %s" % template_fn)
templater.render_to_file(template_fn, '/etc/resolv.conf', params)
def handle(name, cfg, _cloud, log, _args):
"""
Handler for resolv.conf
@param name: The module name "resolv-conf" from cloud.cfg
@param cfg: A nested dict containing the entire cloud config contents.
@param cloud: The L{CloudInit} object in use.
@param log: Pre-initialized Python logger object to use for logging.
@param args: Any module arguments from cloud.cfg
"""
if "manage_resolv_conf" not in cfg:
log.debug(("Skipping module named %s,"
" no 'manage_resolv_conf' key in configuration"), name)
return
if not util.get_cfg_option_bool(cfg, "manage_resolv_conf", False):
log.debug(("Skipping module named %s,"
" 'manage_resolv_conf' present but set to False"), name)
return
if not "resolv_conf" in cfg:
log.warn("manage_resolv_conf True but no parameters provided!")
generate_resolv_conf(_cloud, log, cfg["resolv_conf"])
return
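
The options flattening in generate_resolv_conf(), restated as a runnable fragment: boolean values are removed from the options dict, and those that were True become bare flags for the template:

params = {'options': {'rotate': True, 'timeout': 1, 'debug': False}}
flags = []
for key in list(params['options'].keys()):
    val = params['options'][key]
    if isinstance(val, bool):
        if val:
            flags.append(key)
        del params['options'][key]
params['flags'] = flags
assert params['options'] == {'timeout': 1}
assert flags == ['rotate']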


@@ -31,7 +31,7 @@ def handle(name, cfg, cloud, log, _args):
salt_cfg = cfg['salt_minion']
# Start by installing the salt package ...
cloud.distro.install_packages(["salt-minion"])
cloud.distro.install_packages(('salt-minion',))
# Ensure we can configure files at the right dir
config_dir = salt_cfg.get("config_dir", '/etc/salt')


@@ -126,7 +126,7 @@ def apply_credentials(keys, user, disable_root, disable_root_opts):
keys = set(keys)
if user:
ssh_util.setup_user_keys(keys, user, '')
ssh_util.setup_user_keys(keys, user)
if disable_root:
if not user:
@@ -135,4 +135,4 @@ def apply_credentials(keys, user, disable_root, disable_root_opts):
else:
key_prefix = ''
ssh_util.setup_user_keys(keys, 'root', key_prefix)
ssh_util.setup_user_keys(keys, 'root', options=key_prefix)


@@ -37,10 +37,11 @@ def handle(name, cfg, cloud, log, _args):
# Render from a template file
tpl_fn_name = cloud.get_template_filename("hosts.%s" %
(cloud.distro.name))
(cloud.distro.osfamily))
if not tpl_fn_name:
raise RuntimeError(("No hosts template could be"
" found for distro %s") % (cloud.distro.name))
" found for distro %s") %
(cloud.distro.osfamily))
templater.render_to_file(tpl_fn_name, '/etc/hosts',
{'hostname': hostname, 'fqdn': fqdn})


@@ -36,6 +36,11 @@ from cloudinit import util
from cloudinit.distros.parsers import hosts
OSFAMILIES = {
'debian': ['debian', 'ubuntu'],
'redhat': ['fedora', 'rhel']
}
LOG = logging.getLogger(__name__)
@@ -69,7 +74,7 @@ class Distro(object):
self._apply_hostname(hostname)
@abc.abstractmethod
def package_command(self, cmd, args=None):
def package_command(self, cmd, args=None, pkgs=None):
raise NotImplementedError()
@abc.abstractmethod
@@ -144,6 +149,16 @@ class Distro(object):
def _select_hostname(self, hostname, fqdn):
raise NotImplementedError()
@staticmethod
def expand_osfamily(family_list):
distros = []
for family in family_list:
if not family in OSFAMILIES:
raise ValueError("No distibutions found for osfamily %s"
% (family))
distros.extend(OSFAMILIES[family])
return distros
def update_hostname(self, hostname, fqdn, prev_hostname_fn):
applying_hostname = hostname
@@ -298,22 +313,26 @@ class Distro(object):
"no_create_home": "-M",
}
redact_fields = ['passwd']
# Now check the value and create the command
for option in kwargs:
value = kwargs[option]
if option in adduser_opts and value \
and isinstance(value, str):
adduser_cmd.extend([adduser_opts[option], value])
# Redact the password field from the logs
if option != "password":
x_adduser_cmd.extend([adduser_opts[option], value])
else:
# Redact certain fields from the logs
if option in redact_fields:
x_adduser_cmd.extend([adduser_opts[option], 'REDACTED'])
else:
x_adduser_cmd.extend([adduser_opts[option], value])
elif option in adduser_opts_flags and value:
adduser_cmd.append(adduser_opts_flags[option])
x_adduser_cmd.append(adduser_opts_flags[option])
# Redact certain fields from the logs
if option in redact_fields:
x_adduser_cmd.append('REDACTED')
else:
x_adduser_cmd.append(adduser_opts_flags[option])
# Default to creating home directory unless otherwise directed
# Also, we do not create home directories for system users.
@@ -335,10 +354,9 @@ class Distro(object):
if 'plain_text_passwd' in kwargs and kwargs['plain_text_passwd']:
self.set_passwd(name, kwargs['plain_text_passwd'])
# Default locking down the account.
if ('lock_passwd' not in kwargs and
('lock_passwd' in kwargs and kwargs['lock_passwd']) or
'system' not in kwargs):
# Default locking down the account. 'lock_passwd' defaults to True.
# lock account unless lock_password is False.
if kwargs.get('lock_passwd', True):
try:
util.subp(['passwd', '--lock', name])
except Exception as e:
@@ -353,7 +371,7 @@ class Distro(object):
# Import SSH keys
if 'ssh_authorized_keys' in kwargs:
keys = set(kwargs['ssh_authorized_keys']) or []
ssh_util.setup_user_keys(keys, name, key_prefix=None)
ssh_util.setup_user_keys(keys, name, options=None)
return True
@@ -703,41 +721,68 @@ def _normalize_users(u_cfg, def_user_cfg=None):
def normalize_users_groups(cfg, distro):
if not cfg:
cfg = {}
users = {}
groups = {}
if 'groups' in cfg:
groups = _normalize_groups(cfg['groups'])
# Handle the previous style of doing this...
old_user = None
# Handle the previous style of doing this where the first user
# overrides the concept of the default user if provided in the user: XYZ
# format.
old_user = {}
if 'user' in cfg and cfg['user']:
old_user = str(cfg['user'])
if not 'users' in cfg:
cfg['users'] = old_user
old_user = None
if 'users' in cfg:
default_user_config = None
try:
default_user_config = distro.get_default_user()
except NotImplementedError:
LOG.warn(("Distro has not implemented default user "
"access. No default user will be normalized."))
base_users = cfg['users']
if old_user:
if isinstance(base_users, (list)):
if len(base_users):
# The old user replaces user[0]
base_users[0] = {'name': old_user}
else:
# Just add it on at the end...
base_users.append({'name': old_user})
elif isinstance(base_users, (dict)):
if old_user not in base_users:
base_users[old_user] = True
elif isinstance(base_users, (str, basestring)):
# Just append it on to be re-parsed later
base_users += ",%s" % (old_user)
users = _normalize_users(base_users, default_user_config)
old_user = cfg['user']
# Translate it into the format that is more useful
# going forward
if isinstance(old_user, (basestring, str)):
old_user = {
'name': old_user,
}
if not isinstance(old_user, (dict)):
LOG.warn(("Format for 'user' key must be a string or "
"dictionary and not %s"), util.obj_name(old_user))
old_user = {}
# If no old user format, then assume the distro
# provides what the 'default' user maps to, but notice
# that if this is provided, we won't automatically inject
# a 'default' user into the users list, while if a old user
# format is provided we will.
distro_user_config = {}
try:
distro_user_config = distro.get_default_user()
except NotImplementedError:
LOG.warn(("Distro has not implemented default user "
"access. No distribution provided default user"
" will be normalized."))
# Merge the old user (which may just be an empty dict when not
# present with the distro provided default user configuration so
# that the old user style picks up all the distribution specific
# attributes (if any)
default_user_config = util.mergemanydict([old_user, distro_user_config])
base_users = cfg.get('users', [])
if not isinstance(base_users, (list, dict, str, basestring)):
LOG.warn(("Format for 'users' key must be a comma separated string"
" or a dictionary or a list and not %s"),
util.obj_name(base_users))
base_users = []
if old_user:
# Ensure that when user: is provided that this user
# always gets added (as the default user)
if isinstance(base_users, (list)):
# Just add it on at the end...
base_users.append({'name': 'default'})
elif isinstance(base_users, (dict)):
base_users['default'] = dict(base_users).get('default', True)
elif isinstance(base_users, (str, basestring)):
# Just append it on to be re-parsed later
base_users += ",default"
users = _normalize_users(base_users, default_user_config)
return (users, groups)
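
A worked example of the new handling, with simplified stand-ins for the real helpers (util.mergemanydict is approximated with dict.update): a legacy user: bob is promoted to a dict, picks up the distro default's attributes, and a 'default' entry is injected into users so bob is still created as the default user:

old_user = 'bob'                        # legacy config:  user: bob
if isinstance(old_user, str):
    old_user = {'name': old_user}

distro_user_config = {'name': 'ubuntu', 'groups': 'sudo',
                      'shell': '/bin/bash'}
default_user_config = dict(distro_user_config)
default_user_config.update(old_user)    # old user wins: name == 'bob'

base_users = [{'name': 'alice'}]
if old_user:
    base_users.append({'name': 'default'})  # maps to default_user_config

assert default_user_config['name'] == 'bob'
assert default_user_config['shell'] == '/bin/bash'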


@@ -33,6 +33,10 @@ from cloudinit.settings import PER_INSTANCE
LOG = logging.getLogger(__name__)
APT_GET_COMMAND = ('apt-get', '--option=Dpkg::Options::=--force-confold',
'--option=Dpkg::options::=--force-unsafe-io',
'--assume-yes', '--quiet')
class Distro(distros.Distro):
hostname_conf_fn = "/etc/hostname"
@@ -48,6 +52,7 @@ class Distro(distros.Distro):
# calls from repeatedly happening (when they
# should only happen say once per instance...)
self._runner = helpers.Runners(paths)
self.osfamily = 'debian'
def apply_locale(self, locale, out_fn=None):
if not out_fn:
@@ -64,7 +69,7 @@ class Distro(distros.Distro):
def install_packages(self, pkglist):
self.update_package_sources()
self.package_command('install', pkglist)
self.package_command('install', pkgs=pkglist)
def _write_network(self, settings):
util.write_file(self.network_conf_fn, settings)
@@ -141,15 +146,26 @@ class Distro(distros.Distro):
# This ensures that the correct tz will be used for the system
util.copy(tz_file, self.tz_local_fn)
def package_command(self, command, args=None):
def package_command(self, command, args=None, pkgs=None):
if pkgs is None:
pkgs = []
e = os.environ.copy()
# See: http://tiny.cc/kg91fw
# Or: http://tiny.cc/mh91fw
e['DEBIAN_FRONTEND'] = 'noninteractive'
cmd = ['apt-get', '--option', 'Dpkg::Options::=--force-confold',
'--assume-yes', '--quiet', command]
if args:
cmd = list(self.get_option("apt_get_command", APT_GET_COMMAND))
if args and isinstance(args, str):
cmd.append(args)
elif args and isinstance(args, list):
cmd.extend(args)
cmd.append(command)
pkglist = util.expand_package_list('%s=%s', pkgs)
cmd.extend(pkglist)
# Allow the output of this to flow outwards (ie not be captured)
util.subp(cmd, env=e, capture=False)
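
What the rewritten package_command() executes, sketched with the APT_GET_COMMAND default above and the '%s=%s' version format that util.expand_package_list applies for apt (the helper here is a simplified stand-in):

APT_GET_COMMAND = ('apt-get', '--option=Dpkg::Options::=--force-confold',
                   '--option=Dpkg::options::=--force-unsafe-io',
                   '--assume-yes', '--quiet')

def build_apt_cmd(command, pkgs):
    cmd = list(APT_GET_COMMAND)
    cmd.append(command)
    cmd.extend(['%s=%s' % p if isinstance(p, tuple) else p for p in pkgs])
    return cmd

assert build_apt_cmd('install', ['vim', ('puppet', '3.1')])[-2:] == \
    ['vim', 'puppet=3.1']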


@@ -60,9 +60,10 @@ class Distro(distros.Distro):
# calls from repeatedly happening (when they
# should only happen say once per instance...)
self._runner = helpers.Runners(paths)
self.osfamily = 'redhat'
def install_packages(self, pkglist):
self.package_command('install', pkglist)
self.package_command('install', pkgs=pkglist)
def _adjust_resolve(self, dns_servers, search_servers):
try:
@@ -207,7 +208,10 @@ class Distro(distros.Distro):
# This ensures that the correct tz will be used for the system
util.copy(tz_file, self.tz_local_fn)
def package_command(self, command, args=None):
def package_command(self, command, args=None, pkgs=None):
if pkgs is None:
pkgs = []
cmd = ['yum']
# If enabled, then yum will be tolerant of errors on the command line
# with regard to packages.
@@ -218,9 +222,17 @@ class Distro(distros.Distro):
# Determines whether or not yum prompts for confirmation
# of critical actions. We don't want to prompt...
cmd.append("-y")
cmd.append(command)
if args:
if args and isinstance(args, str):
cmd.append(args)
elif args and isinstance(args, list):
cmd.extend(args)
cmd.append(command)
pkglist = util.expand_package_list('%s-%s', pkgs)
cmd.extend(pkglist)
# Allow the output of this to flow outwards (ie not be captured)
util.subp(cmd, capture=False)


@@ -64,3 +64,15 @@ class UpstartJobPartHandler(handlers.Handler):
payload = util.dos2unix(payload)
path = os.path.join(self.upstart_dir, filename)
util.write_file(path, payload, 0644)
# FIXME LATER (LP: #1124384)
# a bug in upstart means that invoking reload-configuration
# at this stage in boot causes havoc. So, until that is fixed
# we will not do that. However, I'd like to be able to easily
# test to see if this bug is still present in an image with
# a newer upstart. So, a boot hook could easily write this file.
if os.path.exists("/run/cloud-init-upstart-reload"):
# if inotify support is not present in the root filesystem
# (overlayroot) then we need to tell upstart to re-read /etc
util.subp(["initctl", "reload-configuration"], capture=False)


@@ -3,10 +3,12 @@
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Cosmin Luta
# Copyright (C) 2012 Yahoo! Inc.
# Copyright (C) 2012 Gerard Dethier
#
# Author: Cosmin Luta <q4break@gmail.com>
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
# Author: Gerard Dethier <g.dethier@gmail.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
@@ -20,9 +22,6 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from socket import inet_ntoa
from struct import pack
import os
import time
@@ -31,6 +30,8 @@ from cloudinit import log as logging
from cloudinit import sources
from cloudinit import url_helper as uhelp
from cloudinit import util
from socket import inet_ntoa
from struct import pack
LOG = logging.getLogger(__name__)
@@ -40,24 +41,12 @@ class DataSourceCloudStack(sources.DataSource):
sources.DataSource.__init__(self, sys_cfg, distro, paths)
self.seed_dir = os.path.join(paths.seed_dir, 'cs')
# Cloudstack has its metadata/userdata URLs located at
# http://<default-gateway-ip>/latest/
# http://<virtual-router-ip>/latest/
self.api_ver = 'latest'
gw_addr = self.get_default_gateway()
if not gw_addr:
raise RuntimeError("No default gateway found!")
self.metadata_address = "http://%s/" % (gw_addr)
def get_default_gateway(self):
"""Returns the default gateway ip address in the dotted format."""
lines = util.load_file("/proc/net/route").splitlines()
for line in lines:
items = line.split("\t")
if items[1] == "00000000":
# Found the default route, get the gateway
gw = inet_ntoa(pack("<L", int(items[2], 16)))
LOG.debug("Found default route, gateway is %s", gw)
return gw
return None
vr_addr = get_vr_address()
if not vr_addr:
raise RuntimeError("No virtual router found!")
self.metadata_address = "http://%s/" % (vr_addr)
def _get_url_settings(self):
mcfg = self.ds_cfg
@@ -87,7 +76,7 @@ class DataSourceCloudStack(sources.DataSource):
(max_wait, timeout) = self._get_url_settings()
urls = [self.metadata_address]
urls = [self.metadata_address + "/latest/meta-data/instance-id"]
start_time = time.time()
url = uhelp.wait_for_url(urls=urls, max_wait=max_wait,
timeout=timeout, status_cb=LOG.warn)
@@ -132,6 +121,72 @@ class DataSourceCloudStack(sources.DataSource):
return self.metadata['availability-zone']
def get_default_gateway():
# Returns the default gateway ip address in the dotted format.
lines = util.load_file("/proc/net/route").splitlines()
for line in lines:
items = line.split("\t")
if items[1] == "00000000":
# Found the default route, get the gateway
gw = inet_ntoa(pack("<L", int(items[2], 16)))
LOG.debug("Found default route, gateway is %s", gw)
return gw
return None
def get_dhclient_d():
# find lease files directory
supported_dirs = ["/var/lib/dhclient", "/var/lib/dhcp"]
for d in supported_dirs:
if os.path.exists(d):
LOG.debug("Using %s lease directory", d)
return d
return None
def get_latest_lease():
# find latest lease file
lease_d = get_dhclient_d()
if not lease_d:
return None
lease_files = os.listdir(lease_d)
latest_mtime = -1
latest_file = None
for file_name in lease_files:
if file_name.endswith(".lease") or file_name.endswith(".leases"):
abs_path = os.path.join(lease_d, file_name)
mtime = os.path.getmtime(abs_path)
if mtime > latest_mtime:
latest_mtime = mtime
latest_file = abs_path
return latest_file
def get_vr_address():
# Get the address of the virtual router via dhcp leases
# see http://bit.ly/T76eKC for documentation on the virtual router.
# If no virtual router is detected, fallback on default gateway.
lease_file = get_latest_lease()
if not lease_file:
LOG.debug("No lease file found, using default gateway")
return get_default_gateway()
latest_address = None
with open(lease_file, "r") as fd:
for line in fd:
if "dhcp-server-identifier" in line:
words = line.strip(" ;\r\n").split(" ")
if len(words) > 2:
dhcp = words[2]
LOG.debug("Found DHCP identifier %s", dhcp)
latest_address = dhcp
if not latest_address:
# No virtual router found, fallback on default gateway
LOG.debug("No DHCP found, using default gateway")
return get_default_gateway()
return latest_address
# Used to match classes to dependencies
datasources = [
(DataSourceCloudStack, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
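
The lease parsing above keeps the last dhcp-server-identifier seen in the newest lease file, since dhclient appends leases in order. A standalone check of that loop against a fabricated lease file:

lease_text = """\
lease {
  option dhcp-server-identifier 10.1.1.1;
}
lease {
  option dhcp-server-identifier 10.1.1.2;
}
"""
latest_address = None
for line in lease_text.splitlines():
    if "dhcp-server-identifier" in line:
        words = line.strip(" ;\r\n").split(" ")
        if len(words) > 2:
            latest_address = words[2]
assert latest_address == "10.1.1.2"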


@@ -272,7 +272,7 @@ def find_candidate_devs():
combined = (by_label + [d for d in by_fstype if d not in by_label])
# We are looking for block device (sda, not sda1), ignore partitions
combined = [d for d in combined if d[-1] not in "0123456789"]
combined = [d for d in combined if not util.is_partition(d)]
return combined


@@ -76,37 +76,47 @@ class DataSourceNoCloud(sources.DataSource):
found.append("ds_config")
md["seedfrom"] = self.ds_cfg['seedfrom']
fslist = util.find_devs_with("TYPE=vfat")
fslist.extend(util.find_devs_with("TYPE=iso9660"))
# if ds_cfg has 'user-data' and 'meta-data'
if 'user-data' in self.ds_cfg and 'meta-data' in self.ds_cfg:
if self.ds_cfg['user-data']:
ud = self.ds_cfg['user-data']
if self.ds_cfg['meta-data'] is not False:
md = util.mergemanydict([md, self.ds_cfg['meta-data']])
if 'ds_config' not in found:
found.append("ds_config")
label_list = util.find_devs_with("LABEL=cidata")
devlist = list(set(fslist) & set(label_list))
devlist.sort(reverse=True)
label = self.ds_cfg.get('fs_label', "cidata")
if label is not None:
fslist = util.find_devs_with("TYPE=vfat")
fslist.extend(util.find_devs_with("TYPE=iso9660"))
for dev in devlist:
try:
LOG.debug("Attempting to use data from %s", dev)
label_list = util.find_devs_with("LABEL=%s" % label)
devlist = list(set(fslist) & set(label_list))
devlist.sort(reverse=True)
(newmd, newud) = util.mount_cb(dev, util.read_seeded)
md = util.mergemanydict([newmd, md])
ud = newud
for dev in devlist:
try:
LOG.debug("Attempting to use data from %s", dev)
# For seed from a device, the default mode is 'net'.
# that is more likely to be what is desired.
# If they want dsmode of local, then they must
# specify that.
if 'dsmode' not in md:
md['dsmode'] = "net"
(newmd, newud) = util.mount_cb(dev, util.read_seeded)
md = util.mergemanydict([newmd, md])
ud = newud
LOG.debug("Using data from %s", dev)
found.append(dev)
break
except OSError as e:
if e.errno != errno.ENOENT:
raise
except util.MountFailedError:
util.logexc(LOG, ("Failed to mount %s"
" when looking for data"), dev)
# For seed from a device, the default mode is 'net'.
# that is more likely to be what is desired. If they want
# dsmode of local, then they must specify that.
if 'dsmode' not in md:
md['dsmode'] = "net"
LOG.debug("Using data from %s", dev)
found.append(dev)
break
except OSError as e:
if e.errno != errno.ENOENT:
raise
except util.MountFailedError:
util.logexc(LOG, ("Failed to mount %s"
" when looking for data"), dev)
# There was no indication on kernel cmdline or data
# in the seeddir suggesting this handler should be used.
@@ -194,6 +204,8 @@ def parse_cmdline_data(ds_id, fill, cmdline=None):
# short2long mapping to save cmdline typing
s2l = {"h": "local-hostname", "i": "instance-id", "s": "seedfrom"}
for item in kvpairs:
if item == "":
continue
try:
(k, v) = item.split("=", 1)
except:
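
The guard added above, exercised end to end (assumption: kvpairs comes from splitting the kernel cmdline seed argument on ';', so a trailing ';' produces an empty item):

s2l = {"h": "local-hostname", "i": "instance-id", "s": "seedfrom"}
md = {}
for item in "h=myhost;i=i-1234;s=http://seed/;".split(";"):
    if item == "":
        continue                # the new guard: skip empty pairs
    (k, v) = item.split("=", 1)
    md[s2l.get(k, k)] = v
assert md["instance-id"] == "i-1234"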


@@ -19,9 +19,6 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from StringIO import StringIO
import csv
import os
import pwd
@@ -33,6 +30,15 @@ LOG = logging.getLogger(__name__)
# See: man sshd_config
DEF_SSHD_CFG = "/etc/ssh/sshd_config"
# taken from openssh source key.c/key_type_from_name
VALID_KEY_TYPES = ("rsa", "dsa", "ssh-rsa", "ssh-dss", "ecdsa",
"ssh-rsa-cert-v00@openssh.com", "ssh-dss-cert-v00@openssh.com",
"ssh-rsa-cert-v00@openssh.com", "ssh-dss-cert-v00@openssh.com",
"ssh-rsa-cert-v01@openssh.com", "ssh-dss-cert-v01@openssh.com",
"ecdsa-sha2-nistp256-cert-v01@openssh.com",
"ecdsa-sha2-nistp384-cert-v01@openssh.com",
"ecdsa-sha2-nistp521-cert-v01@openssh.com")
class AuthKeyLine(object):
def __init__(self, source, keytype=None, base64=None,
@@ -43,11 +49,8 @@ class AuthKeyLine(object):
self.keytype = keytype
self.source = source
def empty(self):
if (not self.base64 and
not self.comment and not self.keytype and not self.options):
return True
return False
def valid(self):
return (self.base64 and self.keytype)
def __str__(self):
toks = []
@@ -107,62 +110,47 @@ class AuthKeyLineParser(object):
i = i + 1
options = ent[0:i]
options_lst = []
# Now use a csv parser to pull the options
# out of the above string that we just found an endpoint for.
#
# No quoting so we don't mess up any of the quoting that
# is already there.
reader = csv.reader(StringIO(options), quoting=csv.QUOTE_NONE)
for row in reader:
for e in row:
# Only keep non-empty csv options
e = e.strip()
if e:
options_lst.append(e)
# Return the rest of the string in 'remain'
remain = ent[i:].lstrip()
return (options, remain)
# Now take the rest of the items before the string
# as long as there is room to do this...
toks = []
if i + 1 < len(ent):
rest = ent[i + 1:]
toks = rest.split(None, 2)
return (options_lst, toks)
def _form_components(self, src_line, toks, options=None):
components = {}
if len(toks) == 1:
components['base64'] = toks[0]
elif len(toks) == 2:
components['base64'] = toks[0]
components['comment'] = toks[1]
elif len(toks) == 3:
components['keytype'] = toks[0]
components['base64'] = toks[1]
components['comment'] = toks[2]
components['options'] = options
if not components:
return AuthKeyLine(src_line)
else:
return AuthKeyLine(src_line, **components)
def parse(self, src_line, def_opt=None):
def parse(self, src_line, options=None):
# modeled after openssh's auth2-pubkey.c:user_key_allowed2
line = src_line.rstrip("\r\n")
if line.startswith("#") or line.strip() == '':
return AuthKeyLine(src_line)
else:
ent = line.strip()
toks = ent.split(None, 3)
if len(toks) < 4:
return self._form_components(src_line, toks, def_opt)
else:
(options, toks) = self._extract_options(ent)
if options:
options = ",".join(options)
else:
options = def_opt
return self._form_components(src_line, toks, options)
def parse_ssh_key(ent):
# return keytype, key, [comment]
toks = ent.split(None, 2)
if len(toks) < 2:
raise TypeError("To few fields: %s" % len(toks))
if toks[0] not in VALID_KEY_TYPES:
raise TypeError("Invalid keytype %s" % toks[0])
# valid key type and 2 or 3 fields:
if len(toks) == 2:
# no comment in line
toks.append("")
return toks
ent = line.strip()
try:
(keytype, base64, comment) = parse_ssh_key(ent)
except TypeError:
(keyopts, remain) = self._extract_options(ent)
if options is None:
options = keyopts
try:
(keytype, base64, comment) = parse_ssh_key(remain)
except TypeError:
return AuthKeyLine(src_line)
return AuthKeyLine(src_line, keytype=keytype, base64=base64,
comment=comment, options=options)
def parse_authorized_keys(fname):
@@ -186,11 +174,11 @@ def update_authorized_keys(old_entries, keys):
for i in range(0, len(old_entries)):
ent = old_entries[i]
if ent.empty() or not ent.base64:
if not ent.valid():
continue
# Replace those with the same base64
for k in keys:
if k.empty() or not k.base64:
if not k.valid():
continue
if k.base64 == ent.base64:
# Replace it with our better one
@@ -249,7 +237,7 @@ def extract_authorized_keys(username):
return (auth_key_fn, parse_authorized_keys(auth_key_fn))
def setup_user_keys(keys, username, key_prefix):
def setup_user_keys(keys, username, options=None):
# Make sure the users .ssh dir is setup accordingly
(ssh_dir, pwent) = users_ssh_info(username)
if not os.path.isdir(ssh_dir):
@@ -260,7 +248,7 @@ def setup_user_keys(keys, username, key_prefix):
parser = AuthKeyLineParser()
key_entries = []
for k in keys:
key_entries.append(parser.parse(str(k), def_opt=key_prefix))
key_entries.append(parser.parse(str(k), options=options))
# Extract the old and make the new
(auth_key_fn, auth_key_entries) = extract_authorized_keys(username)
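
Hedged usage of the reworked parser (assumes the cloud-init tree is on sys.path and that AuthKeyLine keeps options/comment attributes; the base64 blob is shortened for illustration). A line that does not start with a valid key type is re-parsed after option extraction:

from cloudinit.ssh_util import AuthKeyLineParser

parser = AuthKeyLineParser()
key = parser.parse('no-port-forwarding ssh-rsa AAAAB3NzaC1yc2E= me@host')
print(key.keytype)   # 'ssh-rsa'
print(key.options)   # 'no-port-forwarding'
print(key.comment)   # 'me@host'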


@@ -64,23 +64,29 @@ class Init(object):
# Changed only when a fetch occurs
self.datasource = NULL_DATA_SOURCE
def _reset(self, ds=False):
def _reset(self, reset_ds=False):
# Recreated on access
self._cfg = None
self._paths = None
self._distro = None
if ds:
if reset_ds:
self.datasource = NULL_DATA_SOURCE
@property
def distro(self):
if not self._distro:
# Try to find the right class to use
scfg = self._extract_cfg('system')
name = scfg.pop('distro', 'ubuntu')
cls = distros.fetch(name)
LOG.debug("Using distro class %s", cls)
self._distro = cls(name, scfg, self.paths)
system_config = self._extract_cfg('system')
distro_name = system_config.pop('distro', 'ubuntu')
distro_cls = distros.fetch(distro_name)
LOG.debug("Using distro class %s", distro_cls)
self._distro = distro_cls(distro_name, system_config, self.paths)
# If we have an active datasource we need to adjust
# said datasource and move its distro/system config
# from whatever it was to a new set...
if self.datasource is not NULL_DATA_SOURCE:
self.datasource.distro = self._distro
self.datasource.sys_cfg = system_config
return self._distro
@property
@@ -158,27 +164,12 @@ class Init(object):
self._cfg = self._read_cfg(extra_fns)
# LOG.debug("Loaded 'init' config %s", self._cfg)
def _read_base_cfg(self):
base_cfgs = []
default_cfg = util.get_builtin_cfg()
kern_contents = util.read_cc_from_cmdline()
# Kernel/cmdline parameters override system config
if kern_contents:
base_cfgs.append(util.load_yaml(kern_contents, default={}))
# Anything in your conf.d location??
# or the 'default' cloud.cfg location???
base_cfgs.append(util.read_conf_with_confd(CLOUD_CONFIG))
# And finally the default gets to play
if default_cfg:
base_cfgs.append(default_cfg)
return util.mergemanydict(base_cfgs)
def _read_cfg(self, extra_fns):
no_cfg_paths = helpers.Paths({}, self.datasource)
merger = helpers.ConfigMerger(paths=no_cfg_paths,
datasource=self.datasource,
additional_fns=extra_fns,
base_cfg=self._read_base_cfg())
base_cfg=fetch_base_config())
return merger.cfg
def _restore_from_cache(self):
@@ -539,11 +530,16 @@ class Modules(object):
freq = mod.frequency
if not freq in FREQUENCIES:
freq = PER_INSTANCE
worked_distros = mod.distros
worked_distros = set(mod.distros)
worked_distros.update(
distros.Distro.expand_osfamily(mod.osfamilies))
if (worked_distros and d_name not in worked_distros):
LOG.warn(("Module %s is verified on %s distros"
" but not on %s distro. It may or may not work"
" correctly."), name, worked_distros, d_name)
" correctly."), name, list(worked_distros),
d_name)
# Use the configs logger and not our own
# TODO(harlowja): possibly check the module
# for having a LOG attr and just give it back
@@ -576,3 +572,23 @@ class Modules(object):
raw_mods = self._read_modules(section_name)
mostly_mods = self._fixup_modules(raw_mods)
return self._run_modules(mostly_mods)
def fetch_base_config():
base_cfgs = []
default_cfg = util.get_builtin_cfg()
kern_contents = util.read_cc_from_cmdline()
# Kernel/cmdline parameters override system config
if kern_contents:
base_cfgs.append(util.load_yaml(kern_contents, default={}))
# Anything in your conf.d location??
# or the 'default' cloud.cfg location???
base_cfgs.append(util.read_conf_with_confd(CLOUD_CONFIG))
# And finally the default gets to play
if default_cfg:
base_cfgs.append(default_cfg)
return util.mergemanydict(base_cfgs)


@ -404,10 +404,9 @@ def get_cfg_option_list(yobj, key, default=None):
return []
val = yobj[key]
if isinstance(val, (list)):
# Should we ensure they are all strings??
cval = [str(v) for v in val]
cval = [v for v in val]
return cval
if not isinstance(val, (str, basestring)):
if not isinstance(val, (basestring)):
val = str(val)
return [val]
@ -1519,7 +1518,7 @@ def get_proc_env(pid):
fn = os.path.join("/proc/", str(pid), "environ")
try:
contents = load_file(fn)
toks = contents.split("\0")
toks = contents.split("\x00")
for tok in toks:
if tok == "":
continue
@ -1541,3 +1540,120 @@ def keyval_str_to_dict(kvstring):
val = True
ret[key] = val
return ret
def is_partition(device):
if device.startswith("/dev/"):
device = device[5:]
return os.path.isfile("/sys/class/block/%s/partition" % device)
def expand_package_list(version_fmt, pkgs):
# we will accept tuples, lists of tuples, or just plain lists
if not isinstance(pkgs, list):
pkgs = [pkgs]
pkglist = []
for pkg in pkgs:
if isinstance(pkg, basestring):
pkglist.append(pkg)
continue
if isinstance(pkg, (tuple, list)):
if len(pkg) < 1 or len(pkg) > 2:
raise RuntimeError("Invalid package & version tuple.")
if len(pkg) == 2 and pkg[1]:
pkglist.append(version_fmt % tuple(pkg))
continue
pkglist.append(pkg[0])
else:
raise RuntimeError("Invalid package type.")
return pkglist
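# Usage sketch, assuming this function lands in cloudinit/util.py as
# above; the '%s=%s' version format string is only an assumption for
# illustration (callers pass their distro's own format string):
from cloudinit import util
pkgs = ['pwgen', ('libpython2.7', '2.7.3-0ubuntu3.1'), ['pastebinit']]
print(util.expand_package_list('%s=%s', pkgs))
# -> ['pwgen', 'libpython2.7=2.7.3-0ubuntu3.1', 'pastebinit']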
def get_mount_info(path, log=LOG):
# Use /proc/$$/mountinfo to find the device where path is mounted.
# This is done because with a btrfs filesystem using os.stat(path)
# does not return the ID of the device.
#
# Here, / has a device of 18 (decimal).
#
# $ stat /
# File: '/'
# Size: 234 Blocks: 0 IO Block: 4096 directory
# Device: 12h/18d Inode: 256 Links: 1
# Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
# Access: 2013-01-13 07:31:04.358011255 +0000
# Modify: 2013-01-13 18:48:25.930011255 +0000
# Change: 2013-01-13 18:48:25.930011255 +0000
# Birth: -
#
# Find where / is mounted:
#
# $ mount | grep ' / '
# /dev/vda1 on / type btrfs (rw,subvol=@,compress=lzo)
#
# And the device ID for /dev/vda1 is not 18:
#
# $ ls -l /dev/vda1
# brw-rw---- 1 root disk 253, 1 Jan 13 08:29 /dev/vda1
#
# So use /proc/$$/mountinfo to find the device underlying the
# input path.
path_elements = [e for e in path.split('/') if e]
devpth = None
fs_type = None
match_mount_point = None
match_mount_point_elements = None
mountinfo_path = '/proc/%s/mountinfo' % os.getpid()
for line in load_file(mountinfo_path).splitlines():
parts = line.split()
mount_point = parts[4]
mount_point_elements = [e for e in mount_point.split('/') if e]
# Ignore mounts deeper than the path in question.
if len(mount_point_elements) > len(path_elements):
continue
# Ignore mounts where the common path is not the same.
l = min(len(mount_point_elements), len(path_elements))
if mount_point_elements[0:l] != path_elements[0:l]:
continue
# Ignore mount points higher than an already seen mount
# point.
if (match_mount_point_elements is not None and
len(match_mount_point_elements) > len(mount_point_elements)):
continue
# Find the '-' which terminates a list of optional columns to
# find the filesystem type and the path to the device. See
# man 5 proc for the format of this file.
try:
i = parts.index('-')
except ValueError:
log.debug("Did not find column named '-' in %s",
mountinfo_path)
return None
# Get the path to the device.
try:
fs_type = parts[i + 1]
devpth = parts[i + 2]
except IndexError:
log.debug("Too few columns in %s after '-' column", mountinfo_path)
return None
match_mount_point = mount_point
match_mount_point_elements = mount_point_elements
if devpth and fs_type and match_mount_point:
return (devpth, fs_type, match_mount_point)
else:
return None
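# Usage sketch, assuming the function above is importable from
# cloudinit.util; on a btrfs root this reports the real backing device
# where os.stat('/').st_dev would be misleading:
from cloudinit import util
info = util.get_mount_info('/')
if info:
    devpth, fs_type, mount_point = info
    print("/ is %s on %s, mounted at %s" % (fs_type, devpth, mount_point))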


@ -26,6 +26,7 @@ cloud_init_modules:
- migrator
- bootcmd
- write-files
- growpart
- resizefs
- set_hostname
- update_hostname


@ -0,0 +1,34 @@
#cloud-config
# Add apt repositories
#
# Default: auto select based on cloud metadata
# in ec2, the default is <region>.archive.ubuntu.com
# apt_mirror:
# use the provided mirror
# apt_mirror_search:
# search the list for the first mirror.
# this is currently very limited, only verifying that
# the mirror is dns resolvable or an IP address
#
# if neither apt_mirror nor apt_mirror_search is set (the default)
# then use the mirror provided by the DataSource found.
# In EC2, that means using <region>.ec2.archive.ubuntu.com
#
# if no mirror is provided by the DataSource, and 'apt_mirror_search_dns' is
# true, then search for dns names '<distro>-mirror' in each of
# - fqdn of this host per cloud metadata
# - localdomain
# - no domain (which would search domains listed in /etc/resolv.conf)
# If there is a dns entry for <distro>-mirror, then it is assumed that there
# is a distro mirror at http://<distro>-mirror.<domain>/<distro>
#
# That gives the cloud provider the opportunity to set mirrors of a distro
# up and expose them only by creating dns entries.
#
# if none of that is found, then the default distro mirror is used
apt_mirror: http://us.archive.ubuntu.com/ubuntu/
apt_mirror_search:
- http://local-mirror.mydomain
- http://archive.ubuntu.com
apt_mirror_search_dns: False


@ -0,0 +1,15 @@
#cloud-config
# boot commands
# default: none
# this is very similar to runcmd, but commands run very early
# in the boot process, only slightly after a 'boothook' would run.
# bootcmd should really only be used for things that could not be
# done later in the boot process. bootcmd is very much like
# boothook, but with a possibly more friendly syntax.
# * bootcmd will run on every boot
# * the INSTANCE_ID variable will be set to the current instance id.
# * you can use the 'cloud-init-per' command to help only run once
bootcmd:
- echo 192.168.1.130 us.archive.ubuntu.com > /etc/hosts
- [ cloud-init-per, once, mymkfs, mkfs, /dev/vdb ]


@ -47,9 +47,13 @@ apt_sources:
chef:
# Valid values are 'gems' and 'packages'
# Valid values are 'gems' and 'packages' and 'omnibus'
install_type: "packages"
# Boolean: run 'install_type' code even if chef-client
# appears already installed.
force_install: false
# Chef settings
server_url: "https://chef.yourorg.com:4000"
@ -80,6 +84,9 @@ chef:
maxclients: 100
keepalive: "off"
# if install_type is 'omnibus', change the url to download
omnibus_url: "https://www.opscode.com/chef/install.sh"
# Capture all subprocess output into a logfile
# Useful for troubleshooting cloud-init issues


@ -31,3 +31,14 @@ datasource:
# <url>/user-data and <url>/meta-data
# seedfrom: http://my.example.com/i-abcde
seedfrom: None
# fs_label: the label on filesystems to be searched for NoCloud source
fs_label: cidata
# these are optional, but allow you to basically provide a datasource
# right here
user-data: |
# This is the user-data verbatim
meta-data:
instance-id: i-87018aed
local-hostname: myhost.internal


@ -0,0 +1,7 @@
#cloud-config
# final_message
# default: cloud-init boot finished at $TIMESTAMP. Up $UPTIME seconds
# this message is written by cloud-final when the system has finished
# its first boot
final_message: "The system is finally up, after $UPTIME seconds"


@ -0,0 +1,24 @@
#cloud-config
#
# growpart entry is a dict; if it is not present at all
# in config, then the default is used ({'mode': 'auto', 'devices': ['/']})
#
# mode:
# values:
# * auto: use any option possible (growpart or parted)
#   if none are available, do not warn, but log at debug level.
# * growpart: use growpart to grow partitions
# if growpart is not available, this is an error.
# * parted: use parted (parted resizepart) to resize partitions
# if parted is not available, this is an error.
# * off, false
#
# devices:
# a list of things to resize.
# items can be filesystem paths or devices (in /dev)
# examples:
# devices: [/, /dev/vdb1]
#
growpart:
mode: auto
devices: ['/']


@ -0,0 +1,15 @@
#cloud-config
# Install additional packages on first boot
#
# Default: none
#
# if packages are specified, then apt_update will be set to true
#
# packages may be supplied as a single package name or as a list
# with the format [<package>, <version>] wherein the specific
# package version will be installed.
packages:
- pwgen
- pastebinit
- [libpython2.7, 2.7.3-0ubuntu3.1]


@ -0,0 +1,39 @@
#cloud-config
# set up mount points
# 'mounts' contains a list of lists
# each inner list is an entry for an /etc/fstab line
# ie : [ fs_spec, fs_file, fs_vfstype, fs_mntops, fs_freq, fs_passno ]
#
# default:
# mounts:
# - [ ephemeral0, /mnt ]
# - [ swap, none, swap, sw, 0, 0 ]
#
# in order to remove a previously listed mount (ie, one from defaults)
# list only the fs_spec. For example, to override the default, of
# mounting swap:
# - [ swap ]
# or
# - [ swap, null ]
#
# - if a device does not exist at the time, an entry will still be
# written to /etc/fstab.
# - '/dev' can be omitted for device names that begin with: xvd, sd, hd, vd
# - if an entry does not have all 6 fields, they will be filled in
# with values from 'mount_default_fields' below.
#
# Note that you should set 'nobootwait' (see man fstab) for volumes that may
# not be attached at instance boot (or reboot)
#
mounts:
- [ ephemeral0, /mnt, auto, "defaults,noexec" ]
- [ sdc, /opt/data ]
- [ xvdh, /opt/data, "auto", "defaults,nobootwait", "0", "0" ]
- [ dd, /dev/zero ]
# mount_default_fields
# These values are used to fill in any entries in 'mounts' that are not
# complete. This must be an array, and must have 6 fields.
mount_default_fields: [ None, None, "auto", "defaults,nobootwait", "0", "2" ]


@ -0,0 +1,14 @@
#cloud-config
# phone_home: if this dictionary is present, then the phone_home
# cloud-config module will post specified data back to the given
# url
# default: none
# phone_home:
# url: http://my.foo.bar/$INSTANCE/
# post: all
# tries: 10
#
phone_home:
url: http://my.example.com/$INSTANCE_ID/
post: [ pub_key_dsa, pub_key_rsa, pub_key_ecdsa, instance_id ]


@ -0,0 +1,22 @@
#cloud-config
## poweroff or reboot system after finished
# default: none
#
# power_state can be used to make the system shutdown, reboot or
# halt after boot is finished. This same thing can be achieved by
# user-data scripts or by runcmd by simply invoking 'shutdown'.
#
# Doing it this way ensures that cloud-init is entirely finished with
# modules that would be executed, and avoids any error/log messages
# that may go to the console as a result of system services like
# syslog being taken down while cloud-init is running.
#
# delay: form accepted by shutdown. default is 'now'. other format
# accepted is +m (m in minutes)
# mode: required. must be one of 'poweroff', 'halt', 'reboot'
# message: provided as the message argument to 'shutdown'. default is none.
power_state:
delay: 30
mode: poweroff
message: Bye Bye


@ -0,0 +1,20 @@
#cloud-config
#
# This is an example file to automatically configure resolv.conf when the
# instance boots for the first time.
#
# Ensure that your yaml is valid and pass this as user-data when starting
# the instance. Also be sure that your cloud.cfg file includes this
# configuration module in the appropriate section.
#
manage-resolv-conf: true
resolv_conf:
nameservers: ['8.8.4.4', '8.8.8.8']
searchdomains:
- foo.example.com
- bar.example.com
domain: example.com
options:
rotate: true
timeout: 1


@ -0,0 +1,21 @@
#cloud-config
# run commands
# default: none
# runcmd contains a list of either lists or strings
# each item will be executed in order at rc.local like level with
# output to the console
# - if the item is a list, the items will be properly executed as if
# passed to execve(3) (with the first arg as the command).
# - if the item is a string, it will be simply written to the file and
# will be interpreted by 'sh'
#
# Note that the list has to be proper yaml, so you have to escape
# any characters yaml would eat (':' can be problematic)
runcmd:
- [ ls, -l, / ]
- [ sh, -xc, "echo $(date) ': hello world!'" ]
- [ sh, -c, echo "=========hello world'=========" ]
- ls -l /root
- [ wget, "http://slashdot.org", -O, /tmp/index.html ]


@ -0,0 +1,46 @@
#cloud-config
# add each entry to ~/.ssh/authorized_keys for the configured user or the
# first user defined in the user definition directive.
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEA3FSyQwBI6Z+nCSjUUk8EEAnnkhXlukKoUPND/RRClWz2s5TCzIkd3Ou5+Cyz71X0XmazM3l5WgeErvtIwQMyT1KjNoMhoJMrJnWqQPOt5Q8zWd9qG7PBl9+eiH5qV7NZ mykey@host
- ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZdQueUq5ozemNSj8T7enqKHOEaFoU2VoPgGEWC9RyzSQVeyD6s7APMcE82EtmW4skVEgEGSbDc1pvxzxtchBj78hJP6Cf5TCMFSXw+Fz5rF1dR23QDbN1mkHs7adr8GW4kSWqU7Q7NDwfIrJJtO7Hi42GyXtvEONHbiRPOe8stqUly7MvUoN+5kfjBM8Qqpfl2+FNhTYWpMfYdPUnE7u536WqzFmsaqJctz3gBxH9Ex7dFtrxR4qiqEr9Qtlu3xGn7Bw07/+i1D+ey3ONkZLN+LQ714cgj8fRS4Hj29SCmXp5Kt5/82cD/VN3NtHw== smoser@brickies
# Send pre-generated ssh private keys to the server
# If these are present, they will be written to /etc/ssh and
# new random keys will not be generated
# in addition to 'rsa' and 'dsa' as shown below, 'ecdsa' is also supported
ssh_keys:
rsa_private: |
-----BEGIN RSA PRIVATE KEY-----
MIIBxwIBAAJhAKD0YSHy73nUgysO13XsJmd4fHiFyQ+00R7VVu2iV9Qcon2LZS/x
1cydPZ4pQpfjEha6WxZ6o8ci/Ea/w0n+0HGPwaxlEG2Z9inNtj3pgFrYcRztfECb
1j6HCibZbAzYtwIBIwJgO8h72WjcmvcpZ8OvHSvTwAguO2TkR6mPgHsgSaKy6GJo
PUJnaZRWuba/HX0KGyhz19nPzLpzG5f0fYahlMJAyc13FV7K6kMBPXTRR6FxgHEg
L0MPC7cdqAwOVNcPY6A7AjEA1bNaIjOzFN2sfZX0j7OMhQuc4zP7r80zaGc5oy6W
p58hRAncFKEvnEq2CeL3vtuZAjEAwNBHpbNsBYTRPCHM7rZuG/iBtwp8Rxhc9I5w
ixvzMgi+HpGLWzUIBS+P/XhekIjPAjA285rVmEP+DR255Ls65QbgYhJmTzIXQ2T9
luLvcmFBC6l35Uc4gTgg4ALsmXLn71MCMGMpSWspEvuGInayTCL+vEjmNBT+FAdO
W7D4zCpI43jRS9U06JVOeSc9CDk2lwiA3wIwCTB/6uc8Cq85D9YqpM10FuHjKpnP
REPPOyrAspdeOAV+6VKRavstea7+2DZmSUgE
-----END RSA PRIVATE KEY-----
rsa_public: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEAoPRhIfLvedSDKw7XdewmZ3h8eIXJD7TRHtVW7aJX1ByifYtlL/HVzJ09nilCl+MSFrpbFnqjxyL8Rr/DSf7QcY/BrGUQbZn2Kc22PemAWthxHO18QJvWPocKJtlsDNi3 smoser@localhost
dsa_private: |
-----BEGIN DSA PRIVATE KEY-----
MIIBuwIBAAKBgQDP2HLu7pTExL89USyM0264RCyWX/CMLmukxX0Jdbm29ax8FBJT
pLrO8TIXVY5rPAJm1dTHnpuyJhOvU9G7M8tPUABtzSJh4GVSHlwaCfycwcpLv9TX
DgWIpSj+6EiHCyaRlB1/CBp9RiaB+10QcFbm+lapuET+/Au6vSDp9IRtlQIVAIMR
8KucvUYbOEI+yv+5LW9u3z/BAoGBAI0q6JP+JvJmwZFaeCMMVxXUbqiSko/P1lsa
LNNBHZ5/8MOUIm8rB2FC6ziidfueJpqTMqeQmSAlEBCwnwreUnGfRrKoJpyPNENY
d15MG6N5J+z81sEcHFeprryZ+D3Ge9VjPq3Tf3NhKKwCDQ0240aPezbnjPeFm4mH
bYxxcZ9GAoGAXmLIFSQgiAPu459rCKxT46tHJtM0QfnNiEnQLbFluefZ/yiI4DI3
8UzTCOXLhUA7ybmZha+D/csj15Y9/BNFuO7unzVhikCQV9DTeXX46pG4s1o23JKC
/QaYWNMZ7kTRv+wWow9MhGiVdML4ZN4XnifuO5krqAybngIy66PMEoQCFEIsKKWv
99iziAH0KBMVbxy03Trz
-----END DSA PRIVATE KEY-----
dsa_public: ssh-dss AAAAB3NzaC1kc3MAAACBAM/Ycu7ulMTEvz1RLIzTbrhELJZf8Iwua6TFfQl1ubb1rHwUElOkus7xMhdVjms8AmbV1Meem7ImE69T0bszy09QAG3NImHgZVIeXBoJ/JzByku/1NcOBYilKP7oSIcLJpGUHX8IGn1GJoH7XRBwVub6Vqm4RP78C7q9IOn0hG2VAAAAFQCDEfCrnL1GGzhCPsr/uS1vbt8/wQAAAIEAjSrok/4m8mbBkVp4IwxXFdRuqJKSj8/WWxos00Ednn/ww5QibysHYULrOKJ1+54mmpMyp5CZICUQELCfCt5ScZ9GsqgmnI80Q1h3Xkwbo3kn7PzWwRwcV6muvJn4PcZ71WM+rdN/c2EorAINDTbjRo97NueM94WbiYdtjHFxn0YAAACAXmLIFSQgiAPu459rCKxT46tHJtM0QfnNiEnQLbFluefZ/yiI4DI38UzTCOXLhUA7ybmZha+D/csj15Y9/BNFuO7unzVhikCQV9DTeXX46pG4s1o23JKC/QaYWNMZ7kTRv+wWow9MhGiVdML4ZN4XnifuO5krqAybngIy66PMEoQ= smoser@localhost


@ -0,0 +1,7 @@
#cloud-config
# Update apt database on first boot
# (ie run apt-get update)
#
# Default: true
# Aliases: apt_update
package_update: false


@ -0,0 +1,8 @@
#cloud-config
# Upgrade the instance on first boot
# (ie run apt-get upgrade)
#
# Default: false
# Aliases: apt_upgrade
package_upgrade: true


@ -12,7 +12,7 @@ write_files:
content: CiMgVGhpcyBmaWxlIGNvbnRyb2xzIHRoZSBzdGF0ZSBvZiBTRUxpbnV4...
owner: root:root
path: /etc/sysconfig/selinux
perms: '0644'
permissions: '0644'
- content: |
# My new /etc/sysconfig/samba file
@ -24,10 +24,10 @@ write_files:
AAAAAAAAAwAAAAQAAAAAAgAAAAAAAAACQAAAAAAAAAJAAAAAAAAcAAAAAAAAABwAAAAAAAAAAQAA
....
path: /bin/arch
perms: '0555'
permissions: '0555'
- encoding: gzip
content: !!binary |
H4sIAIDb/U8C/1NW1E/KzNMvzuBKTc7IV8hIzcnJVyjPL8pJ4QIA6N+MVxsAAAA=
path: /usr/bin/hello
perms: '0755'
permissions: '0755'

doc/rtd/conf.py (new file, 74 lines)

@ -0,0 +1,74 @@
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../../'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))
sys.path.insert(0, os.path.abspath('.'))
from cloudinit import version
# Suppress warnings for docs that aren't used yet
#unused_docs = [
#]
# General information about the project.
project = 'Cloud-Init'
# -- General configuration ----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.intersphinx',
]
intersphinx_mapping = {
'sphinx': ('http://sphinx.pocoo.org', None)
}
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
version = version.version_string()
release = version
# Set the default Pygments syntax
highlight_language = 'python'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
"bodyfont": "Arial, sans-serif",
"headfont": "Arial, sans-serif"
}
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = 'static/logo.png'

doc/rtd/index.rst (new file, 30 lines)

@ -0,0 +1,30 @@
.. _index:
=====================
Documentation
=====================
.. rubric:: Everything about cloud-init, a set of **python** scripts and utilities to make your cloud images be all they can be!
Summary
-----------------
`Cloud-init`_ is the *de facto* multi-distribution package that handles early initialization of a cloud instance.
----
.. toctree::
:maxdepth: 2
topics/capabilities
topics/availability
topics/format
topics/dir_layout
topics/examples
topics/datasources
topics/modules
topics/moreinfo
topics/hacking
.. _Cloud-init: https://launchpad.net/cloud-init

doc/rtd/static/logo.png (new binary file, 16 KiB; not shown)

doc/rtd/static/logo.svg (new executable file, 14356 lines, 719 KiB; diff too large to show)


@ -0,0 +1,20 @@
============
Availability
============
It is currently installed in the `Ubuntu Cloud Images`_ and also in the official `Ubuntu`_ images available on EC2.
Versions for other systems can be (or have been) created for the following distributions:
- Ubuntu
- Fedora
- Debian
- RHEL
- CentOS
- *and more...*
So ask your distribution provider where you can obtain an image with it built-in if one is not already available ☺
.. _Ubuntu Cloud Images: http://cloud-images.ubuntu.com/
.. _Ubuntu: http://www.ubuntu.com/


@ -0,0 +1,24 @@
=====================
Capabilities
=====================
- Setting a default locale
- Setting an instance hostname
- Generating instance ssh private keys
- Adding ssh keys to a user's ``.ssh/authorized_keys`` so they can log in
- Setting up ephemeral mount points
User configurability
--------------------
`Cloud-init`_ 's behavior can be configured via user-data.
User-data can be given by the user at instance launch time.
This is done via the ``--user-data`` or ``--user-data-file`` argument to ec2-run-instances for example.
* Check your local client's documentation for how to provide a `user-data` string
or `user-data` file for usage by cloud-init on instance creation.
.. _Cloud-init: https://launchpad.net/cloud-init


@ -0,0 +1,192 @@
.. _datasources:
===========
Datasources
===========
---------------------
What is a datasource?
---------------------
Datasources are sources of configuration data for cloud-init that typically come
from the user (aka userdata) or from the stack that created the configuration
drive (aka metadata). Typical userdata includes files, yaml, and shell scripts,
while typical metadata includes the server name, instance id, display name and other
cloud specific details. Since there are multiple ways to provide this data (each cloud
solution seems to prefer its own way), an abstract datasource class was created
internally to give cloud-init a single way of accessing the data the different
cloud systems provide, through the typical usage of subclasses.
The current interface that a datasource object must provide is the following:
.. sourcecode:: python
# returns a mime multipart message that contains
# all the various fully-expanded components that
# were found from processing the raw userdata string
# - when filtering only the mime messages targeting
# this instance id will be returned (or messages with
# no instance id)
def get_userdata(self, apply_filter=False)
# returns the raw userdata string (or none)
def get_userdata_raw(self)
# returns an integer (or none) which can be used to identify
# this instance in a group of instances which are typically
# created from a single command, thus allowing programmatic
# filtering on this launch index (or other selective actions)
@property
def launch_index(self)
# the datasource's config_obj is a cloud-config formatted
# object that came to it from ways other than cloud-config
# because cloud-config content would be handled elsewhere
def get_config_obj(self)
#returns a list of public ssh keys
def get_public_ssh_keys(self)
# translates a device 'short' name into the actual physical device
# fully qualified name (or none if said physical device is not attached
# or does not exist)
def device_name_to_device(self, name)
# gets the locale string this instance should be applying
# which is typically used to adjust the instance's locale settings files
def get_locale(self)
@property
def availability_zone(self)
# gets the instance id that was assigned to this instance by the
# cloud provider; when said instance id does not exist in the backing
# metadata this will return 'iid-datasource'
def get_instance_id(self)
# gets the fully qualified domain name that this host should be using
# when configuring network or hostname related settings, typically
# assigned either by the cloud provider or the user creating the vm
def get_hostname(self, fqdn=False)
def get_package_mirror_info(self)
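As an illustration, a minimal datasource subclass might look like the
following sketch (hypothetical; it assumes the ``sources.DataSource``
base class from the cloudinit tree and glosses over registration and
dependency declarations):

.. sourcecode:: python

    from cloudinit import sources

    class DataSourceStatic(sources.DataSource):
        # A hypothetical datasource serving fixed data; a real one
        # would probe its platform in get_data() and return False
        # when that platform is not detected.
        def get_data(self):
            self.metadata = {'instance-id': 'iid-static01',
                             'local-hostname': 'statichost'}
            self.userdata_raw = ''
            return True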
---------------------------
EC2
---------------------------
The EC2 datasource is the oldest and most widely used datasource that cloud-init
supports. This datasource interacts with a *magic* ip that is provided to the
instance by the cloud provider. Typically this ip is ``169.254.169.254``; an http
server at that address serves the instance its userdata and its metadata.
Metadata is accessible via the following URL:
::
GET http://169.254.169.254/2009-04-04/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-id
instance-type
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
Userdata is accessible via the following URL:
::
GET http://169.254.169.254/2009-04-04/user-data
1234,fred,reboot,true | 4512,jimbo, | 173,,,
Note that there are multiple versions of this data provided; cloud-init
by default uses **2009-04-04**, but newer versions can be supported with
relative ease (newer versions have more data exposed, while maintaining
backward compatibility with the previous versions).
To see which versions are supported by your cloud provider use the following URL:
::
GET http://169.254.169.254/
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
...
latest
**Note:** internally cloud-init uses the `boto`_ library to fetch the instance
userdata and instance metadata; feel free to check that library out, as it
provides much other useful EC2 functionality.
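For quick experimentation from inside an instance, the same data can also be
fetched with nothing but the python standard library (a sketch; cloud-init
itself goes through `boto`_ as noted above):

.. sourcecode:: python

    import urllib2

    base = 'http://169.254.169.254/2009-04-04'
    # each metadata key is just a path under meta-data/
    instance_id = urllib2.urlopen(base + '/meta-data/instance-id').read()
    print(instance_id)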
---------------------------
Config Drive
---------------------------
.. include:: ../../sources/configdrive/README.rst
---------------------------
Alt cloud
---------------------------
.. include:: ../../sources/altcloud/README.rst
---------------------------
No cloud
---------------------------
.. include:: ../../sources/nocloud/README.rst
---------------------------
MAAS
---------------------------
*TODO*
For now see: http://maas.ubuntu.com/
---------------------------
CloudStack
---------------------------
*TODO*
---------------------------
OVF
---------------------------
*TODO*
For now see: https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/files/head:/doc/sources/ovf/
---------------------------
Fallback/None
---------------------------
This is the fallback datasource when no other datasource can be selected. It is
the equivalent of an *empty* datasource in that it provides an empty string as userdata
and an empty dictionary as metadata. It is useful for testing as well as for when
you do not need an actual datasource to meet your instance
requirements (ie you just want to run modules that are not concerned with any
external data). It is typically put at the end of the datasource search list
so that, if no other datasource matches, this one will, and
the user is not left with an inaccessible instance.
**Note:** the instance id that this datasource provides is ``iid-datasource-none``.
.. _boto: http://docs.pythonboto.org/en/latest/


@ -0,0 +1,81 @@
================
Directory layout
================
Cloud-init's directory structure is somewhat different from a regular application::
/var/lib/cloud/
- data/
- instance-id
- previous-instance-id
- datasource
- previous-datasource
- previous-hostname
- handlers/
- instance
- instances/
i-00000XYZ/
- boot-finished
- cloud-config.txt
- datasource
- handlers/
- obj.pkl
- scripts/
- sem/
- user-data.txt
- user-data.txt.i
- scripts/
- per-boot/
- per-instance/
- per-once/
- seed/
- sem/
``/var/lib/cloud``
The main directory containing the cloud-init specific subdirectories.
It is typically located at ``/var/lib`` but there are certain configuration
scenarios where this can be altered.
TBD, describe this overriding more.
``data/``
Contains information related to instance ids, datasources and hostnames of the previous
and current instance if they are different. These can be examined as needed to
determine any information related to a previous boot (if applicable).
``handlers/``
Custom ``part-handlers`` code is written out here. Files that end up here are written
out in the scheme of ``part-handler-XYZ`` where ``XYZ`` is the handler number (the
first handler found starts at 0).
``instance``
A symlink to the current ``instances/`` subdirectory that points to the currently
active instance (which one is active depends on the datasource loaded).
``instances/``
All instances that were created using this image end up with instance identifier
subdirectories (and corresponding data for each instance). The currently active
instance will be pointed to by the ``instance`` symlink defined previously.
``scripts/``
Scripts that are downloaded/created by the corresponding ``part-handler`` will end up
in one of these subdirectories.
``seed/``
TBD
``sem/``
Cloud-init has a concept of a module semaphore, which basically consists
of the module name and its frequency. These files are used to ensure a module
is only run `per-once`, `per-instance`, or `per-always`. This folder contains
semaphore `files` for modules which are only supposed to run `per-once` (not tied to the instance id).

doc/rtd/topics/examples.rst (new file, 133 lines)

@ -0,0 +1,133 @@
.. _yaml_examples:
=====================
Cloud config examples
=====================
Including users and groups
---------------------------
.. literalinclude:: ../../examples/cloud-config-user-groups.txt
:language: yaml
:linenos:
Writing out arbitrary files
---------------------------
.. literalinclude:: ../../examples/cloud-config-write-files.txt
:language: yaml
:linenos:
Adding a yum repository
---------------------------
.. literalinclude:: ../../examples/cloud-config-yum-repo.txt
:language: yaml
:linenos:
Configure an instance's trusted CA certificates
------------------------------------------------------
.. literalinclude:: ../../examples/cloud-config-ca-certs.txt
:language: yaml
:linenos:
Configure an instance's resolv.conf
------------------------------------------------------
*Note:* when using a config drive on a RHEL-like system, resolv.conf
will also be managed 'automatically' due to the dns server information
provided in the config drive network format. Those who wish to have
different settings can use this module.
.. literalinclude:: ../../examples/cloud-config-resolv-conf.txt
:language: yaml
:linenos:
Install and run `chef`_ recipes
------------------------------------------------------
.. literalinclude:: ../../examples/cloud-config-chef.txt
:language: yaml
:linenos:
Setup and run `puppet`_
------------------------------------------------------
.. literalinclude:: ../../examples/cloud-config-puppet.txt
:language: yaml
:linenos:
Add apt repositories
---------------------------
.. literalinclude:: ../../examples/cloud-config-add-apt-repos.txt
:language: yaml
:linenos:
Run commands on first boot
---------------------------
.. literalinclude:: ../../examples/cloud-config-boot-cmds.txt
:language: yaml
:linenos:
.. literalinclude:: ../../examples/cloud-config-run-cmds.txt
:language: yaml
:linenos:
Alter the completion message
----------------------------
.. literalinclude:: ../../examples/cloud-config-final-message.txt
:language: yaml
:linenos:
Install arbitrary packages
---------------------------
.. literalinclude:: ../../examples/cloud-config-install-packages.txt
:language: yaml
:linenos:
Run apt or yum upgrade
---------------------------
.. literalinclude:: ../../examples/cloud-config-update-packages.txt
:language: yaml
:linenos:
Adjust mount points mounted
---------------------------
.. literalinclude:: ../../examples/cloud-config-mount-points.txt
:language: yaml
:linenos:
Call a url when finished
---------------------------
.. literalinclude:: ../../examples/cloud-config-phone-home.txt
:language: yaml
:linenos:
Reboot/poweroff when finished
-----------------------------
.. literalinclude:: ../../examples/cloud-config-power-state.txt
:language: yaml
:linenos:
Configure an instance's ssh keys
--------------------------------
.. literalinclude:: ../../examples/cloud-config-ssh-keys.txt
:language: yaml
:linenos:
.. _chef: http://www.opscode.com/chef/
.. _puppet: http://puppetlabs.com/

doc/rtd/topics/format.rst (new file, 159 lines)

@ -0,0 +1,159 @@
=========
Formats
=========
User data that will be acted upon by cloud-init must be in one of the following types.
Gzip Compressed Content
------------------------
Content found to be gzip compressed will be uncompressed.
The uncompressed data will then be used as if it were not compressed.
This is typically useful because user-data is limited to ~16384 [#]_ bytes.
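For example, a large user-data file can be compressed before being passed to
the cloud using nothing but the python standard library (a sketch; the file
names are illustrative):

.. code-block:: python

    import gzip

    # compress user-data so it fits within the size limit; cloud-init
    # will transparently uncompress it on the instance
    with open('user-data.txt', 'rb') as src:
        out = gzip.open('user-data.txt.gz', 'wb')
        out.write(src.read())
        out.close()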
Mime Multi Part Archive
------------------------
This list of rules is applied to each part of this multi-part file.
Using a mime-multi part file, the user can specify more than one type of data.
For example, both a user data script and a cloud-config type could be specified.
Supported content-types:
- text/x-include-once-url
- text/x-include-url
- text/cloud-config-archive
- text/upstart-job
- text/cloud-config
- text/part-handler
- text/x-shellscript
- text/cloud-boothook
Helper script to generate mime messages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: python
#!/usr/bin/python
import sys
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
if len(sys.argv) == 1:
print("%s input-file:type ..." % (sys.argv[0]))
sys.exit(1)
combined_message = MIMEMultipart()
for i in sys.argv[1:]:
(filename, format_type) = i.split(":", 1)
with open(filename) as fh:
contents = fh.read()
sub_message = MIMEText(contents, format_type, sys.getdefaultencoding())
sub_message.add_header('Content-Disposition', 'attachment; filename="%s"' % (filename))
combined_message.attach(sub_message)
print(combined_message)
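Assuming the helper above is saved as ``make-mime.py`` (a name chosen only for this example), it could be invoked as ``python make-mime.py config.txt:cloud-config boot.sh:x-shellscript > user-data`` to combine a cloud-config part with a shell script part.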
User-Data Script
------------------------
Typically used by those who just want to execute a shell script.
Begins with: ``#!`` or ``Content-Type: text/x-shellscript`` when using a MIME archive.
Example
~~~~~~~
::
$ cat myscript.sh
#!/bin/sh
echo "Hello World. The time is now $(date -R)!" | tee /root/output.txt
$ euca-run-instances --key mykey --user-data-file myscript.sh ami-a07d95c9
Include File
------------
This content is an ``include`` file.
The file contains a list of urls, one per line.
Each of the URLs will be read, and their content will be passed through this same set of rules.
I.e., the content read from the URL can be gzipped, mime-multi-part, or plain text.
Begins with: ``#include`` or ``Content-Type: text/x-include-url`` when using a MIME archive.
Cloud Config Data
-----------------
Cloud-config is the simplest way to accomplish some things
via user-data. Using cloud-config syntax, the user can specify certain things in a human friendly format.
These things include:
- apt upgrade should be run on first boot
- a different apt mirror should be used
- additional apt sources should be added
- certain ssh keys should be imported
- *and many more...*
**Note:** The file must be valid yaml syntax.
See the :ref:`yaml_examples` section for a commented set of examples of supported cloud config formats.
Begins with: ``#cloud-config`` or ``Content-Type: text/cloud-config`` when using a MIME archive.
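Since the payload must parse as yaml, one quick way to check a cloud-config
file before using it is to load it with PyYAML (a sketch; the file name is
illustrative):

.. code-block:: python

    import yaml

    # raises yaml.YAMLError on invalid syntax, otherwise returns the
    # parsed configuration dictionary
    with open('my-cloud-config.txt') as fh:
        cfg = yaml.safe_load(fh)
    print(cfg)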
Upstart Job
-----------
Content is placed into a file in ``/etc/init``, and will be consumed by upstart as any other upstart job.
Begins with: ``#upstart-job`` or ``Content-Type: text/upstart-job`` when using a MIME archive.
Cloud Boothook
--------------
This content is ``boothook`` data. It is stored in a file under ``/var/lib/cloud`` and then executed immediately.
This is the earliest ``hook`` available. Note that there is no mechanism provided for running only once. The boothook must take care of this itself.
It is provided with the instance id in the environment variable ``INSTANCE_ID``, which can be used to provide a 'once-per-instance' type of functionality.
Begins with: ``#cloud-boothook`` or ``Content-Type: text/cloud-boothook`` when using a MIME archive.
Part Handler
------------
This is a ``part-handler``. It will be written to a file in ``/var/lib/cloud/data`` based on its filename (which is generated).
This must be python code that contains a ``list_types`` function and a ``handle_part`` function.
Once the section is read the ``list_types`` function will be called. It must return a list of mime-types that this part-handler handles.
The ``handle_part`` function must be like:
.. code-block:: python
def handle_part(data, ctype, filename, payload):
# data = the cloudinit object
# ctype = "__begin__", "__end__", or the mime-type of the part that is being handled.
# filename = the filename of the part (or a generated filename if none is present in mime data)
# payload = the part's content
Cloud-init will then call the ``handle_part`` function once at begin, once per part received, and once at end.
The ``begin`` and ``end`` calls are to allow the part handler to do initialization or teardown.
Begins with: ``#part-handler`` or ``Content-Type: text/part-handler`` when using a MIME archive.
Example
~~~~~~~
.. literalinclude:: ../../examples/part-handler.txt
:language: python
:linenos:
Also this `blog`_ post offers another example for more advanced usage.
.. [#] See your cloud provider for applicable user-data size limitations...
.. _blog: http://foss-boss.blogspot.com/2011/01/advanced-cloud-init-custom-handlers.html


@ -0,0 +1 @@
.. include:: ../../../HACKING.rst


@ -0,0 +1,3 @@
=========
Modules
=========


@ -0,0 +1,12 @@
================
More information
================
Useful external references
--------------------------
- `The beauty of cloudinit`_
- `Introduction to cloud-init`_ (video)
.. _Introduction to cloud-init: http://www.youtube.com/watch?v=-zL3BdbKyGY
.. _The beauty of cloudinit: http://brandon.fuller.name/archives/2011/05/02/06.40.57/


@ -1,65 +0,0 @@
Data source AltCloud will be used to pick up user data on
RHEVm and vSphere.
RHEVm:
======
For RHEVm v3.0 the userdata is injected into the VM using floppy
injection via the RHEVm dashboard "Custom Properties". The format
of the Custom Properties entry must be:
"floppyinject=user-data.txt:<base64 encoded data>"
e.g.: To pass a simple bash script
% cat simple_script.bash
#!/bin/bash
echo "Hello Joe!" >> /tmp/JJV_Joe_out.txt
% base64 < simple_script.bash
IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8gSm9lISIgPj4gL3RtcC9KSlZfSm9lX291dC50eHQK
To pass this example script to cloud-init running in a RHEVm v3.0 VM
set the "Custom Properties" when creating the RHEMv v3.0 VM to:
floppyinject=user-data.txt:IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8gSm9lISIgPj4gL3RtcC9KSlZfSm9lX291dC50eHQK
NOTE: The prefix with file name must be: "floppyinject=user-data.txt:"
It is also possible to launch a RHEVm v3.0 VM and pass optional user
data to it using the Delta Cloud.
For more information on Delta Cloud see: http://deltacloud.apache.org
vSphere:
========
For VMWare's vSphere the userdata is injected into the VM as an ISO
via the cdrom. This can be done using the vSphere dashboard
by connecting an ISO image to the CD/DVD drive.
To pass this example script to cloud-init running in a vSphere VM
set the CD/DVD drive when creating the vSphere VM to point to an
ISO on the data store.
The ISO must contain the user data:
For example, to pass the same simple_script.bash to vSphere:
Create the ISO:
===============
% mkdir my-iso
NOTE: The file name on the ISO must be: "user-data.txt"
% cp simple_script.bash my-iso/user-data.txt
% genisoimage -o user-data.iso -r my-iso
Verify the ISO:
===============
% sudo mkdir /media/vsphere_iso
% sudo mount -o loop JoeV_CI_02.iso /media/vsphere_iso
% cat /media/vsphere_iso/user-data.txt
% sudo umount /media/vsphere_iso
Then, launch the vSphere VM with the ISO user-data.iso attached as a CDROM.
It is also possible to launch a vSphere VM and pass optional user
data to it using the Delta Cloud.
For more information on Delta Cloud see: http://deltacloud.apache.org


@ -0,0 +1,87 @@
The datasource altcloud will be used to pick up user data on `RHEVm`_ and `vSphere`_.
RHEVm
~~~~~~
For `RHEVm`_ v3.0 the userdata is injected into the VM using floppy
injection via the `RHEVm`_ dashboard "Custom Properties".
The format of the Custom Properties entry must be:
::
floppyinject=user-data.txt:<base64 encoded data>
For example to pass a simple bash script:
::
% cat simple_script.bash
#!/bin/bash
echo "Hello Joe!" >> /tmp/JJV_Joe_out.txt
% base64 < simple_script.bash
IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8gSm9lISIgPj4gL3RtcC9KSlZfSm9lX291dC50eHQK
To pass this example script to cloud-init running in a `RHEVm`_ v3.0 VM
set the "Custom Properties" when creating the RHEMv v3.0 VM to:
::
floppyinject=user-data.txt:IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8gSm9lISIgPj4gL3RtcC9KSlZfSm9lX291dC50eHQK
**NOTE:** The prefix with file name must be: ``floppyinject=user-data.txt:``
It is also possible to launch a `RHEVm`_ v3.0 VM and pass optional user
data to it using the Delta Cloud.
For more information on Delta Cloud see: http://deltacloud.apache.org
vSphere
~~~~~~~~
For VMWare's `vSphere`_ the userdata is injected into the VM as an ISO
via the cdrom. This can be done using the `vSphere`_ dashboard
by connecting an ISO image to the CD/DVD drive.
To pass this example script to cloud-init running in a `vSphere`_ VM
set the CD/DVD drive when creating the vSphere VM to point to an
ISO on the data store.
**Note:** The ISO must contain the user data.
For example, to pass the same ``simple_script.bash`` to vSphere:
Create the ISO
-----------------
::
% mkdir my-iso
NOTE: The file name on the ISO must be: ``user-data.txt``
::
% cp simple_script.bash my-iso/user-data.txt
% genisoimage -o user-data.iso -r my-iso
Verify the ISO
-----------------
::
% sudo mkdir /media/vsphere_iso
% sudo mount -o loop JoeV_CI_02.iso /media/vsphere_iso
% cat /media/vsphere_iso/user-data.txt
% sudo umount /media/vsphere_iso
Then, launch the `vSphere`_ VM with the ISO user-data.iso attached as a CDROM.
It is also possible to launch a `vSphere`_ VM and pass optional user
data to it using the Delta Cloud.
For more information on Delta Cloud see: http://deltacloud.apache.org
.. _RHEVm: https://www.redhat.com/virtualization/rhev/desktop/rhevm/
.. _vSphere: https://www.vmware.com/products/datacenter-virtualization/vsphere/overview.html


@ -1,118 +0,0 @@
The 'ConfigDrive' DataSource supports the OpenStack configdrive disk.
See doc/source/api_ext/ext_config_drive.rst in the nova source code for
more information on config drive.
The following criteria are required to be identified by
DataSourceConfigDrive as a config drive:
* must be formatted with vfat filesystem
* must be an un-partitioned block device (/dev/vdb, not /dev/vdb1)
* must contain one of the following files:
* etc/network/interfaces
* root/.ssh/authorized_keys
* meta.js
By default, cloud-init does not consider this source to be a full-fledged
datasource. Instead, the default behavior is to assume it is really only
present to provide networking information. Cloud-init will copy off the
network information, apply it to the system, and then continue on. The
"full" datasource would then be found in the EC2 metadata service.
== Content of config-drive ==
* etc/network/interfaces
This file is laid down by nova in order to pass static networking
information to the guest. Cloud-init will copy it off of the config-drive
and into /etc/network/interfaces as soon as it can, and then attempt to
bring up all network interfaces.
* root/.ssh/authorized_keys
This file is laid down by nova, and contains the keys that were
provided to it on instance creation (nova-boot --key ....)
Cloud-init will copy those keys and put them into the configured user
('ubuntu') .ssh/authorized_keys.
* meta.js
meta.js is populated on the config-drive in response to the user passing
"meta flags" (nova boot --meta key=value ...). It is expected to be json
formatted.
== Configuration ==
Cloud-init's behavior can be modified by keys found in the meta.js file in
the following ways:
* dsmode:
values: local, net, pass
default: pass
This is what indicates if configdrive is a final data source or not.
By default it is 'pass', meaning this datasource should not be read.
Set it to 'local' or 'net' to stop cloud-init from continuing on to
search for other data sources after network config.
The difference between 'local' and 'net' is that local will not require
networking to be up before user-data actions (or boothooks) are run.
* instance-id:
default: iid-dsconfigdrive
This is utilized as the metadata's instance-id. It should generally
be unique, as it is what is used to determine "is this a new instance".
* public-keys:
default: None
if present, these keys will be used as the public keys for the
instance. This value overrides the content in authorized_keys.
Note: it is likely preferable to provide keys via user-data
* user-data:
default: None
This provides cloud-init user-data. See other documentation for what
all can be present here.
== Example ==
Here is an example using the nova client (python-novaclient)
Assuming the following variables set up:
* img_id : set to the nova image id (uuid from image-list)
* flav_id : set to numeric flavor_id (nova flavor-list)
* keyname : set to name of key for this instance (nova keypair-list)
$ cat my-user-data
#!/bin/sh
echo ==== USER_DATA FROM EC2 MD ==== | tee /ud.log
$ ud_value=$(sed 's,EC2 MD,META KEY,' my-user-data)
## Now, 'ud_value' has the same content as the my-user-data file, but
## with the string "USER_DATA FROM META KEY"
## launch an instance with dsmode=pass
## This will really not use the configdrive for anything as the mode
## for the datasource is 'pass', meaning it will still expect some
## other data source (DataSourceEc2).
$ nova boot --image=$img_id --config-drive=1 --flavor=$flav_id \
--key_name=$keyname \
--user_data=my-user-data \
"--meta=instance-id=iid-001 \
"--meta=user-data=${ud_keyval}" \
"--meta=dsmode=pass" cfgdrive-dsmode-pass
$ euca-get-console-output i-0000001 | grep USER_DATA
echo ==== USER_DATA FROM EC2 MD ==== | tee /ud.log
## Now, launch an instance with dsmode=local
## This time, the only metadata and userdata available to cloud-init
## are on the config-drive
$ nova boot --image=$img_id --config-drive=1 --flavor=$flav_id \
--key_name=$keyname \
--user_data=my-user-data \
"--meta=instance-id=iid-001 \
"--meta=user-data=${ud_keyval}" \
"--meta=dsmode=local" cfgdrive-dsmode-local
$ euca-get-console-output i-0000002 | grep USER_DATA
echo ==== USER_DATA FROM META KEY ==== | tee /ud.log
--
[1] https://github.com/openstack/nova/blob/master/doc/source/api_ext/ext_config_drive.rst for more info


@ -0,0 +1,123 @@
The configuration drive datasource supports the `OpenStack`_ configuration drive disk.
See `the config drive extension`_ and `introduction`_ in the public
documentation for more information.
By default, cloud-init does *not* consider this source to be a full-fledged
datasource. Instead, the typical behavior is to assume it is really only
present to provide networking information. Cloud-init will copy off the
network information, apply it to the system, and then continue on. The
"full" datasource could then be found in the EC2 metadata service. If this is
not the case then the files contained on the located drive must provide equivalents
to what the EC2 metadata service would provide (which is typical of the version
2 support listed below)
Version 1
~~~~~~~~~
The following criteria are required for a device to be identified as a config drive:
1. Must be formatted with `vfat`_ filesystem
2. Must be an un-partitioned block device (/dev/vdb, not /dev/vdb1)
3. Must contain *one* of the following files
::
/etc/network/interfaces
/root/.ssh/authorized_keys
/meta.js
``/etc/network/interfaces``
This file is laid down by nova in order to pass static networking
information to the guest. Cloud-init will copy it off of the config-drive
and into /etc/network/interfaces (or convert it to RH format) as soon as it can,
and then attempt to bring up all network interfaces.
``/root/.ssh/authorized_keys``
This file is laid down by nova, and contains the ssh keys that were
provided to nova on instance creation (nova-boot --key ....)
``/meta.js``
meta.js is populated on the config-drive in response to the user passing
"meta flags" (nova boot --meta key=value ...). It is expected to be json
formatted.
Version 2
~~~~~~~~~~~
The following criteria are required for a device to be identified as a config drive:
1. Must be formatted with `vfat`_ or `iso9660`_ filesystem
or have a *filesystem* label of **config-2**
2. Must be an un-partitioned block device (/dev/vdb, not /dev/vdb1)
3. The files that will typically be present in the config drive are:
::
openstack/
- 2012-08-10/ or latest/
- meta_data.json
- user_data (not mandatory)
- content/
- 0000 (referenced content files)
- 0001
- ....
ec2
- latest/
- meta-data.json (not mandatory)
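Once such a drive is mounted, the version 2 metadata can be read with the
python standard library (a sketch; the mount point below is an assumption):
::
    import json
    # '/mnt/config' is an assumed mount point for the config drive
    with open('/mnt/config/openstack/latest/meta_data.json') as fh:
        md = json.load(fh)
    print(md.get('uuid'))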
Keys and values
~~~~~~~~~~~~~~~
Cloud-init's behavior can be modified by keys found in the meta.js (version 1 only) file in the following ways.
::
dsmode:
values: local, net, pass
default: pass
This is what indicates if configdrive is a final data source or not.
By default it is 'pass', meaning this datasource should not be read.
Set it to 'local' or 'net' to stop cloud-init from continuing on to
search for other data sources after network config.
The difference between 'local' and 'net' is that local will not require
networking to be up before user-data actions (or boothooks) are run.
::
instance-id:
default: iid-dsconfigdrive
This is utilized as the metadata's instance-id. It should generally
be unique, as it is what is used to determine "is this a new instance".
::
public-keys:
default: None
If present, these keys will be used as the public keys for the
instance. This value overrides the content in authorized_keys.
Note: it is likely preferable to provide keys via user-data
::
user-data:
default: None
This provides cloud-init user-data. See :ref:`examples <yaml_examples>` for
what all can be present here.
.. _OpenStack: http://www.openstack.org/
.. _introduction: http://docs.openstack.org/trunk/openstack-compute/admin/content/config-drive.html
.. _python-novaclient: https://github.com/openstack/python-novaclient
.. _iso9660: https://en.wikipedia.org/wiki/ISO_9660
.. _vfat: https://en.wikipedia.org/wiki/File_Allocation_Table
.. _the config drive extension: http://docs.openstack.org/developer/nova/api_ext/ext_config_drive.html


@ -1,55 +0,0 @@
The data sources 'NoCloud' and 'NoCloudNet' allow the user to provide user-data
and meta-data to the instance without running a network service (or even without
having a network at all)
You can provide meta-data and user-data to a local vm boot via files on a vfat
or iso9660 filesystem. These user-data and meta-data files are expected to be
in the format described in doc/example/seed/README. Basically, user-data is
simply user-data and meta-data is a yaml formatted file representing what you'd
find in the EC2 metadata service.
Given an Ubuntu 12.04 cloud image in 'disk.img', you can create a sufficient disk
by following the example below.
## create user-data and meta-data files that will be used
## to modify image on first boot
$ { echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
$ printf "#cloud-config\npassword: passw0rd\nchpasswd: { expire: False }\nssh_pwauth: True\n" > user-data
## create a disk to attach with some user-data and meta-data
$ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
## alternatively, create a vfat filesystem with same files
## $ truncate --size 2M seed.img
## $ mkfs.vfat -n cidata seed.img
## $ mcopy -oi seed.img user-data meta-data ::
## create a new qcow image to boot, backed by your original image
$ qemu-img create -f qcow2 -b disk.img boot-disk.img
## boot the image and login as 'ubuntu' with password 'passw0rd'
## note, passw0rd was set as password through the user-data above,
## there is no password set on these images.
$ kvm -m 256 \
-net nic -net user,hostfwd=tcp::2222-:22 \
-drive file=boot-disk.img,if=virtio \
-drive file=seed.iso,if=virtio
Note that the instance-id provided ('iid-local01' above) is what is used to
determine if this is "first boot". So if you are making updates to user-data
you will also have to change that, or start the disk fresh.
Also, you can inject an /etc/network/interfaces file by providing the content
for that file in the 'network-interfaces' field of metadata. Example metadata:
instance-id: iid-abcdefg
network-interfaces: |
iface eth0 inet static
address 192.168.1.10
network 192.168.1.0
netmask 255.255.255.0
broadcast 192.168.1.255
gateway 192.168.1.254
hostname: myhost


@ -0,0 +1,71 @@
The data sources ``NoCloud`` and ``NoCloudNet`` allow the user to provide user-data
and meta-data to the instance without running a network service (or even without
having a network at all).
You can provide meta-data and user-data to a local vm boot via files on a `vfat`_
or `iso9660`_ filesystem.
These user-data and meta-data files are expected to be
in the following format.
::
/user-data
/meta-data
Basically, user-data is simply user-data and meta-data is a yaml formatted file
representing what you'd find in the EC2 metadata service.
Given an Ubuntu 12.04 cloud image in 'disk.img', you can create a sufficient disk
by following the example below.
::
## create user-data and meta-data files that will be used
## to modify image on first boot
$ { echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
$ printf "#cloud-config\npassword: passw0rd\nchpasswd: { expire: False }\nssh_pwauth: True\n" > user-data
## create a disk to attach with some user-data and meta-data
$ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
## alternatively, create a vfat filesystem with same files
## $ truncate --size 2M seed.img
## $ mkfs.vfat -n cidata seed.img
## $ mcopy -oi seed.img user-data meta-data ::
## create a new qcow image to boot, backed by your original image
$ qemu-img create -f qcow2 -b disk.img boot-disk.img
## boot the image and login as 'ubuntu' with password 'passw0rd'
## note, passw0rd was set as password through the user-data above,
## there is no password set on these images.
$ kvm -m 256 \
-net nic -net user,hostfwd=tcp::2222-:22 \
-drive file=boot-disk.img,if=virtio \
-drive file=seed.iso,if=virtio
**Note:** the instance-id provided (``iid-local01`` above) is what is used to
determine if this is "first boot". So if you are making updates to user-data
you will also have to change that, or start the disk fresh.
Also, you can inject an ``/etc/network/interfaces`` file by providing the content
for that file in the ``network-interfaces`` field of metadata.
Example metadata:
::
instance-id: iid-abcdefg
network-interfaces: |
iface eth0 inet static
address 192.168.1.10
network 192.168.1.0
netmask 255.255.255.0
broadcast 192.168.1.255
gateway 192.168.1.254
hostname: myhost
.. _iso9660: https://en.wikipedia.org/wiki/ISO_9660
.. _vfat: https://en.wikipedia.org/wiki/File_Allocation_Table


@ -36,10 +36,10 @@ PKG_MP = {
'prettytable': 'python-prettytable',
'pyyaml': 'python-yaml',
}
DEBUILD_ARGS = ["-us", "-S", "-uc"]
DEBUILD_ARGS = ["-us", "-S", "-uc", "-d"]
def write_debian_folder(root, version, revno):
def write_debian_folder(root, version, revno, append_requires=[]):
deb_dir = util.abs_join(root, 'debian')
os.makedirs(deb_dir)
@ -58,7 +58,7 @@ def write_debian_folder(root, version, revno):
pkgs = [p.lower().strip() for p in stdout.splitlines()]
# Map to known packages
requires = []
requires = append_requires
for p in pkgs:
tgt_pkg = PKG_MP.get(p)
if not tgt_pkg:
@ -87,6 +87,11 @@ def main():
" (default: %(default)s)"),
default=False,
action='store_true')
parser.add_argument("--no-cloud-utils", dest="no_cloud_utils",
help=("don't depend on cloud-utils package"
" (default: %(default)s)"),
default=False,
action='store_true')
for ent in DEBUILD_ARGS:
parser.add_argument(ent, dest="debuild_args", action='append_const',
@ -128,7 +133,11 @@ def main():
shutil.move(extracted_name, xdir)
print("Creating a debian/ folder in %r" % (xdir))
write_debian_folder(xdir, version, revno)
if not args.no_cloud_utils:
append_requires=['cloud-utils']
else:
append_requires=[]
write_debian_folder(xdir, version, revno, append_requires)
# The naming here seems to follow some debian standard
# so it will whine if it is changed...


@ -18,8 +18,7 @@ Standards-Version: 3.9.3
Package: cloud-init
Architecture: all
Depends: cloud-utils,
procps,
Depends: procps,
python,
#for $r in $requires
${r},


@ -38,11 +38,13 @@ def is_f(p):
INITSYS_FILES = {
'sysvinit': [f for f in glob('sysvinit/*') if is_f(f)],
'sysvinit_deb': [f for f in glob('sysvinit/*') if is_f(f)],
'systemd': [f for f in glob('systemd/*') if is_f(f)],
'upstart': [f for f in glob('upstart/*') if is_f(f)],
}
INITSYS_ROOTS = {
'sysvinit': '/etc/rc.d/init.d',
'sysvinit_deb': '/etc/init.d',
'systemd': '/etc/systemd/system/',
'upstart': '/etc/init/',
}


@ -29,15 +29,13 @@
# Should-Start: $time
# Required-Stop:
# Should-Stop:
# Default-Start: 3 5
# Default-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: The config cloud-init job
# Description: Start cloud-init and runs the config phase
# and any associated config modules as desired.
### END INIT INFO
. /etc/init.d/functions
# Return values acc. to LSB for all commands but status:
# 0 - success
# 1 - generic or unspecified error
@ -60,8 +58,9 @@ prog="cloud-init"
cloud_init="/usr/bin/cloud-init"
conf="/etc/cloud/cloud.cfg"
# If there exists a sysconfig variable override file use it...
# If sysconfig/default variable override files exist, use them...
[ -f /etc/sysconfig/cloud-init ] && . /etc/sysconfig/cloud-init
[ -f /etc/default/cloud-init ] && . /etc/default/cloud-init
start() {
[ -x $cloud_init ] || return 5
@ -80,8 +79,6 @@ stop() {
return $RETVAL
}
. /etc/init.d/functions
case "$1" in
start)
start

View File

@ -29,15 +29,13 @@
# Should-Start: $time
# Required-Stop:
# Should-Stop:
# Default-Start: 3 5
# Default-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: The final cloud-init job
# Description: Starts cloud-init and runs the final phase
# and any associated final modules as desired.
### END INIT INFO
. /etc/init.d/functions
# Return values acc. to LSB for all commands but status:
# 0 - success
# 1 - generic or unspecified error
@ -60,8 +58,9 @@ prog="cloud-init"
cloud_init="/usr/bin/cloud-init"
conf="/etc/cloud/cloud.cfg"
# If there exists a sysconfig variable override file use it...
# If sysconfig/default variable override files exist, use them...
[ -f /etc/sysconfig/cloud-init ] && . /etc/sysconfig/cloud-init
[ -f /etc/default/cloud-init ] && . /etc/default/cloud-init
start() {
[ -x $cloud_init ] || return 5
@ -80,8 +79,6 @@ stop() {
return $RETVAL
}
. /etc/init.d/functions
case "$1" in
start)
start

View File

@ -29,15 +29,13 @@
# Should-Start: $time
# Required-Stop:
# Should-Stop:
# Default-Start: 3 5
# Default-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: The initial cloud-init job (net and fs contingent)
# Description: Starts cloud-init and runs the initialization phase
# and any associated initial modules as desired.
### END INIT INFO
. /etc/init.d/functions
# Return values acc. to LSB for all commands but status:
# 0 - success
# 1 - generic or unspecified error
@ -60,8 +58,9 @@ prog="cloud-init"
cloud_init="/usr/bin/cloud-init"
conf="/etc/cloud/cloud.cfg"
# If there exists a sysconfig variable override file use it...
# If sysconfig/default variable override files exist, use them...
[ -f /etc/sysconfig/cloud-init ] && . /etc/sysconfig/cloud-init
[ -f /etc/default/cloud-init ] && . /etc/default/cloud-init
start() {
[ -x $cloud_init ] || return 5
@ -80,8 +79,6 @@ stop() {
return $RETVAL
}
. /etc/init.d/functions
case "$1" in
start)
start

View File

@ -29,15 +29,13 @@
# Should-Start: $time
# Required-Stop:
# Should-Stop:
# Default-Start: 3 5
# Default-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: The initial cloud-init job (local fs contingent)
# Description: Starts cloud-init and runs the initialization phases
# and any associated initial modules as desired.
### END INIT INFO
. /etc/init.d/functions
# Return values acc. to LSB for all commands but status:
# 0 - success
# 1 - generic or unspecified error
@ -60,8 +58,9 @@ prog="cloud-init"
cloud_init="/usr/bin/cloud-init"
conf="/etc/cloud/cloud.cfg"
# If there exists a sysconfig variable override file use it...
# If sysconfig/default variable override files exist, use them...
[ -f /etc/sysconfig/cloud-init ] && . /etc/sysconfig/cloud-init
[ -f /etc/default/cloud-init ] && . /etc/default/cloud-init
start() {
[ -x $cloud_init ] || return 5
@ -80,8 +79,6 @@ stop() {
return $RETVAL
}
. /etc/init.d/functions
case "$1" in
start)
start

View File

@ -0,0 +1,39 @@
#
# Your system has been configured with 'manage-resolv-conf' set to true.
# As a result, cloud-init has written this file with the configuration data
# it has been provided. By default, cloud-init will write this file
# a single time (PER_ONCE).
#
#if $varExists('nameservers')
#for $server in $nameservers
nameserver $server
#end for
#end if
#if $varExists('searchdomains')
search #slurp
#for $search in $searchdomains
$search #slurp
#end for
#end if
#if $varExists('domain')
domain $domain
#end if
#if $varExists('sortlist')
sortlist #slurp
#for $sort in $sortlist
$sort #slurp
#end for
#end if
#if $varExists('options') or $varExists('flags')
options #slurp
#for $flag in $flags
$flag #slurp
#end for
#for $key, $value in $options.items()
$key:$value #slurp
#end for
#end if
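For illustration, a minimal sketch of rendering this template with the
Cheetah engine (which cloud-init's templater uses); the template path and
the sample values below are assumptions for the example:

  from Cheetah.Template import Template

  params = {
      'nameservers': ['192.0.2.1', '192.0.2.2'],
      'searchdomains': ['example.com', 'example.org'],
      'domain': 'example.com',
      'options': {'timeout': '2'},
      'flags': ['rotate'],
  }
  # keys left out of params are skipped by the '#if $varExists(...)' guards
  print(Template(file='resolv.conf.tmpl', searchList=[params]))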

View File

@ -0,0 +1,28 @@
\## Note, this file is written by cloud-init on first boot of an instance
\## modifications made here will not survive a re-bundle.
\## if you wish to make changes you can:
\## a.) add 'apt_preserve_sources_list: true' to /etc/cloud/cloud.cfg
\## or do the same in user-data
\## b.) add sources in /etc/apt/sources.list.d
\## c.) make changes to template file /etc/cloud/templates/sources.list.debian.tmpl
\###
# See http://www.debian.org/releases/stable/i386/release-notes/ch-upgrading.html
# for how to upgrade to newer versions of the distribution.
deb $mirror $codename main contrib non-free
deb-src $mirror $codename main contrib non-free
\## Major bug fix updates produced after the final release of the
\## distribution.
deb $security $codename/updates main contrib non-free
deb-src $security $codename/updates main contrib non-free
deb $mirror $codename-updates main contrib non-free
deb-src $mirror $codename-updates main contrib non-free
\## Uncomment the following two lines to add software from the 'backports'
\## repository.
\## N.B. software from this repository may not have been tested as
\## extensively as that contained in the main release, although it includes
\## newer versions of some applications which may provide useful features.
# deb http://backports.debian.org/debian-backports $codename-backports main contrib non-free
# deb-src http://backports.debian.org/debian-backports $codename-backports main contrib non-free
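For reference, $mirror, $security and $codename are substituted from
cloud-init's apt configuration at render time; with illustrative values
(mirror http://ftp.debian.org/debian, codename wheezy) the first stanza
would render as:

  deb http://ftp.debian.org/debian wheezy main contrib non-free
  deb-src http://ftp.debian.org/debian wheezy main contrib non-free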

View File

@ -2,6 +2,9 @@ import os
import sys
import unittest
from contextlib import contextmanager
from mocker import Mocker
from mocker import MockerTestCase
from cloudinit import helpers as ch
@ -31,6 +34,17 @@ else:
pass
@contextmanager
def mocker(verify_calls=True):
m = Mocker()
try:
yield m
finally:
m.restore()
if verify_calls:
m.verify()
# Makes the old path start
# with new base instead of whatever
# it previously had
@ -168,3 +182,11 @@ class FilesystemMockingTestCase(ResourceUsingTestCase):
trap_func = retarget_many_wrapper(new_root, 1, func)
setattr(mod, f, trap_func)
self.patched_funcs.append((mod, f, func))
def populate_dir(path, files):
os.makedirs(path)
for (name, content) in files.iteritems():
with open(os.path.join(path, name), "w") as fp:
fp.write(content)
fp.close()

View File

@ -1,11 +1,13 @@
"""Tests of the built-in user data handlers."""
import os
import unittest
from mocker import MockerTestCase
from cloudinit import handlers
from cloudinit import helpers
from cloudinit import util
from cloudinit.handlers import upstart_job
@ -33,7 +35,9 @@ class TestBuiltins(MockerTestCase):
None, None, None)
self.assertEquals(0, len(os.listdir(up_root)))
@unittest.skip("until LP: #1124384 fixed")
def test_upstart_frequency_single(self):
# files should be written out when frequency is ! per-instance
c_root = self.makeDir()
up_root = self.makeDir()
paths = helpers.Paths({
@ -41,9 +45,12 @@ class TestBuiltins(MockerTestCase):
'upstart_dir': up_root,
})
freq = PER_INSTANCE
mock_subp = self.mocker.replace(util.subp, passthrough=False)
mock_subp(["initctl", "reload-configuration"], capture=False)
self.mocker.replay()
h = upstart_job.UpstartJobPartHandler(paths)
# No files should be written out when
# the frequency is ! per-instance
h.handle_part('', handlers.CONTENT_START,
None, None, None)
h.handle_part('blah', 'text/upstart-job',

View File

@ -11,6 +11,7 @@ from cloudinit import settings
from cloudinit.sources import DataSourceConfigDrive as ds
from cloudinit import util
from tests.unittests import helpers as unit_helpers
PUBKEY = u'ssh-rsa AAAAB3NzaC1....sIkJhq8wdX+4I3A4cYbYP ubuntu@server-460\n'
EC2_META = {
@ -89,23 +90,22 @@ class TestConfigDriveDataSource(MockerTestCase):
'swap': '/dev/vda3',
}
for name, dev_name in name_tests.items():
my_mock = mocker.Mocker()
find_mock = my_mock.replace(util.find_devs_with,
spec=False, passthrough=False)
provided_name = dev_name[len('/dev/'):]
provided_name = "s" + provided_name[1:]
find_mock(mocker.ARGS)
my_mock.result([provided_name])
exists_mock = my_mock.replace(os.path.exists,
spec=False, passthrough=False)
exists_mock(mocker.ARGS)
my_mock.result(False)
exists_mock(mocker.ARGS)
my_mock.result(True)
my_mock.replay()
device = cfg_ds.device_name_to_device(name)
my_mock.restore()
self.assertEquals(dev_name, device)
with unit_helpers.mocker() as my_mock:
find_mock = my_mock.replace(util.find_devs_with,
spec=False, passthrough=False)
provided_name = dev_name[len('/dev/'):]
provided_name = "s" + provided_name[1:]
find_mock(mocker.ARGS)
my_mock.result([provided_name])
exists_mock = my_mock.replace(os.path.exists,
spec=False, passthrough=False)
exists_mock(mocker.ARGS)
my_mock.result(False)
exists_mock(mocker.ARGS)
my_mock.result(True)
my_mock.replay()
device = cfg_ds.device_name_to_device(name)
self.assertEquals(dev_name, device)
def test_dev_os_map(self):
populate_dir(self.tmp, CFG_DRIVE_FILES_V2)
@ -122,19 +122,18 @@ class TestConfigDriveDataSource(MockerTestCase):
'swap': '/dev/vda3',
}
for name, dev_name in name_tests.items():
my_mock = mocker.Mocker()
find_mock = my_mock.replace(util.find_devs_with,
spec=False, passthrough=False)
find_mock(mocker.ARGS)
my_mock.result([dev_name])
exists_mock = my_mock.replace(os.path.exists,
spec=False, passthrough=False)
exists_mock(mocker.ARGS)
my_mock.result(True)
my_mock.replay()
device = cfg_ds.device_name_to_device(name)
my_mock.restore()
self.assertEquals(dev_name, device)
with unit_helpers.mocker() as my_mock:
find_mock = my_mock.replace(util.find_devs_with,
spec=False, passthrough=False)
find_mock(mocker.ARGS)
my_mock.result([dev_name])
exists_mock = my_mock.replace(os.path.exists,
spec=False, passthrough=False)
exists_mock(mocker.ARGS)
my_mock.result(True)
my_mock.replay()
device = cfg_ds.device_name_to_device(name)
self.assertEquals(dev_name, device)
def test_dev_ec2_remap(self):
populate_dir(self.tmp, CFG_DRIVE_FILES_V2)
@ -156,17 +155,16 @@ class TestConfigDriveDataSource(MockerTestCase):
'root2k': None,
}
for name, dev_name in name_tests.items():
my_mock = mocker.Mocker()
exists_mock = my_mock.replace(os.path.exists,
spec=False, passthrough=False)
exists_mock(mocker.ARGS)
my_mock.result(False)
exists_mock(mocker.ARGS)
my_mock.result(True)
my_mock.replay()
device = cfg_ds.device_name_to_device(name)
self.assertEquals(dev_name, device)
my_mock.restore()
with unit_helpers.mocker(verify_calls=False) as my_mock:
exists_mock = my_mock.replace(os.path.exists,
spec=False, passthrough=False)
exists_mock(mocker.ARGS)
my_mock.result(False)
exists_mock(mocker.ARGS)
my_mock.result(True)
my_mock.replay()
device = cfg_ds.device_name_to_device(name)
self.assertEquals(dev_name, device)
def test_dev_ec2_map(self):
populate_dir(self.tmp, CFG_DRIVE_FILES_V2)
@ -259,19 +257,25 @@ class TestConfigDriveDataSource(MockerTestCase):
ds.read_config_drive_dir, my_d)
def test_find_candidates(self):
devs_with_answers = {
"TYPE=vfat": [],
"TYPE=iso9660": ["/dev/vdb"],
"LABEL=config-2": ["/dev/vdb"],
}
devs_with_answers = {}
def my_devs_with(criteria):
return devs_with_answers[criteria]
def my_is_partition(dev):
return dev[-1] in "0123456789" and not dev.startswith("sr")
try:
orig_find_devs_with = util.find_devs_with
util.find_devs_with = my_devs_with
orig_is_partition = util.is_partition
util.is_partition = my_is_partition
devs_with_answers = {"TYPE=vfat": [],
"TYPE=iso9660": ["/dev/vdb"],
"LABEL=config-2": ["/dev/vdb"],
}
self.assertEqual(["/dev/vdb"], ds.find_candidate_devs())
# add a vfat item
@ -287,6 +291,7 @@ class TestConfigDriveDataSource(MockerTestCase):
finally:
util.find_devs_with = orig_find_devs_with
util.is_partition = orig_is_partition
def test_pubkeys_v2(self):
"""Verify that public-keys work in config-drive-v2."""

View File

@ -3,6 +3,7 @@ import os
from cloudinit.sources import DataSourceMAAS
from cloudinit import url_helper
from tests.unittests.helpers import populate_dir
from mocker import MockerTestCase
@ -137,11 +138,4 @@ class TestMAASDataSource(MockerTestCase):
pass
def populate_dir(seed_dir, files):
os.mkdir(seed_dir)
for (name, content) in files.iteritems():
with open(os.path.join(seed_dir, name), "w") as fp:
fp.write(content)
fp.close()
# vi: ts=4 expandtab

View File

@ -0,0 +1,157 @@
from cloudinit import helpers
from cloudinit.sources import DataSourceNoCloud
from cloudinit import util
from tests.unittests.helpers import populate_dir
from mocker import MockerTestCase
import os
import yaml
class TestNoCloudDataSource(MockerTestCase):
def setUp(self):
self.tmp = self.makeDir()
self.paths = helpers.Paths({'cloud_dir': self.tmp})
self.cmdline = "root=TESTCMDLINE"
self.unapply = []
self.apply_patches([(util, 'get_cmdline', self._getcmdline)])
super(TestNoCloudDataSource, self).setUp()
def tearDown(self):
apply_patches([i for i in reversed(self.unapply)])
super(TestNoCloudDataSource, self).tearDown()
def apply_patches(self, patches):
ret = apply_patches(patches)
self.unapply += ret
def _getcmdline(self):
return self.cmdline
def test_nocloud_seed_dir(self):
md = {'instance-id': 'IID', 'dsmode': 'local'}
ud = "USER_DATA_HERE"
populate_dir(os.path.join(self.paths.seed_dir, "nocloud"),
{'user-data': ud, 'meta-data': yaml.safe_dump(md)})
sys_cfg = {
'datasource': {'NoCloud': {'fs_label': None}}
}
ds = DataSourceNoCloud.DataSourceNoCloud
dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
ret = dsrc.get_data()
self.assertEqual(dsrc.userdata_raw, ud)
self.assertEqual(dsrc.metadata, md)
self.assertTrue(ret)
def test_fs_label(self):
# find_devs_with should not be called if fs_label is None
ds = DataSourceNoCloud.DataSourceNoCloud
class PseudoException(Exception):
pass
def my_find_devs_with(*args, **kwargs):
_f = (args, kwargs)
raise PseudoException
self.apply_patches([(util, 'find_devs_with', my_find_devs_with)])
# by default, NoCloud should search for filesystems by label
sys_cfg = {'datasource': {'NoCloud': {}}}
dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
self.assertRaises(PseudoException, dsrc.get_data)
# but disabling searching should just end up with None found
sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
ret = dsrc.get_data()
self.assertFalse(ret)
def test_no_datasource_expected(self):
# no source should be found with no cmdline, no config, and fs_label=None
sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
ds = DataSourceNoCloud.DataSourceNoCloud
dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
self.assertFalse(dsrc.get_data())
def test_seed_in_config(self):
ds = DataSourceNoCloud.DataSourceNoCloud
data = {
'fs_label': None,
'meta-data': {'instance-id': 'IID'},
'user-data': "USER_DATA_RAW",
}
sys_cfg = {'datasource': {'NoCloud': data}}
dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
ret = dsrc.get_data()
self.assertEqual(dsrc.userdata_raw, "USER_DATA_RAW")
self.assertEqual(dsrc.metadata.get('instance-id'), 'IID')
self.assertTrue(ret)
class TestParseCommandLineData(MockerTestCase):
def test_parse_cmdline_data_valid(self):
ds_id = "ds=nocloud"
pairs = (
("root=/dev/sda1 %(ds_id)s", {}),
("%(ds_id)s; root=/dev/foo", {}),
("%(ds_id)s", {}),
("%(ds_id)s;", {}),
("%(ds_id)s;s=SEED", {'seedfrom': 'SEED'}),
("%(ds_id)s;seedfrom=SEED;local-hostname=xhost",
{'seedfrom': 'SEED', 'local-hostname': 'xhost'}),
("%(ds_id)s;h=xhost",
{'local-hostname': 'xhost'}),
("%(ds_id)s;h=xhost;i=IID",
{'local-hostname': 'xhost', 'instance-id': 'IID'}),
)
for (fmt, expected) in pairs:
fill = {}
cmdline = fmt % {'ds_id': ds_id}
ret = DataSourceNoCloud.parse_cmdline_data(ds_id=ds_id, fill=fill,
cmdline=cmdline)
self.assertEqual(expected, fill)
self.assertTrue(ret)
def test_parse_cmdline_data_none(self):
ds_id = "ds=foo"
cmdlines = (
"root=/dev/sda1 ro",
"console=/dev/ttyS0 root=/dev/foo",
"",
"ds=foocloud",
"ds=foo-net",
"ds=nocloud;s=SEED",
)
for cmdline in cmdlines:
fill = {}
ret = DataSourceNoCloud.parse_cmdline_data(ds_id=ds_id, fill=fill,
cmdline=cmdline)
self.assertEqual(fill, {})
self.assertFalse(ret)
def apply_patches(patches):
ret = []
for (ref, name, replace) in patches:
if replace is None:
continue
orig = getattr(ref, name)
setattr(ref, name, replace)
ret.append((ref, name, orig))
return ret
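# Note: the triples returned above have the same (ref, name, replace) shape
# that apply_patches accepts, so feeding the return value back restores the
# originals. A hypothetical round-trip (illustrative only, not used here):
#
#   undo = apply_patches([(util, 'get_cmdline', lambda: 'root=/dev/sda')])
#   try:
#       pass  # exercise code that reads util.get_cmdline()
#   finally:
#       apply_patches(undo)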
# vi: ts=4 expandtab

View File

@ -173,26 +173,29 @@ class TestUGNormalize(MockerTestCase):
'users': 'default'
}
(users, _groups) = self._norm(ug_cfg, distro)
self.assertIn('bob', users)
self.assertNotIn('bob', users) # Bob is not the default now, zetta is
self.assertIn('zetta', users)
self.assertTrue(users['zetta']['default'])
self.assertNotIn('default', users)
ug_cfg = {
'user': 'zetta',
'users': 'default, joe'
}
(users, _groups) = self._norm(ug_cfg, distro)
self.assertIn('bob', users)
self.assertNotIn('bob', users) # Bob is not the default now, zetta is
self.assertIn('joe', users)
self.assertIn('zetta', users)
self.assertTrue(users['zetta']['default'])
self.assertNotIn('default', users)
ug_cfg = {
'user': 'zetta',
'users': ['bob', 'joe']
}
(users, _groups) = self._norm(ug_cfg, distro)
self.assertNotIn('bob', users)
self.assertIn('bob', users)
self.assertIn('joe', users)
self.assertIn('zetta', users)
self.assertTrue(users['zetta']['default'])
ug_cfg = {
'user': 'zetta',
'users': {
@ -204,6 +207,7 @@ class TestUGNormalize(MockerTestCase):
self.assertIn('bob', users)
self.assertIn('joe', users)
self.assertIn('zetta', users)
self.assertTrue(users['zetta']['default'])
ug_cfg = {
'user': 'zetta',
}

View File

@ -138,15 +138,47 @@ class TestAddCaCerts(MockerTestCase):
self.mocker.replay()
cc_ca_certs.add_ca_certs([])
def test_single_cert(self):
"""Test adding a single certificate to the trusted CAs."""
def test_single_cert_trailing_cr(self):
"""Test adding a single certificate to the trusted CAs
when existing ca-certificates has trailing newline"""
cert = "CERT1\nLINE2\nLINE3"
ca_certs_content = "line1\nline2\ncloud-init-ca-certs.crt\nline3\n"
expected = "line1\nline2\nline3\ncloud-init-ca-certs.crt\n"
mock_write = self.mocker.replace(util.write_file, passthrough=False)
mock_load = self.mocker.replace(util.load_file, passthrough=False)
mock_write("/usr/share/ca-certificates/cloud-init-ca-certs.crt",
cert, mode=0644)
mock_load("/etc/ca-certificates.conf")
self.mocker.result(ca_certs_content)
mock_write("/etc/ca-certificates.conf", expected, omode="wb")
self.mocker.replay()
cc_ca_certs.add_ca_certs([cert])
def test_single_cert_no_trailing_cr(self):
"""Test adding a single certificate to the trusted CAs
when existing ca-certificates has no trailing newline"""
cert = "CERT1\nLINE2\nLINE3"
ca_certs_content = "line1\nline2\nline3"
mock_write = self.mocker.replace(util.write_file, passthrough=False)
mock_load = self.mocker.replace(util.load_file, passthrough=False)
mock_write("/usr/share/ca-certificates/cloud-init-ca-certs.crt",
cert, mode=0644)
mock_load("/etc/ca-certificates.conf")
self.mocker.result(ca_certs_content)
mock_write("/etc/ca-certificates.conf",
"\ncloud-init-ca-certs.crt", omode="ab")
"%s\n%s\n" % (ca_certs_content, "cloud-init-ca-certs.crt"),
omode="wb")
self.mocker.replay()
cc_ca_certs.add_ca_certs([cert])
@ -157,10 +189,18 @@ class TestAddCaCerts(MockerTestCase):
expected_cert_file = "\n".join(certs)
mock_write = self.mocker.replace(util.write_file, passthrough=False)
mock_load = self.mocker.replace(util.load_file, passthrough=False)
mock_write("/usr/share/ca-certificates/cloud-init-ca-certs.crt",
expected_cert_file, mode=0644)
mock_write("/etc/ca-certificates.conf",
"\ncloud-init-ca-certs.crt", omode="ab")
ca_certs_content = "line1\nline2\nline3"
mock_load("/etc/ca-certificates.conf")
self.mocker.result(ca_certs_content)
out = "%s\n%s\n" % (ca_certs_content, "cloud-init-ca-certs.crt")
mock_write("/etc/ca-certificates.conf", out, omode="wb")
self.mocker.replay()
cc_ca_certs.add_ca_certs(certs)

View File

@ -0,0 +1,255 @@
from mocker import MockerTestCase
from cloudinit import cloud
from cloudinit import util
from cloudinit.config import cc_growpart
import errno
import logging
import os
import re
# growpart:
# mode: auto # off, on, auto, 'growpart', 'parted'
# devices: ['root']
HELP_PARTED_NO_RESIZE = """
Usage: parted [OPTION]... [DEVICE [COMMAND [PARAMETERS]...]...]
Apply COMMANDs with PARAMETERS to DEVICE. If no COMMAND(s) are given, run in
interactive mode.
OPTIONs:
<SNIP>
COMMANDs:
<SNIP>
quit exit program
rescue START END rescue a lost partition near START
and END
resize NUMBER START END resize partition NUMBER and its file
system
rm NUMBER delete partition NUMBER
<SNIP>
Report bugs to bug-parted@gnu.org
"""
HELP_PARTED_RESIZE = """
Usage: parted [OPTION]... [DEVICE [COMMAND [PARAMETERS]...]...]
Apply COMMANDs with PARAMETERS to DEVICE. If no COMMAND(s) are given, run in
interactive mode.
OPTIONs:
<SNIP>
COMMANDs:
<SNIP>
quit exit program
rescue START END rescue a lost partition near START
and END
resize NUMBER START END resize partition NUMBER and its file
system
resizepart NUMBER END resize partition NUMBER
rm NUMBER delete partition NUMBER
<SNIP>
Report bugs to bug-parted@gnu.org
"""
HELP_GROWPART_RESIZE = """
growpart disk partition
rewrite partition table so that partition takes up all the space it can
options:
-h | --help print Usage and exit
<SNIP>
-u | --update R update the kernel partition table info after growing
this requires kernel support and 'partx --update'
R is one of:
- 'auto' : [default] update partition if possible
<SNIP>
Example:
- growpart /dev/sda 1
Resize partition 1 on /dev/sda
"""
HELP_GROWPART_NO_RESIZE = """
growpart disk partition
rewrite partition table so that partition takes up all the space it can
options:
-h | --help print Usage and exit
<SNIP>
Example:
- growpart /dev/sda 1
Resize partition 1 on /dev/sda
"""
class TestDisabled(MockerTestCase):
def setUp(self):
super(TestDisabled, self).setUp()
self.name = "growpart"
self.cloud_init = None
self.log = logging.getLogger("TestDisabled")
self.args = []
self.handle = cc_growpart.handle
def test_mode_off(self):
#Test that nothing is done if mode is off.
# this really only verifies that resizer_factory isn't called
config = {'growpart': {'mode': 'off'}}
self.mocker.replace(cc_growpart.resizer_factory,
passthrough=False)
self.mocker.replay()
self.handle(self.name, config, self.cloud_init, self.log, self.args)
class TestConfig(MockerTestCase):
def setUp(self):
super(TestConfig, self).setUp()
self.name = "growpart"
self.paths = None
self.cloud = cloud.Cloud(None, self.paths, None, None, None)
self.log = logging.getLogger("TestConfig")
self.args = []
os.environ = {}
self.cloud_init = None
self.handle = cc_growpart.handle
# Order must be correct
self.mocker.order()
def test_no_resizers_auto_is_fine(self):
subp = self.mocker.replace(util.subp, passthrough=False)
subp(['parted', '--help'], env={'LANG': 'C'})
self.mocker.result((HELP_PARTED_NO_RESIZE, ""))
subp(['growpart', '--help'], env={'LANG': 'C'})
self.mocker.result((HELP_GROWPART_NO_RESIZE, ""))
self.mocker.replay()
config = {'growpart': {'mode': 'auto'}}
self.handle(self.name, config, self.cloud_init, self.log, self.args)
def test_no_resizers_mode_growpart_is_exception(self):
subp = self.mocker.replace(util.subp, passthrough=False)
subp(['growpart', '--help'], env={'LANG': 'C'})
self.mocker.result((HELP_GROWPART_NO_RESIZE, ""))
self.mocker.replay()
config = {'growpart': {'mode': "growpart"}}
self.assertRaises(ValueError, self.handle, self.name, config,
self.cloud_init, self.log, self.args)
def test_mode_auto_prefers_parted(self):
subp = self.mocker.replace(util.subp, passthrough=False)
subp(['parted', '--help'], env={'LANG': 'C'})
self.mocker.result((HELP_PARTED_RESIZE, ""))
self.mocker.replay()
ret = cc_growpart.resizer_factory(mode="auto")
self.assertTrue(isinstance(ret, cc_growpart.ResizeParted))
def test_handle_with_no_growpart_entry(self):
#if no 'growpart' entry in config, then mode=auto should be used
myresizer = object()
factory = self.mocker.replace(cc_growpart.resizer_factory,
passthrough=False)
rsdevs = self.mocker.replace(cc_growpart.resize_devices,
passthrough=False)
factory("auto")
self.mocker.result(myresizer)
rsdevs(myresizer, ["/"])
self.mocker.result((("/", cc_growpart.RESIZE.CHANGED, "my-message",),))
self.mocker.replay()
try:
orig_resizers = cc_growpart.RESIZERS
cc_growpart.RESIZERS = (('mysizer', object),)
self.handle(self.name, {}, self.cloud_init, self.log, self.args)
finally:
cc_growpart.RESIZERS = orig_resizers
class TestResize(MockerTestCase):
def setUp(self):
super(TestResize, self).setUp()
self.name = "growpart"
self.log = logging.getLogger("TestResize")
# Order must be correct
self.mocker.order()
def test_simple_devices(self):
#test simple device list
# this patches out devent2dev, os.stat, and device_part_info
# so in the end, doesn't test a lot
devs = ["/dev/XXda1", "/dev/YYda2"]
devstat_ret = Bunch(st_mode=25008, st_ino=6078, st_dev=5L,
st_nlink=1, st_uid=0, st_gid=6, st_size=0,
st_atime=0, st_mtime=0, st_ctime=0)
enoent = ["/dev/NOENT"]
real_stat = os.stat
resize_calls = []
class myresizer(object):
def resize(self, diskdev, partnum, partdev):
resize_calls.append((diskdev, partnum, partdev))
if partdev == "/dev/YYda2":
return (1024, 2048)
return (1024, 1024) # old size, new size
def mystat(path):
if path in devs:
return devstat_ret
if path in enoent:
e = OSError("%s: does not exist" % path)
e.errno = errno.ENOENT
raise e
return real_stat(path)
try:
opinfo = cc_growpart.device_part_info
cc_growpart.device_part_info = simple_device_part_info
os.stat = mystat
resized = cc_growpart.resize_devices(myresizer(), devs + enoent)
def find(name, res):
for f in res:
if f[0] == name:
return f
return None
self.assertEqual(cc_growpart.RESIZE.NOCHANGE,
find("/dev/XXda1", resized)[1])
self.assertEqual(cc_growpart.RESIZE.CHANGED,
find("/dev/YYda2", resized)[1])
self.assertEqual(cc_growpart.RESIZE.SKIPPED,
find(enoent[0], resized)[1])
#self.assertEqual(resize_calls,
#[("/dev/XXda", "1", "/dev/XXda1"),
#("/dev/YYda", "2", "/dev/YYda2")])
finally:
cc_growpart.device_part_info = opinfo
os.stat = real_stat
def simple_device_part_info(devpath):
# simple stupid return: ('/dev/vda', '1') for '/dev/vda1'
ret = re.search("([^0-9]*)([0-9]*)$", devpath)
x = (ret.group(1), ret.group(2))
return x
class Bunch:
st_mode = None # fix pylint complaint
def __init__(self, **kwds):
self.__dict__.update(kwds)
# vi: ts=4 expandtab

View File

@ -0,0 +1,101 @@
from cloudinit import ssh_util
from unittest import TestCase
VALID_CONTENT = {
'dsa': (
"AAAAB3NzaC1kc3MAAACBAIrjOQSlSea19bExXBMBKBvcLhBoVvNBjCppNzllipF"
"W4jgIOMcNanULRrZGjkOKat6MWJNetSbV1E6IOFDQ16rQgsh/OvYU9XhzM8seLa"
"A21VszZuhIV7/2DE3vxu7B54zVzueG1O1Deq6goQCRGWBUnqO2yluJiG4HzrnDa"
"jzRAAAAFQDMPO96qXd4F5A+5b2f2MO7SpVomQAAAIBpC3K2zIbDLqBBs1fn7rsv"
"KcJvwihdlVjG7UXsDB76P2GNqVG+IlYPpJZ8TO/B/fzTMtrdXp9pSm9OY1+BgN4"
"REsZ2WNcvfgY33aWaEM+ieCcQigvxrNAF2FTVcbUIIxAn6SmHuQSWrLSfdHc8H7"
"hsrgeUPPdzjBD/cv2ZmqwZ1AAAAIAplIsScrJut5wJMgyK1JG0Kbw9JYQpLe95P"
"obB069g8+mYR8U0fysmTEdR44mMu0VNU5E5OhTYoTGfXrVrkR134LqFM2zpVVbE"
"JNDnIqDHxTkc6LY2vu8Y2pQ3/bVnllZZOda2oD5HQ7ovygQa6CH+fbaZHbdDUX/"
"5z7u2rVAlDw=="
),
'ecdsa': (
"AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBITrGBB3cgJ"
"J7fPxvtMW9H3oRisNpJ3OAslxZeyP7I0A9BPAW0RQIwHVtVnM7zrp4nI+JLZov/"
"Ql7lc2leWL7CY="
),
'rsa': (
"AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZdQueUq5oz"
"emNSj8T7enqKHOEaFoU2VoPgGEWC9RyzSQVeyD6s7APMcE82EtmW4skVEgEGSbD"
"c1pvxzxtchBj78hJP6Cf5TCMFSXw+Fz5rF1dR23QDbN1mkHs7adr8GW4kSWqU7Q"
"7NDwfIrJJtO7Hi42GyXtvEONHbiRPOe8stqUly7MvUoN+5kfjBM8Qqpfl2+FNhT"
"YWpMfYdPUnE7u536WqzFmsaqJctz3gBxH9Ex7dFtrxR4qiqEr9Qtlu3xGn7Bw07"
"/+i1D+ey3ONkZLN+LQ714cgj8fRS4Hj29SCmXp5Kt5/82cD/VN3NtHw=="
),
}
TEST_OPTIONS = ("no-port-forwarding,no-agent-forwarding,no-X11-forwarding,"
'command="echo \'Please login as the user \"ubuntu\" rather than the '
'user \"root\".\';echo;sleep 10"')
class TestAuthKeyLineParser(TestCase):
def test_simple_parse(self):
# test key line with common 3 fields (keytype, base64, comment)
parser = ssh_util.AuthKeyLineParser()
for ktype in ['rsa', 'ecdsa', 'dsa']:
content = VALID_CONTENT[ktype]
comment = 'user-%s@host' % ktype
line = ' '.join((ktype, content, comment,))
key = parser.parse(line)
self.assertEqual(key.base64, content)
self.assertFalse(key.options)
self.assertEqual(key.comment, comment)
self.assertEqual(key.keytype, ktype)
def test_parse_no_comment(self):
# test key line with key type and base64 only
parser = ssh_util.AuthKeyLineParser()
for ktype in ['rsa', 'ecdsa', 'dsa']:
content = VALID_CONTENT[ktype]
line = ' '.join((ktype, content,))
key = parser.parse(line)
self.assertEqual(key.base64, content)
self.assertFalse(key.options)
self.assertFalse(key.comment)
self.assertEqual(key.keytype, ktype)
def test_parse_with_keyoptions(self):
# test key line with options in it
parser = ssh_util.AuthKeyLineParser()
options = TEST_OPTIONS
for ktype in ['rsa', 'ecdsa', 'dsa']:
content = VALID_CONTENT[ktype]
comment = 'user-%s@host' % ktype
line = ' '.join((options, ktype, content, comment,))
key = parser.parse(line)
self.assertEqual(key.base64, content)
self.assertEqual(key.options, options)
self.assertEqual(key.comment, comment)
self.assertEqual(key.keytype, ktype)
def test_parse_with_options_passed_in(self):
# test key line with key type and base64 only
parser = ssh_util.AuthKeyLineParser()
baseline = ' '.join(("rsa", VALID_CONTENT['rsa'], "user@host"))
myopts = "no-port-forwarding,no-agent-forwarding"
key = parser.parse("allowedopt" + " " + baseline)
self.assertEqual(key.options, "allowedopt")
key = parser.parse("overridden_opt " + baseline, options=myopts)
self.assertEqual(key.options, myopts)
def test_parse_invalid_keytype(self):
parser = ssh_util.AuthKeyLineParser()
key = parser.parse(' '.join(["badkeytype", VALID_CONTENT['rsa']]))
self.assertFalse(key.valid())
# vi: ts=4 expandtab

60
tools/make-mime.py Executable file
View File

@ -0,0 +1,60 @@
#!/usr/bin/python

import argparse
import sys

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

KNOWN_CONTENT_TYPES = [
    'text/x-include-once-url',
    'text/x-include-url',
    'text/cloud-config-archive',
    'text/upstart-job',
    'text/cloud-config',
    'text/part-handler',
    'text/x-shellscript',
    'text/cloud-boothook',
]


def file_content_type(text):
    try:
        filename, content_type = text.split(":", 1)
        return (open(filename, 'r'), filename, content_type.strip())
    except (ValueError, IOError):
        # a 'type' callable must raise ArgumentTypeError for argparse
        # to report the bad value cleanly
        raise argparse.ArgumentTypeError("Invalid value for %r" % (text))


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-a", "--attach",
                        dest="files",
                        type=file_content_type,
                        action='append',
                        default=[],
                        required=True,
                        metavar="<file>:<content-type>",
                        help="attach the given file in the specified "
                             "content type")
    args = parser.parse_args()
    sub_messages = []
    for i, (fh, filename, format_type) in enumerate(args.files):
        contents = fh.read()
        sub_message = MIMEText(contents, format_type,
                               sys.getdefaultencoding())
        sub_message.add_header('Content-Disposition',
                               'attachment; filename="%s"' % (filename))
        content_type = sub_message.get_content_type().lower()
        if content_type not in KNOWN_CONTENT_TYPES:
            sys.stderr.write(("WARNING: content type %r for attachment %s "
                              "may be incorrect!\n") % (content_type, i + 1))
        sub_messages.append(sub_message)
    combined_message = MIMEMultipart()
    for msg in sub_messages:
        combined_message.attach(msg)
    print(combined_message)
    return 0


if __name__ == '__main__':
    sys.exit(main())
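For reference, a hypothetical invocation (the attached file names here are
made up): './tools/make-mime.py -a config.yaml:cloud-config
-a boot.sh:x-shellscript > user-data' emits a multipart MIME document
suitable for use as user-data, warning on stderr about any content type
not listed in KNOWN_CONTENT_TYPES.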

View File

@ -1,5 +1,14 @@
#!/bin/sh
logger_opts="-p user.info -t ec2"
# rhel's version of logger does not support the long
# form of -s (--stderr), so use the short form.
logger_opts="$logger_opts -s"
# Redirect stderr to stdout
exec 2>&1
fp_blist=",${1},"
key_blist=",${2},"
{
@ -16,9 +25,9 @@ done
echo "-----END SSH HOST KEY FINGERPRINTS-----"
echo "#############################################################"
} | logger -p user.info --stderr -t "ec2"
} | logger $logger_opts
echo -----BEGIN SSH HOST KEY KEYS-----
echo "-----BEGIN SSH HOST KEY KEYS-----"
for f in /etc/ssh/ssh_host_*key.pub; do
[ -f "$f" ] || continue
read ktype line < "$f"
@ -26,4 +35,4 @@ for f in /etc/ssh/ssh_host_*key.pub; do
[ "${key_blist#*,$ktype,}" = "${key_blist}" ] || continue
cat $f
done
echo -----END SSH HOST KEY KEYS-----
echo "-----END SSH HOST KEY KEYS-----"

View File

@ -21,6 +21,12 @@ script
# if the all static network interfaces are already up, nothing to do
[ -f "$MARK_STATIC_NETWORK_EMITTED" ] && exit 0
# ifquery will exit with failure if there is no /run/network directory.
# Normally that would get created by network-interface.conf or
# networking.conf, but it is possible that we are running before
# either of those has run.
mkdir -p /run/network
# get list of all 'auto' interfaces. if there are none, nothing to do.
auto_list=$(ifquery --list --allow auto 2>/dev/null) || :
[ -z "$auto_list" ] && exit 0

View File

@ -10,19 +10,55 @@ task
console output
script
# /run/network/static-network-up-emitted is written by
# upstart (via /etc/network/if-up.d/upstart). Its presence would
# indicate that static-network-up has already fired.
EMITTED="/run/network/static-network-up-emitted"
[ -e "$EMITTED" -o -e "/var/$EMITTED" ] && exit 0
set +e # you cannot trap TERM reliably with 'set -e'
SLEEP_CHILD=""
static_network_up() {
local emitted="/run/network/static-network-up-emitted"
# /run/network/static-network-up-emitted is written by
# upstart (via /etc/network/if-up.d/upstart). Its presence would
# indicate that static-network-up has already fired.
[ -e "$emitted" -o -e "/var/$emitted" ]
}
msg() {
local uptime="" idle=""
if [ -r /proc/uptime ]; then
read uptime idle < /proc/uptime
fi
echo "$UPSTART_JOB${uptime:+[${uptime}]}:" "$1"
}
handle_sigterm() {
# if we received sigterm and static networking is up then it probably
# came from upstart as a result of 'stop on static-network-up'
[ -z "$SLEEP_CHILD" ] || kill $SLEEP_CHILD
if static_network_up; then
msg "static networking is now up"
exit 0
fi
msg "recieved SIGTERM, networking not up"
exit 2
}
dowait() {
msg "waiting $1 seconds for network device"
sleep "$1" &
SLEEP_CHILD=$!
wait $SLEEP_CHILD
SLEEP_CHILD=""
}
trap handle_sigterm TERM
# static_network_up already occurred
static_network_up && exit 0
# obj.pkl comes from cloud-init-local (or previous boot and
# manual_cache_clean)
[ -f /var/lib/cloud/instance/obj.pkl ] && exit 0
short=10; long=120;
sleep ${short}
echo $UPSTART_JOB "waiting ${long} seconds for a network device."
sleep ${long}
echo $UPSTART_JOB "gave up waiting for a network device."
dowait 10
dowait 120
msg "gave up waiting for a network device."
: > /var/lib/cloud/data/no-net
end script
# EOF