merge from trunk

Scott Moser 2014-07-21 14:33:27 -04:00
commit efbef3eaca
30 changed files with 634 additions and 113 deletions


@ -1,3 +1,12 @@
0.7.6:
- open 0.7.6
- Enable vendordata on CloudSigma datasource (LP: #1303986)
- Poll on /dev/ttyS1 in CloudSigma datasource only if dmidecode says
we're running on cloudsigma (LP: #1316475) [Kiril Vladimiroff]
- SmartOS test: do not require existence of /dev/ttyS1. [LP: #1316597]
- doc: fix user-groups doc to reference plural ssh-authorized-keys
(LP: #1327065) [Joern Heissler]
- fix 'make test' in python 2.6
0.7.5:
- open 0.7.5
- Add a debug log message around import failures
@ -33,6 +42,14 @@
rather than relying on EC2 data in openstack metadata service.
- SmartOS, AltCloud: disable running on arm systems due to bug
(LP: #1243287, #1285686) [Oleg Strikov]
- Allow running a command to seed random, default is 'pollinate -q'
(LP: #1286316) [Dustin Kirkland]
- Write status to /run/cloud-init/status.json for consumption by
other programs (LP: #1284439)
- Azure: if a reboot causes ephemeral storage to be re-provisioned
then we need to re-format it. (LP: #1292648)
- OpenNebula: support base64 encoded user-data
[Enol Fernandez, Peter Kotcauer]
0.7.4:
- fix issue mounting 'ephemeral0' if ephemeral0 was an alias for a
partitioned block device with target filesystem on ephemeral0.1.

TODO

@ -1,46 +0,0 @@
- Consider a 'failsafe' DataSource
If all others fail, fall back to a default that
- sets the user password, writing it to console
- logs to console that this happened
- Consider a 'previous' DataSource
If no other data source is found, fall back to the 'previous' one
keep an indication of which instance id that is in /var/lib/cloud
- Rewrite "cloud-init-query" (currently not implemented)
Possibly have DataSource and cloudinit expose explicit fields
- instance-id
- hostname
- mirror
- release
- ssh public keys
- Remove the conversion of the ubuntu network interface format
to a RH/fedora format and replace it with a top-level format that uses
the netcf library's format instead (which itself knows how to translate
into the specific formats)
- Replace the 'apt*' modules with variants that now use the distro classes
to perform distro independent packaging commands (where possible)
- Canonicalize the semaphore/lock name for modules and user data handlers
a. It is most likely an existing bug that if a module in config
alters its name and it has already run, then it will run again since
the lock name hasn't been canonicalized
- Replace some of the LOG.debug calls with LOG.info where appropriate, since
right now there are really only 2 levels (WARN and DEBUG)
- Remove the 'cc_' for config modules, either have them fully specified (ie
'cloudinit.config.resizefs') or by default only look in the 'cloudinit.config'
for these modules (or have a combination of the above), this avoids having
to understand where your modules are coming from (which can be altered by
the current python inclusion path)
- Depending on whether people think the wrapper around 'os.path.join' provided
by the 'paths' object is useful (allowing us to modify based off a 'read'
and 'write' configuration-based 'root') or is just too confusing, it might be
something to remove later, and just recommend using 'chroot' instead (or the X
different other options which are similar to 'chroot'), which might be more
natural and less confusing...
- Instead of just warning when a module is being run on an 'unknown' distribution
perhaps we should not run that module in that case? Or we might want to start
reworking those modules so they will run on all distributions? Or if that is
not the case, then maybe we want to allow fully specified python paths for
modules and start encouraging packages of 'ubuntu' modules, packages of 'rhel'
specific modules that people can add instead of having them all under the
cloud-init 'root' tree? This might encourage more development of other modules
instead of having to go edit the cloud-init code to accomplish this.

TODO.rst Normal file

@ -0,0 +1,43 @@
==============================================
Things that cloud-init may do (better) someday
==============================================
- Consider making a ``failsafe`` ``DataSource``
- sets the user password, writing it to console
- Consider a ``previous`` ``DataSource``, if no other data source is
found, fall back to the ``previous`` one that worked.
- Rewrite ``cloud-init-query`` (currently not implemented)
- Possibly have a ``DataSource`` expose explicit fields:
- instance-id
- hostname
- mirror
- release
- ssh public keys
- Remove the conversion of the ubuntu network interface format
to a RH/fedora format and replace it with a top-level format that uses
the netcf library's format instead (which itself knows how to translate
into the specific formats). See for example `netcf`_ which seems to be
an active project that has this capability.
- Replace the ``apt*`` modules with variants that now use the distro classes
to perform distro independent packaging commands (wherever possible).
- Replace some of the LOG.debug calls with LOG.info where appropriate, since
right now there are really only 2 levels (``WARN`` and ``DEBUG``)
- Remove the ``cc_`` prefix for config modules, either have them fully
specified (ie ``cloudinit.config.resizefs``) or by default only look in
the ``cloudinit.config`` namespace for these modules (or have a combination
of the above), this avoids having to understand where your modules are
coming from (which can be altered by the current python inclusion path)
- Instead of just warning when a module is being run on an ``unknown``
distribution perhaps we should not run that module in that case? Or we might
want to start reworking those modules so they will run on all
distributions? Or if that is not the case, then maybe we want to allow
fully specified python paths for modules and start encouraging
packages of ``ubuntu`` modules, packages of ``rhel`` specific modules that
people can add instead of having them all under the cloud-init ``root``
tree? This might encourage more development of other modules instead of
having to go edit the cloud-init code to accomplish this.
.. _netcf: https://fedorahosted.org/netcf/


@ -22,8 +22,11 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import argparse
import json
import os
import sys
import time
import tempfile
import traceback
# This is more just for running from the bin folder so that
@ -126,11 +129,11 @@ def run_module_section(mods, action_name, section):
" under section '%s'") % (action_name, full_section_name)
sys.stderr.write("%s\n" % (msg))
LOG.debug(msg)
return 0
return []
else:
LOG.debug("Ran %s modules with %s failures",
len(which_ran), len(failures))
return len(failures)
return failures
def main_init(name, args):
@ -220,7 +223,10 @@ def main_init(name, args):
if existing_files:
LOG.debug("Exiting early due to the existence of %s files",
existing_files)
return 0
return (None, [])
else:
LOG.debug("Execution continuing, no previous run detected that"
" would allow us to stop early.")
else:
# The cache is not instance specific, so it has to be purged
# but we want 'start' to benefit from a cache if
@ -249,9 +255,9 @@ def main_init(name, args):
" Likely bad things to come!"))
if not args.force:
if args.local:
return 0
return (None, [])
else:
return 1
return (None, ["No instance datasource found."])
# Stage 6
iid = init.instancify()
LOG.debug("%s will now be targeting instance id: %s", name, iid)
@ -274,7 +280,7 @@ def main_init(name, args):
init.consume_data(PER_ALWAYS)
except Exception:
util.logexc(LOG, "Consuming user data failed!")
return 1
return (init.datasource, ["Consuming user data failed!"])
# Stage 8 - re-read and apply relevant cloud-config to include user-data
mods = stages.Modules(init, extract_fns(args))
@ -291,7 +297,7 @@ def main_init(name, args):
logging.setupLogging(mods.cfg)
# Stage 10
return run_module_section(mods, name, name)
return (init.datasource, run_module_section(mods, name, name))
def main_modules(action_name, args):
@ -315,14 +321,12 @@ def main_modules(action_name, args):
init.fetch()
except sources.DataSourceNotFoundException:
# There was no datasource found, there's nothing to do
util.logexc(LOG, ('Can not apply stage %s, '
'no datasource found!'
" Likely bad things to come!"), name)
print_exc(('Can not apply stage %s, '
'no datasource found!'
" Likely bad things to come!") % (name))
msg = ('Can not apply stage %s, no datasource found! Likely bad '
'things to come!' % name)
util.logexc(LOG, msg)
print_exc(msg)
if not args.force:
return 1
return [(msg)]
# Stage 3
mods = stages.Modules(init, extract_fns(args))
# Stage 4
@ -419,6 +423,110 @@ def main_single(name, args):
return 0
def atomic_write_json(path, data):
tf = None
try:
tf = tempfile.NamedTemporaryFile(dir=os.path.dirname(path),
delete=False)
tf.write(json.dumps(data, indent=1) + "\n")
tf.close()
os.rename(tf.name, path)
except Exception as e:
if tf is not None:
util.del_file(tf.name)
raise e
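The temporary file is deliberately created in the destination's own directory: os.rename is only atomic when source and target live on the same filesystem, so a reader of the target path sees either the old JSON or the complete new JSON, never a partial write. A usage sketch (hypothetical arguments):

    atomic_write_json("/var/lib/cloud/data/status.json",
                      {"v1": {"stage": "init"}})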
def status_wrapper(name, args, data_d=None, link_d=None):
if data_d is None:
data_d = os.path.normpath("/var/lib/cloud/data")
if link_d is None:
link_d = os.path.normpath("/run/cloud-init")
status_path = os.path.join(data_d, "status.json")
status_link = os.path.join(link_d, "status.json")
result_path = os.path.join(data_d, "result.json")
result_link = os.path.join(link_d, "result.json")
util.ensure_dirs((data_d, link_d,))
(_name, functor) = args.action
if name == "init":
if args.local:
mode = "init-local"
else:
mode = "init"
elif name == "modules":
mode = "modules-%s" % args.mode
else:
raise ValueError("unknown name: %s" % name)
modes = ('init', 'init-local', 'modules-config', 'modules-final')
status = None
if mode == 'init-local':
for f in (status_link, result_link, status_path, result_path):
util.del_file(f)
else:
try:
status = json.loads(util.load_file(status_path))
except:
pass
if status is None:
nullstatus = {
'errors': [],
'start': None,
'end': None,
}
status = {'v1': {}}
for m in modes:
status['v1'][m] = nullstatus.copy()
status['v1']['datasource'] = None
v1 = status['v1']
v1['stage'] = mode
v1[mode]['start'] = time.time()
atomic_write_json(status_path, status)
util.sym_link(os.path.relpath(status_path, link_d), status_link,
force=True)
try:
ret = functor(name, args)
if mode in ('init', 'init-local'):
(datasource, errors) = ret
if datasource is not None:
v1['datasource'] = str(datasource)
else:
errors = ret
v1[mode]['errors'] = [str(e) for e in errors]
except Exception as e:
v1[mode]['errors'] = [str(e)]
v1[mode]['end'] = time.time()
v1['stage'] = None
atomic_write_json(status_path, status)
if mode == "modules-final":
# write the 'finished' file
errors = []
for m in modes:
if v1[m]['errors']:
errors.extend(v1[m].get('errors', []))
atomic_write_json(result_path,
{'v1': {'datasource': v1['datasource'], 'errors': errors}})
util.sym_link(os.path.relpath(result_path, link_d), result_link,
force=True)
return len(v1[mode]['errors'])
def main():
parser = argparse.ArgumentParser()
@ -502,6 +610,8 @@ def main():
signal_handler.attach_handlers()
(name, functor) = args.action
if name in ("modules", "init"):
functor = status_wrapper
return util.log_time(logfunc=LOG.debug, msg="cloud-init mode '%s'" % name,
get_uptime=True, func=functor, args=(name, args))


@ -53,6 +53,7 @@ def handle(_name, cfg, cloud, log, args):
'version': cver,
'datasource': str(cloud.datasource),
}
subs.update(dict([(k.upper(), v) for k, v in subs.items()]))
util.multi_log("%s\n" % (templater.render_string(msg_in, subs)),
console=False, stderr=True, log=log)
except Exception:


@ -22,7 +22,6 @@ from cloudinit import util
import errno
import os
import re
import signal
import subprocess
import time


@ -1,8 +1,11 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2013 Yahoo! Inc.
# Copyright (C) 2014 Canonical, Ltd
#
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
# Author: Dustin Kirkland <kirkland@ubuntu.com>
# Author: Scott Moser <scott.moser@canonical.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
@ -17,12 +20,15 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import base64
import os
from StringIO import StringIO
from cloudinit.settings import PER_INSTANCE
from cloudinit import log as logging
from cloudinit import util
frequency = PER_INSTANCE
LOG = logging.getLogger(__name__)
def _decode(data, encoding=None):
@ -38,24 +44,50 @@ def _decode(data, encoding=None):
raise IOError("Unknown random_seed encoding: %s" % (encoding))
def handle(name, cfg, cloud, log, _args):
if not cfg or "random_seed" not in cfg:
log.debug(("Skipping module named %s, "
"no 'random_seed' configuration found"), name)
def handle_random_seed_command(command, required, env=None):
if not command and required:
raise ValueError("no command found but required=true")
elif not command:
LOG.debug("no command provided")
return
my_cfg = cfg['random_seed']
seed_path = my_cfg.get('file', '/dev/urandom')
seed_buf = StringIO()
seed_buf.write(_decode(my_cfg.get('data', ''),
encoding=my_cfg.get('encoding')))
cmd = command[0]
if not util.which(cmd):
if required:
raise ValueError("command '%s' not found but required=true", cmd)
else:
LOG.debug("command '%s' not found for seed_command", cmd)
return
util.subp(command, env=env, capture=False)
def handle(name, cfg, cloud, log, _args):
mycfg = cfg.get('random_seed', {})
seed_path = mycfg.get('file', '/dev/urandom')
seed_data = mycfg.get('data', '')
seed_buf = StringIO()
if seed_data:
seed_buf.write(_decode(seed_data, encoding=mycfg.get('encoding')))
# 'random_seed' is set up by the Azure datasource, and is already present in
# openstack's meta_data.json
metadata = cloud.datasource.metadata
if metadata and 'random_seed' in metadata:
seed_buf.write(metadata['random_seed'])
seed_data = seed_buf.getvalue()
if len(seed_data):
log.debug("%s: adding %s bytes of random seed entrophy to %s", name,
log.debug("%s: adding %s bytes of random seed entropy to %s", name,
len(seed_data), seed_path)
util.append_file(seed_path, seed_data)
command = mycfg.get('command', ['pollinate', '-q'])
req = mycfg.get('command_required', False)
try:
env = os.environ.copy()
env['RANDOM_SEED_FILE'] = seed_path
handle_random_seed_command(command=command, required=req, env=env)
except ValueError as e:
log.warn("handling random command [%s] failed: %s", command, e)
raise e
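For reference, a cloud-config stanza exercising the keys read above might look like this (a sketch; the 'command' default and the RANDOM_SEED_FILE environment variable follow handle() above):

    #cloud-config
    random_seed:
      file: /dev/urandom
      data: my seed string
      command: ['pollinate', '-q']
      command_required: false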


@ -35,6 +35,10 @@ import platform
import serial
# these high timeouts are necessary as read may read a lot of data.
READ_TIMEOUT = 60
WRITE_TIMEOUT = 10
SERIAL_PORT = '/dev/ttyS1'
if platform.system() == 'Windows':
SERIAL_PORT = 'COM2'
@ -76,7 +80,9 @@ class CepkoResult(object):
self.result = self._marshal(self.raw_result)
def _execute(self):
connection = serial.Serial(SERIAL_PORT)
connection = serial.Serial(port=SERIAL_PORT,
timeout=READ_TIMEOUT,
writeTimeout=WRITE_TIMEOUT)
connection.write(self.request)
return connection.readline().strip('\x04\n')


@ -45,8 +45,6 @@ def find_module(base_name, search_paths, required_attrs=None):
real_path.append(base_name)
full_path = '.'.join(real_path)
real_paths.append(full_path)
LOG.debug("Looking for modules %s that have attributes %s",
real_paths, required_attrs)
for full_path in real_paths:
mod = None
try:
@ -62,6 +60,4 @@ def find_module(base_name, search_paths, required_attrs=None):
found_attrs += 1
if found_attrs == len(required_attrs):
found_places.append(full_path)
LOG.debug("Found %s with attributes %s in %s", base_name,
required_attrs, found_places)
return found_places


@ -55,9 +55,6 @@ class UnknownMerger(object):
if not meth:
meth = self._handle_unknown
args.insert(0, method_name)
LOG.debug("Merging '%s' into '%s' using method '%s' of '%s'",
type_name, type_utils.obj_name(merge_with),
meth.__name__, self)
return meth(*args)
@ -84,8 +81,6 @@ class LookupMerger(UnknownMerger):
# First one that has that method/attr gets to be
# the one that will be called
meth = getattr(merger, meth_wanted)
LOG.debug(("Merging using located merger '%s'"
" since it had method '%s'"), merger, meth_wanted)
break
if not meth:
return UnknownMerger._handle_unknown(self, meth_wanted,


@ -18,12 +18,14 @@
import base64
import crypt
import fnmatch
import os
import os.path
import time
from xml.dom import minidom
from cloudinit import log as logging
from cloudinit.settings import PER_ALWAYS
from cloudinit import sources
from cloudinit import util
@ -53,14 +55,15 @@ BUILTIN_CLOUD_CONFIG = {
'disk_setup': {
'ephemeral0': {'table_type': 'mbr',
'layout': True,
'overwrite': False}
},
'overwrite': False},
},
'fs_setup': [{'filesystem': 'ext4',
'device': 'ephemeral0.1',
'replace_fs': 'ntfs'}]
'replace_fs': 'ntfs'}],
}
DS_CFG_PATH = ['datasource', DS_NAME]
DEF_EPHEMERAL_LABEL = 'Temporary Storage'
class DataSourceAzureNet(sources.DataSource):
@ -189,8 +192,17 @@ class DataSourceAzureNet(sources.DataSource):
LOG.warn("failed to get instance id in %s: %s", shcfgxml, e)
pubkeys = pubkeys_from_crt_files(fp_files)
self.metadata['public-keys'] = pubkeys
found_ephemeral = find_ephemeral_disk()
if found_ephemeral:
self.ds_cfg['disk_aliases']['ephemeral0'] = found_ephemeral
LOG.debug("using detected ephemeral0 of %s", found_ephemeral)
cc_modules_override = support_new_ephemeral(self.sys_cfg)
if cc_modules_override:
self.cfg['cloud_config_modules'] = cc_modules_override
return True
def device_name_to_device(self, name):
@ -200,6 +212,92 @@ class DataSourceAzureNet(sources.DataSource):
return self.cfg
def count_files(mp):
return len(fnmatch.filter(os.listdir(mp), '*[!cdrom]*'))
def find_ephemeral_part():
"""
Locate the default ephemeral0.1 device. This will be the first device
that has a LABEL of DEF_EPHEMERAL_LABEL and is an NTFS device. If Azure
gets more ephemeral devices, this logic will only identify the first
such device.
"""
c_label_devs = util.find_devs_with("LABEL=%s" % DEF_EPHEMERAL_LABEL)
c_fstype_devs = util.find_devs_with("TYPE=ntfs")
for dev in c_label_devs:
if dev in c_fstype_devs:
return dev
return None
def find_ephemeral_disk():
"""
Get the ephemeral disk.
"""
part_dev = find_ephemeral_part()
if part_dev and str(part_dev[-1]).isdigit():
return part_dev[:-1]
elif part_dev:
return part_dev
return None
def support_new_ephemeral(cfg):
"""
Windows Azure makes ephemeral devices ephemeral across boots; an ephemeral
device may be presented as a fresh device, or not.
Since the knowledge of when a disk is supposed to be plowed under is
specific to Windows Azure, the logic resides here in the datasource. When a
new ephemeral device is detected, cloud-init overrides the default
frequency for both disk-setup and mounts for the current boot only.
"""
device = find_ephemeral_part()
if not device:
LOG.debug("no default fabric formated ephemeral0.1 found")
return None
LOG.debug("fabric formated ephemeral0.1 device at %s", device)
file_count = 0
try:
file_count = util.mount_cb(device, count_files)
except:
return None
LOG.debug("fabric prepared ephmeral0.1 has %s files on it", file_count)
if file_count >= 1:
LOG.debug("fabric prepared ephemeral0.1 will be preserved")
return None
else:
# if device was already mounted, then we need to unmount it
# race conditions could allow for a check-then-unmount
# to have a false positive. So just unmount and then check.
try:
util.subp(['umount', device])
except util.ProcessExecutionError as e:
if device in util.mounts():
LOG.warn("Failed to unmount %s, will not reformat.", device)
LOG.debug("Failed umount: %s", e)
return None
LOG.debug("cloud-init will format ephemeral0.1 this boot.")
LOG.debug("setting disk_setup and mounts modules 'always' for this boot")
cc_modules = cfg.get('cloud_config_modules')
if not cc_modules:
return None
mod_list = []
for mod in cc_modules:
if mod in ("disk_setup", "mounts"):
mod_list.append([mod, PER_ALWAYS])
LOG.debug("set module '%s' to 'always' for this boot", mod)
else:
mod_list.append(mod)
return mod_list
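To make the override concrete, a hypothetical module list would be rewritten as follows (order preserved, matching the loop above, and assuming a fresh file-free ephemeral0.1 was detected):

    # given: cfg['cloud_config_modules'] == ['migrator', 'disk_setup', 'mounts', 'ssh']
    # support_new_ephemeral(cfg) returns:
    #     ['migrator', ['disk_setup', PER_ALWAYS], ['mounts', PER_ALWAYS], 'ssh']
    # so disk_setup and mounts run this boot even though they already ran per-instance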
def handle_set_hostname(enabled, hostname, cfg):
if not util.is_true(enabled):
return


@ -15,10 +15,13 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from base64 import b64decode
import os
import re
from cloudinit import log as logging
from cloudinit import sources
from cloudinit import util
from cloudinit.cs_utils import Cepko
LOG = logging.getLogger(__name__)
@ -39,12 +42,40 @@ class DataSourceCloudSigma(sources.DataSource):
self.ssh_public_key = ''
sources.DataSource.__init__(self, sys_cfg, distro, paths)
def is_running_in_cloudsigma(self):
"""
Uses dmidecode to detect if this instance of cloud-init is running
in CloudSigma's infrastructure.
"""
uname_arch = os.uname()[4]
if uname_arch.startswith("arm") or uname_arch == "aarch64":
# Disabling because dmidecode in CMD_DMI_SYSTEM crashes kvm process
LOG.debug("Disabling CloudSigma datasource on arm (LP: #1243287)")
return False
dmidecode_path = util.which('dmidecode')
if not dmidecode_path:
return False
LOG.debug("Determining hypervisor product name via dmidecode")
try:
cmd = [dmidecode_path, "--string", "system-product-name"]
system_product_name, _ = util.subp(cmd)
return 'cloudsigma' in system_product_name.lower()
except:
LOG.warn("Failed to get hypervisor product name via dmidecode")
return False
def get_data(self):
"""
Metadata is the whole server context and /meta/cloud-config is used
as userdata.
"""
dsmode = None
if not self.is_running_in_cloudsigma():
return False
try:
server_context = self.cepko.all().result
server_meta = server_context['meta']
@ -61,7 +92,13 @@ class DataSourceCloudSigma(sources.DataSource):
if dsmode == "disabled" or dsmode != self.dsmode:
return False
base64_fields = server_meta.get('base64_fields', '').split(',')
self.userdata_raw = server_meta.get('cloudinit-user-data', "")
if 'cloudinit-user-data' in base64_fields:
self.userdata_raw = b64decode(self.userdata_raw)
if 'cloudinit' in server_context.get('vendor_data', {}):
self.vendordata_raw = server_context["vendor_data"]["cloudinit"]
self.metadata = server_context
self.ssh_public_key = server_meta['ssh_public_key']


@ -57,7 +57,7 @@ class DataSourceNoCloud(sources.DataSource):
md = {}
if parse_cmdline_data(self.cmdline_id, md):
found.append("cmdline")
mydata.update(md)
mydata['meta-data'].update(md)
except:
util.logexc(LOG, "Unable to parse command line data")
return False


@ -4,11 +4,13 @@
# Copyright (C) 2012 Yahoo! Inc.
# Copyright (C) 2012-2013 CERIT Scientific Cloud
# Copyright (C) 2012-2013 OpenNebula.org
# Copyright (C) 2014 Consejo Superior de Investigaciones Cientificas
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
# Author: Vlastimil Holer <xholer@mail.muni.cz>
# Author: Javier Fontan <jfontan@opennebula.org>
# Author: Enol Fernandez <enolfc@ifca.unican.es>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
@ -22,6 +24,7 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import base64
import os
import pwd
import re
@ -417,6 +420,16 @@ def read_context_disk_dir(source_dir, asuser=None):
elif "USERDATA" in context:
results['userdata'] = context["USERDATA"]
# b64decode user data if necessary (default)
if 'userdata' in results:
encoding = context.get('USERDATA_ENCODING',
context.get('USER_DATA_ENCODING'))
if encoding == "base64":
try:
results['userdata'] = base64.b64decode(results['userdata'])
except TypeError:
LOG.warn("Failed base64 decoding of userdata")
# generate static /etc/network/interfaces
# only if there are any required context variables
# http://opennebula.org/documentation:rel3.8:cong#network_configuration


@ -170,8 +170,9 @@ class DataSourceSmartOS(sources.DataSource):
md = {}
ud = ""
if not os.path.exists(self.seed):
LOG.debug("Host does not appear to be on SmartOS")
if not device_exists(self.seed):
LOG.debug("No serial device '%s' found for SmartOS datasource",
self.seed)
return False
uname_arch = os.uname()[4]
@ -274,6 +275,11 @@ class DataSourceSmartOS(sources.DataSource):
b64=b64)
def device_exists(device):
"""Symplistic method to determine if the device exists or not"""
return os.path.exists(device)
def get_serial(seed_device, seed_timeout):
"""This is replaced in unit testing, allowing us to replace
serial.Serial with a mocked class.


@ -397,8 +397,8 @@ class Init(object):
mod = handlers.fixup_handler(mod)
types = c_handlers.register(mod)
if types:
LOG.debug("Added custom handler for %s from %s",
types, fname)
LOG.debug("Added custom handler for %s [%s] from %s",
types, mod, fname)
except Exception:
util.logexc(LOG, "Failed to register handler from %s",
fname)
@ -644,6 +644,8 @@ class Modules(object):
freq = mod.frequency
if not freq in FREQUENCIES:
freq = PER_INSTANCE
LOG.debug("Running module %s (%s) with frequency %s",
name, mod, freq)
# Use the configs logger and not our own
# TODO(harlowja): possibly check the module
@ -657,7 +659,7 @@ class Modules(object):
run_name = "config-%s" % (name)
cc.run(run_name, mod.handle, func_args, freq=freq)
except Exception as e:
util.logexc(LOG, "Running %s (%s) failed", name, mod)
util.logexc(LOG, "Running module %s (%s) failed", name, mod)
failures.append((name, e))
return (which_ran, failures)


@ -1395,8 +1395,10 @@ def get_builtin_cfg():
return obj_copy.deepcopy(CFG_BUILTIN)
def sym_link(source, link):
def sym_link(source, link, force=False):
LOG.debug("Creating symbolic link from %r => %r", link, source)
if force and os.path.exists(link):
del_file(link)
os.symlink(source, link)


@ -20,7 +20,7 @@ from distutils import version as vr
def version():
return vr.StrictVersion("0.7.5")
return vr.StrictVersion("0.7.6")
def version_string():


@ -69,7 +69,7 @@ users:
# no-user-group: When set to true, do not create a group named after the user.
# no-log-init: When set to true, do not initialize lastlog and faillog database.
# ssh-import-id: Optional. Import SSH ids
# ssh-authorized-key: Optional. Add key to user's ssh authorized keys file
# ssh-authorized-keys: Optional. [list] Add keys to user's authorized keys file
# sudo: Defaults to none. Set to the sudo string you want to use, i.e.
# ALL=(ALL) NOPASSWD:ALL. To add multiple rules, use the following
# format.


@ -23,6 +23,10 @@ You can provide user-data to the VM using the dedicated `meta field`_ in the `se
header could be omitted. However since this is a raw-text field you could provide any of the valid
`config formats`_.
You have the option to encode your user-data using Base64. In order to do that you have to add the
``cloudinit-user-data`` field to ``base64_fields``. The latter is a comma-separated field listing
all the meta fields with base64-encoded values.
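For illustration, the encoded value could be produced with Python's standard library (a minimal sketch; the result goes into the ``cloudinit-user-data`` meta field, which must also be listed in ``base64_fields``):

    from base64 import b64encode

    user_data = "#cloud-config\nhostname: example\n"
    print b64encode(user_data)  # value for the 'cloudinit-user-data' field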
If your user-data does not need an internet connection you can create a
`meta field`_ named ``cloudinit-dsmode`` in the `server context`_ and set its value to "local".
If this field does not exist the default value is "net".

doc/status.txt Normal file

@ -0,0 +1,53 @@
cloud-init will keep a 'status' file up to date for other applications
wishing to use it to determine cloud-init status.
It will manage 2 files:
status.json
result.json
The files will be written to /var/lib/cloud/data/.
A symlink will be created in /run/cloud-init. The link from /run is to ensure
that if the file exists, it is not stale for this boot.
status.json's format is:
{
'v1': {
'init': {
'errors': [] # list of strings for each error that occurred
'start': float # time.time() that this stage started or None
'end': float # time.time() that this stage finished or None
},
'init-local': {
'errors': [], 'start': <float>, 'end': <float> # (same as 'init' above)
},
'modules-config': {
'errors': [], 'start': <float>, 'end': <float> # (same as 'init' above)
},
'modules-final': {
'errors': [], 'start': <float>, 'end': <float> # (same as 'init' above)
},
'datasource': string describing datasource found or None
'stage': string representing stage that is currently running
('init', 'init-local', 'modules-final', 'modules-config', None)
if None, then no stage is running. Reader must read the start/end
of each of the above stages to determine the state.
}
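For example, a watcher could report the stage currently in progress (a minimal sketch, in the same spirit as the result.json snippet below):

    import json
    v1 = json.load(open("/run/cloud-init/status.json"))['v1']
    if v1['stage']:
        print "cloud-init is running stage '%s'" % v1['stage']
    else:
        print "no stage running; check each stage's start/end times"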
result.json's format is:
{
'v1': {
'datasource': string describing the datasource found
'errors': [] # list of errors reported
}
}
Thus, to determine if cloud-init is finished:
fin = "/run/cloud-init/result.json"
if os.path.exists(fin):
ret = json.load(open(fin, "r"))
if len(ret['v1']['errors']):
print "Finished with errors:" + "\n".join(ret['v1']['errors'])
else:
print "Finished no errors"
else:
print "Not Finished"


@ -52,6 +52,30 @@ if PY26:
standardMsg = standardMsg % (value)
self.fail(self._formatMessage(msg, standardMsg))
def assertDictContainsSubset(self, expected, actual, msg=None):
missing = []
mismatched = []
for k, v in expected.iteritems():
if k not in actual:
missing.append(k)
elif actual[k] != v:
mismatched.append('%r, expected: %r, actual: %r'
% (k, v, actual[k]))
if len(missing) == 0 and len(mismatched) == 0:
return
standardMsg = ''
if missing:
standardMsg = 'Missing: %r' % ','.join(m for m in missing)
if mismatched:
if standardMsg:
standardMsg += '; '
standardMsg += 'Mismatched values: %s' % ','.join(mismatched)
self.fail(self._formatMessage(msg, standardMsg))
else:
class TestCase(unittest.TestCase):
pass


@ -1,14 +1,10 @@
import logging
import os
import StringIO
import sys
from mocker import MockerTestCase, ANY, ARGS, KWARGS
from mocker import MockerTestCase, ARGS, KWARGS
from cloudinit import handlers
from cloudinit import helpers
from cloudinit import importer
from cloudinit import log
from cloudinit import settings
from cloudinit import url_helper
from cloudinit import util


@ -1,9 +1,11 @@
# coding: utf-8
from unittest import TestCase
import copy
from cloudinit.cs_utils import Cepko
from cloudinit.sources import DataSourceCloudSigma
from tests.unittests import helpers as test_helpers
SERVER_CONTEXT = {
"cpu": 1000,
@ -19,21 +21,27 @@ SERVER_CONTEXT = {
"smp": 1,
"tags": ["much server", "very performance"],
"uuid": "65b2fb23-8c03-4187-a3ba-8b7c919e8890",
"vnc_password": "9e84d6cb49e46379"
"vnc_password": "9e84d6cb49e46379",
"vendor_data": {
"location": "zrh",
"cloudinit": "#cloud-config\n\n...",
}
}
class CepkoMock(Cepko):
result = SERVER_CONTEXT
def __init__(self, mocked_context):
self.result = mocked_context
def all(self):
return self
class DataSourceCloudSigmaTest(TestCase):
class DataSourceCloudSigmaTest(test_helpers.TestCase):
def setUp(self):
self.datasource = DataSourceCloudSigma.DataSourceCloudSigma("", "", "")
self.datasource.cepko = CepkoMock()
self.datasource.is_running_in_cloudsigma = lambda: True
self.datasource.cepko = CepkoMock(SERVER_CONTEXT)
self.datasource.get_data()
def test_get_hostname(self):
@ -57,3 +65,34 @@ class DataSourceCloudSigmaTest(TestCase):
def test_user_data(self):
self.assertEqual(self.datasource.userdata_raw,
SERVER_CONTEXT['meta']['cloudinit-user-data'])
def test_encoded_user_data(self):
encoded_context = copy.deepcopy(SERVER_CONTEXT)
encoded_context['meta']['base64_fields'] = 'cloudinit-user-data'
encoded_context['meta']['cloudinit-user-data'] = 'aGkgd29ybGQK'
self.datasource.cepko = CepkoMock(encoded_context)
self.datasource.get_data()
self.assertEqual(self.datasource.userdata_raw, b'hi world\n')
def test_vendor_data(self):
self.assertEqual(self.datasource.vendordata_raw,
SERVER_CONTEXT['vendor_data']['cloudinit'])
def test_lack_of_vendor_data(self):
stripped_context = copy.deepcopy(SERVER_CONTEXT)
del stripped_context["vendor_data"]
self.datasource = DataSourceCloudSigma.DataSourceCloudSigma("", "", "")
self.datasource.cepko = CepkoMock(stripped_context)
self.datasource.get_data()
self.assertIsNone(self.datasource.vendordata_raw)
def test_lack_of_cloudinit_key_in_vendor_data(self):
stripped_context = copy.deepcopy(SERVER_CONTEXT)
del stripped_context["vendor_data"]["cloudinit"]
self.datasource = DataSourceCloudSigma.DataSourceCloudSigma("", "", "")
self.datasource.cepko = CepkoMock(stripped_context)
self.datasource.get_data()
self.assertIsNone(self.datasource.vendordata_raw)


@ -15,7 +15,6 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import unittest
import httpretty
import re
@ -25,6 +24,8 @@ from cloudinit import settings
from cloudinit import helpers
from cloudinit.sources import DataSourceGCE
from tests.unittests import helpers as test_helpers
GCE_META = {
'instance/id': '123',
'instance/zone': 'foo/bar',
@ -54,7 +55,7 @@ def _request_callback(method, uri, headers):
return (404, headers, '')
class TestDataSourceGCE(unittest.TestCase):
class TestDataSourceGCE(test_helpers.TestCase):
def setUp(self):
self.ds = DataSourceGCE.DataSourceGCE(


@ -3,7 +3,6 @@ import os
from cloudinit.sources import DataSourceMAAS
from cloudinit import url_helper
from cloudinit import util
from tests.unittests.helpers import populate_dir
import mocker


@ -4,6 +4,7 @@ from cloudinit import util
from mocker import MockerTestCase
from tests.unittests.helpers import populate_dir
from base64 import b64encode
import os
import pwd
@ -164,10 +165,31 @@ class TestOpenNebulaDataSource(MockerTestCase):
public_keys.append(SSH_KEY % (c + 1,))
def test_user_data(self):
def test_user_data_plain(self):
for k in ('USER_DATA', 'USERDATA'):
my_d = os.path.join(self.tmp, k)
populate_context_dir(my_d, {k: USER_DATA})
populate_context_dir(my_d, {k: USER_DATA,
'USERDATA_ENCODING': ''})
results = ds.read_context_disk_dir(my_d)
self.assertTrue('userdata' in results)
self.assertEqual(USER_DATA, results['userdata'])
def test_user_data_encoding_required_for_decode(self):
b64userdata = b64encode(USER_DATA)
for k in ('USER_DATA', 'USERDATA'):
my_d = os.path.join(self.tmp, k)
populate_context_dir(my_d, {k: b64userdata})
results = ds.read_context_disk_dir(my_d)
self.assertTrue('userdata' in results)
self.assertEqual(b64userdata, results['userdata'])
def test_user_data_base64_encoding(self):
for k in ('USER_DATA', 'USERDATA'):
my_d = os.path.join(self.tmp, k)
populate_context_dir(my_d, {k: b64encode(USER_DATA),
'USERDATA_ENCODING': 'base64'})
results = ds.read_context_disk_dir(my_d)
self.assertTrue('userdata' in results)


@ -24,10 +24,7 @@
import base64
from cloudinit import helpers as c_helpers
from cloudinit import stages
from cloudinit import util
from cloudinit.sources import DataSourceSmartOS
from cloudinit.settings import (PER_INSTANCE)
from tests.unittests import helpers
import os
import os.path
@ -174,6 +171,7 @@ class TestSmartOSDataSource(helpers.FilesystemMockingTestCase):
self.apply_patches([(mod, 'get_serial', _get_serial)])
self.apply_patches([(mod, 'dmi_data', _dmi_data)])
self.apply_patches([(os, 'uname', _os_uname)])
self.apply_patches([(mod, 'device_exists', lambda d: True)])
dsrc = mod.DataSourceSmartOS(sys_cfg, distro=None,
paths=self.paths)
return dsrc


@ -42,10 +42,32 @@ class TestRandomSeed(t_help.TestCase):
def setUp(self):
super(TestRandomSeed, self).setUp()
self._seed_file = tempfile.mktemp()
self.unapply = []
# by default 'which' has nothing in its path
self.apply_patches([(util, 'which', self._which)])
self.apply_patches([(util, 'subp', self._subp)])
self.subp_called = []
self.whichdata = {}
def tearDown(self):
apply_patches([i for i in reversed(self.unapply)])
util.del_file(self._seed_file)
def apply_patches(self, patches):
ret = apply_patches(patches)
self.unapply += ret
def _which(self, program):
return self.whichdata.get(program)
def _subp(self, *args, **kwargs):
# supports subp calling with cmd as args or kwargs
if 'args' not in kwargs:
kwargs['args'] = args[0]
self.subp_called.append(kwargs)
return
def _compress(self, text):
contents = StringIO()
gz_fh = gzip.GzipFile(mode='wb', fileobj=contents)
@ -148,3 +170,56 @@ class TestRandomSeed(t_help.TestCase):
cc_seed_random.handle('test', cfg, c, LOG, [])
contents = util.load_file(self._seed_file)
self.assertEquals('tiny-tim-was-here-so-was-josh', contents)
def test_seed_command_not_provided_pollinate_available(self):
c = self._get_cloud('ubuntu', {})
self.whichdata = {'pollinate': '/usr/bin/pollinate'}
cc_seed_random.handle('test', {}, c, LOG, [])
subp_args = [f['args'] for f in self.subp_called]
self.assertIn(['pollinate', '-q'], subp_args)
def test_seed_command_not_provided_pollinate_not_available(self):
c = self._get_cloud('ubuntu', {})
self.whichdata = {}
cc_seed_random.handle('test', {}, c, LOG, [])
# subp should not have been called as which would say not available
self.assertEquals(self.subp_called, list())
def test_unavailable_seed_command_and_required_raises_error(self):
c = self._get_cloud('ubuntu', {})
self.whichdata = {}
self.assertRaises(ValueError, cc_seed_random.handle,
'test', {'random_seed': {'command_required': True}}, c, LOG, [])
def test_seed_command_and_required(self):
c = self._get_cloud('ubuntu', {})
self.whichdata = {'foo': 'foo'}
cfg = {'random_seed': {'command_required': True, 'command': ['foo']}}
cc_seed_random.handle('test', cfg, c, LOG, [])
self.assertIn(['foo'], [f['args'] for f in self.subp_called])
def test_file_in_environment_for_command(self):
c = self._get_cloud('ubuntu', {})
self.whichdata = {'foo': 'foo'}
cfg = {'random_seed': {'command_required': True, 'command': ['foo'],
'file': self._seed_file}}
cc_seed_random.handle('test', cfg, c, LOG, [])
# this just insists that the first time subp was called,
# RANDOM_SEED_FILE was set up correctly in the environment
subp_env = [f['env'] for f in self.subp_called]
self.assertEqual(subp_env[0].get('RANDOM_SEED_FILE'), self._seed_file)
def apply_patches(patches):
ret = []
for (ref, name, replace) in patches:
if replace is None:
continue
orig = getattr(ref, name)
setattr(ref, name, replace)
ret.append((ref, name, orig))
return ret


@ -1,4 +1,3 @@
from cloudinit import helpers
from cloudinit import util
from cloudinit.config import cc_yum_add_repo