Merge rework branch in [Joshua Harlow]

- unified binary that activates the various stages
   - Now using argparse + subcommands to specify the various CLI options
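The subcommand wiring can be sketched with argparse roughly like this (the stage names and flags here are illustrative, not the binary's exact interface):

```python
import argparse

def build_parser():
    # One top-level parser with a subparser per stage; the chosen
    # stage's name is attached to the namespace via set_defaults().
    parser = argparse.ArgumentParser(prog='cloud-init')
    subparsers = parser.add_subparsers(dest='subcommand')
    for stage in ('init', 'config', 'final'):
        sub = subparsers.add_parser(stage)
        sub.add_argument('--debug', action='store_true', default=False)
        sub.set_defaults(action=stage)
    return parser

args = build_parser().parse_args(['init', '--debug'])
```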
 - a stage module that clearly separates the stages of the different
   components (it also describes how they are used, and in what order, by
   the new unified binary)
 - user_data is now a module that just does user data processing while the
   actual activation and 'handling' of the processed user data is done via
   a separate set of files (and modules) with the main 'init' stage being the
   controller of this
    - creation of boot_hook, cloud_config, shell_script, upstart_job version 2
      modules (with classes that perform their functionality) instead of that
      functionality being attached to the cloudinit object (which reduces
      reuse, limits future functionality, and makes testing harder)
 - removal of the global config that defined paths and shared config; this is
   now done via objects, making unit testing and global side-effects a
   non-issue
 - creation of a 'helpers.py'
   - this contains an abstraction for the 'lock' like objects that the various
     module/handler running stages use to avoid re-running a given
     module/handler for a given frequency. this separates the lock from the
     actual usage of that object (thus helpful for testing, and it draws clear
     lines between usage and how the actual job is accomplished)
     - a common 'runner' class is the main entrypoint using these locks to
       run function objects passed in (along with their arguments) and their
       frequency
   - add in a 'paths' object that provides access to the previously global
     and/or config based paths (thus providing a single entrypoint object/type
     that provides path information)
       - this also adds in the ability to change the path when constructing 
       that path 'object' and adding in additional config that can be used to 
       alter the root paths of 'joins' (useful for testing or possibly useful
       in chroots?)
         - config options now available that can alter the 'write_root' and the
           'read_root' when backing code uses the paths join() function
   - add a config parser subclass that will automatically add unknown sections
     and return default values (instead of throwing exceptions for these cases)
   - a new config merging class that will be the central object that knows
     how to do the common configuration merging from the various configuration
     sources. The order is the following:
     - cli config files override environment config files
       which override instance configs which override datasource
       configs which override base configuration which overrides
       default configuration.
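A minimal sketch of that precedence, merging plain dicts from lowest to highest priority (the real merging class is recursive; the config keys used here are made up):

```python
def merge_cfgs(*cfgs):
    # Later dicts take precedence; a shallow merge is enough to
    # illustrate the ordering described above.
    merged = {}
    for cfg in cfgs:
        merged.update(cfg)
    return merged

# Lowest to highest priority: defaults, base configuration,
# datasource configs, instance configs, environment config files,
# cli config files.
merged = merge_cfgs(
    {'log_level': 'WARN', 'locale': 'en_US'},   # default configuration
    {'log_level': 'INFO'},                      # base configuration
    {'locale': 'en_GB'},                        # datasource configs
    {},                                         # instance configs
    {},                                         # environment config files
    {'log_level': 'DEBUG'},                     # cli config files
)
```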
 - remove the passing around of the 'cloudinit' object as a 'cloud' variable
   and instead pass around an 'interface' object that can be given to modules
   and handlers as their cloud access layer while the backing of that
   object can be varied (good for abstraction and testing)
 - use a single set of functions to do importing of modules
 - add a function which will search for a given set of module names with
   a given set of attributes and return those which are found
 - refactor logging so that instead of using a single top-level 'log',
   each component/module can use its own logger (if desired); this
   should be backwards compatible with handlers and config modules that used
   the passed-in logger (it's still passed in)
   - ensure that in all places where exceptions are caught, and where
     applicable, the util logexc() is called, so that no exceptions that may
     occur are dropped without first being logged (where it makes sense for
     this to happen)
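The per-module logger pattern, with the backwards-compatible passed-in logger, might look like this (the module name and handle() signature are illustrative):

```python
import logging

# Each module asks for its own named logger instead of sharing one
# global 'log'; handlers attached at the root still see all records.
LOG = logging.getLogger('cloudinit.config.cc_example')

def handle(name, cfg, log=None):
    # Backwards compatible: prefer the logger handed in by the caller,
    # otherwise fall back to this module's own logger.
    logger = log if log is not None else LOG
    logger.debug("running %s with config %s", name, cfg)
    return logger
```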
 - add a 'requires' file that lists cloud-init dependencies
   - applying it in package creation (bdeb and brpm) as well as using it
     in the modified setup.py to ensure dependencies are installed when
     using that method of packaging
 - add a 'version.py' that lists the active version (in code) so that code
   inside cloud-init can report the version in messaging and other config files
 - cleanup of subprocess usage so that all subprocess calls go through the
   subp() utility method, which now has an exception type that will provide
   detailed information on python 2.6 and 2.7
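The idea behind subp() can be sketched as one wrapper around subprocess whose exception carries the command, exit code and captured output (this mirrors the described behaviour; the real helper's fields and signature may differ):

```python
import subprocess

class ProcessExecutionError(IOError):
    # Carries the command, exit code and captured output so a failure
    # is debuggable from the exception alone (name mirrors the
    # description above; details here are assumptions).
    def __init__(self, cmd, exit_code, stdout, stderr):
        IOError.__init__(self, "Unexpected exit code %s from %r"
                               % (exit_code, cmd))
        self.cmd = cmd
        self.exit_code = exit_code
        self.stdout = stdout
        self.stderr = stderr

def subp(args, rcs=(0,)):
    # Single choke point for running commands: capture both streams
    # and raise when the return code is not in the allowed set.
    proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    (out, err) = proc.communicate()
    if proc.returncode not in rcs:
        raise ProcessExecutionError(args, proc.returncode, out, err)
    return (out, err)
```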
 - forced all code loading, moving, chmod, writing files and other system
   level actions to go through a standard set of util functions; this greatly
   helps in debugging and determining exactly which system actions cloud-init
   is performing
 - switching out the templating engine cheetah for tempita since tempita has
   no external dependencies (minus python) while cheetah has many dependencies
   which makes it more difficult to adopt cloud-init in distros that may not
   have those dependencies
 - adjust url fetching and url trying to go through a single function that
   reads urls in the new 'url helper' file; this helps in tracing, debugging
   and knowing which urls are being called and/or posted to from within
   cloud-init code
   - add in the sending of a 'User-Agent' header for all urls fetched that
     do not provide their own header mapping; derive this user-agent from
     the following template, 'Cloud-Init/{version}', where the version is the
     cloud-init version number
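The header defaulting described above can be sketched like so (using urllib from the standard library for illustration; the version string is hardcoded here, where the real code would read it from version.py):

```python
import urllib.request

VERSION = '0.7.0'  # assumption: the real code derives this from version.py

def build_request(url, headers=None):
    # Fill in the default User-Agent only when the caller did not
    # supply its own header mapping with one.
    headers = dict(headers or {})
    if not any(h.lower() == 'user-agent' for h in headers):
        headers['User-Agent'] = 'Cloud-Init/%s' % VERSION
    return urllib.request.Request(url, headers=headers)
```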
 - using prettytable for netinfo 'debug' printing since it provides a standard
   and defined output that should be easier to parse than a custom format
 - add a set of distro-specific classes that handle distro-specific actions
   that modules and/or handler code can use as needed; this is organized into
   a base abstract class with child classes that implement the shared
   functionality. config determines exactly which subclass to load, so it can
   be easily extended as needed.
   - current functionality
      - network interface config file writing
      - hostname setting/updating
       - locale/timezone setting
      - updating of /etc/hosts (with templates or generically)
      - package commands (ie installing, removing)/mirror finding
      - interface up/down activating
   - implemented a debian + ubuntu subclass
   - implemented a redhat + fedora subclass
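The shape of that hierarchy, sketched with the 'abc' module (class names, the method shown, and the config mapping are simplified stand-ins for the real distro classes):

```python
import abc

class Distro(abc.ABC):
    # Shared behaviour lives on the base class; distro-specific
    # commands come from the subclass chosen via configuration.
    @abc.abstractmethod
    def package_command(self, command, pkgs):
        raise NotImplementedError()

    def install_packages(self, pkgs):
        return self.package_command('install', pkgs)

class DebianDistro(Distro):
    def package_command(self, command, pkgs):
        return ['apt-get', command] + list(pkgs)

class RedHatDistro(Distro):
    def package_command(self, command, pkgs):
        return ['yum', command] + list(pkgs)

# Config names which subclass to load, so new distros are just
# new entries here plus a new subclass.
DISTROS = {'debian': DebianDistro, 'rhel': RedHatDistro}
```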
 - adjust the root 'cloud.cfg' file to now have distribution/path specific
   configuration values in it. these special configs are merged as the normal
   config is, but the system level config is not passed into modules/handlers
   - modules/handlers must go through the path and distro objects instead
 - have the cloudstack datasource test the url before calling into boto to 
   avoid the long wait for boto to finish retrying and finally fail when
   the gateway meta-data address is unavailable
 - add a simple mock ec2 meta-data python based http server that can serve a
   very simple set of ec2 meta-data back to callers
      - useful for testing or for understanding what the ec2 meta-data 
        service can provide in terms of data or functionality
 - for ssh key and authorized key file parsing add in classes and util functions
   that maintain the state of individual lines, allowing for a clearer 
   separation of parsing and modification (useful for testing and tracing)
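One way such line-preserving parsing can look (a simplified sketch; the real classes also handle key options and more key types):

```python
class AuthKeyLine:
    # Keeps the original source line so comments and lines we cannot
    # parse survive a rewrite of the file untouched.
    def __init__(self, source, keytype=None, base64=None, comment=None):
        self.source = source
        self.keytype = keytype
        self.base64 = base64
        self.comment = comment

    def valid(self):
        return self.keytype is not None and self.base64 is not None


def parse_authorized_key(line):
    stripped = line.strip()
    if not stripped or stripped.startswith('#'):
        # Blank lines and comments carry no key but are kept as-is.
        return AuthKeyLine(line)
    parts = stripped.split(None, 2)
    if len(parts) < 2:
        return AuthKeyLine(line)
    comment = parts[2] if len(parts) == 3 else None
    return AuthKeyLine(line, keytype=parts[0], base64=parts[1],
                       comment=comment)
```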
 - add a set of 'base' init.d scripts that can be used on systems that do
   not have full upstart or systemd support (or support that does not match
   the standard fedora/ubuntu implementation)
   - currently these are being tested on RHEL 6.2
 - separate the datasources into their own subdirectory (instead of being
   a top-level item); this matches how config 'modules' and user-data 'handlers'
   are also in their own subdirectory (thus helping new developers and others
   understand the code layout in a quicker manner)
 - add the building of rpms based off a new cli tool and template 'spec' file
   that will templatize and perform the necessary commands to create a source
   and binary package to be used with a cloud-init install on a 'rpm' supporting
   system
   - uses the new standard set of requires and converts those pypi requirements
     into a local set of package requirements (that are known to exist on RHEL
     systems but should also exist on fedora systems)
 - adjust the bdeb builder to be a python script (instead of a shell script) and
   make its 'control' file a template that takes in the standard set of pypi 
   dependencies and uses a local mapping (known to work on ubuntu) to create the
   packages set of dependencies (that should also work on ubuntu-like systems)
 - pythonify a large set of various pieces of code
   - remove wrapping return statements with () when it has no effect
   - upper case all constants used
   - correctly 'case' class and method names (where applicable)
   - use os.path.join (and similar commands) instead of custom path creation
    - use 'is None' instead of the frowned upon '== None', which picks up a
      larger set of 'true' cases than is typically desired (ie for objects
      that have their own equality)
   - use context managers on locks, tempdir, chdir, file, selinux, umask, 
     unmounting commands so that these actions do not have to be closed and/or
     cleaned up manually in finally blocks, which is typically not done and will
     eventually be a bug in the future
    - use the 'abc' module for abstract base classes where possible
      - applied in the datasource root class, the distro root class, and the
        user-data v2 root class
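As an example of the context-manager conversion, a chdir helper that restores the previous directory even when the body raises (a generic sketch, not the exact util implementation):

```python
import os
from contextlib import contextmanager

@contextmanager
def chdir(path):
    # The finally clause restores the old directory even on error,
    # which ad-hoc cleanup code frequently forgets to do.
    old = os.getcwd()
    os.chdir(path)
    try:
        yield path
    finally:
        os.chdir(old)
```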
 - when loading yaml, check that the 'root' type matches a predefined set of
   valid types (typically just 'dict') and throw a type error if a mismatch
   occurs, this seems to be a good idea to do when loading user config files
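The root-type check reduces to something like this, applied to whatever the yaml loader returns (the function name is illustrative):

```python
def check_root_type(loaded, allowed=(dict,)):
    # Reject documents whose top-level type is not one of the expected
    # ones (typically just dict for config files), rather than letting
    # a stray str or list propagate and fail much later.
    if not isinstance(loaded, tuple(allowed)):
        raise TypeError("Unexpected root type: %s"
                        % (type(loaded).__name__))
    return loaded
```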
 - when forking a long running task (ie resizing a filesystem) use a new util
   function that will fork and then call a callback, instead of having to
   implement all that code in a non-shared location (thus allowing it to be
   used by others in the future)
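The fork-and-callback helper can be sketched as follows (POSIX-only; the name and error handling are assumptions based on the description):

```python
import os

def fork_cb(child_cb, *args):
    # The parent returns the child's pid immediately while the child
    # runs the callback and exits, so long-running work (like a
    # filesystem resize) does not block the caller.
    fid = os.fork()
    if fid == 0:
        try:
            child_cb(*args)
            os._exit(0)
        except Exception:
            os._exit(1)
    return fid
```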
 - when writing out filenames, go through a util function that will attempt to
   ensure that the given filename is 'filesystem' safe by replacing '/' with
   '_' and removing characters which do not match a given whitelist of allowed
   filename characters
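A sketch of that sanitizing function (the exact whitelist here is an assumption for illustration):

```python
import string

# Characters considered filesystem-safe; anything else is dropped.
SAFE_FILENAME_CHARS = string.ascii_letters + string.digits + '_.-'

def clean_filename(name):
    # '/' becomes '_' so path-like names stay readable; characters
    # not on the whitelist are simply removed.
    name = name.replace('/', '_')
    return ''.join(c for c in name if c in SAFE_FILENAME_CHARS)
```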
 - for the varying usages of the 'blkid' command make a function in the util
   module that can be used as the single point of entry for interaction with
   that command (and its results) instead of having X separate implementations
 - place the RFC 2822 time formatting and uptime repeated pieces of code in the
   util module as a set of functions named 'time_rfc2822'/'uptime'
 - separate the pylint+pep8 calling from one tool into two individual tools so
   that they can be called independently; add makefile sections that can be
   used to call these independently
 - remove the support for the old style config that was previously located in
   '/etc/ec2-init/ec2-config.cfg', no longer supported!
 - instead of using an altered config parser that added its own 'dummy' section
   in the 'mcollective' module, use configobj, which handles the parsing of
   config without sections better (and it also maintains comments instead of
   removing them)
 - use the new defaulting config parser (that will not raise errors on sections
   that do not exist or return errors when values are fetched that do not exist)
   in the 'puppet' module
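The defaulting behaviour can be sketched on top of the standard library parser (shown with Python 3's configparser for illustration; the original targeted Python 2's ConfigParser):

```python
from configparser import ConfigParser, NoOptionError, NoSectionError

class DefaultingConfigParser(ConfigParser):
    # Returns a default instead of raising on missing sections or
    # options, and creates sections on demand when values are set.
    DEF_BASE = None

    def get(self, section, option, **kwargs):
        try:
            return ConfigParser.get(self, section, option, **kwargs)
        except (NoSectionError, NoOptionError):
            return self.DEF_BASE

    def set(self, section, option, value=None):
        if not self.has_section(section) and section.lower() != 'default':
            self.add_section(section)
        ConfigParser.set(self, section, option, value)
```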
 - for config 'modules' add in the ability for the module to provide a list of
   distro names which it is known to work with; if, when run, the name of the
   distro in use does not match one of those in this list, a warning will be
   written out saying that this module may not work correctly on this
   distribution
 - for all dynamically imported modules ensure that they are fixed up before 
   they are used by ensuring that they have certain attributes, if they do not
   have those attributes they will be set to a sensible set of defaults instead
 - adjust all 'config' modules and handlers to use the adjusted util functions
   and the new distro objects where applicable so that those pieces of code can 
   benefit from the unified and enhanced functionality being provided in that
   util module
 - fix a potential bug whereby when a #includeonce was encountered it would
   enable checking of urls against a cache, if later a #include was encountered
   it would continue checking against that cache, instead of refetching (which
   would likely be the expected case)
 - add an openstack/nova based pep8 extension utility ('hacking.py') that
   allows for custom checks (along with the standard pep8 checks) to occur
   when running 'make pep8' and its derivatives
This commit is contained in:
Scott Moser 2012-07-06 17:19:37 -04:00
commit 1516bfb51d
140 changed files with 11117 additions and 5874 deletions

ChangeLog
0.7.0:
0.6.4:
- support relative path in AuthorizedKeysFile (LP: #970071).
- make apt-get update run with --quiet (suitable for logging) (LP: #1012613)

Makefile
CWD=$(shell pwd)
PY_FILES=$(shell find cloudinit bin -name "*.py")
PY_FILES+="bin/cloud-init"

all: test

pep8:
	$(CWD)/tools/run-pep8 $(PY_FILES)

pylint:
	$(CWD)/tools/run-pylint $(PY_FILES)

pyflakes:
	pyflakes $(PY_FILES)

test:
	nosetests $(noseopts) tests/unittests/

2to3:
	2to3 $(PY_FILES)

clean:
	rm -rf /var/log/cloud-init.log /var/lib/cloud/

rpm:
	cd packages && ./brpm

deb:
	cd packages && ./bddeb

.PHONY: test pylint pyflakes 2to3 clean pep8 rpm deb

Requires (new file)
# Pypi requirements for cloud-init to work
# Used for templating any files or strings that are considered
# to be templates, not cheetah since it pulls in a lot of extra libs.
# This one is pretty dinky and does what we want (variable substitution)
Tempita
# This is used for any pretty printing of tabular data.
PrettyTable
# This one is currently only used by the MAAS datasource. If that
# datasource is removed, this is no longer needed
oauth
# This is used to fetch the ec2 metadata into an easily
# parseable format, instead of having cloud-init perform
# those same fetches and decodes and signing (...) that ec2 requires.
boto
# This is only needed for places where we need to support configs in a
# manner that the built-in config parser is not sufficient for (ie
# when we need to preserve comments, or do not have a top-level
# section)...
configobj
# All new style configurations are in the yaml format
pyyaml
# The new main entrypoint uses argparse instead of optparse
argparse

TODO
- Consider a 'failsafe' DataSource
  If all others fail, setting a default that
  - sets the user password, writing it to console
  - logs to console that this happened
- Consider a 'previous' DataSource
  If no other data source is found, fall back to the 'previous' one
  keep an indication of what instance id that is in /var/lib/cloud
- Rewrite "cloud-init-query" (currently not implemented)
  Possibly have DataSource and cloudinit expose explicit fields
  - instance-id
  - hostname
  - mirror
  - release
  - ssh public keys
- Remove the conversion of the ubuntu network interface format to a
  RH/fedora format and replace it with a top-level format that uses the
  netcf library's format instead (which itself knows how to translate
  into the specific formats)
- Replace the 'apt*' modules with variants that use the distro classes
  to perform distro-independent packaging commands (where possible)
- Canonicalize the semaphore/lock name for modules and user data handlers
  a. It is most likely a bug that currently exists: if a module in config
     alters its name and it has already run, then it will run again since
     the lock name hasn't been canonicalized
- Replace some of the LOG.debug calls with LOG.info where appropriate;
  right now there are really only 2 levels in use (WARN and DEBUG)
- Remove the 'cc_' prefix for config modules: either have them fully
  specified (ie 'cloudinit.config.resizefs') or by default only look in
  'cloudinit.config' for these modules (or a combination of the above);
  this avoids having to understand where your modules are coming from
  (which can be altered by the current python inclusion path)
- Depending on whether people think the wrapper around 'os.path.join'
  provided by the 'paths' object is useful (allowing us to modify based
  off a 'read' and 'write' configuration based 'root') or just too
  confusing, it might be something to remove later, and just recommend
  using 'chroot' instead (or the X different other options which are
  similar to 'chroot'), which might be more natural and less confusing...

bin/cloud-init (new executable file)
#!/usr/bin/python
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import argparse
import os
import sys
import traceback
# This is more just for running from the bin folder so that
# cloud-init binary can find the cloudinit module
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(
sys.argv[0]), os.pardir, os.pardir))
if os.path.exists(os.path.join(possible_topdir, "cloudinit", "__init__.py")):
sys.path.insert(0, possible_topdir)
from cloudinit import log as logging
from cloudinit import netinfo
from cloudinit import sources
from cloudinit import stages
from cloudinit import templater
from cloudinit import util
from cloudinit import version
from cloudinit.settings import (PER_INSTANCE, PER_ALWAYS, PER_ONCE,
CLOUD_CONFIG)
# Pretty little welcome message template
WELCOME_MSG_TPL = ("Cloud-init v. {{version}} running '{{action}}' at "
"{{timestamp}}. Up {{uptime}} seconds.")
# Module section template
MOD_SECTION_TPL = "cloud_%s_modules"
# Things u can query on
QUERY_DATA_TYPES = [
'data',
'data_raw',
'instance_id',
]
# Frequency shortname to full name
# (so users don't have to remember the full name...)
FREQ_SHORT_NAMES = {
'instance': PER_INSTANCE,
'always': PER_ALWAYS,
'once': PER_ONCE,
}
LOG = logging.getLogger()
# Used for when a logger may not be active
# and we still want to print exceptions...
def print_exc(msg=''):
if msg:
sys.stderr.write("%s\n" % (msg))
sys.stderr.write('-' * 60)
sys.stderr.write("\n")
traceback.print_exc(file=sys.stderr)
sys.stderr.write('-' * 60)
sys.stderr.write("\n")
def welcome(action):
    tpl_params = {
        'version': version.version_string(),
        'uptime': util.uptime(),
        'timestamp': util.time_rfc2822(),
        'action': action,
    }
    tpl_msg = templater.render_string(WELCOME_MSG_TPL, tpl_params)
    util.multi_log("%s\n" % (tpl_msg),
                   console=False, stderr=True)


def extract_fns(args):
    # The files are already opened, so just pass the names along;
    # opening would already have failed if they
    # could not be read...
    fn_cfgs = []
    if args.files:
        for fh in args.files:
            # The realpath is more useful in logging,
            # so resolve to that...
            fn_cfgs.append(os.path.realpath(fh.name))
    return fn_cfgs


def run_module_section(mods, action_name, section):
    full_section_name = MOD_SECTION_TPL % (section)
    (which_ran, failures) = mods.run_section(full_section_name)
    total_attempted = len(which_ran) + len(failures)
    if total_attempted == 0:
        msg = ("No '%s' modules to run"
               " under section '%s'") % (action_name, full_section_name)
        sys.stderr.write("%s\n" % (msg))
        LOG.debug(msg)
        return 0
    else:
        LOG.debug("Ran %s modules with %s failures",
                  len(which_ran), len(failures))
        return len(failures)
def main_init(name, args):
    deps = [sources.DEP_FILESYSTEM, sources.DEP_NETWORK]
    if args.local:
        deps = [sources.DEP_FILESYSTEM]

    if not args.local:
        # See doc/kernel-cmdline.txt
        #
        # This is used in the MAAS datasource, in an "ephemeral"
        # (read-only root) environment where the instance netboots
        # to an iSCSI read-only root, and the entity that controls the
        # PXE config has to configure the MAAS datasource.
        #
        # Could be used elsewhere, but only works on network-based
        # (not local) datasources.
        root_name = "%s.d" % (CLOUD_CONFIG)
        target_fn = os.path.join(root_name, "91_kernel_cmdline_url.cfg")
        util.read_write_cmdline_url(target_fn)

    # Cloud-init 'init' stage is broken up into the following sub-stages:
    # 1. Ensure that the init object fetches its config without errors
    # 2. Setup logging/output redirections with resultant config (if any)
    # 3. Initialize the cloud-init filesystem
    # 4. Check if we can stop early by looking for various files
    # 5. Fetch the datasource
    # 6. Connect to the current instance location + update the cache
    # 7. Consume the userdata (handlers get activated here)
    # 8. Construct the modules object
    # 9. Adjust any subsequent logging/output redirections using
    #    the modules object's configuration
    # 10. Run the modules for the 'init' stage
    # 11. Done!
    welcome(name)
    init = stages.Init(deps)
    # Stage 1
    init.read_cfg(extract_fns(args))
    # Stage 2
    outfmt = None
    errfmt = None
    try:
        LOG.debug("Closing stdin")
        util.close_stdin()
        (outfmt, errfmt) = util.fixup_output(init.cfg, name)
    except Exception:
        util.logexc(LOG, "Failed to setup output redirection!")
        print_exc("Failed to setup output redirection!")
    if args.debug:
        # Reset so that all the debug handlers are closed out
        LOG.debug(("Logging being reset, this logger may no"
                   " longer be active shortly"))
        logging.resetLogging()
    logging.setupLogging(init.cfg)
    # Stage 3
    try:
        init.initialize()
    except Exception:
        util.logexc(LOG, "Failed to initialize, likely bad things to come!")
    # Stage 4
    path_helper = init.paths
    if not args.local:
        sys.stderr.write("%s\n" % (netinfo.debug_info()))
        LOG.debug(("Checking to see if files that we need already"
                   " exist from a previous run that would allow us"
                   " to stop early."))
        stop_files = [
            os.path.join(path_helper.get_cpath("data"), "no-net"),
            path_helper.get_ipath_cur("obj_pkl"),
        ]
        existing_files = []
        for fn in stop_files:
            try:
                c = util.load_file(fn)
                if len(c):
                    existing_files.append((fn, len(c)))
            except Exception:
                pass
        if existing_files:
            LOG.debug("Exiting early due to the existence of %s files",
                      existing_files)
            return 0
    else:
        # The cache is not instance specific, so it has to be purged,
        # but we want 'start' to benefit from a cache if
        # a previous start-local populated one...
        manual_clean = util.get_cfg_option_bool(init.cfg,
                                                'manual_cache_clean', False)
        if manual_clean:
            LOG.debug("Not purging instance link, manual cleaning enabled")
            init.purge_cache(False)
        else:
            init.purge_cache()
        # Delete the no-net file as well
        util.del_file(os.path.join(path_helper.get_cpath("data"), "no-net"))
    # Stage 5
    try:
        init.fetch()
    except sources.DataSourceNotFoundException:
        util.logexc(LOG, ("No instance datasource found!"
                          " Likely bad things to come!"))
        # In the case of cloud-init (net mode) it is a bit
        # more likely that the user would consider it a
        # failure if nothing was found. When using
        # upstart it will also mention job failure
        # in the console log if the exit code is != 0.
        if not args.force:
            if args.local:
                return 0
            else:
                return 1
    # Stage 6
    iid = init.instancify()
    LOG.debug("%s will now be targeting instance id: %s", name, iid)
    init.update()
    # Stage 7
    try:
        # Attempt to consume the data per instance.
        # This may run user-data handlers and/or perform
        # url downloads and such as needed.
        (ran, _results) = init.cloudify().run('consume_userdata',
                                              init.consume_userdata,
                                              args=[PER_INSTANCE],
                                              freq=PER_INSTANCE)
        if not ran:
            # Just consume anything that is set to run per-always
            # if nothing ran in the per-instance code
            #
            # See: https://bugs.launchpad.net/bugs/819507 for a little
            # reason behind this...
            init.consume_userdata(PER_ALWAYS)
    except Exception:
        util.logexc(LOG, "Consuming user data failed!")
        return 1
    # Stage 8 - TODO - do we really need to re-extract our configs?
    mods = stages.Modules(init, extract_fns(args))
    # Stage 9 - TODO is this really needed??
    try:
        outfmt_orig = outfmt
        errfmt_orig = errfmt
        (outfmt, errfmt) = util.get_output_cfg(mods.cfg, name)
        if outfmt_orig != outfmt or errfmt_orig != errfmt:
            LOG.warn("Stdout, stderr changing to (%s, %s)", outfmt, errfmt)
            (outfmt, errfmt) = util.fixup_output(mods.cfg, name)
    except Exception:
        util.logexc(LOG, "Failed to re-adjust output redirection!")
    # Stage 10
    return run_module_section(mods, name, name)
def main_modules(action_name, args):
    name = args.mode
    # Cloud-init 'modules' stages are broken up into the following sub-stages:
    # 1. Ensure that the init object fetches its config without errors
    # 2. Get the datasource from the init object; if it does
    #    not exist then that means the main_init stage never
    #    worked, and thus this stage can not run.
    # 3. Construct the modules object
    # 4. Adjust any subsequent logging/output redirections using
    #    the modules object's configuration
    # 5. Run the modules for the given stage name
    # 6. Done!
    welcome("%s:%s" % (action_name, name))
    init = stages.Init(ds_deps=[])
    # Stage 1
    init.read_cfg(extract_fns(args))
    # Stage 2
    try:
        init.fetch()
    except sources.DataSourceNotFoundException:
        # There was no datasource found, there's nothing to do
        util.logexc(LOG, ('Can not apply stage %s, '
                          'no datasource found!'
                          ' Likely bad things to come!'), name)
        print_exc(('Can not apply stage %s, '
                   'no datasource found!'
                   ' Likely bad things to come!') % (name))
        if not args.force:
            return 1
    # Stage 3
    mods = stages.Modules(init, extract_fns(args))
    # Stage 4
    try:
        LOG.debug("Closing stdin")
        util.close_stdin()
        util.fixup_output(mods.cfg, name)
    except Exception:
        util.logexc(LOG, "Failed to setup output redirection!")
    if args.debug:
        # Reset so that all the debug handlers are closed out
        LOG.debug(("Logging being reset, this logger may no"
                   " longer be active shortly"))
        logging.resetLogging()
    logging.setupLogging(mods.cfg)
    # Stage 5
    return run_module_section(mods, name, name)
def main_query(name, _args):
    raise NotImplementedError(("Action '%s' is not"
                               " currently implemented") % (name))


def main_single(name, args):
    # Cloud-init 'single' stage is broken up into the following sub-stages:
    # 1. Ensure that the init object fetches its config without errors
    # 2. Attempt to fetch the datasource (warn if it doesn't work)
    # 3. Construct the modules object
    # 4. Adjust any subsequent logging/output redirections using
    #    the modules object's configuration
    # 5. Run the single module
    # 6. Done!
    mod_name = args.name
    welcome("%s:%s" % (name, mod_name))
    init = stages.Init(ds_deps=[])
    # Stage 1
    init.read_cfg(extract_fns(args))
    # Stage 2
    try:
        init.fetch()
    except sources.DataSourceNotFoundException:
        # There was no datasource found;
        # that might be bad (or ok) depending on
        # the module being run (so continue on)
        util.logexc(LOG, ("Failed to fetch your datasource,"
                          " likely bad things to come!"))
        print_exc(("Failed to fetch your datasource,"
                   " likely bad things to come!"))
        if not args.force:
            return 1
    # Stage 3
    mods = stages.Modules(init, extract_fns(args))
    mod_args = args.module_args
    if mod_args:
        LOG.debug("Using passed in arguments %s", mod_args)
    mod_freq = args.frequency
    if mod_freq:
        LOG.debug("Using passed in frequency %s", mod_freq)
        mod_freq = FREQ_SHORT_NAMES.get(mod_freq)
    # Stage 4
    try:
        LOG.debug("Closing stdin")
        util.close_stdin()
        util.fixup_output(mods.cfg, None)
    except Exception:
        util.logexc(LOG, "Failed to setup output redirection!")
    if args.debug:
        # Reset so that all the debug handlers are closed out
        LOG.debug(("Logging being reset, this logger may no"
                   " longer be active shortly"))
        logging.resetLogging()
    logging.setupLogging(mods.cfg)
    # Stage 5
    (which_ran, failures) = mods.run_single(mod_name,
                                            mod_args,
                                            mod_freq)
    if failures:
        LOG.warn("Ran %s but it failed!", mod_name)
        return 1
    elif not which_ran:
        LOG.warn("Did not run %s, does it exist?", mod_name)
        return 1
    else:
        # Guess it worked
        return 0
def main():
    parser = argparse.ArgumentParser()

    # Top level args
    parser.add_argument('--version', '-v', action='version',
                        version='%(prog)s ' + (version.version_string()))
    parser.add_argument('--file', '-f', action='append',
                        dest='files',
                        help=('additional yaml configuration'
                              ' files to use'),
                        type=argparse.FileType('rb'))
    parser.add_argument('--debug', '-d', action='store_true',
                        help=('show additional pre-action'
                              ' logging (default: %(default)s)'),
                        default=False)
    parser.add_argument('--force', action='store_true',
                        help=('force running even if no datasource is'
                              ' found (use at your own risk)'),
                        dest='force',
                        default=False)
    subparsers = parser.add_subparsers()

    # Each action and its sub-options (if any)
    parser_init = subparsers.add_parser('init',
                                        help=('initializes cloud-init and'
                                              ' performs initial modules'))
    parser_init.add_argument("--local", '-l', action='store_true',
                             help="start in local mode (default: %(default)s)",
                             default=False)
    # This is used so that we can know which action is selected +
    # the functor to use to run this subcommand
    parser_init.set_defaults(action=('init', main_init))

    # These settings are used for the 'config' and 'final' stages
    parser_mod = subparsers.add_parser('modules',
                                       help=('activates modules '
                                             'using a given configuration key'))
    parser_mod.add_argument("--mode", '-m', action='store',
                            help=("module configuration name "
                                  "to use (default: %(default)s)"),
                            default='config',
                            choices=('init', 'config', 'final'))
    parser_mod.set_defaults(action=('modules', main_modules))

    # These settings are used when you want to query information
    # stored in the cloud-init data objects/directories/files
    parser_query = subparsers.add_parser('query',
                                         help=('query information stored '
                                               'in cloud-init'))
    parser_query.add_argument("--name", '-n', action="store",
                              help="item name to query on",
                              required=True,
                              choices=QUERY_DATA_TYPES)
    parser_query.set_defaults(action=('query', main_query))

    # This subcommand allows you to run a single module
    parser_single = subparsers.add_parser('single',
                                          help=('run a single module'))
    parser_single.add_argument("--name", '-n', action="store",
                               help="module name to run",
                               required=True)
    parser_single.add_argument("--frequency", action="store",
                               help=("frequency of the module"),
                               required=False,
                               choices=list(FREQ_SHORT_NAMES.keys()))
    parser_single.add_argument("module_args", nargs="*",
                               metavar='argument',
                               help=('any additional arguments to'
                                     ' pass to this module'))
    parser_single.set_defaults(action=('single', main_single))

    args = parser.parse_args()

    # Setup basic logging to start (until reinitialized)
    # iff in debug mode...
    if args.debug:
        logging.setupBasicLogging()

    (name, functor) = args.action
    return functor(name, args)


if __name__ == '__main__':
    sys.exit(main())
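The `set_defaults(action=(name, functor))` dispatch pattern that `main()` uses above can be sketched in isolation. This is a minimal, self-contained illustration; the `main_init`/`main_modules` stubs and their return strings are stand-ins for the real stage entrypoints, not cloud-init code:

```python
import argparse


def main_init(name, args):
    # Stand-in for the real 'init' stage entrypoint
    return "ran %s (local=%s)" % (name, args.local)


def main_modules(name, args):
    # Stand-in for the real 'modules' stage entrypoint
    return "ran %s (mode=%s)" % (name, args.mode)


def build_parser():
    parser = argparse.ArgumentParser()
    subparsers = parser.add_subparsers()
    parser_init = subparsers.add_parser('init')
    parser_init.add_argument('--local', '-l', action='store_true',
                             default=False)
    # Each subparser records (action_name, functor) so the caller
    # can dispatch without a chain of if/elif on the command name
    parser_init.set_defaults(action=('init', main_init))
    parser_mod = subparsers.add_parser('modules')
    parser_mod.add_argument('--mode', '-m', action='store',
                            default='config',
                            choices=('init', 'config', 'final'))
    parser_mod.set_defaults(action=('modules', main_modules))
    return parser


def run(argv):
    args = build_parser().parse_args(argv)
    (name, functor) = args.action
    return functor(name, args)
```

Each new subcommand only needs a subparser plus one `set_defaults` call; the dispatch in `run()` never changes.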

@@ -1,115 +0,0 @@
#!/usr/bin/python
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

import sys

import cloudinit
import cloudinit.util as util
import cloudinit.CloudConfig as CC

import logging
import os


def Usage(out=sys.stdout):
    out.write("Usage: %s name\n" % sys.argv[0])


def main():
    # expect to be called with
    #   name [ freq [ args ] ]
    # run the cloud-config job 'name' with the given args
    # or
    #   read cloud config jobs from config (builtin -> system)
    #   and run all in order
    util.close_stdin()

    modename = "config"

    if len(sys.argv) < 2:
        Usage(sys.stderr)
        sys.exit(1)
    if sys.argv[1] == "all":
        name = "all"
        if len(sys.argv) > 2:
            modename = sys.argv[2]
    else:
        freq = None
        run_args = []
        name = sys.argv[1]
        if len(sys.argv) > 2:
            freq = sys.argv[2]
            if freq == "None":
                freq = None
        if len(sys.argv) > 3:
            run_args = sys.argv[3:]

    cfg_path = cloudinit.get_ipath_cur("cloud_config")
    cfg_env_name = cloudinit.cfg_env_name
    if cfg_env_name in os.environ:
        cfg_path = os.environ[cfg_env_name]

    cloud = cloudinit.CloudInit(ds_deps=[])  # ds_deps=[], get only cached
    try:
        cloud.get_data_source()
    except cloudinit.DataSourceNotFoundException as e:
        # there was no datasource found, there's nothing to do
        sys.exit(0)

    cc = CC.CloudConfig(cfg_path, cloud)

    try:
        (outfmt, errfmt) = CC.get_output_cfg(cc.cfg, modename)
        CC.redirect_output(outfmt, errfmt)
    except Exception as e:
        err("Failed to get and set output config: %s\n" % e)

    cloudinit.logging_set_from_cfg(cc.cfg)
    log = logging.getLogger()
    log.info("cloud-init-cfg %s" % sys.argv[1:])

    module_list = []
    if name == "all":
        modlist_cfg_name = "cloud_%s_modules" % modename
        module_list = CC.read_cc_modules(cc.cfg, modlist_cfg_name)
        if not len(module_list):
            err("no modules to run in cloud_config [%s]" % modename, log)
            sys.exit(0)
    else:
        module_list.append([name, freq] + run_args)

    failures = CC.run_cc_modules(cc, module_list, log)
    if len(failures):
        err("errors running cloud_config [%s]: %s" % (modename, failures), log)
    sys.exit(len(failures))


def err(msg, log=None):
    if log:
        log.error(msg)
    sys.stderr.write(msg + "\n")


def fail(msg, log=None):
    err(msg, log)
    sys.exit(1)


if __name__ == '__main__':
    main()

@@ -1,56 +0,0 @@
#!/usr/bin/python
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

import sys

import cloudinit
import cloudinit.CloudConfig


def Usage(out=sys.stdout):
    out.write("Usage: %s name\n" % sys.argv[0])


def main():
    # expect to be called with name of item to fetch
    if len(sys.argv) != 2:
        Usage(sys.stderr)
        sys.exit(1)

    cfg_path = cloudinit.get_ipath_cur("cloud_config")
    cc = cloudinit.CloudConfig.CloudConfig(cfg_path)
    data = {
        'user_data': cc.cloud.get_userdata(),
        'user_data_raw': cc.cloud.get_userdata_raw(),
        'instance_id': cc.cloud.get_instance_id(),
    }

    name = sys.argv[1].replace('-', '_')
    if name not in data:
        sys.stderr.write("unknown name '%s'. Known values are:\n  %s\n" %
                         (sys.argv[1], ' '.join(data.keys())))
        sys.exit(1)

    print data[name]
    sys.exit(0)


if __name__ == '__main__':
    main()

@@ -1,229 +0,0 @@
#!/usr/bin/python
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

import subprocess
import sys

import cloudinit
import cloudinit.util as util
import cloudinit.CloudConfig as CC
import cloudinit.DataSource as ds
import cloudinit.netinfo as netinfo

import time
import traceback
import logging
import errno
import os


def warn(wstr):
    sys.stderr.write("WARN:%s" % wstr)


def main():
    util.close_stdin()

    cmds = ("start", "start-local")
    deps = {"start": (ds.DEP_FILESYSTEM, ds.DEP_NETWORK),
            "start-local": (ds.DEP_FILESYSTEM, )}

    cmd = ""
    if len(sys.argv) > 1:
        cmd = sys.argv[1]

    cfg_path = None
    if len(sys.argv) > 2:
        # this is really for debugging only
        # but you can invoke on development system with ./config/cloud.cfg
        cfg_path = sys.argv[2]

    if not cmd in cmds:
        sys.stderr.write("bad command %s. use one of %s\n" % (cmd, cmds))
        sys.exit(1)

    now = time.strftime("%a, %d %b %Y %H:%M:%S %z", time.gmtime())
    try:
        uptimef = open("/proc/uptime")
        uptime = uptimef.read().split(" ")[0]
        uptimef.close()
    except IOError as e:
        warn("unable to open /proc/uptime\n")
        uptime = "na"

    cmdline_msg = None
    cmdline_exc = None
    if cmd == "start":
        target = "%s.d/%s" % (cloudinit.system_config,
                              "91_kernel_cmdline_url.cfg")
        if os.path.exists(target):
            cmdline_msg = "cmdline: %s existed" % target
        else:
            cmdline = util.get_cmdline()
            try:
                (key, url, content) = cloudinit.get_cmdline_url(
                    cmdline=cmdline)
                if key and content:
                    util.write_file(target, content, mode=0600)
                    cmdline_msg = ("cmdline: wrote %s from %s, %s" %
                                   (target, key, url))
                elif key:
                    cmdline_msg = ("cmdline: %s, %s had no cloud-config" %
                                   (key, url))
            except Exception:
                cmdline_exc = ("cmdline: '%s' raised exception\n%s" %
                               (cmdline, traceback.format_exc()))
                warn(cmdline_exc)

    try:
        cfg = cloudinit.get_base_cfg(cfg_path)
    except Exception as e:
        warn("Failed to get base config. falling back to builtin: %s\n" % e)
        try:
            cfg = cloudinit.get_builtin_cfg()
        except Exception as e:
            warn("Unable to load builtin config\n")
            raise

    try:
        (outfmt, errfmt) = CC.get_output_cfg(cfg, "init")
        CC.redirect_output(outfmt, errfmt)
    except Exception as e:
        warn("Failed to get and set output config: %s\n" % e)

    cloudinit.logging_set_from_cfg(cfg)
    log = logging.getLogger()

    if cmdline_exc:
        log.debug(cmdline_exc)
    elif cmdline_msg:
        log.debug(cmdline_msg)

    try:
        cloudinit.initfs()
    except Exception as e:
        warn("failed to initfs, likely bad things to come: %s\n" % str(e))

    nonet_path = "%s/%s" % (cloudinit.get_cpath("data"), "no-net")

    if cmd == "start":
        print netinfo.debug_info()

        stop_files = (cloudinit.get_ipath_cur("obj_pkl"), nonet_path)
        # if starting as the network start, there are cases
        # where everything is already done for us, and it makes
        # most sense to exit early and silently
        for f in stop_files:
            try:
                fp = open(f, "r")
                fp.close()
            except:
                continue

            log.debug("no need for cloud-init start to run (%s)\n", f)
            sys.exit(0)
    elif cmd == "start-local":
        # cache is not instance specific, so it has to be purged
        # but we want 'start' to benefit from a cache if
        # a previous start-local populated one
        manclean = util.get_cfg_option_bool(cfg, 'manual_cache_clean', False)
        if manclean:
            log.debug("not purging cache, manual_cache_clean = True")
        cloudinit.purge_cache(not manclean)

        try:
            os.unlink(nonet_path)
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise

    msg = "cloud-init %s running: %s. up %s seconds" % (cmd, now, uptime)
    sys.stderr.write(msg + "\n")
    sys.stderr.flush()
    log.info(msg)

    cloud = cloudinit.CloudInit(ds_deps=deps[cmd])

    try:
        cloud.get_data_source()
    except cloudinit.DataSourceNotFoundException as e:
        sys.stderr.write("no instance data found in %s\n" % cmd)
        sys.exit(0)

    # set this as the current instance
    cloud.set_cur_instance()

    # store the metadata
    cloud.update_cache()

    msg = "found data source: %s" % cloud.datasource
    sys.stderr.write(msg + "\n")
    log.debug(msg)

    # parse the user data (ec2-run-userdata.py)
    try:
        ran = cloud.sem_and_run("consume_userdata", cloudinit.per_instance,
                                cloud.consume_userdata,
                                [cloudinit.per_instance], False)
        if not ran:
            cloud.consume_userdata(cloudinit.per_always)
    except:
        warn("consuming user data failed!\n")
        raise

    cfg_path = cloudinit.get_ipath_cur("cloud_config")
    cc = CC.CloudConfig(cfg_path, cloud)

    # if the output config changed, update output and err
    try:
        outfmt_orig = outfmt
        errfmt_orig = errfmt
        (outfmt, errfmt) = CC.get_output_cfg(cc.cfg, "init")
        if outfmt_orig != outfmt or errfmt_orig != errfmt:
            warn("stdout, stderr changing to (%s,%s)" % (outfmt, errfmt))
            CC.redirect_output(outfmt, errfmt)
    except Exception as e:
        warn("Failed to get and set output config: %s\n" % e)

    # send the cloud-config ready event
    cc_path = cloudinit.get_ipath_cur('cloud_config')
    cc_ready = cc.cfg.get("cc_ready_cmd",
                          ['initctl', 'emit', 'cloud-config',
                           '%s=%s' % (cloudinit.cfg_env_name, cc_path)])
    if cc_ready:
        if isinstance(cc_ready, str):
            cc_ready = ['sh', '-c', cc_ready]
        subprocess.Popen(cc_ready).communicate()

    module_list = CC.read_cc_modules(cc.cfg, "cloud_init_modules")

    failures = []
    if len(module_list):
        failures = CC.run_cc_modules(cc, module_list, log)
    else:
        msg = "no cloud_init_modules to run"
        sys.stderr.write(msg + "\n")
        log.debug(msg)
        sys.exit(0)

    sys.exit(len(failures))


if __name__ == '__main__':
    main()

@@ -1,274 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2008-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Chuck Short <chuck.short@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
#

import yaml
import cloudinit
import cloudinit.util as util
import sys
import traceback
import os
import subprocess
import time

per_instance = cloudinit.per_instance
per_always = cloudinit.per_always
per_once = cloudinit.per_once


class CloudConfig():
    cfgfile = None
    cfg = None

    def __init__(self, cfgfile, cloud=None, ds_deps=None):
        if cloud == None:
            self.cloud = cloudinit.CloudInit(ds_deps)
            self.cloud.get_data_source()
        else:
            self.cloud = cloud
        self.cfg = self.get_config_obj(cfgfile)

    def get_config_obj(self, cfgfile):
        try:
            cfg = util.read_conf(cfgfile)
        except:
            # TODO: this 'log' could/should be passed in
            cloudinit.log.critical("Failed loading of cloud config '%s'. "
                                   "Continuing with empty config\n" % cfgfile)
            cloudinit.log.debug(traceback.format_exc() + "\n")
            cfg = None
        if cfg is None:
            cfg = {}

        try:
            ds_cfg = self.cloud.datasource.get_config_obj()
        except:
            ds_cfg = {}

        cfg = util.mergedict(cfg, ds_cfg)
        return(util.mergedict(cfg, self.cloud.cfg))

    def handle(self, name, args, freq=None):
        try:
            mod = __import__("cc_" + name.replace("-", "_"), globals())
            def_freq = getattr(mod, "frequency", per_instance)
            handler = getattr(mod, "handle")

            if not freq:
                freq = def_freq

            self.cloud.sem_and_run("config-" + name, freq, handler,
                                   [name, self.cfg, self.cloud,
                                    cloudinit.log, args])
        except:
            raise


# reads a cloudconfig module list, returns
# a 2 dimensional array suitable to pass to run_cc_modules
def read_cc_modules(cfg, name):
    if name not in cfg:
        return([])
    module_list = []
    # create 'module_list', an array of arrays
    # where array[0] = config
    #       array[1] = freq
    #       array[2:] = arguments
    for item in cfg[name]:
        if isinstance(item, str):
            module_list.append((item,))
        elif isinstance(item, list):
            module_list.append(item)
        else:
            raise TypeError("failed to read '%s' item in config")
    return(module_list)


def run_cc_modules(cc, module_list, log):
    failures = []
    for cfg_mod in module_list:
        name = cfg_mod[0]
        freq = None
        run_args = []
        if len(cfg_mod) > 1:
            freq = cfg_mod[1]
        if len(cfg_mod) > 2:
            run_args = cfg_mod[2:]

        try:
            log.debug("handling %s with freq=%s and args=%s" %
                      (name, freq, run_args))
            cc.handle(name, run_args, freq=freq)
        except:
            log.warn(traceback.format_exc())
            log.error("config handling of %s, %s, %s failed\n" %
                      (name, freq, run_args))
            failures.append(name)

    return(failures)


# always returns well formatted values
# cfg is expected to have an entry 'output' in it, which is a dictionary
# that includes entries for 'init', 'config', 'final' or 'all'
#   init: /var/log/cloud.out
#   config: [ ">> /var/log/cloud-config.out", /var/log/cloud-config.err ]
#   final:
#     output: "| logger -p"
#     error: "> /dev/null"
# this returns the specific 'mode' entry, cleanly formatted, with value
# None if none is given
def get_output_cfg(cfg, mode="init"):
    ret = [None, None]
    if not 'output' in cfg:
        return ret

    outcfg = cfg['output']
    if mode in outcfg:
        modecfg = outcfg[mode]
    else:
        if 'all' not in outcfg:
            return ret
        # if there is an 'all' item in the output list
        # then it applies to all users of this (init, config, final)
        modecfg = outcfg['all']

    # if value is a string, it specifies stdout and stderr
    if isinstance(modecfg, str):
        ret = [modecfg, modecfg]

    # if it's a list, then we expect (stdout, stderr)
    if isinstance(modecfg, list):
        if len(modecfg) > 0:
            ret[0] = modecfg[0]
        if len(modecfg) > 1:
            ret[1] = modecfg[1]

    # if it is a dictionary, expect 'output' and 'error'
    # items, which indicate out and error
    if isinstance(modecfg, dict):
        if 'output' in modecfg:
            ret[0] = modecfg['output']
        if 'error' in modecfg:
            ret[1] = modecfg['error']

    # if err's entry == "&1", then make it same as stdout
    # as in shell syntax of "echo foo >/dev/null 2>&1"
    if ret[1] == "&1":
        ret[1] = ret[0]

    swlist = [">>", ">", "|"]
    for i in range(len(ret)):
        if not ret[i]:
            continue
        val = ret[i].lstrip()
        found = False
        for s in swlist:
            if val.startswith(s):
                val = "%s %s" % (s, val[len(s):].strip())
                found = True
                break
        if not found:
            # default behavior is append
            val = "%s %s" % (">>", val.strip())
        ret[i] = val

    return(ret)


# redirect_output(outfmt, errfmt, orig_out, orig_err)
#   replace orig_out and orig_err with filehandles specified in
#   outfmt or errfmt. fmt can be:
#     > FILEPATH
#     >> FILEPATH
#     | program [ arg1 [ arg2 [ ... ] ] ]
#
#   with a '|', arguments are passed to shell, so one level of
#   shell escape is required.
def redirect_output(outfmt, errfmt, o_out=sys.stdout, o_err=sys.stderr):
    if outfmt:
        (mode, arg) = outfmt.split(" ", 1)
        if mode == ">" or mode == ">>":
            owith = "ab"
            if mode == ">":
                owith = "wb"
            new_fp = open(arg, owith)
        elif mode == "|":
            proc = subprocess.Popen(arg, shell=True, stdin=subprocess.PIPE)
            new_fp = proc.stdin
        else:
            raise TypeError("invalid type for outfmt: %s" % outfmt)

        if o_out:
            os.dup2(new_fp.fileno(), o_out.fileno())
        if errfmt == outfmt:
            os.dup2(new_fp.fileno(), o_err.fileno())
            return

    if errfmt:
        (mode, arg) = errfmt.split(" ", 1)
        if mode == ">" or mode == ">>":
            owith = "ab"
            if mode == ">":
                owith = "wb"
            new_fp = open(arg, owith)
        elif mode == "|":
            proc = subprocess.Popen(arg, shell=True, stdin=subprocess.PIPE)
            new_fp = proc.stdin
        else:
            raise TypeError("invalid type for errfmt: %s" % errfmt)

        if o_err:
            os.dup2(new_fp.fileno(), o_err.fileno())
    return


def run_per_instance(name, func, args, clear_on_fail=False):
    semfile = "%s/%s" % (cloudinit.get_ipath_cur("data"), name)
    if os.path.exists(semfile):
        return

    util.write_file(semfile, str(time.time()))
    try:
        func(*args)
    except:
        if clear_on_fail:
            os.unlink(semfile)
        raise


# apt_get top level command (install, update...), and args to pass it
def apt_get(tlc, args=None):
    if args is None:
        args = []
    e = os.environ.copy()
    e['DEBIAN_FRONTEND'] = 'noninteractive'
    cmd = ['apt-get', '--option', 'Dpkg::Options::=--force-confold',
           '--assume-yes', '--quiet', tlc]
    cmd.extend(args)
    subprocess.check_call(cmd, env=e)


def update_package_sources():
    run_per_instance("update-sources", apt_get, ("update",))


def install_packages(pkglist):
    update_package_sources()
    apt_get("install", pkglist)
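The normalization performed by `get_output_cfg` above can be exercised on its own. The following is a minimal, self-contained sketch of that behavior (an illustrative re-implementation, not the removed `CloudConfig` code itself):

```python
def get_output_cfg(cfg, mode="init"):
    """Normalize an 'output' config entry into [stdout_fmt, stderr_fmt]."""
    ret = [None, None]
    outcfg = cfg.get('output')
    if not outcfg:
        return ret
    # A mode-specific entry wins; otherwise 'all' applies to every mode
    modecfg = outcfg.get(mode, outcfg.get('all'))
    if modecfg is None:
        return ret
    if isinstance(modecfg, str):
        # one string covers both stdout and stderr
        ret = [modecfg, modecfg]
    if isinstance(modecfg, list):
        if len(modecfg) > 0:
            ret[0] = modecfg[0]
        if len(modecfg) > 1:
            ret[1] = modecfg[1]
    if isinstance(modecfg, dict):
        ret[0] = modecfg.get('output')
        ret[1] = modecfg.get('error')
    if ret[1] == "&1":
        # shell-style "2>&1": reuse the stdout target
        ret[1] = ret[0]
    for i, val in enumerate(ret):
        if not val:
            continue
        val = val.lstrip()
        for op in (">>", ">", "|"):
            if val.startswith(op):
                val = "%s %s" % (op, val[len(op):].strip())
                break
        else:
            # default behavior is append
            val = ">> %s" % (val.strip())
        ret[i] = val
    return ret
```

A bare path thus comes back as an append redirection (`>> path`), while pipe and truncate forms pass through with their operator normalized.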

@@ -1,53 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
#
# Author: Ben Howard <ben.howard@canonical.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

import cloudinit.util as util
from cloudinit.CloudConfig import per_instance

frequency = per_instance
default_file = "/etc/apt/apt.conf.d/90cloud-init-pipelining"


def handle(_name, cfg, _cloud, log, _args):
    apt_pipe_value = util.get_cfg_option_str(cfg, "apt_pipelining", False)
    apt_pipe_value = str(apt_pipe_value).lower()

    if apt_pipe_value == "false":
        write_apt_snippet("0", log)
    elif apt_pipe_value in ("none", "unchanged", "os"):
        return
    elif apt_pipe_value in str(range(0, 6)):
        write_apt_snippet(apt_pipe_value, log)
    else:
        log.warn("Invalid option for apt_pipelining: %s" % apt_pipe_value)


def write_apt_snippet(setting, log, f_name=default_file):
    """Writes f_name with apt pipeline depth 'setting'."""

    acquire_pipeline_depth = 'Acquire::http::Pipeline-Depth "%s";\n'
    file_contents = ("//Written by cloud-init per 'apt_pipelining'\n"
                     + (acquire_pipeline_depth % setting))

    util.write_file(f_name, file_contents)
    log.debug("Wrote %s with APT pipeline setting" % f_name)

@@ -1,241 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
import subprocess
import traceback
import os
import glob
import cloudinit.CloudConfig as cc


def handle(_name, cfg, cloud, log, _args):
    update = util.get_cfg_option_bool(cfg, 'apt_update', False)
    upgrade = util.get_cfg_option_bool(cfg, 'apt_upgrade', False)

    release = get_release()
    mirror = find_apt_mirror(cloud, cfg)

    log.debug("selected mirror at: %s" % mirror)

    if not util.get_cfg_option_bool(cfg,
                                    'apt_preserve_sources_list', False):
        generate_sources_list(release, mirror)
        old_mir = util.get_cfg_option_str(cfg, 'apt_old_mirror',
                                          "archive.ubuntu.com/ubuntu")
        rename_apt_lists(old_mir, mirror)

    # set up proxy
    proxy = cfg.get("apt_proxy", None)
    proxy_filename = "/etc/apt/apt.conf.d/95cloud-init-proxy"
    if proxy:
        try:
            contents = "Acquire::HTTP::Proxy \"%s\";\n"
            with open(proxy_filename, "w") as fp:
                fp.write(contents % proxy)
        except Exception:
            log.warn("Failed to write proxy to %s" % proxy_filename)
    elif os.path.isfile(proxy_filename):
        os.unlink(proxy_filename)

    # process 'apt_sources'
    if 'apt_sources' in cfg:
        errors = add_sources(cfg['apt_sources'],
                             {'MIRROR': mirror, 'RELEASE': release})
        for e in errors:
            log.warn("Source Error: %s\n" % ':'.join(e))

    dconf_sel = util.get_cfg_option_str(cfg, 'debconf_selections', False)
    if dconf_sel:
        log.debug("setting debconf selections per cloud config")
        try:
            util.subp(('debconf-set-selections', '-'), dconf_sel)
        except Exception:
            log.error("Failed to run debconf-set-selections")
            log.debug(traceback.format_exc())

    pkglist = util.get_cfg_option_list_or_str(cfg, 'packages', [])

    errors = []
    if update or len(pkglist) or upgrade:
        try:
            cc.update_package_sources()
        except subprocess.CalledProcessError as e:
            log.warn("apt-get update failed")
            log.debug(traceback.format_exc())
            errors.append(e)

    if upgrade:
        try:
            cc.apt_get("upgrade")
        except subprocess.CalledProcessError as e:
            log.warn("apt upgrade failed")
            log.debug(traceback.format_exc())
            errors.append(e)

    if len(pkglist):
        try:
            cc.install_packages(pkglist)
        except subprocess.CalledProcessError as e:
            log.warn("Failed to install packages: %s" % pkglist)
            log.debug(traceback.format_exc())
            errors.append(e)

    if len(errors):
        raise errors[0]

    return True
def mirror2lists_fileprefix(mirror):
    string = mirror
    # take off http:// or ftp://
    if string.endswith("/"):
        string = string[0:-1]
    pos = string.find("://")
    if pos >= 0:
        string = string[pos + 3:]
    string = string.replace("/", "_")
    return string


def rename_apt_lists(omirror, new_mirror, lists_d="/var/lib/apt/lists"):
    oprefix = "%s/%s" % (lists_d, mirror2lists_fileprefix(omirror))
    nprefix = "%s/%s" % (lists_d, mirror2lists_fileprefix(new_mirror))
    if oprefix == nprefix:
        return
    olen = len(oprefix)
    for filename in glob.glob("%s_*" % oprefix):
        os.rename(filename, "%s%s" % (nprefix, filename[olen:]))


def get_release():
    stdout, _stderr = subprocess.Popen(['lsb_release', '-cs'],
                                       stdout=subprocess.PIPE).communicate()
    return str(stdout).strip()


def generate_sources_list(codename, mirror):
    util.render_to_file('sources.list', '/etc/apt/sources.list',
                        {'mirror': mirror, 'codename': codename})


def add_sources(srclist, searchList=None):
    """
    add entries in /etc/apt/sources.list.d for each abbreviated
    sources.list entry in 'srclist'. When rendering the template, also
    include the values in the dictionary searchList.
    """
    if searchList is None:
        searchList = {}
    elst = []

    for ent in srclist:
        if 'source' not in ent:
            elst.append(["", "missing source"])
            continue

        source = ent['source']
        if source.startswith("ppa:"):
            try:
                util.subp(["add-apt-repository", source])
            except:
                elst.append([source, "add-apt-repository failed"])
            continue

        source = util.render_string(source, searchList)

        if 'filename' not in ent:
            ent['filename'] = 'cloud_config_sources.list'
        if not ent['filename'].startswith("/"):
            ent['filename'] = "%s/%s" % \
                ("/etc/apt/sources.list.d/", ent['filename'])

        if ('keyid' in ent and 'key' not in ent):
            ks = "keyserver.ubuntu.com"
            if 'keyserver' in ent:
                ks = ent['keyserver']
            try:
                ent['key'] = util.getkeybyid(ent['keyid'], ks)
            except:
                elst.append([source, "failed to get key from %s" % ks])
                continue

        if 'key' in ent:
            try:
                util.subp(('apt-key', 'add', '-'), ent['key'])
            except:
                elst.append([source, "failed to add key"])

        try:
            util.write_file(ent['filename'], source + "\n", omode="ab")
        except:
            elst.append([source, "failed to write to file %s" % ent['filename']])

    return elst


def find_apt_mirror(cloud, cfg):
    """find an apt_mirror given the cloud and cfg provided"""
    # TODO: distro and defaults should be configurable
    distro = "ubuntu"
    defaults = {
        'ubuntu': "http://archive.ubuntu.com/ubuntu",
        'debian': "http://archive.debian.org/debian",
    }
    mirror = None

    cfg_mirror = cfg.get("apt_mirror", None)
    if cfg_mirror:
        mirror = cfg["apt_mirror"]
    elif "apt_mirror_search" in cfg:
        mirror = util.search_for_mirror(cfg['apt_mirror_search'])
    else:
        if cloud:
            mirror = cloud.get_mirror()

        mydom = ""
        doms = []

        if not mirror and cloud:
            # if we have a fqdn, then search its domain portion first
            (_hostname, fqdn) = util.get_hostname_fqdn(cfg, cloud)
            mydom = ".".join(fqdn.split(".")[1:])
            if mydom:
                doms.append(".%s" % mydom)

        if not mirror:
            doms.extend((".localdomain", "",))

            mirror_list = []
            mirrorfmt = "http://%s-mirror%s/%s" % (distro, "%s", distro)
            for post in doms:
                mirror_list.append(mirrorfmt % post)

            mirror = util.search_for_mirror(mirror_list)

    if not mirror:
        mirror = defaults[distro]
    return mirror
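The list-file rename above relies on `mirror2lists_fileprefix` mapping a mirror URL onto the filename prefix apt uses under `/var/lib/apt/lists`. A standalone sketch of just that mapping (copied here only for illustration, not the module itself):

```python
# Standalone copy of the mirror -> apt lists filename-prefix mapping,
# reproduced to illustrate how rename_apt_lists matches old list files.
def mirror2lists_fileprefix(mirror):
    string = mirror
    # strip a trailing slash and the scheme (http://, ftp://, ...)
    if string.endswith("/"):
        string = string[0:-1]
    pos = string.find("://")
    if pos >= 0:
        string = string[pos + 3:]
    # apt encodes path separators as underscores in list filenames
    return string.replace("/", "_")

print(mirror2lists_fileprefix("http://archive.ubuntu.com/ubuntu/"))
# -> archive.ubuntu.com_ubuntu
```

With that prefix in hand, `rename_apt_lists` only has to glob for `<prefix>_*` and splice the new prefix onto each match.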

@@ -1,48 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
import subprocess
import tempfile
import os
from cloudinit.CloudConfig import per_always

frequency = per_always


def handle(_name, cfg, cloud, log, _args):
    if "bootcmd" not in cfg:
        return

    try:
        content = util.shellify(cfg["bootcmd"])
        tmpf = tempfile.TemporaryFile()
        tmpf.write(content)
        tmpf.seek(0)
    except:
        log.warn("failed to shellify bootcmd")
        raise

    try:
        env = os.environ.copy()
        env['INSTANCE_ID'] = cloud.get_instance_id()
        subprocess.check_call(['/bin/sh'], env=env, stdin=tmpf)
        tmpf.close()
    except:
        log.warn("failed to run commands from bootcmd")
        raise
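`util.shellify` (not shown in this diff) turns the `bootcmd` list into a shell script that is then fed to `/bin/sh` on stdin. A hedged approximation of what it produces — `shellify_sketch` is a hypothetical stand-in, and the real helper in `cloudinit.util` may differ in quoting details:

```python
# Hypothetical approximation of util.shellify, for illustration only.
def shellify_sketch(cmdlist):
    content = "#!/bin/sh\n"
    for args in cmdlist:
        if isinstance(args, list):
            # quote each argument so it survives the shell as one word
            content += ' '.join("'%s'" % str(a).replace("'", "'\\''")
                                for a in args) + '\n'
        else:
            # plain strings pass through as-is (may contain shell syntax)
            content += str(args) + '\n'
    return content

print(shellify_sketch([["echo", "hello world"], "mkdir -p /run/mydir"]))
```

List entries become a single quoted command; string entries are trusted as raw shell, which is why both forms are accepted in cloud-config `bootcmd`.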

@@ -1,119 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Avishai Ish-Shalom <avishai@fewbytes.com>
# Author: Mike Moulton <mike@meltmedia.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import subprocess
import json

import cloudinit.CloudConfig as cc
import cloudinit.util as util

ruby_version_default = "1.8"


def handle(_name, cfg, cloud, log, _args):
    # If there isn't a chef key in the configuration don't do anything
    if 'chef' not in cfg:
        return
    chef_cfg = cfg['chef']

    # ensure the chef directories we use exist
    mkdirs(['/etc/chef', '/var/log/chef', '/var/lib/chef',
            '/var/cache/chef', '/var/backups/chef', '/var/run/chef'])

    # set the validation key based on the presence of either 'validation_key'
    # or 'validation_cert'. In the case where both exist, 'validation_key'
    # takes precedence
    for key in ('validation_key', 'validation_cert'):
        if key in chef_cfg and chef_cfg[key]:
            with open('/etc/chef/validation.pem', 'w') as validation_key_fh:
                validation_key_fh.write(chef_cfg[key])
            break

    # create the chef config from template
    util.render_to_file('chef_client.rb', '/etc/chef/client.rb',
                        {'server_url': chef_cfg['server_url'],
                         'node_name': util.get_cfg_option_str(chef_cfg,
                             'node_name', cloud.datasource.get_instance_id()),
                         'environment': util.get_cfg_option_str(chef_cfg,
                             'environment', '_default'),
                         'validation_name': chef_cfg['validation_name']})

    # set the firstboot json
    with open('/etc/chef/firstboot.json', 'w') as firstboot_json_fh:
        initial_json = {}
        if 'run_list' in chef_cfg:
            initial_json['run_list'] = chef_cfg['run_list']
        if 'initial_attributes' in chef_cfg:
            initial_attributes = chef_cfg['initial_attributes']
            for k in initial_attributes.keys():
                initial_json[k] = initial_attributes[k]
        firstboot_json_fh.write(json.dumps(initial_json))

    # If chef is not installed, we install chef based on 'install_type'
    if not os.path.isfile('/usr/bin/chef-client'):
        install_type = util.get_cfg_option_str(chef_cfg, 'install_type',
                                               'packages')
        if install_type == "gems":
            # this will install and run the chef-client from gems
            chef_version = util.get_cfg_option_str(chef_cfg, 'version', None)
            ruby_version = util.get_cfg_option_str(chef_cfg, 'ruby_version',
                                                   ruby_version_default)
            install_chef_from_gems(ruby_version, chef_version)
            # and finally, run chef-client
            log.debug('running chef-client')
            subprocess.check_call(['/usr/bin/chef-client', '-d', '-i', '1800',
                                   '-s', '20'])
        else:
            # this will install and run the chef-client from packages
            cc.install_packages(('chef',))


def get_ruby_packages(version):
    # return a list of packages needed to install ruby at version
    pkgs = ['ruby%s' % version, 'ruby%s-dev' % version]
    if version == "1.8":
        pkgs.extend(('libopenssl-ruby1.8', 'rubygems1.8'))
    return pkgs


def install_chef_from_gems(ruby_version, chef_version=None):
    cc.install_packages(get_ruby_packages(ruby_version))
    if not os.path.exists('/usr/bin/gem'):
        os.symlink('/usr/bin/gem%s' % ruby_version, '/usr/bin/gem')
    if not os.path.exists('/usr/bin/ruby'):
        os.symlink('/usr/bin/ruby%s' % ruby_version, '/usr/bin/ruby')
    if chef_version:
        subprocess.check_call(['/usr/bin/gem', 'install', 'chef',
                               '-v %s' % chef_version, '--no-ri',
                               '--no-rdoc', '--bindir', '/usr/bin', '-q'])
    else:
        subprocess.check_call(['/usr/bin/gem', 'install', 'chef',
                               '--no-ri', '--no-rdoc', '--bindir',
                               '/usr/bin', '-q'])


def ensure_dir(d):
    if not os.path.exists(d):
        os.makedirs(d)


def mkdirs(dirs):
    for d in dirs:
        ensure_dir(d)

@@ -1,58 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from cloudinit.CloudConfig import per_always
import sys
from cloudinit import util, boot_finished
import time

frequency = per_always

final_message = "cloud-init boot finished at $TIMESTAMP. Up $UPTIME seconds"


def handle(_name, cfg, _cloud, log, args):
    if len(args) != 0:
        msg_in = args[0]
    else:
        msg_in = util.get_cfg_option_str(cfg, "final_message", final_message)

    try:
        uptimef = open("/proc/uptime")
        uptime = uptimef.read().split(" ")[0]
        uptimef.close()
    except IOError:
        log.warn("unable to open /proc/uptime\n")
        uptime = "na"

    try:
        ts = time.strftime("%a, %d %b %Y %H:%M:%S %z", time.gmtime())
    except:
        ts = "na"

    try:
        subs = {'UPTIME': uptime, 'TIMESTAMP': ts}
        sys.stdout.write("%s\n" % util.render_string(msg_in, subs))
    except Exception as e:
        log.warn("failed to render string to stdout: %s" % e)

    fp = open(boot_finished, "wb")
    fp.write(uptime + "\n")
    fp.close()

@@ -1,42 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from cloudinit.CloudConfig import per_instance
import cloudinit.util as util
import subprocess

frequency = per_instance


def handle(_name, cfg, _cloud, log, _args):
    cmd = ['/usr/lib/cloud-init/write-ssh-key-fingerprints']
    fp_blacklist = util.get_cfg_option_list_or_str(cfg,
                       "ssh_fp_console_blacklist", [])
    key_blacklist = util.get_cfg_option_list_or_str(cfg,
                        "ssh_key_console_blacklist", ["ssh-dss"])
    try:
        confp = open('/dev/console', "wb")
        cmd.append(','.join(fp_blacklist))
        cmd.append(','.join(key_blacklist))
        subprocess.call(cmd, stdout=confp)
        confp.close()
    except:
        log.warn("failed to write keys to console")
        raise

@@ -1,54 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
import os.path
import subprocess
import traceback


def apply_locale(locale, cfgfile):
    if os.path.exists('/usr/sbin/locale-gen'):
        subprocess.Popen(['locale-gen', locale]).communicate()
    if os.path.exists('/usr/sbin/update-locale'):
        subprocess.Popen(['update-locale', locale]).communicate()
    util.render_to_file('default-locale', cfgfile, {'locale': locale})


def handle(_name, cfg, cloud, log, args):
    if len(args) != 0:
        locale = args[0]
    else:
        locale = util.get_cfg_option_str(cfg, "locale", cloud.get_locale())

    locale_cfgfile = util.get_cfg_option_str(cfg, "locale_configfile",
                                             "/etc/default/locale")

    if not locale:
        return

    log.debug("setting locale to %s" % locale)

    try:
        apply_locale(locale, locale_cfgfile)
    except Exception:
        log.debug(traceback.format_exc())
        raise Exception("failed to apply locale %s" % locale)

@@ -1,99 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Marc Cluet <marc.cluet@canonical.com>
# Based on code by Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import subprocess
import StringIO
import ConfigParser
import cloudinit.CloudConfig as cc
import cloudinit.util as util

pubcert_file = "/etc/mcollective/ssl/server-public.pem"
pricert_file = "/etc/mcollective/ssl/server-private.pem"


# Our fake header section
class FakeSecHead(object):
    def __init__(self, fp):
        self.fp = fp
        self.sechead = '[nullsection]\n'

    def readline(self):
        if self.sechead:
            try:
                return self.sechead
            finally:
                self.sechead = None
        else:
            return self.fp.readline()


def handle(_name, cfg, _cloud, _log, _args):
    # If there isn't an mcollective key in the configuration don't do anything
    if 'mcollective' not in cfg:
        return
    mcollective_cfg = cfg['mcollective']

    # Start by installing the mcollective package ...
    cc.install_packages(("mcollective",))

    # ... and then update the mcollective configuration
    if 'conf' in mcollective_cfg:
        # Create object for reading server.cfg values
        mcollective_config = ConfigParser.ConfigParser()
        # Read server.cfg values from the original file in order to be able
        # to mix the rest in
        mcollective_config.readfp(FakeSecHead(open('/etc/mcollective/'
                                                   'server.cfg')))
        for cfg_name, cfg in mcollective_cfg['conf'].iteritems():
            if cfg_name == 'public-cert':
                util.write_file(pubcert_file, cfg, mode=0644)
                mcollective_config.set(cfg_name,
                                       'plugin.ssl_server_public',
                                       pubcert_file)
                mcollective_config.set(cfg_name, 'securityprovider', 'ssl')
            elif cfg_name == 'private-cert':
                util.write_file(pricert_file, cfg, mode=0600)
                mcollective_config.set(cfg_name,
                                       'plugin.ssl_server_private',
                                       pricert_file)
                mcollective_config.set(cfg_name, 'securityprovider', 'ssl')
            else:
                # Iterate through the config items; we'll use ConfigParser.set
                # to overwrite or create new items as needed
                for o, v in cfg.iteritems():
                    mcollective_config.set(cfg_name, o, v)
        # Once we have all the config we want, rename the previous
        # server.cfg and create our new one
        os.rename('/etc/mcollective/server.cfg',
                  '/etc/mcollective/server.cfg.old')
        outputfile = StringIO.StringIO()
        mcollective_config.write(outputfile)
        # Now we have the whole file; write it to disk except the first line.
        # Note: we've just used ConfigParser because it generally works.
        # Below, we remove the initial 'nullsection' header and then change
        # 'key = value' to 'key: value'. The global search and replace of
        # '=' with ':' could be problematic though; this most likely needs
        # fixing.
        util.write_file('/etc/mcollective/server.cfg',
                        outputfile.getvalue().replace('[nullsection]\n',
                                                      '').replace(' =', ':'),
                        mode=0644)

    # Start mcollective
    subprocess.check_call(['service', 'mcollective', 'start'])
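The comment block above describes the post-processing applied to the ConfigParser output: drop the fake `[nullsection]` header that FakeSecHead injected, then turn `key = value` into mcollective's native `key: value` form. In isolation the transformation is just two string replaces:

```python
# The same two-step string rewrite used on the generated server.cfg above.
raw = "[nullsection]\nloglevel = info\nsecurityprovider = ssl\n"
rewritten = raw.replace('[nullsection]\n', '').replace(' =', ':')
print(rewritten)
# -> loglevel: info
#    securityprovider: ssl
```

This also illustrates the fragility the comments warn about: any literal `` =`` inside a value would be rewritten too.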

@@ -1,108 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import os.path
import pwd
import socket
import subprocess
import StringIO
import ConfigParser
import cloudinit.CloudConfig as cc
import cloudinit.util as util


def handle(_name, cfg, cloud, log, _args):
    # If there isn't a puppet key in the configuration don't do anything
    if 'puppet' not in cfg:
        return
    puppet_cfg = cfg['puppet']

    # Start by installing the puppet package ...
    cc.install_packages(("puppet",))

    # ... and then update the puppet configuration
    if 'conf' in puppet_cfg:
        # Add all sections from the conf object to puppet.conf
        puppet_conf_fh = open('/etc/puppet/puppet.conf', 'r')
        # Create object for reading puppet.conf values
        puppet_config = ConfigParser.ConfigParser()
        # Read puppet.conf values from the original file in order to be able
        # to mix the rest in
        puppet_config.readfp(StringIO.StringIO(''.join(i.lstrip() for i in
                                               puppet_conf_fh.readlines())))
        # Close original file, no longer needed
        puppet_conf_fh.close()
        for cfg_name, cfg in puppet_cfg['conf'].iteritems():
            # ca_cert configuration is a special case
            # Dump the puppetmaster ca certificate in the correct place
            if cfg_name == 'ca_cert':
                # Puppet ssl sub-directory isn't created yet
                # Create it with the proper permissions and ownership
                os.makedirs('/var/lib/puppet/ssl')
                os.chmod('/var/lib/puppet/ssl', 0771)
                os.chown('/var/lib/puppet/ssl',
                         pwd.getpwnam('puppet').pw_uid, 0)
                os.makedirs('/var/lib/puppet/ssl/certs/')
                os.chown('/var/lib/puppet/ssl/certs/',
                         pwd.getpwnam('puppet').pw_uid, 0)
                ca_fh = open('/var/lib/puppet/ssl/certs/ca.pem', 'w')
                ca_fh.write(cfg)
                ca_fh.close()
                os.chown('/var/lib/puppet/ssl/certs/ca.pem',
                         pwd.getpwnam('puppet').pw_uid, 0)
                util.restorecon_if_possible('/var/lib/puppet', recursive=True)
            else:
                # If puppet.conf already has this section we don't want to
                # write it again
                if not puppet_config.has_section(cfg_name):
                    puppet_config.add_section(cfg_name)
                # Iterate through the config items; we'll use ConfigParser.set
                # to overwrite or create new items as needed
                for o, v in cfg.iteritems():
                    if o == 'certname':
                        # Expand %f as the fqdn
                        v = v.replace("%f", socket.getfqdn())
                        # Expand %i as the instance id
                        v = v.replace("%i",
                                      cloud.datasource.get_instance_id())
                        # certname needs to be lowercase
                        v = v.lower()
                    puppet_config.set(cfg_name, o, v)
        # Once we have all the config we want, rename the previous
        # puppet.conf and create our new one
        os.rename('/etc/puppet/puppet.conf', '/etc/puppet/puppet.conf.old')
        with open('/etc/puppet/puppet.conf', 'wb') as configfile:
            puppet_config.write(configfile)
        util.restorecon_if_possible('/etc/puppet/puppet.conf')

    # Set puppet to automatically start
    if os.path.exists('/etc/default/puppet'):
        subprocess.check_call(['sed', '-i',
                               '-e', 's/^START=.*/START=yes/',
                               '/etc/default/puppet'])
    elif os.path.exists('/bin/systemctl'):
        subprocess.check_call(['/bin/systemctl', 'enable', 'puppet.service'])
    elif os.path.exists('/sbin/chkconfig'):
        subprocess.check_call(['/sbin/chkconfig', 'puppet', 'on'])
    else:
        log.warn("Do not know how to enable puppet service on this system")

    # Start puppetd
    subprocess.check_call(['service', 'puppet', 'start'])

@@ -1,108 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
import subprocess
import os
import stat
import sys
import time
import tempfile
from cloudinit.CloudConfig import per_always

frequency = per_always


def handle(_name, cfg, _cloud, log, args):
    if len(args) != 0:
        resize_root = False
        if str(args[0]).lower() in ['true', '1', 'on', 'yes']:
            resize_root = True
    else:
        resize_root = util.get_cfg_option_str(cfg, "resize_rootfs", True)

    if str(resize_root).lower() in ['false', '0']:
        return

    # we use mktemp rather than mkstemp because early in boot nothing
    # else should be able to race us for this, and we need to mknod.
    devpth = tempfile.mktemp(prefix="cloudinit.resizefs.", dir="/run")

    try:
        st_dev = os.stat("/").st_dev
        dev = os.makedev(os.major(st_dev), os.minor(st_dev))
        os.mknod(devpth, 0400 | stat.S_IFBLK, dev)
    except:
        if util.is_container():
            log.debug("inside container, ignoring mknod failure in resizefs")
            return
        log.warn("Failed to make device node to resize /")
        raise

    cmd = ['blkid', '-c', '/dev/null', '-sTYPE', '-ovalue', devpth]
    try:
        (fstype, _err) = util.subp(cmd)
    except subprocess.CalledProcessError as e:
        log.warn("Failed to get filesystem type of maj=%s, min=%s via: %s" %
                 (os.major(st_dev), os.minor(st_dev), cmd))
        log.warn("output=%s\nerror=%s\n", e.output[0], e.output[1])
        os.unlink(devpth)
        raise

    if str(fstype).startswith("ext"):
        resize_cmd = ['resize2fs', devpth]
    elif fstype == "xfs":
        resize_cmd = ['xfs_growfs', devpth]
    else:
        os.unlink(devpth)
        log.debug("not resizing unknown filesystem %s" % fstype)
        return

    if resize_root == "noblock":
        fid = os.fork()
        if fid == 0:
            try:
                do_resize(resize_cmd, devpth, log)
                os._exit(0)  # pylint: disable=W0212
            except Exception as exc:
                sys.stderr.write("Failed: %s" % exc)
                os._exit(1)  # pylint: disable=W0212
    else:
        do_resize(resize_cmd, devpth, log)

    log.debug("resizing root filesystem (type=%s, maj=%i, min=%i, val=%s)" %
              (str(fstype).rstrip("\n"), os.major(st_dev), os.minor(st_dev),
               resize_root))
    return


def do_resize(resize_cmd, devpth, log):
    try:
        start = time.time()
        util.subp(resize_cmd)
    except subprocess.CalledProcessError as e:
        log.warn("Failed to resize filesystem (%s)" % resize_cmd)
        log.warn("output=%s\nerror=%s\n", e.output[0], e.output[1])
        os.unlink(devpth)
        raise
    os.unlink(devpth)
    log.debug("resize took %s seconds" % (time.time() - start))

@@ -1,78 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
##
## The purpose of this script is to allow cloud-init to consume
## rightscale-style userdata. rightscale user data is key-value pairs
## in a url-query-string-like format.
##
## for cloud-init support, there will be a key named
## 'CLOUD_INIT_REMOTE_HOOK'.
##
## This cloud-config module will
## - read the blob of data from raw user data, and parse it as key/value
## - for each key that is found, download the content to
##   the local instance/scripts directory and set them executable.
## - the files in that directory will be run by the user-scripts module.
##   Therefore, this must run before that.
##

import cloudinit.util as util
from cloudinit.CloudConfig import per_instance
from cloudinit import get_ipath_cur
from urlparse import parse_qs

frequency = per_instance
my_name = "cc_rightscale_userdata"
my_hookname = 'CLOUD_INIT_REMOTE_HOOK'


def handle(_name, _cfg, cloud, log, _args):
    try:
        ud = cloud.get_userdata_raw()
    except:
        log.warn("failed to get raw userdata in %s" % my_name)
        return

    try:
        mdict = parse_qs(ud)
        if my_hookname not in mdict:
            return
    except:
        log.warn("failed to urlparse.parse_qs(userdata_raw())")
        raise

    scripts_d = get_ipath_cur('scripts')
    i = 0
    first_e = None
    for url in mdict[my_hookname]:
        fname = "%s/rightscale-%02i" % (scripts_d, i)
        i = i + 1
        try:
            content = util.readurl(url)
            util.write_file(fname, content, mode=0700)
        except Exception as e:
            if not first_e:
                first_e = e
            log.warn("%s failed to read %s: %s" % (my_name, url, e))

    if first_e:
        raise first_e
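Because rightscale userdata is a query string, repeated `CLOUD_INIT_REMOTE_HOOK` keys parse into a list of URLs, which is what the loop above iterates over. A quick demonstration (on Python 3 the import is `urllib.parse` rather than the `urlparse` module used in the Python 2 code above; the URLs are placeholders):

```python
from urllib.parse import parse_qs  # 'from urlparse import parse_qs' on Python 2

ud = ("CLOUD_INIT_REMOTE_HOOK=http://example.invalid/hook-a"
      "&CLOUD_INIT_REMOTE_HOOK=http://example.invalid/hook-b")
mdict = parse_qs(ud)
print(mdict['CLOUD_INIT_REMOTE_HOOK'])
# -> ['http://example.invalid/hook-a', 'http://example.invalid/hook-b']
```

Each URL in that list becomes one `rightscale-NN` script under the instance scripts directory.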

@@ -1,106 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
import cloudinit.SshUtil as sshutil
import os
import glob
import subprocess

DISABLE_ROOT_OPTS = "no-port-forwarding,no-agent-forwarding," \
    "no-X11-forwarding,command=\"echo \'Please login as the user \\\"$USER\\\" " \
    "rather than the user \\\"root\\\".\';echo;sleep 10\""


def handle(_name, cfg, cloud, log, _args):
    # remove the static keys from the pristine image
    if cfg.get("ssh_deletekeys", True):
        for f in glob.glob("/etc/ssh/ssh_host_*key*"):
            try:
                os.unlink(f)
            except:
                pass

    if "ssh_keys" in cfg:
        # if there are keys in cloud-config, use them
        key2file = {
            "rsa_private": ("/etc/ssh/ssh_host_rsa_key", 0600),
            "rsa_public": ("/etc/ssh/ssh_host_rsa_key.pub", 0644),
            "dsa_private": ("/etc/ssh/ssh_host_dsa_key", 0600),
            "dsa_public": ("/etc/ssh/ssh_host_dsa_key.pub", 0644),
            "ecdsa_private": ("/etc/ssh/ssh_host_ecdsa_key", 0600),
            "ecdsa_public": ("/etc/ssh/ssh_host_ecdsa_key.pub", 0644),
        }

        for key, val in cfg["ssh_keys"].items():
            if key in key2file:
                util.write_file(key2file[key][0], val, key2file[key][1])

        priv2pub = {'rsa_private': 'rsa_public', 'dsa_private': 'dsa_public',
                    'ecdsa_private': 'ecdsa_public', }

        cmd = 'o=$(ssh-keygen -yf "%s") && echo "$o" root@localhost > "%s"'
        for priv, pub in priv2pub.iteritems():
            if pub in cfg['ssh_keys'] or priv not in cfg['ssh_keys']:
                continue
            pair = (key2file[priv][0], key2file[pub][0])
            subprocess.call(('sh', '-xc', cmd % pair))
            log.debug("generated %s from %s" % pair)
    else:
        # if not, generate them
        for keytype in util.get_cfg_option_list_or_str(cfg, 'ssh_genkeytypes',
                                                       ['rsa', 'dsa', 'ecdsa']):
            keyfile = '/etc/ssh/ssh_host_%s_key' % keytype
            if not os.path.exists(keyfile):
                subprocess.call(['ssh-keygen', '-t', keytype, '-N', '',
                                 '-f', keyfile])

    util.restorecon_if_possible('/etc/ssh', recursive=True)

    try:
        user = util.get_cfg_option_str(cfg, 'user')
        disable_root = util.get_cfg_option_bool(cfg, "disable_root", True)
        disable_root_opts = util.get_cfg_option_str(cfg, "disable_root_opts",
                                                    DISABLE_ROOT_OPTS)
        keys = cloud.get_public_ssh_keys()

        if "ssh_authorized_keys" in cfg:
            cfgkeys = cfg["ssh_authorized_keys"]
            keys.extend(cfgkeys)

        apply_credentials(keys, user, disable_root, disable_root_opts, log)
    except:
        util.logexc(log)
        log.warn("applying credentials failed!\n")


def apply_credentials(keys, user, disable_root,
                      disable_root_opts=DISABLE_ROOT_OPTS, log=None):
    keys = set(keys)
    if user:
        sshutil.setup_user_keys(keys, user, '', log)

    if disable_root:
        key_prefix = disable_root_opts.replace('$USER', user)
    else:
        key_prefix = ''

    sshutil.setup_user_keys(keys, 'root', key_prefix, log)
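When a private host key is supplied without its public half, the loop above shells out to `ssh-keygen -y` to regenerate it. The `cmd` template expands into a one-liner like the following (shown here with the rsa paths from `key2file`):

```python
# How the regeneration one-liner above expands for the rsa key pair.
cmd = 'o=$(ssh-keygen -yf "%s") && echo "$o" root@localhost > "%s"'
pair = ('/etc/ssh/ssh_host_rsa_key', '/etc/ssh/ssh_host_rsa_key.pub')
print(cmd % pair)
# -> o=$(ssh-keygen -yf "/etc/ssh/ssh_host_rsa_key") && echo "$o" root@localhost > "/etc/ssh/ssh_host_rsa_key.pub"
```

`ssh-keygen -y` prints the public key derived from the private key file, and the `&&` ensures the `.pub` file is only written when derivation succeeded.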

@@ -1,67 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from cloudinit.CloudConfig import per_instance
from cloudinit import util
import os.path
import shutil

frequency = per_instance
tz_base = "/usr/share/zoneinfo"


def handle(_name, cfg, _cloud, log, args):
    if len(args) != 0:
        timezone = args[0]
    else:
        timezone = util.get_cfg_option_str(cfg, "timezone", False)

    if not timezone:
        return

    tz_file = "%s/%s" % (tz_base, timezone)
    if not os.path.isfile(tz_file):
        log.debug("Invalid timezone %s" % tz_file)
        raise Exception("Invalid timezone %s" % tz_file)

    try:
        fp = open("/etc/timezone", "wb")
        fp.write("%s\n" % timezone)
        fp.close()
    except:
        log.debug("failed to write to /etc/timezone")
        raise

    if os.path.exists("/etc/sysconfig/clock"):
        try:
            with open("/etc/sysconfig/clock", "w") as fp:
                fp.write('ZONE="%s"\n' % timezone)
        except:
            log.debug("failed to write to /etc/sysconfig/clock")
            raise

    try:
        shutil.copy(tz_file, "/etc/localtime")
    except:
        log.debug("failed to copy %s to /etc/localtime" % tz_file)
        raise

    log.debug("set timezone to %s" % timezone)
    return
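The handler validates a timezone name simply by checking that a matching file exists under `tz_base`. That lookup can be isolated as a small pure helper (hypothetical name, shown over an arbitrary base directory so it is testable without a real zoneinfo tree):

```python
import os.path

TZ_BASE = "/usr/share/zoneinfo"


def find_tz_file(timezone, base=TZ_BASE):
    """Return the zoneinfo path for a timezone name, or None if invalid."""
    tz_file = os.path.join(base, timezone)
    if not os.path.isfile(tz_file):
        return None
    return tz_file
```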

@@ -1,87 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
from cloudinit.CloudConfig import per_always
import StringIO

frequency = per_always


def handle(_name, cfg, cloud, log, _args):
    (hostname, fqdn) = util.get_hostname_fqdn(cfg, cloud)

    manage_hosts = util.get_cfg_option_str(cfg, "manage_etc_hosts", False)
    if manage_hosts in ("True", "true", True, "template"):
        # render from template file
        try:
            if not hostname:
                log.info("manage_etc_hosts was set, but no hostname found")
                return
            util.render_to_file('hosts', '/etc/hosts',
                                {'hostname': hostname, 'fqdn': fqdn})
        except Exception:
            log.warn("failed to update /etc/hosts")
            raise
    elif manage_hosts == "localhost":
        log.debug("managing 127.0.1.1 in /etc/hosts")
        update_etc_hosts(hostname, fqdn, log)
        return
    else:
        if manage_hosts not in ("False", False):
            log.warn("Unknown value for manage_etc_hosts. Assuming False")
        else:
            log.debug("not managing /etc/hosts")


def update_etc_hosts(hostname, fqdn, _log):
    with open('/etc/hosts', 'r') as etchosts:
        header = "# Added by cloud-init\n"
        hosts_line = "127.0.1.1\t%s %s\n" % (fqdn, hostname)
        need_write = False
        need_change = True
        new_etchosts = StringIO.StringIO()
        for line in etchosts:
            split_line = [s.strip() for s in line.split()]
            if len(split_line) < 2:
                new_etchosts.write(line)
                continue
            if line == header:
                continue
            ip, hosts = split_line[0], split_line[1:]
            if ip == "127.0.1.1":
                if sorted([hostname, fqdn]) == sorted(hosts):
                    need_change = False
                if need_change:
                    line = "%s%s" % (header, hosts_line)
                    need_change = False
                    need_write = True
            new_etchosts.write(line)

    if need_change:
        new_etchosts.write("%s%s" % (header, hosts_line))
        need_write = True

    if need_write:
        new_etcfile = open('/etc/hosts', 'wb')
        new_etcfile.write(new_etchosts.getvalue())
        new_etcfile.close()
    new_etchosts.close()
    return
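The rewrite logic above is easier to see as a pure function over the file's text: keep every line except a stale managed `127.0.1.1` entry, rewrite that entry under the cloud-init header if its names differ, and append one if none exists. This is a hypothetical standalone restatement, not the cloud-init code itself:

```python
HEADER = "# Added by cloud-init\n"


def rewrite_hosts(content, hostname, fqdn):
    """Return /etc/hosts content with the 127.0.1.1 line managed.

    Returns the input unchanged when no edit is needed, mirroring the
    handler's need_write behaviour.
    """
    hosts_line = "127.0.1.1\t%s %s\n" % (fqdn, hostname)
    out = []
    found = False
    dirty = False
    for line in content.splitlines(True):
        parts = line.split()
        if line == HEADER or len(parts) < 2:
            out.append(line)
            continue
        if parts[0] == "127.0.1.1":
            found = True
            if sorted(parts[1:]) != sorted([hostname, fqdn]):
                line = "%s%s" % (HEADER, hosts_line)
                dirty = True
        out.append(line)
    if not found:
        out.append("%s%s" % (HEADER, hosts_line))
        dirty = True
    return "".join(out) if dirty else content
```

Returning the original string when nothing changed makes the function idempotent, just as the handler skips the write when `need_write` stays false.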

@@ -1,101 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
import subprocess
import errno
from cloudinit.CloudConfig import per_always

frequency = per_always


def handle(_name, cfg, cloud, log, _args):
    if util.get_cfg_option_bool(cfg, "preserve_hostname", False):
        log.debug("preserve_hostname is set. not updating hostname")
        return

    (hostname, _fqdn) = util.get_hostname_fqdn(cfg, cloud)
    try:
        prev = "%s/%s" % (cloud.get_cpath('data'), "previous-hostname")
        update_hostname(hostname, prev, log)
    except Exception:
        log.warn("failed to set hostname\n")
        raise


# read hostname from a 'hostname' file
# allow for comments and stripping line endings.
# if file doesn't exist, or no contents, return default
def read_hostname(filename, default=None):
    try:
        fp = open(filename, "r")
        lines = fp.readlines()
        fp.close()
        for line in lines:
            hpos = line.find("#")
            if hpos != -1:
                line = line[0:hpos]
            line = line.rstrip()
            if line:
                return line
    except IOError as e:
        if e.errno != errno.ENOENT:
            raise
    return default


def update_hostname(hostname, prev_file, log):
    etc_file = "/etc/hostname"

    hostname_prev = None
    hostname_in_etc = None

    try:
        hostname_prev = read_hostname(prev_file)
    except Exception as e:
        log.warn("Failed to open %s: %s" % (prev_file, e))

    try:
        hostname_in_etc = read_hostname(etc_file)
    except:
        log.warn("Failed to open %s" % etc_file)

    update_files = []
    if not hostname_prev or hostname_prev != hostname:
        update_files.append(prev_file)

    if (not hostname_in_etc or
        (hostname_in_etc == hostname_prev and hostname_in_etc != hostname)):
        update_files.append(etc_file)

    try:
        for fname in update_files:
            util.write_file(fname, "%s\n" % hostname, 0644)
            log.debug("wrote %s to %s" % (hostname, fname))
    except:
        log.warn("failed to write hostname to %s" % fname)

    if hostname_in_etc and hostname_prev and hostname_in_etc != hostname_prev:
        log.debug("%s differs from %s. assuming user maintained" %
                  (prev_file, etc_file))

    if etc_file in update_files:
        log.debug("setting hostname to %s" % hostname)
        subprocess.Popen(['hostname', hostname]).communicate()
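The comment-stripping parse in `read_hostname` above can be shown as a pure function over text (hypothetical helper name; the real function also handles a missing file via `errno.ENOENT`):

```python
def first_hostname_line(text, default=None):
    """Return the first non-empty line after stripping '#' comments
    and trailing whitespace; fall back to default if none is found."""
    for line in text.splitlines():
        hpos = line.find("#")
        if hpos != -1:
            line = line[:hpos]
        line = line.rstrip()
        if line:
            return line
    return default
```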

@@ -1,214 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Hafliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
DEP_FILESYSTEM = "FILESYSTEM"
DEP_NETWORK = "NETWORK"

import cloudinit.UserDataHandler as ud
import cloudinit.util as util
import socket


class DataSource:
    userdata = None
    metadata = None
    userdata_raw = None
    cfgname = ""
    # system config (passed in from cloudinit,
    # cloud-config before input from the DataSource)
    sys_cfg = {}
    # datasource config, the cloud-config['datasource']['__name__']
    ds_cfg = {}  # datasource config

    def __init__(self, sys_cfg=None):
        if not self.cfgname:
            name = str(self.__class__).split(".")[-1]
            if name.startswith("DataSource"):
                name = name[len("DataSource"):]
            self.cfgname = name
        if sys_cfg:
            self.sys_cfg = sys_cfg

        self.ds_cfg = util.get_cfg_by_path(self.sys_cfg,
                          ("datasource", self.cfgname), self.ds_cfg)

    def get_userdata(self):
        if self.userdata == None:
            self.userdata = ud.preprocess_userdata(self.userdata_raw)
        return self.userdata

    def get_userdata_raw(self):
        return(self.userdata_raw)

    # the data sources' config_obj is a cloud-config formatted
    # object that came to it from ways other than cloud-config
    # because cloud-config content would be handled elsewhere
    def get_config_obj(self):
        return({})

    def get_public_ssh_keys(self):
        keys = []
        if 'public-keys' not in self.metadata:
            return([])

        if isinstance(self.metadata['public-keys'], str):
            return(str(self.metadata['public-keys']).splitlines())

        if isinstance(self.metadata['public-keys'], list):
            return(self.metadata['public-keys'])

        for _keyname, klist in self.metadata['public-keys'].items():
            # lp:506332 uec metadata service responds with
            # data that makes boto populate a string for 'klist' rather
            # than a list.
            if isinstance(klist, str):
                klist = [klist]
            for pkey in klist:
                # there is an empty string at the end of the keylist, trim it
                if pkey:
                    keys.append(pkey)

        return(keys)

    def device_name_to_device(self, _name):
        # translate a 'name' to a device
        # the primary function at this point is on ec2
        # to consult metadata service, that has
        #  ephemeral0: sdb
        # and return 'sdb' for input 'ephemeral0'
        return(None)

    def get_locale(self):
        return('en_US.UTF-8')

    def get_local_mirror(self):
        return None

    def get_instance_id(self):
        if 'instance-id' not in self.metadata:
            return "iid-datasource"
        return(self.metadata['instance-id'])

    def get_hostname(self, fqdn=False):
        defdomain = "localdomain"
        defhost = "localhost"

        domain = defdomain
        if not 'local-hostname' in self.metadata:
            # this is somewhat questionable really.
            # the cloud datasource was asked for a hostname
            # and didn't have one. raising error might be more appropriate
            # but instead, basically look up the existing hostname
            toks = []

            hostname = socket.gethostname()

            fqdn = util.get_fqdn_from_hosts(hostname)

            if fqdn and fqdn.find(".") > 0:
                toks = str(fqdn).split(".")
            elif hostname:
                toks = [hostname, defdomain]
            else:
                toks = [defhost, defdomain]
        else:
            # if there is an ipv4 address in 'local-hostname', then
            # make up a hostname (LP: #475354) in format ip-xx.xx.xx.xx
            lhost = self.metadata['local-hostname']
            if is_ipv4(lhost):
                toks = ["ip-%s" % lhost.replace(".", "-")]
            else:
                toks = lhost.split(".")

        if len(toks) > 1:
            hostname = toks[0]
            domain = '.'.join(toks[1:])
        else:
            hostname = toks[0]

        if fqdn:
            return "%s.%s" % (hostname, domain)
        else:
            return hostname


# iterate through cfg_list, loading "DataSourceCollections" modules
# and calling their "get_datasource_list".
# return an ordered list of classes that match
#
# - modules must be named "DataSource<item>", where 'item' is an entry
#   in cfg_list
# - if pkglist is given, it will iterate, trying to load from that package
#   ie, pkglist=["foo", ""]
#     will first try to load foo.DataSource<item>
#     then DataSource<item>
def list_sources(cfg_list, depends, pkglist=None):
    if pkglist is None:
        pkglist = []
    retlist = []
    for ds_coll in cfg_list:
        for pkg in pkglist:
            if pkg:
                pkg = "%s." % pkg
            try:
                mod = __import__("%sDataSource%s" % (pkg, ds_coll))
                if pkg:
                    mod = getattr(mod, "DataSource%s" % ds_coll)
                lister = getattr(mod, "get_datasource_list")
                retlist.extend(lister(depends))
                break
            except:
                raise
    return(retlist)


# depends is a list of dependencies (DEP_FILESYSTEM)
# dslist is a list of 2 item lists
# dslist = [
#   (class, (depends-that-this-class-needs))
# ]
# it returns a list of 'class' that matched these deps exactly
# it is a helper function for DataSourceCollections
def list_from_depends(depends, dslist):
    retlist = []
    depset = set(depends)
    for elem in dslist:
        (cls, deps) = elem
        if depset == set(deps):
            retlist.append(cls)
    return(retlist)


def is_ipv4(instr):
    """determine if input string is an ipv4 address. return boolean"""
    toks = instr.split('.')
    if len(toks) != 4:
        return False

    try:
        toks = [x for x in toks if (0 <= int(x) <= 255)]
    except:
        return False

    return (len(toks) == 4)
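Datasource selection hinges on `list_from_depends` matching dependency sets exactly, so a network-dependent source is never offered during the early, pre-network pass. A condensed standalone restatement with stand-in classes (the `LocalSource`/`NetSource` names are made up for illustration):

```python
DEP_FILESYSTEM = "FILESYSTEM"
DEP_NETWORK = "NETWORK"


def list_from_depends(depends, dslist):
    """Return classes whose declared deps exactly match 'depends'."""
    depset = set(depends)
    return [cls for (cls, deps) in dslist if depset == set(deps)]


class LocalSource:
    pass


class NetSource:
    pass


SOURCES = [
    (LocalSource, (DEP_FILESYSTEM,)),
    (NetSource, (DEP_FILESYSTEM, DEP_NETWORK)),
]
```

Because matching is by set equality rather than subset, a source declaring both deps is selected only once the network-capable stage asks for both.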

@@ -1,92 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Cosmin Luta
#
# Author: Cosmin Luta <q4break@gmail.com>
# Author: Scott Moser <scott.moser@canonical.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.DataSource as DataSource

from cloudinit import seeddir as base_seeddir
from cloudinit import log
import cloudinit.util as util
from socket import inet_ntoa
import time
import boto.utils as boto_utils
from struct import pack


class DataSourceCloudStack(DataSource.DataSource):
    api_ver = 'latest'
    seeddir = base_seeddir + '/cs'
    metadata_address = None

    def __init__(self, sys_cfg=None):
        DataSource.DataSource.__init__(self, sys_cfg)
        # Cloudstack has its metadata/userdata URLs located at
        # http://<default-gateway-ip>/latest/
        self.metadata_address = "http://%s/" % self.get_default_gateway()

    def get_default_gateway(self):
        """Returns the default gateway ip address in the dotted format"""
        with open("/proc/net/route", "r") as f:
            for line in f.readlines():
                items = line.split("\t")
                if items[1] == "00000000":
                    # found the default route, get the gateway
                    gw = inet_ntoa(pack("<L", int(items[2], 16)))
                    log.debug("found default route, gateway is %s" % gw)
                    return gw

    def __str__(self):
        return "DataSourceCloudStack"

    def get_data(self):
        seedret = {}
        if util.read_optional_seed(seedret, base=self.seeddir + "/"):
            self.userdata_raw = seedret['user-data']
            self.metadata = seedret['meta-data']
            log.debug("using seeded cs data in %s" % self.seeddir)
            return True

        try:
            start = time.time()
            self.userdata_raw = boto_utils.get_instance_userdata(self.api_ver,
                None, self.metadata_address)
            self.metadata = boto_utils.get_instance_metadata(self.api_ver,
                self.metadata_address)
            log.debug("crawl of metadata service took %ds" %
                      (time.time() - start))
            return True
        except Exception as e:
            log.exception(e)
            return False

    def get_instance_id(self):
        return self.metadata['instance-id']

    def get_availability_zone(self):
        return self.metadata['availability-zone']


datasources = [
    (DataSourceCloudStack, (DataSource.DEP_FILESYSTEM,
                            DataSource.DEP_NETWORK)),
]


# return a list of data sources that match this set of dependencies
def get_datasource_list(depends):
    return DataSource.list_from_depends(depends, datasources)
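`/proc/net/route` stores addresses as little-endian hexadecimal words, which is why `get_default_gateway` must `pack` the gateway field before handing it to `inet_ntoa`. A standalone sketch over sample text (the route-table content below is made up for illustration):

```python
from socket import inet_ntoa
from struct import pack

SAMPLE_ROUTE = (
    "Iface\tDestination\tGateway\tFlags\n"
    "eth0\t0101A8C0\t00000000\t0001\n"   # directly-connected route
    "eth0\t00000000\t0101A8C0\t0003\n"   # default route via 192.168.1.1
)


def default_gateway(route_text):
    """Return the default gateway in dotted form, or None."""
    for line in route_text.splitlines():
        items = line.split("\t")
        if len(items) > 2 and items[1] == "00000000":
            # destination 0.0.0.0 marks the default route; field 2 is the
            # gateway as a little-endian hex word
            return inet_ntoa(pack("<L", int(items[2], 16)))
    return None
```

Here `0101A8C0` unpacks byte-by-byte (low byte first) to `192.168.1.1`.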

@@ -1,231 +0,0 @@
# Copyright (C) 2012 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.DataSource as DataSource

from cloudinit import seeddir as base_seeddir
from cloudinit import log
import cloudinit.util as util
import os.path
import os
import json
import subprocess

DEFAULT_IID = "iid-dsconfigdrive"


class DataSourceConfigDrive(DataSource.DataSource):
    seed = None
    seeddir = base_seeddir + '/config_drive'
    cfg = {}
    userdata_raw = None
    metadata = None
    dsmode = "local"

    def __str__(self):
        mstr = "DataSourceConfigDrive[%s]" % self.dsmode
        mstr = mstr + " [seed=%s]" % self.seed
        return(mstr)

    def get_data(self):
        found = None
        md = {}
        ud = ""

        defaults = {"instance-id": DEFAULT_IID, "dsmode": "pass"}

        if os.path.isdir(self.seeddir):
            try:
                (md, ud) = read_config_drive_dir(self.seeddir)
                found = self.seeddir
            except nonConfigDriveDir:
                pass

        if not found:
            dev = cfg_drive_device()
            if dev:
                try:
                    (md, ud) = util.mount_callback_umount(dev,
                        read_config_drive_dir)
                    found = dev
                except (nonConfigDriveDir, util.mountFailedError):
                    pass

        if not found:
            return False

        if 'dscfg' in md:
            self.cfg = md['dscfg']

        md = util.mergedict(md, defaults)

        # update interfaces and ifup only on the local datasource
        # this way the DataSourceConfigDriveNet doesn't do it also.
        if 'network-interfaces' in md and self.dsmode == "local":
            if md['dsmode'] == "pass":
                log.info("updating network interfaces from configdrive")
            else:
                log.debug("updating network interfaces from configdrive")

            util.write_file("/etc/network/interfaces",
                            md['network-interfaces'])
            try:
                (out, err) = util.subp(['ifup', '--all'])
                if len(out) or len(err):
                    log.warn("ifup --all had stderr: %s" % err)
            except subprocess.CalledProcessError as exc:
                log.warn("ifup --all failed: %s" % (exc.output[1]))

        self.seed = found
        self.metadata = md
        self.userdata_raw = ud

        if md['dsmode'] == self.dsmode:
            return True

        log.debug("%s: not claiming datasource, dsmode=%s" %
                  (self, md['dsmode']))
        return False

    def get_public_ssh_keys(self):
        if not 'public-keys' in self.metadata:
            return([])
        return(self.metadata['public-keys'])

    # the data sources' config_obj is a cloud-config formatted
    # object that came to it from ways other than cloud-config
    # because cloud-config content would be handled elsewhere
    def get_config_obj(self):
        return(self.cfg)


class DataSourceConfigDriveNet(DataSourceConfigDrive):
    dsmode = "net"


class nonConfigDriveDir(Exception):
    pass


def cfg_drive_device():
    """get the config drive device. return a string like '/dev/vdb'
       or None (if there is no non-root device attached). This does not
       check the contents, only reports that if there *were* a config_drive
       attached, it would be this device.
       per config_drive documentation, this is
       "associated as the last available disk on the instance"
    """

    if 'CLOUD_INIT_CONFIG_DRIVE_DEVICE' in os.environ:
        return(os.environ['CLOUD_INIT_CONFIG_DRIVE_DEVICE'])

    # we are looking for a raw block device (sda, not sda1) with a vfat
    # filesystem on it.
    letters = "abcdefghijklmnopqrstuvwxyz"
    devs = util.find_devs_with("TYPE=vfat")

    # filter out anything not ending in a letter (ignore partitions)
    devs = [f for f in devs if f[-1] in letters]

    # sort them in reverse so "last" device is first
    devs.sort(reverse=True)

    if len(devs):
        return(devs[0])

    return(None)


def read_config_drive_dir(source_dir):
    """
    read_config_drive_dir(source_dir):
       read source_dir, and return a tuple with metadata dict and user-data
       string populated.  If not a valid dir, raise a nonConfigDriveDir
    """
    md = {}
    ud = ""

    flist = ("etc/network/interfaces", "root/.ssh/authorized_keys", "meta.js")
    found = [f for f in flist if os.path.isfile("%s/%s" % (source_dir, f))]
    keydata = ""

    if len(found) == 0:
        raise nonConfigDriveDir("%s: %s" % (source_dir, "no files found"))

    if "etc/network/interfaces" in found:
        with open("%s/%s" % (source_dir, "etc/network/interfaces")) as fp:
            md['network-interfaces'] = fp.read()

    if "root/.ssh/authorized_keys" in found:
        with open("%s/%s" % (source_dir, "root/.ssh/authorized_keys")) as fp:
            keydata = fp.read()

    meta_js = {}

    if "meta.js" in found:
        content = ''
        with open("%s/%s" % (source_dir, "meta.js")) as fp:
            content = fp.read()
            md['meta_js'] = content
        try:
            meta_js = json.loads(content)
        except ValueError:
            raise nonConfigDriveDir("%s: %s" %
                (source_dir, "invalid json in meta.js"))

    keydata = meta_js.get('public-keys', keydata)

    if keydata:
        lines = keydata.splitlines()
        md['public-keys'] = [l for l in lines
            if len(l) and not l.startswith("#")]

    for copy in ('dsmode', 'instance-id', 'dscfg'):
        if copy in meta_js:
            md[copy] = meta_js[copy]

    if 'user-data' in meta_js:
        ud = meta_js['user-data']

    return(md, ud)


datasources = (
    (DataSourceConfigDrive, (DataSource.DEP_FILESYSTEM, )),
    (DataSourceConfigDriveNet,
        (DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),
)


# return a list of data sources that match this set of dependencies
def get_datasource_list(depends):
    return(DataSource.list_from_depends(depends, datasources))


if __name__ == "__main__":
    def main():
        import sys
        import pprint
        print cfg_drive_device()
        (md, ud) = read_config_drive_dir(sys.argv[1])
        print "=== md ==="
        pprint.pprint(md)
        print "=== ud ==="
        print(ud)

    main()

# vi: ts=4 expandtab
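The "last available disk" rule in `cfg_drive_device` reduces to: drop partition names (those ending in a digit) and take the alphabetically last whole-disk candidate. A hypothetical pure-function sketch of that selection:

```python
def pick_config_drive(devs):
    """Given candidate vfat device paths, ignore partitions (names not
    ending in a letter) and return the alphabetically last whole disk,
    or None when no whole disk remains."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    whole = [d for d in devs if d[-1] in letters]
    return max(whole) if whole else None
```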

@@ -1,217 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Hafliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.DataSource as DataSource

from cloudinit import seeddir as base_seeddir
from cloudinit import log
import cloudinit.util as util
import socket
import time
import boto.utils as boto_utils
import os.path


class DataSourceEc2(DataSource.DataSource):
    api_ver = '2009-04-04'
    seeddir = base_seeddir + '/ec2'
    metadata_address = "http://169.254.169.254"

    def __str__(self):
        return("DataSourceEc2")

    def get_data(self):
        seedret = {}
        if util.read_optional_seed(seedret, base=self.seeddir + "/"):
            self.userdata_raw = seedret['user-data']
            self.metadata = seedret['meta-data']
            log.debug("using seeded ec2 data in %s" % self.seeddir)
            return True

        try:
            if not self.wait_for_metadata_service():
                return False
            start = time.time()
            self.userdata_raw = boto_utils.get_instance_userdata(self.api_ver,
                None, self.metadata_address)
            self.metadata = boto_utils.get_instance_metadata(self.api_ver,
                self.metadata_address)
            log.debug("crawl of metadata service took %ds" % (time.time() -
                                                              start))
            return True
        except Exception as e:
            print e
            return False

    def get_instance_id(self):
        return(self.metadata['instance-id'])

    def get_availability_zone(self):
        return(self.metadata['placement']['availability-zone'])

    def get_local_mirror(self):
        return(self.get_mirror_from_availability_zone())

    def get_mirror_from_availability_zone(self, availability_zone=None):
        # availability is like 'us-west-1b' or 'eu-west-1a'
        if availability_zone == None:
            availability_zone = self.get_availability_zone()

        fallback = None

        if self.is_vpc():
            return fallback

        try:
            host = "%s.ec2.archive.ubuntu.com" % availability_zone[:-1]
            socket.getaddrinfo(host, None, 0, socket.SOCK_STREAM)
            return 'http://%s/ubuntu/' % host
        except:
            return fallback

    def wait_for_metadata_service(self):
        mcfg = self.ds_cfg
        if not hasattr(mcfg, "get"):
            mcfg = {}

        max_wait = 120
        try:
            max_wait = int(mcfg.get("max_wait", max_wait))
        except Exception:
            util.logexc(log)
            log.warn("Failed to get max wait. using %s" % max_wait)

        if max_wait == 0:
            return False

        timeout = 50
        try:
            timeout = int(mcfg.get("timeout", timeout))
        except Exception:
            util.logexc(log)
            log.warn("Failed to get timeout, using %s" % timeout)

        def_mdurls = ["http://169.254.169.254", "http://instance-data:8773"]
        mdurls = mcfg.get("metadata_urls", def_mdurls)

        # Remove addresses from the list that won't resolve.
        filtered = [x for x in mdurls if util.is_resolvable_url(x)]

        if set(filtered) != set(mdurls):
            log.debug("removed the following from metadata urls: %s" %
                      list((set(mdurls) - set(filtered))))

        if len(filtered):
            mdurls = filtered
        else:
            log.warn("Empty metadata url list! using default list")
            mdurls = def_mdurls

        urls = []
        url2base = {False: False}
        for url in mdurls:
            cur = "%s/%s/meta-data/instance-id" % (url, self.api_ver)
            urls.append(cur)
            url2base[cur] = url

        starttime = time.time()
        url = util.wait_for_url(urls=urls, max_wait=max_wait,
                                timeout=timeout, status_cb=log.warn)

        if url:
            log.debug("Using metadata source: '%s'" % url2base[url])
        else:
            log.critical("giving up on md after %i seconds\n" %
                         int(time.time() - starttime))

        self.metadata_address = url2base[url]
        return (bool(url))

    def device_name_to_device(self, name):
        # consult metadata service, that has
        #  ephemeral0: sdb
        # and return 'sdb' for input 'ephemeral0'
        if 'block-device-mapping' not in self.metadata:
            return(None)

        found = None
        for entname, device in self.metadata['block-device-mapping'].items():
            if entname == name:
                found = device
                break
            # LP: #513842 mapping in Euca has 'ephemeral' not 'ephemeral0'
            if entname == "ephemeral" and name == "ephemeral0":
                found = device

        if found == None:
            log.debug("unable to convert %s to a device" % name)
            return None

        # LP: #611137
        # the metadata service may believe that devices are named 'sda'
        # when the kernel named them 'vda' or 'xvda'
        # we want to return the correct value for what will actually
        # exist in this instance
        mappings = {"sd": ("vd", "xvd")}
        ofound = found
        short = os.path.basename(found)

        if not found.startswith("/"):
            found = "/dev/%s" % found

        if os.path.exists(found):
            return(found)

        for nfrom, tlist in mappings.items():
            if not short.startswith(nfrom):
                continue
            for nto in tlist:
                cand = "/dev/%s%s" % (nto, short[len(nfrom):])
                if os.path.exists(cand):
                    log.debug("remapped device name %s => %s" % (found, cand))
                    return(cand)

        # on t1.micro, ephemeral0 will appear in block-device-mapping from
        # metadata, but it will not exist on disk (and never will)
        # at this point, we've verified that the path did not exist
        # in the special case of 'ephemeral0' return None to avoid bogus
        # fstab entry (LP: #744019)
        if name == "ephemeral0":
            return None
        return ofound

    def is_vpc(self):
        # per comment in LP: #615545
        ph = "public-hostname"
        p4 = "public-ipv4"
        if ((ph not in self.metadata or self.metadata[ph] == "") and
            (p4 not in self.metadata or self.metadata[p4] == "")):
            return True
        return False


datasources = [
    (DataSourceEc2, (DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),
]


# return a list of data sources that match this set of dependencies
def get_datasource_list(depends):
    return(DataSource.list_from_depends(depends, datasources))
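The LP: #611137 workaround in `device_name_to_device` can be isolated by injecting the existence check, so the `sd` → `vd`/`xvd` remapping is visible without real devices. This is a hypothetical restatement, not the original method:

```python
def remap_device(found, exists, mappings=None):
    """Resolve a metadata device name to a path that exists.

    'exists' is an injected predicate (os.path.exists in real use); the
    metadata may say 'sdb' while the kernel named the disk 'vdb' or 'xvdb'.
    Returns the resolved path, or None when no candidate exists.
    """
    if mappings is None:
        mappings = {"sd": ("vd", "xvd")}
    short = found.split("/")[-1]
    if not found.startswith("/"):
        found = "/dev/%s" % found
    if exists(found):
        return found
    for nfrom, tlist in mappings.items():
        if not short.startswith(nfrom):
            continue
        for nto in tlist:
            cand = "/dev/%s%s" % (nto, short[len(nfrom):])
            if exists(cand):
                return cand
    return None
```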

@@ -1,345 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
#
# Author: Scott Moser <scott.moser@canonical.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.DataSource as DataSource
from cloudinit import seeddir as base_seeddir
from cloudinit import log
import cloudinit.util as util
import errno
import oauth.oauth as oauth
import os.path
import urllib2
import time
MD_VERSION = "2012-03-01"
class DataSourceMAAS(DataSource.DataSource):
"""
DataSourceMAAS reads instance information from MAAS.
Given a config metadata_url, and oauth tokens, it expects to find
files under the root named:
instance-id
user-data
hostname
"""
seeddir = base_seeddir + '/maas'
baseurl = None
def __str__(self):
return("DataSourceMAAS[%s]" % self.baseurl)
def get_data(self):
mcfg = self.ds_cfg
try:
(userdata, metadata) = read_maas_seed_dir(self.seeddir)
self.userdata_raw = userdata
self.metadata = metadata
self.baseurl = self.seeddir
return True
except MAASSeedDirNone:
pass
except MAASSeedDirMalformed as exc:
log.warn("%s was malformed: %s\n" % (self.seeddir, exc))
raise
try:
# if there is no metadata_url, then we're not configured
url = mcfg.get('metadata_url', None)
if url == None:
return False
if not self.wait_for_metadata_service(url):
return False
self.baseurl = url
(userdata, metadata) = read_maas_seed_url(self.baseurl,
self.md_headers)
self.userdata_raw = userdata
self.metadata = metadata
return True
except Exception:
util.logexc(log)
return False
def md_headers(self, url):
mcfg = self.ds_cfg
# if we are missing token_key, token_secret or consumer_key
# then just do non-authed requests
for required in ('token_key', 'token_secret', 'consumer_key'):
if required not in mcfg:
return({})
consumer_secret = mcfg.get('consumer_secret', "")
return(oauth_headers(url=url, consumer_key=mcfg['consumer_key'],
token_key=mcfg['token_key'], token_secret=mcfg['token_secret'],
consumer_secret=consumer_secret))
def wait_for_metadata_service(self, url):
mcfg = self.ds_cfg
max_wait = 120
try:
max_wait = int(mcfg.get("max_wait", max_wait))
except Exception:
util.logexc(log)
log.warn("Failed to get max wait. using %s" % max_wait)
if max_wait == 0:
return False
timeout = 50
try:
timeout = int(mcfg.get("timeout", timeout))
except Exception:
util.logexc(log)
log.warn("Failed to get timeout, using %s" % timeout)
starttime = time.time()
check_url = "%s/%s/meta-data/instance-id" % (url, MD_VERSION)
url = util.wait_for_url(urls=[check_url], max_wait=max_wait,
timeout=timeout, status_cb=log.warn,
headers_cb=self.md_headers)
if url:
log.debug("Using metadata source: '%s'" % url)
else:
log.critical("giving up on md after %i seconds\n" %
int(time.time() - starttime))
return (bool(url))
def read_maas_seed_dir(seed_d):
"""
Return user-data and metadata for a maas seed dir in seed_d.
Expected format of seed_d are the following files:
* instance-id
* local-hostname
* user-data
"""
files = ('local-hostname', 'instance-id', 'user-data', 'public-keys')
md = {}
if not os.path.isdir(seed_d):
raise MAASSeedDirNone("%s: not a directory")
for fname in files:
try:
with open(os.path.join(seed_d, fname)) as fp:
md[fname] = fp.read()
fp.close()
except IOError as e:
if e.errno != errno.ENOENT:
raise
return(check_seed_contents(md, seed_d))
def read_maas_seed_url(seed_url, header_cb=None, timeout=None,
version=MD_VERSION):
"""
Read the maas datasource at seed_url.
header_cb is a method that should return a headers dictionary that will
be given to urllib2.Request()
Expected format of seed_url is are the following files:
* <seed_url>/<version>/meta-data/instance-id
* <seed_url>/<version>/meta-data/local-hostname
* <seed_url>/<version>/user-data
"""
files = ('meta-data/local-hostname',
'meta-data/instance-id',
'meta-data/public-keys',
'user-data')
base_url = "%s/%s" % (seed_url, version)
md = {}
for fname in files:
url = "%s/%s" % (base_url, fname)
if header_cb:
headers = header_cb(url)
else:
headers = {}
try:
req = urllib2.Request(url, data=None, headers=headers)
resp = urllib2.urlopen(req, timeout=timeout)
md[os.path.basename(fname)] = resp.read()
except urllib2.HTTPError as e:
if e.code != 404:
raise
return(check_seed_contents(md, seed_url))
def check_seed_contents(content, seed):
"""Validate if content is Is the content a dict that is valid as a
return for a datasource.
Either return a (userdata, metadata) tuple or
Raise MAASSeedDirMalformed or MAASSeedDirNone
"""
md_required = ('instance-id', 'local-hostname')
found = content.keys()
if len(content) == 0:
raise MAASSeedDirNone("%s: no data files found" % seed)
missing = [k for k in md_required if k not in found]
if len(missing):
raise MAASSeedDirMalformed("%s: missing files %s" % (seed, missing))
userdata = content.get('user-data', "")
md = {}
for (key, val) in content.iteritems():
if key == 'user-data':
continue
md[key] = val
return(userdata, md)
def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret):
consumer = oauth.OAuthConsumer(consumer_key, consumer_secret)
token = oauth.OAuthToken(token_key, token_secret)
params = {
'oauth_version': "1.0",
'oauth_nonce': oauth.generate_nonce(),
'oauth_timestamp': int(time.time()),
'oauth_token': token.key,
'oauth_consumer_key': consumer.key,
}
req = oauth.OAuthRequest(http_url=url, parameters=params)
req.sign_request(oauth.OAuthSignatureMethod_PLAINTEXT(),
consumer, token)
return(req.to_header())
class MAASSeedDirNone(Exception):
pass
class MAASSeedDirMalformed(Exception):
pass
datasources = [
(DataSourceMAAS, (DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),
]
# return a list of data sources that match this set of dependencies
def get_datasource_list(depends):
return(DataSource.list_from_depends(depends, datasources))
if __name__ == "__main__":
def main():
"""
Call with a single argument that is a directory or an http/https url.
If a url is given, additional arguments are allowed, which will be
interpreted as consumer_key, token_key, token_secret, consumer_secret.
"""
import argparse
import pprint
parser = argparse.ArgumentParser(description='Interact with MAAS DS')
parser.add_argument("--config", metavar="file",
help="specify DS config file", default=None)
parser.add_argument("--ckey", metavar="key",
help="the consumer key to auth with", default=None)
parser.add_argument("--tkey", metavar="key",
help="the token key to auth with", default=None)
parser.add_argument("--csec", metavar="secret",
help="the consumer secret (likely '')", default="")
parser.add_argument("--tsec", metavar="secret",
help="the token secret to auth with", default=None)
parser.add_argument("--apiver", metavar="version",
help="the apiver to use ("" can be used)", default=MD_VERSION)
subcmds = parser.add_subparsers(title="subcommands", dest="subcmd")
subcmds.add_parser('crawl', help="crawl the datasource")
subcmds.add_parser('get', help="do a single GET of provided url")
subcmds.add_parser('check-seed', help="read and verify seed at url")
parser.add_argument("url", help="the data source to query")
args = parser.parse_args()
creds = {'consumer_key': args.ckey, 'token_key': args.tkey,
'token_secret': args.tsec, 'consumer_secret': args.csec}
if args.config:
import yaml
with open(args.config) as fp:
cfg = yaml.safe_load(fp)
if 'datasource' in cfg:
cfg = cfg['datasource']['MAAS']
for key in creds.keys():
if key in cfg and creds[key] == None:
creds[key] = cfg[key]
def geturl(url, headers_cb):
req = urllib2.Request(url, data=None, headers=headers_cb(url))
return(urllib2.urlopen(req).read())
def printurl(url, headers_cb):
print "== %s ==\n%s\n" % (url, geturl(url, headers_cb))
def crawl(url, headers_cb=None):
if url.endswith("/"):
for line in geturl(url, headers_cb).splitlines():
if line.endswith("/"):
crawl("%s%s" % (url, line), headers_cb)
else:
printurl("%s%s" % (url, line), headers_cb)
else:
printurl(url, headers_cb)
def my_headers(url):
headers = {}
if creds.get('consumer_key', None) != None:
headers = oauth_headers(url, **creds)
return headers
if args.subcmd == "check-seed":
if args.url.startswith("http"):
(userdata, metadata) = read_maas_seed_url(args.url,
header_cb=my_headers, version=args.apiver)
else:
(userdata, metadata) = read_maas_seed_dir(args.url)
print "=== userdata ==="
print userdata
print "=== metadata ==="
pprint.pprint(metadata)
elif args.subcmd == "get":
printurl(args.url, my_headers)
elif args.subcmd == "crawl":
if not args.url.endswith("/"):
args.url = "%s/" % args.url
crawl(args.url, my_headers)
main()


@@ -1,227 +0,0 @@
#!/usr/bin/python
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import os.path
import cloudinit.util as util
class AuthKeyEntry():
# lines are options, keytype, base64-encoded key, comment
# man page says the following which I did not understand:
# The options field is optional; its presence is determined by whether
# the line starts with a number or not (the options field never starts
# with a number)
options = None
keytype = None
base64 = None
comment = None
is_comment = False
line_in = ""
def __init__(self, line, def_opt=None):
line = line.rstrip("\n\r")
self.line_in = line
if line.startswith("#") or line.strip() == "":
self.is_comment = True
else:
ent = line.strip()
toks = ent.split(None, 3)
if len(toks) == 1:
self.base64 = toks[0]
elif len(toks) == 2:
(self.base64, self.comment) = toks
elif len(toks) == 3:
(self.keytype, self.base64, self.comment) = toks
elif len(toks) == 4:
i = 0
ent = line.strip()
quoted = False
# taken from auth_rsa_key_allowed in auth-rsa.c
try:
while (i < len(ent) and
((quoted) or (ent[i] not in (" ", "\t")))):
curc = ent[i]
nextc = ent[i + 1]
if curc == "\\" and nextc == '"':
i = i + 1
elif curc == '"':
quoted = not quoted
i = i + 1
except IndexError:
self.is_comment = True
return
try:
self.options = ent[0:i]
(self.keytype, self.base64, self.comment) = \
ent[i + 1:].split(None, 3)
except ValueError:
# we did not understand this line
self.is_comment = True
if self.options == None and def_opt:
self.options = def_opt
return
def debug(self):
print("line_in=%s\ncomment: %s\noptions=%s\nkeytype=%s\nbase64=%s\n"
"comment=%s\n" % (self.line_in, self.is_comment, self.options,
self.keytype, self.base64, self.comment)),
def __repr__(self):
if self.is_comment:
return(self.line_in)
else:
toks = []
for e in (self.options, self.keytype, self.base64, self.comment):
if e:
toks.append(e)
return(' '.join(toks))
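For the simple cases (no options field), the tokenizing in `AuthKeyEntry.__init__` boils down to one split. A minimal sketch with a hypothetical helper name:

```python
# Sketch of the no-options cases handled by AuthKeyEntry:
# 1 token  -> base64 only
# 2 tokens -> base64 + comment
# 3 tokens -> keytype + base64 + comment
def parse_pubkey_line(line):
    toks = line.strip().split(None, 2)
    if len(toks) == 1:
        return (None, toks[0], None)
    if len(toks) == 2:
        return (None, toks[0], toks[1])
    return (toks[0], toks[1], toks[2])
```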
def update_authorized_keys(fname, keys):
# keys is a list of AuthKeyEntries
# key_prefix is the prefix (options) to prepend
try:
fp = open(fname, "r")
lines = fp.readlines() # lines have carriage return
fp.close()
except IOError:
lines = []
ka_stats = {} # keys_added status
for k in keys:
ka_stats[k] = False
to_add = []
for key in keys:
to_add.append(key)
for i in range(0, len(lines)):
ent = AuthKeyEntry(lines[i])
for k in keys:
if k.base64 == ent.base64 and not k.is_comment:
ent = k
try:
to_add.remove(k)
except ValueError:
pass
lines[i] = str(ent)
# now append any entries we did not match above
for key in to_add:
lines.append(str(key))
if len(lines) == 0:
return("")
else:
return('\n'.join(lines) + "\n")
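`update_authorized_keys` matches existing lines against new entries by their base64 payload, replacing matches in place and appending whatever was not matched. A simplified sketch of that merge (hypothetical names, Python 3; the real function works on `AuthKeyEntry` objects):

```python
# Sketch of the merge: entries are (base64, full_line) pairs here.
# Matching lines are replaced in place; unmatched new keys are appended.
def merge_keys(existing_lines, new_entries):
    new_by_b64 = dict(new_entries)
    out = []
    for line in existing_lines:
        toks = line.split()
        b64 = toks[1] if len(toks) > 1 else line
        if b64 in new_by_b64:
            out.append(new_by_b64.pop(b64))
        else:
            out.append(line)
    out.extend(new_by_b64.values())
    return out
```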
def setup_user_keys(keys, user, key_prefix, log=None):
import pwd
saved_umask = os.umask(077)
pwent = pwd.getpwnam(user)
ssh_dir = '%s/.ssh' % pwent.pw_dir
if not os.path.exists(ssh_dir):
os.mkdir(ssh_dir)
os.chown(ssh_dir, pwent.pw_uid, pwent.pw_gid)
try:
ssh_cfg = parse_ssh_config()
akeys = ssh_cfg.get("AuthorizedKeysFile", "%h/.ssh/authorized_keys")
akeys = akeys.replace("%h", pwent.pw_dir)
akeys = akeys.replace("%u", user)
if not akeys.startswith('/'):
akeys = os.path.join(pwent.pw_dir, akeys)
authorized_keys = akeys
except Exception:
authorized_keys = '%s/.ssh/authorized_keys' % pwent.pw_dir
if log:
util.logexc(log)
key_entries = []
for k in keys:
ke = AuthKeyEntry(k, def_opt=key_prefix)
key_entries.append(ke)
content = update_authorized_keys(authorized_keys, key_entries)
util.write_file(authorized_keys, content, 0600)
os.chown(authorized_keys, pwent.pw_uid, pwent.pw_gid)
util.restorecon_if_possible(ssh_dir, recursive=True)
os.umask(saved_umask)
def parse_ssh_config(fname="/etc/ssh/sshd_config"):
ret = {}
fp = open(fname)
for l in fp.readlines():
l = l.strip()
if not l or l.startswith("#"):
continue
key, val = l.split(None, 1)
ret[key] = val
fp.close()
return(ret)
if __name__ == "__main__":
def main():
import sys
# usage: orig_file, new_keys, [key_prefix]
# prints out merged, where 'new_keys' will trump old
## example
## ### begin auth_keys ###
# ssh-rsa AAAAB3NzaC1xxxxxxxxxV3csgm8cJn7UveKHkYjJp8= smoser-work
# ssh-rsa AAAAB3NzaC1xxxxxxxxxCmXp5Kt5/82cD/VN3NtHw== smoser@brickies
# ### end authorized_keys ###
#
# ### begin new_keys ###
# ssh-rsa nonmatch smoser@newhost
# ssh-rsa AAAAB3NzaC1xxxxxxxxxV3csgm8cJn7UveKHkYjJp8= new_comment
# ### end new_keys ###
#
# Then run as:
# program auth_keys new_keys \
# 'no-port-forwarding,command=\"echo hi world;\"'
def_prefix = None
orig_key_file = sys.argv[1]
new_key_file = sys.argv[2]
if len(sys.argv) > 3:
def_prefix = sys.argv[3]
fp = open(new_key_file)
newkeys = []
for line in fp.readlines():
newkeys.append(AuthKeyEntry(line, def_prefix))
fp.close()
print update_authorized_keys(orig_key_file, newkeys)
main()
# vi: ts=4 expandtab


@@ -1,262 +0,0 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import email
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase
import yaml
import cloudinit
import cloudinit.util as util
import hashlib
import urllib
starts_with_mappings = {
'#include': 'text/x-include-url',
'#include-once': 'text/x-include-once-url',
'#!': 'text/x-shellscript',
'#cloud-config': 'text/cloud-config',
'#upstart-job': 'text/upstart-job',
'#part-handler': 'text/part-handler',
'#cloud-boothook': 'text/cloud-boothook',
'#cloud-config-archive': 'text/cloud-config-archive',
}
# if 'string' is compressed return decompressed otherwise return it
def decomp_str(string):
import StringIO
import gzip
try:
uncomp = gzip.GzipFile(None, "rb", 1, StringIO.StringIO(string)).read()
return(uncomp)
except:
return(string)
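The same try-gzip-else-passthrough pattern, written for Python 3 bytes (a sketch, not this module's API):

```python
import gzip
import io

# Return gzip-decompressed data, or the input unchanged if it is not
# gzip-compressed -- the same fall-through behaviour as decomp_str above.
def decomp_bytes(data):
    try:
        return gzip.GzipFile(fileobj=io.BytesIO(data)).read()
    except OSError:
        return data
```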
def do_include(content, appendmsg):
import os
# is just a list of urls, one per line
# also support '#include <url here>'
includeonce = False
for line in content.splitlines():
if line == "#include":
continue
if line == "#include-once":
includeonce = True
continue
if line.startswith("#include-once"):
line = line[len("#include-once"):].lstrip()
includeonce = True
elif line.startswith("#include"):
line = line[len("#include"):].lstrip()
if line.startswith("#"):
continue
if line.strip() == "":
continue
# urls cannot have leading or trailing white space
msum = hashlib.md5() # pylint: disable=E1101
msum.update(line.strip())
includeonce_filename = "%s/urlcache/%s" % (
cloudinit.get_ipath_cur("data"), msum.hexdigest())
try:
if includeonce and os.path.isfile(includeonce_filename):
with open(includeonce_filename, "r") as fp:
content = fp.read()
else:
content = urllib.urlopen(line).read()
if includeonce:
util.write_file(includeonce_filename, content, mode=0600)
except Exception:
raise
process_includes(message_from_string(decomp_str(content)), appendmsg)
def explode_cc_archive(archive, appendmsg):
for ent in yaml.safe_load(archive):
# ent can be one of:
# dict { 'filename' : 'value', 'content' : 'value', 'type' : 'value' }
# filename and type may not be present
# or
# scalar(payload)
def_type = "text/cloud-config"
if isinstance(ent, str):
ent = {'content': ent}
content = ent.get('content', '')
mtype = ent.get('type', None)
if mtype == None:
mtype = type_from_startswith(content, def_type)
maintype, subtype = mtype.split('/', 1)
if maintype == "text":
msg = MIMEText(content, _subtype=subtype)
else:
msg = MIMEBase(maintype, subtype)
msg.set_payload(content)
if 'filename' in ent:
msg.add_header('Content-Disposition', 'attachment',
filename=ent['filename'])
for header in ent.keys():
if header in ('content', 'filename', 'type'):
continue
msg.add_header(header, ent[header])
_attach_part(appendmsg, msg)
def multi_part_count(outermsg, newcount=None):
"""
Return the number of attachments to this MIMEMultipart by looking
at its 'Number-Attachments' header.
"""
nfield = 'Number-Attachments'
if nfield not in outermsg:
outermsg[nfield] = "0"
if newcount != None:
outermsg.replace_header(nfield, str(newcount))
return(int(outermsg.get('Number-Attachments', 0)))
def _attach_part(outermsg, part):
"""
Attach a part to an outer message. outermsg must be a MIMEMultipart.
Modifies a header in outermsg to keep track of number of attachments.
"""
cur = multi_part_count(outermsg)
if not part.get_filename(None):
part.add_header('Content-Disposition', 'attachment',
filename='part-%03d' % (cur + 1))
outermsg.attach(part)
multi_part_count(outermsg, cur + 1)
def type_from_startswith(payload, default=None):
# slist is sorted longest first
slist = sorted(starts_with_mappings.keys(), key=lambda e: 0 - len(e))
for sstr in slist:
if payload.startswith(sstr):
return(starts_with_mappings[sstr])
return default
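Because `slist` is sorted longest-first, `#include-once` is tested before `#include` wins by accident. A trimmed, runnable sketch of that precedence:

```python
# Subset of starts_with_mappings; longest prefixes are tried first so
# '#include-once' is matched before its '#include' prefix.
MAPPINGS = {
    '#include': 'text/x-include-url',
    '#include-once': 'text/x-include-once-url',
    '#!': 'text/x-shellscript',
}

def type_from_prefix(payload, default=None):
    for prefix in sorted(MAPPINGS, key=len, reverse=True):
        if payload.startswith(prefix):
            return MAPPINGS[prefix]
    return default
```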
def process_includes(msg, appendmsg=None):
if appendmsg == None:
appendmsg = MIMEMultipart()
for part in msg.walk():
# multipart/* are just containers
if part.get_content_maintype() == 'multipart':
continue
ctype = None
ctype_orig = part.get_content_type()
payload = part.get_payload(decode=True)
if ctype_orig in ("text/plain", "text/x-not-multipart"):
ctype = type_from_startswith(payload)
if ctype is None:
ctype = ctype_orig
if ctype in ('text/x-include-url', 'text/x-include-once-url'):
do_include(payload, appendmsg)
continue
if ctype == "text/cloud-config-archive":
explode_cc_archive(payload, appendmsg)
continue
if 'Content-Type' in msg:
msg.replace_header('Content-Type', ctype)
else:
msg['Content-Type'] = ctype
_attach_part(appendmsg, part)
def message_from_string(data, headers=None):
if headers is None:
headers = {}
if "mime-version:" in data[0:4096].lower():
msg = email.message_from_string(data)
for (key, val) in headers.items():
if key in msg:
msg.replace_header(key, val)
else:
msg[key] = val
else:
mtype = headers.get("Content-Type", "text/x-not-multipart")
maintype, subtype = mtype.split("/", 1)
msg = MIMEBase(maintype, subtype)
msg.set_payload(data)
return(msg)
# this is heavily wasteful, reads through userdata string input
def preprocess_userdata(data):
newmsg = MIMEMultipart()
process_includes(message_from_string(decomp_str(data)), newmsg)
return(newmsg.as_string())
# callback is a function that will be called with (data, content_type,
# filename, payload)
def walk_userdata(istr, callback, data=None):
partnum = 0
for part in message_from_string(istr).walk():
# multipart/* are just containers
if part.get_content_maintype() == 'multipart':
continue
ctype = part.get_content_type()
if ctype is None:
ctype = 'application/octet-stream'
filename = part.get_filename()
if not filename:
filename = 'part-%03d' % partnum
callback(data, ctype, filename, part.get_payload(decode=True))
partnum = partnum + 1
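The walk skips multipart containers and names anonymous parts `part-NNN`. A self-contained sketch of the same traversal (hypothetical `collect_parts` helper that returns a list instead of invoking a callback):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Walk a MIME message the way walk_userdata does: skip multipart
# containers, emit (content_type, filename, payload) per leaf part.
def collect_parts(msg):
    parts = []
    partnum = 0
    for part in msg.walk():
        if part.get_content_maintype() == 'multipart':
            continue
        filename = part.get_filename() or ('part-%03d' % partnum)
        parts.append((part.get_content_type(), filename,
                      part.get_payload(decode=True)))
        partnum += 1
    return parts
```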
if __name__ == "__main__":
def main():
import sys
data = decomp_str(file(sys.argv[1]).read())
newmsg = MIMEMultipart()
process_includes(message_from_string(data), newmsg)
print newmsg
print "#found %s parts" % multi_part_count(newmsg)
main()


@@ -1,11 +1,12 @@
# vi: ts=4 expandtab
#
# Common code for the EC2 initialisation scripts in Ubuntu
# Copyright (C) 2008-2009 Canonical Ltd
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Soren Hansen <soren@canonical.com>
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
@@ -18,650 +19,3 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
varlibdir = '/var/lib/cloud'
cur_instance_link = varlibdir + "/instance"
boot_finished = cur_instance_link + "/boot-finished"
system_config = '/etc/cloud/cloud.cfg'
seeddir = varlibdir + "/seed"
cfg_env_name = "CLOUD_CFG"
cfg_builtin = """
log_cfgs: []
datasource_list: ["NoCloud", "ConfigDrive", "OVF", "MAAS", "Ec2", "CloudStack"]
def_log_file: /var/log/cloud-init.log
syslog_fix_perms: syslog:adm
"""
logger_name = "cloudinit"
pathmap = {
"handlers": "/handlers",
"scripts": "/scripts",
"sem": "/sem",
"boothooks": "/boothooks",
"userdata_raw": "/user-data.txt",
"userdata": "/user-data.txt.i",
"obj_pkl": "/obj.pkl",
"cloud_config": "/cloud-config.txt",
"data": "/data",
None: "",
}
per_instance = "once-per-instance"
per_always = "always"
per_once = "once"
parsed_cfgs = {}
import os
import cPickle
import sys
import os.path
import errno
import subprocess
import yaml
import logging
import logging.config
import StringIO
import glob
import traceback
import cloudinit.util as util
class NullHandler(logging.Handler):
def emit(self, record):
pass
log = logging.getLogger(logger_name)
log.addHandler(NullHandler())
def logging_set_from_cfg_file(cfg_file=system_config):
logging_set_from_cfg(util.get_base_cfg(cfg_file, cfg_builtin, parsed_cfgs))
def logging_set_from_cfg(cfg):
log_cfgs = []
logcfg = util.get_cfg_option_str(cfg, "log_cfg", False)
if logcfg:
# if there is a 'logcfg' entry in the config, respect
# it, it is the old keyname
log_cfgs = [logcfg]
elif "log_cfgs" in cfg:
for a_cfg in cfg['log_cfgs']:
if isinstance(a_cfg, list):
log_cfgs.append('\n'.join(a_cfg))
else:
log_cfgs.append(a_cfg)
if not len(log_cfgs):
sys.stderr.write("Warning, no logging configured\n")
return
for logcfg in log_cfgs:
try:
logging.config.fileConfig(StringIO.StringIO(logcfg))
return
except:
pass
raise Exception("no valid logging found\n")
import cloudinit.DataSource as DataSource
import cloudinit.UserDataHandler as UserDataHandler
class CloudInit:
cfg = None
part_handlers = {}
old_conffile = '/etc/ec2-init/ec2-config.cfg'
ds_deps = [DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK]
datasource = None
cloud_config_str = ''
datasource_name = ''
builtin_handlers = []
def __init__(self, ds_deps=None, sysconfig=system_config):
self.builtin_handlers = [
['text/x-shellscript', self.handle_user_script, per_always],
['text/cloud-config', self.handle_cloud_config, per_always],
['text/upstart-job', self.handle_upstart_job, per_instance],
['text/cloud-boothook', self.handle_cloud_boothook, per_always],
]
if ds_deps != None:
self.ds_deps = ds_deps
self.sysconfig = sysconfig
self.cfg = self.read_cfg()
def read_cfg(self):
if self.cfg:
return(self.cfg)
try:
conf = util.get_base_cfg(self.sysconfig, cfg_builtin, parsed_cfgs)
except Exception:
conf = get_builtin_cfg()
# support reading the old ConfigObj format file and merging
# it into the yaml dictionary
try:
from configobj import ConfigObj
oldcfg = ConfigObj(self.old_conffile)
if oldcfg is None:
oldcfg = {}
conf = util.mergedict(conf, oldcfg)
except:
pass
return(conf)
def restore_from_cache(self):
try:
# we try to restore from a current link and static path
# by using the instance link, if purge_cache was called
# the file won't exist
cache = get_ipath_cur('obj_pkl')
f = open(cache, "rb")
data = cPickle.load(f)
f.close()
self.datasource = data
return True
except:
return False
def write_to_cache(self):
cache = self.get_ipath("obj_pkl")
try:
os.makedirs(os.path.dirname(cache))
except OSError as e:
if e.errno != errno.EEXIST:
return False
try:
f = open(cache, "wb")
cPickle.dump(self.datasource, f)
f.close()
os.chmod(cache, 0400)
except:
raise
def get_data_source(self):
if self.datasource is not None:
return True
if self.restore_from_cache():
log.debug("restored from cache type %s" % self.datasource)
return True
cfglist = self.cfg['datasource_list']
dslist = list_sources(cfglist, self.ds_deps)
dsnames = [f.__name__ for f in dslist]
log.debug("searching for data source in %s" % dsnames)
for cls in dslist:
ds = cls.__name__
try:
s = cls(sys_cfg=self.cfg)
if s.get_data():
self.datasource = s
self.datasource_name = ds
log.debug("found data source %s" % ds)
return True
except Exception as e:
log.warn("get_data of %s raised %s" % (ds, e))
util.logexc(log)
msg = "Did not find data source. searched classes: %s" % dsnames
log.debug(msg)
raise DataSourceNotFoundException(msg)
def set_cur_instance(self):
try:
os.unlink(cur_instance_link)
except OSError as e:
if e.errno != errno.ENOENT:
raise
iid = self.get_instance_id()
os.symlink("./instances/%s" % iid, cur_instance_link)
idir = self.get_ipath()
dlist = []
for d in ["handlers", "scripts", "sem"]:
dlist.append("%s/%s" % (idir, d))
util.ensure_dirs(dlist)
ds = "%s: %s\n" % (self.datasource.__class__, str(self.datasource))
dp = self.get_cpath('data')
util.write_file("%s/%s" % (idir, 'datasource'), ds)
util.write_file("%s/%s" % (dp, 'previous-datasource'), ds)
util.write_file("%s/%s" % (dp, 'previous-instance-id'), "%s\n" % iid)
def get_userdata(self):
return(self.datasource.get_userdata())
def get_userdata_raw(self):
return(self.datasource.get_userdata_raw())
def get_instance_id(self):
return(self.datasource.get_instance_id())
def update_cache(self):
self.write_to_cache()
self.store_userdata()
def store_userdata(self):
util.write_file(self.get_ipath('userdata_raw'),
self.datasource.get_userdata_raw(), 0600)
util.write_file(self.get_ipath('userdata'),
self.datasource.get_userdata(), 0600)
def sem_getpath(self, name, freq):
if freq == 'once-per-instance':
return("%s/%s" % (self.get_ipath("sem"), name))
return("%s/%s.%s" % (get_cpath("sem"), name, freq))
def sem_has_run(self, name, freq):
if freq == per_always:
return False
semfile = self.sem_getpath(name, freq)
if os.path.exists(semfile):
return True
return False
def sem_acquire(self, name, freq):
from time import time
semfile = self.sem_getpath(name, freq)
try:
os.makedirs(os.path.dirname(semfile))
except OSError as e:
if e.errno != errno.EEXIST:
raise e
if os.path.exists(semfile) and freq != per_always:
return False
# race condition
try:
f = open(semfile, "w")
f.write("%s\n" % str(time()))
f.close()
except:
return(False)
return(True)
def sem_clear(self, name, freq):
semfile = self.sem_getpath(name, freq)
try:
os.unlink(semfile)
except OSError as e:
if e.errno != errno.ENOENT:
return False
return True
# acquire lock on 'name' for given 'freq'
# if that does not exist, then call 'func' with given 'args'
# if 'clear_on_fail' is True and func throws an exception
# then remove the lock (so it would run again)
def sem_and_run(self, semname, freq, func, args=None, clear_on_fail=False):
if args is None:
args = []
if self.sem_has_run(semname, freq):
log.debug("%s already ran %s", semname, freq)
return False
try:
if not self.sem_acquire(semname, freq):
raise Exception("Failed to acquire lock on %s" % semname)
func(*args)
except:
if clear_on_fail:
self.sem_clear(semname, freq)
raise
return True
# get_ipath : get the instance path for a name in pathmap
# (/var/lib/cloud/instances/<instance>/<name>)
def get_ipath(self, name=None):
return("%s/instances/%s%s"
% (varlibdir, self.get_instance_id(), pathmap[name]))
def consume_userdata(self, frequency=per_instance):
self.get_userdata()
data = self
cdir = get_cpath("handlers")
idir = self.get_ipath("handlers")
# add the path to the plugins dir to the top of our list for import
# instance dir should be read before cloud-dir
sys.path.insert(0, cdir)
sys.path.insert(0, idir)
part_handlers = {}
# add handlers in cdir
for fname in glob.glob("%s/*.py" % cdir):
if not os.path.isfile(fname):
continue
modname = os.path.basename(fname)[0:-3]
try:
mod = __import__(modname)
handler_register(mod, part_handlers, data, frequency)
log.debug("added handler for [%s] from %s" % (mod.list_types(),
fname))
except:
log.warn("failed to initialize handler in %s" % fname)
util.logexc(log)
# add the internal handlers if their type hasn't been already claimed
for (btype, bhand, bfreq) in self.builtin_handlers:
if btype in part_handlers:
continue
handler_register(InternalPartHandler(bhand, [btype], bfreq),
part_handlers, data, frequency)
# walk the data
pdata = {'handlers': part_handlers, 'handlerdir': idir,
'data': data, 'frequency': frequency}
UserDataHandler.walk_userdata(self.get_userdata(),
partwalker_callback, data=pdata)
# give callbacks opportunity to finalize
called = []
for (_mtype, mod) in part_handlers.iteritems():
if mod in called:
continue
handler_call_end(mod, data, frequency)
def handle_user_script(self, _data, ctype, filename, payload, _frequency):
if ctype == "__end__":
return
if ctype == "__begin__":
# maybe delete existing things here
return
filename = filename.replace(os.sep, '_')
scriptsdir = get_ipath_cur('scripts')
util.write_file("%s/%s" %
(scriptsdir, filename), util.dos2unix(payload), 0700)
def handle_upstart_job(self, _data, ctype, filename, payload, frequency):
# upstart jobs are only written on the first boot
if frequency != per_instance:
return
if ctype == "__end__" or ctype == "__begin__":
return
if not filename.endswith(".conf"):
filename = filename + ".conf"
util.write_file("%s/%s" % ("/etc/init", filename),
util.dos2unix(payload), 0644)
def handle_cloud_config(self, _data, ctype, filename, payload, _frequency):
if ctype == "__begin__":
self.cloud_config_str = ""
return
if ctype == "__end__":
cloud_config = self.get_ipath("cloud_config")
util.write_file(cloud_config, self.cloud_config_str, 0600)
## this could merge the cloud config with the system config
## for now, not doing this as it seems somewhat circular
## as CloudConfig does that also, merging it with this cfg
##
# ccfg = yaml.safe_load(self.cloud_config_str)
# if ccfg is None: ccfg = {}
# self.cfg = util.mergedict(ccfg, self.cfg)
return
self.cloud_config_str += "\n#%s\n%s" % (filename, payload)
def handle_cloud_boothook(self, _data, ctype, filename, payload,
_frequency):
if ctype == "__end__":
return
if ctype == "__begin__":
return
filename = filename.replace(os.sep, '_')
payload = util.dos2unix(payload)
prefix = "#cloud-boothook"
start = 0
if payload.startswith(prefix):
start = len(prefix) + 1
boothooks_dir = self.get_ipath("boothooks")
filepath = "%s/%s" % (boothooks_dir, filename)
util.write_file(filepath, payload[start:], 0700)
try:
env = os.environ.copy()
env['INSTANCE_ID'] = self.datasource.get_instance_id()
subprocess.check_call([filepath], env=env)
except subprocess.CalledProcessError as e:
log.error("boothooks script %s returned %i" %
(filepath, e.returncode))
except Exception as e:
log.error("boothooks unknown exception %s when running %s" %
(e, filepath))
def get_public_ssh_keys(self):
return(self.datasource.get_public_ssh_keys())
def get_locale(self):
return(self.datasource.get_locale())
def get_mirror(self):
return(self.datasource.get_local_mirror())
def get_hostname(self, fqdn=False):
return(self.datasource.get_hostname(fqdn=fqdn))
def device_name_to_device(self, name):
return(self.datasource.device_name_to_device(name))
# I really don't know if this should be here or not, but
# I needed it in cc_update_hostname, where that code had a valid 'cloud'
# reference, but did not have a cloudinit handle
# (ie, no cloudinit.get_cpath())
def get_cpath(self, name=None):
return(get_cpath(name))
def initfs():
subds = ['scripts/per-instance', 'scripts/per-once', 'scripts/per-boot',
'seed', 'instances', 'handlers', 'sem', 'data']
dlist = []
for subd in subds:
dlist.append("%s/%s" % (varlibdir, subd))
util.ensure_dirs(dlist)
cfg = util.get_base_cfg(system_config, cfg_builtin, parsed_cfgs)
log_file = util.get_cfg_option_str(cfg, 'def_log_file', None)
perms = util.get_cfg_option_str(cfg, 'syslog_fix_perms', None)
if log_file:
fp = open(log_file, "ab")
fp.close()
if log_file and perms:
(u, g) = perms.split(':', 1)
if u == "-1" or u == "None":
u = None
if g == "-1" or g == "None":
g = None
util.chownbyname(log_file, u, g)
def purge_cache(rmcur=True):
rmlist = [boot_finished]
if rmcur:
rmlist.append(cur_instance_link)
for f in rmlist:
try:
os.unlink(f)
except OSError as e:
if e.errno == errno.ENOENT:
continue
return(False)
except:
return(False)
return(True)
# get_ipath_cur: get the current instance path for an item
def get_ipath_cur(name=None):
return("%s/%s%s" % (varlibdir, "instance", pathmap[name]))
# get_cpath : get the "clouddir" (/var/lib/cloud/<name>)
# for a name in dirmap
def get_cpath(name=None):
return("%s%s" % (varlibdir, pathmap[name]))
def get_base_cfg(cfg_path=None):
if cfg_path is None:
cfg_path = system_config
return(util.get_base_cfg(cfg_path, cfg_builtin, parsed_cfgs))
def get_builtin_cfg():
return(yaml.safe_load(cfg_builtin))
class DataSourceNotFoundException(Exception):
pass
def list_sources(cfg_list, depends):
return(DataSource.list_sources(cfg_list, depends, ["cloudinit", ""]))
def handler_register(mod, part_handlers, data, frequency=per_instance):
if not hasattr(mod, "handler_version"):
setattr(mod, "handler_version", 1)
for mtype in mod.list_types():
part_handlers[mtype] = mod
handler_call_begin(mod, data, frequency)
return(mod)
def handler_call_begin(mod, data, frequency):
handler_handle_part(mod, data, "__begin__", None, None, frequency)
def handler_call_end(mod, data, frequency):
handler_handle_part(mod, data, "__end__", None, None, frequency)
def handler_handle_part(mod, data, ctype, filename, payload, frequency):
# only add the handler if the module should run
modfreq = getattr(mod, "frequency", per_instance)
if not (modfreq == per_always or
(frequency == per_instance and modfreq == per_instance)):
return
try:
if mod.handler_version == 1:
mod.handle_part(data, ctype, filename, payload)
else:
mod.handle_part(data, ctype, filename, payload, frequency)
except:
util.logexc(log)
traceback.print_exc(file=sys.stderr)
def partwalker_handle_handler(pdata, _ctype, _filename, payload):
curcount = pdata['handlercount']
modname = 'part-handler-%03d' % curcount
frequency = pdata['frequency']
modfname = modname + ".py"
util.write_file("%s/%s" % (pdata['handlerdir'], modfname), payload, 0600)
try:
mod = __import__(modname)
handler_register(mod, pdata['handlers'], pdata['data'], frequency)
pdata['handlercount'] = curcount + 1
except:
util.logexc(log)
traceback.print_exc(file=sys.stderr)
def partwalker_callback(pdata, ctype, filename, payload):
# data here is the part_handlers array and then the data to pass through
if ctype == "text/part-handler":
if 'handlercount' not in pdata:
pdata['handlercount'] = 0
partwalker_handle_handler(pdata, ctype, filename, payload)
return
if ctype not in pdata['handlers'] and payload:
if ctype == "text/x-not-multipart":
# Extract the first line or 24 bytes for displaying in the log
start = payload.split("\n", 1)[0][:24]
if start < payload:
details = "starting '%s...'" % start.encode("string-escape")
else:
details = repr(payload)
log.warning("Unhandled non-multipart userdata %s", details)
return
handler_handle_part(pdata['handlers'][ctype], pdata['data'],
ctype, filename, payload, pdata['frequency'])
class InternalPartHandler:
freq = per_instance
mtypes = []
handler_version = 1
handler = None
def __init__(self, handler, mtypes, frequency, version=2):
self.handler = handler
self.mtypes = mtypes
self.frequency = frequency
self.handler_version = version
def __repr__(self):
return("InternalPartHandler: [%s]" % self.mtypes)
def list_types(self):
return(self.mtypes)
def handle_part(self, data, ctype, filename, payload, frequency):
return(self.handler(data, ctype, filename, payload, frequency))
def get_cmdline_url(names=('cloud-config-url', 'url'),
starts="#cloud-config", cmdline=None):
if cmdline == None:
cmdline = util.get_cmdline()
data = util.keyval_str_to_dict(cmdline)
url = None
key = None
for key in names:
if key in data:
url = data[key]
break
if url == None:
return (None, None, None)
contents = util.readurl(url)
if contents.startswith(starts):
return (key, url, contents)
return (key, url, None)
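`get_cmdline_url` relies on `util.keyval_str_to_dict` to turn the kernel command line into a mapping before looking for `cloud-config-url`/`url`. A plausible sketch of that parsing (the real util function may differ; bare tokens get `None` here):

```python
# Parse 'key=val' tokens from a kernel command line into a dict,
# the shape get_cmdline_url expects from util.keyval_str_to_dict.
def keyval_str_to_dict(cmdline):
    out = {}
    for tok in cmdline.split():
        if '=' in tok:
            key, _, val = tok.partition('=')
            out[key] = val
        else:
            out[tok] = None
    return out
```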

cloudinit/cloud.py (new file)

@@ -0,0 +1,101 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import copy
import os
from cloudinit import log as logging
LOG = logging.getLogger(__name__)
# This class is the high-level wrapper that provides
# access to cloud-init objects without exposing the stage objects
# to handler and/or module manipulation. It allows cloud-init
# to restrict what that type of user-facing code may see
# and/or adjust (which helps keep modules from interfering
# with each other).
#
# It also provides utility functions that avoid callers having
# to know how to retrieve a certain member from submembers, as well
# as providing a backwards-compatible object that can be maintained
# while the stages/other objects are worked on independently...
class Cloud(object):
    def __init__(self, datasource, paths, cfg, distro, runners):
        self.datasource = datasource
        self.paths = paths
        self.distro = distro
        self._cfg = cfg
        self._runners = runners

    # If a 'user' manipulates logging or logging services
    # it is typically useful to cause the logging to be
    # setup again.
    def cycle_logging(self):
        logging.resetLogging()
        logging.setupLogging(self.cfg)

    @property
    def cfg(self):
        # Ensure it is not indirectly modified
        return copy.deepcopy(self._cfg)

    def run(self, name, functor, args, freq=None, clear_on_fail=False):
        return self._runners.run(name, functor, args, freq, clear_on_fail)

    def get_template_filename(self, name):
        fn = self.paths.template_tpl % (name)
        if not os.path.isfile(fn):
            LOG.warn("No template found at %s for template named %s",
                     fn, name)
            return None
        return fn

    # The rest of these are just useful proxies
    def get_userdata(self):
        return self.datasource.get_userdata()

    def get_instance_id(self):
        return self.datasource.get_instance_id()

    def get_public_ssh_keys(self):
        return self.datasource.get_public_ssh_keys()

    def get_locale(self):
        return self.datasource.get_locale()

    def get_local_mirror(self):
        return self.datasource.get_local_mirror()

    def get_hostname(self, fqdn=False):
        return self.datasource.get_hostname(fqdn=fqdn)

    def device_name_to_device(self, name):
        return self.datasource.device_name_to_device(name)

    def get_ipath_cur(self, name=None):
        return self.paths.get_ipath_cur(name)

    def get_cpath(self, name=None):
        return self.paths.get_cpath(name)

    def get_ipath(self, name=None):
        return self.paths.get_ipath(name)

(new file, 56 lines)
# vi: ts=4 expandtab
#
# Copyright (C) 2008-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Chuck Short <chuck.short@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
from cloudinit.settings import (PER_INSTANCE, FREQUENCIES)
from cloudinit import log as logging
LOG = logging.getLogger(__name__)
# This prefix is used so that when importing
# we are less likely to find something else
# with the same name in the lookup path...
MOD_PREFIX = "cc_"


def form_module_name(name):
    canon_name = name.replace("-", "_")
    if canon_name.lower().endswith(".py"):
        canon_name = canon_name[0:(len(canon_name) - 3)]
    canon_name = canon_name.strip()
    if not canon_name:
        return None
    if not canon_name.startswith(MOD_PREFIX):
        canon_name = '%s%s' % (MOD_PREFIX, canon_name)
    return canon_name


def fixup_module(mod, def_freq=PER_INSTANCE):
    if not hasattr(mod, 'frequency'):
        setattr(mod, 'frequency', def_freq)
    else:
        freq = mod.frequency
        if freq and freq not in FREQUENCIES:
            LOG.warn("Module %s has an unknown frequency %s", mod, freq)
    if not hasattr(mod, 'distros'):
        setattr(mod, 'distros', None)
    return mod
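A quick worked example of the canonicalization (a standalone copy of form_module_name so the mapping can be seen in isolation):

```python
MOD_PREFIX = "cc_"


def form_module_name(name):
    # Same logic as above: dashes become underscores, a trailing .py is
    # dropped, and the cc_ prefix is added when missing.
    canon_name = name.replace("-", "_")
    if canon_name.lower().endswith(".py"):
        canon_name = canon_name[0:(len(canon_name) - 3)]
    canon_name = canon_name.strip()
    if not canon_name:
        return None
    if not canon_name.startswith(MOD_PREFIX):
        canon_name = '%s%s' % (MOD_PREFIX, canon_name)
    return canon_name


print(form_module_name("ssh-import-id.py"))  # cc_ssh_import_id
print(form_module_name("cc_final_message"))  # cc_final_message
print(form_module_name(".py"))               # None
```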

View File

@ -0,0 +1,59 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
#
# Author: Ben Howard <ben.howard@canonical.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from cloudinit import util
from cloudinit.settings import PER_INSTANCE
frequency = PER_INSTANCE
distros = ['ubuntu', 'debian']
DEFAULT_FILE = "/etc/apt/apt.conf.d/90cloud-init-pipelining"
APT_PIPE_TPL = ("//Written by cloud-init per 'apt_pipelining'\n"
                'Acquire::http::Pipeline-Depth "%s";\n')

# Acquire::http::Pipeline-Depth can be a value
# from 0 to 5 indicating how many outstanding requests APT should send.
# A value of zero MUST be specified if the remote host does not properly linger
# on TCP connections - otherwise data corruption will occur.


def handle(_name, cfg, cloud, log, _args):
    apt_pipe_value = util.get_cfg_option_str(cfg, "apt_pipelining", False)
    apt_pipe_value_s = str(apt_pipe_value).lower().strip()
    if apt_pipe_value_s == "false":
        write_apt_snippet(cloud, "0", log, DEFAULT_FILE)
    elif apt_pipe_value_s in ("none", "unchanged", "os"):
        return
    elif apt_pipe_value_s in [str(b) for b in xrange(0, 6)]:
        write_apt_snippet(cloud, apt_pipe_value_s, log, DEFAULT_FILE)
    else:
        log.warn("Invalid option for apt_pipelining: %s", apt_pipe_value)


def write_apt_snippet(cloud, setting, log, f_name):
    """Writes f_name with apt pipeline depth 'setting'."""
    file_contents = APT_PIPE_TPL % (setting)
    util.write_file(cloud.paths.join(False, f_name), file_contents)
    log.debug("Wrote %s with apt pipeline depth setting %s", f_name, setting)
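For instance, with `apt_pipelining: false` the template above renders the snippet that disables pipelining entirely:

```python
APT_PIPE_TPL = ("//Written by cloud-init per 'apt_pipelining'\n"
                'Acquire::http::Pipeline-Depth "%s";\n')

# write_apt_snippet renders the template with the chosen depth; for
# apt_pipelining: false the depth written is "0" (pipelining disabled).
content = APT_PIPE_TPL % ("0")
print(content)
```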

View File

@ -0,0 +1,272 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import glob
import os
from cloudinit import templater
from cloudinit import util
distros = ['ubuntu', 'debian']
PROXY_TPL = "Acquire::HTTP::Proxy \"%s\";\n"
PROXY_FN = "/etc/apt/apt.conf.d/95cloud-init-proxy"
# A temporary shell program to get a given gpg key
# from a given keyserver
EXPORT_GPG_KEYID = """
    k=${1} ks=${2};
    exec 2>/dev/null
    [ -n "$k" ] || exit 1;
    armour=$(gpg --list-keys --armour "${k}")
    if [ -z "${armour}" ]; then
        gpg --keyserver ${ks} --recv $k >/dev/null &&
            armour=$(gpg --export --armour "${k}") &&
            gpg --batch --yes --delete-keys "${k}"
    fi
    [ -n "${armour}" ] && echo "${armour}"
"""
def handle(name, cfg, cloud, log, _args):
    update = util.get_cfg_option_bool(cfg, 'apt_update', False)
    upgrade = util.get_cfg_option_bool(cfg, 'apt_upgrade', False)

    release = get_release()
    mirror = find_apt_mirror(cloud, cfg)
    if not mirror:
        log.debug(("Skipping module named %s,"
                   " no package 'mirror' located"), name)
        return

    log.debug("Selected mirror at: %s" % mirror)

    if not util.get_cfg_option_bool(cfg,
                                    'apt_preserve_sources_list', False):
        generate_sources_list(release, mirror, cloud, log)
        old_mir = util.get_cfg_option_str(cfg, 'apt_old_mirror',
                                          "archive.ubuntu.com/ubuntu")
        rename_apt_lists(old_mir, mirror)

    # Set up any apt proxy
    proxy = cfg.get("apt_proxy", None)
    proxy_filename = PROXY_FN
    if proxy:
        try:
            # See man 'apt.conf'
            contents = PROXY_TPL % (proxy)
            util.write_file(cloud.paths.join(False, proxy_filename),
                            contents)
        except Exception:
            util.logexc(log, "Failed to write proxy to %s", proxy_filename)
    elif os.path.isfile(proxy_filename):
        util.del_file(proxy_filename)

    # Process 'apt_sources'
    if 'apt_sources' in cfg:
        errors = add_sources(cloud, cfg['apt_sources'],
                             {'MIRROR': mirror, 'RELEASE': release})
        for e in errors:
            log.warn("Source Error: %s", ':'.join(e))

    dconf_sel = util.get_cfg_option_str(cfg, 'debconf_selections', False)
    if dconf_sel:
        log.debug("Setting debconf selections per cloud config")
        try:
            util.subp(('debconf-set-selections', '-'), dconf_sel)
        except:
            util.logexc(log, "Failed to run debconf-set-selections")

    pkglist = util.get_cfg_option_list(cfg, 'packages', [])

    errors = []
    if update or len(pkglist) or upgrade:
        try:
            cloud.distro.update_package_sources()
        except Exception as e:
            util.logexc(log, "Package update failed")
            errors.append(e)

    if upgrade:
        try:
            cloud.distro.package_command("upgrade")
        except Exception as e:
            util.logexc(log, "Package upgrade failed")
            errors.append(e)

    if len(pkglist):
        try:
            cloud.distro.install_packages(pkglist)
        except Exception as e:
            util.logexc(log, "Failed to install packages: %s", pkglist)
            errors.append(e)

    if len(errors):
        log.warn("%s failed with exceptions, re-raising the last one",
                 len(errors))
        raise errors[-1]
# get gpg keyid from keyserver
def getkeybyid(keyid, keyserver):
with util.ExtendedTemporaryFile(suffix='.sh') as fh:
fh.write(EXPORT_GPG_KEYID)
fh.flush()
cmd = ['/bin/sh', fh.name, keyid, keyserver]
(stdout, _stderr) = util.subp(cmd)
return stdout.strip()
def mirror2lists_fileprefix(mirror):
    string = mirror
    # take off http:// or ftp://
    if string.endswith("/"):
        string = string[0:-1]
    pos = string.find("://")
    if pos >= 0:
        string = string[pos + 3:]
    string = string.replace("/", "_")
    return string
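A worked example of the prefix derivation (a standalone copy of mirror2lists_fileprefix, shown so the transformation is visible without the rest of the module):

```python
def mirror2lists_fileprefix(mirror):
    # Same logic as above: drop a trailing slash, strip the scheme,
    # then flatten remaining slashes into underscores.
    string = mirror
    if string.endswith("/"):
        string = string[0:-1]
    pos = string.find("://")
    if pos >= 0:
        string = string[pos + 3:]
    string = string.replace("/", "_")
    return string


print(mirror2lists_fileprefix("http://archive.ubuntu.com/ubuntu/"))
# archive.ubuntu.com_ubuntu
```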
def rename_apt_lists(omirror, new_mirror, lists_d="/var/lib/apt/lists"):
    oprefix = os.path.join(lists_d, mirror2lists_fileprefix(omirror))
    nprefix = os.path.join(lists_d, mirror2lists_fileprefix(new_mirror))
    if oprefix == nprefix:
        return
    olen = len(oprefix)
    for filename in glob.glob("%s_*" % oprefix):
        # TODO use the cloud.paths.join...
        util.rename(filename, "%s%s" % (nprefix, filename[olen:]))


def get_release():
    (stdout, _stderr) = util.subp(['lsb_release', '-cs'])
    return stdout.strip()
def generate_sources_list(codename, mirror, cloud, log):
    template_fn = cloud.get_template_filename('sources.list')
    if template_fn:
        params = {'mirror': mirror, 'codename': codename}
        out_fn = cloud.paths.join(False, '/etc/apt/sources.list')
        templater.render_to_file(template_fn, out_fn, params)
    else:
        log.warn("No template found, not rendering /etc/apt/sources.list")
def add_sources(cloud, srclist, template_params=None):
    """
    Add entries in /etc/apt/sources.list.d for each abbreviated
    sources.list entry in 'srclist'. When rendering templates, also
    include the values from the 'template_params' dictionary.
    """
    if template_params is None:
        template_params = {}
    errorlist = []
    for ent in srclist:
        if 'source' not in ent:
            errorlist.append(["", "missing source"])
            continue

        source = ent['source']
        if source.startswith("ppa:"):
            try:
                util.subp(["add-apt-repository", source])
            except:
                errorlist.append([source, "add-apt-repository failed"])
            continue

        source = templater.render_string(source, template_params)
        if 'filename' not in ent:
            ent['filename'] = 'cloud_config_sources.list'
        if not ent['filename'].startswith("/"):
            ent['filename'] = os.path.join("/etc/apt/sources.list.d/",
                                           ent['filename'])

        if ('keyid' in ent and 'key' not in ent):
            ks = "keyserver.ubuntu.com"
            if 'keyserver' in ent:
                ks = ent['keyserver']
            try:
                ent['key'] = getkeybyid(ent['keyid'], ks)
            except:
                errorlist.append([source, "failed to get key from %s" % ks])
                continue

        if 'key' in ent:
            try:
                util.subp(('apt-key', 'add', '-'), ent['key'])
            except:
                errorlist.append([source, "failed to add key"])

        try:
            contents = "%s\n" % (source)
            util.write_file(cloud.paths.join(False, ent['filename']),
                            contents, omode="ab")
        except:
            errorlist.append([source,
                              "failed to write to file %s" % ent['filename']])
    return errorlist
def find_apt_mirror(cloud, cfg):
    """Find an apt_mirror given the cloud and cfg provided."""
    mirror = None

    cfg_mirror = cfg.get("apt_mirror", None)
    if cfg_mirror:
        mirror = cfg["apt_mirror"]
    elif "apt_mirror_search" in cfg:
        mirror = util.search_for_mirror(cfg['apt_mirror_search'])
    else:
        mirror = cloud.get_local_mirror()

        mydom = ""
        doms = []

        if not mirror:
            # if we have a fqdn, then search its domain portion first
            (_hostname, fqdn) = util.get_hostname_fqdn(cfg, cloud)
            mydom = ".".join(fqdn.split(".")[1:])
            if mydom:
                doms.append(".%s" % mydom)

        if not mirror:
            doms.extend((".localdomain", "",))

            mirror_list = []
            distro = cloud.distro.name
            mirrorfmt = "http://%s-mirror%s/%s" % (distro, "%s", distro)
            for post in doms:
                mirror_list.append(mirrorfmt % (post))

            mirror = util.search_for_mirror(mirror_list)

    if not mirror:
        mirror = cloud.distro.get_package_mirror()

    return mirror
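The candidate-URL construction above is easy to miss because of the nested `%s`: the first format leaves a placeholder for the domain suffix. A sketch with hypothetical values ("ubuntu" for the distro name, ".internal.example" for the fqdn-derived domain):

```python
# How find_apt_mirror forms its candidate mirror URLs; the distro name
# and domain below are example values, not from any real host.
distro = "ubuntu"
mirrorfmt = "http://%s-mirror%s/%s" % (distro, "%s", distro)
doms = [".internal.example", ".localdomain", ""]
mirror_list = [mirrorfmt % (post) for post in doms]
for m in mirror_list:
    print(m)
```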

(new file, 55 lines)
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
from cloudinit import util
from cloudinit.settings import PER_ALWAYS
frequency = PER_ALWAYS


def handle(name, cfg, cloud, log, _args):
    if "bootcmd" not in cfg:
        log.debug(("Skipping module named %s,"
                   " no 'bootcmd' key in configuration"), name)
        return

    with util.ExtendedTemporaryFile(suffix=".sh") as tmpf:
        try:
            content = util.shellify(cfg["bootcmd"])
            tmpf.write(content)
            tmpf.flush()
        except:
            util.logexc(log, "Failed to shellify bootcmd")
            raise

        try:
            env = os.environ.copy()
            iid = cloud.get_instance_id()
            if iid:
                env['INSTANCE_ID'] = str(iid)
            cmd = ['/bin/sh', tmpf.name]
            util.subp(cmd, env=env, capture=False)
        except:
            util.logexc(log,
                        ("Failed to run bootcmd module %s"), name)
            raise
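util.shellify (defined in cloudinit.util, not shown here) turns the bootcmd list into a shell script before it is written to the temp file. A rough sketch of the assumed behavior, where plain strings become raw shell lines and list entries are quoted per-argument:

```python
def shellify(cmdlist):
    # Sketch of the assumed util.shellify behavior; the quoting style
    # here is illustrative, not the library's exact output.
    content = "#!/bin/sh\n"
    for cmd in cmdlist:
        if isinstance(cmd, list):
            content += ' '.join(["'%s'" % str(c) for c in cmd]) + "\n"
        else:
            content += str(cmd) + "\n"
    return content


script = shellify(["echo $INSTANCE_ID > /tmp/instance",
                   ["mkdir", "-p", "/var/lib/example"]])
print(script)
```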

@@ -18,18 +18,19 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

-import cloudinit.util as util
-import subprocess
-import traceback
+from cloudinit import util

 distros = ['ubuntu', 'debian']


-def handle(_name, cfg, _cloud, log, args):
+def handle(name, cfg, _cloud, log, args):
     if len(args) != 0:
         value = args[0]
     else:
         value = util.get_cfg_option_str(cfg, "byobu_by_default", "")

+    if not value:
+        log.debug("Skipping module named %s, no 'byobu' values found", name)
+        return
+
     if value == "user" or value == "system":
@@ -38,7 +39,7 @@ def handle(_name, cfg, _cloud, log, args):
     valid = ("enable-user", "enable-system", "enable",
              "disable-user", "disable-system", "disable")
     if not value in valid:
-        log.warn("Unknown value %s for byobu_by_default" % value)
+        log.warn("Unknown value %s for byobu_by_default", value)

     mod_user = value.endswith("-user")
     mod_sys = value.endswith("-system")
@@ -65,13 +66,6 @@ def handle(_name, cfg, _cloud, log, args):
     cmd = ["/bin/sh", "-c", "%s %s %s" % ("X=0;", shcmd, "exit $X")]
-    log.debug("setting byobu to %s" % value)
-
-    try:
-        subprocess.check_call(cmd)
-    except subprocess.CalledProcessError as e:
-        log.debug(traceback.format_exc(e))
-        raise Exception("Cmd returned %s: %s" % (e.returncode, cmd))
-    except OSError as e:
-        log.debug(traceback.format_exc(e))
-        raise Exception("Cmd failed to execute: %s" % (cmd))
+    log.debug("Setting byobu to %s", value)
+    util.subp(cmd, capture=False)

@@ -13,25 +13,27 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

 import os
-from subprocess import check_call
-from cloudinit.util import (write_file, get_cfg_option_list_or_str,
-                            delete_dir_contents, subp)
+
+from cloudinit import util

 CA_CERT_PATH = "/usr/share/ca-certificates/"
 CA_CERT_FILENAME = "cloud-init-ca-certs.crt"
 CA_CERT_CONFIG = "/etc/ca-certificates.conf"
 CA_CERT_SYSTEM_PATH = "/etc/ssl/certs/"

+distros = ['ubuntu', 'debian']
+

 def update_ca_certs():
     """
     Updates the CA certificate cache on the current machine.
     """
-    check_call(["update-ca-certificates"])
+    util.subp(["update-ca-certificates"], capture=False)


-def add_ca_certs(certs):
+def add_ca_certs(paths, certs):
     """
     Adds certificates to the system. To actually apply the new certificates
     you must also call L{update_ca_certs}.
@@ -39,26 +41,29 @@ def add_ca_certs(certs):
     @param certs: A list of certificate strings.
     """
     if certs:
-        cert_file_contents = "\n".join(certs)
+        # First ensure they are strings...
+        cert_file_contents = "\n".join([str(c) for c in certs])
         cert_file_fullpath = os.path.join(CA_CERT_PATH, CA_CERT_FILENAME)
-        write_file(cert_file_fullpath, cert_file_contents, mode=0644)
+        cert_file_fullpath = paths.join(False, cert_file_fullpath)
+        util.write_file(cert_file_fullpath, cert_file_contents, mode=0644)
         # Append cert filename to CA_CERT_CONFIG file.
-        write_file(CA_CERT_CONFIG, "\n%s" % CA_CERT_FILENAME, omode="a")
+        util.write_file(paths.join(False, CA_CERT_CONFIG),
+                        "\n%s" % CA_CERT_FILENAME, omode="ab")


-def remove_default_ca_certs():
+def remove_default_ca_certs(paths):
     """
     Removes all default trusted CA certificates from the system. To actually
     apply the change you must also call L{update_ca_certs}.
     """
-    delete_dir_contents(CA_CERT_PATH)
-    delete_dir_contents(CA_CERT_SYSTEM_PATH)
-    write_file(CA_CERT_CONFIG, "", mode=0644)
+    util.delete_dir_contents(paths.join(False, CA_CERT_PATH))
+    util.delete_dir_contents(paths.join(False, CA_CERT_SYSTEM_PATH))
+    util.write_file(paths.join(False, CA_CERT_CONFIG), "", mode=0644)
     debconf_sel = "ca-certificates ca-certificates/trust_new_crts select no"
-    subp(('debconf-set-selections', '-'), debconf_sel)
+    util.subp(('debconf-set-selections', '-'), debconf_sel)


-def handle(_name, cfg, _cloud, log, _args):
+def handle(name, cfg, cloud, log, _args):
     """
     Call to handle ca-cert sections in cloud-config file.
@@ -70,21 +75,25 @@ def handle(_name, cfg, _cloud, log, _args):
     """
     # If there isn't a ca-certs section in the configuration don't do anything
     if "ca-certs" not in cfg:
+        log.debug(("Skipping module named %s,"
+                   " no 'ca-certs' key in configuration"), name)
         return
+
     ca_cert_cfg = cfg['ca-certs']

     # If there is a remove-defaults option set to true, remove the system
     # default trusted CA certs first.
     if ca_cert_cfg.get("remove-defaults", False):
-        log.debug("removing default certificates")
-        remove_default_ca_certs()
+        log.debug("Removing default certificates")
+        remove_default_ca_certs(cloud.paths)

     # If we are given any new trusted CA certs to add, add them.
     if "trusted" in ca_cert_cfg:
-        trusted_certs = get_cfg_option_list_or_str(ca_cert_cfg, "trusted")
+        trusted_certs = util.get_cfg_option_list(ca_cert_cfg, "trusted")
         if trusted_certs:
-            log.debug("adding %d certificates" % len(trusted_certs))
-            add_ca_certs(trusted_certs)
+            log.debug("Adding %d certificates" % len(trusted_certs))
+            add_ca_certs(cloud.paths, trusted_certs)

     # Update the system with the new cert configuration.
     log.debug("Updating certificates")
     update_ca_certs()

cloudinit/config/cc_chef.py (new file, 129 lines)
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Avishai Ish-Shalom <avishai@fewbytes.com>
# Author: Mike Moulton <mike@meltmedia.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import json
import os
from cloudinit import templater
from cloudinit import util
RUBY_VERSION_DEFAULT = "1.8"
def handle(name, cfg, cloud, log, _args):

    # If there isn't a chef key in the configuration don't do anything
    if 'chef' not in cfg:
        log.debug(("Skipping module named %s,"
                   " no 'chef' key in configuration"), name)
        return
    chef_cfg = cfg['chef']

    # Ensure the chef directories we use exist
    c_dirs = [
        '/etc/chef',
        '/var/log/chef',
        '/var/lib/chef',
        '/var/cache/chef',
        '/var/backups/chef',
        '/var/run/chef',
    ]
    for d in c_dirs:
        util.ensure_dir(cloud.paths.join(False, d))

    # Set the validation key based on the presence of either 'validation_key'
    # or 'validation_cert'. In the case where both exist, 'validation_key'
    # takes precedence
    for key in ('validation_key', 'validation_cert'):
        if key in chef_cfg and chef_cfg[key]:
            v_fn = cloud.paths.join(False, '/etc/chef/validation.pem')
            util.write_file(v_fn, chef_cfg[key])
            break

    # Create the chef config from template
    template_fn = cloud.get_template_filename('chef_client.rb')
    if template_fn:
        iid = str(cloud.datasource.get_instance_id())
        params = {
            'server_url': chef_cfg['server_url'],
            'node_name': util.get_cfg_option_str(chef_cfg, 'node_name', iid),
            'environment': util.get_cfg_option_str(chef_cfg, 'environment',
                                                   '_default'),
            'validation_name': chef_cfg['validation_name'],
        }
        out_fn = cloud.paths.join(False, '/etc/chef/client.rb')
        templater.render_to_file(template_fn, out_fn, params)
    else:
        log.warn("No template found, not rendering to /etc/chef/client.rb")

    # set the firstboot json
    initial_json = {}
    if 'run_list' in chef_cfg:
        initial_json['run_list'] = chef_cfg['run_list']
    if 'initial_attributes' in chef_cfg:
        initial_attributes = chef_cfg['initial_attributes']
        for k in list(initial_attributes.keys()):
            initial_json[k] = initial_attributes[k]
    firstboot_fn = cloud.paths.join(False, '/etc/chef/firstboot.json')
    util.write_file(firstboot_fn, json.dumps(initial_json))

    # If chef is not installed, we install chef based on 'install_type'
    if not os.path.isfile('/usr/bin/chef-client'):
        install_type = util.get_cfg_option_str(chef_cfg, 'install_type',
                                               'packages')
        if install_type == "gems":
            # this will install and run the chef-client from gems
            chef_version = util.get_cfg_option_str(chef_cfg, 'version', None)
            ruby_version = util.get_cfg_option_str(chef_cfg, 'ruby_version',
                                                   RUBY_VERSION_DEFAULT)
            install_chef_from_gems(cloud.distro, ruby_version, chef_version)
            # and finally, run chef-client
            log.debug('Running chef-client')
            util.subp(['/usr/bin/chef-client',
                       '-d', '-i', '1800', '-s', '20'], capture=False)
        elif install_type == 'packages':
            # this will install and run the chef-client from packages
            cloud.distro.install_packages(('chef',))
        else:
            log.warn("Unknown chef install type %s", install_type)
def get_ruby_packages(version):
    # return a list of packages needed to install ruby at version
    pkgs = ['ruby%s' % version, 'ruby%s-dev' % version]
    if version == "1.8":
        pkgs.extend(('libopenssl-ruby1.8', 'rubygems1.8'))
    return pkgs
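For example, the 1.8 series pulls in two extra packages while newer series get only the interpreter and dev headers (a standalone copy of get_ruby_packages for illustration):

```python
def get_ruby_packages(version):
    # Same logic as above: base interpreter + dev package, plus the
    # openssl and rubygems packages that only ruby 1.8 needs.
    pkgs = ['ruby%s' % version, 'ruby%s-dev' % version]
    if version == "1.8":
        pkgs.extend(('libopenssl-ruby1.8', 'rubygems1.8'))
    return pkgs


print(get_ruby_packages("1.8"))
# ['ruby1.8', 'ruby1.8-dev', 'libopenssl-ruby1.8', 'rubygems1.8']
print(get_ruby_packages("1.9"))
# ['ruby1.9', 'ruby1.9-dev']
```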
def install_chef_from_gems(distro, ruby_version, chef_version):
    # Note: parameter order matches the call site in handle()
    # (cloud.distro, ruby_version, chef_version).
    distro.install_packages(get_ruby_packages(ruby_version))
    if not os.path.exists('/usr/bin/gem'):
        util.sym_link('/usr/bin/gem%s' % ruby_version, '/usr/bin/gem')
    if not os.path.exists('/usr/bin/ruby'):
        util.sym_link('/usr/bin/ruby%s' % ruby_version, '/usr/bin/ruby')
    if chef_version:
        util.subp(['/usr/bin/gem', 'install', 'chef',
                   '-v %s' % chef_version, '--no-ri',
                   '--no-rdoc', '--bindir', '/usr/bin', '-q'], capture=False)
    else:
        util.subp(['/usr/bin/gem', 'install', 'chef',
                   '--no-ri', '--no-rdoc', '--bindir',
                   '/usr/bin', '-q'], capture=False)

(new file, 36 lines)
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from cloudinit import util
from cloudinit.settings import PER_ALWAYS
frequency = PER_ALWAYS

REJECT_CMD = ['route', 'add', '-host', '169.254.169.254', 'reject']


def handle(name, cfg, _cloud, log, _args):
    disabled = util.get_cfg_option_bool(cfg, "disable_ec2_metadata", False)
    if disabled:
        util.subp(REJECT_CMD, capture=False)
    else:
        log.debug(("Skipping module named %s,"
                   " disabling the ec2 route not enabled"), name)

(new file, 68 lines)
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from cloudinit import templater
from cloudinit import util
from cloudinit import version
from cloudinit.settings import PER_ALWAYS
frequency = PER_ALWAYS

FINAL_MESSAGE_DEF = ("Cloud-init v. {{version}} finished at {{timestamp}}."
                     " Up {{uptime}} seconds.")


def handle(_name, cfg, cloud, log, args):

    msg_in = None
    if len(args) != 0:
        msg_in = args[0]
    else:
        msg_in = util.get_cfg_option_str(cfg, "final_message")

    if not msg_in:
        template_fn = cloud.get_template_filename('final_message')
        if template_fn:
            msg_in = util.load_file(template_fn)

    if not msg_in:
        msg_in = FINAL_MESSAGE_DEF

    uptime = util.uptime()
    ts = util.time_rfc2822()
    cver = version.version_string()
    try:
        subs = {
            'uptime': uptime,
            'timestamp': ts,
            'version': cver,
        }
        util.multi_log("%s\n" % (templater.render_string(msg_in, subs)),
                       console=False, stderr=True)
    except Exception:
        util.logexc(log, "Failed to render final message template")

    boot_fin_fn = cloud.paths.boot_finished
    try:
        contents = "%s - %s - v. %s\n" % (uptime, ts, cver)
        util.write_file(boot_fin_fn, contents)
    except:
        util.logexc(log, "Failed to write boot finished file %s", boot_fin_fn)
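templater.render_string substitutes the `{{name}}` placeholders seen in FINAL_MESSAGE_DEF. A minimal stand-in for that substitution (the real templater module is assumed, not shown; its exact syntax support may be wider):

```python
import re


# Hypothetical stand-in for templater.render_string: replace {{name}}
# placeholders with values from a params dict.
def render_string(content, params):
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(params.get(m.group(1), "")), content)


msg = ("Cloud-init v. {{version}} finished at {{timestamp}}."
       " Up {{uptime}} seconds.")
print(render_string(msg, {"version": "0.7.0",
                          "timestamp": "Mon, 01 Jan 2024 00:00:00 +0000",
                          "uptime": "12.3"}))
```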

(new file, 52 lines)
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from cloudinit.settings import PER_INSTANCE
# Modules are expected to have the following attributes.
# 1. A required 'handle' method which takes the following params.
#    a) The name will not be this file's name, but instead
#       the name specified in configuration (which is the name
#       which will be used to find this module).
#    b) A configuration object that is the result of merging
#       the cloud-config configuration with any legacy configuration
#       as well as any datasource-provided configuration.
#    c) A cloud object that can be used to access various
#       datasources and paths for the given distro and data provided
#       by the various datasource instance types.
#    d) An argument list that may or may not be empty for this module.
#       Typically these come from module configuration where the module
#       is defined with some extra configuration that will eventually
#       be translated from yaml into arguments for this module.
# 2. An optional 'frequency' that defines how often this module should be run.
#    Typically one of PER_INSTANCE, PER_ALWAYS, PER_ONCE. If not
#    provided, PER_INSTANCE will be assumed.
#    See settings.py for these constants.
# 3. An optional 'distros' array/set/tuple that defines the known distros
#    this module will work with (if not all of them). This is used to write
#    a warning out if a module is being run on an untested distribution for
#    informational purposes. If absent, all distros are assumed and
#    no warning occurs.

frequency = PER_INSTANCE


def handle(name, _cfg, _cloud, log, _args):
    log.debug("Hi from module %s", name)

@@ -18,10 +18,12 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

-import cloudinit.util as util
-import traceback
 import os

+from cloudinit import util
+
 distros = ['ubuntu', 'debian']


 def handle(_name, cfg, _cloud, log, _args):
     idevs = None
@@ -35,14 +37,14 @@ def handle(_name, cfg, _cloud, log, _args):
     if ((os.path.exists("/dev/sda1") and not os.path.exists("/dev/sda")) or
             (os.path.exists("/dev/xvda1") and not os.path.exists("/dev/xvda"))):
-        if idevs == None:
+        if idevs is None:
             idevs = ""
-        if idevs_empty == None:
+        if idevs_empty is None:
             idevs_empty = "true"
     else:
-        if idevs_empty == None:
+        if idevs_empty is None:
             idevs_empty = "false"
-        if idevs == None:
+        if idevs is None:
             idevs = "/dev/sda"
             for dev in ("/dev/sda", "/dev/vda", "/dev/sda1", "/dev/vda1"):
                 if os.path.exists(dev):
@@ -52,13 +54,14 @@ def handle(_name, cfg, _cloud, log, _args):
     # now idevs and idevs_empty are set to determined values
     # or, those set by user
-    dconf_sel = "grub-pc grub-pc/install_devices string %s\n" % idevs + \
-        "grub-pc grub-pc/install_devices_empty boolean %s\n" % idevs_empty
-    log.debug("setting grub debconf-set-selections with '%s','%s'" %
+    dconf_sel = (("grub-pc grub-pc/install_devices string %s\n"
+                  "grub-pc grub-pc/install_devices_empty boolean %s\n") %
+                 (idevs, idevs_empty))
+    log.debug("Setting grub debconf-set-selections with '%s','%s'" %
               (idevs, idevs_empty))

     try:
-        util.subp(('debconf-set-selections'), dconf_sel)
+        util.subp(['debconf-set-selections'], dconf_sel)
     except:
-        log.error("Failed to run debconf-set-selections for grub-dpkg")
-        log.debug(traceback.format_exc())
+        util.logexc(log, "Failed to run debconf-set-selections for grub-dpkg")
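The preseed string built above is what gets piped to debconf-set-selections. A quick rendering with hypothetical example values for the two placeholders:

```python
# Example values only; in the module these are detected from /dev.
idevs = "/dev/sda"
idevs_empty = "false"
dconf_sel = (("grub-pc grub-pc/install_devices string %s\n"
              "grub-pc grub-pc/install_devices_empty boolean %s\n") %
             (idevs, idevs_empty))
print(dconf_sel)
```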

(new file, 53 lines)
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
from cloudinit.settings import PER_INSTANCE
from cloudinit import util
frequency = PER_INSTANCE
# This is a tool that cloud-init provides
HELPER_TOOL = '/usr/lib/cloud-init/write-ssh-key-fingerprints'
def handle(name, cfg, _cloud, log, _args):
if not os.path.exists(HELPER_TOOL):
log.warn(("Unable to activate module %s,"
" helper tool not found at %s"), name, HELPER_TOOL)
return
fp_blacklist = util.get_cfg_option_list(cfg,
"ssh_fp_console_blacklist", [])
key_blacklist = util.get_cfg_option_list(cfg,
"ssh_key_console_blacklist",
["ssh-dss"])
try:
cmd = [HELPER_TOOL]
cmd.append(','.join(fp_blacklist))
cmd.append(','.join(key_blacklist))
(stdout, _stderr) = util.subp(cmd)
util.multi_log("%s\n" % (stdout.strip()),
stderr=False, console=True)
except:
log.warn("Writing keys to the system console failed!")
raise
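The command handed to `util.subp` above is just the helper tool plus two comma-joined blacklists; a standalone sketch of that assembly (the helper function name is illustrative):

```python
def build_fingerprint_cmd(helper_tool, fp_blacklist, key_blacklist):
    # The helper receives each blacklist as one comma-separated
    # positional argument; an empty list becomes an empty string.
    cmd = [helper_tool]
    cmd.append(','.join(fp_blacklist))
    cmd.append(','.join(key_blacklist))
    return cmd


cmd = build_fingerprint_cmd('/usr/lib/cloud-init/write-ssh-key-fingerprints',
                            [], ['ssh-dss'])
```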


@ -19,16 +19,23 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import os.path
from cloudinit.CloudConfig import per_instance
from StringIO import StringIO
from configobj import ConfigObj
frequency = per_instance
from cloudinit import util
lsc_client_cfg_file = "/etc/landscape/client.conf"
from cloudinit.settings import PER_INSTANCE
frequency = PER_INSTANCE
LSC_CLIENT_CFG_FILE = "/etc/landscape/client.conf"
distros = ['ubuntu']
# defaults taken from stock client.conf in landscape-client 11.07.1.1-0ubuntu2
lsc_builtincfg = {
LSC_BUILTIN_CFG = {
'client': {
'log_level': "info",
'url': "https://landscape.canonical.com/message-system",
@ -38,7 +45,7 @@ lsc_builtincfg = {
}
def handle(_name, cfg, _cloud, log, _args):
def handle(_name, cfg, cloud, log, _args):
"""
Basically turn a top level 'landscape' entry with a 'client' dict
and render it to ConfigObj format under '[client]' section in
@ -47,27 +54,40 @@ def handle(_name, cfg, _cloud, log, _args):
ls_cloudcfg = cfg.get("landscape", {})
if not isinstance(ls_cloudcfg, dict):
raise(Exception("'landscape' existed in config, but not a dict"))
if not isinstance(ls_cloudcfg, (dict)):
raise RuntimeError(("'landscape' key existed in config,"
" but not a dictionary type,"
" is a %s instead"), util.obj_name(ls_cloudcfg))
merged = mergeTogether([lsc_builtincfg, lsc_client_cfg_file, ls_cloudcfg])
merge_data = [
LSC_BUILTIN_CFG,
cloud.paths.join(True, LSC_CLIENT_CFG_FILE),
ls_cloudcfg,
]
merged = merge_together(merge_data)
if not os.path.isdir(os.path.dirname(lsc_client_cfg_file)):
os.makedirs(os.path.dirname(lsc_client_cfg_file))
lsc_client_fn = cloud.paths.join(False, LSC_CLIENT_CFG_FILE)
lsc_dir = cloud.paths.join(False, os.path.dirname(lsc_client_fn))
if not os.path.isdir(lsc_dir):
util.ensure_dir(lsc_dir)
with open(lsc_client_cfg_file, "w") as fp:
merged.write(fp)
contents = StringIO()
merged.write(contents)
contents.flush()
log.debug("updated %s" % lsc_client_cfg_file)
util.write_file(lsc_client_fn, contents.getvalue())
log.debug("Wrote landscape config file to %s", lsc_client_fn)
def mergeTogether(objs):
def merge_together(objs):
"""
merge together ConfigObj objects or things that ConfigObj() will take in
later entries override earlier
"""
cfg = ConfigObj({})
for obj in objs:
if not obj:
continue
if isinstance(obj, ConfigObj):
cfg.merge(obj)
else:
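`merge_together` leans on ConfigObj's merge semantics; a dict-based sketch of the same later-entries-win, skip-falsy behavior, without assuming configobj is installed:

```python
def merge_together(objs):
    # Later entries override earlier ones; falsy entries (None, empty)
    # are skipped, as in the ConfigObj version above.
    merged = {}
    for obj in objs:
        if not obj:
            continue
        for (section, values) in obj.items():
            merged.setdefault(section, {}).update(values)
    return merged


LSC_BUILTIN_CFG = {
    'client': {
        'log_level': "info",
        'url': "https://landscape.canonical.com/message-system",
    },
}
# A user-supplied 'landscape: client:' cloud-config overrides the builtin.
user_cfg = {'client': {'log_level': "debug"}}
merged = merge_together([LSC_BUILTIN_CFG, None, user_cfg])
```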


@ -0,0 +1,37 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from cloudinit import util
def handle(name, cfg, cloud, log, args):
if len(args) != 0:
locale = args[0]
else:
locale = util.get_cfg_option_str(cfg, "locale", cloud.get_locale())
if not locale:
log.debug(("Skipping module named %s, "
"no 'locale' configuration found"), name)
return
log.debug("Setting locale to %s", locale)
locale_cfgfile = util.get_cfg_option_str(cfg, "locale_configfile")
cloud.distro.apply_locale(locale, locale_cfgfile)


@ -0,0 +1,91 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Marc Cluet <marc.cluet@canonical.com>
# Based on code by Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from StringIO import StringIO
# Used since this can maintain comments
# and doesn't need a top level section
from configobj import ConfigObj
from cloudinit import util
PUBCERT_FILE = "/etc/mcollective/ssl/server-public.pem"
PRICERT_FILE = "/etc/mcollective/ssl/server-private.pem"
def handle(name, cfg, cloud, log, _args):
# If there isn't a mcollective key in the configuration don't do anything
if 'mcollective' not in cfg:
log.debug(("Skipping module named %s, "
"no 'mcollective' key in configuration"), name)
return
mcollective_cfg = cfg['mcollective']
# Start by installing the mcollective package ...
cloud.distro.install_packages(("mcollective",))
# ... and then update the mcollective configuration
if 'conf' in mcollective_cfg:
# Read server.cfg values from the
# original file in order to be able to mix the rest up
server_cfg_fn = cloud.paths.join(True, '/etc/mcollective/server.cfg')
mcollective_config = ConfigObj(server_cfg_fn)
# See: http://tiny.cc/jh9agw
for (cfg_name, cfg) in mcollective_cfg['conf'].iteritems():
if cfg_name == 'public-cert':
pubcert_fn = cloud.paths.join(True, PUBCERT_FILE)
util.write_file(pubcert_fn, cfg, mode=0644)
mcollective_config['plugin.ssl_server_public'] = pubcert_fn
mcollective_config['securityprovider'] = 'ssl'
elif cfg_name == 'private-cert':
pricert_fn = cloud.paths.join(True, PRICERT_FILE)
util.write_file(pricert_fn, cfg, mode=0600)
mcollective_config['plugin.ssl_server_private'] = pricert_fn
mcollective_config['securityprovider'] = 'ssl'
else:
if isinstance(cfg, (basestring, str)):
# Just set it in the 'main' section
mcollective_config[cfg_name] = cfg
elif isinstance(cfg, (dict)):
# Iterate through the config items, create a section
# if it is needed and then add/or create items as needed
if cfg_name not in mcollective_config.sections:
mcollective_config[cfg_name] = {}
for (o, v) in cfg.iteritems():
mcollective_config[cfg_name][o] = v
else:
# Otherwise just try to convert it to a string
mcollective_config[cfg_name] = str(cfg)
# Now that we have all the config we want, rename
# the previous server.cfg and create our new one
old_fn = cloud.paths.join(False, '/etc/mcollective/server.cfg.old')
util.rename(server_cfg_fn, old_fn)
# Now that we have the whole file contents, write it to disk...
contents = StringIO()
mcollective_config.write(contents)
contents = contents.getvalue()
server_cfg_rw = cloud.paths.join(False, '/etc/mcollective/server.cfg')
util.write_file(server_cfg_rw, contents, mode=0644)
# Start mcollective
util.subp(['service', 'mcollective', 'start'], capture=False)
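The per-key dispatch in the 'conf' loop above reduces to four cases; a sketch of just that dispatch (the case labels are illustrative, not part of the module):

```python
def classify_conf_entry(cfg_name, value):
    # public/private certs are written to fixed paths and flip the
    # securityprovider to ssl; strings land in the main section;
    # dicts become their own section; anything else is stringified.
    if cfg_name in ('public-cert', 'private-cert'):
        return 'cert-file'
    if isinstance(value, str):
        return 'main-section'
    if isinstance(value, dict):
        return 'own-section'
    return 'stringified'
```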


@ -18,11 +18,17 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
import os
import re
from string import whitespace # pylint: disable=W0402
import re
from cloudinit import util
# Shortname matches 'sda', 'sda1', 'xvda', 'hda', 'sdb', xvdb, vda, vdd1
SHORTNAME_FILTER = r"^[x]{0,1}[shv]d[a-z][0-9]*$"
SHORTNAME = re.compile(SHORTNAME_FILTER)
WS = re.compile("[%s]+" % (whitespace))
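The module-level SHORTNAME pattern above can be exercised in isolation (the candidate device names below are illustrative):

```python
import re

# Same pattern as SHORTNAME_FILTER: an optional 'x', one of s/h/v,
# 'd', a drive letter, then an optional partition number.
SHORTNAME = re.compile(r"^[x]{0,1}[shv]d[a-z][0-9]*$")

candidates = ("sda", "sda1", "xvdb", "vdd1", "md0", "ephemeral0")
matched = [c for c in candidates if SHORTNAME.match(c)]
```

Names like "md0" or "ephemeral0" fall through to the metadata-name path instead of the `/dev/` shortname rewrite.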
def is_mdname(name):
# return true if this is a metadata service name
@ -49,38 +55,46 @@ def handle(_name, cfg, cloud, log, _args):
if "mounts" in cfg:
cfgmnt = cfg["mounts"]
# shortname matches 'sda', 'sda1', 'xvda', 'hda', 'sdb', xvdb, vda, vdd1
shortname_filter = r"^[x]{0,1}[shv]d[a-z][0-9]*$"
shortname = re.compile(shortname_filter)
for i in range(len(cfgmnt)):
# skip something that wasn't a list
if not isinstance(cfgmnt[i], list):
log.warn("Mount option %s not a list, got a %s instead",
(i + 1), util.obj_name(cfgmnt[i]))
continue
startname = str(cfgmnt[i][0])
log.debug("Attempting to determine the real name of %s", startname)
# Workaround: allow the user to specify 'ephemeral'
# rather than the more EC2-correct 'ephemeral0'
if cfgmnt[i][0] == "ephemeral":
if startname == "ephemeral":
cfgmnt[i][0] = "ephemeral0"
log.debug(("Adjusted mount option %s "
"name from ephemeral to ephemeral0"), (i + 1))
if is_mdname(cfgmnt[i][0]):
newname = cloud.device_name_to_device(cfgmnt[i][0])
if is_mdname(startname):
newname = cloud.device_name_to_device(startname)
if not newname:
log.debug("ignoring nonexistant named mount %s" % cfgmnt[i][0])
log.debug("Ignoring nonexistent named mount %s", startname)
cfgmnt[i][1] = None
else:
if newname.startswith("/"):
cfgmnt[i][0] = newname
else:
cfgmnt[i][0] = "/dev/%s" % newname
renamed = newname
if not newname.startswith("/"):
renamed = "/dev/%s" % newname
cfgmnt[i][0] = renamed
log.debug("Mapped metadata name %s to %s", startname, renamed)
else:
if shortname.match(cfgmnt[i][0]):
cfgmnt[i][0] = "/dev/%s" % cfgmnt[i][0]
if SHORTNAME.match(startname):
renamed = "/dev/%s" % startname
log.debug("Mapped shortname name %s to %s", startname, renamed)
cfgmnt[i][0] = renamed
# in case the user did not quote a field (likely fs-freq, fs_passno)
# but do not convert None to 'None' (LP: #898365)
for j in range(len(cfgmnt[i])):
if isinstance(cfgmnt[i][j], int):
if cfgmnt[i][j] is None:
continue
else:
cfgmnt[i][j] = str(cfgmnt[i][j])
for i in range(len(cfgmnt)):
@ -102,14 +116,18 @@ def handle(_name, cfg, cloud, log, _args):
# for each of the "default" mounts, add them only if no other
# entry has the same device name
for defmnt in defmnts:
devname = cloud.device_name_to_device(defmnt[0])
startname = defmnt[0]
devname = cloud.device_name_to_device(startname)
if devname is None:
log.debug("Ignoring nonexistent named default mount %s", startname)
continue
if devname.startswith("/"):
defmnt[0] = devname
else:
defmnt[0] = "/dev/%s" % devname
log.debug("Mapped default device %s to %s", startname, defmnt[0])
cfgmnt_has = False
for cfgm in cfgmnt:
if cfgm[0] == defmnt[0]:
@ -117,14 +135,22 @@ def handle(_name, cfg, cloud, log, _args):
break
if cfgmnt_has:
log.debug(("Not including %s, already"
" previously included"), startname)
continue
cfgmnt.append(defmnt)
# now, each entry in the cfgmnt list has all fstab values
# if the second field is None (not the string, the value) we skip it
actlist = [x for x in cfgmnt if x[1] is not None]
actlist = []
for x in cfgmnt:
if x[1] is None:
log.debug("Skipping non-existent device named %s", x[0])
else:
actlist.append(x)
if len(actlist) == 0:
log.debug("No modifications to fstab needed.")
return
comment = "comment=cloudconfig"
@ -133,7 +159,7 @@ def handle(_name, cfg, cloud, log, _args):
dirs = []
for line in actlist:
# write 'comment' in the fs_mntops, entry, claiming this
line[3] = "%s,comment=cloudconfig" % line[3]
line[3] = "%s,%s" % (line[3], comment)
if line[2] == "swap":
needswap = True
if line[1].startswith("/"):
@ -141,11 +167,10 @@ def handle(_name, cfg, cloud, log, _args):
cc_lines.append('\t'.join(line))
fstab_lines = []
fstab = open("/etc/fstab", "r+")
ws = re.compile("[%s]+" % whitespace)
for line in fstab.read().splitlines():
fstab = util.load_file(cloud.paths.join(True, "/etc/fstab"))
for line in fstab.splitlines():
try:
toks = ws.split(line)
toks = WS.split(line)
if toks[3].find(comment) != -1:
continue
except:
@ -153,27 +178,23 @@ def handle(_name, cfg, cloud, log, _args):
fstab_lines.append(line)
fstab_lines.extend(cc_lines)
fstab.seek(0)
fstab.write("%s\n" % '\n'.join(fstab_lines))
fstab.truncate()
fstab.close()
contents = "%s\n" % ('\n'.join(fstab_lines))
util.write_file(cloud.paths.join(False, "/etc/fstab"), contents)
if needswap:
try:
util.subp(("swapon", "-a"))
except:
log.warn("Failed to enable swap")
util.logexc(log, "Activating swap via 'swapon -a' failed")
for d in dirs:
if os.path.exists(d):
continue
real_dir = cloud.paths.join(False, d)
try:
os.makedirs(d)
util.ensure_dir(real_dir)
except:
log.warn("Failed to make '%s' config-mount\n", d)
util.logexc(log, "Failed to make '%s' config-mount", d)
try:
util.subp(("mount", "-a"))
except:
log.warn("'mount -a' failed")
util.logexc(log, "Activating mounts via 'mount -a' failed")
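Rewriting /etc/fstab above hinges on dropping lines previously claimed by cloud-init via the comment tag in the fourth (fs_mntops) field; a standalone sketch of that filter (the sample fstab lines are illustrative):

```python
import re
from string import whitespace

WS = re.compile("[%s]+" % (whitespace))
COMMENT = "comment=cloudconfig"


def keep_fstab_line(line):
    # Drop lines whose fs_mntops field carries the cloud-config tag;
    # short or malformed lines are kept untouched.
    try:
        toks = WS.split(line)
        if toks[3].find(COMMENT) != -1:
            return False
    except IndexError:
        pass
    return True


existing = [
    "LABEL=root / ext4 defaults 0 1",
    "/dev/sdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 2",
]
kept = [l for l in existing if keep_fstab_line(l)]
```

The kept lines are then re-emitted with the freshly generated, comment-tagged entries appended.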


@ -17,13 +17,22 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from cloudinit.CloudConfig import per_instance
import cloudinit.util as util
from time import sleep
frequency = per_instance
post_list_all = ['pub_key_dsa', 'pub_key_rsa', 'pub_key_ecdsa', 'instance_id',
'hostname']
from cloudinit import templater
from cloudinit import url_helper as uhelp
from cloudinit import util
from cloudinit.settings import PER_INSTANCE
frequency = PER_INSTANCE
POST_LIST_ALL = [
'pub_key_dsa',
'pub_key_rsa',
'pub_key_ecdsa',
'instance_id',
'hostname'
]
# phone_home:
@ -35,29 +44,33 @@ post_list_all = ['pub_key_dsa', 'pub_key_rsa', 'pub_key_ecdsa', 'instance_id',
# url: http://my.foo.bar/$INSTANCE_ID/
# post: [ pub_key_dsa, pub_key_rsa, pub_key_ecdsa, instance_id
#
def handle(_name, cfg, cloud, log, args):
def handle(name, cfg, cloud, log, args):
if len(args) != 0:
ph_cfg = util.read_conf(args[0])
else:
if not 'phone_home' in cfg:
log.debug(("Skipping module named %s, "
"no 'phone_home' configuration found"), name)
return
ph_cfg = cfg['phone_home']
if 'url' not in ph_cfg:
log.warn("no 'url' token in phone_home")
log.warn(("Skipping module named %s, "
"no 'url' found in 'phone_home' configuration"), name)
return
url = ph_cfg['url']
post_list = ph_cfg.get('post', 'all')
tries = ph_cfg.get('tries', 10)
tries = ph_cfg.get('tries')
try:
tries = int(tries)
except:
log.warn("tries is not an integer. using 10")
tries = 10
util.logexc(log, ("Configuration entry 'tries'"
" is not an integer, using %s instead"), tries)
if post_list == "all":
post_list = post_list_all
post_list = POST_LIST_ALL
all_keys = {}
all_keys['instance_id'] = cloud.get_instance_id()
@ -69,38 +82,37 @@ def handle(_name, cfg, cloud, log, args):
'pub_key_ecdsa': '/etc/ssh/ssh_host_ecdsa_key.pub',
}
for n, path in pubkeys.iteritems():
for (n, path) in pubkeys.iteritems():
try:
fp = open(path, "rb")
all_keys[n] = fp.read()
fp.close()
all_keys[n] = util.load_file(cloud.paths.join(True, path))
except:
log.warn("%s: failed to open in phone_home" % path)
util.logexc(log, ("%s: failed to open, can not"
" phone home that data"), path)
submit_keys = {}
for k in post_list:
if k in all_keys:
submit_keys[k] = all_keys[k]
else:
submit_keys[k] = "N/A"
log.warn("requested key %s from 'post' list not available")
submit_keys[k] = None
log.warn(("Requested key %s from 'post'"
" configuration list not available"), k)
url = util.render_string(url, {'INSTANCE_ID': all_keys['instance_id']})
# Get them ready to be posted
real_submit_keys = {}
for (k, v) in submit_keys.iteritems():
if v is None:
real_submit_keys[k] = 'N/A'
else:
real_submit_keys[k] = str(v)
null_exc = object()
last_e = null_exc
for i in range(0, tries):
try:
util.readurl(url, submit_keys)
log.debug("succeeded submit to %s on try %i" % (url, i + 1))
return
except Exception as e:
log.debug("failed to post to %s on try %i" % (url, i + 1))
last_e = e
sleep(3)
log.warn("failed to post to %s in %i tries" % (url, tries))
if last_e is not null_exc:
raise(last_e)
return
# In case the url is parameterized
url_params = {
'INSTANCE_ID': all_keys['instance_id'],
}
url = templater.render_string(url, url_params)
try:
uhelp.readurl(url, data=real_submit_keys, retries=tries, sec_between=3)
except:
util.logexc(log, ("Failed to post phone home data to"
" %s in %s tries"), url, tries)
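`templater.render_string` fills $INSTANCE_ID into the configured url; a stdlib-only stand-in for that substitution (the url and instance id are illustrative):

```python
from string import Template


def render_url(url, instance_id):
    # Stand-in for templater.render_string: only $INSTANCE_ID is
    # substituted; unknown placeholders are left alone.
    return Template(url).safe_substitute(INSTANCE_ID=instance_id)


url = render_url("http://my.foo.bar/$INSTANCE_ID/", "i-abcdefg")
```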


@ -0,0 +1,113 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from StringIO import StringIO
import os
import pwd
import socket
from cloudinit import helpers
from cloudinit import util
def handle(name, cfg, cloud, log, _args):
# If there isn't a puppet key in the configuration don't do anything
if 'puppet' not in cfg:
log.debug(("Skipping module named %s,"
" no 'puppet' configuration found"), name)
return
puppet_cfg = cfg['puppet']
# Start by installing the puppet package ...
cloud.distro.install_packages(["puppet"])
# ... and then update the puppet configuration
if 'conf' in puppet_cfg:
# Add all sections from the conf object to puppet.conf
puppet_conf_fn = cloud.paths.join(True, '/etc/puppet/puppet.conf')
contents = util.load_file(puppet_conf_fn)
# Create object for reading puppet.conf values
puppet_config = helpers.DefaultingConfigParser()
# Read puppet.conf values from original file in order to be able to
# mix the rest up. First clean them up (TODO is this really needed??)
cleaned_lines = [i.lstrip() for i in contents.splitlines()]
cleaned_contents = '\n'.join(cleaned_lines)
puppet_config.readfp(StringIO(cleaned_contents),
filename=puppet_conf_fn)
for (cfg_name, cfg) in puppet_cfg['conf'].iteritems():
# Cert configuration is a special case
# Dump the puppet master ca certificate in the correct place
if cfg_name == 'ca_cert':
# Puppet ssl sub-directory isn't created yet
# Create it with the proper permissions and ownership
pp_ssl_dir = cloud.paths.join(False, '/var/lib/puppet/ssl')
util.ensure_dir(pp_ssl_dir, 0771)
util.chownbyid(pp_ssl_dir,
pwd.getpwnam('puppet').pw_uid, 0)
pp_ssl_certs = cloud.paths.join(False,
'/var/lib/puppet/ssl/certs/')
util.ensure_dir(pp_ssl_certs)
util.chownbyid(pp_ssl_certs,
pwd.getpwnam('puppet').pw_uid, 0)
pp_ssl_ca_certs = cloud.paths.join(False,
('/var/lib/puppet/'
'ssl/certs/ca.pem'))
util.write_file(pp_ssl_ca_certs, cfg)
util.chownbyid(pp_ssl_ca_certs,
pwd.getpwnam('puppet').pw_uid, 0)
else:
# Iterate through the config items, we'll use ConfigParser.set
# to overwrite or create new items as needed
for (o, v) in cfg.iteritems():
if o == 'certname':
# Expand %f as the fqdn
# TODO should this use the cloud fqdn??
v = v.replace("%f", socket.getfqdn())
# Expand %i as the instance id
v = v.replace("%i", cloud.get_instance_id())
# certname needs to be downcased
v = v.lower()
puppet_config.set(cfg_name, o, v)
# Now that we have all the config we want, rename
# the previous puppet.conf and create our new one
conf_old_fn = cloud.paths.join(False,
'/etc/puppet/puppet.conf.old')
util.rename(puppet_conf_fn, conf_old_fn)
puppet_conf_rw = cloud.paths.join(False, '/etc/puppet/puppet.conf')
util.write_file(puppet_conf_rw, puppet_config.stringify())
# Set puppet to automatically start
if os.path.exists('/etc/default/puppet'):
util.subp(['sed', '-i',
'-e', 's/^START=.*/START=yes/',
'/etc/default/puppet'], capture=False)
elif os.path.exists('/bin/systemctl'):
util.subp(['/bin/systemctl', 'enable', 'puppet.service'],
capture=False)
elif os.path.exists('/sbin/chkconfig'):
util.subp(['/sbin/chkconfig', 'puppet', 'on'], capture=False)
else:
log.warn(("Sorry we do not know how to enable"
" puppet services on this system"))
# Start puppetd
util.subp(['service', 'puppet', 'start'], capture=False)
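The certname handling above substitutes %f and %i before lower-casing; a sketch with the fqdn passed in explicitly so it stays deterministic (in the module it comes from `socket.getfqdn()`):

```python
def expand_certname(value, fqdn, instance_id):
    # %f expands to the fqdn, %i to the instance id; puppet requires
    # a lower-case certname, so downcase last.
    value = value.replace("%f", fqdn)
    value = value.replace("%i", instance_id)
    return value.lower()


certname = expand_certname("%i.%f", "Host.Example.Org", "I-ABC123")
```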


@ -0,0 +1,140 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import stat
import time
from cloudinit import util
from cloudinit.settings import PER_ALWAYS
frequency = PER_ALWAYS
RESIZE_FS_PREFIXES_CMDS = [
('ext', 'resize2fs'),
('xfs', 'xfs_growfs'),
]
def nodeify_path(devpth, where, log):
try:
st_dev = os.stat(where).st_dev
dev = os.makedev(os.major(st_dev), os.minor(st_dev))
os.mknod(devpth, 0400 | stat.S_IFBLK, dev)
return st_dev
except:
if util.is_container():
log.debug("Inside container, ignoring mknod failure in resizefs")
return
log.warn("Failed to make device node to resize %s at %s",
where, devpth)
raise
def get_fs_type(st_dev, path, log):
try:
dev_entries = util.find_devs_with(tag='TYPE', oformat='value',
no_cache=True, path=path)
if not dev_entries:
return None
return dev_entries[0].strip()
except util.ProcessExecutionError:
util.logexc(log, ("Failed to get filesystem type"
" of maj=%s, min=%s for path %s"),
os.major(st_dev), os.minor(st_dev), path)
raise
def handle(name, cfg, cloud, log, args):
if len(args) != 0:
resize_root = args[0]
else:
resize_root = util.get_cfg_option_str(cfg, "resize_rootfs", True)
if not util.translate_bool(resize_root):
log.debug("Skipping module named %s, resizing disabled", name)
return
# TODO is the directory ok to be used??
resize_root_d = util.get_cfg_option_str(cfg, "resize_rootfs_tmp", "/run")
resize_root_d = cloud.paths.join(False, resize_root_d)
util.ensure_dir(resize_root_d)
# TODO: allow what is to be resized to be configurable??
resize_what = cloud.paths.join(False, "/")
with util.ExtendedTemporaryFile(prefix="cloudinit.resizefs.",
dir=resize_root_d, delete=True) as tfh:
devpth = tfh.name
# Delete the file so that mknod will work
# but keep the file handle from learning that it's
# been removed, so that if a later call recreates the
# file it will still benefit from auto deletion
tfh.unlink_now()
st_dev = nodeify_path(devpth, resize_what, log)
fs_type = get_fs_type(st_dev, devpth, log)
if not fs_type:
log.warn("Could not determine filesystem type of %s", resize_what)
return
resizer = None
fstype_lc = fs_type.lower()
for (pfix, root_cmd) in RESIZE_FS_PREFIXES_CMDS:
if fstype_lc.startswith(pfix):
resizer = root_cmd
break
if not resizer:
log.warn("Not resizing unknown filesystem type %s for %s",
fs_type, resize_what)
return
log.debug("Resizing %s (%s) using %s", resize_what, fs_type, resizer)
resize_cmd = [resizer, devpth]
if resize_root == "noblock":
# Fork to a child that will run
# the resize command
util.fork_cb(do_resize, resize_cmd, log)
# Don't delete the file now in the parent
tfh.delete = False
else:
do_resize(resize_cmd, log)
action = 'Resized'
if resize_root == "noblock":
action = 'Resizing (via forking)'
log.debug("%s root filesystem (type=%s, maj=%i, min=%i, val=%s)",
action, fs_type, os.major(st_dev), os.minor(st_dev), resize_root)
def do_resize(resize_cmd, log):
start = time.time()
try:
util.subp(resize_cmd)
except util.ProcessExecutionError:
util.logexc(log, "Failed to resize filesystem (cmd=%s)", resize_cmd)
raise
tot_time = int(time.time() - start)
log.debug("Resizing took %s seconds", tot_time)
# TODO: Should we add a fsck check after this to make
# sure we didn't corrupt anything?
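The resizer selection in handle() is a simple prefix scan over RESIZE_FS_PREFIXES_CMDS; extracted as a standalone sketch:

```python
RESIZE_FS_PREFIXES_CMDS = [
    ('ext', 'resize2fs'),
    ('xfs', 'xfs_growfs'),
]


def pick_resizer(fs_type):
    # The first prefix matching the lower-cased filesystem type wins;
    # unknown types get None and the module logs and bails out.
    fstype_lc = fs_type.lower()
    for (pfix, root_cmd) in RESIZE_FS_PREFIXES_CMDS:
        if fstype_lc.startswith(pfix):
            return root_cmd
    return None
```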


@ -0,0 +1,102 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
##
## The purpose of this script is to allow cloud-init to consume
## rightscale-style userdata. Rightscale user data is key-value pairs
## in a url-query-string-like format.
##
## for cloud-init support, there will be a key named
## 'CLOUD_INIT_REMOTE_HOOK'.
##
## This cloud-config module will
## - read the blob of data from raw user data, and parse it as key/value
## - for each key that is found, download the content to
## the local instance/scripts directory and set them executable.
## - the files in that directory will be run by the user-scripts module
## Therefore, this must run before that.
##
##
import os
from cloudinit import url_helper as uhelp
from cloudinit import util
from cloudinit.settings import PER_INSTANCE
from urlparse import parse_qs
frequency = PER_INSTANCE
MY_NAME = "cc_rightscale_userdata"
MY_HOOKNAME = 'CLOUD_INIT_REMOTE_HOOK'
def handle(name, _cfg, cloud, log, _args):
try:
ud = cloud.get_userdata_raw()
except:
log.warn("Failed to get raw userdata in module %s", name)
return
try:
mdict = parse_qs(ud)
if not mdict or not MY_HOOKNAME in mdict:
log.debug(("Skipping module %s, "
"did not find %s in parsed"
" raw userdata"), name, MY_HOOKNAME)
return
except:
util.logexc(log, ("Failed to parse query string %s"
" into a dictionary"), ud)
raise
wrote_fns = []
captured_excps = []
# These will eventually then be run by the cc_scripts_user
# TODO: maybe this should just be a new user data handler??
# Instead of a late module that acts like a user data handler?
scripts_d = cloud.get_ipath_cur('scripts')
urls = mdict[MY_HOOKNAME]
for (i, url) in enumerate(urls):
fname = os.path.join(scripts_d, "rightscale-%02i" % (i))
try:
resp = uhelp.readurl(url)
# Ensure it's a valid http response (and that something was received)
if resp.ok() and resp.contents:
util.write_file(fname, str(resp), mode=0700)
wrote_fns.append(fname)
except Exception as e:
captured_excps.append(e)
util.logexc(log, "%s failed to read %s and write %s",
MY_NAME, url, fname)
if wrote_fns:
log.debug("Wrote out rightscale userdata to %s files", len(wrote_fns))
if len(wrote_fns) != len(urls):
skipped = len(urls) - len(wrote_fns)
log.debug("%s urls were skipped or failed", skipped)
if captured_excps:
log.warn("%s failed with exceptions, re-raising the last one",
len(captured_excps))
raise captured_excps[-1]
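The hook extraction above is plain query-string parsing; the same parse via the stdlib, shown in Python 3 syntax (the hook urls are illustrative):

```python
from urllib.parse import parse_qs  # urlparse.parse_qs on Python 2

MY_HOOKNAME = 'CLOUD_INIT_REMOTE_HOOK'

# Raw user data in rightscale's key=value query-string format;
# repeated keys accumulate into a list.
ud = ("CLOUD_INIT_REMOTE_HOOK=http://example.com/hook-0&"
      "CLOUD_INIT_REMOTE_HOOK=http://example.com/hook-1&other=ignored")
mdict = parse_qs(ud)
urls = mdict.get(MY_HOOKNAME, [])
```

Each url in that list is then fetched and written to the instance scripts directory as an executable rightscale-NN file.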


@ -18,16 +18,15 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit
import logging
import cloudinit.util as util
import traceback
import os
from cloudinit import util
DEF_FILENAME = "20-cloud-config.conf"
DEF_DIR = "/etc/rsyslog.d"
def handle(_name, cfg, _cloud, log, _args):
def handle(name, cfg, cloud, log, _args):
# rsyslog:
# - "*.* @@192.158.1.1"
# - content: "*.* @@192.0.2.1:10514"
@ -37,17 +36,18 @@ def handle(_name, cfg, _cloud, log, _args):
# process 'rsyslog'
if not 'rsyslog' in cfg:
log.debug(("Skipping module named %s,"
" no 'rsyslog' key in configuration"), name)
return
def_dir = cfg.get('rsyslog_dir', DEF_DIR)
def_fname = cfg.get('rsyslog_filename', DEF_FILENAME)
files = []
elst = []
for ent in cfg['rsyslog']:
for i, ent in enumerate(cfg['rsyslog']):
if isinstance(ent, dict):
if not "content" in ent:
elst.append((ent, "no 'content' entry"))
log.warn("No 'content' entry in config entry %s", i + 1)
continue
content = ent['content']
filename = ent.get("filename", def_fname)
@ -55,47 +55,48 @@ def handle(_name, cfg, _cloud, log, _args):
content = ent
filename = def_fname
if not filename.startswith("/"):
filename = "%s/%s" % (def_dir, filename)
filename = filename.strip()
if not filename:
log.warn("Entry %s has an empty filename", i + 1)
continue
if not filename.startswith("/"):
filename = os.path.join(def_dir, filename)
# Truncate filename first time you see it
omode = "ab"
# truncate filename first time you see it
if filename not in files:
omode = "wb"
files.append(filename)
try:
util.write_file(filename, content + "\n", omode=omode)
except Exception as e:
log.debug(traceback.format_exc(e))
elst.append((content, "failed to write to %s" % filename))
contents = "%s\n" % (content)
util.write_file(cloud.paths.join(False, filename),
contents, omode=omode)
except Exception:
util.logexc(log, "Failed to write to %s", filename)
# need to restart syslogd
# Attempt to restart syslogd
restarted = False
try:
# if this config module is running at cloud-init time
# If this config module is running at cloud-init time
# (before rsyslog is running) we don't actually have to
# restart syslog.
#
# upstart actually does what we want here, in that it doesn't
# Upstart actually does what we want here, in that it doesn't
# start a service that wasn't running already on 'restart'
# it will also return failure on the attempt, so 'restarted'
# won't get set
log.debug("restarting rsyslog")
# won't get set.
log.debug("Restarting rsyslog")
util.subp(['service', 'rsyslog', 'restart'])
restarted = True
except Exception as e:
elst.append(("restart", str(e)))
except Exception:
util.logexc(log, "Failed restarting rsyslog")
if restarted:
# this only needs to run if we *actually* restarted
# This only needs to run if we *actually* restarted
# syslog above.
cloudinit.logging_set_from_cfg_file()
log = logging.getLogger()
log.debug("rsyslog configured %s" % files)
for e in elst:
log.warn("rsyslog error: %s\n" % ':'.join(e))
return
cloud.cycle_logging()
# This should now use rsyslog if
# the logging was setup to use it...
log.debug("%s configured %s files", name, files)
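The write-mode selection above ("wb" truncates a file the first time it is seen in a run, later entries append) can be sketched standalone (the helper name and filenames are illustrative):

```python
def pick_omode(filename, files_seen):
    # First sighting truncates ("wb") and records the filename;
    # subsequent entries for the same file append ("ab").
    if filename not in files_seen:
        files_seen.append(filename)
        return "wb"
    return "ab"


seen = []
modes = [pick_omode(f, seen)
         for f in ("/etc/rsyslog.d/20-cloud-config.conf",
                   "/etc/rsyslog.d/20-cloud-config.conf",
                   "/etc/rsyslog.d/99-extra.conf")]
```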


@ -18,15 +18,21 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
import os
from cloudinit import util
def handle(_name, cfg, cloud, log, _args):
def handle(name, cfg, cloud, log, _args):
if "runcmd" not in cfg:
log.debug(("Skipping module named %s,"
" no 'runcmd' key in configuration"), name)
return
outfile = "%s/runcmd" % cloud.get_ipath('scripts')
out_fn = os.path.join(cloud.get_ipath('scripts'), "runcmd")
cmd = cfg["runcmd"]
try:
content = util.shellify(cfg["runcmd"])
util.write_file(outfile, content, 0700)
content = util.shellify(cmd)
util.write_file(cloud.paths.join(False, out_fn), content, 0700)
except:
log.warn("failed to open %s for runcmd" % outfile)
util.logexc(log, "Failed to shellify %s into file %s", cmd, out_fn)
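The new module shells the `runcmd` list through `util.shellify` before writing it into the instance scripts directory. `util.shellify` itself is not part of this diff; as an illustration (a sketch, not the actual implementation), such a helper typically passes string entries through as raw shell and quotes list entries as argv:

```python
import shlex

def shellify(cmdlist):
    # Sketch of a shellify-style helper: build a shell script from a
    # mixed list of raw command strings and argv lists.
    lines = ["#!/bin/sh"]
    for cmd in cmdlist:
        if isinstance(cmd, (list, tuple)):
            # Quote each argument so spaces survive the round trip
            lines.append(" ".join(shlex.quote(str(a)) for a in cmd))
        else:
            lines.append(str(cmd))
    return "\n".join(lines) + "\n"

script = shellify(["echo hello", ["touch", "/tmp/my file"]])
```

The resulting text is what gets written (mode 0700) to the `runcmd` file and later executed by the `cc_scripts_user` module.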

@@ -15,42 +15,46 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import os.path
import subprocess
import cloudinit.CloudConfig as cc
import yaml
from cloudinit import util
# Note: see http://saltstack.org/topics/installation/
def handle(_name, cfg, _cloud, _log, _args):
def handle(name, cfg, cloud, log, _args):
# If there isn't a salt key in the configuration don't do anything
if 'salt_minion' not in cfg:
log.debug(("Skipping module named %s,"
" no 'salt_minion' key in configuration"), name)
return
salt_cfg = cfg['salt_minion']
# Start by installing the salt package ...
cc.install_packages(("salt-minion",))
config_dir = '/etc/salt'
if not os.path.isdir(config_dir):
os.makedirs(config_dir)
cloud.distro.install_packages(["salt-minion"])
# Ensure we can configure files at the right dir
config_dir = cloud.paths.join(False, salt_cfg.get("config_dir",
'/etc/salt'))
util.ensure_dir(config_dir)
# ... and then update the salt configuration
if 'conf' in salt_cfg:
# Add all sections from the conf object to /etc/salt/minion
minion_config = os.path.join(config_dir, 'minion')
yaml.dump(salt_cfg['conf'],
file(minion_config, 'w'),
default_flow_style=False)
minion_data = util.yaml_dumps(salt_cfg.get('conf'))
util.write_file(minion_config, minion_data)
# ... copy the key pair if specified
if 'public_key' in salt_cfg and 'private_key' in salt_cfg:
pki_dir = '/etc/salt/pki'
cumask = os.umask(077)
if not os.path.isdir(pki_dir):
os.makedirs(pki_dir)
pub_name = os.path.join(pki_dir, 'minion.pub')
pem_name = os.path.join(pki_dir, 'minion.pem')
with open(pub_name, 'w') as f:
f.write(salt_cfg['public_key'])
with open(pem_name, 'w') as f:
f.write(salt_cfg['private_key'])
os.umask(cumask)
pki_dir = cloud.paths.join(False, salt_cfg.get('pki_dir',
'/etc/salt/pki'))
with util.umask(077):
util.ensure_dir(pki_dir)
pub_name = os.path.join(pki_dir, 'minion.pub')
pem_name = os.path.join(pki_dir, 'minion.pem')
util.write_file(pub_name, salt_cfg['public_key'])
util.write_file(pem_name, salt_cfg['private_key'])
# Start salt-minion
subprocess.check_call(['service', 'salt-minion', 'start'])
util.subp(['service', 'salt-minion', 'start'], capture=False)

@@ -18,17 +18,24 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
from cloudinit.CloudConfig import per_always
from cloudinit import get_cpath
import os
frequency = per_always
runparts_path = "%s/%s" % (get_cpath(), "scripts/per-boot")
from cloudinit import util
from cloudinit.settings import PER_ALWAYS
frequency = PER_ALWAYS
SCRIPT_SUBDIR = 'per-boot'
def handle(_name, _cfg, _cloud, log, _args):
def handle(name, _cfg, cloud, log, _args):
# Comes from the following:
# https://forums.aws.amazon.com/thread.jspa?threadID=96918
runparts_path = os.path.join(cloud.get_cpath(), 'scripts', SCRIPT_SUBDIR)
try:
util.runparts(runparts_path)
except:
log.warn("failed to run-parts in %s" % runparts_path)
log.warn("Failed to run module %s (%s in %s)",
name, SCRIPT_SUBDIR, runparts_path)
raise
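All four `cc_scripts_*` modules now compute `runparts_path` per-cloud instead of at import time, then delegate to `util.runparts`. That helper is not shown in this diff; roughly, it executes every executable file in a directory and raises if any fail (a simplified sketch under that assumption):

```python
import os
import subprocess

def runparts(dirp):
    # Run each executable file in dirp in sorted order; raise if any
    # exit non-zero (the modules above re-raise to surface the failure).
    failed = []
    for fname in sorted(os.listdir(dirp)):
        path = os.path.join(dirp, fname)
        if os.path.isfile(path) and os.access(path, os.X_OK):
            if subprocess.call([path]) != 0:
                failed.append(fname)
    if failed:
        raise RuntimeError("Failed scripts in %s: %s" % (dirp, failed))
```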

@@ -18,17 +18,24 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
from cloudinit.CloudConfig import per_instance
from cloudinit import get_cpath
import os
frequency = per_instance
runparts_path = "%s/%s" % (get_cpath(), "scripts/per-instance")
from cloudinit import util
from cloudinit.settings import PER_INSTANCE
frequency = PER_INSTANCE
SCRIPT_SUBDIR = 'per-instance'
def handle(_name, _cfg, _cloud, log, _args):
def handle(name, _cfg, cloud, log, _args):
# Comes from the following:
# https://forums.aws.amazon.com/thread.jspa?threadID=96918
runparts_path = os.path.join(cloud.get_cpath(), 'scripts', SCRIPT_SUBDIR)
try:
util.runparts(runparts_path)
except:
log.warn("failed to run-parts in %s" % runparts_path)
log.warn("Failed to run module %s (%s in %s)",
name, SCRIPT_SUBDIR, runparts_path)
raise

@@ -18,17 +18,24 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
from cloudinit.CloudConfig import per_once
from cloudinit import get_cpath
import os
frequency = per_once
runparts_path = "%s/%s" % (get_cpath(), "scripts/per-once")
from cloudinit import util
from cloudinit.settings import PER_ONCE
frequency = PER_ONCE
SCRIPT_SUBDIR = 'per-once'
def handle(_name, _cfg, _cloud, log, _args):
def handle(name, _cfg, cloud, log, _args):
# Comes from the following:
# https://forums.aws.amazon.com/thread.jspa?threadID=96918
runparts_path = os.path.join(cloud.get_cpath(), 'scripts', SCRIPT_SUBDIR)
try:
util.runparts(runparts_path)
except:
log.warn("failed to run-parts in %s" % runparts_path)
log.warn("Failed to run module %s (%s in %s)",
name, SCRIPT_SUBDIR, runparts_path)
raise

@@ -18,17 +18,25 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
from cloudinit.CloudConfig import per_instance
from cloudinit import get_ipath_cur
import os
frequency = per_instance
runparts_path = "%s/%s" % (get_ipath_cur(), "scripts")
from cloudinit import util
from cloudinit.settings import PER_INSTANCE
frequency = PER_INSTANCE
SCRIPT_SUBDIR = 'scripts'
def handle(_name, _cfg, _cloud, log, _args):
def handle(name, _cfg, cloud, log, _args):
# This is written to by the user data handlers
# Ie, any custom shell scripts that come down
# go here...
runparts_path = os.path.join(cloud.get_ipath_cur(), SCRIPT_SUBDIR)
try:
util.runparts(runparts_path)
except:
log.warn("failed to run-parts in %s" % runparts_path)
log.warn("Failed to run module %s (%s in %s)",
name, SCRIPT_SUBDIR, runparts_path)
raise

@@ -18,25 +18,18 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
from cloudinit import util
def handle(_name, cfg, cloud, log, _args):
def handle(name, cfg, cloud, log, _args):
if util.get_cfg_option_bool(cfg, "preserve_hostname", False):
log.debug("preserve_hostname is set. not setting hostname")
return(True)
log.debug(("Configuration option 'preserve_hostname' is set,"
" not setting the hostname in module %s"), name)
return
(hostname, _fqdn) = util.get_hostname_fqdn(cfg, cloud)
try:
set_hostname(hostname, log)
log.debug("Setting hostname to %s", hostname)
cloud.distro.set_hostname(hostname)
except Exception:
util.logexc(log)
log.warn("failed to set hostname to %s\n", hostname)
return(True)
def set_hostname(hostname, log):
util.subp(['hostname', hostname])
util.write_file("/etc/hostname", "%s\n" % hostname, 0644)
log.debug("populated /etc/hostname with %s on first boot", hostname)
util.logexc(log, "Failed to set hostname to %s", hostname)

@@ -18,13 +18,19 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
import sys
import random
from cloudinit import ssh_util
from cloudinit import util
from string import letters, digits # pylint: disable=W0402
# We are removing certain 'painful' letters/numbers
PW_SET = (letters.translate(None, 'loLOI') +
digits.translate(None, '01'))
def handle(_name, cfg, _cloud, log, args):
def handle(_name, cfg, cloud, log, args):
if len(args) != 0:
# if run from command line, and give args, wipe the chpasswd['list']
password = args[0]
@@ -62,68 +68,79 @@ def handle(_name, cfg, _cloud, log, args):
ch_in = '\n'.join(plist_in)
try:
log.debug("Changing password for %s:", users)
util.subp(['chpasswd'], ch_in)
log.debug("changed password for %s:" % users)
except Exception as e:
errors.append(e)
log.warn("failed to set passwords with chpasswd: %s" % e)
util.logexc(log,
"Failed to set passwords with chpasswd for %s", users)
if len(randlist):
sys.stdout.write("%s\n%s\n" % ("Set the following passwords\n",
'\n'.join(randlist)))
blurb = ("Set the following 'random' passwords\n",
'\n'.join(randlist))
sys.stderr.write("%s\n%s\n" % blurb)
if expire:
enum = len(errors)
expired_users = []
for u in users:
try:
util.subp(['passwd', '--expire', u])
expired_users.append(u)
except Exception as e:
errors.append(e)
log.warn("failed to expire account for %s" % u)
if enum == len(errors):
log.debug("expired passwords for: %s" % u)
util.logexc(log, "Failed to set 'expire' for %s", u)
if expired_users:
log.debug("Expired passwords for: %s users", expired_users)
change_pwauth = False
pw_auth = None
if 'ssh_pwauth' in cfg:
val = str(cfg['ssh_pwauth']).lower()
if val in ("true", "1", "yes"):
pw_auth = "yes"
change_pwauth = True
elif val in ("false", "0", "no"):
pw_auth = "no"
change_pwauth = True
else:
change_pwauth = False
change_pwauth = True
if util.is_true(cfg['ssh_pwauth']):
pw_auth = 'yes'
if util.is_false(cfg['ssh_pwauth']):
pw_auth = 'no'
if change_pwauth:
pa_s = "\(#*\)\(PasswordAuthentication[[:space:]]\+\)\(yes\|no\)"
msg = "set PasswordAuthentication to '%s'" % pw_auth
try:
cmd = ['sed', '-i', 's,%s,\\2%s,' % (pa_s, pw_auth),
'/etc/ssh/sshd_config']
util.subp(cmd)
log.debug(msg)
except Exception as e:
log.warn("failed %s" % msg)
errors.append(e)
replaced_auth = False
# See: man sshd_config
conf_fn = cloud.paths.join(True, ssh_util.DEF_SSHD_CFG)
old_lines = ssh_util.parse_ssh_config(conf_fn)
new_lines = []
i = 0
for (i, line) in enumerate(old_lines):
# Keywords are case-insensitive and arguments are case-sensitive
if line.key == 'passwordauthentication':
log.debug("Replacing auth line %s with %s", i + 1, pw_auth)
replaced_auth = True
line.value = pw_auth
new_lines.append(line)
if not replaced_auth:
log.debug("Adding new auth line %s", i + 1)
replaced_auth = True
new_lines.append(ssh_util.SshdConfigLine('',
'PasswordAuthentication',
pw_auth))
lines = [str(e) for e in new_lines]
ssh_rw_fn = cloud.paths.join(False, ssh_util.DEF_SSHD_CFG)
util.write_file(ssh_rw_fn, "\n".join(lines))
try:
p = util.subp(['service', cfg.get('ssh_svcname', 'ssh'),
'restart'])
log.debug("restarted sshd")
cmd = ['service']
cmd.append(cloud.distro.get_option('ssh_svcname', 'ssh'))
cmd.append('restart')
util.subp(cmd)
log.debug("Restarted the ssh daemon")
except:
log.warn("restart of ssh failed")
util.logexc(log, "Restarting of the ssh daemon failed")
if len(errors):
raise(errors[0])
return
def rand_str(strlen=32, select_from=letters + digits):
return("".join([random.choice(select_from) for _x in range(0, strlen)]))
log.debug("%s errors occured, re-raising the last one", len(errors))
raise errors[-1]
def rand_user_password(pwlen=9):
selfrom = (letters.translate(None, 'loLOI') +
digits.translate(None, '01'))
return(rand_str(pwlen, select_from=selfrom))
return util.rand_str(pwlen, select_from=PW_SET)
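The `PW_SET` constant above removes characters that are easy to misread (`l`, `o`, `L`, `O`, `I`, `0`, `1`) using Python 2's `str.translate(None, ...)`. For illustration only, the same filtering written in Python 3 syntax (the module itself is Python 2):

```python
import random
import string

# Python 3 equivalent of the module's PW_SET construction:
# letters and digits minus the visually ambiguous characters.
AMBIGUOUS = set("loLOI01")
PW_SET = "".join(c for c in string.ascii_letters + string.digits
                 if c not in AMBIGUOUS)

def rand_user_password(pwlen=9):
    # Mirrors util.rand_str(pwlen, select_from=PW_SET)
    return "".join(random.choice(PW_SET) for _ in range(pwlen))
```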

cloudinit/config/cc_ssh.py Normal file
@@ -0,0 +1,132 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import glob
from cloudinit import util
from cloudinit import ssh_util
DISABLE_ROOT_OPTS = ("no-port-forwarding,no-agent-forwarding,"
"no-X11-forwarding,command=\"echo \'Please login as the user \\\"$USER\\\" "
"rather than the user \\\"root\\\".\';echo;sleep 10\"")
KEY_2_FILE = {
"rsa_private": ("/etc/ssh/ssh_host_rsa_key", 0600),
"rsa_public": ("/etc/ssh/ssh_host_rsa_key.pub", 0644),
"dsa_private": ("/etc/ssh/ssh_host_dsa_key", 0600),
"dsa_public": ("/etc/ssh/ssh_host_dsa_key.pub", 0644),
"ecdsa_private": ("/etc/ssh/ssh_host_ecdsa_key", 0600),
"ecdsa_public": ("/etc/ssh/ssh_host_ecdsa_key.pub", 0644),
}
PRIV_2_PUB = {
'rsa_private': 'rsa_public',
'dsa_private': 'dsa_public',
'ecdsa_private': 'ecdsa_public',
}
KEY_GEN_TPL = 'o=$(ssh-keygen -yf "%s") && echo "$o" root@localhost > "%s"'
GENERATE_KEY_NAMES = ['rsa', 'dsa', 'ecdsa']
KEY_FILE_TPL = '/etc/ssh/ssh_host_%s_key'
def handle(_name, cfg, cloud, log, _args):
# remove the static keys from the pristine image
if cfg.get("ssh_deletekeys", True):
key_pth = cloud.paths.join(False, "/etc/ssh/", "ssh_host_*key*")
for f in glob.glob(key_pth):
try:
util.del_file(f)
except:
util.logexc(log, "Failed deleting key file %s", f)
if "ssh_keys" in cfg:
# if there are keys in cloud-config, use them
for (key, val) in cfg["ssh_keys"].iteritems():
if key in KEY_2_FILE:
tgt_fn = KEY_2_FILE[key][0]
tgt_perms = KEY_2_FILE[key][1]
util.write_file(cloud.paths.join(False, tgt_fn),
val, tgt_perms)
for (priv, pub) in PRIV_2_PUB.iteritems():
if pub in cfg['ssh_keys'] or not priv in cfg['ssh_keys']:
continue
pair = (KEY_2_FILE[priv][0], KEY_2_FILE[pub][0])
cmd = ['sh', '-xc', KEY_GEN_TPL % pair]
try:
# TODO: Is this guard needed?
with util.SeLinuxGuard("/etc/ssh", recursive=True):
util.subp(cmd, capture=False)
log.debug("Generated a key for %s from %s", pair[0], pair[1])
except:
util.logexc(log, ("Failed generated a key"
" for %s from %s"), pair[0], pair[1])
else:
# if not, generate them
genkeys = util.get_cfg_option_list(cfg,
'ssh_genkeytypes',
GENERATE_KEY_NAMES)
for keytype in genkeys:
keyfile = cloud.paths.join(False, KEY_FILE_TPL % (keytype))
util.ensure_dir(os.path.dirname(keyfile))
if not os.path.exists(keyfile):
cmd = ['ssh-keygen', '-t', keytype, '-N', '', '-f', keyfile]
try:
# TODO: Is this guard needed?
with util.SeLinuxGuard("/etc/ssh", recursive=True):
util.subp(cmd, capture=False)
except:
util.logexc(log, ("Failed generating key type"
" %s to file %s"), keytype, keyfile)
try:
user = util.get_cfg_option_str(cfg, 'user')
disable_root = util.get_cfg_option_bool(cfg, "disable_root", True)
disable_root_opts = util.get_cfg_option_str(cfg, "disable_root_opts",
DISABLE_ROOT_OPTS)
keys = cloud.get_public_ssh_keys() or []
if "ssh_authorized_keys" in cfg:
cfgkeys = cfg["ssh_authorized_keys"]
keys.extend(cfgkeys)
apply_credentials(keys, user, cloud.paths,
disable_root, disable_root_opts)
except:
util.logexc(log, "Applying ssh credentials failed!")
def apply_credentials(keys, user, paths, disable_root, disable_root_opts):
keys = set(keys)
if user:
ssh_util.setup_user_keys(keys, user, '', paths)
if disable_root and user:
key_prefix = disable_root_opts.replace('$USER', user)
else:
key_prefix = ''
ssh_util.setup_user_keys(keys, 'root', key_prefix, paths)
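In the `PRIV_2_PUB` loop above, when cloud-config supplies a private key without its public half, the module derives the public key by expanding `KEY_GEN_TPL` with the (private file, public file) pair and running the result through `sh -xc`. The expansion itself is plain `%`-formatting:

```python
# Template and key paths copied from the module above
KEY_GEN_TPL = 'o=$(ssh-keygen -yf "%s") && echo "$o" root@localhost > "%s"'

pair = ("/etc/ssh/ssh_host_rsa_key", "/etc/ssh/ssh_host_rsa_key.pub")
# The command cc_ssh hands to util.subp for this pair
cmd = ['sh', '-xc', KEY_GEN_TPL % pair]
```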

@@ -18,12 +18,14 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.util as util
import subprocess
import traceback
from cloudinit import util
# The ssh-import-id only seems to exist on ubuntu (for now)
# https://launchpad.net/ssh-import-id
distros = ['ubuntu']
def handle(_name, cfg, _cloud, log, args):
def handle(name, cfg, _cloud, log, args):
if len(args) != 0:
user = args[0]
ids = []
@@ -31,20 +33,21 @@ def handle(_name, cfg, _cloud, log, args):
ids = args[1:]
else:
user = util.get_cfg_option_str(cfg, "user", "ubuntu")
ids = util.get_cfg_option_list_or_str(cfg, "ssh_import_id", [])
ids = util.get_cfg_option_list(cfg, "ssh_import_id", [])
if len(ids) == 0:
log.debug("Skipping module named %s, no ids found to import", name)
return
if not user:
log.debug("Skipping module named %s, no user found to import", name)
return
cmd = ["sudo", "-Hu", user, "ssh-import-id"] + ids
log.debug("importing ssh ids. cmd = %s" % cmd)
log.debug("Importing ssh ids for user %s.", user)
try:
subprocess.check_call(cmd)
except subprocess.CalledProcessError as e:
log.debug(traceback.format_exc(e))
raise Exception("Cmd returned %s: %s" % (e.returncode, cmd))
except OSError as e:
log.debug(traceback.format_exc(e))
raise Exception("Cmd failed to execute: %s" % (cmd))
util.subp(cmd, capture=False)
except util.ProcessExecutionError as e:
util.logexc(log, "Failed to run command to import %s ssh ids", user)
raise e

@@ -0,0 +1,39 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from cloudinit import util
from cloudinit.settings import PER_INSTANCE
frequency = PER_INSTANCE
def handle(name, cfg, cloud, log, args):
if len(args) != 0:
timezone = args[0]
else:
timezone = util.get_cfg_option_str(cfg, "timezone", False)
if not timezone:
log.debug("Skipping module named %s, no 'timezone' specified", name)
return
# Let the distro handle settings its timezone
cloud.distro.set_timezone(timezone)

@@ -0,0 +1,60 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from cloudinit import util
from cloudinit import templater
from cloudinit.settings import PER_ALWAYS
frequency = PER_ALWAYS
def handle(name, cfg, cloud, log, _args):
manage_hosts = util.get_cfg_option_str(cfg, "manage_etc_hosts", False)
if util.translate_bool(manage_hosts, addons=['template']):
(hostname, fqdn) = util.get_hostname_fqdn(cfg, cloud)
if not hostname:
log.warn(("Option 'manage_etc_hosts' was set,"
" but no hostname was found"))
return
# Render from a template file
distro_n = cloud.distro.name
tpl_fn_name = cloud.get_template_filename("hosts.%s" % (distro_n))
if not tpl_fn_name:
raise RuntimeError(("No hosts template could be"
" found for distro %s") % (distro_n))
out_fn = cloud.paths.join(False, '/etc/hosts')
templater.render_to_file(tpl_fn_name, out_fn,
{'hostname': hostname, 'fqdn': fqdn})
elif manage_hosts == "localhost":
(hostname, fqdn) = util.get_hostname_fqdn(cfg, cloud)
if not hostname:
log.warn(("Option 'manage_etc_hosts' was set,"
" but no hostname was found"))
return
log.debug("Managing localhost in /etc/hosts")
cloud.distro.update_etc_hosts(hostname, fqdn)
else:
log.debug(("Configuration option 'manage_etc_hosts' is not set,"
" not managing /etc/hosts in module %s"), name)

@@ -0,0 +1,41 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
from cloudinit import util
from cloudinit.settings import PER_ALWAYS
frequency = PER_ALWAYS
def handle(name, cfg, cloud, log, _args):
if util.get_cfg_option_bool(cfg, "preserve_hostname", False):
log.debug(("Configuration option 'preserve_hostname' is set,"
" not updating the hostname in module %s"), name)
return
(hostname, _fqdn) = util.get_hostname_fqdn(cfg, cloud)
try:
prev_fn = os.path.join(cloud.get_cpath('data'), "previous-hostname")
cloud.distro.update_hostname(hostname, prev_fn)
except Exception:
util.logexc(log, "Failed to set the hostname to %s", hostname)
raise

@@ -0,0 +1,163 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from StringIO import StringIO
import abc
from cloudinit import importer
from cloudinit import log as logging
from cloudinit import util
# TODO: Make this via config??
IFACE_ACTIONS = {
'up': ['ifup', '--all'],
'down': ['ifdown', '--all'],
}
LOG = logging.getLogger(__name__)
class Distro(object):
__metaclass__ = abc.ABCMeta
def __init__(self, name, cfg, paths):
self._paths = paths
self._cfg = cfg
self.name = name
@abc.abstractmethod
def install_packages(self, pkglist):
raise NotImplementedError()
@abc.abstractmethod
def _write_network(self, settings):
# In the future use the http://fedorahosted.org/netcf/
# to write this blob out in a distro format
raise NotImplementedError()
def get_option(self, opt_name, default=None):
return self._cfg.get(opt_name, default)
@abc.abstractmethod
def set_hostname(self, hostname):
raise NotImplementedError()
@abc.abstractmethod
def update_hostname(self, hostname, prev_hostname_fn):
raise NotImplementedError()
@abc.abstractmethod
def package_command(self, cmd, args=None):
raise NotImplementedError()
@abc.abstractmethod
def update_package_sources(self):
raise NotImplementedError()
def get_package_mirror(self):
return self.get_option('package_mirror')
def apply_network(self, settings, bring_up=True):
# Write it out
self._write_network(settings)
# Now try to bring them up
if bring_up:
return self._interface_action('up')
return False
@abc.abstractmethod
def apply_locale(self, locale, out_fn=None):
raise NotImplementedError()
@abc.abstractmethod
def set_timezone(self, tz):
raise NotImplementedError()
def _get_localhost_ip(self):
return "127.0.0.1"
def update_etc_hosts(self, hostname, fqdn):
# Format defined at
# http://unixhelp.ed.ac.uk/CGI/man-cgi?hosts
header = "# Added by cloud-init"
real_header = "%s on %s" % (header, util.time_rfc2822())
local_ip = self._get_localhost_ip()
hosts_line = "%s\t%s %s" % (local_ip, fqdn, hostname)
new_etchosts = StringIO()
need_write = False
need_change = True
hosts_ro_fn = self._paths.join(True, "/etc/hosts")
for line in util.load_file(hosts_ro_fn).splitlines():
if line.strip().startswith(header):
continue
if not line.strip() or line.strip().startswith("#"):
new_etchosts.write("%s\n" % (line))
continue
split_line = [s.strip() for s in line.split()]
if len(split_line) < 2:
new_etchosts.write("%s\n" % (line))
continue
(ip, hosts) = split_line[0], split_line[1:]
if ip == local_ip:
if sorted([hostname, fqdn]) == sorted(hosts):
need_change = False
if need_change:
line = "%s\n%s" % (real_header, hosts_line)
need_change = False
need_write = True
new_etchosts.write("%s\n" % (line))
if need_change:
new_etchosts.write("%s\n%s\n" % (real_header, hosts_line))
need_write = True
if need_write:
contents = new_etchosts.getvalue()
util.write_file(self._paths.join(False, "/etc/hosts"),
contents, mode=0644)
def _interface_action(self, action):
if action not in IFACE_ACTIONS:
raise NotImplementedError("Unknown interface action %s" % (action))
cmd = IFACE_ACTIONS[action]
try:
LOG.debug("Attempting to run %s interface action using command %s",
action, cmd)
(_out, err) = util.subp(cmd)
if len(err):
LOG.warn("Running %s resulted in stderr output: %s", cmd, err)
return True
except util.ProcessExecutionError:
util.logexc(LOG, "Running interface command %s failed", cmd)
return False
def fetch(name):
locs = importer.find_module(name,
['', __name__],
['Distro'])
if not locs:
raise ImportError("No distribution found for distro %s"
% (name))
mod = importer.import_module(locs[0])
cls = getattr(mod, 'Distro')
return cls
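The `update_etc_hosts` method above is worth tracing: it drops any line tagged with the cloud-init header from a previous run, replaces the first entry for the localhost IP whose names don't match, and appends a tagged entry if none existed. A simplified pure-function version of that flow (the real method reads and writes through `self._paths` and timestamps the header):

```python
def rebuild_etc_hosts(contents, local_ip, hostname, fqdn):
    header = "# Added by cloud-init"
    hosts_line = "%s\t%s %s" % (local_ip, fqdn, hostname)
    out = []
    need_change = True
    for line in contents.splitlines():
        stripped = line.strip()
        if stripped.startswith(header):
            continue  # drop stale markers from a previous run
        if not stripped or stripped.startswith("#") or len(line.split()) < 2:
            out.append(line)
            continue
        ip, hosts = line.split()[0], line.split()[1:]
        if ip == local_ip:
            if sorted([hostname, fqdn]) == sorted(hosts):
                need_change = False  # entry already correct, keep as-is
            if need_change:
                # Replace this entry with a tagged, corrected one
                line = "%s\n%s" % (header, hosts_line)
                need_change = False
        out.append(line)
    if need_change:
        out.append("%s\n%s" % (header, hosts_line))
    return "\n".join(out) + "\n"
```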

cloudinit/distros/debian.py Normal file
@@ -0,0 +1,149 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
from cloudinit import distros
from cloudinit import helpers
from cloudinit import log as logging
from cloudinit import util
from cloudinit.settings import PER_INSTANCE
LOG = logging.getLogger(__name__)
class Distro(distros.Distro):
def __init__(self, name, cfg, paths):
distros.Distro.__init__(self, name, cfg, paths)
# This will be used to restrict certain
# calls from repeatly happening (when they
# should only happen say once per instance...)
self._runner = helpers.Runners(paths)
def apply_locale(self, locale, out_fn=None):
if not out_fn:
out_fn = self._paths.join(False, '/etc/default/locale')
util.subp(['locale-gen', locale], capture=False)
util.subp(['update-locale', locale], capture=False)
contents = [
"# Created by cloud-init",
'LANG="%s"' % (locale),
]
util.write_file(out_fn, "\n".join(contents))
def install_packages(self, pkglist):
self.update_package_sources()
self.package_command('install', pkglist)
def _write_network(self, settings):
net_fn = self._paths.join(False, "/etc/network/interfaces")
util.write_file(net_fn, settings)
def set_hostname(self, hostname):
out_fn = self._paths.join(False, "/etc/hostname")
self._write_hostname(hostname, out_fn)
if out_fn == '/etc/hostname':
# Only do this if we are running in non-adjusted root mode
LOG.debug("Setting hostname to %s", hostname)
util.subp(['hostname', hostname])
def _write_hostname(self, hostname, out_fn):
lines = []
lines.append("# Created by cloud-init")
lines.append(str(hostname))
contents = "\n".join(lines)
util.write_file(out_fn, contents, 0644)
def update_hostname(self, hostname, prev_fn):
hostname_prev = self._read_hostname(prev_fn)
read_fn = self._paths.join(True, "/etc/hostname")
hostname_in_etc = self._read_hostname(read_fn)
update_files = []
if not hostname_prev or hostname_prev != hostname:
update_files.append(prev_fn)
if (not hostname_in_etc or
(hostname_in_etc == hostname_prev and
hostname_in_etc != hostname)):
write_fn = self._paths.join(False, "/etc/hostname")
update_files.append(write_fn)
for fn in update_files:
try:
self._write_hostname(hostname, fn)
except:
util.logexc(LOG, "Failed to write hostname %s to %s",
hostname, fn)
if (hostname_in_etc and hostname_prev and
hostname_in_etc != hostname_prev):
LOG.debug(("%s differs from /etc/hostname."
" Assuming user maintained hostname."), prev_fn)
if "/etc/hostname" in update_files:
# Only do this if we are running in non-adjusted root mode
LOG.debug("Setting hostname to %s", hostname)
util.subp(['hostname', hostname])
def _read_hostname(self, filename, default=None):
contents = util.load_file(filename, quiet=True)
for line in contents.splitlines():
c_pos = line.find("#")
# Handle inline comments
if c_pos != -1:
line = line[0:c_pos]
line_c = line.strip()
if line_c:
return line_c
return default
def _get_localhost_ip(self):
# Note: http://www.leonardoborda.com/blog/127-0-1-1-ubuntu-debian/
return "127.0.1.1"
def set_timezone(self, tz):
tz_file = os.path.join("/usr/share/zoneinfo", tz)
if not os.path.isfile(tz_file):
raise RuntimeError(("Invalid timezone %s,"
" no file found at %s") % (tz, tz_file))
tz_lines = [
"# Created by cloud-init",
str(tz),
]
tz_contents = "\n".join(tz_lines)
tz_fn = self._paths.join(False, "/etc/timezone")
util.write_file(tz_fn, tz_contents)
util.copy(tz_file, self._paths.join(False, "/etc/localtime"))
def package_command(self, command, args=None):
e = os.environ.copy()
# See: http://tiny.cc/kg91fw
# Or: http://tiny.cc/mh91fw
e['DEBIAN_FRONTEND'] = 'noninteractive'
cmd = ['apt-get', '--option', 'Dpkg::Options::=--force-confold',
'--assume-yes', '--quiet', command]
if args:
cmd.extend(args)
# Allow the output of this to flow outwards (ie not be captured)
util.subp(cmd, env=e, capture=False)
def update_package_sources(self):
self._runner.run("update-sources", self.package_command,
["update"], freq=PER_INSTANCE)
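`update_package_sources` above funnels `apt-get update` through `self._runner.run(..., freq=PER_INSTANCE)`. The `helpers.Runners` class is not part of this hunk; the core idea is a per-(name, frequency) semaphore that short-circuits repeat calls, roughly like this sketch (class name and file layout are illustrative, not the actual API):

```python
import os

class OnceRunner:
    # Sketch of the helpers.Runners idea: one semaphore file per
    # (name, frequency) pair; if it exists, skip the callable.
    def __init__(self, sem_dir):
        self.sem_dir = sem_dir
        os.makedirs(sem_dir, exist_ok=True)

    def run(self, name, functor, args, freq):
        sem = os.path.join(self.sem_dir, "%s.%s" % (name, freq))
        if os.path.exists(sem):
            return None  # already ran at this frequency
        result = functor(*args)
        with open(sem, "w") as f:
            f.write("ran\n")  # mark only after a successful run
        return result
```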

@@ -1,10 +1,12 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
@@ -18,12 +20,12 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#import cloudinit
#import cloudinit.util as util
from cloudinit.CloudConfig import per_instance
from cloudinit.distros import rhel
frequency = per_instance
from cloudinit import log as logging
LOG = logging.getLogger(__name__)
def handle(_name, _cfg, _cloud, _log, _args):
print "hi"
class Distro(rhel.Distro):
pass

cloudinit/distros/rhel.py Normal file
@@ -0,0 +1,337 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os

from cloudinit import distros
from cloudinit import helpers
from cloudinit import log as logging
from cloudinit import util

from cloudinit.settings import PER_INSTANCE

LOG = logging.getLogger(__name__)

NETWORK_FN_TPL = '/etc/sysconfig/network-scripts/ifcfg-%s'

# See: http://tiny.cc/6r99fw
# For what a lot of these files that are being
# written are, and their format

# This library is used to parse/write
# out the various sysconfig files edited
#
# It has to be slightly modified though
# to ensure that all values are quoted
# since these configs are usually sourced into
# bash scripts...
from configobj import ConfigObj

# See: http://tiny.cc/oezbgw
D_QUOTE_CHARS = {
    "\"": "\\\"",
    "(": "\\(",
    ")": "\\)",
    "$": '\$',
    '`': '\`',
}
class Distro(distros.Distro):

    def __init__(self, name, cfg, paths):
        distros.Distro.__init__(self, name, cfg, paths)
        # This will be used to restrict certain
        # calls from repeatedly happening (when they
        # should only happen say once per instance...)
        self._runner = helpers.Runners(paths)

    def install_packages(self, pkglist):
        self.package_command('install', pkglist)

    def _write_network(self, settings):
        # TODO fix this... since this is the ubuntu format
        entries = translate_network(settings)
        LOG.debug("Translated ubuntu style network settings %s into %s",
                  settings, entries)
        # Make the intermediate format as the rhel format...
        for (dev, info) in entries.iteritems():
            net_fn = NETWORK_FN_TPL % (dev)
            net_ro_fn = self._paths.join(True, net_fn)
            (prev_exist, net_cfg) = self._read_conf(net_ro_fn)
            net_cfg['DEVICE'] = dev
            boot_proto = info.get('bootproto')
            if boot_proto:
                net_cfg['BOOTPROTO'] = boot_proto
            net_mask = info.get('netmask')
            if net_mask:
                net_cfg["NETMASK"] = net_mask
            addr = info.get('address')
            if addr:
                net_cfg["IPADDR"] = addr
            if info.get('auto'):
                net_cfg['ONBOOT'] = 'yes'
            else:
                net_cfg['ONBOOT'] = 'no'
            gtway = info.get('gateway')
            if gtway:
                net_cfg["GATEWAY"] = gtway
            bcast = info.get('broadcast')
            if bcast:
                net_cfg["BROADCAST"] = bcast
            mac_addr = info.get('hwaddress')
            if mac_addr:
                net_cfg["MACADDR"] = mac_addr
            lines = net_cfg.write()
            if not prev_exist:
                lines.insert(0, '# Created by cloud-init')
            w_contents = "\n".join(lines)
            net_rw_fn = self._paths.join(False, net_fn)
            util.write_file(net_rw_fn, w_contents, 0644)
    def set_hostname(self, hostname):
        out_fn = self._paths.join(False, '/etc/sysconfig/network')
        self._write_hostname(hostname, out_fn)
        if out_fn == '/etc/sysconfig/network':
            # Only do this if we are running in non-adjusted root mode
            LOG.debug("Setting hostname to %s", hostname)
            util.subp(['hostname', hostname])

    def apply_locale(self, locale, out_fn=None):
        if not out_fn:
            out_fn = self._paths.join(False, '/etc/sysconfig/i18n')
        ro_fn = self._paths.join(True, '/etc/sysconfig/i18n')
        (_exists, contents) = self._read_conf(ro_fn)
        contents['LANG'] = locale
        w_contents = "\n".join(contents.write())
        util.write_file(out_fn, w_contents, 0644)

    def _write_hostname(self, hostname, out_fn):
        (_exists, contents) = self._read_conf(out_fn)
        contents['HOSTNAME'] = hostname
        w_contents = "\n".join(contents.write())
        util.write_file(out_fn, w_contents, 0644)

    def update_hostname(self, hostname, prev_file):
        hostname_prev = self._read_hostname(prev_file)
        read_fn = self._paths.join(True, "/etc/sysconfig/network")
        hostname_in_sys = self._read_hostname(read_fn)
        update_files = []
        if not hostname_prev or hostname_prev != hostname:
            update_files.append(prev_file)
        if (not hostname_in_sys or
            (hostname_in_sys == hostname_prev
             and hostname_in_sys != hostname)):
            write_fn = self._paths.join(False, "/etc/sysconfig/network")
            update_files.append(write_fn)
        for fn in update_files:
            try:
                self._write_hostname(hostname, fn)
            except:
                util.logexc(LOG, "Failed to write hostname %s to %s",
                            hostname, fn)
        if (hostname_in_sys and hostname_prev and
            hostname_in_sys != hostname_prev):
            LOG.debug(("%s differs from /etc/sysconfig/network."
                       " Assuming user maintained hostname."), prev_file)
        if "/etc/sysconfig/network" in update_files:
            # Only do this if we are running in non-adjusted root mode
            LOG.debug("Setting hostname to %s", hostname)
            util.subp(['hostname', hostname])

    def _read_hostname(self, filename, default=None):
        (_exists, contents) = self._read_conf(filename)
        if 'HOSTNAME' in contents:
            return contents['HOSTNAME']
        else:
            return default

    def _read_conf(self, fn):
        exists = False
        if os.path.isfile(fn):
            contents = util.load_file(fn).splitlines()
            exists = True
        else:
            contents = []
        return (exists, QuotingConfigObj(contents))
    def set_timezone(self, tz):
        tz_file = os.path.join("/usr/share/zoneinfo", tz)
        if not os.path.isfile(tz_file):
            raise RuntimeError(("Invalid timezone %s,"
                                " no file found at %s") % (tz, tz_file))
        # Adjust the sysconfig clock zone setting
        read_fn = self._paths.join(True, "/etc/sysconfig/clock")
        (_exists, contents) = self._read_conf(read_fn)
        contents['ZONE'] = tz
        tz_contents = "\n".join(contents.write())
        write_fn = self._paths.join(False, "/etc/sysconfig/clock")
        util.write_file(write_fn, tz_contents)
        # This ensures that the correct tz will be used for the system
        util.copy(tz_file, self._paths.join(False, "/etc/localtime"))

    def package_command(self, command, args=None):
        cmd = ['yum']
        # If enabled, then yum will be tolerant of errors on the command line
        # with regard to packages.
        # For example: if you request to install foo, bar and baz and baz is
        # installed; yum won't error out complaining that baz is already
        # installed.
        cmd.append("-t")
        # Determines whether or not yum prompts for confirmation
        # of critical actions. We don't want to prompt...
        cmd.append("-y")
        cmd.append(command)
        if args:
            cmd.extend(args)
        # Allow the output of this to flow outwards (ie not be captured)
        util.subp(cmd, capture=False)

    def update_package_sources(self):
        self._runner.run("update-sources", self.package_command,
                         ["update"], freq=PER_INSTANCE)
# This class helps adjust the configobj
# writing to ensure that when writing a k/v
# on a line, that they are properly quoted
# and have no spaces around the '=' sign.
# - This is mainly due to the fact that
#   the sysconfig scripts are often sourced
#   directly into bash/shell scripts so ensure
#   that it works for those types of use cases.
class QuotingConfigObj(ConfigObj):

    def __init__(self, lines):
        ConfigObj.__init__(self, lines,
                           interpolation=False,
                           write_empty_values=True)

    def _quote_posix(self, text):
        if not text:
            return ''
        for (k, v) in D_QUOTE_CHARS.iteritems():
            text = text.replace(k, v)
        return '"%s"' % (text)

    def _quote_special(self, text):
        if text.lower() in ['yes', 'no', 'true', 'false']:
            return text
        else:
            return self._quote_posix(text)

    def _write_line(self, indent_string, entry, this_entry, comment):
        # Ensure it is formatted fine for
        # how these sysconfig scripts are used
        val = self._decode_element(self._quote(this_entry))
        # Single quoted strings should
        # always work.
        if not val.startswith("'"):
            # Perform any special quoting
            val = self._quote_special(val)
        key = self._decode_element(self._quote(entry, multiline=False))
        cmnt = self._decode_element(comment)
        return '%s%s%s%s%s' % (indent_string,
                               key,
                               "=",
                               val,
                               cmnt)
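The quoting rule this class enforces can be restated without the configobj machinery. A self-contained Python 3 sketch (the function names are illustrative): values are double-quoted with shell metacharacters escaped, except bare boolean-ish words, and key and value are joined with no spaces so the file can be sourced by bash:

```python
# Characters that must be escaped inside double quotes when the resulting
# sysconfig file is later sourced by a shell (same idea as D_QUOTE_CHARS)
ESCAPES = {'"': '\\"', '(': '\\(', ')': '\\)', '$': '\\$', '`': '\\`'}

def quote_value(text):
    if not text:
        return ''
    # Bare boolean-ish words are conventionally left unquoted
    if text.lower() in ('yes', 'no', 'true', 'false'):
        return text
    for ch, repl in ESCAPES.items():
        text = text.replace(ch, repl)
    return '"%s"' % text

def write_line(key, value):
    # No spaces around '=' so `. /etc/sysconfig/network` works in bash
    return '%s=%s' % (key, quote_value(value))
```

Without this, a value like `my$host` would be expanded by the shell when the file is sourced.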
# This is a util function to translate an Ubuntu /etc/network/interfaces 'blob'
# to a rhel equiv. that can then be written to /etc/sysconfig/network-scripts/
# TODO remove when we have python-netcf active...
def translate_network(settings):
    # Get the standard cmd, args from the ubuntu format
    entries = []
    for line in settings.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        split_up = line.split(None, 1)
        if len(split_up) <= 1:
            continue
        entries.append(split_up)
    # Figure out where each iface section is
    ifaces = []
    consume = {}
    for (cmd, args) in entries:
        if cmd == 'iface':
            if consume:
                ifaces.append(consume)
                consume = {}
            consume[cmd] = args
        else:
            consume[cmd] = args
    # Check if anything is left over to consume
    absorb = False
    for (cmd, args) in consume.iteritems():
        if cmd == 'iface':
            absorb = True
    if absorb:
        ifaces.append(consume)
    # Now translate
    real_ifaces = {}
    for info in ifaces:
        if 'iface' not in info:
            continue
        iface_details = info['iface'].split(None)
        dev_name = None
        if len(iface_details) >= 1:
            dev = iface_details[0].strip().lower()
            if dev:
                dev_name = dev
        if not dev_name:
            continue
        iface_info = {}
        if len(iface_details) >= 3:
            proto_type = iface_details[2].strip().lower()
            # Seems like this can be 'loopback' which we don't
            # really care about
            if proto_type in ['dhcp', 'static']:
                iface_info['bootproto'] = proto_type
        # These can just be copied over
        for k in ['netmask', 'address', 'gateway', 'broadcast']:
            if k in info:
                val = info[k].strip().lower()
                if val:
                    iface_info[k] = val
        # Is any mac address spoofing going on??
        if 'hwaddress' in info:
            hw_info = info['hwaddress'].lower().strip()
            hw_split = hw_info.split(None, 1)
            if len(hw_split) == 2 and hw_split[0].startswith('ether'):
                hw_addr = hw_split[1]
                if hw_addr:
                    iface_info['hwaddress'] = hw_addr
        real_ifaces[dev_name] = iface_info
    # Check for those that should be started on boot via 'auto'
    for (cmd, args) in entries:
        if cmd == 'auto':
            # Seems like auto can be like 'auto eth0 eth0:1' so just get the
            # first part out as the device name
            args = args.split(None)
            if not args:
                continue
            dev_name = args[0].strip().lower()
            if dev_name in real_ifaces:
                real_ifaces[dev_name]['auto'] = True
    return real_ifaces
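The translation performed above can be sketched with a much-simplified Python 3 re-implementation (not the actual cloud-init function, which handles more cases): `iface`/`auto` stanzas from an Ubuntu-style `interfaces` file become the intermediate per-device dict that `_write_network` then renders into `ifcfg-*` files:

```python
def translate_network_sketch(settings):
    # Simplified: only 'iface <dev> inet <proto>', 'auto <dev>' and a few
    # per-iface option lines are handled
    ifaces, auto = {}, set()
    current = None
    for line in settings.splitlines():
        parts = line.strip().split()
        if not parts or parts[0].startswith('#'):
            continue
        if parts[0] == 'auto':
            auto.update(parts[1:])
        elif parts[0] == 'iface' and len(parts) >= 4:
            current = parts[1]
            ifaces[current] = {}
            if parts[3] in ('dhcp', 'static'):
                ifaces[current]['bootproto'] = parts[3]
        elif current and len(parts) >= 2 and parts[0] in (
                'address', 'netmask', 'gateway', 'broadcast'):
            ifaces[current][parts[0]] = parts[1]
    # 'auto' interfaces are marked so ONBOOT=yes gets written later
    for dev in auto:
        if dev in ifaces:
            ifaces[dev]['auto'] = True
    return ifaces

sample = """
auto eth0
iface eth0 inet static
    address 10.0.0.2
    netmask 255.255.255.0
"""
result = translate_network_sketch(sample)
```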


@@ -1,10 +1,12 @@
 # vi: ts=4 expandtab
 #
-# Copyright (C) 2009-2010 Canonical Ltd.
+# Copyright (C) 2012 Canonical Ltd.
+# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
+# Copyright (C) 2012 Yahoo! Inc.
 #
 # Author: Scott Moser <scott.moser@canonical.com>
 # Author: Juerg Haefliger <juerg.haefliger@hp.com>
+# Author: Joshua Harlow <harlowja@yahoo-inc.com>
 #
 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 3, as
@@ -17,14 +19,13 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

-import cloudinit.util as util
-import subprocess
-from cloudinit.CloudConfig import per_always
-
-frequency = per_always
+from cloudinit.distros import debian
+from cloudinit import log as logging
+
+LOG = logging.getLogger(__name__)

-def handle(_name, cfg, _cloud, _log, _args):
-    if util.get_cfg_option_bool(cfg, "disable_ec2_metadata", False):
-        fwall = "route add -host 169.254.169.254 reject"
-        subprocess.call(fwall.split(' '))
+class Distro(debian.Distro):
+    pass


@@ -0,0 +1,222 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import abc
import os

from cloudinit.settings import (PER_ALWAYS, PER_INSTANCE, FREQUENCIES)

from cloudinit import importer
from cloudinit import log as logging
from cloudinit import util

LOG = logging.getLogger(__name__)

# Used as the content type when a message is not multipart
# and it doesn't contain its own content-type
NOT_MULTIPART_TYPE = "text/x-not-multipart"

# When none is assigned this gets used
OCTET_TYPE = 'application/octet-stream'

# Special content types that signal the start and end of processing
CONTENT_END = "__end__"
CONTENT_START = "__begin__"
CONTENT_SIGNALS = [CONTENT_START, CONTENT_END]

# Used when a part-handler type is encountered
# to allow for registration of new types.
PART_CONTENT_TYPES = ["text/part-handler"]
PART_HANDLER_FN_TMPL = 'part-handler-%03d'

# For parts without filenames
PART_FN_TPL = 'part-%03d'

# Different file beginnings mapped to their content type
INCLUSION_TYPES_MAP = {
    '#include': 'text/x-include-url',
    '#include-once': 'text/x-include-once-url',
    '#!': 'text/x-shellscript',
    '#cloud-config': 'text/cloud-config',
    '#upstart-job': 'text/upstart-job',
    '#part-handler': 'text/part-handler',
    '#cloud-boothook': 'text/cloud-boothook',
    '#cloud-config-archive': 'text/cloud-config-archive',
}

# Sorted longest first
INCLUSION_SRCH = sorted(list(INCLUSION_TYPES_MAP.keys()),
                        key=(lambda e: 0 - len(e)))
class Handler(object):

    __metaclass__ = abc.ABCMeta

    def __init__(self, frequency, version=2):
        self.handler_version = version
        self.frequency = frequency

    def __repr__(self):
        return "%s: [%s]" % (util.obj_name(self), self.list_types())

    @abc.abstractmethod
    def list_types(self):
        raise NotImplementedError()

    def handle_part(self, data, ctype, filename, payload, frequency):
        return self._handle_part(data, ctype, filename, payload, frequency)

    @abc.abstractmethod
    def _handle_part(self, data, ctype, filename, payload, frequency):
        raise NotImplementedError()
def run_part(mod, data, ctype, filename, payload, frequency):
    mod_freq = mod.frequency
    if not (mod_freq == PER_ALWAYS or
            (frequency == PER_INSTANCE and mod_freq == PER_INSTANCE)):
        return
    mod_ver = mod.handler_version
    # Sanity checks on version (should be an int convertible)
    try:
        mod_ver = int(mod_ver)
    except:
        mod_ver = 1
    try:
        LOG.debug("Calling handler %s (%s, %s, %s) with frequency %s",
                  mod, ctype, filename, mod_ver, frequency)
        if mod_ver >= 2:
            # Treat as v. 2 which does get a frequency
            mod.handle_part(data, ctype, filename, payload, frequency)
        else:
            # Treat as v. 1 which gets no frequency
            mod.handle_part(data, ctype, filename, payload)
    except:
        util.logexc(LOG, ("Failed calling handler %s (%s, %s, %s)"
                          " with frequency %s"),
                    mod, ctype, filename,
                    mod_ver, frequency)


def call_begin(mod, data, frequency):
    run_part(mod, data, CONTENT_START, None, None, frequency)


def call_end(mod, data, frequency):
    run_part(mod, data, CONTENT_END, None, None, frequency)


def walker_handle_handler(pdata, _ctype, _filename, payload):
    curcount = pdata['handlercount']
    modname = PART_HANDLER_FN_TMPL % (curcount)
    frequency = pdata['frequency']
    modfname = os.path.join(pdata['handlerdir'], "%s" % (modname))
    if not modfname.endswith(".py"):
        modfname = "%s.py" % (modfname)
    # TODO: Check if path exists??
    util.write_file(modfname, payload, 0600)
    handlers = pdata['handlers']
    try:
        mod = fixup_handler(importer.import_module(modname))
        call_begin(mod, pdata['data'], frequency)
        # Only register and increment after the above
        # have worked (so we don't register it if the
        # import or begin call fails)
        handlers.register(mod)
        pdata['handlercount'] = curcount + 1
    except:
        util.logexc(LOG, ("Failed at registering python file: %s"
                          " (part handler %s)"), modfname, curcount)
def _extract_first_or_bytes(blob, size):
    # Extract the first line up to X bytes, or X bytes from more than the
    # first line if the first line does not contain enough bytes
    first_line = blob.split("\n", 1)[0]
    if len(first_line) >= size:
        start = first_line[:size]
    else:
        start = blob[0:size]
    return start


def walker_callback(pdata, ctype, filename, payload):
    if ctype in PART_CONTENT_TYPES:
        walker_handle_handler(pdata, ctype, filename, payload)
        return
    handlers = pdata['handlers']
    if ctype not in pdata['handlers'] and payload:
        # Extract the first line or 24 bytes for displaying in the log
        start = _extract_first_or_bytes(payload, 24)
        details = "'%s...'" % (start.encode("string-escape"))
        if ctype == NOT_MULTIPART_TYPE:
            LOG.warning("Unhandled non-multipart (%s) userdata: %s",
                        ctype, details)
        else:
            LOG.warning("Unhandled unknown content-type (%s) userdata: %s",
                        ctype, details)
    else:
        run_part(handlers[ctype], pdata['data'], ctype, filename,
                 payload, pdata['frequency'])


# Callback is a function that will be called with
# (data, content_type, filename, payload)
def walk(msg, callback, data):
    partnum = 0
    for part in msg.walk():
        # multipart/* are just containers
        if part.get_content_maintype() == 'multipart':
            continue
        ctype = part.get_content_type()
        if ctype is None:
            ctype = OCTET_TYPE
        filename = part.get_filename()
        if not filename:
            filename = PART_FN_TPL % (partnum)
        callback(data, ctype, filename, part.get_payload(decode=True))
        partnum = partnum + 1


def fixup_handler(mod, def_freq=PER_INSTANCE):
    if not hasattr(mod, "handler_version"):
        setattr(mod, "handler_version", 1)
    if not hasattr(mod, 'frequency'):
        setattr(mod, 'frequency', def_freq)
    else:
        freq = mod.frequency
        if freq and freq not in FREQUENCIES:
            LOG.warn("Handler %s has an unknown frequency %s", mod, freq)
    return mod


def type_from_starts_with(payload, default=None):
    payload_lc = payload.lower()
    payload_lc = payload_lc.lstrip()
    for text in INCLUSION_SRCH:
        if payload_lc.startswith(text):
            return INCLUSION_TYPES_MAP[text]
    return default
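The longest-prefix-first ordering of `INCLUSION_SRCH` is what makes `type_from_starts_with` correct: `#include-once` must be tested before `#include` or it would never match. A trimmed-down, self-contained Python 3 restatement of that lookup (using a subset of the real map):

```python
TYPES = {
    '#include': 'text/x-include-url',
    '#include-once': 'text/x-include-once-url',
    '#!': 'text/x-shellscript',
    '#cloud-config': 'text/cloud-config',
}

# Longest prefixes first, so '#include-once' wins over '#include'
SEARCH_ORDER = sorted(TYPES, key=len, reverse=True)

def type_from_starts_with(payload, default=None):
    # Case-insensitive, ignoring leading whitespace
    payload_lc = payload.lower().lstrip()
    for prefix in SEARCH_ORDER:
        if payload_lc.startswith(prefix):
            return TYPES[prefix]
    return default
```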


@@ -0,0 +1,73 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os

from cloudinit import handlers
from cloudinit import log as logging
from cloudinit import util

from cloudinit.settings import (PER_ALWAYS)

LOG = logging.getLogger(__name__)


class BootHookPartHandler(handlers.Handler):

    def __init__(self, paths, datasource, **_kwargs):
        handlers.Handler.__init__(self, PER_ALWAYS)
        self.boothook_dir = paths.get_ipath("boothooks")
        self.instance_id = None
        if datasource:
            self.instance_id = datasource.get_instance_id()

    def list_types(self):
        return [
            handlers.type_from_starts_with("#cloud-boothook"),
        ]

    def _write_part(self, payload, filename):
        filename = util.clean_filename(filename)
        payload = util.dos2unix(payload)
        prefix = "#cloud-boothook"
        start = 0
        if payload.startswith(prefix):
            start = len(prefix) + 1
        filepath = os.path.join(self.boothook_dir, filename)
        contents = payload[start:]
        util.write_file(filepath, contents, 0700)
        return filepath

    def _handle_part(self, _data, ctype, filename, payload, _frequency):
        if ctype in handlers.CONTENT_SIGNALS:
            return
        filepath = self._write_part(payload, filename)
        try:
            env = os.environ.copy()
            if self.instance_id is not None:
                env['INSTANCE_ID'] = str(self.instance_id)
            util.subp([filepath], env=env)
        except util.ProcessExecutionError:
            util.logexc(LOG, "Boothooks script %s execution error", filepath)
        except Exception:
            util.logexc(LOG, ("Boothooks unknown "
                              "error when running %s"), filepath)
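The `_write_part` step above strips the `#cloud-boothook` marker line before the remainder is written out and executed. That slice arithmetic (`len(prefix) + 1` to also drop the newline) is easy to get wrong, so here it is isolated in a Python 3 sketch (the helper name is illustrative):

```python
def strip_boothook_prefix(payload):
    # Drop the '#cloud-boothook' marker plus its trailing newline;
    # everything after it is the actual script to run
    prefix = '#cloud-boothook'
    if payload.startswith(prefix):
        return payload[len(prefix) + 1:]
    return payload
```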


@@ -0,0 +1,62 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from cloudinit import handlers
from cloudinit import log as logging
from cloudinit import util

from cloudinit.settings import (PER_ALWAYS)

LOG = logging.getLogger(__name__)


class CloudConfigPartHandler(handlers.Handler):

    def __init__(self, paths, **_kwargs):
        handlers.Handler.__init__(self, PER_ALWAYS)
        self.cloud_buf = []
        self.cloud_fn = paths.get_ipath("cloud_config")

    def list_types(self):
        return [
            handlers.type_from_starts_with("#cloud-config"),
        ]

    def _write_cloud_config(self, buf):
        if not self.cloud_fn:
            return
        lines = [str(b) for b in buf]
        payload = "\n".join(lines)
        util.write_file(self.cloud_fn, payload, 0600)

    def _handle_part(self, _data, ctype, filename, payload, _frequency):
        if ctype == handlers.CONTENT_START:
            self.cloud_buf = []
            return
        if ctype == handlers.CONTENT_END:
            self._write_cloud_config(self.cloud_buf)
            self.cloud_buf = []
            return
        filename = util.clean_filename(filename)
        if not filename:
            filename = '??'
        self.cloud_buf.extend(["#%s" % (filename), str(payload)])
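The handler above buffers parts between the start and end signals and only writes the combined file on `CONTENT_END`. The assembly itself (each part prefixed by a `#filename` comment line, then joined with newlines) can be sketched in Python 3 (the function name is illustrative):

```python
def assemble_cloud_config(parts):
    # parts: iterable of (filename, payload) pairs; mirrors how
    # _handle_part buffers and _write_cloud_config joins them
    buf = []
    for filename, payload in parts:
        buf.extend(['#%s' % (filename or '??'), str(payload)])
    return '\n'.join(buf)

out = assemble_cloud_config([('part-001', 'runcmd: [ls]')])
```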


@@ -0,0 +1,52 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os

from cloudinit import handlers
from cloudinit import log as logging
from cloudinit import util

from cloudinit.settings import (PER_ALWAYS)

LOG = logging.getLogger(__name__)


class ShellScriptPartHandler(handlers.Handler):

    def __init__(self, paths, **_kwargs):
        handlers.Handler.__init__(self, PER_ALWAYS)
        self.script_dir = paths.get_ipath_cur('scripts')

    def list_types(self):
        return [
            handlers.type_from_starts_with("#!"),
        ]

    def _handle_part(self, _data, ctype, filename, payload, _frequency):
        if ctype in handlers.CONTENT_SIGNALS:
            # TODO: maybe delete existing things here
            return
        filename = util.clean_filename(filename)
        payload = util.dos2unix(payload)
        path = os.path.join(self.script_dir, filename)
        util.write_file(path, payload, 0700)


@@ -0,0 +1,66 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os

from cloudinit import handlers
from cloudinit import log as logging
from cloudinit import util

from cloudinit.settings import (PER_INSTANCE)

LOG = logging.getLogger(__name__)


class UpstartJobPartHandler(handlers.Handler):

    def __init__(self, paths, **_kwargs):
        handlers.Handler.__init__(self, PER_INSTANCE)
        self.upstart_dir = paths.upstart_conf_d

    def list_types(self):
        return [
            handlers.type_from_starts_with("#upstart-job"),
        ]

    def _handle_part(self, _data, ctype, filename, payload, frequency):
        if ctype in handlers.CONTENT_SIGNALS:
            return
        # See: https://bugs.launchpad.net/bugs/819507
        if frequency != PER_INSTANCE:
            return
        if not self.upstart_dir:
            return
        filename = util.clean_filename(filename)
        (_name, ext) = os.path.splitext(filename)
        if not ext:
            ext = ''
        ext = ext.lower()
        if ext != ".conf":
            filename = filename + ".conf"
        payload = util.dos2unix(payload)
        path = os.path.join(self.upstart_dir, filename)
        util.write_file(path, payload, 0644)
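The extension handling in `_handle_part` exists because upstart only picks up job files ending in `.conf`. Isolated as a Python 3 sketch (the helper name is illustrative), the rule is: append `.conf` unless the name already carries that extension in any case:

```python
import os

def ensure_conf_ext(filename):
    # Upstart only reads job files ending in '.conf'; append it if the
    # (case-insensitive) extension is anything else
    (_name, ext) = os.path.splitext(filename)
    if ext.lower() != '.conf':
        filename = filename + '.conf'
    return filename
```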

cloudinit/helpers.py Normal file

@@ -0,0 +1,452 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from time import time

import contextlib
import io
import os

from ConfigParser import (NoSectionError, NoOptionError, RawConfigParser)

from cloudinit.settings import (PER_INSTANCE, PER_ALWAYS, PER_ONCE,
                                CFG_ENV_NAME)

from cloudinit import log as logging
from cloudinit import util

LOG = logging.getLogger(__name__)


class LockFailure(Exception):
    pass


class DummyLock(object):
    pass


class DummySemaphores(object):
    def __init__(self):
        pass

    @contextlib.contextmanager
    def lock(self, _name, _freq, _clear_on_fail=False):
        yield DummyLock()

    def has_run(self, _name, _freq):
        return False

    def clear(self, _name, _freq):
        return True

    def clear_all(self):
        pass


class FileLock(object):
    def __init__(self, fn):
        self.fn = fn
class FileSemaphores(object):
    def __init__(self, sem_path):
        self.sem_path = sem_path

    @contextlib.contextmanager
    def lock(self, name, freq, clear_on_fail=False):
        try:
            yield self._acquire(name, freq)
        except:
            if clear_on_fail:
                self.clear(name, freq)
            raise

    def clear(self, name, freq):
        sem_file = self._get_path(name, freq)
        try:
            util.del_file(sem_file)
        except (IOError, OSError):
            util.logexc(LOG, "Failed deleting semaphore %s", sem_file)
            return False
        return True

    def clear_all(self):
        try:
            util.del_dir(self.sem_path)
        except (IOError, OSError):
            util.logexc(LOG, "Failed deleting semaphore directory %s",
                        self.sem_path)

    def _acquire(self, name, freq):
        # Check again if it's already been gotten
        if self.has_run(name, freq):
            return None
        # This is a race condition since nothing atomic is happening
        # here, but this should be ok due to the nature of when
        # and where cloud-init runs... (file writing is not a lock...)
        sem_file = self._get_path(name, freq)
        contents = "%s: %s\n" % (os.getpid(), time())
        try:
            util.write_file(sem_file, contents)
        except (IOError, OSError):
            util.logexc(LOG, "Failed writing semaphore file %s", sem_file)
            return None
        return FileLock(sem_file)

    def has_run(self, name, freq):
        if not freq or freq == PER_ALWAYS:
            return False
        sem_file = self._get_path(name, freq)
        # This isn't really a good atomic check
        # but it suffices for where and when cloudinit runs
        if os.path.exists(sem_file):
            return True
        return False

    def _get_path(self, name, freq):
        sem_path = self.sem_path
        if not freq or freq == PER_INSTANCE:
            return os.path.join(sem_path, name)
        else:
            return os.path.join(sem_path, "%s.%s" % (name, freq))
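The on-disk naming scheme behind `_get_path`/`has_run` is what makes a module run once per instance or once ever: a marker file per (name, frequency) pair, with per-instance markers living directly under the semaphore dir. A Python 3 sketch of just that scheme; the string value of `PER_INSTANCE` here is an assumption for illustration, not taken from this diff:

```python
import os

PER_INSTANCE = 'once-per-instance'  # assumed value of the settings constant
PER_ALWAYS = 'always'               # assumed value of the settings constant

def sem_path_for(sem_dir, name, freq):
    # Per-instance markers live directly under the semaphore dir;
    # other frequencies get a '.<freq>' suffix (mirrors _get_path)
    if not freq or freq == PER_INSTANCE:
        return os.path.join(sem_dir, name)
    return os.path.join(sem_dir, '%s.%s' % (name, freq))

def has_run(sem_dir, name, freq):
    # 'always' frequency never consults the marker file
    if not freq or freq == PER_ALWAYS:
        return False
    return os.path.exists(sem_path_for(sem_dir, name, freq))
```

Because the instance semaphore dir lives under the per-instance path, a new instance ID naturally resets all per-instance markers without touching the once-ever ones.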
class Runners(object):
    def __init__(self, paths):
        self.paths = paths
        self.sems = {}

    def _get_sem(self, freq):
        if freq == PER_ALWAYS or not freq:
            return None
        sem_path = None
        if freq == PER_INSTANCE:
            # This may not exist,
            # so that's why we still check for none
            # below if say the paths object
            # doesn't have a datasource that can
            # provide this instance path...
            sem_path = self.paths.get_ipath("sem")
        elif freq == PER_ONCE:
            sem_path = self.paths.get_cpath("sem")
        if not sem_path:
            return None
        if sem_path not in self.sems:
            self.sems[sem_path] = FileSemaphores(sem_path)
        return self.sems[sem_path]

    def run(self, name, functor, args, freq=None, clear_on_fail=False):
        sem = self._get_sem(freq)
        if not sem:
            sem = DummySemaphores()
        if not args:
            args = []
        if sem.has_run(name, freq):
            LOG.debug("%s already ran (freq=%s)", name, freq)
            return (False, None)
        with sem.lock(name, freq, clear_on_fail) as lk:
            if not lk:
                raise LockFailure("Failed to acquire lock for %s" % name)
            else:
                LOG.debug("Running %s using lock (%s)", name, lk)
                if isinstance(args, (dict)):
                    results = functor(**args)
                else:
                    results = functor(*args)
                return (True, results)
class ConfigMerger(object):
    def __init__(self, paths=None, datasource=None,
                 additional_fns=None, base_cfg=None):
        self._paths = paths
        self._ds = datasource
        self._fns = additional_fns
        self._base_cfg = base_cfg
        # Created on first use
        self._cfg = None

    def _get_datasource_configs(self):
        d_cfgs = []
        if self._ds:
            try:
                ds_cfg = self._ds.get_config_obj()
                if ds_cfg and isinstance(ds_cfg, (dict)):
                    d_cfgs.append(ds_cfg)
            except:
                util.logexc(LOG, ("Failed loading of datasource"
                                  " config object from %s"), self._ds)
        return d_cfgs

    def _get_env_configs(self):
        e_cfgs = []
        if CFG_ENV_NAME in os.environ:
            e_fn = os.environ[CFG_ENV_NAME]
            try:
                e_cfgs.append(util.read_conf(e_fn))
            except:
                util.logexc(LOG, ('Failed loading of env. config'
                                  ' from %s'), e_fn)
        return e_cfgs

    def _get_instance_configs(self):
        i_cfgs = []
        # If cloud-config was written, pick it up as
        # a configuration file to use when running...
        if not self._paths:
            return i_cfgs
        cc_fn = self._paths.get_ipath_cur('cloud_config')
        if cc_fn and os.path.isfile(cc_fn):
            try:
                i_cfgs.append(util.read_conf(cc_fn))
            except:
                util.logexc(LOG, ('Failed loading of cloud-config'
                                  ' from %s'), cc_fn)
        return i_cfgs

    def _read_cfg(self):
        # Input config files override
        # env config files which
        # override instance configs
        # which override datasource
        # configs which override
        # base configuration
        cfgs = []
        if self._fns:
            for c_fn in self._fns:
                try:
                    cfgs.append(util.read_conf(c_fn))
                except:
                    util.logexc(LOG, ("Failed loading of configuration"
                                      " from %s"), c_fn)
        cfgs.extend(self._get_env_configs())
        cfgs.extend(self._get_instance_configs())
        cfgs.extend(self._get_datasource_configs())
        if self._base_cfg:
            cfgs.append(self._base_cfg)
        return util.mergemanydict(cfgs)

    @property
    def cfg(self):
        # None check to avoid empty case causing re-reading
        if self._cfg is None:
            self._cfg = self._read_cfg()
        return self._cfg
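The precedence rule in `_read_cfg` boils down to: earlier entries in the list win. A shallow Python 3 sketch of that merge (the real `util.mergemanydict` merges nested dicts; this illustrative `merge_many` only handles the top level):

```python
def merge_many(dicts):
    # Earlier dicts win, mirroring _read_cfg ordering: caller-supplied
    # files, then env config, instance cloud-config, datasource config,
    # and finally the base configuration
    merged = {}
    for d in dicts:
        for k, v in d.items():
            merged.setdefault(k, v)
    return merged

cfg = merge_many([{'a': 1}, {'a': 2, 'b': 3}])
```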
class ContentHandlers(object):

    def __init__(self):
        self.registered = {}

    def __contains__(self, item):
        return self.is_registered(item)

    def __getitem__(self, key):
        return self._get_handler(key)

    def is_registered(self, content_type):
        return content_type in self.registered

    def register(self, mod):
        types = set()
        for t in mod.list_types():
            self.registered[t] = mod
            types.add(t)
        return types

    def _get_handler(self, content_type):
        return self.registered[content_type]

    def items(self):
        return self.registered.items()

    def iteritems(self):
        return self.registered.iteritems()

    def register_defaults(self, defs):
        registered = set()
        for mod in defs:
            for t in mod.list_types():
                if not self.is_registered(t):
                    self.registered[t] = mod
                    registered.add(t)
        return registered
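A handler module only needs a `list_types()` method to participate in the registry. The sketch below re-states a trimmed `ContentHandlers` so it runs standalone; `FakeHandler` and its content types are made up for illustration:

```python
class ContentHandlers(object):
    # Trimmed re-statement of the registry above, just enough to run
    def __init__(self):
        self.registered = {}

    def register(self, mod):
        types = set()
        for t in mod.list_types():
            self.registered[t] = mod
            types.add(t)
        return types

    def is_registered(self, content_type):
        return content_type in self.registered


class FakeHandler(object):
    # Hypothetical handler module: only list_types() is required here
    def list_types(self):
        return ['text/cloud-config', 'text/x-shellscript']


handlers = ContentHandlers()
print(sorted(handlers.register(FakeHandler())))
print(handlers.is_registered('text/cloud-config'))
```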
class Paths(object):

    def __init__(self, path_cfgs, ds=None):
        self.cfgs = path_cfgs
        # Populate all the initial paths
        self.cloud_dir = self.join(False,
                                   path_cfgs.get('cloud_dir',
                                                 '/var/lib/cloud'))
        self.instance_link = os.path.join(self.cloud_dir, 'instance')
        self.boot_finished = os.path.join(self.instance_link, "boot-finished")
        self.upstart_conf_d = path_cfgs.get('upstart_dir')
        if self.upstart_conf_d:
            self.upstart_conf_d = self.join(False, self.upstart_conf_d)
        self.seed_dir = os.path.join(self.cloud_dir, 'seed')
        # This one isn't joined, since it should just be read-only
        template_dir = path_cfgs.get('templates_dir', '/etc/cloud/templates/')
        self.template_tpl = os.path.join(template_dir, '%s.tmpl')
        self.lookups = {
            "handlers": "handlers",
            "scripts": "scripts",
            "sem": "sem",
            "boothooks": "boothooks",
            "userdata_raw": "user-data.txt",
            "userdata": "user-data.txt.i",
            "obj_pkl": "obj.pkl",
            "cloud_config": "cloud-config.txt",
            "data": "data",
        }
        # Set when a datasource becomes active
        self.datasource = ds

    # Joins the paths but also prepends a read
    # or write root, if one is available.
    def join(self, read_only, *paths):
        if read_only:
            root = self.cfgs.get('read_root')
        else:
            root = self.cfgs.get('write_root')
        if not paths:
            return root
        if len(paths) > 1:
            joined = os.path.join(*paths)
        else:
            joined = paths[0]
        if root:
            pre_joined = joined
            # Need to remove any starting '/' since this
            # will confuse os.path.join
            joined = joined.lstrip("/")
            joined = os.path.join(root, joined)
            LOG.debug("Translated %s to adjusted path %s (read-only=%s)",
                      pre_joined, joined, read_only)
        return joined

    # get_ipath_cur: get the current instance path for an item
    def get_ipath_cur(self, name=None):
        ipath = self.instance_link
        add_on = self.lookups.get(name)
        if add_on:
            ipath = os.path.join(ipath, add_on)
        return ipath

    # get_cpath: get the "clouddir" (/var/lib/cloud/<name>)
    # for a name in dirmap
    def get_cpath(self, name=None):
        cpath = self.cloud_dir
        add_on = self.lookups.get(name)
        if add_on:
            cpath = os.path.join(cpath, add_on)
        return cpath

    # _get_ipath: get the instance path for a name in pathmap
    # (/var/lib/cloud/instances/<instance>/<name>)
    def _get_ipath(self, name=None):
        if not self.datasource:
            return None
        iid = self.datasource.get_instance_id()
        if iid is None:
            return None
        ipath = os.path.join(self.cloud_dir, 'instances', str(iid))
        add_on = self.lookups.get(name)
        if add_on:
            ipath = os.path.join(ipath, add_on)
        return ipath

    # get_ipath: get the instance path for a name in pathmap
    # (/var/lib/cloud/instances/<instance>/<name>);
    # returns None (and warns) if no datasource is active.
    def get_ipath(self, name=None):
        ipath = self._get_ipath(name)
        if not ipath:
            LOG.warn(("No per-instance data available, "
                      "is there a datasource/iid set?"))
            return None
        else:
            return ipath
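The subtle part of `Paths.join` is the `lstrip("/")`: `os.path.join` discards every earlier component when it meets an absolute path, so the leading slash must be stripped before the root prefix is applied. A standalone sketch of that logic (function name is ours, not from the source):

```python
import os

def rooted_join(root, *paths):
    # Mirrors Paths.join: strip the leading '/' so os.path.join
    # does not discard the root prefix when re-rooting a path.
    if len(paths) > 1:
        joined = os.path.join(*paths)
    else:
        joined = paths[0]
    if root:
        joined = os.path.join(root, joined.lstrip("/"))
    return joined

# Re-root an absolute path under a test directory (useful for unit tests)
print(rooted_join("/tmp/test-root", "/var/lib/cloud", "seed"))
```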
# This config parser will not throw when sections don't exist
# and you are setting values on those sections, which is useful
# when writing new options that may not have corresponding
# sections. It can also default values on gets, so that if a
# section or option does not exist you get a default instead of
# an error; another useful case where you can avoid catching
# exceptions that you typically don't care about...
class DefaultingConfigParser(RawConfigParser):
    DEF_INT = 0
    DEF_FLOAT = 0.0
    DEF_BOOLEAN = False
    DEF_BASE = None

    def get(self, section, option):
        value = self.DEF_BASE
        try:
            value = RawConfigParser.get(self, section, option)
        except NoSectionError:
            pass
        except NoOptionError:
            pass
        return value

    def set(self, section, option, value=None):
        if not self.has_section(section) and section.lower() != 'default':
            self.add_section(section)
        RawConfigParser.set(self, section, option, value)

    def remove_option(self, section, option):
        if self.has_option(section, option):
            RawConfigParser.remove_option(self, section, option)

    def getboolean(self, section, option):
        if not self.has_option(section, option):
            return self.DEF_BOOLEAN
        return RawConfigParser.getboolean(self, section, option)

    def getfloat(self, section, option):
        if not self.has_option(section, option):
            return self.DEF_FLOAT
        return RawConfigParser.getfloat(self, section, option)

    def getint(self, section, option):
        if not self.has_option(section, option):
            return self.DEF_INT
        return RawConfigParser.getint(self, section, option)

    def stringify(self, header=None):
        contents = ''
        with io.BytesIO() as outputstream:
            self.write(outputstream)
            outputstream.flush()
            contents = outputstream.getvalue()
        if header:
            contents = "\n".join([header, contents])
        return contents
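The payoff of the defaulting `get` is that callers can probe arbitrary sections without wrapping every lookup in try/except. A minimal runnable sketch of the same idea (class name and the probed section/option are made up for illustration):

```python
try:
    from configparser import RawConfigParser, NoSectionError, NoOptionError  # Python 3
except ImportError:
    from ConfigParser import RawConfigParser, NoSectionError, NoOptionError  # Python 2

class QuietParser(RawConfigParser):
    # Same defaulting 'get' idea as DefaultingConfigParser above
    def get(self, section, option):
        try:
            return RawConfigParser.get(self, section, option)
        except (NoSectionError, NoOptionError):
            return None

cp = QuietParser()
# A stock RawConfigParser would raise NoSectionError here
print(cp.get('runcmd', 'frequency'))  # None
```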

cloudinit/importer.py (new file)
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import sys

from cloudinit import log as logging

LOG = logging.getLogger(__name__)


def import_module(module_name):
    __import__(module_name)
    return sys.modules[module_name]


def find_module(base_name, search_paths, required_attrs=None):
    found_places = []
    if not required_attrs:
        required_attrs = []
    real_paths = []
    for path in search_paths:
        real_path = []
        if path:
            real_path.extend(path.split("."))
        real_path.append(base_name)
        full_path = '.'.join(real_path)
        real_paths.append(full_path)
    LOG.debug("Looking for modules %s that have attributes %s",
              real_paths, required_attrs)
    for full_path in real_paths:
        mod = None
        try:
            mod = import_module(full_path)
        except ImportError:
            pass
        if not mod:
            continue
        found_attrs = 0
        for attr in required_attrs:
            if hasattr(mod, attr):
                found_attrs += 1
        if found_attrs == len(required_attrs):
            found_places.append(full_path)
    LOG.debug("Found %s with attributes %s in %s", base_name,
              required_attrs, found_places)
    return found_places
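`import_module` relies on a stdlib quirk: `__import__('a.b')` returns the top-level package `a`, so the fully-dotted module has to be fetched back out of `sys.modules`. The probing loop in `find_module` then keeps only candidates that import cleanly and carry the required attributes:

```python
import sys

def import_module(module_name):
    # __import__('a.b') returns package 'a'; the fully-dotted module
    # is then fetched from sys.modules (same trick as importer.py).
    __import__(module_name)
    return sys.modules[module_name]

found = []
for candidate in ("json", "not.a.real.module"):
    try:
        mod = import_module(candidate)
    except ImportError:
        continue
    if all(hasattr(mod, attr) for attr in ("loads", "dumps")):
        found.append(candidate)
print(found)  # ['json']
```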

cloudinit/log.py (new file)
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import logging
import logging.config
import logging.handlers

import os
import sys

from StringIO import StringIO

# Logging levels for easy access
CRITICAL = logging.CRITICAL
FATAL = logging.FATAL
ERROR = logging.ERROR
WARNING = logging.WARNING
WARN = logging.WARN
INFO = logging.INFO
DEBUG = logging.DEBUG
NOTSET = logging.NOTSET

# Default basic format
DEF_CON_FORMAT = '%(asctime)s - %(filename)s[%(levelname)s]: %(message)s'


def setupBasicLogging():
    root = logging.getLogger()
    console = logging.StreamHandler(sys.stderr)
    console.setFormatter(logging.Formatter(DEF_CON_FORMAT))
    console.setLevel(DEBUG)
    root.addHandler(console)
    root.setLevel(DEBUG)


def setupLogging(cfg=None):
    # See if the config provides any logging conf...
    if not cfg:
        cfg = {}
    log_cfgs = []
    log_cfg = cfg.get('logcfg')
    if log_cfg and isinstance(log_cfg, (str, basestring)):
        # If there is a 'logcfg' entry in the config,
        # respect it, it is the old keyname
        log_cfgs.append(str(log_cfg))
    elif "log_cfgs" in cfg and isinstance(cfg['log_cfgs'], (set, list)):
        for a_cfg in cfg['log_cfgs']:
            if isinstance(a_cfg, (list, set, dict)):
                cfg_str = [str(c) for c in a_cfg]
                log_cfgs.append('\n'.join(cfg_str))
            else:
                log_cfgs.append(str(a_cfg))
    # See if any of them actually load...
    am_tried = 0
    am_worked = 0
    for i, log_cfg in enumerate(log_cfgs):
        try:
            am_tried += 1
            # If it's not an existing file, assume it's a raw config string
            if log_cfg.startswith("/") and os.path.isfile(log_cfg):
                pass
            else:
                log_cfg = StringIO(log_cfg)
            # Attempt to load its config
            logging.config.fileConfig(log_cfg)
            am_worked += 1
        except Exception as e:
            sys.stderr.write(("WARN: Setup of logging config %s"
                              " failed due to: %s\n") % (i + 1, e))
    # If none worked, at least set up a basic logger (if desired)
    basic_enabled = cfg.get('log_basic', True)
    if not am_worked:
        sys.stderr.write(("WARN: no logging configured!"
                          " (tried %s configs)\n") % (am_tried))
        if basic_enabled:
            sys.stderr.write("Setting up basic logging...\n")
            setupBasicLogging()


def getLogger(name='cloudinit'):
    return logging.getLogger(name)


# Fixes the annoying
# "No handlers could be found for logger XXX" output...
try:
    from logging import NullHandler
except ImportError:
    class NullHandler(logging.Handler):
        def emit(self, record):
            pass


def _resetLogger(log):
    if not log:
        return
    handlers = list(log.handlers)
    for h in handlers:
        h.flush()
        h.close()
        log.removeHandler(h)
    log.setLevel(NOTSET)
    log.addHandler(NullHandler())


def resetLogging():
    _resetLogger(logging.getLogger())
    _resetLogger(getLogger())


resetLogging()
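Because `setupLogging` wraps non-file configs in a `StringIO`, a logging config can be shipped inline in cloud-config and fed straight to `logging.config.fileConfig`. A self-contained sketch of that path, using a minimal fileConfig-format string of our own:

```python
import logging
import logging.config
try:
    from StringIO import StringIO  # Python 2
except ImportError:
    from io import StringIO  # Python 3

# A minimal fileConfig-style config string (illustrative, not from the source)
LOG_CFG = """
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=DEBUG
handlers=console

[handler_console]
class=StreamHandler
level=DEBUG
formatter=simple
args=(sys.stderr,)

[formatter_simple]
format=%(asctime)s - %(filename)s[%(levelname)s]: %(message)s
"""

# Same trick as setupLogging: a raw string becomes a file-like object
logging.config.fileConfig(StringIO(LOG_CFG))
logging.getLogger('cloudinit').debug("logging configured from a string")
```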

cloudinit/netinfo.py (modified)
@@ -1,11 +1,12 @@
 #!/usr/bin/python
 # vi: ts=4 expandtab
 #
 # Copyright (C) 2012 Canonical Ltd.
 # Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
+# Copyright (C) 2012 Yahoo! Inc.
 #
 # Author: Scott Moser <scott.moser@canonical.com>
 # Author: Juerg Haefliger <juerg.haefliger@hp.com>
+# Author: Joshua Harlow <harlowja@yahoo-inc.com>
 #
 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 3, as
@@ -21,6 +22,8 @@
 import cloudinit.util as util

+from prettytable import PrettyTable
+
 def netdev_info(empty=""):
     fields = ("hwaddr", "addr", "bcast", "mask")
@@ -66,51 +69,99 @@ def netdev_info(empty=""):
         if dev[field] == "":
             dev[field] = empty
-    return(devs)
+    return devs


 def route_info():
     (route_out, _err) = util.subp(["route", "-n"])
     routes = []
-    for line in str(route_out).splitlines()[1:]:
+    entries = route_out.splitlines()[1:]
+    for line in entries:
+        if not line:
+            continue
         toks = line.split()
-        if toks[0] == "Kernel" or toks[0] == "Destination":
+        if len(toks) < 8 or toks[0] == "Kernel" or toks[0] == "Destination":
             continue
-        routes.append(toks)
-    return(routes)
+        entry = {
+            'destination': toks[0],
+            'gateway': toks[1],
+            'genmask': toks[2],
+            'flags': toks[3],
+            'metric': toks[4],
+            'ref': toks[5],
+            'use': toks[6],
+            'iface': toks[7],
+        }
+        routes.append(entry)
+    return routes


 def getgateway():
-    for r in route_info():
-        if r[3].find("G") >= 0:
-            return("%s[%s]" % (r[1], r[7]))
-    return(None)
+    routes = []
+    try:
+        routes = route_info()
+    except:
+        pass
+    for r in routes:
+        if r['flags'].find("G") >= 0:
+            return "%s[%s]" % (r['gateway'], r['iface'])
+    return None


-def debug_info(pre="ci-info: "):
+def netdev_pformat():
     lines = []
     try:
         netdev = netdev_info(empty=".")
     except Exception:
-        lines.append("netdev_info failed!")
-        netdev = {}
-    for (dev, d) in netdev.iteritems():
-        lines.append("%s%-6s: %i %-15s %-15s %s" %
-            (pre, dev, d["up"], d["addr"], d["mask"], d["hwaddr"]))
+        lines.append(util.center("Net device info failed", '!', 80))
+        netdev = None
+    if netdev is not None:
+        fields = ['Device', 'Up', 'Address', 'Mask', 'Hw-Address']
+        tbl = PrettyTable(fields)
+        for (dev, d) in netdev.iteritems():
+            tbl.add_row([dev, d["up"], d["addr"], d["mask"], d["hwaddr"]])
+        netdev_s = tbl.get_string()
+        max_len = len(max(netdev_s.splitlines(), key=len))
+        header = util.center("Net device info", "+", max_len)
+        lines.extend([header, netdev_s])
+    return "\n".join(lines)
+
+
+def route_pformat():
+    lines = []
     try:
         routes = route_info()
     except Exception:
-        lines.append("route_info failed")
-        routes = []
-    n = 0
-    for r in routes:
-        lines.append("%sroute-%d: %-15s %-15s %-15s %-6s %s" %
-            (pre, n, r[0], r[1], r[2], r[7], r[3]))
-        n = n + 1
-    return('\n'.join(lines))
+        lines.append(util.center('Route info failed', '!', 80))
+        routes = None
+    if routes is not None:
+        fields = ['Route', 'Destination', 'Gateway',
+                  'Genmask', 'Interface', 'Flags']
+        tbl = PrettyTable(fields)
+        for (n, r) in enumerate(routes):
+            route_id = str(n)
+            tbl.add_row([route_id, r['destination'],
+                         r['gateway'], r['genmask'],
+                         r['iface'], r['flags']])
+        route_s = tbl.get_string()
+        max_len = len(max(route_s.splitlines(), key=len))
+        header = util.center("Route info", "+", max_len)
+        lines.extend([header, route_s])
+    return "\n".join(lines)


-if __name__ == '__main__':
-    print debug_info()
+def debug_info(prefix='ci-info: '):
+    lines = []
+    netdev_lines = netdev_pformat().splitlines()
+    if prefix:
+        for line in netdev_lines:
+            lines.append("%s%s" % (prefix, line))
+    else:
+        lines.extend(netdev_lines)
+    route_lines = route_pformat().splitlines()
+    if prefix:
+        for line in route_lines:
+            lines.append("%s%s" % (prefix, line))
+    else:
+        lines.extend(route_lines)
+    return "\n".join(lines)
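Both pformat functions size their banner to the widest table line via `util.center`. That helper is defined elsewhere in the rework; assuming it pads like `str.center`, the header construction looks like this:

```python
def center(text, fill, max_len):
    # Assumed behavior of util.center: pad 'text' to max_len with 'fill'
    return str(text).center(max_len, fill)

table = "| Route | Destination |\n| 0     | 0.0.0.0     |"
max_len = len(max(table.splitlines(), key=len))
print(center("Route info", "+", max_len))
print(table)
```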

cloudinit/settings.py (new file)
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# Set and read for determining the cloud config file location
CFG_ENV_NAME = "CLOUD_CFG"

# This is expected to be a yaml formatted file
CLOUD_CONFIG = '/etc/cloud/cloud.cfg'

# What you get if no config is provided
CFG_BUILTIN = {
    'datasource_list': [
        'NoCloud',
        'ConfigDrive',
        'OVF',
        'MAAS',
        'Ec2',
        'CloudStack'
    ],
    'def_log_file': '/var/log/cloud-init.log',
    'log_cfgs': [],
    'syslog_fix_perms': 'syslog:adm',
    'system_info': {
        'paths': {
            'cloud_dir': '/var/lib/cloud',
            'templates_dir': '/etc/cloud/templates/',
        },
        'distro': 'ubuntu',
    },
}

# Valid frequencies of handlers/modules
PER_INSTANCE = "once-per-instance"
PER_ALWAYS = "always"
PER_ONCE = "once"

# Used to sanity check incoming handlers/modules frequencies
FREQUENCIES = [PER_INSTANCE, PER_ALWAYS, PER_ONCE]
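The `FREQUENCIES` list exists so that stages can sanity-check a handler's declared frequency before taking a lock for it. A hypothetical check (the `sanitize_freq` helper below is our illustration, not a function from this commit) might fall back to the safest value:

```python
PER_INSTANCE = "once-per-instance"
PER_ALWAYS = "always"
PER_ONCE = "once"
FREQUENCIES = [PER_INSTANCE, PER_ALWAYS, PER_ONCE]

def sanitize_freq(freq):
    # Hypothetical helper: unknown frequencies fall back to PER_INSTANCE,
    # so a typo in a module never makes it run on every boot.
    if freq not in FREQUENCIES:
        return PER_INSTANCE
    return freq

print(sanitize_freq("always"))
print(sanitize_freq("weekly"))
```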

cloudinit/sources/DataSourceCloudStack.py (new file)
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Cosmin Luta
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Cosmin Luta <q4break@gmail.com>
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from socket import inet_ntoa
from struct import pack

import os
import time

import boto.utils as boto_utils

from cloudinit import log as logging
from cloudinit import sources
from cloudinit import url_helper as uhelp
from cloudinit import util

LOG = logging.getLogger(__name__)


class DataSourceCloudStack(sources.DataSource):
    def __init__(self, sys_cfg, distro, paths):
        sources.DataSource.__init__(self, sys_cfg, distro, paths)
        self.seed_dir = os.path.join(paths.seed_dir, 'cs')
        # Cloudstack has its metadata/userdata URLs located at
        # http://<default-gateway-ip>/latest/
        self.api_ver = 'latest'
        gw_addr = self.get_default_gateway()
        if not gw_addr:
            raise RuntimeError("No default gateway found!")
        self.metadata_address = "http://%s/" % (gw_addr)

    def get_default_gateway(self):
        """Returns the default gateway IP address in dotted format."""
        lines = util.load_file("/proc/net/route").splitlines()
        for line in lines:
            items = line.split("\t")
            if items[1] == "00000000":
                # Found the default route, get the gateway
                gw = inet_ntoa(pack("<L", int(items[2], 16)))
                LOG.debug("Found default route, gateway is %s", gw)
                return gw
        return None

    def __str__(self):
        return util.obj_name(self)

    def _get_url_settings(self):
        mcfg = self.ds_cfg
        if not mcfg:
            mcfg = {}
        max_wait = 120
        try:
            max_wait = int(mcfg.get("max_wait", max_wait))
        except Exception:
            util.logexc(LOG, "Failed to get max wait, using %s", max_wait)
        if max_wait == 0:
            return False
        timeout = 50
        try:
            timeout = int(mcfg.get("timeout", timeout))
        except Exception:
            util.logexc(LOG, "Failed to get timeout, using %s", timeout)
        return (max_wait, timeout)

    def wait_for_metadata_service(self):
        mcfg = self.ds_cfg
        if not mcfg:
            mcfg = {}
        (max_wait, timeout) = self._get_url_settings()
        urls = [self.metadata_address]
        start_time = time.time()
        url = uhelp.wait_for_url(urls=urls, max_wait=max_wait,
                                 timeout=timeout, status_cb=LOG.warn)
        if url:
            LOG.debug("Using metadata source: '%s'", url)
        else:
            LOG.critical(("Giving up on waiting for the metadata from %s"
                          " after %s seconds"),
                         urls, int(time.time() - start_time))
        return bool(url)

    def get_data(self):
        seed_ret = {}
        if util.read_optional_seed(seed_ret, base=(self.seed_dir + "/")):
            self.userdata_raw = seed_ret['user-data']
            self.metadata = seed_ret['meta-data']
            LOG.debug("Using seeded cloudstack data from: %s", self.seed_dir)
            return True
        try:
            if not self.wait_for_metadata_service():
                return False
            start_time = time.time()
            self.userdata_raw = boto_utils.get_instance_userdata(self.api_ver,
                None, self.metadata_address)
            self.metadata = boto_utils.get_instance_metadata(self.api_ver,
                self.metadata_address)
            LOG.debug("Crawl of metadata service took %s seconds",
                      int(time.time() - start_time))
            return True
        except Exception:
            util.logexc(LOG, ('Failed fetching from metadata'
                              ' service %s'), self.metadata_address)
            return False

    def get_instance_id(self):
        return self.metadata['instance-id']

    def get_availability_zone(self):
        return self.metadata['availability-zone']


# Used to match classes to dependencies
datasources = [
    (DataSourceCloudStack, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
]


# Return a list of data sources that match this set of dependencies
def get_datasource_list(depends):
    return sources.list_from_depends(depends, datasources)
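`get_default_gateway` relies on `/proc/net/route` storing each address as a little-endian hex string, which `pack("<L", ...)` converts back into the 4 network-order bytes that `inet_ntoa` expects. A self-contained sketch of that parsing (the sample table is fabricated, with function name ours):

```python
from socket import inet_ntoa
from struct import pack

def parse_default_gateway(route_text):
    # /proc/net/route stores addresses as little-endian hex strings;
    # a Destination of 00000000 marks the default route, and column 2
    # holds the gateway.
    for line in route_text.splitlines()[1:]:
        items = line.split("\t")
        if len(items) > 2 and items[1] == "00000000":
            return inet_ntoa(pack("<L", int(items[2], 16)))
    return None

SAMPLE = ("Iface\tDestination\tGateway\tFlags\n"
          "eth0\t00000000\t0101A8C0\t0003\n")
print(parse_default_gateway(SAMPLE))  # 192.168.1.1
```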

cloudinit/sources/DataSourceConfigDrive.py (new file)
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import json
import os

from cloudinit import log as logging
from cloudinit import sources
from cloudinit import util

LOG = logging.getLogger(__name__)

# Various defaults/constants...
DEFAULT_IID = "iid-dsconfigdrive"
DEFAULT_MODE = 'pass'
CFG_DRIVE_FILES = [
    "etc/network/interfaces",
    "root/.ssh/authorized_keys",
    "meta.js",
]
DEFAULT_METADATA = {
    "instance-id": DEFAULT_IID,
    "dsmode": DEFAULT_MODE,
}
CFG_DRIVE_DEV_ENV = 'CLOUD_INIT_CONFIG_DRIVE_DEVICE'


class DataSourceConfigDrive(sources.DataSource):
    def __init__(self, sys_cfg, distro, paths):
        sources.DataSource.__init__(self, sys_cfg, distro, paths)
        self.seed = None
        self.cfg = {}
        self.dsmode = 'local'
        self.seed_dir = os.path.join(paths.seed_dir, 'config_drive')

    def __str__(self):
        mstr = "%s [%s]" % (util.obj_name(self), self.dsmode)
        mstr += "[seed=%s]" % (self.seed)
        return mstr

    def get_data(self):
        found = None
        md = {}
        ud = ""
        if os.path.isdir(self.seed_dir):
            try:
                (md, ud) = read_config_drive_dir(self.seed_dir)
                found = self.seed_dir
            except NonConfigDriveDir:
                util.logexc(LOG, "Failed reading config drive from %s",
                            self.seed_dir)
        if not found:
            dev = find_cfg_drive_device()
            if dev:
                try:
                    (md, ud) = util.mount_cb(dev, read_config_drive_dir)
                    found = dev
                except (NonConfigDriveDir, util.MountFailedError):
                    pass
        if not found:
            return False
        if 'dscfg' in md:
            self.cfg = md['dscfg']
        md = util.mergedict(md, DEFAULT_METADATA)
        # Update interfaces and ifup only on the local datasource;
        # this way the DataSourceConfigDriveNet doesn't do it also.
        if 'network-interfaces' in md and self.dsmode == "local":
            LOG.debug("Updating network interfaces from config drive (%s)",
                      md['dsmode'])
            self.distro.apply_network(md['network-interfaces'])
        self.seed = found
        self.metadata = md
        self.userdata_raw = ud
        if md['dsmode'] == self.dsmode:
            return True
        LOG.debug("%s: not claiming datasource, dsmode=%s", self, md['dsmode'])
        return False

    def get_public_ssh_keys(self):
        if 'public-keys' not in self.metadata:
            return []
        return self.metadata['public-keys']

    # The data source's config_obj is a cloud-config formatted
    # object that came to it from ways other than cloud-config,
    # because cloud-config content would be handled elsewhere.
    def get_config_obj(self):
        return self.cfg


class DataSourceConfigDriveNet(DataSourceConfigDrive):
    def __init__(self, sys_cfg, distro, paths):
        DataSourceConfigDrive.__init__(self, sys_cfg, distro, paths)
        self.dsmode = 'net'


class NonConfigDriveDir(Exception):
    pass


def find_cfg_drive_device():
    """Get the config drive device. Return a string like '/dev/vdb'
    or None (if there is no non-root device attached). This does not
    check the contents, only reports that if there *were* a config drive
    attached, it would be this device.

    Note: per config drive documentation, this is
    "associated as the last available disk on the instance".
    """
    # This seems to be for debugging??
    if CFG_DRIVE_DEV_ENV in os.environ:
        return os.environ[CFG_DRIVE_DEV_ENV]
    # We are looking for a raw block device (sda, not sda1) with a vfat
    # filesystem on it....
    letters = "abcdefghijklmnopqrstuvwxyz"
    devs = util.find_devs_with("TYPE=vfat")
    # Filter out anything not ending in a letter (ignore partitions)
    devs = [f for f in devs if f[-1] in letters]
    # Sort them in reverse so the "last" device is first
    devs.sort(reverse=True)
    if devs:
        return devs[0]
    return None


def read_config_drive_dir(source_dir):
    """
    read_config_drive_dir(source_dir):
    read source_dir, and return a tuple with metadata dict and user-data
    string populated. If not a valid dir, raise a NonConfigDriveDir
    """
    # TODO: fix this for other operating systems...
    # Ie: this is where https://fedorahosted.org/netcf/ or similar should
    # be hooked in... (or could be)
    found = {}
    for af in CFG_DRIVE_FILES:
        fn = os.path.join(source_dir, af)
        if os.path.isfile(fn):
            found[af] = fn
    if len(found) == 0:
        raise NonConfigDriveDir("%s: %s" % (source_dir, "no files found"))
    md = {}
    ud = ""
    keydata = ""
    if "etc/network/interfaces" in found:
        fn = found["etc/network/interfaces"]
        md['network-interfaces'] = util.load_file(fn)
    if "root/.ssh/authorized_keys" in found:
        fn = found["root/.ssh/authorized_keys"]
        keydata = util.load_file(fn)
    meta_js = {}
    if "meta.js" in found:
        fn = found['meta.js']
        content = util.load_file(fn)
        try:
            # Just check if it's really json...
            meta_js = json.loads(content)
            if not isinstance(meta_js, dict):
                raise TypeError("Dict expected for meta.js root node")
        except (ValueError, TypeError) as e:
            raise NonConfigDriveDir("%s: %s, %s" %
                                    (source_dir, "invalid json in meta.js", e))
        md['meta_js'] = content
    # Key data override??
    keydata = meta_js.get('public-keys', keydata)
    if keydata:
        lines = keydata.splitlines()
        md['public-keys'] = [l for l in lines
                             if len(l) and not l.startswith("#")]
    for copy in ('dsmode', 'instance-id', 'dscfg'):
        if copy in meta_js:
            md[copy] = meta_js[copy]
    if 'user-data' in meta_js:
        ud = meta_js['user-data']
    return (md, ud)


# Used to match classes to dependencies
datasources = [
    (DataSourceConfigDrive, (sources.DEP_FILESYSTEM, )),
    (DataSourceConfigDriveNet, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
]


# Return a list of data sources that match this set of dependencies
def get_datasource_list(depends):
    return sources.list_from_depends(depends, datasources)
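The device selection in `find_cfg_drive_device` exploits the fact that raw disks end in a letter (`vdb`) while partitions end in a digit (`vdb1`), and that a reverse sort puts the "last available disk" first. That filter can be exercised standalone (function name and device list are ours):

```python
import string

def last_whole_disk(devs):
    # Keep raw disks (name ends in a letter) and drop partitions
    # (name ends in a digit), then pick the "last" device by
    # sorting in reverse, as find_cfg_drive_device does.
    disks = [d for d in devs if d[-1] in string.ascii_lowercase]
    disks.sort(reverse=True)
    if disks:
        return disks[0]
    return None

print(last_whole_disk(["/dev/vda1", "/dev/vdb", "/dev/vdc"]))  # /dev/vdc
```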

cloudinit/sources/DataSourceEc2.py (new file)
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import time

import boto.utils as boto_utils

from cloudinit import log as logging
from cloudinit import sources
from cloudinit import url_helper as uhelp
from cloudinit import util

LOG = logging.getLogger(__name__)

DEF_MD_URL = "http://169.254.169.254"

# Which version we are requesting of the ec2 metadata apis
DEF_MD_VERSION = '2009-04-04'

# Default metadata urls that will be used if none are provided.
# They will be checked for 'resolveability' and some of the
# following may be discarded if they do not resolve.
DEF_MD_URLS = [DEF_MD_URL, "http://instance-data:8773"]


class DataSourceEc2(sources.DataSource):
    def __init__(self, sys_cfg, distro, paths):
        sources.DataSource.__init__(self, sys_cfg, distro, paths)
        self.metadata_address = DEF_MD_URL
        self.seed_dir = os.path.join(paths.seed_dir, "ec2")
        self.api_ver = DEF_MD_VERSION

    def __str__(self):
        return util.obj_name(self)

    def get_data(self):
        seed_ret = {}
        if util.read_optional_seed(seed_ret, base=(self.seed_dir + "/")):
            self.userdata_raw = seed_ret['user-data']
            self.metadata = seed_ret['meta-data']
            LOG.debug("Using seeded ec2 data from %s", self.seed_dir)
            return True
        try:
            if not self.wait_for_metadata_service():
                return False
            start_time = time.time()
            self.userdata_raw = boto_utils.get_instance_userdata(self.api_ver,
                None, self.metadata_address)
            self.metadata = boto_utils.get_instance_metadata(self.api_ver,
                self.metadata_address)
            LOG.debug("Crawl of metadata service took %s seconds",
                      int(time.time() - start_time))
            return True
        except Exception:
            util.logexc(LOG, "Failed reading from metadata address %s",
                        self.metadata_address)
            return False

    def get_instance_id(self):
        return self.metadata['instance-id']

    def get_availability_zone(self):
        return self.metadata['placement']['availability-zone']

    def get_local_mirror(self):
        return self.get_mirror_from_availability_zone()

    def get_mirror_from_availability_zone(self, availability_zone=None):
        # Availability zones look like 'us-west-1b' or 'eu-west-1a'
        if availability_zone is None:
            availability_zone = self.get_availability_zone()
        if self.is_vpc():
            return None
        if not availability_zone:
            return None
        # Use the distro to get the mirror template
        mirror_tpl = self.distro.get_option('availability_zone_template')
        if not mirror_tpl:
            return None
        tpl_params = {
            'zone': availability_zone.strip(),
        }
        mirror_url = mirror_tpl % (tpl_params)
        (max_wait, timeout) = self._get_url_settings()
        worked = uhelp.wait_for_url([mirror_url], max_wait=max_wait,
                                    timeout=timeout, status_cb=LOG.warn)
        if not worked:
            return None
        return mirror_url

    def _get_url_settings(self):
        mcfg = self.ds_cfg
        if not mcfg:
            mcfg = {}
        max_wait = 120
        try:
            max_wait = int(mcfg.get("max_wait", max_wait))
        except Exception:
            util.logexc(LOG, "Failed to get max wait, using %s", max_wait)
        if max_wait == 0:
            return False
        timeout = 50
        try:
            timeout = int(mcfg.get("timeout", timeout))
        except Exception:
            util.logexc(LOG, "Failed to get timeout, using %s", timeout)
        return (max_wait, timeout)

    def wait_for_metadata_service(self):
        mcfg = self.ds_cfg
        if not mcfg:
            mcfg = {}
        (max_wait, timeout) = self._get_url_settings()
        # Remove addresses from the list that won't resolve.
        mdurls = mcfg.get("metadata_urls", DEF_MD_URLS)
        filtered = [x for x in mdurls if util.is_resolvable_url(x)]
        if set(filtered) != set(mdurls):
            LOG.debug("Removed the following from metadata urls: %s",
                      list((set(mdurls) - set(filtered))))
        if len(filtered):
            mdurls = filtered
        else:
            LOG.warn("Empty metadata url list! using default list")
            mdurls = DEF_MD_URLS
        urls = []
        url2base = {}
        for url in mdurls:
            cur = "%s/%s/meta-data/instance-id" % (url, self.api_ver)
            urls.append(cur)
            url2base[cur] = url
        start_time = time.time()
        url = uhelp.wait_for_url(urls=urls, max_wait=max_wait,
                                 timeout=timeout, status_cb=LOG.warn)
        if url:
            LOG.debug("Using metadata source: '%s'", url2base[url])
        else:
            LOG.critical("Giving up on md from %s after %s seconds",
                         urls, int(time.time() - start_time))
        self.metadata_address = url2base.get(url)
        return bool(url)

    def _remap_device(self, short_name):
        # LP: #611137
        # The metadata service may believe that devices are named 'sda'
        # when the kernel named them 'vda' or 'xvda';
        # we want to return the correct value for what will actually
        # exist in this instance.
        mappings = {"sd": ("vd", "xvd")}
        for (nfrom, tlist) in mappings.iteritems():
            if not short_name.startswith(nfrom):
                continue
            for nto in tlist:
                cand = "/dev/%s%s" % (nto, short_name[len(nfrom):])
                if os.path.exists(cand):
                    return cand
        return None

    def device_name_to_device(self, name):
        # Consult metadata service, that has
        #  ephemeral0: sdb
        # and return 'sdb' for input 'ephemeral0'
        if 'block-device-mapping' not in self.metadata:
            return None
        # Example:
        # 'block-device-mapping':
        #    {'ami': '/dev/sda1',
        #     'ephemeral0': '/dev/sdb',
        #     'root': '/dev/sda1'}
        found = None
        bdm_items = self.metadata['block-device-mapping'].iteritems()
        for (entname, device) in bdm_items:
            if entname == name:
                found = device
                break
            # LP: #513842 mapping in Euca has 'ephemeral' not 'ephemeral0'
            if entname == "ephemeral" and name == "ephemeral0":
                found = device
        if found is None:
            LOG.debug("Unable to convert %s to a device", name)
            return None
        ofound = found
        if not found.startswith("/"):
            found = "/dev/%s" % found
        if os.path.exists(found):
            return found
        remapped = self._remap_device(os.path.basename(found))
        if remapped:
            LOG.debug("Remapped device name %s => %s", found, remapped)
            return remapped
        # On t1.micro, ephemeral0 will appear in block-device-mapping from
        # metadata, but it will not exist on disk (and never will).
        # At this point, we've verified that the path did not exist;
        # in the special case of 'ephemeral0' return None to avoid a bogus
        # fstab entry (LP: #744019)
        if name == "ephemeral0":
            return None
        return ofound

    def is_vpc(self):
        # See: https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/615545
        # Detect that the machine was launched in a VPC.
        # But I did notice that when in a VPC, meta-data
        # does not have public-ipv4 and public-hostname
        # listed as a possibility.
        ph = "public-hostname"
        p4 = "public-ipv4"
        if ((ph not in self.metadata or self.metadata[ph] == "") and
            (p4 not in self.metadata or self.metadata[p4] == "")):
            return True
        return False


# Used to match classes to dependencies
datasources = [
    (DataSourceEc2, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
]


# Return a list of data sources that match this set of dependencies
def get_datasource_list(depends):
    return sources.list_from_depends(depends, datasources)


@@ -0,0 +1,264 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import errno
import oauth.oauth as oauth
import os
import time
import urllib2
from cloudinit import log as logging
from cloudinit import sources
from cloudinit import url_helper as uhelp
from cloudinit import util
LOG = logging.getLogger(__name__)
MD_VERSION = "2012-03-01"
class DataSourceMAAS(sources.DataSource):
"""
DataSourceMAAS reads instance information from MAAS.
Given a config metadata_url, and oauth tokens, it expects to find
files under the root named:
instance-id
user-data
hostname
"""
def __init__(self, sys_cfg, distro, paths):
sources.DataSource.__init__(self, sys_cfg, distro, paths)
self.base_url = None
self.seed_dir = os.path.join(paths.seed_dir, 'maas')
def __str__(self):
return "%s [%s]" % (util.obj_name(self), self.base_url)
def get_data(self):
mcfg = self.ds_cfg
try:
(userdata, metadata) = read_maas_seed_dir(self.seed_dir)
self.userdata_raw = userdata
self.metadata = metadata
self.base_url = self.seed_dir
return True
except MAASSeedDirNone:
pass
except MAASSeedDirMalformed as exc:
LOG.warn("%s was malformed: %s", self.seed_dir, exc)
raise
# If there is no metadata_url, then we're not configured
url = mcfg.get('metadata_url', None)
if not url:
return False
try:
if not self.wait_for_metadata_service(url):
return False
self.base_url = url
(userdata, metadata) = read_maas_seed_url(self.base_url,
self.md_headers)
self.userdata_raw = userdata
self.metadata = metadata
return True
except Exception:
util.logexc(LOG, "Failed fetching metadata from url %s", url)
return False
def md_headers(self, url):
mcfg = self.ds_cfg
# If we are missing token_key, token_secret or consumer_key
# then just do non-authed requests
for required in ('token_key', 'token_secret', 'consumer_key'):
if required not in mcfg:
return {}
consumer_secret = mcfg.get('consumer_secret', "")
return oauth_headers(url=url,
consumer_key=mcfg['consumer_key'],
token_key=mcfg['token_key'],
token_secret=mcfg['token_secret'],
consumer_secret=consumer_secret)
def wait_for_metadata_service(self, url):
mcfg = self.ds_cfg
max_wait = 120
try:
max_wait = int(mcfg.get("max_wait", max_wait))
except Exception:
util.logexc(LOG, "Failed to get max wait. using %s", max_wait)
if max_wait == 0:
return False
timeout = 50
try:
if "timeout" in mcfg:
timeout = int(mcfg.get("timeout", timeout))
except Exception:
LOG.warn("Failed to get timeout, using %s", timeout)
starttime = time.time()
check_url = "%s/%s/meta-data/instance-id" % (url, MD_VERSION)
urls = [check_url]
url = uhelp.wait_for_url(urls=urls, max_wait=max_wait,
timeout=timeout, status_cb=LOG.warn,
headers_cb=self.md_headers)
if url:
LOG.debug("Using metadata source: '%s'", url)
else:
LOG.critical("Giving up on md from %s after %i seconds",
urls, int(time.time() - starttime))
return bool(url)
def read_maas_seed_dir(seed_d):
"""
Return user-data and metadata for a maas seed dir in seed_d.
seed_d is expected to contain the following files:
* instance-id
* local-hostname
* user-data
"""
if not os.path.isdir(seed_d):
raise MAASSeedDirNone("%s: not a directory" % seed_d)
files = ('local-hostname', 'instance-id', 'user-data', 'public-keys')
md = {}
for fname in files:
try:
md[fname] = util.load_file(os.path.join(seed_d, fname))
except IOError as e:
if e.errno != errno.ENOENT:
raise
return check_seed_contents(md, seed_d)
def read_maas_seed_url(seed_url, header_cb=None, timeout=None,
version=MD_VERSION):
"""
Read the maas datasource at seed_url.
header_cb is a method that should return a headers dictionary that will
be given to urllib2.Request()
seed_url is expected to serve the following files:
* <seed_url>/<version>/meta-data/instance-id
* <seed_url>/<version>/meta-data/local-hostname
* <seed_url>/<version>/user-data
"""
base_url = "%s/%s" % (seed_url, version)
file_order = [
'local-hostname',
'instance-id',
'public-keys',
'user-data',
]
files = {
'local-hostname': "%s/%s" % (base_url, 'meta-data/local-hostname'),
'instance-id': "%s/%s" % (base_url, 'meta-data/instance-id'),
'public-keys': "%s/%s" % (base_url, 'meta-data/public-keys'),
'user-data': "%s/%s" % (base_url, 'user-data'),
}
md = {}
for name in file_order:
url = files.get(name)
if header_cb:
headers = header_cb(url)
else:
headers = {}
try:
resp = uhelp.readurl(url, headers=headers, timeout=timeout)
if resp.ok():
md[name] = str(resp)
else:
LOG.warn(("Fetching from %s resulted in"
" an invalid http code %s"), url, resp.code)
except urllib2.HTTPError as e:
if e.code != 404:
raise
return check_seed_contents(md, seed_url)
def check_seed_contents(content, seed):
"""Validate that content is a dict that is valid as a
datasource return value.
Either return a (userdata, metadata) tuple or
raise MAASSeedDirMalformed or MAASSeedDirNone.
"""
md_required = ('instance-id', 'local-hostname')
if len(content) == 0:
raise MAASSeedDirNone("%s: no data files found" % seed)
found = list(content.keys())
missing = [k for k in md_required if k not in found]
if len(missing):
raise MAASSeedDirMalformed("%s: missing files %s" % (seed, missing))
userdata = content.get('user-data', "")
md = {}
for (key, val) in content.iteritems():
if key == 'user-data':
continue
md[key] = val
return (userdata, md)
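The validation above can be exercised on its own; a Python 3 sketch using `ValueError` in place of the MAAS-specific exceptions (`check_seed` is an illustrative name):

```python
def check_seed(content, seed):
    # Require the minimum metadata keys, then split user-data
    # away from the remaining metadata entries.
    md_required = ('instance-id', 'local-hostname')
    if not content:
        raise ValueError("%s: no data files found" % seed)
    missing = [k for k in md_required if k not in content]
    if missing:
        raise ValueError("%s: missing files %s" % (seed, missing))
    userdata = content.get('user-data', "")
    md = {k: v for (k, v) in content.items() if k != 'user-data'}
    return (userdata, md)
```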
def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret):
consumer = oauth.OAuthConsumer(consumer_key, consumer_secret)
token = oauth.OAuthToken(token_key, token_secret)
params = {
'oauth_version': "1.0",
'oauth_nonce': oauth.generate_nonce(),
'oauth_timestamp': int(time.time()),
'oauth_token': token.key,
'oauth_consumer_key': consumer.key,
}
req = oauth.OAuthRequest(http_url=url, parameters=params)
req.sign_request(oauth.OAuthSignatureMethod_PLAINTEXT(),
consumer, token)
return req.to_header()
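`oauth_headers` relies on the legacy `oauth` package; for illustration, here is a dependency-free sketch of what PLAINTEXT signing produces (the exact formatting of `OAuthRequest.to_header()` may differ, and the function name is illustrative):

```python
import time
import urllib.parse
import uuid

def plaintext_oauth_header(consumer_key, consumer_secret, token_key, token_secret):
    # PLAINTEXT signing: the signature is just the percent-encoded
    # secrets joined by '&'; no HMAC is computed.
    enc = lambda s: urllib.parse.quote(s, safe='')
    params = {
        'oauth_version': "1.0",
        'oauth_nonce': uuid.uuid4().hex,
        'oauth_timestamp': str(int(time.time())),
        'oauth_token': token_key,
        'oauth_consumer_key': consumer_key,
        'oauth_signature_method': 'PLAINTEXT',
        'oauth_signature': '%s&%s' % (enc(consumer_secret), enc(token_secret)),
    }
    header = ', '.join('%s="%s"' % (k, enc(v)) for (k, v) in sorted(params.items()))
    return {'Authorization': 'OAuth ' + header}
```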
class MAASSeedDirNone(Exception):
pass
class MAASSeedDirMalformed(Exception):
pass
# Used to match classes to dependencies
datasources = [
(DataSourceMAAS, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
]
# Return a list of data sources that match this set of dependencies
def get_datasource_list(depends):
return sources.list_from_depends(depends, datasources)


@@ -2,9 +2,11 @@
#
# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Hafliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
@@ -18,33 +20,34 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.DataSource as DataSource
from cloudinit import seeddir as base_seeddir
from cloudinit import log
import cloudinit.util as util
import errno
import subprocess
import os
from cloudinit import log as logging
from cloudinit import sources
from cloudinit import util
LOG = logging.getLogger(__name__)
class DataSourceNoCloud(DataSource.DataSource):
metadata = None
userdata = None
userdata_raw = None
supported_seed_starts = ("/", "file://")
dsmode = "local"
seed = None
cmdline_id = "ds=nocloud"
seeddir = base_seeddir + '/nocloud'
class DataSourceNoCloud(sources.DataSource):
def __init__(self, sys_cfg, distro, paths):
sources.DataSource.__init__(self, sys_cfg, distro, paths)
self.dsmode = 'local'
self.seed = None
self.cmdline_id = "ds=nocloud"
self.seed_dir = os.path.join(paths.seed_dir, 'nocloud')
self.supported_seed_starts = ("/", "file://")
def __str__(self):
mstr = "DataSourceNoCloud"
mstr = mstr + " [seed=%s]" % self.seed
return(mstr)
mstr = "%s [seed=%s][dsmode=%s]" % (util.obj_name(self),
self.seed, self.dsmode)
return mstr
def get_data(self):
defaults = {
"instance-id": "nocloud", "dsmode": self.dsmode
"instance-id": "nocloud",
"dsmode": self.dsmode,
}
found = []
@@ -52,24 +55,24 @@ class DataSourceNoCloud(DataSource.DataSource):
ud = ""
try:
# parse the kernel command line, getting data passed in
# Parse the kernel command line, getting data passed in
if parse_cmdline_data(self.cmdline_id, md):
found.append("cmdline")
except:
util.logexc(log)
util.logexc(LOG, "Unable to parse command line data")
return False
# check to see if the seeddir has data.
# Check to see if the seed dir has data.
seedret = {}
if util.read_optional_seed(seedret, base=self.seeddir + "/"):
if util.read_optional_seed(seedret, base=self.seed_dir + "/"):
md = util.mergedict(md, seedret['meta-data'])
ud = seedret['user-data']
found.append(self.seeddir)
log.debug("using seeded cache data in %s" % self.seeddir)
found.append(self.seed_dir)
LOG.debug("Using seeded cache data from %s", self.seed_dir)
# if the datasource config had a 'seedfrom' entry, then that takes
# If the datasource config had a 'seedfrom' entry, then that takes
# precedence over a 'seedfrom' that was found in a filesystem
# but not over external medi
# but not over external media
if 'seedfrom' in self.ds_cfg and self.ds_cfg['seedfrom']:
found.append("ds_config")
md["seedfrom"] = self.ds_cfg['seedfrom']
@@ -83,35 +86,37 @@ class DataSourceNoCloud(DataSource.DataSource):
for dev in devlist:
try:
(newmd, newud) = util.mount_callback_umount(dev,
util.read_seeded)
LOG.debug("Attempting to use data from %s", dev)
(newmd, newud) = util.mount_cb(dev, util.read_seeded)
md = util.mergedict(newmd, md)
ud = newud
# for seed from a device, the default mode is 'net'.
# For seed from a device, the default mode is 'net'.
# that is more likely to be what is desired.
# If they want dsmode of local, then they must
# specify that.
if 'dsmode' not in md:
md['dsmode'] = "net"
log.debug("using data from %s" % dev)
LOG.debug("Using data from %s", dev)
found.append(dev)
break
except OSError, e:
except OSError as e:
if e.errno != errno.ENOENT:
raise
except util.mountFailedError:
log.warn("Failed to mount %s when looking for seed" % dev)
except util.MountFailedError:
util.logexc(LOG, ("Failed to mount %s"
" when looking for data"), dev)
# there was no indication on kernel cmdline or data
# There was no indication on kernel cmdline or data
# in the seeddir suggesting this handler should be used.
if len(found) == 0:
return False
seeded_interfaces = None
# the special argument "seedfrom" indicates we should
# The special argument "seedfrom" indicates we should
# attempt to seed the userdata / metadata from its value
# its primary value is in allowing the user to type less
# on the command line, ie: ds=nocloud;s=http://bit.ly/abcdefg
@@ -123,57 +128,46 @@ class DataSourceNoCloud(DataSource.DataSource):
seedfound = proto
break
if not seedfound:
log.debug("seed from %s not supported by %s" %
(seedfrom, self.__class__))
LOG.debug("Seed from %s not supported by %s", seedfrom, self)
return False
if 'network-interfaces' in md:
seeded_interfaces = self.dsmode
# this could throw errors, but the user told us to do it
# This could throw errors, but the user told us to do it
# so if errors are raised, let them raise
(md_seed, ud) = util.read_seeded(seedfrom, timeout=None)
log.debug("using seeded cache data from %s" % seedfrom)
LOG.debug("Using seeded cache data from %s", seedfrom)
# values in the command line override those from the seed
# Values in the command line override those from the seed
md = util.mergedict(md, md_seed)
found.append(seedfrom)
# Now that we have exhausted any other places merge in the defaults
md = util.mergedict(md, defaults)
# update the network-interfaces if metadata had 'network-interfaces'
# Update the network-interfaces if metadata had 'network-interfaces'
# entry and this is the local datasource, or 'seedfrom' was used
# and the source of the seed was self.dsmode
# ('local' for NoCloud, 'net' for NoCloudNet')
if ('network-interfaces' in md and
(self.dsmode in ("local", seeded_interfaces))):
log.info("updating network interfaces from nocloud")
util.write_file("/etc/network/interfaces",
md['network-interfaces'])
try:
(out, err) = util.subp(['ifup', '--all'])
if len(out) or len(err):
log.warn("ifup --all had stderr: %s" % err)
except subprocess.CalledProcessError as exc:
log.warn("ifup --all failed: %s" % (exc.output[1]))
self.seed = ",".join(found)
self.metadata = md
self.userdata_raw = ud
LOG.debug("Updating network interfaces from %s", self)
self.distro.apply_network(md['network-interfaces'])
if md['dsmode'] == self.dsmode:
self.seed = ",".join(found)
self.metadata = md
self.userdata_raw = ud
return True
log.debug("%s: not claiming datasource, dsmode=%s" %
(self, md['dsmode']))
LOG.debug("%s: not claiming datasource, dsmode=%s", self, md['dsmode'])
return False
# returns true or false indicating if cmdline indicated
# Returns true or false indicating if cmdline indicated
# that this module should be used
# example cmdline:
# Example cmdline:
# root=LABEL=uec-rootfs ro ds=nocloud
def parse_cmdline_data(ds_id, fill, cmdline=None):
if cmdline is None:
@@ -210,23 +204,25 @@ def parse_cmdline_data(ds_id, fill, cmdline=None):
k = s2l[k]
fill[k] = v
return(True)
return True
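Since the diff elides most of `parse_cmdline_data`, here is a hypothetical standalone sketch of the same idea: find the `ds=nocloud` token on the kernel command line and expand its `;key=value` pairs (the function name and short-name table are illustrative):

```python
def parse_ds_cmdline(ds_id, cmdline):
    # Look for "ds=nocloud" (optionally with ;key=value suffixes) among
    # the kernel command line tokens; return the pairs, or None if absent.
    shortnames = {'s': 'seedfrom', 'h': 'local-hostname', 'i': 'instance-id'}
    for tok in cmdline.split():
        if tok == ds_id or tok.startswith(ds_id + ";"):
            fill = {}
            for kv in tok.split(";")[1:]:
                if "=" not in kv:
                    continue
                (k, v) = kv.split("=", 1)
                fill[shortnames.get(k, k)] = v
            return fill
    return None
```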
class DataSourceNoCloudNet(DataSourceNoCloud):
cmdline_id = "ds=nocloud-net"
supported_seed_starts = ("http://", "https://", "ftp://")
seeddir = base_seeddir + '/nocloud-net'
dsmode = "net"
def __init__(self, sys_cfg, distro, paths):
DataSourceNoCloud.__init__(self, sys_cfg, distro, paths)
self.cmdline_id = "ds=nocloud-net"
self.supported_seed_starts = ("http://", "https://", "ftp://")
self.seed_dir = os.path.join(paths.seed_dir, 'nocloud-net')
self.dsmode = "net"
datasources = (
(DataSourceNoCloud, (DataSource.DEP_FILESYSTEM, )),
(DataSourceNoCloudNet,
(DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),
)
# Used to match classes to dependencies
datasources = [
(DataSourceNoCloud, (sources.DEP_FILESYSTEM, )),
(DataSourceNoCloudNet, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
]
# return a list of data sources that match this set of dependencies
# Return a list of data sources that match this set of dependencies
def get_datasource_list(depends):
return(DataSource.list_from_depends(depends, datasources))
return sources.list_from_depends(depends, datasources)


@@ -2,9 +2,11 @@
#
# Copyright (C) 2011 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Hafliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
@@ -18,33 +20,30 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cloudinit.DataSource as DataSource
from cloudinit import seeddir as base_seeddir
from cloudinit import log
import cloudinit.util as util
import os.path
import os
from xml.dom import minidom
import base64
import os
import re
import tempfile
import subprocess
from cloudinit import log as logging
from cloudinit import sources
from cloudinit import util
LOG = logging.getLogger(__name__)
class DataSourceOVF(DataSource.DataSource):
seed = None
seeddir = base_seeddir + '/ovf'
environment = None
cfg = {}
userdata_raw = None
metadata = None
supported_seed_starts = ("/", "file://")
class DataSourceOVF(sources.DataSource):
def __init__(self, sys_cfg, distro, paths):
sources.DataSource.__init__(self, sys_cfg, distro, paths)
self.seed = None
self.seed_dir = os.path.join(paths.seed_dir, 'ovf')
self.environment = None
self.cfg = {}
self.supported_seed_starts = ("/", "file://")
def __str__(self):
mstr = "DataSourceOVF"
mstr = mstr + " [seed=%s]" % self.seed
return(mstr)
return "%s [seed=%s]" % (util.obj_name(self), self.seed)
def get_data(self):
found = []
@@ -52,26 +51,24 @@ class DataSourceOVF(DataSource.DataSource):
ud = ""
defaults = {
"instance-id": "iid-dsovf"
"instance-id": "iid-dsovf",
}
(seedfile, contents) = get_ovf_env(base_seeddir)
(seedfile, contents) = get_ovf_env(self.paths.seed_dir)
if seedfile:
# found a seed dir
seed = "%s/%s" % (base_seeddir, seedfile)
# Found a seed dir
seed = os.path.join(self.paths.seed_dir, seedfile)
(md, ud, cfg) = read_ovf_environment(contents)
self.environment = contents
found.append(seed)
else:
np = {'iso': transport_iso9660,
'vmware-guestd': transport_vmware_guestd, }
name = None
for name, transfunc in np.iteritems():
for (name, transfunc) in np.iteritems():
(contents, _dev, _fname) = transfunc()
if contents:
break
if contents:
(md, ud, cfg) = read_ovf_environment(contents)
self.environment = contents
@@ -89,17 +86,19 @@ class DataSourceOVF(DataSource.DataSource):
seedfound = proto
break
if not seedfound:
log.debug("seed from %s not supported by %s" %
(seedfrom, self.__class__))
LOG.debug("Seed from %s not supported by %s",
seedfrom, self)
return False
(md_seed, ud) = util.read_seeded(seedfrom, timeout=None)
log.debug("using seeded cache data from %s" % seedfrom)
LOG.debug("Using seeded cache data from %s", seedfrom)
md = util.mergedict(md, md_seed)
found.append(seedfrom)
# Now that we have exhausted any other places merge in the defaults
md = util.mergedict(md, defaults)
self.seed = ",".join(found)
self.metadata = md
self.userdata_raw = ud
@@ -108,31 +107,37 @@ class DataSourceOVF(DataSource.DataSource):
def get_public_ssh_keys(self):
if not 'public-keys' in self.metadata:
return([])
return([self.metadata['public-keys'], ])
return []
pks = self.metadata['public-keys']
if isinstance(pks, (list)):
return pks
else:
return [pks]
# the data sources' config_obj is a cloud-config formated
# The data sources' config_obj is a cloud-config formatted
# object that came to it from ways other than cloud-config
# because cloud-config content would be handled elsewhere
def get_config_obj(self):
return(self.cfg)
return self.cfg
class DataSourceOVFNet(DataSourceOVF):
seeddir = base_seeddir + '/ovf-net'
supported_seed_starts = ("http://", "https://", "ftp://")
def __init__(self, sys_cfg, distro, paths):
DataSourceOVF.__init__(self, sys_cfg, distro, paths)
self.seed_dir = os.path.join(paths.seed_dir, 'ovf-net')
self.supported_seed_starts = ("http://", "https://", "ftp://")
# this will return a dict with some content
# meta-data, user-data
# This will return a dict with some content
# meta-data, user-data, some config
def read_ovf_environment(contents):
props = getProperties(contents)
props = get_properties(contents)
md = {}
cfg = {}
ud = ""
cfg_props = ['password', ]
cfg_props = ['password']
md_props = ['seedfrom', 'local-hostname', 'public-keys', 'instance-id']
for prop, val in props.iteritems():
for (prop, val) in props.iteritems():
if prop == 'hostname':
prop = "local-hostname"
if prop in md_props:
@@ -144,23 +149,25 @@ def read_ovf_environment(contents):
ud = base64.decodestring(val)
except:
ud = val
return(md, ud, cfg)
return (md, ud, cfg)
# returns tuple of filename (in 'dirname', and the contents of the file)
# Returns a tuple of filename (in 'dirname') and the contents of the file
# on "not found", returns 'None' for filename and False for contents
def get_ovf_env(dirname):
env_names = ("ovf-env.xml", "ovf_env.xml", "OVF_ENV.XML", "OVF-ENV.XML")
for fname in env_names:
if os.path.isfile("%s/%s" % (dirname, fname)):
fp = open("%s/%s" % (dirname, fname))
contents = fp.read()
fp.close()
return(fname, contents)
return(None, False)
full_fn = os.path.join(dirname, fname)
if os.path.isfile(full_fn):
try:
contents = util.load_file(full_fn)
return (fname, contents)
except:
util.logexc(LOG, "Failed loading ovf file %s", full_fn)
return (None, False)
# transport functions take no input and return
# Transport functions take no input and return
# a 3 tuple of content, path, filename
def transport_iso9660(require_iso=True):
@@ -173,79 +180,46 @@ def transport_iso9660(require_iso=True):
devname_regex = os.environ.get(envname, default_regex)
cdmatch = re.compile(devname_regex)
# go through mounts to see if it was already mounted
fp = open("/proc/mounts")
mounts = fp.readlines()
fp.close()
mounted = {}
for mpline in mounts:
(dev, mp, fstype, _opts, _freq, _passno) = mpline.split()
mounted[dev] = (dev, fstype, mp, False)
mp = mp.replace("\\040", " ")
# Go through mounts to see if it was already mounted
mounts = util.mounts()
for (dev, info) in mounts.iteritems():
fstype = info['fstype']
if fstype != "iso9660" and require_iso:
continue
if cdmatch.match(dev[5:]) == None: # take off '/dev/'
if cdmatch.match(dev[5:]) is None: # take off '/dev/'
continue
mp = info['mountpoint']
(fname, contents) = get_ovf_env(mp)
if contents is not False:
return(contents, dev, fname)
tmpd = None
dvnull = None
return (contents, dev, fname)
devs = os.listdir("/dev/")
devs.sort()
for dev in devs:
fullp = "/dev/%s" % dev
fullp = os.path.join("/dev/", dev)
if fullp in mounted or not cdmatch.match(dev) or os.path.isdir(fullp):
if (fullp in mounts or
not cdmatch.match(dev) or os.path.isdir(fullp)):
continue
fp = None
try:
fp = open(fullp, "rb")
fp.read(512)
fp.close()
# See if we can read anything at all...??
with open(fullp, 'rb') as fp:
fp.read(512)
except:
if fp:
fp.close()
continue
if tmpd is None:
tmpd = tempfile.mkdtemp()
if dvnull is None:
try:
dvnull = open("/dev/null")
except:
pass
cmd = ["mount", "-o", "ro", fullp, tmpd]
if require_iso:
cmd.extend(('-t', 'iso9660'))
rc = subprocess.call(cmd, stderr=dvnull, stdout=dvnull, stdin=dvnull)
if rc:
try:
(fname, contents) = util.mount_cb(fullp,
get_ovf_env, mtype="iso9660")
except util.MountFailedError:
util.logexc(LOG, "Failed mounting %s", fullp)
continue
(fname, contents) = get_ovf_env(tmpd)
subprocess.call(["umount", tmpd])
if contents is not False:
os.rmdir(tmpd)
return(contents, fullp, fname)
return (contents, fullp, fname)
if tmpd:
os.rmdir(tmpd)
if dvnull:
dvnull.close()
return(False, None, None)
return (False, None, None)
def transport_vmware_guestd():
@@ -259,74 +233,61 @@ def transport_vmware_guestd():
# # would need to error check here and see why this failed
# # to know if log/error should be raised
# return(False, None, None)
return(False, None, None)
return (False, None, None)
def findChild(node, filter_func):
def find_child(node, filter_func):
ret = []
if not node.hasChildNodes():
return ret
for child in node.childNodes:
if filter_func(child):
ret.append(child)
return(ret)
return ret
def getProperties(environString):
dom = minidom.parseString(environString)
def get_properties(contents):
dom = minidom.parseString(contents)
if dom.documentElement.localName != "Environment":
raise Exception("No Environment Node")
raise XmlError("No Environment Node")
if not dom.documentElement.hasChildNodes():
raise Exception("No Child Nodes")
raise XmlError("No Child Nodes")
envNsURI = "http://schemas.dmtf.org/ovf/environment/1"
# could also check here that elem.namespaceURI ==
# "http://schemas.dmtf.org/ovf/environment/1"
propSections = findChild(dom.documentElement,
propSections = find_child(dom.documentElement,
lambda n: n.localName == "PropertySection")
if len(propSections) == 0:
raise Exception("No 'PropertySection's")
raise XmlError("No 'PropertySection's")
props = {}
propElems = findChild(propSections[0], lambda n: n.localName == "Property")
propElems = find_child(propSections[0],
(lambda n: n.localName == "Property"))
for elem in propElems:
key = elem.attributes.getNamedItemNS(envNsURI, "key").value
val = elem.attributes.getNamedItemNS(envNsURI, "value").value
props[key] = val
return(props)
return props
class XmlError(Exception):
pass
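`get_properties` walks the `PropertySection` of an OVF Environment document; a condensed, runnable sketch against a minimal document (using `ValueError` rather than `XmlError` to stay self-contained; the function name is illustrative, the namespace URI is the real DMTF one):

```python
from xml.dom import minidom

OVF_ENV_NS = "http://schemas.dmtf.org/ovf/environment/1"

def get_env_properties(contents):
    # Collect key/value attributes from the Property elements found
    # under any PropertySection of an OVF Environment document.
    dom = minidom.parseString(contents)
    if dom.documentElement.localName != "Environment":
        raise ValueError("No Environment Node")
    props = {}
    for section in dom.documentElement.childNodes:
        if section.localName != "PropertySection":
            continue
        for elem in section.childNodes:
            if elem.localName != "Property":
                continue
            key = elem.attributes.getNamedItemNS(OVF_ENV_NS, "key").value
            val = elem.attributes.getNamedItemNS(OVF_ENV_NS, "value").value
            props[key] = val
    return props
```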
# Used to match classes to dependencies
datasources = (
(DataSourceOVF, (DataSource.DEP_FILESYSTEM, )),
(DataSourceOVFNet,
(DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),
(DataSourceOVF, (sources.DEP_FILESYSTEM, )),
(DataSourceOVFNet, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
)
# return a list of data sources that match this set of dependencies
# Return a list of data sources that match this set of dependencies
def get_datasource_list(depends):
return(DataSource.list_from_depends(depends, datasources))
if __name__ == "__main__":
def main():
import sys
envStr = open(sys.argv[1]).read()
props = getProperties(envStr)
import pprint
pprint.pprint(props)
md, ud, cfg = read_ovf_environment(envStr)
print "=== md ==="
pprint.pprint(md)
print "=== ud ==="
pprint.pprint(ud)
print "=== cfg ==="
pprint.pprint(cfg)
main()
return sources.list_from_depends(depends, datasources)


@@ -0,0 +1,223 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import abc
from cloudinit import importer
from cloudinit import log as logging
from cloudinit import user_data as ud
from cloudinit import util
DEP_FILESYSTEM = "FILESYSTEM"
DEP_NETWORK = "NETWORK"
DS_PREFIX = 'DataSource'
LOG = logging.getLogger(__name__)
class DataSourceNotFoundException(Exception):
pass
class DataSource(object):
__metaclass__ = abc.ABCMeta
def __init__(self, sys_cfg, distro, paths, ud_proc=None):
self.sys_cfg = sys_cfg
self.distro = distro
self.paths = paths
self.userdata = None
self.metadata = None
self.userdata_raw = None
name = util.obj_name(self)
if name.startswith(DS_PREFIX):
name = name[len(DS_PREFIX):]
self.ds_cfg = util.get_cfg_by_path(self.sys_cfg,
("datasource", name), {})
if not ud_proc:
self.ud_proc = ud.UserDataProcessor(self.paths)
else:
self.ud_proc = ud_proc
def get_userdata(self):
if self.userdata is None:
raw_data = self.get_userdata_raw()
self.userdata = self.ud_proc.process(raw_data)
return self.userdata
def get_userdata_raw(self):
return self.userdata_raw
# The data sources' config_obj is a cloud-config formatted
# object that came to it from ways other than cloud-config
# because cloud-config content would be handled elsewhere
def get_config_obj(self):
return {}
def get_public_ssh_keys(self):
keys = []
if not self.metadata or 'public-keys' not in self.metadata:
return keys
if isinstance(self.metadata['public-keys'], (basestring, str)):
return str(self.metadata['public-keys']).splitlines()
if isinstance(self.metadata['public-keys'], (list, set)):
return list(self.metadata['public-keys'])
if isinstance(self.metadata['public-keys'], (dict)):
for (_keyname, klist) in self.metadata['public-keys'].iteritems():
# lp:506332 uec metadata service responds with
# data that makes boto populate a string for 'klist' rather
# than a list.
if isinstance(klist, (str, basestring)):
klist = [klist]
if isinstance(klist, (list, set)):
for pkey in klist:
# There is an empty string at
# the end of the keylist, trim it
if pkey:
keys.append(pkey)
return keys
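The `public-keys` normalization above accepts three metadata shapes (string, list/set, dict of lists); a Python 3 sketch of the same behavior (`basestring` dropped, function name illustrative):

```python
def normalize_public_keys(metadata):
    # Flatten the three accepted 'public-keys' shapes (string,
    # list/set, dict-of-lists) into a plain list of key strings.
    pks = (metadata or {}).get('public-keys')
    if not pks:
        return []
    if isinstance(pks, str):
        return pks.splitlines()
    if isinstance(pks, (list, set)):
        return list(pks)
    keys = []
    for klist in pks.values():
        if isinstance(klist, str):
            # lp:506332: boto may hand back a string instead of a list
            klist = [klist]
        for pkey in klist:
            if pkey:  # drop the empty entry at the end of a keylist
                keys.append(pkey)
    return keys
```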
def device_name_to_device(self, _name):
# translate a 'name' to a device
# the primary function at this point is on ec2
# to consult metadata service, that has
# ephemeral0: sdb
# and return 'sdb' for input 'ephemeral0'
return None
def get_locale(self):
return 'en_US.UTF-8'
def get_local_mirror(self):
# ??
return None
def get_instance_id(self):
if not self.metadata or 'instance-id' not in self.metadata:
# Return a magic string that is not really an instance id
return "iid-datasource"
return str(self.metadata['instance-id'])
def get_hostname(self, fqdn=False):
defdomain = "localdomain"
defhost = "localhost"
domain = defdomain
if not self.metadata or not 'local-hostname' in self.metadata:
# this is somewhat questionable really.
# the cloud datasource was asked for a hostname
# and didn't have one. raising error might be more appropriate
# but instead, basically look up the existing hostname
toks = []
hostname = util.get_hostname()
fqdn = util.get_fqdn_from_hosts(hostname)
if fqdn and fqdn.find(".") > 0:
toks = str(fqdn).split(".")
elif hostname:
toks = [hostname, defdomain]
else:
toks = [defhost, defdomain]
else:
# if there is an ipv4 address in 'local-hostname', then
# make up a hostname (LP: #475354) in format ip-xx.xx.xx.xx
lhost = self.metadata['local-hostname']
if util.is_ipv4(lhost):
toks = ["ip-%s" % lhost.replace(".", "-")]
else:
toks = lhost.split(".")
if len(toks) > 1:
hostname = toks[0]
domain = '.'.join(toks[1:])
else:
hostname = toks[0]
if fqdn:
return "%s.%s" % (hostname, domain)
else:
return hostname
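The IPv4-to-hostname behavior (LP: #475354) can be sketched as a pure function; the four-octet check stands in for `util.is_ipv4`, and the names are illustrative:

```python
def hostname_from_metadata(local_hostname, fqdn=False):
    # LP: #475354: when 'local-hostname' is an IPv4 address, synthesize
    # a hostname of the form ip-xx-xx-xx-xx instead of splitting on dots.
    parts = local_hostname.split(".")
    if len(parts) == 4 and all(p.isdigit() for p in parts):
        toks = ["ip-%s" % local_hostname.replace(".", "-")]
    else:
        toks = parts
    hostname = toks[0]
    domain = ".".join(toks[1:]) if len(toks) > 1 else "localdomain"
    return "%s.%s" % (hostname, domain) if fqdn else hostname
```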
def find_source(sys_cfg, distro, paths, ds_deps, cfg_list, pkg_list):
ds_list = list_sources(cfg_list, ds_deps, pkg_list)
ds_names = [util.obj_name(f) for f in ds_list]
LOG.debug("Searching for data source in: %s", ds_names)
for cls in ds_list:
try:
LOG.debug("Seeing if we can get any data from %s", cls)
s = cls(sys_cfg, distro, paths)
if s.get_data():
return (s, util.obj_name(cls))
except Exception:
util.logexc(LOG, "Getting data from %s failed", cls)
msg = ("Did not find any data source,"
" searched classes: (%s)") % (", ".join(ds_names))
raise DataSourceNotFoundException(msg)
# Return a list of classes that have the same depends as 'depends'
# iterate through cfg_list, loading "DataSource*" modules
# and calling their "get_datasource_list".
# Return an ordered list of classes that match (if any)
def list_sources(cfg_list, depends, pkg_list):
src_list = []
LOG.debug(("Looking for data source in: %s,"
" via packages %s that matches dependencies %s"),
cfg_list, pkg_list, depends)
for ds_name in cfg_list:
if not ds_name.startswith(DS_PREFIX):
ds_name = '%s%s' % (DS_PREFIX, ds_name)
m_locs = importer.find_module(ds_name,
pkg_list,
['get_datasource_list'])
for m_loc in m_locs:
mod = importer.import_module(m_loc)
lister = getattr(mod, "get_datasource_list")
matches = lister(depends)
if matches:
src_list.extend(matches)
break
return src_list
# 'depends' is a list of dependencies (DEP_FILESYSTEM)
# ds_list is a list of 2 item lists
# ds_list = [
# ( class, ( depends-that-this-class-needs ) )
# ]
# It returns a list of 'class' that matched these deps exactly
# It mainly is a helper function for DataSourceCollections
def list_from_depends(depends, ds_list):
ret_list = []
depset = set(depends)
for (cls, deps) in ds_list:
if depset == set(deps):
ret_list.append(cls)
return ret_list
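Since `list_from_depends` matches dependency sets exactly (not as subsets), a quick illustration with stand-in classes (not the real datasource classes):

```python
DEP_FILESYSTEM = "FILESYSTEM"
DEP_NETWORK = "NETWORK"

def list_from_depends(depends, ds_list):
    # Same exact-set matching rule as above: a class is kept only when
    # its declared deps equal the requested deps exactly.
    depset = set(depends)
    return [cls for (cls, deps) in ds_list if depset == set(deps)]

class DataSourceNoCloud(object):
    pass

class DataSourceEc2(object):
    pass

ds_list = [
    (DataSourceNoCloud, (DEP_FILESYSTEM,)),
    (DataSourceEc2, (DEP_FILESYSTEM, DEP_NETWORK)),
]

# Only the class needing both filesystem and network matches this query.
matches = list_from_depends([DEP_FILESYSTEM, DEP_NETWORK], ds_list)
```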

cloudinit/ssh_util.py (new file)
@@ -0,0 +1,314 @@
#!/usr/bin/python
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from StringIO import StringIO
import csv
import os
import pwd
from cloudinit import log as logging
from cloudinit import util
LOG = logging.getLogger(__name__)
# See: man sshd_config
DEF_SSHD_CFG = "/etc/ssh/sshd_config"
class AuthKeyLine(object):
def __init__(self, source, keytype=None, base64=None,
comment=None, options=None):
self.base64 = base64
self.comment = comment
self.options = options
self.keytype = keytype
self.source = source
def empty(self):
if (not self.base64 and
not self.comment and not self.keytype and not self.options):
return True
return False
def __str__(self):
toks = []
if self.options:
toks.append(self.options)
if self.keytype:
toks.append(self.keytype)
if self.base64:
toks.append(self.base64)
if self.comment:
toks.append(self.comment)
if not toks:
return self.source
else:
return ' '.join(toks)
class AuthKeyLineParser(object):
"""
AUTHORIZED_KEYS FILE FORMAT
AuthorizedKeysFile specifies the file containing public keys for public
key authentication; if none is specified, the default is
~/.ssh/authorized_keys. Each line of the file contains one key (empty
lines and lines starting with a '#' are ignored as comments). Note that
lines in this file are usually several hundred bytes long (because of the
size of the public key encoding) up to a limit of 8 kilobytes, which
permits DSA keys up to 8 kilobits and RSA keys up to 16 kilobits. You
don't want to type them in; instead, copy the identity.pub, id_dsa.pub,
or the id_rsa.pub file and edit it.
sshd enforces a minimum RSA key modulus size for protocol 1 and protocol
2 keys of 768 bits.
The options (if present) consist of comma-separated option
specifications. No spaces are permitted, except within double quotes.
The following option specifications are supported (note that option
keywords are case-insensitive):
"""
def _extract_options(self, ent):
"""
The options (if present) consist of comma-separated option
specifications. No spaces are permitted, except within double quotes.
Note that option keywords are case-insensitive.
"""
quoted = False
i = 0
while (i < len(ent) and
((quoted) or (ent[i] not in (" ", "\t")))):
curc = ent[i]
if i + 1 >= len(ent):
i = i + 1
break
nextc = ent[i + 1]
if curc == "\\" and nextc == '"':
i = i + 1
elif curc == '"':
quoted = not quoted
i = i + 1
options = ent[0:i]
options_lst = []
# Now use a csv parser to pull the options
# out of the above string that we just found an endpoint for.
#
# No quoting so we don't mess up any of the quoting that
# is already there.
reader = csv.reader(StringIO(options), quoting=csv.QUOTE_NONE)
for row in reader:
for e in row:
# Only keep non-empty csv options
e = e.strip()
if e:
options_lst.append(e)
# Now take the rest of the items before the string
# as long as there is room to do this...
toks = []
if i + 1 < len(ent):
rest = ent[i + 1:]
toks = rest.split(None, 2)
return (options_lst, toks)
def _form_components(self, src_line, toks, options=None):
components = {}
if len(toks) == 1:
components['base64'] = toks[0]
elif len(toks) == 2:
components['base64'] = toks[0]
components['comment'] = toks[1]
elif len(toks) == 3:
components['keytype'] = toks[0]
components['base64'] = toks[1]
components['comment'] = toks[2]
components['options'] = options
if not components:
return AuthKeyLine(src_line)
else:
return AuthKeyLine(src_line, **components)
def parse(self, src_line, def_opt=None):
line = src_line.rstrip("\r\n")
if line.startswith("#") or line.strip() == '':
return AuthKeyLine(src_line)
else:
ent = line.strip()
toks = ent.split(None, 3)
if len(toks) < 4:
return self._form_components(src_line, toks, def_opt)
else:
(options, toks) = self._extract_options(ent)
if options:
options = ",".join(options)
else:
options = def_opt
return self._form_components(src_line, toks, options)
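For the common no-options case, the tokenization in `parse` reduces to a split into at most three fields; a simplified stand-alone sketch (hypothetical helper, not part of the module):

```python
def parse_simple(line):
    # Simplified AuthKeyLineParser.parse for lines without an options
    # field: returns (keytype, base64, comment), any of which may be
    # None, matching how _form_components assigns 1, 2, or 3 tokens.
    ent = line.strip()
    if not ent or ent.startswith("#"):
        return None  # comment / blank lines carry no key
    toks = ent.split(None, 2)
    if len(toks) == 1:
        return (None, toks[0], None)
    if len(toks) == 2:
        return (None, toks[0], toks[1])
    return (toks[0], toks[1], toks[2])
```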
def parse_authorized_keys(fname):
lines = []
try:
if os.path.isfile(fname):
lines = util.load_file(fname).splitlines()
except (IOError, OSError):
util.logexc(LOG, "Error reading lines from %s", fname)
lines = []
parser = AuthKeyLineParser()
contents = []
for line in lines:
contents.append(parser.parse(line))
return contents
def update_authorized_keys(fname, keys):
entries = parse_authorized_keys(fname)
to_add = list(keys)
for i in range(0, len(entries)):
ent = entries[i]
if ent.empty() or not ent.base64:
continue
# Replace those with the same base64
for k in keys:
if k.empty() or not k.base64:
continue
if k.base64 == ent.base64:
# Replace it with our better one
ent = k
# Don't add it later
if k in to_add:
to_add.remove(k)
entries[i] = ent
# Now append any entries we did not match above
for key in to_add:
entries.append(key)
# Now format them back to strings...
lines = [str(b) for b in entries]
# Ensure it ends with a newline
lines.append('')
return '\n'.join(lines)
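The replace-by-base64 merge that `update_authorized_keys` performs can be sketched over plain dicts (a simplification of the AuthKeyLine objects):

```python
def merge_keys(existing, new):
    # Entries whose base64 payload matches an incoming key are replaced
    # by that key; incoming keys that matched nothing are appended.
    to_add = list(new)
    merged = []
    for ent in existing:
        for k in new:
            if k.get("base64") and k.get("base64") == ent.get("base64"):
                # Replace it with our better one, and don't add it later
                ent = k
                if k in to_add:
                    to_add.remove(k)
        merged.append(ent)
    merged.extend(to_add)
    return merged
```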
def setup_user_keys(keys, user, key_prefix, paths):
# Make sure the users .ssh dir is setup accordingly
pwent = pwd.getpwnam(user)
ssh_dir = os.path.join(pwent.pw_dir, '.ssh')
ssh_dir = paths.join(False, ssh_dir)
if not os.path.exists(ssh_dir):
util.ensure_dir(ssh_dir, mode=0700)
util.chownbyid(ssh_dir, pwent.pw_uid, pwent.pw_gid)
# Turn the keys given into actual entries
parser = AuthKeyLineParser()
key_entries = []
for k in keys:
key_entries.append(parser.parse(str(k), def_opt=key_prefix))
sshd_conf_fn = paths.join(True, DEF_SSHD_CFG)
with util.SeLinuxGuard(ssh_dir, recursive=True):
try:
# AuthorizedKeysFile may contain tokens
# of the form %T which are substituted during connection set-up.
# The following tokens are defined: %% is replaced by a literal
# '%', %h is replaced by the home directory of the user being
# authenticated and %u is replaced by the username of that user.
ssh_cfg = parse_ssh_config_map(sshd_conf_fn)
akeys = ssh_cfg.get("authorizedkeysfile", '')
akeys = akeys.strip()
if not akeys:
akeys = "%h/.ssh/authorized_keys"
akeys = akeys.replace("%h", pwent.pw_dir)
akeys = akeys.replace("%u", user)
akeys = akeys.replace("%%", '%')
if not akeys.startswith('/'):
akeys = os.path.join(pwent.pw_dir, akeys)
authorized_keys = paths.join(False, akeys)
except (IOError, OSError):
authorized_keys = os.path.join(ssh_dir, 'authorized_keys')
util.logexc(LOG, ("Failed extracting 'AuthorizedKeysFile'"
" in ssh config"
" from %s, using 'AuthorizedKeysFile' file"
" %s instead"),
sshd_conf_fn, authorized_keys)
content = update_authorized_keys(authorized_keys, key_entries)
util.ensure_dir(os.path.dirname(authorized_keys), mode=0700)
util.write_file(authorized_keys, content, mode=0600)
util.chownbyid(authorized_keys, pwent.pw_uid, pwent.pw_gid)
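The `AuthorizedKeysFile` token expansion above is a small pure transform; sketched here under a hypothetical helper name:

```python
import os.path

def expand_akeys(akeys, home, user):
    # Expand sshd_config AuthorizedKeysFile tokens as setup_user_keys
    # does: %h -> home dir, %u -> user name, %% -> literal '%'; a missing
    # value falls back to the default, and relative paths are rooted at
    # the user's home directory.
    if not akeys:
        akeys = "%h/.ssh/authorized_keys"
    akeys = akeys.replace("%h", home)
    akeys = akeys.replace("%u", user)
    akeys = akeys.replace("%%", "%")
    if not akeys.startswith("/"):
        akeys = os.path.join(home, akeys)
    return akeys
```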
class SshdConfigLine(object):
def __init__(self, line, k=None, v=None):
self.line = line
self._key = k
self.value = v
@property
def key(self):
if self._key is None:
return None
# Keywords are case-insensitive
return self._key.lower()
def __str__(self):
if self._key is None:
return str(self.line)
else:
v = str(self._key)
if self.value:
v += " " + str(self.value)
return v
def parse_ssh_config(fname):
# See: man sshd_config
# The file contains keyword-argument pairs, one per line.
# Lines starting with '#' and empty lines are interpreted as comments.
# Note: key-words are case-insensitive and arguments are case-sensitive
lines = []
if not os.path.isfile(fname):
return lines
for line in util.load_file(fname).splitlines():
line = line.strip()
if not line or line.startswith("#"):
lines.append(SshdConfigLine(line))
continue
(key, val) = line.split(None, 1)
lines.append(SshdConfigLine(line, key, val))
return lines
def parse_ssh_config_map(fname):
lines = parse_ssh_config(fname)
if not lines:
return {}
ret = {}
for line in lines:
if not line.key:
continue
ret[line.key] = line.value
return ret
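Because keywords are case-insensitive while arguments are case-sensitive, the map form lower-cases only the key; a sketch working on an in-memory string instead of a file:

```python
def ssh_config_map(text):
    # Same shape as parse_ssh_config_map, but over a string: comments
    # and blank lines are skipped, keywords are lower-cased, values
    # are kept verbatim.
    ret = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        (key, val) = line.split(None, 1)
        ret[key.lower()] = val
    return ret
```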

cloudinit/stages.py (new file)
@@ -0,0 +1,551 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import cPickle as pickle
import copy
import os
import sys
from cloudinit.settings import (PER_INSTANCE, FREQUENCIES, CLOUD_CONFIG)
from cloudinit import handlers
# Default handlers (used if not overridden)
from cloudinit.handlers import boot_hook as bh_part
from cloudinit.handlers import cloud_config as cc_part
from cloudinit.handlers import shell_script as ss_part
from cloudinit.handlers import upstart_job as up_part
from cloudinit import cloud
from cloudinit import config
from cloudinit import distros
from cloudinit import helpers
from cloudinit import importer
from cloudinit import log as logging
from cloudinit import sources
from cloudinit import util
LOG = logging.getLogger(__name__)
class Init(object):
def __init__(self, ds_deps=None):
if ds_deps is not None:
self.ds_deps = ds_deps
else:
self.ds_deps = [sources.DEP_FILESYSTEM, sources.DEP_NETWORK]
# Created on first use
self._cfg = None
self._paths = None
self._distro = None
# Created only when a fetch occurs
self.datasource = None
@property
def distro(self):
if not self._distro:
# Try to find the right class to use
scfg = self._extract_cfg('system')
name = scfg.pop('distro', 'ubuntu')
cls = distros.fetch(name)
LOG.debug("Using distro class %s", cls)
self._distro = cls(name, scfg, self.paths)
return self._distro
@property
def cfg(self):
return self._extract_cfg('restricted')
def _extract_cfg(self, restriction):
# Ensure actually read
self.read_cfg()
# Nobody gets the real config
ocfg = copy.deepcopy(self._cfg)
if restriction == 'restricted':
ocfg.pop('system_info', None)
elif restriction == 'system':
ocfg = util.get_cfg_by_path(ocfg, ('system_info',), {})
elif restriction == 'paths':
ocfg = util.get_cfg_by_path(ocfg, ('system_info', 'paths'), {})
if not isinstance(ocfg, (dict)):
ocfg = {}
return ocfg
@property
def paths(self):
if not self._paths:
path_info = self._extract_cfg('paths')
self._paths = helpers.Paths(path_info, self.datasource)
return self._paths
def _initial_subdirs(self):
c_dir = self.paths.cloud_dir
initial_dirs = [
c_dir,
os.path.join(c_dir, 'scripts'),
os.path.join(c_dir, 'scripts', 'per-instance'),
os.path.join(c_dir, 'scripts', 'per-once'),
os.path.join(c_dir, 'scripts', 'per-boot'),
os.path.join(c_dir, 'seed'),
os.path.join(c_dir, 'instances'),
os.path.join(c_dir, 'handlers'),
os.path.join(c_dir, 'sem'),
os.path.join(c_dir, 'data'),
]
return initial_dirs
def purge_cache(self, rm_instance_lnk=True):
rm_list = []
rm_list.append(self.paths.boot_finished)
if rm_instance_lnk:
rm_list.append(self.paths.instance_link)
for f in rm_list:
util.del_file(f)
return len(rm_list)
def initialize(self):
self._initialize_filesystem()
def _initialize_filesystem(self):
util.ensure_dirs(self._initial_subdirs())
log_file = util.get_cfg_option_str(self.cfg, 'def_log_file')
perms = util.get_cfg_option_str(self.cfg, 'syslog_fix_perms')
if log_file:
util.ensure_file(log_file)
if perms:
(u, g) = perms.split(':', 1)
if u == "-1" or u == "None":
u = None
if g == "-1" or g == "None":
g = None
util.chownbyname(log_file, u, g)
def read_cfg(self, extra_fns=None):
# None check so that we don't keep on re-loading if empty
if self._cfg is None:
self._cfg = self._read_cfg(extra_fns)
# LOG.debug("Loaded 'init' config %s", self._cfg)
def _read_base_cfg(self):
base_cfgs = []
default_cfg = util.get_builtin_cfg()
kern_contents = util.read_cc_from_cmdline()
# Kernel/cmdline parameters override system config
if kern_contents:
base_cfgs.append(util.load_yaml(kern_contents, default={}))
# Anything in your conf.d location??
# or the 'default' cloud.cfg location???
base_cfgs.append(util.read_conf_with_confd(CLOUD_CONFIG))
# And finally the default gets to play
if default_cfg:
base_cfgs.append(default_cfg)
return util.mergemanydict(base_cfgs)
def _read_cfg(self, extra_fns):
no_cfg_paths = helpers.Paths({}, self.datasource)
merger = helpers.ConfigMerger(paths=no_cfg_paths,
datasource=self.datasource,
additional_fns=extra_fns,
base_cfg=self._read_base_cfg())
return merger.cfg
def _restore_from_cache(self):
# We try to restore from a current link and static path
# by using the instance link; if purge_cache was called,
# the file won't exist.
pickled_fn = self.paths.get_ipath_cur('obj_pkl')
pickle_contents = None
try:
pickle_contents = util.load_file(pickled_fn)
except Exception:
pass
# An empty or missing pickle is expected, so just report
# that nothing was successfully loaded...
if not pickle_contents:
return None
try:
return pickle.loads(pickle_contents)
except Exception:
util.logexc(LOG, "Failed loading pickled blob from %s", pickled_fn)
return None
def _write_to_cache(self):
if not self.datasource:
return False
pickled_fn = self.paths.get_ipath_cur("obj_pkl")
try:
pk_contents = pickle.dumps(self.datasource)
except Exception:
util.logexc(LOG, "Failed pickling datasource %s", self.datasource)
return False
try:
util.write_file(pickled_fn, pk_contents, mode=0400)
except Exception:
util.logexc(LOG, "Failed pickling datasource to %s", pickled_fn)
return False
return True
def _get_datasources(self):
# Any config provided???
pkg_list = self.cfg.get('datasource_pkg_list') or []
# Add the defaults at the end
for n in ['', util.obj_name(sources)]:
if n not in pkg_list:
pkg_list.append(n)
cfg_list = self.cfg.get('datasource_list') or []
return (cfg_list, pkg_list)
def _get_data_source(self):
if self.datasource:
return self.datasource
ds = self._restore_from_cache()
if ds:
LOG.debug("Restored from cache, datasource: %s", ds)
if not ds:
(cfg_list, pkg_list) = self._get_datasources()
# Deep copy so that user-data handlers can not modify
# (which will affect user-data handlers down the line...)
(ds, dsname) = sources.find_source(self.cfg,
self.distro,
self.paths,
copy.deepcopy(self.ds_deps),
cfg_list,
pkg_list)
LOG.debug("Loaded datasource %s - %s", dsname, ds)
self.datasource = ds
# Ensure we adjust our path members datasource
# now that we have one (thus allowing ipath to be used)
self.paths.datasource = ds
return ds
def _get_instance_subdirs(self):
return ['handlers', 'scripts', 'sems']
def _get_ipath(self, subname=None):
# Force a check to see if anything
# actually comes back, if not
# then a datasource has not been assigned...
instance_dir = self.paths.get_ipath(subname)
if not instance_dir:
raise RuntimeError(("No instance directory is available."
" Has a datasource been fetched??"))
return instance_dir
def _reflect_cur_instance(self):
# Remove the old symlink and attach a new one so
# that further reads/writes connect into the right location
idir = self._get_ipath()
util.del_file(self.paths.instance_link)
util.sym_link(idir, self.paths.instance_link)
# Ensures these dirs exist
dir_list = []
for d in self._get_instance_subdirs():
dir_list.append(os.path.join(idir, d))
util.ensure_dirs(dir_list)
# Write out information on what is being used for the current instance
# and what may have been used for a previous instance...
dp = self.paths.get_cpath('data')
# Write what the datasource was and is..
ds = "%s: %s" % (util.obj_name(self.datasource), self.datasource)
previous_ds = None
ds_fn = os.path.join(idir, 'datasource')
try:
previous_ds = util.load_file(ds_fn).strip()
except Exception:
pass
if not previous_ds:
previous_ds = ds
util.write_file(ds_fn, "%s\n" % ds)
util.write_file(os.path.join(dp, 'previous-datasource'),
"%s\n" % (previous_ds))
# What the instance id was and is...
iid = self.datasource.get_instance_id()
previous_iid = None
iid_fn = os.path.join(dp, 'instance-id')
try:
previous_iid = util.load_file(iid_fn).strip()
except Exception:
pass
if not previous_iid:
previous_iid = iid
util.write_file(iid_fn, "%s\n" % iid)
util.write_file(os.path.join(dp, 'previous-instance-id'),
"%s\n" % (previous_iid))
return iid
def fetch(self):
return self._get_data_source()
def instancify(self):
return self._reflect_cur_instance()
def cloudify(self):
# Form the needed options to cloudify our members
return cloud.Cloud(self.datasource,
self.paths, self.cfg,
self.distro, helpers.Runners(self.paths))
def update(self):
if not self._write_to_cache():
return
self._store_userdata()
def _store_userdata(self):
raw_ud = "%s" % (self.datasource.get_userdata_raw())
util.write_file(self._get_ipath('userdata_raw'), raw_ud, 0600)
processed_ud = "%s" % (self.datasource.get_userdata())
util.write_file(self._get_ipath('userdata'), processed_ud, 0600)
def _default_userdata_handlers(self):
opts = {
'paths': self.paths,
'datasource': self.datasource,
}
# TODO Hmmm, should we dynamically import these??
def_handlers = [
cc_part.CloudConfigPartHandler(**opts),
ss_part.ShellScriptPartHandler(**opts),
bh_part.BootHookPartHandler(**opts),
up_part.UpstartJobPartHandler(**opts),
]
return def_handlers
def consume_userdata(self, frequency=PER_INSTANCE):
cdir = self.paths.get_cpath("handlers")
idir = self._get_ipath("handlers")
# Add the path to the plugins dir to the top of our list for import
# instance dir should be read before cloud-dir
if cdir and cdir not in sys.path:
sys.path.insert(0, cdir)
if idir and idir not in sys.path:
sys.path.insert(0, idir)
# Ensure datasource fetched before activation (just in case)
user_data_msg = self.datasource.get_userdata()
# This keeps track of all the active handlers
c_handlers = helpers.ContentHandlers()
# Add handlers in cdir
potential_handlers = util.find_modules(cdir)
for (fname, mod_name) in potential_handlers.iteritems():
try:
mod_locs = importer.find_module(mod_name, [''],
['list_types',
'handle_part'])
if not mod_locs:
LOG.warn(("Could not find a valid user-data handler"
" named %s in file %s"), mod_name, fname)
continue
mod = importer.import_module(mod_locs[0])
mod = handlers.fixup_handler(mod)
types = c_handlers.register(mod)
LOG.debug("Added handler for %s from %s", types, fname)
except:
util.logexc(LOG, "Failed to register handler from %s", fname)
def_handlers = self._default_userdata_handlers()
applied_def_handlers = c_handlers.register_defaults(def_handlers)
if applied_def_handlers:
LOG.debug("Registered default handlers: %s", applied_def_handlers)
# Form our cloud interface
data = self.cloudify()
# Init the handlers first
called = []
for (_ctype, mod) in c_handlers.iteritems():
if mod in called:
continue
handlers.call_begin(mod, data, frequency)
called.append(mod)
# Walk the user data
part_data = {
'handlers': c_handlers,
# Any new handlers that are encountered get written here
'handlerdir': idir,
'data': data,
# The default frequency if handlers don't have one
'frequency': frequency,
# This will be used when new handlers are found
# to help write their contents to files with numbered
# names...
'handlercount': 0,
}
handlers.walk(user_data_msg, handlers.walker_callback, data=part_data)
# Give callbacks opportunity to finalize
called = []
for (_ctype, mod) in c_handlers.iteritems():
if mod in called:
continue
handlers.call_end(mod, data, frequency)
called.append(mod)
class Modules(object):
def __init__(self, init, cfg_files=None):
self.init = init
self.cfg_files = cfg_files
# Created on first use
self._cached_cfg = None
@property
def cfg(self):
# None check to avoid empty case causing re-reading
if self._cached_cfg is None:
merger = helpers.ConfigMerger(paths=self.init.paths,
datasource=self.init.datasource,
additional_fns=self.cfg_files,
base_cfg=self.init.cfg)
self._cached_cfg = merger.cfg
# LOG.debug("Loading 'module' config %s", self._cached_cfg)
# Only give out a copy so that others can't modify this...
return copy.deepcopy(self._cached_cfg)
def _read_modules(self, name):
module_list = []
if name not in self.cfg:
return module_list
cfg_mods = self.cfg[name]
# Create 'module_list', an array of hashes
# Where hash['mod'] = module name
# hash['freq'] = frequency
# hash['args'] = arguments
for item in cfg_mods:
if not item:
continue
if isinstance(item, (str, basestring)):
module_list.append({
'mod': item.strip(),
})
elif isinstance(item, (list)):
contents = {}
# Meant to fall through...
if len(item) >= 1:
contents['mod'] = item[0].strip()
if len(item) >= 2:
contents['freq'] = item[1].strip()
if len(item) >= 3:
contents['args'] = item[2:]
if contents:
module_list.append(contents)
elif isinstance(item, (dict)):
contents = {}
valid = False
if 'name' in item:
contents['mod'] = item['name'].strip()
valid = True
if 'frequency' in item:
contents['freq'] = item['frequency'].strip()
if 'args' in item:
contents['args'] = item['args'] or []
if contents and valid:
module_list.append(contents)
else:
raise TypeError(("Failed to read '%s' item in config,"
" unknown type %s") %
(item, util.obj_name(item)))
return module_list
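The three accepted item shapes (string, list, dict) all normalize to the same hash; a condensed sketch of that per-item rule:

```python
def normalize_item(item):
    # Collapse one config module entry to {'mod': ..., 'freq': ...,
    # 'args': ...} the way _read_modules does for its three shapes.
    if isinstance(item, str):
        return {"mod": item.strip()}
    if isinstance(item, list):
        contents = {}
        if len(item) >= 1:
            contents["mod"] = item[0].strip()
        if len(item) >= 2:
            contents["freq"] = item[1].strip()
        if len(item) >= 3:
            contents["args"] = item[2:]
        return contents
    if isinstance(item, dict) and "name" in item:
        contents = {"mod": item["name"].strip()}
        if "frequency" in item:
            contents["freq"] = item["frequency"].strip()
        if "args" in item:
            contents["args"] = item["args"] or []
        return contents
    raise TypeError("unknown module specification: %r" % (item,))
```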
def _fixup_modules(self, raw_mods):
mostly_mods = []
for raw_mod in raw_mods:
raw_name = raw_mod['mod']
freq = raw_mod.get('freq')
run_args = raw_mod.get('args') or []
mod_name = config.form_module_name(raw_name)
if not mod_name:
continue
if freq and freq not in FREQUENCIES:
LOG.warn(("Config specified module %s"
" has an unknown frequency %s"), raw_name, freq)
# Reset it so that when run it will get set to a known value
freq = None
mod_locs = importer.find_module(mod_name,
['', util.obj_name(config)],
['handle'])
if not mod_locs:
LOG.warn("Could not find module named %s", mod_name)
continue
mod = config.fixup_module(importer.import_module(mod_locs[0]))
mostly_mods.append([mod, raw_name, freq, run_args])
return mostly_mods
def _run_modules(self, mostly_mods):
d_name = self.init.distro.name
cc = self.init.cloudify()
# Return which ones ran
# and which ones failed + the exception of why it failed
failures = []
which_ran = []
for (mod, name, freq, args) in mostly_mods:
try:
# Try the module's frequency, otherwise fall back to a known one
if not freq:
freq = mod.frequency
if freq not in FREQUENCIES:
freq = PER_INSTANCE
worked_distros = mod.distros
if (worked_distros and d_name not in worked_distros):
LOG.warn(("Module %s is verified on %s distros"
" but not on %s distro. It may or may not work"
" correctly."), name, worked_distros, d_name)
# Use the configs logger and not our own
# TODO: possibly check the module
# for having a LOG attr and just give it back
# its own logger?
func_args = [name, self.cfg,
cc, config.LOG, args]
# Mark it as having started running
which_ran.append(name)
# This name will affect the semaphore name created
run_name = "config-%s" % (name)
cc.run(run_name, mod.handle, func_args, freq=freq)
except Exception as e:
util.logexc(LOG, "Running %s (%s) failed", name, mod)
failures.append((name, e))
return (which_ran, failures)
def run_single(self, mod_name, args=None, freq=None):
# Form the users module 'specs'
mod_to_be = {
'mod': mod_name,
'args': args,
'freq': freq,
}
# Now resume doing the normal fixups and running
raw_mods = [mod_to_be]
mostly_mods = self._fixup_modules(raw_mods)
return self._run_modules(mostly_mods)
def run_section(self, section_name):
raw_mods = self._read_modules(section_name)
mostly_mods = self._fixup_modules(raw_mods)
return self._run_modules(mostly_mods)

cloudinit/templater.py (new file)
@@ -0,0 +1,41 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from tempita import Template
from cloudinit import util
def render_from_file(fn, params):
return render_string(util.load_file(fn), params, name=fn)
def render_to_file(fn, outfn, params, mode=0644):
contents = render_from_file(fn, params)
util.write_file(outfn, contents, mode=mode)
def render_string(content, params, name=None):
tpl = Template(content, name=name)
if not params:
params = dict()
return tpl.substitute(params)
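For illustration, the same entry-point shape can be mimicked with the stdlib's `string.Template` standing in for tempita (tempita's `{{name}}` syntax differs; this is only a shape-alike sketch):

```python
from string import Template

def render_string(content, params):
    # Shape-alike of templater.render_string: substitute params into
    # the template, tolerating an empty/None params mapping.
    if not params:
        params = dict()
    return Template(content).substitute(params)

greeting = render_string("hello $user", {"user": "world"})
```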

cloudinit/url_helper.py (new file)
@@ -0,0 +1,226 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from contextlib import closing
import errno
import socket
import time
import urllib
import urllib2
from cloudinit import log as logging
from cloudinit import version
LOG = logging.getLogger(__name__)
class UrlResponse(object):
def __init__(self, status_code, contents=None, headers=None):
self._status_code = status_code
self._contents = contents
self._headers = headers
@property
def code(self):
return self._status_code
@property
def contents(self):
return self._contents
@property
def headers(self):
return self._headers
def __str__(self):
if not self.contents:
return ''
else:
return str(self.contents)
def ok(self, redirects_ok=False):
upper = 300
if redirects_ok:
upper = 400
if self.code >= 200 and self.code < upper:
return True
else:
return False
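`ok()` is a plain status-range check; extracted as a free function for clarity (same rule, different packaging):

```python
def response_ok(code, redirects_ok=False):
    # Mirror of UrlResponse.ok: 2xx always succeeds, 3xx only counts
    # as success when the caller opts in to redirects.
    upper = 400 if redirects_ok else 300
    return 200 <= code < upper
```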
def readurl(url, data=None, timeout=None,
retries=0, sec_between=1, headers=None):
req_args = {}
req_args['url'] = url
if data is not None:
req_args['data'] = urllib.urlencode(data)
if not headers:
headers = {
'User-Agent': 'Cloud-Init/%s' % (version.version_string()),
}
req_args['headers'] = headers
req = urllib2.Request(**req_args)
retries = max(retries, 0)
attempts = retries + 1
excepts = []
LOG.debug(("Attempting to open '%s' with %s attempts"
" (%s retries, timeout=%s) to be performed"),
url, attempts, retries, timeout)
open_args = {}
if timeout is not None:
open_args['timeout'] = int(timeout)
for i in range(0, attempts):
try:
with closing(urllib2.urlopen(req, **open_args)) as rh:
content = rh.read()
status = rh.getcode()
if status is None:
# This seems to happen when files are read...
status = 200
headers = {}
if rh.headers:
headers = dict(rh.headers)
LOG.debug("Read from %s (%s, %sb) after %s attempts",
url, status, len(content), (i + 1))
return UrlResponse(status, content, headers)
except urllib2.HTTPError as e:
excepts.append(e)
except urllib2.URLError as e:
# This can be a message string or
# another exception instance
# (socket.error for remote URLs, OSError for local URLs).
if (isinstance(e.reason, (OSError)) and
e.reason.errno == errno.ENOENT):
excepts.append(e.reason)
else:
excepts.append(e)
except Exception as e:
excepts.append(e)
if i + 1 < attempts:
LOG.debug("Please wait %s seconds while we wait to try again",
sec_between)
time.sleep(sec_between)
# Didn't work out
LOG.warn("Failed reading from %s after %s attempts", url, attempts)
# It must have errored at least once for code
# to get here, so re-raise the last error
LOG.debug("%s errors occurred, re-raising the last one", len(excepts))
raise excepts[-1]
def wait_for_url(urls, max_wait=None, timeout=None,
status_cb=None, headers_cb=None, sleep_time=1):
"""
urls: a list of urls to try
max_wait: roughly the maximum time to wait before giving up
The max time is *actually* len(urls)*timeout as each url will
be tried once and given the timeout provided.
timeout: the timeout provided to urllib2.urlopen
status_cb: call method with string message when a url is not available
headers_cb: call method with single argument of url to get headers
for request.
The idea of this routine is to wait for the EC2 metadata service to
come up. On both Eucalyptus and EC2 we have seen the case where
the instance hit the MD before the MD service was up. EC2 seems
to have permanently fixed this, though.
In openstack, the metadata service might be painfully slow, and
unable to avoid hitting a timeout of even up to 10 seconds or more
(LP: #894279) for a simple GET.
Offset those needs with the need to not hang forever (and block boot)
on a system where cloud-init is configured to look for EC2 Metadata
service but is not going to find one. It is possible that the instance
data host (169.254.169.254) may be firewalled off entirely for a system,
meaning that the connection will block forever unless a timeout is set.
"""
start_time = time.time()
def log_status_cb(msg):
LOG.debug(msg)
if status_cb is None:
status_cb = log_status_cb
def timeup(max_wait, start_time):
return ((max_wait <= 0 or max_wait is None) or
(time.time() - start_time > max_wait))
loop_n = 0
while True:
sleep_time = int(loop_n / 5) + 1
for url in urls:
now = time.time()
if loop_n != 0:
if timeup(max_wait, start_time):
break
if timeout and (now + timeout > (start_time + max_wait)):
# shorten timeout to not run way over max_time
timeout = int((start_time + max_wait) - now)
reason = ""
try:
if headers_cb is not None:
headers = headers_cb(url)
else:
headers = {}
resp = readurl(url, headers=headers, timeout=timeout)
if not resp.contents:
reason = "empty response [%s]" % (resp.code)
elif not resp.ok():
reason = "bad status code [%s]" % (resp.code)
else:
return url
except urllib2.HTTPError as e:
reason = "http error [%s]" % e.code
except urllib2.URLError as e:
reason = "url error [%s]" % e.reason
except socket.timeout as e:
reason = "socket timeout [%s]" % e
except Exception as e:
reason = "unexpected error [%s]" % e
time_taken = int(time.time() - start_time)
status_msg = "Calling '%s' failed [%s/%ss]: %s" % (url,
time_taken,
max_wait, reason)
status_cb(status_msg)
if timeup(max_wait, start_time):
break
loop_n = loop_n + 1
LOG.debug("Please wait %s seconds while we wait to try again",
sleep_time)
time.sleep(sleep_time)
return False
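Note the loop overwrites its `sleep_time` argument each pass: the delay grows by one second every five iterations (`int(loop_n / 5) + 1`). The resulting schedule can be made explicit:

```python
def backoff_schedule(loops):
    # Sleep schedule used by wait_for_url's retry loop: 1s for the
    # first five iterations, 2s for the next five, and so on.
    return [int(n / 5) + 1 for n in range(loops)]
```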

cloudinit/user_data.py (new file)
@@ -0,0 +1,243 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os

import email
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase

from cloudinit import handlers
from cloudinit import log as logging
from cloudinit import url_helper
from cloudinit import util

LOG = logging.getLogger(__name__)
# Constants copied in from the handler module
NOT_MULTIPART_TYPE = handlers.NOT_MULTIPART_TYPE
PART_FN_TPL = handlers.PART_FN_TPL
OCTET_TYPE = handlers.OCTET_TYPE
# Saves typing errors
CONTENT_TYPE = 'Content-Type'
# Various special content types that cause special actions
TYPE_NEEDED = ["text/plain", "text/x-not-multipart"]
INCLUDE_TYPES = ['text/x-include-url', 'text/x-include-once-url']
ARCHIVE_TYPES = ["text/cloud-config-archive"]
UNDEF_TYPE = "text/plain"
ARCHIVE_UNDEF_TYPE = "text/cloud-config"
# Msg header used to track attachments
ATTACHMENT_FIELD = 'Number-Attachments'
class UserDataProcessor(object):
    def __init__(self, paths):
        self.paths = paths

    def process(self, blob):
        base_msg = convert_string(blob)
        process_msg = MIMEMultipart()
        self._process_msg(base_msg, process_msg)
        return process_msg

    def _process_msg(self, base_msg, append_msg):
        for part in base_msg.walk():
            # multipart/* are just containers
            if part.get_content_maintype() == 'multipart':
                continue
            ctype = None
            ctype_orig = part.get_content_type()
            payload = part.get_payload(decode=True)
            if not ctype_orig:
                ctype_orig = UNDEF_TYPE
            if ctype_orig in TYPE_NEEDED:
                ctype = handlers.type_from_starts_with(payload)
            if ctype is None:
                ctype = ctype_orig
            if ctype in INCLUDE_TYPES:
                self._do_include(payload, append_msg)
                continue
            if ctype in ARCHIVE_TYPES:
                self._explode_archive(payload, append_msg)
                continue
            if CONTENT_TYPE in base_msg:
                base_msg.replace_header(CONTENT_TYPE, ctype)
            else:
                base_msg[CONTENT_TYPE] = ctype
            self._attach_part(append_msg, part)
    def _get_include_once_filename(self, entry):
        entry_fn = util.hash_blob(entry, 'md5', 64)
        return os.path.join(self.paths.get_ipath_cur('data'),
                            'urlcache', entry_fn)

    def _do_include(self, content, append_msg):
        # Include a list of urls, one per line
        # also support '#include <url here>'
        # or '#include-once <url here>'
        include_once_on = False
        for line in content.splitlines():
            lc_line = line.lower()
            if lc_line.startswith("#include-once"):
                line = line[len("#include-once"):].lstrip()
                # Every following include will now
                # not be refetched.... but will be
                # re-read from a local urlcache (if it worked)
                include_once_on = True
            elif lc_line.startswith("#include"):
                line = line[len("#include"):].lstrip()
                # Disable the include-once if it was on;
                # if it wasn't, then this has no effect.
                include_once_on = False
            if line.startswith("#"):
                continue
            include_url = line.strip()
            if not include_url:
                continue
            include_once_fn = None
            content = None
            if include_once_on:
                include_once_fn = self._get_include_once_filename(include_url)
            if include_once_on and os.path.isfile(include_once_fn):
                content = util.load_file(include_once_fn)
            else:
                resp = url_helper.readurl(include_url)
                if include_once_on and resp.ok():
                    util.write_file(include_once_fn, str(resp), mode=0600)
                if resp.ok():
                    content = str(resp)
                else:
                    LOG.warn(("Fetching from %s resulted in"
                              " an invalid http code of %s"),
                             include_url, resp.code)
            if content is not None:
                new_msg = convert_string(content)
                self._process_msg(new_msg, append_msg)
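The directive handling in `_do_include` is stateful: an `#include-once` line switches subsequent URLs to cached fetching until a plain `#include` switches it back off. That toggling can be sketched in isolation (`parse_include` is an illustrative helper, not part of the module, and it only collects URLs instead of fetching them):

```python
def parse_include(content):
    """Split an include payload into (url, fetch_once) pairs,
    following the same '#include' / '#include-once' toggling."""
    include_once_on = False
    urls = []
    for line in content.splitlines():
        lc_line = line.lower()
        if lc_line.startswith("#include-once"):
            line = line[len("#include-once"):].lstrip()
            include_once_on = True
        elif lc_line.startswith("#include"):
            line = line[len("#include"):].lstrip()
            include_once_on = False
        if line.startswith("#"):
            # any other comment line is skipped
            continue
        url = line.strip()
        if url:
            urls.append((url, include_once_on))
    return urls

payload = "#include\nhttp://a.example/one\n#include-once\nhttp://a.example/two\n"
print(parse_include(payload))
# [('http://a.example/one', False), ('http://a.example/two', True)]
```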
    def _explode_archive(self, archive, append_msg):
        entries = util.load_yaml(archive, default=[], allowed=[list, set])
        for ent in entries:
            # ent can be one of:
            #   dict { 'filename' : 'value', 'content' :
            #          'value', 'type' : 'value' }
            #   (filename and type may not be present)
            # or
            #   scalar(payload)
            if isinstance(ent, (str, basestring)):
                ent = {'content': ent}
            if not isinstance(ent, (dict)):
                # TODO: raise?
                continue
            content = ent.get('content', '')
            mtype = ent.get('type')
            if not mtype:
                mtype = handlers.type_from_starts_with(content,
                                                       ARCHIVE_UNDEF_TYPE)
            maintype, subtype = mtype.split('/', 1)
            if maintype == "text":
                msg = MIMEText(content, _subtype=subtype)
            else:
                msg = MIMEBase(maintype, subtype)
                msg.set_payload(content)
            if 'filename' in ent:
                msg.add_header('Content-Disposition',
                               'attachment', filename=ent['filename'])
            for header in list(ent.keys()):
                if header in ('content', 'filename', 'type'):
                    continue
                msg.add_header(header, ent[header])
            self._attach_part(append_msg, msg)
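The entry-to-part normalization above can be illustrated on its own. A minimal sketch (`entry_to_part` is an illustrative helper; the real method additionally sniffs the content type via `handlers.type_from_starts_with` and copies extra headers):

```python
from email.mime.base import MIMEBase
from email.mime.text import MIMEText

def entry_to_part(ent, default_type="text/cloud-config"):
    """Normalize one cloud-config-archive entry (scalar or dict)
    into a MIME part, following the same rules as _explode_archive."""
    if isinstance(ent, str):
        # a bare scalar is treated as content with no explicit type
        ent = {'content': ent}
    content = ent.get('content', '')
    mtype = ent.get('type') or default_type
    maintype, subtype = mtype.split('/', 1)
    if maintype == "text":
        msg = MIMEText(content, _subtype=subtype)
    else:
        msg = MIMEBase(maintype, subtype)
        msg.set_payload(content)
    if 'filename' in ent:
        msg.add_header('Content-Disposition', 'attachment',
                       filename=ent['filename'])
    return msg

part = entry_to_part({'content': '#!/bin/sh\necho hi',
                      'type': 'text/x-shellscript',
                      'filename': 'hello.sh'})
print(part.get_content_type())  # text/x-shellscript
print(part.get_filename())      # hello.sh
```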
    def _multi_part_count(self, outer_msg, new_count=None):
        """
        Return the number of attachments to this MIMEMultipart by looking
        at its 'Number-Attachments' header.
        """
        if ATTACHMENT_FIELD not in outer_msg:
            outer_msg[ATTACHMENT_FIELD] = '0'
        if new_count is not None:
            outer_msg.replace_header(ATTACHMENT_FIELD, str(new_count))
        fetched_count = 0
        try:
            fetched_count = int(outer_msg.get(ATTACHMENT_FIELD))
        except (ValueError, TypeError):
            outer_msg.replace_header(ATTACHMENT_FIELD, str(fetched_count))
        return fetched_count

    def _part_filename(self, _unnamed_part, count):
        return PART_FN_TPL % (count + 1)

    def _attach_part(self, outer_msg, part):
        """
        Attach a part to an outer message. outer_msg must be a MIMEMultipart.
        Modifies a header in the message to keep track of the number of
        attachments.
        """
        cur_c = self._multi_part_count(outer_msg)
        if not part.get_filename():
            fn = self._part_filename(part, cur_c)
            part.add_header('Content-Disposition',
                            'attachment', filename=fn)
        outer_msg.attach(part)
        self._multi_part_count(outer_msg, cur_c + 1)
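The attach-and-count pair above can be demonstrated end to end with the standard library alone. A minimal sketch, assuming a `part-%03d` filename template like `PART_FN_TPL` (`attach_counted` is an illustrative helper that merges the two methods):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

ATTACHMENT_FIELD = 'Number-Attachments'

def attach_counted(outer, part):
    """Attach a part and bump the Number-Attachments header, as
    _attach_part and _multi_part_count do together."""
    count = int(outer.get(ATTACHMENT_FIELD, '0'))
    if not part.get_filename():
        # unnamed parts get a generated attachment filename
        part.add_header('Content-Disposition', 'attachment',
                        filename='part-%03d' % (count + 1))
    outer.attach(part)
    if ATTACHMENT_FIELD in outer:
        outer.replace_header(ATTACHMENT_FIELD, str(count + 1))
    else:
        outer[ATTACHMENT_FIELD] = str(count + 1)

outer = MIMEMultipart()
attach_counted(outer, MIMEText("echo hi", _subtype="x-shellscript"))
attach_counted(outer, MIMEText("runcmd: [ls]", _subtype="cloud-config"))
print(outer[ATTACHMENT_FIELD])  # 2
```

Keeping the count in a header (rather than in Python state) means any later consumer of the multipart message can recover it without re-walking the parts.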
# Converts a raw string into a mime message
def convert_string(raw_data, headers=None):
    if not raw_data:
        raw_data = ''
    if not headers:
        headers = {}
    data = util.decomp_str(raw_data)
    if "mime-version:" in data[0:4096].lower():
        msg = email.message_from_string(data)
        for (key, val) in headers.iteritems():
            if key in msg:
                msg.replace_header(key, val)
            else:
                msg[key] = val
    else:
        mtype = headers.get(CONTENT_TYPE, NOT_MULTIPART_TYPE)
        maintype, subtype = mtype.split("/", 1)
        msg = MIMEBase(maintype, subtype, *headers)
        msg.set_payload(data)
    return msg
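The branch logic in `convert_string` (parse real MIME input, otherwise wrap the raw blob in a single part) can be sketched independently; `convert_string_demo` is an illustrative stand-in that skips the decompression step:

```python
import email
from email.mime.base import MIMEBase

def convert_string_demo(data, ctype="text/x-not-multipart"):
    """Mirror convert_string's branch logic: parse full MIME text when a
    MIME-Version header appears early, otherwise wrap the blob in a
    single part of the given type."""
    if "mime-version:" in data[0:4096].lower():
        return email.message_from_string(data)
    maintype, subtype = ctype.split("/", 1)
    msg = MIMEBase(maintype, subtype)
    msg.set_payload(data)
    return msg

plain = convert_string_demo("#cloud-config\nhostname: demo")
print(plain.get_content_type())  # text/x-not-multipart

mime = convert_string_demo("MIME-Version: 1.0\n"
                           "Content-Type: text/plain\n\nhello")
print(mime.get_content_type())   # text/plain
```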

File diff suppressed because it is too large

cloudinit/version.py (new file)

@@ -0,0 +1,27 @@
# vi: ts=4 expandtab
#
# Copyright (C) 2012 Yahoo! Inc.
#
# Author: Joshua Harlow <harlowja@yahoo-inc.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from distutils import version as vr


def version():
    return vr.StrictVersion("0.7.0")


def version_string():
    return str(version())

@@ -1,8 +1,24 @@
user: ubuntu
disable_root: 1
preserve_hostname: False
# datasource_list: ["NoCloud", "ConfigDrive", "OVF", "MAAS", "Ec2", "CloudStack"]
# The top level settings are used as module
# and system configuration.
# This user will have its password adjusted
user: ubuntu
# If this is set, 'root' will not be able to ssh in and they
# will get a message to login instead as the above $user (ubuntu)
disable_root: true
# This will cause the set+update hostname module to not operate (if true)
preserve_hostname: false
# Example datasource config
# datasource:
#    Ec2:
#      metadata_urls: [ 'blah.com' ]
#      timeout: 5     # (defaults to 50 seconds)
#      max_wait: 10   # (defaults to 120 seconds)
# The modules that run in the 'init' stage
cloud_init_modules:
- bootcmd
- resizefs
@@ -13,6 +29,7 @@ cloud_init_modules:
- rsyslog
- ssh
# The modules that run in the 'config' stage
cloud_config_modules:
- mounts
- ssh-import-id
@@ -31,6 +48,7 @@ cloud_config_modules:
- runcmd
- byobu
# The modules that run in the 'final' stage
cloud_final_modules:
- rightscale_userdata
- scripts-per-once
@@ -40,3 +58,17 @@ cloud_final_modules:
- keys-to-console
- phone-home
- final-message
# System and/or distro specific settings
# (not accessible to handlers/transforms)
system_info:
  # This will affect which distro class gets used
  distro: ubuntu
  # Other config here will be given to the distro class and/or path classes
  paths:
    cloud_dir: /var/lib/cloud/
    templates_dir: /etc/cloud/templates/
    upstart_dir: /etc/init/
  package_mirror: http://archive.ubuntu.com/ubuntu
  availability_zone_template: http://%(zone)s.ec2.archive.ubuntu.com/ubuntu/
  ssh_svcname: ssh
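Per the branch description, these `system_info.paths` keys now feed a paths object rather than global config. A toy sketch of how such an object could consume this block (the real `helpers.Paths` class has more lookups and supports a configurable root; names here are illustrative):

```python
import os

class Paths(object):
    """Toy sketch of a paths object driven by the 'paths' config above."""
    def __init__(self, path_cfgs):
        self.cloud_dir = path_cfgs.get('cloud_dir', '/var/lib/cloud/')
        self.template_dir = path_cfgs.get('templates_dir',
                                          '/etc/cloud/templates/')

    def get_cpath(self, name):
        # join a well-known subdirectory onto the cloud dir
        return os.path.join(self.cloud_dir, name)

paths = Paths({'cloud_dir': '/var/lib/cloud/',
               'templates_dir': '/etc/cloud/templates/'})
print(paths.get_cpath('data'))  # /var/lib/cloud/data
```

Routing all path construction through one object is what makes the test-time root override described in the branch summary possible: tests can hand in a different `cloud_dir` instead of patching globals.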

@@ -1,4 +1,4 @@
## this yaml formated config file handles setting
## This yaml formatted config file handles setting
## logger information. The values that are necessary to be set
## are seen at the bottom. The top '_log' entries are only used to remove
## redundancy in a syslog and fallback-to-file case.
@@ -53,5 +53,9 @@ _log:
args=("/dev/log", handlers.SysLogHandler.LOG_USER)
log_cfgs:
# These will be joined into a string that defines the configuration
- [ *log_base, *log_syslog ]
# These will be joined into a string that defines the configuration
- [ *log_base, *log_file ]
# A file path can also be used
# - /etc/log.conf

@@ -1,29 +0,0 @@
#!/usr/bin/make -f
DEB_PYTHON2_MODULE_PACKAGES = cloud-init
binary-install/cloud-init::cloud-init-fixups
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/python-distutils.mk
DEB_DH_INSTALL_SOURCEDIR := debian/tmp
cloud-init-fixups:
	for x in $(DEB_DESTDIR)/usr/bin/*.py; do mv "$$x" "$${x%.py}"; done
	install -d $(DEB_DESTDIR)/etc/rsyslog.d
	cp tools/21-cloudinit.conf $(DEB_DESTDIR)/etc/rsyslog.d/21-cloudinit.conf
	ln -sf cloud-init-per $(DEB_DESTDIR)/usr/bin/cloud-init-run-module
# You only need to run this immediately after checking out the package from
# revision control.
# http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=572204
quilt-setup:
	@[ ! -d .pc ] || { echo ".pc exists. remove it and re-run to start fresh"; exit 1; }
	set -e; for patch in $$(quilt series | tac); do \
		patch -p1 -R --no-backup-if-mismatch <"debian/patches/$$patch"; \
	done
	quilt push -a
.PHONY: quilt-setup

@@ -1,31 +0,0 @@
#!/bin/sh
# cd $(DEB_SRCDIR) && $(call cdbs_python_binary,python$(cdbs_python_compile_version)) $(DEB_PYTHON_SETUP_CMD) install --root=$(cdbs_python_destdir) $(DEB_PYTHON_INSTALL_ARGS_ALL)
# for ddir in $(cdbs_python_destdir)/usr/lib/python?.?/dist-packages; do \
# [ -d $$ddir ] || continue; \
# sdir=$$(dirname $$ddir)/site-packages; \
# mkdir -p $$sdir; \
# tar -c -f - -C $$ddir . | tar -x -f - -C $$sdir; \
# rm -rf $$ddir; \
# done
DEB_PYTHON_INSTALL_ARGS_ALL="-O0 --install-layout=deb"
rm -Rf build
destdir=$(readlink -f ${1})
[ -z "${destdir}" ] && { echo "give destdir"; exit 1; }
cd $(dirname ${0})
./setup.py install --root=${destdir} ${DEB_PYTHON_INSTALL_ARGS_ALL}
#mkdir -p ${destdir}/usr/share/pyshared
#for x in ${destdir}/usr/lib/python2.6/dist-packages/*; do
# [ -d "$x" ] || continue
# [ ! -d "${destdir}/usr/share/pyshared/${x##*/}" ] ||
# rm -Rf "${destdir}/usr/share/pyshared/${x##*/}"
# mv $x ${destdir}/usr/share/pyshared
#done
#rm -Rf ${destdir}/usr/lib/python2.6
for x in "${destdir}/usr/bin/"*.py; do
    [ -f "${x}" ] && mv "${x}" "${x%.py}"
done

Some files were not shown because too many files have changed in this diff.