Commit f53fc46a authored by Wesley Wiedenmeier, committed by Scott Moser

integration test: initial commit of integration test framework

This adds end-to-end testing of cloud-init. The framework utilizes
LXD and cloud images as a backend to test user-data passed in.
Arbitrary data is then captured from predefined commands specified
by the user. After collection, data verification is completed by
running a series of Python unit tests against the collected data.

Currently only the Ubuntu Trusty, Xenial, Yakkety, and Zesty
releases are supported. Test cases for 50% of the modules are
complete and available.

Additionally, a Read the Docs page was created to guide test
writing and execution.
parent b2a9f336
@@ -41,6 +41,7 @@ initialization of a cloud instance.
.. _Cloud-init:
.. vi: textwidth=78
.. contents:: Table of Contents
:depth: 2
Test Development
================
The purpose of this page is to describe how to write integration tests for
cloud-init. As a test writer you need to develop a test configuration and
a verification file:
* The test configuration specifies a specific cloud-config to be used by
cloud-init and a list of arbitrary commands to capture the output of
(e.g. my_test.yaml)
* The verification file runs tests on the collected output to determine
the result of the test (e.g. my_test.py)
The names must match, however the extensions will of course be different,
yaml vs py.
The test configuration is a YAML file such as *ntp_server.yaml* below:
.. code-block:: yaml

    # NTP config using specific servers (ntp_server.yaml)
    cloud_config: |
      #cloud-config
      ntp:
        servers:
          - pool.ntp.org
    collect_scripts:
      ntp_installed_servers: |
        dpkg -l | grep ntp | wc -l
      ntp_conf_dist_servers: |
        ls /etc/ntp.conf.dist | wc -l
      ntp_conf_servers: |
        cat /etc/ntp.conf | grep '^server'
There are two keys, one required and one optional, in the YAML file:

1. The required key is ``cloud_config``. This should be a string of valid
   YAML that is exactly what would normally be placed in a cloud-config
   file, including the cloud-config header. This essentially sets up the
   scenario under test.

2. The optional key is ``collect_scripts``. This key has one or more
   sub-keys containing strings of arbitrary commands to execute (e.g.
   ``cat /var/log/cloud-config-output.log``). In the example above the
   output of dpkg is captured, grepped for ntp, and the number of matching
   lines reported. The name of each sub-key is important: it is used by
   the verification script to recall the output of the commands run.
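The relationship between the two keys can be sketched in a few lines of
Python. The dict below mirrors what a YAML loader would produce for the
example above; the ``validate_test_config`` helper is hypothetical and not
part of the framework:

```python
def validate_test_config(cfg):
    """Split a parsed test config into (cloud_config, collect_scripts).

    Hypothetical helper, illustrating only the required/optional key
    handling described above.
    """
    if 'cloud_config' not in cfg:
        raise ValueError('cloud_config is a required key')
    # collect_scripts is optional; default to no extra commands
    return cfg['cloud_config'], cfg.get('collect_scripts', {})


# mirrors what a YAML loader would produce for ntp_server.yaml
test_config = {
    'cloud_config': "#cloud-config\nntp:\n  servers: ['pool.ntp.org']\n",
    'collect_scripts': {
        'ntp_installed_servers': "dpkg -l | grep ntp | wc -l\n",
    },
}

user_data, scripts = validate_test_config(test_config)
```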
Default Collect Scripts
-----------------------
By default the following files will be collected for every test. There is
no need to specify these items:
* ``/var/log/cloud-init.log``
* ``/var/log/cloud-init-output.log``
* ``/run/cloud-init/.instance-id``
* ``/run/cloud-init/result.json``
* ``/run/cloud-init/status.json``
* ``dpkg-query -W -f='${Version}' cloud-init``
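These defaults could be modeled as a mapping from output file name to the
command that produces it, as in this sketch (the constant name and output
file names are illustrative, not the framework's actual internals):

```python
# Illustrative only: one possible representation of the default
# collection, mapping output file name -> command whose output it stores.
DEFAULT_COLLECT_SCRIPTS = {
    'cloud-init.log': 'cat /var/log/cloud-init.log',
    'cloud-init-output.log': 'cat /var/log/cloud-init-output.log',
    'instance-id': 'cat /run/cloud-init/.instance-id',
    'result.json': 'cat /run/cloud-init/result.json',
    'status.json': 'cat /run/cloud-init/status.json',
    'package-version': "dpkg-query -W -f='${Version}' cloud-init",
}
```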
The verification script is a Python file with unit tests, like
*ntp_server.py* below:
.. code-block:: python

    """cloud-init Integration Test Verify Script (ntp_server.yaml)"""
    from tests.cloud_tests.testcases import base


    class TestNtpServers(base.CloudTestCase):
        """Test ntp module"""

        def test_ntp_installed(self):
            """Test ntp installed"""
            out = self.get_data_file('ntp_installed_servers')
            self.assertEqual(1, int(out))

        def test_ntp_dist_entries(self):
            """Test dist config file has one entry"""
            out = self.get_data_file('ntp_conf_dist_servers')
            self.assertEqual(1, int(out))

        def test_ntp_entries(self):
            """Test config entries"""
            out = self.get_data_file('ntp_conf_servers')
            self.assertIn('server iburst', out)
Here is a breakdown of the unit test file:

* The import statement allows access to the output files.
* The class can be named anything, but must extend ``base.CloudTestCase``.
* There can be any number of methods, however only those whose names
  start with ``test_`` will be executed.
* Output from the commands can be accessed via
  ``self.get_data_file('key')`` where key is the sub-key of
  ``collect_scripts`` above.
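One plausible reading of ``get_data_file`` is that each
``collect_scripts`` sub-key becomes a file in the test's data directory,
read back by name. The class below is an illustrative stand-in for
``base.CloudTestCase``, not the real implementation:

```python
import os
import tempfile


class CloudTestCaseSketch:
    """Illustrative stand-in for base.CloudTestCase (not the real class)."""

    def __init__(self, data_dir):
        self.data_dir = data_dir

    def get_data_file(self, name):
        # each collect_scripts sub-key maps to a file named after it
        with open(os.path.join(self.data_dir, name)) as fp:
            return fp.read()


# usage: pretend the collect stage stored one script's output
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, 'ntp_installed_servers'), 'w') as fp:
        fp.write('1\n')
    out = CloudTestCaseSketch(tmp).get_data_file('ntp_installed_servers')
```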
Integration tests are located under the ``tests/cloud_tests`` directory.
Test configurations are placed under ``configs`` and the test verification
scripts under ``testcases``:
.. code-block:: bash
cloud-init$ tree -d tests/cloud_tests/
├── configs
│   ├── bugs
│   ├── examples
│   ├── main
│   └── modules
└── testcases
├── bugs
├── examples
├── main
└── modules
The sub-folders of bugs, examples, main, and modules help organize the
tests. Browse each to understand its tests in more detail.
Development Checklist
---------------------
* Configuration File
* Named 'your_test_here.yaml'
* Contains at least a valid cloud-config
* Optionally, commands to capture additional output
* Valid YAML
* Placed in the appropriate sub-folder in the configs directory
* Verification File
* Named 'your_test_here.py'
* Valid unit tests validating output collected
* Passes pylint & pep8 checks
* Placed in the appropriate sub-folder in the testcases directory
* Tested by running the test:
.. code-block:: bash
$ python3 -m tests.cloud_tests run -v -n <release of choice> \
--deb <build of cloud-init> \
-t tests/cloud_tests/configs/<dir>/your_test_here.yaml
Executing tests has three options:
* ``run`` an alias to run both ``collect`` and ``verify``
* ``collect`` deploys on the specified platform and os, patches with the
requested deb or rpm, and finally collects output of the arbitrary
commands specified
* ``verify`` given a directory of test data, run the Python unit tests on
it to generate results.
The first example will provide a complete end-to-end run of data
collection and verification. There are additional examples below
explaining how to run one or the other independently.
.. code-block:: bash
$ git clone
$ cd cloud-init
$ python3 -m tests.cloud_tests run -v -n trusty -n xenial \
--deb cloud-init_0.7.8~my_patch_all.deb
The above command will do the following:
* ``-v`` verbose output
* ``run`` both collect the output and run tests against it
* ``-n trusty`` on the Ubuntu Trusty release
* ``-n xenial`` on the Ubuntu Xenial release
* ``--deb cloud-init_0.7.8~my_patch_all.deb`` use this deb as the version of
cloud-init to run with
For a more detailed explanation of each option see below.
If developing tests it may be necessary to see if the cloud-config works
as expected and the correct files are pulled down. In this case only the
collect step can be run:
.. code-block:: bash
$ python3 -m tests.cloud_tests collect -n xenial -d /tmp/collection \
--deb cloud-init_0.7.8~my_patch_all.deb
The above command will run the collection tests on xenial with the
provided deb and place all results into `/tmp/collection`.
When developing tests it is much easier to simply rerun the verify scripts
without the more lengthy collect process. This can be done by running:
.. code-block:: bash
$ python3 -m tests.cloud_tests verify -d /tmp/collection
The above command will run the verify scripts on the data found in
`/tmp/collection`.
The following outlines the process flow during a complete end-to-end LXD-backed test.
1. Configuration
* The back end and specific OS releases are verified as supported
* The test or tests that need to be run are determined either by directory or by individual yaml
2. Image Creation
* Acquire the daily LXD image
* Install the specified cloud-init package
* Clean the image so that it does not appear to have been booted
* A snapshot of the image is created and reused by all tests
3. Collection
* For each test, the cloud-config is injected into a copy of the
snapshot and booted
* The framework waits for ``/var/lib/cloud/instance/boot-finished``
(up to 120 seconds)
* All default commands are run and output collected
* Any commands the user specified are executed and output collected
4. Verification
* The default command output is checked for any failures, errors, and
warnings, validating that basic cloud-init functionality completed
* The user-generated unit tests are then run against the
collected output
5. Results
* If any failures were detected the test suite returns a failure
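The stages above can be compressed into a runnable sketch. Everything
here is a stub: ``FakeInstance`` and ``run_test`` are invented names
standing in for the LXD-backed pieces, and only the control flow is taken
from the description above:

```python
# All names here are illustrative stubs; only the control flow mirrors
# the five stages described above.
BOOT_MARKER = '/var/lib/cloud/instance/boot-finished'


class FakeInstance:
    """Stub for a booted copy of the image snapshot (not real framework)."""

    def __init__(self, user_data):
        self.user_data = user_data          # injected cloud-config
        self.files = {BOOT_MARKER: ''}      # pretend boot already finished

    def wait_for_file(self, path, timeout=120):
        # the real framework polls for up to `timeout` seconds
        return path in self.files

    def run(self, cmd):
        # the real framework executes the command inside the guest
        return 'output of: ' + cmd


def run_test(user_data, collect_scripts):
    instance = FakeInstance(user_data)                       # stage 3: boot
    assert instance.wait_for_file(BOOT_MARKER, timeout=120)  # wait for boot
    # stage 3 continued: capture the output of each collect script
    return {name: instance.run(cmd)
            for name, cmd in collect_scripts.items()}


data = run_test('#cloud-config\nntp: {}\n',
                {'ntp_conf': "grep '^server' /etc/ntp.conf"})
```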
# This file is part of cloud-init. See LICENSE file for license information.
import logging
import os
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
TESTCASES_DIR = os.path.join(BASE_DIR, 'testcases')
TEST_CONF_DIR = os.path.join(BASE_DIR, 'configs')
def _initialize_logging():
    """
    configure logging for cloud_tests
    """
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s')

    console = logging.StreamHandler()
    console.setFormatter(formatter)
    logger.addHandler(console)

    return logger
LOG = _initialize_logging()
# vi: ts=4 expandtab
# This file is part of cloud-init. See LICENSE file for license information.
import argparse
import logging
import shutil
import sys
import tempfile
from tests.cloud_tests import (args, collect, manage, verify)
from tests.cloud_tests import LOG
def configure_log(args):
    """
    configure logging
    """
    level = logging.INFO
    if args.verbose:
        level = logging.DEBUG
    elif args.quiet:
        level = logging.WARN
    LOG.setLevel(level)
def run(args):
    """
    run full test suite
    """
    failed = 0
    args.data_dir = tempfile.mkdtemp(prefix='cloud_test_data_')
    LOG.debug('using tmpdir %s', args.data_dir)

    try:
        failed += collect.collect(args)
        failed += verify.verify(args)
    except Exception:
        failed += 1
        raise
    finally:
        # TODO: make this configurable via environ or cmdline
        if failed:
            LOG.warn('some tests failed, leaving data in %s', args.data_dir)
        else:
            shutil.rmtree(args.data_dir)

    return failed
def main():
    """
    entry point for cloud test suite
    """
# configure parser
parser = argparse.ArgumentParser(prog='cloud_tests')
subparsers = parser.add_subparsers(dest="subcmd")
subparsers.required = True
    def add_subparser(name, description, arg_sets):
        """
        add arguments to subparser
        """
subparser = subparsers.add_parser(name, help=description)
for (_args, _kwargs) in (a for arg_set in arg_sets for a in arg_set):
subparser.add_argument(*_args, **_kwargs)
# configure subparsers
for (name, (description, arg_sets)) in args.SUBCMDS.items():
add_subparser(name, description,
[args.ARG_SETS[arg_set] for arg_set in arg_sets])
# parse arguments
parsed = parser.parse_args()
    # process arguments
    configure_log(parsed)
    (_, arg_sets) = args.SUBCMDS[parsed.subcmd]
for normalizer in [args.NORMALIZERS[arg_set] for arg_set in arg_sets]:
parsed = normalizer(parsed)
if not parsed:
return -1
# run handler
LOG.debug('running with args: %s\n', parsed)
    return {
        'collect': collect.collect,
        'create': manage.create,
        'run': run,
        'verify': verify.verify,
    }[parsed.subcmd](parsed)


if __name__ == "__main__":
    sys.exit(main())
# vi: ts=4 expandtab
# This file is part of cloud-init. See LICENSE file for license information.
import os
from tests.cloud_tests import config, util
from tests.cloud_tests import LOG
ARG_SETS = {
    'COLLECT': (
        (('-p', '--platform'),
         {'help': 'platform(s) to run tests on', 'metavar': 'PLATFORM',
          'action': 'append', 'choices': config.list_enabled_platforms(),
          'default': []}),
        (('-n', '--os-name'),
         {'help': 'the name(s) of the OS(s) to test', 'metavar': 'NAME',
          'action': 'append', 'choices': config.list_enabled_distros(),
          'default': []}),
        (('-t', '--test-config'),
         {'help': 'test config file(s) to use', 'metavar': 'FILE',
          'action': 'append', 'default': []}),),
    'CREATE': (
        (('-c', '--config'),
         {'help': 'cloud-config yaml for testcase', 'metavar': 'DATA',
          'action': 'store', 'required': False, 'default': None}),
        (('-e', '--enable'),
         {'help': 'enable testcase', 'required': False, 'default': False,
          'action': 'store_true'}),
        (('name',),
         {'help': 'testcase name, in format "<category>/<test>"',
          'action': 'store'}),
        (('-d', '--description'),
         {'help': 'description of testcase', 'required': False}),
        (('-f', '--force'),
         {'help': 'overwrite already existing test', 'required': False,
          'action': 'store_true', 'default': False}),),
    'INTERFACE': (
        (('-v', '--verbose'),
         {'help': 'verbose output', 'action': 'store_true',
          'default': False}),
        (('-q', '--quiet'),
         {'help': 'quiet output', 'action': 'store_true',
          'default': False}),),
    'OUTPUT': (
        (('-d', '--data-dir'),
         {'help': 'directory to store test data in',
          'action': 'store', 'metavar': 'DIR', 'required': True}),),
    'RESULT': (
        (('-r', '--result'),
         {'help': 'file to write results to',
          'action': 'store', 'metavar': 'FILE'}),),
    'SETUP': (
        (('--deb',),
         {'help': 'install deb', 'metavar': 'FILE', 'action': 'store'}),
        (('--rpm',),
         {'help': 'install rpm', 'metavar': 'FILE', 'action': 'store'}),
        (('--script',),
         {'help': 'script to set up image', 'metavar': 'DATA',
          'action': 'store'}),
        (('--repo',),
         {'help': 'repo to enable (implies -u)', 'metavar': 'NAME',
          'action': 'store'}),
        (('--ppa',),
         {'help': 'ppa to enable (implies -u)', 'metavar': 'NAME',
          'action': 'store'}),
        (('-u', '--upgrade'),
         {'help': 'upgrade before starting tests', 'action': 'store_true',
          'default': False}),),
}

SUBCMDS = {
    'collect': ('collect test data',
                ('COLLECT', 'INTERFACE', 'OUTPUT', 'RESULT', 'SETUP')),
    'create': ('create new test case', ('CREATE', 'INTERFACE')),
    'run': ('run test suite', ('COLLECT', 'INTERFACE', 'RESULT', 'SETUP')),
    'verify': ('verify test data', ('INTERFACE', 'OUTPUT', 'RESULT')),
}
def _empty_normalizer(args):
    """
    do not normalize arguments
    """
    return args
def normalize_create_args(args):
    """
    normalize CREATE arguments
    args: parsed args
    return_value: updated args, or None if errors occurred
    """
    # ensure valid name for new test
    if len(args.name.split('/')) != 2:
        LOG.error('invalid test name: %s', args.name)
        return None
    if os.path.exists(config.name_to_path(args.name)):
        msg = 'test: {} already exists'.format(args.name)
        if args.force:
            LOG.warn('%s but ignoring due to --force', msg)
        else:
            LOG.error(msg)
            return None

    # ensure test config valid if specified
    if isinstance(args.config, str) and len(args.config) == 0:
        LOG.error('test config cannot be empty if specified')
        return None

    # ensure description valid if specified
    if (isinstance(args.description, str) and
            (len(args.description) > 70 or len(args.description) == 0)):
        LOG.error('test description must be between 1 and 70 characters')
        return None

    return args
def normalize_collect_args(args):
    """
    normalize COLLECT arguments
    args: parsed args
    return_value: updated args, or None if errors occurred
    """
    # platform should default to all supported
    if len(args.platform) == 0:
        args.platform = config.list_enabled_platforms()
    args.platform = util.sorted_unique(args.platform)

    # os name should default to all enabled
    # if os name is provided ensure that all provided are supported
    if len(args.os_name) == 0:
        args.os_name = config.list_enabled_distros()
    else:
        supported = config.list_enabled_distros()
        invalid = [os_name for os_name in args.os_name
                   if os_name not in supported]
        if len(invalid) != 0:
            LOG.error('invalid os name(s): %s', invalid)
            return None
    args.os_name = util.sorted_unique(args.os_name)

    # test configs should default to all enabled
    # if test configs are provided, ensure that all provided are valid
    if len(args.test_config) == 0:
        args.test_config = config.list_test_configs()
    else:
        valid = []
        invalid = []
        for name in args.test_config:
            if os.path.exists(name):
                valid.append(name)
            elif os.path.exists(config.name_to_path(name)):
                valid.append(config.name_to_path(name))
            else:
                invalid.append(name)
        if len(invalid) != 0:
            LOG.error('invalid test config(s): %s', invalid)
            return None
        args.test_config = valid
    args.test_config = util.sorted_unique(args.test_config)

    return args
def normalize_output_args(args):
    """
    normalize OUTPUT arguments
    args: parsed args
    return_value: updated args, or None if errors occurred
    """
if not args.data_dir:
LOG.error('--data-dir must be specified')
return None
# ensure clean output dir if collect
# ensure data exists if verify
if args.subcmd == 'collect':
if not util.is_clean_writable_dir(args.data_dir):
LOG.error('data_dir must be empty/new and must be writable')
return None
elif args.subcmd == 'verify':
if not os.path.exists(args.data_dir):
LOG.error('data_dir %s does not exist', args.data_dir)
return None
return args
def normalize_setup_args(args):
    """
    normalize SETUP arguments
    args: parsed args
    return_value: updated args, or None if errors occurred
    """
# ensure deb or rpm valid if specified
for pkg in (args.deb, args.rpm):
if pkg is not None and not os.path.exists(pkg):
LOG.error('cannot find package: %s', pkg)
return None
# if repo or ppa to be enabled run upgrade
if args.repo or args.ppa:
args.upgrade = True
# if ppa is specified, remove leading 'ppa:' if any
_ppa_header = 'ppa:'
if args.ppa and args.ppa.startswith(_ppa_header):
args.ppa = args.ppa[len(_ppa_header):]
return args
NORMALIZERS = {
    'COLLECT': normalize_collect_args,
    'CREATE': normalize_create_args,
    'INTERFACE': _empty_normalizer,
    'OUTPUT': normalize_output_args,
    'RESULT': _empty_normalizer,
    'SETUP': normalize_setup_args,
}
# vi: ts=4 expandtab
# This file is part of cloud-init. See LICENSE file for license information.
from tests.cloud_tests import (config, LOG, setup_image, util)
from tests.cloud_tests.stage import (PlatformComponent, run_stage, run_single)
from tests.cloud_tests import (platforms, images, snapshots, instances)
from functools import partial
import os
def collect_script(instance, base_dir, script, script_name):
    """
    collect script data
    instance: instance to run script on
    base_dir: base directory for output data
    script: script contents
    script_name: name of script to run
    return_value: None, may raise errors
    """
    LOG.debug('running collect script: %s', script_name)
    util.write_file(os.path.join(base_dir, script_name),
                    instance.run_script(script))
def collect_test_data(args, snapshot, os_name, test_name):
    """
    collect data for test case
    args: cmdline arguments
    snapshot: instantiated snapshot
    os_name: name of distro to collect for
    test_name: name or path of test to run
    return_value: tuple of results and fail count
    """
    res = ({}, 1)

    # load test config
    test_name = config.path_to_name(test_name)
    test_config = config.load_test_config(test_name)
    user_data = test_config['cloud_config']