Commit f53fc46a authored by Wesley Wiedenmeier, committed by Scott Moser

integration test: initial commit of integration test framework

This adds end-to-end testing of cloud-init. The framework utilizes
LXD and cloud images as a backend to test user-data passed in.
Arbitrary data is then captured from predefined commands specified
by the user. After collection, data verification is completed by
running a series of Python unit tests against the collected data.

Currently only the Ubuntu Trusty, Xenial, Yakkety, and Zesty
releases are supported. Test cases for 50% of the modules are
complete and available.

Additionally, a Read the Docs page was created to guide test
writing and execution.
parent b2a9f336
@@ -41,6 +41,7 @@ initialization of a cloud instance.
topics/vendordata.rst
topics/moreinfo.rst
topics/hacking.rst
topics/tests.rst
.. _Cloud-init: https://launchpad.net/cloud-init
.. vi: textwidth=78
.. contents:: Table of Contents
:depth: 2
============================
Test Development
============================
Overview
--------
The purpose of this page is to describe how to write integration tests for
cloud-init. As a test writer you need to develop a test configuration and
a verification file:
* The test configuration specifies the cloud-config to be used by
  cloud-init and a list of arbitrary commands whose output to capture
  (e.g. my_test.yaml)
* The verification file runs tests on the collected output to determine
the result of the test (e.g. my_test.py)
The names must match; only the extensions differ (``.yaml`` vs ``.py``).
Configuration
-------------
The test configuration is a YAML file such as *ntp_server.yaml* below:
.. code-block:: yaml
#
# NTP config using specific servers (ntp_server.yaml)
#
cloud_config: |
#cloud-config
ntp:
servers:
- pool.ntp.org
collect_scripts:
ntp_installed_servers: |
#!/bin/bash
dpkg -l | grep ntp | wc -l
ntp_conf_dist_servers: |
#!/bin/bash
ls /etc/ntp.conf.dist | wc -l
ntp_conf_servers: |
#!/bin/bash
cat /etc/ntp.conf | grep '^server'
There are two keys, one required and one optional, in the YAML file:
1. The required key is ``cloud_config``. This should be a string of valid
YAML that is exactly what would normally be placed in a cloud-config file,
including the cloud-config header. This essentially sets up the scenario
under test.
2. The optional key is ``collect_scripts``. This key has one or more
   sub-keys containing strings of arbitrary commands to execute (e.g.
   ``cat /var/log/cloud-init-output.log``). In the example above the
   output of ``dpkg`` is captured, grepped for ntp, and the number of
   matching lines is reported. The name of each sub-key is important:
   the verification script uses it to recall the output of the commands run.
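
Each sub-key's output is captured to a file named after that sub-key in
the test's output directory (see the ``collect_script`` helper later in
this commit). For the *ntp_server* example, the collected data would
include files along these lines (the path shown is illustrative):

.. code-block:: bash

    $ ls <data-dir>/.../ntp_server
    ntp_conf_dist_servers  ntp_conf_servers  ntp_installed_servers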
Default Collect Scripts
~~~~~~~~~~~~~~~~~~~~~~~
By default the following files will be collected for every test. There is
no need to specify these items:
* ``/var/log/cloud-init.log``
* ``/var/log/cloud-init-output.log``
* ``/run/cloud-init/.instance-id``
* ``/run/cloud-init/result.json``
* ``/run/cloud-init/status.json``
* ``dpkg-query -W -f='${Version}' cloud-init``
Verification
------------
The verification script is a Python file containing unit tests, such as
*ntp_server.py* below:
.. code-block:: python
"""cloud-init Integration Test Verify Script (ntp_server.yaml)"""
from tests.cloud_tests.testcases import base
class TestNtpServers(base.CloudTestCase):
"""Test ntp module"""
def test_ntp_installed(self):
"""Test ntp installed"""
out = self.get_data_file('ntp_installed_servers')
self.assertEqual(1, int(out))
def test_ntp_dist_entries(self):
"""Test dist config file has one entry"""
out = self.get_data_file('ntp_conf_dist_servers')
self.assertEqual(1, int(out))
    def test_ntp_entries(self):
"""Test config entries"""
out = self.get_data_file('ntp_conf_servers')
self.assertIn('server pool.ntp.org iburst', out)
Here is a breakdown of the unit test file:
* The import statement allows access to the output files.
* The class can be named anything, but it must subclass ``base.CloudTestCase``
* There can be any number of methods with any name, however only those
  whose names start with ``test_`` will be executed.
* Output from the commands can be accessed via
``self.get_data_file('key')`` where key is the sub-key of
``collect_scripts`` above.
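
The default collected files listed earlier can be accessed the same way.
A minimal sketch (this assumes the defaults are keyed by their base
filename, e.g. ``cloud-init.log``):

.. code-block:: python

    def test_no_warnings_in_log(self):
        """Hypothetical check against a default collected file."""
        out = self.get_data_file('cloud-init.log')
        self.assertNotIn('WARNING', out)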
Layout
------
Integration tests are located under the ``tests/cloud_tests`` directory.
Test configurations are placed under ``configs`` and the test verification
scripts under ``testcases``:
.. code-block:: bash
cloud-init$ tree -d tests/cloud_tests/
tests/cloud_tests/
├── configs
│   ├── bugs
│   ├── examples
│   ├── main
│   └── modules
└── testcases
├── bugs
├── examples
├── main
└── modules
The sub-folders bugs, examples, main, and modules help organize the
tests. See the README.md in each directory for more detail on its
contents.
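
For the *ntp_server* example above, assuming it belongs under
``modules``, the pair of files would be placed as:

.. code-block:: bash

    tests/cloud_tests/configs/modules/ntp_server.yaml
    tests/cloud_tests/testcases/modules/ntp_server.py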
=====================
Development Checklist
=====================
* Configuration File
* Named 'your_test_here.yaml'
* Contains at least a valid cloud-config
* Optionally, commands to capture additional output
* Valid YAML
* Placed in the appropriate sub-folder in the configs directory
* Verification File
* Named 'your_test_here.py'
* Valid unit tests validating output collected
* Passes pylint & pep8 checks
  * Placed in the appropriate sub-folder in the testcases directory
* Tested by running the test:
.. code-block:: bash
$ python3 -m tests.cloud_tests run -v -n <release of choice> \
--deb <build of cloud-init> \
-t tests/cloud_tests/configs/<dir>/your_test_here.yaml
=========
Execution
=========
There are three options for executing tests:

* ``run`` an alias to run both ``collect`` and ``verify``
* ``collect`` deploys on the specified platform and OS, patches with the
  requested deb or rpm, and finally collects output of the arbitrary
  commands
* ``verify`` given a directory of test data, runs the Python unit tests on
  it to generate results
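
The full option list for each subcommand is available via the standard
argparse help flag:

.. code-block:: bash

    $ python3 -m tests.cloud_tests --help
    $ python3 -m tests.cloud_tests run --help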
Run
---
The first example will provide a complete end-to-end run of data
collection and verification. There are additional examples below
explaining how to run one or the other independently.
.. code-block:: bash
$ git clone https://git.launchpad.net/cloud-init
$ cd cloud-init
$ python3 -m tests.cloud_tests run -v -n trusty -n xenial \
--deb cloud-init_0.7.8~my_patch_all.deb
The above command will do the following:
* ``-v`` verbose output
* ``run`` collect output and then run the verify tests against it
* ``-n trusty`` on the Ubuntu Trusty release
* ``-n xenial`` on the Ubuntu Xenial release
* ``--deb cloud-init_0.7.8~my_patch_all.deb`` use this deb as the version
  of cloud-init to run with
For a more detailed explanation of each option see below.
Collect
-------
When developing tests it may be necessary to check whether the
cloud-config works as expected and the correct files are pulled down.
In this case, collection alone can be run:
.. code-block:: bash
$ python3 -m tests.cloud_tests collect -n xenial -d /tmp/collection \
--deb cloud-init_0.7.8~my_patch_all.deb
The above command will run collection on xenial with the provided deb
and place all results into ``/tmp/collection``.
Verify
------
When developing tests it is much easier to simply rerun the verify scripts
without the lengthier collect process. This can be done by running:
.. code-block:: bash
$ python3 -m tests.cloud_tests verify -d /tmp/collection
The above command will run the verify scripts on the data discovered in
``/tmp/collection``.
============
Architecture
============
The following outlines the process flow during a complete end-to-end LXD-backed test.
1. Configuration
* The back end and specific OS releases are verified as supported
* The test or tests that need to be run are determined either by directory or by individual yaml
2. Image Creation
* Acquire the daily LXD image
* Install the specified cloud-init package
* Clean the image so that it does not appear to have been booted
* A snapshot of the image is created and reused by all tests
3. Collection
* For each test, the cloud-config is injected into a copy of the
snapshot and booted
* The framework waits for ``/var/lib/cloud/instance/boot-finished``
(up to 120 seconds)
* All default commands are run and output collected
* Any commands the user specified are executed and output collected
4. Verification
* The default commands are checked for any failures, errors, and
  warnings in order to validate that basic cloud-init functionality
  completed successfully
* The user-written unit tests are then run against the collected
  output
5. Results
* If any failures were detected the test suite returns a failure
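
For illustration, the wait in step 3 amounts to a poll loop like the
sketch below (``instance.file_exists`` is a hypothetical helper, not the
framework's actual API):

.. code-block:: python

    import time

    def wait_for_boot_finished(instance, timeout=120, interval=1):
        """Poll until cloud-init writes its boot-finished marker."""
        marker = '/var/lib/cloud/instance/boot-finished'
        deadline = time.time() + timeout
        while time.time() < deadline:
            if instance.file_exists(marker):  # hypothetical helper
                return
            time.sleep(interval)
        raise OSError('timed out waiting for %s' % marker)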
# This file is part of cloud-init. See LICENSE file for license information.
import logging
import os
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
TESTCASES_DIR = os.path.join(BASE_DIR, 'testcases')
TEST_CONF_DIR = os.path.join(BASE_DIR, 'configs')
def _initialize_logging():
"""
configure logging for cloud_tests
"""
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter(
'%(asctime)s - %(name)s - %(levelname)s - %(message)s')
console = logging.StreamHandler()
console.setLevel(logging.DEBUG)
console.setFormatter(formatter)
logger.addHandler(console)
return logger
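
# module-level logger, imported as LOG by the other cloud_tests modules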
LOG = _initialize_logging()
# vi: ts=4 expandtab
# This file is part of cloud-init. See LICENSE file for license information.
import argparse
import logging
import shutil
import sys
import tempfile
from tests.cloud_tests import (args, collect, manage, verify)
from tests.cloud_tests import LOG
def configure_log(args):
"""
configure logging
"""
level = logging.INFO
if args.verbose:
level = logging.DEBUG
elif args.quiet:
level = logging.WARN
LOG.setLevel(level)
def run(args):
"""
run full test suite
"""
failed = 0
args.data_dir = tempfile.mkdtemp(prefix='cloud_test_data_')
LOG.debug('using tmpdir %s', args.data_dir)
try:
failed += collect.collect(args)
failed += verify.verify(args)
except Exception:
failed += 1
raise
finally:
# TODO: make this configurable via environ or cmdline
if failed:
LOG.warn('some tests failed, leaving data in %s', args.data_dir)
else:
shutil.rmtree(args.data_dir)
return failed
def main():
"""
entry point for cloud test suite
"""
# configure parser
parser = argparse.ArgumentParser(prog='cloud_tests')
subparsers = parser.add_subparsers(dest="subcmd")
subparsers.required = True
def add_subparser(name, description, arg_sets):
"""
add arguments to subparser
"""
subparser = subparsers.add_parser(name, help=description)
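        # flatten the nested arg sets and register each (args, kwargs) pair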
for (_args, _kwargs) in (a for arg_set in arg_sets for a in arg_set):
subparser.add_argument(*_args, **_kwargs)
# configure subparsers
for (name, (description, arg_sets)) in args.SUBCMDS.items():
add_subparser(name, description,
[args.ARG_SETS[arg_set] for arg_set in arg_sets])
# parse arguments
parsed = parser.parse_args()
# process arguments
configure_log(parsed)
(_, arg_sets) = args.SUBCMDS[parsed.subcmd]
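    # run each arg set's normalizer; a normalizer returns None to signal
    # invalid arguments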
for normalizer in [args.NORMALIZERS[arg_set] for arg_set in arg_sets]:
parsed = normalizer(parsed)
if not parsed:
return -1
# run handler
LOG.debug('running with args: %s\n', parsed)
return {
'collect': collect.collect,
'create': manage.create,
'run': run,
'verify': verify.verify,
}[parsed.subcmd](parsed)
if __name__ == "__main__":
sys.exit(main())
# vi: ts=4 expandtab
# This file is part of cloud-init. See LICENSE file for license information.
import os
from tests.cloud_tests import config, util
from tests.cloud_tests import LOG
ARG_SETS = {
'COLLECT': (
(('-p', '--platform'),
{'help': 'platform(s) to run tests on', 'metavar': 'PLATFORM',
'action': 'append', 'choices': config.list_enabled_platforms(),
'default': []}),
(('-n', '--os-name'),
{'help': 'the name(s) of the OS(s) to test', 'metavar': 'NAME',
'action': 'append', 'choices': config.list_enabled_distros(),
'default': []}),
(('-t', '--test-config'),
{'help': 'test config file(s) to use', 'metavar': 'FILE',
'action': 'append', 'default': []}),),
'CREATE': (
(('-c', '--config'),
{'help': 'cloud-config yaml for testcase', 'metavar': 'DATA',
'action': 'store', 'required': False, 'default': None}),
(('-e', '--enable'),
{'help': 'enable testcase', 'required': False, 'default': False,
'action': 'store_true'}),
(('name',),
{'help': 'testcase name, in format "<category>/<test>"',
'action': 'store'}),
(('-d', '--description'),
{'help': 'description of testcase', 'required': False}),
(('-f', '--force'),
{'help': 'overwrite already existing test', 'required': False,
'action': 'store_true', 'default': False}),),
'INTERFACE': (
(('-v', '--verbose'),
{'help': 'verbose output', 'action': 'store_true', 'default': False}),
(('-q', '--quiet'),
{'help': 'quiet output', 'action': 'store_true', 'default': False}),),
'OUTPUT': (
(('-d', '--data-dir'),
{'help': 'directory to store test data in',
'action': 'store', 'metavar': 'DIR', 'required': True}),),
'RESULT': (
(('-r', '--result'),
{'help': 'file to write results to',
'action': 'store', 'metavar': 'FILE'}),),
'SETUP': (
(('--deb',),
{'help': 'install deb', 'metavar': 'FILE', 'action': 'store'}),
(('--rpm',),
{'help': 'install rpm', 'metavar': 'FILE', 'action': 'store'}),
(('--script',),
{'help': 'script to set up image', 'metavar': 'DATA',
'action': 'store'}),
(('--repo',),
{'help': 'repo to enable (implies -u)', 'metavar': 'NAME',
'action': 'store'}),
(('--ppa',),
{'help': 'ppa to enable (implies -u)', 'metavar': 'NAME',
'action': 'store'}),
(('-u', '--upgrade'),
{'help': 'upgrade before starting tests', 'action': 'store_true',
'default': False}),),
}
SUBCMDS = {
'collect': ('collect test data',
('COLLECT', 'INTERFACE', 'OUTPUT', 'RESULT', 'SETUP')),
'create': ('create new test case', ('CREATE', 'INTERFACE')),
'run': ('run test suite', ('COLLECT', 'INTERFACE', 'RESULT', 'SETUP')),
'verify': ('verify test data', ('INTERFACE', 'OUTPUT', 'RESULT')),
}
def _empty_normalizer(args):
"""
do not normalize arguments
"""
return args
def normalize_create_args(args):
"""
normalize CREATE arguments
args: parsed args
return_value: updated args, or None if errors occurred
"""
# ensure valid name for new test
if len(args.name.split('/')) != 2:
LOG.error('invalid test name: %s', args.name)
return None
if os.path.exists(config.name_to_path(args.name)):
msg = 'test: {} already exists'.format(args.name)
if args.force:
LOG.warn('%s but ignoring due to --force', msg)
else:
LOG.error(msg)
return None
# ensure test config valid if specified
if isinstance(args.config, str) and len(args.config) == 0:
LOG.error('test config cannot be empty if specified')
return None
# ensure description valid if specified
if (isinstance(args.description, str) and
(len(args.description) > 70 or len(args.description) == 0)):
LOG.error('test description must be between 1 and 70 characters')
return None
return args
def normalize_collect_args(args):
"""
normalize COLLECT arguments
args: parsed args
return_value: updated args, or None if errors occurred
"""
# platform should default to all supported
if len(args.platform) == 0:
args.platform = config.list_enabled_platforms()
args.platform = util.sorted_unique(args.platform)
# os name should default to all enabled
# if os name is provided ensure that all provided are supported
if len(args.os_name) == 0:
args.os_name = config.list_enabled_distros()
else:
supported = config.list_enabled_distros()
invalid = [os_name for os_name in args.os_name
if os_name not in supported]
if len(invalid) != 0:
LOG.error('invalid os name(s): %s', invalid)
return None
args.os_name = util.sorted_unique(args.os_name)
# test configs should default to all enabled
# if test configs are provided, ensure that all provided are valid
if len(args.test_config) == 0:
args.test_config = config.list_test_configs()
else:
valid = []
invalid = []
for name in args.test_config:
if os.path.exists(name):
valid.append(name)
elif os.path.exists(config.name_to_path(name)):
valid.append(config.name_to_path(name))
else:
invalid.append(name)
if len(invalid) != 0:
LOG.error('invalid test config(s): %s', invalid)
return None
else:
args.test_config = valid
args.test_config = util.sorted_unique(args.test_config)
return args
def normalize_output_args(args):
"""
normalize OUTPUT arguments
args: parsed args
return_value: updated args, or None if errors occurred
"""
if not args.data_dir:
LOG.error('--data-dir must be specified')
return None
# ensure clean output dir if collect
# ensure data exists if verify
if args.subcmd == 'collect':
if not util.is_clean_writable_dir(args.data_dir):
LOG.error('data_dir must be empty/new and must be writable')
return None
elif args.subcmd == 'verify':
if not os.path.exists(args.data_dir):
LOG.error('data_dir %s does not exist', args.data_dir)
return None
return args
def normalize_setup_args(args):
"""
normalize SETUP arguments
args: parsed args
return_value: updated_args, or None if errors occurred
"""
# ensure deb or rpm valid if specified
for pkg in (args.deb, args.rpm):
if pkg is not None and not os.path.exists(pkg):
LOG.error('cannot find package: %s', pkg)
return None
# if repo or ppa to be enabled run upgrade
if args.repo or args.ppa:
args.upgrade = True
# if ppa is specified, remove leading 'ppa:' if any
_ppa_header = 'ppa:'
if args.ppa and args.ppa.startswith(_ppa_header):
args.ppa = args.ppa[len(_ppa_header):]
return args
NORMALIZERS = {
'COLLECT': normalize_collect_args,
'CREATE': normalize_create_args,
'INTERFACE': _empty_normalizer,
'OUTPUT': normalize_output_args,
'RESULT': _empty_normalizer,
'SETUP': normalize_setup_args,
}
# vi: ts=4 expandtab
# This file is part of cloud-init. See LICENSE file for license information.
from tests.cloud_tests import (config, LOG, setup_image, util)
from tests.cloud_tests.stage import (PlatformComponent, run_stage, run_single)
from tests.cloud_tests import (platforms, images, snapshots, instances)
from functools import partial
import os
def collect_script(instance, base_dir, script, script_name):
"""
collect script data
instance: instance to run script on
base_dir: base directory for output data
script: script contents
script_name: name of script to run
return_value: None, may raise errors
"""
LOG.debug('running collect script: %s', script_name)
util.write_file(os.path.join(base_dir, script_name),
instance.run_script(script))
def collect_test_data(args, snapshot, os_name, test_name):
"""
collect data for test case
args: cmdline arguments
    snapshot: instantiated snapshot
    os_name: name of the OS to test
test_name: name or path of test to run
return_value: tuple of results and fail count
"""
res = ({}, 1)
# load test config
test_name = config.path_to_name(test_name)
test_config = config.load_test_config(test_name)
user_data = test_config['cloud_config']