Commit 6dab094b authored by SVN-Git Migration's avatar SVN-Git Migration

Imported Upstream version 0.0.5

parent c4339f48
......@@ -8,10 +8,13 @@ Run time dependencies
* subunit
* fixtures (https://launchpad.net/python-fixtures, or
http://pypi.python.org/pypi/fixtures/).
Test dependencies
~~~~~~~~~~~~~~~~~
* testtools (the python-testtools package, or
* testtools 0.9.8 or newer (the python-testtools package, or
http://pypi.python.org/pypi/testtools/).
* testresources (https://launchpad.net/testresources, or
......
......@@ -18,7 +18,7 @@ all: README.txt check
./testr init
check: .testrepository
./testr run
./testr run --parallel
check-xml:
python -m subunit.run testrepository.tests.test_suite | subunit2junitxml -o test.xml -f | subunit2pyunit
......
......@@ -5,6 +5,97 @@ testrepository release notes
NEXT (In development)
+++++++++++++++++++++
0.0.5
+++++
CHANGES
-------
* The testrepository test suite depends on testtools 0.9.8. (Robert Collins)
* If interrupted while updating the ``failing`` list, temp files are now
cleaned up - previously a carefully timed interrupt would leave the
temporary failing file in place. (Robert Collins, #531665)
* The local implementation of MatchesException has been removed in favour of
the testtools implementation, and all ``self.assertRaises`` calls have been
migrated to that interface.
* ``setup.py`` will read the version number from PKG-INFO when it is running
without a bzr tree: this makes it easier to snapshot without doing a
release. (Jonathan Lange)
* Testrepository should be more compatible with win32 environments.
(Martin [gz])
* ``testr init-repo`` now has a ``--force-init`` option which, when provided,
will cause a repository to be created just-in-time. (Jonathan Lange)
* ``testr load`` and ``testr run`` now have a flag ``--partial``. When set
this will cause existing failures to be preserved. When not set, doing a
load will reset existing failures. The ``testr run`` flag ``--failing``
implicitly sets ``--partial`` (so that an interrupted incremental test run
does not incorrectly discard a failure record). The ``--partial`` flag exists
so that deleted or renamed tests do not persist forever in the database.
(Robert Collins)
* ``testr load`` now loads all input streams in parallel. This has no impact
on the CLI as yet, but permits API users to load from parallel processes.
(Robert Collins)
* ``testr list-tests`` is a new command that will list the tests for a project
when ``.testr.conf`` has been configured with a ``test_list_option``.
(Robert Collins)
* ``testr run --parallel`` will parallelise by running each test command once
per CPU in the machine (detection for this is only implemented on Linux so
far). An internally parallelising command will not benefit from this, but for
many projects it will be a win, either from simplicity or because getting
their test runner to parallelise is nontrivial. The observed duration of tests
is used to inform the partitioning algorithm, so each test runner should
complete at approximately the same time, minimising total runtime.
(Robert Collins)
* ``testr run`` no longer attempts to expand unknown variables. This permits
the use of environment variables to control the test run. For instance,
${PYTHON:-python} in the test_command setting will run the command with
$PYTHON, or with python if $PYTHON is not set. (Robert Collins, #595295)
* ``testr run`` now resets the SIGPIPE handler to default - which is what
most Unix processes expect. (Robert Collins)
* ``testr run`` now uses a unique file name rather than hard coding
failing.list - while not as clear, this permits concurrent testr invocations,
or parallel testing from within testr, to execute safely. (Robert Collins)
* ``testr run`` uses an in-process load rather than reinvoking testr. This
should be faster on Windows and avoids the issue with running the wrong
testr when PYTHONPATH but not PATH is set. (Robert Collins, #613129)
* ``testr run`` will now pass -d to the ``testr load`` invocation, so that
running ``testr run -d /some/path`` will work correctly.
(Robert Collins, #529698)
* ``testr run`` will now pass ``-q`` down to ``testr load``.
(Robert Collins, #529701)
* The ``testrepository.repository.Repository`` interface now tracks test times
for use in estimating test run duration and parallel test partitioning.
(Robert Collins)
* There are the beginnings of a Samba buildfarm backend for testrepository,
though it is not hooked into the UI yet, so it is only useful to API users.
(Jelmer Vernooij)
* Updates to next-stream are done via a temporary file to reduce the chance
of an empty next-stream being written to disk. (Robert Collins, #531664)
* Variable expansion no longer performs Python ``\`` escape expansion.
(Robert Collins, #694800)
* When next-stream is damaged testr will report that it is corrupt rather than
reporting an invalid literal. (Robert Collins, #531663)
0.0.4
+++++
......@@ -22,6 +113,9 @@ IMPROVEMENTS
* The file implementation of Repository.open now performs ~ expansion.
(Jonathan Lange, #529665)
* Test failures and errors are now shown as we get them in 'load',
'failing' and 'last'. (Jonathan Lange, #613152)
0.0.3
+++++
......
Metadata-Version: 1.0
Name: testrepository
Version: 0.0.4
Version: 0.0.5
Summary: A repository of test results.
Home-page: https://launchpad.net/testrepository
Author: Robert Collins
......
......@@ -28,6 +28,9 @@ Most commands in testr have comprehensive online help, and the commands::
Will be useful to explore the system.
Running tests
~~~~~~~~~~~~~
Test Repository can be taught how to run your tests by setting up a .testr.conf
file in your cwd. A file like::
......@@ -35,24 +38,65 @@ file in your cwd. A file like::
test_command=foo $IDOPTION
test_id_option=--bar $IDFILE
will cause 'testr run' to run 'foo | testr load', and 'testr run --failing' to
run 'foo --bar failing.list | testr load'. failing.list will be a newline
separated list of the test ids that your test runner outputs. Arguments passed
to run are passed through to your test runner command line. To pass options
through to your test runner, use a ``--`` before your options.
For instance, ``testr run foo -- bar --no-plugins`` would run
``foo foo bar --no-plugins | testr load`` using the above config example. The
command help for ``testr run`` describes the available options for .testr.conf.
will cause 'testr run' to run 'foo' and process it as 'testr load' would.
Likewise 'testr run --failing' will run 'foo --bar failing.list' and process it
as 'testr load' would. failing.list will be a newline separated list of the
test ids that your test runner outputs. Arguments passed to run are passed
through to your test runner command line. To pass options through to your
test runner, use a ``--`` before your options. For instance,
``testr run quux -- bar --no-plugins`` would run
``foo quux bar --no-plugins`` using the above config example. Shell variables
are expanded in these commands on platforms that have a shell. The command
help for ``testr run`` describes the available options for .testr.conf.
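
For example, a sketch of such a configuration, reusing the placeholder runner
from above together with the ${PYTHON:-python} idiom described in the release
notes::

  [DEFAULT]
  test_command=${PYTHON:-python} foo $IDOPTION
  test_id_option=--bar $IDFILE
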
Having set up a .testr.conf, a common workflow then becomes::
# Fix currently broken tests - repeat until there are no failures.
$ testr run --failing
# Do a full run to find anything thrown out during the reduction process.
# Do a full run to find anything that regressed during the reduction process.
$ testr run
# And either commit or loop around this again depending on whether errors
# were found.
The --failing option turns on ``--partial`` automatically (so that if the
partial test run were to be interrupted, the failing tests that aren't run are
not lost).
Listing tests
~~~~~~~~~~~~~
It is useful to be able to query the test program to see what tests will be
run - this permits partitioning the tests and running multiple instances with
separate partitions at once. Set 'test_list_option' in .testr.conf like so::
test_list_option=--list-tests
You also need to use the $LISTOPT option to tell testr where to expand things::
test_command=foo $LISTOPT $IDOPTION
All the normal rules for invoking test program commands apply: extra
parameters will be passed through, and if a test list is being supplied,
``test_id_option`` can be used via $IDOPTION.
The output of the test command when this option is supplied should be a series
of test ids, in any order, ``\n`` separated, on stdout.
To test whether this is working, the ``testr list-tests`` command can be
useful.
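
Putting the pieces in this section together with the earlier example, a sketch
of a complete .testr.conf (the runner and its options are the same
placeholders used above)::

  [DEFAULT]
  test_command=foo $LISTOPT $IDOPTION
  test_id_option=--bar $IDFILE
  test_list_option=--list-tests
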
Parallel testing
~~~~~~~~~~~~~~~~
If both test listing and filtering (via either IDLIST or IDFILE) are configured
then testr is able to run your tests in parallel::
$ testr run --parallel
This will first list the tests, partition them into one partition per CPU
on the machine, and then invoke multiple test runners at the same time, with
each test runner getting one partition. Currently the partitioning algorithm
is a simple round-robin, and the CPU detection is only implemented for Linux.
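
As an illustration only, the round-robin step amounts to something like the
following sketch (the real partitioner also weights by observed test
duration, per the release notes)::

  def partition_tests(test_ids, cpu_count):
      # Deal test ids out like cards, one per partition, cycling.
      partitions = [[] for _ in range(cpu_count)]
      for index, test_id in enumerate(test_ids):
          partitions[index % cpu_count].append(test_id)
      return partitions

  # Five tests over two CPUs -> [['a', 'c', 'e'], ['b', 'd']]
  partition_tests(['a', 'b', 'c', 'd', 'e'], 2)
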
Repositories
~~~~~~~~~~~~
......
......@@ -14,24 +14,54 @@
# limitations under that license.
from distutils.core import setup
import email
import os
import testrepository
version = '.'.join(str(component) for component in testrepository.__version__[0:3])
phase = testrepository.__version__[3]
if phase != 'final':
def get_revno():
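# Imported lazily: only needed when building from a bzr branch rather
# than from a release tarball.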
import bzrlib.workingtree
t = bzrlib.workingtree.WorkingTree.open_containing(__file__)[0]
return t.branch.revno()
def get_version_from_pkg_info():
"""Get the version from PKG-INFO file if we can."""
pkg_info_path = os.path.join(os.path.dirname(__file__), 'PKG-INFO')
try:
pkg_info_file = open(pkg_info_path, 'r')
except (IOError, OSError):
return None
try:
pkg_info = email.message_from_file(pkg_info_file)
except email.MessageError:
return None
return pkg_info.get('Version', None)
def get_version():
"""Return the version of testtools that we are building."""
version = '.'.join(
str(component) for component in testrepository.__version__[0:3])
phase = testrepository.__version__[3]
if phase == 'final':
return version
pkg_info_version = get_version_from_pkg_info()
if pkg_info_version:
return pkg_info_version
revno = get_revno()
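# e.g. with revno 250: an alpha build yields 'next-r250'; any other
# non-final phase yields '0.0.5-r250' with this tree's version tuple.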
if phase == 'alpha':
# No idea what the next version will be
version = 'next-%s' % t.branch.revno()
return 'next-r%s' % revno
else:
# Preserve the version number but give it a revno prefix
version = version + '~%s' % t.branch.revno()
return version + '-r%s' % revno
description = file(os.path.join(os.path.dirname(__file__), 'README.txt'), 'rb').read()
setup(name='testrepository',
author='Robert Collins',
author_email='robertc@robertcollins.net',
......@@ -39,7 +69,7 @@ setup(name='testrepository',
description='A repository of test results.',
long_description=description,
scripts=['testr'],
version=version,
version=get_version(),
packages=['testrepository',
'testrepository.arguments',
'testrepository.commands',
......
......@@ -33,4 +33,4 @@ The tests package contains tests and test specific support code.
# established at this point, and setup.py will use a version of next-r$(revno).
# If the releaselevel is 'final', then the tarball will be major.minor.micro.
# Otherwise it is major.minor.micro-r$(revno).
__version__ = (0, 0, 4, 'final', 0)
__version__ = (0, 0, 5, 'final', 0)
......@@ -40,6 +40,8 @@ import subunit
from testrepository.repository import file
def _find_command(cmd_name):
orig_cmd_name = cmd_name
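# Command names use '-' on the command line but '_' in module names;
# remember the command-line spelling so it can be restored below.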
cmd_name = cmd_name.replace('-', '_')
classname = "%s" % cmd_name
modname = "testrepository.commands.%s" % cmd_name
try:
......@@ -53,7 +55,7 @@ def _find_command(cmd_name):
% (classname, modname))
if getattr(result, 'name', None) is None:
# Store the name for the common case of name == lookup path.
result.name = classname
result.name = orig_cmd_name
return result
......@@ -68,8 +70,9 @@ def iter_commands():
if base.startswith('.'):
continue
name = base.split('.', 1)[0]
name = name.replace('_', '-')
names.add(name)
names.discard('__init__')
names.discard('--init--')
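# (After the '_' to '-' substitution, the package's __init__ module
# shows up as '--init--'.)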
names = sorted(names)
for name in names:
yield _find_command(name)
......@@ -150,27 +153,6 @@ class Command(object):
def _init(self):
"""Per command init call, called into by Command.__init__."""
def output_run(self, run_id, output, evaluator):
"""Output a test run.
:param run_id: The run id.
:param output: A StringIO containing a subunit stream for some portion of the run to show.
:param evaluator: A TestResult evaluating the entire run.
"""
if self.ui.options.quiet:
return
if output.getvalue():
output.seek(0)
self.ui.output_results(subunit.ProtocolTestCase(output))
values = [('id', run_id), ('tests', evaluator.testsRun)]
failures = len(evaluator.failures) + len(evaluator.errors)
if failures:
values.append(('failures', failures))
skips = sum(map(len, evaluator.skip_reasons.itervalues()))
if skips:
values.append(('skips', skips))
self.ui.output_values(values)
def run(self):
"""The core logic for this command to be implemented by subclasses."""
raise NotImplementedError(self.run)
......
......@@ -14,13 +14,13 @@
"""Show the current failures in the repository."""
from cStringIO import StringIO
import optparse
import subunit.test_results
from testtools import MultiTestResult, TestResult
from testrepository.commands import Command
from testrepository.results import TestResultFilter
class failing(Command):
"""Show the current failures known by the repository.
......@@ -41,44 +41,43 @@ class failing(Command):
default=False, help="Show only a list of failing tests."),
]
def _list_subunit(self, run):
# TODO only failing tests.
stream = run.get_subunit_stream()
self.ui.output_stream(stream)
if stream:
return 1
else:
return 0
def _make_result(self, repo, list_result):
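# With --list only pass/fail accounting is needed; otherwise tee the
# results to the UI as well, with skips filtered out.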
if self.ui.options.list:
return list_result
output_result = self.ui.make_result(repo.latest_id)
filtered = TestResultFilter(output_result, filter_skip=True)
return MultiTestResult(list_result, filtered)
def run(self):
repo = self.repository_factory.open(self.ui.here)
run = repo.get_failing()
if self.ui.options.subunit:
return self._list_subunit(run)
case = run.get_test()
failed = False
evaluator = TestResult()
output = StringIO()
output_stream = subunit.TestProtocolClient(output)
filtered = subunit.test_results.TestResultFilter(output_stream,
filter_skip=True)
result = MultiTestResult(evaluator, filtered)
list_result = TestResult()
result = self._make_result(repo, list_result)
result.startTestRun()
try:
case.run(result)
finally:
result.stopTestRun()
failed = not evaluator.wasSuccessful()
failed = not list_result.wasSuccessful()
if failed:
result = 1
else:
result = 0
if self.ui.options.list:
failing_tests = [
test for test, _ in evaluator.errors + evaluator.failures]
test for test, _ in list_result.errors + list_result.failures]
self.ui.output_tests(failing_tests)
return result
if self.ui.options.subunit:
# TODO only failing tests.
self.ui.output_stream(run.get_subunit_stream())
return result
if self.ui.options.quiet:
return result
if output.getvalue():
output.seek(0)
self.ui.output_results(subunit.ProtocolTestCase(output))
values = []
failures = len(evaluator.failures) + len(evaluator.errors)
if failures:
values.append(('failures', failures))
self.ui.output_values(values)
return result
......@@ -5,7 +5,7 @@
# license at the users choice. A copy of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
......@@ -14,16 +14,13 @@
"""Show the last run loaded into a repository."""
from cStringIO import StringIO
import subunit.test_results
from testtools import MultiTestResult, TestResult
from testrepository.commands import Command
from testrepository.results import TestResultFilter
class last(Command):
"""Show the last run loaded into a repository.
Failing tests are shown on the console and a summary of the run is printed
at the end.
"""
......@@ -33,19 +30,14 @@ class last(Command):
run_id = repo.latest_id()
case = repo.get_test_run(run_id).get_test()
failed = False
evaluator = TestResult()
output = StringIO()
output_stream = subunit.TestProtocolClient(output)
filtered = subunit.test_results.TestResultFilter(output_stream,
filter_skip=True)
result = MultiTestResult(evaluator, filtered)
output_result = self.ui.make_result(lambda: run_id)
result = TestResultFilter(output_result, filter_skip=True)
result.startTestRun()
try:
case.run(result)
finally:
result.stopTestRun()
failed = not evaluator.wasSuccessful()
self.output_run(run_id, output, evaluator)
failed = not result.wasSuccessful()
if failed:
return 1
else:
......
......@@ -12,45 +12,37 @@
# license you chose for the specific language governing permissions and
# limitations under that license.
"""testtools.matchers.Matcher style matchers to help test testrepository."""
from testtools.matchers import Matcher, Mismatch
__all__ = ['MatchesException']
class MatchesException(Matcher):
"""Match an exc_info tuple against an exception."""
def __init__(self, exception):
"""Create a MatchesException that will match exc_info's for exception.
:param exception: An exception to check against an exc_info tuple. The
traceback object is not inspected, only the type and arguments of
the exception.
"""
Matcher.__init__(self)
self.expected = exception
def match(self, other):
if type(other) != tuple:
return _StringMismatch('%r is not an exc_info tuple' % other)
if not issubclass(other[0], type(self.expected)):
return _StringMismatch('%r is not a %r' % (
other[0], type(self.expected)))
if other[1].args != self.expected.args:
return _StringMismatch('%r has different arguments to %r.' % (
other[1], self.expected))
def __str__(self):
return "MatchesException(%r)" % self.expected
class _StringMismatch(Mismatch):
"""Convenience mismatch for simply-calculated string descriptions."""
def __init__(self, description):
self.description = description
def describe(self):
return self.description
"""List the tests from a project and show them."""
from cStringIO import StringIO
from testtools import TestResult
from testrepository.arguments.string import StringArgument
from testrepository.commands import Command
from testrepository.testcommand import testrconf_help, TestCommand
class list_tests(Command):
__doc__ = """Lists the tests for a project.
""" + testrconf_help
args = [StringArgument('testargs', 0, None)]
# Can be assigned to to inject a custom command factory.
command_factory = TestCommand
def run(self):
testcommand = self.command_factory(self.ui, None)
ids = None
cmd = testcommand.get_run_command(ids, self.ui.arguments['testargs'])
cmd.setUp()
try:
ids = cmd.list_tests()
stream = StringIO()
for id in ids:
stream.write('%s\n' % id)
stream.seek(0)
self.ui.output_stream(stream)
return 0
finally:
cmd.cleanUp()
#
# Copyright (c) 2009 Testrepository Contributors
#
#
# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
# license at the users choice. A copy of both licenses are available in the
# project source as Apache-2.0 and BSD. You may not use this file except in
# compliance with one of these two licences.
#
#
# Unless required by applicable law or agreed to in writing, software
# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
......@@ -14,42 +14,67 @@
"""Load data into a repository."""
from cStringIO import StringIO
import optparse
import subunit.test_results
from testtools import MultiTestResult, TestResult
import subunit
from testtools import ConcurrentTestSuite, MultiTestResult
from testrepository.commands import Command
from testrepository.repository import RepositoryNotFound
from testrepository.results import TestResultFilter
class load(Command):
"""Load a subunit stream into a repository.
Failing tests are shown on the console and a summary of the stream is
printed at the end.
Unless the stream is a partial stream, any existing failures are discarded.
"""
input_streams = ['subunit+']
options = [
optparse.Option("--partial", action="store_true",
default=False, help="The stream being loaded was a partial run."),
optparse.Option(
"--force-init", action="store_true",
default=False,
help="Initialise the repository if it does not exist already"),
]
def run(self):
path = self.ui.here
repo = self.repository_factory.open(path)
failed = False
for stream in self.ui.iter_streams('subunit'):
inserter = repo.get_inserter()
evaluator = TestResult()
output = StringIO()
output_stream = subunit.TestProtocolClient(output)
filtered = subunit.test_results.TestResultFilter(output_stream,
filter_skip=True)
case = subunit.ProtocolTestCase(stream)
inserter.startTestRun()
try:
case.run(MultiTestResult(inserter, evaluator, filtered))
finally:
run_id = inserter.stopTestRun()
failed = failed or not evaluator.wasSuccessful()
self.output_run(run_id, output, evaluator)
if failed:
try:
repo = self.repository_factory.open(path)
except RepositoryNotFound:
if self.ui.options.force_init:
repo = self.repository_factory.initialise(path)
else:
raise
run_id = None
# Not a full implementation of TestCase, but we only need to iterate
# back to it. Needs to be a callable - it's a head fake for
# testsuite.add.
cases = lambda:self.ui.iter_streams('subunit')
def make_tests(suite):
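# suite iterates to the single 'cases' callable above; calling it
# yields one subunit stream per input, each wrapped as a test case.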
streams = list(suite)[0]
for stream in streams():
yield subunit.ProtocolTestCase(stream)
case = ConcurrentTestSuite(cases, make_tests)
inserter = repo.get_inserter(partial=self.ui.options.partial)
output_result = self.ui.make_result(lambda: run_id)
# XXX: We want to *count* skips, but not show them.
filtered = TestResultFilter(output_result, filter_skip=False)
filtered.startTestRun()
inserter.startTestRun()
try:
case.run(MultiTestResult(inserter, filtered))
finally:
run_id = inserter.stopTestRun()
filtered.stopTestRun()
if not filtered.wasSuccessful():
return 1
else:
return 0
......@@ -24,63 +24,32 @@ from testtools import TestResult
from testrepository.arguments.string import StringArgument
from testrepository.commands import Command
from testrepository.commands.load import load
from testrepository.ui import decorator
from testrepository.testcommand import TestCommand, testrconf_help
class run(Command):
"""Run the tests for a project and load them into testrepository.
This reads the commands to run from .testr.conf. Setting that file to
---
[DEFAULT]
test_command=foo $IDOPTION
test_id_option=--bar $IDFILE
---
will cause 'testr run' to run 'foo | testr load', and 'testr run --failing'
to run 'foo --bar failing.list | testr load'.
The full list of options and variables for .testr.conf:
* test_command -- command line to run to execute tests.
* test_id_option -- the value to substitute into test_command when specific
test ids should be run.
* test_id_list_default -- the value to use for $IDLIST when no specific
test ids are being run.
* $IDOPTION -- the variable to use to trigger running some specific tests.
* $IDFILE -- A file created before the test command is run and deleted
afterwards which contains a list of test ids, one per line. This can
handle test ids with embedded whitespace.
* $IDLIST -- A list of the test ids to run, separated by spaces. IDLIST
defaults to an empty string when no test ids are known and no explicit
default is provided. This will not handle test ids with spaces.
"""
class run(Command):
__doc__ = """Run the tests for a project and load them into testrepository.
""" + testrconf_help
options = [optparse.Option("--failing", action="store_true",
default=False, help="Run only tests known to be failing.")]
options = [
optparse.Option("--failing", action="store_true",
default=False, help="Run only tests known to be failing."),
optparse.Option("--parallel", action="store_true",
default=False, help="Run tests in parallel processes."),
optparse.Option("--partial", action="store_true",
default=False, help="Only some tests will be run. Implied by --failing."),
]
args = [StringArgument('testargs', 0, None)]
# Can be assigned to to inject a custom command factory.
command_factory = TestCommand
def run(self):
parser = ConfigParser.ConfigParser()
if not parser.read(os.path.join(self.ui.here, '.testr.conf')):
raise ValueError("No .testr.conf config file")
try: