Commit 67477d37 authored by Ole Streicher

New upstream version 0.1

Tags: upstream/0.1
0.1 (2016-11-26)
----------------
- Initial version
LICENSE 0 → 100755
Copyright (c) 2016, Thomas P. Robitaille
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
This package was adapted from pytest-mpl, which is released under a BSD
license and can be found here:
https://github.com/astrofrog/pytest-mpl
include LICENSE
include README.rst
include CHANGES.md
include tox.ini
recursive-include tests *.py *.fits *.txt
PKG-INFO 0 → 100644
Metadata-Version: 1.0
Name: pytest-arraydiff
Version: 0.1
Summary: pytest plugin to help with comparing array output from tests
Home-page: https://github.com/astrofrog/pytest-arraydiff
Author: Thomas Robitaille
Author-email: thomas.robitaille@gmail.com
License: BSD
Description: |Travis Build Status| |AppVeyor Build status| |Coveralls coverage|

About
-----

This is a `py.test <http://pytest.org>`__ plugin to facilitate the
generation and comparison of arrays produced during tests (it is a
spin-off from `pytest-mpl <https://github.com/astrofrog/pytest-mpl>`__).

The basic idea is that you can write a test that generates a Numpy
array. You can then either run the tests in a mode to **generate**
reference files from the arrays, or you can run the tests in
**comparison** mode, which will compare the results of the tests to the
reference ones within some tolerance.

At the moment, the supported file formats for the reference files are:

- The FITS format (requires `astropy <http://www.astropy.org>`__)
- A plain text-based format (based on Numpy ``savetxt`` output)

For more information on how to write tests to do this, see the **Using**
section below.

Installing
----------

This plugin is compatible with Python 2.7, and 3.3 and later, and
requires `pytest <http://pytest.org>`__ and
`numpy <http://www.numpy.org>`__ to be installed.

To install, you can do:

::

    pip install pytest-arraydiff

You can check that the plugin is registered with pytest by doing:

::

    py.test --version

which will show a list of plugins:

::

    This is pytest version 2.7.1, imported from ...
    setuptools registered plugins:
      pytest-arraydiff-0.1 at ...

Using
-----

To use, you simply need to mark the function where you want to compare
arrays using ``@pytest.mark.array_compare``, and make sure that the
function returns a plain Numpy array:

.. code:: python

    import pytest
    import numpy as np

    @pytest.mark.array_compare
    def test_succeeds():
        return np.arange(3 * 5 * 4).reshape((3, 5, 4))

To generate the reference files, run the tests with the
``--arraydiff-generate-path`` option, giving the name of the directory
where the generated files should be placed:

::

    py.test --arraydiff-generate-path=reference

If the directory does not exist, it will be created. The directory will
be interpreted as being relative to where you are running ``py.test``.
Make sure you manually check the generated reference files to ensure
they are correct.

Once you are happy with the generated reference files, you should move
them to a sub-directory called ``reference`` relative to the test files
(this name is configurable, see below). You can also generate the
reference files directly in the right directory.

You can then run the tests simply with:

::

    py.test --arraydiff

and the tests will pass if the arrays are the same. If you omit the
``--arraydiff`` option, the tests will run but will only check that the
code runs, without checking the output arrays.

Options
-------

The ``@pytest.mark.array_compare`` marker takes an argument to specify
the format to use for the reference files:

.. code:: python

    @pytest.mark.array_compare(file_format='text')
    def test_image():
        ...

The default file format can also be specified using the
``--arraydiff-default-format=<format>`` flag when running ``py.test``;
``<format>`` should be either ``fits`` or ``text``.

The supported formats at this time are ``text`` and ``fits``, and
contributions for other formats are welcome. The default format is
``text``.

Another argument is the relative tolerance for floating point values
(which defaults to 1e-7):

.. code:: python

    @pytest.mark.array_compare(rtol=20)
    def test_image():
        ...
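Under the hood the comparison is done with NumPy's ``assert_allclose``, which checks ``|actual - desired| <= atol + rtol * |desired|`` element by element. A minimal pure-Python illustration of that criterion (the helper name is illustrative, not part of the plugin):

```python
# Illustration of the elementwise tolerance criterion used by
# numpy.testing.assert_allclose: |actual - desired| <= atol + rtol * |desired|

def within_tolerance(actual, desired, rtol=1e-7, atol=0.0):
    """Return True if a single value pair passes the allclose criterion."""
    return abs(actual - desired) <= atol + rtol * abs(desired)

# With the default rtol=1e-7, a relative error of 1e-8 passes...
print(within_tolerance(1.00000001, 1.0))   # True
# ...but a relative error of 1e-6 does not.
print(within_tolerance(1.000001, 1.0))     # False
```

Note that because the default ``atol`` is 0, reference values that are exactly zero must be matched exactly unless you pass an explicit ``atol``.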
You can also pass keyword arguments to the writers using the
``write_kwargs`` argument. For the ``text`` format, these arguments are
passed to ``savetxt``, while for the ``fits`` format they are passed to
Astropy's ``fits.writeto`` function:

.. code:: python

    @pytest.mark.array_compare(file_format='fits', write_kwargs={'output_verify': 'silentfix'})
    def test_image():
        ...

Other options include the name of the reference directory (which
defaults to ``reference``) and the filename for the reference file
(which defaults to the name of the test with a format-dependent
extension):

.. code:: python

    @pytest.mark.array_compare(reference_dir='baseline_images',
                               filename='other_name.fits')
    def test_image():
        ...

The reference directory in the decorator above will be interpreted as
being relative to the test file. Note that the reference directory can
also be a URL (which should start with ``http://`` or ``https://`` and
end in a slash).

Finally, you can also set a custom reference directory globally when
running tests by running ``py.test`` with:

::

    py.test --arraydiff --arraydiff-reference-path=baseline_images

This directory will be interpreted as being relative to where the tests
are run. In addition, if both this option and the ``reference_dir``
option in the ``array_compare`` decorator are used, the one in the
decorator takes precedence.

Test failure example
--------------------

If the arrays produced by the tests match the reference files, the
tests will pass, but if they do not, the tests will fail with a message
similar to the following:

::

    E               AssertionError:
    E
    E               a: /var/folders/zy/t1l3sx310d3d6p0kyxqzlrnr0000gr/T/tmpbvjkzt_q/test_to_mask_rect-mode_subpixels-subpixels_18.txt
    E               b: /var/folders/zy/t1l3sx310d3d6p0kyxqzlrnr0000gr/T/tmpbvjkzt_q/reference-test_to_mask_rect-mode_subpixels-subpixels_18.txt
    E
    E               Not equal to tolerance rtol=1e-07, atol=0
    E
    E               (mismatch 47.22222222222222%)
    E                x: array([[ 0.      ,  0.      ,  0.      ,  0.      ,  0.404012,  0.55    ,
    E                            0.023765,  0.      ,  0.      ],
    E                          [ 0.      ,  0.      ,  0.      ,  0.112037,  1.028704,  1.1     ,...
    E                y: array([[ 0.      ,  0.      ,  0.      ,  0.      ,  0.367284,  0.5     ,
    E                            0.021605,  0.      ,  0.      ],
    E                          [ 0.      ,  0.      ,  0.      ,  0.101852,  0.935185,  1.      ,...

The file paths included in the exception are then available for
inspection.
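The ``mismatch`` percentage in the report is the fraction of elements that violate the tolerance criterion. A rough sketch of how such a figure can be computed over a flat sequence of value pairs (the helper is hypothetical, not the plugin's internals):

```python
# Hypothetical helper: percentage of element pairs failing the
# allclose criterion |x - y| <= atol + rtol * |y|.

def mismatch_percent(x, y, rtol=1e-7, atol=0.0):
    pairs = list(zip(x, y))
    bad = sum(1 for a, b in pairs if abs(a - b) > atol + rtol * abs(b))
    return 100.0 * bad / len(pairs)

# Two of the four pairs differ well beyond rtol=1e-7.
print(mismatch_percent([1.0, 2.0, 3.0, 4.0], [1.0, 2.5, 3.0, 8.0]))  # 50.0
```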
Running the tests for pytest-arraydiff
--------------------------------------

If you are contributing some changes and want to run the tests, first
install the latest version of the plugin, then do:

::

    cd tests
    py.test --arraydiff

The reason for having to install the plugin first is to ensure that the
plugin is correctly loaded as part of the test suite.

.. |Travis Build Status| image:: https://travis-ci.org/astrofrog/pytest-arraydiff.svg?branch=master
   :target: https://travis-ci.org/astrofrog/pytest-arraydiff
.. |AppVeyor Build status| image:: https://ci.appveyor.com/api/projects/status/kwbvm9u79mrq6i0w?svg=true
   :target: https://ci.appveyor.com/project/astrofrog/pytest-arraydiff
.. |Coveralls coverage| image:: https://coveralls.io/repos/matplotlib/pytest-arraydiff/badge.svg
   :target: https://coveralls.io/r/matplotlib/pytest-arraydiff

Platform: UNKNOWN
CHANGES.md
LICENSE
MANIFEST.in
README.rst
setup.py
pytest_arraydiff/__init__.py
pytest_arraydiff/plugin.py
pytest_arraydiff.egg-info/PKG-INFO
pytest_arraydiff.egg-info/SOURCES.txt
pytest_arraydiff.egg-info/dependency_links.txt
pytest_arraydiff.egg-info/entry_points.txt
pytest_arraydiff.egg-info/top_level.txt
tests/test_pytest_arraydiff.py
tests/baseline/test_succeeds_class.fits
tests/baseline/test_succeeds_func_default.txt
tests/baseline/test_succeeds_func_fits.fits
tests/baseline/test_succeeds_func_text.txt
tests/baseline/test_tolerance.fits
[pytest11]
pytest_arraydiff = pytest_arraydiff.plugin
pytest_arraydiff
__version__ = '0.1'
# Copyright (c) 2016, Thomas P. Robitaille
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# This package was derived from pytest-mpl, which is released under a BSD
# license and can be found here:
#
# https://github.com/astrofrog/pytest-mpl
from functools import wraps

import os
import sys
import shutil
import tempfile
import warnings

import pytest
import numpy as np

if sys.version_info[0] == 2:
    from urllib import urlopen
else:
    from urllib.request import urlopen


class FITSDiff(object):

    extension = 'fits'

    @staticmethod
    def read(filename):
        from astropy.io import fits
        return fits.getdata(filename)

    @staticmethod
    def write(filename, array, **kwargs):
        from astropy.io import fits
        return fits.writeto(filename, array, **kwargs)


class TextDiff(object):

    extension = 'txt'

    @staticmethod
    def read(filename):
        return np.loadtxt(filename)

    @staticmethod
    def write(filename, array, **kwargs):
        if 'fmt' not in kwargs:
            kwargs['fmt'] = '%g'
        return np.savetxt(filename, array, **kwargs)


FORMATS = {}
FORMATS['fits'] = FITSDiff
FORMATS['text'] = TextDiff
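The two handlers share a small informal interface: an ``extension`` attribute plus static ``read``/``write`` methods. A quick round-trip sketch of the text handler's behavior (assumes ``numpy`` is installed; the temporary path is illustrative):

```python
import os
import tempfile

import numpy as np

# Round-trip sketch: TextDiff writes with savetxt (fmt='%g' by default)
# and reads back with loadtxt.
array = np.arange(6.0).reshape((2, 3))
path = os.path.join(tempfile.mkdtemp(), 'reference.txt')

np.savetxt(path, array, fmt='%g')   # what TextDiff.write does
recovered = np.loadtxt(path)        # what TextDiff.read does

np.testing.assert_allclose(recovered, array)
```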
def _download_file(url):
    u = urlopen(url)
    result_dir = tempfile.mkdtemp()
    filename = os.path.join(result_dir, 'downloaded')
    with open(filename, 'wb') as tmpfile:
        tmpfile.write(u.read())
    return filename


def pytest_addoption(parser):
    group = parser.getgroup("general")
    group.addoption('--arraydiff', action='store_true',
                    help="Enable comparison of arrays to reference arrays stored in files")
    group.addoption('--arraydiff-generate-path',
                    help="directory to generate reference files in, relative to location where py.test is run",
                    action='store')
    group.addoption('--arraydiff-reference-path',
                    help="directory containing reference files, relative to location where py.test is run",
                    action='store')
    group.addoption('--arraydiff-default-format',
                    help="Default format for the reference arrays (can be 'fits' or 'text' currently)")


def pytest_configure(config):

    if config.getoption("--arraydiff") or config.getoption("--arraydiff-generate-path") is not None:

        reference_dir = config.getoption("--arraydiff-reference-path")
        generate_dir = config.getoption("--arraydiff-generate-path")

        if reference_dir is not None and generate_dir is not None:
            warnings.warn("Ignoring --arraydiff-reference-path since --arraydiff-generate-path is set")

        if reference_dir is not None:
            reference_dir = os.path.abspath(reference_dir)
        if generate_dir is not None:
            generate_dir = os.path.abspath(generate_dir)

        default_format = config.getoption("--arraydiff-default-format") or 'text'

        config.pluginmanager.register(ArrayComparison(config,
                                                      reference_dir=reference_dir,
                                                      generate_dir=generate_dir,
                                                      default_format=default_format))


class ArrayComparison(object):

    def __init__(self, config, reference_dir=None, generate_dir=None, default_format='text'):
        self.config = config
        self.reference_dir = reference_dir
        self.generate_dir = generate_dir
        self.default_format = default_format

    def pytest_runtest_setup(self, item):

        compare = item.keywords.get('array_compare')

        if compare is None:
            return

        file_format = compare.kwargs.get('file_format', self.default_format)

        if file_format not in FORMATS:
            raise ValueError("Unknown format: {0}".format(file_format))

        if 'extension' in compare.kwargs:
            extension = compare.kwargs['extension']
        else:
            extension = FORMATS[file_format].extension

        atol = compare.kwargs.get('atol', 0.)
        rtol = compare.kwargs.get('rtol', 1e-7)

        single_reference = compare.kwargs.get('single_reference', False)

        write_kwargs = compare.kwargs.get('write_kwargs', {})

        original = item.function

        @wraps(item.function)
        def item_function_wrapper(*args, **kwargs):

            reference_dir = compare.kwargs.get('reference_dir', None)
            if reference_dir is None:
                if self.reference_dir is None:
                    reference_dir = os.path.join(os.path.dirname(item.fspath.strpath), 'reference')
                else:
                    reference_dir = self.reference_dir
            else:
                if not reference_dir.startswith(('http://', 'https://')):
                    reference_dir = os.path.join(os.path.dirname(item.fspath.strpath), reference_dir)

            baseline_remote = reference_dir.startswith('http')

            # Run test and get the array object
            import inspect
            if inspect.ismethod(original):  # method
                array = original(*args[1:], **kwargs)
            else:  # function
                array = original(*args, **kwargs)

            # Find test name to use as the reference file name
            filename = compare.kwargs.get('filename', None)
            if filename is None:
                if single_reference:
                    filename = original.__name__ + '.' + extension
                else:
                    filename = item.name + '.' + extension
                filename = filename.replace('[', '_').replace(']', '_')
                filename = filename.replace('_.' + extension, '.' + extension)

            # What we do now depends on whether we are generating the
            # reference files or simply running the test.
            if self.generate_dir is None:

                # Save the test array to a temporary file
                result_dir = tempfile.mkdtemp()
                test_image = os.path.abspath(os.path.join(result_dir, filename))

                FORMATS[file_format].write(test_image, array, **write_kwargs)

                # Find path to the baseline file
                if baseline_remote:
                    baseline_file_ref = _download_file(reference_dir + filename)
                else:
                    baseline_file_ref = os.path.abspath(os.path.join(os.path.dirname(item.fspath.strpath),
                                                                     reference_dir, filename))

                if not os.path.exists(baseline_file_ref):
                    raise Exception("""File not found for comparison test
Generated file:
\t{test}
This is expected for new tests.""".format(test=test_image))

                # distutils may put the baseline files in non-accessible places,
                # copy to our tmpdir to be sure to keep them in case of failure
                baseline_file = os.path.abspath(os.path.join(result_dir, 'reference-' + filename))
                shutil.copyfile(baseline_file_ref, baseline_file)

                array_ref = FORMATS[file_format].read(baseline_file)

                try:
                    np.testing.assert_allclose(array_ref, array, atol=atol, rtol=rtol)
                except AssertionError as exc:
                    message = "\n\na: {0}".format(test_image) + '\n'
                    message += "b: {0}".format(baseline_file) + '\n'
                    message += exc.args[0]
                    raise AssertionError(message)

                shutil.rmtree(result_dir)

            else:

                if not os.path.exists(self.generate_dir):
                    os.makedirs(self.generate_dir)

                FORMATS[file_format].write(os.path.abspath(os.path.join(self.generate_dir, filename)),
                                           array, **write_kwargs)

                pytest.skip("Skipping test, since generating data")

        if item.cls is not None:
            setattr(item.cls, item.function.__name__, item_function_wrapper)
        else:
            item.obj = item_function_wrapper
[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0
setup.py 0 → 100755
from setuptools import setup

from pytest_arraydiff import __version__

# IMPORTANT: we deliberately use rst here instead of markdown because long_description
# needs to be in rst, and requiring pandoc to be installed to convert markdown to rst
# on-the-fly is over-complicated and sometimes the generated rst has warnings that
# cause PyPI to not display it correctly.
with open('README.rst') as infile:
    long_description = infile.read()

setup(
    version=__version__,
    url="https://github.com/astrofrog/pytest-arraydiff",
    name="pytest-arraydiff",
    description='pytest plugin to help with comparing array output from tests',
    long_description=long_description,
    packages=['pytest_arraydiff'],
    license='BSD',
    author='Thomas Robitaille',
    author_email='thomas.robitaille@gmail.com',
    entry_points={'pytest11': ['pytest_arraydiff = pytest_arraydiff.plugin']},
)
File added
0 1 2 3 4
5 6 7 8 9
10 11 12 13 14
File added
0 1 2 3 4
5 6 7 8 9
10 11 12 13 14
File added
import os
import subprocess
import tempfile

import pytest
import numpy as np

reference_dir = 'baseline'


@pytest.mark.array_compare(reference_dir=reference_dir)
def test_succeeds_func_default():
    return np.arange(3 * 5).reshape((3, 5))


@pytest.mark.array_compare(file_format='text', reference_dir=reference_dir)
def test_succeeds_func_text():
    return np.arange(3 * 5).reshape((3, 5))


@pytest.mark.array_compare(file_format='fits', reference_dir=reference_dir)
def test_succeeds_func_fits():
    return np.arange(3 * 5).reshape((3, 5))


class TestClass(object):

    @pytest.mark.array_compare(file_format='fits', reference_dir=reference_dir)
    def test_succeeds_class(self):
        return np.arange(2 * 4 * 3).reshape((2, 4, 3))


TEST_FAILING = """
import pytest
import numpy as np
from astropy.io import fits

@pytest.mark.array_compare
def test_fail():
    return np.ones((3, 4))
"""


def test_fails():

    tmpdir = tempfile.mkdtemp()

    test_file = os.path.join(tmpdir, 'test.py')
    with open(test_file, 'w') as f:
        f.write(TEST_FAILING)

    # If we use --arraydiff, it should detect that the file is missing
    code = subprocess.call('py.test --arraydiff {0}'.format(test_file), shell=True)
    assert code != 0

    # If we don't use the --arraydiff option, the test should succeed
    code = subprocess.call('py.test {0}'.format(test_file), shell=True)
    assert code == 0


TEST_GENERATE = """
import pytest
import numpy as np
from astropy.io import fits

@pytest.mark.array_compare(file_format='{file_format}')
def test_gen():
    return np.arange(6 * 5).reshape((6, 5))
"""


@pytest.mark.parametrize('file_format', ('fits', 'text'))
def test_generate(file_format):

    tmpdir = tempfile.mkdtemp()

    test_file = os.path.join(tmpdir, 'test.py')
    with open(test_file, 'w') as f:
        f.write(TEST_GENERATE.format(file_format=file_format))

    gen_dir = os.path.join(tmpdir, 'spam', 'egg')

    # If we don't generate, the test will fail
    p = subprocess.Popen('py.test --arraydiff {0}'.format(test_file), shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    p.wait()
    output = p.stdout.read()
    assert b'File not found for comparison test' in output

    # If we do generate, the test should succeed and a new file will appear
    code = subprocess.call('py.test --arraydiff-generate-path={0} {1}'.format(gen_dir, test_file), shell=True)
    assert code == 0
    assert os.path.exists(os.path.join(gen_dir, 'test_gen.' + ('fits' if file_format == 'fits' else 'txt')))


TEST_DEFAULT = """
import pytest
import numpy as np
from astropy.io import fits

@pytest.mark.array_compare
def test_default():
    return np.arange(6 * 5).reshape((6, 5))
"""


@pytest.mark.parametrize('file_format', ('fits', 'text'))
def test_default_format(file_format):

    tmpdir = tempfile.mkdtemp()

    test_file = os.path.join(tmpdir, 'test.py')
    with open(test_file, 'w') as f:
        f.write(TEST_DEFAULT)

    gen_dir = os.path.join(tmpdir, 'spam', 'egg')

    # If we do generate, the test should succeed and a new file will appear
    code = subprocess.call('py.test -s --arraydiff-default-format={0}'
                           ' --arraydiff-generate-path={1} {2}'.format(file_format, gen_dir, test_file),
                           shell=True)
    assert code == 0
    assert os.path.exists(os.path.join(gen_dir, 'test_default.' + ('fits' if file_format == 'fits' else 'txt')))


@pytest.mark.array_compare(reference_dir=reference_dir, rtol=0.5, file_format='fits')
def test_tolerance():
    return np.ones((3, 4)) * 1.6


def test_nofile():
    pass