Commit 0f67bc75 authored by Hugo Lefeuvre

New upstream version 3.1.1

parent 7d70f0d2
......@@ -3,12 +3,14 @@ parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)(?P<pre>.*)
serialize =
{major}.{minor}.{patch}{pre}
{major}.{minor}.{patch}
current_version = 3.1.0a1
current_version = 3.1.1
commit = True
tag = True
[bumpversion:file:setup.py]
[bumpversion:file:README.rst]
[bumpversion:file:docs/conf.py]
[bumpversion:file:src/pytest_benchmark/__init__.py]
......
......@@ -9,7 +9,7 @@ cookiecutter:
codeclimate: 'yes'
codecov: 'yes'
command_line_interface: 'no'
command_line_interface_bin_name: pytest-benchmark
command_line_interface_bin_name: py.test-benchmark
coveralls: 'yes'
distribution_name: pytest-benchmark
email: contact@ionelmc.ro
......@@ -21,7 +21,7 @@ cookiecutter:
project_short_description: A ``py.test`` fixture for benchmarking code. It will
group the tests into rounds that are calibrated to the chosen timer. See calibration_
and FAQ_.
release_date: '2015-11-08'
release_date: unreleased
repo_name: pytest-benchmark
requiresio: 'yes'
scrutinizer: 'yes'
......@@ -31,6 +31,6 @@ cookiecutter:
test_matrix_separate_coverage: 'yes'
test_runner: pytest
travis: 'yes'
version: 3.0.0
version: 3.1.0a1
website: http://blog.ionelmc.ro
year: 2014-2016
year: 2014-2017
......@@ -9,3 +9,8 @@ Authors
* Thomas Waldmann - https://github.com/ThomasWaldmann
* Antonio Cuni - http://antocuni.eu/en/
* Petr Šebek - https://github.com/Artimi
* Swen Kooij - https://github.com/Photonios
* "varac" - https://github.com/varac
* Andre Bianchi - https://github.com/drebs
* Jeremy Dobbins-Bucklad - https://github.com/jad-b
* Alexey Popravka - https://github.com/popravich
......@@ -2,6 +2,51 @@
Changelog
=========
3.1.1 (2017-07-26)
------------------
* Fixed loading data from old JSON files (missing ``ops`` field, see
`#81 <https://github.com/ionelmc/pytest-benchmark/issues/81>`_).
* Fixed regression on broken SCM (see
`#82 <https://github.com/ionelmc/pytest-benchmark/issues/82>`_).
3.1.0 (2017-07-21)
------------------
* Added "operations per second" (``ops`` field in ``Stats``) metric --
shows the call rate of code being tested. Contributed by Alexey Popravka in
`#78 <https://github.com/ionelmc/pytest-benchmark/pull/78>`_.
* Added a ``time`` field in ``commit_info``. Contributed by "varac" in
`#71 <https://github.com/ionelmc/pytest-benchmark/pull/71>`_.
* Added an ``author_time`` field in ``commit_info``. Contributed by "varac" in
`#75 <https://github.com/ionelmc/pytest-benchmark/pull/75>`_.
* Fixed the leaking of credentials by masking the URL printed when storing
data to elasticsearch.
* Added a ``--benchmark-netrc`` option to use credentials from a netrc file when
storing data to elasticsearch. Both contributed by Andre Bianchi in
`#73 <https://github.com/ionelmc/pytest-benchmark/pull/73>`_.
* Fixed docs on hooks. Contributed by Andre Bianchi in `#74 <https://github.com/ionelmc/pytest-benchmark/pull/74>`_.
* Removed ``git`` and ``hg`` as system dependencies when guessing the project name.
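
As a minimal sketch of where the new ``ops`` metric shows up, an autosaved run
can be inspected like this (the file path below is a placeholder for whatever
``--benchmark-autosave`` wrote under your storage location):

.. code-block:: python

    import json

    # Placeholder path; autosaved runs land under the --benchmark-storage
    # location (file://./.benchmarks by default).
    with open(".benchmarks/Linux-CPython-3.6-64bit/0001_run.json") as fh:
        data = json.load(fh)

    for bench in data["benchmarks"]:
        # "ops" is the call rate of the benchmarked code, roughly 1 / stats["mean"].
        print(bench["fullname"], bench["stats"]["ops"])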
3.1.0a2 (2017-03-27)
--------------------
* ``machine_info`` now contains more detailed information about the CPU, in
particular the exact model. Contributed by Antonio Cuni in `#61 <https://github.com/ionelmc/pytest-benchmark/pull/61>`_.
* Added ``benchmark.extra_info``, which you can use to save arbitrary stuff in
the JSON. Contributed by Antonio Cuni in the same PR as above.
* Fixed support for the latest PyGal version (histograms). Contributed by Swen Kooij in
`#68 <https://github.com/ionelmc/pytest-benchmark/pull/68>`_.
* Added support for getting ``commit_info`` when not running in the root of the repository. Contributed by Vara Canero in
`#69 <https://github.com/ionelmc/pytest-benchmark/pull/69>`_.
* Added short form for ``--storage``/``--verbose`` options in CLI.
* Added an alternate ``pytest-benchmark`` CLI bin (in addition to ``py.test-benchmark``) to match the madness in pytest.
* Fixed some issues with ``--help`` in the CLI.
* Improved git remote parsing (for ``commit_info`` in JSON outputs).
* Fixed the default value for ``--benchmark-columns``.
* Fixed comparison mode (loading was done too late).
* Removed the project name from the autosave name, restoring the brief naming used in 3.0.
3.1.0a1 (2016-10-29)
--------------------
......
Copyright (c) 2014-2016, Ionel Cristian Mărieș
Copyright (c) 2014-2017, Ionel Cristian Mărieș
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the
......
......@@ -14,7 +14,8 @@ Overview
| |coveralls| |codecov|
| |landscape| |scrutinizer| |codacy| |codeclimate|
* - package
- |version| |downloads| |wheel| |supported-versions| |supported-implementations|
- | |version| |wheel| |supported-versions| |supported-implementations|
| |commits-since|
.. |docs| image:: https://readthedocs.org/projects/pytest-benchmark/badge/?style=flat
:target: https://readthedocs.org/projects/pytest-benchmark
......@@ -56,27 +57,27 @@ Overview
:target: https://codeclimate.com/github/ionelmc/pytest-benchmark
:alt: CodeClimate Quality Status
.. |version| image:: https://img.shields.io/pypi/v/pytest-benchmark.svg?style=flat
.. |version| image:: https://img.shields.io/pypi/v/pytest-benchmark.svg
:alt: PyPI Package latest release
:target: https://pypi.python.org/pypi/pytest-benchmark
.. |downloads| image:: https://img.shields.io/pypi/dm/pytest-benchmark.svg?style=flat
:alt: PyPI Package monthly downloads
:target: https://pypi.python.org/pypi/pytest-benchmark
.. |commits-since| image:: https://img.shields.io/github/commits-since/ionelmc/pytest-benchmark/v3.1.1.svg
:alt: Commits since latest release
:target: https://github.com/ionelmc/pytest-benchmark/compare/v3.1.1...master
.. |wheel| image:: https://img.shields.io/pypi/wheel/pytest-benchmark.svg?style=flat
.. |wheel| image:: https://img.shields.io/pypi/wheel/pytest-benchmark.svg
:alt: PyPI Wheel
:target: https://pypi.python.org/pypi/pytest-benchmark
.. |supported-versions| image:: https://img.shields.io/pypi/pyversions/pytest-benchmark.svg?style=flat
.. |supported-versions| image:: https://img.shields.io/pypi/pyversions/pytest-benchmark.svg
:alt: Supported versions
:target: https://pypi.python.org/pypi/pytest-benchmark
.. |supported-implementations| image:: https://img.shields.io/pypi/implementation/pytest-benchmark.svg?style=flat
.. |supported-implementations| image:: https://img.shields.io/pypi/implementation/pytest-benchmark.svg
:alt: Supported implementations
:target: https://pypi.python.org/pypi/pytest-benchmark
.. |scrutinizer| image:: https://img.shields.io/scrutinizer/g/ionelmc/pytest-benchmark/master.svg?style=flat
.. |scrutinizer| image:: https://img.shields.io/scrutinizer/g/ionelmc/pytest-benchmark/master.svg
:alt: Scrutinizer Status
:target: https://scrutinizer-ci.com/g/ionelmc/pytest-benchmark/
......
......@@ -5,10 +5,8 @@ This is a port of https://github.com/pypa/python-packaging-user-guide/blob/maste
with various fixes and improvements that just weren't feasible to implement in PowerShell.
"""
from __future__ import print_function
from os import environ
from os.path import exists
from subprocess import CalledProcessError
from subprocess import check_call
try:
......@@ -20,22 +18,20 @@ BASE_URL = "https://www.python.org/ftp/python/"
GET_PIP_URL = "https://bootstrap.pypa.io/get-pip.py"
GET_PIP_PATH = "C:\get-pip.py"
URLS = {
("2.6", "64"): BASE_URL + "2.6.6/python-2.6.6.amd64.msi",
("2.6", "32"): BASE_URL + "2.6.6/python-2.6.6.msi",
("2.7", "64"): BASE_URL + "2.7.10/python-2.7.10.amd64.msi",
("2.7", "32"): BASE_URL + "2.7.10/python-2.7.10.msi",
("2.7", "64"): BASE_URL + "2.7.10/python-2.7.13.amd64.msi",
("2.7", "32"): BASE_URL + "2.7.10/python-2.7.13.msi",
# NOTE: no .msi installer for 3.3.6
("3.3", "64"): BASE_URL + "3.3.3/python-3.3.3.amd64.msi",
("3.3", "32"): BASE_URL + "3.3.3/python-3.3.3.msi",
("3.4", "64"): BASE_URL + "3.4.3/python-3.4.3.amd64.msi",
("3.4", "32"): BASE_URL + "3.4.3/python-3.4.3.msi",
("3.5", "64"): BASE_URL + "3.5.0/python-3.5.0-amd64.exe",
("3.5", "32"): BASE_URL + "3.5.0/python-3.5.0.exe",
("3.3", "64"): BASE_URL + "3.3.3/python-3.3.5.amd64.msi",
("3.3", "32"): BASE_URL + "3.3.3/python-3.3.5.msi",
("3.4", "64"): BASE_URL + "3.4.3/python-3.4.6.amd64.msi",
("3.4", "32"): BASE_URL + "3.4.3/python-3.4.6.msi",
("3.5", "64"): BASE_URL + "3.5.0/python-3.5.3-amd64.exe",
("3.5", "32"): BASE_URL + "3.5.0/python-3.5.3.exe",
("3.6", "64"): BASE_URL + "3.6.0/python-3.6.0-amd64.exe",
("3.6", "32"): BASE_URL + "3.6.0/python-3.6.0.exe",
}
INSTALL_CMD = {
# Commands are allowed to fail only if they are not the last command. Eg: uninstall (/x) allowed to fail.
"2.6": [["msiexec.exe", "/L*+!", "install.log", "/qn", "/x", "{path}"],
["msiexec.exe", "/L*+!", "install.log", "/qn", "/i", "{path}", "TARGETDIR={home}"]],
"2.7": [["msiexec.exe", "/L*+!", "install.log", "/qn", "/x", "{path}"],
["msiexec.exe", "/L*+!", "install.log", "/qn", "/i", "{path}", "TARGETDIR={home}"]],
"3.3": [["msiexec.exe", "/L*+!", "install.log", "/qn", "/x", "{path}"],
......@@ -43,6 +39,7 @@ INSTALL_CMD = {
"3.4": [["msiexec.exe", "/L*+!", "install.log", "/qn", "/x", "{path}"],
["msiexec.exe", "/L*+!", "install.log", "/qn", "/i", "{path}", "TARGETDIR={home}"]],
"3.5": [["{path}", "/quiet", "TargetDir={home}"]],
"3.6": [["{path}", "/quiet", "TargetDir={home}"]],
}
......@@ -73,7 +70,7 @@ def install_python(version, arch, home):
print("Running:", " ".join(cmd))
try:
check_call(cmd)
except CalledProcessError as exc:
except Exception as exc:
print("Failed command", cmd, "with:", exc)
if exists("install.log"):
with open("install.log") as fh:
......
language: python
sudo: false
cache: pip
env:
global:
- LD_PRELOAD=/lib/x86_64-linux-gnu/libSegFault.so
- SEGFAULT_SIGNALS=all
matrix:
- TOXENV=check
- TOXENV=docs
matrix:
include:
- python: '3.5'
env: TOXENV=check
- python: '3.5'
env: TOXENV=docs
{% for env in tox_environments %}{{ '' }}
- python: '{% if env.startswith('pypy-') %}pypy{% else %}{{ env[2] }}.{{ env[3] }}{% endif %}'
env: TOXENV={{ env }}{% if 'cover' in env %},coveralls,codecov{% endif -%}
{% endfor %}
- python: '{{ '{0[0]}-5.4'.format(env.split('-')) if env.startswith('pypy') else '{0[2]}.{0[3]}'.format(env) }}'
env:
- TOXENV={{ env }}{% if 'cover' in env %},report,coveralls,codecov{% endif -%}
{% endfor %}{{ '' }}
before_install:
- python --version
- uname -a
- lsb_release -a
install:
- pip install -U tox setuptools wheel $(python -V |& grep -q 'Python 3.2' && echo 'pip<8.0 virtualenv<14.0')
- pip install -U tox wheel $(if python -V |& grep -q 'Python 3.2'; then echo 'pip<8.0 virtualenv<14.0 setuptools<30.0'; else echo setuptools; fi)
- virtualenv --version
- easy_install --version
- pip --version
- tox --version
- |
set -ex
if [[ $TRAVIS_PYTHON_VERSION == 'pypy' ]]; then
(cd $HOME
wget https://bitbucket.org/squeaky/portable-pypy/downloads/pypy-5.7.1-linux_x86_64-portable.tar.bz2
tar xf pypy-5.7.1-linux_x86_64-portable.tar.bz2
pypy-5.7.1-linux_x86_64-portable/bin/pypy -m ensurepip
pypy-5.7.1-linux_x86_64-portable/bin/pypy -m pip install -U virtualenv)
export PATH=$HOME/pypy-5.7.1-linux_x86_64-portable/bin/:$PATH
export TOXPYTHON=$HOME/pypy-5.7.1-linux_x86_64-portable/bin/pypy
fi
if [[ $TRAVIS_PYTHON_VERSION == 'pypy3' ]]; then
(cd $HOME
wget https://bitbucket.org/squeaky/portable-pypy/downloads/pypy3.5-5.7.1-beta-linux_x86_64-portable.tar.bz2
tar xf pypy3.5-5.7.1-beta-linux_x86_64-portable.tar.bz2
pypy3.5-5.7.1-beta-linux_x86_64-portable/bin/pypy3 -m ensurepip
pypy3.5-5.7.1-beta-linux_x86_64-portable/bin/pypy3 -m pip install -U virtualenv)
export PATH=$HOME/pypy3.5-5.7.1-beta-linux_x86_64-portable/bin/:$PATH
export TOXPYTHON=$HOME/pypy3.5-5.7.1-beta-linux_x86_64-portable/bin/pypy3
fi
set +x
script:
- tox -v
after_failure:
- more .tox/log/* | cat
- more .tox/*/log/* | cat
before_cache:
- rm -rf $HOME/.cache/pip/log
cache:
directories:
- $HOME/.cache/pip
notifications:
email:
on_success: never
......
......@@ -11,14 +11,14 @@ environment:
PYTHON_VERSION: '2.7'
PYTHON_ARCH: '32'
{% for env in tox_environments %}{% if env.startswith(('py27', 'py34', 'py35')) %}
- TOXENV: '{{ env }}{% if 'cover' in env %},codecov{% endif %}'
{% for env in tox_environments %}{% if env.startswith(('py27', 'py3')) %}
- TOXENV: '{{ env }}{% if 'cover' in env %},report,codecov{% endif %}'
TOXPYTHON: C:\Python{{ env[2:4] }}\python.exe
PYTHON_HOME: C:\Python{{ env[2:4] }}
PYTHON_VERSION: '{{ env[2] }}.{{ env[3] }}'
PYTHON_ARCH: '32'
- TOXENV: '{{ env }}{% if 'cover' in env %},codecov{%- endif %}'
- TOXENV: '{{ env }}{% if 'cover' in env %},report,codecov{%- endif %}'
TOXPYTHON: C:\Python{{ env[2:4] }}-x64\python.exe
{%- if env.startswith(('py2', 'py33', 'py34')) %}
......
......@@ -2,7 +2,7 @@
envlist =
clean,
check,
{py26,py27,py33,py34,py35,pypy}-{pytest28,pytest29,pytest30}-{pygal20,pygal21,pygal22,pygal23}-{nodist,xdist}-{cover,nocov},
{py26,py27,py33,py34,py35,py36,pypy,pypy3}-{pytest28,pytest29,pytest30}-{pygal22,pygal23}-{nodist,xdist}-{cover,nocov},
{py32}-{pytest28,pytest29},
report,
docs
......@@ -10,42 +10,44 @@ envlist =
[testenv]
basepython =
pypy: {env:TOXPYTHON:pypy}
pypy3: {env:TOXPYTHON:pypy3}
py26: {env:TOXPYTHON:python2.6}
{py27,docs}: {env:TOXPYTHON:python2.7}
py32: {env:TOXPYTHON:python3.2}
py33: {env:TOXPYTHON:python3.3}
py34: {env:TOXPYTHON:python3.4}
py35: {env:TOXPYTHON:python3.5}
{clean,report,check,codecov,coveralls}: {env:TOXPYTHON:python3.4}
py36: {env:TOXPYTHON:python3.6}
{clean,bootstrap,report,check,codecov,coveralls}: {env:TOXPYTHON:python}
setenv =
PYTHONPATH={toxinidir}/tests
PYTHONUNBUFFERED=yes
COV_CORE_SOURCE={toxinidir}/src
COV_CORE_CONFIG={toxinidir}/.coveragerc
COV_CORE_DATAFILE={toxinidir}/.coverage.eager
passenv =
*
deps =
pytest-instafail==0.3.0
xdist: pytest-xdist==1.15.0
xdist: pytest-xdist==1.16.0
{py26,py27,py32,py33,pypy}: statistics==1.0.3.5
{py26,py27,py32,py33,pypy}: pathlib==1.0.1
{py26,py27,py32,pypy}: mock==2.0.0
{py26,py32}: py-cpuinfo<3.0.0
pytest28: pytest==2.8.7
pytest29: pytest==2.9.2
pytest30: pytest==3.0.2
pytest30: pytest==3.0.7
pytest-travis-fold
cover: pytest-cov
cover: coverage
pypy: jitviewer
aspectlib==1.4.2
pygal23: pygal==2.3.0
pygal23: pygal==2.3.1
pygal22: pygal==2.2.3
pygal21: pygal==2.1.1
pygal20: pygal==2.0.13
pygaljs==1.0.1
freezegun==0.3.7
freezegun==0.3.8
hunter==1.4.1
elasticsearch==2.4.0
elasticsearch==5.3.0
commands =
cover: {posargs:py.test --cov=src --cov-report=term-missing --cov-append -vv}
nocov: {posargs:py.test -vv tests}
......@@ -79,7 +81,6 @@ commands =
sphinx-build -b linkcheck docs dist/docs
[testenv:check]
basepython = python3.4
deps =
docutils
check-manifest
......@@ -98,10 +99,9 @@ commands =
[testenv:coveralls]
deps =
coveralls
urllib3[secure]
skip_install = true
commands =
coverage combine --append
coverage report
coveralls []
[testenv:codecov]
......@@ -109,8 +109,6 @@ deps =
codecov
skip_install = true
commands =
coverage combine --append
coverage report
coverage xml --ignore-errors
codecov []
......
......@@ -23,10 +23,10 @@ if os.getenv('SPELLCHECK'):
source_suffix = '.rst'
master_doc = 'index'
project = 'pytest-benchmark'
year = '2014-2016'
year = '2014-2017'
author = 'Ionel Cristian Mărieș'
copyright = '{0}, {1}'.format(year, author)
version = release = '3.1.0a1'
version = release = '3.1.1'
pygments_style = 'trac'
templates_path = ['.']
......
......@@ -186,6 +186,19 @@ You can set per-test options with the ``benchmark`` marker:
# Note: this code is not measured.
assert result is None
Extra info
==========
You can set arbitrary values in the ``benchmark.extra_info`` dictionary, which
will be saved in the JSON if you use ``--benchmark-autosave`` or similar:
.. code-block:: python
def test_my_stuff(benchmark):
benchmark.extra_info['foo'] = 'bar'
benchmark(time.sleep, 0.02)
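
When the run is saved, these values end up under each benchmark's ``extra_info``
key in the JSON. A rough, abridged sketch of one saved entry (field values are
illustrative):

.. code-block:: python

    # Abridged shape of one entry in the saved JSON's "benchmarks" list:
    # {
    #     "name": "test_my_stuff",
    #     "extra_info": {"foo": "bar"},
    #     "stats": {"min": ..., "mean": ..., "ops": ...},
    # }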
Patch utilities
===============
......
......@@ -5,7 +5,7 @@ universal = 1
max-line-length = 140
exclude = tests/*,*/migrations/*,*/south_migrations/*
[pytest]
[tool:pytest]
norecursedirs =
.git
.tox
......@@ -36,3 +36,4 @@ line_length=120
known_first_party=pytest_benchmark
default_section=THIRDPARTY
forced_separate=test_pytest_benchmark
not_skip = __init__.py
......@@ -24,7 +24,7 @@ def read(*names, **kwargs):
setup(
name='pytest-benchmark',
version='3.1.0a1',
version='3.1.1',
license='BSD',
description='A ``py.test`` fixture for benchmarking code. '
'It will group the tests into rounds that are calibrated to the chosen timer. '
......@@ -57,6 +57,7 @@ setup(
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy',
'Topic :: Software Development :: Testing',
......@@ -68,6 +69,7 @@ setup(
],
install_requires=[
'pytest>=2.8',
'py-cpuinfo',
],
extras_require={
'aspect': ['aspectlib'],
......@@ -80,7 +82,8 @@ setup(
'benchmark = pytest_benchmark.plugin'
],
'console_scripts': [
'py.test-benchmark = pytest_benchmark.cli:main'
'py.test-benchmark = pytest_benchmark.cli:main',
'pytest-benchmark = pytest_benchmark.cli:main',
]
}
......
__version__ = "3.1.0a1"
__version__ = "3.1.1"
from pytest_benchmark.cli import main
if __name__ == "__main__":
main()
......@@ -16,6 +16,7 @@ from .utils import first_or_value
from .utils import load_storage
from .utils import report_noprogress
COMPARE_HELP = '''examples:
pytest-benchmark {0} 'Linux-CPython-3.5-64bit/*'
......@@ -34,8 +35,8 @@ COMPARE_HELP = '''examples:
class HelpAction(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
namespace.command = values
namespace.help = True
namespace.command = values or 'help'
class CommandArgumentParser(argparse.ArgumentParser):
......@@ -64,7 +65,7 @@ class CommandArgumentParser(argparse.ArgumentParser):
def add_command(self, name, **opts):
if self.commands is None:
self.commands = self.add_subparsers(
title='commands', dest='command', parser_class=argparse.ArgumentParser
title='commands', dest='command', parser_class=argparse.ArgumentParser,
)
self.commands_dispatch = {}
if 'description' in opts and 'help' not in opts:
......@@ -147,7 +148,7 @@ def main():
parser = make_parser()
args = parser.parse_args()
logger = Logger(args.verbose)
storage = load_storage(args.storage, logger=logger)
storage = load_storage(args.storage, logger=logger, netrc=args.netrc)
if args.command == 'list':
for file in storage.query():
......@@ -164,8 +165,8 @@ def main():
output_file, = args.csv
results_csv.render(output_file, groups)
else:
parser.error("Unknown command {0!r}".format(args.command))
elif args.command is None:
parser.error("missing command (available commands: %s)" % ', '.join(map(repr, parser.commands.choices)))
class TerminalReporter(object):
......
......@@ -51,6 +51,7 @@ class BenchmarkFixture(object):
self.params = None
self.group = group
self.has_error = False
self.extra_info = {}
self._disable_gc = disable_gc
self._timer = timer.target
......
from collections import Iterable
import py
from .utils import TIME_UNITS
......@@ -5,7 +7,6 @@ from .utils import slugify
try:
from pygal.graph.box import Box
from pygal.graph.graph import is_list_like
from pygal.style import DefaultStyle
except ImportError as exc:
raise ImportError(exc.args, "Please install pygal and pygaljs or pytest-benchmark[histogram]")
......@@ -28,7 +29,7 @@ class CustomBox(Box):
val = x.values
else:
val = x
if is_list_like(val):
if isinstance(val, Iterable):
return self._value_format(val), val[7]
else:
return sup(x, *args)
......
......@@ -4,13 +4,13 @@ def pytest_benchmark_generate_machine_info(config):
.. sourcecode:: python
def pytest_benchmark_update_machine_info(config):
def pytest_benchmark_generate_machine_info(config):
return {'user': getpass.getuser()}
"""
pass
def pytest_benchmark_update_machine_info(config, info):
def pytest_benchmark_update_machine_info(config, machine_info):
"""
If benchmarks are compared and machine_info is different, then warnings will be shown.
......@@ -18,8 +18,8 @@ def pytest_benchmark_update_machine_info(config, info):
.. sourcecode:: python
def pytest_benchmark_update_machine_info(config, info):
info['user'] = getpass.getuser()
def pytest_benchmark_update_machine_info(config, machine_info):
machine_info['user'] = getpass.getuser()
"""
pass
......@@ -36,14 +36,14 @@ def pytest_benchmark_generate_commit_info(config):
pass
def pytest_benchmark_update_commit_info(config, info):
def pytest_benchmark_update_commit_info(config, commit_info):
"""
To add something into the commit_info, like the commit message, do something like this:
.. sourcecode:: python
def pytest_benchmark_update_commit_info(config, info):
info['message'] = subprocess.check_output(['git', 'log', '-1', '--pretty=%B']).strip()
def pytest_benchmark_update_commit_info(config, commit_info):
commit_info['message'] = subprocess.check_output(['git', 'log', '-1', '--pretty=%B']).strip()
"""
pass
......
......@@ -45,9 +45,11 @@ class Logger(object):
self.term.line(text, red=True, bold=True)
self.term.sep("-", red=True, bold=True)
def info(self, text, **kwargs):
def info(self, text, newline=True, **kwargs):
if not kwargs or kwargs == {'bold': True}:
kwargs['purple'] = True
if newline:
self.term.line("")
self.term.line(text, **kwargs)
def debug(self, text, **kwargs):
......
......@@ -68,8 +68,9 @@ def add_display_options(addoption, prefix="benchmark-"):
addoption(
"--{0}columns".format(prefix),
metavar="LABELS", type=parse_columns,
default="min, max, mean, stddev, median, iqr, outliers, rounds, iterations",
help="Comma-separated list of columns to show in the result table. Default: %(default)r"
default=["min", "max", "mean", "stddev", "median", "iqr", "outliers", "ops", "rounds", "iterations"],
help="Comma-separated list of columns to show in the result table. Default: "
"'min, max, mean, stddev, median, iqr, outliers, rounds, iterations'"
)
addoption(
"--{0}name".format(prefix),
......@@ -101,15 +102,20 @@ def add_csv_options(addoption, prefix="benchmark-"):
def add_global_options(addoption, prefix="benchmark-"):
addoption(
"--{0}storage".format(prefix),
"--{0}storage".format(prefix), *[] if prefix else ['-s'],
metavar="URI", default="file://./.benchmarks",
help="Specify a path to store the runs as uri in form file://path or"
" elasticsearch+http[s]://host1,host2/[index/doctype?project_name=Project] "
"(when --benchmark-save or --benchmark-autosave are used). For backwards compatibility unexpected values "
"are converted to file://<value>. Default: %(default)r.",
"are converted to file://<value>. Default: %(default)r."
)
addoption(
"--{0}verbose".format(prefix),
"--{0}netrc".format(prefix),
nargs="?", default='', const='~/.netrc',
help="Load elasticsearch credentials from a netrc file. Default: %(default)r.",
)
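# For reference, a matching netrc entry has the standard netrc form (the host
# name below is illustrative):
#   machine elastic.example.com
#   login some_user
#   password some_password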
addoption(
"--{0}verbose".format(prefix), *[] if prefix else ['-v'],
action="store_true", default=False,
help="Dump diagnostic and progress information."
)
......@@ -255,6 +261,7 @@ def pytest_benchmark_compare_machine_info(config, benchmarksession, machine_info
fslocation=benchmarksession.storage.location
)
if hasattr(pytest, 'hookimpl'):
_hookwrapper = pytest.hookimpl(hookwrapper=True)
else:
......@@ -322,6 +329,16 @@ def pytest_terminal_summary(terminalreporter):
raise
def get_cpu_info():
import cpuinfo
all_info = cpuinfo.get_cpu_info()
all_info = all_info or {}
info = {}
for key in ('vendor_id', 'hardware', 'brand'):
info[key] = all_info.get(key, 'unknown')
return info
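# Example of the dict returned above (values depend on py-cpuinfo and the host;
# 'hardware' is typically only meaningful on ARM boards and falls back to
# 'unknown' elsewhere), e.g.:
#   {'vendor_id': 'GenuineIntel', 'hardware': 'unknown',
#    'brand': 'Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz'}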
def pytest_benchmark_generate_machine_info():
python_implementation = platform.python_implementation()
python_implementation_version = platform.python_version()
......@@ -339,7 +356,8 @@ def pytest_benchmark_generate_machine_info():
"python_version": platform.python_version(),
"python_build": platform.python_build(),
"release": platform.release(),
"system": platform.system()
"system": platform.system(),
"cpu": get_cpu_info(),
}
......@@ -406,5 +424,6 @@ def pytest_runtest_setup(item):
@pytest.mark.trylast # force the other plugins to initialise, fixes issue with capture not being properly initialised
def pytest_configure(config):
config.addinivalue_line("markers", "benchmark: mark a test with custom benchmark settings.")
config._benchmarksession = BenchmarkSession(config)
config.pluginmanager.register(config._benchmarksession, "pytest-benchmark")
bs = config._benchmarksession = BenchmarkSession(config)
bs.handle_loading()
config.pluginmanager.register(bs, "pytest-benchmark")
......@@ -36,7 +36,8 @@ class BenchmarkSession(object):
self.storage = load_storage(
config.getoption("benchmark_storage"),
logger=self.logger,
default_machine_id=self.machine_id
default_machine_id=self.machine_id,
netrc=config.getoption("benchmark_netrc")
)
self.options = dict(
min_time=SecondsDecimal(config.getoption("benchmark_min_time")),
......@@ -130,8 +131,11 @@ class BenchmarkSession(object):
self.logger.info("Wrote benchmark data in: %s" % self.json, purple=True)
def handle_saving(self):
save = self.benchmarks and self.save or self.autosave
save = self.save or self.autosave
if save or self.json:
if not self.benchmarks:
self.logger.warn("BENCHMARK-U2", "Not saving anything, no benchmarks have been run!")
return
commit_info = self.config.hook.pytest_benchmark_generate_commit_info(config=self.config)
self.config.hook.pytest_benchmark_update_commit_info(config=self.config, commit_info=commit_info)
......@@ -193,12 +197,11 @@ class BenchmarkSession(object):
compared_mapping[path] = dict(
(bench['fullname'], bench) for bench in compared_benchmark['benchmarks']
)
self.logger.info("Comparing against benchmarks from: %s" % path)
self.logger.info("Comparing against benchmarks from: %s" % path, newline=False)
self.compared_mapping = compared_mapping
def finish(self):
self.handle_saving()
self.handle_loading()
prepared_benchmarks = list(self.prepare_benchmarks())
if prepared_benchmarks:
self.groups = self.config