Commit 7d70f0d2 authored by Hugo Lefeuvre

New upstream version 3.1.0a1

parent bccb26e3
......@@ -3,7 +3,7 @@ parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)(?P<pre>.*)
serialize =
{major}.{minor}.{patch}{pre}
{major}.{minor}.{patch}
current_version = 3.0.0
current_version = 3.1.0a1
commit = True
tag = True
......
# This file exists so you can easily regenerate your project.
#
# Unfortunately cookiecutter can't use this right away, so
# you have to copy this file to ~/.cookiecutterrc
# Generated by cookiepatcher, a small shim around cookiecutter (pip install cookiepatcher)
default_context:
appveyor: 'yes'
c_extension_optional: 'no'
c_extension_support: 'no'
codacy: 'yes'
codeclimate: 'yes'
codecov: 'yes'
command_line_interface: 'no'
coveralls: 'yes'
distribution_name: 'pytest-benchmark'
email: 'contact@ionelmc.ro'
full_name: 'Ionel Cristian Mărieș'
github_username: 'ionelmc'
landscape: 'yes'
package_name: 'pytest-benchmark'
project_name: 'pytest-benchmark'
project_short_description: 'A ``py.test`` fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer. See: calibration_.'
release_date: '2015-06-10'
repo_name: 'pytest-benchmark'
requiresio: 'yes'
scrutinizer: 'yes'
sphinx_theme: 'sphinx-py3doc-enhanced-theme'
test_matrix_configurator: 'no'
test_runner: 'pytest'
travis: 'yes'
version: '2.5.0'
website: 'http://blog.ionelmc.ro'
year: '2015'
cookiecutter:
appveyor: 'yes'
c_extension_cython: 'no'
c_extension_optional: 'no'
c_extension_support: 'no'
codacy: 'yes'
codeclimate: 'yes'
codecov: 'yes'
command_line_interface: 'no'
command_line_interface_bin_name: pytest-benchmark
coveralls: 'yes'
distribution_name: pytest-benchmark
email: contact@ionelmc.ro
full_name: Ionel Cristian Mărieș
github_username: ionelmc
landscape: 'yes'
package_name: pytest_benchmark
project_name: pytest-benchmark
project_short_description: A ``py.test`` fixture for benchmarking code. It will
group the tests into rounds that are calibrated to the chosen timer. See calibration_
and FAQ_.
release_date: '2015-11-08'
repo_name: pytest-benchmark
requiresio: 'yes'
scrutinizer: 'yes'
sphinx_doctest: 'no'
sphinx_theme: sphinx-py3doc-enhanced-theme
test_matrix_configurator: 'no'
test_matrix_separate_coverage: 'yes'
test_runner: pytest
travis: 'yes'
version: 3.0.0
website: http://blog.ionelmc.ro
year: 2014-2016
......@@ -2,8 +2,10 @@
source = src
[run]
branch = True
source = src
branch = true
source =
src
tests
parallel = true
[report]
......
# see http://editorconfig.org
root = true
[*]
end_of_line = lf
trim_trailing_whitespace = true
insert_final_newline = true
indent_style = space
indent_size = 4
charset = utf-8
[*.{bat,cmd,ps1}]
end_of_line = crlf
......@@ -14,6 +14,7 @@ parts
bin
var
sdist
wheelhouse
develop-eggs
.installed.cfg
lib
......@@ -59,6 +60,7 @@ docs/_build
.cache
.pytest
.bootstrap
.appveyor.token
*.bak
*.t.err
logfile
......
......@@ -2,5 +2,10 @@
Authors
=======
* Ionel Cristian Mărieș - http://blog.ionelmc.ro
* Ionel Cristian Mărieș - https://blog.ionelmc.ro
* Marc Abramowitz - http://marc-abramowitz.com
* Dave Collins - https://github.com/thedavecollins
* Stefan Krastanov - http://blog.krastanov.org/
* Thomas Waldmann - https://github.com/ThomasWaldmann
* Antonio Cuni - http://antocuni.eu/en/
* Petr Šebek - https://github.com/Artimi
Changelog
=========
3.0.0 (2015-08-11)
3.1.0a1 (2016-10-29)
--------------------
* Added ``--benchmark-columns`` command line option. It selects what columns are displayed in the result table. Contributed by
Antonio Cuni in `#34 <https://github.com/ionelmc/pytest-benchmark/pull/34>`_.
* Added support for grouping by specific test parametrization (``--benchmark-group-by=param:NAME`` where ``NAME`` is your
param name). Contributed by Antonio Cuni in `#37 <https://github.com/ionelmc/pytest-benchmark/pull/37>`_.
* Added support for ``name`` or ``fullname`` in ``--benchmark-sort``.
Contributed by Antonio Cuni in `#37 <https://github.com/ionelmc/pytest-benchmark/pull/37>`_.
* Changed signature for ``pytest_benchmark_generate_json`` hook to take 2 new arguments: ``machine_info`` and ``commit_info``.
* Changed ``--benchmark-histogram`` to plot groups instead of name-matching runs.
* Changed ``--benchmark-histogram`` to plot exactly what you compared against. Now it's ``1:1`` with the compare feature.
* Changed ``--benchmark-compare`` to allow globs. You can compare against all the previous runs now.
* Changed ``--benchmark-group-by`` to allow multiple values separated by comma.
Example: ``--benchmark-group-by=param:foo,param:bar``
* Added a command line tool to compare previous data: ``py.test-benchmark``. It has two commands:
* ``list`` - Lists all the available files.
* ``compare`` - Displays result tables. Takes optional arguments:
* ``--sort=COL``
* ``--group-by=LABEL``
* ``--columns=LABELS``
* ``--histogram=[FILENAME-PREFIX]``
* Added ``--benchmark-cprofile`` that profiles the last run of the benchmarked function. Contributed by Petr Šebek.
* Changed ``--benchmark-storage`` so it now allows Elasticsearch storage. Data can be stored in Elasticsearch instead of
json files. Contributed by Petr Šebek in `#58 <https://github.com/ionelmc/pytest-benchmark/pull/58>`_.
3.0.0 (2015-11-08)
------------------
* Improved ``--help`` text for ``--benchmark-histogram``, ``--benchmark-save`` and ``--benchmark-autosave``.
......@@ -13,13 +42,13 @@ Changelog
* The red warnings are only shown if ``--benchmark-verbose`` is used. They will still always be shown in the
pytest-warnings section.
* Using the benchmark fixture more than once in a test is disallowed (it will raise an exception).
* Not using the benchmark fixutre (but requiring it) will issue a warning (``WBENCHMARK-U1``).
* Not using the benchmark fixture (but requiring it) will issue a warning (``WBENCHMARK-U1``).
3.0.0rc1 (2015-10-25)
---------------------
* Changed ``--benchmark-warmup`` to take optional value and automatically activate on PyPy (default value is ``auto``).
*MAY BE BACKWARDS INCOMPATIBLE*
**MAY BE BACKWARDS INCOMPATIBLE**
* Removed the version check in compare mode (previously there was a warning if current version is lower than what's in
the file).
......@@ -37,7 +66,7 @@ Changelog
* Add a ``--benchmark-disable`` option. It's automatically activated when xdist is on
* When xdist is on or `statistics` can't be imported then ``--benchmark-disable`` is automatically activated (instead
of ``--benchmark-skip``). *BACKWARDS INCOMPATIBLE*
of ``--benchmark-skip``). **BACKWARDS INCOMPATIBLE**
* Replace the deprecated ``__multicall__`` with the new hookwrapper system.
* Improved description for ``--benchmark-max-time``.
......@@ -67,7 +96,8 @@ Changelog
3.0.0a1 (2015-09-13)
--------------------
* Added JSON report saving (the ``--benchmark-json`` command line arguments).
* Added JSON report saving (the ``--benchmark-json`` command line arguments). Based on initial work from Dave Collins in
`#8 <https://github.com/ionelmc/pytest-benchmark/pull/8>`_.
* Added benchmark data storage (the ``--benchmark-save`` and ``--benchmark-autosave`` command line arguments).
* Added comparison to previous runs (the ``--benchmark-compare`` command line argument).
* Added performance regression checks (the ``--benchmark-compare-fail`` command line argument).
......@@ -76,8 +106,8 @@ Changelog
* Added option to fine tune the calibration (the ``--benchmark-calibration-precision`` command line argument and
``calibration_precision`` marker option).
* Changed ``benchmark_weave`` to no longer be a context manager. Cleanup is performed automatically. *BACKWARDS
INCOMPATIBLE*
* Changed ``benchmark_weave`` to no longer be a context manager. Cleanup is performed automatically.
**BACKWARDS INCOMPATIBLE**
* Added ``benchmark.weave`` method (alternative to ``benchmark_weave`` fixture).
* Added new hooks to allow customization:
......@@ -144,7 +174,7 @@ Changelog
2.0.0 (2014-12-19)
------------------
* Replace the context-manager based API with a simple callback interface. *BACKWARDS INCOMPATIBLE*
* Replace the context-manager based API with a simple callback interface. **BACKWARDS INCOMPATIBLE**
* Implement timer calibration for precise measurements.
1.0.0 (2014-12-15)
......@@ -155,5 +185,5 @@ Changelog
? (?)
-----
* Readme and styling fixes (contributed by Marc Abramowitz)
* Readme and styling fixes. Contributed by Marc Abramowitz in `#4 <https://github.com/ionelmc/pytest-benchmark/pull/4>`_.
* Lots of wild changes.
......@@ -30,14 +30,15 @@ If you are proposing a feature:
* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that contributions are welcome :)
* Remember that this is a volunteer-driven project, and that code contributions are welcome :)
Development
===========
To set up `pytest-benchmark` for local development:
1. `Fork pytest-benchmark on GitHub <https://github.com/ionelmc/pytest-benchmark/fork>`_.
1. Fork `pytest-benchmark <https://github.com/ionelmc/pytest-benchmark>`_
(look for the "Fork" button).
2. Clone your fork locally::
git clone git@github.com:your_name_here/pytest-benchmark.git
......@@ -68,15 +69,15 @@ If you need some code review or feedback while you're developing the code just m
For merging, you should:
1. Include passing tests (run ``tox``) [1]_.
2. Update documentation when there's new API, functionality etc.
2. Update documentation when there's new API, functionality etc.
3. Add a note to ``CHANGELOG.rst`` about the changes.
4. Add yourself to ``AUTHORS.rst``.
.. [1] If you don't have all the necessary python versions available locally you can rely on Travis - it will
.. [1] If you don't have all the necessary python versions available locally you can rely on Travis - it will
`run the tests <https://travis-ci.org/ionelmc/pytest-benchmark/pull_requests>`_ for each change you add in the pull request.
It will be slower though ...
Tips
----
......@@ -86,4 +87,4 @@ To run a subset of tests::
To run all the test environments in *parallel* (you need to ``pip install detox``)::
detox
\ No newline at end of file
detox
Copyright (c) 2014-2015, Ionel Cristian Mărieș
Copyright (c) 2014-2016, Ionel Cristian Mărieș
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the
......
......@@ -7,8 +7,8 @@ graft tests
include .bumpversion.cfg
include .coveragerc
include .cookiecutterrc
include .editorconfig
include .isort.cfg
include .pylintrc
include AUTHORS.rst
include CHANGELOG.rst
......
================
pytest-benchmark
================
========
Overview
========
.. start-badges
.. list-table::
:stub-columns: 1
* - docs
- |docs|
- |docs| |gitter|
* - tests
- | |travis| |appveyor| |requires| |coveralls| |codecov|
| |scrutinizer| |codacy| |codeclimate|
- | |travis| |appveyor| |requires|
| |coveralls| |codecov|
| |landscape| |scrutinizer| |codacy| |codeclimate|
* - package
- |version| |downloads| |wheel| |supported-versions| |supported-implementations|
......@@ -17,6 +20,10 @@ pytest-benchmark
:target: https://readthedocs.org/projects/pytest-benchmark
:alt: Documentation Status
.. |gitter| image:: https://badges.gitter.im/ionelmc/pytest-benchmark.svg
:alt: Join the chat at https://gitter.im/ionelmc/pytest-benchmark
:target: https://gitter.im/ionelmc/pytest-benchmark
.. |travis| image:: https://travis-ci.org/ionelmc/pytest-benchmark.svg?branch=master
:alt: Travis-CI Build Status
:target: https://travis-ci.org/ionelmc/pytest-benchmark
......@@ -73,6 +80,9 @@ pytest-benchmark
:alt: Scrutinizer Status
:target: https://scrutinizer-ci.com/g/ionelmc/pytest-benchmark/
.. end-badges
A ``py.test`` fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen
timer. See calibration_ and FAQ_.
......@@ -93,6 +103,21 @@ Available at: `pytest-benchmark.readthedocs.org <http://pytest-benchmark.readthe
Examples
========
But first, a prologue:
This plugin tightly integrates into pytest. To use this effectively you should know a thing or two about pytest first.
Take a look at the `introductory material <http://docs.pytest.org/en/latest/getting-started.html>`_
or watch `talks <http://docs.pytest.org/en/latest/talks.html>`_.
A few notes:
* This plugin benchmarks functions, and only functions. If you want to measure blocks of code
or whole programs you will need to write a wrapper function.
* In a test you can only benchmark one function. If you want to benchmark many functions, write more tests or
use `parametrization <http://docs.pytest.org/en/latest/parametrize.html>`_.
* To run the benchmarks you simply use `py.test` to run your "tests". The plugin will automatically do the
benchmarking and generate a result table. Run ``py.test --help`` for more details.
This plugin provides a `benchmark` fixture. This fixture is a callable object that will benchmark any function passed
to it.
......@@ -193,7 +218,7 @@ Credits
* Timing code and ideas taken from: https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py
.. _FAQ: http://pytest-benchmark.readthedocs.org/en/latest/faq.html
.. _calibration: http://pytest-benchmark.readthedocs.org/en/latest/features.html#calibration
.. _calibration: http://pytest-benchmark.readthedocs.org/en/latest/calibration.html
.. _pedantic: http://pytest-benchmark.readthedocs.org/en/latest/pedantic.html
......
# Source: https://github.com/pypa/python-packaging-user-guide/blob/master/source/code/install.ps1
# Sample script to install Python and pip under Windows
# Authors: Olivier Grisel and Kyle Kastner
# License: CC0 1.0 Universal: http://creativecommons.org/publicdomain/zero/1.0/
$BASE_URL = "https://www.python.org/ftp/python/"
$GET_PIP_URL = "https://bootstrap.pypa.io/get-pip.py"
$GET_PIP_PATH = "C:\get-pip.py"
function DownloadPython ($python_version, $platform_suffix) {
$webclient = New-Object System.Net.WebClient
$filename = "python-" + $python_version + $platform_suffix + ".msi"
$url = $BASE_URL + $python_version + "/" + $filename
$basedir = $pwd.Path + "\"
$filepath = $basedir + $filename
if (Test-Path $filename) {
Write-Host "Reusing" $filepath
return $filepath
}
# Download and retry up to 3 times in case of transient network errors.
Write-Host "Downloading" $filename "from" $url
$retry_attempts = 3
for($i=0; $i -lt $retry_attempts; $i++){
try {
$webclient.DownloadFile($url, $filepath)
break
}
Catch [Exception]{
Start-Sleep 1
}
}
Write-Host "File saved at" $filepath
return $filepath
}
function InstallPython ($python_version, $architecture, $python_home) {
Write-Host "Installing Python" $python_version "for" $architecture "bit architecture to" $python_home
if (Test-Path $python_home) {
Write-Host $python_home "already exists, skipping."
return $false
}
if ($architecture -eq "32") {
$platform_suffix = ""
} else {
$platform_suffix = ".amd64"
}
$filepath = DownloadPython $python_version $platform_suffix
Write-Host "Installing" $filepath "to" $python_home
$args = "/qn /i $filepath TARGETDIR=$python_home"
Write-Host "msiexec.exe" $args
Start-Process -FilePath "msiexec.exe" -ArgumentList $args -Wait -Passthru
Write-Host "Python $python_version ($architecture) installation complete"
return $true
}
function InstallPip ($python_home) {
$pip_path = $python_home + "/Scripts/pip.exe"
$python_path = $python_home + "/python.exe"
if (-not(Test-Path $pip_path)) {
Write-Host "Installing pip..."
$webclient = New-Object System.Net.WebClient
$webclient.DownloadFile($GET_PIP_URL, $GET_PIP_PATH)
Write-Host "Executing:" $python_path $GET_PIP_PATH
Start-Process -FilePath "$python_path" -ArgumentList "$GET_PIP_PATH" -Wait -Passthru
} else {
Write-Host "pip already installed."
}
}
function InstallPackage ($python_home, $pkg) {
$pip_path = $python_home + "/Scripts/pip.exe"
& $pip_path install $pkg
}
function main () {
InstallPython $env:PYTHON_VERSION $env:PYTHON_ARCH $env:PYTHON_HOME
InstallPip $env:PYTHON_HOME
InstallPackage $env:PYTHON_HOME "setuptools>=18.0.1"
InstallPackage $env:PYTHON_HOME wheel
InstallPackage $env:PYTHON_HOME tox
InstallPackage $env:PYTHON_HOME "virtualenv>=13.1.0"
}
main
"""
AppVeyor will at least have a few Pythons around, so there's no point in implementing a bootstrapper in PowerShell.
This is a port of https://github.com/pypa/python-packaging-user-guide/blob/master/source/code/install.ps1
with various fixes and improvements that just weren't feasible to implement in PowerShell.
"""
from __future__ import print_function
from os import environ
from os.path import exists
from subprocess import CalledProcessError
from subprocess import check_call
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
BASE_URL = "https://www.python.org/ftp/python/"
GET_PIP_URL = "https://bootstrap.pypa.io/get-pip.py"
GET_PIP_PATH = r"C:\get-pip.py"  # raw string so "\g" is not treated as an escape sequence
URLS = {
("2.6", "64"): BASE_URL + "2.6.6/python-2.6.6.amd64.msi",
("2.6", "32"): BASE_URL + "2.6.6/python-2.6.6.msi",
("2.7", "64"): BASE_URL + "2.7.10/python-2.7.10.amd64.msi",
("2.7", "32"): BASE_URL + "2.7.10/python-2.7.10.msi",
# NOTE: no .msi installer for 3.3.6
("3.3", "64"): BASE_URL + "3.3.3/python-3.3.3.amd64.msi",
("3.3", "32"): BASE_URL + "3.3.3/python-3.3.3.msi",
("3.4", "64"): BASE_URL + "3.4.3/python-3.4.3.amd64.msi",
("3.4", "32"): BASE_URL + "3.4.3/python-3.4.3.msi",
("3.5", "64"): BASE_URL + "3.5.0/python-3.5.0-amd64.exe",
("3.5", "32"): BASE_URL + "3.5.0/python-3.5.0.exe",
}
INSTALL_CMD = {
# Commands are allowed to fail only if they are not the last command. E.g. uninstall (/x) is allowed to fail.
"2.6": [["msiexec.exe", "/L*+!", "install.log", "/qn", "/x", "{path}"],
["msiexec.exe", "/L*+!", "install.log", "/qn", "/i", "{path}", "TARGETDIR={home}"]],
"2.7": [["msiexec.exe", "/L*+!", "install.log", "/qn", "/x", "{path}"],
["msiexec.exe", "/L*+!", "install.log", "/qn", "/i", "{path}", "TARGETDIR={home}"]],
"3.3": [["msiexec.exe", "/L*+!", "install.log", "/qn", "/x", "{path}"],
["msiexec.exe", "/L*+!", "install.log", "/qn", "/i", "{path}", "TARGETDIR={home}"]],
"3.4": [["msiexec.exe", "/L*+!", "install.log", "/qn", "/x", "{path}"],
["msiexec.exe", "/L*+!", "install.log", "/qn", "/i", "{path}", "TARGETDIR={home}"]],
"3.5": [["{path}", "/quiet", "TargetDir={home}"]],
}
def download_file(url, path):
print("Downloading: {} (into {})".format(url, path))
progress = [0, 0]
def report(count, size, total):
progress[0] = count * size
if progress[0] - progress[1] > 1000000:
progress[1] = progress[0]
print("Downloaded {:,}/{:,} ...".format(progress[1], total))
dest, _ = urlretrieve(url, path, reporthook=report)
return dest
def install_python(version, arch, home):
print("Installing Python", version, "for", arch, "bit architecture to", home)
if exists(home):
return
path = download_python(version, arch)
print("Installing", path, "to", home)
success = False
for cmd in INSTALL_CMD[version]:
cmd = [part.format(home=home, path=path) for part in cmd]
print("Running:", " ".join(cmd))
try:
check_call(cmd)
except CalledProcessError as exc:
print("Failed command", cmd, "with:", exc)
if exists("install.log"):
with open("install.log") as fh:
print(fh.read())
else:
success = True
if success:
print("Installation complete!")
else:
print("Installation failed")
def download_python(version, arch):
for _ in range(3):
try:
return download_file(URLS[version, arch], "installer.exe")
except Exception as exc:
print("Failed to download:", exc)
print("Retrying ...")
def install_pip(home):
pip_path = home + "/Scripts/pip.exe"
python_path = home + "/python.exe"
if exists(pip_path):
print("pip already installed.")
else:
print("Installing pip...")
download_file(GET_PIP_URL, GET_PIP_PATH)
print("Executing:", python_path, GET_PIP_PATH)
check_call([python_path, GET_PIP_PATH])
def install_packages(home, *packages):
cmd = [home + "/Scripts/pip.exe", "install"]
cmd.extend(packages)
check_call(cmd)
if __name__ == "__main__":
install_python(environ['PYTHON_VERSION'], environ['PYTHON_ARCH'], environ['PYTHON_HOME'])
install_pip(environ['PYTHON_HOME'])
install_packages(environ['PYTHON_HOME'], "setuptools>=18.0.1", "wheel", "tox", "virtualenv>=13.1.0")
......@@ -17,21 +17,30 @@
::
:: Author: Olivier Grisel
:: License: CC0 1.0 Universal: http://creativecommons.org/publicdomain/zero/1.0/
@ECHO OFF
SET COMMAND_TO_RUN=%*
SET WIN_SDK_ROOT=C:\Program Files\Microsoft SDKs\Windows
SET WIN_WDK="c:\Program Files (x86)\Windows Kits\10\Include\wdf"
ECHO SDK: %WINDOWS_SDK_VERSION% ARCH: %PYTHON_ARCH%
IF "%PYTHON_VERSION%"=="3.5" (
IF EXIST %WIN_WDK% (
REM See: https://connect.microsoft.com/VisualStudio/feedback/details/1610302/
REN %WIN_WDK% 0wdf
)
GOTO main
)
IF "%PYTHON_ARCH%"=="64" (
ECHO SDK: %WINDOWS_SDK_VERSION% ARCH: %PYTHON_ARCH%
SET DISTUTILS_USE_SDK=1
SET MSSdk=1
"%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Setup\WindowsSdkVer.exe" -q -version:%WINDOWS_SDK_VERSION%
"%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Bin\SetEnv.cmd" /x64 /release
ECHO Executing: %COMMAND_TO_RUN%
call %COMMAND_TO_RUN% || EXIT 1
) ELSE (
ECHO SDK: %WINDOWS_SDK_VERSION% ARCH: %PYTHON_ARCH%
ECHO Executing: %COMMAND_TO_RUN%
call %COMMAND_TO_RUN% || EXIT 1
IF "%PYTHON_ARCH%"=="32" (
GOTO main
)
SET DISTUTILS_USE_SDK=1
SET MSSdk=1
"%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Setup\WindowsSdkVer.exe" -q -version:%WINDOWS_SDK_VERSION%
CALL "%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Bin\SetEnv.cmd" /x64 /release
:main
ECHO Executing: %COMMAND_TO_RUN%
CALL %COMMAND_TO_RUN% || EXIT 1
......@@ -4,11 +4,10 @@ from __future__ import absolute_import, print_function, unicode_literals
import os
import sys
import subprocess
from os.path import abspath
from os.path import dirname
from os.path import exists
from os.path import join
from os.path import dirname
from os.path import abspath
if __name__ == "__main__":
......@@ -21,18 +20,22 @@ if __name__ == "__main__":
bin_path = join(env_path, "bin")
if not exists(env_path):
import subprocess
print("Making bootstrap env in: {0} ...".format(env_path))
try:
subprocess.check_call(["virtualenv", env_path])
except Exception:
except subprocess.CalledProcessError:
subprocess.check_call([sys.executable, "-m", "virtualenv", env_path])
print("Installing `jinja2` into bootstrap environment ...")
print("Installing `jinja2` into bootstrap environment...")
subprocess.check_call([join(bin_path, "pip"), "install", "jinja2"])
activate = join(bin_path, "activate_this.py")
# noinspection PyCompatibility
exec(compile(open(activate, "rb").read(), activate, "exec"), dict(__file__=activate))
import jinja2
import subprocess
jinja = jinja2.Environment(
loader=jinja2.FileSystemLoader(join(base_path, "ci", "templates")),
trim_blocks=True,
......@@ -40,8 +43,13 @@ if __name__ == "__main__":
keep_trailing_newline=True
)
tox_environments = [line.strip() for line in subprocess.check_output(['tox', '--listenvs']).splitlines()]
tox_environments = [
line.strip()
# WARNING: 'tox' must be installed globally or in the project's virtualenv
for line in subprocess.check_output(['tox', '--listenvs'], universal_newlines=True).splitlines()
]
tox_environments = [line for line in tox_environments if line not in ['clean', 'report', 'docs', 'check']]
for name in os.listdir(join("ci", "templates")):
with open(join(base_path, name), "w") as fh:
fh.write(jinja.get_template(name).render(tox_environments=tox_environments))
......
language: python
python: '3.5'
sudo: false
env:
global:
LD_PRELOAD=/lib/x86_64-linux-gnu/libSegFault.so
matrix:
- TOXENV=check
{% for env in tox_environments %}
- TOXENV={{ env }}{% if 'cover' in env %},coveralls,codecov{% endif %}
- LD_PRELOAD=/lib/x86_64-linux-gnu/libSegFault.so
- SEGFAULT_SIGNALS=all
matrix:
include:
- python: '3.5'
env: TOXENV=check
- python: '3.5'
env: TOXENV=docs
{% for env in tox_environments %}{{ '' }}
- python: '{% if env.startswith('pypy-') %}pypy{% else %}{{ env[2] }}.{{ env[3] }}{% endif %}'
env: TOXENV={{ env }}{% if 'cover' in env %},coveralls,codecov{% endif -%}
{% endfor %}
before_install:
- python --version
- uname -a
- lsb_release -a
install:
- pip install -U tox virtualenv setuptools wheel
- pip install -U tox setuptools wheel $(python -V |& grep -q 'Python 3.2' && echo 'pip<8.0 virtualenv<14.0')
- virtualenv --version
- easy_install --version
- pip --version
......@@ -25,6 +30,11 @@ script:
after_failure:
- more .tox/log/* | cat
- more .tox/*/log/* | cat
before_cache:
- rm -rf $HOME/.cache/pip/log
cache:
directories:
- $HOME/.cache/pip
notifications:
email:
on_success: never
......
......@@ -10,21 +10,23 @@ environment:
PYTHON_HOME: C:\Python27
PYTHON_VERSION: '2.7'
PYTHON_ARCH: '32'
{% for env in tox_environments %}{% if env.startswith(('2.6', '2.7', '3.3', '3.4', '3.5')) %}
{% for env in tox_environments %}{% if env.startswith(('py27', 'py34', 'py35')) %}
- TOXENV: '{{ env }}{% if 'cover' in env %},codecov{% endif %}'
TOXPYTHON: C:\Python{{ env.split('-')[0].replace('.', '') }}\python.exe
PYTHON_HOME: C:\Python{{ env.split('-')[0].replace('.', '') }}
PYTHON_VERSION: '{{ env.split('-')[0] }}'
TOXPYTHON: C:\Python{{ env[2:4] }}\python.exe
PYTHON_HOME: C:\Python{{ env[2:4] }}
PYTHON_VERSION: '{{ env[2] }}.{{ env[3] }}'
PYTHON_ARCH: '32'