# Compare commits

89 commits
| SHA1 |
|---|
| b4c47c0ac0 |
| 5af262daa3 |
| 16720b96b4 |
| f6506fa6ca |
| 4d8e3cbcb0 |
| 7a918a1617 |
| 92c61b0de3 |
| ed2c06c8cd |
| 91e8e59cea |
| 4f1ae8c45e |
| b84fcbc85e |
| 3cd19a7e45 |
| d104487282 |
| 35bbcc39a2 |
| 5a17e797c7 |
| ce96973ed5 |
| 0e26de2218 |
| 81f18f8a0f |
| 0769bb4898 |
| 31cfbac1f4 |
| 412b43b216 |
| de65737cb1 |
| 953916df49 |
| 2f7d0f8bd9 |
| 23aaa8a62c |
| da1d5712cf |
| 30ff723d57 |
| e227950b06 |
| 6719a818e7 |
| 08432c3e97 |
| 15497dcd77 |
| 8a0867c580 |
| d4789f1ac4 |
| 26e7532756 |
| 570c4cc55a |
| 728d8fbdc5 |
| fad569ae1b |
| a4dbb27fab |
| ec5286ea8c |
| 195afa0733 |
| 8bde0c5957 |
| b18e6439bd |
| 360c09a1e7 |
| 85f7aa2f9b |
| a7b4ed89da |
| dcdc823dd2 |
| 330de0a93d |
| aa25fb05a9 |
| 94cdec2cfe |
| ebf32ae8a9 |
| 7a71b69a87 |
| c5d26ae1bb |
| 5750ae784a |
| 5ec2a17f08 |
| 27a98788a8 |
| 9fb1637ce2 |
| 75679f08c9 |
| 444cdfe6e3 |
| 89b00d714f |
| e84c00efae |
| 53021ea264 |
| 01d067ec2b |
| cdd25c9512 |
| 898b63b665 |
| 080dfb9841 |
| 02421790bf |
| 48d91def7e |
| eb73db56c7 |
| 26f590babe |
| f90b2f845c |
| 9d4e0365da |
| 0431b8bb74 |
| 2653024409 |
| 923174718e |
| 73f37d0989 |
| 0722b95e53 |
| ae89436d97 |
| af77a23501 |
| 2a1424e563 |
| 1871d526ac |
| 9346e18d8c |
| 1db5c95414 |
| 0c05b906d4 |
| f04e01f55f |
| ca44e88e54 |
| b5fd3cfb84 |
| dc727832a0 |
| bf837164b4 |
| f6d589caa1 |
`.hgtags` (1 changed line)

```diff
@@ -74,3 +74,4 @@ a4f25c5e649892b5cc746d21be971e4773478af9 2.6.2
 2967aa416a4f3cdb65fc75073a2a148e1f372742 2.6.3
 f03b6de8325f5b6c35cea7c3de092f134ea8ef07 2.6.4
 7ed701fa2fb554bfc0618d447dfec700cc697407 2.7.0
+edc1d080bab5a970da8f6c776be50768829a7b09 2.7.1
```
`.travis.yml` (30 changed lines)

```diff
@@ -1,8 +1,34 @@
+sudo: false
 language: python
+python:
+  - '3.5.0b3'
 # command to install dependencies
-install: "pip install -U detox"
+install: "pip install -U tox"
 # # command to run tests
-script: detox --recreate -i ALL=https://devpi.net/hpk/dev/
+env:
+  matrix:
+    - TESTENV=flakes
+    - TESTENV=py26
+    - TESTENV=py27
+    - TESTENV=py33
+    - TESTENV=py34
+    - TESTENV=py35
+    - TESTENV=pypy
+    - TESTENV=py27-pexpect
+    - TESTENV=py34-pexpect
+    - TESTENV=py27-nobyte
+    - TESTENV=py27-xdist
+    - TESTENV=py34-xdist
+    - TESTENV=py27-trial
+    - TESTENV=py33
+    - TESTENV=py34-trial
+    # inprocess tests by default were introduced in 2.8 only;
+    # this TESTENV should be enabled when merged back to master
+    #- TESTENV=py27-subprocess
+    - TESTENV=doctesting
+    - TESTENV=py27-cxfreeze
+    - TESTENV=coveralls
+script: tox --recreate -i ALL=https://devpi.net/hpk/dev/ -e $TESTENV
 
 notifications:
   irc:
```
`AUTHORS` (78 changed lines; the contributor list was re-sorted alphabetically, so the extraction shows the old ordering interleaved with the new one)

```diff
@@ -3,48 +3,52 @@ merlinux GmbH, Germany, office at merlinux eu
 
 Contributors include::
 
 Ronny Pfannschmidt
 Benjamin Peterson
 Floris Bruynooghe
 Jason R. Coombs
 Wouter van Ackooy
 Samuele Pedroni
 Anatoly Bubenkoff
 Andreas Zeidler
 Andy Freeland
 Anthon van der Neut
 Armin Rigo
 Aron Curzon
 Benjamin Peterson
 Bob Ippolito
 Brian Dorsey
 Brian Okken
 Brianna Laugher
 Carl Friedrich Bolz
 Armin Rigo
 Maho
 Jaap Broekhuizen
 Maciek Fijalkowski
 Guido Wesdorp
 Brian Dorsey
 Ross Lawley
 Ralf Schmitt
 Charles Cloud
 Chris Lamb
 Harald Armin Massa
 Martijn Faassen
 Ian Bicking
 Jan Balster
 Grig Gheorghiu
 Bob Ippolito
 Christian Tismer
 Daniel Nuri
 Graham Horler
 Andreas Zeidler
 Brian Okken
 Katarzyna Jachim
 Christian Theunert
 Anthon van der Neut
 Mark Abramowitz
 Piotr Banaszkiewicz
 Jurko Gospodnetić
 Marc Schlaich
 Christian Tismer
 Christopher Gilling
 Daniel Grana
 Andy Freeland
 Trevor Bekolay
 David Mohr
 Nicolas Delaby
 Tom Viner
 Daniel Nuri
 Dave Hunt
 Charles Cloud
 David Mohr
 Edison Gustavo Muenz
 Floris Bruynooghe
 Graham Horler
 Grig Gheorghiu
 Guido Wesdorp
 Harald Armin Massa
 Ian Bicking
 Jaap Broekhuizen
 Jan Balster
 Jason R. Coombs
 Jurko Gospodnetić
 Katarzyna Jachim
 Maciek Fijalkowski
 Maho
 Marc Schlaich
 Mark Abramowitz
 Martijn Faassen
 Nicolas Delaby
 Pieter Mulder
 Piotr Banaszkiewicz
 Punyashloka Biswal
 Ralf Schmitt
 Ronny Pfannschmidt
 Ross Lawley
 Samuele Pedroni
 Tom Viner
 Trevor Bekolay
 Wouter van Ackooy
```
`CHANGELOG` (73 changed lines)

```diff
@@ -1,3 +1,76 @@
+2.7.3 (compared to 2.7.2)
+-----------------------------
+
+- Allow 'dev', 'rc', or other non-integer version strings in `importorskip`.
+  Thanks to Eric Hunsberger for the PR.
+
+- fix issue856: consider --color parameter in all outputs (for example
+  --fixtures). Thanks Barney Gale for the report and Bruno Oliveira for the PR.
+
+- fix issue855: passing str objects as `plugins` argument to pytest.main
+  is now interpreted as a module name to be imported and registered as a
+  plugin, instead of silently having no effect.
+  Thanks xmo-odoo for the report and Bruno Oliveira for the PR.
+
+- fix issue744: fix for ast.Call changes in Python 3.5+. Thanks
+  Guido van Rossum, Matthias Bussonnier, Stefan Zimmermann and
+  Thomas Kluyver.
+
+- fix issue842: applying markers in classes no longer propagate this markers
+  to superclasses which also have markers.
+  Thanks xmo-odoo for the report and Bruno Oliveira for the PR.
+
+- preserve warning functions after call to pytest.deprecated_call. Thanks
+  Pieter Mulder for PR.
+
+- fix issue854: autouse yield_fixtures defined as class members of
+  unittest.TestCase subclasses now work as expected.
+  Thannks xmo-odoo for the report and Bruno Oliveira for the PR.
+
+- fix issue833: --fixtures now shows all fixtures of collected test files, instead of just the
+  fixtures declared on the first one.
+  Thanks Florian Bruhin for reporting and Bruno Oliveira for the PR.
+
+- fix issue863: skipped tests now report the correct reason when a skip/xfail
+  condition is met when using multiple markers.
+  Thanks Raphael Pierzina for reporting and Bruno Oliveira for the PR.
+
+- optimized tmpdir fixture initialization, which should make test sessions
+  faster (specially when using pytest-xdist). The only visible effect
+  is that now pytest uses a subdirectory in the $TEMP directory for all
+  directories created by this fixture (defaults to $TEMP/pytest-$USER).
+  Thanks Bruno Oliveira for the PR.
+
+
 2.7.2 (compared to 2.7.1)
 -----------------------------
 
 - fix issue767: pytest.raises value attribute does not contain the exception
   instance on Python 2.6. Thanks Eric Siegerman for providing the test
   case and Bruno Oliveira for PR.
 
 - Automatically create directory for junitxml and results log.
   Thanks Aron Curzon.
 
 - fix issue713: JUnit XML reports for doctest failures.
   Thanks Punyashloka Biswal.
 
 - fix issue735: assertion failures on debug versions of Python 3.4+
   Thanks Benjamin Peterson.
 
 - fix issue114: skipif marker reports to internal skipping plugin;
   Thanks Floris Bruynooghe for reporting and Bruno Oliveira for the PR.
 
 - fix issue748: unittest.SkipTest reports to internal pytest unittest plugin.
   Thanks Thomas De Schampheleire for reporting and Bruno Oliveira for the PR.
 
 - fix issue718: failed to create representation of sets containing unsortable
   elements in python 2. Thanks Edison Gustavo Muenz.
 
 - fix issue756, fix issue752 (and similar issues): depend on py-1.4.29
   which has a refined algorithm for traceback generation.
 
 
 2.7.1 (compared to 2.7.0)
 -----------------------------
```
```diff
@@ -17,10 +17,10 @@ Submit a plugin, co-develop pytest
 Pytest development of the core, some plugins and support code happens
 in repositories living under:
 
-- `the pytest-dev bitbucket team <https://bitbucket.org/pytest-dev>`_
-
 - `the pytest-dev github organisation <https://github.com/pytest-dev>`_
+
+- `the pytest-dev bitbucket team <https://bitbucket.org/pytest-dev>`_
 
 All pytest-dev team members have write access to all contained
 repositories. pytest core and plugins are generally developed
 using `pull requests`_ to respective repositories.
@@ -56,7 +56,7 @@ right to release to pypi.
 Report bugs
 -----------
 
-Report bugs for pytest at https://bitbucket.org/pytest-dev/pytest/issues
+Report bugs for pytest at https://github.com/pytest-dev/pytest/issues
 
 If you are reporting a bug, please include:
 
@@ -74,7 +74,7 @@ Submit feedback for developers
 Do you like pytest? Share some love on Twitter or in your blog posts!
 
 We'd also like to hear about your propositions and suggestions. Feel free to
-`submit them as issues <https://bitbucket.org/pytest-dev/pytest/issues>`__ and:
+`submit them as issues <https://github.com/pytest-dev/pytest/issues>`__ and:
 
 * Set the "kind" to "enhancement" or "proposal" so that we can quickly find
   about them.
@@ -88,8 +88,8 @@ We'd also like to hear about your propositions and suggestions. Feel free to
 Fix bugs
 --------
 
-Look through the BitBucket issues for bugs. Here is sample filter you can use:
-https://bitbucket.org/pytest-dev/pytest/issues?status=new&status=open&kind=bug
+Look through the GitHub issues for bugs. Here is sample filter you can use:
+https://github.com/pytest-dev/pytest/labels/bug
 
 :ref:`Talk <contact>` to developers to find out how you can fix specific bugs.
 
@@ -98,9 +98,9 @@ https://bitbucket.org/pytest-dev/pytest/issues?status=new&status=open&kind=bug
 Implement features
 ------------------
 
-Look through the BitBucket issues for enhancements. Here is sample filter you
+Look through the GitHub issues for enhancements. Here is sample filter you
 can use:
-https://bitbucket.org/pytest-dev/pytest/issues?status=new&status=open&kind=enhancement
+https://github.com/pytest-dev/pytest/labels/enhancement
 
 :ref:`Talk <contact>` to developers to find out how you can implement specific
 features.
@@ -118,35 +118,35 @@ pytest could always use more documentation. What exactly is needed?
 .. _`pull requests`:
+.. _pull-requests:
 
-Preparing Pull Requests on Bitbucket
-------------------------------------
+Preparing Pull Requests on GitHub
+---------------------------------
 
 .. note::
   What is a "pull request"? It informs project's core developers about the
   changes you want to review and merge. Pull requests are stored on
-  `BitBucket servers <https://bitbucket.org/pytest-dev/pytest/pull-requests>`__.
+  `GitHub servers <https://github.com/pytest-dev/pytest/pulls>`_.
   Once you send pull request, we can discuss it's potential modifications and
   even add more commits to it later on.
 
-The primary development platform for pytest is BitBucket. You can find all
-the issues there and submit your pull requests.
+There's an excellent tutorial on how Pull Requests work in the
+`GitHub Help Center <https://help.github.com/articles/using-pull-requests/>`_,
+but here is a simple overview:
 
 #. Fork the
-   `pytest BitBucket repository <https://bitbucket.org/pytest-dev/pytest>`__. It's
+   `pytest GitHub repository <https://github.com/pytest-dev/pytest>`__. It's
   fine to use ``pytest`` as your fork repository name because it will live
   under your user.
 
-#. Clone your fork locally using `Mercurial <http://mercurial.selenic.com/>`_
-   (``hg``) and create a branch::
+#. Clone your fork locally using `git <https://git-scm.com/>`_ and create a branch::
 
-   $ hg clone ssh://hg@bitbucket.org/YOUR_BITBUCKET_USERNAME/pytest
+   $ git clone git@github.com:YOUR_GITHUB_USERNAME/pytest.git
    $ cd pytest
-   $ hg up pytest-2.7  # if you want to fix a bug for the pytest-2.7 series
-   $ hg up default  # if you want to add a feature bound for the next minor release
-   $ hg branch your-branch-name  # your feature/bugfix branch
+   $ git checkout pytest-2.7  # if you want to fix a bug for the pytest-2.7 series
+   $ git checkout master  # if you want to add a feature bound for the next minor release
+   $ git branch your-branch-name  # your feature/bugfix branch
 
-   If you need some help with Mercurial, follow this quick start
-   guide: http://mercurial.selenic.com/wiki/QuickStart
+   If you need some help with Git, follow this quick start
+   guide: https://git.wiki.kernel.org/index.php/QuickStart
 
 #. Create a development environment
    (will implicitly use http://www.virtualenv.org/en/latest/)::
@@ -178,10 +178,10 @@ the issues there and submit your pull requests.
 
 #. Commit and push once your tests pass and you are happy with your change(s)::
 
-   $ hg commit -m"<commit message>"
-   $ hg push -b .
+   $ git commit -a -m "<commit message>"
+   $ git push -u
 
-#. Finally, submit a pull request through the BitBucket website:
+#. Finally, submit a pull request through the GitHub website:
 
    .. image:: img/pullrequest.png
       :width: 700px
@@ -189,26 +189,11 @@ the issues there and submit your pull requests.
 
    ::
 
-      source: YOUR_BITBUCKET_USERNAME/pytest
-      branch: your-branch-name
+      head-fork: YOUR_GITHUB_USERNAME/pytest
+      compare: your-branch-name
 
-      target: pytest-dev/pytest
-      branch: default         # if it's a feature
-      branch: pytest-VERSION  # if it's a bugfix
+      base-fork: pytest-dev/pytest
+      base: master          # if it's a feature
+      base: pytest-VERSION  # if it's a bugfix
 
-
-.. _contribution-using-git:
-
-Using git with bitbucket/hg
--------------------------------
-
-There used to be the pytest GitHub mirror. It was removed in favor of the
-Mercurial one, to remove confusion of people not knowing where it's better to
-put their issues and pull requests. Also it wasn't easily possible to automate
-the mirroring process.
-
-In general we recommend to work with the same version control system of the
-original repository. If you insist on using git with bitbucket/hg you
-may try `gitifyhg <https://github.com/buchuki/gitifyhg>`_ but are on your
-own and need to submit pull requests through the respective platform,
-nevertheless.
```
```diff
@@ -11,7 +11,7 @@ How to release pytest (draft)
 
 4. use devpi for uploading a release tarball to a staging area:
    - ``devpi use https://devpi.net/USER/dev``
-   - ``devpi upload``
+   - ``devpi upload --formats sdist,bdist_wheel``
 
 5. run from multiple machines:
    - ``devpi use https://devpi.net/USER/dev``
@@ -35,7 +35,10 @@ How to release pytest (draft)
    cd docs/en
    make html
 
-9. Upload the docs using docs/en/Makefile::
+9. Tag the release::
+
+   hg tag VERSION
+
+10. Upload the docs using docs/en/Makefile::
 
    cd docs/en
    make install  # or "installall" if you have LaTeX installed
 
   This requires ssh-login permission on pytest.org because it uses
@@ -43,12 +46,12 @@ How to release pytest (draft)
   Note that the "install" target of doc/en/Makefile defines where the
   rsync goes to, typically to the "latest" section of pytest.org.
 
-10. publish to pypi "devpi push pytest-2.6.2 pypi:NAME" where NAME
+11. publish to pypi "devpi push pytest-VERSION pypi:NAME" where NAME
    is the name of pypi.python.org as configured in your
    ~/.pypirc file -- it's the same you would use with
    "setup.py upload -r NAME"
 
-11. send release announcement to mailing lists:
+12. send release announcement to mailing lists:
 
    pytest-dev
    testing-in-python
```
```diff
@@ -1,2 +1,2 @@
 #
-__version__ = '2.7.1'
+__version__ = '2.7.3'
```
```diff
@@ -33,6 +33,12 @@ else:
     def _is_ast_stmt(node):
         return isinstance(node, ast.stmt)
 
+try:
+    _Starred = ast.Starred
+except AttributeError:
+    # Python 2. Define a dummy class so isinstance() will always be False.
+    class _Starred(object): pass
+
 
 class Failure(Exception):
     """Error found while interpreting AST."""
```
```diff
@@ -232,24 +238,38 @@ class DebugInterpreter(ast.NodeVisitor):
         arguments = []
         for arg in call.args:
             arg_explanation, arg_result = self.visit(arg)
-            arg_name = "__exprinfo_%s" % (len(ns),)
-            ns[arg_name] = arg_result
-            arguments.append(arg_name)
-            arg_explanations.append(arg_explanation)
+            if isinstance(arg, _Starred):
+                arg_name = "__exprinfo_star"
+                ns[arg_name] = arg_result
+                arguments.append("*%s" % (arg_name,))
+                arg_explanations.append("*%s" % (arg_explanation,))
+            else:
+                arg_name = "__exprinfo_%s" % (len(ns),)
+                ns[arg_name] = arg_result
+                arguments.append(arg_name)
+                arg_explanations.append(arg_explanation)
         for keyword in call.keywords:
             arg_explanation, arg_result = self.visit(keyword.value)
-            arg_name = "__exprinfo_%s" % (len(ns),)
+            if keyword.arg:
+                arg_name = "__exprinfo_%s" % (len(ns),)
+                keyword_source = "%s=%%s" % (keyword.arg)
+                arguments.append(keyword_source % (arg_name,))
+                arg_explanations.append(keyword_source % (arg_explanation,))
+            else:
+                arg_name = "__exprinfo_kwds"
+                arguments.append("**%s" % (arg_name,))
+                arg_explanations.append("**%s" % (arg_explanation,))
+
             ns[arg_name] = arg_result
-            keyword_source = "%s=%%s" % (keyword.arg)
-            arguments.append(keyword_source % (arg_name,))
-            arg_explanations.append(keyword_source % (arg_explanation,))
-        if call.starargs:
+
+        if getattr(call, 'starargs', None):
             arg_explanation, arg_result = self.visit(call.starargs)
             arg_name = "__exprinfo_star"
             ns[arg_name] = arg_result
             arguments.append("*%s" % (arg_name,))
             arg_explanations.append("*%s" % (arg_explanation,))
-        if call.kwargs:
+
+        if getattr(call, 'kwargs', None):
             arg_explanation, arg_result = self.visit(call.kwargs)
             arg_name = "__exprinfo_kwds"
             ns[arg_name] = arg_result
```
```diff
@@ -35,6 +35,12 @@ PYC_TAIL = "." + PYTEST_TAG + PYC_EXT
 REWRITE_NEWLINES = sys.version_info[:2] != (2, 7) and sys.version_info < (3, 2)
 ASCII_IS_DEFAULT_ENCODING = sys.version_info[0] < 3
 
+if sys.version_info >= (3,5):
+    ast_Call = ast.Call
+else:
+    ast_Call = lambda a,b,c: ast.Call(a, b, c, None, None)
+
 
 class AssertionRewritingHook(object):
    """PEP302 Import hook which rewrites asserts."""
```
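The `ast_Call` shim added above papers over a change in the `ast.Call` constructor: before Python 3.5 it took explicit `starargs`/`kwargs` slots, which 3.5 folded into `args`/`keywords`. A standalone sketch of the same idea (the `len("abc")` expression is just an illustration, not pytest's code):

```python
import ast
import sys

# Same shape as the shim in the hunk above: on 3.5+ ast.Call takes
# (func, args, keywords); earlier versions also required explicit
# starargs/kwargs slots, filled with None here.
if sys.version_info >= (3, 5):
    ast_Call = ast.Call
else:
    ast_Call = lambda a, b, c: ast.Call(a, b, c, None, None)

# Build the expression len("abc") through the shim and evaluate it.
call = ast_Call(ast.Name("len", ast.Load()), [ast.Constant("abc")], [])
tree = ast.fix_missing_locations(ast.Expression(call))
result = eval(compile(tree, "<shim>", "eval"))
print(result)  # 3
```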
```diff
@@ -442,6 +448,13 @@ binop_map = {
     ast.NotIn: "not in"
 }
 
+# Python 3.4+ compatibility
+if hasattr(ast, "NameConstant"):
+    _NameConstant = ast.NameConstant
+else:
+    def _NameConstant(c):
+        return ast.Name(str(c), ast.Load())
+
 
 def set_location(node, lineno, col_offset):
     """Set node location information recursively."""
```
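The `hasattr` probe above is the usual feature-detection shim: interpreters that predate `ast.NameConstant` spelled constants like `None` as plain `Name` nodes. A standalone sketch:

```python
import ast

# Feature-detection shim as in the hunk above: prefer the real node
# class, fall back to spelling the constant as a Name node.
if hasattr(ast, "NameConstant"):
    _NameConstant = ast.NameConstant
else:
    def _NameConstant(c):
        return ast.Name(str(c), ast.Load())

# Either branch yields a valid AST node, usable the way the diff's
# `ast.Assign(variables, _NameConstant(None))` line uses it.
node = _NameConstant(None)
print(isinstance(node, ast.AST))  # True
```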
```diff
@@ -580,7 +593,7 @@ class AssertionRewriter(ast.NodeVisitor):
         """Call a helper in this module."""
         py_name = ast.Name("@pytest_ar", ast.Load())
         attr = ast.Attribute(py_name, "_" + name, ast.Load())
-        return ast.Call(attr, list(args), [], None, None)
+        return ast_Call(attr, list(args), [])
 
     def builtin(self, name):
         """Return the builtin called *name*."""
@@ -670,7 +683,7 @@ class AssertionRewriter(ast.NodeVisitor):
         msg = self.pop_format_context(template)
         fmt = self.helper("format_explanation", msg)
         err_name = ast.Name("AssertionError", ast.Load())
-        exc = ast.Call(err_name, [fmt], [], None, None)
+        exc = ast_Call(err_name, [fmt], [])
         if sys.version_info[0] >= 3:
             raise_ = ast.Raise(exc, None)
         else:
@@ -680,7 +693,7 @@ class AssertionRewriter(ast.NodeVisitor):
         if self.variables:
             variables = [ast.Name(name, ast.Store())
                          for name in self.variables]
-            clear = ast.Assign(variables, ast.Name("None", ast.Load()))
+            clear = ast.Assign(variables, _NameConstant(None))
             self.statements.append(clear)
         # Fix line numbers.
         for stmt in self.statements:
@@ -690,7 +703,7 @@ class AssertionRewriter(ast.NodeVisitor):
     def visit_Name(self, name):
         # Display the repr of the name if it's a local variable or
         # _should_repr_global_name() thinks it's acceptable.
-        locs = ast.Call(self.builtin("locals"), [], [], None, None)
+        locs = ast_Call(self.builtin("locals"), [], [])
         inlocs = ast.Compare(ast.Str(name.id), [ast.In()], [locs])
         dorepr = self.helper("should_repr_global_name", name)
         test = ast.BoolOp(ast.Or(), [inlocs, dorepr])
@@ -717,7 +730,7 @@ class AssertionRewriter(ast.NodeVisitor):
         res, expl = self.visit(v)
         body.append(ast.Assign([ast.Name(res_var, ast.Store())], res))
         expl_format = self.pop_format_context(ast.Str(expl))
-        call = ast.Call(app, [expl_format], [], None, None)
+        call = ast_Call(app, [expl_format], [])
         self.on_failure.append(ast.Expr(call))
         if i < levels:
             cond = res
```
```diff
@@ -746,7 +759,42 @@ class AssertionRewriter(ast.NodeVisitor):
         res = self.assign(ast.BinOp(left_expr, binop.op, right_expr))
         return res, explanation
 
-    def visit_Call(self, call):
+    def visit_Call_35(self, call):
+        """
+        visit `ast.Call` nodes on Python3.5 and after
+        """
+        new_func, func_expl = self.visit(call.func)
+        arg_expls = []
+        new_args = []
+        new_kwargs = []
+        for arg in call.args:
+            res, expl = self.visit(arg)
+            arg_expls.append(expl)
+            new_args.append(res)
+        for keyword in call.keywords:
+            res, expl = self.visit(keyword.value)
+            new_kwargs.append(ast.keyword(keyword.arg, res))
+            if keyword.arg:
+                arg_expls.append(keyword.arg + "=" + expl)
+            else:  ## **args have `arg` keywords with an .arg of None
+                arg_expls.append("**" + expl)
+
+        expl = "%s(%s)" % (func_expl, ', '.join(arg_expls))
+        new_call = ast.Call(new_func, new_args, new_kwargs)
+        res = self.assign(new_call)
+        res_expl = self.explanation_param(self.display(res))
+        outer_expl = "%s\n{%s = %s\n}" % (res_expl, res_expl, expl)
+        return res, outer_expl
+
+    def visit_Starred(self, starred):
+        # From Python 3.5, a Starred node can appear in a function call
+        res, expl = self.visit(starred.value)
+        return starred, '*' + expl
+
+    def visit_Call_legacy(self, call):
+        """
+        visit `ast.Call nodes on 3.4 and below`
+        """
         new_func, func_expl = self.visit(call.func)
         arg_expls = []
         new_args = []
```
```diff
@@ -774,6 +822,15 @@ class AssertionRewriter(ast.NodeVisitor):
         outer_expl = "%s\n{%s = %s\n}" % (res_expl, res_expl, expl)
         return res, outer_expl
 
+    # ast.Call signature changed on 3.5,
+    # conditionally change which methods is named
+    # visit_Call depending on Python version
+    if sys.version_info >= (3, 5):
+        visit_Call = visit_Call_35
+    else:
+        visit_Call = visit_Call_legacy
+
+
     def visit_Attribute(self, attr):
         if not isinstance(attr.ctx, ast.Load):
             return self.generic_visit(attr)
```
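The hunk above binds the right `visit_Call` implementation once, at class-definition time, instead of branching on the interpreter version inside every call. A minimal standalone sketch of that dispatch pattern (class and method names here are illustrative, not pytest's):

```python
import sys

class Visitor(object):
    def visit_Call_35(self, call):
        return "3.5+ handling of %r" % (call,)

    def visit_Call_legacy(self, call):
        return "legacy handling of %r" % (call,)

    # Pick the implementation once; callers always invoke visit_Call
    # and never see the version check.
    if sys.version_info >= (3, 5):
        visit_Call = visit_Call_35
    else:
        visit_Call = visit_Call_legacy

v = Visitor()
print(v.visit_Call("f(x)"))
```

Because the `if` runs in the class body, the choice costs nothing per call and both implementations remain available under their explicit names for testing.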
```diff
@@ -225,10 +225,18 @@ def _compare_eq_iterable(left, right, verbose=False):
     # dynamic import to speedup pytest
     import difflib
 
-    left = pprint.pformat(left).splitlines()
-    right = pprint.pformat(right).splitlines()
-    explanation = [u('Full diff:')]
-    explanation.extend(line.strip() for line in difflib.ndiff(left, right))
+    try:
+        left_formatting = pprint.pformat(left).splitlines()
+        right_formatting = pprint.pformat(right).splitlines()
+        explanation = [u('Full diff:')]
+    except Exception:
+        # hack: PrettyPrinter.pformat() in python 2 fails when formatting items that can't be sorted(), ie, calling
+        # sorted() on a list would raise. See issue #718.
+        # As a workaround, the full diff is generated by using the repr() string of each item of each container.
+        left_formatting = sorted(repr(x) for x in left)
+        right_formatting = sorted(repr(x) for x in right)
+        explanation = [u('Full diff (fallback to calling repr on each item):')]
+    explanation.extend(line.strip() for line in difflib.ndiff(left_formatting, right_formatting))
     return explanation
```
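The fallback above (issue718) guards against `pprint.pformat` raising while sorting heterogeneous containers on Python 2. A standalone sketch of the same try/except shape (the `full_diff` name is illustrative; on Python 3 the fallback branch is normally not taken):

```python
import difflib
import pprint

def full_diff(left, right):
    try:
        left_lines = pprint.pformat(left).splitlines()
        right_lines = pprint.pformat(right).splitlines()
        header = ['Full diff:']
    except Exception:
        # Python 2 workaround: diff the repr() of each element instead
        # of pretty-printing the whole container.
        left_lines = sorted(repr(x) for x in left)
        right_lines = sorted(repr(x) for x in right)
        header = ['Full diff (fallback to calling repr on each item):']
    return header + [l.strip() for l in difflib.ndiff(left_lines, right_lines)]

lines = full_diff([1, 2, 3], [1, 2, 4])
print('\n'.join(lines))
```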
```diff
@@ -80,7 +80,10 @@ def _prepareconfig(args=None, plugins=None):
     try:
         if plugins:
             for plugin in plugins:
-                pluginmanager.register(plugin)
+                if isinstance(plugin, py.builtin._basestring):
+                    pluginmanager.consider_pluginarg(plugin)
+                else:
+                    pluginmanager.register(plugin)
         return pluginmanager.hook.pytest_cmdline_parse(
                 pluginmanager=pluginmanager, args=args)
     except Exception:
```
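The branch above implements the issue855 fix: a string in `plugins` is now treated as a module name to import and register rather than being registered (uselessly) as a plugin object. A standalone sketch of that dispatch, with stand-in functions in place of pytest's plugin manager:

```python
# consider_by_name/register_object are illustrative stand-ins for the
# plugin manager's consider_pluginarg/register methods.
registered = []

def consider_by_name(name):
    registered.append(('by-name', name))

def register_object(obj):
    registered.append(('object', obj))

class MyPlugin(object):
    pass

for plugin in ['myplugin_module', MyPlugin()]:
    if isinstance(plugin, str):
        consider_by_name(plugin)
    else:
        register_object(plugin)

print(registered[0])  # ('by-name', 'myplugin_module')
```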
```diff
@@ -933,3 +936,16 @@ def setns(obj, dic):
         #if obj != pytest:
         #    pytest.__all__.append(name)
         setattr(pytest, name, value)
 
+
+def create_terminal_writer(config, *args, **kwargs):
+    """Create a TerminalWriter instance configured according to the options
+    in the config object. Every code which requires a TerminalWriter object
+    and has access to a config object should use this function.
+    """
+    tw = py.io.TerminalWriter(*args, **kwargs)
+    if config.option.color == 'yes':
+        tw.hasmarkup = True
+    if config.option.color == 'no':
+        tw.hasmarkup = False
+    return tw
```
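This new factory is the core of the issue856 fix: every writer is created through one function that applies `--color` consistently, instead of each plugin instantiating `py.io.TerminalWriter()` itself. A standalone sketch of the pattern, with `SimpleNamespace` and a dummy `Writer` standing in for pytest's config and `py.io.TerminalWriter`:

```python
from types import SimpleNamespace

class Writer(object):
    def __init__(self):
        self.hasmarkup = True  # imagine auto-detected from the tty

def create_terminal_writer(config):
    tw = Writer()
    # Override auto-detection with the user's explicit choice.
    if config.option.color == 'yes':
        tw.hasmarkup = True
    if config.option.color == 'no':
        tw.hasmarkup = False
    return tw

config = SimpleNamespace(option=SimpleNamespace(color='no'))
print(create_terminal_writer(config).hasmarkup)  # False
```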
```diff
@@ -66,9 +66,10 @@ def pytest_addoption(parser):
          help="create standalone pytest script at given target path.")
 
 def pytest_cmdline_main(config):
+    import _pytest.config
     genscript = config.getvalue("genscript")
     if genscript:
-        tw = py.io.TerminalWriter()
+        tw = _pytest.config.create_terminal_writer(config)
         deps = ['py', '_pytest', 'pytest']
         if sys.version_info < (2,7):
             deps.append("argparse")
```
```diff
@@ -64,7 +64,8 @@ def pytest_cmdline_main(config):
         return 0
 
 def showhelp(config):
-    tw = py.io.TerminalWriter()
+    import _pytest.config
+    tw = _pytest.config.create_terminal_writer(config)
     tw.write(config._parser.optparser.format_help())
     tw.line()
     tw.line()
```
```diff
@@ -123,10 +123,12 @@ class LogXML(object):
                     Junit.skipped(message="xfail-marked test passes unexpectedly"))
                 self.skipped += 1
             else:
-                if isinstance(report.longrepr, (unicode, str)):
+                if hasattr(report.longrepr, "reprcrash"):
+                    message = report.longrepr.reprcrash.message
+                elif isinstance(report.longrepr, (unicode, str)):
                     message = report.longrepr
                 else:
-                    message = report.longrepr.reprcrash.message
+                    message = str(report.longrepr)
                 message = bin_xml_escape(message)
                 fail = Junit.failure(message=message)
                 fail.append(bin_xml_escape(report.longrepr))
```
```diff
@@ -203,6 +205,9 @@ class LogXML(object):
         self.suite_start_time = time.time()
 
     def pytest_sessionfinish(self):
+        dirname = os.path.dirname(os.path.abspath(self.logfile))
+        if not os.path.isdir(dirname):
+            os.makedirs(dirname)
         logfile = open(self.logfile, 'w', encoding='utf-8')
         suite_stop_time = time.time()
         suite_time_delta = suite_stop_time - self.suite_start_time
```
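The three added lines implement "Automatically create directory for junitxml": the report's parent directory is created before the file is opened. A standalone sketch of the same guard (the path under a fresh temp directory is illustrative):

```python
import os
import tempfile

# Create the parent directory of the report file before writing,
# exactly the shape added to pytest_sessionfinish above.
logfile = os.path.join(tempfile.mkdtemp(), 'reports', 'junit.xml')
dirname = os.path.dirname(os.path.abspath(logfile))
if not os.path.isdir(dirname):
    os.makedirs(dirname)
with open(logfile, 'w') as f:
    f.write('<testsuite/>')
print(os.path.isfile(logfile))  # True
```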
```diff
@@ -502,10 +502,12 @@ class Item(Node):
 class NoMatch(Exception):
     """ raised if matching cannot locate a matching names. """
 
+class Interrupted(KeyboardInterrupt):
+    """ signals an interrupted test run. """
+    __module__ = 'builtins'  # for py3
+
 class Session(FSCollector):
-    class Interrupted(KeyboardInterrupt):
-        """ signals an interrupted test run. """
-        __module__ = 'builtins'  # for py3
+    Interrupted = Interrupted
 
     def __init__(self, config):
         FSCollector.__init__(self, config.rootdir, parent=None,
```
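The hunk above moves the exception to module level while `Interrupted = Interrupted` keeps the old `Session.Interrupted` spelling working. A standalone sketch (a plain class stands in for the real `FSCollector` subclass):

```python
# Module-level exception, catchable without reaching through Session.
class Interrupted(KeyboardInterrupt):
    """ signals an interrupted test run. """
    __module__ = 'builtins'  # for py3

class Session(object):  # stand-in for the real FSCollector subclass
    # Re-export under the old nested name for backward compatibility.
    Interrupted = Interrupted

# Both spellings name the same class, so existing
# `except Session.Interrupted:` handlers keep working.
print(Session.Interrupted is Interrupted)  # True
```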
```diff
@@ -1,5 +1,5 @@
 """ generic mechanism for marking and selecting python functions. """
-import py
+import inspect
 
 
 class MarkerError(Exception):
```
```diff
@@ -43,9 +43,10 @@ def pytest_addoption(parser):
 
 
 def pytest_cmdline_main(config):
+    import _pytest.config
     if config.option.markers:
         config.do_configure()
-        tw = py.io.TerminalWriter()
+        tw = _pytest.config.create_terminal_writer(config)
         for line in config.getini("markers"):
             name, rest = line.split(":", 1)
             tw.write("@pytest.mark.%s:" % name, bold=True)
```
```diff
@@ -253,15 +254,17 @@ class MarkDecorator:
         otherwise add *args/**kwargs in-place to mark information. """
         if args and not kwargs:
             func = args[0]
-            if len(args) == 1 and (istestfunc(func) or
-                                   hasattr(func, '__bases__')):
-                if hasattr(func, '__bases__'):
+            is_class = inspect.isclass(func)
+            if len(args) == 1 and (istestfunc(func) or is_class):
+                if is_class:
                     if hasattr(func, 'pytestmark'):
-                        l = func.pytestmark
-                        if not isinstance(l, list):
-                            func.pytestmark = [l, self]
-                        else:
-                            l.append(self)
+                        mark_list = func.pytestmark
+                        if not isinstance(mark_list, list):
+                            mark_list = [mark_list]
+                        # always work on a copy to avoid updating pytestmark
+                        # from a superclass by accident
+                        mark_list = mark_list + [self]
+                        func.pytestmark = mark_list
                     else:
                         func.pytestmark = [self]
                 else:
```
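The copy-then-rebind dance above is the issue842 fix: `func.pytestmark` looked up on a subclass can resolve to the superclass's list, so an in-place `append` would mark the superclass too. A standalone sketch of the pitfall and the fix:

```python
# Base's marks are stored in a class-level list.
class Base(object):
    pytestmark = ['slow']

class Child(Base):
    pass

# Buggy approach: Child.pytestmark resolves to Base's list, so an
# in-place append would mutate Base.pytestmark as well:
#   Child.pytestmark.append('db')   # Base would now also carry 'db'

# Fixed approach, as in the diff: copy, extend, rebind on the subclass.
mark_list = Child.pytestmark
if not isinstance(mark_list, list):
    mark_list = [mark_list]
Child.pytestmark = mark_list + ['db']

print(Base.pytestmark)   # ['slow']
print(Child.pytestmark)  # ['slow', 'db']
```

Rebinding `Child.pytestmark` creates a fresh class attribute on the subclass, leaving the inherited list untouched.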
```diff
@@ -1,6 +1,6 @@
 """ submit failure or test session information to a pastebin service. """
 import pytest
-import py, sys
+import sys
 import tempfile
 
```
```diff
@@ -69,6 +69,7 @@ def create_new_paste(contents):
     return 'bad response: ' + response
 
 def pytest_terminal_summary(terminalreporter):
+    import _pytest.config
     if terminalreporter.config.option.pastebin != "failed":
         return
     tr = terminalreporter
@@ -79,7 +80,7 @@ def pytest_terminal_summary(terminalreporter):
             msg = rep.longrepr.reprtraceback.reprentries[-1].reprfileloc
         except AttributeError:
             msg = tr._getfailureheadline(rep)
-        tw = py.io.TerminalWriter(stringio=True)
+        tw = _pytest.config.create_terminal_writer(terminalreporter.config, stringio=True)
         rep.toterminal(tw)
         s = tw.stringio.getvalue()
         assert len(s)
```
@@ -4,7 +4,6 @@ import pdb
import sys

import pytest
import py

def pytest_addoption(parser):

@@ -23,23 +22,27 @@ def pytest_configure(config):
old = (pdb.set_trace, pytestPDB._pluginmanager)
def fin():
pdb.set_trace, pytestPDB._pluginmanager = old
pytestPDB._config = None
pdb.set_trace = pytest.set_trace
pytestPDB._pluginmanager = config.pluginmanager
pytestPDB._config = config
config._cleanup.append(fin)

class pytestPDB:
""" Pseudo PDB that defers to the real pdb. """
_pluginmanager = None
_config = None

def set_trace(self):
""" invoke PDB set_trace debugging, dropping any IO capturing. """
import _pytest.config
frame = sys._getframe().f_back
capman = None
if self._pluginmanager is not None:
capman = self._pluginmanager.getplugin("capturemanager")
if capman:
capman.suspendcapture(in_=True)
tw = py.io.TerminalWriter()
tw = _pytest.config.create_terminal_writer(self._config)
tw.line()
tw.sep(">", "PDB set_trace (IO-capturing turned off)")
self._pluginmanager.hook.pytest_enter_pdb()
@@ -1,5 +1,6 @@
""" Python test discovery, setup and run of test functions. """
import fnmatch
import functools
import py
import inspect
import sys

@@ -18,10 +19,19 @@ callable = py.builtin.callable
# used to work around a python2 exception info leak
exc_clear = getattr(sys, 'exc_clear', lambda: None)

def getfslineno(obj):
# xxx let decorators etc specify a sane ordering
def get_real_func(obj):
"""gets the real function object of the (possibly) wrapped object by
functools.wraps or functools.partial.
"""
while hasattr(obj, "__wrapped__"):
obj = obj.__wrapped__
if isinstance(obj, functools.partial):
obj = obj.func
return obj

def getfslineno(obj):
# xxx let decorators etc specify a sane ordering
obj = get_real_func(obj)
if hasattr(obj, 'place_as'):
obj = obj.place_as
fslineno = py.code.getfslineno(obj)
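The new `get_real_func` helper follows the `__wrapped__` chain left by `functools.wraps` and then unwraps `functools.partial`. A self-contained sketch of the behavior (the `base`/`wrapper` names are illustrative):

```python
import functools

def get_real_func(obj):
    # Follow the __wrapped__ chain set by functools.wraps (Python 3.2+),
    # then unwrap functools.partial to reach the underlying function.
    while hasattr(obj, "__wrapped__"):
        obj = obj.__wrapped__
    if isinstance(obj, functools.partial):
        obj = obj.func
    return obj

def base(x):
    return x

@functools.wraps(base)
def wrapper(x):
    return base(x)

# wraps() records base as wrapper.__wrapped__, so unwrapping recovers it
assert get_real_func(wrapper) is base
assert get_real_func(functools.partial(base, 1)) is base
```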
@@ -594,7 +604,7 @@ class FunctionMixin(PyobjMixin):

def _prunetraceback(self, excinfo):
if hasattr(self, '_obj') and not self.config.option.fulltrace:
code = py.code.Code(self.obj)
code = py.code.Code(get_real_func(self.obj))
path, firstlineno = code.path, code.firstlineno
traceback = excinfo.traceback
ntraceback = traceback.cut(path=path, firstlineno=firstlineno)
@@ -937,23 +947,16 @@ def showfixtures(config):
return wrap_session(config, _showfixtures_main)

def _showfixtures_main(config, session):
import _pytest.config
session.perform_collect()
curdir = py.path.local()
if session.items:
nodeid = session.items[0].nodeid
else:
part = session._initialparts[0]
nodeid = "::".join(map(str, [curdir.bestrelpath(part[0])] + part[1:]))
nodeid.replace(session.fspath.sep, "/")

tw = py.io.TerminalWriter()
tw = _pytest.config.create_terminal_writer(config)
verbose = config.getvalue("verbose")

fm = session._fixturemanager

available = []
for argname in fm._arg2fixturedefs:
fixturedefs = fm.getfixturedefs(argname, nodeid)
for argname, fixturedefs in fm._arg2fixturedefs.items():
assert fixturedefs is not None
if not fixturedefs:
continue
@@ -1099,6 +1102,13 @@ class RaisesContext(object):
__tracebackhide__ = True
if tp[0] is None:
pytest.fail("DID NOT RAISE")
if sys.version_info < (2, 7):
# py26: on __exit__() exc_value often does not contain the
# exception value.
# http://bugs.python.org/issue7853
if not isinstance(tp[1], BaseException):
exc_type, value, traceback = tp
tp = exc_type, exc_type(value), traceback
self.excinfo.__init__(tp)
return issubclass(self.excinfo.type, self.ExpectedException)
@@ -1538,7 +1548,7 @@ class FixtureLookupError(LookupError):
for function in stack:
fspath, lineno = getfslineno(function)
try:
lines, _ = inspect.getsourcelines(function)
lines, _ = inspect.getsourcelines(get_real_func(function))
except IOError:
error_msg = "file %s, line %s: source code not available"
addline(error_msg % (fspath, lineno+1))
@@ -1891,10 +1901,13 @@ class FixtureDef:
self.finish()
assert not hasattr(self, "cached_result")

fixturefunc = self.func

if self.unittest:
result = self.func(request.instance, **kwargs)
if request.instance is not None:
# bind the unbound method to the TestCase instance
fixturefunc = self.func.__get__(request.instance)
else:
fixturefunc = self.func
# the fixture function needs to be bound to the actual
# request.instance so that code working with "self" behaves
# as expected.
@@ -1902,12 +1915,13 @@ class FixtureDef:
fixturefunc = getimfunc(self.func)
if fixturefunc != self.func:
fixturefunc = fixturefunc.__get__(request.instance)
try:
result = call_fixture_func(fixturefunc, request, kwargs,
self.yieldctx)
except Exception:
self.cached_result = (None, my_cache_key, sys.exc_info())
raise

try:
result = call_fixture_func(fixturefunc, request, kwargs,
self.yieldctx)
except Exception:
self.cached_result = (None, my_cache_key, sys.exc_info())
raise
self.cached_result = (result, my_cache_key, None)
return result
@@ -1938,7 +1952,15 @@ def getfuncargnames(function, startindex=None):
if realfunction != function:
startindex += num_mock_patch_args(function)
function = realfunction
argnames = inspect.getargs(py.code.getrawcode(function))[0]
if isinstance(function, functools.partial):
argnames = inspect.getargs(py.code.getrawcode(function.func))[0]
partial = function
argnames = argnames[len(partial.args):]
if partial.keywords:
for kw in partial.keywords:
argnames.remove(kw)
else:
argnames = inspect.getargs(py.code.getrawcode(function))[0]
defaults = getattr(function, 'func_defaults',
getattr(function, '__defaults__', None)) or ()
numdefaults = len(defaults)
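The hunk above teaches `getfuncargnames` about `functools.partial`: inspect the wrapped function's code object, then drop arguments already bound positionally or by keyword. A hedged stdlib-only sketch of the same idea (`argnames_of` and `f` are hypothetical names, not pytest API):

```python
import functools
import inspect

def argnames_of(function):
    # Mirror the diff's approach: for a partial, read the wrapped
    # function's argument names and strip the ones the partial binds.
    if isinstance(function, functools.partial):
        names = list(inspect.getargs(function.func.__code__).args)
        names = names[len(function.args):]        # drop bound positionals
        for kw in (function.keywords or {}):      # drop bound keywords
            names.remove(kw)
        return names
    return list(inspect.getargs(function.__code__).args)

def f(a, b, c):
    pass

assert argnames_of(functools.partial(f, 1)) == ["b", "c"]
assert argnames_of(functools.partial(f, 1, c=3)) == ["b"]
```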
@@ -45,8 +45,8 @@ def deprecated_call(func, *args, **kwargs):
try:
ret = func(*args, **kwargs)
finally:
warnings.warn_explicit = warn_explicit
warnings.warn = warn
warnings.warn_explicit = oldwarn_explicit
warnings.warn = oldwarn
if not l:
__tracebackhide__ = True
raise AssertionError("%r did not produce DeprecationWarning" %(func,))
@@ -3,6 +3,7 @@ text file.
"""

import py
import os

def pytest_addoption(parser):
group = parser.getgroup("terminal reporting", "resultlog plugin options")

@@ -14,6 +15,9 @@ def pytest_configure(config):
resultlog = config.option.resultlog
# prevent opening resultlog on slave nodes (xdist)
if resultlog and not hasattr(config, 'slaveinput'):
dirname = os.path.dirname(os.path.abspath(resultlog))
if not os.path.isdir(dirname):
os.makedirs(dirname)
logfile = open(resultlog, 'w', 1) # line buffered
config._resultlog = ResultLog(config, logfile)
config.pluginmanager.register(config._resultlog)
@@ -483,8 +483,6 @@ def importorskip(modname, minversion=None):
""" return imported module if it has at least "minversion" as its
__version__ attribute. If no minversion is specified the a skip
is only triggered if the module can not be imported.
Note that version comparison only works with simple version strings
like "1.2.3" but not "1.2.3.dev1" or others.
"""
__tracebackhide__ = True
compile(modname, '', 'eval') # to catch syntaxerrors

@@ -496,9 +494,14 @@ def importorskip(modname, minversion=None):
if minversion is None:
return mod
verattr = getattr(mod, '__version__', None)
def intver(verstring):
return [int(x) for x in verstring.split(".")]
if verattr is None or intver(verattr) < intver(minversion):
skip("module %r has __version__ %r, required is: %r" %(
modname, verattr, minversion))
if minversion is not None:
try:
from pkg_resources import parse_version as pv
except ImportError:
skip("we have a required version for %r but can not import "
"no pkg_resources to parse version strings." %(modname,))
if verattr is None or pv(verattr) < pv(minversion):
skip("module %r has __version__ %r, required is: %r" %(
modname, verattr, minversion))
return mod
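The `importorskip` hunk drops the naive `intver` comparison in favor of `pkg_resources.parse_version`. A small sketch showing why the naive scheme was removed:

```python
def intver(verstring):
    # The naive scheme the diff removes: only handles purely numeric
    # dotted versions like "1.4.29".
    return [int(x) for x in verstring.split(".")]

# List comparison works for numeric versions (string comparison would not:
# "1.4.29" < "1.4.3" lexicographically).
assert intver("1.4.29") > intver("1.4.3")

# But any suffix such as "dev1" breaks the int() conversion, which is why
# the diff delegates to pkg_resources.parse_version instead.
failed = False
try:
    intver("1.2.3.dev1")
except ValueError:
    failed = True
assert failed
```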
@@ -98,24 +98,36 @@ class MarkEvaluator:
return d

def _istrue(self):
if hasattr(self, 'result'):
return self.result
if self.holder:
d = self._getglobals()
if self.holder.args:
self.result = False
for expr in self.holder.args:
self.expr = expr
if isinstance(expr, py.builtin._basestring):
result = cached_eval(self.item.config, expr, d)
else:
if self.get("reason") is None:
# XXX better be checked at collection time
pytest.fail("you need to specify reason=STRING "
"when using booleans as conditions.")
result = bool(expr)
if result:
self.result = True
# "holder" might be a MarkInfo or a MarkDecorator; only
# MarkInfo keeps track of all parameters it received in an
# _arglist attribute
if hasattr(self.holder, '_arglist'):
arglist = self.holder._arglist
else:
arglist = [(self.holder.args, self.holder.kwargs)]
for args, kwargs in arglist:
for expr in args:
self.expr = expr
break
if isinstance(expr, py.builtin._basestring):
result = cached_eval(self.item.config, expr, d)
else:
if "reason" not in kwargs:
# XXX better be checked at collection time
msg = "you need to specify reason=STRING " \
"when using booleans as conditions."
pytest.fail(msg)
result = bool(expr)
if result:
self.result = True
self.reason = kwargs.get('reason', None)
self.expr = expr
return self.result
else:
self.result = True
return getattr(self, 'result', False)
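The `_istrue` rewrite above iterates over every `(args, kwargs)` pair a mark collected, rather than only the merged arguments. A simplified, hypothetical sketch of that evaluation loop (the `arglist` data and use of plain `eval` stand in for pytest's `cached_eval`):

```python
import sys

# A mark applied twice contributes one (args, kwargs) pair per decorator;
# the evaluator must check each pair, not just the last one.
arglist = [
    (("sys.platform == 'win32'",), {}),          # string condition
    ((True,), {"reason": "always skipped"}),     # boolean condition
]

globals_d = {"sys": sys}
result = False
reason = None
for args, kwargs in arglist:
    for expr in args:
        if isinstance(expr, str):
            r = eval(expr, globals_d)            # stand-in for cached_eval
        else:
            # booleans require an explicit reason, as the diff enforces
            assert "reason" in kwargs
            r = bool(expr)
        if r:
            result = True
            reason = kwargs.get("reason")

assert result                     # the boolean True pair triggers the skip
assert reason == "always skipped"
```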
@@ -124,7 +136,7 @@
return self.holder.kwargs.get(attr, default)

def getexplanation(self):
expl = self.get('reason', None)
expl = getattr(self, 'reason', None) or self.get('reason', None)
if not expl:
if not hasattr(self, 'expr'):
return ""

@@ -137,6 +149,7 @@
def pytest_runtest_setup(item):
evalskip = MarkEvaluator(item, 'skipif')
if evalskip.istrue():
item._evalskip = evalskip
pytest.skip(evalskip.getexplanation())
item._evalxfail = MarkEvaluator(item, 'xfail')
check_xfail_no_run(item)
@@ -156,6 +169,7 @@ def pytest_runtest_makereport(item, call):
outcome = yield
rep = outcome.get_result()
evalxfail = getattr(item, '_evalxfail', None)
evalskip = getattr(item, '_evalskip', None)
# unitttest special case, see setting of _unexpectedsuccess
if hasattr(item, '_unexpectedsuccess') and rep.when == "call":
# we need to translate into how pytest encodes xpass

@@ -177,6 +191,13 @@ def pytest_runtest_makereport(item, call):
elif call.when == "call":
rep.outcome = "failed" # xpass outcome
rep.wasxfail = evalxfail.getexplanation()
elif evalskip is not None and rep.skipped and type(rep.longrepr) is tuple:
# skipped by mark.skipif; change the location of the failure
# to point to the item definition, otherwise it will display
# the location of where the skip exception was raised within pytest
filename, line, reason = rep.longrepr
filename, line = item.location[:2]
rep.longrepr = filename, line, reason

# called by terminalreporter progress reporting
def pytest_report_teststatus(report):
@@ -87,6 +87,7 @@ class WarningReport:

class TerminalReporter:
def __init__(self, config, file=None):
import _pytest.config
self.config = config
self.verbosity = self.config.option.verbose
self.showheader = self.verbosity >= 0

@@ -98,11 +99,8 @@ class TerminalReporter:
self.startdir = py.path.local()
if file is None:
file = sys.stdout
self._tw = self.writer = py.io.TerminalWriter(file)
if self.config.option.color == 'yes':
self._tw.hasmarkup = True
if self.config.option.color == 'no':
self._tw.hasmarkup = False
self._tw = self.writer = _pytest.config.create_terminal_writer(config,
file)
self.currentfspath = None
self.reportchars = getreportopt(config)
self.hasmarkup = self._tw.hasmarkup
@@ -43,7 +43,14 @@ class TempdirHandler:
basetemp.remove()
basetemp.mkdir()
else:
basetemp = py.path.local.make_numbered_dir(prefix='pytest-')
# use a sub-directory in the temproot to speed-up
# make_numbered_dir() call
import getpass
temproot = py.path.local.get_temproot()
rootdir = temproot.join('pytest-%s' % getpass.getuser())
rootdir.ensure(dir=1)
basetemp = py.path.local.make_numbered_dir(prefix='pytest-',
rootdir=rootdir)
self._basetemp = t = basetemp.realpath()
self.trace("new basetemp", t)
return t
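The tempdir hunk groups numbered base directories under a per-user root so that scanning for the next free number stays fast. A stdlib-only sketch of the same layout (using `tempfile`/`os` instead of `py.path`, which is an assumption of this example):

```python
import getpass
import os
import tempfile

# Group pytest's numbered temp dirs under a per-user subdirectory of the
# system temp root, as the diff does with py.path's make_numbered_dir().
temproot = tempfile.gettempdir()
rootdir = os.path.join(temproot, "pytest-%s" % getpass.getuser())
os.makedirs(rootdir, exist_ok=True)   # idempotent, like rootdir.ensure(dir=1)

assert os.path.isdir(rootdir)
```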
@@ -9,6 +9,7 @@ import py

# for transfering markers
from _pytest.python import transfer_markers
from _pytest.skipping import MarkEvaluator

def pytest_pycollect_makeitem(collector, name, obj):

@@ -113,6 +114,8 @@ class TestCaseFunction(pytest.Function):
try:
pytest.skip(reason)
except pytest.skip.Exception:
self._evalskip = MarkEvaluator(self, 'SkipTest')
self._evalskip.result = True
self._addexcinfo(sys.exc_info())

def addExpectedFailure(self, testcase, rawexcinfo, reason=""):
82
appveyor.yml
Normal file
@@ -0,0 +1,82 @@
environment:
global:
# SDK v7.0 MSVC Express 2008's SetEnv.cmd script will fail if the
# /E:ON and /V:ON options are not enabled in the batch script intepreter
# See: http://stackoverflow.com/a/13751649/163740
CMD_IN_ENV: "cmd /E:ON /V:ON /C .\\appveyor\\run_with_env.cmd"

matrix:

# Pre-installed Python versions, which Appveyor may upgrade to
# a later point release.

- PYTHON: "C:\\Python27"
PYTHON_VERSION: "2.7.x" # currently 2.7.9
PYTHON_ARCH: "32"
TESTENV: "py27"

- PYTHON: "C:\\Python27-x64"
PYTHON_VERSION: "2.7.x" # currently 2.7.9
PYTHON_ARCH: "64"
TESTENV: "py27"

- PYTHON: "C:\\Python33"
PYTHON_VERSION: "3.3.x" # currently 3.3.5
PYTHON_ARCH: "32"
TESTENV: "py33"

- PYTHON: "C:\\Python33-x64"
PYTHON_VERSION: "3.3.x" # currently 3.3.5
PYTHON_ARCH: "64"
TESTENV: "py33"

- PYTHON: "C:\\Python34"
PYTHON_VERSION: "3.4.x" # currently 3.4.3
PYTHON_ARCH: "32"
TESTENV: "py34"

- PYTHON: "C:\\Python34-x64"
PYTHON_VERSION: "3.4.x" # currently 3.4.3
PYTHON_ARCH: "64"
TESTENV: "py34"

# Also test a Python version not pre-installed
# See: https://github.com/ogrisel/python-appveyor-demo/issues/10

- PYTHON: "C:\\Python266"
PYTHON_VERSION: "2.6.6"
PYTHON_ARCH: "32"
TESTENV: "py26"

install:
- ECHO "Filesystem root:"
- ps: "ls \"C:/\""

- ECHO "Installed SDKs:"
- ps: "ls \"C:/Program Files/Microsoft SDKs/Windows\""

# Install Python (from the official .msi of http://python.org) and pip when
# not already installed.
- ps: if (-not(Test-Path($env:PYTHON))) { & appveyor\install.ps1 }

# Prepend newly installed Python to the PATH of this build (this cannot be
# done from inside the powershell script as it would require to restart
# the parent CMD process).
- "SET PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%"

# Check that we have the expected version and architecture for Python
- "python --version"
- "python -c \"import struct; print(struct.calcsize('P') * 8)\""

# Install the build dependencies of the project. If some dependencies contain
# compiled extensions and are not provided as pre-built wheel packages,
# pip will build them from source using the MSVC compiler matching the
# target Python version and architecture
- "%CMD_IN_ENV% pip install tox"

build: false # Not a C# project, build stuff at the test step instead.

test_script:
# Build the compiled extension and run the project tests
- "%CMD_IN_ENV% tox -e %TESTENV%"
180
appveyor/install.ps1
Normal file
@@ -0,0 +1,180 @@
# Sample script to install Python and pip under Windows
# Authors: Olivier Grisel, Jonathan Helmus and Kyle Kastner
# License: CC0 1.0 Universal: http://creativecommons.org/publicdomain/zero/1.0/

$MINICONDA_URL = "http://repo.continuum.io/miniconda/"
$BASE_URL = "https://www.python.org/ftp/python/"
$GET_PIP_URL = "https://bootstrap.pypa.io/get-pip.py"
$GET_PIP_PATH = "C:\get-pip.py"


function DownloadPython ($python_version, $platform_suffix) {
$webclient = New-Object System.Net.WebClient
$filename = "python-" + $python_version + $platform_suffix + ".msi"
$url = $BASE_URL + $python_version + "/" + $filename

$basedir = $pwd.Path + "\"
$filepath = $basedir + $filename
if (Test-Path $filename) {
Write-Host "Reusing" $filepath
return $filepath
}

# Download and retry up to 3 times in case of network transient errors.
Write-Host "Downloading" $filename "from" $url
$retry_attempts = 2
for($i=0; $i -lt $retry_attempts; $i++){
try {
$webclient.DownloadFile($url, $filepath)
break
}
Catch [Exception]{
Start-Sleep 1
}
}
if (Test-Path $filepath) {
Write-Host "File saved at" $filepath
} else {
# Retry once to get the error message if any at the last try
$webclient.DownloadFile($url, $filepath)
}
return $filepath
}


function InstallPython ($python_version, $architecture, $python_home) {
Write-Host "Installing Python" $python_version "for" $architecture "bit architecture to" $python_home
if (Test-Path $python_home) {
Write-Host $python_home "already exists, skipping."
return $false
}
if ($architecture -eq "32") {
$platform_suffix = ""
} else {
$platform_suffix = ".amd64"
}
$msipath = DownloadPython $python_version $platform_suffix
Write-Host "Installing" $msipath "to" $python_home
$install_log = $python_home + ".log"
$install_args = "/qn /log $install_log /i $msipath TARGETDIR=$python_home"
$uninstall_args = "/qn /x $msipath"
RunCommand "msiexec.exe" $install_args
if (-not(Test-Path $python_home)) {
Write-Host "Python seems to be installed else-where, reinstalling."
RunCommand "msiexec.exe" $uninstall_args
RunCommand "msiexec.exe" $install_args
}
if (Test-Path $python_home) {
Write-Host "Python $python_version ($architecture) installation complete"
} else {
Write-Host "Failed to install Python in $python_home"
Get-Content -Path $install_log
Exit 1
}
}

function RunCommand ($command, $command_args) {
Write-Host $command $command_args
Start-Process -FilePath $command -ArgumentList $command_args -Wait -Passthru
}


function InstallPip ($python_home) {
$pip_path = $python_home + "\Scripts\pip.exe"
$python_path = $python_home + "\python.exe"
if (-not(Test-Path $pip_path)) {
Write-Host "Installing pip..."
$webclient = New-Object System.Net.WebClient
$webclient.DownloadFile($GET_PIP_URL, $GET_PIP_PATH)
Write-Host "Executing:" $python_path $GET_PIP_PATH
Start-Process -FilePath "$python_path" -ArgumentList "$GET_PIP_PATH" -Wait -Passthru
} else {
Write-Host "pip already installed."
}
}


function DownloadMiniconda ($python_version, $platform_suffix) {
$webclient = New-Object System.Net.WebClient
if ($python_version -eq "3.4") {
$filename = "Miniconda3-3.5.5-Windows-" + $platform_suffix + ".exe"
} else {
$filename = "Miniconda-3.5.5-Windows-" + $platform_suffix + ".exe"
}
$url = $MINICONDA_URL + $filename

$basedir = $pwd.Path + "\"
$filepath = $basedir + $filename
if (Test-Path $filename) {
Write-Host "Reusing" $filepath
return $filepath
}

# Download and retry up to 3 times in case of network transient errors.
Write-Host "Downloading" $filename "from" $url
$retry_attempts = 2
for($i=0; $i -lt $retry_attempts; $i++){
try {
$webclient.DownloadFile($url, $filepath)
break
}
Catch [Exception]{
Start-Sleep 1
}
}
if (Test-Path $filepath) {
Write-Host "File saved at" $filepath
} else {
# Retry once to get the error message if any at the last try
$webclient.DownloadFile($url, $filepath)
}
return $filepath
}


function InstallMiniconda ($python_version, $architecture, $python_home) {
Write-Host "Installing Python" $python_version "for" $architecture "bit architecture to" $python_home
if (Test-Path $python_home) {
Write-Host $python_home "already exists, skipping."
return $false
}
if ($architecture -eq "32") {
$platform_suffix = "x86"
} else {
$platform_suffix = "x86_64"
}
$filepath = DownloadMiniconda $python_version $platform_suffix
Write-Host "Installing" $filepath "to" $python_home
$install_log = $python_home + ".log"
$args = "/S /D=$python_home"
Write-Host $filepath $args
Start-Process -FilePath $filepath -ArgumentList $args -Wait -Passthru
if (Test-Path $python_home) {
Write-Host "Python $python_version ($architecture) installation complete"
} else {
Write-Host "Failed to install Python in $python_home"
Get-Content -Path $install_log
Exit 1
}
}


function InstallMinicondaPip ($python_home) {
$pip_path = $python_home + "\Scripts\pip.exe"
$conda_path = $python_home + "\Scripts\conda.exe"
if (-not(Test-Path $pip_path)) {
Write-Host "Installing pip..."
$args = "install --yes pip"
Write-Host $conda_path $args
Start-Process -FilePath "$conda_path" -ArgumentList $args -Wait -Passthru
} else {
Write-Host "pip already installed."
}
}

function main () {
InstallPython $env:PYTHON_VERSION $env:PYTHON_ARCH $env:PYTHON
InstallPip $env:PYTHON
}

main
47
appveyor/run_with_env.cmd
Normal file
@@ -0,0 +1,47 @@
:: To build extensions for 64 bit Python 3, we need to configure environment
:: variables to use the MSVC 2010 C++ compilers from GRMSDKX_EN_DVD.iso of:
:: MS Windows SDK for Windows 7 and .NET Framework 4 (SDK v7.1)
::
:: To build extensions for 64 bit Python 2, we need to configure environment
:: variables to use the MSVC 2008 C++ compilers from GRMSDKX_EN_DVD.iso of:
:: MS Windows SDK for Windows 7 and .NET Framework 3.5 (SDK v7.0)
::
:: 32 bit builds do not require specific environment configurations.
::
:: Note: this script needs to be run with the /E:ON and /V:ON flags for the
:: cmd interpreter, at least for (SDK v7.0)
::
:: More details at:
:: https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows
:: http://stackoverflow.com/a/13751649/163740
::
:: Author: Olivier Grisel
:: License: CC0 1.0 Universal: http://creativecommons.org/publicdomain/zero/1.0/
@ECHO OFF

SET COMMAND_TO_RUN=%*
SET WIN_SDK_ROOT=C:\Program Files\Microsoft SDKs\Windows

SET MAJOR_PYTHON_VERSION="%PYTHON_VERSION:~0,1%"
IF %MAJOR_PYTHON_VERSION% == "2" (
SET WINDOWS_SDK_VERSION="v7.0"
) ELSE IF %MAJOR_PYTHON_VERSION% == "3" (
SET WINDOWS_SDK_VERSION="v7.1"
) ELSE (
ECHO Unsupported Python version: "%MAJOR_PYTHON_VERSION%"
EXIT 1
)

IF "%PYTHON_ARCH%"=="64" (
ECHO Configuring Windows SDK %WINDOWS_SDK_VERSION% for Python %MAJOR_PYTHON_VERSION% on a 64 bit architecture
SET DISTUTILS_USE_SDK=1
SET MSSdk=1
"%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Setup\WindowsSdkVer.exe" -q -version:%WINDOWS_SDK_VERSION%
"%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Bin\SetEnv.cmd" /x64 /release
ECHO Executing: %COMMAND_TO_RUN%
call %COMMAND_TO_RUN% || EXIT 1
) ELSE (
ECHO Using default MSVC build environment for 32 bit architecture
ECHO Executing: %COMMAND_TO_RUN%
call %COMMAND_TO_RUN% || EXIT 1
)
@@ -3,9 +3,9 @@
<li><a href="{{ pathto('index') }}">The pytest Website</a></li>
<li><a href="{{ pathto('contributing') }}">Contribution Guide</a></li>
<li><a href="https://pypi.python.org/pypi/pytest">pytest @ PyPI</a></li>
<li><a href="https://bitbucket.org/pytest-dev/pytest/">pytest @ Bitbucket</a></li>
<li><a href="https://github.com/pytest-dev/pytest/">pytest @ GitHub</a></li>
<li><a href="http://pytest.org/latest/plugins_index/index.html">3rd party plugins</a></li>
<li><a href="https://bitbucket.org/pytest-dev/pytest/issues?status=new&status=open">Issue Tracker</a></li>
<li><a href="https://github.com/pytest-dev/pytest/issues">Issue Tracker</a></li>
<li><a href="http://pytest.org/latest/pytest.pdf">PDF Documentation</a>
</ul>
58
doc/en/announce/release-2.7.2.txt
Normal file
@@ -0,0 +1,58 @@
pytest-2.7.2: bug fixes
=======================

pytest is a mature Python testing tool with more than 1100 tests
against itself, passing on many different interpreters and platforms.
This release is supposed to be drop-in compatible to 2.7.1.

See below for the changes and see docs at:

http://pytest.org

As usual, you can upgrade from pypi via::

pip install -U pytest

Thanks to all who contributed to this release, among them:

Bruno Oliveira
Floris Bruynooghe
Punyashloka Biswal
Aron Curzon
Benjamin Peterson
Thomas De Schampheleire
Edison Gustavo Muenz
Holger Krekel

Happy testing,
The py.test Development Team

2.7.2 (compared to 2.7.1)
-----------------------------

- fix issue767: pytest.raises value attribute does not contain the exception
instance on Python 2.6. Thanks Eric Siegerman for providing the test
case and Bruno Oliveira for PR.

- Automatically create directory for junitxml and results log.
Thanks Aron Curzon.

- fix issue713: JUnit XML reports for doctest failures.
Thanks Punyashloka Biswal.

- fix issue735: assertion failures on debug versions of Python 3.4+
Thanks Benjamin Peterson.

- fix issue114: skipif marker reports to internal skipping plugin;
Thanks Floris Bruynooghe for reporting and Bruno Oliveira for the PR.

- fix issue748: unittest.SkipTest reports to internal pytest unittest plugin.
Thanks Thomas De Schampheleire for reporting and Bruno Oliveira for the PR.

- fix issue718: failed to create representation of sets containing unsortable
elements in python 2. Thanks Edison Gustavo Muenz

- fix issue756, fix issue752 (and similar issues): depend on py-1.4.29
which has a refined algorithm for traceback generation.
@@ -10,7 +10,7 @@ Pass different values to a test function, depending on command line options
|
||||
.. regendoc:wipe
|
||||
|
||||
Suppose we want to write a test that depends on a command line option.
|
||||
Here is a basic pattern how to achieve this::
|
||||
Here is a basic pattern to achieve this::
|
||||
|
||||
# content of test_sample.py
|
||||
def test_answer(cmdopt):
@@ -41,9 +41,9 @@ Let's run this without supplying our new option::

    F
    ================================= FAILURES =================================
    _______________________________ test_answer ________________________________

    cmdopt = 'type1'

        def test_answer(cmdopt):
            if cmdopt == "type1":
                print ("first")
@@ -51,7 +51,7 @@ Let's run this without supplying our new option::
            print ("second")
    >       assert 0 # to see what was printed
    E       assert 0

    test_sample.py:6: AssertionError
    --------------------------- Captured stdout call ---------------------------
    first
@@ -109,9 +109,9 @@ directory with the above conftest.py::

    $ py.test
    =========================== test session starts ============================
    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
    rootdir: /tmp/doc-exec-162, inifile:
    collected 0 items

    ============================= in 0.00 seconds =============================

 .. _`excontrolskip`:

@@ -154,13 +154,13 @@ and when running it will see a skipped "slow" test::

    $ py.test -rs    # "-rs" means report details on the little 's'
    =========================== test session starts ============================
    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
    rootdir: /tmp/doc-exec-162, inifile:
    collected 2 items

    test_module.py .s
    ========================= short test summary info ==========================
    SKIP [1] /tmp/doc-exec-162/conftest.py:9: need --runslow option to run

    =================== 1 passed, 1 skipped in 0.01 seconds ====================

 Or run it including the ``slow`` marked test::
@@ -168,11 +168,11 @@ Or run it including the ``slow`` marked test::

    $ py.test --runslow
    =========================== test session starts ============================
    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
    rootdir: /tmp/doc-exec-162, inifile:
    collected 2 items

    test_module.py ..

    ========================= 2 passed in 0.01 seconds =========================

 Writing well integrated assertion helpers

@@ -205,11 +205,11 @@ Let's run our little function::

    F
    ================================= FAILURES =================================
    ______________________________ test_something ______________________________

        def test_something():
    >       checkconfig(42)
    E       Failed: not configured: 42

    test_checkconfig.py:8: Failed
    1 failed in 0.02 seconds

@@ -260,10 +260,10 @@ which will add the string to the test header accordingly::

    $ py.test
    =========================== test session starts ============================
    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
    rootdir: /tmp/doc-exec-162, inifile:
    project deps: mylib-1.1
    collected 0 items

    ============================= in 0.00 seconds =============================

 .. regendoc:wipe

@@ -284,11 +284,11 @@ which will add info only when run with "--v"::

    $ py.test -v
    =========================== test session starts ============================
    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1 -- /tmp/sandbox/pytest/.tox/regen/bin/python3.4
    rootdir: /tmp/doc-exec-162, inifile:
    info1: did you know that ...
    did you?
    collecting ... collected 0 items

    ============================= in 0.00 seconds =============================

 and nothing when run plainly::

@@ -296,9 +296,9 @@ and nothing when run plainly::

    $ py.test
    =========================== test session starts ============================
    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
    rootdir: /tmp/doc-exec-162, inifile:
    collected 0 items

    ============================= in 0.00 seconds =============================

 profiling test duration
@@ -329,11 +329,11 @@ Now we can profile which test functions execute the slowest::

    $ py.test --durations=3
    =========================== test session starts ============================
    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
    rootdir: /tmp/doc-exec-162, inifile:
    collected 3 items

    test_some_are_slow.py ...

    ========================= slowest 3 test durations =========================
    0.20s call     test_some_are_slow.py::test_funcslow2
    0.10s call     test_some_are_slow.py::test_funcslow1
@@ -391,20 +391,20 @@ If we run this::

    $ py.test -rx
    =========================== test session starts ============================
    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
    rootdir: /tmp/doc-exec-162, inifile:
    collected 4 items

    test_step.py .Fx.

    ================================= FAILURES =================================
    ____________________ TestUserHandling.test_modification ____________________

    self = <test_step.TestUserHandling object at 0x7ff60bbb83c8>

        def test_modification(self):
    >       assert 0
    E       assert 0

    test_step.py:9: AssertionError
    ========================= short test summary info ==========================
    XFAIL test_step.py::TestUserHandling::()::test_deletion
@@ -462,14 +462,14 @@ We can run this::

    $ py.test
    =========================== test session starts ============================
    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
    rootdir: /tmp/doc-exec-162, inifile:
    collected 7 items

    test_step.py .Fx.
    a/test_db.py F
    a/test_db2.py F
    b/test_error.py E

    ================================== ERRORS ==================================
    _______________________ ERROR at setup of test_root ________________________
    file /tmp/doc-exec-162/b/test_error.py, line 1
@@ -477,37 +477,37 @@ We can run this::
    fixture 'db' not found
    available fixtures: pytestconfig, capsys, recwarn, monkeypatch, tmpdir, capfd
    use 'py.test --fixtures [testpath]' for help on them.

    /tmp/doc-exec-162/b/test_error.py:1
    ================================= FAILURES =================================
    ____________________ TestUserHandling.test_modification ____________________

    self = <test_step.TestUserHandling object at 0x7f8ecd5b87f0>

        def test_modification(self):
    >       assert 0
    E       assert 0

    test_step.py:9: AssertionError
    _________________________________ test_a1 __________________________________

    db = <conftest.DB object at 0x7f8ecdc11470>

        def test_a1(db):
    >       assert 0, db  # to show value
    E       AssertionError: <conftest.DB object at 0x7f8ecdc11470>
    E       assert 0

    a/test_db.py:2: AssertionError
    _________________________________ test_a2 __________________________________

    db = <conftest.DB object at 0x7f8ecdc11470>

        def test_a2(db):
    >       assert 0, db  # to show value
    E       AssertionError: <conftest.DB object at 0x7f8ecdc11470>
    E       assert 0

    a/test_db2.py:2: AssertionError
    ========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.05 seconds ==========
@@ -565,27 +565,27 @@ and run them::

    $ py.test test_module.py
    =========================== test session starts ============================
    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
    rootdir: /tmp/doc-exec-162, inifile:
    collected 2 items

    test_module.py FF

    ================================= FAILURES =================================
    ________________________________ test_fail1 ________________________________

    tmpdir = local('/tmp/pytest-22/test_fail10')

        def test_fail1(tmpdir):
    >       assert 0
    E       assert 0

    test_module.py:2: AssertionError
    ________________________________ test_fail2 ________________________________

        def test_fail2():
    >       assert 0
    E       assert 0

    test_module.py:4: AssertionError
    ========================= 2 failed in 0.02 seconds =========================

@@ -656,38 +656,38 @@ and run it::

    $ py.test -s test_module.py
    =========================== test session starts ============================
    platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
    rootdir: /tmp/doc-exec-162, inifile:
    collected 3 items

    test_module.py Esetting up a test failed! test_module.py::test_setup_fails
    Fexecuting test failed test_module.py::test_call_fails
    F

    ================================== ERRORS ==================================
    ____________________ ERROR at setup of test_setup_fails ____________________

        @pytest.fixture
        def other():
    >       assert 0
    E       assert 0

    test_module.py:6: AssertionError
    ================================= FAILURES =================================
    _____________________________ test_call_fails ______________________________

    something = None

        def test_call_fails(something):
    >       assert 0
    E       assert 0

    test_module.py:12: AssertionError
    ________________________________ test_fail2 ________________________________

        def test_fail2():
    >       assert 0
    E       assert 0

    test_module.py:15: AssertionError
    ==================== 2 failed, 1 error in 0.02 seconds =====================
@@ -106,15 +106,16 @@ Is using pytest fixtures versus xUnit setup a style question?

 For simple applications and for people experienced with nose_ or
 unittest-style test setup using `xUnit style setup`_ probably
 feels natural.  For larger test suites, parametrized testing
-or setup of complex test resources using funcargs_ may feel more natural.
-Moreover, funcargs are ideal for writing advanced test support
-code (like e.g. the monkeypatch_, the tmpdir_ or capture_ funcargs)
+or setup of complex test resources using fixtures_ may feel more natural.
+Moreover, fixtures are ideal for writing advanced test support
+code (like e.g. the monkeypatch_, the tmpdir_ or capture_ fixtures)
 because the support code can register setup/teardown functions
 in a managed class/module/function scope.

 .. _monkeypatch: monkeypatch.html
 .. _tmpdir: tmpdir.html
 .. _capture: capture.html
+.. _fixtures: fixture.html
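The "setup/teardown functions in a managed scope" mentioned in the hunk above can be sketched with a plain finalizer list per scope. The names below (``FixtureScope``, the event strings) are illustrative, not pytest API:

```python
# a minimal sketch of scope-managed teardown (illustrative, not pytest internals)
class FixtureScope:
    def __init__(self, name):
        self.name = name
        self._finalizers = []

    def addfinalizer(self, fin):
        # support code registers teardown callbacks for this scope
        self._finalizers.append(fin)

    def teardown(self):
        # run finalizers in reverse registration order, as pytest does
        while self._finalizers:
            self._finalizers.pop()()

events = []
scope = FixtureScope("module")
scope.addfinalizer(lambda: events.append("close smtp"))
scope.addfinalizer(lambda: events.append("remove tmpdir"))
scope.teardown()
print(events)  # ['remove tmpdir', 'close smtp'] -- last registered runs first
```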

 .. _`why pytest_pyfuncarg__ methods?`:

@@ -62,7 +62,7 @@ using it::

     @pytest.fixture
     def smtp():
         import smtplib
-        return smtplib.SMTP("merlinux.eu")
+        return smtplib.SMTP("smtp.gmail.com")

     def test_ehlo(smtp):
         response, msg = smtp.ehlo()
@@ -169,7 +169,7 @@ access the fixture function::

     @pytest.fixture(scope="module")
     def smtp():
-        return smtplib.SMTP("merlinux.eu")
+        return smtplib.SMTP("smtp.gmail.com")

 The name of the fixture again is ``smtp`` and you can access its result by
 listing the name ``smtp`` as an input parameter in any test or fixture
@@ -178,14 +178,14 @@ function (in or below the directory where ``conftest.py`` is located)::

     # content of test_module.py

     def test_ehlo(smtp):
-        response = smtp.ehlo()
-        assert response[0] == 250
-        assert "merlinux" in response[1]
+        response, msg = smtp.ehlo()
+        assert response == 250
+        assert "smtp.gmail.com" in str(msg, 'ascii')
         assert 0  # for demo purposes

     def test_noop(smtp):
-        response = smtp.noop()
-        assert response[0] == 250
+        response, msg = smtp.noop()
+        assert response == 250
         assert 0  # for demo purposes

 We deliberately insert failing ``assert 0`` statements in order to
@@ -255,7 +255,7 @@ or multiple times::

     @pytest.fixture(scope="module")
     def smtp(request):
-        smtp = smtplib.SMTP("merlinux.eu")
+        smtp = smtplib.SMTP("smtp.gmail.com")
         def fin():
             print ("teardown smtp")
             smtp.close()
@@ -296,7 +296,7 @@ read an optional server URL from the test module which uses our fixture::

     @pytest.fixture(scope="module")
     def smtp(request):
-        server = getattr(request.module, "smtpserver", "merlinux.eu")
+        server = getattr(request.module, "smtpserver", "smtp.gmail.com")
         smtp = smtplib.SMTP(server)

         def fin():
@@ -359,7 +359,7 @@ through the special :py:class:`request <FixtureRequest>` object::

     import smtplib

     @pytest.fixture(scope="module",
-                    params=["merlinux.eu", "mail.python.org"])
+                    params=["smtp.gmail.com", "mail.python.org"])
     def smtp(request):
         smtp = smtplib.SMTP(request.param)
         def fin():
@@ -431,7 +431,7 @@ connection the second test fails in ``test_ehlo`` because a
 different server string is expected than what arrived.

 pytest will build a string that is the test ID for each fixture value
-in a parametrized fixture, e.g. ``test_ehlo[merlinux.eu]`` and
+in a parametrized fixture, e.g. ``test_ehlo[smtp.gmail.com]`` and
 ``test_ehlo[mail.python.org]`` in the above examples.  These IDs can
 be used with ``-k`` to select specific cases to run, and they will
 also identify the specific case when one is failing.  Running pytest
@@ -443,6 +443,7 @@ make a string based on the argument name.  It is possible to customise
 the string used in a test ID for a certain fixture value by using the
 ``ids`` keyword argument::

+    # content of test_ids.py
     import pytest

     @pytest.fixture(params=[0, 1], ids=["spam", "ham"])
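How the ``ids`` list maps onto the generated test IDs can be sketched without running pytest; the function name ``test_a`` below is an assumption for illustration:

```python
# a sketch of how explicit ids map onto parametrized test IDs
# (illustrative, not pytest internals)
params = [0, 1]
ids = ["spam", "ham"]

# with explicit ids each parametrized test gets "test_a[<id>]";
# without them, pytest falls back to a string derived from the value
test_ids = ["test_a[%s]" % i for i in ids]
print(test_ids)  # ['test_a[spam]', 'test_a[ham]']
```

These are the strings you can later pass to ``-k`` to select a single case.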
@@ -472,7 +473,7 @@ return ``None`` then pytest's auto-generated ID will be used.

 Running the above tests results in the following test IDs being used::

-    $ py.test --collect-only
+    $ py.test --collect-only test_ids.py
     =========================== test session starts ============================
     platform linux -- Python 3.4.1 -- py-1.4.27 -- pytest-2.7.1
     rootdir: /tmp/doc-exec-98, inifile:

@@ -1,7 +1,7 @@
 .. highlightlang:: python
 .. _`goodpractises`:

-Good Integration Practises
+Good Integration Practices
 =================================================

 Work with virtual environments

Binary file not shown.
Before Width: | Height: | Size: 23 KiB  After Width: | Height: | Size: 17 KiB

@@ -6,7 +6,7 @@
 .. _`pytest_nose`: plugin/nose.html
 .. _`reStructured Text`: http://docutils.sourceforge.net
 .. _`Python debugger`: http://docs.python.org/lib/module-pdb.html
-.. _nose: http://somethingaboutorange.com/mrl/projects/nose/
+.. _nose: https://nose.readthedocs.org/en/latest/
 .. _pytest: http://pypi.python.org/pypi/pytest
 .. _mercurial: http://mercurial.selenic.com/wiki/
 .. _`setuptools`: http://pypi.python.org/pypi/setuptools

@@ -8,3 +8,7 @@ upload-dir = doc/en/build/html

 [bdist_wheel]
 universal = 1
+
+[devpi:upload]
+formats=sdist.tgz,bdist_wheel

10  setup.py
@@ -31,12 +31,12 @@ def get_version():
 def has_environment_marker_support():
     """
     Tests that setuptools has support for PEP-426 environment marker support.

     The first known release to support it is 0.7 (and the earliest on PyPI seems to be 0.7.2
     so we're using that), see: http://pythonhosted.org/setuptools/history.html#id142

     References:

     * https://wheel.readthedocs.org/en/latest/index.html#defining-conditional-dependencies
     * https://www.python.org/dev/peps/pep-0426/#environment-markers
     """
@@ -48,7 +48,7 @@ def has_environment_marker_support():

 def main():
-    install_requires = ['py>=1.4.25']
+    install_requires = ['py>=1.4.29']
     extras_require = {}
     if has_environment_marker_support():
         extras_require[':python_version=="2.6" or python_version=="3.0" or python_version=="3.1"'] = ['argparse']

@@ -1,3 +1,4 @@
+import sys
 import py, pytest

 class TestGeneralUsage:
@@ -370,6 +371,21 @@ class TestGeneralUsage:
             "*fixture 'invalid_fixture' not found",
         ])

+    def test_plugins_given_as_strings(self, tmpdir, monkeypatch):
+        """test that str values passed to main() as `plugins` arg
+        are interpreted as module names to be imported and registered.
+        #855.
+        """
+        with pytest.raises(ImportError) as excinfo:
+            pytest.main([str(tmpdir)], plugins=['invalid.module'])
+        assert 'invalid' in str(excinfo.value)
+
+        p = tmpdir.join('test_test_plugins_given_as_strings.py')
+        p.write('def test_foo(): pass')
+        mod = py.std.types.ModuleType("myplugin")
+        monkeypatch.setitem(sys.modules, 'myplugin', mod)
+        assert pytest.main(args=[str(tmpdir)], plugins=['myplugin']) == 0
+
 class TestInvocationVariants:
     def test_earlyinit(self, testdir):
@@ -3,17 +3,25 @@ import sys

 pytest_plugins = "pytester",

-import os, py
+import os, py, gc

 class LsofFdLeakChecker(object):
     def get_open_files(self):
+        gc.collect()
         out = self._exec_lsof()
         open_files = self._parse_lsof_output(out)
         return open_files

     def _exec_lsof(self):
         pid = os.getpid()
-        return py.process.cmdexec("lsof -Ffn0 -p %d" % pid)
+        #return py.process.cmdexec("lsof -Ffn0 -p %d" % pid)
+        try:
+            return py.process.cmdexec("lsof -p %d" % pid)
+        except UnicodeDecodeError:
+            # cmdexec may raise UnicodeDecodeError on Windows systems
+            # with locale other than english:
+            # https://bitbucket.org/pytest-dev/py/issues/66
+            return ''

     def _parse_lsof_output(self, out):
         def isopen(line):

@@ -851,3 +851,47 @@ def test_unorderable_types(testdir):
     result = testdir.runpytest()
     assert "TypeError" not in result.stdout.str()
     assert result.ret == 0
+
+
+def test_collect_functools_partial(testdir):
+    """
+    Test that collection of functools.partial object works, and arguments
+    to the wrapped functions are dealt correctly (see #811).
+    """
+    testdir.makepyfile("""
+        import functools
+        import pytest
+
+        @pytest.fixture
+        def fix1():
+            return 'fix1'
+
+        @pytest.fixture
+        def fix2():
+            return 'fix2'
+
+        def check1(i, fix1):
+            assert i == 2
+            assert fix1 == 'fix1'
+
+        def check2(fix1, i):
+            assert i == 2
+            assert fix1 == 'fix1'
+
+        def check3(fix1, i, fix2):
+            assert i == 2
+            assert fix1 == 'fix1'
+            assert fix2 == 'fix2'
+
+        test_ok_1 = functools.partial(check1, i=2)
+        test_ok_2 = functools.partial(check1, i=2, fix1='fix1')
+        test_ok_3 = functools.partial(check1, 2)
+        test_ok_4 = functools.partial(check2, i=2)
+        test_ok_5 = functools.partial(check3, i=2)
+        test_ok_6 = functools.partial(check3, i=2, fix1='fix1')
+
+        test_fail_1 = functools.partial(check2, 2)
+        test_fail_2 = functools.partial(check3, 2)
+    """)
+    result = testdir.inline_run()
+    result.assertoutcome(passed=6, failed=2)
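Why the two ``test_fail_*`` partials fail while the others pass comes down to positional binding: ``functools.partial(check2, 2)`` fills the first parameter (``fix1``) with ``2``, so the fixture value no longer lands where the assertions expect it. A standalone illustration:

```python
import functools

def check2(fix1, i):
    # mirrors the wrapped test above: return what each parameter received
    return fix1, i

ok = functools.partial(check2, i=2)   # leaves 'fix1' free for the fixture value
bad = functools.partial(check2, 2)    # binds fix1 positionally to 2

print(ok("fix1"))   # ('fix1', 2) -- fixture value arrives in fix1
print(bad("fix1"))  # (2, 'fix1') -- fixture value shifted into i
```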
@@ -623,6 +623,11 @@ class TestRequestBasic:
             *arg1*
         """)

+    def test_show_fixtures_color_yes(self, testdir):
+        testdir.makepyfile("def test_this(): assert 1")
+        result = testdir.runpytest('--color=yes', '--fixtures')
+        assert '\x1b[32mtmpdir' in result.stdout.str()
+
     def test_newstyle_with_request(self, testdir):
         testdir.makepyfile("""
             import pytest
@@ -2483,6 +2488,44 @@ class TestShowFixtures:
         """)

+    def test_show_fixtures_different_files(self, testdir):
+        """
+        #833: --fixtures only shows fixtures from first file
+        """
+        testdir.makepyfile(test_a='''
+            import pytest
+
+            @pytest.fixture
+            def fix_a():
+                """Fixture A"""
+                pass
+
+            def test_a(fix_a):
+                pass
+        ''')
+        testdir.makepyfile(test_b='''
+            import pytest
+
+            @pytest.fixture
+            def fix_b():
+                """Fixture B"""
+                pass
+
+            def test_b(fix_b):
+                pass
+        ''')
+        result = testdir.runpytest("--fixtures")
+        result.stdout.fnmatch_lines("""
+            * fixtures defined from test_a *
+            fix_a
+                Fixture A
+
+            * fixtures defined from test_b *
+            fix_b
+                Fixture B
+        """)
+
 class TestContextManagerFixtureFuncs:
     def test_simple(self, testdir):
         testdir.makepyfile("""
@@ -46,6 +46,7 @@ class TestRaises:
             1/0
         print (excinfo)
         assert excinfo.type == ZeroDivisionError
+        assert isinstance(excinfo.value, ZeroDivisionError)

     def test_noraise():
         with pytest.raises(pytest.raises.Exception):

@@ -569,3 +569,39 @@ def test_AssertionError_message(testdir):
         *assert 0, (x,y)*
         *AssertionError: (1, 2)*
     """)
+
+@pytest.mark.skipif(PY3, reason='This bug does not exist on PY3')
+def test_set_with_unsortable_elements():
+    # issue #718
+    class UnsortableKey(object):
+        def __init__(self, name):
+            self.name = name
+
+        def __lt__(self, other):
+            raise RuntimeError()
+
+        def __repr__(self):
+            return 'repr({0})'.format(self.name)
+
+        def __eq__(self, other):
+            return self.name == other.name
+
+        def __hash__(self):
+            return hash(self.name)
+
+    left_set = set(UnsortableKey(str(i)) for i in range(1, 3))
+    right_set = set(UnsortableKey(str(i)) for i in range(2, 4))
+    expl = callequal(left_set, right_set, verbose=True)
+    # skip first line because it contains the "construction" of the set, which does not have a guaranteed order
+    expl = expl[1:]
+    dedent = textwrap.dedent("""
+    Extra items in the left set:
+    repr(1)
+    Extra items in the right set:
+    repr(3)
+    Full diff (fallback to calling repr on each item):
+    - repr(1)
+    repr(2)
+    + repr(3)
+    """).strip()
+    assert '\n'.join(expl) == dedent
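The expected explanation above is just the two set differences rendered via ``repr``; the underlying set algebra is easy to check standalone:

```python
# the set differences behind the expected assertion explanation above
left_set = {"1", "2"}
right_set = {"2", "3"}

extra_left = left_set - right_set    # items only in the left set
extra_right = right_set - left_set   # items only in the right set

print(sorted(extra_left))   # ['1']
print(sorted(extra_right))  # ['3']
```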
@@ -651,7 +651,8 @@ def lsof_check():
     pid = os.getpid()
     try:
         out = py.process.cmdexec("lsof -p %d" % pid)
-    except py.process.cmdexec.Error:
+    except (py.process.cmdexec.Error, UnicodeDecodeError):
+        # about UnicodeDecodeError, see note on conftest.py
         pytest.skip("could not run 'lsof'")
     yield
     out2 = py.process.cmdexec("lsof -p %d" % pid)

@@ -171,6 +171,7 @@ def test_conftest_confcutdir(testdir):
     """))
     result = testdir.runpytest("-h", "--confcutdir=%s" % x, x)
     result.stdout.fnmatch_lines(["*--xyz*"])
+    assert 'warning: could not load initial' not in result.stdout.str()

 def test_conftest_existing_resultlog(testdir):
     x = testdir.mkdir("tests")

@@ -354,3 +354,19 @@ class TestDoctests:
         reprec = testdir.inline_run(p, "--doctest-modules",
                                     "--doctest-ignore-import-errors")
         reprec.assertoutcome(skipped=1, failed=1, passed=0)
+
+    def test_junit_report_for_doctest(self, testdir):
+        """
+        #713: Fix --junit-xml option when used with --doctest-modules.
+        """
+        p = testdir.makepyfile("""
+            def foo():
+                '''
+                >>> 1 + 1
+                3
+                '''
+                pass
+        """)
+        reprec = testdir.inline_run(p, "--doctest-modules",
+                                    "--junit-xml=junit.xml")
+        reprec.assertoutcome(failed=1)

@@ -474,6 +474,16 @@ def test_logxml_changingdir(testdir):
     assert result.ret == 0
     assert testdir.tmpdir.join("a/x.xml").check()

+def test_logxml_makedir(testdir):
+    """--junitxml should automatically create directories for the xml file"""
+    testdir.makepyfile("""
+        def test_pass():
+            pass
+    """)
+    result = testdir.runpytest("--junitxml=path/to/results.xml")
+    assert result.ret == 0
+    assert testdir.tmpdir.join("path/to/results.xml").check()
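The directory-creation behavior this test exercises can be sketched with the stdlib; creating the report path's parent directories before writing is presumably all the feature needs (the paths below are throwaway temp dirs, not pytest's):

```python
import os
import tempfile

# sketch: ensure the parent directories of a report path exist before writing,
# as the --junitxml test above expects
base = tempfile.mkdtemp()
report = os.path.join(base, "path", "to", "results.xml")
os.makedirs(os.path.dirname(report), exist_ok=True)
with open(report, "w") as f:
    f.write("<testsuite/>")
print(os.path.exists(report))  # True
```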

 def test_escaped_parametrized_names_xml(testdir):
     testdir.makepyfile("""
         import pytest

@@ -369,6 +369,45 @@ class TestFunctional:
         print (item, item.keywords)
         assert 'a' in item.keywords

+    def test_mark_decorator_subclass_does_not_propagate_to_base(self, testdir):
+        p = testdir.makepyfile("""
+            import pytest
+
+            @pytest.mark.a
+            class Base: pass
+
+            @pytest.mark.b
+            class Test1(Base):
+                def test_foo(self): pass
+
+            class Test2(Base):
+                def test_bar(self): pass
+        """)
+        items, rec = testdir.inline_genitems(p)
+        self.assert_markers(items, test_foo=('a', 'b'), test_bar=('a',))
+
+    def test_mark_decorator_baseclasses_merged(self, testdir):
+        p = testdir.makepyfile("""
+            import pytest
+
+            @pytest.mark.a
+            class Base: pass
+
+            @pytest.mark.b
+            class Base2(Base): pass
+
+            @pytest.mark.c
+            class Test1(Base2):
+                def test_foo(self): pass
+
+            class Test2(Base2):
+                @pytest.mark.d
+                def test_bar(self): pass
+        """)
+        items, rec = testdir.inline_genitems(p)
+        self.assert_markers(items, test_foo=('a', 'b', 'c'),
+                            test_bar=('a', 'b', 'd'))
+
     def test_mark_with_wrong_marker(self, testdir):
         reprec = testdir.inline_runsource("""
             import pytest
@@ -477,6 +516,22 @@ class TestFunctional:
         reprec = testdir.inline_run("-m", "mark1")
         reprec.assertoutcome(passed=1)

+    def assert_markers(self, items, **expected):
+        """assert that given items have expected marker names applied to them.
+        expected should be a dict of (item name -> seq of expected marker names)
+
+        .. note:: this could be moved to ``testdir`` if proven to be useful
+        to other modules.
+        """
+        from _pytest.mark import MarkInfo
+        items = dict((x.name, x) for x in items)
+        for name, expected_markers in expected.items():
+            markers = items[name].keywords._markers
+            marker_names = set([name for (name, v) in markers.items()
+                                if isinstance(v, MarkInfo)])
+            assert marker_names == set(expected_markers)
+
 class TestKeywordSelection:
     def test_select_simple(self, testdir):
         file_test = testdir.makepyfile("""

@@ -64,12 +64,16 @@ def test_deprecated_call_ret():
     assert ret == 42

 def test_deprecated_call_preserves():
-    r = py.std.warnings.onceregistry.copy()
-    f = py.std.warnings.filters[:]
+    onceregistry = py.std.warnings.onceregistry.copy()
+    filters = py.std.warnings.filters[:]
+    warn = py.std.warnings.warn
+    warn_explicit = py.std.warnings.warn_explicit
     test_deprecated_call_raises()
     test_deprecated_call()
-    assert r == py.std.warnings.onceregistry
-    assert f == py.std.warnings.filters
+    assert onceregistry == py.std.warnings.onceregistry
+    assert filters == py.std.warnings.filters
+    assert warn is py.std.warnings.warn
+    assert warn_explicit is py.std.warnings.warn_explicit
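The invariant this test guards (calling the helper must leave the global warnings state untouched) can also be exercised directly with the stdlib context manager, which snapshots and restores that state on exit:

```python
import warnings

# snapshot the global filter list, mutate it inside catch_warnings,
# and confirm the state is restored when the context exits
before = warnings.filters[:]
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
print(warnings.filters == before)  # True
```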

 def test_deprecated_explicit_call_raises():
     pytest.raises(AssertionError,

@@ -180,6 +180,21 @@ def test_generic(testdir, LineMatcher):
         "x *:test_xfail_norun",
     ])

+def test_makedir_for_resultlog(testdir, LineMatcher):
+    """--resultlog should automatically create directories for the log file"""
+    testdir.plugins.append("resultlog")
+    testdir.makepyfile("""
+        import pytest
+        def test_pass():
+            pass
+    """)
+    testdir.runpytest("--resultlog=path/to/result.log")
+    lines = testdir.tmpdir.join("path/to/result.log").readlines(cr=0)
+    LineMatcher(lines).fnmatch_lines([
+        ". *:test_pass",
+    ])
+
 def test_no_resultlog_on_slaves(testdir):
     config = testdir.parseconfig("-p", "resultlog", "--resultlog=resultlog")

@@ -293,7 +293,7 @@ class TestExecutionForked(BaseFunctionalTests):

     def getrunner(self):
         # XXX re-arrange this test to live in pytest-xdist
-        xplugin = pytest.importorskip("xdist.plugin")
+        xplugin = pytest.importorskip("xdist.boxed")
         return xplugin.forked_run_report

     def test_suicide(self, testdir):

@@ -439,7 +439,7 @@ def test_exception_printing_skip():
     s = excinfo.exconly(tryshort=True)
     assert s.startswith("Skipped")

-def test_importorskip():
+def test_importorskip(monkeypatch):
     importorskip = pytest.importorskip
     def f():
         importorskip("asdlkj")
@@ -457,7 +457,7 @@ def test_importorskip():
     pytest.raises(SyntaxError, "pytest.importorskip('x=y')")
     mod = py.std.types.ModuleType("hello123")
     mod.__version__ = "1.3"
-    sys.modules["hello123"] = mod
+    monkeypatch.setitem(sys.modules, "hello123", mod)
     pytest.raises(pytest.skip.Exception, """
         pytest.importorskip("hello123", minversion="1.3.1")
     """)
@@ -471,6 +471,19 @@ def test_importorskip_imports_last_module_part():
     ospath = pytest.importorskip("os.path")
     assert os.path == ospath

+def test_importorskip_dev_module(monkeypatch):
+    try:
+        mod = py.std.types.ModuleType("mockmodule")
+        mod.__version__ = '0.13.0.dev-43290'
+        monkeypatch.setitem(sys.modules, 'mockmodule', mod)
+        mod2 = pytest.importorskip('mockmodule', minversion='0.12.0')
+        assert mod2 == mod
+        pytest.raises(pytest.skip.Exception, """
+            pytest.importorskip('mockmodule1', minversion='0.14.0')""")
+    except pytest.skip.Exception:
+        print(py.code.ExceptionInfo())
+        pytest.fail("spurious skip")
+
 def test_pytest_cmdline_main(testdir):
     p = testdir.makepyfile("""

@@ -396,7 +396,7 @@ class TestSkipif:

     def test_skipif_reporting(self, testdir):
-        p = testdir.makepyfile("""
+        p = testdir.makepyfile(test_foo="""
             import pytest
             @pytest.mark.skipif("hasattr(sys, 'platform')")
             def test_that():
@@ -404,11 +404,31 @@ class TestSkipif:
         """)
         result = testdir.runpytest(p, '-s', '-rs')
         result.stdout.fnmatch_lines([
-            "*SKIP*1*platform*",
+            "*SKIP*1*test_foo.py*platform*",
             "*1 skipped*"
         ])
         assert result.ret == 0

+    @pytest.mark.parametrize('marker, msg1, msg2', [
+        ('skipif', 'SKIP', 'skipped'),
+        ('xfail', 'XPASS', 'xpassed'),
+    ])
+    def test_skipif_reporting_multiple(self, testdir, marker, msg1, msg2):
+        testdir.makepyfile(test_foo="""
+            import pytest
+            @pytest.mark.{marker}(False, reason='first_condition')
+            @pytest.mark.{marker}(True, reason='second_condition')
+            def test_foobar():
+                assert 1
+        """.format(marker=marker))
+        result = testdir.runpytest('-s', '-rsxX')
+        result.stdout.fnmatch_lines([
+            "*{msg1}*test_foo.py*second_condition*".format(msg1=msg1),
+            "*1 {msg2}*".format(msg2=msg2),
+        ])
+        assert result.ret == 0
|
||||
|
||||
|
||||
def test_skip_not_report_default(testdir):
|
||||
p = testdir.makepyfile(test_one="""
|
||||
import pytest
@@ -607,18 +607,23 @@ def test_unittest_unexpected_failure(testdir):
    ])


def test_unittest_setup_interaction(testdir):
@pytest.mark.parametrize('fix_type, stmt', [
    ('fixture', 'return'),
    ('yield_fixture', 'yield'),
])
def test_unittest_setup_interaction(testdir, fix_type, stmt):
    testdir.makepyfile("""
        import unittest
        import pytest
        class MyTestCase(unittest.TestCase):
            @pytest.fixture(scope="class", autouse=True)
            @pytest.{fix_type}(scope="class", autouse=True)
            def perclass(self, request):
                request.cls.hello = "world"
            @pytest.fixture(scope="function", autouse=True)
                {stmt}
            @pytest.{fix_type}(scope="function", autouse=True)
            def perfunction(self, request):
                request.instance.funcname = request.function.__name__
                {stmt}

            def test_method1(self):
                assert self.funcname == "test_method1"
@@ -629,7 +634,7 @@ def test_unittest_setup_interaction(testdir):

            def test_classattr(self):
                assert self.__class__.hello == "world"
    """)
    """.format(fix_type=fix_type, stmt=stmt))
    result = testdir.runpytest()
    result.stdout.fnmatch_lines("*3 passed*")
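The parametrized test above instantiates one test-module source per `(fix_type, stmt)` pair via `str.format` on a triple-quoted template. A reduced sketch of that templating step (the template here is a shortened, illustrative excerpt):

```python
# str.format substitutes the {fix_type} and {stmt} placeholders,
# producing concrete source code for each parametrized variant.
template = """
@pytest.{fix_type}(scope="class", autouse=True)
def perclass(self, request):
    request.cls.hello = "world"
    {stmt}
"""

rendered = template.format(fix_type="yield_fixture", stmt="yield")
assert "@pytest.yield_fixture" in rendered
assert rendered.rstrip().endswith("yield")
```

Note that literal braces in such templates would need doubling (`{{`, `}}`); the test bodies above use only the two placeholders.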
@@ -700,4 +705,17 @@ def test_issue333_result_clearing(testdir):
    reprec = testdir.inline_run()
    reprec.assertoutcome(failed=1)


@pytest.mark.skipif("sys.version_info < (2,7)")
def test_unittest_raise_skip_issue748(testdir):
    testdir.makepyfile(test_foo="""
        import unittest


        class MyTestCase(unittest.TestCase):
            def test_one(self):
                raise unittest.SkipTest('skipping due to reasons')
    """)
    result = testdir.runpytest("-v", '-rs')
    result.stdout.fnmatch_lines("""
        *SKIP*[1]*test_foo.py*skipping due to reasons*
        *1 skipped*
    """)
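The `unittest.SkipTest` behavior that the issue-748 test above verifies can be observed with the stdlib alone: a plain `unittest` run records the raised `SkipTest` in `TestResult.skipped` rather than treating it as a failure. A minimal sketch:

```python
import unittest


class MyTestCase(unittest.TestCase):
    def test_one(self):
        # raising SkipTest inside a test marks it as skipped
        raise unittest.SkipTest('skipping due to reasons')


# drive the case through a plain unittest runner and inspect the result
suite = unittest.TestLoader().loadTestsFromTestCase(MyTestCase)
result = unittest.TestResult()
suite.run(result)
assert result.testsRun == 1
assert len(result.skipped) == 1
assert result.skipped[0][1] == 'skipping due to reasons'
```

The fix being tested is that pytest honors this same protocol and reports the test as skipped (with the reason) instead of failing it.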
91
tox.ini
@@ -1,84 +1,79 @@
[tox]
minversion=2.0
distshare={homedir}/.tox/distshare
envlist=flakes,py26,py27,py34,pypy,py27-pexpect,py33-pexpect,py27-nobyte,py33,py27-xdist,py33-xdist,py27-trial,py33-trial,doctesting,py27-cxfreeze
envlist=
    flakes,py26,py27,py33,py34,py35,pypy,
    {py27,py34}-{pexpect,xdist,trial},
    py27-nobyte,doctesting,py27-cxfreeze

[testenv]
changedir=testing
commands= py.test --lsof -rfsxX --junitxml={envlogdir}/junit-{envname}.xml []
commands= py.test --lsof -rfsxX {posargs:testing}
passenv = USER USERNAME
deps=
    nose
    mock

[testenv:py26]
commands= py.test --lsof -rfsxX {posargs:testing}
deps=
    nose
    mock<1.1  # last supported version for py26

[testenv:genscript]
changedir=.
commands= py.test --genscript=pytest1

[testenv:flakes]
changedir=
basepython = python2.7
deps = pytest-flakes>=0.2
commands = py.test --flakes -m flakes _pytest testing

[testenv:py27-xdist]
changedir=.
basepython=python2.7
deps=pytest-xdist
    mock
    nose
commands=
    py.test -n1 -rfsxX \
        --junitxml={envlogdir}/junit-{envname}.xml {posargs:testing}
    py.test -n1 -rfsxX {posargs:testing}

[testenv:py33-xdist]
changedir=.
basepython=python3.3
[testenv:py34-xdist]
deps={[testenv:py27-xdist]deps}
commands=
    py.test -n3 -rfsxX \
        --junitxml={envlogdir}/junit-{envname}.xml testing
    py.test -n3 -rfsxX testing

[testenv:py27-pexpect]
changedir=testing
basepython=python2.7
platform=linux|darwin
deps=pexpect
commands=
    py.test -rfsxX test_pdb.py test_terminal.py test_unittest.py

[testenv:py33-pexpect]
[testenv:py34-pexpect]
changedir=testing
basepython=python3.3
platform=linux|darwin
deps={[testenv:py27-pexpect]deps}
commands=
    py.test -rfsxX test_pdb.py test_terminal.py test_unittest.py

[testenv:py27-nobyte]
changedir=.
basepython=python2.7
deps=pytest-xdist
distribute=true
setenv=
    PYTHONDONTWRITEBYTECODE=1
commands=
    py.test -n3 -rfsxX \
        --junitxml={envlogdir}/junit-{envname}.xml {posargs:testing}
    py.test -n3 -rfsxX {posargs:testing}

[testenv:py27-trial]
changedir=.
basepython=python2.7
deps=twisted
commands=
    py.test -rsxf \
        --junitxml={envlogdir}/junit-{envname}.xml {posargs:testing/test_unittest.py}
    py.test -rsxf {posargs:testing/test_unittest.py}

[testenv:py33-trial]
changedir=.
basepython=python3.3
[testenv:py34-trial]
# py34-trial does not work
platform=linux|darwin
deps={[testenv:py27-trial]deps}
commands=
    py.test -rsxf \
        --junitxml={envlogdir}/junit-{envname}.xml {posargs:testing/test_unittest.py}
    py.test -rsxf {posargs:testing/test_unittest.py}

[testenv:doctest]
changedir=.
commands=py.test --doctest-modules _pytest
deps=
@@ -94,13 +89,12 @@ commands=
    make html

[testenv:doctesting]
basepython=python3.3
basepython = python3.4
changedir=doc/en
deps=PyYAML
commands= py.test -rfsxX --junitxml={envlogdir}/junit-{envname}.xml []
commands= py.test -rfsxX {posargs}

[testenv:regen]
basepython=python3.4
changedir=doc/en
deps=sphinx
    PyYAML
@@ -109,28 +103,31 @@ commands=
    #pip install pytest==2.3.4
    make regen

[testenv:py31]
deps=nose>=1.0

[testenv:py31-xdist]
deps=pytest-xdist
commands=
    py.test -n3 -rfsxX \
        --junitxml={envlogdir}/junit-{envname}.xml []

[testenv:jython]
changedir=testing
commands=
    {envpython} {envbindir}/py.test-jython \
        -rfsxX --junitxml={envlogdir}/junit-{envname}2.xml []
    {envpython} {envbindir}/py.test-jython -rfsxX {posargs}

[testenv:py27-cxfreeze]
changedir=testing/cx_freeze
basepython=python2.7
platform=linux|darwin
commands=
    {envpython} install_cx_freeze.py
    {envpython} runtests_setup.py build --build-exe build
    {envpython} tox_run.py


[testenv:coveralls]
basepython = python3.4
changedir=testing
deps =
    {[testenv]deps}
    coveralls
commands=
    coverage run --source=_pytest {envdir}/bin/py.test
    coverage report -m
    coveralls
passenv=COVERALLS_REPO_TOKEN

[pytest]
minversion=2.0
@@ -142,4 +139,4 @@ python_files=test_*.py *_test.py testing/*/*.py
python_classes=Test Acceptance
python_functions=test
pep8ignore = E401 E225 E261 E128 E124 E302
norecursedirs = .tox ja .hg
norecursedirs = .tox ja .hg cx_freeze_source
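The new generative `envlist` syntax introduced at the top of this `tox.ini` diff, `{py27,py34}-{pexpect,xdist,trial}`, expands to the cross product of its factors. A sketch of the expansion in Python:

```python
from itertools import product

# the generative env name expands to a cross product of the factors
pys = ["py27", "py34"]
features = ["pexpect", "xdist", "trial"]
envs = ["%s-%s" % (p, f) for p, f in product(pys, features)]
assert envs == [
    "py27-pexpect", "py27-xdist", "py27-trial",
    "py34-pexpect", "py34-xdist", "py34-trial",
]
```

This is why the diff also bumps `minversion=2.0` under `[tox]`: generative envlists and section references like `{[testenv:py27-xdist]deps}` require a sufficiently recent tox.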